So, Google is clearly shifting to machine learning as its next major growth frontier, and it is even retrofitting its workforce to be more familiar with algorithmic approaches to solving complex problems by creating a software stack that can, in a sense, construct itself as data is fed in from a statistical database.

I guess there's potential upside in the way products interact with users, but in terms of monetization it's really hard to get a read on Google's intentions for AI/machine learning. Is it incremental to pre-existing products, or are they going to design entirely new apps on the basis of this technology? Given the large amount of data that Google processes, it's fairly easy to develop interlinking relationships between data points and thus create complex voice- and vision-driven interfaces that can be monetized via an ad-driven model. That being the case, the company has yet to convert a substantial percentage of its engineers into machine learning practitioners, as was alluded to in interviews with key developers in a feature-length article written by Steven Levy at Backchannel. Anyhow, here are some of Steven's key points:

Jeff Dean, who is to software at Google as Tom Brady is to quarterbacking in the NFL. Today, he estimates that of Google's 25,000 engineers, only a "few thousand" are proficient in machine learning. Maybe ten percent. He'd like that to be closer to a hundred percent. Traditional AI methods of language understanding depended on embedding rules of language into a system, but in this project, as with all modern machine learning, the system was fed enough data to learn on its own, just as a child would. "I didn't learn to talk from a linguist, I learned to talk from hearing other people talk," says Corrado (co-founder of the Google Brain project). Google still saves plenty of goodies for its own programmers.
Internally, the company has a probably unparalleled tool chest of ML prosthetics, not the least of which is an innovation it has been using for years but announced only recently: the Tensor Processing Unit. This is a microprocessor chip optimized for the specific quirks of running machine learning programs, much as Graphics Processing Units are designed with the single purpose of speeding up the calculations that throw pixels onto a display screen.

The block quote was pretty long, but believe me, the article itself was unbelievably long (I haven't written something that long since 2013). In any case, the key point to keep in mind with Google's AI/machine learning efforts is organizational resistance, as it requires engineers to learn new skills and to change their programming paradigm: from one in which a system is "fixed" to execute a narrowly confined set of instructions (usually specific commands written in a programming language that gets compiled into more digestible chunks for a computer to execute) to a system that is, in a sense, self-constructing. I guess the most apt description is teaching a computer to "program itself" rather than giving it direct instructions. If they develop a great system for doing this (and so far they believe they're at the cutting edge), there's the potential for innovations that go well beyond the scope of human comprehension. The potential upside is compelling, but it's difficult to fully encapsulate what this actually means for shareholders.

Furthermore, the internal transition to fully incorporate machine learning into every web-based application is slow, but there's no denying the usefulness. Meanwhile, to improve data center efficiency, Google has designed custom silicon, most likely an FPGA or an ASIC (application-specific integrated circuit), to improve instructions per cycle for machine learning programs.
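To make the paradigm shift concrete, here is a minimal sketch in plain Python. The "fixed" function hard-codes a rule the programmer already knows; the "learned" version recovers the same rule purely from example data via gradient descent. This toy is my own illustration of the contrast, not anything from Google's stack.

```python
def fixed_double_plus_one(x):
    # "Fixed" paradigm: the rule y = 2x + 1 is hand-coded by a programmer.
    return 2 * x + 1

def learn_from_examples(samples, lr=0.01, steps=5000):
    # "Self-constructing" paradigm: fit weight w and bias b from
    # (x, y) pairs by gradient descent, never stating the rule itself.
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in samples:
            err = (w * x + b) - y  # prediction error on one example
            w -= lr * err * x      # nudge parameters to reduce the error
            b -= lr * err
    return w, b

# Feed the learner nothing but input/output examples.
data = [(x, fixed_double_plus_one(x)) for x in range(-5, 6)]
w, b = learn_from_examples(data)
print(round(w, 2), round(b, 2))  # the rule y = 2x + 1 re-emerges from data
```

The interesting part is what is absent: the learned version contains no mention of "double" or "plus one" anywhere, which is the sense in which the data, not the programmer, writes the program.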
It's hard to imagine exactly how they designed the custom silicon to better utilize resources, but in general FPGAs and ASICs tend to boost performance anywhere from 2x to 10x for very specific workloads. Since Google is already committed on the software engineering side, it's really no surprise that the company is deploying resources to ensure a continuous runway of computing resources to match the complexity of these programs.

In either case, the company is focusing on machine learning as the next wave of applications for consumers. It's not clear what will be created; we just know that a new wave of applications is coming from Google. And I believe this is where we begin the transition to the semantic web, or the web 3.0 era. After all, the guys at Google don't even know what they're going to create. The machine is building itself now. We'll sit back and watch as these large data centers self-construct programs out of massive databases, and I guess Google will eventually figure out the monetization aspect after they determine what they can create. Since we don't know, and they don't know, it's hard to model any of these incremental observations into a financial model. However, with decelerating growth metrics in its core advertising business, it's nice to know that the collective Google "brain power" keeps the company relevant. In other words, Google is at the forefront of next-generation technologies, and so long as they can continue to innovate, we won't have to worry about whether they'll become the next Kodak, but rather about the extent to which they can keep the growth runway intact. If you ask a supercomputer to make new applications, it probably won't run out of ideas. That's the sheer brilliance of it.
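One caveat on that 2x-to-10x figure: a kernel-level speedup only helps overall to the extent that the workload actually spends time in the accelerated kernels, which is just Amdahl's law. A quick back-of-envelope sketch, where the 60% "time spent in ML kernels" fraction is an assumption of mine for illustration, not a Google number:

```python
def amdahl_speedup(fraction_accelerated, kernel_speedup):
    # Overall speedup when only a fraction of total runtime benefits
    # from the accelerator; the remainder runs at the old speed.
    return 1.0 / ((1.0 - fraction_accelerated)
                  + fraction_accelerated / kernel_speedup)

# Assume (hypothetically) 60% of runtime is in accelerable ML kernels.
for s in (2, 10):
    print(f"{s}x kernel speedup -> {amdahl_speedup(0.6, s):.2f}x overall")
# 2x kernel speedup -> 1.43x overall
# 10x kernel speedup -> 2.17x overall
```

So even a 10x ASIC only roughly doubles throughput under this assumption, which is why custom silicon tends to pay off mainly at Google's scale, where a 2x data center efficiency gain is still an enormous amount of money.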