Unfortunately, I have to announce that this blog post will have more than two parts. This part will deal with some important concepts that will inform some of the conclusions I will make later.
First, a recap of Part 1.
Part 1 mostly dealt with the departure of Paul Otellini, the former CEO. I thought that post would be my last about Intel, but then I got the idea for the Resurrection of Intel and wrote a bit about the death of Wintel. Wintel was the very successful informal alliance between Intel and Microsoft that was the driving force of the PC era.
Computer Technology 101
The first concept is Moore's law, which was first described by Gordon Moore, one of the founders of Intel, in a paper in 1965. He said the number of components in integrated circuits would double every year. He revised that in 1975 to a doubling every two years. Most people think that Moore's law refers to the power and performance of things such as microprocessors and computer memory. Clearly there is a strong correlation.
When I was at Intel (1984-1999), we used to say we could either double the performance for the same dollars or offer the same performance for half the dollars. Both are exponential functions. If we are very conservative about the rate of doubling, we can say that performance will increase by 20 times in ten years for the same dollars, or that the same performance will cost about 1/20th as much. Of course, there can be all kinds of cost/performance combinations in between. The actual improvements have been greater than this for more than the last twenty years. Moore's law is expected to continue for at least another decade. It is really the fuel of the engine of change, not only in the computer industry, but in any activity that is sensitive to the cost of computing.
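The compounding behind these figures is easy to check. A minimal sketch (illustrative arithmetic only, not Intel data): doubling every two years works out to about 32x over a decade, so the conservative "20 times in ten years" figure corresponds to a slightly slower doubling period of roughly 2.3 years.

```python
import math

def improvement(years: float, doubling_period_years: float) -> float:
    """Performance multiple after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Doubling every two years compounds to 2**5 = 32x over a decade.
print(improvement(10, 2))

# A conservative "20x in ten years" implies a doubling period of
# 10 / log2(20), about 2.3 years.
print(10 / math.log2(20))
```

Either way the curve is exponential, which is why small differences in the doubling period matter far less than the fact of doubling itself.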
Not all factors that affect computers follow Moore's law. For instance, the speed of our access to the Internet has improved very slowly over time. This is partly due to the nature of communication technology, which often requires human beings to dig up streets or erect towers. It also has to do with the objectives of the industries that control our access to the Internet. Some have referred to this as Moron's law. Just look at the size of your desktop monitor, laptop, tablet or phone and the number of pixels they contain. Clearly monitors have not and cannot follow Moore's law. Otherwise we would go from a 27-inch monitor on our desk to one that was 22 feet across with the same number of pixels per inch, or one the same size as now with 100 times the number of pixels per square inch (something we could not see), or the same product sold at a ten-dollar price.
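The monitor numbers above can be checked back-of-envelope (a rough illustration using the figures in the text, not measured data): scaling a 27-inch screen by 10x in each linear dimension gives roughly a 22-foot diagonal, while keeping the size fixed and packing 10x the pixels into each linear inch gives 100x the pixels per square inch.

```python
# Back-of-envelope check of the monitor thought experiment.

diagonal_in = 27        # starting desktop monitor diagonal, in inches
linear_scale = 10       # hypothetical 10x growth in each linear dimension

# Option 1: same pixel density, 10x larger screen.
scaled_diagonal_ft = diagonal_in * linear_scale / 12
print(scaled_diagonal_ft)   # 22.5 feet -- the "22 feet" monitor

# Option 2: same physical size, 10x the pixels per linear inch,
# which is 10**2 = 100x the pixels per square inch.
density_gain_per_sq_in = linear_scale ** 2
print(density_gain_per_sq_in)
```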
One of the key factors that results from Moore's law is the increase in device density, i.e. the number of components in a given amount of space. This comes from the reduction in the size of elements like transistors, often referred to as miniaturization. This is most evident in devices like the iPhone. Things that have to fit in our pockets and purses have limited size. So the choice is really to provide more capabilities for the same price or the same capabilities for a lower price. Companies like Apple choose the former strategy to keep their revenues and profits up. Newer entrants like Samsung will choose to offer a product at a lower price to target customers who cannot afford the Apple product or who would be attracted by price to change vendors. Apple's strategy can only work as long as they can keep adding capabilities to justify their higher price (there are also non-performance considerations like brand, integration and service). But eventually, it will become harder and harder to add capabilities that take advantage of the additional power. For instance, given the form factor of a phone, it is hard to add more pixels once you have as many as the eye can see. Companies like Apple can end up being primarily in a replacement business as customers replace their products with something only slightly better in order to avoid switching costs.
The Art of Partition
Another very important factor in the design of computing devices is what I will call partitioning. Engineers put different things together to create a product: for instance, a processor, communications chips, a battery, a power supply, and a display. The architecture may not take into consideration the different rates of change of all these elements. I think it is easiest to understand with a desktop computer as the reference. I think an all-in-one approach is not good value. Take an iMac. The processor technology will follow Moore's law but the display will not. So two years later you are buying a new system even though you have a perfectly fine display. That is why I have a Mac mini, a wireless keyboard, a mouse and a Cinema Display. I actually upgrade my Mac mini every year. I upgrade my display maybe every three years.
We can also observe this phenomenon in flat panel displays. Many consumer electronics companies are offering flat panel displays with Internet capabilities: you can get YouTube, Netflix, Hulu, etc. But a display is something you might keep 5-8 years. Clearly, the Internet functions built into the display will not keep up with the changes and new options of the rapidly advancing Internet. I have an Apple TV connected to my big display. It is the third generation of Apple TV that I have bought for under $100, but I have had the same display the whole time.
Computing moves to the Cloud
OK, why am I going through all this? It is because of the change from client-based computing to cloud-based computing, which has been enabled by broadband Internet access that has reached a critical point in speed, responsiveness and reliability. At the same time, we are going from one device, such as a desktop computer, to multiple devices including our phone, TV, and even our appliances, greatly increasing the complexity of our personal computing environment. Since computing in the cloud is not constrained by form factor, it can follow Moore's law. Voice on mobile devices is a great demonstration of this. The voice recognition for Apple's Siri and Google's Now happens in the cloud. Voice is a very computationally intensive and data-driven process. It would be hard to achieve a high degree of accuracy on today's smartphone, so the smartphone just collects the voice and sends it to the cloud. That means that as the software in the cloud processes my voice, it will learn and improve. Now when I go to another device such as my TV, I can use my voice. I believe that over time more and more of computing will happen in the cloud.
What will that mean for our devices? Form factor will be dictated by their use. A phone has to fit into our pocket (or at least what we think is a phone today). An electronic book, a game device and a tablet have certain constraints. Our TVs have others. But if the computing moves to the cloud, the capabilities of the devices will not be able to increase fast enough to follow Moore's law. That means they will become cheaper. They will become so cheap that every form factor from a watch to a refrigerator will become an intelligent gateway to the cloud. The size of devices will also shrink: phones will become thinner, TVs will become flatter, and new devices like smart watches will appear. This results from the increased density of components. We are seeing a major movement on mobile devices to apps. I think we will continue to see apps, but in reality they will just be simple front ends to cloud-based applications.
But it is Cloudy
But there is not just one cloud. Google has its cloud, as do Apple, Microsoft and even Facebook.
Apple understands that truly open systems collapse from complexity. Apple is trying to create an ecosystem where all the devices can connect to its cloud and live in harmony. The problem is that Apple has no idea what to do in the cloud. Siri is the only example of Apple cloud computing that I know of. Steve Jobs thought of the cloud as a place where data lives. He apparently did not understand that the cloud is where computing will take place. Since his death, there has been no indication that his successor, Tim Cook, understands this any better.
Google gets the cloud much better and has a good chance to be the leader in the next generation of computing, but their implementations are often poor and mediocre (although I have just started using Google Hangouts and it is pretty impressive). Google is not really in charge of the design of most devices using its Android OS, which results in a very fragmented market. Google will most likely extend its direct presence in hardware, but if Google TV is any indication, they will not do very well. The Google Glass thing is a big fake-out in my opinion, although it was probably the best way for Larry to get Sergey out of his hair. Google's deployment of fiber optic broadband networks could create a major disruption, but it would take a decade to build out a national fiber network in the USA, for instance.
In Part 3 I will discuss what Microsoft and Intel can do to Resurrect Wintel and dominate the next phase of computing.