In the previous article, we outlined ten layers within the software space, each building on the one below it. In this article, we'll dig deeper into one of them, starting at the top with systems.
Software systems themselves, reliant as they are on many iterative innovations "under the hood," show incredible variety; we could likely outline ten layers within just the system layer described before. How can there be that much variety within a single layer?
As Marc Andreessen put it several years ago, "Software is eating the world." The iterative nature of software has enabled an explosion in capability over the past few years, and it is unlikely to slow down anytime soon, given the ever simpler options available for building valuable software systems. At first, the system layer was constrained by the layers beneath it: any software system brought to market required managing those layers as well. Today, all large software systems rely on infrastructure that has been decades in the making. The current explosion of new software is chiefly thanks to the investments made in this foundational computing layer.
The ability to rent software infrastructure is a critical element of the continued expansion of software use cases. While twenty years ago it took significant investment across the stack to bring software to life, fantastic software applications are now possible with effort in just one area of the software stack -- the rest can be rented from infrastructure providers. And the ease with which we can create new software systems today reinforces itself: a feedback loop of software expansion.
Before the public cloud, distributing software at scale required owning and operating the servers that powered your system. At first, this might mean servers in someone's bedroom or garage, growing into data centers from there. You managed the infrastructure supporting your software yourself, and it grew alongside the system's use. Such management requires knowledge of the full stack, from how your system interfaces with the Internet to keeping your servers cool so they don't overheat and malfunction.
A company building such a system would need people familiar with all these aspects, and as the system grew, so would the team supporting it. Alongside this growth, individuals specialize in order to improve the system further. Some focus purely on hardware setup and maintenance. Others focus on the connective tissue, enabling their colleagues to write the code that delivers value to customers.
Amazon, Google, and Microsoft all started this way: they had to host their services for the world to use, and as the world found those services useful, they grew their teams and data centers into the behemoths they are today. This experience led them to see the economies of scale possible if they offered that infrastructure to others for a fee.
Data centers and the associated hardware management have built-in economies of scale, which means that the bigger they get, the cheaper it is to provide a single "infrastructure service unit." It was then only logical for these companies to open up their infrastructure as what's now known as the public cloud, continuing to increase their scale and reduce their average costs.
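As a back-of-the-envelope illustration of that economics (the numbers below are invented for this sketch, not drawn from any real provider), a large fixed cost amortized over more and more service units drives the average cost per unit down toward the marginal cost:

```python
# Toy model of data-center economies of scale. All figures are hypothetical.
FIXED_COST = 50_000_000   # up-front cost of building a data center ($)
MARGINAL_COST = 0.02      # cost to serve one "infrastructure service unit" ($)

def average_cost_per_unit(units_served: int) -> float:
    """Average cost = (fixed + marginal * n) / n; falls toward MARGINAL_COST."""
    return (FIXED_COST + MARGINAL_COST * units_served) / units_served

for n in (1_000_000, 100_000_000, 10_000_000_000):
    print(f"{n:>14,} units -> ${average_cost_per_unit(n):.4f} per unit")
```

At a million units, the fixed cost dominates (about $50 per unit); at ten billion units, the average cost sits within a fraction of a cent of the marginal cost. That gap is precisely the scale advantage a large provider can share with its renters.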
This self-reinforcing cycle has no clear end in sight, as there are limitless software applications that could make use of cloud infrastructure. The more use cases move onto this infrastructure, the more the providers can invest in technological advances that further reduce costs and open up new use cases. Especially now, creating one's own data center and managing one's own hardware stack is far more expensive than simply renting these services from a cloud provider. Given the scale these providers have already reached, the rent they charge is an excellent deal!
There's evident hesitation to "lock in" to these providers, but in a way, it's similar to electricity. Providing electricity to homes is something only a handful of companies do. It is a business where economies of scale help reduce costs, and those savings are shared with consumers. The same thing is happening with the cloud, where computing is becoming a commodity much like electricity. I'm sure there were initial fears of the electric companies holding a monopoly, but once it was clear that consumers benefited from the new technology, those fears gradually faded away. I believe we are in a similar boat right now, with genuine fears that a few large software companies hold a monopoly over distributed computing power. Without them, though, much of the software we love to use daily and rely on would not be possible!
The systems that major cloud providers have opened up to the public are compelling because they can power a nearly limitless range of software applications. While twenty years ago a software system supporting a million concurrent users would have required massive capital expenditure and personnel investment to get off the ground, today a solo developer can build such a system with the help of infrastructure and service providers.
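To make that concrete, here is a minimal sketch of the one layer a solo developer still writes: a small web service (Flask is my illustrative choice here, not something named in this article). Everything beneath it -- load balancing, autoscaling, hardware, cooling -- can be rented from a cloud provider rather than owned:

```python
# A tiny web service: the single layer of the stack the developer owns.
# Deployed to a managed platform, the provider's load balancers and
# autoscalers absorb the million concurrent users, not the developer.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Managed platforms typically poll an endpoint like this to decide
    # when to add or remove rented instances as traffic changes.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run()
```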
The fun part is that there is still so much more that can be done with software. As much as software has improved over the past twenty years, we can expect considerably more improvement over the next twenty, again due to software's iterative nature. Since it is now so much easier to bring a powerful software system to life, more such solutions will be created, and the most useful ones will stick around.
It's like a snowball, slowly but surely gathering more and more snow. What started over a century and a half ago with Boolean algebra and manually cranked computing engines (like Babbage's difference engine) turned into logic gates and electronic computers. On and on, these innovations iterated upon themselves, a snowball gradually gaining snow and momentum. We are now at a point where this snowball has gotten quite big, and it's still accelerating -- the bottom of the mountain that would stop it is still not in sight.
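For a glimpse of the snowball's first layer, here is a toy sketch (my illustration, not the article's) of Boolean algebra realized as logic gates: a half adder built from just an XOR and an AND, the kind of building block every later layer rests on:

```python
# Boolean algebra as logic gates: a half adder adds two bits using
# an XOR gate (the sum bit) and an AND gate (the carry bit).
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum_bit, carry_bit) for two input bits."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```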
Benjamin is a passionate software engineer with a strong technical background and ambitions to deliver a delightful experience to as many users as possible. He previously interned at Google, Apple, and LinkedIn. He built his first PC at 15 and has more recently moved on to iOS and cryptocurrency experiments. Benjamin holds a bachelor's degree in computer science from UCLA and is completing a master's degree in Software Engineering at Harvard University.