Ken Fromm | ReadWriteWeb
Even with the rise of cloud computing, the world still revolves around servers. That won’t last, though. Cloud apps are moving into a serverless world, and that will have big implications for the creation and distribution of software and applications.
Guest author Ken Fromm is VP of Business Development at Iron.io, makers of industrial-strength cloud services for data processing and message handling.
The Server Backstory
In the pre-cloud days, developers who wanted to build an application needed to think a lot about servers. They needed to budget for them, plan for them, connect them, power them and house them. They had to buy or lease the servers, the power supplies, cabling and cooling – and then set it all up in their datacenter or in a colocation facility.
Over time, the colocation facilities began taking many parts out of the equation – providing racks, power, Internet access and other key resources. Even so, provisioning, clustering and maintaining servers still demanded lots of money (capital expenditures, power, Internet, cooling, security), tons of time and detailed planning (contingency, develop/test/production environments, site growth, and so on).
Enter The Cloud
In the last two years we’ve seen a seismic shift in computing. It’s no longer “Why cloud?” or even “How cloud?” Infrastructure-as-a-Service (IaaS) has delivered dramatic improvements in cost, agility, scalability – and yes, with the right architecture, reliability. The cloud has simply removed a significant chunk of the work around managing and provisioning servers.
Cloud infrastructure providers like AWS and Rackspace can now supply an almost limitless pool of virtual machines. With no upfront costs and just a bit of effort, developers can fire up servers with their operating system of choice, load in their applications (custom or open-source), and they’re off and running. Launching hundreds of servers and coordinating among them is a bit more work, but it’s still far easier than it was just six years ago.
Total cost of ownership of servers has fallen dramatically. At a Hackathon last summer, one serial entrepreneur recalled buying servers for his first company at hundreds of thousands of dollars apiece and investing a great deal of effort in their care and feeding. His second company leased its servers by the year, but still had to put in lots of hands-on effort. His third company leased server time by the month, and his current operation – a successful cloud communications company – rents servers by the hour, on demand, for pennies.
This shift in capital outlay, planning and provisioning timeframes would have been inconceivable in the days of Internet 1.0 or even at the onset of Web 2.0. It’s no surprise that processing speeds have increased and server and memory costs have dropped. But Moore’s Law didn’t exactly cover the case of renting hundreds of cores by the hour, at a cost of pennies per hour, and provisioning them through easy-to-use software interfaces.
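The scale of that shift is easy to see with back-of-the-envelope arithmetic. The figures below are illustrative assumptions in the spirit of the entrepreneur’s story above, not real quotes:

```python
# Illustrative comparison of owning a server vs. renting one by the hour.
# All prices here are hypothetical assumptions for the sake of the arithmetic.

HOURS_PER_YEAR = 24 * 365

# Owning: purchase price amortized over 3 years, plus yearly power/cooling/admin.
purchase_price = 150_000          # dollars, early dot-com era server
yearly_overhead = 30_000          # power, cooling, hands-on care and feeding
own_cost_per_hour = (purchase_price / 3 + yearly_overhead) / HOURS_PER_YEAR

# Renting: an on-demand virtual machine billed by the hour.
rent_cost_per_hour = 0.08         # pennies-per-hour territory

print(f"own:   ${own_cost_per_hour:.2f}/hour")
print(f"rent:  ${rent_cost_per_hour:.2f}/hour")
print(f"ratio: {own_cost_per_hour / rent_cost_per_hour:.0f}x")
```

Even with generous assumptions about the owned server’s useful life, the per-hour gap runs to two orders of magnitude before counting the planning and staffing overhead that ownership also requires.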
The Shift Isn’t Over
This shift in timeframes and pricing is still in motion. Thinking about servers in terms of hours is really just a business construct. It makes sense from a pricing standpoint and from an architectural perspective.
Web app teams typically look at loads across hour-long time slices and plan to scale based on those traffic patterns. They can now autoscale, provisioning more servers at particular times of day, under heavy loads, or as an app grows in popularity.
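A minimal sketch of that kind of autoscaling decision, assuming a hypothetical policy of one server per 500 requests per second with a fixed floor of two servers:

```python
import math

def desired_servers(requests_per_sec, per_server_capacity=500, minimum=2):
    """Return how many servers to run for the observed load.

    A hypothetical policy: enough servers to cover the request rate
    at `per_server_capacity` requests/sec each, never dropping below
    `minimum` so the app stays available at quiet hours.
    """
    needed = math.ceil(requests_per_sec / per_server_capacity)
    return max(needed, minimum)

# Plan capacity for each hour-long slice of a day's traffic pattern.
hourly_load = [120, 90, 2300, 4800, 1500]     # requests/sec per slice
plan = [desired_servers(rps) for rps in hourly_load]
print(plan)  # → [2, 2, 5, 10, 3]
```

The point is that the unit of planning here is still the server-hour: the policy answers “how many servers for this hour?”, which is exactly the framing the next sections argue is starting to dissolve.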
Moving Away From Standalone Apps
But this works only when you look at the world in terms of applications and blocks of servers to host them. The concept of an “application” in the cloud is quickly evolving.
The monolithic application built on Ruby on Rails, Django or another Web app framework is giving way to a distributed system spread across a number of applications, processes and data stores. It’s no longer about building a “Web app.” It’s about building a distributed system of loosely coupled components in the cloud.
An increasing number of applications – mobile apps and systems of connected devices, for example – aren’t based on the notion of a server-based application. There are client apps and back-end data storage, but the processing increasingly takes place asynchronously, outside of an app framework. Runtime apps are often still used to process inputs, but only because mobile compute clouds and processing tiers are just now coming on the scene.
Think about sites that monitor prices in real time across hundreds of retail sites, or ones that process purchases, views, clicks, check-ins and other signals of interest to provide personalized recommendations. The processing and orchestration at the core of such an application lies behind the scenes; the front-end app is just the delivery vehicle.
But this changing focus doesn’t map so well onto the world of applications and servers. Developers working in a distributed world are hard pressed to translate the things they’re doing into sets of servers. Their worldview is increasingly organized around tasks and process flows, not applications and servers – and their unit of measure for compute cycles is seconds and minutes, not hours. In short, their thinking is becoming serverless.
The phrase “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer have to think that much about them. Computing resources get used as services without having to manage around physical capacities or limits. Service providers increasingly take on the responsibility of managing servers, data stores and other infrastructure resources. Developers could set up their own open source solutions, but that means they have to manage the servers and the queues and the loads.
Multiply this effort by the number of services an app might consume (task processing, message queues, SMTP servers, payment services), and hosted services quickly start to look like the future of computing.
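The task-level view can be sketched in a few lines. Here a plain in-process queue and a pool of worker threads stand in for a hosted task-processing service; the job names are made up, standing for the kinds of work (image resizing, receipts, indexing) an app would hand off:

```python
import queue
import threading

# A stand-in for a hosted task service: producers enqueue units of work,
# workers pull and process them, and nobody thinks about which server runs what.
tasks = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        item = tasks.get()
        if item is None:              # sentinel: shut this worker down
            break
        with results_lock:
            results.append(f"processed:{item}")
        tasks.task_done()

# Spin up a small worker pool.
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

# The caller measures work in tasks, not servers.
for job in ["resize-image", "send-receipt", "update-index"]:
    tasks.put(job)

tasks.join()                          # block until every task is processed
for _ in workers:
    tasks.put(None)                   # one sentinel per worker
for w in workers:
    w.join()

print(sorted(results))
```

Swap the in-process queue for a hosted queue and the threads for a provider’s workers, and the calling code barely changes – which is the sense in which the developer’s mental model stops involving servers at all.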
Industrial-Scale Compute Power
The classic analogy is the generation of power. The progression moved from ox-driven water pumps to water-driven mill stones to individual coal-fired factories and ultimately to industrial-scale power plants and transmission lines. This last step – the industrialization of power – transformed industry and the world. It dramatically lowered the cost of building and making things, transformed cities and homes and ushered in new inventions, services and businesses.
The idea of plugging a light, a radio or a TV (or a sewing machine, lathe or power drill) into a wall or overhanging socket went from unheard of, to transformational, to taken for granted.
Elastic Computing Services
Similarly, by plugging into an elastic computing service, developers don’t need to provision their resources based on current or anticipated loads, or put a lot of effort into planning for new projects. Just as Virtual Machines have made it easy to spin up servers to create new applications, elastic/on-demand computing services make it simple to grow.
Consuming computing resources as services means that developers are not paying for resources that they’re not using. Regardless of the number of projects in production, developers using hosted services don’t have to worry about managing resources.
Going serverless lets developers shift their focus from the server level to the task level. Serverless solutions let developers focus on what their application or system needs to do by taking away the complexity of the backend infrastructure.
Just like cloud computing a few years ago, the serverless approach has found its most vocal adherents among startups and independent developers. One reason is affordability, another is the ability to scale quickly, and a third is not having to worry about things that aren’t strategic to their businesses. As the category matures and more developers become familiar with this new approach, it will move into larger organizations. It’s becoming increasingly clear to everyone: the future of computing will be serverless.
This article originally appeared at ReadWriteWeb.