Charles Babcock | InformationWeek
Cloud computing has rewritten decades of technology rules. Take a closer look at 10 innovators who helped make it possible.
Cloud Computing Giants
It’s hard to write history when you’re still in the thick of recording it. However, in cloud computing we’ve amassed just enough background to name some of the early pioneers who’ve helped establish the relatively new computing paradigm.
The list is by no means exhaustive. Undoubtedly, there will be other lists, highlighting other quiet innovators whose names we’re just beginning to hear, and whose accomplishments will be well known in the coming years.
But for IT managers in the midst of considering or adopting cloud computing, this list offers a commentary on where we have so recently come from, and where we may be going in the near future.
Nor can this list fully acknowledge that even these pioneers stand on the shoulders of giants themselves. Consider, for example, the key work on distributed systems at Sun Microsystems and among the early cluster builders, who preceded Google, Facebook, Microsoft and Rackspace on the cloud front.
Still, cloud development has moved at an accelerated pace compared with how long it took personal computing or client-server computing to emerge. Amazon Web Services’ Simple Storage Service (S3) launched just six years ago, followed by the Elastic Compute Cloud (EC2). Google App Engine launched in 2008. Microsoft’s beta version of Azure cloud services came in 2009.
The cloud paradigm is less than a decade old, but from the start, there seemed to be an understanding among its diverse pioneers that a new era was dawning and it would share a set of common characteristics. Any list of cloud computing pioneers would have Amazon’s Werner Vogels near the top. But the architects and hands-on implementers who made his evangelism real, like Chris Pinkham, also deserve a nod.
Even the individuals named are in the habit of saying progress in the cloud is seldom an individual effort. Usually cloud advances are established by a large group of collaborators, and more often than not they are working in full public view with an open source code project like OpenStack (or Eucalyptus or CloudStack) or the Open Compute hardware project.
But some individuals were standing there before the pattern of cloud computing emerged. They acted at a time when the notion was still under attack. In believing, they risked being branded as charlatans and producers of mere vaporware, when in fact they were forging ahead to help define a new era.
Delve into our look at 10 pioneers of the cloud computing era. The order in which they appear will remain under debate as long as cloud history is still being written.
Werner Vogels, CTO and VP of Amazon Web Services, joined Amazon in 2004 as director of systems research, coming from a computer science research post at Cornell University. In Holland, he had been a student of some of the leading minds in computing. The late Jim Gray, a Turing Award winner “for seminal contributions to database and transaction processing research and technical leadership in system implementation,” was a proctor for Vogels’ defense of his PhD thesis at the Vrije Universiteit in Amsterdam. At Vrije, Vogels’ advisers included Andrew Tanenbaum, who wrote standard textbooks on operating systems as well as the code for the Minix operating system, and Henri Bal, a specialist in large, parallel systems.
He became Amazon CTO early in 2005 and later that year was named a VP. He had a vision of a new type of distributed system, one that relied on inexpensive parts yet could scale out indefinitely, making the Amazon cloud elastic and able to keep running when a piece of hardware failed beneath it. He advocated that Amazon get into the business of selling virtual server computing cycles over the Internet, charged by time used, and got the chance to urge enterprises to adopt the service as Amazon’s first “outward facing” CTO. He has been a tireless evangelist for greater use of the Amazon public cloud. His expertise, commitment and credibility were essential in establishing the broad acceptance that Amazon Web Services enjoyed from an early stage.
Before Werner Vogels got a cloud infrastructure to evangelize at Amazon, there was Chris Pinkham, designer of the Amazon Elastic Compute Cloud (EC2). Actually, designing the Amazon infrastructure was one of those collaborative ventures, like Sergey Brin and Larry Page at Google, where two heads are better than one. Pinkham was the project’s managing director; Amazon software architect Christopher Brown was lead developer. Together they produced Amazon’s first public cloud infrastructure.
I once thought Amazon Web Services must have sprung out of Amazon.com spare capacity. Not so. Initially they were two separate things, with the cloud merely the tail of the online-merchandising dog.
Amazon.com IT operations manager Jesse Robbins has told the story of how he jealously guarded the retail operation’s data centers and didn’t let experimenters near them. Pinkham, who gained expertise by running the first Internet service provider in South Africa, had joined Amazon in 2000 as director of its network engineering group, then became VP responsible for IT infrastructure worldwide.
Amazon had been discussing internally the possibility of creating a public-facing, virtualized infrastructure that could be sold as a service. Pinkham was the most likely candidate to pull it off. But “Chris really, really wanted to be back in South Africa,” Robbins once told blogger Carl Brooks, who wrote: “Rather than lose the formidable talent … Amazon brass cleared the project and off [Pinkham and Brown] went [to work in South Africa] with a freedom to innovate that many might be jealous of.”
Pinkham knew how things needed to scale in a Web service environment. He and Brown set about exploiting the possibilities of a fully virtualized data center. EC2 was developed with different goals than the retail operation: The customer had to be able to self-provision a virtual server, receive a separate chargeback and have enough control to launch virtual servers, balance loads, activate storage and add services such as a database.
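That self-service model can be sketched as a toy, in-memory account object. This is an illustration only: the class, method names and flat hourly rate below are assumptions for the sketch, not Amazon's actual EC2 interface.

```python
import itertools

class CloudAccount:
    """Toy model of self-service provisioning with usage-based chargeback.

    Illustrative only: names and the flat hourly rate are assumptions,
    not Amazon's real API.
    """

    _ids = itertools.count(1)  # globally unique server IDs

    def __init__(self, hourly_rate=0.10):
        self.hourly_rate = hourly_rate
        self.servers = {}  # server id -> {"storage_gb": ..., "hours": ...}

    def launch_server(self):
        """Self-provision a virtual server; no operator in the loop."""
        sid = next(self._ids)
        self.servers[sid] = {"storage_gb": 0, "hours": 0}
        return sid

    def attach_storage(self, sid, gb):
        """Activate additional storage for an existing server."""
        self.servers[sid]["storage_gb"] += gb

    def run_for(self, sid, hours):
        """Accumulate metered running time for later chargeback."""
        self.servers[sid]["hours"] += hours

    def chargeback(self):
        """Separate, usage-based bill across the customer's servers."""
        return sum(s["hours"] * self.hourly_rate
                   for s in self.servers.values())
```

Under these assumed rates, a customer who launches one server, attaches 20 GB of storage and runs it for five hours is billed 5 × $0.10 = $0.50, with no human intervention anywhere in the workflow.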
The two pulled it off, and Amazon EC2 was born. In 2006 Pinkham left Amazon to start a new company, Nimbula. He now proselytizes its software, Vogels-style, saying it generalizes the Amazon environment for companies to use as a private cloud.
Randy Bias, cofounder and CTO of CloudScaling, has been a specialist in IT infrastructure since 1990, which positioned him to think through and drive some of the earliest cloud computing innovations. He was a pioneering implementer of infrastructure-as-a-service as VP of technology strategy at GoGrid, a division of hosting provider ServePath. GoGrid launched a public beta of its Grid infrastructure in March 2008.
He pioneered one of the first multi-platform, multi-cloud management systems at CloudScale Networks and went on to found CloudScaling, where he successfully implemented large-scale clouds based on a young and unproven open source software stack, OpenStack. Those clouds included one at KT (formerly Korea Telecom), the largest cloud service in Korea, and another at big data center services provider Internap.
Part of the support OpenStack receives is based on these implementations, and Bias was elected as one of eight gold-sponsor board members of the OpenStack Foundation. He keeps an unvarnished view of cloud claims and cloud pretensions, and is known for his uncompromising positions. In 2009, he advocated the efficiencies of cloud computing as a way to counter climate change.
The O’Reilly Radar blog says Bias “led the open licensing of GoGrid’s API, which inspired Sun Microsystems, Rackspace Cloud, VMware and others to open license their cloud APIs.”
Jonathan Bryce liked working with computers as a youth, and his older brother, one of Rackspace’s first 12 employees, urged him to join the company. Bryce became familiar with many phases of the operation, from racking servers to customer service and technical support. He partnered with website designer and friend Todd Morey to host sites on their own servers rented from Rackspace. They left Rackspace in 2005 to branch out into their own website building and hosting business, Mosso Cloud, named for an Italian musical notation that means to play faster and with more passion.
But Mosso still ran on servers in the Rackspace data center. Rackspace executives saw the relationship between its hosting-services business and emerging uses of cloud computing, so they asked Bryce to keep building out the Mosso Cloud. He had a system that could launch applications on a website and was thinking about a virtual machine launching system. Then Rackspace bought Slicehost, which already had such a system. Its virtual machine management became part of Mosso, and Bryce rejoined the company as the head of Rackspace Cloud.
Rackspace attempted to expand its cloud computing business by distinguishing itself from the market leader, Amazon Web Services. It offered smaller, get-started virtual servers at $0.015 an hour. And it opened up its cloud API, prompting NASA to propose that they combine their cloud efforts in a joint project, OpenStack. By 2010, Rackspace saw OpenStack as both a means of spreading a common cloud computing base among private companies that could interoperate with Rackspace, and a means of changing the terms of competition with Amazon.
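Metered pricing like that get-started rate is simple to model. A minimal sketch, assuming billing in whole started hours (a common early-cloud policy; the article does not specify Rackspace's exact rounding rule):

```python
import math

def metered_cost(seconds_used, hourly_rate):
    """Cost of a virtual server billed per started hour.

    Assumes partial hours round up to a full hour, an illustrative
    policy typical of early per-hour cloud billing.
    """
    hours = math.ceil(seconds_used / 3600)
    return hours * hourly_rate

# A get-started server at $0.015 an hour, running for 90 minutes,
# is billed for two started hours.
print(metered_cost(90 * 60, 0.015))
```

The appeal for customers was that a bill like this starts and stops with the workload, rather than accruing as a fixed monthly hosting fee.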
Rackspace led OpenStack as a sponsor but realized the project would have greater appeal if it were more broadly sponsored, so it turned over management to the newly formed OpenStack Foundation in September 2012. Cisco’s CTO of cloud computing, Lew Tucker, and Red Hat’s Brian Stevens, both members of the foundation’s board, said Bryce was their top candidate to become its executive director, a post he accepted. At age 31, he’s an innovative spirit with implementation experience who asserted himself when it still wasn’t clear which direction cloud computing would take.
Lew Tucker already had 20 years of software development and engineering under his belt when the cloud era rolled around. He was quick to recognize that his previous projects were pointing in the cloud’s direction.
He had been CTO and VP of engineering at Radar Networks, producer of the Twine social network, and VP of the AppExchange at Salesforce.com. His big-company experience brought a different voice to the debate over cloud, one of experienced and toughened engineering that said cloud not only could be, but should be, the next wave of computing.
Tucker was CTO of cloud computing at Sun Microsystems from 2008 to 2010, a crucial period spanning Oracle’s acquisition of Sun, when his depth of knowledge countered Oracle’s fatuous putdowns of cloud computing. After the acquisition, Oracle CEO Larry Ellison interviewed Tucker; Tucker said it took only three minutes before both men had made up their minds. In that short time, Oracle lost one of the few spokesmen capable of rolling back the skepticism that Oracle would ever be serious about cloud computing, something it is still reaching for as it reverses course and wades more deeply into the field.
Tucker is now CTO of cloud computing at Cisco Systems, a tireless advocate (and board member) for OpenStack and an ignorer of boundaries — as long as the other party can talk about cloud computing. At the recent Cloud Expo, he ducked into a meeting room to pay his regards to Rich Wolski, head of the Eucalyptus open source project at the University of California at Santa Barbara. Eucalyptus might be painted as an OpenStack competitor, but in Tucker’s eyes Wolski’s simply another passionate cloud enthusiast. He does the same on the OpenStack board of directors, where he’s part of the social cohesion that holds competing members together.
Rich Wolski is the co-founder and CTO of Eucalyptus Systems who decided that Amazon’s public cloud APIs were so important that they should have open source code counterparts — even if Amazon Web Services was against it.
He has been criticized on several fronts. One: his approach to cloud computing was too narrow, based only on Amazon’s example and initiative. Another: if Amazon wished to make its APIs open source, it could do so itself; if it didn’t, it could make life difficult for an open source project doing so anyway.
Wolski ignored the critics and pushed ahead both with his open source code leadership and Eucalyptus Systems, which makes a stack of software for building private clouds with Amazon EC2 compatibility. Amazon executives, for years unresponsive to Eucalyptus’ entreaties to join the open source project, announced in late May that Amazon would partner with Eucalyptus Systems as a provider of private cloud APIs. It was, finally, a blessing on Wolski’s initiative.
Also a computer science professor, Wolski is a person of strong convictions who believes the world will convert to a new style of computing — and that Eucalyptus is destined to play a role in the conversion.
In the early days of cloud computing, NASA CTO Chris Kemp took several leading concepts for assembling a low-cost, horizontally scalable data center and put them to work at the NASA Ames Research Center in Mountain View, Calif.
One concept was placing banks of standard x86 server racks in a shipping container with a single power supply and network hookup. The container was dropped off by supplier Verari, then hooked up and ready to accept workloads within a few days, compared with the far longer time it takes to construct a new, permanent data center. Kemp also ensured a close tie-in to MAE-West, a major Internet access point that NASA already had at Ames.
Kemp initially created the Nebula cloud project to collect big data from NASA research projects, such as the Mars mapping project. But Kemp also conceived of a mobile cloud data center that could be transported to different locations to provide onsite compute power, no matter where a spacecraft was launched or an interplanetary mission was managed.
Kemp also advocated sharing NASA data, and both Google and Microsoft have used telescopic images and mapping from the Mars Reconnaissance Orbiter to create public image libraries online. He also initiated the OpenStack open source code project when NASA sought to team up with Rackspace to combine cloud computing software assets.
In March 2011, Chris Kemp resigned his post with NASA, an agency with which he had dreamed of working since he was a child, to become founder and CEO of Nebula. He was leaving, he said, “to find a garage in Palo Alto to do the work I love,” a turn of phrase that showed he would be equally at home walking the halls of Congress or working the venture capital hallways of Menlo Park, Calif.
Not an imposing figure in stature, he is nevertheless an indomitable one. In a debate among Eucalyptus Systems, Citrix CloudStack and OpenStack at GigaOm’s Structure 2012, Kemp, speaking for OpenStack, was hemmed in by CloudStack’s Sameer Dholakia and Eucalyptus’ Marten Mickos, who seemed to have jointly aimed their sharpest comments at OpenStack. In answer, Kemp declared that he would be on the stage the following year without either of them as OpenStack grew larger. It was a brash, if not rash, comment, but one that nevertheless brought a moment of breathing room in which to talk about OpenStack capabilities and momentum.
Marc Benioff, CEO of Salesforce.com, stands out as the pioneer and guerrilla marketer of software-as-a-service. He drew attention to the concept at a time when it was widely disregarded as an aberration of limited use, brazenly advancing the notion of cloud services as the “death of software.” He meant that on-premises software, the kind that has made enterprise data centers run since 1964, was going away, replaced by software running in a remote data center and accessible over the Internet.
Much has already been written about the successful establishment of Salesforce.com, which doesn’t need repetition here. But for his role in winning respect for the concept of SaaS, no one matches the standing of Benioff.
The phrase “the data center as the computer” comes so close to capturing what a cloud data center is about that a tip of the hat has to go to Urs Hölzle. The senior VP for technical infrastructure at Google led the design and build-out of the search engine’s supporting infrastructure and supplied a pattern for Amazon, Microsoft, GoGrid and others to follow.
As one of Google’s first 10 employees, Hölzle refused to be bound by the limits of what was then available from technology providers. Servers hadn’t been designed for the cloud data center, so Google manufactured its own, according to the tenets that Hölzle laid down. A Google data center is designed to use about half the power of a conventional enterprise data center.
In 2009, Hölzle and fellow Google architect Luiz André Barroso captured in a Google whitepaper the concepts essential to building a worldwide string of search engine data centers. It was called “The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines.”
Hölzle is a former associate professor of computer science at the University of California at Santa Barbara. He received a PhD from Stanford for work on the efficient implementation of programming languages. He is co-sponsor, with VMware CEO Pat Gelsinger, of the Climate Savers Computing Initiative, and he co-authored a second paper with Barroso, “The Case for Energy-Proportional Computing,” which outlines ways for servers to use only the energy required to execute the current workload. The paper is credited with pushing Intel and other manufacturers to find ways to adjust the current consumed by their chips.
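The energy-proportionality argument can be illustrated with a simple linear power model: a conventional server draws a large fraction of peak power even at idle, while an ideally energy-proportional one would draw power in proportion to its load. The wattage figures below are illustrative assumptions, not measurements from the paper:

```python
def power_draw(utilization, peak_watts, idle_fraction):
    """Linear power model: idle baseline plus load-proportional draw.

    idle_fraction is the share of peak power consumed at zero load.
    A conventional server might idle near 0.5 of peak; an ideally
    energy-proportional server would have idle_fraction near 0.
    (Illustrative model, not data from Barroso and Holzle's paper.)
    """
    idle = peak_watts * idle_fraction
    return idle + (peak_watts - idle) * utilization

# A hypothetical 300 W server at 30% utilization:
conventional = power_draw(0.3, 300.0, 0.5)   # large idle baseline
proportional = power_draw(0.3, 300.0, 0.0)   # power tracks load
print(conventional, proportional)
```

At the low average utilizations typical of data center servers, the gap between those two numbers is the waste the paper urged chip and server makers to eliminate.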
Frank Frankovsky worked as Dell’s director of Data Center Solutions during the crucial period of 2006-2009, building up the hardware maker’s ability to sell rack-mount servers to search engine and Web service companies seeking to build new, more efficient data centers.
The unit’s been a key, behind-the-scenes business that has kept Dell a leading player in server hardware. If Data Center Solutions had been broken out as a separate business, it would have been the number-three seller of servers in the U.S. in early 2010, Dell executives told InformationWeek during a visit to the Dell campus.
In October 2009, Frankovsky became director of hardware design and supply chain at Facebook during a crucial period in its expansion. While there, he advocated that cloud server design be based on publicly pooled intelligence, despite Google’s insistence that its server and data center designs were a competitive advantage. In April 2011, Mark Zuckerberg and other Facebook officials announced the launch of the Open Compute Project to set standards for efficient cloud servers.
“The benefits of sharing so far outweigh the benefits of keeping it all closed,” Frankovsky told VentureBeat in July 2012.
As an organizer of the OpenCompute.org project, Frankovsky helped pull innovative, potentially competing projects in behind the Open Compute standard. Financial services companies had watched the Google example and sought cloud computing servers of their own. Intel and AMD had been asked by their Wall Street customers to produce their own versions of a cloud server, and examples were donated to the new organization.
“What began a few short months ago as an audacious idea — what if hardware were open? — is now a fully formed industry initiative, with a clear vision, a strong base to build from and significant momentum,” Frankovsky wrote in an Oct. 27, 2011, blog post.