Brandon Butler | Network World

High-performance cloud computing company Cycle Computing is no stranger to spinning up massive clusters of servers in Amazon’s public cloud, but this week the company said it recently ran one of its largest jobs ever, one that used 10,598 multi-core instances. Cycle Computing provisioned Amazon Web Services Elastic Compute Cloud (EC2) servers for a pharmaceutical client to simulate a drug test. The cluster took two hours to configure and ran for nine hours, for a total cost of $4,362. Had the client built equivalent infrastructure itself, Cycle estimates it would have required a 12,000-square-foot data center and cost $44 million. Cycle says it is the biggest job the company has performed in terms of the number of virtual machine instances used in a single run.

Cycle Computing’s software provisions large amounts of cloud-based compute resources for HPC jobs. This particular run — for a pharmaceutical company that Cycle would not name — involved testing how millions of different compounds would interact with a protein that’s commonly associated with a certain type of cancer.

Normally, running such a scientific experiment would be a hefty and costly job. Cycle estimates the task would take 341,700 hours to run on a single machine. Using Amazon’s cloud and the combined power of roughly 10,600 virtual machine instances, Cycle finished the job in 11 hours total, the company explains in a blog post.
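As a rough back-of-the-envelope check of those figures (the inputs are just the numbers quoted above; the $44 million figure is a build-out estimate rather than a per-run cost, so the cost ratio is illustrative only):

```python
# Back-of-the-envelope arithmetic on the figures quoted in the article.
single_machine_hours = 341_700   # Cycle's estimate for running the job on one machine
wall_clock_hours = 2 + 9         # two hours to configure plus nine hours of compute

speedup = single_machine_hours / wall_clock_hours
print(f"Effective speedup: ~{speedup:,.0f}x")        # ~31,064x

run_cost = 4_362                 # USD, the actual AWS bill for the run
build_estimate = 44_000_000      # USD, Cycle's estimate for a 12,000-square-foot data center
print(f"Run cost vs. build-out estimate: ~1/{build_estimate / run_cost:,.0f}")  # ~1/10,088
```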

Cycle used Amazon’s spot market to provision the resources; spot instances are spare, unreserved capacity in AWS’s cloud that customers bid on. They are typically much less expensive than on-demand instances, but Amazon can reclaim them, so they best suit jobs that can tolerate interruption. The Cycle software scheduled the virtual machines, scaled the cluster out, and kept the nodes running at 99% CPU utilization. Cycle used the open source tool Chef to configure the cluster.
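Cycle’s provisioning software itself is proprietary, but for a sense of what acquiring spot capacity looks like at the API level, here is a minimal, hypothetical sketch using the boto3 SDK for EC2. The AMI ID, security group, bid price, and instance count are placeholders, not details from the actual run:

```python
"""Minimal sketch of requesting EC2 spot instances with boto3.

Illustrative only: Cycle Computing's provisioning software is proprietary,
and the AMI, security group, bid price, and counts below are placeholders.
"""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask the spot market for a batch of instances at a maximum hourly bid.
response = ec2.request_spot_instances(
    SpotPrice="0.10",                  # max bid per instance-hour (USD), placeholder
    InstanceCount=100,                 # a large run would issue many such requests
    Type="one-time",                   # no need to persist once the job finishes
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",        # placeholder AMI with the workload baked in
        "InstanceType": "c1.xlarge",                # one of the eight-core types used in the run
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

request_ids = [r["SpotInstanceRequestId"] for r in response["SpotInstanceRequests"]]
print(f"Submitted {len(request_ids)} spot requests")
```

A production system like Cycle’s layers scheduling, retries, and configuration management (the article mentions Chef) on top of raw requests like this.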

Four types of instances in Amazon’s cloud were used, with eight-core c1.xlarges and four-core m1.xlarges being the most common. The four-core m1.xlarge instances come with 1,690GB of instance storage and 15GB of memory, and are rated for high I/O performance.

It was a big job for Cycle, but not the company’s biggest. Last year, Cycle ran a 50,000-core job that used 6,732 instances for computational chemistry company Schrodinger. That job had fewer instances but more compute cores.

Cycle Computing CEO Jason Stowe said the company used to alert Amazon ahead of time when it would be running these massive cluster compute jobs, but now such HPC runs are becoming “pedestrian.” Amazon even builds HPC clusters itself.

“We just handled 10,600 servers, and our software built the environment, secured it, scheduled data across, scaled it, and tracked everything for audit/reporting purposes,” a blog post on Cycle’s website reads. “Chef 11 handled configuration for all of them. But now we’re ready to add zeros here, and so is our software.”

