Beth Pariseau | TechTarget

The cloud offers easy resource provisioning and flexible pricing, but there are several cloud computing costs beyond the instance price lists to consider before deploying workloads there.

Beyond the initial server instance, cloud computing pricing usually includes storage, networking, load balancing, security, redundancy, backup, application services and operating system licenses. Certain cloud computing costs — resource contention, storage, bandwidth and redundancy — can come as a surprise.

Server instance performance and provisioning costs

So-called noisy neighbors — server instances from other tenants in the cloud that share the same hardware and cause resource contention — are a common problem.

“In a public-cloud world, you tend to have to have more capacity because of noisy neighbors consuming more resources than you’d expect on that shared host, the fact that certain hosts will become unresponsive, or the fact that you have to replace hosts frequently,” said Jim O’Neill, CIO of the hosted marketing software provider HubSpot Inc., based in Cambridge, Mass. “We tend to see extra investment in additional capacity just to keep performance at a known state.”

While overprovisioning can sometimes be necessary, it’s also easy to go overboard in the opposite direction, according to Jared Reimer, co-founder of Cascadeo Corp., an IT consulting firm located in Mercer Island, Wash.

“It’s very expensive if you are not continuously studying and right-sizing instances, storage pools, memory footprint, etc.,” Reimer said. “We’ve seen companies, for example, take their VMware Converter, convert their virtual machine images into Amazon instances, boot them up and then get a huge bill and go, ‘What happened’? And the answer is that they made all of them large instances even when there was no legitimate reason to do so.”
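The arithmetic behind right-sizing is simple but easy to neglect. As a minimal sketch, using purely hypothetical hourly rates (real AWS prices vary by region, instance generation and purchase option), here is what a lift-and-shift fleet of uniformly large instances costs over a month versus a fleet right-sized after studying actual utilization:

```python
# Hypothetical on-demand hourly rates for illustration only;
# actual cloud pricing varies by region and instance type.
RATES = {
    "large": 0.24,   # assumed $/hour for a large instance
    "medium": 0.12,  # assumed $/hour for a medium instance
    "small": 0.06,   # assumed $/hour for a small instance
}

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(fleet):
    """Fleet is a dict mapping instance size -> instance count."""
    return sum(RATES[size] * count * HOURS_PER_MONTH
               for size, count in fleet.items())

# Converting 20 VMs and making all of them large instances...
lift_and_shift = {"large": 20}

# ...versus right-sizing most of them after measuring real load.
right_sized = {"large": 4, "medium": 8, "small": 8}

print(round(monthly_cost(lift_and_shift), 2))  # 3504.0
print(round(monthly_cost(right_sized), 2))     # 1752.0
```

At these assumed rates, the unexamined conversion costs twice as much every month for the same workloads, which is the kind of bill that prompts the "What happened?" reaction Reimer describes.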

Cloud computing costs can soar when new customers assume that an initial deployment needs to be as large as the internal infrastructure being converted to cloud, according to Anthony Pagano, director of StrataScape Technologies, an IT consulting firm based in Philadelphia.

“If the business goes through a slow period … you can always scale back,” Pagano said. “When you start out really high and want to scale back, you can’t go below [your original footprint]. And if you’ve set yourself too high a bar, it becomes a renegotiation and a hidden cost for you.”

Don’t forget the data storage

Storage performance and contention have long been problems in the cloud, and while there are ways to improve them, they aren’t free, according to Sean Perry, CIO for Robert Half International Inc., a professional staffing firm based in Menlo Park, Calif.

Amazon Web Services (AWS), in particular, has added features to its storage options, so users can get an Elastic Block Store (EBS)-optimized instance and provisioned IOPS per volume.

These costs “look small at the time, but as you multiply out hours and days and bytes, all of a sudden you’re talking significant money,” Perry said.

For example, the company simulated load tests in May to determine how fast it could load data into a SharePoint instance running on AWS, and used the Provisioned IOPS feature along with the EBS-optimized instances.

“And all of a sudden we’re looking at a bill like, ‘Holy smokes!'” Perry recalled. The amount spent on Provisioned IOPS represented 50% of the total AWS charges for one of the company’s teams — roughly $11,000.
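Provisioned IOPS charges accumulate on two dimensions at once: the gigabytes allocated and the IOPS reserved, whether or not they are used. The sketch below uses assumed per-month rates (illustrative only; check current AWS pricing, which is also prorated rather than billed in whole months) to show how a load-test configuration adds up:

```python
# Hypothetical rates for illustration only; real AWS Provisioned
# IOPS pricing differs by region and changes over time, and charges
# are prorated rather than billed in whole months.
RATE_PER_PIOPS_MONTH = 0.10   # assumed $ per provisioned IOPS-month
RATE_PER_GB_MONTH = 0.125     # assumed $ per GB-month of volume storage

def piops_volume_monthly_cost(size_gb, provisioned_iops):
    """Estimate the monthly charge for one Provisioned IOPS volume:
    you pay for the capacity and, separately, for the IOPS you
    reserved, even if the volume sits idle."""
    return (size_gb * RATE_PER_GB_MONTH
            + provisioned_iops * RATE_PER_PIOPS_MONTH)

# A hypothetical load test: ten 500 GB volumes, each provisioned
# at 4,000 IOPS, left running for a month.
cost = 10 * piops_volume_monthly_cost(500, 4000)
print(round(cost, 2))  # 4625.0
```

Each line item does "look small at the time," as Perry puts it, but the IOPS reservation here costs more than six times the storage itself.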

While many consider cloud backup a low-cost alternative to traditional backup, it’s easy to get caught off guard by storage-related cloud computing costs, specifically snapshot backups, which capture data at multiple points in time during a given day.

It’s easy to do an automatic snapshot backup and forget to prune back the data set, according to Cascadeo’s Reimer. This means the data footprint for the backup grows endlessly.

“And then they get a $20,000 storage bill and think, ‘What the heck happened?'” Reimer said. “The answer is: You have 10,000 different backups — you need to go back and clean that out.”
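A retention policy only prevents that outcome if something actually enforces it. Here is a minimal sketch of the pruning logic, in pure Python with hypothetical snapshot records standing in for a cloud API listing, that keeps only the newest N snapshots per volume:

```python
from datetime import datetime

def snapshots_to_delete(snapshots, keep_per_volume=7):
    """Given snapshot records (dicts with 'volume_id', 'snapshot_id'
    and 'start_time'), return the IDs falling outside a retention
    window of the newest `keep_per_volume` snapshots per volume."""
    by_volume = {}
    for snap in snapshots:
        by_volume.setdefault(snap["volume_id"], []).append(snap)

    doomed = []
    for snaps in by_volume.values():
        # Newest first; everything past the retention count goes.
        snaps.sort(key=lambda s: s["start_time"], reverse=True)
        doomed.extend(s["snapshot_id"] for s in snaps[keep_per_volume:])
    return doomed

# Hypothetical listing: ten daily snapshots of a single volume.
listing = [
    {"volume_id": "vol-1", "snapshot_id": f"snap-{d:02d}",
     "start_time": datetime(2013, 5, d)}
    for d in range(1, 11)
]
print(snapshots_to_delete(listing, keep_per_volume=7))
# ['snap-03', 'snap-02', 'snap-01']
```

Run on a schedule, with the returned IDs fed to the provider's delete call, this keeps the backup footprint bounded instead of letting 10,000 snapshots pile up.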

Disaster recovery, monitoring and network bandwidth considerations

Server instance redundancy for disaster recovery is an often-overlooked cost, according to a director of engineering for a Fortune 100 company who requested anonymity. Any new cloud deployment for this company requires at least 12 to 24 machines, just to have a presence with the proper redundancy and controls in place.

“There’s a minimum footprint we have to run, there are the ongoing replication costs to keep our data out there, there’s monitoring and control, and then there’s any optimization you have to do,” he said.

Then there’s network bandwidth. While vendors such as Amazon and Rackspace state their bandwidth charges up front, those charges must be weighed just as carefully as server and storage costs.

“If you’re highly dependent on your previous data center location, it might not be a good fit, and if you’re making calls to the back end all the time, it’s something you have to be aware of,” said Matt Lipinski, architect for Reed Elsevier Technology Services, based in Miamisburg, Ohio.

In certain situations, bandwidth costs may not be immediately obvious, according to Cascadeo’s Reimer.

For example, with AWS Direct Connect, where customers have their own private-line circuit that goes directly to the Amazon data center, Amazon still meters that traffic going out. So even though the customer is paying for a fixed-capacity circuit from a telecom carrier, “You’re effectively paying for it twice,” Reimer said.
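The "paying for it twice" effect is easiest to see in numbers. The figures below are purely hypothetical (real carrier circuit fees and AWS Direct Connect data-transfer rates vary by port speed, region and contract), but they show how the metered egress charge stacks on top of the fixed circuit:

```python
# Hypothetical figures for illustration only; actual carrier and
# AWS Direct Connect rates depend on port speed, region and contract.
CIRCUIT_MONTHLY = 1500.0     # assumed fixed carrier charge, $/month
AWS_EGRESS_PER_GB = 0.03     # assumed metered data-transfer-out, $/GB

def monthly_transfer_cost(gb_out):
    """Total monthly cost of moving `gb_out` GB out of the cloud over
    the private circuit: the fixed circuit fee, plus the provider's
    metered egress charge on the very same bytes."""
    return CIRCUIT_MONTHLY + gb_out * AWS_EGRESS_PER_GB

# At 50 TB of egress in a month, the metered portion alone equals
# the entire fixed circuit fee -- effectively paying twice.
print(round(monthly_transfer_cost(50_000), 2))  # 3000.0
```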

