Amazon data center

(Edit 3/16/2012: I am surprised that this post has been picked up by a lot of media outlets. Given the strong interest, I want to emphasize what is measured and what is derived. The # of server racks in EC2 is what I am directly observing. By assuming 64 physical servers in a rack, I can derive a rough server count. But remember this is an *assumption*. Check the comments below: some think that AWS uses 1U servers, others think that AWS is less dense. Obviously, using a different assumption, the estimated server number would be different. For example, if a credible source tells you that AWS uses 36 1U servers in each rack, the number of servers would be 255,600. An additional note: please visit my disclaimer page. This is a personal blog; it only represents my personal opinion, not my employer's.)

Similar to the EC2 CPU utilization rate, another piece of secret information Amazon will never share with you is the size of its data centers. But it is really informative if we can get a glimpse, because Amazon is clearly a leader in this space, and its growth rate would be a great indicator of how well the cloud industry is doing.

Although Amazon would never tell you, I have figured out a way to probe for its size. There have been early guesstimates on how big Amazon's cloud is, and there are even tricks to figure out how many virtual machines are started in EC2, but this is the first time anyone can estimate the real size of Amazon EC2.

The methodology is fully documented below for those inquisitive minds. If you are one of them, read it through and feel free to point out if there are any flaws in the methodology. But for those of you who just want to know the numbers: Amazon has a pretty impressive infrastructure. The following table shows the number of server racks and physical servers each of Amazon’s data centers has, as of Mar. 12, 2012. The column on server racks is what I directly probed (see the methodology below), and the column on number of servers is derived by assuming there are 64 blade servers in each rack.

data center\size # of server racks # of blade servers
US East (Virginia) 5,030 321,920
US West (Oregon) 41 2,624
US West (N. California) 630 40,320
EU West (Ireland) 814 52,096
AP Northeast (Japan) 314 20,096
AP Southeast (Singapore) 246 15,744
SA East (Sao Paulo) 25 1,600
Total 7,100 454,400

The first key observation is that Amazon now has close to half a million servers, which is quite impressive. The other observation is that the US East data center, being the first data center, is much bigger than the others. What this means is that it is hard to compete with Amazon on scale in the US, but in other regions the entry barrier is lower. For example, Sao Paulo has only 25 racks of servers.

I also show the growth rate of Amazon's infrastructure for the past 6 months below. I only collected data for the US East data center because it is the largest and most popular data center. The Y axis shows the number of server racks in the US East data center.

EC2 US east data center growth in the number of server racks

Besides their size, the growth rate is also pretty impressive. The US east data center has been adding roughly 110 racks of servers each month. The growth rate looks roughly linear, although recently it is showing signs of slowing down.

Probing methodology

Figuring out EC2's size is not trivial. Part of the reason is that EC2 provides you with virtual machines, and it is difficult to know how many virtual machines are active on a physical host. Thus, even if we can determine how many virtual machines there are, we still cannot figure out the number of physical servers. Instead of focusing on how many servers there are, our methodology probes for the number of server racks.

It may sound harder to probe for the number of server racks. Luckily, EC2 uses a regular pattern of IP address assignment, which can be exploited to correlate with server racks. I noticed the pattern by looking at a lot of instances I launched over time and running traceroutes between my instances.  The pattern is as follows:

  • Each EC2 instance is assigned an internal IP address in the form of 10.x.x.x.
  • Each server rack is assigned a 10.x.x.x/22 IP address range, i.e., all virtual machines running on that server rack have the same 22-bit IP prefix.
  • A 10.x.x.x/22 IP address range has 1,024 IP addresses, but the first 256 are reserved for Dom0 virtual machines (the system management virtual machine in Xen), and only the last 768 are used for customers' instances.
  • Within the first 256 addresses, two, at 10.x.x.2 and 10.x.x.3, are reserved for routers on the rack. These two routers are arranged in a load-balanced and fault-tolerant configuration to route traffic in and out of the rack. I verified that the uplink capacity from 10.x.x.2 and 10.x.x.3 is roughly 2 Gbps in total, further suggesting that they are routers, each with a 1 Gbps uplink.

Understanding the pattern allows us to deduce how many racks there are. In particular, if we know a virtual machine at a certain internal IP address (e.g., 10.2.13.243), then we know there is a rack using the corresponding /22 address range (e.g., a rack at 10.2.12.x/22). If we take this to the extreme, where we know the IP address of at least one virtual machine on each rack, then we can see all racks in EC2.

So how can we know the IP addresses of a large number of virtual machines? You could certainly launch a large number of virtual machines and record the internal IP addresses that you get, but that is going to be costly. Unless you are RightScale, where a large number of instances are launched through your service, you may not be able to take this approach. Another approach is to scan the whole IP address space and watch when an instance responds to a ping. There are several problems with this approach. First, it may be considered port scanning, which is a violation of AWS's policy. Second, not all live instances respond to ping, especially with AWS's security groups blocking all ports by default. Lastly, the whole 10.x.x.x IP address space is huge, and it would take a considerable amount of time to scan.

While you may be discouraged at this point, it turns out there is another way. In addition to the internal IP address we talked about, each AWS instance also has an external IP address. Although we cannot scan the external IP addresses either (so as not to violate the port scanning policy), we can leverage DNS translation to figure out the internal IP addresses. If you query DNS for an EC2 instance's public DNS name from inside EC2, the DNS server will return its internal IP address (if you query it from outside of EC2, you will get the external IP instead). So, all that is left to do is to get a large number of EC2 instances' public DNS names. Luckily, we can easily derive the list of public DNS names, because EC2 instances' public DNS names are directly tied to their external IP addresses. An instance at external IP address x.y.z.w (e.g., 50.17.204.150) will have a public DNS name ec2-x-y-z-w…..amazonaws.com (e.g., ec2-50-17-204-150.compute-1.amazonaws.com if in the US East data center). To enumerate all public DNS names, we just have to find all public IP addresses. Again, this is easy to do because Amazon publishes the public IP address ranges that EC2 uses.

Once we have determined the number of server racks, we simply multiply it by the number of physical servers in a rack. Unfortunately, we do not know how many physical servers are in each rack, so we have to make an assumption. I assume Amazon uses dense racks, where each rack has four 10U chassis and each chassis holds 16 blades, for a total of 64 blades per rack.

Let us recap how we can find all server racks (a short code sketch follows the list).

  • Enumerate all public IP addresses EC2 uses
  • Translate a public IP address to its public DNS name (e.g., ec2-50-17-204-150.compute-1.amazonaws.com)
  • Run a DNS query inside EC2 to get its internal IP address (e.g., 10.2.13.243).
  • Derive the rack’s IP range from the internal IP address (e.g., 10.2.12.x/22).
  • Count how many unique racks we have seen, then multiply it by the number of physical servers in a rack (I assume it is 64 servers/rack).
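
To make the recipe concrete, here is a minimal Python sketch of the counting loop. The helper names, the region suffix, and the 64-servers-per-rack figure are illustrative assumptions; the list of public IP addresses has to come from the ranges Amazon publishes, and the script must run from inside EC2 so that the DNS query returns internal addresses.

```python
# Minimal sketch of the rack-counting methodology (run from inside EC2 so that
# resolving a public DNS name returns the instance's internal 10.x.x.x address).
# The function names, region suffix, and 64 servers/rack are assumptions.
import socket
import ipaddress

def public_dns_name(public_ip, region_suffix="compute-1.amazonaws.com"):
    # e.g., 50.17.204.150 -> ec2-50-17-204-150.compute-1.amazonaws.com (US East)
    return "ec2-" + public_ip.replace(".", "-") + "." + region_suffix

def rack_prefix(internal_ip):
    # Each rack is assumed to own one 10.x.x.x/22 range.
    return ipaddress.ip_network(internal_ip + "/22", strict=False)

def count_racks(public_ips, servers_per_rack=64):
    racks = set()
    for ip in public_ips:
        try:
            internal = socket.gethostbyname(public_dns_name(ip))
        except socket.gaierror:
            continue  # this public IP is not currently assigned to an instance
        if internal.startswith("10."):
            racks.add(rack_prefix(internal))
    return len(racks), len(racks) * servers_per_rack
```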

Caveat

Even though my methodology provides insights that were not possible before, it has its shortcomings, which could lead to inaccurate results. The limitations are:

  • The methodology requires an active instance on a rack for the rack to be observed. If the rack has no instances running on it, we cannot count it.
  • We cannot know how many physical servers are in a rack. I assume Amazon has dense racks, each rack has 4 10U chassis, and each chassis holds 16 blades.
  • My methodology cannot tell whether the racks I observe are for EC2 only. It is possible that other AWS services (such as S3, SQS, SimpleDB) run on virtual servers on the same set of racks. It is also possible that they run on dedicated racks, in which case AWS is bigger than what I can observe. So, what I am observing is only a lower bound on the size of AWS.

Host server CPU utilization in Amazon EC2 cloud

One potential benefit of using a public cloud, such as Amazon EC2, is that a cloud could be more efficient. In theory, a cloud can support many users, and it can potentially achieve a much higher server utilization by aggregating a large number of demands. But is that really the case in practice? If you ask a cloud provider, they most likely would not tell you their CPU utilization. But this is really good information to know. Besides settling the argument about whether the cloud is more efficient, it is very interesting from a research angle because it points out how much room there is to improve server utilization.

To answer this question, we came up with a new technique that allows us to measure the CPU utilization in public clouds, such as Amazon EC2. The idea is that if a CPU is highly utilized, the chip will get hot over time, and when the CPU is idle, it will be put into sleep mode more often, and thus the chip will cool off over time. Obviously, we cannot just stick a thermometer into a cloud server, but luckily, most modern Intel and AMD CPUs are already equipped with on-board thermal sensors. Generally, there is one thermal sensor for each core (e.g., 4 sensors for a quad-core CPU), which gives us a pretty good picture of the chip temperature. In a couple of cloud providers, including Amazon EC2, we are able to successfully read these temperature sensors. To monitor CPU utilization, we launch a number of small probing virtual machines (also called instances in Amazon's terminology), and we continuously monitor the temperature changes. Because of multi-tenancy, other virtual machines will be running on the same physical host. When those virtual machines use the CPU, we will be able to observe temperature changes. Essentially, the probing virtual machine is monitoring all other virtual machines sitting on the same physical host. Of course, deducing CPU utilization from CPU temperature is non-trivial, but I won't bore you with the technical details here. Instead, I refer interested readers to the research paper.
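
For the curious, here is a rough sketch of the probing loop. It assumes the per-core thermal sensors are readable from the probing VM through the standard Linux hwmon sysfs interface; whether a particular cloud exposes those sensors to a guest is exactly the part that varies by provider, and the actual sensor access and the temperature-to-utilization model are covered in the paper.

```python
# Rough sketch of the temperature-based probing idea. Assumes per-core thermal
# sensors are readable from the probing VM via the standard hwmon sysfs layout;
# this is an assumption, not a statement about how any specific cloud exposes them.
import glob
import time

def read_core_temps():
    temps = {}
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        with open(path) as f:
            temps[path] = int(f.read().strip()) / 1000.0  # millidegrees C -> degrees C
    return temps

def monitor(interval_s=10, duration_s=7 * 24 * 3600):
    # Rising temperatures suggest co-resident VMs are burning CPU;
    # falling temperatures suggest the cores are mostly idle.
    samples, end = [], time.time() + duration_s
    while time.time() < end:
        samples.append((time.time(), read_core_temps()))
        time.sleep(interval_s)
    return samples
```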

We carried out the measurement methodology in Amazon EC2 using 30 probing instances (each running on a separate physical host) for a whole week. Overall, the average CPU utilization is not as high as many have imagined. Among the servers we measured, the average CPU utilization in EC2 over the whole week is 7.3%. This is certainly lower than what an internal data center could achieve. In one of the virtualized internal data centers we looked at, the average utilization is 26%, more than 3 times higher than what we observe in EC2.

Why is CPU utilization not higher? I believe it results from a key limitation of EC2, namely that EC2 caps the CPU allocation for any instance. Even if the underlying host has spare CPU capacity, EC2 will not allocate additional cycles to your instance. This is rational and necessary, because, as a public cloud provider, you must guarantee as much isolation as possible in a public infrastructure, so that one greedy user cannot make another user's life miserable. However, the downside of this limitation is that it is very difficult to increase the physical host's CPU utilization. In order for the utilization to be high, all instances running on the same physical host have to use the CPU at the same time. This is often not possible. We have first-hand experience running a production web application in Amazon. We know we need the capacity at peak time, so we provisioned an m1.xlarge server. But we also know that we cannot use the allocated CPU 100% of the time. Unfortunately, we have no way of giving up the extra CPU so that other instances can use it. As a result, I am sure the underlying physical host is very underutilized.

One may argue that an instance's owner should turn off the instance when s/he is not using it to free up resources, but in reality, because an instance is so cheap, people never turn it off. The following figure shows a physical host that we measured. The physical host consistently gets busy shortly before 7 am UTC (11 pm PST) on Sunday through Thursday, and it stays busy for roughly 7 hours. The regularity has to come from the same instance, and given that the chance of landing a new instance on the same physical host is fairly low, you can be sure that the instance was on the whole time, even when it was not using the CPU. Our own experience with Steptacular — the production web application — also confirms this. We do not turn it off during off-peak hours because there is so much state stored on the instance that it is a big hassle to shut it down and turn it back on.

 

CPU utilization on one of the measured servers

 

Compared to other cloud providers, Amazon does enjoy the advantage of having many customers; thus, it is in the best position to achieve a higher CPU utilization. The following figure shows the busiest physical host that we profiled. A couple of instances on this physical host are probably running a batch job, and they are very CPU hungry. On Monday, two or three of these instances got busy at the same time, and as a result the CPU utilization jumped really high. However, the overlapping period is only a few hours during the week, and the average utilization comes out to be only 16.9%. It is worth noting that this busiest host we measured still has a lower CPU utilization than the average CPU utilization we observed in an internal data center.

CPU utilization of a busy EC2 server

You may walk away from this disappointed to know that the public cloud does not have an efficiency advantage. But, from a research standpoint, I think this is actually great news. It points out that there is significant room for improvement, and research in this direction can have a big impact on a cloud provider's bottom line.

Launch a new site in 3.5 weeks with Amazon

Getting started quickly is one of the reasons people adopt the cloud, and that is why Amazon Web Services (AWS) is so popular. But people often overlook the fact that the retail side of Amazon is also amazing. If your project involves a supply chain, you can also leverage Amazon retail to get up and running quickly.

We recently launched a wellness pilot project at Accenture where we leveraged both Amazon retail and Amazon Web Services. The Steptacular pilot is designed to encourage Accenture US employees to lead a healthy lifestyle. We all had our New Year's resolutions, but we always procrastinate, and we never exercise as much as we should. Why? Because there is a lack of motivation and engagement. The Steptacular pilot uses a pedometer to track a participant's physical activity, then leverages concepts from gamification, using social incentives (peer pressure) and monetary incentives to constantly engage participants. I will talk about the pilot and its results in detail in a future post, but in this post, let me share how we were able to launch within 3.5 weeks, the key capabilities we leveraged from Amazon, and some lessons we learned from this experience.

Supply chain side

The Steptacular pilot requires participants to carry a pedometer to track their physical activity. This is the first step in increasing engagement — using technology to alleviate the hassle of manual (and inaccurate) entry. We quickly locked in on the Omron HJ-720 model because it is low cost and has a USB connector, so we can automate the step upload process.

We got in touch with Omron. The guys at Omron are super nice. Once they learned what we were trying to do, they immediately approved us as a reseller. That means we could buy pedometers at the wholesale price. Unfortunately, we still had to figure out how to get the devices into our participants' hands. Accenture is a distributed organization with 42 offices in the US alone. To make matters worse, many consultants work from client sites, so it is not feasible to distribute the devices in person. We seriously considered three options:

  1. Ask our participants to order directly from Amazon. This is the solution we chose in the end, after connecting with the Amazon buyer in charge of the Omron pedometer and being assured that they would have no problem handling the volume. It turned out that this not only saved us a significant amount of shipping hassle, but was also very cost effective for our participants.
  2. Be a vendor ourselves and use Amazon for the supply chain. Although I did not know about it before, I was pleasantly surprised to learn about the Fulfillment by Amazon capability. This is Amazon's cloud for the supply chain. Like a cloud, it is provided as a service — you store your merchandise in Amazon's warehouse, and they handle the inventory and shipping. Also, like a cloud, it is pay-per-use with no long-term commitment. Although equally good at reducing hassle for us, we found that we could not save cost. Amazon retail is so efficient and has such a small margin that we realized we could not compete, even though we were happy with a 0% margin and even though we (supposedly) paid the same wholesale price.
  3. Ship and manage the devices ourselves. The only way we could be cheaper is if we managed the supply chain and shipping logistics ourselves, and of course, this assumes that we work for free. However, the amount of work is huge, and none of us wants to lick envelopes for a few weeks, definitely not for free.

The pilot officially launched on Mar. 29th. Besides Amazon itself, another Amazon affiliate, J&R Music, also sells the same pedometer on Amazon's website. Within a few minutes, our participants were able to totally drain J&R's stock. Amazon, however, remained in stock for the whole duration. Within a week, they sold roughly 3,000 pedometers. I am sure J&R is still mystified by the sudden surge in demand. If you are from J&R, my apologies for not giving adequate warning ahead of time, and kudos to you for not overcommitting your stock like many TouchPad vendors did recently (I am one of those burned by OnSale).

In addition to managing device distribution, we also had to worry about how to subsidize our participants. Our sponsors agreed to subsidize each pedometer by $10 to ease adoption, but we could not just write each participant a $10 check — that is too much work. Again, Amazon came to the rescue. There were two options. One was that Amazon could generate a batch of one-time-use $10 discount codes specifically tied to the pedometer product; then, based on how many were redeemed, Amazon could bill us for the total cost. The other option was that we could buy $10 gift cards in bulk and distribute them to our participants electronically. We ultimately chose the gift card option for its flexibility, and also because it is not considered a discount, so the device would still cost more than $25 and our participants would qualify for Super Saver shipping. Looking back, I do regret choosing the gift card option, because managing squatters turned out to be a big hassle, but that is not Amazon's fault; it is just human nature.

Technology platform side

It is a no-brainer to use Amazon to leverage its scaling capabilities, especially for a short-term, quick project like ours. One key thing we learned from this experience is that you should only use what you need. Amazon Web Services offers a wide range of services, all designed for scale, so it is likely that you will find one that serves your needs.

Take, for example, the email service Amazon provides. Initially, we used Gmail for sending out signup confirmations and email notifications. During the initial scaling trial, we soon hit Gmail's limit on how fast we could send emails. Once we realized the problem, we quickly switched to Amazon SES (Simple Email Service). There is an initial cap on how many emails we could send, but it only took a couple of emails to get the limit lifted. With a couple of hours of coding and testing, we could all of a sudden send thousands of emails at once.
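
For illustration, sending through SES boils down to a few lines. The sketch below uses today's boto3 SDK (which postdates the pilot) and hypothetical addresses; the point is simply that swapping Gmail for SES is a small code change.

```python
# Minimal sketch of sending a notification through Amazon SES with boto3.
# The sender and recipient addresses are hypothetical; SES requires the sender
# (and, in sandbox mode, the recipient) to be verified first.
import boto3

ses = boto3.client("ses", region_name="us-east-1")

def send_notification(to_address, subject, body):
    return ses.send_email(
        Source="noreply@example.com",          # hypothetical verified sender
        Destination={"ToAddresses": [to_address]},
        Message={
            "Subject": {"Data": subject},
            "Body": {"Text": {"Data": body}},
        },
    )

send_notification("participant@example.com", "Steps uploaded", "Your steps for today were recorded.")
```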

In addition to SES, we also leveraged AWS's CloudWatch service to closely monitor the system and be alerted of failures. Best of all, it all comes for free, without any development effort on our side.

Even though Amazon Web Services offers a large array of services, you should only choose what you absolutely need. In other words, do not over-engineer. Take auto scaling as an example. If you host a website in Amazon, it is natural to think about putting in an auto-scaling solution, just in case, to handle the unexpected. Amazon has its own auto scaling solution, and we, at Accenture Labs, have even developed an auto-scaling solution called WebScalar in the past. If you are Netflix, it makes absolute sense to do so, because your traffic is huge and it fluctuates widely. But if you are smaller, you may not need to scale beyond a single instance, and if you do not need it, auto scaling is extra complexity that you do not want to deal with, especially when you want to launch quickly. We estimated that we would have around 4,000 participants, and when we did a quick profiling exercise, we figured that a standard extra-large instance in Amazon would be adequate to handle the load. Sure enough, even though the website experienced a slowdown for a short period during launch, it remained adequate to handle the traffic for the whole duration of the pilot.

We also learned a lesson on fault tolerance — really think through your backup solution. Steptacular survived two large-scale failures in the US East data center. We enjoyed peace of mind partly because we were lucky and partly because we had a plan. Steptacular uses an instance-store instance (instead of an EBS-backed instance). We made the choice mainly for performance reasons — we wanted to free up network bandwidth and leverage the local hard disk bandwidth. This turned out to have saved us from the first failure in April, which was caused by EBS block failures. Since we cannot count on EBS for persistence, we built in our own solution. Most static content on the instance is bundled into an Amazon Machine Image (AMI). There are two pieces of less static content (content that changes often) stored on the instance: the website logic and the steps database. The website logic is stored in a Subversion repository, and the database is synced to another database running outside of the US East data center. This architecture allows us to get back up and running quickly: first launch our AMI, then check out the website code from the repository, and lastly dump and reload the database from the mirror. Even though we never had to initiate this recovery procedure, it is good to have the peace of mind of knowing your data is safe.
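
To give a flavor of how simple the recovery plan is, here is a hypothetical sketch of the three steps. The AMI ID, repository URL, mirror hostname, and the choice of MySQL tooling are placeholders and assumptions, not the pilot's actual setup.

```python
# Hypothetical recovery sketch. Step 1 runs from any machine; steps 2 and 3 run
# on the new instance once it has booted. All identifiers are placeholders.
import subprocess
import boto3

def launch_replacement():
    # 1. Launch a fresh instance from the pre-built AMI (static content baked in).
    ec2 = boto3.client("ec2", region_name="us-west-1")
    ec2.run_instances(ImageId="ami-00000000", InstanceType="m1.xlarge",
                      MinCount=1, MaxCount=1)

def restore_on_new_instance():
    # 2. Check out the latest website logic from the Subversion repository.
    subprocess.run(["svn", "checkout", "https://svn.example.com/steptacular/trunk",
                    "/var/www/steptacular"], check=True)
    # 3. Dump the mirrored database and reload it locally (assuming MySQL here).
    dump = subprocess.run(["mysqldump", "-h", "mirror.example.com", "steps"],
                          check=True, capture_output=True)
    subprocess.run(["mysql", "steps"], input=dump.stdout, check=True)
```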

Thanks to Amazon — both Amazon retail and Amazon Web Services — we were able to pull off the pilot in 3.5 weeks. More importantly, the pilot itself has collected some interesting results on how we can motivate people to exercise more. But I will leave that to a future post, after we have had a chance to dig deeper into the data.

Acknowledgments

Launching Steptacular in 3.5 weeks would not have been possible without the help of many people. We would like to especially thank the following folks:

  • Jim Li from Omron for providing hardware, software, and logistics support
  • Jeff Barr from Amazon for connecting us with the right folks at Amazon retail
  • James Hamilton from Amazon for increasing our email limit on the spot
  • Charles Allen from Amazon for getting us the gift codes quickly
  • Tiffany Morley and Helen Shen from Amazon for managing the inventory so that the pedometer miraculously stayed in stock despite the huge demand

Last but not least, big kudos to the Steptacular team, which includes several Stanford students, who worked really hard, even through finals week, to get the pilot up and running. They are one of the best teams I have ever worked with.

How to run MapReduce in Amazon EC2 spot market

If you often run large-scale MapReduce/Hadoop jobs in Amazon EC2, you must have thought about using the spot market. EC2's spot price for an instance is typically 60+% less than the on-demand price. For a large job, where you use many instances for many hours, a 60+% saving can be a substantial amount.

Unfortunately, using the spot market is not trivial. In exchange for the lower price, you explicitly agree that Amazon can terminate your instances at any time. This is a problem, since you may lose all your work. A research paper from HotCloud last year showed that even adding more spot instances (not replacing existing nodes) could be detrimental to a running MapReduce job. In other words, you add more resources to your cluster, but your running time could actually be longer.

Beyond lengthening your computation, the spot market could even make you lose your data. Existing MapReduce implementations, such as Google's internal implementation or Hadoop, are already designed with failure in mind. However, the assumed scenario is hardware failure, i.e., a small fraction of nodes may go down at any time. This assumption does not hold in the spot market environment, where all nodes of a cluster may fail at the same time. You not only can lose all your state (when the master node goes down), but you can also lose all your data (when the nodes holding all replicas of a piece of data go down together).

What about bidding a really high price for your spot instances and hoping that Amazon never raises the price that high? Unfortunately, there is no guarantee on how high the spot price could go. On several occasions last year, the spot price actually exceeded the on-demand price. This is likely because some users were bidding at a higher-than-on-demand price, and Amazon really needed to kill those instances to free up capacity.

While the naive approach of bidding at a high price may not work, I am happy to report that there is a new technique that can help you leverage the spot market to save money. We recently developed a MapReduce implementation that can tolerate large-scale node failures (e.g., when your bid price falls below Amazon's spot price). Even if all nodes in your cluster are terminated, we can guarantee that no state is lost and that you can continue to make forward progress when your cluster comes back online (e.g., when your bid price is again higher than Amazon's spot price).

Our implementation leverages two key things. First, when Amazon terminates your instance, it is not a hard power-off. Instead, it is a soft OS shutdown, where you have a couple of minutes to execute your shutdown script. We modified our shutdown script to save the current progress and generate a new task for the remaining work, so that another node can take over in the future. In other words, we use on-demand checkpointing to save state only when needed.
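
Here is a minimal sketch of the on-demand checkpointing idea. The queue interface, the task format, and the process() helper are illustrative assumptions; the essential point is that the SIGTERM handler runs inside the soft-shutdown window and hands the unfinished portion of the task back to the system so another node can take over.

```python
# Minimal sketch of on-demand checkpointing on a spot instance. The queue
# client, task format, and process() helper are illustrative assumptions.
import json
import signal
import sys

class SpotWorker:
    def __init__(self, task, queue):
        self.task = task            # e.g., {"input": "part-0007", "offset": 0}
        self.queue = queue          # any durable queue client with a send(message) method
        self.offset = task["offset"]
        # Spot termination is a soft OS shutdown, so SIGTERM reaches the process.
        signal.signal(signal.SIGTERM, self.on_shutdown)

    def on_shutdown(self, signum, frame):
        # Publish a task describing the remaining work so another node can
        # pick it up when the cluster comes back online.
        remaining = dict(self.task, offset=self.offset)
        self.queue.send(json.dumps(remaining))
        sys.exit(0)

    def run(self, records):
        # records is assumed to be an iterable already positioned at self.offset.
        for record in records:
            process(record)         # user-defined map/reduce work (assumed helper)
            self.offset += 1
```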

Second, we constantly save intermediate data in order to minimize the volume of state we have to save in the shutdown phase. Our solution is built on Cloud MapReduce, which constantly streams intermediate data out of the local node. In comparison, other MapReduce implementations, such as Hadoop, save all intermediate data locally before a task finishes. This could result in too large a dataset to save during the short shutdown window.

I will not belabor the details of our implementation, except to mention that it was published last week at the USENIX HotCloud conference. You can read the Spot Cloud MapReduce paper for the full details.

Terremark announces new lower price

Competition is a good thing. A continuous drop in hardware prices is a good thing. Improvement in efficiency is also a good thing. All of those should translate into a lower computing cost to you — the end user — over time. That is exactly what Terremark has done today — lowering its cloud offering prices, in one case by up to 42%. We compared Terremark's price to EC2 before; it is time to update the comparison based on the new pricing.

Following what we did to get EC2's unit cost, we can run a regression analysis to estimate Terremark's unit cost. We only consider Linux VMs, in order not to factor in software license cost. We assume Cost = c * CPU + m * RAM (Terremark charges storage separately from the VM cost, at $0.25/GB/month). The regression determines the unit costs to be

c = 1.31 cents/VPU/hour
m = 5.38 cents/GB/hour

Not surprisingly, both unit costs are lower than before (previously 2.06 cents/VPU/hour and 6.46 cents/GB/hour, respectively).
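
For reference, the fit itself is a few lines of least squares. The sketch below reads the new prices off the table that follows and lands on the unit costs quoted above (up to rounding).

```python
# Least-squares fit of Cost = c*VPU + m*RAM to Terremark's new Linux VM prices
# (the actual values in the table below), in cents/hour.
import numpy as np

vpus = [1, 2, 4, 8]
rams = [0.5, 1, 1.5, 2, 4, 8, 12, 16]
prices = [                       # rows = RAM sizes, columns = VPU counts
    [3.5,  4.0,  4.5,  4.9],
    [6.0,  7.0,  8.0, 10.0],
    [9.0, 10.5, 12.0, 13.5],
    [12.0, 14.1, 16.1, 20.0],
    [21.7, 27.1, 30.1, 35.9],
    [40.1, 48.2, 56.7, 63.4],
    [60.2, 68.6, 76.2, 82.4],
    [80.3, 84.4, 89.9, 93.2],
]

X = np.array([[v, r] for r in rams for v in vpus], dtype=float)
y = np.array([p for row in prices for p in row], dtype=float)

(c, m), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"c = {c:.2f} cents/VPU/hour, m = {m:.2f} cents/GB/hour")  # ~1.31 and ~5.38
```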

The regression result still does not fit the real cost very well. Terremark offers economy of scale in its cost model, where it heavily discounts both CPU and RAM as you move up in the configuration. The following table shows both the new, reduced cost and the cost predicted by the estimated parameters (written as actual / predicted) for the various VM configurations.

memory (GB)\CPU 1 VPU 2 VPU 4 VPU 8 VPU
0.5 3.5 / 4 4 / 5.31 4.5 / 7.93 4.9 / 13.17
1 6 / 6.69 7 / 8 8 / 10.62 10 / 15.86
1.5 9 / 9.38 10.5 / 10.69 12 / 13.31 13.5 / 18.55
2 12 / 12.07 14.1 / 13.38 16.1 / 16 20 / 21.24
4 21.7 / 22.82 27.1 / 24.13 30.1 / 26.75 35.9 / 31.99
8 40.1 / 44.33 48.2 / 45.64 56.7 / 48.26 63.4 / 53.5
12 60.2 / 65.85 68.6 / 67.16 76.2 / 69.78 82.4 / 75.02
16 80.3 / 87.36 84.4 / 88.67 89.9 / 91.29 93.2 / 96.53

Again, we compare cost against a fictitious EC2 instance with the exact same spec. For simplicity, we assume a VPU in a Terremark VM can get the full attention of a physical core. This is the more common case, because Terremark uses VMware's DRS (Distributed Resource Scheduler), which can dynamically move VMs to a different physical host to avoid contention.

The following table shows the EC2 equivalent cost assuming a virtual core can get the full power of the physical core.

memory (GB) VPU Terremark price (cents/hour) Equivalent EC2 cost (cents/hour) Terremark cost/EC2 cost
0.5 1 3.5 4.09 0.86
0.5 2 4 7.17 0.56
0.5 4 4.5 13.33 0.34
0.5 8 4.9 25.65 0.19
1 1 6 5.09 1.18
1 2 7 8.17 0.86
1 4 8 14.33 0.56
1 8 10 26.66 0.38
1.5 1 9 6.1 1.48
1.5 2 10.5 9.18 1.14
1.5 4 12 15.34 0.78
1.5 8 13.5 27.66 0.49
2 1 12 7.1 1.69
2 2 14.1 10.18 1.38
2 4 16.1 16.35 0.98
2 8 20 28.67 0.7
4 1 21.7 11.12 1.95
4 2 27.1 14.21 1.91
4 4 30.1 20.37 1.48
4 8 35.9 32.69 1.1
8 1 40.1 19.17 2.09
8 2 48.2 22.25 2.17
8 4 56.7 28.41 2.0
8 8 63.4 40.73 1.56
12 1 60.2 27.21 2.21
12 2 68.6 30.29 2.26
12 4 76.2 36.45 2.09
12 8 82.4 48.78 1.69
16 1 80.3 35.25 2.28
16 2 84.4 38.33 2.20
16 4 89.9 44.5 2.02
16 8 93.2 56.82 1.64

Like we observed before, there are several configurations where Terremark is much cheaper than EC2. The 8VPU+0.5GB configuration is still the cheapest, at 19% of the equivalent EC2 cost. What is different from before is that the larger VM configurations are getting significantly cheaper. For example, the 16GB+8VPU configuration costs only 1.64 times its EC2 equivalent, compared to a ratio of 2.83 before. This means that it is getting more economical to run larger VMs in Terremark.

Let us hope the trend continues and cloud providers keep reducing the cost of computing, so that we can pay less for the same work or get more work done for the same budget.

EC2 spot pricing coming to Cluster Compute and Cluster GPU instances

AWS EC2 introduced the Cluster Compute and Cluster GPU instances a few months back. Those instances are very good for high-throughput HPC applications; unfortunately, they have not been available in the spot market. I think that is about to change: the Cluster Compute and Cluster GPU instances should be available through the spot market shortly.

Over the weekend (Feb. 6, 2011), I noticed that both cc1.4xlarge and cg1.4xlarge instances were showing up on AWS's console when you query the price history. But that went away the next day; I am guessing they were testing the feature. Starting today (Feb. 8, 2011), you can query the AWS API or use the AWS command line tool to see the price history for cc1.4xlarge instances. Furthermore, you can already start bidding for cc1.4xlarge (it was not possible over the weekend because the "current price" was not set for those instances).

Although there is no official announcement yet, I think it is just a matter of time. Have fun crunching through your HPC applications for cheap.

 

 

Comparing cloud providers on VM cost

How do you compare two IaaS clouds? Is Amazon EC2's small standard instance (1 ECU, 1.7GB RAM, 160GB storage) cheaper, or is Rackspace cloud's 256MB server (4 cores, 256MB RAM, 10GB storage) cheaper? It is obviously simpler to compare them if you focus only on one metric. For example, let us assume your application is CPU bound and does not require much memory at all. Then you should focus solely on the CPU power a cloud VM gives you. We have translated GoGrid, Rackspace, and Terremark's VM configurations into their equivalent ECU, so you can simply take the ratio between the cost and the ECU rating and pick the lowest ratio. Unfortunately, real-life applications are never that simple. They demand CPU cycles, memory, as well as hard disk storage capacity. So, how do you compare apples to apples?

The methodology

Since no methodology exists yet, we will propose one. Since the comparison results depend highly on the methodology chosen, we will first spell out the methodology we use, so that if you have a different one and come up with a different result, you can trace the source of the difference. If you see areas where we can improve the methodology, please do leave a comment. The methodology works as follows:

  1. We first break down the cost components in Amazon EC2. We assume Amazon has priced their instances using a linear model, i.e., Cost = c * CPU + m * Mem + s * Storage, where c is the unit cost of CPU per ECU per hour, m is the unit cost of memory per GB per hour, and s is the unit cost of storage per GB per hour. Amazon provides several types of instances, each with a different combination of CPU, memory and storage, which is enough of a hint for us to use regression analysis to estimate c, m and s. The details are in our ECU cost breakdown analysis.
  2. Once we have the unit cost in EC2, we can compare it with another cloud provider. We take one VM configuration from a cloud provider at a time, and we then compute what Amazon EC2 would charge for an instance with the exact same specification if EC2 were to offer it. This can easily be done by multiplying the EC2 unit costs (c, m, and s) by the amount of CPU, RAM, and storage in the VM, and adding them up (a small code sketch of this projection follows the list). Of course, this is hypothetical, because EC2 does not offer an instance with the exact same spec. So even if the EC2 price is lower, you cannot just buy a substitute from Amazon. However, this gives us a good sense of the relative cost.
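
Here is a tiny sketch of step 2. Only the RAM unit cost (2.01 cents/GB/hour) is quoted explicitly later in this series; the CPU and storage unit costs below are assumptions back-derived from the equivalent-cost tables, so treat them as approximations of the values from the EC2 cost breakdown post.

```python
# Step 2 of the methodology: price a hypothetical EC2 instance with the same
# spec as another provider's VM, then take the ratio. The unit costs are
# approximate: m = 2.01 is quoted in this series; c and s are back-derived
# from the equivalent-cost tables and are therefore assumptions.
C_PER_ECU = 1.37       # cents per ECU per hour (assumed)
M_PER_GB_RAM = 2.01    # cents per GB of RAM per hour (quoted)
S_PER_GB_DISK = 0.016  # cents per GB of storage per hour (assumed)

def equivalent_ec2_cost(ecu, ram_gb, storage_gb):
    return C_PER_ECU * ecu + M_PER_GB_RAM * ram_gb + S_PER_GB_DISK * storage_gb

def cost_ratio(provider_cents_per_hour, ecu, ram_gb, storage_gb):
    return provider_cents_per_hour / equivalent_ec2_cost(ecu, ram_gb, storage_gb)

# Example: Rackspace's 256MB VM at its minimum guaranteed CPU
# (0.09375 ECU, 10GB disk, 1.5 cents/hour) gives a ratio of about 1.87.
print(round(cost_ratio(1.5, 0.09375, 0.256, 10), 2))
```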

We have done the analysis with GoGrid, Rackspace, and Terremark.

We can compute the ratio between a cloud VM's cost and its hypothetical equivalent in EC2. The following table lists the few VMs that have the lowest ratios. If you are curious about the ratio for other VM configurations, feel free to dig into the individual posts on each provider. The ratios listed assume that you get the maximum CPU allowed under bursting, which is frequently the case with these cloud providers. Further, the ratios are computed against the EC2 N. Virginia data center; other EC2 data centers have a higher cost.

Provider RAM (GB) CPU (cores) storage (GB) cost ratio with an equivalent in EC2
Rackspace 0.25 4 10 0.168
Terremark 0.5 8 charged separately at $0.25/month/GB 0.19
Rackspace 0.5 4 20 0.314
Terremark 0.5 4 charged separately at $0.25/month/GB 0.338
Terremark 1 8 charged separately at $0.25/month/GB 0.375
Terremark 1.5 8 charged separately at $0.25/month/GB 0.491

 

How to use this data?

Due to the limitations of this methodology (comparing with a hypothetical equivalent in EC2), it only makes sense if one of the cloud providers you are comparing is Amazon EC2. In other words, do not compare Rackspace with Terremark based on these ratios.

It also makes no sense to use our results if you know the exact specification for your server. In that case, you should find a minimum VM configuration that is just barely bigger than your requirement and compare price.

Our results are useful if your application is flexible. For example, instead of using one m1.small instance in EC2, you could use several Rackspace 256MB VMs to achieve dramatic cost savings. Examples of a flexible application include a batch application, such as a MapReduce job, which can be chopped down to a finer granularity. Another example could be web servers in a web server farm, where the load balancer can divide up the work to take advantage of whatever computation capacity is provisioned on each web server.

Our results are also useful if you want to get a high-level overview. Consider an enterprise purchaser who wants to choose a cloud platform. There are many dimensions to consider, e.g., features, cost, SLA, contract terms, and so on. Doing a deep analysis at the beginning is just going to be overwhelming. Since Amazon is a big player in cloud, it most likely will be part of the evaluation. Having a ratio gives a ten-thousand-foot view, so that the decision maker knows whether an alternative cloud would save money. Then, as the evaluation progresses, he can dig deeper into a finer comparison.

Caveats:

There are many caveats in using our results that we should spell out.

  • This is only comparing a VM cost, including its CPU, memory and storage. But, it does not include other costs, such as bandwidth transfers. The bandwidth cost varies wildly, for example, GoGrid offers free inbound traffic, which can translate into a significant cost saving.
  • When we compare CPUs, we are only comparing their processing power, not their IO capabilities (both disk and network IO). In Amazon, we sometimes observe degraded IO performance, possibly due to competing VMs on the same host. It is a sad side effect of using popular cloud offerings.
  • As we mentioned, this only applies to fungible applications that can take full advantage of provisioned CPU, memory and storage resources. For example, if you cannot take advantage of the provisioned RAM, it does not matter if it is a good deal. You are wasting the memory, and you may be better off with a VM configuration from a different cloud provider with a smaller provisioned RAM.
  • This is not a substitute for feature comparisons. For example, GoGrid offers free F5 hardware load balancer. If you need a hardware load balancer, you should consider that separately.

Terremark cost comparison with Amazon EC2

(Earlier posts in this series are: EC2 cost break down, GoGrid & EC2 cost comparison, Rackspace & EC2 cost comparison)

In this post, let us compare the VM cost between Terremark vCloud Express and Amazon EC2. Terremark is one of the first cloud providers based on VMware technology. Unlike EC2, Rackspace and GoGrid, which use Xen as the hypervisor, Terremark uses VMware's ESX hypervisor, which is arguably richer in functionality.

Following the methodology we have used so far, we first need to understand Terremark's hardware infrastructure and its resource allocation policy. Using the same technique we used for EC2's hardware analysis, we determine that Terremark runs on a platform with two sockets of quad-core AMD Opteron 8389 processors. PassMark does not have a benchmark result for this processor, so we had to run the benchmark ourselves. We used the 16GB+8VPU configuration — its largest — to minimize interference from other VMs, and we ran it multiple times late at night to ensure that we were indeed measuring the underlying hardware's capacity. On average, the PassMark CPU Mark result is 7,100, which is roughly 18 ECU.

Terremark uses the ESX hypervisor's default policy for scheduling CPU, i.e., a core shares the CPU equally with another core regardless of how much memory the VM has. This is different from GoGrid and Rackspace, where the CPU is shared proportionally to the amount of RAM a VM has. The scheduling policy can be verified by reading the GuestSDK API exposed by VMware Tools. By reading the API, we know that a VM not only has no minimum guaranteed CPU, but it also has no maximum burst limit. Each virtual core of a VM is assigned a CPU share of 1,000, regardless of the memory it is allocated. Thus, the more cores a VM has, the more shares of the CPU it gets (e.g., a 1 VPU VM has 1,000 shares, and an 8 VPU VM has 8,000 shares).

It is difficult to determine how many VMs could be on a physical host, which determines the minimum guaranteed CPU. We are told in their forum that each physical host has 128GB of memory, which can accommodate at least 8 VMs, each with, for example, 8 VPU+16GB RAM (the largest configuration). The VMware ESX hypervisor allows over-committing memory, so in theory there could be many more VMs on a host. When we launched a vanilla 512MB VM, we learned from the Guest API that our VM only occupied 148MB of RAM. Clearly, there is lots of room to over-commit, even though we see no evidence that they are doing so. Assuming there is no over-commitment, there still could be a lot of VMs competing for the CPU. In the worst case, all VMs on the host have 512MB RAM and 8 VPU, which consumes the least memory but gains the maximum CPU weight. A physical host can hold 256 such VMs, leaving a negligible CPU share for each VM. If a VM has only one core, it owns only a 1/(8*256) share of the CPU, and an 8 VPU (8 virtual cores) VM owns only a 1/256 share of the CPU.

Following what we did to get EC2's unit cost, we can run a regression analysis to estimate Terremark's unit cost. We assume Cost = c * CPU + m * RAM (Terremark charges storage separately from the VM cost, at $0.25/GB/month). The regression determines the unit costs to be

c = 2.06 cents/VPU/hour
m = 6.46 cents/GB/hour

The regression result does not fit the real cost very well. The following table shows both the original cost and the cost predicted by the estimated parameters (written as actual / predicted) for the various VM configurations.

memory (GB)\CPU 1 VPU 2 VPU 4 VPU 8 VPU
0.5 3.5 / 5.29 4 / 7.36 4.5 / 11.48 5 / 19.72
1 6 / 8.53 7 / 10.6 8 / 14.7 10 / 23
1.5 9 / 11.8 10.5 / 13.8 12 / 17.9 13.6 / 26.2
2 12 / 15 14.1 / 17 16.1 / 21.2 20.1 / 29.4
4 24.1 / 27.9 28.1 / 30 30.1 / 34.1 40.2 / 42.4
8 40.2 / 53.8 48.2 / 55.8 60.2 / 60 80.3 / 68.2
12 60.2 / 79.6 72.3 / 81.7 90.3 / 85.8 120.5 / 94.1
16 80.3 / 105.5 96.4 / 107.5 120.5 / 111.7 160.6 / 112

The reason the regression analysis does not work well here is that Terremark heavily discounts both CPU and RAM as you move up in the configuration. Our linear model does not capture this economy of scale very well. However, we can think of the linear regression as a trend line, and the trend line indicates that Terremark is likely more expensive than EC2. For example, Terremark charges 6.46 cents/GB/hour for RAM, which is much higher than the 2.01 cents/GB/hour Amazon values its RAM at.

Another way to compare cost is to use EC2's unit cost to figure out what an equivalent configuration would cost in EC2. The following table shows the cost comparison where we assume you only get the minimum CPU, in the worst case where all other VMs are busy and the physical host is fully loaded with 8VPU+0.5GB VMs (without over-commitment). Each row shows the RAM and CPU configuration, Terremark's price, what it would cost in EC2, and the ratio between the Terremark and EC2 cost.

memory (GB) VPU Terremark price (cents/hour) Equivalent EC2 cost (cents/hour) Terremark cost/EC2 cost
0.5 1 3.5 1.02 3.44
0.5 2 4 1.03 3.89
0.5 4 4.5 1.05 4.27
0.5 8 5 1.10 4.54
1 1 6 2.02 2.97
1 2 7 2.03 3.44
1 4 8 2.06 3.89
1 8 10 2.11 4.75
1.5 1 9 3.03 2.97
1.5 2 10.5 3.04 3.45
1.5 4 12 3.06 3.92
1.5 8 13.6 3.11 4.37
2 1 12 4.03 2.98
2 2 14.1 4.05 3.49
2 4 16.1 4.07 3.96
2 8 20.1 4.12 4.88
4 1 24.1 8.06 2.99
4 2 28.1 8.07 3.48
4 4 30.1 8.09 3.72
4 8 40.2 8.14 4.94
8 1 40.2 16.1 2.5
8 2 48.2 16.11 2.99
8 4 60.2 16.13 3.73
8 8 80.3 16.18 4.96
12 1 60.2 24.14 2.49
12 2 72.3 24.15 2.99
12 4 90.3 24.18 3.73
12 8 120.5 24.23 4.97
16 1 80.3 32.18 2.49
16 2 96.4 32.2 2.99
16 4 120.5 32.22 3.74
16 8 160.6 32.27 4.98

The table shows that Terremark is 2.49 to 4.98 times more expensive than an equivalent in EC2. This is mainly due to the way Terremark shares CPUs. A 0.5GB VM in Terremark shares the CPU equally with a 16GB VM; thus, in the worst case, a VM may get very little CPU. Since Terremark does not set a minimum guarantee on the CPU share in the hypervisor, we have to assume the worst case.

In reality, you are unlikely to encounter the worst case, and you are very likely to get the full attention of a physical core. The reason is not only that the majority of VMs have more than 0.5GB of RAM (so fewer of them can be packed onto a host), but also that Terremark uses VMware's DRS (Distributed Resource Scheduler). We have noticed that, when we drive up the load on our VMs, they are often moved (through VMotion) to a different host, presumably to avoid contention. Thus, unless the whole cluster gets really busy, it is unlikely that your VM would have a lot of other busy VMs to contend with on the same host. The following table shows the EC2 equivalent cost assuming a virtual core can get the full power of a physical core.

memory (GB) VPU Terremark price (cents/hour) Equivalent EC2 cost (cents/hour) Terremark cost/EC2 cost
0.5 1 3.5 4.09 0.86
0.5 2 4 7.17 0.56
0.5 4 4.5 13.33 0.34
0.5 8 5 25.65 0.19
1 1 6 5.09 1.18
1 2 7 8.17 0.86
1 4 8 14.33 0.56
1 8 10 26.66 0.38
1.5 1 9 6.1 1.48
1.5 2 10.5 9.18 1.14
1.5 4 12 15.34 0.78
1.5 8 13.6 27.66 0.49
2 1 12 7.1 1.69
2 2 14.1 10.18 1.38
2 4 16.1 16.35 0.98
2 8 20.1 28.67 0.7
4 1 24.1 11.12 2.17
4 2 28.1 14.21 1.98
4 4 30.1 20.37 1.48
4 8 40.2 32.69 1.23
8 1 40.2 19.17 2.1
8 2 48.2 22.25 2.17
8 4 60.2 28.41 2.12
8 8 80.3 40.73 1.97
12 1 60.2 27.21 2.21
12 2 72.3 30.29 2.39
12 4 90.3 36.45 2.48
12 8 120.5 48.78 2.47
16 1 80.3 35.25 2.28
16 2 96.4 38.33 2.51
16 4 120.5 44.5 2.71
16 8 160.6 56.82 2.83

There are several configurations where Terremark is much cheaper than EC2. The 8VPU+0.5GB configuration is the cheapest, at 19% of the equivalent EC2 cost. This is due to two reasons. First, with 8 VPUs the VM carries more scheduling weight, so it can compete for the full power of the physical host. Second, its RAM is the smallest. As we have seen, Terremark values RAM more than EC2 does (m = 6.46 cents/GB/hour vs. EC2's m = 2.01 cents/GB/hour), so the less RAM a configuration has, the lower its relative cost. The cost savings go away as you add more RAM and more CPU to the configuration.

Rackspace cost comparison with Amazon EC2

(Earlier posts in this series are: EC2 cost break down, GoGrid & EC2 cost comparison)

We looked at Amazon EC2 and GoGrid costs earlier. Let us examine another IaaS provider — the Rackspace cloud. The first step, again, is to unify on a common unit of measurement for CPU power. Using the same methodology as we used for EC2's hardware analysis, we determine that Rackspace runs on a platform with two sockets of Quad-Core AMD Opteron 2374 HE processors. According to PassMark-CPU Mark results, this platform has a CPU Mark score of 4,642, which is roughly 12 ECU. Rackspace cloud's FAQ states that "For Linux distributions, each Cloud Server is assigned four virtual cores and the amount of CPU cycles allocated to these cores is weighted based on the size of the Cloud Server." From talking to Rackspace support, we know that each physical host has 32GB of RAM, and it can host at most two 16GB (15.5GB to be precise) VMs. Therefore, a 16GB VM would own the complete 4 cores it is allocated, i.e., the 16GB VM has a guaranteed capacity of half of the platform, which is 6 ECU. Since Rackspace states that the CPU is proportionally shared based on the RAM, we can derive the minimum guaranteed CPU based on how many other VMs could fit on the same physical host (a short sketch of this derivation follows the table below). The following table lists the minimum CPU and the maximum CPU (assuming full bursting when all other VMs are idle). Again, we are only concerned with Linux VMs, as they do not include license costs, so they more accurately represent the true hardware cost.

RAM (GB) Storage (GB) Min CPU (ECU) Max CPU (ECU) Cost (cents/hour)
0.256 10 0.09375 6 1.5
0.512 20 0.1875 6 3
1 40 0.375 6 6
2 80 0.75 6 12
4 160 1.5 6 24
8 320 3 6 48
16 620 6 6 96
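
The minimum-CPU column above follows directly from proportional sharing: a 16GB VM owns half the host (6 ECU), and a smaller VM's guaranteed share scales with its fraction of 16GB. A minimal sketch, using the nominal RAM sizes (0.25GB, 0.5GB, and so on):

```python
# Minimum guaranteed CPU under proportional-by-RAM sharing: the 16GB VM gets
# half the host (6 ECU); a smaller VM is guaranteed 6 ECU * (its RAM / 16GB).
# Maximum CPU is always the full 4 cores (6 ECU) when every other VM is idle.
HOST_HALF_ECU = 6.0

def rackspace_cpu(ram_gb):
    min_ecu = HOST_HALF_ECU * ram_gb / 16.0
    max_ecu = HOST_HALF_ECU
    return min_ecu, max_ecu

for ram in [0.25, 0.5, 1, 2, 4, 8, 16]:
    print(ram, rackspace_cpu(ram))   # reproduces the Min/Max CPU columns above
```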

Similar to GoGrid, Rackspace only charges based on the RAM, so it is not possible to determine how it values each component (i.e., CPU, RAM and storage) separately, as we have done for EC2. However, it is possible to project what a similar configuration would cost in EC2 using the unit cost we have derived from the EC2 cost breakdown. The results are shown in the following table where we assume a VM only gets its minimum guaranteed CPU. Each row corresponds to one VM configuration, which is denoted by its RAM size in the first column. We also show the ratio between the Rackspace cost and the projected equivalent EC2 cost.

RAM (GB) Rackspace cost (cents/hour) Equivalent EC2 cost (cents/hour) Rackspace cost/EC2 cost
0.256 1.5 0.8 1.87
0.512 3 1.6 1.87
1 6 3.16 1.9
2 12 6.32 1.9
4 24 12.6 1.9
8 48 25.3 1.9
16 96 50.2 1.91

Since a Rackspace VM can burst if other VMs on the same host are idle, it could potentially grab a much larger share of the CPU. The following table shows the cost comparison assuming that the VM bursts to its fullest extent.

RAM (GB) Rackspace cost (cents/hour) Equivalent EC2 cost (cents/hour) Rackspace cost/EC2 cost
0.256 1.5 8.89 0.17
0.512 3 9.56 0.31
1 6 10.86 0.55
2 12 13.5 0.89
4 24 18.8 1.28
8 48 29.4 1.63
16 96 50.2 1.91

If your VM only gets the minimum guaranteed CPU, Rackspace is about 1.9 times more expensive than an equivalent in EC2. However, in our experience, we can frequently grab a much larger share of the CPU. Assuming you can grab the full 4 cores, the 256MB, 512MB, 1GB, and 2GB VMs are a great bargain, at 17%, 31%, 55%, and 89% of the equivalent EC2 cost, respectively.

GoGrid cost comparison with Amazon EC2

(Updated 1/30/2011 to include our own PassMark benchmark results and GoGrid's prepaid plan. Updated again 2/1/2011 to include a cost/ECU comparison and clarifications.)

(Other posts in the series are: EC2 cost break down, Rackspace & EC2 cost comparison, Terremark and EC2 cost comparison).

Continuing our series on cost comparisons between IaaS cloud providers, we will look at GoGrid's cost structure in this post. It is easy to compare RAM and storage apples-to-apples because all cloud providers standardize on the same unit, e.g., GB. To have a meaningful comparison on CPU, we must similarly standardize on a common unit of measurement. Unfortunately, the cloud providers do not make this easy, so we have to do the conversion ourselves.

Because Amazon is a popular cloud provider, we decided to standardize on its unit of measurement — the ECU (Elastic Compute Unit). In our EC2 hardware analysis, we concluded that an ECU is equivalent to a PassMark-CPU Mark score of roughly 400. We have run the benchmark in Amazon's N. Virginia data center on several types of instances to verify experimentally that the CPU Mark score does scale linearly with the instance's advertised ECU rating.

All we need to do now is figure out GoGrid's PassMark-CPU Mark number. This is easy to do if we know the underlying hardware. Following the same methodology we used for the EC2 hardware analysis, we find that the GoGrid infrastructure consists of two types of hardware platforms: one with dual-socket Intel E5520 processors, the other with dual-socket Intel X5650 processors. According to PassMark-CPU Mark results, we know the dual-socket E5520 has a score of 9,174 and the dual-socket X5650 has a score of 15,071. GoGrid enables hyperthreading, so the dual-socket E5520 platform has 16 logical cores, and the dual-socket X5650 platform has 24 logical cores. Hyperthreading does not really double the performance, because there is still only one physical core behind each pair of hardware threads.

Instead of relying on PassMark's reported results, we also ran the benchmark ourselves to get a true measure of performance. We ran the benchmark several times late at night to make sure that the result is stable and that we are getting the maximum CPU allowed by bursting. The PassMark benchmark only runs on Windows, and in Windows we can only see up to 8 cores. As a result, the 8GB (8 cores) and 16GB (8 cores) VMs both return a CPU Mark result of roughly 7,850, which is about 19.5 ECU. The 4GB (4 cores) VM returns a CPU Mark result of roughly 3,800, which is about 9.6 ECU. And the 2GB (2 cores) VM returns a CPU Mark of roughly 1,900, which is about 4.8 ECU. Since there is no 1GB (1 core) or 0.5GB (1 core) Windows VM, we project their maximum CPU power to be half of a 2-core VM, at 2.4 ECU. Lastly, since we cannot measure the 16-core performance, we use the reported E5520 benchmark result of 9,174 from PassMark as its maximum, which is roughly 23 ECU. These numbers determine the maximum CPU when bursting fully. Based on GoGrid's VM configurations, we can then determine the minimum guaranteed CPU from the maximum CPU.

The translation from GoGrid's CPU allocation to an equivalent ECU is shown in the following table. Each row of the table corresponds to one GoGrid VM configuration, where we list the amount of CPU, RAM and storage in each configuration. We also list GoGrid's current pay-as-you-go VM price in the last column for reference.

Min CPU (cores) Min CPU (ECU) Max CPU (cores) Max CPU (ECU) RAM (GB) Storage (GB) pay-as-you-go Cost (cents/hour)
0.5 1.2 1 2.4 0.5 25 9.5
1 2.4 1 2.4 1 50 19
1 2.4 2 4.8 2 100 38
3 7.2 4 9.6 4 200 76
6 14.4 8 19.2 8 400 152
8 19.2 16 23 16 800 304

One way to compare GoGrid and EC2 is to look purely at the cost per ECU. The following table shows the cost/ECU for GoGrid VMs, assuming all of them get the maximum possible CPU (a short sketch reproducing these numbers follows the table). We list two cost/ECU results: one based on the pay-as-you-go price of $0.19/GB-RAM/hour, the other based on the Enterprise Cloud prepaid plan price of $0.05/GB-RAM/hour.

RAM (GB) Max CPU (ECU) pay-as-you-go cost/ECU (cents/ECU/hour) prepaid cost/ECU (cents/ECU/hour)
0.5 2.4 3.96 1.04
1 2.4 7.91 2.08
2 4.8 7.91 2.08
4 9.6 7.91 2.08
8 19.2 7.91 2.08
16 23 13.2 3.48
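
Up to rounding, the cost/ECU figures above can be reproduced directly from the per-GB-RAM-hour prices and the maximum ECU ratings:

```python
# Reproduce the GoGrid cost/ECU table: the hourly price is RAM (GB) times the
# per-GB-RAM-hour rate; divide by the maximum ECU the VM can burst to.
PAY_AS_YOU_GO = 19.0   # cents per GB-RAM per hour
PREPAID = 5.0          # cents per GB-RAM per hour (Enterprise Cloud prepaid plan)

max_ecu = {0.5: 2.4, 1: 2.4, 2: 4.8, 4: 9.6, 8: 19.2, 16: 23}

for ram, ecu in max_ecu.items():
    payg = ram * PAY_AS_YOU_GO / ecu
    prepaid = ram * PREPAID / ecu
    print(f"{ram:>4} GB: {payg:.2f} / {prepaid:.2f} cents per ECU-hour")
```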

In comparison, the following table shows EC2 cost/ECU for the nine different types of instances in the N. Virginia data center.

instance CPU (ECU) RAM (GB) cost/ECU (cents/ECU/hour)
m1.small 1 1.7 8.5
m1.large 4 7.5 8.5
m1.xlarge 8 15 8.5
t1.micro 0.35 0.613 5.71
m2.xlarge 6.5 17.1 7.69
m2.2xlarge 13 34.2 7.69
m2.4xlarge 26 68.4 7.69
c1.medium 5 1.7 3.4
c1.xlarge 20 7 3.4

Comparing on cost/ECU only makes sense when your application is CPU bound, i.e., your memory requirement is always less than what the instance gives you.

Here, we propose a different way: comparing them by taking the CPU, RAM, and storage allocations into account altogether. Ideally, if we could derive the unit cost of each, we could compare straightforwardly. Unfortunately, since GoGrid charges purely based on RAM hours, it is not possible to figure out how it values CPU, RAM, and storage separately, as we have done for Amazon EC2. If we run a regression analysis, the result will simply show that CPU and storage cost nothing and RAM bears all the cost.

Since we cannot compare the unit costs, we propose a different approach. Basically, we take one VM configuration from GoGrid and try to figure out what a hypothetical instance with the exact same specification would cost in EC2 if Amazon were to offer it. We can project what EC2 would charge for such a hypothetical instance because we know EC2's unit costs from our EC2 cost break down.

The following table shows what a VM will cost in EC2 if the same configuration is offered there, assuming we only get the minimum guaranteed CPU. Each row of the table corresponds to one GoGrid VM configuration, where we only list the RAM size for that configuration (see the previous table for a configuration’s CPU and storage size). We also show the ratio between the GoGrid pay-as-you-go price and the projected EC2 cost.

RAM (GB) GoGrid pay-as-you-go cost (cents/hour) Equivalent EC2 cost (cents/hour) GoGrid cost/hypothetical EC2 cost
0.5 9.5 3.05 3.12
1 19 6.09 3.12
2 38 8.9 4.27
4 76 21.1 3.6
8 152 42.2 3.6
16 304 71.2 4.27

Unlike EC2, other cloud providers, including GoGrid, all allow a VM to burst beyond its minimum guaranteed capacity if there are free cycles available. The following table compares the cost under the optimistic scenario where you get the maximum CPU possible.

RAM (GB) GoGrid pay-as-you-go cost (cents/hour) Equivalent EC2 cost (cents/hour) GoGrid cost/EC2 cost
0.5 9.5 4.69 2.03
1 19 6.1 3.12
2 38 12.2 3.12
4 76 24.4 3.12
8 152 48.7 3.12
16 304 76.4 3.98

As Paul from GoGrid pointed out, GoGrid also offers prepaid plans that are significantly cheaper than the pay-as-you-go plan. This is different from Amazon's reserved instances, where you get a discount if you pay an up-front fee. Although cheaper, Amazon's reserved instance pricing only applies to the one instance you reserved, so when you need to scale dynamically, you cannot benefit from the lower price. GoGrid's prepaid plan allows you to use the discount on any instance. To see the benefit of buying in bulk, we also compare EC2 cost with GoGrid's Enterprise Cloud prepaid plan, which costs $9,999 a month but entitles you to 200,000 RAM-hours, i.e., $0.05/GB-RAM/hour. For brevity, we do not compare with the other prepaid plans, which you can easily do yourself following our methodology.

The following table shows what a VM will cost in EC2 if the same configuration is offered there, assuming we only get the minimum guaranteed CPU.

RAM (GB) GoGrid Enterprise cloud pre-paid cost (cents/hour) Equivalent EC2 cost (cents/hour) GoGrid cost/EC2 cost
0.5 2.5 3.05 0.82
1 5 6.09 0.82
2 10 8.9 1.12
4 20 21.1 0.95
8 40 42.2 0.95
16 80 71.2 1.12

The following table compares the cost under the optimistic scenario where you get the maximum CPU possible.

RAM (GB) GoGrid enterprise cloud pre-paid cost (cents/hour) Equivalent EC2 cost (cents/hour) GoGrid cost/EC2 cost
0.5 2.5 4.69 0.53
1 5 6.1 0.82
2 10 12.2 0.82
4 20 24.4 0.82
8 40 48.7 0.82
16 80 76.4 1.05

Under GoGrid’s pay-as-you-go plan, we can see that GoGrid is 2 to 4 times more expensive than a hypothetical instance in EC2 with an exact same specification. However, if you can buy bulk, the cost is significantly lower. The smaller 0.5GB server could be as cheap as 53% of the cost of an equivalent EC2 instance.
