- 10 May
5 Unique Features Of Google Compute Engine That No IaaS Provider Could Match
Google Compute Engine (GCE), the infrastructure service of Google Cloud Platform, is a late entrant in the market. Amazon EC2 was announced in 2006 while Microsoft added VMs to Azure in 2012. Google announced the general availability of GCE only in late 2013.
Despite being a laggard in the IaaS segment, GCE enjoys certain differentiating capabilities that the competition cannot match. Here are five features that showcase the engineering brilliance of Google.
1. Sustained Usage Discounts
Infrastructure utilization dictates the economies of scale in the IaaS business. In many ways, it is similar to the aviation industry, where the per-passenger operating cost falls as more seats are filled. Since IaaS providers invest in infrastructure capacity upfront, they need to ensure the best possible resource utilization. To encourage customers to run their workloads for a longer term, cloud providers offer infrastructure at a price lower than the on-demand price. Amazon EC2's Reserved Instances feature is an example of such a pricing scheme. Microsoft offers a 5 percent discount on Azure when a customer commits to a 12-month term.
In the initial days, the concept of reserved instances was simple to understand, but with growing demand and evolving usage patterns, Amazon made it complex and confusing. Customers now spend more time calculating reserved instance pricing than choosing the right EC2 instance type.
As the name implies, Google Compute Engine’s sustained usage discounts reward customers for their sustained usage of compute resources. Unlike EC2 or Azure VMs, the customer need not make a long-term commitment to enjoy reduced pricing. Even on-demand GCE VMs get the benefit of sustained usage discounts. All the customer needs to do is keep the VM running; at the end of the billing cycle, Google automatically applies the discount to the bill. The discount increases with usage, with customers getting up to a 30% net discount for instances that run the entire month.
Even if the customer doesn’t run the same VM for the whole month, Google treats multiple non-overlapping instances running in the same region and zone as one VM when applying the discount. Such instances are called inferred instances in GCE terminology.
This feature blurs the line between on-demand pricing and reserved pricing by offering a discounted price to all GCE customers. No other IaaS provider matches Google in offering sustained usage discounts. The closest comparable feature is EC2 CPU credits, which are available only for the T2 instance family with burstable performance.
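The tiered billing behind these discounts can be made concrete. The sketch below assumes the schedule Google published for sustained use discounts: each successive quarter of the month is billed at 100%, 80%, 60%, and 40% of the base rate, which works out to the 30% net discount for a VM that runs the full month. Function names and prices are illustrative.

```python
# Sketch of GCE sustained usage discount tiers (assumed schedule: each
# quarter of the month is billed at 100%, 80%, 60%, 40% of the base rate).

TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def discounted_cost(fraction_of_month: float, base_monthly_price: float) -> float:
    """Cost of running a VM for a fraction of the month under tiered billing."""
    remaining = min(max(fraction_of_month, 0.0), 1.0)
    cost = 0.0
    for tier_width, tier_rate in TIERS:
        used = min(remaining, tier_width)       # usage that falls in this tier
        cost += used * tier_rate * base_monthly_price
        remaining -= used
    return cost

# A full month costs 70% of the base price (30% net discount);
# half a month costs 45% instead of 50% (10% net discount).
print(discounted_cost(1.0, 100.0))   # -> 70.0
print(discounted_cost(0.5, 100.0))   # -> 45.0
```

Note that the discount is automatic: the customer never opts in, and the tier boundaries reset with each monthly billing cycle.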
2. Preemptible VMs
Hot on the heels of sustained usage discounts comes the preemptible VMs capability of GCE. It is designed for the same reason as sustained usage discounts – to drive maximum utilization of infrastructure.
Amazon launched Spot Instances in 2009 to enable customers to bid on unused EC2 capacity. The hourly price for a Spot instance is set by Amazon and fluctuates with the supply of and demand for Spot capacity. The model works like the spot market in the financial industry, where financial instruments or commodities are traded for immediate delivery. Spot instances may be terminated at any time when the Spot price rises above the customer's bid, so customers are expected to run only applications that can tolerate an abrupt shutdown.
According to Google, preemptible VMs are highly affordable, short-lived compute instances suitable for batch jobs and fault-tolerant workloads. They offer up to a 70 percent price reduction compared to regular VMs. If preemptible VMs are similar to Amazon EC2 Spot instances, what’s the difference?
Unlike EC2 Spot instances, customers need not bid for unused capacity, which removes the pain of complex bidding and gambling on fluctuating market prices. Any Google Compute Engine VM can be launched in preemptible mode. The VM may be terminated under two circumstances: the zone where the VM is deployed runs short of capacity, or the VM has been running for 24 hours. There are no restrictions on the type of VM that can be launched in preemptible mode. Short-lived or fault-tolerant workloads such as big data clusters, media transcoding, and web crawling are great candidates for preemptible VMs.
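In practice, launching in preemptible mode is a single flag on the standard create command. A minimal sketch, with the instance name, zone, and machine type as illustrative placeholders:

```shell
# Launch an ordinary GCE VM as a preemptible instance.
# "batch-worker-1", the zone, and the machine type are placeholders.
gcloud compute instances create batch-worker-1 \
    --zone us-central1-a \
    --machine-type n1-standard-4 \
    --preemptible
```

The VM is billed at the reduced preemptible rate from the moment it starts; since Compute Engine may shut it down at any point, the workload should checkpoint its progress so a replacement instance can resume it.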
While Amazon has EC2 Spot instances, Azure offers no comparable mechanism for launching VMs on spare capacity.
3. Custom VM Sizes
One of the biggest challenges in migrating workloads to the cloud is choosing the right instance types. Mapping on-premises resource configuration with virtual infrastructure exposed by cloud providers could be a daunting task.
As an early mover in this space, Amazon EC2 kept adding new instance types and families to its portfolio. With over 40 instance types classified into half a dozen families, the list is confusing and overwhelming, and Amazon has done very little to guide customers toward an ideal instance type.
Azure is no less complex when it comes to the choice of VM types, with an ever-growing list of more than 29 instance types spread across four categories.
The complexity of choosing the correct instance type adds significant cost to cloud migration and deployment. Once a customer buys a reserved instance on Amazon EC2, they cannot change the instance type; the only option is to resell the reserved instance on the marketplace.
Google Compute Engine offers custom VM sizes, where customers precisely choose the number of CPU cores and the amount of memory their workload requires. Depending on the zone where the VM is launched, customers can choose anywhere from 1 to 32 vCPU cores, with up to 6.5GB of RAM per vCPU. The VMs can run popular flavors of Linux or Windows operating systems.
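The custom shape is expressed directly on the create command. A sketch, with the instance name and zone as placeholders; the chosen memory must respect the 6.5GB-per-vCPU ceiling mentioned above:

```shell
# Create a custom-shaped VM with 6 vCPUs and 30GB of RAM.
# "custom-vm-1" and the zone are placeholders; 30GB is well under
# the 6 x 6.5GB = 39GB ceiling for a 6-vCPU custom machine.
gcloud compute instances create custom-vm-1 \
    --zone us-central1-a \
    --custom-cpu 6 \
    --custom-memory 30GB
```

This avoids the usual compromise of rounding up to the next predefined instance type and paying for cores or memory the workload never touches.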
Except for IBM SoftLayer, no other major cloud provider offers custom VM types. Combined with preemptible VMs and sustained usage discounts, this feature offers great value to customers.
4. Online Disk Resizing
Switching to virtual infrastructure has many benefits, including rapid scale-out and scale-in. While IaaS providers offer elastic scaling of compute resources, it is hard to emulate the same with block storage. Mature cloud providers offer SSD-backed persistent block storage devices that are independent of the VM. Data stored in block storage remains available even after a graceful or abrupt termination of the virtual machine.
Amazon has Elastic Block Storage (EBS) volumes that can be attached to EC2 instances and periodically backed up as point-in-time snapshots. When an EBS volume runs out of space, customers are expected to stop the running instance, detach the EBS volume, restore the latest snapshot to a new, larger volume, and reattach it to the instance before starting it again. This process involves downtime for the instance. Microsoft Azure is no different when it comes to resizing an attached disk.
Persistent disks in Google Compute Engine are the counterparts of Elastic Block Storage of Amazon EC2. They provide long-term, durable storage to VMs.
Google recently introduced online resizing of persistent disks with no downtime for the virtual machine. This feature avoids the cumbersome workflow of taking the VM offline, restoring a snapshot, and reattaching the disk. Of course, for I/O-intensive workloads, connections should be gracefully drained before initiating the resize. The customer can use either the portal or the command-line interface to resize a live disk. After expanding the disk, they follow the routine specific to the operating system to claim the newly available space.
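The two steps above can be sketched from the command line. The disk name, zone, size, and device path are illustrative placeholders; the filesystem-growing step assumes ext4 on an unpartitioned secondary disk:

```shell
# Step 1: grow the persistent disk while the VM keeps running.
# "data-disk-1", the zone, and the target size are placeholders.
gcloud compute disks resize data-disk-1 \
    --zone us-central1-a \
    --size 500GB

# Step 2: inside the VM, grow the filesystem to claim the new space.
# Assumes an ext4 filesystem directly on the unpartitioned disk /dev/sdb.
sudo resize2fs /dev/sdb
```

If the disk is partitioned, the partition itself must be extended before the filesystem; the exact routine depends on the guest operating system.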
Online resizing of disks is certainly one of the innovative features of Google Compute Engine that no other competitor could imitate.
5. Shared Storage
It’s common to find NAS or SAN appliances deployed in enterprise data centers, offering shared storage to applications and end users. The biggest gap between on-premises environments and the cloud comes in the form of shared storage: enterprise customers find it extremely hard to emulate NAS-like configurations in the public cloud, and configuring NFS or other shared file systems on VMs can negatively impact application performance.
Block storage devices like Amazon EBS or Azure Page Blobs can be attached to one and only one instance at a time. This seriously limits the functionality of the disks by confining them to just one VM. Though AWS and Azure offer shared file system services, they don’t match the performance of an SSD-backed storage device.
It is possible to attach one persistent disk to multiple running instances in Google Compute Engine. The only caveat is that the disk is then available to the VMs in read-only mode. Even read-only, this helps customers emulate many scenarios close to their on-premises deployments. For example, a persistent disk can be pre-loaded with large datasets that instantly become available to all the VMs. I have seen customers migrating large content management systems to Google Compute Engine make good use of this feature.
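Sharing a disk this way is just a matter of attaching it to each VM in read-only mode. A sketch, with instance and disk names as placeholders; the disk must not be attached read-write anywhere for the shared attachment to succeed:

```shell
# Attach one persistent disk to several running VMs in read-only mode.
# "web-1", "web-2", "shared-data", and the zone are placeholders.
gcloud compute instances attach-disk web-1 \
    --zone us-central1-a --disk shared-data --mode ro
gcloud compute instances attach-disk web-2 \
    --zone us-central1-a --disk shared-data --mode ro
```

Each VM then mounts the device read-only and sees the same pre-loaded dataset, much like a read-only NFS export on premises.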
For other scenarios where a read/write file system is required for multiple VMs, Google partnered with Red Hat to bring Gluster File System to Compute Engine. This emulates a shared file system on Google Cloud Platform.
Source: Forbes