Overview
There are many public cloud vendors, but three control most of the market: Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). This article looks at the instance types offered by GCP.
An instance in cloud computing terminology is anything hosted on a cloud service that consumes compute resources, which are generally CPU, memory, and storage. For example, an instance could be a virtual machine (VM), database, or other resource-consuming entity.
As shown in the table below, GCP classifies instances into four main families; most public clouds will adopt a similar categorization scheme for their instance types, albeit using slightly different terminology.
Instance Category | Instance Type | Description |
---|---|---|
General-Purpose | Cost-optimized: E2; Balanced: N1, N2, N2D; Scale-out optimized: Tau T2D, Tau T2A | These instance types are used when there are no specific requirements and a balance of compute and memory resources is required. They offer the best price-to-performance ratio for a variety of workloads. |
Compute-Optimized | C2, C2D | These instances offer more CPU power and better performance, with a choice of sizing and processing technologies. |
Memory-Optimized | M1, M2 | These instances are used when more memory is needed. They provide the most affordable price per GB of memory of all instance types. |
Accelerator-Optimized | A2 | This type attaches GPUs to instances. These instances are typically very expensive and are used only for workloads that require GPU processing. |
Comparison of GCP Instance Types
Let’s start with a summary before we delve deeper into explaining the differences between the instance types. The table below shows which processor type instances use, their maximum memory, internal network speeds, and local SSD and GPU support.
Instance Type | Processor Types | Maximum vCPUs | Maximum Memory | Local SSD | Maximum Network Speed (Gbps) | GPU |
---|---|---|---|---|---|---|
E2 | Intel and AMD EPYC | 32 | 128 GB | No | 16 | No |
N2 | Intel Ice Lake and Cascade Lake | 128 | 864 GB | Yes | 100 | No |
Tau T2D | AMD EPYC Milan | 60 | 240 GB | No | 32 | No |
C2 | Intel Cascade Lake | 60 | 240 GB | Yes | 100 | No |
C2D | AMD EPYC Milan | 112 | 896 GB | Yes | 100 | No |
M1 | Intel Broadwell | 160 | 4 TB | Yes | 32 | No |
M2 | Intel Cascade Lake | 416 | 12 TB | No | 32 | No |
A2 | Intel Cascade Lake | 96 | 1.5 TB | Yes | 100 | Yes |
General-Purpose Instances
The general-purpose series is the best option for web servers and databases and is a safe bet if you’re unsure which family to choose. The general-purpose type also allows for custom sizing: if the predefined types don’t meet your requirements, you can customize an instance with any combination of vCPU count and memory size.
The general-purpose family comprises the following machine series:
- E2 and E2 Shared-Core (Cost-Optimized)
- N1, N1 Shared-Core, N2, N2D, and Tau T2D
E2 and E2 Shared-Core (Cost-Optimized)
The E2 and E2 shared-core cost-optimized instances offer the best performance-to-cost ratios of all the general-purpose options. They provide a good balance of resources, are suitable for most workloads, and are the most common choice unless a specific use case dictates otherwise.
E2 Series
The E2 series offers 2 to 32 vCPUs with 0.5 GB to 8 GB of memory per virtual CPU. This series is further broken down into three subtypes based on the amount of memory per vCPU:
- E2 Standard: 4 GB of memory per vCPU
- E2 High-Memory: 8 GB of memory per vCPU
- E2 High-CPU: 1 GB of memory per vCPU
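The subtypes above differ only in their memory-per-vCPU ratio, so total memory follows directly from the subtype and vCPU count. As a minimal sketch (a hypothetical helper, not a GCP API):

```python
# Memory-per-vCPU ratios for the three E2 subtypes described above.
E2_GB_PER_VCPU = {"standard": 4, "highmem": 8, "highcpu": 1}

def e2_total_memory_gb(subtype: str, vcpus: int) -> int:
    """Return total memory in GB for an E2 subtype and vCPU count."""
    if not 2 <= vcpus <= 32:
        raise ValueError("E2 predefined types offer 2 to 32 vCPUs")
    return E2_GB_PER_VCPU[subtype] * vcpus

print(e2_total_memory_gb("standard", 8))   # e2-standard-8 -> 32 GB
print(e2_total_memory_gb("highmem", 16))   # e2-highmem-16 -> 128 GB
```

The same ratio arithmetic applies to the N2, N2D, and C2D subtypes discussed later, with their own per-vCPU figures.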
E2 Shared-Core Series
Shared-core offerings work on the principle of bursting, which is when a VM uses over 100% of its allocated compute resources, i.e., more than its given nominal maximum CPU cycles. It can do so for a few seconds before returning to its limit. Bursting works in the form of credits accumulated over time when the VM is not using its total CPU allocation. These earned credits are then used during the bursting period.
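The credit mechanism can be illustrated with a toy model (a simplification for intuition, not GCP's actual accounting): running below the baseline earns credits, and demand above the baseline is served only while credits last.

```python
def simulate_burst(usage, baseline=1.0, start_credits=0.0, max_credits=60.0):
    """Toy model of shared-core bursting. Each tick, usage below the
    baseline earns credits; demand above it spends them, and is capped
    once credits run out."""
    credits = start_credits
    served = []
    for demand in usage:
        if demand <= baseline:
            credits = min(max_credits, credits + (baseline - demand))
            served.append(demand)
        else:
            burst = min(demand - baseline, credits)
            credits -= burst
            served.append(baseline + burst)
    return served, credits

# Idle for 4 ticks (earning 4 credits), then demand 2x baseline for 3 ticks:
# the VM can fully burst while credits remain, spending 1 credit per tick.
served, remaining = simulate_burst([0, 0, 0, 0, 2, 2, 2])
print(served, remaining)
```

Note how a longer burst than the accumulated credits allow would be throttled back to the baseline.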
E2 shared-core offers 0.25 to 1 vCPU and 0.5 GB to 8 GB of memory.
Limitations
The E2 series offers no sustained-use discounts because its on-demand pricing is already low. E2 machines do not support virtual GPUs, local SSDs, or nested virtualization, which is where a virtualization host or hypervisor runs inside a virtual machine; for example, running a Hyper-V host within an E2 instance is not supported.
Summary and Comparison of E2 and E2 Shared-Core
The table below summarizes the available E2 and E2 shared-core compute resources.
Features | E2 | E2 Shared-Core |
---|---|---|
Up to 32 vCPUs and 128 GB of total memory | Yes | Yes |
Custom sizing | Yes | Yes |
Lowest on-demand pricing of all general machine types | Yes | Yes |
Bursting | No | Yes |
Sustained-use discounts | No | No |
Low on-demand and committed use pricing | Yes | Yes |
Virtual GPU, SSD, and nested virtualization | No | No |
N2 Series
The N2 series offers flexible machine sizing with 2 to 128 vCPUs and 0.5 GB to 8 GB of memory per vCPU. Unlike the E2 series, which offers both AMD and Intel CPUs, the N2 series only uses Intel.
N2 uses Cascade Lake processors as the default for instances with up to 80 vCPU. For VMs larger than 80 vCPU, Ice Lake processors are used. To use Ice Lake processors, regardless of the size, you must set a minimum CPU platform (a CPU generation).
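The default-selection rule above can be sketched as a small function (an illustration of the rule as described, not a GCP API):

```python
from typing import Optional

def n2_default_platform(vcpus: int, min_cpu_platform: Optional[str] = None) -> str:
    """N2 defaults to Cascade Lake for up to 80 vCPUs and Ice Lake above
    that, unless a minimum CPU platform is explicitly set."""
    if min_cpu_platform is not None:
        return min_cpu_platform  # an explicit minimum platform always wins
    return "Intel Cascade Lake" if vcpus <= 80 else "Intel Ice Lake"

print(n2_default_platform(64))                    # Intel Cascade Lake
print(n2_default_platform(96))                    # Intel Ice Lake
print(n2_default_platform(32, "Intel Ice Lake"))  # Intel Ice Lake
```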
Ice Lake should be used only when your workload explicitly requires it: these processors are region-dependent, so they may be unavailable after a region migration or during disaster recovery if the DR availability zone does not support them.
N2 machine types are further divided into three types:
- N2 Standard: 4 GB of memory per vCPU
- N2 High-Memory: 8 GB of memory per vCPU
- N2 High-CPU: 1 GB of memory per vCPU
N2D Series
The N2D series is available with AMD processors only: second-generation EPYC Rome processors are used by default, while third-generation EPYC Milan is only available in specific regions and requires a minimum CPU platform to be configured.
N2D offers the biggest resource pool of any general-purpose series, with up to 224 vCPUs and 896 GB of memory. It offers three different processor-to-memory ratios:
- N2D Standard: 4 GB of memory per vCPU
- N2D High-Memory: 8 GB of memory per vCPU
- N2D High-CPU: 1 GB of memory per vCPU
Tau T2D Series
The Tau T2D series runs exclusively on AMD EPYC Milan processors, which are available only in specific regions, and custom sizing is not supported. Simultaneous multithreading (SMT) is disabled, so each vCPU is equal to an entire physical core. Tau T2D instances offer up to 60 vCPU with 4 GB of memory per vCPU.
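The practical effect of disabling SMT is that a Tau T2D vCPU buys a whole physical core rather than a hardware thread. A minimal sketch of that distinction (illustrative only):

```python
def physical_cores(vcpus: int, smt_enabled: bool = True) -> int:
    """With SMT enabled (most series), a vCPU is one hardware thread,
    so two vCPUs share a physical core. Tau T2D disables SMT, so each
    vCPU maps to an entire physical core."""
    return vcpus // 2 if smt_enabled else vcpus

print(physical_cores(60, smt_enabled=True))   # 30 cores on an SMT series
print(physical_cores(60, smt_enabled=False))  # 60 cores on Tau T2D
```

This is why Tau T2D is pitched at scale-out workloads: per-vCPU throughput is higher when cores are not shared between threads.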
Compute-Optimized Instances
Compute-optimized machine types are designed for CPU-intensive workloads. They offer features like non-uniform memory access (NUMA) to provide reliable, consistent performance and, consequently, are some of the most expensive instance types. GCP offers two compute-optimized machine types: the C2 and the C2D series.
C2 Series
The C2 series provides access to the underlying server platform, letting you fine-tune performance and offering considerably more computing power than the general purpose family.
C2 machines offer 4 to 60 vCPUs with total memory of up to 240 GB (4 GB of memory per vCPU). They also provide high bandwidth (50 Gbps and 100 Gbps internal traffic rates), making them particularly useful for applications that need to access large quantities of data without bottlenecking.
The C2 series uses Intel’s 3.9 GHz Cascade Lake processor, which offers the highest performance per virtual core. These instances are especially valuable for single-threaded workloads that use Intel’s AVX-512 instructions, whose registers are twice as wide as their predecessors’, roughly doubling floating-point operations per second (FLOPS).
C2D Series
The C2D series takes things up a notch from the C2 by offering GCP’s largest compute-optimized platform: up to 112 vCPUs per instance and up to 896 GB of memory. C2D offers the greatest amount of last-level cache (LLC) per core, making it perfect for high-performance computing, gaming servers, or scientific modeling.
C2D is available in three compute/memory configurations:
- C2D Standard: 4 GB of memory per vCPU
- C2D High-Memory: 8 GB of memory per vCPU
- C2D High-CPU: 2 GB of memory per vCPU
Limitations
Both C2 and C2D have the following limitations:
- No support for regional persistent disks
- No support for vGPU
- Only available in specific zones where the processor type is supported
Memory-Optimized Instances
Memory-optimized instances are used for workloads with aggressive memory demands where a general-purpose instance would be less than ideal, such as running a large enterprise database. They offer the lowest cost per GB of memory of any instance type, which also makes them a sensible replacement for general-purpose instances that need more memory than computing power, such as small databases or file servers.
GCP offers two memory-optimized series. The older M1 machine series provides 14 to 24 GB of memory per vCPU (a total of 4 TB per instance). The newer M2 machines can have up to 12 TB of memory per instance, perfect for running large enterprise databases like SAP HANA.
Limitations
Memory-optimized machines have the following limitations:
- No regional persistent disks are available with the M2 series
- M2 is only available in selected zones and on specific CPUs
- No support for virtual graphics
Accelerator-Optimized Instances
GCP uses NVIDIA’s Ampere A100 GPU for its A2 machine series, which it calls “accelerator-optimized.” Each A2 machine type comes with a fixed number of GPUs, doubling with each step up in size. Options start at 12 vCPUs and go up to 96 vCPUs, and up to 257 TB of attached storage is supported.
Limitations
Graphics-optimized machines have the following limitations:
- No persistent regional disk available
- Only support for Intel’s Cascade Lake CPU
- Availability only in certain regions
Cost Factors
The general-purpose family is the way to go if you don’t have any special requirements. It offers the best balance of CPU, memory, and storage per dollar.
E2 and E2 shared-core are best for running applications with low CPU and memory demands. A typical example would be running proxy servers that service minimal requests, a small domain controller, or a jump server.
All instance types are billed while running, so an excellent way to reduce costs is to configure a scheduler that turns off nonessential or unused instances. However, this benefits shared-core machines less: bursting credits accumulate while the VM runs below its maximum, and turning off a shared-core VM forfeits any earned credits.
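The scheduling decision itself is simple to express. Below is a hedged sketch of the selection logic only (names and schedule are hypothetical; an actual implementation would call the Compute Engine API to stop the chosen instances):

```python
def instances_to_stop(instances, hour, work_start=8, work_end=19):
    """Given (name, always_on) pairs and the current hour (0-23), return
    the names of nonessential instances to stop outside working hours."""
    in_work_hours = work_start <= hour < work_end
    if in_work_hours:
        return []
    return [name for name, always_on in instances if not always_on]

# Hypothetical fleet: only prod-db is flagged as always-on.
fleet = [("jump-server", False), ("prod-db", True), ("test-vm", False)]
print(instances_to_stop(fleet, hour=22))  # ['jump-server', 'test-vm']
print(instances_to_stop(fleet, hour=10))  # []
```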
Consider using the N2 family if your E2 instances are underpowered. They are an excellent option for virtual desktop infrastructure (VDI) and offer the same balanced performance as most desktops or laptops. N2 is ideal for enterprises with several hundred (or more) virtual desktops.
Memory-optimized instances also offer great value for money at the lowest cost per GB of memory. These instances can replace a general-purpose server wherever additional memory is required at little cost overhead. A good example here is running a file server on a small memory-optimized instance rather than a mid-sized general-purpose one. The costs would be comparable, but the additional memory will help handle incoming file requests more efficiently.
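The comparison comes down to cost per GB of memory. The sketch below makes that arithmetic explicit; the hourly prices are made-up placeholders, not real GCP list prices:

```python
def cost_per_gb_hour(hourly_price: float, memory_gb: int) -> float:
    """Hourly price divided by memory: the lower, the cheaper the RAM."""
    return hourly_price / memory_gb

# Hypothetical figures: a mid-sized general-purpose instance with 16 GB
# versus a small memory-optimized instance with 64 GB at a similar price.
general = cost_per_gb_hour(0.20, 16)
memory_optimized = cost_per_gb_hour(0.25, 64)
print(general, memory_optimized)  # the memory-optimized RAM is ~4x cheaper per GB
```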
You can also use memory-optimized instances to run large enterprise databases since they offer high bandwidth alongside their increased memory. High-end memory-optimized machines are costly, though, and should only be used where necessary.
Compute-optimized instances are almost always used for specific purposes. Unlike memory-optimized machines, which can serve general applications, compute-optimized machines don’t offer good value for money unless used for compute-heavy workloads. Even at the low end, they are costly, providing no advantage for everyday needs.
All the options discussed above are somewhat interchangeable, with certain choices being better options for particular applications or use cases. This is not the case for accelerator-optimized instances, however. The A2 family is very expensive, so it is only ever used where GPU resources are required for the application (since these are not offered on other instance types).
Conclusion
Configured correctly, public clouds can offer significant savings compared to local infrastructure, but they can become very expensive if not appropriately sized. With so many options to choose from, matching the right type for your ever-changing workload can be a daunting task. The most practical approach is to use specialized third-party cost savings and right-sizing tools that use sophisticated algorithms to identify costly but underutilized instances and recommend more appropriate configurations.