Concepts

When designing architectures for businesses on AWS, it’s crucial to understand the different size and speed requirements for various services to ensure they align with business needs. AWS provides a wide range of resources and services that can be customized for performance, cost, and scalability. Here, we’ll look at how to select the right sizes and speeds for compute, storage, and networking resources.

Compute

For compute resources, AWS offers instances with varying CPU, memory, storage, and networking capacities. Amazon EC2 instances come in different families optimized for general purpose, compute-optimized, memory-optimized, accelerated computing, and storage-optimized tasks.

To meet different business requirements, you need to consider:

  • vCPU and Memory: Choose the number of vCPUs and amount of memory based on the application requirements. For example, a t3.medium instance with 2 vCPUs and 4 GiB of memory might suffice for a small web application, while a compute-heavy application might require a c5.9xlarge instance with 36 vCPUs and 72 GiB of memory.
  • Network Performance: Select an instance that provides the required network bandwidth. A c5n.large instance offers up to 25 Gbps of network bandwidth suitable for network-intensive applications.
  • Burst Capability: Some instances, like the T3 family, offer burstable performance for workloads with intermittent spikes.

Example Instance Types based on Workload:

| Workload Type | Example Instance Type | vCPUs | Memory (GiB) | Network Bandwidth |
|---|---|---|---|---|
| General Purpose | t3.medium | 2 | 4 | Up to 5 Gbps |
| Compute Optimized | c5.9xlarge | 36 | 72 | 10 Gbps |
| Memory Optimized | r5.large | 2 | 16 | Up to 10 Gbps |
| Storage Optimized | i3en.large | 2 | 16 | Up to 25 Gbps |
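As a rough sketch, the table above can be encoded as a lookup that returns the smallest listed instance meeting given vCPU and memory minimums. This is a hypothetical sizing helper, not an AWS API; real selection should also weigh network bandwidth, storage, and cost.

```python
# Example instance table from above, ordered roughly smallest to largest.
# (instance_type, vcpus, memory_gib, network)
INSTANCE_TABLE = [
    ("t3.medium", 2, 4, "Up to 5 Gbps"),
    ("r5.large", 2, 16, "Up to 10 Gbps"),
    ("i3en.large", 2, 16, "Up to 25 Gbps"),
    ("c5.9xlarge", 36, 72, "10 Gbps"),
]

def pick_instance(min_vcpus, min_memory_gib):
    """Return the first (smallest listed) instance satisfying both minimums."""
    for name, vcpus, mem, _net in INSTANCE_TABLE:
        if vcpus >= min_vcpus and mem >= min_memory_gib:
            return name
    return None  # nothing in the table is large enough

print(pick_instance(2, 4))    # small web application -> t3.medium
print(pick_instance(16, 64))  # compute-heavy workload -> c5.9xlarge
```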

Storage

The choice of storage depends on the type of data access (block, file, or object), throughput, IOPS (Input/Output Operations Per Second), and latency requirements.

  • Amazon EBS: For block storage, you can choose between SSD-backed (e.g., General Purpose (gp3), Provisioned IOPS (io1/io2)) and HDD-backed (e.g., Throughput Optimized HDD (st1), Cold HDD (sc1)) volumes. gp3 volumes offer a baseline of 3,000 IOPS and 125 MiB/s of throughput, scalable up to 16,000 IOPS and 1,000 MiB/s.
  • Amazon S3: For object storage, consider the access patterns (frequent, infrequent, archival). S3 Standard is suitable for frequently accessed data, S3 Intelligent-Tiering is cost-effective for data with unknown access patterns, while S3 Glacier is designed for long-term archival.

Example Storage Choices:

| Storage Type | Use Case | Size | Performance | Latency |
|---|---|---|---|---|
| EBS gp3 (SSD) | General purpose, boot volumes | Up to 16 TiB | Baseline 3,000 IOPS / 125 MiB/s | Low |
| EBS io1/io2 (SSD) | I/O-intensive applications | Up to 16 TiB | Provisioned up to 64,000 IOPS | Very low |
| S3 Standard | Frequently accessed data | Unlimited | Scales with the number of requests | Low |
| S3 Glacier | Long-term data archiving | Unlimited | Retrieval times from minutes to hours | High (compared to S3 Standard) |
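The gp3 limits quoted above can be captured in a small validator, useful for sanity-checking volume settings before provisioning. This is a sketch of the published limits, not an AWS API call.

```python
# gp3 performance limits quoted above: baseline 3,000 IOPS / 125 MiB/s,
# maximums 16,000 IOPS / 1,000 MiB/s. Provisioning below baseline is
# not possible, so the baseline doubles as the minimum here.
GP3_BASELINE_IOPS, GP3_MAX_IOPS = 3000, 16000
GP3_BASELINE_TPUT, GP3_MAX_TPUT = 125, 1000  # MiB/s

def validate_gp3(iops, throughput_mibps):
    """Return True if the requested gp3 settings fall within the quoted limits."""
    return (GP3_BASELINE_IOPS <= iops <= GP3_MAX_IOPS
            and GP3_BASELINE_TPUT <= throughput_mibps <= GP3_MAX_TPUT)

print(validate_gp3(3000, 125))    # baseline settings -> True
print(validate_gp3(20000, 125))   # exceeds max IOPS -> False
```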

Networking

Networking speed is essential for applications requiring high throughput and low latency. You need to choose the appropriate service for your use case:

  • Amazon VPC: Lets you launch AWS resources into a virtual network tailored to your application. Consider the bandwidth of VPC endpoint connections and whether you need a Transit Gateway to interconnect VPCs with higher throughput needs.
  • Amazon Route 53: Provides DNS services that route end-user requests to Internet applications; it is critical to the speed and reliability with which users reach those applications.
  • AWS Direct Connect: Provides dedicated network connections from on-premises environments to AWS, at speeds ranging from 50 Mbps to 100 Gbps.
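Choosing a Direct Connect speed amounts to picking the smallest tier that covers your required bandwidth. The sketch below illustrates that logic; the tier list is illustrative (hosted and dedicated connections have different catalogs in practice).

```python
# Illustrative Direct Connect speed tiers in Mbps, spanning the
# 50 Mbps - 100 Gbps range mentioned above. Not an AWS API.
DX_TIERS_MBPS = [50, 100, 200, 300, 400, 500, 1000, 10000, 100000]

def pick_dx_speed(required_mbps):
    """Return the smallest tier (in Mbps) that meets the requirement."""
    for tier in DX_TIERS_MBPS:
        if tier >= required_mbps:
            return tier
    return None  # requirement exceeds the largest tier

print(pick_dx_speed(750))  # needs 750 Mbps -> 1000 (a 1 Gbps port)
```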

When selecting speeds and sizes for resources on AWS, businesses should also weigh cost implications and room for future growth. Leveraging auto scaling together with monitoring services like Amazon CloudWatch lets you adjust resource allocation dynamically with demand, balancing performance against cost. Proper sizing and scaling matter not only for meeting current requirements but also for adapting to future changes in business needs.
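Conceptually, target-tracking auto scaling adjusts capacity in proportion to how far a CloudWatch metric sits from its target. The one-line sketch below illustrates the idea; the real service applies cooldowns, warm-up periods, and min/max bounds on top of this.

```python
import math

def desired_capacity(current_capacity, metric_value, target_value):
    """Scale capacity proportionally so the metric would land on target.

    E.g., 4 instances running at 80% average CPU against a 50% target
    suggests roughly 4 * 80/50 = 6.4 instances, rounded up to 7.
    """
    return math.ceil(current_capacity * metric_value / target_value)

print(desired_capacity(4, 80, 50))   # CPU above target -> scale out to 7
print(desired_capacity(10, 50, 50))  # on target -> stay at 10
```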

Answer the Questions in Comment Section

T/F: When selecting an EC2 instance size, CPU and memory are the only factors you need to consider for your business requirements.

  • Answer: False

While CPU and memory are critical factors, you also need to consider network performance, storage performance, and the nature of the workload when selecting an EC2 instance size.

T/F: Amazon EBS Provisioned IOPS SSD (io1/io2) volumes can provision up to 64,000 IOPS per volume.

  • Answer: True

Amazon EBS io1/io2 volumes are designed to meet the needs for I/O-intensive workloads, and you can provision up to 64,000 IOPS per volume.

Which AWS storage service is optimized for infrequently accessed data?

  • A) Amazon S3 Standard
  • B) Amazon S3 Glacier
  • C) Amazon S3 Intelligent-Tiering
  • D) Amazon EBS

Answer: B) Amazon S3 Glacier

Amazon S3 Glacier is optimized for data archiving and long-term backup, suitable for infrequently accessed data.

T/F: Amazon RDS does not allow you to choose your database instance size to meet the business requirements.

  • Answer: False

Amazon RDS allows you to select from a variety of database instance sizes to meet your performance and capacity needs.

What is the maximum throughput performance you can get with Amazon EFS?

  • A) 3 GB/s
  • B) 10 GB/s
  • C) 500 MB/s
  • D) 1 GB/s

Answer: B) 10 GB/s

Amazon EFS is designed to provide up to 10 GB/s of throughput performance.

Which Amazon EC2 instance type is best suited for compute-optimized workloads?

  • A) C5
  • B) T3
  • C) M5
  • D) R5

Answer: A) C5

The C5 instance type is optimized for compute-intensive workloads, offering high performance at a low price per unit of compute.

T/F: Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is designed for data that is accessed less frequently but requires rapid access when needed.

  • Answer: True

S3 Standard-IA is ideal for data that is accessed infrequently but requires fast access when needed, with lower storage cost and higher retrieval fees.

You can scale out Amazon DynamoDB by increasing the number of:

  • A) Read replicas
  • B) Storage volumes
  • C) Provisioned read and write capacity units
  • D) EC2 instances

Answer: C) Provisioned read and write capacity units

DynamoDB scales by adjusting the provisioned read and write capacity units to handle higher load.
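The standard DynamoDB capacity-unit arithmetic makes this concrete: one RCU covers one strongly consistent read per second of an item up to 4 KB (two eventually consistent reads), and one WCU covers one write per second of an item up to 1 KB.

```python
import math

def read_capacity_units(reads_per_sec, item_kb, eventually_consistent=False):
    """RCUs needed: each read consumes one unit per 4 KB (rounded up);
    eventually consistent reads cost half."""
    rcus = reads_per_sec * math.ceil(item_kb / 4)
    return math.ceil(rcus / 2) if eventually_consistent else rcus

def write_capacity_units(writes_per_sec, item_kb):
    """WCUs needed: each write consumes one unit per 1 KB (rounded up)."""
    return writes_per_sec * math.ceil(item_kb / 1)

print(read_capacity_units(100, 6))    # 100 reads/s of 6 KB items -> 200 RCUs
print(write_capacity_units(50, 1.5))  # 50 writes/s of 1.5 KB items -> 100 WCUs
```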

Which AWS service provides a managed distributed cache environment to improve the speed of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases?

  • A) Amazon RDS
  • B) Amazon EC2
  • C) Amazon ElastiCache
  • D) Amazon S3

Answer: C) Amazon ElastiCache

Amazon ElastiCache provides a high-performance, in-memory cache which reduces the load on databases for read-intensive workloads.
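The cache-aside pattern ElastiCache is typically used for can be sketched in a few lines. Here a plain dict stands in for the cache and `slow_db_fetch` is a hypothetical stand-in for a disk-based database query.

```python
import time

cache = {}  # stands in for an ElastiCache (Redis/Memcached) cluster

def slow_db_fetch(key):
    """Hypothetical database read; the sleep simulates disk latency."""
    time.sleep(0.01)
    return f"value-for-{key}"

def get(key):
    if key in cache:            # cache hit: skip the database entirely
        return cache[key]
    value = slow_db_fetch(key)  # cache miss: read from the database...
    cache[key] = value          # ...then populate the cache for next time
    return value

get("user:42")         # first call misses and warms the cache
print(get("user:42"))  # second call is served from memory
```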

T/F: AWS Auto Scaling cannot adjust the desired capacity of EC2 instances based on changing traffic patterns to your application.

  • Answer: False

AWS Auto Scaling adjusts the number of EC2 instances dynamically based on traffic patterns to ensure that the desired capacity is met.

What is the maximum burstable network throughput offered by an Amazon EC2 t2.micro instance?

  • A) Up to 5 Gbps
  • B) Up to 10 Gbps
  • C) 1 Gbps
  • D) Up to 100 Mbps

Answer: D) Up to 100 Mbps

The Amazon EC2 t2.micro instance provides burstable network throughput of up to 100 Mbps.

T/F: When designing a system to meet business requirements, you should always select the resource with the highest performance to ensure future scalability.

  • Answer: False

It’s important to select resources that match the current and near-term expected demands to optimize costs. Over-provisioning can lead to unnecessary expenses, and future scalability can be addressed with proper architectural choices that allow for resource adjustment as necessary.
