Question.16 A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web application stores all blog content on an Amazon EFS volume. The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day, users report buffering and timeout issues while attempting to reach the site or watch videos. Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?
(A) Reconfigure Amazon EFS to enable maximum I/O.
(B) Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown.
(C) Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
(D) Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.
Answer: C
Explanation:
The best solution is to leverage Amazon CloudFront to distribute the video content stored in Amazon S3. The problem is that the current architecture, relying on EFS and EC2 instances behind an ALB, is struggling to handle the increased traffic from video content, causing buffering and timeouts.
Option C addresses this directly by offloading video delivery to CloudFront. CloudFront is a content delivery network (CDN) designed for low-latency, high-throughput content delivery. By storing the videos in S3 and serving them through CloudFront, the load on the EC2 instances, EFS, and ALB is significantly reduced. S3 provides scalable, cost-effective storage for the videos, and CloudFront caches the video content at edge locations closer to users, drastically improving the user experience and reducing latency. It also means the origin servers (the EC2 instances behind the ALB) no longer have to serve the videos, which lowers their CPU and memory usage.
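As a rough illustration of the migration step in option C, the sketch below copies video files from the EFS mount point to an S3 bucket with boto3; the mount path and bucket name are placeholders, and the CloudFront distribution would then use that bucket as its origin.

```python
import os
import boto3

# Assumptions: the EFS volume is mounted at /mnt/efs/videos on an EC2 instance,
# and "example-blog-videos" is a placeholder bucket that the CloudFront
# distribution would use as its origin. Adjust both for a real migration.
EFS_VIDEO_PATH = "/mnt/efs/videos"
BUCKET = "example-blog-videos"

s3 = boto3.client("s3")

for root, _dirs, files in os.walk(EFS_VIDEO_PATH):
    for name in files:
        local_path = os.path.join(root, name)
        # Preserve the directory layout as the S3 object key.
        key = os.path.relpath(local_path, EFS_VIDEO_PATH)
        s3.upload_file(local_path, BUCKET, key)
        print(f"Uploaded {local_path} to s3://{BUCKET}/{key}")
```

Once the objects are in S3, the blog pages only need to reference the CloudFront URLs for the videos instead of paths served by the web fleet.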
Option A, reconfiguring EFS for Max I/O performance mode, may raise aggregate throughput, but every video request would still flow through the EC2 instances and the ALB, so it does not scale as effectively as a CDN and keeps the content on EFS, which is more expensive than S3 for this workload. EFS is not designed for serving static content directly to a large number of users.
Option B, using instance store volumes, is not a persistent storage solution and would require complex data synchronization mechanisms between instances and S3. This adds unnecessary operational overhead and is not a cost-effective or scalable solution. Additionally, instance store volumes are ephemeral, meaning data is lost when the instance is stopped or terminated.
Option D, pointing a CloudFront distribution for all site content at the ALB, would add edge caching, but the videos would still be stored on EFS and served by the EC2 fleet on every cache miss, so the origin bottleneck and the higher EFS storage cost remain. Because the primary bottleneck is the video content, moving the videos to S3 and serving them through CloudFront offers the better balance of cost and performance improvement.
Therefore, option C is the most cost-efficient and scalable deployment for resolving the user experience issues. It leverages the strengths of S3 for storage and CloudFront for content delivery to offload the video delivery from the web servers, improving performance and scalability while minimizing cost.
Further Research:
Amazon EFS: https://aws.amazon.com/efs/
Amazon CloudFront: https://aws.amazon.com/cloudfront/
Amazon S3: https://aws.amazon.com/s3/
Question.17 A company with global offices has a single 1 Gbps AWS Direct Connect connection to a single AWS Region. The company’s on-premises network uses the connection to communicate with the company’s resources in the AWS Cloud. The connection has a single private virtual interface that connects to a single VPC. A solutions architect must implement a solution that adds a redundant Direct Connect connection in the same Region. The solution also must provide connectivity to other Regions through the same pair of Direct Connect connections as the company expands into other Regions. Which solution meets these requirements?
(A) Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC.
(B) Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new private virtual interface on the new connection, and connect the new private virtual interface to the single VPC.
(C) Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new public virtual interface on the new connection, and connect the new public virtual interface to the single VPC.
(D) Provision a transit gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the transit gateway. Associate the transit gateway with the single VPC.
Answer: A
Explanation:
The correct answer is A because it leverages the AWS Direct Connect Gateway (DXGW) to achieve both redundancy and connectivity to multiple regions. Here’s a breakdown:
- Redundancy: Creating a second Direct Connect connection and connecting both to the DXGW ensures that if one connection fails, traffic can failover to the other, providing high availability. Each connection has its own private virtual interface (VIF).
- Multi-Region Connectivity: The DXGW acts as a central hub, allowing the private virtual interfaces to reach VPCs in multiple AWS Regions. This fulfills the requirement of extending connectivity to other Regions as the company expands (a provisioning sketch follows this list).
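Below is a minimal boto3 sketch of the option A workflow. The connection IDs, virtual private gateway ID, VLANs, and ASNs are placeholders, and deleting the existing private VIF before creating its replacement is handled separately.

```python
import boto3

dx = boto3.client("directconnect")

# Placeholder IDs: dxcon-* are the two Direct Connect connections and
# vgw-* is the virtual private gateway attached to the existing VPC.
CONNECTION_IDS = ["dxcon-exampleaaa", "dxcon-examplebbb"]
VGW_ID = "vgw-0123456789abcdef0"

# 1. Provision the Direct Connect gateway (a global resource).
gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway",
    amazonSideAsn=64512,
)
gateway_id = gateway["directConnectGateway"]["directConnectGatewayId"]

# 2. Create a private VIF on each connection, terminating on the gateway
#    instead of directly on the VPC's virtual private gateway.
for i, connection_id in enumerate(CONNECTION_IDS):
    dx.create_private_virtual_interface(
        connectionId=connection_id,
        newPrivateVirtualInterface={
            "virtualInterfaceName": f"corp-private-vif-{i}",
            "vlan": 100 + i,
            "asn": 65000,  # on-premises BGP ASN
            "directConnectGatewayId": gateway_id,
        },
    )

# 3. Associate the gateway with the VPC's virtual private gateway. As the
#    company expands, gateways in other Regions are associated the same way.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=gateway_id,
    virtualGatewayId=VGW_ID,
)
```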
Option B is incorrect because connecting each Direct Connect connection directly to the same VPC does not inherently allow cross-region connectivity. While redundant, it’s limited to the single VPC and Region.
Option C is incorrect. Using a public virtual interface is not the appropriate way to connect to resources within a VPC. Public VIFs are used to access public AWS services and are not intended for private network connectivity to a VPC.
Option D suggests using a transit gateway, which is a valid hub for connecting multiple VPCs and on-premises networks. However, a private virtual interface cannot terminate directly on a transit gateway; that design requires a transit virtual interface attached to a Direct Connect gateway, which is then associated with the transit gateway. Because the requirement here is simply a redundant connection plus future multi-Region connectivity to VPCs, a Direct Connect gateway on its own is the simpler and more direct fit, and a transit gateway adds unnecessary complexity.
In summary, option A provides the most cost-effective and appropriate solution by utilizing the Direct Connect Gateway to establish redundant Direct Connect connections and enabling future expansion to other AWS Regions through the same connections, all while maintaining private connectivity to VPCs.
Supporting Links:
AWS Direct Connect Resiliency Recommendations: https://aws.amazon.com/premiumsupport/knowledge-center/direct-connect-resiliency/
AWS Direct Connect Gateway: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
Question.18 A company has a web application that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization. The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software. Which solution meets these requirements?
(A) Use Amazon ECS containers for the web application and Spot instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
(B) Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
(C) Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notification to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
(D) Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
Answer: C
Explanation:
The correct answer is C because it leverages AWS managed services to minimize operational overhead and removes the dependencies on EBS and third-party software, aligning with the stated requirements. Hosting the static web application on Amazon S3 uses its highly scalable, cost-effective object storage and eliminates the need for EC2 instances and Auto Scaling to serve static content.

S3 event notifications publish to the SQS queue when new videos are uploaded, creating an event-driven architecture. Processing the queue with AWS Lambda functions that invoke the Amazon Rekognition API replaces the EC2-based processing with serverless, managed compute, and Rekognition itself replaces the custom recognition software and its operational overhead. Together, S3, SQS, Lambda, and Rekognition form a fully managed, scalable, and cost-effective pipeline.

Alternatives A, B, and D do not go as far: they keep EC2 instances and Auto Scaling groups for serving content or processing the queue (and, in option B, swap EBS for EFS rather than removing server-attached storage), so they reduce operational overhead less than option C.
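To make the serverless pipeline in option C concrete, here is a minimal sketch of the queue-processing Lambda function, assuming an SQS trigger whose messages wrap standard S3 event notifications; completion handling and the categorization logic that consumes the detected labels are omitted.

```python
import json
import boto3

rekognition = boto3.client("rekognition")


def handler(event, context):
    """Triggered by SQS; each record wraps an S3 event notification for an uploaded video."""
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            # Start an asynchronous label-detection job on the uploaded video.
            # Job completion is typically published to an SNS topic (omitted
            # here), and the returned labels are used to categorize the post.
            response = rekognition.start_label_detection(
                Video={"S3Object": {"Bucket": bucket, "Name": key}},
            )
            print(f"Started Rekognition job {response['JobId']} for s3://{bucket}/{key}")
```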
Further reading:
Amazon Rekognition: https://aws.amazon.com/rekognition/
Amazon S3: https://aws.amazon.com/s3/
Amazon SQS: https://aws.amazon.com/sqs/
AWS Lambda: https://aws.amazon.com/lambda/
Question.19 A company has a serverless application comprised of Amazon CloudFront, Amazon API Gateway, and AWS Lambda functions. The current deployment process of the application code is to create a new version number of the Lambda function and run an AWS CLI script to update. If the new function version has errors, another CLI script reverts by deploying the previous working version of the function. The company would like to decrease the time to deploy new versions of the application logic provided by the Lambda functions, and also reduce the time to detect and revert when errors are identified. How can this be accomplished?
(A) Create and deploy nested AWS CloudFormation stacks with the parent stack consisting of the AWS CloudFront distribution and API Gateway, and the child stack containing the Lambda function. For changes to Lambda, create an AWS CloudFormation change set and deploy; if errors are triggered, revert the AWS CloudFormation change set to the previous version.
(B) Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. Rollback if Amazon CloudWatch alarms are triggered.
(C) Refactor the AWS CLI scripts into a single script that deploys the new Lambda version. When deployment is completed, the script tests execute. If errors are detected, revert to the previous Lambda version.
(D) Create and deploy an AWS CloudFormation stack that consists of a new API Gateway endpoint that references the new Lambda version. Change the CloudFront origin to the new API Gateway endpoint, monitor errors and if detected, change the AWS CloudFront origin to the previous API Gateway endpoint.
Answer: B
Explanation:
The correct answer is B. Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. Rollback if Amazon CloudWatch alarms are triggered.
Here’s why:
AWS SAM (Serverless Application Model) simplifies building and deploying serverless applications. It provides a higher-level abstraction over CloudFormation, making it easier to define serverless resources like Lambda functions, API Gateway, and CloudFront. Crucially, SAM integrates seamlessly with AWS CodeDeploy for safe and controlled deployments of Lambda functions.
CodeDeploy allows for gradual traffic shifting to new Lambda versions, which is essential for minimizing the impact of errors during deployments. This can be done using deployment strategies like Canary or Linear deployments. Canary deployments release the new version to a small subset of users, while Linear deployments increase traffic gradually over time. This controlled release allows you to identify issues before they affect the entire user base.
Pre-traffic and post-traffic test functions further enhance the safety of deployments. These functions automatically run before and after traffic is shifted to the new version, respectively. They can execute automated tests to verify that the new version is functioning correctly and meeting performance requirements. If the tests fail, CodeDeploy can automatically roll back the deployment to the previous working version.
Moreover, the integration with Amazon CloudWatch alarms adds another layer of safety. CloudWatch alarms can monitor metrics like error rates, latency, or resource utilization. If any of these metrics exceed predefined thresholds after the new version is deployed, the alarms can trigger a rollback, ensuring that the application remains stable.
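Under the hood, CodeDeploy shifts Lambda traffic by adjusting routing weights on an alias. The sketch below shows that mechanism directly with boto3 so the canary step and rollback decision are visible; the function, alias, version, and alarm names are placeholders, and in practice the DeploymentPreference section of a SAM template configures CodeDeploy to perform these steps, run the hook functions, and roll back automatically.

```python
import boto3

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

# Placeholders: a "live" alias currently pointing at an older version, a new
# version "6" receiving a 10% canary, and an alarm on the function error rate.
FUNCTION = "blog-api-handler"
ALIAS = "live"
NEW_VERSION = "6"
ALARM = "blog-api-handler-errors"

# Shift 10% of invocations to the new version; CodeDeploy's canary
# configurations automate this same RoutingConfig adjustment.
lambda_client.update_alias(
    FunctionName=FUNCTION,
    Name=ALIAS,
    RoutingConfig={"AdditionalVersionWeights": {NEW_VERSION: 0.10}},
)

# If the error alarm fires during the bake period, roll back by clearing the
# routing weights; otherwise promote the alias to the new version.
alarm_state = cloudwatch.describe_alarms(AlarmNames=[ALARM])["MetricAlarms"][0]["StateValue"]
if alarm_state == "ALARM":
    lambda_client.update_alias(
        FunctionName=FUNCTION,
        Name=ALIAS,
        RoutingConfig={"AdditionalVersionWeights": {}},
    )
else:
    lambda_client.update_alias(
        FunctionName=FUNCTION,
        Name=ALIAS,
        FunctionVersion=NEW_VERSION,
        RoutingConfig={"AdditionalVersionWeights": {}},
    )
```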
Option A is less ideal because, while CloudFormation provides infrastructure-as-code capabilities, an executed change set cannot simply be reverted; rolling back means preparing and deploying another change set with the previous template, which is slower and lacks the automated, alarm-driven rollback that CodeDeploy offers. Managing nested stacks adds further overhead without providing gradual traffic shifting.
Option C proposes a single CLI script, which, while simpler, lacks the safety features of gradual traffic shifting and automated rollback provided by CodeDeploy. It increases the risk of impacting the entire user base if an error is introduced.
Option D introduces new API Gateway endpoints and CloudFront origins, which adds unnecessary complexity to the deployment process. This also potentially leads to inconsistencies and higher operational overhead. SAM and CodeDeploy provide a more streamlined and automated solution.
In summary, SAM and CodeDeploy offer the best combination of rapid deployment, error detection, and automated rollback for serverless applications, addressing the company’s requirements effectively.
Supporting Links:
Lambda Deployment using CodeDeploy: https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-lambda.html
AWS SAM: https://aws.amazon.com/serverless/sam/
AWS CodeDeploy: https://aws.amazon.com/codedeploy/
Question.20 A company is planning to store a large number of archived documents and make the documents available to employees through the corporate intranet. Employees will access the system by connecting through a client VPN service that is attached to a VPC. The data must not be accessible to the public. The documents that the company is storing are copies of data that is held on physical media elsewhere. The number of requests will be low. Availability and speed of retrieval are not concerns of the company. Which solution will meet these requirements at the LOWEST cost?
(A) Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint.
(B) Launch an Amazon EC2 instance that runs a web server. Attach an Amazon Elastic File System (Amazon EFS) file system to store the archived data in the EFS One Zone-Infrequent Access (EFS One Zone-IA) storage class. Configure the instance security groups to allow access only from private networks.
(C) Launch an Amazon EC2 instance that runs a web server. Attach an Amazon Elastic Block Store (Amazon EBS) volume to store the archived data. Use the Cold HDD (sc1) volume type. Configure the instance security groups to allow access only from private networks.
(D) Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 Glacier Deep Archive storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint.
Answer: A
Explanation:
The best solution is A because it uses S3 One Zone-IA, which is designed for infrequently accessed data that does not need the multi-AZ availability of the S3 Standard class. Since the requirements state that availability and retrieval speed are not concerns, and the documents are only copies of data held on physical media elsewhere, the lower cost of S3 One Zone-IA makes it the right fit. An S3 interface endpoint keeps traffic to S3 inside the VPC, and restricting the bucket to that endpoint satisfies the requirement that the data is not publicly accessible; although website hosting is configured, the content is reachable only by employees coming through the Client VPN and the VPC. This approach balances cost and privacy with no servers to manage.
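For the private-access requirement in option A, a bucket policy can deny any request that does not arrive through the interface endpoint. The sketch below uses placeholder bucket and endpoint IDs; note that such a blanket deny also blocks management access from outside the VPC, so real policies usually carve out an administrative exception.

```python
import json
import boto3

s3 = boto3.client("s3")

# Placeholders: the archive bucket and the interface endpoint created in the VPC.
BUCKET = "example-archive-docs"
VPCE_ID = "vpce-0123456789abcdef0"

# Deny every request that does not come through the interface endpoint, which
# keeps the documents unreachable from the public internet. Be aware this also
# blocks console/API access from outside the VPC, including for administrators.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyThroughVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:sourceVpce": VPCE_ID}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```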
Option D, although Glacier Deep Archive is the cheapest S3 storage class, is incorrect because Deep Archive objects must first be restored before they can be read, and restores take hours (typically up to 12 hours, or up to 48 hours for bulk retrievals). Every employee request would therefore need a restore workflow, adding retrieval cost, delay, and operational complexity, so even with low traffic it is not a practical way to serve documents through the intranet; S3 One Zone-IA remains the better low-cost fit.
Options B and C are less suitable because they run EC2 instances, which add compute costs on top of storage costs and require more management than S3. EFS and EBS are intended for active file systems and block storage, respectively, not for cost-effective archiving of infrequently accessed data; even the EFS One Zone-IA class is priced higher per GB than S3 One Zone-IA, and the instances, volumes, and file systems all need ongoing maintenance. S3, as a fully managed service, is the simplest and likely the lowest-cost option.
Here are some resources for further reading:
Amazon EFS Storage Classes: https://aws.amazon.com/efs/pricing/
Amazon S3 Storage Classes: https://aws.amazon.com/s3/storage-classes/
Amazon S3 Glacier Deep Archive: https://aws.amazon.com/s3/storage-classes/glacier/
VPC Endpoints: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html