Question.6 A retail company needs to provide a series of data files to another company, which is its business partner. These files are saved in an Amazon S3 bucket under Account A, which belongs to the retail company. The business partner company wants one of its IAM users, User_DataProcessor, to access the files from its own AWS account (Account B). Which combination of steps must the companies take so that User_DataProcessor can access the S3 bucket successfully? (Choose two.) A. Turn on the cross-origin resource sharing (CORS) feature for the S3 bucket in Account A. B. In Account A, set the S3 bucket policy to the following: [policy document not shown] C. In Account A, set the S3 bucket policy to the following: [policy document not shown] D. In Account B, set the permissions of User_DataProcessor to the following: [policy document not shown] E. In Account B, set the permissions of User_DataProcessor to the following: [policy document not shown]
Answer: CD
Explanation:
Cross-account access to an S3 bucket requires action on both sides. Account A owns the bucket, so it must attach a bucket policy that allows the Account B principal (User_DataProcessor) to call the required S3 actions; it does not make sense for Account B to control access to resources in Account A. In addition, Account B must grant User_DataProcessor an IAM policy that allows the same S3 actions, because a cross-account user needs permission from both the resource owner and its own account. CORS (option A) governs browser cross-origin requests and is irrelevant here. One caveat on the displayed policies: for the s3:ListBucket action to work, the policy must reference the ARN of the bucket itself (without the /*) in addition to the object ARN (with the /*).
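Since the policy images did not survive extraction, here is a representative shape of the Account A bucket policy, as a sketch only. The account ID and bucket name are hypothetical, not taken from the question; the point is the two Resource forms the caveat above refers to:

```python
import json

# Hypothetical cross-account bucket policy for Account A (the bucket owner).
# The account ID and bucket name are placeholders for illustration.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/User_DataProcessor"},
            # s3:ListBucket applies to the bucket ARN itself (no /*)...
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-retail-data",
        },
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/User_DataProcessor"},
            # ...while object-level actions apply to objects under the bucket.
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-retail-data/*",
        },
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

A matching IAM policy in Account B would carry the same Action/Resource pairs, minus the Principal element.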
Question.7 A company is running a traditional web application on Amazon EC2 instances. The company needs to refactor the application as microservices that run on containers. Separate versions of the application exist in two distinct environments: production and testing. Load for the application is variable, but the minimum load and the maximum load are known. A solutions architect needs to design the updated application with a serverless architecture that minimizes operational complexity. Which solution will meet these requirements MOST cost-effectively? (A) Upload the container images to AWS Lambda as functions. Configure a concurrency limit for the associated Lambda functions to handle the expected peak load. Configure two separate Lambda integrations within Amazon API Gateway: one for production and one for testing. (B) Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the ECS clusters. (C) Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the EKS clusters. (D) Upload the container images to AWS Elastic Beanstalk. In Elastic Beanstalk, create separate environments and deployments for production and testing. Configure two separate Application Load Balancers to direct traffic to the Elastic Beanstalk deployments.
Answer: B
Explanation:
Option B is the most cost-effective and operationally simple serverless solution for containerized microservices with variable load, compared to the other options. Here’s why:
- Cost-Effectiveness: ECS Fargate provides a pay-as-you-go pricing model where you only pay for the compute and memory resources your containers consume. This aligns well with variable loads, ensuring minimal cost when the application is not heavily used. Lambda (Option A), while serverless, may become complex and expensive when dealing with entire containerized applications because of its invocation-based pricing and size limitations, which are not suitable for refactoring large traditional web applications. Option C is also not the most cost-effective, as EKS has more overhead and operational complexity than ECS Fargate. Elastic Beanstalk (Option D) abstracts away some infrastructure management but is not as cost-effective as Fargate for handling variable loads with containerized microservices, because it runs EC2 instances even when the application is idle.
- Operational Simplicity: ECS Fargate abstracts away the underlying infrastructure management (EC2 instances, cluster management), reducing operational overhead. The automatic scaling of ECS services easily manages the varying loads in production and testing environments. Two separate Application Load Balancers allow for independent routing and management of traffic to the respective environments. Lambda may introduce complexity if the application needs more than simple event-driven tasks. EKS (Option C) introduces additional operational complexity compared to ECS, as Kubernetes requires expertise and more configuration. Elastic Beanstalk (Option D) offers managed services, but it is still less serverless than Fargate because you need to choose an EC2 instance type based on your maximum expected load and manually configure scaling policies.
- Suitable for Microservices: ECS Fargate is designed for running containerized microservices. Option B allows easy scaling and management of microservices, making it suited to the requirements of a traditional web application being refactored into microservices.
- Serverless: Lambda (Option A) and Fargate (Options B and C) both allow for serverless container deployments, but ECS on Fargate is more appropriate when the entire application runs in containers.
- Distinct Environments: Configuring two separate clusters in ECS (Option B) for the production and testing environments will provide isolation and the ability to run separate versions of the application without risk of impacting one another.
Therefore, the best option is B as it balances cost-effectiveness, operational simplicity, and the microservices architecture.
Here are some resources for further research:
AWS Elastic Beanstalk: https://aws.amazon.com/elasticbeanstalk/
AWS ECS Fargate: https://aws.amazon.com/fargate/
AWS Lambda: https://aws.amazon.com/lambda/
AWS EKS: https://aws.amazon.com/eks/
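The auto scaling described in option B can be sketched as the request parameters for Application Auto Scaling on an ECS service. Cluster and service names, capacity bounds, and the CPU target are illustrative assumptions; in practice these dicts would be passed to boto3's application-autoscaling `register_scalable_target` and `put_scaling_policy` calls:

```python
# Sketch of the Application Auto Scaling setup implied by option B.
# Names and capacity values are placeholders, not from the question.
def ecs_target_tracking_config(cluster, service, min_tasks, max_tasks, target_cpu):
    resource_id = f"service/{cluster}/{service}"
    scalable_target = {
        "ServiceNamespace": "ecs",
        "ResourceId": resource_id,
        "ScalableDimension": "ecs:service:DesiredCount",
        "MinCapacity": min_tasks,  # the known minimum load
        "MaxCapacity": max_tasks,  # the known maximum load
    }
    scaling_policy = {
        "PolicyName": f"{service}-cpu-target-tracking",
        "ServiceNamespace": "ecs",
        "ResourceId": resource_id,
        "ScalableDimension": "ecs:service:DesiredCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_cpu,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    }
    return scalable_target, scaling_policy

# One configuration per environment: production and testing.
prod_target, prod_policy = ecs_target_tracking_config("prod-cluster", "web", 2, 10, 60.0)
test_target, test_policy = ecs_target_tracking_config("test-cluster", "web", 1, 4, 60.0)
```

Because the minimum and maximum load are known, `MinCapacity` and `MaxCapacity` bound the cost directly, which is what makes this option cost-effective.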
Question.8 A company has a multi-tier web application that runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB and the Auto Scaling group are replicated in a backup AWS Region. The minimum value and the maximum value for the Auto Scaling group are set to zero. An Amazon RDS Multi-AZ DB instance stores the application’s data. The DB instance has a read replica in the backup Region. The application presents an endpoint to end users by using an Amazon Route 53 record. The company needs to reduce its RTO to less than 15 minutes by giving the application the ability to automatically fail over to the backup Region. The company does not have a large enough budget for an active-active strategy. What should a solutions architect recommend to meet these requirements? (A) Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function. (B) Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web application and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function when the health check status is unhealthy. Update the application’s Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs. (C) Configure the Auto Scaling group in the backup Region to have the same values as the Auto Scaling group in the primary Region. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Remove the read replica. Replace the read replica with a standalone RDS DB instance. Configure Cross-Region Replication between the RDS DB instances by using snapshots and Amazon S3. (D) Configure an endpoint in AWS Global Accelerator with the two ALBs as equal weighted targets. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.
Answer: B
Explanation:
The correct answer is B because it provides a cost-effective and automated failover solution that meets the RTO requirement.
Here’s a breakdown of why option B is the best choice:
- Route 53 Failover Policy: Using a failover policy in Route 53 ensures that traffic is automatically routed to the backup region when the primary region is deemed unhealthy. This aligns directly with the requirement for automated failover. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
- Health Checks: Route 53 health checks monitor the application’s availability. This provides a robust and accurate way to detect failures in the primary region, triggering the failover process. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/health-checks-creating-deleting.html
- Lambda Function: A Lambda function in the backup region automates the promotion of the read replica to a standalone database instance and adjusts the Auto Scaling group’s configuration to start the application servers. This minimizes manual intervention and reduces the overall failover time. https://aws.amazon.com/lambda/
- SNS Notification: Amazon SNS acts as a reliable mechanism to notify the Lambda function when the health check fails. This ensures that the failover process is initiated promptly. https://aws.amazon.com/sns/
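The Route 53 pieces above can be sketched as a failover record pair. The hosted-zone names, ALB DNS names, and health check ID are hypothetical; this is the change batch that would be passed to Route 53's `change_resource_record_sets` API:

```python
# Sketch of the failover records described in option B.
# Domain, ALB DNS names, and health check ID are placeholders.
def failover_change_batch(domain, primary_alb, backup_alb, health_check_id):
    def record(set_identifier, role, alb_dns, health_check=None):
        r = {
            "Name": domain,
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": set_identifier,
            "Failover": role,  # "PRIMARY" or "SECONDARY"
            "ResourceRecords": [{"Value": alb_dns}],
        }
        if health_check:
            # Only the primary record needs the health check; Route 53
            # shifts traffic to SECONDARY when this check fails.
            r["HealthCheckId"] = health_check
        return r

    return {
        "Changes": [
            {"Action": "UPSERT",
             "ResourceRecordSet": record("primary", "PRIMARY", primary_alb, health_check_id)},
            {"Action": "UPSERT",
             "ResourceRecordSet": record("secondary", "SECONDARY", backup_alb)},
        ]
    }

batch = failover_change_batch(
    "app.example.com.",
    "primary-alb.us-east-1.elb.amazonaws.com",
    "backup-alb.us-west-2.elb.amazonaws.com",
    "hc-1234",
)
```

With real hosted zones, alias records pointing at the ALBs would usually be preferred over CNAMEs; the failover structure is the same.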
Why other options are less suitable:
Option A: Latency-based routing does not provide automatic failover, so traffic might still be directed to the unhealthy primary Region. Triggering failover solely on the HTTPCode_Target_5XX_Count metric is also too narrow and can miss other types of failures.
Option C: Latency-based routing again does not provide automatic failover, and cross-Region replication built from snapshots and Amazon S3 requires a manual restore that would exceed the 15-minute RTO. Keeping the backup Auto Scaling group at the same capacity as the primary also conflicts with the limited budget.
Option D: AWS Global Accelerator is suited to improving global application performance, but it is more expensive than the other options, and the company has cost constraints. As with option A, triggering failover solely on HTTP 5XX errors can miss other types of failures.
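A minimal sketch of the failover Lambda from option B follows. All identifiers are hypothetical; the function only builds the two request payloads that a real handler would pass to boto3's `rds.promote_read_replica` and `autoscaling.update_auto_scaling_group`:

```python
# Sketch of the failover Lambda described in option B.
# The replica identifier and Auto Scaling group name are placeholders.
REPLICA_ID = "app-db-replica"   # hypothetical read replica in the backup Region
ASG_NAME = "app-backup-asg"     # hypothetical Auto Scaling group (min/max are zero)

def build_failover_requests(min_size, max_size, desired):
    # Promote the cross-Region read replica to a standalone writable instance.
    promote_request = {"DBInstanceIdentifier": REPLICA_ID}
    # Raise the backup Auto Scaling group from zero so instances launch.
    scale_request = {
        "AutoScalingGroupName": ASG_NAME,
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": desired,
    }
    return promote_request, scale_request

promote, scale = build_failover_requests(min_size=2, max_size=6, desired=2)
```

Keeping the backup group at zero until failover is what keeps the standby Region cheap, and raising it from the Lambda function is what keeps the RTO under 15 minutes.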
Question.9 A company is hosting a critical application on a single Amazon EC2 instance. The application uses an Amazon ElastiCache for Redis single-node cluster for an in-memory data store. The application uses an Amazon RDS for MariaDB DB instance for a relational database. For the application to function, each piece of the infrastructure must be healthy and must be in an active state. A solutions architect needs to improve the application’s architecture so that the infrastructure can automatically recover from failure with the least possible downtime. Which combination of steps will meet these requirements? (Choose three.) (A) Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances. (B) Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are configured in unlimited mode. (C) Modify the DB instance to create a read replica in the same Availability Zone. Promote the read replica to be the primary DB instance in failure scenarios. (D) Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones. (E) Create a replication group for the ElastiCache for Redis cluster. Configure the cluster to use an Auto Scaling group that has a minimum capacity of two instances. (F) Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.
Answer: ADF
Explanation:
Here’s a detailed justification for the answer ADF:
A. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances.
- Justification: This ensures high availability for the EC2 instances hosting the application. The Elastic Load Balancer (ELB) distributes traffic, preventing a single point of failure. The Auto Scaling group automatically replaces unhealthy EC2 instances, minimizing downtime. Keeping a minimum of two instances ensures redundancy.
- Supporting Concepts: Load balancing, Auto Scaling, redundancy, high availability.
- Authoritative Link: https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html
D. Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones.
- Justification: Amazon RDS Multi-AZ deployments provide high availability and failover capabilities. If the primary DB instance fails, RDS automatically fails over to the standby instance in another Availability Zone, minimizing downtime. This ensures the database layer is resilient to failures.
- Supporting Concepts: Database replication, failover, Availability Zones, high availability.
- Authoritative Link: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
F. Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.
- Justification: Enabling Multi-AZ with ElastiCache for Redis creates a read replica in a different Availability Zone. In case of a failure in the primary node, ElastiCache automatically promotes the replica to be the primary node. This ensures minimal downtime and data loss.
- Supporting Concepts: Redis replication, failover, Availability Zones, high availability.
- Authoritative Link: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
Why other options are not optimal:
B: Configuring EC2 instances in unlimited mode affects CPU credit usage on burstable instances but does not address recovery from instance failure.
C: Creating a read replica helps, but promoting it manually during a failure is not automated and would cause more downtime than a Multi-AZ deployment.
E: Auto Scaling groups do not apply to ElastiCache for Redis; ElastiCache provides its own high-availability mechanism through replication groups with Multi-AZ and automatic failover.
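The ElastiCache settings implied by option F can be sketched as the request parameters for boto3's `elasticache.create_replication_group`. The group name and node type are illustrative assumptions:

```python
# Sketch of the replication-group settings implied by option F.
# The group ID and node type are placeholders, not from the question.
replication_group_request = {
    "ReplicationGroupId": "app-redis",
    "ReplicationGroupDescription": "Redis with automatic failover",
    "Engine": "redis",
    "CacheNodeType": "cache.t3.medium",
    "NumCacheClusters": 2,             # one primary plus one replica
    "AutomaticFailoverEnabled": True,  # promote the replica if the primary fails
    "MultiAZEnabled": True,            # place primary and replica in different AZs
}
```

Automatic failover requires at least one replica, which is why a replication group (rather than the existing single-node cluster) is part of the answer.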
Question.10 A retail company is operating its ecommerce application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses an Amazon RDS DB instance as the database backend. Amazon CloudFront is configured with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used to host all public zones. After an update of the application, the ALB occasionally returns a 502 status code (Bad Gateway) error. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns successfully when a solutions architect reloads the webpage immediately after the error occurs. While the company is working on the problem, the solutions architect needs to provide a custom error page instead of the standard ALB error page to visitors. Which combination of steps will meet this requirement with the LEAST amount of operational overhead? (Choose two.) (A) Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3. (B) Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Target.FailedHealthChecks is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server. (C) Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target if the health check fails. Modify DNS records to point to a publicly accessible webpage. (D) Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server. (E) Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.
Answer: AE
Explanation:
The correct answer is AE. Here’s why:
A. Create an Amazon S3 bucket… Upload the custom error pages to Amazon S3. This is necessary because you need a place to host the custom error page. Amazon S3 is ideal for serving static content like HTML error pages with high availability and scalability. S3 can be configured to serve static websites directly.
E. Add a custom error response by configuring a CloudFront custom error page. CloudFront is already being used as a CDN in front of the ALB. CloudFront custom error pages allow you to intercept HTTP error codes (like 502) returned by the origin (ALB) and serve a custom page instead. This is the simplest and most direct way to replace the ALB’s default error page without significantly altering the existing architecture or introducing complex routing logic. CloudFront is configured to deliver your website to the end users. By setting up custom error pages, you modify the content delivered by CloudFront when an origin error, such as a 502, occurs.
Why other options are not the best choices:
- B & D: While a CloudWatch alarm that invokes an AWS Lambda function to modify the ALB forwarding rule could work, it is significantly more complex and adds operational overhead. Changing ALB rules dynamically in response to health checks introduces potential race conditions and makes the routing configuration harder to maintain.
- C: Modifying Route 53 records to point to a different web server is a global DNS change, which is far more disruptive than configuring CloudFront’s custom error pages. DNS propagation delays also mean that not all users would see the custom error page immediately.
In summary: Hosting the error page on S3 and configuring CloudFront’s custom error pages is the most efficient and least disruptive approach. It leverages existing infrastructure (CloudFront) and minimizes operational overhead.
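The CloudFront piece of this answer can be sketched as the `CustomErrorResponses` fragment of a DistributionConfig. The error page path assumes the S3 origin from option A serves a page at /error.html; the TTL value is illustrative:

```python
# Sketch of the CustomErrorResponses fragment of a CloudFront
# DistributionConfig (option E). The page path and TTL are placeholders.
custom_error_responses = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 502,                # intercept the ALB's Bad Gateway
            "ResponsePagePath": "/error.html",
            "ResponseCode": "200",           # the CloudFront API takes this as a string
            "ErrorCachingMinTTL": 30,        # seconds before CloudFront retries the origin
        }
    ],
}
```

Since CloudFront already fronts the ALB, this fragment is the only configuration change visitors would notice, which is why the combination scores lowest on operational overhead.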
Supporting Links:
CloudFront Custom Error Pages: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html
Amazon S3 Static Website Hosting: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html