Question.1 A company needs to architect a hybrid DNS solution. This solution will use an Amazon Route 53 private hosted zone for the domain cloud.example.com for the resources stored within VPCs. The company has the following DNS resolution requirements: On-premises systems should be able to resolve and connect to cloud.example.com. All VPCs should be able to resolve cloud.example.com. There is already an AWS Direct Connect connection between the on-premises corporate network and AWS Transit Gateway. Which architecture should the company use to meet these requirements with the HIGHEST performance? (A) Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver. (B) Associate the private hosted zone to all the VPCs. Deploy an Amazon EC2 conditional forwarder in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the conditional forwarder. (C) Associate the private hosted zone to the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the outbound resolver. (D) Associate the private hosted zone to the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the shared services VPC to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
Answer: A
Explanation:
The correct answer is A because it provides a highly performant and scalable hybrid DNS solution. Here’s a detailed justification:
- Private Hosted Zone Association: Associating the private hosted zone with all VPCs ensures that resources within each VPC can directly resolve cloud.example.com domain names using Route 53’s DNS servers. This is the fundamental requirement for internal DNS resolution within AWS.
- Inbound Resolver for On-Premises Access: The Route 53 inbound resolver, placed in a shared services VPC, provides a dedicated endpoint to which on-premises systems forward DNS queries for cloud.example.com. Inbound resolvers allow on-premises networks to query the Route 53 private hosted zone in AWS.
- Transit Gateway Integration: Attaching all VPCs to the Transit Gateway establishes connectivity between them and the on-premises network via the existing Direct Connect. This creates a consistent network path for DNS queries.
- On-Premises DNS Forwarding: Creating forwarding rules on the on-premises DNS server that point to the inbound resolver ensures that any requests for cloud.example.com are directed to AWS for resolution within the private hosted zone.
- Performance Advantages: Option A leverages Route 53’s managed infrastructure, which is highly scalable and resilient, providing better performance than deploying and managing EC2-based conditional forwarders (Option B). It also does not limit the hosted zone association to the shared services VPC (Options C and D).
- Why Options C and D Are Incorrect: Options C and D associate the private hosted zone only with the shared services VPC, so resources in the other VPCs cannot resolve cloud.example.com.
Authoritative Links:
AWS Transit Gateway: https://aws.amazon.com/transit-gateway/
Route 53 Private Hosted Zones: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/private-hosted-zones.html
Route 53 Resolver: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html
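As a sketch, the inbound resolver endpoint from option A maps to the `route53resolver` `create_resolver_endpoint` API. The subnet, security group, and request IDs below are hypothetical placeholders, and the actual call (commented out) requires valid AWS credentials.

```python
# Request parameters for route53resolver.create_resolver_endpoint, matching
# option A's inbound resolver in the shared services VPC. All resource IDs
# are hypothetical placeholders.
inbound_endpoint_request = {
    "CreatorRequestId": "hybrid-dns-inbound",        # idempotency token
    "Name": "shared-services-inbound",
    "SecurityGroupIds": ["sg-0123456789abcdef0"],    # must allow DNS (TCP/UDP 53)
    "Direction": "INBOUND",                          # on-premises -> AWS queries
    "IpAddresses": [
        {"SubnetId": "subnet-aaaa1111"},  # one IP per AZ for high availability
        {"SubnetId": "subnet-bbbb2222"},
    ],
}

# import boto3
# resolver = boto3.client("route53resolver")
# response = resolver.create_resolver_endpoint(**inbound_endpoint_request)
```

The on-premises DNS server then forwards queries for cloud.example.com to the endpoint's IP addresses over the Direct Connect path through the transit gateway.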
Question.2 A company is providing weather data over a REST-based API to several customers. The API is hosted by Amazon API Gateway and is integrated with different AWS Lambda functions for each API operation. The company uses Amazon Route 53 for DNS and has created a resource record of weather.example.com. The company stores data for the API in Amazon DynamoDB tables. The company needs a solution that will give the API the ability to fail over to a different AWS Region. Which solution will meet these requirements? (A) Deploy a new set of Lambda functions in a new Region. Update the API Gateway API to use an edge-optimized API endpoint with Lambda functions from both Regions as targets. Convert the DynamoDB tables to global tables. (B) Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables. (C) Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables. (D) Deploy a new API Gateway API in a new Region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.
Answer: C
Explanation:
Let’s analyze why option C is the correct solution for creating a cross-region failover strategy for the weather data API.
The primary goal is to ensure the API remains available even if the primary AWS Region experiences an outage. This requires replicating the API functionality and data in another Region and configuring DNS to automatically switch to the secondary Region in case of a failure.
Option C leverages several key AWS features to achieve this. First, it replicates the entire API infrastructure by deploying a new API Gateway API and Lambda functions in a second Region. This ensures that a fully functional backup API is ready and waiting.
Crucially, it uses a Route 53 failover record. A failover record in Route 53 allows you to define a primary and a secondary record. Route 53 health checks monitor the primary endpoint (the API Gateway in the primary region). If the health check fails, Route 53 automatically begins routing traffic to the secondary endpoint (the API Gateway in the secondary region). This provides automatic and fast failover. [https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html]
Target health monitoring is enabled to ensure that the primary API Gateway endpoint is actively monitored for its health status. This is vital for timely failover. If the primary API Gateway or its underlying resources become unavailable, the health check will fail, triggering the failover.
Finally, the solution converts the DynamoDB tables to global tables. DynamoDB Global Tables provide multi-region, active-active database replication, enabling low-latency access to data from anywhere in the world and providing resilience in the face of regional outages. [https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html] This ensures data consistency and availability across both Regions.
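The failover routing described above can be sketched as a Route 53 `ChangeBatch`. The hosted zone, health check ID, and API Gateway domain names below are hypothetical placeholders; the actual `change_resource_record_sets` call is commented out.

```python
# Sketch of the ChangeBatch for option C's primary/secondary failover records.
# Regional domain names and the health check ID are hypothetical placeholders.
failover_change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "weather.example.com",
                "Type": "CNAME",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "TTL": 60,
                "HealthCheckId": "hc-primary-placeholder",  # monitors primary API
                "ResourceRecords": [
                    {"Value": "abc123.execute-api.us-east-1.amazonaws.com"}
                ],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "weather.example.com",
                "Type": "CNAME",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "def456.execute-api.eu-west-1.amazonaws.com"}
                ],
            },
        },
    ]
}

# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z...", ChangeBatch=failover_change_batch)
```

When the health check on the PRIMARY record fails, Route 53 answers queries for weather.example.com with the SECONDARY record instead.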
Now, let’s examine why the other options are less suitable.
Option A uses an edge-optimized API endpoint and attempts to target Lambda functions in both Regions from the same API Gateway. Edge-optimized endpoints route clients through CloudFront to reduce latency for geographically dispersed users; they do not provide regional failover, because the API itself still lives in a single Region. Targeting Lambda functions in another Region from that single API also adds cross-Region latency and complexity.
Option B employs a multivalue answer record in Route 53. Multivalue answers with health checks do stop returning unhealthy endpoints, but they distribute traffic across all healthy records rather than designating a primary and a secondary. They are a lightweight form of DNS-level load balancing, not the deterministic active-passive failover this scenario calls for.
Option D refers to “global functions,” which do not exist as an AWS Lambda feature; Lambda functions are Region-specific. And, as with option B, a multivalue answer record does not provide deterministic failover.
In summary, only option C provides a comprehensive solution that includes replicating the API and Lambda functions, configuring Route 53 for automatic failover with health checks, and ensuring data availability and consistency through DynamoDB Global Tables.
Question.3 A company uses AWS Organizations with a single OU named Production to manage multiple accounts. All accounts are members of the Production OU. Administrators use deny list SCPs in the root of the organization to manage access to restricted services. The company recently acquired a new business unit and invited the new unit’s existing AWS account to the organization. Once onboarded, the administrators of the new business unit discovered that they are not able to update existing AWS Config rules to meet the company’s policies. Which option will allow administrators to make changes and continue to enforce the current policies without introducing additional long-term maintenance? (A) Remove the organization’s root SCPs that limit access to AWS Config. Create AWS Service Catalog products for the company’s standard AWS Config rules and deploy them throughout the organization, including the new account. (B) Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the new account to the Production OU when adjustments to AWS Config are complete. (C) Convert the organization’s root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization’s root that allows AWS Config actions for principals only in the new account. (D) Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the organization’s root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.
Answer: D
Explanation:
The correct answer is D. Here’s a detailed justification:
The problem stems from deny list SCPs at the root of the organization preventing modifications to AWS Config rules in the new account. We need a solution that temporarily grants the necessary permissions to the new account’s administrators to update the rules while still maintaining the organization’s overall security posture.
Option D provides the best temporary and minimally disruptive solution. It creates an “Onboarding” OU, places the new account there, and attaches an SCP that allows AWS Config actions. On its own, that allow SCP would not be enough: an allow statement in an SCP cannot override an explicit deny inherited from higher in the hierarchy. The key step is therefore moving the deny list SCP from the organization’s root to the Production OU. The restriction then no longer applies to the Onboarding OU, while every Production account remains governed exactly as before. The new account’s administrators can make the necessary Config rule changes, after which the account is moved into the Production OU and inherits the standard restrictions. The Onboarding OU can be removed or kept for future onboarding, so no long-term maintenance is introduced.
Option A is undesirable because removing the root SCPs entirely would weaken the organization’s security posture for all accounts, not just the new one. Service Catalog could be part of a long-term standardization effort, but it does not address the need to modify existing rules in the new account.
Option B takes a similar temporary OU approach but leaves the deny list SCP at the organization’s root. Root SCPs apply to every OU and account beneath them, so the deny would still block Config actions in the Onboarding OU, and the allow SCP attached there would have no effect.
Option C, converting to allow list SCPs, is a significant undertaking with potentially wide-ranging impacts on every account in the organization and is not a suitable temporary measure. SCPs also cannot target principals in a single account when attached at the root without condition logic that is difficult to configure correctly and could unintentionally grant wider permissions than intended.
In summary, Option D provides a scoped, temporary, and reversible solution that balances the need to grant necessary permissions with the organization’s security requirements.
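To make the mechanics concrete, a hypothetical deny list SCP of the kind described above can be expressed as a plain policy document. The statement ID and the specific Config actions are illustrative assumptions, not the company's actual policy.

```python
import json

# Hypothetical deny list SCP of the kind attached at the organization root:
# it blocks AWS Config write actions everywhere beneath its attachment point.
deny_config_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyConfigChanges",
            "Effect": "Deny",
            "Action": ["config:Put*", "config:Delete*"],
            "Resource": "*",
        }
    ],
}

# In option D this SCP is detached from the root and attached to the
# Production OU, so accounts in the temporary Onboarding OU (which keeps the
# default FullAWSAccess SCP) can update their Config rules.
scp_document = json.dumps(deny_config_scp)
```

Because SCP evaluation intersects all policies from the root down to the account, relocating the deny is what unblocks the Onboarding OU; adding an allow there alone would not.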
Question.4 A company is running a two-tier web-based application in an on-premises data center. The application layer consists of a single server running a stateful application. The application connects to a PostgreSQL database running on a separate server. The application’s user base is expected to grow significantly, so the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing. Which solution will provide a consistent user experience that will allow the application and database tiers to scale? (A) Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled. (B) Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled. (C) Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing and sticky sessions enabled. (D) Enable Aurora Auto Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
Answer: C
Explanation:
The correct answer is C. Here’s a detailed justification:
The scenario requires scaling both the application and database tiers while maintaining a consistent user experience for a stateful application migrating to AWS.
- Aurora Auto Scaling for Aurora Replicas: Aurora Auto Scaling dynamically adjusts the number of Aurora Replicas in response to changes in application demand. Aurora Replicas handle read traffic, offloading the writer node and improving read performance, which is crucial for scaling the database tier without impacting write operations or application availability. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Scaling.html
- Application Load Balancer (ALB) with Round Robin and Sticky Sessions: An ALB is best suited for routing HTTP/HTTPS traffic, which is typical for web applications. Using the round-robin routing algorithm distributes requests evenly across multiple EC2 instances within the Auto Scaling group. Sticky sessions (also known as session affinity) are critical for maintaining stateful application data. They ensure that all requests from a given user are consistently routed to the same EC2 instance, preserving the user’s session data and ensuring a consistent experience. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
- Why other options are incorrect:
- Option A & D: Network Load Balancer (NLB): NLBs operate at Layer 4 (TCP/UDP), and while they provide high throughput and low latency, they are not ideal for HTTP/HTTPS traffic and do not inherently support sticky sessions based on application-layer cookies (necessary for stateful applications). NLBs support source IP stickiness, but this is generally less reliable than cookie-based stickiness.
- Option B & D: Aurora Auto Scaling for Aurora Writers: Aurora Auto Scaling applies only to Aurora Replicas; there is no auto scaling of the writer instance. A writer is scaled vertically by changing its instance class, which is disruptive, and write capacity is not the bottleneck when general application load grows.
- Option A & D: Least Outstanding Requests routing algorithm: Least outstanding requests is an ALB routing algorithm; these options pair it with an NLB, which routes on a flow hash and does not offer it. More importantly, for a stateful application, session stickiness matters far more than how new sessions are balanced.
In summary, Answer C offers the most comprehensive solution for scaling both application and database tiers of a stateful web application using Aurora PostgreSQL and EC2 Auto Scaling, while ensuring a consistent user experience.
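As a sketch, option C's two scaling mechanisms map to these API parameters; the cluster identifier and target group ARN are hypothetical placeholders, and the actual calls (commented out) require valid credentials.

```python
# Registering the Aurora Replica count as an Application Auto Scaling target
# (option C's database tier). The cluster identifier is hypothetical.
replica_scaling_target = {
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:my-aurora-postgres-cluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "MinCapacity": 1,
    "MaxCapacity": 15,  # Aurora supports up to 15 replicas per cluster
}

# Enabling duration-based (lb_cookie) stickiness on the ALB target group
# fronting the EC2 Auto Scaling group (option C's application tier).
stickiness_attributes = [
    {"Key": "stickiness.enabled", "Value": "true"},
    {"Key": "stickiness.type", "Value": "lb_cookie"},  # ALB-managed cookie
    {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
]

# boto3.client("application-autoscaling").register_scalable_target(**replica_scaling_target)
# boto3.client("elbv2").modify_target_group_attributes(
#     TargetGroupArn="arn:aws:elasticloadbalancing:...", Attributes=stickiness_attributes)
```

A scaling policy (for example, tracking average CPU or connection count across replicas) would then be attached to the scalable target to drive the replica count up and down.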
Question.5 A company uses a service to collect metadata from applications that the company hosts on premises. Consumer devices such as TVs and internet radios access the applications. Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses. The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers. The company wants to migrate the service to AWS, adopt serverless technologies, and retain the ability to support the older devices. The company has already migrated the applications into a set of AWS Lambda functions. Which solution will meet these requirements? (A) Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header. (B) Create an Amazon API Gateway REST API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Modify the default gateway responses to remove the problematic headers based on the value of the User-Agent header. (C) Create an Amazon API Gateway HTTP API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Create a response mapping template to remove the problematic headers based on the value of the User-Agent. Associate the response data mapping with the HTTP API. (D) Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. 
Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header.
Answer: A
Explanation:
The correct answer is A. Here’s why:
The primary requirement is to remove problematic HTTP headers from responses based on the User-Agent header to support older devices while migrating to serverless technologies in AWS.
- Option A (CloudFront + ALB + Lambda + CloudFront Function): This solution perfectly aligns with the requirements. CloudFront, a CDN, is used for caching and edge processing. A CloudFront function allows you to run lightweight code at the edge, closest to the user. In this case, a CloudFront function is created to inspect the User-Agent header of incoming requests and selectively strip out problematic headers before the response is sent to the client. The ALB acts as a load balancer in front of the Lambda functions and distributes traffic. It integrates well with Lambda and provides necessary features such as routing. This approach is scalable, efficient, and avoids burdening the Lambda functions with header manipulation.
- Option B (API Gateway REST API + Lambda + Gateway Response Modification): Gateway responses in API Gateway customize only the error responses that API Gateway itself generates (such as 4XX and 5XX responses); they do not apply to successful responses returned by Lambda integrations. They therefore cannot conditionally strip headers based on the User-Agent of each request.
- Option C (API Gateway HTTP API + Lambda + Response Mapping Template): HTTP APIs do not support the VTL mapping templates that REST APIs provide. Their parameter mapping supports only simple, unconditional header manipulations, so removing headers based on the value of the User-Agent header is not achievable this way.
- Option D (CloudFront + ALB + Lambda + Lambda@Edge function): This option is close to option A but misapplies Lambda@Edge. It attaches the function to viewer requests, yet the problematic headers must be removed from the response as it is returned to the device, which is the viewer response event. Lambda@Edge functions also carry higher cost and latency than CloudFront Functions, which are purpose-built for lightweight tasks such as header manipulation.
Justification:
- Edge Processing: CloudFront allows for processing requests and responses at the edge, reducing latency and offloading processing from the origin (Lambda functions). This is beneficial for optimizing the user experience for older devices.
- Header Manipulation with CloudFront Functions: CloudFront Functions provide a mechanism to execute JavaScript code directly in CloudFront locations. They’re specifically designed for lightweight tasks such as modifying HTTP headers based on request attributes (like User-Agent). This enables precise targeting of older devices without affecting newer ones.
- Scalability and Serverless: The solution leverages serverless technologies (Lambda, CloudFront) for automatic scaling and cost efficiency.
- ALB as a Router: The ALB provides load balancing and routing to the appropriate Lambda functions based on the request. This allows the functions themselves to remain focused on the core application logic.
- Minimal Impact on Lambda Functions: By handling the header removal in CloudFront, the Lambda functions don’t need to be modified, reducing complexity and potential for errors.
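CloudFront Functions are written in JavaScript, but the viewer-response logic they would implement here can be sketched language-neutrally. The legacy User-Agent markers and the specific headers being stripped below are hypothetical examples, not the company's actual device list.

```python
# Sketch of the viewer-response logic a CloudFront Function would implement.
# The legacy device markers and the headers they cannot handle are
# hypothetical examples.
LEGACY_UA_MARKERS = ("OldSmartTV/1.", "InternetRadio/2.")
UNSUPPORTED_HEADERS = ("strict-transport-security", "content-security-policy")

def strip_headers_for_legacy_devices(user_agent: str, response_headers: dict) -> dict:
    """Return response headers with unsupported ones removed for legacy UAs."""
    if not any(marker in user_agent for marker in LEGACY_UA_MARKERS):
        return dict(response_headers)  # modern device: leave headers intact
    return {
        name: value
        for name, value in response_headers.items()
        if name.lower() not in UNSUPPORTED_HEADERS
    }

headers = {
    "Content-Type": "application/json",
    "Strict-Transport-Security": "max-age=31536000",
}
stripped = strip_headers_for_legacy_devices("OldSmartTV/1.2", headers)
```

In the real deployment this logic runs in the CloudFront Function on every viewer response, so the Lambda functions behind the ALB never need to know about device quirks.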
Authoritative Links:
AWS Lambda: https://aws.amazon.com/lambda/
CloudFront Functions: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html
Amazon CloudFront: https://aws.amazon.com/cloudfront/
Application Load Balancer: https://aws.amazon.com/elasticloadbalancing/application-load-balancer/