Question.36 A company’s solutions architect is reviewing a web application that runs on AWS. The application references static assets in an Amazon S3 bucket in the us-east-1 Region. The company needs resiliency across multiple AWS Regions. The company already has created an S3 bucket in a second Region. Which solution will meet these requirements with the LEAST operational overhead? (A) Configure the application to write each object to both S3 buckets. Set up an Amazon Route 53 public hosted zone with a record set by using a weighted routing policy for each S3 bucket. Configure the application to reference the objects by using the Route 53 DNS name. (B) Create an AWS Lambda function to copy objects from the S3 bucket in us-east-1 to the S3 bucket in the second Region. Invoke the Lambda function each time an object is written to the S3 bucket in us-east-1. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins. (C) Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins. (D) Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. If failover is required, update the application code to load S3 objects from the S3 bucket in the second Region.
Answer: C
Explanation:
The correct answer is C because it offers a highly resilient and automated solution with minimal operational overhead, leveraging native AWS services for replication and content delivery.
Here’s a detailed justification:
- S3 Replication: S3 Cross-Region Replication (CRR) automatically and asynchronously copies objects between S3 buckets in different AWS Regions. This ensures data redundancy and availability in the second Region without requiring custom code or manual intervention. This satisfies the resiliency requirement efficiently. https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html
- CloudFront Origin Groups: Amazon CloudFront origin groups provide a mechanism for failover between origins. By configuring both S3 buckets as origins within an origin group, CloudFront can automatically switch to the secondary bucket if the primary bucket becomes unavailable. This ensures that the application continues to serve static assets even in the event of a regional outage. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-groups.html
- Least Operational Overhead: This solution is highly automated. S3 replication manages data synchronization, and CloudFront manages failover. No custom code (like Lambda functions) or manual intervention is required, minimizing operational overhead.
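The two building blocks of option C can be sketched as the request payloads the AWS APIs expect. This is a minimal sketch, assuming placeholder bucket names, role ARN, and origin IDs; note that S3 Cross-Region Replication also requires versioning to be enabled on both buckets:

```python
# S3 Cross-Region Replication: this configuration would be applied with
# s3.put_bucket_replication(Bucket="assets-us-east-1",
#                           ReplicationConfiguration=replication_configuration).
# All names and the IAM role ARN below are placeholders.
replication_configuration = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # role S3 assumes to replicate
    "Rules": [
        {
            "ID": "replicate-static-assets",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: replicate every object in the bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::assets-us-west-2"  # bucket in the second Region
            },
        }
    ],
}

# CloudFront origin group: fail over from the primary bucket origin to the
# secondary origin on 500-class errors. This fragment would sit inside the
# DistributionConfig passed to cloudfront.create_distribution(...).
origin_group = {
    "Id": "s3-failover-group",
    "FailoverCriteria": {"StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]}},
    "Members": {
        "Quantity": 2,
        "Items": [
            {"OriginId": "primary-us-east-1"},
            {"OriginId": "secondary-us-west-2"},
        ],
    },
}
```

Once both pieces are in place, there is nothing for the application to manage: S3 keeps the buckets in sync and CloudFront decides per-request which origin to use.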
Let’s examine why the other options are less desirable:
- Option A: Writing to both S3 buckets from the application requires code changes and adds latency to every upload. Route 53 weighted routing splits traffic by weight, not by health: without health checks it keeps sending a share of requests to a failed bucket, and dual writes risk inconsistent data if one write succeeds and the other fails.
- Option B: Using a Lambda function to copy objects adds complexity and potential points of failure. Lambda invocation for every object write can incur costs and latency. While CloudFront is helpful, the replication method is less efficient than S3’s built-in replication.
- Option D: Manually updating application code for failover is time-consuming and error-prone, increasing operational burden. The recovery time objective (RTO) would be much higher compared to using CloudFront origin groups.
In conclusion, option C provides the most efficient and resilient solution by utilizing S3 replication for automatic data synchronization and CloudFront origin groups for seamless failover, minimizing operational overhead and ensuring high availability of static assets.
Question.37 A company is hosting a three-tier web application in an on-premises environment. Due to a recent surge in traffic that resulted in downtime and a significant financial impact, company management has ordered that the application be moved to AWS. The application is written in .NET and has a dependency on a MySQL database. A solutions architect must design a scalable and highly available solution to meet the demand of 200,000 daily users. Which steps should the solutions architect take to design an appropriate solution? (A) Use AWS Elastic Beanstalk to create a new application with a web server environment and an Amazon RDS MySQL Multi-AZ DB instance. The environment should launch a Network Load Balancer (NLB) in front of an Amazon EC2 Auto Scaling group in multiple Availability Zones. Use an Amazon Route 53 alias record to route traffic from the company’s domain to the NLB. (B) Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon EC2 Auto Scaling group spanning three Availability Zones. The stack should launch a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain deletion policy. Use an Amazon Route 53 alias record to route traffic from the company’s domain to the ALB. (C) Use AWS Elastic Beanstalk to create an automatically scaling web server environment that spans two separate Regions with an Application Load Balancer (ALB) in each Region. Create a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a cross-Region read replica. Use Amazon Route 53 with a geoproximity routing policy to route traffic between the two Regions. (D) Use AWS CloudFormation to launch a stack containing an Application Load Balancer (ALB) in front of an Amazon ECS cluster of Spot instances spanning three Availability Zones. The stack should launch an Amazon RDS MySQL DB instance with a Snapshot deletion policy. Use an Amazon Route 53 alias record to route traffic from the company’s domain to the ALB.
Answer: B
Explanation:
The correct answer is B because it provides a scalable, highly available, and cost-effective solution for migrating the .NET application and its MySQL database to AWS. Let’s break down why:
- AWS CloudFormation: Using CloudFormation allows for infrastructure-as-code, enabling repeatable and consistent deployments. This is crucial for managing complex environments and future updates.
- Application Load Balancer (ALB): ALBs distribute incoming application traffic across multiple targets, such as EC2 instances or containers, in multiple Availability Zones. This ensures high availability and fault tolerance. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
- Amazon EC2 Auto Scaling Group: An Auto Scaling group ensures that the number of EC2 instances matches the demand. It automatically adjusts capacity to maintain performance and availability, scaling out during peak times and scaling in during periods of low traffic. Spanning three Availability Zones increases resilience. https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html
- Amazon Aurora MySQL Multi-AZ DB Cluster: Aurora MySQL offers significantly better performance than standard MySQL and is designed for high availability. The Multi-AZ deployment automatically replicates data to standby instances in other Availability Zones for failover protection. https://aws.amazon.com/rds/aurora/
- Retain Deletion Policy: This policy prevents the database from being accidentally deleted when the CloudFormation stack is deleted, ensuring data preservation.
- Amazon Route 53 Alias Record: Using an alias record in Route 53 directly links the company’s domain name to the ALB, providing a simplified and efficient way to route traffic. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
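The key resources from option B can be sketched as a CloudFormation template structure. This is a minimal sketch expressed as the template's JSON form in Python; subnet IDs, sizes, and the target group reference are placeholders, and a real template needs many more properties (launch template, listener, security groups, and so on):

```python
# Skeleton of the CloudFormation template option B describes. Only the two
# properties the answer hinges on are shown: an Auto Scaling group spanning
# three Availability Zones and a Retain deletion policy on the Aurora cluster.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServerGroup": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "MinSize": "3",
                "MaxSize": "12",
                # One private subnet per Availability Zone (placeholder IDs).
                "VPCZoneIdentifier": [
                    "subnet-aaaa1111",
                    "subnet-bbbb2222",
                    "subnet-cccc3333",
                ],
                # Hypothetical ALB target group defined elsewhere in the stack.
                "TargetGroupARNs": [{"Ref": "AlbTargetGroup"}],
            },
        },
        "AuroraCluster": {
            "Type": "AWS::RDS::DBCluster",
            # Retain keeps the cluster (and its data) if the stack is deleted.
            "DeletionPolicy": "Retain",
            "Properties": {
                "Engine": "aurora-mysql",
                "MasterUsername": "admin",
                # Placeholder dynamic reference; never hard-code credentials.
                "MasterUserPassword": "{{resolve:secretsmanager:db-secret}}",
            },
        },
    },
}
```

The `DeletionPolicy` attribute sits at the resource level, not inside `Properties`; putting it in the wrong place is a common template mistake.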
Option A is less ideal because Elastic Beanstalk, while simplifying deployment, offers less granular control compared to CloudFormation. An NLB is optimized for TCP traffic, whereas an ALB is more suitable for web applications due to its Layer 7 capabilities.
Option C introduces unnecessary complexity by using multi-region deployment with geoproximity routing and can increase costs. While redundancy is good, a properly configured Multi-AZ deployment is usually sufficient for most high availability requirements and is simpler to manage and more cost-effective initially.
Option D uses Spot Instances for the EC2 instances, which can be interrupted and are not suitable for a mission-critical application needing sustained availability. Furthermore, RDS MySQL with a Snapshot deletion policy poses a data loss risk upon stack deletion.
Therefore, option B provides the most appropriate balance of scalability, high availability, manageability, and cost-effectiveness for the specified requirements.
Question.38 A company is using AWS Organizations to manage multiple AWS accounts. For security purposes, the company requires the creation of an Amazon Simple Notification Service (Amazon SNS) topic that enables integration with a third-party alerting system in all the Organizations member accounts. A solutions architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of CloudFormation stacks. Trusted access has been enabled in Organizations. What should the solutions architect do to deploy the CloudFormation StackSets in all AWS accounts? (A) Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an organization. Use CloudFormation StackSets drift detection. (B) Create stacks in the Organizations member accounts. Use self-service permissions. Set deployment options to deploy to an organization. Enable the CloudFormation StackSets automatic deployment. (C) Create a stack set in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment. (D) Create stacks in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets drift detection.
Answer: C
Explanation:
The correct answer is C. Here’s a detailed justification:
To achieve automated deployment of the SNS topic across all AWS accounts within an AWS Organization, CloudFormation StackSets with service-managed permissions is the ideal approach. StackSets allows you to manage CloudFormation stacks across multiple AWS accounts and regions from a central management account.
Option A is incorrect because the stack set must be created in the organization’s management account, not in member accounts. In addition, drift detection only monitors existing stack instances for configuration changes; it does not deploy anything.
Option B is incorrect because it creates individual stacks rather than a stack set, and self-service (self-managed) permissions require manually creating and delegating IAM roles in every member account, which adds exactly the operational overhead that service-managed permissions with trusted access avoid.
Option D is incorrect because it also creates individual stacks instead of a stack set, so there is no mechanism to target the whole organization. As with option A, drift detection monitors changes after deployment; automatic deployment is the feature that actually propagates the stack to member accounts, including accounts that join the organization later.
The correct approach, option C, involves creating the StackSet in the Organizations management account. Using service-managed permissions simplifies management, allowing StackSets to assume necessary roles in member accounts without manual role creation in each account. Setting deployment options to deploy to the organization ensures the SNS topic is deployed across all current and future member accounts. Enabling automatic deployment ensures that any updates to the StackSet are automatically propagated to all target accounts, maintaining consistency and reducing manual intervention. This solution adheres to AWS best practices for multi-account management using Organizations and CloudFormation.
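Option C's settings map directly onto the StackSets API. A minimal sketch of the boto3-style request parameters built in the management account; the stack set name, template body, and organization root ID are placeholders:

```python
# Parameters for cloudformation.create_stack_set(**create_stack_set_params),
# run from the Organizations management account with trusted access enabled.
create_stack_set_params = {
    "StackSetName": "org-sns-alerting-topic",
    "TemplateBody": "...",  # placeholder: the template that defines the SNS topic
    # Service-managed permissions let StackSets use Organizations trusted access
    # instead of manually created IAM roles in each member account.
    "PermissionModel": "SERVICE_MANAGED",
    # Automatic deployment pushes the stack into accounts that join the
    # organization later and removes it from accounts that leave.
    "AutoDeployment": {"Enabled": True, "RetainStacksOnAccountRemoval": False},
}

# Deployment targets for the follow-up create_stack_instances call:
# targeting the organization root (placeholder ID) deploys to every
# current member account.
deployment_targets = {"OrganizationalUnitIds": ["r-examplerootid"]}
```

With `SERVICE_MANAGED` permissions the target must be expressed as organizational unit IDs (or the root) rather than a list of account IDs, which is what "deploy to the organization" means in the console.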
Here are some authoritative links for further research:
Permissions models for StackSets: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-permissions.html
AWS CloudFormation StackSets: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-stacksets.html
Using AWS CloudFormation StackSets with Organizations: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs.html
Question.39 A company wants to migrate its workloads from on premises to AWS. The workloads run on Linux and Windows. The company has a large on-premises infrastructure that consists of physical machines and VMs that host numerous applications. The company must capture details about the system configuration, system performance, running processes, and network connections of its on-premises workloads. The company also must divide the on-premises applications into groups for AWS migrations. The company needs recommendations for Amazon EC2 instance types so that the company can run its workloads on AWS in the most cost-effective manner. Which combination of steps should a solutions architect take to meet these requirements? (Choose three.) (A) Assess the existing applications by installing AWS Application Discovery Agent on the physical machines and VMs. (B) Assess the existing applications by installing AWS Systems Manager Agent on the physical machines and VMs. (C) Group servers into applications for migration by using AWS Systems Manager Application Manager. (D) Group servers into applications for migration by using AWS Migration Hub. (E) Generate recommended instance types and associated costs by using AWS Migration Hub. (F) Import data about server sizes into AWS Trusted Advisor. Follow the recommendations for cost optimization.
Answer: ADE
Explanation:
The correct answer is ADE. Here’s why:
- A. Assess the existing applications by installing AWS Application Discovery Agent on the physical machines and VMs. AWS Application Discovery Service is the primary tool for gathering detailed information about on-premises workloads, including system configuration, performance metrics, running processes, and network connections. The Application Discovery Agent runs on both Linux and Windows, on physical servers as well as VMs, and collects the in-depth process and network-dependency data that agentless discovery (which targets VMware environments) does not. This aligns directly with the requirement to capture comprehensive details about the on-premises environment.
- D. Group servers into applications for migration by using AWS Migration Hub. AWS Migration Hub is designed to track the progress of application migrations to AWS. It allows you to group servers into applications to simplify the migration process and track the overall status of each application. This satisfies the requirement to divide on-premises applications into groups.
- E. Generate recommended instance types and associated costs by using AWS Migration Hub. After discovering the on-premises environment and grouping the applications, AWS Migration Hub can provide recommendations for EC2 instance types and associated costs based on the collected data. It helps to ensure that the migrated workloads run on AWS in a cost-effective manner, as the requirement specifies.
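Grouping discovered servers into an application (step D) is done through the Application Discovery Service APIs that back Migration Hub. A minimal sketch of the request payloads, with hypothetical application and server configuration IDs as placeholders:

```python
# Request for discovery.create_application(**create_application_request):
# defines a logical application grouping in Migration Hub.
create_application_request = {
    "name": "billing-service",
    "description": "Three-tier billing app, migration wave 1",  # placeholder
}

# Request for
# discovery.associate_configuration_items_to_application(**associate_request):
# attaches discovered servers (by their configuration IDs) to that application.
# The IDs below are placeholders; real ones come from the discovery results.
associate_request = {
    "applicationConfigurationId": "d-application-example",
    "configurationIds": ["d-server-0001", "d-server-0002"],
}
```

Once servers are grouped this way, Migration Hub can track migration status and generate EC2 instance-type recommendations per application (step E).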
Why the other options are less suitable:
- B. Assess the existing applications by installing AWS Systems Manager Agent on the physical machines and VMs. While AWS Systems Manager Agent can gather information about instances, it’s not primarily designed for discovering and profiling on-premises environments for migration purposes. Application Discovery Service is the more appropriate tool.
- C. Group servers into applications for migration by using AWS Systems Manager Application Manager. AWS Systems Manager Application Manager focuses on operational management and visibility of applications after they’ve been migrated to AWS, not the initial grouping and discovery for migration planning.
- F. Import data about server sizes into AWS Trusted Advisor. Follow the recommendations for cost optimization. AWS Trusted Advisor is a tool for cost optimization, security, fault tolerance and performance improvement on AWS resources. It does not help with the initial discovery and gathering of information from on-premises environments required for planning a migration.
Question.40 A company is hosting an image-processing service on AWS in a VPC. The VPC extends across two Availability Zones. Each Availability Zone contains one public subnet and one private subnet. The service runs on Amazon EC2 instances in the private subnets. An Application Load Balancer in the public subnets is in front of the service. The service needs to communicate with the internet and does so through two NAT gateways. The service uses Amazon S3 for image storage. The EC2 instances retrieve approximately 1 TB of data from an S3 bucket each day. The company has promoted the service as highly secure. A solutions architect must reduce cloud expenditures as much as possible without compromising the service’s security posture or increasing the time spent on ongoing operations. Which solution will meet these requirements? (A) Replace the NAT gateways with NAT instances. In the VPC route table, create a route from the private subnets to the NAT instances. (B) Move the EC2 instances to the public subnets. Remove the NAT gateways. (C) Set up an S3 gateway VPC endpoint in the VPC. Attach an endpoint policy to the endpoint to allow the required actions on the S3 bucket. (D) Attach an Amazon Elastic File System (Amazon EFS) volume to the EC2 instances. Host the images on the EFS volume.
Answer: C
Explanation:
The most cost-effective and secure solution is to implement an S3 gateway VPC endpoint (Option C). Here’s why:
- Cost Reduction: NAT gateways charge for every gigabyte of data they process (roughly $0.045/GB in us-east-1), so pulling 1 TB per day from S3 through them costs on the order of $45 per day in data-processing fees alone, on top of the hourly charges. S3 gateway endpoints have no hourly or per-GB charge for traffic between the VPC and S3.
- Security: S3 gateway endpoints allow EC2 instances to access S3 without traversing the public internet, enhancing security. Traffic stays within the AWS network. An endpoint policy can restrict access to specific S3 buckets and actions, further bolstering security.
- Performance: S3 gateway endpoints are highly available and scalable. By using a gateway endpoint, you eliminate network hops and bottlenecks associated with NAT gateways, potentially improving data transfer speeds.
- Simplicity: Once configured, the gateway endpoint requires minimal ongoing operational effort. The route table entries are managed automatically.
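The endpoint plus its policy can be sketched as the request parameters for the EC2 API. A minimal sketch; the bucket name, VPC ID, route table IDs, and Region in the service name are all placeholders:

```python
import json

# Endpoint policy restricting the endpoint to read access on one bucket.
# The bucket name is a placeholder, not a value from the question.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::image-assets-bucket",
                "arn:aws:s3:::image-assets-bucket/*",
            ],
        }
    ],
}

# Parameters for ec2.create_vpc_endpoint(**create_endpoint_params).
# Associating the private subnets' route tables is what makes S3 traffic
# flow through the endpoint automatically, with no instance changes.
create_endpoint_params = {
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.us-east-1.s3",
    "VpcEndpointType": "Gateway",
    "RouteTableIds": ["rtb-aaaa1111", "rtb-bbbb2222"],
    "PolicyDocument": json.dumps(endpoint_policy),
}
```

Because the endpoint adds a prefix-list route to the associated route tables, S3-bound traffic from the private subnets stops traversing the NAT gateways as soon as the endpoint is created; other internet-bound traffic is unaffected.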
Option A (NAT Instances) is less desirable because NAT instances require manual management, patching, and scaling. They introduce a single point of failure, and they can be more expensive than S3 gateway endpoints, particularly at higher data transfer volumes.
Option B (moving EC2 instances to public subnets) significantly compromises security. It exposes the EC2 instances directly to the internet, which contradicts the highly secure posture the company has promoted, so this change is unacceptable even though it would eliminate the NAT gateway costs.
Option D (Amazon EFS) is not suitable because it would require migrating the existing image data from S3 to EFS, which is time-consuming. Moreover, EFS costs considerably more per GB stored than S3 for large-scale object data, and the application would need code changes to read from a file system instead of the S3 API, increasing rather than reducing operational effort.
Therefore, leveraging an S3 gateway VPC endpoint addresses the problem statement’s constraints by providing a secure, cost-effective, and operationally simple solution for accessing S3 from within the VPC.
Authoritative Links:
VPC Pricing: https://aws.amazon.com/vpc/pricing/
VPC Endpoints: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
S3 Gateway Endpoints: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html