Question.46 A company wants to use AWS development tools to replace its current bash deployment scripts. The company currently deploys a LAMP application to a group of Amazon EC2 instances behind an Application Load Balancer (ALB). During the deployments, the company unit tests the committed application, stops and starts services, unregisters and re-registers instances with the load balancer, and updates file permissions. The company wants to maintain the same deployment functionality through the shift to using AWS services. Which solution will meet these requirements?
(A) Use AWS CodeBuild to test the application. Use bash scripts invoked by AWS CodeDeploy’s appspec.yml file to restart services, and deregister and register instances with the ALB. Use the appspec.yml file to update file permissions without a custom script.
(B) Use AWS CodePipeline to move the application from the AWS CodeCommit repository to AWS CodeDeploy. Use CodeDeploy’s deployment group to test the application, unregister and re-register instances with the ALB, and restart services. Use the appspec.yml file to update file permissions without a custom script.
(C) Use AWS CodePipeline to move the application source code from the AWS CodeCommit repository to AWS CodeDeploy. Use CodeDeploy to test the application. Use CodeDeploy’s appspec.yml file to restart services and update permissions without a custom script. Use AWS CodeBuild to unregister and re-register instances with the ALB.
(D) Use AWS CodePipeline to trigger AWS CodeBuild to test the application. Use bash scripts invoked by AWS CodeDeploy’s appspec.yml file to restart services. Unregister and re-register the instances in the AWS CodeDeploy deployment group with the ALB. Update the appspec.yml file to update file permissions without a custom script.
Answer: D
Explanation:
Here’s a detailed justification for why option D is the correct solution:
The scenario requires a solution that integrates testing, deployment orchestration, service management, and load balancer interaction, all while transitioning from bash scripts to AWS development tools.
- CodePipeline for Orchestration: AWS CodePipeline is the core service for continuous integration and continuous delivery (CI/CD). It automates the build, test, and deploy phases of the software release process, which makes it the natural orchestrator for replacing the bash-driven deployment process. https://aws.amazon.com/codepipeline/
- CodeBuild for Testing: AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. Using CodeBuild for unit testing aligns with the company’s requirements for testing the committed application. https://aws.amazon.com/codebuild/
- CodeDeploy for Deployment & Service Management: AWS CodeDeploy automates code deployments to compute services such as EC2 instances. The appspec.yml file defines the deployment actions and is central to automating the tasks that the company previously performed through bash scripts. https://aws.amazon.com/codedeploy/
- appspec.yml for Tasks: The appspec.yml file within CodeDeploy deployments allows custom scripts (bash in this case) to run at lifecycle events such as ApplicationStop, BeforeInstall, AfterInstall, ApplicationStart, and ValidateService. This enables restarting services with bash scripts invoked by appspec.yml, and the file’s permissions section updates file permissions without a custom script (see the sketch after this list).
- ALB Integration: CodeDeploy natively integrates with Application Load Balancers. During deployment lifecycle events, instances in the CodeDeploy deployment group are automatically deregistered from the ALB before deployment and re-registered after the deployment succeeds. This ensures minimal downtime and a seamless deployment process.
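For reference, here is a minimal appspec.yml sketch for an EC2/on-premises deployment. The script paths under scripts/ and the www-data owner are hypothetical placeholders used only for illustration; the structure follows the AppSpec file reference for EC2/on-premises deployments.

```yaml
version: 0.0
os: linux
files:
  # Copy the revision's /app directory to the web root
  - source: /app
    destination: /var/www/html
permissions:
  # File permission updates handled by appspec.yml, no custom script needed
  - object: /var/www/html
    owner: www-data
    group: www-data
    mode: 755
    type:
      - directory
      - file
hooks:
  ApplicationStop:
    - location: scripts/stop_services.sh      # hypothetical bash script
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/set_up_config.sh      # hypothetical bash script
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/restart_services.sh   # hypothetical bash script
      timeout: 300
      runas: root
  ValidateService:
    - location: scripts/validate_service.sh   # hypothetical bash script
      timeout: 300
```

Note that ALB deregistration and re-registration do not appear in the hooks section: when the deployment group is configured with a load balancer, CodeDeploy runs the BlockTraffic and AllowTraffic lifecycle events itself.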
Option A is incorrect because it omits CodePipeline, and relying on CodePipeline to orchestrate the CI/CD pipeline is the more robust approach. Options B and C are incorrect because they suggest CodeDeploy itself can perform testing, which is not one of its core functions; CodeBuild is better suited. Option C also incorrectly assigns unregistering and re-registering instances with the ALB to AWS CodeBuild; that function is handled by CodeDeploy.
In summary, option D provides the most comprehensive solution that addresses all the company’s requirements by utilizing the strengths of CodePipeline for orchestration, CodeBuild for testing, and CodeDeploy for deployment, with the appspec.yml file handling service management and ALB integration.
Question.47 A company runs an application with an Amazon EC2 and on-premises configuration. A DevOps engineer needs to standardize patching across both environments. Company policy dictates that patching only happens during non-business hours. Which combination of actions will meet these requirements? (Choose three.)
(A) Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations.
(B) Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager.
(C) Create IAM access keys for the on-premises machines to interact with AWS Systems Manager.
(D) Run an AWS Systems Manager Automation document to patch the systems every hour.
(E) Use Amazon EventBridge scheduled events to schedule a patch window.
(F) Use AWS Systems Manager Maintenance Windows to schedule a patch window.
Answer: ABF
Explanation:
The correct answer is ABF. Here’s a detailed justification:
A. Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations: AWS Systems Manager (SSM) is a management service that allows you to manage your EC2 instances and on-premises servers and virtual machines. To manage on-premises servers with SSM, you need to onboard them as managed instances. Systems Manager Hybrid Activations enable you to register your on-premises servers and VMs with SSM, making them manageable through the same interface as your EC2 instances. This standardization is crucial for centralized patching. https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managed-instances.html
B. Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager: EC2 instances need permissions to communicate with SSM. This is achieved by attaching an IAM role to the EC2 instances. This IAM role grants the necessary permissions for SSM Agent, running on the instances, to perform actions on behalf of SSM, such as receiving commands, reporting status, and downloading patches. Without this role, the EC2 instances cannot be managed by SSM. https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-iam-roles.html
F. Use AWS Systems Manager Maintenance Windows to schedule a patch window: Maintenance Windows in SSM provides a defined period to perform potentially disruptive actions, such as patching, on your instances. It allows you to schedule these operations to occur during non-business hours, fulfilling the company’s policy requirement. You can target specific instances or instance groups for patching during the maintenance window. This is a critical element to automate patching within the defined timeframe. https://docs.aws.amazon.com/systems-manager/latest/userguide/maintenance-windows.html
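As a rough illustration of how A, B, and F fit together, the boto3 sketch below creates a hybrid activation for the on-premises servers and a maintenance window that runs AWS-RunPatchBaseline outside business hours. The IAM role name, tag values, and cron schedule are assumptions for the example, not values from the question.

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# (A) Hybrid activation: the returned ID and code are supplied to the SSM Agent
# when registering each on-premises server as a managed node.
activation = ssm.create_activation(
    Description="On-premises patching fleet",
    DefaultInstanceName="onprem-server",
    IamRole="SSMServiceRoleForOnPrem",  # assumed service role name
    RegistrationLimit=50,
)
print(activation["ActivationId"], activation["ActivationCode"])

# (F) Maintenance window during assumed non-business hours: 02:00 UTC daily,
# 3 hours long, stop scheduling new tasks 1 hour before the window closes.
window = ssm.create_maintenance_window(
    Name="nightly-patching",
    Schedule="cron(0 2 ? * * *)",
    Duration=3,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)
window_id = window["WindowId"]

# Target every managed node (EC2 and hybrid) carrying the assumed patch tag.
target = ssm.register_target_with_maintenance_window(
    WindowId=window_id,
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch Group", "Values": ["lamp-prod"]}],
)

# Run the AWS-RunPatchBaseline document inside the window.
ssm.register_task_with_maintenance_window(
    WindowId=window_id,
    TaskType="RUN_COMMAND",
    TaskArn="AWS-RunPatchBaseline",
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    MaxConcurrency="10%",
    MaxErrors="5%",
    TaskInvocationParameters={
        "RunCommand": {"Parameters": {"Operation": ["Install"]}}
    },
)
```

The EC2 instances covered by option B need only the instance profile that grants SSM permissions; the same maintenance window then patches both environments on the same schedule.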
Why other options are incorrect:
E. Use Amazon EventBridge scheduled events to schedule a patch window: While EventBridge could trigger an SSM Automation document, Maintenance Windows provide a more direct and purpose-built feature for scheduling patching operations, including built-in retry mechanisms and concurrency control. Maintenance Windows are more appropriate for this use case.
C. Create IAM access keys for the on-premises machines to interact with AWS Systems Manager: While IAM access keys can be used for programmatic access to AWS services, they are generally discouraged on instances (EC2 or on-premises) because of security concerns such as key rotation and accidental exposure. Roles are the preferred way to grant permissions to instances, and Hybrid Activations is the mechanism that brings on-premises machines under Systems Manager management.
D. Run an AWS Systems Manager Automation document to patch the systems every hour: Running a patch automation every hour would likely conflict with the requirement to only patch during non-business hours. Maintenance Windows are designed for scheduled operations.
Question.48 A company has chosen AWS to host a new application. The company needs to implement a multi-account strategy. A DevOps engineer creates a new AWS account and an organization in AWS Organizations. The DevOps engineer also creates the OU structure for the organization and sets up a landing zone by using AWS Control Tower. The DevOps engineer must implement a solution that automatically deploys resources for new accounts that users create through AWS Control Tower Account Factory. When a user creates a new account, the solution must apply AWS CloudFormation templates and SCPs that are customized for the OU or the account to automatically deploy all the resources that are attached to the account. All the OUs are enrolled in AWS Control Tower. Which solution will meet these requirements in the MOST automated way?
(A) Use AWS Service Catalog with AWS Control Tower. Create portfolios and products in AWS Service Catalog. Grant granular permissions to provision these resources. Deploy SCPs by using the AWS CLI and JSON documents.
(B) Deploy CloudFormation stack sets by using the required templates. Enable automatic deployment. Deploy stack instances to the required accounts. Deploy a CloudFormation stack set to the organization’s management account to deploy SCPs.
(C) Create an Amazon EventBridge rule to detect the CreateManagedAccount event. Configure AWS Service Catalog as the target to deploy resources to any new accounts. Deploy SCPs by using the AWS CLI and JSON documents.
(D) Deploy the Customizations for AWS Control Tower (CfCT) solution. Use an AWS CodeCommit repository as the source. In the repository, create a custom package that includes the CloudFormation templates and the SCP JSON documents.
Answer: D
Explanation:
The correct answer is D because the Customizations for AWS Control Tower (CfCT) solution is specifically designed to automate the deployment of resources and SCPs to new accounts created through AWS Control Tower Account Factory. It uses a CodeCommit repository as a central source for customizations. This allows you to manage CloudFormation templates and SCPs as code, version them, and apply them consistently across accounts.
Option A is less ideal because while AWS Service Catalog can provision resources, it doesn’t directly integrate with AWS Control Tower’s account creation process for automatic deployment in the same way as CfCT. Moreover, deploying SCPs via the CLI is less automated.
Option B’s CloudFormation StackSets, though powerful, require more manual configuration to target newly created accounts dynamically. While StackSets can be configured for automatic deployment, integrating them with the Control Tower Account Factory requires extra scripting or event handling, which increases complexity. Deploying SCPs via another StackSet to the management account might not achieve the desired granular control and automation at the OU/account level.
Option C, while utilizing EventBridge, also relies on AWS Service Catalog. Though event-driven, the Service Catalog approach lacks the native integration and streamlined deployment mechanisms of CfCT. Also, SCP deployment is performed manually.
CfCT’s main advantage lies in its ability to automate the entire process from account creation to resource and SCP deployment using a centralized configuration repository, providing the most automated solution as requested by the problem statement. It’s purpose-built for extending Control Tower’s governance capabilities.
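For illustration, a CfCT manifest.yaml along these lines associates a CloudFormation template and an SCP with an OU. The file names, OU name, and region are hypothetical, and the exact schema should be checked against the current CfCT documentation; this is only a sketch of the idea.

```yaml
region: us-east-1
version: 2021-03-15
resources:
  # CloudFormation template deployed as a stack set to every account in the OU
  - name: baseline-resources
    resource_file: templates/baseline.template
    deploy_method: stack_set
    deployment_targets:
      organizational_units:
        - Workloads
  # SCP JSON document attached to the same OU
  - name: deny-unapproved-regions
    resource_file: policies/deny-unapproved-regions.json
    deploy_method: scp
    deployment_targets:
      organizational_units:
        - Workloads
```

When Account Factory creates a new account in the targeted OU, CfCT reacts to the Control Tower lifecycle event and applies these customizations to the new account without manual steps, which is the behavior the question asks for.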
Supporting References:
AWS CloudFormation: https://aws.amazon.com/cloudformation/
AWS Control Tower Customizations for AWS Control Tower (CfCT): https://aws.amazon.com/solutions/implementations/customizations-for-aws-control-tower/
AWS Organizations SCPs: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
Question.49 An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance. When the product is deployed in multiple regions, the company wants a single product catalog across all regions, but for compliance purposes, its customer information and purchases must be kept in each region. How should the company meet these requirements with the LEAST amount of application changes?
(A) Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases.
(B) Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases.
(C) Use Aurora with read replicas for the product catalog and additional local Aurora instances in each region for the customer information and purchases.
(D) Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases.
Answer: C
Explanation:
Here’s a detailed justification for why option C is the best solution, along with supporting concepts and links:
Option C, using Aurora with read replicas for the product catalog and additional local Aurora instances in each region for customer information and purchases, offers the most efficient and compliant solution with the least application changes.
The core requirement is a single, globally available product catalog while maintaining regional separation of customer data for compliance. Aurora is a highly scalable and reliable relational database suitable for a product catalog. Leveraging Aurora Read Replicas allows read-only copies of the catalog to be distributed to the new regions (Europe and Asia). This provides low-latency access to product information for customers in each region without requiring complex data synchronization mechanisms. Because read replicas are asynchronous, performance impact on the primary Aurora instance in the US is minimized.
For customer information and purchases, which need to be region-specific due to compliance, creating separate Aurora instances within each region ensures data isolation. This satisfies the regulatory requirement of keeping customer data within the region where it’s collected. Using Aurora consistently across both the product catalog and customer data simplifies database administration and leverages the company’s existing expertise with Aurora.
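A rough boto3 sketch of this layout follows. The cluster identifiers, Regions, instance class, and source ARN are assumptions for the example: the product catalog in us-east-1 gets a cross-Region Aurora MySQL read replica in eu-west-1, and a separate standalone Aurora cluster in eu-west-1 holds that Region's customer and purchase data.

```python
import boto3

rds_eu = boto3.client("rds", region_name="eu-west-1")

# Read-only copy of the US product catalog, replicated cross-Region.
rds_eu.create_db_cluster(
    DBClusterIdentifier="product-catalog-eu-replica",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:cluster:product-catalog"  # assumed source ARN
    ),
)
rds_eu.create_db_instance(
    DBInstanceIdentifier="product-catalog-eu-replica-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="product-catalog-eu-replica",
)

# Separate regional cluster so EU customer and purchase data never leaves the Region.
rds_eu.create_db_cluster(
    DBClusterIdentifier="customers-eu",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",  # placeholder; store real credentials in Secrets Manager
)
rds_eu.create_db_instance(
    DBInstanceIdentifier="customers-eu-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="customers-eu",
)
```

Aurora Global Database is another way to get low-latency cross-Region reads for the catalog; either approach keeps the application's existing SQL data layer unchanged, which is the point of option C.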
Compared to other options:
- Option A (Redshift & DynamoDB tables): Introducing Redshift adds unnecessary complexity. Redshift is primarily designed for analytics, not transactional data like a product catalog. The catalog doesn’t require the analytical capabilities of Redshift.
- Option B (DynamoDB Global tables for product catalog): Migrating the entire product catalog to DynamoDB would involve significant application changes. The existing system is designed for Aurora, and a complete migration to NoSQL for the product catalog requires considerable effort.
- Option D (Aurora and DynamoDB global tables for customer data): While DynamoDB global tables are suitable for multi-region replication, using them for customer data still necessitates application modifications to integrate with a new database technology for sensitive information, which is undesirable.
Therefore, option C minimizes application changes by leveraging existing Aurora infrastructure and introducing only the relatively straightforward addition of read replicas. The region-specific Aurora instances for customer data cleanly segregate sensitive information in compliance with regulations.
Supporting concepts:
- Read Replicas: Provide read-only copies of a database, improving read performance and availability without impacting the primary database.
- Database Replication: Copying data across multiple servers or locations for redundancy, availability, and disaster recovery.
- Data Sovereignty/Regional Compliance: Regulations requiring data to be stored and processed within specific geographic regions.
Authoritative links:
AWS Global Infrastructure: https://aws.amazon.com/about-aws/global-infrastructure/
Amazon Aurora Read Replicas: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.ReadReplicas.html
Question.50 A company is implementing a well-architected design for its globally accessible API stack. The design needs to ensure both high reliability and fast response times for users located in North America and Europe. The API stack contains the following three tiers:
- Amazon API Gateway
- AWS Lambda
- Amazon DynamoDB
Which solution will meet the requirements?
(A) Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB table in the same Region as the Lambda function.
(B) Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using latency-based routing and health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB global table.
(C) Configure Amazon Route 53 to point to API Gateway in North America, create a disaster recovery API in Europe, and configure both APIs to forward requests to the Lambda functions in that Region. Retrieve the data from a DynamoDB global table. Deploy a Lambda function to check the North America API health every 5 minutes. In the event of a failure, update Route 53 to point to the disaster recovery API.
(D) Configure Amazon Route 53 to point to API Gateway API in North America using latency-based routing. Configure the API to forward requests to the Lambda function in the Region nearest to the user. Configure the Lambda function to retrieve and update the data in a DynamoDB table.
Answer: B
Explanation:
The correct answer is B. Here’s a detailed justification:
High Availability & Low Latency: The requirement is to ensure high reliability and fast response times for users in North America and Europe. The key here is providing services geographically closer to the users, minimizing latency.
Route 53 Latency-Based Routing: Option B utilizes Route 53 latency-based routing, which directs users to the API Gateway endpoint that provides the lowest latency. This ensures users are routed to the closest API endpoint. Health checks ensure that only healthy endpoints are used. [https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency]
Regional API Gateways & Lambda Functions: The API Gateway and Lambda functions are deployed in both North America and Europe. This means requests are processed closer to the user’s location, further minimizing latency. Keeping Lambda functions and DynamoDB tables in the same Region optimizes network communication.
DynamoDB Global Tables: DynamoDB global tables provide automatic, multi-Region replication. This means that data written in one Region is automatically replicated to other Regions. Using global tables ensures that data is available in both North America and Europe, regardless of where it was initially written. [https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/globaltables.V2.html] This approach also allows for read and write operations to be handled in the user’s local region whenever possible.
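The boto3 sketch below shows the two key pieces: latency-based Route 53 records with health checks in front of the Regional API endpoints, and adding a replica Region to an existing DynamoDB table. The hosted zone ID, domain name, health check IDs, execute-api hostnames, and table name are all hypothetical; in practice the records would usually point at Regional custom domain names for the APIs.

```python
import boto3

route53 = boto3.client("route53")

# Latency-based records: Route 53 answers with the Region that gives the caller
# the lowest latency, and health checks keep unhealthy endpoints out of rotation.
route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "us-east-1",
                    "Region": "us-east-1",
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "abc123.execute-api.us-east-1.amazonaws.com"}
                    ],
                    "HealthCheckId": "hc-us-hypothetical",
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "eu-west-1",
                    "Region": "eu-west-1",
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "def456.execute-api.eu-west-1.amazonaws.com"}
                    ],
                    "HealthCheckId": "hc-eu-hypothetical",
                },
            },
        ]
    },
)

# Add a replica Region to the existing us-east-1 table (global tables version
# 2019.11.21), so each Region's Lambda functions read and write locally.
# Assumes the table already has DynamoDB Streams (NEW_AND_OLD_IMAGES) enabled,
# which global table replicas require.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")
dynamodb.update_table(
    TableName="orders",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```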
Why other options are not optimal:
Option D: This uses latency-based routing in Route 53 but deploys only a single API Gateway, which does not provide the required redundancy. In addition, accessing a single-Region DynamoDB table from all Regions would introduce latency.
Option A: While it uses Route 53 with health checks, it does not use latency-based routing. This might not direct users to the closest endpoint, potentially leading to higher latency. Configuring DynamoDB tables in only one Region would introduce cross-region read/write operations, increasing latency.
Option C: This introduces a disaster recovery (DR) setup. While DR is important, this design focuses on immediate low-latency performance. The failover approach in this option is slower to respond than latency-based routing. Furthermore, relying on a single API and a DR API increases complexity.