Question.56 A DevOps engineer has automated a web service deployment by using AWS CodePipeline with the following steps:
1. An AWS CodeBuild project compiles the deployment artifact and runs unit tests.
2. An AWS CodeDeploy deployment group deploys the web service to Amazon EC2 instances in the staging environment.
3. A CodeDeploy deployment group deploys the web service to EC2 instances in the production environment.
The quality assurance (QA) team requests permission to inspect the build artifact before the deployment to the production environment occurs. The QA team wants to run an internal penetration testing tool to conduct manual tests. The tool will be invoked by a REST API call.
Which combination of actions should the DevOps engineer take to fulfill this request? (Choose two.)
(A) Insert a manual approval action between the test actions and deployment actions of the pipeline.
(B) Modify the buildspec.yml file for the compilation stage to require manual approval before completion.
(C) Update the CodeDeploy deployment groups so that they require manual approval to proceed.
(D) Update the pipeline to directly call the REST API for the penetration testing tool.
(E) Update the pipeline to invoke an AWS Lambda function that calls the REST API for the penetration testing tool.
Answer: AE
Explanation:
The correct answer is AE. Here’s a detailed justification:
A. Insert a manual approval action between the test actions and deployment actions of the pipeline. This is the most appropriate way to introduce a gate for QA approval. By adding a manual approval action after the unit tests (done in the CodeBuild stage) and before the production deployment (done in the CodeDeploy stage), the pipeline will pause and wait for an authorized user (QA team member) to explicitly approve the artifact. This allows the QA team to inspect the artifact and decide whether to proceed with the deployment. This approach aligns with the goal of manual inspection and human-in-the-loop verification before promoting to production.
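As a rough sketch of what this looks like in practice (the pipeline name, stage names, and SNS topic ARN below are placeholders), the approval gate can be added by retrieving the pipeline definition, inserting a stage that contains a Manual approval action, and writing the definition back:

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Retrieve the existing pipeline definition (pipeline name is hypothetical).
pipeline = codepipeline.get_pipeline(name="web-service-pipeline")["pipeline"]

approval_stage = {
    "name": "QA-Approval",
    "actions": [
        {
            "name": "ApproveForProduction",
            "actionTypeId": {
                "category": "Approval",
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "runOrder": 1,
            "configuration": {
                # Optional SNS topic so the QA team is notified when the
                # pipeline pauses for review (ARN is a placeholder).
                "NotificationArn": "arn:aws:sns:us-east-1:111111111111:qa-approvals"
            },
        }
    ],
}

# Insert the approval stage just before the production deployment stage
# (assumes that stage is named "DeployProduction").
prod_index = next(
    i for i, stage in enumerate(pipeline["stages"]) if stage["name"] == "DeployProduction"
)
pipeline["stages"].insert(prod_index, approval_stage)

codepipeline.update_pipeline(pipeline=pipeline)
```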
E. Update the pipeline to invoke an AWS Lambda function that calls the REST API for the penetration testing tool. Directly calling the REST API from within the pipeline is generally not a best practice and might expose credentials. Instead, using a Lambda function provides a secure and manageable way to interact with the external tool. The Lambda function encapsulates the logic for calling the REST API, handling authentication, and potentially transforming the data. The pipeline invokes the Lambda function, which then interacts with the penetration testing tool, providing a decoupled and secure architecture. The Lambda function can be granted specific IAM permissions to access the necessary resources and the REST API, limiting the blast radius of any potential security breach.
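A minimal sketch of such a function is shown below. The penetration testing tool's endpoint is hypothetical; the important part is that a Lambda function invoked by a CodePipeline action must report back with put_job_success_result or put_job_failure_result so the pipeline can continue or fail cleanly.

```python
import json
import urllib.request

import boto3

codepipeline = boto3.client("codepipeline")

# Hypothetical endpoint for the internal penetration testing tool.
PENTEST_API_URL = "https://pentest.example.internal/api/v1/scans"


def handler(event, context):
    """Invoked by a CodePipeline Lambda action; triggers the pentest tool."""
    job_id = event["CodePipeline.job"]["id"]
    try:
        request = urllib.request.Request(
            PENTEST_API_URL,
            data=json.dumps({"target": "staging"}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request, timeout=30) as response:
            body = json.loads(response.read())

        # Tell CodePipeline the action succeeded so the pipeline can continue.
        codepipeline.put_job_success_result(jobId=job_id)
        return body
    except Exception as exc:
        # Mark the pipeline action as failed if the API call does not succeed.
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )
        raise
```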
Here’s why the other options are incorrect:
- B. Modify the buildspec.yml file for the compilation stage to require manual approval before completion. The buildspec.yml file is for defining build commands and has no built-in mechanism for manual approval. Trying to implement a manual approval within the build stage would be cumbersome and non-standard.
- C. Update the CodeDeploy deployment groups so that they require manual approval to proceed. CodeDeploy deployment groups do not offer a manual approval setting; approvals are implemented as a CodePipeline action. More importantly, the QA team needs to inspect the build artifact before it is deployed to the production instances, so the gate belongs in the pipeline, ahead of the production deployment stage.
- D. Update the pipeline to directly call the REST API for the penetration testing tool. While technically feasible, this is less secure and less maintainable than using a Lambda function. Storing API keys and sensitive information directly in the pipeline definition is a security risk.
Supporting Links:
AWS CodeDeploy: https://aws.amazon.com/codedeploy/
AWS CodePipeline Manual Approval Actions: https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals.html
AWS Lambda: https://aws.amazon.com/lambda/
AWS CodePipeline: https://aws.amazon.com/codepipeline/
AWS CodeBuild: https://aws.amazon.com/codebuild/
Question.57 A company is hosting a web application in an AWS Region. For disaster recovery purposes, a second region is being used as a standby. Disaster recovery requirements state that session data must be replicated between regions in near-real time and 1% of requests should route to the secondary region to continuously verify system functionality. Additionally, if there is a disruption in service in the main region, traffic should be automatically routed to the secondary region, and the secondary region must be able to scale up to handle all traffic.
How should a DevOps engineer meet these requirements?
(A) In both regions, deploy the application on AWS Elastic Beanstalk and use Amazon DynamoDB global tables for session data. Use an Amazon Route 53 weighted routing policy with health checks to distribute the traffic across the regions.
(B) In both regions, launch the application in Auto Scaling groups and use DynamoDB for session data. Use a Route 53 failover routing policy with health checks to distribute the traffic across the regions.
(C) In both regions, deploy the application in AWS Lambda, exposed by Amazon API Gateway, and use Amazon RDS for PostgreSQL with cross-region replication for session data. Deploy the web application with client-side logic to call the API Gateway directly.
(D) In both regions, launch the application in Auto Scaling groups and use DynamoDB global tables for session data. Enable an Amazon CloudFront weighted distribution across regions. Point the Amazon Route 53 DNS record at the CloudFront distribution.
Answer: A
Explanation:
Here’s a detailed justification for why option A is the best solution, incorporating relevant concepts and links:
Justification:
Option A provides the most complete solution for meeting all requirements. It leverages several key AWS services designed for high availability, disaster recovery, and global distribution.
- Elastic Beanstalk and Auto Scaling: Elastic Beanstalk simplifies the deployment and management of web applications. Auto Scaling within Elastic Beanstalk configurations (or explicitly with Auto Scaling groups as in other options) ensures that the application can automatically scale resources based on demand, meeting the requirement for the secondary region to handle all traffic during failover.
- DynamoDB Global Tables: DynamoDB Global Tables provide near real-time replication of data across AWS Regions. This addresses the requirement for session data replication between regions for a consistent user experience during failover. (https://aws.amazon.com/dynamodb/global-tables/)
- Route 53 Weighted Routing Policy with Health Checks: Route 53 weighted routing is ideal for distributing traffic across multiple regions. Assigning a weight of 99 to the primary region and 1 to the secondary region satisfies the 1% traffic routing requirement to continuously verify system functionality in the secondary region. Health checks monitor the health of the application in each region. If the primary region becomes unhealthy, Route 53 will automatically shift traffic to the healthy secondary region, achieving automatic failover. (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted)
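The 99/1 split can be expressed with two weighted records that share the same name and carry health checks, as in the sketch below (hosted zone ID, domain, ALB DNS names, and health check IDs are placeholders; weighted alias records pointing at each ALB would work equally well).

```python
import boto3

route53 = boto3.client("route53")

# Route 99% of traffic to the primary Region and 1% to the standby Region,
# with health checks so Route 53 stops returning an unhealthy endpoint.
records = [
    ("primary", 99, "primary-alb.us-east-1.elb.amazonaws.com",
     "11111111-aaaa-bbbb-cccc-111111111111"),
    ("secondary", 1, "standby-alb.us-west-2.elb.amazonaws.com",
     "22222222-aaaa-bbbb-cccc-222222222222"),
]

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
            "HealthCheckId": health_check_id,
        },
    }
    for set_id, weight, target, health_check_id in records
]

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={"Comment": "99/1 weighted routing across Regions", "Changes": changes},
)
```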
Why other options are less ideal:
- Option B: While Route 53 failover routing provides automatic failover, it doesn’t support the weighted routing requirement for directing a small percentage of traffic to the secondary region for continuous verification. In addition, option B uses standard DynamoDB tables rather than global tables, so session data would not be replicated between Regions in near-real time.
- Option C: Using Lambda and API Gateway for the entire web application, while possible, introduces unnecessary complexity for a typical web app scenario. RDS for PostgreSQL with cross-region replication is more complex to manage than DynamoDB Global Tables, especially for session data, and it won’t scale as easily. Client-side logic calling API Gateway directly introduces security risks.
- Option D: While DynamoDB Global Tables are suitable, CloudFront is primarily a content delivery network (CDN) best suited for caching content closer to users. CloudFront does not offer a “weighted distribution across regions”: its origin groups support failover only, so it cannot send 1% of requests to the secondary Region for continuous verification. Using it this way would also cost more than simply using Route 53 with health checks, and Route 53 provides more granular control over routing policies and health checks for application availability.
In summary, option A strikes the right balance between simplicity, cost-effectiveness, and functionality by effectively utilizing Elastic Beanstalk, DynamoDB Global Tables, and Route 53 weighted routing with health checks to meet all the stated requirements.
Question.58 A company runs an application on Amazon EC2 instances. The company uses a series of AWS CloudFormation stacks to define the application resources. A developer performs updates by building and testing the application on a laptop and then uploading the build output and CloudFormation stack templates to Amazon S3. The developer's peers review the changes before the developer performs the CloudFormation stack update and installs a new version of the application onto the EC2 instances. The deployment process is prone to errors and is time-consuming when the developer updates each EC2 instance with the new application. The company wants to automate as much of the application deployment process as possible while retaining a final manual approval step before the modification of the application or resources. The company already has moved the source code for the application and the CloudFormation templates to AWS CodeCommit. The company also has created an AWS CodeBuild project to build and test the application.
Which combination of steps will meet the company's requirements? (Choose two.)
(A) Create an application group and a deployment group in AWS CodeDeploy. Install the CodeDeploy agent on the EC2 instances.
(B) Create an application revision and a deployment group in AWS CodeDeploy. Create an environment in CodeDeploy. Register the EC2 instances to the CodeDeploy environment.
(C) Use AWS CodePipeline to invoke the CodeBuild job, run the CloudFormation update, and pause for a manual approval step. After approval, start the AWS CodeDeploy deployment.
(D) Use AWS CodePipeline to invoke the CodeBuild job, create CloudFormation change sets for each of the application stacks, and pause for a manual approval step. After approval, run the CloudFormation change sets and start the AWS CodeDeploy deployment.
(E) Use AWS CodePipeline to invoke the CodeBuild job, create CloudFormation change sets for each of the application stacks, and pause for a manual approval step. After approval, start the AWS CodeDeploy deployment.
Answer: AD
Explanation:
The correct answer is AD. Let’s break down why:
A: Create an application group and a deployment group in AWS CodeDeploy. Install the CodeDeploy agent on the EC2 instances.
This step is crucial for automating application deployments to the EC2 instances. AWS CodeDeploy automates code deployments to various compute services like EC2. To use CodeDeploy, you need to:
- Install the CodeDeploy agent: This agent runs on your EC2 instances and is responsible for pulling the application revision (the new code) from S3 and deploying it according to your deployment configuration. https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent.html
- Create an application: This is a container for organizing deployments.
- Create a deployment group: This defines the set of EC2 instances that will be part of the deployment. You can use tags, Auto Scaling groups, or EC2 instance names to identify these instances. https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-deployment-group.html
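A minimal sketch of the application and deployment group setup described above, using placeholder names, tags, and a placeholder service role ARN:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Register a CodeDeploy application for EC2/on-premises deployments.
codedeploy.create_application(
    applicationName="web-app",
    computePlatform="Server",
)

# Create a deployment group that targets EC2 instances by tag.
codedeploy.create_deployment_group(
    applicationName="web-app",
    deploymentGroupName="web-app-production",
    serviceRoleArn="arn:aws:iam::111111111111:role/CodeDeployServiceRole",
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    ec2TagFilters=[
        {"Key": "Environment", "Value": "production", "Type": "KEY_AND_VALUE"}
    ],
)
```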
D: Use AWS CodePipeline to invoke the CodeBuild job, create CloudFormation change sets for each of the application stacks, and pause for a manual approval step. After approval, run the CloudFormation change sets and start the AWS CodeDeploy deployment.
This step orchestrates the entire deployment process and incorporates the required manual approval. AWS CodePipeline is a continuous delivery service that enables you to automate your release pipelines. This makes it perfect for this scenario. https://aws.amazon.com/codepipeline/
- Invoke CodeBuild: The pipeline starts by triggering the CodeBuild project, which builds and tests the application.
- Create CloudFormation Change Sets: Instead of directly updating the CloudFormation stacks, the pipeline creates change sets. Change sets allow you to preview the changes that CloudFormation will make to your resources before applying them. This is a crucial safety measure, especially in production environments. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html
- Manual Approval: The pipeline then pauses for a manual approval. Someone can review the change sets and decide whether to proceed. This fulfills the requirement of a final manual approval step.
- Execute Change Sets: After approval, the pipeline executes the change sets, updating the CloudFormation stacks.
- Start CodeDeploy: Finally, the pipeline triggers a CodeDeploy deployment to update the application on the EC2 instances.
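Inside CodePipeline this create-review-execute flow is normally configured with the CloudFormation action's CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE action modes, separated by the approval action. The sketch below shows the same flow with direct API calls (stack name, change set name, and template URL are placeholders):

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Create a change set for review; nothing is modified yet.
cloudformation.create_change_set(
    StackName="web-app-stack",
    ChangeSetName="web-app-release-42",
    TemplateURL="https://s3.amazonaws.com/my-artifact-bucket/templates/web-app.yaml",
    Capabilities=["CAPABILITY_IAM"],
    ChangeSetType="UPDATE",
)

# Wait until the change set is ready, then inspect the proposed changes.
waiter = cloudformation.get_waiter("change_set_create_complete")
waiter.wait(StackName="web-app-stack", ChangeSetName="web-app-release-42")

changes = cloudformation.describe_change_set(
    StackName="web-app-stack", ChangeSetName="web-app-release-42"
)
for change in changes["Changes"]:
    print(change["ResourceChange"]["Action"], change["ResourceChange"]["LogicalResourceId"])

# Execute only after the manual approval step has passed.
cloudformation.execute_change_set(
    StackName="web-app-stack", ChangeSetName="web-app-release-42"
)
```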
Why other options are incorrect:
C & E: These options perform the CloudFormation update before the CodeDeploy deployment. This is not necessarily wrong, but less optimal. It’s usually best to update the infrastructure first (CloudFormation change set), then update the application running on that infrastructure (CodeDeploy). Option E includes starting the CodeDeploy before executing the CloudFormation changes which would cause inconsistencies.
B: CodeDeploy has no “environment” resource to create or to register EC2 instances with; instances are targeted through the deployment group (by tags or Auto Scaling group). An application revision is simply the bundle that gets deployed, so this option does not replace the application and deployment group setup in option A.
Question.59 A DevOps engineer manages a web application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an EC2 Auto Scaling group across multiple Availability Zones. The engineer needs to implement a deployment strategy that:
Launches a second fleet of instances with the same capacity as the original fleet.
Maintains the original fleet unchanged while the second fleet is launched.
Transitions traffic to the second fleet when the second fleet is fully deployed.
Terminates the original fleet automatically 1 hour after transition.
Which solution will satisfy these requirements?
(A) Use an AWS CloudFormation template with a retention policy for the ALB set to 1 hour. Update the Amazon Route 53 record to reflect the new ALB.
(B) Use two AWS Elastic Beanstalk environments to perform a blue/green deployment from the original environment to the new one. Create an application version lifecycle policy to terminate the original environment in 1 hour.
(C) Use AWS CodeDeploy with a deployment group configured with a blue/green deployment configuration. Select the option Terminate the original instances in the deployment group with a waiting period of 1 hour.
(D) Use AWS Elastic Beanstalk with the configuration set to Immutable. Create an .ebextension using the Resources key that sets the deletion policy of the ALB to 1 hour, and deploy the application.
Answer: C
Explanation:
The correct answer is C because it directly addresses all requirements of the deployment strategy. AWS CodeDeploy’s blue/green deployment capability is designed for exactly this type of scenario: launching a new fleet, transitioning traffic, and terminating the old fleet.
Here’s a breakdown:
- Launches a second fleet of instances: CodeDeploy’s blue/green deployment creates a new environment (the “green” environment) which includes launching a new fleet of EC2 instances with the desired capacity.
- Maintains the original fleet unchanged: The original environment (the “blue” environment) remains untouched while the green environment is being provisioned and tested.
- Transitions traffic to the second fleet when fully deployed: CodeDeploy manages the traffic shifting. It waits until the new fleet is healthy and then gradually shifts traffic to the new environment using the configured deployment settings.
- Terminates the original fleet automatically 1 hour after transition: CodeDeploy offers a configuration setting called “Terminate the original instances in the deployment group with a waiting period” which perfectly fulfills this requirement. Setting the waiting period to 1 hour will ensure the old instances are terminated 1 hour after the traffic is shifted.
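The console option maps to the blueGreenDeploymentConfiguration settings of the deployment group. A minimal sketch, with placeholder application, deployment group, role, and target group names:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Configure a blue/green deployment group that reroutes traffic through the
# ALB target group and terminates the original (blue) instances 60 minutes
# after traffic has shifted to the new (green) fleet.
codedeploy.update_deployment_group(
    applicationName="web-app",
    currentDeploymentGroupName="web-app-bluegreen",
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    blueGreenDeploymentConfiguration={
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 60,  # keep the old fleet for 1 hour
        },
        "deploymentReadyOption": {
            "actionOnTimeout": "CONTINUE_DEPLOYMENT",
        },
        "greenFleetProvisioningOption": {
            "action": "COPY_AUTO_SCALING_GROUP",
        },
    },
    loadBalancerInfo={
        "targetGroupInfoList": [{"name": "web-app-target-group"}]
    },
)
```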
Option A is incorrect because CloudFormation has no time-based retention setting: a DeletionPolicy can retain or delete a resource, but it cannot keep the ALB (or the old fleet) for exactly 1 hour, and the template provides no mechanism for traffic shifting or automated termination. Route 53 record updates would also need to be coordinated manually with health checks of the new environment, something CodeDeploy automates.
Option B using Elastic Beanstalk comes close but lacks the granular control over termination timing compared to CodeDeploy’s direct setting. While Beanstalk supports blue/green deployments, using application version lifecycle policies isn’t the most direct or reliable method to achieve a precise 1-hour delay before terminating the old environment.
Option D using .ebextensions for deletion policy manipulation on the ALB is risky. It couples infrastructure lifecycle management with application deployment logic in a potentially unstable manner. It is also harder to control the exact timing of the deletion. The ALB deletion might take longer or fail, leaving the old environment running.
Authoritative links:
CodeDeploy Deployment Group Settings: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-deployment-groups.html
AWS CodeDeploy Blue/Green Deployments: https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-create-blue-green.html
Question.60 A video-sharing company stores its videos in Amazon S3. The company has observed a sudden increase in video access requests, but the company does not know which videos are most popular. The company needs to identify the general access pattern for the video files. This pattern includes the number of users who access a certain file on a given day, as well as the number of pull requests for certain files.
How can the company meet these requirements with the LEAST amount of effort?
(A) Activate S3 server access logging. Import the access logs into an Amazon Aurora database. Use an Aurora SQL query to analyze the access patterns.
(B) Activate S3 server access logging. Use Amazon Athena to create an external table with the log files. Use Athena to create a SQL query to analyze the access patterns.
(C) Invoke an AWS Lambda function for every S3 object access event. Configure the Lambda function to write the file access information, such as user, S3 bucket, and file key, to an Amazon Aurora database. Use an Aurora SQL query to analyze the access patterns.
(D) Record an Amazon CloudWatch Logs log message for every S3 object access event. Configure a CloudWatch Logs log stream to write the file access information, such as user, S3 bucket, and file key, to an Amazon Kinesis Data Analytics for SQL application. Perform a sliding window analysis.
Answer: B
Explanation:
The correct answer is B because it provides the most efficient and cost-effective solution for analyzing S3 access patterns with minimal effort. S3 server access logging is a built-in feature that automatically captures detailed information about every request made to the S3 buckets. [https://docs.aws.amazon.com/AmazonS3/latest/userguide/logging-with-S3.html]
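Enabling server access logging is a single bucket-level setting, as in the sketch below (bucket names are placeholders, and the target bucket must already allow the S3 log delivery service to write to it):

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for the video bucket to a separate logging bucket
# under a dedicated prefix.
s3.put_bucket_logging(
    Bucket="video-content-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "video-access-logs-bucket",
            "TargetPrefix": "video-content-bucket-logs/",
        }
    },
)
```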
Athena, a serverless interactive query service, allows direct querying of data stored in S3 using standard SQL. Creating an external table in Athena pointing to the S3 access logs eliminates the need for data ingestion and transformation. Athena then lets you quickly run SQL queries to analyze the access patterns, determining the number of users accessing specific files and the frequency of requests. This approach avoids the overhead and complexity of managing infrastructure like databases or Lambda functions.
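A sketch of such a query is shown below. It assumes an external table has already been created over the log prefix using the DDL from the S3 access-logging documentation; the database, table, and bucket names are placeholders, and the column names (key, requester, requestdatetime, operation) follow that DDL.

```python
import boto3

athena = boto3.client("athena")

# Per day and per object key, count total GET requests and distinct requesters.
QUERY = """
SELECT "key",
       date(parse_datetime(requestdatetime, 'dd/MMM/yyyy:HH:mm:ss Z')) AS access_day,
       count(*) AS request_count,
       count(DISTINCT requester) AS distinct_requesters
FROM s3_access_logs_db.video_access_logs
WHERE operation = 'REST.GET.OBJECT'
GROUP BY 1, 2
ORDER BY request_count DESC
LIMIT 50
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "s3_access_logs_db"},
    ResultConfiguration={"OutputLocation": "s3://video-access-logs-bucket/athena-results/"},
)
```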
Option A, while also using server access logs, involves importing the logs into Aurora, a managed relational database. This requires setting up and managing an Aurora cluster, data loading processes, and ongoing maintenance, which is more complex than using Athena. Aurora is better suited for structured, transactional data, rather than log analysis.
Option C involves invoking a Lambda function for every S3 object access. This serverless compute approach adds significant overhead and cost, as each request triggers a function execution. It also increases complexity through function deployment, error handling, and scalability management, making it less efficient for large-scale access pattern analysis than server access logging. Furthermore, writing directly to Aurora from Lambda creates a tight coupling and scalability challenges.
Option D uses CloudWatch Logs and Kinesis Data Analytics. While S3 object-level events can be delivered to CloudWatch Logs (for example, via AWS CloudTrail data events) and Kinesis Data Analytics can perform real-time analysis, this solution adds unnecessary complexity and cost. It is more efficient to use the built-in S3 server access logging feature and Athena for direct querying.
In summary, the combination of S3 server access logging and Athena offers a simple, scalable, and cost-effective solution for analyzing S3 access patterns without requiring complex infrastructure or custom coding.