Question.21 A company has containerized all of its in-house quality control applications. The company is running Jenkins on Amazon EC2 instances, which require patching and upgrading. The compliance officer has requested a DevOps engineer begin encrypting build artifacts since they contain company intellectual property. What should the DevOps engineer do to accomplish this in the MOST maintainable manner? (A) Automate patching and upgrading using AWS Systems Manager on EC2 instances and encrypt Amazon EBS volumes by default. (B) Deploy Jenkins to an Amazon ECS cluster and copy build artifacts to an Amazon S3 bucket with default encryption enabled. (C) Leverage AWS CodePipeline with a build action and encrypt the artifacts using AWS Secrets Manager. (D) Use AWS CodeBuild with artifact encryption to replace the Jenkins instance running on EC2 instances.
Answer: D
Explanation:
The most maintainable solution is to migrate the CI/CD pipeline from Jenkins on EC2 to AWS CodeBuild with artifact encryption. Here’s why:
- Managed Service: CodeBuild is a fully managed build service, eliminating the need for manual patching, upgrading, and infrastructure maintenance of EC2 instances running Jenkins (as required in option A). This reduces operational overhead.
- Built-in Encryption: CodeBuild natively supports artifact encryption using AWS KMS, fulfilling the compliance requirement without additional configuration complexity or reliance on external secrets management services as suggested in option C.
- Scalability and Availability: CodeBuild automatically scales to handle build workloads, ensuring high availability and eliminating single points of failure associated with Jenkins on EC2.
- Cost Optimization: CodeBuild's pay-as-you-go pricing model is more cost-effective than running dedicated EC2 instances for Jenkins, especially during periods of low build activity.
- Integration with AWS Ecosystem: CodeBuild integrates seamlessly with other AWS services like S3, CodePipeline, and CloudWatch, creating a streamlined DevOps workflow.
- Security Best Practices: CodeBuild allows for secure storage of build artifacts and restricts access based on IAM roles and policies.
Option B introduces ECS, which adds the complexity of container orchestration and does not directly address artifact encryption or Jenkins maintenance. Option C relies on AWS Secrets Manager, which stores secrets rather than encrypting build artifacts; CodeBuild supports artifact encryption natively. Option A is incorrect because it still leaves the overhead of managing Jenkins on EC2, even with Systems Manager handling the patching.
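As a rough illustration of CodeBuild's built-in encryption (option D), the sketch below creates a CodeBuild project whose output artifacts are encrypted with a customer managed KMS key, using boto3; the project name, repository URL, bucket, role ARN, and key ARN are hypothetical placeholders, not values from the question.

```python
import boto3

codebuild = boto3.client("codebuild")

# Minimal sketch: all names and ARNs below are hypothetical placeholders.
response = codebuild.create_project(
    name="quality-control-build",
    source={
        "type": "GITHUB",
        "location": "https://github.com/example-org/quality-control-app.git",
    },
    artifacts={
        "type": "S3",
        "location": "example-build-artifacts-bucket",
        "name": "quality-control-artifacts.zip",
        "packaging": "ZIP",
    },
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::111122223333:role/example-codebuild-role",
    # Output artifacts are encrypted with this customer managed KMS key.
    encryptionKey="arn:aws:kms:eu-west-1:111122223333:key/example-key-id",
)
print(response["project"]["arn"])
```

If no encryptionKey is specified, CodeBuild encrypts output artifacts with the AWS managed key for Amazon S3, so the compliance requirement is met either way, with no build servers left to patch.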
Supporting Links:
AWS KMS: https://aws.amazon.com/kms/
AWS CodeBuild: https://aws.amazon.com/codebuild/
Question.22 An IT team has built an AWS CloudFormation template so others in the company can quickly and reliably deploy and terminate an application. The template creates an Amazon EC2 instance with a user data script to install the application and an Amazon S3 bucket that the application uses to serve static webpages while it is running. All resources should be removed when the CloudFormation stack is deleted. However, the team observes that CloudFormation reports an error during stack deletion, and the S3 bucket created by the stack is not deleted. How can the team resolve the error in the MOST efficient manner to ensure that all resources are deleted without errors? (A) Add a DeletionPolicy attribute to the S3 bucket resource, with the value Delete forcing the bucket to be removed when the stack is deleted. (B) Add a custom resource with an AWS Lambda function with the DependsOn attribute specifying the S3 bucket, and an IAM role. Write the Lambda function to delete all objects from the bucket when RequestType is Delete. (C) Identify the resource that was not deleted. Manually empty the S3 bucket and then delete it. (D) Replace the EC2 and S3 bucket resources with a single AWS OpsWorks Stacks resource. Define a custom recipe for the stack to create and delete the EC2 instance and the S3 bucket.
Answer: B
Explanation:
The most efficient and programmatic solution to ensure an S3 bucket created by a CloudFormation stack is deleted along with the stack, even when it contains objects, is option B: add a custom resource with an AWS Lambda function, a DependsOn attribute, and an IAM role.
Here’s why:
- S3 Bucket Deletion Issues: S3 buckets must be empty before they can be deleted. CloudFormation often fails to delete S3 buckets directly if they contain objects, even if those objects were created by the same stack.
- Option B’s Solution: This option involves creating a custom resource in CloudFormation. A custom resource leverages a Lambda function to perform custom tasks during stack creation, update, or deletion.
- Lambda Function Logic: The Lambda function is specifically designed to empty the S3 bucket during stack deletion. When CloudFormation initiates the stack deletion, it triggers the Lambda function with a RequestType of Delete. The Lambda code then iterates through the S3 bucket, deleting all objects within it. Once the bucket is empty, CloudFormation can successfully delete the bucket itself. (A minimal handler sketch appears below, after the option comparison.)
- DependsOn Attribute: The DependsOn attribute is critical. It tells CloudFormation to execute the Lambda function before attempting to delete the S3 bucket. This ensures that the bucket is emptied before deletion is attempted.
- IAM Role: The Lambda function requires an IAM role with permissions to list and delete objects within the S3 bucket. This ensures the function has the necessary authorization.
- Efficiency and Automation: This solution is automated and programmatic. It doesn’t require manual intervention to empty the bucket, making it highly efficient for repeated deployments and deletions.
- Why other options are less ideal:
- A: DeletionPolicy: Delete will only delete the bucket if it is empty. It does not address the core issue of objects preventing deletion.
- C: Manually emptying the bucket is not scalable or automated. It is a manual process that introduces potential for error.
- D: AWS OpsWorks is unnecessarily complex for this specific problem. Using CloudFormation with a custom resource provides a more streamlined and targeted solution. OpsWorks is designed for application management, not basic resource creation and deletion.
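A minimal sketch of the Lambda handler behind such a custom resource is shown below, assuming the bucket name is passed in through a hypothetical BucketName resource property; the cfnresponse helper module is available when the function code is defined inline in the template.

```python
import boto3
import cfnresponse  # available when the Lambda code is defined inline in the template

s3 = boto3.resource("s3")

def handler(event, context):
    try:
        # Hypothetical property name passed from the custom resource in the template.
        bucket = s3.Bucket(event["ResourceProperties"]["BucketName"])
        if event["RequestType"] == "Delete":
            # Empty the bucket so CloudFormation can delete it afterwards.
            bucket.objects.all().delete()
            # Also remove object versions and delete markers if versioning is enabled.
            bucket.object_versions.all().delete()
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        # Report failure so the stack operation surfaces the error instead of hanging.
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})
```

The custom resource's IAM role would need s3:ListBucket, s3:DeleteObject, and, for versioned buckets, s3:ListBucketVersions and s3:DeleteObjectVersion on the bucket.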
Authoritative Links:
Amazon S3 Bucket Deletion: https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html
AWS CloudFormation Custom Resources: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
AWS Lambda: https://aws.amazon.com/lambda/
Question.23 A company has an AWS CodePipeline pipeline that is configured with an Amazon S3 bucket in the eu-west-1 Region. The pipeline deploys an AWS Lambda application to the same Region. The pipeline consists of an AWS CodeBuild project build action and an AWS CloudFormation deploy action. The CodeBuild project uses the aws cloudformation package AWS CLI command to build an artifact that contains the Lambda function code’s .zip file and the CloudFormation template. The CloudFormation deploy action references the CloudFormation template from the output artifact of the CodeBuild project’s build action. The company wants to also deploy the Lambda application to the us-east-1 Region by using the pipeline in eu-west-1. A DevOps engineer has already updated the CodeBuild project to use the aws cloudformation package command to produce an additional output artifact for us-east-1. Which combination of additional steps should the DevOps engineer take to meet these requirements? (Choose two.) (A) Modify the CloudFormation template to include a parameter for the Lambda function code’s zip file location. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to pass in the us-east-1 artifact location as a parameter override. (B) Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact. (C) Create an S3 bucket in us-east-1. Configure the S3 bucket policy to allow CodePipeline to have read and write access. (D) Create an S3 bucket in us-east-1. Configure S3 Cross-Region Replication (CRR) from the S3 bucket in eu-west-1 to the S3 bucket in us-east-1. (E) Modify the pipeline to include the S3 bucket for us-east-1 as an artifact store. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.
Answer: CE
Explanation:
Here’s a detailed justification for why options C and E are the correct choices and why the other options are incorrect.
Why C is Correct:
- Cross-Region Deployment Requires Artifact Storage in the Target Region: CodePipeline, by default, stores artifacts in a bucket within the same region as the pipeline. When deploying to a different region (us-east-1 in this case), you need a bucket in that region to store the artifacts intended for deployment there.
- CodePipeline Access: CodePipeline needs permission to read the CloudFormation template and related artifacts from the destination S3 bucket in the target region. The bucket policy needs to be configured to grant CodePipeline the necessary permissions.
- Reference: AWS Documentation – “Cross-region actions in AWS CodePipeline” https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-cross-region.html
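For illustration, the read/write grant described in option C could be expressed as a bucket policy on the us-east-1 bucket that names the pipeline's service role as the principal; the bucket name and role ARN below are hypothetical.

```python
import json

import boto3

# Hypothetical bucket name and CodePipeline service role ARN.
BUCKET = "example-artifacts-us-east-1"
PIPELINE_ROLE_ARN = "arn:aws:iam::111122223333:role/example-codepipeline-service-role"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPipelineBucketAccess",
            "Effect": "Allow",
            "Principal": {"AWS": PIPELINE_ROLE_ARN},
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "AllowPipelineObjectReadWrite",
            "Effect": "Allow",
            "Principal": {"AWS": PIPELINE_ROLE_ARN},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

s3 = boto3.client("s3", region_name="us-east-1")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```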
Why E is Correct:
- Artifact Store for Target Region: Since we are deploying to us-east-1, CodePipeline requires an artifact store (S3 bucket) in that region to stage the artifacts produced for that specific deployment. Adding the us-east-1 S3 bucket to the pipeline as an artifact store makes the artifacts available for the subsequent deployment action.
- Dedicated CloudFormation Deploy Action: A new CloudFormation deploy action is needed within the pipeline to handle the deployment in the us-east-1 region.
- Targeted Artifact Usage: The new CloudFormation deploy action should specifically utilize the CloudFormation template from the us-east-1 output artifact generated by the CodeBuild project. This ensures that the correct template and related artifacts are used for the deployment in the us-east-1 region.
- Reference: AWS Documentation – “AWS CodePipeline Artifacts” https://docs.aws.amazon.com/codepipeline/latest/userguide/concepts-artifacts.html
Why A is Incorrect:
- While parameterization can be helpful in CloudFormation, modifying the template solely to pass the zip file location isn’t sufficient for cross-region deployment. CodePipeline still needs an artifact store in the target region.
Why B is Incorrect:
- Option B omits the step of adding the us-east-1 S3 bucket as an artifact store for the pipeline and granting CodePipeline the necessary permissions to use it.
Why D is Incorrect:
- While S3 Cross-Region Replication (CRR) copies objects between buckets, it doesn’t inherently provide the isolation and control that CodePipeline needs for a multi-Region deployment. Relying solely on CRR can lead to unexpected behavior or synchronization issues and is not considered a best practice; it is not a substitute for the dedicated artifact store that CodePipeline requires in the target Region.
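Putting C and E together, the pipeline definition would carry one artifact store per Region plus a CloudFormation deploy action that targets us-east-1 and consumes the us-east-1 output artifact. A rough boto3 sketch follows; the pipeline name, bucket names, role ARN, artifact name, and template file name are hypothetical.

```python
import boto3

codepipeline = boto3.client("codepipeline", region_name="eu-west-1")

# Hypothetical pipeline name.
pipeline = codepipeline.get_pipeline(name="lambda-app-pipeline")["pipeline"]

# One artifact store per Region replaces the single artifactStore setting.
pipeline.pop("artifactStore", None)
pipeline["artifactStores"] = {
    "eu-west-1": {"type": "S3", "location": "example-artifacts-eu-west-1"},
    "us-east-1": {"type": "S3", "location": "example-artifacts-us-east-1"},
}

# Additional CloudFormation deploy action that runs in us-east-1 and consumes
# the us-east-1 output artifact produced by the CodeBuild build action.
us_east_1_deploy = {
    "name": "Deploy-us-east-1",
    "actionTypeId": {
        "category": "Deploy",
        "owner": "AWS",
        "provider": "CloudFormation",
        "version": "1",
    },
    "region": "us-east-1",
    "inputArtifacts": [{"name": "BuildOutputUsEast1"}],  # hypothetical artifact name
    "configuration": {
        "ActionMode": "CREATE_UPDATE",
        "StackName": "lambda-app-us-east-1",
        "TemplatePath": "BuildOutputUsEast1::packaged-template-us-east-1.yml",
        "Capabilities": "CAPABILITY_IAM",
        "RoleArn": "arn:aws:iam::111122223333:role/example-cloudformation-role",
    },
    "runOrder": 2,
}
# Append the new deploy action to the pipeline's final (deploy) stage.
pipeline["stages"][-1]["actions"].append(us_east_1_deploy)

codepipeline.update_pipeline(pipeline=pipeline)
```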
Question.24 A company runs an application on one Amazon EC2 instance. Application metadata is stored in Amazon S3 and must be retrieved if the instance is restarted. The instance must restart or relaunch automatically if the instance becomes unresponsive. Which solution will meet these requirements? (A) Create an Amazon CloudWatch alarm for the StatusCheckFailed metric. Use the recover action to stop and start the instance. Use an S3 event notification to push the metadata to the instance when the instance is back up and running. (B) Configure AWS OpsWorks, and use the auto healing feature to stop and start the instance. Use a lifecycle event in OpsWorks to pull the metadata from Amazon S3 and update it on the instance. (C) Use EC2 Auto Recovery to automatically stop and start the instance in case of a failure. Use an S3 event notification to push the metadata to the instance when the instance is back up and running. (D) Use AWS CloudFormation to create an EC2 instance that includes the UserData property for the EC2 resource. Add a command in UserData to retrieve the application metadata from Amazon S3.
Answer: B
Explanation:
The correct answer is B. Let’s break down why:
- Requirement 1: Instance Restart/Relaunch on Unresponsiveness: Both EC2 Auto Recovery (option C) and CloudWatch alarms with recovery actions (option A) can handle instance restarts. OpsWorks auto healing (option B) also provides similar functionality. CloudFormation with UserData (option D) only addresses initial instance setup, not automatic recovery from failures.
- Requirement 2: Metadata Retrieval from S3: This is where options A and C fall short. S3 event notifications fire in response to object changes and can only target services such as SNS, SQS, or Lambda; they are not a reliable way to push data to an instance that has just recovered from a failure, and the instance might not be ready to receive the data at that exact moment.
- Why OpsWorks Lifecycle Events are Ideal: OpsWorks provides a robust framework for managing application state during instance lifecycle events. Lifecycle events, such as setup, configure, and deploy, are triggered at specific points in the instance’s life. By using a lifecycle event (e.g., setup after auto healing restarts the instance), you can reliably pull the metadata from S3 after the instance has fully recovered and is ready to accept connections and execute commands. This guarantees that the metadata is retrieved correctly. OpsWorks is designed to orchestrate and manage application state in a predictable and reliable way.
- Auto Healing in OpsWorks: The auto healing feature in OpsWorks automatically replaces failed instances with new ones. This is particularly useful for maintaining high availability and ensuring that the application is always running. https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-autohealing.html
- OpsWorks Lifecycle Events Documentation: https://docs.aws.amazon.com/opsworks/latest/userguide/lifecycle-events.html
In summary, OpsWorks provides both the automatic instance recovery and a reliable mechanism (lifecycle events) to ensure the application metadata is retrieved from S3 after the instance is back online, satisfying both requirements of the question.
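As an illustration of the metadata-retrieval step, the recipe that runs during the setup lifecycle event could invoke a small script like the one below to pull the object from S3; the bucket, key, and local path are hypothetical placeholders.

```python
import boto3

# Hypothetical bucket/key holding the application metadata.
METADATA_BUCKET = "example-app-metadata-bucket"
METADATA_KEY = "config/app-metadata.json"
LOCAL_PATH = "/opt/app/app-metadata.json"

def fetch_metadata():
    """Download the application metadata from S3 to the instance's local disk."""
    s3 = boto3.client("s3")
    s3.download_file(METADATA_BUCKET, METADATA_KEY, LOCAL_PATH)

if __name__ == "__main__":
    fetch_metadata()
```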
Question.25 A company has multiple AWS accounts. The company uses AWS IAM Identity Center (AWS Single Sign-On) that is integrated with AWS Toolkit for Microsoft Azure DevOps. The attributes for access control feature is enabled in IAM Identity Center. The attribute mapping list contains two entries. The department key is mapped to ${path:enterprise.department}. The costCenter key is mapped to ${path:enterprise.costCenter}. All existing Amazon EC2 instances have a department tag that corresponds to three company departments (d1, d2, d3). A DevOps engineer must create policies based on the matching attributes. The policies must minimize administrative effort and must grant each Azure AD user access to only the EC2 instances that are tagged with the user’s respective department name. Which condition key should the DevOps engineer include in the custom permissions policies to meet these requirements? (The answer options A-D are condition-key snippets shown as images in the original and are not reproduced here.)
Answer: C
Explanation:
Reference:
https://aws.amazon.com/blogs/aws/new-attributes-based-access-control-with-aws-single-sign-on/
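The option images are not reproduced above, but the referenced blog post shows the ABAC pattern this question targets: the custom permissions policy allows an EC2 action only when the instance's department tag matches the department attribute that IAM Identity Center passes through as a principal tag. A hedged sketch of such a policy document, written as a Python dict (the action list and resource scope are illustrative, not taken from the question), might look like this:

```python
# Illustrative ABAC policy document: the caller's "department" session attribute
# must match the EC2 instance's "department" tag for the action to be allowed.
abac_policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/department": "${aws:PrincipalTag/department}"
                }
            },
        }
    ],
}
```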