Question.51 A rapidly growing company wants to scale to meet developer demand for AWS development environments. Development environments are created manually in the AWS Management Console. The networking team uses AWS CloudFormation to manage the networking infrastructure, exporting stack output values for the Amazon VPC and all subnets. The development environments have common standards, such as Application Load Balancers, Amazon EC2 Auto Scaling groups, security groups, and Amazon DynamoDB tables. To keep up with demand, the DevOps engineer wants to automate the creation of development environments. Because the infrastructure required to support the application is expected to grow, there must be a way to easily update the deployed infrastructure. CloudFormation will be used to create a template for the development environments. Which approach will meet these requirements and quickly provide consistent AWS environments for developers? (A) Use Fn::ImportValue intrinsic functions in the Resources section of the template to retrieve Virtual Private Cloud (VPC) and subnet values. Use CloudFormation StackSets for the development environments, using the Count input parameter to indicate the number of environments needed. Use the UpdateStackSet command to update existing development environments. (B) Use nested stacks to define common infrastructure components. To access the exported values, use TemplateURL to reference the networking team’s template. To retrieve Virtual Private Cloud (VPC) and subnet values, use Fn::ImportValue intrinsic functions in the Parameters section of the root template. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments. (C) Use nested stacks to define common infrastructure components. Use Fn::ImportValue intrinsic functions with the resources of the nested stack to retrieve Virtual Private Cloud (VPC) and subnet values. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments. (D) Use Fn::ImportValue intrinsic functions in the Parameters section of the root template to retrieve Virtual Private Cloud (VPC) and subnet values. Define the development resources in the order they need to be created in the CloudFormation nested stacks. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
Answer: C
Explanation:
The correct answer is C. Here’s a detailed justification:
The scenario requires automating and standardizing the creation of AWS development environments using CloudFormation, while also allowing for easy updates to the deployed infrastructure.
Why Option C is Correct:
- Nested Stacks: Nested stacks are a CloudFormation feature that allows you to break down complex infrastructure into smaller, more manageable, and reusable components. This is essential for managing common infrastructure components, as specified in the requirement.
- Fn::ImportValue: This intrinsic function enables cross-stack referencing. It allows retrieving exported values from other CloudFormation stacks (in this case, the networking team’s stack, which exports VPC and subnet information). Using it within the resources of the nested stacks ensures that the shared networking information is correctly utilized by each environment’s infrastructure resources.
- Change Sets: Change sets provide a preview of the changes that CloudFormation will make to your infrastructure before actually applying them. This significantly reduces the risk of unexpected changes and outages. Using CreateChangeSet and ExecuteChangeSet ensures controlled and safe updates to the environments.
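To illustrate the cross-stack reference pattern that option C relies on, here is a small, hypothetical fragment of a nested stack expressed as a CloudFormation JSON template in Python dict form. The export names (shared-subnet-a-id, shared-subnet-b-id) and the DevAlbSecurityGroup resource are assumptions for illustration; they must match whatever the networking team’s stack actually exports and whatever the nested stack defines.

```python
import json

# Fragment of a nested stack template (JSON form). The Application Load
# Balancer references the shared subnets via Fn::ImportValue instead of
# taking them as parameters from the root template.
alb_fragment = {
    "DevLoadBalancer": {
        "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "Properties": {
            "Subnets": [
                {"Fn::ImportValue": "shared-subnet-a-id"},  # hypothetical export name
                {"Fn::ImportValue": "shared-subnet-b-id"},  # hypothetical export name
            ],
            # Security group defined elsewhere in the same nested stack.
            "SecurityGroups": [{"Ref": "DevAlbSecurityGroup"}],
        },
    }
}

print(json.dumps(alb_fragment, indent=2))
```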
Why Other Options are Incorrect:
- Option A (StackSets with Count): While StackSets are useful for deploying across multiple accounts or Regions, they are not the most appropriate choice for creating independent development environments within a single account. The Count parameter isn’t suitable here because the goal isn’t to deploy identical stacks repeatedly, but to create distinct environments.
- Option B: Using TemplateURL to reference the entire networking team’s template is not correct; the required approach is to import the individual VPC and subnet IDs. Also, Fn::ImportValue is not appropriate in the Parameters section of the root template in this context; it needs to be used within the resources of the nested stacks, which require the VPC and subnet IDs to function.
- Option D: Placing Fn::ImportValue in the Parameters section is less appropriate than using it directly within the resources of the nested stacks that need the VPC and subnet IDs. Furthermore, explicitly defining the creation order of resources becomes less manageable as the infrastructure grows. CloudFormation generally handles dependencies, and unnecessary ordering can complicate the template.
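The update side of option C can be scripted with the CloudFormation change set APIs. Below is a minimal boto3 sketch for updating one existing development environment; the stack name and template URL are hypothetical placeholders, and the root template’s nested stacks are assumed to resolve the shared VPC and subnet IDs through Fn::ImportValue as discussed above.

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical identifiers for one developer environment.
STACK_NAME = "dev-env-alice"
TEMPLATE_URL = "https://example-bucket.s3.amazonaws.com/dev-env/root-template.yaml"
CHANGE_SET_NAME = "dev-env-update"

# Create a change set against the existing environment stack. No networking
# parameters are passed because the nested stacks import them directly.
cfn.create_change_set(
    StackName=STACK_NAME,
    TemplateURL=TEMPLATE_URL,
    ChangeSetName=CHANGE_SET_NAME,
    ChangeSetType="UPDATE",
    Capabilities=["CAPABILITY_IAM"],
)

# Wait until CloudFormation finishes calculating the proposed changes.
cfn.get_waiter("change_set_create_complete").wait(
    StackName=STACK_NAME, ChangeSetName=CHANGE_SET_NAME
)

# Preview the proposed changes before applying them.
changes = cfn.describe_change_set(
    StackName=STACK_NAME, ChangeSetName=CHANGE_SET_NAME
)["Changes"]
for change in changes:
    rc = change["ResourceChange"]
    print(rc["Action"], rc["LogicalResourceId"], rc["ResourceType"])

# Apply the change set once the preview looks correct.
cfn.execute_change_set(StackName=STACK_NAME, ChangeSetName=CHANGE_SET_NAME)
```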
Supporting Links:
- CloudFormation Nested Stacks: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html
- Fn::ImportValue: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-importvalue.html
- CloudFormation Change Sets: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html
In summary, option C provides the most scalable, maintainable, and controlled approach for automating the creation and updating of development environments using CloudFormation.
Question.52 A company uses AWS Organizations to manage multiple accounts. Information security policies require that all unencrypted Amazon EBS volumes be marked as non-compliant. A DevOps engineer needs to automatically deploy the solution and ensure that this compliance check is always present. Which solution will accomplish this? (A) Create an AWS CloudFormation template that defines an AWS Inspector rule to check whether EBS encryption is enabled. Save the template to an Amazon S3 bucket that has been shared with all accounts within the company. Update the account creation script pointing to the CloudFormation template in Amazon S3. (B) Create an AWS Config organizational rule to check whether EBS encryption is enabled and deploy the rule using the AWS CLI. Create and apply an SCP to prohibit stopping and deleting AWS Config across the organization. (C) Create an SCP in Organizations. Set the policy to prevent the launch of Amazon EC2 instances without encryption on the EBS volumes using a conditional expression. Apply the SCP to all AWS accounts. Use Amazon Athena to analyze the AWS CloudTrail output, looking for events that deny an ec2:RunInstances action. (D) Deploy an IAM role to all accounts from a single trusted account. Build a pipeline with AWS CodePipeline with a stage in AWS Lambda to assume the IAM role, and list all EBS volumes in the account. Publish a report to Amazon S3.
Answer: B
Explanation:
The correct answer is B. Here’s why:
AWS Config organizational rules provide a centralized and automated way to evaluate the configuration of AWS resources across an entire organization, checking for compliance with desired policies. In this case, the Config rule can be specifically configured to evaluate whether EBS encryption is enabled. This addresses the requirement of automatically checking for unencrypted EBS volumes across all accounts.
Using the AWS CLI to deploy the organizational rule ensures a programmatic and repeatable deployment process, fitting the DevOps engineer’s need for automation.
Implementing a Service Control Policy (SCP) to prohibit stopping or deleting AWS Config ensures that the compliance checks are consistently enforced and cannot be bypassed by individual accounts. This guarantees the persistence of the compliance assessment. SCPs operate at the organization level and govern the permissions available to member accounts.
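A minimal sketch of this pattern is shown below using boto3 (the SDK equivalent of the CLI deployment described in option B), run from the management or delegated administrator account. The rule name, SCP name, and the exact list of denied actions are illustrative assumptions; ENCRYPTED_VOLUMES is the AWS Config managed rule identifier that checks for EBS encryption.

```python
import json
import boto3

config = boto3.client("config")
orgs = boto3.client("organizations")

# Deploy an organization-wide Config rule based on the ENCRYPTED_VOLUMES
# managed rule, which marks unencrypted EBS volumes as NON_COMPLIANT.
config.put_organization_config_rule(
    OrganizationConfigRuleName="ebs-volumes-encrypted",  # illustrative name
    OrganizationManagedRuleMetadata={
        "RuleIdentifier": "ENCRYPTED_VOLUMES",
        "Description": "Mark unencrypted EBS volumes as non-compliant",
    },
)

# SCP that prevents member accounts from disabling the compliance check.
# The action list is an illustrative assumption; adjust it to your policy.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "config:StopConfigurationRecorder",
                "config:DeleteConfigurationRecorder",
                "config:DeleteConfigRule",
                "config:DeleteDeliveryChannel",
            ],
            "Resource": "*",
        }
    ],
}

policy = orgs.create_policy(
    Name="deny-config-tampering",  # illustrative name
    Description="Prevent stopping or deleting AWS Config",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP at the organization root so it applies to every account.
root_id = orgs.list_roots()["Roots"][0]["Id"]
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id
)
```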
Option A is less suitable because Amazon Inspector focuses on vulnerability and security assessments rather than the continuous configuration compliance monitoring that AWS Config provides. Sharing a CloudFormation template through an S3 bucket and relying on an account creation script also requires ongoing management across multiple accounts and does not guarantee the check is always present.
Option C, using SCPs to prevent EC2 instance launches without encryption, prevents new non-compliant resources, but it doesn’t monitor existing unencrypted volumes. Also, using Athena to search CloudTrail output for denied actions is reactive rather than proactive.
Option D requires a pipeline involving IAM roles and Lambda functions, which adds unnecessary complexity for this requirement. Furthermore, it only lists EBS volumes and publishes a report rather than automatically marking unencrypted volumes as non-compliant.
Justification with Cloud Computing Concepts:
- AWS Organizations: Enables centralized management and governance over multiple AWS accounts.
- AWS Config: Provides continuous compliance monitoring by evaluating resource configurations against desired rules.
- Organizational Rules: Extend Config rules to the organization level, ensuring consistent compliance checks across all accounts.
- Service Control Policies (SCPs): Allow centralized control over the maximum available permissions in member accounts, crucial for preventing actions that could undermine compliance efforts.
- Automation: The use of the AWS CLI ensures that the rule deployment is automated and can be easily repeated.
Authoritative Links:
AWS Organizations Service Control Policies (SCPs): https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html
AWS Config Organizational Rules: https://docs.aws.amazon.com/config/latest/developerguide/config-rule-managing-rules.html
Question.53 A company is performing vulnerability scanning for all Amazon EC2 instances across many accounts. The accounts are in an organization in AWS Organizations. Each account’s VPCs are attached to a shared transit gateway. The VPCs send traffic to the internet through a central egress VPC. The company has enabled Amazon Inspector in a delegated administrator account and has enabled scanning for all member accounts. A DevOps engineer discovers that some EC2 instances are listed in the “not scanning” tab in Amazon Inspector. Which combination of actions should the DevOps engineer take to resolve this issue? (Choose three.) (A) Verify that AWS Systems Manager Agent is installed and is running on the EC2 instances that Amazon Inspector is not scanning. (B) Associate the target EC2 instances with security groups that allow outbound communication on port 443 to the AWS Systems Manager service endpoint. (C) Grant inspector:StartAssessmentRun permissions to the IAM role that the DevOps engineer is using. (D) Configure EC2 Instance Connect for the EC2 instances that Amazon Inspector is not scanning. (E) Associate the target EC2 instances with instance profiles that grant permissions to communicate with AWS Systems Manager. (F) Create a managed-instance activation. Use the Activation Code and the Activation ID to register the EC2 instances.
Answer: ABE
Explanation:
The correct answer is ABE. Here’s why:
- A. Verify that AWS Systems Manager Agent is installed and is running on the EC2 instances that Amazon Inspector is not scanning. Amazon Inspector relies on the AWS Systems Manager (SSM) Agent to be installed and running on EC2 instances to perform vulnerability assessments. If the agent isn’t present or is not running, Inspector cannot scan the instance, and it will appear in the “not scanning” tab. This is a fundamental requirement for Inspector to function. https://docs.aws.amazon.com/inspector/latest/userguide/inspector_systems-manager-prerequisites.html
- B. Associate the target EC2 instances with security groups that allow outbound communication on port 443 to the AWS Systems Manager service endpoint. The SSM Agent needs to communicate with the AWS SSM service endpoint to receive instructions and send back assessment data. Port 443 (HTTPS) is the standard port for secure communication. If the security groups associated with the EC2 instances block outbound traffic on port 443 to the SSM endpoint, the agent won’t be able to communicate, and the instances won’t be scanned. https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-security-best-practices.html
- E. Associate the target EC2 instances with instance profiles that grant permissions to communicate with AWS Systems Manager. The EC2 instances need appropriate IAM permissions to allow the SSM Agent to communicate with AWS Systems Manager. This is typically done by attaching an IAM role to the EC2 instance (instance profile). This role should include the AmazonSSMManagedInstanceCore policy or equivalent permissions allowing SSM access. Without these permissions, the agent cannot properly communicate with the SSM service, preventing Inspector from scanning the instances. https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-security-iam-id-based-policy-examples.html
Why the other options are incorrect:
F. Create a managed-instance activation. Use the Activation Code and the Activation ID to register the EC2 instances. Managed-instance activation is used to register on-premises servers and VMs with AWS Systems Manager. EC2 instances, which are already part of AWS, do not require this activation process; they register automatically once the SSM Agent is running and an appropriate instance profile is attached. Using an activation code and ID is a more complex setup that isn’t necessary when the issue is simply missing SSM connectivity. https://docs.aws.amazon.com/systems-manager/latest/userguide/managed-instances-onpremise.html
C. Grant inspector:StartAssessmentRun permissions to the IAM role that the DevOps engineer is using. While inspector:StartAssessmentRun is needed to initiate an assessment run, it does not address the reason why the EC2 instances are listed as “not scanning”. The instances must first be properly configured to be scanned, which requires SSM connectivity.
D. Configure EC2 Instance Connect for the EC2 instances that Amazon Inspector is not scanning. EC2 Instance Connect allows connecting to instances using SSH, but it doesn’t play a role in enabling Amazon Inspector to scan the instances. Instance Connect is a convenience for connecting to instances and does not relate to SSM or Inspector functionality.
Question.54 A development team uses AWS CodeCommit for version control for applications. The development team uses AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy for CI/CD infrastructure. In CodeCommit, the development team recently merged pull requests that did not pass long-running tests in the code base. The development team needed to perform rollbacks to branches in the codebase, resulting in lost time and wasted effort. A DevOps engineer must automate testing of pull requests in CodeCommit to ensure that reviewers more easily see the results of automated tests as part of the pull request review. What should the DevOps engineer do to meet this requirement? (A) Create an Amazon EventBridge rule that reacts to the pullRequestStatusChanged event. Create an AWS Lambda function that invokes a CodePipeline pipeline with a CodeBuild action that runs the tests for the application. Program the Lambda function to post the CodeBuild badge as a comment on the pull request so that developers will see the badge in their code review. (B) Create an Amazon EventBridge rule that reacts to the pullRequestCreated event. Create an AWS Lambda function that invokes a CodePipeline pipeline with a CodeBuild action that runs the tests for the application. Program the Lambda function to post the CodeBuild test results as a comment on the pull request when the test results are complete. (C) Create an Amazon EventBridge rule that reacts to pullRequestCreated and pullRequestSourceBranchUpdated events. Create an AWS Lambda function that invokes a CodePipeline pipeline with a CodeBuild action that runs the tests for the application. Program the Lambda function to post the CodeBuild badge as a comment on the pull request so that developers will see the badge in their code review. (D) Create an Amazon EventBridge rule that reacts to the pullRequestStatusChanged event. Create an AWS Lambda function that invokes a CodePipeline pipeline with a CodeBuild action that runs the tests for the application. Program the Lambda function to post the CodeBuild test results as a comment on the pull request when the test results are complete.
Answer: C
Explanation:
The correct answer is C. Here’s why:
The requirement is to automate testing of pull requests in CodeCommit so reviewers can easily see the test results. This means triggering tests whenever a new pull request is created or when the source branch of an existing pull request is updated.
- EventBridge Rule: Option C correctly identifies the need to trigger the testing process on two events: pullRequestCreated and pullRequestSourceBranchUpdated. A new pull request needs to be tested, and any changes pushed to the source branch of an existing pull request should also trigger a new test run to ensure continued validity. Options A and D only trigger on pullRequestStatusChanged, which is not appropriate for initial pull request validation or updates to existing requests. Option B only addresses creation, ignoring updates.
- Lambda Function: The Lambda function acts as the orchestrator. It’s triggered by the EventBridge rule and then starts a CodePipeline execution.
- CodePipeline and CodeBuild: The CodePipeline contains a CodeBuild action. CodeBuild is ideal for running the actual tests. It compiles the code, executes the tests, and produces the results.
- Posting Badge: The Lambda function then posts the CodeBuild badge as a comment on the pull request. A badge visually indicates the status of the build (success or failure) directly within the pull request, making it very easy for reviewers to quickly assess the test results. While Option D posts the test results as a comment, a badge offers a more immediate visual indicator.
Option C handles both the initial pull request creation and subsequent updates, triggers testing, and provides a clear, visual indication of the test results within the pull request, fulfilling all requirements.
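A minimal sketch of the Lambda orchestrator is below. The pipeline name and comment text are hypothetical, and the event field names assume the shape of the CodeCommit pull-request events (pullRequestCreated / pullRequestSourceBranchUpdated) that EventBridge delivers; a second step (not shown) would post the CodeBuild badge or test results once the run finishes.

```python
import boto3

codepipeline = boto3.client("codepipeline")
codecommit = boto3.client("codecommit")

PIPELINE_NAME = "pull-request-tests"  # hypothetical pipeline with a CodeBuild test action


def handler(event, context):
    """Triggered by the EventBridge rule for pullRequestCreated and
    pullRequestSourceBranchUpdated events from CodeCommit."""
    detail = event["detail"]
    pull_request_id = detail["pullRequestId"]
    repository_name = detail["repositoryNames"][0]
    source_commit = detail["sourceCommit"]
    destination_commit = detail["destinationCommit"]

    # Kick off the pipeline that runs the long-running tests in CodeBuild.
    execution = codepipeline.start_pipeline_execution(name=PIPELINE_NAME)

    # Leave a comment on the pull request so reviewers can see that automated
    # tests are running; a follow-up comment would carry the badge/results.
    codecommit.post_comment_for_pull_request(
        pullRequestId=pull_request_id,
        repositoryName=repository_name,
        beforeCommitId=destination_commit,
        afterCommitId=source_commit,
        content=(
            "Automated tests started (pipeline execution "
            f"{execution['pipelineExecutionId']}). Results will be posted here."
        ),
    )
```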
Authoritative Links:
AWS CodeCommit: https://aws.amazon.com/codecommit/
Amazon EventBridge: https://aws.amazon.com/eventbridge/
AWS Lambda: https://aws.amazon.com/lambda/
AWS CodePipeline: https://aws.amazon.com/codepipeline/
AWS CodeBuild: https://aws.amazon.com/codebuild/
Question.55 A company has deployed an application in a production VPC in a single AWS account. The application is popular and is experiencing heavy usage. The company’s security team wants to add additional security, such as AWS WAF, to the application deployment. However, the application’s product manager is concerned about cost and does not want to approve the change unless the security team can prove that additional security is necessary. The security team believes that some of the application’s demand might come from users that have IP addresses that are on a deny list. The security team provides the deny list to a DevOps engineer. If any of the IP addresses on the deny list access the application, the security team wants to receive automated notification in near real time so that the security team can document that the application needs additional security. The DevOps engineer creates a VPC flow log for the production VPC. Which set of additional steps should the DevOps engineer take to meet these requirements MOST cost-effectively? (A) Create a log group in Amazon CloudWatch Logs. Configure the VPC flow log to capture accepted traffic and to send the data to the log group. Create an Amazon CloudWatch metric filter for IP addresses on the deny list. Create a CloudWatch alarm with the metric filter as input. Set the period to 5 minutes and the datapoints to alarm to 1. Use an Amazon Simple Notification Service (Amazon SNS) topic to send alarm notices to the security team. (B) Create an Amazon S3 bucket for log files. Configure the VPC flow log to capture all traffic and to send the data to the S3 bucket. Configure Amazon Athena to return all log files in the S3 bucket for IP addresses on the deny list. Configure Amazon QuickSight to accept data from Athena and to publish the data as a dashboard that the security team can access. Create a threshold alert of 1 for successful access. Configure the alert to automatically notify the security team as frequently as possible when the alert threshold is met. (C) Create an Amazon S3 bucket for log files. Configure the VPC flow log to capture accepted traffic and to send the data to the S3 bucket. Configure an Amazon OpenSearch Service cluster and domain for the log files. Create an AWS Lambda function to retrieve the logs from the S3 bucket, format the logs, and load the logs into the OpenSearch Service cluster. Schedule the Lambda function to run every 5 minutes. Configure an alert and condition in OpenSearch Service to send alerts to the security team through an Amazon Simple Notification Service (Amazon SNS) topic when access from the IP addresses on the deny list is detected. (D) Create a log group in Amazon CloudWatch Logs. Create an Amazon S3 bucket to hold query results. Configure the VPC flow log to capture all traffic and to send the data to the log group. Deploy an Amazon Athena CloudWatch connector in AWS Lambda. Connect the connector to the log group. Configure Athena to periodically query for all accepted traffic from the IP addresses on the deny list and to store the results in the S3 bucket. Configure an S3 event notification to automatically notify the security team through an Amazon Simple Notification Service (Amazon SNS) topic when new objects are added to the S3 bucket.
Answer: A
Explanation:
The most cost-effective solution is option A, which leverages CloudWatch Logs, Metric Filters, Alarms, and SNS. This approach avoids the overhead of managing storage, query engines, and complex data pipelines found in the other options.
Here’s why option A is superior:
- Cost Efficiency: CloudWatch Logs and Metric Filters are designed for log analysis and metric extraction. They are more cost-effective for this specific use case compared to S3, Athena, OpenSearch, or QuickSight, which are geared toward larger-scale data analytics and visualization.
- Near Real-Time Notification: CloudWatch Alarms can be configured with a 5-minute period, providing timely notification when traffic from the deny list is detected. This satisfies the “near real-time” requirement.
- Specific Targeting: Filtering the VPC flow logs for accepted traffic reduces noise and focuses on actual attempts to access the application from the denied IPs.
- Automation: The entire process, from log capture to notification, is automated, minimizing manual intervention.
- Simplicity: The solution avoids complex data transformation and loading processes required by S3, Lambda, and OpenSearch.
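A minimal boto3 sketch of the metric filter and alarm setup is below. The log group name, metric names, SNS topic ARN, and the single deny-list IP address in the filter pattern are placeholders; the filter pattern assumes the default space-delimited VPC flow log format delivered to CloudWatch Logs.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "/vpc/prod-flow-logs"                                      # placeholder
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"  # placeholder

# Match ACCEPTed flow log records whose source address is on the deny list.
# Default flow log format: version account-id interface-id srcaddr dstaddr
# srcport dstport protocol packets bytes start end action log-status.
FILTER_PATTERN = (
    '[version, account, eni, src = "203.0.113.5", dst, srcport, dstport, '
    'protocol, packets, bytes, start, end, action = "ACCEPT", status]'
)

logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="deny-list-accepted-traffic",
    filterPattern=FILTER_PATTERN,
    metricTransformations=[
        {
            "metricName": "DenyListAcceptedConnections",
            "metricNamespace": "Security/DenyList",
            "metricValue": "1",
            "defaultValue": 0,
        }
    ],
)

# Alarm on a single 5-minute datapoint so the security team is notified in
# near real time through the SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="deny-list-access-detected",
    Namespace="Security/DenyList",
    MetricName="DenyListAcceptedConnections",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    DatapointsToAlarm=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)
```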
Here’s why the other options are less suitable:
- Option B (S3, Athena, QuickSight): This is unnecessarily complex and costly for simple IP address monitoring. Athena queries can be slow, and QuickSight adds visualization overhead that’s not needed for a simple notification system.
- Option C (S3, OpenSearch, Lambda): OpenSearch Service is a powerful search and analytics engine, but it is overkill for this simple use case. The Lambda function adds operational complexity and cost.
- Option D (CloudWatch Logs, S3, Athena, Lambda): Similar to option B, this option introduces unnecessary complexity with Athena and S3. The Athena connector through Lambda is also not the most efficient solution.
CloudWatch documentation: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html
VPC Flow Logs: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html