Question.16 A development team is using AWS CodeCommit to version control application code and AWS CodePipeline to orchestrate software deployments. The team has decided to use a remote main branch as the trigger for the pipeline to integrate code changes. A developer has pushed code changes to the CodeCommit repository, but noticed that the pipeline had no reaction, even after 10 minutes. Which of the following actions should be taken to troubleshoot this issue? (A) Check that an Amazon EventBridge rule has been created for the main branch to trigger the pipeline. (B) Check that the CodePipeline service role has permission to access the CodeCommit repository. (C) Check that the developer’s IAM role has permission to push to the CodeCommit repository. (D) Check to see if the pipeline failed to start because of CodeCommit errors in Amazon CloudWatch Logs.
Answer: A
Explanation:
The correct answer is A: Check that an Amazon EventBridge rule has been created for the main branch to trigger the pipeline.
Here’s why:
The problem is that the pipeline does not start when code changes are pushed to the main branch of the CodeCommit repository. CodePipeline’s integration with CodeCommit relies on Amazon EventBridge (formerly CloudWatch Events) to react to repository events: EventBridge rules detect events from AWS services and route them to targets, such as starting a CodePipeline execution.
If no EventBridge rule is properly configured to listen for changes to the main branch of the CodeCommit repository, the pipeline will not start automatically when changes are pushed. Therefore, verifying the existence and configuration of this rule is the most relevant first troubleshooting step. The rule should match reference change events (referenceCreated and referenceUpdated, with a reference type of branch and a reference name of main) for the repository and target the pipeline to start an execution.
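For reference, a minimal sketch of the kind of rule that should exist, written with boto3 (the repository, pipeline, and role ARNs are placeholders; in practice the CodePipeline console creates an equivalent rule automatically when the source stage uses CodeCommit with EventBridge change detection):

```python
import json
import boto3

events = boto3.client("events")

# Placeholder ARNs for illustration only.
REPO_ARN = "arn:aws:codecommit:us-east-1:111122223333:my-app-repo"
PIPELINE_ARN = "arn:aws:codepipeline:us-east-1:111122223333:my-app-pipeline"
EVENTS_ROLE_ARN = "arn:aws:iam::111122223333:role/start-pipeline-role"  # needs codepipeline:StartPipelineExecution

# Match pushes (branch created or updated) to the main branch of the repository.
event_pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "resources": [REPO_ARN],
    "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceType": ["branch"],
        "referenceName": ["main"],
    },
}

events.put_rule(
    Name="codecommit-main-to-pipeline",
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

# Start the pipeline when the rule matches; EventBridge assumes the role to call CodePipeline.
events.put_targets(
    Rule="codecommit-main-to-pipeline",
    Targets=[{"Id": "start-pipeline", "Arn": PIPELINE_ARN, "RoleArn": EVENTS_ROLE_ARN}],
)
```

If the rule exists but never matches, comparing its event pattern against the branch name that was actually pushed (for example, main versus master) is the next thing to check.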
Option B is less likely to be the cause because if the CodePipeline service role lacked access to the CodeCommit repository, the pipeline would likely fail during execution rather than failing to trigger in the first place. While permissions are always important, trigger issues typically stem from EventBridge configuration.
Option C is incorrect because the developer’s IAM role is relevant for pushing code to CodeCommit, not for triggering the pipeline. The developer was able to successfully push the changes, so their permissions are likely sufficient.
Option D is also less likely to be the primary cause. While CloudWatch Logs can contain information about pipeline failures, checking EventBridge first is more direct because the issue is that the pipeline is not starting at all. If the pipeline were failing during execution, CloudWatch Logs would be a more pertinent troubleshooting step.
Therefore, checking the EventBridge rule is the most direct and appropriate first step for troubleshooting this trigger-related issue.
Question.17 A company’s developers use Amazon EC2 instances as remote workstations. The company is concerned that users can create or modify EC2 security groups to allow unrestricted inbound access. A DevOps engineer needs to develop a solution to detect when users create unrestricted security group rules. The solution must detect changes to security group rules in near real time, remove unrestricted rules, and send email notifications to the security team. The DevOps engineer has created an AWS Lambda function that checks for security group ID from input, removes rules that grant unrestricted access, and sends notifications through Amazon Simple Notification Service (Amazon SNS). What should the DevOps engineer do next to meet the requirements? (A) Configure the Lambda function to be invoked by the SNS topic. Create an AWS CloudTrail subscription for the SNS topic. Configure a subscription filter for security group modification events. (B) Create an Amazon EventBridge scheduled rule to invoke the Lambda function. Define a schedule pattern that runs the Lambda function every hour. (C) Create an Amazon EventBridge event rule that has the default event bus as the source. Define the rule’s event pattern to match EC2 security group creation and modification events. Configure the rule to invoke the Lambda function. (D) Create an Amazon EventBridge custom event bus that subscribes to events from all AWS services. Configure the Lambda function to be invoked by the custom event bus.
Answer: C
Explanation:
The correct answer is C. Here’s why:
The requirement is to detect security group rule modifications in near real-time and take corrective actions. Amazon EventBridge (formerly CloudWatch Events) is the ideal service for this purpose because it allows you to react to state changes in your AWS environment. EventBridge can route events from AWS services to targets such as Lambda functions.
Option C:
- EventBridge event rule with default event bus: The default event bus in EventBridge receives events from AWS services in your account. EC2 security group creation and modification API calls reach this bus as AWS API Call via CloudTrail events shortly after the change is made (CloudTrail management event logging makes these events available), which satisfies the near-real-time requirement.
- Event pattern matching security group events: By defining an appropriate event pattern, the EventBridge rule can be configured to specifically target events related to the creation and modification of EC2 security groups. This is crucial to ensure that the Lambda function only triggers when relevant changes occur.
- Invoke the Lambda function: Configuring the rule to invoke the Lambda function allows EventBridge to automatically execute the function whenever a matching security group event is detected. This ensures near real-time processing of security group changes.
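To make this concrete, here is a minimal sketch of such a rule using boto3 (the function ARN and rule name are placeholders; the security-group API calls arrive on the default bus as AWS API Call via CloudTrail events, which requires CloudTrail management event logging to be enabled):

```python
import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Placeholder values for illustration only.
FUNCTION_ARN = "arn:aws:lambda:us-east-1:111122223333:function:revoke-open-sg-rules"
RULE_NAME = "detect-security-group-changes"

# Match security group creation/modification API calls on the default event bus.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": [
            "CreateSecurityGroup",
            "AuthorizeSecurityGroupIngress",
            "ModifySecurityGroupRules",
        ],
    },
}

rule_arn = events.put_rule(
    Name=RULE_NAME, EventPattern=json.dumps(event_pattern), State="ENABLED"
)["RuleArn"]

# Invoke the remediation Lambda function whenever the rule matches.
events.put_targets(Rule=RULE_NAME, Targets=[{"Id": "remediate", "Arn": FUNCTION_ARN}])

# Grant EventBridge permission to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-eventbridge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```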
Why other options are incorrect:
- Option A: Triggering the Lambda function from CloudTrail log delivery by way of SNS is inefficient. CloudTrail delivers log files to Amazon S3 in batches (typically every few minutes), so it is not a near-real-time event source, and a “CloudTrail subscription” with a subscription filter on an SNS topic is not an existing mechanism (subscription filters belong to CloudWatch Logs). CloudTrail is better used for auditing than for immediate action.
- Option B: A scheduled rule is not near real-time. The Lambda function would only run every hour, which does not satisfy the requirement of detecting changes “in near real-time.”
- Option D: A custom event bus is unnecessary here. AWS services deliver their events only to the default event bus, so a custom bus cannot “subscribe to events from all AWS services”; custom buses are typically used for custom application events, partner events, or other isolated workloads. Adding one would introduce complexity without any benefit in this case.
In summary, EventBridge provides a scalable, event-driven mechanism to achieve near real-time detection and remediation of security group changes, fulfilling the requirements of the question.
Further reading:
https://aws.amazon.com/eventbridge/
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-get-started.html
Question.18 A DevOps engineer is creating an AWS CloudFormation template to deploy a web service. The web service will run on Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). The DevOps engineer must ensure that the service can accept requests from clients that have IPv6 addresses. What should the DevOps engineer do with the CloudFormation template so that IPv6 clients can access the web service? (A) Add an IPv6 CIDR block to the VPC and the private subnet for the EC2 instances. Create route table entries for the IPv6 network, use EC2 instance types that support IPv6, and assign IPv6 addresses to each EC2 instance. (B) Assign each EC2 instance an IPv6 Elastic IP address. Create a target group, and add the EC2 instances as targets. Create a listener on port 443 of the ALB, and associate the target group with the ALB. (C) Replace the ALB with a Network Load Balancer (NLB). Add an IPv6 CIDR block to the VPC and subnets for the NLB, and assign the NLB an IPv6 Elastic IP address. (D) Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group, and add the EC2 instances as targets. Associate the target group with the ALB.
Answer: D
Explanation:
The correct answer is D because it outlines the necessary steps to enable IPv6 support for an Application Load Balancer (ALB) and the underlying infrastructure within AWS CloudFormation.
Here’s a detailed justification:
- IPv6 CIDR Block: To enable IPv6, the VPC and subnets used by the ALB must have associated IPv6 CIDR blocks. This allows for the allocation of IPv6 addresses within those networks. https://docs.aws.amazon.com/vpc/latest/userguide/get-started-ipv6.html
- Dualstack IP Address Type: ALBs support both IPv4 and IPv6 through the dualstack IP address type. Specifying this on the ALB ensures that it can accept connections from both IPv4 and IPv6 clients. The ALB will then forward the traffic to the EC2 instances via IPv4 (in this specific architecture). https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#load-balancer-attributes
- Listener on Port 443: Creating a listener on port 443 (HTTPS) is essential for accepting secure connections from clients. The ALB needs to be configured to listen for incoming traffic on this port.
- Target Group and EC2 Instances: The EC2 instances running the web service are registered as targets within a target group. The ALB then forwards traffic received on port 443 to the registered targets in the target group. This allows the ALB to distribute incoming requests among the available EC2 instances.
- Association of Target Group: Associating the target group with the ALB’s listener completes the setup, enabling the ALB to route incoming traffic to the EC2 instances.
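To make the wiring concrete, here is a rough sketch of the relevant CloudFormation resources, expressed as a Python dictionary and printed as JSON (CloudFormation accepts JSON templates; the logical names, subnet and instance references, and certificate parameter are placeholders, and the resources that associate the IPv6 CIDR blocks with the VPC and subnets are assumed to exist elsewhere in the template):

```python
import json

# Placeholder logical IDs and references for illustration only.
resources = {
    "WebAlb": {
        "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "Properties": {
            "Scheme": "internet-facing",
            "IpAddressType": "dualstack",  # accept both IPv4 and IPv6 clients
            "Subnets": [{"Ref": "PublicSubnetA"}, {"Ref": "PublicSubnetB"}],
            "SecurityGroups": [{"Ref": "AlbSecurityGroup"}],
        },
    },
    "WebTargetGroup": {
        "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
        "Properties": {
            "VpcId": {"Ref": "Vpc"},
            "Protocol": "HTTP",
            "Port": 80,
            # The ALB reaches the instances over IPv4, so the targets need no IPv6 addresses.
            "Targets": [{"Id": {"Ref": "WebInstanceA"}}, {"Id": {"Ref": "WebInstanceB"}}],
        },
    },
    "HttpsListener": {
        "Type": "AWS::ElasticLoadBalancingV2::Listener",
        "Properties": {
            "LoadBalancerArn": {"Ref": "WebAlb"},
            "Port": 443,
            "Protocol": "HTTPS",
            "Certificates": [{"CertificateArn": {"Ref": "CertificateArn"}}],
            "DefaultActions": [
                {"Type": "forward", "TargetGroupArn": {"Ref": "WebTargetGroup"}}
            ],
        },
    },
}

print(json.dumps({"Resources": resources}, indent=2))
```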
Why other options are incorrect:
C: Replacing the ALB with an NLB is not necessary. ALBs support IPv6 through the dualstack address type and provide richer features for web (HTTP/HTTPS) applications, whereas NLBs are aimed at TCP/UDP traffic and extreme-performance scenarios. There is also no such thing as an IPv6 Elastic IP address; Elastic IP addresses are IPv4 only.
A: Adding IPv6 to the VPC, the private subnet, and the EC2 instances does not by itself let IPv6 clients reach the service, because clients connect to the ALB, not to the instances. Unless the ALB is configured as dualstack, IPv6 requests never arrive, and the ALB can continue to reach its targets over IPv4, so instance-level IPv6 addresses and IPv6 route table entries are unnecessary in this design.
B: Elastic IP addresses are IPv4 only, so they cannot give the instances IPv6 reachability, and nothing in this option enables the dualstack address type on the ALB, so IPv6 clients still cannot connect. Assigning public addresses to the instances would also undermine the private-subnet design behind the load balancer.
Question.19 A company uses AWS Organizations and AWS Control Tower to manage all the company’s AWS accounts. The company uses the Enterprise Support plan. A DevOps engineer is using Account Factory for Terraform (AFT) to provision new accounts. When new accounts are provisioned, the DevOps engineer notices that the support plan for the new accounts is set to the Basic Support plan. The DevOps engineer needs to implement a solution to provision the new accounts with the Enterprise Support plan. Which solution will meet these requirements? (A) Use an AWS Config conformance pack to deploy the account-part-of-organizations AWS Config rule and to automatically remediate any noncompliant accounts. (B) Create an AWS Lambda function to create a ticket for AWS Support to add the account to the Enterprise Support plan. Grant the Lambda function the support:ResolveCase permission. (C) Add an additional value to the control_tower_parameters input to set the AWSEnterpriseSupport parameter as the organization’s management account number. (D) Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration. Redeploy AFT and apply the changes.
Answer: D
Explanation:
The correct solution is D: Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration. Redeploy AFT and apply the changes.
Here’s why: Account Factory for Terraform (AFT) provides built-in features to customize account provisioning, and managing the support plan is one of those features. AFT uses feature flags to enable or disable certain behaviors during account creation, and aft_feature_enterprise_support is the flag that controls the support plan assigned to newly provisioned accounts. Setting this flag to True instructs AFT to configure new accounts with the Enterprise Support plan, and redeploying AFT after modifying the input configuration applies the change, so all subsequent accounts created through AFT automatically inherit Enterprise Support. This approach is efficient, automated, and directly leverages AFT’s capabilities for account customization.
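For illustration only, the flag is an input variable on the AFT Terraform module, so enabling it is a one-line change in the deployment configuration. The sketch below shows the HCL inside a Python string for consistency with the other examples in this set; the module block and the omitted inputs are placeholders:

```python
# A minimal sketch of the AFT deployment input that enables Enterprise Support
# on newly vended accounts. In practice this lives in the Terraform configuration
# of the AFT deployment repository, not in Python.
AFT_MODULE_INPUT = """
module "aft" {
  # ... existing AFT deployment inputs (source, account IDs, regions, VCS settings) ...
  aft_feature_enterprise_support = true
}
"""
print(AFT_MODULE_INPUT)
```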
Option A is incorrect because AWS Config conformance packs and rules are for auditing and remediating configuration drift after resources have been created. The account-part-of-organizations rule only checks whether an account belongs to an AWS organization; it says nothing about support plans, and AWS Config has no remediation path for changing a support plan. This is simply not the intended use case for conformance packs.
Option B is incorrect because it turns an account-provisioning step into a support-ticket workflow that AWS Support must process manually. The support:ResolveCase permission is also the wrong permission for the task: creating a case requires support:CreateCase, and resolving cases is unrelated to changing the support plan. Relying on this kind of out-of-band intervention runs counter to the automation and infrastructure-as-code principles that AFT is designed to facilitate.
Option C is incorrect. The control_tower_parameters input in an AFT account request supplies the values passed to AWS Control Tower Account Factory when the account is provisioned (account email and name, the managed organizational unit, and the SSO user details). There is no AWSEnterpriseSupport parameter, and supplying the management account number there would not change the support plan.
Therefore, enabling the aft_feature_enterprise_support feature flag is the most direct, automated, and supported method for ensuring that new accounts provisioned via AFT receive the Enterprise Support plan.
Refer to the AFT documentation for feature flag configurations: https://aws.amazon.com/blogs/mt/customize-your-aws-control-tower-account-factory-provisioned-accounts-using-account-factory-customizations/ (While this blog may not directly mention the feature flag, it details using AFT and account customizations.)
Question.20 A company’s DevOps engineer uses AWS Systems Manager to perform maintenance tasks during maintenance windows. The company has a few Amazon EC2 instances that require a restart after notifications from AWS Health. The DevOps engineer needs to implement an automated solution to remediate these notifications. The DevOps engineer creates an Amazon EventBridge rule. How should the DevOps engineer configure the EventBridge rule to meet these requirements? (A) Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a Systems Manager document to restart the EC2 instance. (B) Configure an event source of Systems Manager and an event type that indicates a maintenance window. Target a Systems Manager document to restart the EC2 instance. (C) Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window. (D) Configure an event source of EC2 and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.
Answer: A
Explanation:
The correct answer is A. Here’s why:
- Requirement: The automation must trigger based on AWS Health events related to EC2 instance maintenance and restart the affected instances. The solution must leverage Systems Manager.
- Option A directly addresses this.
- It configures EventBridge to listen for AWS Health events specific to EC2 instance maintenance. This ensures that the rule triggers when AWS Health identifies a maintenance event requiring a restart.
- It targets a Systems Manager document designed to restart EC2 instances. This fulfills the requirement to use Systems Manager for remediation; a sketch of such a rule and target appears after this list.
- Why other options are incorrect:
- Option B: Systems Manager maintenance windows are for scheduled maintenance, not for reacting to AWS Health events. The trigger in this option is thus unrelated to AWS Health events.
- Option C: While using a Lambda function to register an automation task is possible, it adds unnecessary complexity; a Systems Manager document can be invoked directly from EventBridge, and there is no need to defer the restart to a maintenance window when it can run as soon as the notification arrives.
- Option D: While EC2 is related, the source of truth for planned maintenance impacting EC2 instances is AWS Health, not EC2 events themselves. AWS Health provides detailed notifications about planned maintenance impacting your AWS resources.
- In this option, too, registering an automation task through a Lambda function makes little sense when the rule can target the Systems Manager document directly.
- Conclusion: Option A is the most straightforward and efficient way to configure the EventBridge rule to trigger on AWS Health events related to EC2 instance maintenance and use a Systems Manager document to restart the instances, fulfilling all the requirements.
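A minimal sketch of such a rule with boto3 follows (the ARNs are placeholders; the input transformer that maps the affected resources from the Health event into the runbook's InstanceId parameter is an assumption and should be validated against the actual event payload):

```python
import json
import boto3

events = boto3.client("events")

# Placeholder ARNs for illustration only.
AUTOMATION_ROLE_ARN = "arn:aws:iam::111122223333:role/eventbridge-ssm-automation"
RESTART_RUNBOOK_ARN = (
    "arn:aws:ssm:us-east-1:111122223333:automation-definition/AWS-RestartEC2Instance:$DEFAULT"
)
RULE_NAME = "health-ec2-scheduled-maintenance"

# Match AWS Health events for EC2 scheduled changes (instance maintenance).
event_pattern = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
        "service": ["EC2"],
        "eventTypeCategory": ["scheduledChange"],
    },
}

events.put_rule(Name=RULE_NAME, EventPattern=json.dumps(event_pattern), State="ENABLED")

# Target the AWS-RestartEC2Instance Systems Manager Automation runbook.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[
        {
            "Id": "restart-instance",
            "Arn": RESTART_RUNBOOK_ARN,
            "RoleArn": AUTOMATION_ROLE_ARN,
            "InputTransformer": {
                "InputPathsMap": {"instances": "$.resources"},
                "InputTemplate": '{"InstanceId": <instances>}',
            },
        }
    ],
)
```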
Supporting Links:
Automating AWS Health Events with EventBridge: https://aws.amazon.com/blogs/mt/automating-aws-health-events-with-amazon-eventbridge/
AWS Health: https://aws.amazon.com/health/
Amazon EventBridge: https://aws.amazon.com/eventbridge/
AWS Systems Manager: https://aws.amazon.com/systems-manager/