Question.11 An ecommerce company has chosen AWS to host its new platform. The company’s DevOps team has started building an AWS Control Tower landing zone. The DevOps team has set the identity store within AWS IAM Identity Center (AWS Single Sign-On) to an external identity provider (IdP) and has configured SAML 2.0. The DevOps team wants a robust permission model that applies the principle of least privilege. The model must allow the team to build and manage only the team’s own resources. Which combination of steps will meet these requirements? (Choose three.)
(A) Create IAM policies that include the required permissions. Include the aws:PrincipalTag condition key.
(B) Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions.
(C) Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in IAM Identity Center.
(D) Create a group in the IdP. Place users in the group. Assign the group to OUs and IAM policies.
(E) Enable attributes for access control in IAM Identity Center. Apply tags to users. Map the tags as key-value pairs.
(F) Enable attributes for access control in IAM Identity Center. Map attributes from the IdP as key-value pairs.
Answer: BCF
Explanation:
The correct answer is BCF. Here’s why:
- B: Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions. Permission sets in IAM Identity Center (successor to AWS Single Sign-On) define the permissions granted to users when they access AWS accounts. By attaching an inline policy that uses the aws:PrincipalTag condition key, the permissions are scoped based on user attributes, enforcing least privilege. The aws:PrincipalTag condition key lets you control access to AWS resources based on tags associated with the IAM principal (user or role), which allows each team to manage only its own resources and ensures resource isolation. (A policy sketch follows this list.)
https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principaltag
- C: Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in IAM Identity Center. Managing groups in the external IdP simplifies user assignment to AWS accounts and permission sets. By placing users in groups, you can centrally manage their access to the various AWS accounts and the permissions they have within those accounts; IAM Identity Center uses these group assignments to grant access. Central management improves scalability and reduces administrative overhead.
https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-group-access.html
- F: Enable attributes for access control in IAM Identity Center. Map attributes from the IdP as key-value pairs. Enabling attributes for access control is essential for using the aws:PrincipalTag condition key effectively. By mapping attributes from the external IdP to key-value pairs in IAM Identity Center, those attributes become tags on the IAM principal (user). The inline policies in the permission sets then reference these tags to scope permissions, enabling fine-grained, attribute-based access control (ABAC) driven by user attributes defined in the IdP.
https://docs.aws.amazon.com/singlesignon/latest/userguide/attributemappings.html
https://aws.amazon.com/blogs/security/how-to-centrally-manage-aws-account-access-using-aws-sso-and-attribute-based-access-control/
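To make B and F concrete, here is a minimal sketch, assuming the team attribute is mapped from the IdP as a tag named team and that EC2 is one of the services the teams manage (both assumptions for illustration only). It attaches an ABAC-style inline policy to a permission set with the AWS SDK for Python (boto3); the ARNs are placeholders.

```python
import json

import boto3

# Hypothetical inline policy: principals may act only on EC2 resources whose
# "team" tag matches the "team" tag on the principal (mapped from the IdP).
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/team": "${aws:PrincipalTag/team}"}
            },
        }
    ],
}

sso_admin = boto3.client("sso-admin")

# Attach the inline policy to an existing permission set (placeholder ARNs).
sso_admin.put_inline_policy_to_permission_set(
    InstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE",
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    InlinePolicy=json.dumps(inline_policy),
)
```

With attributes for access control enabled (answer F), the team attribute mapped from the IdP becomes the principal tag that the ${aws:PrincipalTag/team} variable resolves to at request time.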
Why other options are incorrect:
- E: Enable attributes for access control in IAM Identity Center. Apply tags to users. Map the tags as key-value pairs. While enabling attributes for access control is correct (as in option F), applying tags directly to users in IAM Identity Center is not the usual flow when an external IdP is the identity source. The intent is to leverage the attributes that already exist in the IdP, so mapping those attributes (option F) is the better approach.
- A: Create IAM policies that include the required permissions. Include the aws:PrincipalTag condition key. Although standalone IAM policies can use the aws:PrincipalTag condition key, they are not the right vehicle in an AWS Control Tower environment that uses IAM Identity Center. Standalone IAM policies are generally attached to roles assumed by applications or services, whereas permission sets are designed for users who access AWS accounts through IAM Identity Center. Because the question is about managing user access, permission sets are more appropriate.
- D: Create a group in the IdP. Place users in the group. Assign the group to OUs and IAM policies. Groups and IAM policies cannot be assigned to organizational units (OUs); OU-level controls are implemented with service control policies (SCPs), which act as guardrails and are too coarse-grained for this scenario. A finer, per-user level of permission control is needed.
Question.12 An ecommerce company is receiving reports that its order history page is experiencing delays in reflecting the processing status of orders. The order processing system consists of an AWS Lambda function that uses reserved concurrency. The Lambda function processes order messages from an Amazon Simple Queue Service (Amazon SQS) queue and inserts processed orders into an Amazon DynamoDB table. The DynamoDB table has auto scaling enabled for read and write capacity. Which actions should a DevOps engineer take to resolve this delay? (Choose two.)
(A) Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Increase the Lambda function concurrency limit.
(B) Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Configure a redrive policy on the SQS queue.
(C) Check the NumberOfMessagesSent metric for the SQS queue. Increase the SQS queue visibility timeout.
(D) Check the WriteThrottleEvents metric for the DynamoDB table. Increase the maximum write capacity units (WCUs) for the table’s scaling policy.
(E) Check the Throttles metric for the Lambda function. Increase the Lambda function timeout.
Answer: AD
Explanation:
The correct answer is AD. Here’s why:
A. Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Increase the Lambda function concurrency limit.
The ApproximateAgeOfOldestMessage metric indicates how long messages have been waiting in the SQS queue. If this value is high, messages are not being processed quickly enough. A high backlog suggests the Lambda function is not consuming messages from the queue fast enough, which directly contributes to the delay in updating the order history. Increasing the Lambda function’s concurrency limit allows more instances of the function to run simultaneously, enabling faster processing of messages from the SQS queue. This helps reduce the backlog so order updates appear more quickly. Reserved concurrency guarantees that the Lambda function has capacity available, but it must be adequately sized.
D. Check the WriteThrottleEvents metric for the DynamoDB table. Increase the maximum write capacity units (WCUs) for the table’s scaling policy.
The WriteThrottleEvents metric for DynamoDB signifies that write requests to the table are being throttled because the table’s write capacity is insufficient. The Lambda function writes processed order information to DynamoDB, so throttled writes delay updates. DynamoDB auto scaling should ideally adjust to workload changes, but the maximum WCU limit in the scaling policy might be too low. Increasing the maximum WCUs in the scaling policy allows DynamoDB to scale up its write capacity further, accommodating the increased load and preventing throttling, which in turn reduces the delay in order history updates.
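The following is a minimal boto3 sketch of both checks and both fixes, assuming a queue named orders, a function named order-processor, and a table named Orders (all hypothetical names and values).

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# 1. How long have messages been waiting in the queue? (answer A)
age = cloudwatch.get_metric_statistics(
    Namespace="AWS/SQS",
    MetricName="ApproximateAgeOfOldestMessage",
    Dimensions=[{"Name": "QueueName", "Value": "orders"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Maximum"],
)
print("Oldest message age (s):", age["Datapoints"])

# 2. Are writes to the table being throttled? (answer D)
throttles = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="WriteThrottleEvents",
    Dimensions=[{"Name": "TableName", "Value": "Orders"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Sum"],
)
print("Write throttle events:", throttles["Datapoints"])

# 3. Raise the function's reserved concurrency so more messages are processed in parallel.
boto3.client("lambda").put_function_concurrency(
    FunctionName="order-processor",
    ReservedConcurrentExecutions=200,  # hypothetical new limit
)

# 4. Raise the ceiling of the table's write auto scaling target.
boto3.client("application-autoscaling").register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=1000,  # hypothetical new maximum
)
```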
Why other options are incorrect:
- B: While a redrive policy is useful for handling failed messages, it doesn’t address the root cause of the delay, which is slow processing and write throttling. It only moves problematic messages to a dead-letter queue.
- C: The NumberOfMessagesSent metric only indicates the number of messages being added to the queue, not the processing status. Increasing the visibility timeout only delays when messages become available to other consumers if the initial consumer fails.
- E: The Throttles metric for Lambda indicates invocation throttles caused by concurrency limits. While related to concurrency, throttling is a different issue from what happens inside the function, and increasing the function timeout won’t directly solve the queue backlog or the DynamoDB throttling.
Supporting Links:
DynamoDB Metrics: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/metrics-dimensions.html
Amazon SQS Metrics: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-available-cloudwatch-metrics.html
AWS Lambda Concurrency: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
Amazon DynamoDB Auto Scaling: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
Question.13 A company has a single AWS account that runs hundreds of Amazon EC2 instances in a single AWS Region. New EC2 instances are launched and terminated each hour in the account. The account also includes existing EC2 instances that have been running for longer than a week. The company’s security policy requires all running EC2 instances to use an EC2 instance profile. If an EC2 instance does not have an instance profile attached, the EC2 instance must use a default instance profile that has no IAM permissions assigned. A DevOps engineer reviews the account and discovers EC2 instances that are running without an instance profile. During the review, the DevOps engineer also observes that new EC2 instances are being launched without an instance profile. Which solution will ensure that an instance profile is attached to all existing and future EC2 instances in the Region?
(A) Configure an Amazon EventBridge rule that reacts to EC2 RunInstances API calls. Configure the rule to invoke an AWS Lambda function to attach the default instance profile to the EC2 instances.
(B) Configure the ec2-instance-profile-attached AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.
(C) Configure an Amazon EventBridge rule that reacts to EC2 StartInstances API calls. Configure the rule to invoke an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.
(D) Configure the iam-role-managed-policy-check AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Lambda function to attach the default instance profile to the EC2 instances.
Answer: B
Explanation:
The correct answer is B. Here’s a detailed justification:
The core problem is ensuring all EC2 instances, both existing and future, have an instance profile attached, defaulting to one with no permissions if no other profile is specified. AWS Config is designed for continuous compliance monitoring and remediation. Option B leverages this by using the ec2-instance-profile-attached managed rule, which specifically checks whether EC2 instances have an associated instance profile.
The configuration changes trigger ensures that whenever an EC2 instance is launched or modified without an instance profile, the rule detects it. The automatic remediation action then invokes an AWS Systems Manager (SSM) Automation runbook. SSM Automation provides a safe and reliable way to automatically attach the default instance profile to the non-compliant EC2 instances, and runbooks allow for pre-defined, tested procedures for incident response and operational tasks.
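As an illustration of how option B could be wired up with boto3, the sketch below creates the managed rule and an automatic remediation action. The runbook name AWS-AttachIAMToInstance, the role ARN, and the default profile name are assumptions used only as placeholders; the actual runbook and role would depend on the company’s setup.

```python
import boto3

config = boto3.client("config")

# Managed rule that flags any EC2 instance without an instance profile attached.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-instance-profile-attached",
        "Source": {"Owner": "AWS", "SourceIdentifier": "EC2_INSTANCE_PROFILE_ATTACHED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)

# Automatic remediation: run an SSM Automation runbook against each
# non-compliant instance. Runbook, role, and profile names are placeholders.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "ec2-instance-profile-attached",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-AttachIAMToInstance",  # assumed runbook name
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                "InstanceId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
                "RoleName": {"StaticValue": {"Values": ["default-no-permissions-role"]}},
                "AutomationAssumeRole": {
                    "StaticValue": {
                        "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]
                    }
                },
            },
        }
    ]
)
```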
Option A reacts to RunInstances API calls, which is a good approach for new instances, but it doesn’t address existing instances that are already running without profiles. It also uses Lambda, which is generally acceptable, but SSM Automation offers stronger operational capabilities for this specific remediation task.
Option C focuses on StartInstances API calls, which would only cover stopped instances being started, not instances that are already running without a profile or newly launched instances (launching via RunInstances does not generate StartInstances calls).
Option D uses the iam-role-managed-policy-check AWS Config managed rule, which is not directly related to ensuring an instance profile is attached. That rule checks whether specified managed policies are attached to IAM roles, not whether an EC2 instance has an instance profile.
In summary, Option B provides a comprehensive solution by leveraging AWS Config for continuous monitoring and SSM Automation for automated remediation, covering both existing and newly launched instances, making it the most suitable approach.
Further reading:
EC2 Instance Profiles: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
AWS Config Managed Rules: https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws.html
AWS Systems Manager Automation: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html
Question.14 A DevOps engineer is building a continuous deployment pipeline for a serverless application that uses AWS Lambda functions. The company wants to reduce the customer impact of an unsuccessful deployment. The company also wants to monitor for issues. Which deploy stage configuration will meet these requirements?
(A) Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.
(B) Use AWS CloudFormation to publish a new stack update, and include Amazon CloudWatch alarms on all resources. Set up an AWS CodePipeline approval action for a developer to verify and approve the AWS CloudFormation change set.
(C) Use AWS CloudFormation to publish a new version on every stack update, and include Amazon CloudWatch alarms on all resources. Use the RoutingConfig property of the AWS::Lambda::Alias resource to update the traffic routing during the stack update.
(D) Use AWS CodeBuild to add sample event payloads for testing to the Lambda functions. Publish a new version of the functions, and include Amazon CloudWatch alarms. Update the production alias to point to the new version. Configure rollbacks to occur when an alarm is in the ALARM state.
Answer: A
Explanation:
The correct answer is A because it directly addresses the requirements of minimizing customer impact and monitoring for issues during serverless application deployments.
Here’s a detailed justification:
- AWS SAM Template: Using AWS SAM (Serverless Application Model) simplifies the definition and deployment of serverless applications on AWS. It is an extension of AWS CloudFormation and makes managing serverless resources easier.
- AWS CodeDeploy with Canary Deployment: Implementing a canary deployment strategy using AWS CodeDeploy (specifically Canary10Percent15Minutes) shifts 10% of traffic to the new Lambda function version and shifts the remaining traffic 15 minutes later if no alarms fire. This limits the impact of any potential issues to a small subset of users; if issues arise, the deployment can be rolled back before affecting a larger user base. Canary deployments are ideal for minimizing the blast radius of deployments.
- Amazon CloudWatch Alarms: CloudWatch alarms actively monitor the health and performance of the Lambda functions. Setting up alarms on key metrics (e.g., error rates, latency, invocation counts) enables prompt detection of anomalies or failures after the new version is deployed. When an alarm is triggered, automated actions such as rollbacks can be initiated, as in the sketch after this list.
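As a minimal sketch of the monitoring piece, the boto3 call below creates a CloudWatch alarm on the Errors metric of a hypothetical orders-api function’s live alias; in a SAM template this alarm would be referenced from the function’s DeploymentPreference Alarms list so that CodeDeploy rolls back the canary when it fires. The names and thresholds are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on errors for the "live" alias of a hypothetical "orders-api" function.
cloudwatch.put_metric_alarm(
    AlarmName="orders-api-live-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[
        {"Name": "FunctionName", "Value": "orders-api"},
        {"Name": "Resource", "Value": "orders-api:live"},
    ],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
)
```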
Why other options are less suitable:
- B: Using AWS CloudFormation and manual approval adds a human gate that might slow down the continuous deployment pipeline significantly. While CloudWatch alarms are present, the lack of an automated deployment strategy like canary deployment does not minimize the user impact in case of failure.
- C: Although using the RoutingConfig property and CloudWatch alarms is helpful, this option relies heavily on CloudFormation to manage traffic shifting and does not inherently provide a managed canary deployment like CodeDeploy.
- D: While adding sample event payloads for testing with CodeBuild and using CloudWatch alarms are good practices, this approach does not employ a progressive deployment strategy. Updating the production alias directly exposes all users to the new version immediately, increasing the risk of widespread impact if issues exist. Relying solely on alarm-triggered rollbacks is reactive and not as preventative as a canary deployment.
Question.15 To run an application, a DevOps engineer launches Amazon EC2 instances with public IP addresses in a public subnet. A user data script obtains the application artifacts and installs them on the instances upon launch. A change to the security classification of the application now requires the instances to run with no access to the internet. While the instances launch successfully and show as healthy, the application does not seem to be installed. Which of the following should successfully install the application while complying with the new rule?
(A) Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterwards.
(B) Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet’s route table to use the NAT gateway as the default route.
(C) Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.
(D) Create a security group for the application instances and allow only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.
Answer: C
Explanation:
The correct answer is C, which leverages Amazon S3 and VPC endpoints for application artifact retrieval without internet access. Here’s why:
Option C is the most secure and compliant solution. It suggests storing the application artifacts in an Amazon S3 bucket. By creating a VPC endpoint for S3, the EC2 instances can access the S3 bucket without traversing the internet. This satisfies the requirement of no internet access for the instances. Furthermore, assigning an IAM instance profile to the EC2 instances ensures that they have the necessary permissions to read the application artifacts from the S3 bucket. This approach aligns with best practices for security and resource management.
Option A is problematic because it temporarily allows internet access via public and Elastic IP addresses, which violates the “no internet access” policy, even if the IPs are disassociated later. This approach creates a window of vulnerability.
Option B, using a NAT Gateway, still requires internet access, albeit indirectly, for the instances in the private subnet. NAT Gateways provide internet access for instances within the private subnet, which contradicts the requirements.
Option D, modifying security group rules, is also problematic. It requires the EC2 instances to have internet access, at least temporarily, to reach the artifact repository, which the new policy does not permit, and removing the rule after the fact still leaves a window of vulnerability. Security groups control inbound and outbound traffic, but they cannot provide a private path to the artifacts. The key to the correct answer is that the artifacts in S3 are reached through a VPC endpoint, so retrieval never requires internet access.
Therefore, Option C is the only option that securely and permanently prevents internet access to the instances while allowing them to retrieve and install the application artifacts.
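As a minimal sketch of option C, assuming a bucket named artifact-bucket-example and a single route table for the instances’ subnet (both placeholders), the boto3 calls below create a gateway VPC endpoint for S3; the instances’ user data can then fetch the artifact over the endpoint, authorized by the IAM instance profile.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: adds a route so the subnet reaches S3
# without any internet or NAT path. IDs below are placeholders.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# On the instance, the user data script can now pull the artifact from S3.
# Credentials come from the IAM instance profile attached to the instance.
s3 = boto3.client("s3", region_name="us-east-1")
s3.download_file("artifact-bucket-example", "app/artifact.zip", "/tmp/artifact.zip")
```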
Supporting links:
IAM Roles for EC2: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_ec2.html
VPC Endpoints for S3: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html