Question.26 A company hosts a security auditing application in an AWS account. The auditing application uses an IAM role to access other AWS accounts. All the accounts are in the same organization in AWS Organizations. A recent security audit revealed that users in the audited AWS accounts could modify or delete the auditing application’s IAM role. The company needs to prevent any modification to the auditing application’s IAM role by any entity other than a trusted administrator IAM role. Which solution will meet these requirements? (A) Create an SCP that includes a Deny statement for changes to the auditing application’s IAM role. Include a condition that allows the trusted administrator IAM role to make changes. Attach the SCP to the root of the organization. (B) Create an SCP that includes an Allow statement for changes to the auditing application’s IAM role by the trusted administrator IAM role. Include a Deny statement for changes by all other IAM principals. Attach the SCP to the IAM service in each AWS account where the auditing application has an IAM role. (C) Create an IAM permissions boundary that includes a Deny statement for changes to the auditing application’s IAM role. Include a condition that allows the trusted administrator IAM role to make changes. Attach the permissions boundary to the audited AWS accounts. (D) Create an IAM permissions boundary that includes a Deny statement for changes to the auditing application’s IAM role. Include a condition that allows the trusted administrator IAM role to make changes. Attach the permissions boundary to the auditing application’s IAM role in the AWS accounts.
Answer: A
Explanation:
The correct answer is A. Here’s why:
The requirement is to prevent unauthorized modifications to a specific IAM role (used by the auditing application) across multiple AWS accounts within an organization. The chosen solution must allow a trusted administrator to still make necessary changes.
- A: SCP with Deny and Allow Condition: Service Control Policies (SCPs) operate at the AWS Organizations level. Attaching an SCP to the root of the organization enforces centralized governance across all accounts in the organization. The SCP includes a Deny statement that blocks changes to the auditing application’s IAM role for all principals, plus a Condition that exempts the trusted administrator role. This satisfies the central control requirement and provides a mechanism for the exception. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html
- B: SCP with Allow and Deny: SCPs act as guardrails that limit the maximum permissions available in an account; an Allow statement in an SCP does not grant permissions on its own, so a Deny statement is what actually prevents the changes. Moreover, an SCP cannot be attached to the IAM service: SCPs are attached to the organization root, organizational units (OUs), or accounts.
- C: IAM Permissions Boundary attached to audited accounts: Permissions boundaries set the maximum permissions that an IAM role or user can have, and they are attached to individual IAM principals, not to accounts. Even if applied to users in the audited accounts, a boundary restricts what those users can do in general; it does not directly protect the auditing application’s IAM role from changes within the account where it resides. https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html
- D: IAM Permissions Boundary attached to the auditing application’s IAM role: While this would restrict the permissions of the auditing application’s IAM role, it wouldn’t prevent users in the AWS account where the role resides from modifying the role’s trust policy or other attributes. The permissions boundary limits what the role can do, not who can modify it.
Therefore, an SCP that contains a Deny statement with a condition exempting the trusted administrator role, attached at the organization root, provides the centralized control and exception handling needed to meet the requirements.
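For illustration, here is a minimal sketch of what such an SCP could look like and how it might be created and attached with boto3; the role names (AuditingAppRole, TrustedAdminRole) and the policy name are hypothetical placeholders, and the action list is not exhaustive.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny IAM write actions on the auditing role unless the caller is the
# trusted administrator role (ARNs below are hypothetical examples).
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectAuditingRole",
            "Effect": "Deny",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DetachRolePolicy",
                "iam:PutRolePolicy",
                "iam:UpdateAssumeRolePolicy",
                "iam:UpdateRole",
                "iam:UpdateRoleDescription",
            ],
            "Resource": "arn:aws:iam::*:role/AuditingAppRole",
            "Condition": {
                "ArnNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::*:role/TrustedAdminRole"
                }
            },
        }
    ],
}

policy = org.create_policy(
    Name="ProtectAuditingAppRole",
    Description="Prevent changes to the auditing application's IAM role",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to the organization root so it applies to every account.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=root_id,
)
```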
Question.27 A company has an on-premises application that is written in Go. A DevOps engineer must move the application to AWS. The company’s development team wants to enable blue/green deployments and perform A/B testing. Which solution will meet these requirements? (A) Deploy the application on an Amazon EC2 instance, and create an AMI of the instance. Use the AMI to create an automatic scaling launch configuration that is used in an Auto Scaling group. Use Elastic Load Balancing to distribute traffic. When changes are made to the application, a new AMI will be created, which will initiate an EC2 instance refresh. (B) Use Amazon Lightsail to deploy the application. Store the application in a zipped format in an Amazon S3 bucket. Use this zipped version to deploy new versions of the application to Lightsail. Use Lightsail deployment options to manage the deployment. (C) Use AWS CodeArtifact to store the application code. Use AWS CodeDeploy to deploy the application to a fleet of Amazon EC2 instances. Use Elastic Load Balancing to distribute the traffic to the EC2 instances. When making changes to the application, upload a new version to CodeArtifact and create a new CodeDeploy deployment. (D) Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3. Use that location to deploy new versions of the application. Use Elastic Beanstalk to manage the deployment options.
Answer: D
Explanation:
The correct answer is D, utilizing AWS Elastic Beanstalk. Elastic Beanstalk simplifies the deployment and management of web applications and services. It automates capacity provisioning, load balancing, auto-scaling, and application health monitoring, aligning perfectly with the DevOps engineer’s need to move the Go application to AWS. The ability to store zipped application versions in Amazon S3 and deploy them using Elastic Beanstalk directly addresses the requirement for a streamlined deployment process.
Elastic Beanstalk’s built-in support for blue/green deployments and A/B testing makes it an ideal choice. Blue/green deployments can be achieved using Elastic Beanstalk’s environment swapping feature, allowing for zero-downtime updates. A/B testing can be implemented by routing a percentage of traffic to the new environment before a full swap. The other options are less ideal because option A requires manual AMI management and instance refreshes, increasing operational overhead. Option B, Amazon Lightsail, is more suited for simpler applications and lacks the advanced deployment management features necessary for blue/green deployments and A/B testing. Option C, using CodeArtifact and CodeDeploy with EC2, involves more manual configuration and management of the underlying infrastructure compared to Elastic Beanstalk, making it a more complex solution.
Elastic Beanstalk abstracts away much of the underlying infrastructure complexity, enabling the development team to focus on writing code and iteratively improving their application. This abstraction is crucial for enabling fast and reliable releases while minimizing the burden on the DevOps team. It facilitates smoother transitions between versions of the application.
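To illustrate, here is a minimal sketch of a blue/green release on Elastic Beanstalk using boto3; the application name, environment names, bucket, and key are hypothetical placeholders, and the swap would only be performed once the green environment is healthy.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Register a new application version from a zipped bundle already stored in S3.
eb.create_application_version(
    ApplicationName="go-app",
    VersionLabel="v2",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "go-app-v2.zip"},
)

# Deploy the new version to the standby ("green") environment.
eb.update_environment(EnvironmentName="go-app-green", VersionLabel="v2")

# After validating the green environment, swap CNAMEs with the live ("blue")
# environment to cut traffic over with no downtime.
eb.swap_environment_cnames(
    SourceEnvironmentName="go-app-blue",
    DestinationEnvironmentName="go-app-green",
)
```

For A/B-style testing, Elastic Beanstalk traffic-splitting deployment policies can route a percentage of requests to the new version before a full cutover.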
Question.28 A developer is maintaining a fleet of 50 Amazon EC2 Linux servers. The servers are part of an Amazon EC2 Auto Scaling group, and also use Elastic Load Balancing for load balancing. Occasionally, some application servers are being terminated after failing ELB HTTP health checks. The developer would like to perform a root cause analysis on the issue, but before being able to access application logs, the server is terminated. How can log collection be automated? (A) Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch alarm for EC2 Instance Terminate Successful and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected. (B) Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an AWS Config rule for EC2 Instance-terminate Lifecycle Action and trigger a step function that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected. (C) Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch subscription filter for EC2 Instance Terminate Successful and trigger a CloudWatch agent that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected. (D) Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon EventBridge rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
Answer: D
Explanation:
Here’s a detailed justification for why option D is the correct answer, along with explanations of why the other options are less suitable:
Justification for Option D (Correct Answer):
Option D provides the most robust and efficient solution for capturing logs from EC2 instances that are being terminated by Auto Scaling. It leverages key AWS services to automate the entire process with minimal impact on the termination process.
- Auto Scaling Lifecycle Hooks (Terminating:Wait): Lifecycle hooks are crucial. The Terminating:Wait state pauses the instance termination process, giving you a window to collect logs before the instance is gone. This is essential for successful log retrieval. https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html
- Amazon EventBridge Rule: EventBridge rules allow you to react to specific events within your AWS environment. Triggering on the EC2 Instance-terminate Lifecycle Action event is ideal because it starts the log collection process the moment an instance begins terminating. https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-concepts.html
- AWS Lambda Function: Lambda provides a serverless compute environment to execute the log collection logic and can be triggered directly by EventBridge. Its integration with other AWS services (such as SSM and S3) makes it easy to orchestrate the task, as sketched after this list.
- SSM Run Command: SSM Run Command allows you to remotely execute commands on your EC2 instances in a secure manner. Using SSM to run a script that collects the logs is a reliable approach. This eliminates the need for the Lambda function to directly connect to the instance, improving security. https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html
- Amazon S3: S3 is a highly scalable and durable object storage service. Storing the collected logs in S3 provides a central and easily accessible repository for analysis.
- Lifecycle Hook Completion: The Lambda function completing the lifecycle action after the logs are collected is critical. This allows the Auto Scaling group to continue the instance termination process and replace the unhealthy instance.
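Putting the pieces together, here is a minimal sketch of the Lambda handler, assuming the EventBridge rule forwards the Auto Scaling lifecycle event unchanged; the log path, destination bucket, and shell command are hypothetical placeholders.

```python
import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

LOG_BUCKET = "my-log-archive-bucket"  # hypothetical destination bucket


def handler(event, context):
    """Triggered by the EventBridge rule for 'EC2 Instance-terminate Lifecycle Action'."""
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Run a script on the terminating instance that copies its logs to S3.
    command = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": [
                f"aws s3 cp /var/log/app/ s3://{LOG_BUCKET}/{instance_id}/ --recursive"
            ]
        },
    )

    # Wait for the Run Command invocation to finish before releasing the hook.
    ssm.get_waiter("command_executed").wait(
        CommandId=command["Command"]["CommandId"], InstanceId=instance_id
    )

    # Complete the lifecycle action so Auto Scaling can finish the termination.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )
```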
Why other options are incorrect:
Option C: CloudWatch subscription filters process log data that has already been ingested into CloudWatch Logs. The scenario requires capturing logs from an instance before it terminates, not reacting to logs that have already been written to CloudWatch Logs. It is also unclear how the CloudWatch agent would be triggered: the agent ships log and metric data to CloudWatch; it does not trigger actions.
Option A: Using Pending:Wait is incorrect. That state applies while an instance is launching, not terminating, so the lifecycle hook would not pause the instance at the point where logs need to be collected. In addition, the EC2 Instance Terminate Successful notification is emitted only after the instance has been terminated, which is too late to retrieve its logs, especially during rapid scaling events.
Option B: An AWS Config rule is not a direct or efficient trigger for log collection. Config evaluates resource configurations for compliance; it does not react to the EC2 Instance-terminate Lifecycle Action event with the required immediacy. Step Functions could orchestrate the workflow, but a Lambda function is a simpler setup for this specific log collection scenario.
Question.29 A company has an organization in AWS Organizations. The organization includes workload accounts that contain enterprise applications. The company centrally manages users from an operations account. No users can be created in the workload accounts. The company recently added an operations team and must provide the operations team members with administrator access to each workload account. Which combination of actions will provide this access? (Choose three.) (A) Create a SysAdmin role in the operations account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the workload accounts. (B) Create a SysAdmin role in each workload account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the operations account. (C) Create an Amazon Cognito identity pool in the operations account. Attach the SysAdmin role as an authenticated role. (D) In the operations account, create an IAM user for each operations team member. (E) In the operations account, create an IAM user group that is named SysAdmins. Add an IAM policy that allows the sts:AssumeRole action for the SysAdmin role in each workload account. Add all operations team members to the group. (F) Create an Amazon Cognito user pool in the operations account. Create an Amazon Cognito user for each operations team member.
Answer: BDE
Explanation:
The correct answer is BDE. Here’s a detailed justification:
B: Create a SysAdmin role in each workload account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the operations account. This is necessary because the operations team needs administrative privileges in each workload account. Creating a role with AdministratorAccess directly in each workload account, and configuring its trust policy so the operations account can assume it, is the foundation of that access. This approach follows the principle of least privilege by granting permissions only within the specific accounts that require administrative actions.
D: In the operations account, create an IAM user for each operations team member. Since users are centrally managed in the operations account and cannot be created in the workload accounts, creating IAM users in the operations account is a prerequisite. Each operations team member needs an individual identity to authenticate and assume the necessary roles in the workload accounts.
E: In the operations account, create an IAM user group that is named SysAdmins. Add an IAM policy that allows the sts:AssumeRole action for the SysAdmin role in each workload account. Add all operations team members to the group. Creating a group and attaching the required IAM policy to it is a best practice. The policy gives the operations team members (as members of the group) permission to perform the sts:AssumeRole action on the SysAdmin role in each workload account. This follows the principle of least privilege, because users receive only the permissions they need, and it simplifies permission management.
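As a concrete illustration, here is a minimal sketch of the two policy documents involved, expressed as Python dictionaries; the account IDs and the SysAdmin role name are hypothetical placeholders.

```python
import json

# Hypothetical account IDs used only for illustration.
OPERATIONS_ACCOUNT_ID = "111111111111"
WORKLOAD_ACCOUNT_IDS = ["222222222222", "333333333333"]

# Trust policy on the SysAdmin role in each workload account (option B):
# principals in the operations account may assume the role.
sysadmin_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{OPERATIONS_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Policy attached to the SysAdmins group in the operations account (option E):
# group members may assume the SysAdmin role in each workload account.
sysadmins_group_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": [
                f"arn:aws:iam::{account_id}:role/SysAdmin"
                for account_id in WORKLOAD_ACCOUNT_IDS
            ],
        }
    ],
}

print(json.dumps(sysadmin_trust_policy, indent=2))
print(json.dumps(sysadmins_group_policy, indent=2))
```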
Why other options are incorrect:
- A: Creating a SysAdmin role in the operations account doesn’t directly grant access to the workload accounts. Such a role could be used to manage the operations account itself, but not the workloads in other accounts. The trust relationship change described in option A also points in the wrong direction: the trust policies must be modified on the roles in the workload accounts so that they trust the operations account.
- C: Amazon Cognito is designed for managing user authentication and authorization for web and mobile applications. It is not the appropriate solution for granting administrator access to IAM roles across AWS accounts within an organization; Cognito is best suited for end-user authentication and authorization, not for internal operational access.
- F: Amazon Cognito user pools are designed for managing end-user identities, not internal operations team access. Using Cognito in this context adds unnecessary complexity and doesn’t align with best practices for managing IAM access within an AWS organization.
Supporting documentation:
AssumeRole: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_delegation.html
IAM Roles: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
IAM Policies: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
AWS Organizations: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html
Question.30 A company has multiple accounts in an organization in AWS Organizations. The company’s SecOps team needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if any account in the organization turns off the Block Public Access feature on an Amazon S3 bucket. A DevOps engineer must implement this change without affecting the operation of any AWS accounts. The implementation must ensure that individual member accounts in the organization cannot turn off the notification. Which solution will meet these requirements? (A) Designate an account to be the delegated Amazon GuardDuty administrator account. Turn on GuardDuty for all accounts across the organization. In the GuardDuty administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for GuardDuty findings and a target of the SNS topic. (B) Create an AWS CloudFormation template that creates an SNS topic and subscribes the SecOps team’s email address to the SNS topic. In the template, include an Amazon EventBridge rule that uses an event pattern of CloudTrail activity for s3:PutBucketPublicAccessBlock and a target of the SNS topic. Deploy the stack to every account in the organization by using CloudFormation StackSets. (C) Turn on AWS Config across the organization. In the delegated administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. Deploy a conformance pack that uses the s3-bucket-level-public-access-prohibited AWS Config managed rule in each account and uses an AWS Systems Manager document to publish an event to the SNS topic to notify the SecOps team. (D) Turn on Amazon Inspector across the organization. In the Amazon Inspector delegated administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for public network exposure of the S3 bucket and publishes an event to the SNS topic to notify the SecOps team.
Answer: C
Explanation:
Here’s a detailed justification for why option C is the best solution, along with explanations of why the other options are less suitable:
Justification for Option C (Correct):
Option C leverages AWS Config, AWS Organizations, SNS, and SSM to provide a robust, centralized, and auditable solution for monitoring S3 bucket public access settings across multiple accounts.
- AWS Config for Continuous Monitoring: Turning on AWS Config ensures continuous evaluation of resource configurations against desired states. This is crucial for detecting deviations from security policies, such as disabling Block Public Access on S3 buckets. https://aws.amazon.com/config/
- Delegated Administrator Account: Centralizing management in a delegated administrator account (within AWS Organizations) allows for a single point of configuration and reporting, preventing individual member accounts from disabling the monitoring.
- SNS for Notification: Creating an SNS topic in the delegated admin account and subscribing the SecOps team provides a reliable mechanism for real-time notifications when a violation occurs. https://aws.amazon.com/sns/
- Conformance Packs & Managed Rules: Deploying a conformance pack with the s3-bucket-level-public-access-prohibited AWS Config managed rule automates the compliance checking. This rule specifically evaluates whether the bucket-level Block Public Access settings are enabled. https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html
- SSM Document for Event Publishing: Using an SSM document allows AWS Config to trigger a remediation action when a violation is detected. In this case, the remediation publishes a message to the SNS topic, so the SecOps team is alerted promptly whenever a violation is found (see the sketch after this list).
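For illustration, here is a minimal sketch, run from the delegated administrator account, that creates the SNS topic and deploys an organization conformance pack containing the managed rule; the topic name, email address, and pack name are hypothetical placeholders, and the SSM remediation document that publishes to the topic is omitted for brevity.

```python
import boto3

sns = boto3.client("sns")
config = boto3.client("config")

# SNS topic that the SecOps team subscribes to.
topic_arn = sns.create_topic(Name="s3-public-access-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="secops@example.com")

# Organization conformance pack containing the managed rule that checks the
# bucket-level Block Public Access settings in every member account.
template_body = """
Resources:
  S3BucketLevelPublicAccessProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-level-public-access-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_LEVEL_PUBLIC_ACCESS_PROHIBITED
"""

config.put_organization_conformance_pack(
    OrganizationConformancePackName="s3-block-public-access",
    TemplateBody=template_body,
)
```

Because organization conformance packs are managed centrally, individual member accounts cannot remove the rule, which aligns with the requirement that member accounts cannot turn off the notification.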
Why Other Options Are Less Suitable:
Option D (Inspector): Amazon Inspector is primarily designed for vulnerability assessments and security audits, not continuous configuration monitoring. It focuses on finding software vulnerabilities, not detecting changes to S3 bucket public access settings.
Option A (GuardDuty): While GuardDuty is a great threat detection service, it primarily focuses on malicious activity and unauthorized behavior. Directly using GuardDuty findings for this specific S3 configuration monitoring is less efficient and precise than using Config. GuardDuty is designed to identify threats, not configuration changes.
Option B (CloudFormation StackSets with EventBridge): While StackSets enable deployment across accounts, relying solely on CloudTrail and EventBridge to monitor s3:PutBucketPublicAccessBlock events can be complex to manage and can miss configuration changes because of timing issues or incomplete CloudTrail logging. It also does not provide the continuous compliance evaluation that AWS Config offers, and the resources deployed into each member account could be modified or disabled by users in that account, which conflicts with the requirement that member accounts cannot turn off the notification. Furthermore, managing CloudTrail event patterns across many accounts adds operational overhead.