Question.36 A DevOps engineer needs to back up sensitive Amazon S3 objects that are stored within an S3 bucket with a private bucket policy using S3 cross-Region replication functionality. The objects need to be copied to a target bucket in a different AWS Region and account. Which combination of actions should be performed to enable this replication? (Choose three.) (A) Create a replication IAM role in the source account. (B) Create a replication IAM role in the target account. (C) Add statements to the source bucket policy allowing the replication IAM role to replicate objects. (D) Add statements to the target bucket policy allowing the replication IAM role to replicate objects. (E) Create a replication rule in the source bucket to enable the replication. (F) Create a replication rule in the target bucket to enable the replication.
Answer: ADE
Explanation:
The correct answer is ADE. Here’s why:
- A. Create a replication IAM role in the source account: S3 Cross-Region Replication (CRR) needs an IAM role in the source account to perform the replication actions. This role is assumed by S3 on behalf of the source account. This role needs permissions to read objects from the source bucket and write them to the destination bucket. The official documentation confirms the necessity of an IAM role in the source account. (AWS Documentation: https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-howto-setup.html)
- D. Add statements to the target bucket policy allowing the replication IAM role to replicate objects: Because the destination bucket lives in a different account, its bucket policy must explicitly grant the replication IAM role from the source account permission to write replicas. The policy typically allows actions such as s3:ReplicateObject, s3:ReplicateDelete, and s3:ReplicateTags on objects in the destination bucket. Without this cross-account grant, S3 cannot deliver replicated objects to the target bucket. (AWS Documentation: https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-add-bucket-policy.html)
- E. Create a replication rule in the source bucket to enable the replication: A replication rule is configured on the source bucket. This rule specifies which objects should be replicated (based on prefixes or tags) and the destination bucket where the objects should be copied. The replication rule also specifies the IAM role that S3 will use to perform the replication. Without a replication rule, S3 won't know which objects to replicate or where to send them. (AWS Documentation: https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-configure-bucket.html)
Why the other options are incorrect:
F. Create a replication rule in the target bucket to enable the replication: Replication rules are only configured on the source bucket, defining the replication behavior. Target buckets do not have replication rules associated with them.
B. Create a replication IAM role in the target account: While the target account needs an S3 bucket policy allowing the source account (specifically the replication role in the source account) to write objects, a dedicated IAM role in the target account for replication isn’t strictly necessary for basic cross-account replication. The IAM role in the source account performs the replication actions, and the target bucket policy trusts this source account’s role. While resource-based policies can exist in the destination bucket, it’s not the core component driving the functionality.
C. Add statements to the source bucket policy allowing the replication IAM role to replicate objects: The replication role is created in the same account as the source bucket, so its read access to the source objects (for example, s3:GetObjectVersionForReplication) is granted through the role's identity-based IAM policy rather than through the source bucket policy. The cross-account grant that this scenario requires belongs on the target bucket policy, not the source bucket policy, so this option is not correct.
Question.37 A company has multiple member accounts that are part of an organization in AWS Organizations. The security team needs to review every Amazon EC2 security group and its inbound and outbound rules. The security team wants to programmatically retrieve this information from the member accounts using an AWS Lambda function in the management account of the organization. Which combination of access changes will meet these requirements? (Choose three.) (A) Create a trust relationship that allows users in the member accounts to assume the management account IAM role. (B) Create a trust relationship that allows users in the management account to assume the IAM roles of the member accounts. (C) Create an IAM role in each member account that has access to the AmazonEC2ReadOnlyAccess managed policy. (D) Create an IAM role in each member account to allow the sts:AssumeRole action against the management account IAM role's ARN. (E) Create an IAM role in the management account that allows the sts:AssumeRole action against the member account IAM role's ARN. (F) Create an IAM role in the management account that has access to the AmazonEC2ReadOnlyAccess managed policy.
Answer: BCE
Explanation:
The correct answer is BCE because it outlines the necessary steps to enable cross-account access for the Lambda function in the management account to retrieve security group information from the member accounts.
- B. Create a trust relationship that allows users in the management account to assume the IAM roles of the member accounts: This is crucial for cross-account access. The trust relationship, defined in the member account’s IAM role, specifies which principals (in this case, the management account’s IAM role) are allowed to assume that role. Without this trust, the management account’s Lambda function cannot gain temporary access to the member accounts.
- C. Create an IAM role in each member account that has access to the AmazonEC2ReadOnlyAccess managed policy: This ensures that when the management account's Lambda function assumes the role in the member account, it has the necessary permissions to retrieve the security group information. The AmazonEC2ReadOnlyAccess policy grants read-only access to EC2 resources, including security groups and their rules.
- E. Create an IAM role in the management account that allows the sts:AssumeRole action against the member account IAM role's ARN: This allows the Lambda function in the management account to actually execute the AssumeRole API call against the member account's IAM role. Without this permission, the Lambda function would be denied access even if the member account's IAM role trusts the management account.
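The two halves of the cross-account handshake can be sketched as policy documents. The account IDs and role names below are hypothetical: the trust policy belongs on each member-account role (answer B), and the permission policy belongs on the management-account role that the Lambda function uses (answer E).

```python
# Hypothetical account IDs and role names for illustration.
MANAGEMENT_ROLE_ARN = "arn:aws:iam::111111111111:role/security-audit-lambda-role"
MEMBER_ROLE_ARN = "arn:aws:iam::222222222222:role/security-audit-readonly"

# Trust policy on the MEMBER account role (answer B): it names the
# management account's role as a principal allowed to assume it.
member_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": MANAGEMENT_ROLE_ARN},
        "Action": "sts:AssumeRole",
    }],
}

# Permission policy on the MANAGEMENT account role (answer E): it allows
# the Lambda function to call sts:AssumeRole against each member role.
management_permission_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": MEMBER_ROLE_ARN,
    }],
}
```

At run time the Lambda function would call sts.assume_role(RoleArn=MEMBER_ROLE_ARN, RoleSessionName=...), build an EC2 client from the returned temporary credentials, and call describe_security_groups, an action covered by the AmazonEC2ReadOnlyAccess policy attached in the member account (answer C).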
Why other options are incorrect:
- A. Create a trust relationship that allows users in the member accounts to assume the management account IAM role: This is the inverse of what’s needed. The management account needs to access the member accounts, not the other way around.
- D. Create an IAM role in each member account to allow the sts:AssumeRole action against the management account IAM role’s ARN: The member accounts don’t need to assume a role in the management account in this scenario.
- F. Create an IAM role in the management account that has access to the AmazonEC2ReadOnlyAccess managed policy: While having some permissions in the management account might be necessary for general operations, it doesn’t grant access to resources in the member accounts. Cross-account access is required, and this option doesn’t address that.
Supporting Concepts:
- IAM Roles: Provide temporary security credentials for users or services to access AWS resources.
- Trust Relationships: Define which principals are allowed to assume a specific IAM role.
- AWS Organizations: Enables centralized management and governance across multiple AWS accounts.
- Cross-Account Access: Allows resources in one AWS account to access resources in another account.
- AssumeRole: An AWS STS API action that allows a principal to assume a role in another AWS account.
Authoritative Links:
- IAM Roles: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
- AssumeRole: https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
- Cross-Account Access: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_crossaccount.html
- AWS Organizations: https://aws.amazon.com/organizations/
Question.38 A space exploration company receives telemetry data from multiple satellites. Small packets of data are received through Amazon API Gateway and are placed directly into an Amazon Simple Queue Service (Amazon SQS) standard queue. A custom application is subscribed to the queue and transforms the data into a standard format. Because of inconsistencies in the data that the satellites produce, the application is occasionally unable to transform the data. In these cases, the messages remain in the SQS queue. A DevOps engineer must develop a solution that retains the failed messages and makes them available to scientists for review and future processing. Which solution will meet these requirements? (A) Configure AWS Lambda to poll the SQS queue and invoke a Lambda function to check whether the queue messages are valid. If validation fails, send a copy of the data that is not valid to an Amazon S3 bucket so that the scientists can review and correct the data. When the data is corrected, amend the message in the SQS queue by using a replay Lambda function with the corrected data. (B) Convert the SQS standard queue to an SQS FIFO queue. Configure AWS Lambda to poll the SQS queue every 10 minutes by using an Amazon EventBridge schedule. Invoke the Lambda function to identify any messages with a SentTimestamp value that is older than 5 minutes, push the data to the same location as the application’s output location, and remove the messages from the queue. (C) Create an SQS dead-letter queue. Modify the existing queue by including a redrive policy that sets the Maximum Receives setting to 1 and sets the dead-letter queue ARN to the ARN of the newly created queue. Instruct the scientists to use the dead-letter queue to review the data that is not valid. Reprocess this data at a later time. (D) Configure API Gateway to send messages to different SQS virtual queues that are named for each of the satellites. 
Update the application to use a new virtual queue for any data that it cannot transform, and send the message to the new virtual queue. Instruct the scientists to use the virtual queue to review the data that is not valid. Reprocess this data at a later time.
Answer: C
Explanation:
The correct answer is C because it directly addresses the requirements of retaining failed messages for review and future processing using a standard and efficient SQS feature: a dead-letter queue (DLQ).
Here’s a breakdown:
- Problem: Telemetry data from satellites sometimes fails transformation, leaving unusable messages in the SQS queue. We need to preserve these failed messages for scientists to review and reprocess.
- Solution C (DLQ):
- Creating a DLQ provides a designated place to move messages that have failed processing after a specified number of attempts.
- The redrive policy moves messages to the DLQ after a single failed attempt (Maximum Receives = 1), ensuring that problematic messages are quickly isolated for investigation.
- Scientists can then access the DLQ to review the problematic data without impacting the main processing flow.
- DLQs are a best practice for handling errors and ensuring data durability in asynchronous messaging systems. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
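As a rough sketch of answer C (the queue name and ARN below are hypothetical), the redrive policy is a small JSON attribute set on the existing source queue:

```python
import json

# Hypothetical ARN of the newly created dead-letter queue.
DLQ_ARN = "arn:aws:sqs:us-east-1:111111111111:telemetry-dlq"

# Redrive policy attached to the EXISTING source queue. With
# maxReceiveCount set to 1, a message that the application receives
# but fails to delete is moved to the DLQ after its first failed attempt.
redrive_policy = {
    "deadLetterTargetArn": DLQ_ARN,
    "maxReceiveCount": 1,
}

# SQS expects the redrive policy as a JSON string in the queue's attributes.
queue_attributes = {"RedrivePolicy": json.dumps(redrive_policy)}
```

With boto3 this would be applied via sqs.set_queue_attributes(QueueUrl=..., Attributes=queue_attributes); the scientists can then receive and inspect messages from the DLQ without touching the main processing flow.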
Why other options are not the best fit:
- A (Lambda for validation): While Lambda can validate messages, this solution is more complex. Adding validation to Lambda adds overhead to the processing flow, and re-injecting corrected data via a replay Lambda function introduces additional moving parts. The simpler solution is to leverage the SQS DLQ.
- B (FIFO and Lambda): Converting to a FIFO queue is unnecessary. FIFO queues are primarily for maintaining message order, which is not stated as a requirement. The suggested Lambda function adds complexity, and scheduled polling introduces latency.
- D (API Gateway and virtual queues): Virtual queues are a client-side abstraction provided by the SQS Temporary Queue Client, not a native SQS feature that API Gateway can target. This design also requires changes to the application code and API Gateway configuration, and it makes errors harder to manage and monitor than the dedicated DLQ capability that SQS provides.
In conclusion, using an SQS dead-letter queue with a redrive policy is the simplest, most efficient, and best-practice approach to retaining failed messages for review and future processing. This aligns with the DevOps principles of automation, efficiency, and recoverability.
Question.39 A company wants to use AWS CloudFormation for infrastructure deployment. The company has strict tagging and resource requirements and wants to limit the deployment to two Regions. Developers will need to deploy multiple versions of the same application. Which solution ensures resources are deployed in accordance with company policy? (A) Create AWS Trusted Advisor checks to find and remediate unapproved CloudFormation StackSets. (B) Create a CloudFormation drift detection operation to find and remediate unapproved CloudFormation StackSets. (C) Create CloudFormation StackSets with approved CloudFormation templates. (D) Create AWS Service Catalog products with approved CloudFormation templates.
Answer: D
Explanation:
The correct answer is D, creating AWS Service Catalog products with approved CloudFormation templates. Here’s why:
AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use. By creating Service Catalog products from approved CloudFormation templates, the company can ensure developers only deploy infrastructure that adheres to corporate policies, including tagging, resource requirements, and regional limitations. Service Catalog provides centralized governance and control over the infrastructure provisioning process. This is crucial for enforcing standardization and compliance. Each product can encapsulate a specific version of the application’s infrastructure.
StackSets (Option C) is a valid option for deploying infrastructure across multiple accounts and regions. However, StackSets alone don’t inherently enforce the use of approved templates. It’s up to the user to select the template, which could lead to policy violations if developers are allowed to use arbitrary templates. While StackSets allow for centralized deployment, they lack the governance features of Service Catalog that would force compliance.
Trusted Advisor (Option A) provides recommendations for cost optimization, security, fault tolerance, service limits, and performance improvement but does not directly restrict infrastructure deployments or enforce the use of approved templates. Trusted Advisor is primarily for auditing and identifying violations after the fact rather than preventing them during deployment. Remediation would be reactive and manual.
CloudFormation Drift Detection (Option B) is also a reactive approach. It identifies differences between the expected state (defined in the template) and the actual state of the deployed resources. While it’s useful for detecting unauthorized modifications, it doesn’t prevent the initial deployment of non-compliant infrastructure. It only alerts after non-compliant resources have been deployed.
Service Catalog, in contrast, proactively enforces compliance by restricting developers to using only approved CloudFormation templates within the catalog. This approach satisfies all the requirements: strict tagging and resource control, regional limitation, and support for deploying multiple application versions with governance. The other options are either reactive or don’t provide the level of centralized control needed to enforce policies consistently.
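A sketch of how an approved template might be registered as a Service Catalog product; the names, template URLs, and product ID below are hypothetical. Each new application version becomes an additional provisioning artifact on the same product, so developers choose among approved versions rather than supplying arbitrary templates.

```python
# Parameters for servicecatalog.create_product (boto3). The template at the
# (hypothetical) URL is the approved, policy-compliant CloudFormation template.
create_product_params = {
    "Name": "web-app",
    "Owner": "platform-team",
    "ProductType": "CLOUD_FORMATION_TEMPLATE",
    "ProvisioningArtifactParameters": {
        "Name": "v1.0",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {
            "LoadTemplateFromURL": "https://example-bucket.s3.amazonaws.com/approved/web-app-v1.yaml"
        },
    },
}

# Later versions are added with create_provisioning_artifact, keeping every
# deployable version under the same governed product.
new_version_params = {
    "ProductId": "prod-EXAMPLE",  # hypothetical product ID
    "Parameters": {
        "Name": "v1.1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {
            "LoadTemplateFromURL": "https://example-bucket.s3.amazonaws.com/approved/web-app-v1.1.yaml"
        },
    },
}
```

Regional limits and tagging rules are then enforced by controlling where the portfolio is shared and what the approved templates contain, rather than by trusting each developer's own templates.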
Authoritative links:
AWS CloudFormation StackSets: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-stacksets.html
AWS Service Catalog: https://aws.amazon.com/servicecatalog/
AWS CloudFormation: https://aws.amazon.com/cloudformation/
Question.40 A company requires that its internally facing web application be highly available. The architecture is made up of one Amazon EC2 web server instance and one NAT instance that provides outbound internet access for updates and accessing public data. Which combination of architecture adjustments should the company implement to achieve high availability? (Choose two.) (A) Add the NAT instance to an EC2 Auto Scaling group that spans multiple Availability Zones. Update the route tables. (B) Create additional EC2 instances spanning multiple Availability Zones. Add an Application Load Balancer to split the load between them. (C) Configure an Application Load Balancer in front of the EC2 instance. Configure Amazon CloudWatch alarms to recover the EC2 instance upon host failure. (D) Replace the NAT instance with a NAT gateway in each Availability Zone. Update the route tables. (E) Replace the NAT instance with a NAT gateway that spans multiple Availability Zones. Update the route tables.
Answer: BD
Explanation:
Here’s a detailed justification for why options B and D are the correct choices for achieving high availability in the described web application architecture:
Option B: Create additional EC2 instances spanning multiple Availability Zones. Add an Application Load Balancer to split the load between them.
This option addresses the single point of failure presented by the single EC2 web server instance. By deploying multiple EC2 instances across different Availability Zones (AZs), the application becomes resilient to failures within a specific AZ. If one AZ experiences an outage, the other instances in the remaining AZs can continue to serve traffic. An Application Load Balancer (ALB) distributes incoming traffic evenly across these healthy instances. The ALB also performs health checks, ensuring that only healthy instances receive traffic, further enhancing availability. This setup ensures that the application remains accessible even if one or more EC2 instances or entire Availability Zones experience issues. Using the ALB offers features like session persistence and health checks, vital for a highly available web application.
Option D: Replace the NAT instance with a NAT gateway in each Availability Zone. Update the route tables.
The single NAT instance is another single point of failure. If the NAT instance fails, the EC2 web server instance loses its ability to access the internet, which, as stated, is needed for updates and accessing public data. Replacing the NAT instance with a NAT gateway in each Availability Zone provides redundancy and eliminates this bottleneck. NAT Gateways are managed by AWS and are designed for high availability and scalability. Each NAT Gateway operates independently within its AZ. By placing a NAT Gateway in each AZ where the EC2 instances reside and updating the route tables to point to the respective NAT Gateway, the application maintains outbound internet access even if one AZ fails. This is because each EC2 instance will use the NAT Gateway within its own AZ.
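The per-AZ routing layout described above can be sketched as follows (subnet, AZ, and gateway IDs are hypothetical): each private subnet's route table sends internet-bound traffic to the NAT gateway in its own Availability Zone, so an outage in one AZ never severs another AZ's outbound path.

```python
# Hypothetical IDs illustrating one route table per Availability Zone,
# each pointing its default route at the NAT gateway in the same AZ.
route_tables = {
    "us-east-1a": {
        "subnet": "subnet-aaa111",
        "routes": [
            {"destination": "10.0.0.0/16", "target": "local"},
            {"destination": "0.0.0.0/0", "target": "nat-0aaa111"},  # NAT gateway in 1a
        ],
    },
    "us-east-1b": {
        "subnet": "subnet-bbb222",
        "routes": [
            {"destination": "10.0.0.0/16", "target": "local"},
            {"destination": "0.0.0.0/0", "target": "nat-0bbb222"},  # NAT gateway in 1b
        ],
    },
}

def default_route_target(az: str) -> str:
    """Return the target of the 0.0.0.0/0 route for the given AZ's route table."""
    for route in route_tables[az]["routes"]:
        if route["destination"] == "0.0.0.0/0":
            return route["target"]
    raise KeyError(az)
```

Note that each default route targets a different NAT gateway, which is exactly what answer D's "NAT gateway in each Availability Zone" plus updated route tables produces.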
Why other options are not suitable:
- Option A: While placing a NAT instance in an Auto Scaling group improves the NAT instance’s resilience, it’s less effective than NAT Gateways in terms of availability and management overhead. Furthermore, it doesn’t address the primary concern of the single EC2 web server instance.
- Option C: Recovering an EC2 instance upon host failure with CloudWatch alarms helps but doesn’t provide high availability. There’s downtime during the recovery process. Also, this doesn’t solve the problem of a single point of failure in the architecture.
- Option E: While replacing the NAT instance with a NAT gateway is a step in the right direction, a single NAT gateway spanning multiple AZs isn’t a standard or highly recommended AWS architecture. NAT Gateways are designed to be zonal resources. Placing a NAT gateway in each AZ offers superior redundancy.
Authoritative Links:
High Availability in AWS: https://wa.aws.amazon.com/wellarchitected/2020-07-02T19-33-28/pillar/AWS-Reliability
Application Load Balancer: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
NAT Gateway: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html