Question.56 An AWS customer has a web application that runs on premises. The web application fetches data from a third-party API that is behind a firewall. The third party accepts only one public CIDR block in each client’s allow list. The customer wants to migrate their web application to the AWS Cloud. The application will be hosted on a set of Amazon EC2 instances behind an Application Load Balancer (ALB) in a VPC. The ALB is located in public subnets. The EC2 instances are located in private subnets. NAT gateways provide internet access to the private subnets. How should a solutions architect ensure that the web application can continue to call the third-party API after the migration?
(A) Associate a block of customer-owned public IP addresses to the VPC. Enable public IP addressing for public subnets in the VPC.
(B) Register a block of customer-owned public IP addresses in the AWS account. Create Elastic IP addresses from the address block and assign them to the NAT gateways in the VPC.
(C) Create Elastic IP addresses from the block of customer-owned IP addresses. Assign the static Elastic IP addresses to the ALB.
(D) Register a block of customer-owned public IP addresses in the AWS account. Set up AWS Global Accelerator to use Elastic IP addresses from the address block. Set the ALB as the accelerator endpoint.
Answer: B
Explanation:
The correct solution (B) ensures that the third-party API always sees the same customer-owned source IP addresses after the migration, so the existing firewall allow list remains valid.
Option B relies on bringing the customer’s own public IP range into the AWS account (Bring Your Own IP, or BYOIP): the registered CIDR block becomes an address pool from which Elastic IP addresses can be created, and those Elastic IPs are assigned to the NAT gateways. Because a NAT gateway uses its Elastic IP as the source address for all outbound traffic from the private subnets, every call to the third-party API originates from addresses inside the customer’s already-allowed CIDR block.
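As a rough sketch of what Option B involves, the AWS CLI flow below registers a customer-owned block via BYOIP, allocates an Elastic IP address from that pool, and creates a NAT gateway with it. The CIDR, authorization context, pool ID, subnet ID, and allocation ID are placeholders, and the exact provisioning details depend on how the address block is registered:

```bash
# Register (provision) the customer-owned CIDR block with AWS (BYOIP).
aws ec2 provision-byoip-cidr \
  --cidr 203.0.113.0/24 \
  --cidr-authorization-context Message="<signed message>",Signature="<signature>"

# Advertise the range from AWS once provisioning completes.
aws ec2 advertise-byoip-cidr --cidr 203.0.113.0/24

# Allocate an Elastic IP address from the customer-owned address pool.
aws ec2 allocate-address --public-ipv4-pool <byoip-pool-id>

# Create (or recreate) the NAT gateway with that Elastic IP allocation,
# so outbound traffic from the private subnets uses the allowed address.
aws ec2 create-nat-gateway \
  --subnet-id <public-subnet-id> \
  --allocation-id <eip-allocation-id>
```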
Option A is incorrect because merely associating the customer-owned block with the VPC and enabling public IP addressing on the public subnets does not change the source address of outbound traffic. The EC2 instances sit in private subnets and reach the internet through the NAT gateways, so their traffic would still leave AWS from whatever Elastic IP addresses the NAT gateways already use.
Option C is incorrect because Application Load Balancers do not support attaching static Elastic IP addresses, and an ALB’s IP addresses can change over time. More fundamentally, the ALB handles inbound traffic; the outbound calls to the third-party API leave through the NAT gateways, so fixing the ALB’s addresses would not help.
Option D is incorrect because AWS Global Accelerator provides static entry points for inbound traffic to the ALB endpoint; it does not affect the source IP of the outbound traffic that the EC2 instances send through the NAT gateways to the third-party API. It also adds cost and complexity that are unnecessary here, since the requirement is a consistent outbound source IP rather than global traffic acceleration.
In conclusion, registering the customer-owned address block, creating Elastic IP addresses from it, and assigning them to the NAT gateways (Option B) provides the fixed source addresses the third-party firewall requires, aligning with the customer’s requirement and avoiding unnecessary complexity.
Here are some authoritative links for further research:
AWS Global Accelerator: https://aws.amazon.com/global-accelerator/
Customer-owned IP addresses (CoIPs): https://docs.aws.amazon.com/vpc/latest/coip/what-is-coip.html
NAT Gateway: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
Elastic IP Addresses: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
Application Load Balancer: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
Question.57 A company with several AWS accounts is using AWS Organizations and service control policies (SCPs). An administrator created the following SCP and has attached it to an organizational unit (OU) that contains AWS account 1111-1111-1111:

[SCP policy document image not shown]

Developers working in account 1111-1111-1111 complain that they cannot create Amazon S3 buckets. How should the administrator address this problem?
(A) Add s3:CreateBucket with “Allow” effect to the SCP.
(B) Remove the account from the OU, and attach the SCP directly to account 1111-1111-1111.
(C) Instruct the developers to add Amazon S3 permissions to their IAM entities.
(D) Remove the SCP from account 1111-1111-1111.
Answer: C
Explanation:
C – SCPs do not grant permissions; they only define the maximum permissions available to accounts in the organization. Even when the applicable SCPs allow all services and actions, users and roles must still be granted permissions through IAM identity-based policies. A user with no IAM permission policies has no access at all, so the developers need the relevant Amazon S3 permissions (such as s3:CreateBucket) added to their IAM entities.
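As a rough illustration (the user name and policy name below are hypothetical, and the exact permissions should match the developers’ needs), the missing S3 permissions could be granted with an inline IAM policy:

```bash
# Attach an inline identity-based policy that allows creating S3 buckets.
# "dev-user" and "AllowCreateS3Buckets" are placeholder names.
aws iam put-user-policy \
  --user-name dev-user \
  --policy-name AllowCreateS3Buckets \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:CreateBucket", "s3:ListAllMyBuckets"],
        "Resource": "*"
      }
    ]
  }'
```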
Question.58 A company has a monolithic application that is critical to the company’s business. The company hosts the application on an Amazon EC2 instance that runs Amazon Linux 2. The company’s application team receives a directive from the legal department to back up the data from the instance’s encrypted Amazon Elastic Block Store (Amazon EBS) volume to an Amazon S3 bucket. The application team does not have the administrative SSH key pair for the instance. The application must continue to serve the users. Which solution will meet these requirements?
(A) Attach a role to the instance with permission to write to Amazon S3. Use the AWS Systems Manager Session Manager option to gain access to the instance and run commands to copy data into Amazon S3.
(B) Create an image of the instance with the reboot option turned on. Launch a new EC2 instance from the image. Attach a role to the new instance with permission to write to Amazon S3. Run a command to copy data into Amazon S3.
(C) Take a snapshot of the EBS volume by using Amazon Data Lifecycle Manager (Amazon DLM). Copy the data to Amazon S3.
(D) Create an image of the instance. Launch a new EC2 instance from the image. Attach a role to the new instance with permission to write to Amazon S3. Run a command to copy data into Amazon S3.
Answer: A
Explanation:
The correct answer is A. Here’s why:
- Requirement to Back Up EBS Data to S3: The primary goal is to back up the data from the encrypted EBS volume to an S3 bucket.
- No SSH Access: The application team doesn’t have the SSH key pair, which prevents direct access to the EC2 instance via SSH.
- Application Must Continue Serving Users: The backup process should not interrupt the application’s availability.
Why Option A is Correct:
- IAM Role for S3 Access: Attaching an IAM role to the EC2 instance grants it the necessary permissions to write data to the designated S3 bucket without requiring SSH keys. This adheres to the principle of least privilege. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
- AWS Systems Manager Session Manager: Session Manager provides secure and auditable instance management without the need to open inbound ports or maintain SSH keys. It allows running commands on the EC2 instance without direct SSH access, fulfilling the requirement of no key pair. https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
- Copying Data to S3: Using commands through Session Manager, the application team can copy data from the EBS volume to S3 with the AWS CLI, which authenticates with the instance’s IAM role (see the sketch after this list).
- Minimal Downtime: Option A provides a way to execute the backup procedure on the existing EC2 instance without requiring recreation or rebooting. The whole operation can happen while the instance continues to run.
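A minimal sketch of this approach, assuming the SSM Agent is running (it ships with Amazon Linux 2) and using placeholder instance, profile, bucket, and path names, might look like this:

```bash
# Attach an instance profile whose role allows writing to the target bucket
# ("s3-backup-profile" and the instance ID are placeholders).
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=s3-backup-profile

# Open an interactive shell on the instance without SSH keys or open inbound ports.
aws ssm start-session --target i-0123456789abcdef0

# Inside the session, copy the data from the mounted EBS volume to S3.
# The CLI uses the instance role's credentials automatically.
aws s3 sync /data s3://example-legal-backup-bucket/instance-backup/
```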
Why Other Options are Incorrect:
Option C: Amazon DLM automates the creation and retention of EBS snapshots, but snapshots are kept in AWS-managed storage and are not placed in a customer-accessible S3 bucket, so this does not satisfy the requirement to copy the data into the company’s S3 bucket. The option also does not describe how the snapshot data would actually be read and copied to S3, which makes it incomplete.
Options B & D: Creating an AMI of the instance reboots it by default (and Option B explicitly enables the reboot), which would interrupt the application. Launching a new EC2 instance from the image also adds complexity and cost compared with backing up directly from the running instance.
Question.59 A solutions architect needs to copy data from an Amazon S3 bucket in an AWS account to a new S3 bucket in a new AWS account. The solutions architect must implement a solution that uses the AWS CLI. Which combination of steps will successfully copy the data? (Choose three.)
(A) Create a bucket policy to allow the source bucket to list its contents and to put objects and set object ACLs in the destination bucket. Attach the bucket policy to the destination bucket.
(B) Create a bucket policy to allow a user in the destination account to list the source bucket’s contents and read the source bucket’s objects. Attach the bucket policy to the source bucket.
(C) Create an IAM policy in the source account. Configure the policy to allow a user in the source account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user.
(D) Create an IAM policy in the destination account. Configure the policy to allow a user in the destination account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user.
(E) Run the aws s3 sync command as a user in the source account. Specify the source and destination buckets to copy the data.
(F) Run the aws s3 sync command as a user in the destination account. Specify the source and destination buckets to copy the data.
Answer: BDF
Explanation:
The correct answer is BDF because it outlines the necessary steps to securely and successfully copy data from an S3 bucket in one AWS account to another using the AWS CLI. Here’s a breakdown of why each choice is correct and why the others aren’t:
- B. Create a bucket policy to allow a user in the destination account to list the source bucket’s contents and read the source bucket’s objects. Attach the bucket policy to the source bucket. This is essential for allowing the destination account to access the source bucket’s data. The bucket policy on the source bucket needs to explicitly grant permissions to the destination account’s IAM user/role to read the bucket contents. Without this, the destination account cannot retrieve the objects to copy. (Refer to https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html for bucket policy examples)
- D. Create an IAM policy in the destination account. Configure the policy to allow a user in the destination account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user. The user in the destination account, running the AWS CLI command, requires permissions not only to read from the source bucket but also to write to the destination bucket. This IAM policy grants those permissions. It’s crucial to include `s3:PutObject` and `s3:PutObjectAcl` to allow writing objects and setting ACLs in the destination bucket. (Refer to https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html for IAM policies)
- F. Run the aws s3 sync command as a user in the destination account. Specify the source and destination buckets to copy the data. This command initiates the data transfer. It must be run from the destination account because that is the account granted cross-account read access to the source bucket (step B) and write access to the destination bucket (step D). (Refer to https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html for the `aws s3 sync` documentation.) A sketch of the source bucket policy and the sync command follows this list.
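To make these steps concrete, here is a minimal, hypothetical sketch; the account ID, user name, and bucket names are placeholders. The bucket policy is applied by the source account, and the sync is run by the destination-account user:

```bash
# Source account: attach a bucket policy that lets the destination-account user
# list the source bucket and read its objects (account ID, user, and bucket
# names are placeholders).
aws s3api put-bucket-policy --bucket source-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::222222222222:user/copy-user"},
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::source-bucket",
        "arn:aws:s3:::source-bucket/*"
      ]
    }
  ]
}'

# Destination account: the copy-user (whose IAM policy allows reading the source
# bucket and writing to the destination bucket) runs the copy.
aws s3 sync s3://source-bucket s3://destination-bucket
```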
Here’s why the other options are incorrect:
E. Running `aws s3 sync` as a user in the source account does not work with these steps: the bucket policy in B and the IAM policy in D grant access to a user in the destination account, so it is the destination-account user that must pull the data.
A. Attaching a bucket policy to the destination bucket that grants access to the source bucket is incorrect. Bucket policies grant permissions to principals such as accounts, users, and roles, not to other buckets, and in this design the destination-account user already receives write access to the destination bucket through the IAM policy in D.
C. Granting a user in the source account these permissions does not fit this solution: the copy is performed by a user in the destination account. A source-account user would also need the destination bucket to grant cross-account write access, which none of the listed steps provide.
Question.60 A company built an application based on AWS Lambda deployed in an AWS CloudFormation stack. The last production release of the web application introduced an issue that resulted in an outage lasting several minutes. A solutions architect must adjust the deployment process to support a canary release. Which solution will meet these requirements?
(A) Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
(B) Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
(C) Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
(D) Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the deployment configuration to distribute the load.
Answer: A
Explanation:
The correct answer is A. Here’s a detailed justification:
A canary release is a deployment strategy where a new version of an application is rolled out to a small subset of users before a wider deployment. This allows for early detection of issues and minimizes the impact of potential bugs.
Option A leverages Lambda aliases and weighted routing to achieve this. Lambda aliases are pointers to specific Lambda function versions. By creating an alias for each newly deployed version, you can control the traffic directed to each version. The `update-alias` command in the AWS CLI, along with the `routing-config` parameter, allows you to specify the percentage of traffic that should be routed to the new version. Gradually increasing the traffic to the new version allows you to monitor its performance and stability without impacting all users. This implements the core concept of a canary release.
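As a brief illustration of this mechanism, the following AWS CLI sketch (the function name, alias name, and version numbers are placeholders) publishes a new version and routes 10% of traffic to it through the alias:

```bash
# Publish the new code as an immutable version (suppose it returns version 6;
# the function and alias names here are placeholders).
aws lambda publish-version --function-name my-function

# Point the "live" alias at stable version 5, but send 10% of invocations
# to the new version 6 as a canary.
aws lambda update-alias \
  --function-name my-function \
  --name live \
  --function-version 5 \
  --routing-config '{"AdditionalVersionWeights": {"6": 0.1}}'

# After monitoring, promote the canary by shifting all traffic to version 6.
aws lambda update-alias \
  --function-name my-function \
  --name live \
  --function-version 6 \
  --routing-config '{"AdditionalVersionWeights": {}}'
```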
Option B is less efficient because deploying into a new CloudFormation stack for each release is more complex and resource-intensive than simply updating a Lambda alias. While Route 53 weighted routing could direct traffic, it wouldn’t be directly integrated with Lambda versioning and canary release strategies.
Option C is incorrect because `update-function-configuration` does not support traffic routing; it is used to update function settings such as memory and timeout. Weighted traffic shifting between versions is configured on an alias, not on the function configuration.
Option D, using CodeDeploy, adds unnecessary machinery for a single Lambda function deployment, and `CodeDeployDefault.OneAtATime` is an EC2/On-Premises deployment configuration that updates instances sequentially; it is not a Lambda canary configuration and cannot shift a percentage of traffic to a new function version.
In summary, option A offers the most straightforward and efficient way to implement a canary release for a Lambda function by utilizing Lambda aliases and weighted traffic routing, allowing for granular control and monitoring of the new version.
Supporting Documentation:
AWS CLI update-alias Command: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-alias.html
AWS Lambda Versions and Aliases: https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html