Question.11 A company has many AWS accounts and uses AWS Organizations to manage all of them. A solutions architect must implement a solution that the company can use to share a common network across multiple accounts. The company’s infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network. Individual accounts cannot have the ability to manage their own networks. However, individual accounts must be able to create AWS resources within subnets. Which combination of actions should the solutions architect perform to meet these requirements? (Choose two.)
(A) Create a transit gateway in the infrastructure account.
(B) Enable resource sharing from the AWS Organizations management account.
(C) Create VPCs in each AWS account within the organization in AWS Organizations. Configure the VPCs to share the same CIDR range and subnets as the VPC in the infrastructure account. Peer the VPCs in each individual account with the VPC in the infrastructure account.
(D) Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.
(E) Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each prefix list to associate with the resource share.
Answer: BD
Explanation:
The correct answer is BD. Here’s why:
- D is correct: AWS Resource Access Manager (RAM) facilitates sharing AWS resources across accounts within an AWS Organization. By creating a resource share in the infrastructure account, the infrastructure team can specifically share the subnets of its managed VPC with other accounts or OUs. This aligns with the requirement that individual accounts can create resources within the defined subnets. Sharing specific subnets provides the necessary level of control and prevents individual accounts from managing the network infrastructure. https://docs.aws.amazon.com/ram/latest/userguide/what-is.html
- B is correct: To enable resource sharing across the organization, it’s necessary to enable sharing from the AWS Organizations management account within RAM. This crucial step grants permissions for resource sharing to function across the organizational units. Without this step, the resource share created in the infrastructure account would be ineffective in granting access to other accounts. https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html
- A is incorrect: A transit gateway interconnects VPCs, but each account would still need its own VPC and routing to use it. It does not let individual accounts launch resources directly into subnets that the infrastructure account owns and manages, which is the stated requirement.
- C is incorrect: Creating VPCs in each account with the same CIDR range and subnets as the infrastructure account’s VPC is highly problematic. Overlapping CIDR ranges lead to routing conflicts and make interconnectivity exceedingly difficult. Furthermore, peering many VPCs directly is not scalable and introduces management overhead.
- E is incorrect: Prefix lists are not directly used for sharing network access in the same way as subnets via AWS RAM. While prefix lists can be used in security groups and route tables, the primary requirement is to grant access to existing subnets for resource deployment, which RAM effectively handles when sharing subnets.
Therefore, enabling resource sharing from the management account and sharing subnets using RAM ensures central control of the network infrastructure within the infrastructure account while enabling individual accounts to deploy resources within those controlled subnets.
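The two chosen steps can be sketched with boto3. This is a minimal illustration, not the exam's required implementation: the share name, subnet ARN, and OU ARN below are placeholders, and a real boto3 RAM client would be passed in.

```python
# Sketch of steps B and D using boto3's RAM client. All ARNs and names
# below are illustrative placeholders, not values from the question.

def build_share_request(name, subnet_arns, ou_arn):
    """Assemble the arguments for RAM's CreateResourceShare call."""
    return {
        "name": name,
        "resourceArns": list(subnet_arns),
        "principals": [ou_arn],            # an OU ARN: every account in the OU gets access
        "allowExternalPrincipals": False,  # restrict sharing to the organization
    }

def share_subnets(ram_client, request):
    # Step B (run once, from the Organizations management account):
    # ram_client.enable_sharing_with_aws_organization()
    # Step D (run from the infrastructure account):
    return ram_client.create_resource_share(**request)

request = build_share_request(
    "shared-network",
    ["arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc1234"],  # placeholder
    "arn:aws:organizations::111111111111:ou/o-example/ou-example",  # placeholder
)
```

In practice `ram_client` would be `boto3.client("ram")` with infrastructure-account credentials; accounts in the OU then see the shared subnets in their own VPC console and can launch resources into them without any ability to modify the network.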
Question.12 A company wants to use a third-party software-as-a-service (SaaS) application. The third-party SaaS application is consumed through several API calls. The third-party SaaS application also runs on AWS inside a VPC. The company will consume the third-party SaaS application from inside a VPC. The company has internal security policies that mandate the use of private connectivity that does not traverse the internet. No resources that run in the company VPC are allowed to be accessed from outside the company’s VPC. All permissions must conform to the principles of least privilege. Which solution meets these requirements?
(A) Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group with the endpoint.
(B) Create an AWS Site-to-Site VPN connection between the third-party SaaS application and the company VPC. Configure network ACLs to limit access across the VPN tunnels.
(C) Create a VPC peering connection between the third-party SaaS application and the company VPC. Update route tables by adding the needed routes for the peering connection.
(D) Create an AWS PrivateLink endpoint service. Ask the third-party SaaS provider to create an interface VPC endpoint for this endpoint service. Grant permissions for the endpoint service to the specific account of the third-party SaaS provider.
Answer: A
Explanation:
The most suitable solution is A. Create an AWS PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a security group to limit the access to the endpoint. Associate the security group with the endpoint.
Here’s a detailed justification:
- Private Connectivity: AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises networks without exposing traffic to the public internet. This directly addresses the requirement for private connectivity.
- SaaS Integration: PrivateLink allows secure access to SaaS applications running in other AWS accounts or VPCs through endpoint services, enabling the company to consume the third-party SaaS application privately.
- No External Access: Since PrivateLink operates within the AWS network, it ensures that no resources within the company’s VPC are accessible from outside, aligning with the security policy.
- Least Privilege: Security groups attached to the interface VPC endpoint allow granular control over inbound and outbound traffic, limiting access to only the necessary ports and protocols. This enforces the principle of least privilege.
- Endpoint Service Configuration: The third-party SaaS application creates an endpoint service. The company, as the consumer, then creates an interface VPC endpoint in their VPC and connects it to the provider’s endpoint service.
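The consumer-side step can be sketched with boto3. This is a minimal sketch under assumptions: the service name, VPC, subnet, and security group IDs are placeholders, and a real boto3 EC2 client would be passed in.

```python
# Sketch of option A: creating a PrivateLink interface endpoint from the
# consumer (company) account. All IDs below are illustrative placeholders.

def build_endpoint_request(vpc_id, service_name, subnet_ids, sg_ids):
    """Assemble the arguments for EC2's CreateVpcEndpoint call."""
    return {
        "VpcId": vpc_id,
        "ServiceName": service_name,       # the SaaS provider's endpoint service name
        "VpcEndpointType": "Interface",    # PrivateLink interface endpoint
        "SubnetIds": list(subnet_ids),
        "SecurityGroupIds": list(sg_ids),  # least privilege: allow only the SaaS API ports
    }

def create_endpoint(ec2_client, request):
    # Run from the company's (consumer) account.
    return ec2_client.create_vpc_endpoint(**request)

request = build_endpoint_request(
    "vpc-0abc1234",                                    # placeholder
    "com.amazonaws.vpce.us-east-1.vpce-svc-0example",  # placeholder
    ["subnet-0abc1234"],
    ["sg-0abc1234"],
)
```

The security group attached here is where least privilege is enforced: its inbound rules should permit only the application subnets and only the ports the SaaS API actually uses.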
Here’s why the other options are less suitable:
- B. AWS Site-to-Site VPN: Site-to-Site VPN tunnels are encrypted, but the traffic traverses the public internet, which directly violates the mandate for private connectivity that does not traverse the internet. VPNs also add tunnel-management operational overhead that PrivateLink avoids.
- C. VPC Peering: VPC peering allows direct network connectivity between VPCs, but it does not offer the same level of control and isolation as PrivateLink. Peering requires non-overlapping CIDR ranges between the two VPCs and exposes the full network, not just the SaaS API. More importantly, VPC peering is generally not recommended between VPCs managed by different organizations, as in this scenario involving a third-party SaaS provider.
- D. AWS PrivateLink Endpoint Service creation by the company: The company needs to consume the service, not provide it. Therefore, the third-party SaaS provider has to create and manage the Endpoint Service. The company can only create the Endpoint.
Authoritative Links:
Interface VPC Endpoints (AWS PrivateLink): https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html
AWS PrivateLink: https://aws.amazon.com/privatelink/
Question.13 A company needs to implement a patching process for its servers. The on-premises servers and Amazon EC2 instances use a variety of tools to perform patching. Management requires a single report showing the patch status of all the servers and instances. Which set of actions should a solutions architect take to meet these requirements?
(A) Use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch compliance reports.
(B) Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use Amazon QuickSight integration with OpsWorks to generate patch compliance reports.
(C) Use an Amazon EventBridge rule to apply patches by scheduling an AWS Systems Manager patch remediation job. Use Amazon Inspector to generate patch compliance reports.
(D) Use AWS OpsWorks to manage patches on the on-premises servers and EC2 instances. Use AWS X-Ray to post the patch status to AWS Systems Manager OpsCenter to generate patch compliance reports.
Answer: A
Explanation:
The best approach to achieve a unified patching report across on-premises servers and EC2 instances is to leverage AWS Systems Manager. Systems Manager is designed for centralized management of hybrid environments, encompassing both AWS and on-premises resources. Option A directly uses Systems Manager for patch management on both on-premises servers and EC2 instances, providing a single pane of glass for controlling the patching process.
Crucially, Systems Manager offers built-in reporting capabilities through its Patch Manager feature. These capabilities include the generation of patch compliance reports, satisfying the requirement of a single report showing patch statuses for all servers and instances.
Options B, C, and D are less suitable. OpsWorks (B and D) is primarily focused on application management and might not be the best choice for a comprehensive patching solution across both environments. Furthermore, its integration with QuickSight or X-Ray for reporting, while possible, is not as straightforward or natively integrated as Systems Manager’s reporting features. EventBridge and Inspector (C) could be parts of a patching strategy, but EventBridge is primarily for event-driven automation, and Inspector focuses on security vulnerabilities, not necessarily patching. The primary focus should be on centralized management and reporting of patch status. Relying on Inspector for the patch compliance reports isn’t its core functionality.
By centralizing patch management and leveraging its reporting capabilities, Systems Manager streamlines the process of creating and maintaining a single, comprehensive patching report. This reduces complexity and simplifies the monitoring of patching efforts across the entire infrastructure.
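Pulling that single compliance view together can be sketched with boto3. The aggregation helper is plain Python; `fetch_patch_summaries` assumes a boto3 SSM client and that on-premises servers are registered as hybrid-activated managed nodes.

```python
# Sketch: one patch-compliance report across EC2 and on-premises managed
# nodes via Systems Manager Compliance (ComplianceType "Patch").

def summarize(compliance_items):
    """Count compliant vs. non-compliant managed nodes from
    ListResourceComplianceSummaries-shaped records."""
    report = {"COMPLIANT": 0, "NON_COMPLIANT": 0}
    for item in compliance_items:
        report[item["Status"]] = report.get(item["Status"], 0) + 1
    return report

def fetch_patch_summaries(ssm_client):
    # Paginate through all managed nodes' patch-compliance summaries.
    paginator = ssm_client.get_paginator("list_resource_compliance_summaries")
    items = []
    for page in paginator.paginate(
        Filters=[{"Key": "ComplianceType", "Values": ["Patch"], "Type": "EQUAL"}]
    ):
        items.extend(page["ResourceComplianceSummaryItems"])
    return summarize(items)
```

Because hybrid-activated on-premises servers appear as managed nodes alongside EC2 instances, a single call path like this covers both environments, which is exactly the "single report" requirement.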
Question.14 A company is running an application on several Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The load on the application varies throughout the day, and EC2 instances are scaled in and out on a regular basis. Log files from the EC2 instances are copied to a central Amazon S3 bucket every 15 minutes. The security team discovers that log files are missing from some of the terminated EC2 instances. Which set of actions will ensure that log files are copied to the central S3 bucket from the terminated EC2 instances?
(A) Create a script to copy log files to Amazon S3, and store the script in a file on the EC2 instance. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to send ABANDON to the Auto Scaling group to prevent termination, run the script to copy the log files, and terminate the instance using the AWS SDK.
(B) Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to call the AWS Systems Manager API SendCommand operation to run the document to copy the log files and send CONTINUE to the Auto Scaling group to terminate the instance.
(C) Change the log delivery rate to every 5 minutes. Create a script to copy log files to Amazon S3, and add the script to EC2 instance user data. Create an Amazon EventBridge rule to detect EC2 instance termination. Invoke an AWS Lambda function from the EventBridge rule that uses the AWS CLI to run the user-data script to copy the log files and terminate the instance.
(D) Create an AWS Systems Manager document with a script to copy log files to Amazon S3. Create an Auto Scaling lifecycle hook that publishes a message to an Amazon Simple Notification Service (Amazon SNS) topic. From the SNS notification, call the AWS Systems Manager API SendCommand operation to run the document to copy the log files and send ABANDON to the Auto Scaling group to terminate the instance.
Answer: B
Explanation:
The correct answer is B. Here’s why:
- The Problem: EC2 instances in an Auto Scaling group are being terminated before their log files are copied to S3, resulting in data loss. We need a mechanism to reliably copy logs during the termination process.
- Why B is the Best Solution:
- AWS Systems Manager (SSM) Documents: SSM Documents allow you to define automated tasks to be executed on EC2 instances. This is ideal for the log copying task. Storing the script as an SSM document ensures consistent execution and easier management compared to embedding it directly within instance configuration or Lambda.
- Auto Scaling Lifecycle Hooks: Lifecycle hooks provide a pause in the instance termination process, allowing us to execute tasks before the instance is fully terminated. This is crucial to ensure log files are copied before the instance disappears.
- EventBridge Rule & Lambda: EventBridge detects the autoscaling:EC2_INSTANCE_TERMINATING event, triggering a Lambda function. This function serves as the orchestrator, connecting the lifecycle hook with the SSM document execution.
- Lambda’s Role: The Lambda function calls the SSM SendCommand API operation to execute the pre-defined SSM document on the terminating instance.
- CONTINUE Action: After the SSM document (log copy script) completes, the Lambda function sends a CONTINUE signal to the Auto Scaling group. This allows the instance termination process to proceed gracefully after the logs have been successfully copied.
- Why other options are incorrect:
- A: Holding the instance termination with ABANDON and manually terminating with the SDK is risky. If the Lambda fails, the instance could remain running indefinitely, incurring costs. Furthermore, manually terminating the instance bypasses the Auto Scaling group’s management, potentially leading to inconsistencies.
- C: User data scripts run during instance launch, not termination. Trying to trigger user data scripts during termination via EventBridge and Lambda is ineffective and not the intended use case. Also, shortening the log delivery interval is a workaround, not a guaranteed solution.
- D: Using SNS for this specific task introduces unnecessary complexity. SNS is primarily for fan-out notifications. Directly calling the SSM API from the Lambda function triggered by EventBridge is more efficient and reliable. Furthermore, using ABANDON as in option A has the same issues.
In summary, Option B provides a reliable and automated solution by leveraging SSM Documents, Auto Scaling lifecycle hooks, EventBridge, and Lambda to guarantee log files are copied before instance termination.
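The orchestration in option B can be sketched as a Lambda handler. This is an illustrative sketch: the SSM document name is a hypothetical placeholder, the event fields match the EC2_INSTANCE_TERMINATING lifecycle event EventBridge delivers, and the boto3 clients are passed in rather than created.

```python
# Sketch of the Lambda function from option B. "CopyLogsToS3" is a
# hypothetical SSM document name containing the log-copy script.

DOCUMENT_NAME = "CopyLogsToS3"  # placeholder, not from the question

def parse_lifecycle_event(event):
    """Extract the fields needed to run SendCommand and release the hook
    from an EC2_INSTANCE_TERMINATING EventBridge event."""
    d = event["detail"]
    return {
        "instance_id": d["EC2InstanceId"],
        "hook_name": d["LifecycleHookName"],
        "asg_name": d["AutoScalingGroupName"],
        "token": d["LifecycleActionToken"],
    }

def handler(event, context, ssm_client=None, asg_client=None):
    info = parse_lifecycle_event(event)
    # Run the log-copy document on the terminating instance.
    ssm_client.send_command(
        InstanceIds=[info["instance_id"]],
        DocumentName=DOCUMENT_NAME,
    )
    # In practice, wait for the command to finish (e.g. poll
    # get_command_invocation) before releasing the hook, then:
    asg_client.complete_lifecycle_action(
        LifecycleHookName=info["hook_name"],
        AutoScalingGroupName=info["asg_name"],
        LifecycleActionToken=info["token"],
        LifecycleActionResult="CONTINUE",  # let termination proceed
        InstanceId=info["instance_id"],
    )
```

Passing CONTINUE via CompleteLifecycleAction (rather than letting the hook time out or sending ABANDON) is what keeps the Auto Scaling group in control of the termination once the logs are safely in S3.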
Authoritative Links:
AWS Lambda: https://aws.amazon.com/lambda/
AWS Systems Manager: https://aws.amazon.com/systems-manager/
Auto Scaling Lifecycle Hooks: https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html
Amazon EventBridge: https://aws.amazon.com/eventbridge/
Question.15 A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The company’s applications and databases are running in Account B. A solutions architect will deploy a two-tier application in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the Amazon RDS endpoint was created in a private hosted zone for Amazon Route 53. During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance. The solutions architect confirmed that the record set was created correctly in Route 53. Which combination of steps should the solutions architect take to resolve this issue? (Choose two.)
(A) Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance’s private IP in the private hosted zone.
(B) Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.
(C) Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
(D) Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.
(E) Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.
Answer: CE
Explanation:
Let’s analyze why options C and E are the correct solutions to resolve the DNS resolution issue, and why the other options are not ideal.
The core problem is that the EC2 instance in Account B cannot resolve the db.example.com CNAME record, even though it exists in the Route 53 private hosted zone in Account A. This means the VPC in Account B is not authorized to query the private hosted zone in Account A.
Option C: Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B. This is a necessary step to enable cross-account DNS resolution. When you create a private hosted zone, it’s initially only associated with the VPCs in the same account. To allow VPCs in other accounts to resolve records within the hosted zone, you need to create an authorization. The account that owns the VPC (Account B) can then associate its VPC with the hosted zone using the authorization ID provided by the hosted zone owner (Account A).
Option E: Associate the new VPC in Account B with the hosted zone in Account A. This is the action taken by the account that owns the VPC (Account B) after the authorization has been created in Account A. This step grants the VPC in Account B permission to query the private hosted zone in Account A. The second part, "Delete the association authorization in Account A," is also appropriate: the authorization is needed only to create the association, AWS recommends deleting it once the association is established, and deleting it does not affect existing associations.
Now, let’s analyze why the other options are incorrect:
- Option A: Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance’s private IP in the private hosted zone. While this would solve the immediate problem by hosting the database in the same VPC, it’s not a practical long-term solution. It involves significant infrastructure changes and potentially refactoring the application to interact with the new database instance. It doesn’t address the underlying issue of cross-account DNS resolution. It bypasses the centralized DNS management in Route 53, complicating future management.
- Option B: Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file. This is a fragile and non-scalable solution. Manually adding the IP address of the RDS endpoint to /etc/resolv.conf breaks the principle of using DNS for service discovery and will lead to problems if the RDS instance is ever replaced or fails over (its IP address might change). It is bad practice to manually modify /etc/resolv.conf on EC2 instances, and updates to the underlying operating system can potentially overwrite the manual changes.
- Option D: Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts. Creating another private hosted zone is not the solution, because it introduces further complexity in DNS management across accounts. Route 53 does not provide cross-account replication of private hosted zones, and maintaining duplicate zones is likely to introduce DNS inconsistencies. The goal is to use the existing DNS records defined in Account A.
In summary, creating an authorization in Account A (Option C) and associating the VPC in Account B with the hosted zone using this authorization (Option E) is the correct approach to enable cross-account DNS resolution using Route 53 private hosted zones, without impacting existing infrastructure.
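The authorize-then-associate sequence can be sketched with boto3. This is a minimal sketch under assumptions: the hosted zone ID, region, and VPC ID are placeholders, and the two Route 53 clients carry credentials for Account A and Account B respectively.

```python
# Sketch of options C and E. Placeholder IDs throughout; each call runs
# in the account noted in the comment.

def vpc_spec(region, vpc_id):
    """Build the VPC structure Route 53's association APIs expect."""
    return {"VPCRegion": region, "VPCId": vpc_id}

def authorize_and_associate(r53_account_a, r53_account_b, zone_id, vpc):
    # Step C, in Account A (owns the private hosted zone):
    r53_account_a.create_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)
    # Step E, in Account B (owns the new VPC):
    r53_account_b.associate_vpc_with_hosted_zone(HostedZoneId=zone_id, VPC=vpc)
    # Cleanup, back in Account A; existing associations are unaffected:
    r53_account_a.delete_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)

vpc = vpc_spec("us-east-1", "vpc-0abc1234")  # placeholder values
```

After the association, instances in the Account B VPC resolve db.example.com through the Account A private hosted zone with no changes to the instances themselves.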