Question.51 A health insurance company stores personally identifiable information (PII) in an Amazon S3 bucket. The company uses server-side encryption with S3 managed encryption keys (SSE-S3) to encrypt the objects. According to a new requirement, all current and future objects in the S3 bucket must be encrypted by keys that the company’s security team manages. The S3 bucket does not have versioning enabled. Which solution will meet these requirements?
(A) In the S3 bucket properties, change the default encryption to SSE-S3 with a customer managed key. Use the AWS CLI to re-upload all objects in the S3 bucket. Set an S3 bucket policy to deny unencrypted PutObject requests.
(B) In the S3 bucket properties, change the default encryption to server-side encryption with AWS KMS managed encryption keys (SSE-KMS). Set an S3 bucket policy to deny unencrypted PutObject requests. Use the AWS CLI to re-upload all objects in the S3 bucket.
(C) In the S3 bucket properties, change the default encryption to server-side encryption with AWS KMS managed encryption keys (SSE-KMS). Set an S3 bucket policy to automatically encrypt objects on GetObject and PutObject requests.
(D) In the S3 bucket properties, change the default encryption to AES-256 with a customer managed key. Attach a policy to deny unencrypted PutObject requests to any entities that access the S3 bucket. Use the AWS CLI to re-upload all objects in the S3 bucket.
Answer: B
Explanation:
The requirement is to encrypt S3 objects with keys managed by the company’s security team, replacing the current SSE-S3 encryption. SSE-KMS is the correct encryption method for this because it allows customers to use AWS Key Management Service (KMS) to manage the encryption keys. This gives the company’s security team control over the key lifecycle, including rotation, access policies, and auditing. Option A is incorrect because it refers to “SSE-S3 with a customer managed key,” which is not a valid option. SSE-S3 uses keys managed entirely by AWS. Option D mentions AES-256 with a customer-managed key, which is not a valid S3 encryption type. AES-256 is the encryption algorithm used by SSE-S3.
To implement the solution in Option B: First, the default encryption for the S3 bucket must be changed to SSE-KMS. This ensures that all newly uploaded objects will be encrypted using the KMS key. Then, the existing objects, currently encrypted with SSE-S3, need to be re-encrypted with SSE-KMS; re-uploading them with the AWS CLI achieves this re-encryption. Finally, a bucket policy that denies unencrypted PutObject requests prevents future uploads without encryption. Option C is not correct because S3 bucket policies do not automatically encrypt objects on GetObject or PutObject requests. Bucket policies control access and enforce conditions on requests: they can deny requests that do not specify SSE-KMS, which enforces the requirement, but they do not perform the encryption themselves.
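As a rough illustration, the following boto3 sketch mirrors the Option B steps; the bucket name and KMS key ARN are hypothetical, and the question itself only calls for the equivalent AWS CLI commands:

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-pii-bucket"  # hypothetical bucket name
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example"  # hypothetical customer managed key

# 1. Switch the bucket's default encryption to SSE-KMS with the security team's key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": KEY_ARN,
            }
        }]
    },
)

# 2. Re-encrypt existing SSE-S3 objects by copying each object onto itself with
#    SSE-KMS (equivalent to re-uploading the objects with the AWS CLI).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=BUCKET,
            Key=obj["Key"],
            CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId=KEY_ARN,
        )

# 3. Deny any PutObject request that does not specify SSE-KMS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyNonKmsPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
        },
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```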
Refer to the AWS documentation on server-side encryption to understand the different encryption options and the role of KMS:
https://docs.aws.amazon.com/kms/latest/developerguide/overview.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
Question.52 A company is running a web application in the AWS Cloud. The application consists of dynamic content that is created on a set of Amazon EC2 instances. The EC2 instances run in an Auto Scaling group that is configured as a target group for an Application Load Balancer (ALB). The company is using an Amazon CloudFront distribution to distribute the application globally. The CloudFront distribution uses the ALB as an origin. The company uses Amazon Route 53 for DNS and has created an A record of www.example.com for the CloudFront distribution. A solutions architect must configure the application so that it is highly available and fault tolerant. Which solution meets these requirements?
(A) Provision a full, secondary application deployment in a different AWS Region. Update the Route 53 A record to be a failover record. Add both of the CloudFront distributions as values. Create Route 53 health checks.
(B) Provision an ALB, an Auto Scaling group, and EC2 instances in a different AWS Region. Update the CloudFront distribution, and create a second origin for the new ALB. Create an origin group for the two origins. Configure one origin as primary and one origin as secondary.
(C) Provision an Auto Scaling group and EC2 instances in a different AWS Region. Create a second target for the new Auto Scaling group in the ALB. Set up the failover routing algorithm on the ALB.
(D) Provision a full, secondary application deployment in a different AWS Region. Create a second CloudFront distribution, and add the new application setup as an origin. Create an AWS Global Accelerator accelerator. Add both of the CloudFront distributions as endpoints.
Answer: B
Explanation:
The correct answer is B. Here’s a detailed justification:
The primary requirement is to achieve high availability and fault tolerance for a web application distributed globally via CloudFront. This necessitates redundancy and the ability to failover to a healthy environment in case of regional failures.
Option B offers the most suitable solution by creating a complete secondary deployment in a separate AWS Region. This secondary deployment mirrors the primary deployment, consisting of an ALB, an Auto Scaling group, and EC2 instances. This ensures that a fully functional backup is readily available.
Crucially, it updates the existing CloudFront distribution to include a second origin, the ALB from the secondary region. It then creates an origin group within CloudFront. This is the key to the failover mechanism. The origin group allows you to designate one origin as primary and the other as secondary. CloudFront automatically routes traffic to the secondary origin if it detects that the primary origin is unavailable or unhealthy, effectively providing automatic failover.
Route 53’s role is already established for DNS resolution to the initial CloudFront distribution. By leveraging CloudFront origin groups, the DNS configuration remains unchanged, simplifying the failover process. CloudFront is responsible for intelligent routing between origins based on health checks and pre-defined configuration.
Option A is less ideal. While using Route 53 failover records could work, relying solely on Route 53 for failover can be slower than utilizing CloudFront’s origin groups, as DNS propagation delays can impact recovery time.
Option C is insufficient. Adding a second target to the ALB in a different region doesn’t automatically achieve regional failover. ALBs are regional resources, and a single ALB cannot span regions.
Option D is unnecessarily complex and potentially more expensive. Creating a second CloudFront distribution and using AWS Global Accelerator introduces additional infrastructure and configuration without providing a significant advantage over the CloudFront origin group approach. The Global Accelerator would add an additional layer of routing that might increase latency compared to leveraging CloudFront’s inherent capabilities.
In summary, the solution in option B effectively uses the native features of CloudFront (origin groups) to provide a seamless failover between the primary and secondary application deployments, meeting the requirements for high availability and fault tolerance with minimal complexity.
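For illustration, the origin-group change might look roughly like the following boto3 sketch; the distribution ID, origin IDs, and failover status codes are placeholders, and the new secondary ALB origin must already exist in the distribution's Origins list:

```python
import boto3

cloudfront = boto3.client("cloudfront")
DISTRIBUTION_ID = "E1EXAMPLE"  # hypothetical distribution ID

# Fetch the current configuration and its ETag (required for updates).
resp = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config = resp["DistributionConfig"]
etag = resp["ETag"]

# Define an origin group: traffic fails over from the primary Region's ALB
# origin to the secondary Region's ALB origin on the listed status codes.
config["OriginGroups"] = {
    "Quantity": 1,
    "Items": [{
        "Id": "alb-failover-group",
        "FailoverCriteria": {
            "StatusCodes": {"Quantity": 4, "Items": [500, 502, 503, 504]}
        },
        "Members": {
            "Quantity": 2,
            "Items": [
                {"OriginId": "primary-region-alb"},    # illustrative origin IDs
                {"OriginId": "secondary-region-alb"},
            ],
        },
    }],
}

# Point the default cache behavior at the origin group instead of a single origin.
config["DefaultCacheBehavior"]["TargetOriginId"] = "alb-failover-group"

cloudfront.update_distribution(
    Id=DISTRIBUTION_ID, DistributionConfig=config, IfMatch=etag
)
```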
Authoritative Links:
- Amazon CloudFront Origin Groups: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html
- Application Load Balancer: https://aws.amazon.com/elasticloadbalancing/application-load-balancer/
- Auto Scaling: https://aws.amazon.com/autoscaling/
Question.53 A company has an organization in AWS Organizations that has a large number of AWS accounts. One of the AWS accounts is designated as a transit account and has a transit gateway that is shared with all of the other AWS accounts. AWS Site-to-Site VPN connections are configured between all of the company’s global offices and the transit account. The company has AWS Config enabled on all of its accounts. The company’s networking team needs to centrally manage a list of internal IP address ranges that belong to the global offices. Developers will reference this list to gain access to their applications securely. Which solution meets these requirements with the LEAST amount of operational overhead?
(A) Create a JSON file that is hosted in Amazon S3 and that lists all of the internal IP address ranges. Configure an Amazon Simple Notification Service (Amazon SNS) topic in each of the accounts that can be invoked when the JSON file is updated. Subscribe an AWS Lambda function to the SNS topic to update all relevant security group rules with the updated IP address ranges.
(B) Create a new AWS Config managed rule that contains all of the internal IP address ranges. Use the rule to check the security groups in each of the accounts to ensure compliance with the list of IP address ranges. Configure the rule to automatically remediate any noncompliant security group that is detected.
(C) In the transit account, create a VPC prefix list with all of the internal IP address ranges. Use AWS Resource Access Manager to share the prefix list with all of the other accounts. Use the shared prefix list to configure security group rules in the other accounts.
(D) In the transit account, create a security group with all of the internal IP address ranges. Configure the security groups in the other accounts to reference the transit account’s security group by using a nested security group reference of “/sg-1a2b3c4d”.
Answer: C
Explanation:
The correct answer is C because it provides a centralized, scalable solution with the least operational overhead for managing IP address ranges across multiple AWS accounts within an organization.
Here’s a detailed justification:
- Centralized Management: VPC prefix lists allow you to group and manage collections of CIDR blocks as a single object. By creating the prefix list in the transit account, the networking team can maintain a single source of truth for internal IP address ranges.
- Sharing with RAM: AWS Resource Access Manager (RAM) enables you to securely share AWS resources across AWS accounts, within your organization. Sharing the VPC prefix list with all other accounts allows these accounts to use it in their security group rules.
- Simplified Security Group Management: By referencing the shared prefix list in security group rules, developers can automatically inherit any updates to the IP address ranges. This eliminates the need to manually update security groups in each account, reducing operational overhead.
- Least Operational Overhead: Option A involves S3, SNS, Lambda, and custom code for security group updates, which introduces significant management overhead. Option B requires custom AWS Config rule development and remediation, which is more complex. Option D relies on a nested security group reference across account boundaries, which is not natively supported for this purpose and is therefore infeasible. Option C directly uses AWS features built for resource sharing.
- Scalability: Prefix lists can be updated easily, and changes are automatically propagated to all security groups that reference them. This ensures that the security posture remains consistent as the company’s network evolves.
In summary, option C leverages AWS’s native resource sharing capabilities (RAM and VPC prefix lists) to provide a scalable, centralized, and operationally efficient solution for managing IP address ranges across multiple AWS accounts.
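A hedged boto3 sketch of the three moving parts; the CIDR blocks, organization ARN, and security group ID are illustrative only:

```python
import boto3

ec2 = boto3.client("ec2")   # run in the transit account
ram = boto3.client("ram")   # run in the transit account

# 1. In the transit account, create a managed prefix list with the office ranges.
prefix_list = ec2.create_managed_prefix_list(
    PrefixListName="global-office-ranges",
    AddressFamily="IPv4",
    MaxEntries=50,
    Entries=[
        {"Cidr": "10.10.0.0/16", "Description": "London office"},   # illustrative
        {"Cidr": "10.20.0.0/16", "Description": "Sydney office"},   # illustrative
    ],
)["PrefixList"]

# 2. Share the prefix list with the organization through AWS RAM.
ram.create_resource_share(
    name="office-prefix-list-share",
    resourceArns=[prefix_list["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-example"],  # hypothetical org ARN
    allowExternalPrincipals=False,
)

# 3. In a consuming account, reference the shared prefix list in a security
#    group rule instead of hard-coding CIDR blocks.
consumer_ec2 = boto3.client("ec2")  # run with the consuming account's credentials
consumer_ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "PrefixListIds": [{
            "PrefixListId": prefix_list["PrefixListId"],
            "Description": "Internal office ranges",
        }],
    }],
)
```

When the networking team later adds or removes an office range in the prefix list, every security group rule that references the list picks up the change automatically, which is the source of the low operational overhead.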
Supporting Links:
AWS Resource Access Manager (RAM): https://aws.amazon.com/ram/
VPC Prefix Lists: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-prefix-lists.html
Question.54 A company runs a new application as a static website in Amazon S3. The company has deployed the application to a production AWS account and uses Amazon CloudFront to deliver the website. The website calls an Amazon API Gateway REST API. An AWS Lambda function backs each API method. The company wants to create a CSV report every 2 weeks to show each API Lambda function’s recommended configured memory, recommended cost, and the price difference between current configurations and the recommendations. The company will store the reports in an S3 bucket. Which solution will meet these requirements with the LEAST development time?
(A) Create a Lambda function that extracts metrics data for each API Lambda function from Amazon CloudWatch Logs for the 2-week period. Collate the data into tabular format. Store the data as a .csv file in an S3 bucket. Create an Amazon EventBridge rule to schedule the Lambda function to run every 2 weeks.
(B) Opt in to AWS Compute Optimizer. Create a Lambda function that calls the ExportLambdaFunctionRecommendations operation. Export the .csv file to an S3 bucket. Create an Amazon EventBridge rule to schedule the Lambda function to run every 2 weeks.
(C) Opt in to AWS Compute Optimizer. Set up enhanced infrastructure metrics. Within the Compute Optimizer console, schedule a job to export the Lambda recommendations to a .csv file. Store the file in an S3 bucket every 2 weeks.
(D) Purchase the AWS Business Support plan for the production account. Opt in to AWS Compute Optimizer for AWS Trusted Advisor checks. In the Trusted Advisor console, schedule a job to export the cost optimization checks to a .csv file. Store the file in an S3 bucket every 2 weeks.
Answer: B
Explanation:
The correct answer is B because it leverages AWS Compute Optimizer to provide the desired recommendations with minimal development effort. Here’s a detailed justification:
- Requirement: The company needs a report with recommended memory, cost, and price difference for each API Lambda function, generated every two weeks.
- AWS Compute Optimizer: This service analyzes the configuration and utilization metrics of AWS resources, including Lambda functions, and provides optimization recommendations.
- ExportLambdaFunctionRecommendations API: AWS Compute Optimizer offers an API that allows exporting its recommendations in CSV format, directly addressing the requirement for a CSV report. https://docs.aws.amazon.com/compute-optimizer/latest/APIReference/API_ExportLambdaFunctionRecommendations.html
- Lambda Function & EventBridge: A Lambda function can call the ExportLambdaFunctionRecommendations API and store the resulting CSV file in an S3 bucket. Amazon EventBridge can schedule this Lambda function to run every two weeks, which automates the report generation process (see the sketch after this list).
- Least Development Time: This solution uses a pre-built service and a simple API call, minimizing the amount of custom code that needs to be written and maintained.
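Below is a minimal, hedged sketch of such a handler in Python with boto3; the bucket name and key prefix are placeholders, and the destination bucket must have a policy that allows Compute Optimizer to write the export:

```python
import boto3

compute_optimizer = boto3.client("compute-optimizer")

REPORT_BUCKET = "example-lambda-recommendation-reports"  # hypothetical bucket name


def handler(event, context):
    # Ask Compute Optimizer to write a CSV of Lambda function recommendations
    # (current and recommended memory, plus cost-related columns) to S3.
    response = compute_optimizer.export_lambda_function_recommendations(
        s3DestinationConfig={
            "bucket": REPORT_BUCKET,
            "keyPrefix": "lambda-recommendations/",
        },
        fileFormat="Csv",
    )
    # The export runs asynchronously; the job ID can be passed to
    # describe_recommendation_export_jobs to check for completion.
    return response["jobId"]
```

An EventBridge rule with a `rate(14 days)` schedule expression targeting this function covers the every-two-weeks requirement without any further custom code.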
Why other options are less suitable:
- A: Extracting metrics from CloudWatch Logs and manually calculating recommendations would require significant development effort and potentially complex logic.
- C: While Compute Optimizer can export recommendations, scheduling the export job within the console every two weeks involves manual intervention, which is against the automation goal.
- D: AWS Trusted Advisor’s cost optimization checks provide general cost-saving recommendations but are not as granular or specific to Lambda function memory allocation and cost as Compute Optimizer. Purchasing a Business Support plan is also an unnecessary expense for this task, and Trusted Advisor does not offer a scheduled CSV export as seamless as the suggested solution.
In summary, Option B provides the most efficient and automated solution by leveraging AWS Compute Optimizer’s API to generate the required report with minimal development time.
Question.55 A company’s factory and automation applications are running in a single VPC. More than 20 applications run on a combination of Amazon EC2, Amazon Elastic Container Service (Amazon ECS), and Amazon RDS. The company has software engineers spread across three teams. One of the three teams owns each application, and each team is responsible for the cost and performance of all of its applications. Team resources have tags that represent their application and team. The teams use IAM access for daily activities. The company needs to determine which costs on the monthly AWS bill are attributable to each application or team. The company also must be able to create reports to compare costs from the last 12 months and to help forecast costs for the next 12 months. A solutions architect must recommend an AWS Billing and Cost Management solution that provides these cost reports. Which combination of actions will meet these requirements? (Choose three.)
(A) Activate the user-defined cost allocation tags that represent the application and the team.
(B) Activate the AWS generated cost allocation tags that represent the application and the team.
(C) Create a cost category for each application in Billing and Cost Management.
(D) Activate IAM access to Billing and Cost Management.
(E) Create a cost budget.
(F) Enable Cost Explorer.
Answer: ACF
Explanation:
The correct answer is ACF. Here’s why:
- A. Activate the user-defined cost allocation tags that represent the application and the team: Cost allocation tags are essential for tracking costs associated with specific resources. User-defined tags are created and applied by the user to resources, enabling the organization to categorize costs based on application and team. Activating these tags ensures that cost information is associated with these tags and available for reporting. https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
- C. Create a cost category for each application in Billing and Cost Management: Cost Categories allow you to group costs based on dimensions such as tags, accounts, and services. By creating a cost category for each application, the company can aggregate and analyze costs specifically related to each application, providing a clear breakdown of expenses. https://docs.aws.amazon.com/cost-management/latest/userguide/cost-categories.html
- F. Enable Cost Explorer: Cost Explorer is a tool within AWS Billing and Cost Management that allows you to visualize, understand, and manage your AWS costs and usage over time. It provides features for analyzing cost trends, identifying cost drivers, and forecasting future costs, thus meeting the requirement for creating reports to compare costs and forecast for the next 12 months. https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html
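To make the three actions concrete, here is a hedged boto3 sketch; the tag keys, cost category name, application value, and date ranges are illustrative, and enabling Cost Explorer itself is a one-time opt-in performed in the console:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer / Billing and Cost Management APIs

# A. Activate the user-defined cost allocation tags (tag keys are illustrative).
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {"TagKey": "application", "Status": "Active"},
        {"TagKey": "team", "Status": "Active"},
    ]
)

# C. Create a cost category that groups costs by an application tag value.
ce.create_cost_category_definition(
    Name="factory-application",
    RuleVersion="CostCategoryExpression.v1",
    Rules=[{
        "Value": "robot-telemetry",  # hypothetical application name
        "Rule": {"Tags": {"Key": "application", "Values": ["robot-telemetry"]}},
    }],
)

# F. With Cost Explorer enabled, compare monthly costs per cost category over
#    the past year and forecast the next 12 months (dates are illustrative).
history = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2024-01-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "COST_CATEGORY", "Key": "factory-application"}],
)
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2024-01-01", "End": "2025-01-01"},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
```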
Why other options are incorrect:
- E. Create a cost budget: Creating a budget helps monitor spending against a predefined limit. However, it does not directly create the cost allocation reports required to track costs by application and team, although a budget could use the cost categories created for each application.
- B. Activate the AWS generated cost allocation tags that represent the application and the team: AWS-generated tags (such as aws:createdBy) describe resource creation and management metadata; they do not represent the application and team context that the company defines with its own tags.
- D. Activate IAM access to Billing and Cost Management: While IAM access is important for controlling who can view billing data, it does not by itself produce the cost reports. IAM governs who can access cost data, not how costs are allocated and analyzed.