Question.21 A company is using an on-premises Active Directory service for user authentication. The company wants to use the same authentication service to sign in to the company’s AWS accounts, which are using AWS Organizations. AWS Site-to-Site VPN connectivity already exists between the on-premises environment and all the company’s AWS accounts. The company’s security policy requires conditional access to the accounts based on user groups and roles. User identities must be managed in a single location. Which solution will meet these requirements?
(A) Configure AWS IAM Identity Center (AWS Single Sign-On) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access controls (ABACs).
(B) Configure AWS IAM Identity Center (AWS Single Sign-On) by using IAM Identity Center as an identity source. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using IAM Identity Center permission sets.
(C) In one of the company’s AWS accounts, configure AWS Identity and Access Management (IAM) to use a SAML 2.0 identity provider. Provision IAM users that are mapped to the federated users. Grant access that corresponds to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM users.
(D) In one of the company’s AWS accounts, configure AWS Identity and Access Management (IAM) to use an OpenID Connect (OIDC) identity provider. Provision IAM roles that grant access to the AWS account for the federated users that correspond to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM roles.
Answer: A
Explanation:
The correct answer is A because it utilizes AWS IAM Identity Center (successor to AWS Single Sign-On) configured with Active Directory for centralized user authentication. This leverages the existing VPN connection. SAML 2.0 is the industry standard for federated identity, allowing Active Directory to assert user identities to AWS. SCIM v2.0 automates the provisioning and deprovisioning of users and groups from Active Directory to IAM Identity Center, keeping identity information synchronized. ABAC uses attributes (like user groups and roles from Active Directory) to define permissions, fulfilling the requirement for conditional access based on user groups and roles, without the complexity of managing individual IAM roles per user. IAM Identity Center integrates with AWS Organizations, enabling single sign-on access to multiple AWS accounts.
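The ABAC idea can be illustrated with a small sketch. This is plain Python, not a real IAM policy evaluator, and the tag names are hypothetical: the point is that access is decided by matching the principal’s attributes (synced from Active Directory via SCIM) against a policy condition, instead of maintaining one role or policy per user.

```python
# Minimal illustration of the ABAC model used with IAM Identity Center:
# a request is allowed when the principal's attributes (tags synced
# from Active Directory via SCIM) satisfy the policy's condition.
# Tag names are hypothetical, loosely modeled on an IAM "Condition"
# block such as {"StringEquals": {"aws:PrincipalTag/Team": "analytics"}}.

def abac_allows(principal_tags: dict, policy_condition: dict) -> bool:
    """Allow only when every tag required by the policy matches the principal."""
    return all(principal_tags.get(k) == v for k, v in policy_condition.items())

policy_condition = {"Team": "analytics", "Role": "engineer"}

alice = {"Team": "analytics", "Role": "engineer"}  # in the right AD groups
bob = {"Team": "marketing", "Role": "engineer"}    # wrong team

print(abac_allows(alice, policy_condition))  # True
print(abac_allows(bob, policy_condition))    # False
```

Because the decision depends only on attributes, adding a user to the right Active Directory group is all that is needed to grant access; no per-user policy changes are required.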
Option B is incorrect because using IAM Identity Center as an identity source doesn’t leverage the existing on-premises Active Directory, thus failing the requirement for using the existing authentication service.
Option C is incorrect because manually configuring IAM with SAML in a single account and then using cross-account IAM users is a complex and less scalable solution compared to using IAM Identity Center across the organization. User management becomes cumbersome as roles and users need to be maintained across multiple accounts.
Option D is incorrect because while OIDC can also be used for federation, SAML is generally preferred for integrating with Active Directory. Furthermore, relying on cross-account IAM roles still presents the same scalability challenges as option C and does not provide centralized user management.
Here are some useful links:
Attribute-Based Access Control (ABAC): https://docs.aws.amazon.com/IAM/latest/UserGuide/security-abac.html
AWS IAM Identity Center: https://aws.amazon.com/iam/identity-center/
SAML 2.0: https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language
SCIM: https://en.wikipedia.org/wiki/System_for_Cross-domain_Identity_Management
Question.22 A software company has deployed an application that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an Amazon DynamoDB table. The application is showing an increase in the number of errors during PUT requests. Most of the PUT calls come from a small number of clients that are authenticated with specific API keys. A solutions architect has identified that a large number of the PUT requests originate from one client. The API is noncritical, and clients can tolerate retries of unsuccessful calls. However, the errors are displayed to customers and are causing damage to the API’s reputation. What should the solutions architect recommend to improve the customer experience?
(A) Implement retry logic with exponential backoff and irregular variation in the client application. Ensure that the errors are caught and handled with descriptive error messages.
(B) Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error.
(C) Turn on API caching to enhance responsiveness for the production stage. Run 10-minute load tests. Verify that the cache capacity is appropriate for the workload.
(D) Implement reserved concurrency at the Lambda function level to provide the resources that are needed during sudden increases in traffic.
Answer: B
Explanation:
The best approach is B. Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error.
Here’s why:
The problem is that a small number of clients, particularly one, are overwhelming the API with PUT requests, leading to errors. Throttling directly addresses this by limiting the number of requests a specific client can make within a given timeframe. API Gateway usage plans are designed for this purpose. By setting a limit on the offending client’s API key through a usage plan tied to a specific API Gateway stage, you control the rate at which they can make requests.
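API Gateway describes its throttling in terms of a token-bucket model: the burst limit is the bucket size and the rate limit is the refill speed. A rough sketch of that model (not AWS code; the limits are illustrative numbers, not defaults):

```python
# Sketch of the token-bucket throttling model behind API Gateway usage
# plans: "rate" tokens are added per second, "burst" caps the bucket,
# and a request without an available token is rejected (HTTP 429).

class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate = rate           # tokens refilled per second
        self.capacity = burst      # maximum bucket size
        self.tokens = float(burst)
        self.last = 0.0            # timestamp of the last refill

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False               # caller should return HTTP 429

bucket = TokenBucket(rate=2, burst=3)
# Three requests at t=0 drain the burst; the fourth is throttled.
results = [bucket.allow(0.0) for _ in range(4)]
print(results)  # [True, True, True, False]
# After one second the bucket has refilled two tokens.
print(bucket.allow(1.0))  # True
```

Attaching such limits per API key through a usage plan means the one misbehaving client is slowed down without affecting the other clients.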
Crucially, the application already tolerates retries. When a client exceeds its request limit, API Gateway returns a 429 “Too Many Requests” error. The problem states the client’s errors are visible and damaging the API’s reputation. If the client application is properly designed to handle 429 errors gracefully (by implementing retry logic with backoff or displaying a user-friendly message), the user experience is improved.
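On the client side, graceful handling of a 429 looks roughly like the sketch below. The API call is a stub standing in for the real PUT request, and the function names are illustrative; the pattern shown is retry with exponential backoff plus jitter rather than surfacing the error to the customer.

```python
import random

# Sketch of client-side handling of HTTP 429: retry with exponential
# backoff plus full jitter instead of showing the error to the user.
# call_api() is a stub standing in for the real PUT request; here it
# is throttled twice and then succeeds.

_responses = iter([429, 429, 200])

def call_api() -> int:
    return next(_responses)

def put_with_backoff(max_attempts: int = 5, base_delay: float = 0.1):
    delays = []
    for attempt in range(max_attempts):
        status = call_api()
        if status != 429:
            return status, delays
        # Exponential backoff with full jitter; the wait is recorded
        # rather than slept so the sketch runs instantly.
        delays.append(random.uniform(0, base_delay * 2 ** attempt))
    raise RuntimeError("still throttled after retries")

status, delays = put_with_backoff()
print(status)       # 200
print(len(delays))  # 2 backoff waits before success
```

With this in place, a throttled request is retried quietly and the customer sees a successful call rather than an error page.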
Why other options are not as effective:
- A (Retry logic with exponential backoff): While retry logic is good practice, it doesn’t solve the root cause of the problem. If the client continues to send excessive requests, retries will only exacerbate the situation by placing even more load on the API, potentially leading to even more errors.
- C (API caching): Caching enhances read performance (GET requests), not write performance (PUT requests). Caching is irrelevant when dealing with an overload of PUT requests causing errors.
- D (Reserved concurrency): Reserved concurrency for Lambda could help avoid throttling at the Lambda function level. However, the problem states the excessive traffic originates from a single client. Reserved concurrency would not prevent that client from overwhelming the API, and potentially still exhausting downstream resources like DynamoDB. Throttling at the API Gateway before reaching the Lambda functions is the more targeted and efficient solution. It prevents the excessive requests from even reaching the backend.
In summary, throttling at the API Gateway with usage plans is the most direct and effective way to control the rate of requests from a specific client, thereby mitigating the overload and improving the overall customer experience, especially given that retries are acceptable.
Authoritative Links:
API Gateway Usage Plans: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-usage-plans.html
Amazon API Gateway Throttling: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
Question.23 A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region. A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run. Which solution will provide the LARGEST overall cost reduction while meeting these requirements?
(A) Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
(B) Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
(C) Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
(D) Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
Answer: A
Explanation:
The most cost-effective solution is A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
Here’s a detailed justification:
- Cost Reduction Focus: The core requirement is the largest overall cost reduction. Maintaining a dedicated shared file system on EC2 instances continuously is expensive due to the ongoing compute and storage costs.
- S3 for Data Storage: S3 is a highly scalable, durable, and cost-effective object storage service. Storing the 200 TB of data in S3 is significantly cheaper than storing it on continuously running EC2-backed file systems.
- S3 Intelligent-Tiering: Using S3 Intelligent-Tiering further optimizes cost by automatically moving data between frequent and infrequent access tiers based on access patterns. This is ideal because the data is only accessed once a month.
- FSx for Lustre for Performance: Amazon FSx for Lustre provides a high-performance, parallel file system optimized for data-intensive workloads. It can be easily integrated with S3, allowing you to import data from S3 when needed and export results back to S3.
- Lazy Loading with FSx for Lustre: Lazy loading is a crucial feature of FSx for Lustre. With lazy loading, only the data that is actually accessed by the application is copied from S3 to FSx for Lustre. This significantly reduces the data transfer costs and the amount of storage needed on the FSx for Lustre file system.
- Ephemeral FSx for Lustre: Creating the FSx for Lustre file system only when needed for the 72-hour job and deleting it afterward ensures you only pay for the file system resources when they are actively being used. This avoids the cost of maintaining a persistent file system.
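The lazy-loading behavior described above can be sketched as a cache that fetches an object from the backing store only on first access. This is plain Python standing in for FSx for Lustre and S3; the names and object counts are illustrative.

```python
# Sketch of lazy loading: the "file system" copies an object from the
# backing object store only the first time it is read, so a job that
# touches a subset of the files pulls only that subset across.

class LazyFileSystem:
    def __init__(self, backing_store: dict):
        self.backing_store = backing_store  # stands in for the S3 bucket
        self.cache = {}                     # stands in for FSx for Lustre
        self.fetches = 0                    # objects copied from "S3"

    def read(self, key: str) -> bytes:
        if key not in self.cache:
            self.cache[key] = self.backing_store[key]
            self.fetches += 1
        return self.cache[key]

bucket = {f"file-{i}": b"data" for i in range(1000)}  # 1000 objects in "S3"
fs = LazyFileSystem(bucket)

# The monthly job touches only 10 of the 1000 files (some repeatedly).
for i in range(10):
    fs.read(f"file-{i}")
    fs.read(f"file-{i}")

print(fs.fetches)  # 10 -- only the accessed subset was copied
```

Batch loading, by contrast, would correspond to copying all 1000 objects up front regardless of what the job actually reads.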
Why other options are less cost-effective:
- Option B (EBS Multi-Attach): A single EBS volume cannot hold 200 TB in the first place (the maximum volume size is 64 TiB, and only for io2 Block Express). Multi-Attach is also limited to io1/io2 volumes shared by at most 16 Nitro-based instances, far short of the hundreds of compute instances in the cluster. Even where feasible, provisioned EBS storage is considerably more expensive than S3.
- Option C (S3 Standard with Batch Loading): Batch loading to FSx for Lustre means copying all 200 TB from S3 to FSx for Lustre regardless of how much data is accessed in a month. Since only a subset is accessed, this wastes cost.
- Option D (AWS Storage Gateway): AWS Storage Gateway, specifically the File Gateway, provides a local cache for frequently accessed data. However, for a monthly job accessing a subset of a large dataset, the cache benefits are minimal. File Gateway also doesn’t offer the same level of performance as FSx for Lustre for data-intensive applications.
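A back-of-the-envelope comparison makes the storage-cost gap concrete. The per-GB prices below are assumed round figures for illustration only; actual prices vary by Region, volume type, and the Intelligent-Tiering access tier the data lands in.

```python
# Rough monthly storage cost for 200 TB under assumed list prices
# (illustrative only; real prices vary by Region and storage tier).

TB = 1024  # GB per TB
data_gb = 200 * TB

s3_standard_per_gb = 0.023  # assumed $/GB-month for S3 Standard
ebs_gp3_per_gb = 0.08       # assumed $/GB-month for gp3 EBS

s3_cost = data_gb * s3_standard_per_gb
ebs_cost = data_gb * ebs_gp3_per_gb

print(f"S3 Standard: ${s3_cost:,.0f}/month")
print(f"EBS gp3:     ${ebs_cost:,.0f}/month")
# Provisioned EBS storage alone costs several times more than S3,
# before counting the always-on EC2 instances serving the file system.
```

Under these assumptions the EBS approach costs several times more for storage alone, which is why option A's S3-plus-ephemeral-FSx pattern yields the largest reduction.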
Authoritative Links:
AWS Storage Gateway: https://aws.amazon.com/storagegateway/
Amazon S3 Storage Classes: https://aws.amazon.com/s3/storage-classes/
Amazon FSx for Lustre: https://aws.amazon.com/fsx/lustre/
Question.24 A company is developing a new service that will be accessed using TCP on a static port. A solutions architect must ensure that the service is highly available, has redundancy across Availability Zones, and is accessible using the DNS name my.service.com, which is publicly accessible. The service must use fixed address assignments so other companies can add the addresses to their allow lists. Assuming that resources are deployed in multiple Availability Zones in a single Region, which solution will meet these requirements?
(A) Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static TCP port. Register EC2 instances with the NLB. Create a new name server record set named my.service.com, and assign the Elastic IP addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their allow lists.
(B) Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create a Network Load Balancer (NLB) and expose the TCP port. Create a target group and assign the ECS cluster name to the NLB. Create a new A record set named my.service.com, and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of the ECS cluster to the other companies to add to their allow lists.
(C) Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named my.service.com, and assign the NLB DNS name to the record set.
(D) Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for each host in the cluster. Create an Application Load Balancer (ALB) and expose the static TCP port. Create a target group and assign the ECS service definition name to the ALB. Create a new CNAME record set and associate the public IP addresses to the record set. Provide the Elastic IP addresses of the Amazon EC2 instances to the other companies to add to their allow lists.
Answer: C
Explanation:
The correct answer is C because it leverages a Network Load Balancer (NLB) with Elastic IPs in each Availability Zone to provide a stable, publicly accessible endpoint for the service. The NLB is designed for high availability and fault tolerance across multiple Availability Zones, and the Elastic IPs ensure the service has static IP addresses that external companies can add to their allow lists. By using an A (alias) record pointing to the NLB’s DNS name, the DNS resolution is handled dynamically, allowing the NLB to manage the underlying instances without requiring updates to the DNS records when instances change.
Option A is incorrect because directly assigning Elastic IPs to EC2 instances does not provide true high availability and load balancing across Availability Zones. While instances are accessible individually, there’s no mechanism for automatic failover or traffic distribution.
Option B incorrectly suggests creating and assigning public IP addresses to the ECS cluster itself. ECS clusters do not inherently have public IPs assigned to them. ECS tasks running on EC2 launch types inherit the public IP addresses of the underlying instances. Assigning addresses to the cluster isn’t a standard practice. Moreover, while NLBs can work with ECS, assigning ECS cluster names to target groups is not how NLBs handle traffic distribution to containers.
Option D is incorrect because Application Load Balancers (ALBs) operate at Layer 7 (HTTP/HTTPS) and cannot load balance raw TCP traffic on a static port. While ECS can be integrated with ALBs, this option also fails the static IP requirement and suggests creating CNAME records that point to IP addresses, which is invalid: CNAME records must point to DNS names, not IP addresses.
In summary, option C fulfills all the requirements: high availability through an NLB, redundancy across Availability Zones, accessibility via a public DNS name, and the use of static IP addresses provided by the NLB associated with each AZ through its Elastic IPs.
Relevant Documentation:
Route 53 Alias Records: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
Network Load Balancer: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
Elastic IP Addresses: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
Question.25 A company uses an on-premises data analytics platform. The system is highly available in a fully redundant configuration across 12 servers in the company’s data center. The system runs scheduled jobs, both hourly and daily, in addition to one-time requests from users. Scheduled jobs can take between 20 minutes and 2 hours to finish running and have tight SLAs. The scheduled jobs account for 65% of the system usage. User jobs typically finish running in less than 5 minutes and have no SLA. The user jobs account for 35% of system usage. During system failures, scheduled jobs must continue to meet SLAs. However, user jobs can be delayed. A solutions architect needs to move the system to Amazon EC2 instances and adopt a consumption-based model to reduce costs with no long-term commitments. The solution must maintain high availability and must not affect the SLAs. Which solution will meet these requirements MOST cost-effectively?
(A) Split the 12 instances across two Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run four instances in each Availability Zone as Spot Instances.
(B) Split the 12 instances across three Availability Zones in the chosen AWS Region. In one of the Availability Zones, run all four instances as On-Demand Instances with Capacity Reservations. Run the remaining instances as Spot Instances.
(C) Split the 12 instances across three Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with a Savings Plan. Run two instances in each Availability Zone as Spot Instances.
(D) Split the 12 instances across three Availability Zones in the chosen AWS Region. Run three instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run one instance in each Availability Zone as a Spot Instance.
Answer: D
Explanation:
Here’s a detailed justification for why option D is the most cost-effective solution while meeting the requirements:
The primary goal is to migrate the on-premises data analytics platform to EC2, reduce costs, and maintain high availability, especially for scheduled jobs with tight SLAs. The solution must be consumption-based without long-term commitments. User jobs can tolerate delays during failures.
Option D proposes splitting the 12 instances across three Availability Zones (AZs) for high availability. This strategy protects against single AZ failures. By using three On-Demand Instances with Capacity Reservations per AZ (9 total), the solution ensures that a core capacity is always available to handle the critical scheduled jobs. Capacity Reservations guarantee that the instances will be available when needed, ensuring predictable performance and upholding the SLAs. Since scheduled jobs account for 65% of system usage (roughly 8 of the 12 instances), the nine reserved instances are enough to keep those jobs running even as Spot capacity fluctuates.
The remaining instance per AZ (3 total) is run as a Spot Instance. Spot Instances can be used for user jobs, since these are more fault-tolerant and can be delayed. Spot Instances provide significant cost savings when available.
Option A is less suitable because it uses only two Availability Zones, reducing availability, and runs eight of the twelve instances as Spot Instances, exposing scheduled jobs to interruption. Option B concentrates all of the guaranteed capacity in a single Availability Zone, so a failure of that zone would leave only interruptible Spot capacity for the SLA-bound scheduled jobs.
Option C uses Savings Plans and Spot Instances. Savings Plans, while providing discounts, involve a commitment (1 or 3 years), which contradicts the “no long-term commitments” requirement. Capacity Reservations ensure capacity is always available while allowing consumption-based pricing.
Therefore, Option D strikes the best balance between cost savings (using Spot Instances for fault-tolerant user jobs) and guaranteed capacity (using On-Demand Instances with Capacity Reservations for SLA-driven scheduled jobs), distributed across three AZs for high availability and cost-effectiveness.
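The capacity and cost reasoning above can be sanity-checked with simple arithmetic. The hourly price and the Spot discount below are placeholders for illustration (Spot discounts vary, often in the 60-90% range), not actual AWS prices.

```python
# Sanity check of option D's instance split. The On-Demand hourly
# rate and the Spot discount are assumed placeholder values.

total_instances = 12
scheduled_share = 0.65

# Scheduled jobs need roughly 65% of the fleet's capacity.
needed_for_sla = scheduled_share * total_instances
print(round(needed_for_sla, 1))  # 7.8 -> at least 8 instances

on_demand = 9  # 3 per AZ across 3 AZs, with Capacity Reservations
spot = 3       # 1 per AZ, for interruptible user jobs
assert on_demand >= needed_for_sla  # reserved capacity covers the SLAs

# Even losing one AZ leaves 6 reserved instances plus any surviving
# Spot capacity, and user jobs are allowed to be delayed.
od_price, spot_discount = 0.10, 0.7  # assumed $/hour and 70% discount
hourly = on_demand * od_price + spot * od_price * (1 - spot_discount)
print(f"${hourly:.2f}/hour")  # $0.99/hour vs $1.20 for 12 On-Demand
```

Under these assumed prices the mix costs less than an all-On-Demand fleet while keeping more reserved capacity than the scheduled jobs require, with no 1- or 3-year commitment.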
Relevant Links:
AWS Availability Zones: https://aws.amazon.com/about-aws/global-infrastructure/regions_availability_zones/
Amazon EC2 Capacity Reservations: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
Amazon EC2 Spot Instances: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html