Question.41 A company recently deployed an application on AWS. The application uses Amazon DynamoDB. The company measured the application load and configured the RCUs and WCUs on the DynamoDB table to match the expected peak load. The peak load occurs once a week for a 4-hour period and is double the average load. The application load is close to the average load for the rest of the week. The access pattern includes many more writes to the table than reads of the table. A solutions architect needs to implement a solution to minimize the cost of the table. Which solution will meet these requirements?
(A) Use AWS Application Auto Scaling to increase capacity during the peak period. Purchase reserved RCUs and WCUs to match the average load.
(B) Configure on-demand capacity mode for the table.
(C) Configure DynamoDB Accelerator (DAX) in front of the table. Reduce the provisioned read capacity to match the new peak load on the table.
(D) Configure DynamoDB Accelerator (DAX) in front of the table. Configure on-demand capacity mode for the table.
Answer: A
Explanation:
The correct answer is A. Here’s why:
The problem focuses on cost optimization for a DynamoDB table with a predictable weekly peak load that significantly exceeds the average load, especially concerning writes. The access pattern is write-heavy, making write capacity units (WCUs) a primary cost driver.
Option A (Use AWS Application Auto Scaling and reserved capacity) directly addresses this scenario. AWS Application Auto Scaling allows DynamoDB to automatically adjust the provisioned RCUs and WCUs based on real-time demand. During the 4-hour peak, Auto Scaling will increase capacity to handle the increased load. Purchasing reserved capacity for the average load provides a cost-effective baseline for the remaining time, avoiding the higher on-demand costs. Reserved capacity offers significant discounts compared to on-demand pricing when you have predictable, consistent usage.
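As a rough illustration of the scaling piece of option A, the following is a minimal boto3 sketch; the table name, capacity bounds, and target utilization are hypothetical, and the reserved-capacity purchase is made separately and is not shown:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/MyAppTable",                     # hypothetical table name
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=100,                                   # roughly the average load
    MaxCapacity=200,                                   # roughly the weekly peak
)

# Target-tracking policy that raises WCUs as consumed capacity approaches the target.
autoscaling.put_scaling_policy(
    PolicyName="wcu-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/MyAppTable",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,                           # percent utilization
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```

The same pattern applies to read capacity by using the RCU scalable dimension and the corresponding predefined metric.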
Option B (Configure on-demand capacity mode) isn’t optimal. On-demand capacity mode eliminates the need for capacity planning but charges for each read and write request. While it handles fluctuating workloads well, it is generally more expensive than provisioned capacity with Auto Scaling when usage is predictable, as stated in the problem. Because the peak is predictable, provisioned capacity with Auto Scaling can absorb the peak traffic at a lower cost than paying the on-demand per-request rate for the entire workload.
Options C and D (Configure DynamoDB Accelerator (DAX)) are incorrect. DAX improves read performance by caching data and does not help optimize costs for a write-heavy application; it also adds complexity and cost. Because the peak load is driven by writes, DAX does not address the problem. Reducing the provisioned read capacity (option C) is also impractical because it may result in throttling during peak periods.
In summary, combining reserved capacity for the average load with Auto Scaling to handle the peak load provides the most cost-effective solution for a predictable, fluctuating workload in DynamoDB. This minimizes wasted capacity during off-peak hours while ensuring sufficient resources during the peak period.
Supporting Documentation:
DynamoDB On-Demand Capacity: https://aws.amazon.com/dynamodb/pricing/on-demand/
AWS DynamoDB Auto Scaling: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
DynamoDB Reserved Capacity: https://aws.amazon.com/dynamodb/pricing/provisioned/
Question.42 A solutions architect needs to advise a company on how to migrate its on-premises data processing application to the AWS Cloud. Currently, users upload input files through a web portal. The web server then stores the uploaded files on NAS and messages the processing server over a message queue. Each media file can take up to 1 hour to process. The company has determined that the number of media files awaiting processing is significantly higher during business hours, with the number of files rapidly declining after business hours. What is the MOST cost-effective migration recommendation?
(A) Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket.
(B) Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, create a new Amazon EC2 instance to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Shut down the EC2 instance after the task is complete.
(C) Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in Amazon EFS.
(D) Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.
Answer: D
Explanation:
The correct answer is D because it provides the most cost-effective and scalable solution for migrating the data processing application to AWS. Here’s a detailed justification:
- SQS for Messaging: Amazon SQS (Simple Queue Service) is a fully managed message queuing service, ideal for decoupling the web server from the processing servers. This offers reliability and scalability. https://aws.amazon.com/sqs/
- Auto Scaling Group: Utilizing an EC2 Auto Scaling group enables dynamic scaling of processing capacity based on the number of messages in the SQS queue. During peak business hours when the queue length increases, the Auto Scaling group automatically launches more EC2 instances to handle the workload. Conversely, during off-peak hours, instances are terminated to reduce costs.
- Cost Efficiency: The dynamic scaling inherent in using EC2 Auto Scaling based on queue length offers optimal cost efficiency. You only pay for the processing capacity needed at any given time.
- EC2 for Processing: EC2 instances provide the computational power necessary to process the media files, which can take up to an hour each.
- S3 for Storage: Amazon S3 (Simple Storage Service) provides durable, scalable, and cost-effective object storage for the processed files. https://aws.amazon.com/s3/
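The processing flow described in the bullets above can be sketched with boto3 as follows; the queue URL, bucket name, and the process_file step are hypothetical placeholders for the company's existing logic:

```python
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

# Hypothetical queue and bucket. The queue's visibility timeout should exceed
# the roughly 1-hour processing time so messages are not redelivered mid-job.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/media-jobs"
OUTPUT_BUCKET = "processed-media-output"


def process_file(input_location):
    """Placeholder for the existing media-processing logic (up to 1 hour per file)."""
    return f"/tmp/processed-{input_location.rsplit('/', 1)[-1]}"


def worker_loop():
    while True:
        # Long polling reduces empty receives and therefore cost.
        response = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for message in response.get("Messages", []):
            output_path = process_file(message["Body"])

            # Store the processed file durably in S3.
            s3.upload_file(output_path, OUTPUT_BUCKET, output_path.rsplit("/", 1)[-1])

            # Delete the message only after processing succeeds.
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
            )
```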
Why other options are less ideal:
- Option A (Lambda): While Lambda is serverless, it has an execution time limit (currently 15 minutes). Media files can take up to 1 hour to process, which exceeds that limit.
- Option B (Amazon MQ, EC2, EFS): Amazon MQ is a managed message broker service for ActiveMQ and RabbitMQ, which is more suited for migrating existing applications that rely on these brokers. SQS is a simpler, fully managed queueing solution. EFS (Elastic File System) is suitable for shared file storage, but S3 is more cost-effective for storing processed files. Shutting down the EC2 instance after each task requires more management than auto-scaling.
- Option C (Amazon MQ, Lambda, EFS): Similar to option A, Lambda’s execution time limit makes it unsuitable for processing media files that take up to an hour. Using EFS instead of S3 is less cost-effective for storing the processed files.
In summary, Option D provides the best balance of scalability, cost-effectiveness, and manageability for migrating the data processing application to AWS by utilizing SQS for decoupling, EC2 Auto Scaling for dynamic processing capacity, and S3 for durable storage.
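For scaling the worker fleet on queue length, one simplified approach is a target-tracking policy on the queue's visible-message metric; this is a sketch under assumptions (the Auto Scaling group name, queue name, and target value are hypothetical), and AWS documentation recommends a backlog-per-instance custom metric for more precise scaling:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="media-workers-asg",           # hypothetical group name
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "media-jobs"}],
            "Statistic": "Average",
        },
        "TargetValue": 10.0,                             # hypothetical target queue depth
    },
)
```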
Question.43 A company is using Amazon OpenSearch Service to analyze data. The company loads data into an OpenSearch Service cluster with 10 data nodes from an Amazon S3 bucket that uses S3 Standard storage. The data resides in the cluster for 1 month for read-only analysis. After 1 month, the company deletes the index that contains the data from the cluster. For compliance purposes, the company must retain a copy of all input data. The company is concerned about ongoing costs and asks a solutions architect to recommend a new solution. Which solution will meet these requirements MOST cost-effectively?
(A) Replace all the data nodes with UltraWarm nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
(B) Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Transition the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy.
(C) Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Add cold storage nodes to the cluster. Transition the indexes from UltraWarm to cold storage. Delete the input data from the S3 bucket after 1 month by using an S3 Lifecycle policy.
(D) Reduce the number of data nodes in the cluster to 2. Add instance-backed data nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
Answer: B
Explanation:
The correct answer is B because it offers the most cost-effective solution for the given requirements of data analysis, retention, and compliance.
Option B optimizes cost in several ways:
- Reduces Hot Storage: Reducing the number of data nodes (which are expensive “hot” storage) lowers the ongoing cost of the OpenSearch cluster.
- Utilizes UltraWarm: Adding UltraWarm nodes provides a cost-effective way to store the data for analysis, as UltraWarm uses S3 as its backing store, offering significantly cheaper storage than dedicated data nodes. UltraWarm is designed for read-only analysis, fulfilling the requirement.
- S3 Glacier Deep Archive for Compliance: Transitioning the input data to S3 Glacier Deep Archive after one month using an S3 Lifecycle policy ensures the data is retained for compliance purposes at the lowest possible storage cost. S3 Glacier Deep Archive is designed for long-term archival.
https://aws.amazon.com/glacier/deep-archive/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-configuration-examples.html
- Automated Transition: The S3 Lifecycle policy automates the transition to Glacier Deep Archive, minimizing operational overhead.
Option A is less cost-effective because transitioning the input data to S3 Glacier Deep Archive immediately means the OpenSearch cluster would have to retrieve data from Glacier Deep Archive to ingest it, which is much slower and potentially more expensive than initially using S3 Standard. This is less suitable for the initial month of analysis. In addition, replacing all the data nodes with UltraWarm nodes is not viable, because UltraWarm indexes are read-only; the cluster still needs hot data nodes to ingest and index incoming data.
Option C introduces cold storage, which, while cost-effective, adds another layer of complexity and data movement. More importantly, it deletes the input data from the S3 bucket after 1 month, which conflicts with the compliance requirement to retain a copy of all input data.
Option D uses instance-backed data nodes, which are generally less cost-effective for long-term storage than UltraWarm or S3 Glacier. Transitioning the input data to S3 Glacier Deep Archive at load time also shares the same issue as option A.
In summary, Option B strikes the best balance between performance during the active analysis period and cost-effective storage for long-term compliance, by reducing the hot storage requirements, leveraging UltraWarm for analysis, and then transitioning data to S3 Glacier Deep Archive for long-term, low-cost storage after analysis.
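As an illustration of the lifecycle transition in option B, here is a minimal boto3 sketch; the bucket name, rule ID, and prefix are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Move input objects to S3 Glacier Deep Archive 30 days after creation,
# i.e., once the one-month analysis window in OpenSearch Service has passed.
s3.put_bucket_lifecycle_configuration(
    Bucket="opensearch-input-data",                    # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-input-after-analysis",
                "Filter": {"Prefix": "input/"},        # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```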
Question.44 A company has 10 accounts that are part of an organization in AWS Organizations. AWS Config is configured in each account. All accounts belong to either the Prod OU or the NonProd OU. The company has set up an Amazon EventBridge rule in each AWS account to notify an Amazon Simple Notification Service (Amazon SNS) topic when an Amazon EC2 security group inbound rule is created with 0.0.0.0/0 as the source. The company’s security team is subscribed to the SNS topic. For all accounts in the NonProd OU, the security team needs to remove the ability to create a security group inbound rule that includes 0.0.0.0/0 as the source. Which solution will meet this requirement with the LEAST operational overhead?
(A) Modify the EventBridge rule to invoke an AWS Lambda function to remove the security group inbound rule and to publish to the SNS topic. Deploy the updated rule to the NonProd OU.
(B) Add the vpc-sg-open-only-to-authorized-ports AWS Config managed rule to the NonProd OU.
(C) Configure an SCP to allow the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is not 0.0.0.0/0. Apply the SCP to the NonProd OU.
(D) Configure an SCP to deny the ec2:AuthorizeSecurityGroupIngress action when the value of the aws:SourceIp condition key is 0.0.0.0/0. Apply the SCP to the NonProd OU.
Answer: D
Explanation:
Here’s a detailed justification for why option D is the best solution, along with supporting concepts and authoritative links:
The requirement is to prevent the creation of EC2 security group inbound rules with 0.0.0.0/0 as the source (effectively opening the port to the world) in the NonProd OU with the least operational overhead.
Option D uses a Service Control Policy (SCP) to deny the ec2:AuthorizeSecurityGroupIngress action when the condition key aws:SourceIp has a value of 0.0.0.0/0. This is the most effective and efficient way to centrally enforce this restriction at the OU level. SCPs act as guardrails, preventing actions from being taken regardless of the IAM permissions granted within the accounts in the OU. This ensures consistent enforcement across all accounts in the NonProd OU. The administrative burden is low because the SCP is defined once and applied at the OU level.
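A minimal boto3 sketch of creating and attaching such an SCP follows; the policy body mirrors the condition exactly as option D states it, and the policy name, description, condition operator, and OU ID are assumptions for illustration:

```python
import json

import boto3

organizations = boto3.client("organizations")

# Deny statement written to match option D as stated in the question; the
# IpAddress operator and exact condition block are assumptions.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOpenIngress",
            "Effect": "Deny",
            "Action": "ec2:AuthorizeSecurityGroupIngress",
            "Resource": "*",
            "Condition": {"IpAddress": {"aws:SourceIp": "0.0.0.0/0"}},
        }
    ],
}

policy = organizations.create_policy(
    Name="deny-open-sg-ingress",                        # hypothetical name
    Description="Deny 0.0.0.0/0 inbound rules in NonProd",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP once, at the OU level, so it applies to every NonProd account.
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-nonprod1",                        # hypothetical NonProd OU ID
)
```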
Option A suggests modifying the EventBridge rule to invoke a Lambda function to remove the security group rule and publish to the SNS topic. This is reactive, not preventative. It allows the rule to be created and then attempts to remediate it. This approach also requires writing and managing Lambda code, increasing operational complexity. Moreover, EventBridge rules trigger after an event occurs, making this a detection and remediation approach, rather than prevention.
Option B proposes adding the vpc-sg-open-only-to-authorized-ports AWS Config managed rule to the NonProd OU. This is also reactive and not preventative. AWS Config rules evaluate resources against desired configurations and report on non-compliance. While it can identify security groups that violate the rule, it doesn’t actively prevent their creation. Like Option A, it addresses the problem after the rule is created.
Option C suggests configuring an SCP to allow the ec2:AuthorizeSecurityGroupIngress action only when aws:SourceIp is not 0.0.0.0/0. This is the logical inverse of option D but has a crucial flaw: allow statements in SCPs do not grant permissions, and if any applicable SCP denies an action, the action is denied regardless of any allow statements. Structured this way, the policy can effectively remove the ability to create legitimate security group ingress rules, breaking necessary functionality, whereas the explicit deny in option D targets only the prohibited case.
Therefore, Option D is the most appropriate because it directly prevents the prohibited action using SCPs, offering the least operational overhead and the strongest security posture.
Supporting Links:
AWS Condition Keys: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html
AWS Organizations SCPs: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html
AWS Organizations Policy Types: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html
EC2 AuthorizeSecurityGroupIngress API: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html
Question.45 A company hosts a Git repository in an on-premises data center. The company uses webhooks to invoke functionality that runs in the AWS Cloud. The company hosts the webhook logic on a set of Amazon EC2 instances in an Auto Scaling group that the company set as a target for an Application Load Balancer (ALB). The Git server calls the ALB for the configured webhooks. The company wants to move the solution to a serverless architecture. Which solution will meet these requirements with the LEAST operational overhead?
(A) For each webhook, create and configure an AWS Lambda function URL. Update the Git servers to call the individual Lambda function URLs.
(B) Create an Amazon API Gateway HTTP API. Implement each webhook logic in a separate AWS Lambda function. Update the Git servers to call the API Gateway endpoint.
(C) Deploy the webhook logic to AWS App Runner. Create an ALB, and set App Runner as the target. Update the Git servers to call the ALB endpoint.
(D) Containerize the webhook logic. Create an Amazon Elastic Container Service (Amazon ECS) cluster, and run the webhook logic in AWS Fargate. Create an Amazon API Gateway REST API, and set Fargate as the target. Update the Git servers to call the API Gateway endpoint.
Answer: B
Explanation:
The best solution for moving the on-premises Git webhook logic to a serverless architecture with the least operational overhead is option B: using Amazon API Gateway HTTP API and Lambda functions.
Here’s why:
- Serverless: Both API Gateway and Lambda are fully managed serverless services. This eliminates the need to manage servers, operating systems, scaling, and patching, significantly reducing operational overhead.
- Scalability: API Gateway and Lambda automatically scale to handle the incoming webhook requests. You don’t have to provision or manage scaling policies.
- Cost-Effectiveness: You only pay for the actual usage of API Gateway and Lambda, leading to potential cost savings compared to running EC2 instances or container-based solutions.
- Simplicity: Mapping webhook URLs to Lambda functions through API Gateway is a relatively straightforward process. API Gateway acts as the entry point, routing requests to the corresponding Lambda functions based on the webhook path. Each webhook’s logic is encapsulated within its own Lambda function, promoting modularity and maintainability.
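As a sketch of the routing described above, one webhook path can be wired to a Lambda function with the boto3 apigatewayv2 client; the API name, route key, and function ARN are hypothetical, and the Lambda resource-based permission that API Gateway needs is omitted:

```python
import boto3

apigw = boto3.client("apigatewayv2")

# Hypothetical Lambda function that implements one webhook's logic.
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:push-webhook"

api = apigw.create_api(Name="git-webhooks", ProtocolType="HTTP")

# Proxy integration that forwards the webhook payload to the Lambda function.
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="AWS_PROXY",
    IntegrationUri=LAMBDA_ARN,
    PayloadFormatVersion="2.0",
)

# One route per webhook; additional webhooks get their own route and function.
apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="POST /webhooks/push",
    Target=f"integrations/{integration['IntegrationId']}",
)

# Auto-deployed default stage, so the Git server can call the invoke URL directly.
apigw.create_stage(ApiId=api["ApiId"], StageName="$default", AutoDeploy=True)
```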
Other options are less optimal:
- A (Lambda function URLs): Lambda function URLs are a simple way to invoke a single Lambda function but lack the advanced features of API Gateway such as request validation, throttling, and custom domain names. Managing a separate URL for each webhook function also adds configuration and management overhead.
- C (AWS App Runner with ALB): While App Runner simplifies container deployment, it still involves managing a container image and configuring an ALB. This adds operational complexity compared to a purely serverless approach.
- D (ECS Fargate with API Gateway REST API): ECS Fargate also involves containerization, which adds operational overhead. Creating a REST API with Fargate requires more configuration than using API Gateway’s HTTP API with Lambda, which is designed for lightweight, serverless applications.
Using API Gateway HTTP API and Lambda functions offers a balanced solution by providing serverless, scalable, and cost-effective architecture with minimal operational overhead for handling Git webhooks.
Supporting links: