Question.41 A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth. Which solution will meet these requirements?
(A) Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
(B) Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
(C) Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
(D) Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Answer is (B) Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
This solution is the most efficient way to migrate the video files to Amazon S3. Shipping a Snowball Edge device moves the 70 TB dataset offline, so the migration consumes almost no network bandwidth, and local transfers to the device run over high-speed LAN interfaces (up to 100 Gbps on some models). The transfer is also secure: data on the device is encrypted at rest.
Option A: It would require transferring the data over the network, which could consume a significant amount of bandwidth. This option does not address the requirement of minimizing network bandwidth usage.
Option C: It would still involve network transfers, potentially utilizing a significant amount of bandwidth.
Option D: It would also involve network transfers. Although it provides a dedicated network connection, it doesn’t address the requirement of minimizing network bandwidth usage.
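For illustration, here is a minimal boto3 sketch of creating a Snowball Edge import job. The bucket ARN, address ID, and role ARN are placeholders: the address must first be registered with AWS, and the IAM role must trust the Snowball service.

```python
import boto3

# Hypothetical sketch: create a Snowball Edge import job whose contents AWS
# will load into the destination S3 bucket after the device is returned.
snowball = boto3.client("snowball", region_name="us-east-1")

response = snowball.create_job(
    JobType="IMPORT",                    # data flows from the device into S3
    SnowballType="EDGE",
    SnowballCapacityPreference="T100",   # a 100 TB device covers the 70 TB dataset
    Resources={
        "S3Resources": [
            # Placeholder destination bucket for the video files
            {"BucketArn": "arn:aws:s3:::example-video-archive"}
        ]
    },
    AddressId="ADID-EXAMPLE",            # placeholder shipping address ID
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",  # placeholder
    ShippingOption="SECOND_DAY",
    Description="70 TB NFS video archive migration",
)
print("Created Snowball job:", response["JobId"])
```

Once the device arrives, the Snowball Edge client is used to copy the files over the local network, so no WAN bandwidth is consumed.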
Reference:
https://aws.amazon.com/snowball/
Question.42 A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability. How should a solutions architect design the architecture to meet these requirements?
(A) Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
(B) Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
(C) Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
(D) Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
Answer is (B) Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
Based on the stated requirements, Option B is the most appropriate solution. It uses an Amazon SQS queue as the destination for jobs and scales the EC2 compute nodes based on queue size, which handles variable workloads while maximizing resiliency and scalability.
A fixed-capacity architecture works well only if the number of jobs doesn't vary over time. Because this workload is variable, you should use dynamic scaling to adjust the capacity of your Auto Scaling group.
To configure scaling based on Amazon SQS, there are three tasks (steps 1 and 2 are sketched in code below):
Step 1: Create a CloudWatch custom metric, such as the queue backlog per instance
Step 2: Create a target tracking scaling policy that uses the custom metric
Step 3: Test your scaling policy
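The following is a minimal boto3 sketch of steps 1 and 2 under assumed names: the queue URL, Auto Scaling group name, metric namespace, and target of 10 messages per instance are all illustrative placeholders. In practice, the metric would be published on a recurring schedule.

```python
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

# Step 1: publish a custom "backlog per instance" metric to CloudWatch.
attrs = sqs.get_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/jobs-queue",
    AttributeNames=["ApproximateNumberOfMessages"],
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])
running_instances = 2  # in practice, query the Auto Scaling group for this
cloudwatch.put_metric_data(
    Namespace="Custom/SQS",
    MetricData=[{
        "MetricName": "BacklogPerInstance",
        "Value": backlog / max(running_instances, 1),
        "Unit": "Count",
    }],
)

# Step 2: target tracking policy that keeps the backlog per instance near 10.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="compute-nodes-asg",
    PolicyName="sqs-backlog-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",
            "Namespace": "Custom/SQS",
            "Statistic": "Average",
        },
        "TargetValue": 10.0,
    },
)
```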
Configuring an Amazon SQS queue as a destination for the jobs, implementing compute nodes with EC2 instances managed in an Auto Scaling group, and configuring EC2 Auto Scaling based on the size of the queue is the most suitable solution. With this approach, the primary server can enqueue jobs into the SQS queue, and the compute nodes can dynamically scale based on the size of the queue. This ensures that the compute capacity adjusts according to the workload, maximizing resiliency and scalability. The SQS queue acts as a buffer, decoupling the primary server from the compute nodes and providing fault tolerance in case of failures or spikes in the workload.
Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
Question.43 A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received. Which solution will meet these requirements?
(A) Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
(B) Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
(C) Use an API Gateway authorizer to block any requests while the application processes an order.
(D) Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.
Answer is (B) Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
B is correct because an SQS FIFO queue guarantees message order.
– Amazon API Gateway will be used to receive the orders from the web application.
– Instead of directly processing the orders, the API Gateway will integrate with an Amazon SQS FIFO queue.
– FIFO (First-In-First-Out) queues in Amazon SQS ensure that messages are processed in the order they are received.
– By using a FIFO queue, the order processing is guaranteed to be sequential, ensuring that the first order received is processed before the next one.
– An AWS Lambda function can be configured to be triggered by the SQS FIFO queue, processing the orders as they arrive (see the sketch below).
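As a minimal sketch, the SendMessage call that an API Gateway service integration would make against the FIFO queue looks like this in boto3; the queue URL and order payload are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")
order = {"orderId": "1001", "total": 59.99}  # example payload

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    MessageBody=json.dumps(order),
    MessageGroupId="orders",                  # one group => strict FIFO ordering
    MessageDeduplicationId=order["orderId"],  # deduplicates retried submissions
)
```

Because every order shares the same MessageGroupId, SQS delivers the messages to the Lambda function in exactly the order they were received.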
Reference:
https://aws.amazon.com/sqs/
Question.44 An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the image in its compressed form in a different S3 bucket. A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically. Which combination of actions will meet these requirements? (Choose two.)
(A) Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket.
(B) Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue.
(C) Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text file in memory and use the text file to keep track of the images that were processed.
(D) Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue, log the file name in a text file on the EC2 instance and invoke the Lambda function.
(E) Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner’s email address for further processing.
Answers are (A) and (B).
(A) Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket.
(B) Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue.
Keywords:
- Store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function.
- Durable, stateless components to process the images automatically
To design a solution that uses durable, stateless components to process images automatically, a solutions architect could consider the following actions:
Option A involves creating an SQS queue and configuring the S3 bucket to send a notification to the queue when an image is uploaded. This decouples image upload from image processing and ensures that processing is triggered automatically whenever a new image arrives.
Option B involves configuring the Lambda function to use the SQS queue as the invocation source. When the SQS message is successfully processed, the message is deleted from the queue. This ensures that the Lambda function is invoked only once per image and that the image is not processed multiple times.
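A minimal boto3 sketch of both actions follows. The bucket name, queue ARN, and function name are placeholders, and the queue’s access policy must already allow s3.amazonaws.com to send messages.

```python
import boto3

# Option A: notify the SQS queue whenever an object is created in the bucket.
s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="example-raw-images",  # placeholder upload bucket
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-jobs",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# Option B: make the queue the Lambda invocation source. With an SQS event
# source mapping, Lambda deletes each message automatically when the function
# returns successfully.
lambda_client = boto3.client("lambda")
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:image-jobs",
    FunctionName="compress-image",  # placeholder function name
    BatchSize=1,
)
```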
Option C is incorrect because it stores state (the file names) in memory, which is neither durable nor stateless.
Option D is incorrect because it launches an EC2 instance that tracks processed files in a local text file, making it stateful and a single point of failure.
Option E is incorrect because it uses Amazon EventBridge (formerly Amazon CloudWatch Events) only to email an alert to the application owner through an Amazon SNS topic; it notifies a person rather than processing the images automatically.
Question.45 A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the information in an Amazon Aurora PostgreSQL database. During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort. Which solution will meet these requirements?
(A) Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers.
(B) Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
(C) Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
(D) Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
Answer is (D) Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
Keywords:
– The company has to increase the Lambda quotas significantly to handle the high volumes of data that it needs to load into the database.
– Improve scalability and minimize the configuration effort.
A: Incorrect – Lambda is serverless and scales automatically, whereas EC2 instances require a load balancer, an Auto Scaling group, and other infrastructure to configure. Using native Java Database Connectivity (JDBC) drivers does not improve scalability.
B: Incorrect – it requires extensive changes, and DynamoDB Accelerator (DAX) is a read cache; it does not help with a write-heavy ingestion workload.
C: Incorrect – SNS is a push-based notification service (e-mail, SMS, and similar endpoints) with no buffering, so spikes in traffic would still overwhelm the loading function.
D: Correct – Lambda and SQS are both serverless, and queuing the data in SQS lets the application scale well with minimal configuration.
By dividing the functionality into two Lambda functions, one for receiving the information and the other for loading it into the database, you can independently scale and optimize each function based on their specific requirements. This approach allows for more efficient resource allocation and reduces the potential impact of high volumes of data on the overall system.
Integrating the Lambda functions with an SQS queue adds another layer of scalability and reliability. The receiving function pushes the information to the queue, and the loading function retrieves messages from the queue and processes them independently. This asynchronous decoupling ensures that the receiving function can handle high volumes of incoming requests without overwhelming the loading function. Additionally, SQS provides built-in retries and message durability, ensuring that no data is lost during processing.
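A minimal sketch of option D with two Lambda handlers is shown below. QUEUE_URL and the Aurora write are assumed placeholders; a real loading function would perform the INSERT with a PostgreSQL driver such as psycopg2 or through the RDS Data API.

```python
import json
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # assumed environment variable


def receive_handler(event, context):
    """API Gateway proxy integration: enqueue the payload and return 202."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event["body"])
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}


def load_handler(event, context):
    """SQS trigger: insert each queued record into Aurora PostgreSQL."""
    for record in event["Records"]:
        payload = json.loads(record["body"])
        save_to_aurora(payload)  # placeholder for the actual INSERT logic


def save_to_aurora(payload):
    # In a real function this would write the row with a PostgreSQL driver
    # or the RDS Data API; omitted here because it is outside the queueing
    # pattern being illustrated.
    pass
```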