Question.46 A company is planning to migrate 1,000 on-premises servers to AWS. The servers run on several VMware clusters in the company’s data center. As part of the migration plan, the company wants to gather server metrics such as CPU details, RAM usage, operating system information, and running processes. The company then wants to query and analyze the data. Which solution will meet these requirements?
(A) Deploy and configure the AWS Agentless Discovery Connector virtual appliance on the on-premises hosts. Configure Data Exploration in AWS Migration Hub. Use AWS Glue to perform an ETL job against the data. Query the data by using Amazon S3 Select.
(B) Export only the VM performance information from the on-premises hosts. Directly import the required data into AWS Migration Hub. Update any missing information in Migration Hub. Query the data by using Amazon QuickSight.
(C) Create a script to automatically gather the server information from the on-premises hosts. Use the AWS CLI to run the put-resource-attributes command to store the detailed server data in AWS Migration Hub. Query the data directly in the Migration Hub console.
(D) Deploy the AWS Application Discovery Agent to each on-premises server. Configure Data Exploration in AWS Migration Hub. Use Amazon Athena to run predefined queries against the data in Amazon S3.
Answer: D
Explanation:
The correct answer is D. Here’s why:
Why Option D is Correct:
- AWS Application Discovery Agent: This agent is designed specifically to collect detailed server metrics (CPU, RAM, OS, processes) directly from each server. This fulfills the requirement of gathering comprehensive information. It is more effective than agentless methods because it can discover running processes and other details impossible to obtain without direct OS access.
- Data Exploration in Migration Hub: Migration Hub’s Data Exploration feature provides a centralized view of the discovered data. This feature allows you to organize and categorize the data collected by the agents. This is essential for querying and analysis.
- Amazon Athena: Athena is a serverless query service that enables analysis of data stored in Amazon S3 using SQL. The discovered data from Migration Hub is made available in S3. Athena fits the need for querying and analyzing the collected server metrics using predefined or custom queries.
Why Other Options are Incorrect:
- Option A: The AWS Agentless Discovery Connector collects VM inventory, configuration, and performance data from vCenter, but it has no visibility inside the guest operating system, so it cannot discover running processes. While AWS Glue and S3 Select could be used, they also introduce unnecessary complexity compared to Athena, which Migration Hub Data Exploration integrates with directly.
- Option B: Directly importing VM performance information into Migration Hub is insufficient. The requirement includes gathering detailed information such as running processes and OS information. Also, it assumes that you have these metrics readily available and formatted.
- Option C: While put-resource-attributes can store data in Migration Hub, manually scripting data collection and using the CLI for each server is inefficient for 1,000 servers. The Migration Hub console isn’t designed for in-depth data analysis; it’s primarily for visualization and basic filtering.
In summary: Option D offers the most efficient and comprehensive solution for gathering, storing, and analyzing detailed server metrics, aligning with the requirements of the migration plan. The Application Discovery Agent provides in-depth data collection, Migration Hub facilitates data organization, and Athena allows for flexible and scalable querying of the collected data.
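As an illustration, once Data Exploration is enabled, the agent data lands in Amazon S3 and Athena exposes tables such as sys_performance_agent and os_info_agent. The query below follows the table and column names that the AWS documentation describes for agent data exploration, but they should be verified in your own Athena catalog before use:

```sql
-- Illustrative query: average CPU and RAM usage per discovered server.
-- Table and column names are the documented Data Exploration defaults;
-- confirm them in the Athena console for your account.
SELECT os.host_name,
       os.os_name,
       AVG(sp.total_cpu_usage_pct) AS avg_cpu_pct,
       AVG(sp.total_ram_in_mb - sp.free_ram_in_mb) AS avg_ram_used_mb
FROM sys_performance_agent AS sp
JOIN os_info_agent AS os
  ON sp.agent_id = os.agent_id
GROUP BY os.host_name, os.os_name
ORDER BY avg_cpu_pct DESC;
```

Queries like this can be saved and rerun as the agents continue reporting, which is what makes option D’s combination suitable for ongoing analysis across all 1,000 servers.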
Supporting Links:
Amazon Athena: https://aws.amazon.com/athena/
AWS Application Discovery Service: https://aws.amazon.com/application-discovery/
AWS Migration Hub: https://aws.amazon.com/migration-hub/
Question.47 A company is building a serverless application that runs on an AWS Lambda function that is attached to a VPC. The company needs to integrate the application with a new service from an external provider. The external provider supports only requests that come from public IPv4 addresses that are in an allow list. The company must provide a single public IP address to the external provider before the application can start using the new service. Which solution will give the application the ability to access the new service?
(A) Deploy a NAT gateway. Associate an Elastic IP address with the NAT gateway. Configure the VPC to use the NAT gateway.
(B) Deploy an egress-only internet gateway. Associate an Elastic IP address with the egress-only internet gateway. Configure the elastic network interface on the Lambda function to use the egress-only internet gateway.
(C) Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the Lambda function to use the internet gateway.
(D) Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the default route in the public VPC route table to use the internet gateway.
Answer: A
Explanation:
The correct answer is A: Deploy a NAT gateway. Associate an Elastic IP address with the NAT gateway. Configure the VPC to use the NAT gateway.
Here’s why:
The scenario requires a Lambda function within a VPC to access an external service that requires a specific, allow-listed public IP address. Lambda functions deployed inside a VPC do not have direct public internet access by default.
- NAT Gateway: A NAT (Network Address Translation) gateway allows instances in a private subnet to connect to the internet or other AWS services but prevents the internet from initiating a connection with those instances. This aligns perfectly with the requirement of accessing an external service without allowing inbound connections.
- Elastic IP Address: An Elastic IP address is a static, public IPv4 address designed for dynamic cloud computing. By associating an Elastic IP with the NAT gateway, the company obtains a consistent public IP address that can be provided to the external service provider.
- VPC Configuration: Configuring the VPC to use the NAT gateway ensures that all outbound traffic from the Lambda function, destined for the internet, is routed through the NAT gateway. This means the source IP address will be the Elastic IP associated with the NAT gateway, fulfilling the external provider’s requirement.
Let’s examine why the other options are incorrect:
- B. Egress-Only Internet Gateway: Egress-only internet gateways are designed for IPv6 traffic and do not support associating with an Elastic IP. They only allow outbound communication from the VPC, specifically for IPv6, which does not solve the IPv4 public IP requirement.
- C. Internet Gateway: An internet gateway alone cannot solve this. An Elastic IP address cannot be associated with an internet gateway; Elastic IPs attach to resources such as EC2 instances, network interfaces, and NAT gateways. Furthermore, a Lambda function’s elastic network interface in a VPC has only a private IP address and cannot be given a public one, so a route through an internet gateway would not provide the function with outbound internet access.
- D. Internet Gateway with Public Route Table: Although an Internet Gateway is necessary for enabling internet connectivity to a VPC, simply associating it with a public route table does not address the problem. The Lambda function resides within the private subnet, and providing general internet access doesn’t inherently give the Lambda function a specific, controlled, and static public IP address to be whitelisted by the third-party provider. Furthermore, this approach introduces security risks by potentially exposing the Lambda function to unsolicited inbound traffic.
Therefore, the NAT gateway solution provides the required static public IP address, allows outbound-only access, and maintains a secure configuration for the Lambda function within the VPC.
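Concretely, the routing that makes option A work looks like the following. The subnet layout is the standard pattern for VPC-attached Lambda functions, and the resource IDs are illustrative placeholders:

```text
VPC
|-- Private subnet  (the Lambda function's ENIs live here)
|     route table:  0.0.0.0/0  ->  nat-0abc... (NAT gateway)
|-- Public subnet   (the NAT gateway with its Elastic IP lives here)
      route table:  0.0.0.0/0  ->  igw-0def... (internet gateway)
```

Every outbound request from the Lambda function therefore leaves AWS with the NAT gateway’s Elastic IP as its source address, which is the single public IPv4 address the company gives to the external provider’s allow list.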
Authoritative Links:
AWS Lambda and VPCs: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
NAT Gateway: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
Elastic IP Addresses: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
Question.48 A solutions architect has developed a web application that uses an Amazon API Gateway Regional endpoint and an AWS Lambda function. The consumers of the web application are all close to the AWS Region where the application will be deployed. The Lambda function only queries an Amazon Aurora MySQL database. The solutions architect has configured the database to have three read replicas. During testing, the application does not meet performance requirements. Under high load, the application opens a large number of database connections. The solutions architect must improve the application’s performance. Which actions should the solutions architect take to meet these requirements? (Choose two.)
(A) Use the cluster endpoint of the Aurora database.
(B) Use RDS Proxy to set up a connection pool to the reader endpoint of the Aurora database.
(C) Use the Lambda Provisioned Concurrency feature.
(D) Move the code for opening the database connection in the Lambda function outside of the event handler.
(E) Change the API Gateway endpoint to an edge-optimized endpoint.
Answer: BD
Explanation:
The correct answer is BD. Here’s why:
B. Use RDS Proxy to set up a connection pool to the reader endpoint of the Aurora database.
- Problem Addressed: The primary bottleneck is the excessive number of database connections opened by the Lambda function. Establishing a new connection for each Lambda invocation (or a significant portion of invocations under high load) is resource-intensive and quickly exhausts available database connections.
- RDS Proxy Solution: Amazon RDS Proxy sits between the Lambda function and the Aurora database, creating and managing a pool of database connections. The Lambda function interacts with the proxy, which reuses existing connections from the pool for multiple function invocations. This significantly reduces the overhead of creating new connections and improves overall performance.
- Reader Endpoint: Using the reader endpoint allows the proxy to distribute read requests across the read replicas, further optimizing performance by offloading read operations from the primary Aurora instance.
- Connection pooling efficiency: RDS Proxy is designed for serverless environments like Lambda, specifically to address connection limitations to relational databases. It reduces database load, minimizes latency, and improves application scalability.
D. Move the code for opening the database connection in the Lambda function outside of the event handler.
- Problem Addressed: Lambda functions operate on a stateless, event-driven model. If the database connection code is within the handler function, a new connection is potentially established with each invocation.
- Connection Reuse: Moving the database connection initialization outside the handler (into the global scope of the Lambda function) allows the Lambda execution environment to reuse the established connection across multiple invocations (within the same execution environment). This significantly reduces the overhead of repeatedly creating and destroying database connections.
- Performance Improvement: Reusing database connections improves efficiency and reduces latency, especially under high load.
- Cold Starts and Optimization: This optimization specifically targets the connection overhead. It can be implemented by instantiating a database connection outside the handler function, allowing it to persist between invocations if the same execution environment is reused. This reduces latency, especially during initial invocations or cold starts.
Why other options are not as effective:
- A. Use the cluster endpoint of the Aurora database. While using the cluster endpoint is good practice for high availability, it doesn’t directly address the connection pooling issue. It provides failover capabilities but doesn’t reduce the number of connections opened.
- C. Use the Lambda Provisioned Concurrency feature. Provisioned concurrency ensures that a specified number of Lambda function instances are initialized and ready to respond to requests. While it can improve latency, it can also exacerbate the connection pool issue if each concurrent instance still opens its own database connection.
- E. Change the API Gateway endpoint to an edge-optimized endpoint. Changing the API Gateway endpoint to an edge-optimized one is relevant for distributing API traffic globally through CloudFront. This approach will not address the excessive number of connections to the database server, which is the performance bottleneck. It primarily focuses on reducing latency for geographically dispersed users.
Supporting Documentation:
Using AWS Lambda with Amazon RDS: https://docs.aws.amazon.com/lambda/latest/dg/services-rds-tutorial.html
Amazon RDS Proxy: https://aws.amazon.com/rds/proxy/
Optimize Lambda connections to RDS: https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/
Question.49 A company is planning to host a web application on AWS and wants to load balance the traffic across a group of Amazon EC2 instances. One of the security requirements is to enable end-to-end encryption in transit between the client and the web server. Which solution will meet this requirement?
(A) Place the EC2 instances behind an Application Load Balancer (ALB). Provision an SSL certificate using AWS Certificate Manager (ACM), and associate the SSL certificate with the ALB. Export the SSL certificate and install it on each EC2 instance. Configure the ALB to listen on port 443 and to forward traffic to port 443 on the instances.
(B) Associate the EC2 instances with a target group. Provision an SSL certificate using AWS Certificate Manager (ACM). Create an Amazon CloudFront distribution and configure it to use the SSL certificate. Set CloudFront to use the target group as the origin server.
(C) Place the EC2 instances behind an Application Load Balancer (ALB). Provision an SSL certificate using AWS Certificate Manager (ACM), and associate the SSL certificate with the ALB. Provision a third-party SSL certificate and install it on each EC2 instance. Configure the ALB to listen on port 443 and to forward traffic to port 443 on the instances.
(D) Place the EC2 instances behind a Network Load Balancer (NLB). Provision a third-party SSL certificate and install it on the NLB and on each EC2 instance. Configure the NLB to listen on port 443 and to forward traffic to port 443 on the instances.
Answer: C
Explanation:
The correct answer is C. Here’s why:
The primary requirement is end-to-end encryption. This means encryption must occur between the client and the load balancer, and then again between the load balancer and the EC2 instances.
Option C achieves this using an Application Load Balancer (ALB). An ALB can handle HTTPS traffic. The ACM-provisioned certificate is associated with the ALB to handle encryption between the client and the ALB. The third-party SSL certificate installed on the EC2 instances ensures encrypted communication between the ALB and the EC2 instances. Configuring the ALB to listen on port 443 (HTTPS) and forward to port 443 on the instances ensures that traffic remains encrypted throughout the entire path.
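The resulting traffic path can be sketched as follows; certificate placement is the key detail:

```text
Client --HTTPS:443--> ALB listener    (ACM certificate: terminates the client's TLS)
ALB    --HTTPS:443--> EC2 instances   (third-party certificate: re-encrypts the back-end hop)
```

The ALB decrypts and immediately re-encrypts, so no segment of the path carries plaintext, which is what "end-to-end encryption in transit" requires here.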
Option A is incorrect because it suggests exporting the ACM certificate and installing it on each EC2 instance, which is not the standard recommended practice. Using a separate certificate on the instances allows for independent certificate management and greater flexibility. Also, ACM certificates are designed to be used with AWS services and exporting them is generally restricted.
Option B is incorrect because a CloudFront distribution cannot use a target group as an origin; CloudFront origins are endpoints such as an S3 bucket, an ALB, or a custom HTTP server. Even setting that aside, the option only encrypts traffic between the client and CloudFront and never establishes encryption from CloudFront to the EC2 instances, so end-to-end encryption is not guaranteed.
Option D is incorrect because Network Load Balancers (NLBs) operate at the transport layer (Layer 4). Although an NLB can forward traffic on port 443, the usual pattern is to pass the encrypted TCP stream straight through to EC2 instances that terminate TLS themselves. Provisioning and renewing a third-party certificate on both the NLB and every instance adds operational overhead that ACM would otherwise handle, and the ALB is the purpose-built option for application-level (HTTP/HTTPS) load balancing with native ACM integration.
Here are some helpful links:
AWS Certificate Manager: https://aws.amazon.com/certificate-manager/
Application Load Balancer: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
Network Load Balancer: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
Question.50 A company wants to migrate its data analytics environment from on premises to AWS. The environment consists of two simple Node.js applications. One of the applications collects sensor data and loads it into a MySQL database. The other application aggregates the data into reports. When the aggregation jobs run, some of the load jobs fail to run correctly. The company must resolve the data loading issue. The company also needs the migration to occur without interruptions or changes for the company’s customers. What should a solutions architect do to meet these requirements?
(A) Set up an Amazon Aurora MySQL database as a replication target for the on-premises database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind a Network Load Balancer (NLB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the NLB.
(B) Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Move the aggregation jobs to run against the Aurora MySQL database. Set up collection endpoints behind an Application Load Balancer (ALB) as Amazon EC2 instances in an Auto Scaling group. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
(C) Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
(D) Set up an Amazon Aurora MySQL database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as an Amazon Kinesis data stream. Use Amazon Kinesis Data Firehose to replicate the data to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the Kinesis data stream.
Answer: C
Explanation:
The correct answer is C. Here’s why:
- Migration Strategy: AWS DMS is the ideal service for migrating databases to AWS with minimal downtime. It supports continuous data replication, allowing for a seamless transition.
- Read Replica for Aggregation: Moving the aggregation jobs to an Aurora Replica offloads the read-heavy reporting workload from the primary database, preventing performance impact on the data loading process and addressing the original data loading failures. Aurora Replicas provide read scalability and improved availability.
- Scalable Collection Endpoints: Using Lambda functions behind an ALB provides a serverless, scalable, and cost-effective solution for handling sensor data collection. Lambda functions can automatically scale based on demand. ALB allows for intelligent routing and load balancing of requests to the Lambda functions.
- RDS Proxy for Connection Management: RDS Proxy optimizes database connections, preventing connection exhaustion and improving application availability, especially when using serverless functions like Lambda which can generate numerous concurrent connections.
- DNS Cutover: Pointing the collector DNS record to the ALB after the databases are synced directs incoming traffic to the migrated environment, completing the migration.
- Disabling DMS: After the cutover and successful data synchronization, disabling the DMS task prevents unnecessary data replication and cost.
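Put together, the cutover in option C proceeds in this order (a sketch of the sequence, not an exhaustive runbook):

```text
1. Create the Aurora MySQL cluster and an Aurora Replica.
2. Start an AWS DMS task (full load plus ongoing replication, i.e. CDC)
   from the on-premises MySQL database to the Aurora writer.
3. Point the aggregation jobs at the Aurora Replica (reader endpoint),
   isolating the read-heavy reports from the load jobs.
4. Deploy the collection endpoints: ALB -> Lambda -> RDS Proxy -> Aurora writer.
5. When DMS replication lag reaches zero, repoint the collector DNS
   record to the ALB; customers see no interruption.
6. Disable the DMS task after the cutover completes.
```

Because the collectors keep writing to the on-premises database until the DNS change, and DMS replicates those writes continuously, no sensor data is lost during the switchover.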
Option A is incorrect because an NLB cannot use Lambda functions as targets; only an ALB supports the Lambda target type. It also relies on a custom replication job and on restarting the Aurora Replica as the primary instance, which adds risk and is unnecessary when AWS DMS handles the migration.
Option B is incorrect because it moves the aggregation jobs to the primary database, which can impact the data loading process, negating the original requirement. EC2 instances behind an ALB would work but are less cost-effective than serverless Lambda.
Option D is incorrect because Kinesis Data Streams are designed for real-time streaming data and would require significant code changes to the existing Node.js application. Using Kinesis Data Firehose to replicate to Aurora MySQL is not its intended purpose and is more suitable for data lakes or analytics use cases, not direct database replication for existing applications.
Supporting Documentation:
Amazon RDS Proxy: https://aws.amazon.com/rds/proxy/
AWS Database Migration Service (DMS): https://aws.amazon.com/dms/
Amazon Aurora: https://aws.amazon.com/rds/aurora/
AWS Lambda: https://aws.amazon.com/lambda/
Application Load Balancer (ALB): https://aws.amazon.com/elasticloadbalancing/application-load-balancer/