Question.36 A company wants to build a scalable key management infrastructure to support developers who need to encrypt data in their applications. What should a solutions architect do to reduce the operational burden?
(A) Use multi-factor authentication (MFA) to protect the encryption keys.
(B) Use AWS Key Management Service (AWS KMS) to protect the encryption keys.
(C) Use AWS Certificate Manager (ACM) to create, store, and assign the encryption keys.
(D) Use an IAM policy to limit the scope of users who have access permissions to protect the encryption keys.
Answer is (B) Use AWS Key Management Service (AWS KMS) to protect the encryption keys.
AWS KMS is a fully managed service that makes it easy to create and manage encryption keys. Developers can encrypt and decrypt data in their applications while KMS automatically handles the underlying key management tasks, such as key generation, key rotation, and key deletion. This reduces the operational burden associated with key management.
Option A (MFA) and option D (IAM policies) are access controls rather than key management mechanisms, and option C is incorrect because AWS Certificate Manager provisions and manages TLS/SSL certificates, not application encryption keys. None of these addresses the scalability and management aspects of a key management infrastructure. AWS KMS is designed to simplify the key management process and is the most suitable option for reducing the operational burden in this scenario.
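As a rough illustration of how little key-management plumbing the application needs, here is a minimal boto3 sketch that encrypts and decrypts a small payload with a KMS key. The key alias and region are hypothetical placeholders, not values from the question.

```python
import boto3

# Hypothetical key alias and region; KMS itself handles key storage,
# rotation, and deletion, so the application only calls Encrypt/Decrypt.
kms = boto3.client("kms", region_name="us-east-1")
KEY_ID = "alias/app-data-key"  # placeholder alias for a customer managed key

def encrypt(plaintext: bytes) -> bytes:
    # KMS Encrypt is suitable for small payloads (up to 4 KB); larger data
    # would use envelope encryption via GenerateDataKey instead.
    resp = kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext)
    return resp["CiphertextBlob"]

def decrypt(ciphertext: bytes) -> bytes:
    # The ciphertext blob embeds a reference to the key, so no KeyId is needed.
    resp = kms.decrypt(CiphertextBlob=ciphertext)
    return resp["Plaintext"]

if __name__ == "__main__":
    blob = encrypt(b"sensitive value")
    assert decrypt(blob) == b"sensitive value"
```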
Reference:
https://aws.amazon.com/kms/faqs/
Question.37 A company is developing a file-sharing application that will use an Amazon S3 bucket for storage. The company wants to serve all the files through an Amazon CloudFront distribution. The company does not want the files to be accessible through direct navigation to the S3 URL. What should a solutions architect do to meet these requirements?
(A) Write individual policies for each S3 bucket to grant read permission for only CloudFront access.
(B) Create an IAM user. Grant the user read permission to objects in the S3 bucket. Assign the user to CloudFront.
(C) Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and assigns the target S3 bucket as the Amazon Resource Name (ARN).
(D) Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the OAI has read permission.
Answer is (D) Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the OAI has read permission.
To meet the requirements of serving files through CloudFront while restricting direct access to the S3 bucket URL, the recommended approach is to use an origin access identity (OAI). By creating an OAI and assigning it to the CloudFront distribution, you can control access to the S3 bucket.
This setup ensures that the files stored in the S3 bucket are only accessible through CloudFront and not directly through the S3 bucket URL. Requests made directly to the S3 URL will be blocked.
Option A suggests writing individual policies for each S3 bucket, which can be cumbersome and difficult to manage, especially if there are multiple buckets involved.
Option B suggests creating an IAM user and assigning it to CloudFront, but this does not address restricting direct access to the S3 bucket URL.
Option C suggests writing an S3 bucket policy with the CloudFront distribution ID as the Principal, but a distribution ID is not a valid Principal in a bucket policy; read access must be granted to the OAI instead.
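For illustration, here is a minimal boto3 sketch of the OAI pattern the answer describes (the legacy mechanism this question names): create the identity, then attach a bucket policy that grants read access only to that OAI, so direct S3 URLs are denied. The bucket name and caller reference are hypothetical, and the OAI must still be set as the origin identity in the CloudFront distribution configuration.

```python
import json
import boto3

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")

BUCKET = "example-file-sharing-bucket"  # hypothetical bucket name

# Create the origin access identity.
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "file-sharing-app-oai",  # hypothetical idempotency token
        "Comment": "OAI for the file-sharing distribution",
    }
)
oai_id = oai["CloudFrontOriginAccessIdentity"]["Id"]

# Bucket policy: only the OAI may read objects, so requests made
# directly to the S3 URL fail while CloudFront requests succeed.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront "
                   f"Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```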
Question.38 A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection. The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity. Which solution meets these requirements?
(A) Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
(B) Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
(C) Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
(D) Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.
Answer is (A) Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfers of larger objects. Customers with web or mobile applications that have widespread users, or applications hosted far away from their S3 bucket, can experience long and variable upload and download speeds over the Internet. S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion, and speeds that can affect transfers, and it logically shortens the distance to S3 for remote applications. The other options all add operational complexity: option B requires managing per-Region buckets plus replication and cleanup, option C introduces daily physical device logistics that are far slower than the sites' high-speed Internet connections, and option D adds EC2 instances, EBS snapshots, and cross-Region copies for what is fundamentally an upload task.
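As a sketch of what option A looks like in practice, the snippet below assumes Transfer Acceleration is already enabled on the destination bucket. It routes requests through the accelerate endpoint and lets boto3's transfer manager perform multipart uploads automatically above the configured threshold. The bucket, file names, and tuning values are placeholders.

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

BUCKET = "global-telemetry-data"        # placeholder destination bucket
LOCAL_FILE = "site-readings.parquet"    # placeholder local data file

# Route requests through the S3 Transfer Acceleration endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Multipart settings: files above 64 MiB are split into 64 MiB parts
# uploaded in parallel, which helps saturate a high-speed connection.
transfer_config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=16,
)

s3.upload_file(LOCAL_FILE, BUCKET, "site-a/2024-06-01/readings.parquet",
               Config=transfer_config)
```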
Question.39 An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet. Which solution will provide private network connectivity to Amazon S3?
(A) Create a gateway VPC endpoint to the S3 bucket.
(B) Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
(C) Create an instance profile on Amazon EC2 to allow S3 access.
(D) Create an Amazon API Gateway API with a private link to access the S3 endpoint.
Answer is (A) Create a gateway VPC endpoint to the S3 bucket.
Keywords:
– EC2 instance in a VPC
– The EC2 instance needs to access the S3 bucket without connectivity to the internet
A: Correct – A gateway VPC endpoint connects to the S3 bucket privately over the AWS network, at no additional cost (see the sketch after this list).
B: Incorrect – You can set up an interface VPC endpoint for CloudWatch Logs to keep EC2-to-CloudWatch traffic private, but exported log data can take up to 12 hours to become available in S3, and the requirement is direct EC2-to-S3 access.
C: Incorrect – An instance profile only grants IAM permissions; it does not provide private network connectivity to S3.
D: Incorrect – API Gateway acts as a proxy that receives requests from outside and forwards them to backends such as AWS Lambda, Amazon EC2, Elastic Load Balancing (Application or Classic Load Balancers), Amazon DynamoDB, Amazon Kinesis, or any publicly available HTTPS endpoint, but it does not provide private connectivity to S3.
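A minimal boto3 sketch of creating the gateway endpoint, under hypothetical VPC and route table IDs: the endpoint adds a route to S3's prefix list in the given route table, so instance traffic to S3 never leaves the AWS network.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs; the route table must be the one associated with
# the subnets where the application instances run.
resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(resp["VpcEndpoint"]["VpcEndpointId"])
```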
Reference:
https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html
Question.40 A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time. What should a solutions architect propose to ensure users see all of their documents at once?
(A) Copy the data so both EBS volumes contain all the documents
(B) Configure the Application Load Balancer to direct a user to the server with the documents
(C) Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
(D) Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server
Answer is (C) Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
The symptom occurs because each EBS volume attaches to a single EC2 instance in a single Availability Zone, so each refresh showed only the subset of documents stored on whichever instance handled the request. To ensure users see all of their documents at once, the instances need a shared storage layer: copy the data from both EBS volumes to Amazon EFS (Elastic File System) and modify the application to save new documents to Amazon EFS.
Amazon EFS is a shared, elastic NFS file system that can be mounted concurrently by EC2 instances in multiple Availability Zones, so either instance behind the Application Load Balancer serves the same complete document set. This provides the scalability, availability, and shared access the duplicated architecture was missing.
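As a rough provisioning sketch under hypothetical subnet and security group IDs, the snippet below creates an EFS file system plus one mount target in each of the two Availability Zones, so both EC2 instances can mount the same file system (e.g., at /mnt/docs) and read and write a single document set.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Create the shared file system that replaces the per-instance EBS volumes.
fs = efs.create_file_system(
    CreationToken="file-sharing-docs",   # hypothetical idempotency token
    PerformanceMode="generalPurpose",
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone; both EC2 instances then mount
# the same file system via NFS and see an identical set of documents.
# In practice, wait until the file system state is "available" before
# creating mount targets.
for subnet_id in ["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
    )
```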
Reference:
https://aws.amazon.com/efs/