Question.11 A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company’s solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks. Which solution meets these requirements?
(A) Enable Amazon GuardDuty on the account.
(B) Enable Amazon Inspector on the EC2 instances.
(C) Enable AWS Shield and assign Amazon Route 53 to it.
(D) Enable AWS Shield Advanced and assign the ELB to it.
Answer is (D) Enable AWS Shield Advanced and assign the ELB to it.
AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators.
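As an illustration, the load balancer can be registered as a Shield Advanced protected resource through the API. The following is a minimal sketch using boto3; the load balancer ARN and protection name are hypothetical placeholders, and the account must already have an active Shield Advanced subscription.

```python
import boto3

# Shield Advanced is a global service; its API endpoint lives in us-east-1.
shield = boto3.client("shield", region_name="us-east-1")

# Hypothetical ARN of the Application Load Balancer to protect.
alb_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/my-public-alb/50dc6c495c0c9188"
)

# Register the ELB as a Shield Advanced protected resource.
response = shield.create_protection(
    Name="public-web-alb-protection",
    ResourceArn=alb_arn,
)
print("Protection ID:", response["ProtectionId"])
```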
Option A is incorrect because Amazon GuardDuty is a threat detection service that focuses on identifying malicious activity and unauthorized behavior within AWS accounts. While it is useful for detecting various security threats, it does not specifically address large-scale DDoS attacks.
Option B is also incorrect because Amazon Inspector is a vulnerability assessment service that helps identify security issues and software vulnerabilities on EC2 instances. It does not protect against DDoS attacks. Option C is incorrect because AWS Shield Standard is applied automatically and resources cannot be assigned to it; moreover, the company uses a third-party DNS service, so Route 53 is not part of this architecture.
Reference:
https://aws.amazon.com/shield/features/
Question.12 A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion. Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
(A) Enable versioning on the S3 bucket.
(B) Enable MFA Delete on the S3 bucket.
(C) Create a bucket policy on the S3 bucket.
(D) Enable default encryption on the S3 bucket.
(E) Create a lifecycle policy for the objects in the S3 bucket.
Answers are (A) and (B):
A. Enable versioning on the S3 bucket.
B. Enable MFA Delete on the S3 bucket.
Enabling versioning on the S3 bucket ensures that multiple versions of an object are stored in the bucket. When an object is overwritten or deleted, a new version is created, preserving the previous version so it can be restored.
Enabling MFA Delete adds an additional layer of protection by requiring an MFA device when attempting to permanently delete an object version or change the bucket’s versioning state. This helps prevent accidental or unauthorized deletions by requiring an extra level of authentication.
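A minimal sketch of both steps with boto3 is shown below. The bucket name and the MFA serial/token string are hypothetical, and note that MFA Delete can only be enabled by the bucket owner (root account) using a request authenticated with that MFA device.

```python
import boto3

s3 = boto3.client("s3")
bucket = "critical-accounting-data"  # hypothetical bucket name

# Turn on versioning so every overwrite or delete preserves prior versions,
# and enable MFA Delete in the same call. The MFA parameter is the device
# serial number and the current 6-digit code, separated by a space; this
# request must be made by the root user with that MFA device.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```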
C. Creating a bucket policy on the S3 bucket is focused on defining access control and permissions for the bucket and its objects rather than protecting against accidental deletion.
D. Enabling default encryption on the S3 bucket ensures that any new objects uploaded to the bucket are automatically encrypted. While encryption is important for data security, it does not address accidental deletion.
E. Creating a lifecycle policy for the objects in the S3 bucket allows automated management of objects based on predefined rules. While this can help with data retention and storage cost optimization, it does not protect against accidental deletion.
Reference:
https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/
Question.13 A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability. What should a solutions architect do to meet these requirements?
(A) Create an AWS Lambda function to apply the patch to all EC2 instances.
(B) Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
(C) Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
(D) Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.
Answer is (D) Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.
AWS Systems Manager Run Command allows the company to run commands or scripts on multiple EC2 instances. By using Run Command, the company can quickly and easily apply the patch to all 1,000 EC2 instances to remediate the security vulnerability.
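A sketch of such an ad hoc patch rollout with boto3 follows. The tag filter, patch command, and concurrency settings are hypothetical; Run Command fans the command out to every matching instance, assuming the SSM Agent is installed and the instances are registered with Systems Manager.

```python
import boto3

ssm = boto3.client("ssm")

# Target all 1,000 instances by a hypothetical tag rather than listing IDs.
# AWS-RunShellScript is the managed document for running shell commands.
response = ssm.send_command(
    Targets=[{"Key": "tag:Workload", "Values": ["production"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [
            # Hypothetical vendor-supplied patch command.
            "sudo /opt/thirdparty/bin/apply-security-patch --yes",
        ]
    },
    MaxConcurrency="20%",  # patch in waves of 20% of the fleet
    MaxErrors="5%",        # halt the rollout if more than 5% of runs fail
    Comment="Critical third-party security patch",
)
print("Command ID:", response["Command"]["CommandId"])
```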
Creating an AWS Lambda function to apply the patch (Option A) is not a suitable solution: Lambda functions do not run on the EC2 instances themselves, so the function would need a separate remote-execution mechanism to reach each instance, adding needless complexity. Configuring AWS Systems Manager Patch Manager (Option B) is built around patch baselines and recurring, policy-driven OS patching, so it requires significant setup for a one-off third-party fix. Scheduling an AWS Systems Manager maintenance window (Option C) delays the patch until the window opens, which conflicts with patching as quickly as possible.
D is the best choice: critical means immediate. Just run the patch command with Systems Manager Run Command to get it done.
A: Too convoluted.
B: Could work, but requires a lot of setup first; it would be a good choice if D weren’t an option.
C: It’s a critical patch, so there is no time to wait for a maintenance window.
Reference:
https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html
Question.14 A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be archived for an additional 9 years. No one at the company, including administrative users and root users, should be able to delete the records during the entire 10-year period. The records must be stored with maximum resiliency. Which solution will meet these requirements?
(A) Store the records in S3 Glacier for the entire 10-year period. Use an access control policy to deny deletion of the records for a period of 10 years.
(B) Store the records by using S3 Intelligent-Tiering. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to allow deletion.
(C) Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years.
(D) Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Use S3 Object Lock in governance mode for a period of 10 years.
Answer is (C) Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years.
To keep the records immediately accessible for 1 year and archived for an additional 9 years with maximum resiliency, an S3 Lifecycle policy transitions the records from S3 Standard to S3 Glacier Deep Archive after 1 year; both storage classes store data redundantly across at least three Availability Zones. To ensure that no one, including administrative and root users, can delete the records, S3 Object Lock in compliance mode enforces a 10-year retention period that cannot be shortened or overridden by any user. Governance mode (Option D) can be bypassed by users with special permissions, and S3 One Zone-IA stores data in a single Availability Zone, which fails the maximum-resiliency requirement.
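Below is a minimal sketch of both pieces with boto3. The bucket name is hypothetical, and note that S3 Object Lock can only be used on a bucket that was created with Object Lock enabled.

```python
import boto3

s3 = boto3.client("s3")
bucket = "accounting-records-archive"  # hypothetical; created with Object Lock enabled

# Lifecycle rule: move all records to Glacier Deep Archive after 1 year.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-1-year",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)

# Default retention: compliance mode for 10 years. In compliance mode no
# user, including root, can delete object versions until retention expires.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)
```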
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
Question.15 A solutions architect is developing a VPC architecture that includes multiple subnets. The architecture will host applications that use Amazon EC2 instances and Amazon RDS DB instances. The architecture consists of six subnets in two Availability Zones. Each Availability Zone includes a public subnet, a private subnet, and a dedicated subnet for databases. Only EC2 instances that run in the private subnets can have access to the RDS databases. Which solution will meet these requirements?
(A) Create a new route table that excludes the route to the public subnets’ CIDR blocks. Associate the route table with the database subnets.
(B) Create a security group that denies inbound traffic from the security group that is assigned to instances in the public subnets. Attach the security group to the DB instances.
(C) Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances.
(D) Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.
Answer is (C) Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances.
Only EC2 instances that run in the private subnets may access the RDS databases. A security group attached to the DB instances that permits inbound traffic only from the security group assigned to the instances in the private subnets restricts database access to exactly those instances.
This is a secure design: security groups are deny-by-default, so every other source of inbound traffic is blocked, shielding the RDS databases from unwanted access.
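A minimal sketch of this security-group reference with boto3 follows; the VPC ID, group IDs, and the MySQL port are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: the VPC and the security group already attached to the
# EC2 instances in the private subnets.
vpc_id = "vpc-0abc1234"
private_app_sg = "sg-0priv5678"

# Security group for the DB instances.
db_sg = ec2.create_security_group(
    GroupName="rds-db-sg",
    Description="Allow DB traffic only from private-subnet app instances",
    VpcId=vpc_id,
)["GroupId"]

# Allow inbound MySQL (3306) only when the source is the app security group.
# Security groups are deny-by-default, so no other inbound traffic is allowed.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": private_app_sg}],
        }
    ],
)
```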
Option A, creating a new route table that excludes the route to the public subnets’ CIDR blocks and associating it with the database subnets, would not meet the requirements. Every VPC route table contains a local route for the VPC CIDR that cannot be removed, so all subnets in a VPC can always route to one another; route tables cannot be used to block traffic between subnets.
Option B, creating a security group that denies inbound traffic from the security group assigned to instances in the public subnets and attaching it to the DB instances, would not work because security groups do not support deny rules. They are allow-only: anything not explicitly allowed is already blocked, so a rule that singles out the public subnets for denial cannot be expressed.
Option D, creating peering connections between the public, private, and database subnets, is not valid. VPC peering connects two separate VPCs; peering connections cannot be created between subnets within the same VPC.