Question.16 A company is developing an ecommerce application. The application uses Amazon EC2 instances and an Amazon RDS MySQL database. For compliance reasons, data must be secured in transit and at rest. The company needs a solution that minimizes operational overhead and minimizes cost. Which solution meets these requirements?
(A) Use TLS certificates from AWS Certificate Manager (ACM) with an Application Load Balancer. Deploy self-signed certificates on the EC2 instances. Ensure that the database client software uses a TLS connection to Amazon RDS. Enable encryption of the RDS DB instance. Enable encryption on the Amazon Elastic Block Store (Amazon EBS) volumes that support the EC2 instances.
(B) Use TLS certificates from a third-party vendor with an Application Load Balancer. Install the same certificates on the EC2 instances. Ensure that the database client software uses a TLS connection to Amazon RDS. Use AWS Secrets Manager for client-side encryption of application data.
(C) Use AWS CloudHSM to generate TLS certificates for the EC2 instances. Install the TLS certificates on the EC2 instances. Ensure that the database client software uses a TLS connection to Amazon RDS. Use the encryption keys from CloudHSM for client-side encryption of application data.
(D) Use Amazon CloudFront with AWS WAF. Send HTTP connections to the origin EC2 instances. Ensure that the database client software uses a TLS connection to Amazon RDS. Use AWS Key Management Service (AWS KMS) for client-side encryption of application data before the data is stored in the RDS database.
Answer: A
Explanation:
The correct answer is A. Here’s a breakdown of why it is correct and why the other options are less suitable:
Why Option A is Correct:
- TLS certificates from ACM with ALB: Using ACM provides free, managed TLS certificates for securing traffic between clients and the Application Load Balancer (ALB). This handles encryption in transit for web traffic, minimizing operational overhead related to certificate management. https://aws.amazon.com/certificate-manager/
- Self-signed certificates on EC2 instances: Using self-signed certificates on the EC2 instances encrypts in-transit traffic within the VPC between the ALB and the application instances. Self-signed certificates minimize cost when a CA-signed certificate is not needed, and because the ALB does not validate the certificate chain on back-end connections, certificate management overhead stays low.
- Database client TLS connection: Enforcing a TLS connection from the EC2 instances to the RDS instance ensures that data is encrypted in transit between the application and the database. RDS supports TLS connections. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
- RDS DB instance encryption: Enabling encryption for the RDS DB instance ensures that the data is encrypted at rest within the database storage. This addresses the data at-rest compliance requirement. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
- EBS volume encryption: Encrypting the EBS volumes for the EC2 instances ensures that data at rest on the underlying storage of the EC2 instances is protected. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
- Cost and Overhead: This solution effectively utilizes AWS managed services like ACM and RDS encryption, which are relatively inexpensive and minimize operational burden. Using self-signed certificates between the ALB and EC2 instances further reduces cost and overhead.
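To illustrate the database-client side of this design, a client TLS context that verifies the RDS server certificate might be sketched as follows. This is a minimal sketch, not the company's actual code: the CA-bundle parameter is an assumption, and in practice you would point it at the RDS CA bundle downloaded from AWS.

```python
import ssl

# Sketch: a client-side TLS context for connecting to RDS MySQL.
# cafile=None falls back to the system trust store; in practice you
# would pass the path to the downloaded RDS CA bundle (hypothetical).
def rds_tls_context(ca_bundle_path=None):
    ctx = ssl.create_default_context(cafile=ca_bundle_path)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return ctx

ctx = rds_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # server cert must validate
```

A context like this would then be handed to the MySQL client library so the connection fails closed if the server cannot present a valid certificate.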
Why Other Options Are Incorrect:
- Option B: Third-party certificates add cost and renewal overhead compared to ACM’s free, automatically renewed certificates. AWS Secrets Manager stores and rotates secrets; it is not a client-side encryption service, so the option misapplies it, and client-side encryption is unnecessary when RDS encrypts data at rest.
- Option C: CloudHSM is a dedicated hardware security module. Using it to generate TLS certificates is overkill and significantly increases cost and complexity. Client-side encryption with CloudHSM keys adds unnecessary complexity and isn’t required once RDS encryption is enabled.
- Option D: CloudFront is a CDN, better suited to caching content at the edge than serving as the core security control for an ecommerce application. More importantly, sending plain HTTP from CloudFront/WAF to the origin EC2 instances leaves that hop unencrypted, which fails the encryption-in-transit requirement. Client-side encryption before storing data in RDS is also less efficient and harder to manage than enabling RDS native encryption.
In summary, option A provides a cost-effective and operationally efficient solution for securing data in transit and at rest by using AWS managed services where possible and minimizing unnecessary complexity.
Question.17 A security engineer is working with a company to design an ecommerce application. The application will run on Amazon EC2 instances that run in an Auto Scaling group behind an Application Load Balancer (ALB). The application will use an Amazon RDS DB instance for its database. The only required connectivity from the internet is for HTTP and HTTPS traffic to the application. The application must communicate with an external payment provider that allows traffic only from a preconfigured allow list of IP addresses. The company must ensure that communications with the external payment provider are not interrupted as the environment scales. Which combination of actions should the security engineer recommend to meet these requirements? (Choose three.)
(A) Deploy a NAT gateway in each private subnet for every Availability Zone that is in use.
(B) Place the DB instance in a public subnet.
(C) Place the DB instance in a private subnet.
(D) Configure the Auto Scaling group to place the EC2 instances in a public subnet.
(E) Configure the Auto Scaling group to place the EC2 instances in a private subnet.
(F) Deploy the ALB in a private subnet.
Answer: ACE
Explanation:
Let’s break down why the combination of A, C, and E is the correct solution for this AWS security scenario.
- A. Deploy a NAT gateway in each private subnet for every Availability Zone that is in use. The EC2 instances in the private subnets need to reach the external payment provider, and the provider accepts traffic only from a preconfigured allow list of IP addresses. A NAT gateway solves both problems: it lets instances in private subnets initiate outbound connections while preventing the internet from initiating connections to those instances, and all outbound traffic leaves through the NAT gateway’s Elastic IP address. Those fixed Elastic IPs are what the payment provider adds to its allow list, so the source IPs never change as the Auto Scaling group adds or removes instances. Deploying a NAT gateway in every Availability Zone in use provides high availability and avoids cross-AZ traffic costs, fulfilling the requirement of uninterrupted communication with the payment provider during scaling. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
- C. Place the DB instance in a private subnet. This is a critical security measure. The database should never be directly exposed to the internet. Placing it in a private subnet isolates it from external threats, only allowing access from resources within the VPC, such as the EC2 instances in the application tier. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.Security.html
- E. Configure the Auto Scaling group to place the EC2 instances in a private subnet. Placing the EC2 instances in a private subnet is a key security best practice. The EC2 instances running the application logic do not need direct internet access for incoming requests. The Application Load Balancer (ALB) handles the incoming HTTP/HTTPS traffic from the internet. The EC2 instances communicate with the ALB internally and use the NAT Gateway to access the external payment provider, thus maintaining isolation and security. This also works in tandem with NAT gateway usage. https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-in-vpc.html
Let’s examine why the other options are incorrect:
- F. Deploy the ALB in a private subnet. Incorrect. An internet-facing ALB must be in public subnets to receive inbound HTTP/HTTPS traffic from the internet; it is the entry point for web traffic. Internal ALBs exist, but they cannot receive traffic from the open internet. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
- B. Place the DB instance in a public subnet. Incorrect. This is a significant security risk: a public subnet exposes the database to the internet and makes it vulnerable to attack.
- D. Configure the Auto Scaling group to place the EC2 instances in a public subnet. Incorrect. Placing EC2 instances directly in public subnets is discouraged unless they genuinely need public IP addresses; here the ALB already handles all public traffic.
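The per-AZ layout described above can be sketched as a simple plan. All subnet and gateway names below are illustrative placeholders, not resource identifiers from the scenario.

```python
# Sketch: one NAT gateway per Availability Zone. Each private subnet's
# default route points at the NAT gateway in its own AZ, so an AZ outage
# or a scaling event never forces traffic across AZ boundaries.
def nat_plan(azs):
    return {
        az: {
            "public_subnet": f"public-{az}",    # hosts the NAT gateway + Elastic IP
            "nat_gateway": f"nat-{az}",         # stable source IP for the allow list
            "private_subnet": f"private-{az}",  # EC2 instances and the RDS instance
        }
        for az in azs
    }

plan = nat_plan(["us-east-1a", "us-east-1b"])
print(sorted(plan))
```

The point of the structure is that each private subnet depends only on resources in its own Availability Zone, while the set of NAT gateway Elastic IPs stays fixed for the payment provider's allow list.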
Question.18 A company uses several AWS CloudFormation stacks to handle the deployment of a suite of applications. The leader of the company’s application development team notices that the stack deployments fail with permission errors when some team members try to deploy the stacks. However, other team members can deploy the stacks successfully. The team members access the account by assuming a role that has a specific set of permissions that are necessary for the job responsibilities of the team members. All team members have permissions to perform operations on the stacks. Which combination of steps will ensure consistent deployment of the stacks MOST securely? (Choose three.)
(A) Create a service role that has a composite principal that contains each service that needs the necessary permissions. Configure the role to allow the sts:AssumeRole action.
(B) Create a service role that has cloudformation.amazonaws.com as the service principal. Configure the role to allow the sts:AssumeRole action.
(C) For each required set of permissions, add a separate policy to the role to allow those permissions. Add the ARN of each CloudFormation stack in the resource field of each policy.
(D) For each required set of permissions, add a separate policy to the role to allow those permissions. Add the ARN of each service that needs the permissions in the resource field of the corresponding policy.
(E) Update each stack to use the service role.
(F) Add a policy to each member role to allow the iam:PassRole action. Set the policy’s resource field to the ARN of the service role.
Answer: BDE
Explanation:
Here’s a detailed explanation of why options B, D, and E are the correct choices, and why the others are not, for ensuring consistent and secure CloudFormation stack deployments:
B. Create a service role that has cloudformation.amazonaws.com as the service principal. Configure the role to allow the sts:AssumeRole action.
This is a crucial step. CloudFormation needs a way to assume permissions to create and manage resources on your behalf. This service role grants CloudFormation the authority to perform the actions defined in its policies. The cloudformation.amazonaws.com principal means that only the CloudFormation service can assume this role, limiting the blast radius. Allowing the sts:AssumeRole action is the prerequisite for CloudFormation to take on the role. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html
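The trust policy for such a service role can be sketched as follows. The policy shape follows the IAM documentation; only the CloudFormation service principal may assume the role.

```python
import json

# Trust policy for the CloudFormation service role: only the
# cloudformation.amazonaws.com service principal may call sts:AssumeRole.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "cloudformation.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

print(json.dumps(trust_policy, indent=2))
```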
D. For each required set of permissions, add a separate policy to the role to allow those permissions. Add the ARN of each service that needs the permissions in the resource field of the corresponding policy.
This implements the principle of least privilege. The service role needs specific permissions to create the resources described in the CloudFormation templates (for example, EC2 instances, S3 buckets, or IAM roles). Rather than granting broad access, you create separate policies that permit actions only on the necessary resources, with each policy’s resource field scoped to the ARNs of the resources of each service that the templates actually affect. This minimizes the risk of accidental or malicious actions. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege
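One such scoped permissions policy might be sketched like this. The actions and the bucket ARN pattern are hypothetical examples of what a stack's templates could need, not the company's real resources.

```python
import json

# Sketch: one scoped policy per set of permissions. The actions and the
# resource ARN pattern below are illustrative placeholders.
s3_permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:CreateBucket", "s3:PutBucketTagging", "s3:DeleteBucket"],
        "Resource": "arn:aws:s3:::example-app-*",  # hypothetical bucket ARN pattern
    }],
}

print(json.dumps(s3_permissions_policy))
```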
E. Update each stack to use the service role.
This explicitly tells CloudFormation to use the created service role when creating or updating the stack’s resources. This is essential for consistent deployment, regardless of who initiates it. With the service role in place, the stack runs with a known set of permissions rather than relying on the permissions of the individual user invoking the deployment, which resolves the problem of some team members lacking the necessary permissions. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-service-role.html
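With boto3, associating the service role with an existing stack could be sketched as follows. The stack name and role ARN are placeholders, and the API call itself is only indicated in a comment rather than executed.

```python
# Sketch: parameters for CloudFormation's UpdateStack call that attach the
# service role to a stack. In practice you would pass these to
# boto3.client("cloudformation").update_stack(**update_kwargs).
update_kwargs = {
    "StackName": "app-suite-stack",  # hypothetical stack name
    "UsePreviousTemplate": True,     # no template change, only the role
    "RoleARN": "arn:aws:iam::111122223333:role/cfn-deployment-role",
}

print(update_kwargs["RoleARN"])
```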
Why the other options are incorrect:
F. Add a policy to each member role to allow the iam:PassRole action. Set the policy’s resource field to the ARN of the service role.
iam:PassRole is only needed when users must pass the service role to another AWS service themselves. Here the service role has already been associated with each stack, and CloudFormation assumes it directly, so the team members never pass the role. Option F is therefore not required and would grant unnecessary permissions.
A. Create a service role that has a composite principal that contains each service that needs the necessary permissions. Configure the role to allow the sts:AssumeRole action. This is overly complex and unnecessary. CloudFormation is the service deploying the resources, so it is the only principal that needs to assume the role directly. A composite principal is not the proper approach in this context.
C. For each required set of permissions, add a separate policy to the role to allow those permissions. Add the ARN of each CloudFormation stack in the resource field of each policy. This is incorrect. The resource field should contain the ARN of the resource being affected, not the CloudFormation stack itself. The stack is simply the tool managing the resources; the policies need to govern actions on the underlying AWS resources.
Question.19 A company used a lift-and-shift approach to migrate from its on-premises data centers to the AWS Cloud. The company migrated on-premises VMs to Amazon EC2 instances. Now the company wants to replace some of components that are running on the EC2 instances with managed AWS services that provide similar functionality. Initially, the company will transition from load balancer software that runs on EC2 instances to AWS Elastic Load Balancers. A security engineer must ensure that after this transition, all the load balancer logs are centralized and searchable for auditing. The security engineer must also ensure that metrics are generated to show which ciphers are in use. Which solution will meet these requirements?
(A) Create an Amazon CloudWatch Logs log group. Configure the load balancers to send logs to the log group. Use the CloudWatch Logs console to search the logs. Create CloudWatch Logs filters on the logs for the required metrics.
(B) Create an Amazon S3 bucket. Configure the load balancers to send logs to the S3 bucket. Use Amazon Athena to search the logs that are in the S3 bucket. Create Amazon CloudWatch filters on the S3 log files for the required metrics.
(C) Create an Amazon S3 bucket. Configure the load balancers to send logs to the S3 bucket. Use Amazon Athena to search the logs that are in the S3 bucket. Create Athena queries for the required metrics. Publish the metrics to Amazon CloudWatch.
(D) Create an Amazon CloudWatch Logs log group. Configure the load balancers to send logs to the log group. Use the AWS Management Console to search the logs. Create Amazon Athena queries for the required metrics. Publish the metrics to Amazon CloudWatch.
Answer: C
Explanation:
The correct solution is C. Here’s why:
The core requirement is centralized, searchable load balancer logs with cipher usage metrics.
- Centralized Logging and Search: AWS Elastic Load Balancers can directly send access logs to an Amazon S3 bucket. S3 provides durable storage for these logs. Amazon Athena can then be used to query these logs directly within S3 using SQL, making them searchable for auditing purposes. This addresses the “centralized and searchable logs” requirement.
- Cipher Usage Metrics: Athena’s SQL query capabilities can be leveraged to analyze the load balancer logs and extract information about the ciphers used during SSL/TLS connections. By crafting specific Athena queries, the security engineer can count the occurrences of each cipher and derive the required metrics. These metrics are not automatically generated by CloudWatch from the logs themselves.
- Publishing Metrics: After extracting the cipher usage metrics using Athena, the results can be published to Amazon CloudWatch as custom metrics. This allows for visualization, alarming, and further analysis of the security posture.
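As an illustration of what the cipher analysis computes, ALB access logs are space-delimited with quoted request and user-agent fields, and the ssl_cipher field follows them. In practice this aggregation would be an Athena SQL query over the S3 logs; the small parser below just demonstrates the extraction on a fabricated sample line.

```python
import shlex
from collections import Counter

# Sketch: count TLS ciphers in ALB access log lines. Per the documented
# ALB access log format, ssl_cipher is the field after the quoted
# "request" and "user_agent" fields (index 14 once quoting is honored).
def cipher_counts(log_lines):
    counts = Counter()
    for line in log_lines:
        fields = shlex.split(line)  # shlex honors the quoted fields
        if len(fields) > 15 and fields[0] == "https":
            counts[fields[14]] += 1
    return counts

# Fabricated sample entry for illustration only.
sample = ('https 2024-01-01T00:00:00.000000Z app/my-alb/abc123 '
          '203.0.113.10:52000 10.0.0.5:80 0.001 0.002 0.000 200 200 100 500 '
          '"GET https://example.com/ HTTP/1.1" "curl/8.0.1" '
          'ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2')
print(cipher_counts([sample]))
```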
Let’s examine why the other options are less suitable:
- A: Elastic Load Balancing access logs can be delivered only to an Amazon S3 bucket; load balancers cannot send access logs directly to CloudWatch Logs. Even if the logs were in CloudWatch Logs, metric filters are suited to simple pattern matching, not to extracting per-cipher aggregate metrics from log fields.
- B: Storing the logs in S3 and searching them with Athena is correct, but CloudWatch filters cannot be applied to files in S3; they operate on log streams ingested into CloudWatch Logs, so the metrics portion of this option does not work. CloudWatch Logs Insights can extract metrics, but only from logs ingested into CloudWatch Logs, which is not where ALB access logs are delivered.
- D: ALB access logs are delivered to S3, not to CloudWatch Logs, so the log group in this option would never receive them. CloudWatch Logs also lacks Athena’s SQL-based querying, and routing the same logs through both services would be inefficient.
Therefore, option C provides the most efficient and direct way to achieve the stated requirements by leveraging the strengths of S3 for storage, Athena for querying and metric extraction, and CloudWatch for monitoring and alerting on the extracted metrics.
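The final publishing step can be sketched as the payload for CloudWatch's PutMetricData API. The namespace, metric name, and value below are illustrative, and the API call itself is only indicated in a comment.

```python
# Sketch: a custom-metric payload for cipher usage. In practice you would
# pass this to boto3.client("cloudwatch").put_metric_data(**metric_payload)
# after computing the counts with an Athena query.
metric_payload = {
    "Namespace": "Custom/ALB-TLS",  # hypothetical namespace
    "MetricData": [{
        "MetricName": "CipherConnections",
        "Dimensions": [{"Name": "Cipher",
                        "Value": "ECDHE-RSA-AES128-GCM-SHA256"}],
        "Value": 42.0,  # illustrative count from the Athena query
        "Unit": "Count",
    }],
}

print(metric_payload["MetricData"][0]["MetricName"])
```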
Relevant Documentation:
Amazon CloudWatch Custom Metrics: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
Elastic Load Balancing Access Logs: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
Amazon Athena: https://aws.amazon.com/athena/
Question.20 A company uses AWS Organizations to manage a multi-account AWS environment in a single AWS Region. The organization’s management account is named management-01. The company has turned on AWS Config in all accounts in the organization. The company has designated an account named security-01 as the delegated administrator for AWS Config. All accounts report the compliance status of each account’s rules to the AWS Config delegated administrator account by using an AWS Config aggregator. Each account administrator can configure and manage the account’s own AWS Config rules to handle each account’s unique compliance requirements. A security engineer needs to implement a solution to automatically deploy a set of 10 AWS Config rules to all existing and future AWS accounts in the organization. The solution must turn on AWS Config automatically during account creation. Which combination of steps will meet these requirements? (Choose two.)
(A) Create an AWS CloudFormation template that contains the 10 required AWS Config rules. Deploy the template by using CloudFormation StackSets in the security-01 account.
(B) Create a conformance pack that contains the 10 required AWS Config rules. Deploy the conformance pack from the security-01 account.
(C) Create a conformance pack that contains the 10 required AWS Config rules. Deploy the conformance pack from the management-01 account.
(D) Create an AWS CloudFormation template that will activate AWS Config. Deploy the template by using CloudFormation StackSets in the security-01 account.
(E) Create an AWS CloudFormation template that will activate AWS Config. Deploy the template by using CloudFormation StackSets in the management-01 account.
Answer: BE
Explanation:
Here’s a detailed justification for the answer choices B and E:
Choice B: Create a conformance pack that contains the 10 required AWS Config rules. Deploy the conformance pack from the security-01 account.
This is correct because AWS Config conformance packs provide a simplified way to package a collection of AWS Config rules and remediation actions that can be deployed as a single entity across an organization. Delegated administrators (like the security-01 account) can deploy conformance packs to member accounts. This centralizes the management and deployment of Config rules across the organization. This method ensures all accounts, including existing ones, get the predefined set of AWS Config rules, promoting consistent compliance posture.
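Deploying an organization-wide conformance pack from the delegated administrator can be sketched as the parameters for AWS Config's PutOrganizationConformancePack API. The pack name and S3 URI are placeholders, and the call is only indicated in a comment.

```python
# Sketch: parameters for
# boto3.client("config").put_organization_conformance_pack(**pack_params),
# run from the security-01 delegated administrator account.
pack_params = {
    "OrganizationConformancePackName": "baseline-10-rules",       # hypothetical
    "TemplateS3Uri": "s3://example-conformance-packs/baseline.yaml",  # hypothetical
}

print(pack_params["OrganizationConformancePackName"])
```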
Choice E: Create an AWS CloudFormation template that will activate AWS Config. Deploy the template by using CloudFormation StackSets in the management-01 account.
This is correct because to automatically enable AWS Config in all existing and future accounts within an AWS Organization, you can leverage CloudFormation StackSets from the management account. StackSets allow you to deploy a CloudFormation template to multiple accounts and Regions with a single operation. In this case, the CloudFormation template would enable AWS Config. Because the template can be deployed to future accounts as part of the account creation process (using StackSets’ automatic deployment features to new organizational units), it satisfies the requirement of automatic activation upon account creation. The management account must initiate this process to ensure organizational-wide setup.
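The StackSets side can be sketched as the parameters for CloudFormation's CreateStackSet API with automatic deployment enabled. The stack set name is a placeholder and the template body is elided; the call is only indicated in a comment.

```python
# Sketch: a service-managed stack set that auto-deploys the Config-enabling
# template to every new account added to the targeted organizational units.
# Intended for boto3.client("cloudformation").create_stack_set(**stack_set),
# run from the management-01 account.
stack_set = {
    "StackSetName": "enable-aws-config",  # hypothetical
    "TemplateBody": "{}",                 # placeholder for the real template
    "PermissionModel": "SERVICE_MANAGED",
    "AutoDeployment": {"Enabled": True,
                       "RetainStacksOnAccountRemoval": False},
}

print(stack_set["AutoDeployment"]["Enabled"])
```

The SERVICE_MANAGED permission model with AutoDeployment enabled is what makes the stack deploy automatically to future accounts as they are created in the organization.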
Why other options are incorrect:
- A: CloudFormation StackSets can deploy templates to multiple accounts, but using them directly to deploy individual config rules, especially when a solution like conformance packs is available, is less efficient and not the intended use case for the given scenario.
- C: Although conformance packs can ensure consistency, they would be deployed from the delegated administrator account (security-01), according to the prompt, and not management-01.
- D: While StackSets can deploy infrastructure, deploying the AWS Config activation template from the delegated administrator account does not follow the best practice of establishing foundational, organization-wide policies from the management account. The management account should be responsible for the organization-wide setup of AWS Config.
Supporting links:
AWS Organizations: https://aws.amazon.com/organizations/
AWS Config Conformance Packs: https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html
AWS CloudFormation StackSets: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-stacksets.html