Question.46 A company’s security engineer is designing an isolation procedure for Amazon EC2 instances as part of an incident response plan. The security engineer needs to isolate a target instance to block any traffic to and from the target instance, except for traffic from the company’s forensics team. Each of the company’s EC2 instances has its own dedicated security group. The EC2 instances are deployed in subnets of a VPC. A subnet can contain multiple instances. The security engineer is testing the procedure for EC2 isolation and opens an SSH session to the target instance. The procedure starts to simulate access to the target instance by an attacker. The security engineer removes the existing security group rules and adds security group rules to give the forensics team access to the target instance on port 22. After these changes, the security engineer notices that the SSH connection is still active and usable. When the security engineer runs a ping command to the public IP address of the target instance, the ping command is blocked. What should the security engineer do to isolate the target instance? (A) Add an inbound rule to the security group to allow traffic from 0.0.0.0/0 for all ports. Add an outbound rule to the security group to allow traffic to 0.0.0.0/0 for all ports. Then immediately delete these rules. (B) Remove the port 22 security group rule. Attach an instance role policy that allows AWS Systems Manager Session Manager connections so that the forensics team can access the target instance. (C) Create a network ACL that is associated with the target instance’s subnet. Add a rule at the top of the inbound rule set to deny all traffic from 0.0.0.0/0. Add a rule at the top of the outbound rule set to deny all traffic to 0.0.0.0/0. (D) Create an AWS Systems Manager document that adds a host-level firewall rule to block all inbound traffic and outbound traffic. Run the document on the target instance.
Answer: B
Explanation:
The correct answer is B. Here’s why:
The problem highlights the need for immediate isolation while preserving access for a specific team (forensics). Security groups are stateful: once a connection is established (like the existing SSH session), its tracked traffic continues to flow even after the security group rules are modified. This explains why the SSH connection remained active. Keeping a port 22 rule in place for the forensics team therefore does not sever the attacker’s session; the port 22 rule must be removed entirely, and the forensics team needs an access path that does not depend on inbound ports.
Option A is incorrect because temporarily allowing all traffic defeats the purpose of isolation.
Option C is incorrect because network ACLs (NACLs) operate at the subnet level, and the scenario states that a subnet can contain multiple instances. Deny-all NACL rules would cut off every instance in the shared subnet, not just the target, and they would also block the forensics team’s access to the instance.
Option D is incorrect because running an AWS Systems Manager document to apply host-level firewall rules takes longer than a security group change and relies on tooling inside a potentially compromised instance. Additionally, modifying the host-level firewall might interfere with the forensics team’s tools or create unintended side effects.
Option B is correct. AWS Systems Manager Session Manager allows access to EC2 instances without opening inbound ports (like SSH) and without managing SSH keys. Removing the port 22 security group rule eliminates the instance’s remaining inbound network path, while Session Manager gives the forensics team secure, auditable access through the SSM Agent’s outbound connection to the Systems Manager service. This combination isolates the instance from external traffic while preserving controlled, auditable access for the forensics team.
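Mechanically, option B boils down to two CLI calls. The following is a rough sketch with placeholder IDs and CIDR, not the exam’s prescribed commands:

```bash
# A minimal sketch of option B; the group ID, instance ID, and CIDR are
# placeholders. Removing the port 22 rule leaves the dedicated security group
# with no inbound rules; the forensics team then connects through Session
# Manager, which needs no inbound ports or SSH keys. (This assumes the
# instance role permits Systems Manager and the SSM Agent retains an outbound
# HTTPS path, e.g., via VPC interface endpoints.)
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.0/24

aws ssm start-session --target i-0123456789abcdef0
```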
In Summary:
- Immediate Isolation Needed: The scenario emphasizes the need for instant traffic blockage.
- Security Group Limitations: Security groups are stateful and don’t immediately terminate existing connections.
- Network ACL Blast Radius: NACLs filter traffic at the subnet level, so the deny-all rules in Option C would isolate every instance in the shared subnet, not just the target.
- Session Manager for Secure Access: Systems Manager Session Manager offers a secure, keyless, and auditable alternative to SSH for forensics access.
- Avoid Unnecessary Access: Granting broad access (Option A) defeats the purpose of isolation.
- Consider Tool Complexity: Relying on host-level firewall rules (Option D) depends on software inside a potentially compromised host and may interfere with forensics tooling.
Supporting Links:
AWS Systems Manager Session Manager: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
AWS Security Groups: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
AWS Network ACLs: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_ACLs.html
Question.47 A startup company is using a single AWS account that has resources in a single AWS Region. A security engineer configures an AWS CloudTrail trail in the same Region to deliver log files to an Amazon S3 bucket by using the AWS CLI. Because of expansion, the company adds resources in multiple Regions. The security engineer notices that the logs from the new Regions are not reaching the S3 bucket. What should the security engineer do to fix this issue with the LEAST amount of operational overhead? (A) Create a new CloudTrail trail. Select the new Regions where the company added resources. (B) Change the S3 bucket to receive notifications to track all actions from all Regions. (C) Create a new CloudTrail trail that applies to all Regions. (D) Change the existing CloudTrail trail so that it applies to all Regions.
Answer: D
Explanation:
The correct answer is D. Change the existing CloudTrail trail so that it applies to all Regions.
Here’s a detailed justification:
The problem is that the existing CloudTrail trail, which was created in a single Region, is not collecting logs from the newly added Regions. A trail created with the AWS CLI logs events only in the Region where it was created unless it is explicitly configured as a multi-Region trail.
Option A, creating a new CloudTrail trail for the new Regions, would work, but it increases operational overhead. You’d have multiple trails to manage, each potentially sending logs to the same S3 bucket or requiring a more complex setup to consolidate them. This adds unnecessary administrative complexity.
Option B, changing the S3 bucket to receive notifications, doesn’t address the root issue. S3 bucket notifications are triggered by object-level events within the bucket. While helpful for other purposes, they won’t ensure logs from all Regions are collected by CloudTrail in the first place.
Option C, creating a new CloudTrail trail that applies to all Regions, is a valid solution but less efficient than modifying the existing trail. It introduces a second trail, requiring you to manage two separate configurations.
Option D, changing the existing CloudTrail trail to apply to all Regions, is the most efficient and least operationally intensive solution. CloudTrail supports the capability to configure a trail to be multi-regional. By enabling this feature, the existing trail will automatically start collecting logs from all AWS Regions within the account and deliver them to the configured S3 bucket. This eliminates the need for multiple trails and simplifies management.
Therefore, modifying the existing trail is the preferred approach because it leverages the built-in multi-region logging capability of CloudTrail, minimizing administrative effort while ensuring comprehensive logging across all regions.
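For reference, converting the existing trail is a single CLI call. A minimal sketch, assuming the trail is named my-trail (a placeholder):

```bash
# Convert the existing single-Region trail into a multi-Region trail that
# delivers events from every Region to the same S3 bucket. The trail name
# my-trail is a placeholder.
aws cloudtrail update-trail \
    --name my-trail \
    --is-multi-region-trail
```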
Relevant Documentation:
Creating a Trail for All Regions: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create_trail_console.html
AWS CloudTrail Concepts: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html
Question.48 A company’s public Application Load Balancer (ALB) recently experienced a DDoS attack. To mitigate this issue, the company deployed Amazon CloudFront in front of the ALB so that users would not directly access the Amazon EC2 instances behind the ALB. The company discovers that some traffic is still coming directly into the ALB and is still being handled by the EC2 instances. Which combination of steps should the company take to ensure that the EC2 instances will receive traffic only from CloudFront? (Choose two.) (A) Configure CloudFront to add a cache key policy to allow a custom HTTP header that CloudFront sends to the ALB. (B) Configure CloudFront to add a custom HTTP header to requests that CloudFront sends to the ALB. (C) Configure the ALB to forward only requests that contain the custom HTTP header. (D) Configure the ALB and CloudFront to use the X-Forwarded-For header to check client IP addresses. (E) Configure the ALB and CloudFront to use the same X.509 certificate that is generated by AWS Certificate Manager (ACM).
Answer: BC
Explanation:
Here’s a detailed justification for why options B and C are the correct solutions to ensure only CloudFront traffic reaches the ALB, along with why the other options are incorrect.
Why B and C are correct:
The core problem is preventing direct access to the ALB, bypassing CloudFront. The most effective way to achieve this is to implement a shared-secret check between CloudFront and the ALB. This involves CloudFront adding a unique identifier to each request (a custom HTTP header) and configuring the ALB to only accept requests that contain this specific identifier.
- B. Configure CloudFront to add a custom HTTP header to requests that CloudFront sends to the ALB: This is the first part of the solution. CloudFront is configured to include an origin custom header (e.g., X-Origin-Verify carrying a secret value) in all requests it forwards to the ALB. This header acts as a shared secret that only CloudFront knows and can add. https://docs.aws.amazon.com/cloudfront/latest/developerguide/using-cloudfront-headers.html
- C. Configure the ALB to forward only requests that contain the custom HTTP header: This is the crucial second part. The ALB’s listener rules are configured to forward only traffic that includes the specific custom header added by CloudFront (security groups cannot inspect HTTP headers, so this must be done in listener rules). Any request that bypasses CloudFront and goes directly to the ALB will not carry this header, and the ALB will reject it, typically with a fixed 403 response as the default listener action. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html
Why other options are incorrect:
- A. Configure CloudFront to add a cache key policy to allow a custom HTTP header that CloudFront sends to the ALB: Cache key policies control what parts of the request are used to generate the cache key. While custom headers can be used in cache keys, this doesn’t inherently block traffic from directly accessing the ALB. It only affects CloudFront’s caching behavior and doesn’t address the direct access issue.
- D. Configure the ALB and CloudFront to use the X-Forwarded-For header to check client IP addresses: The X-Forwarded-For header is used to pass the original client IP address to the backend servers when a proxy (like CloudFront) is in front. While it’s useful for logging and analytics, relying solely on it for security is insufficient. A malicious user could spoof this header, rendering it ineffective for blocking direct access to the ALB.
- E. Configure the ALB and CloudFront to use the same X.509 certificate that is generated by AWS Certificate Manager (ACM): Sharing the same SSL/TLS certificate ensures encrypted communication between the client and CloudFront, and between CloudFront and the ALB. However, it doesn’t prevent someone from bypassing CloudFront and directly accessing the ALB using its public IP address or DNS name. Encryption alone doesn’t provide authentication to ensure the request originated from CloudFront.
In summary, the custom header approach provides a verifiable way to ensure that traffic reaching the ALB is only from CloudFront, effectively mitigating direct access and DDoS attacks targeted directly at the ALB.
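As a rough sketch of the ALB side (option C), a listener rule can forward only requests that carry the custom header. The ARNs, the header name X-Origin-Verify, and the secret value below are all placeholders, and the listener’s default action is assumed to be a fixed 403 response:

```bash
# Forward only requests that carry the shared-secret header; everything else
# falls through to the listener's default action (assumed here to be a fixed
# 403 response). All ARNs and the header value are placeholders.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc123/def456 \
    --priority 1 \
    --conditions '[{"Field":"http-header","HttpHeaderConfig":{"HttpHeaderName":"X-Origin-Verify","Values":["replace-with-secret-value"]}}]' \
    --actions '[{"Type":"forward","TargetGroupArn":"arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/abc123"}]'
```

On the CloudFront side (option B), the same header name and secret value are configured as an origin custom header on the ALB origin; rotating the secret periodically limits the impact of a leak.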
Question.49 A company discovers a billing anomaly in its AWS account. A security consultant investigates the anomaly and discovers that an employee who left the company 30 days ago still has access to the account. The company has not monitored account activity in the past. The security consultant needs to determine which resources have been deployed or reconfigured by the employee as quickly as possible. Which solution will meet these requirements? (A) In AWS Cost Explorer, filter chart data to display results from the past 30 days. Export the results to a data table. Group the data table by resource. (B) Use AWS Cost Anomaly Detection to create a cost monitor. Access the detection history. Set the time frame to Last 30 days. In the search area, choose the service category. (C) In AWS CloudTrail, filter the event history to display results from the past 30 days. Create an Amazon Athena table that contains the data. Partition the table by event source. (D) Use AWS Audit Manager to create an assessment for the past 30 days. Apply a usage-based framework to the assessment. Configure the assessment to assess by resource.
Answer: C
Explanation:
The correct answer is C. In AWS CloudTrail, filter the event history to display results from the past 30 days. Create an Amazon Athena table that contains the data. Partition the table by event source.
Here’s why:
- CloudTrail’s purpose: CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It logs AWS API calls made by or on behalf of your AWS account, providing a record of actions taken in your environment. This is exactly what’s needed to trace the employee’s activity.
- Filtering by time: The requirement to investigate the last 30 days of activity directly aligns with CloudTrail’s filtering capabilities. You can easily specify a time range to narrow down the event history.
- Amazon Athena integration: Athena allows you to query CloudTrail logs stored in S3 using standard SQL. This means you can analyze the logs efficiently to identify which resources were deployed or reconfigured.
- Partitioning by event source: Partitioning the Athena table by event source (e.g., EC2, S3, IAM) enhances query performance by allowing Athena to only scan the relevant partitions based on the service in question.
- Identifying resources: CloudTrail events include details about the resources affected by each API call. By querying the Athena table, you can determine which resources the employee interacted with.
Why other options are less suitable:
- A (AWS Cost Explorer): Cost Explorer is designed for analyzing spending patterns, not for tracking specific user actions or resource configurations. While it might show increased costs, it won’t pinpoint the employee’s actions directly.
- B (AWS Cost Anomaly Detection): Cost Anomaly Detection flags unusual spending but doesn’t provide details on who made the changes or which resources were involved.
- D (AWS Audit Manager): Audit Manager is used for compliance auditing against predefined frameworks. While it can provide insights, it’s not the quickest way to reconstruct an individual’s activity and is less granular compared to CloudTrail when tracking API calls.
In summary: CloudTrail, combined with Athena, provides the fastest and most direct approach to determine which resources were deployed or reconfigured by the employee within the specified time frame. It captures API calls, which are the foundation of all actions in AWS, and Athena allows for efficient analysis.
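As a quick first pass before setting up Athena, the same event history can be queried directly from the AWS CLI. A minimal sketch, assuming GNU date syntax and a placeholder user name for the departed employee:

```bash
# Pull the last 30 days of CloudTrail event history, filtered to the departed
# employee's IAM user name (a placeholder value). Event history retains 90
# days of management events, which covers the 30-day window in the scenario.
aws cloudtrail lookup-events \
    --start-time "$(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --lookup-attributes AttributeKey=Username,AttributeValue=departed-employee
```

For deeper analysis, the trail’s S3 logs can then be loaded into the Athena table described in option C.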
Authoritative Links:
Amazon Athena: https://aws.amazon.com/athena/
AWS CloudTrail: https://aws.amazon.com/cloudtrail/
Question.50 A security engineer is checking an AWS CloudFormation template for vulnerabilities. The security engineer finds a parameter that has a default value that exposes an application’s API key in plaintext. The parameter is referenced several times throughout the template. The security engineer must replace the parameter while maintaining the ability to reference the value in the template. Which solution will meet these requirements in the MOST secure way? (A) Store the API key value as a SecureString parameter in AWS Systems Manager Parameter Store. In the template, replace all references to the value with {{resolve:ssm:MySSMParameterName:1}}. (B) Store the API key value in AWS Secrets Manager. In the template, replace all references to the value with {{resolve:secretsmanager:MySecretId:SecretString}}. (C) Store the API key value in Amazon DynamoDB. In the template, replace all references to the value with {{resolve:dynamodb:MyTableName:MyPrimaryKey}}. (D) Store the API key value in a new Amazon S3 bucket. In the template, replace all references to the value with {{resolve:s3:MyBucketName:MyObjectName}}.
Answer: B
Explanation:
The correct answer is B. Store the API key value in AWS Secrets Manager. In the template, replace all references to the value with {{resolve:secretsmanager:MySecretId:SecretString}}.
Here’s a detailed justification:
The problem highlights a security vulnerability: storing a sensitive API key in plaintext within a CloudFormation template’s default parameter. The requirement is to securely store the API key and still be able to reference it within the template.
Option B leverages AWS Secrets Manager, a service specifically designed for managing secrets like API keys, passwords, and database credentials. Secrets Manager encrypts secrets at rest using AWS KMS and integrates with AWS CloudFormation through the {{resolve:secretsmanager:MySecretId:SecretString}} dynamic reference. This approach avoids storing the plaintext API key directly in the template; the SecretString segment of the reference retrieves the secret value when the stack is created or updated. Secrets Manager also offers rotation capabilities and integrates with audit logging, enhancing the overall security posture.
Option A uses AWS Systems Manager Parameter Store with a SecureString parameter. While Parameter Store can store encrypted values, Secrets Manager is generally preferred for managing sensitive information like API keys because it provides built-in secret rotation that Parameter Store lacks. In addition, the {{resolve:ssm:...}} dynamic reference shown in option A resolves String and StringList parameters; SecureString values require the separate {{resolve:ssm-secure:...}} form, which is supported only for a limited set of resource properties.
Option C proposes using Amazon DynamoDB. While DynamoDB can store data, it’s not a secrets management service. Storing API keys in DynamoDB requires implementing custom encryption and access control mechanisms, which is less secure and more complex than using Secrets Manager. Moreover, the {{resolve:dynamodb:...}} dynamic reference doesn’t exist in CloudFormation, making this approach impossible as written.
Option D suggests storing the API key in an S3 bucket. S3 is primarily for object storage, not secret management. Storing secrets directly in S3 introduces significant security risks if access controls are not properly configured and maintained. There is no {{resolve:s3:...}} dynamic reference in CloudFormation either.
Therefore, option B provides the most secure and straightforward solution by utilizing AWS Secrets Manager, a dedicated secret management service that integrates seamlessly with CloudFormation through dynamic references. It avoids plaintext storage, provides encryption, and offers secret rotation and auditing capabilities.
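As a sketch of the workflow, the key is created once in Secrets Manager and the template’s references are swapped for the dynamic reference shown in option B. The secret name MyApiKey and the value below are placeholders:

```bash
# Create the secret once; the CloudFormation template then reads it at deploy
# time via a dynamic reference instead of a plaintext parameter default.
# Both the name and the value shown here are placeholders.
aws secretsmanager create-secret \
    --name MyApiKey \
    --secret-string 'replace-with-real-api-key'
```

Every former reference to the parameter then becomes {{resolve:secretsmanager:MyApiKey:SecretString}} in the template.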
Authoritative Links:
AWS Systems Manager Parameter Store: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
AWS Secrets Manager: https://aws.amazon.com/secrets-manager/
AWS CloudFormation Intrinsic Functions: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html