Question.36 A company has enabled Amazon GuardDuty in all AWS Regions as part of its security monitoring strategy. In one of its VPCs, the company hosts an Amazon EC2 instance that works as an FTP server. A high number of clients from multiple locations contact the FTP server. GuardDuty identifies this activity as a brute force attack because of the high number of connections that happen every hour. The company has flagged the finding as a false positive, but GuardDuty continues to raise the issue. A security engineer must improve the signal-to-noise ratio without compromising the company’s visibility of potential anomalous behavior. Which solution will meet these requirements? (A) Disable the FTP rule in GuardDuty in the Region where the FTP server is deployed. (B) Add the FTP server to a trusted IP list. Deploy the list to GuardDuty to stop receiving the notifications. (C) Create a suppression rule in GuardDuty to filter findings by automatically archiving new findings that match the specified criteria. (D) Create an AWS Lambda function that has the appropriate permissions to delete the finding whenever a new occurrence is reported.
Answer: C
Explanation:
The correct answer is C: Create a suppression rule in GuardDuty to filter findings by automatically archiving new findings that match the specified criteria.
Here’s why:
GuardDuty is raising a false positive due to the legitimate high volume of connections to the FTP server. The goal is to reduce noise without losing visibility of actual threats. Suppression rules in GuardDuty provide a way to automatically archive findings that match specific criteria. This is ideal in situations where a legitimate activity triggers a finding that is not a real threat. By creating a rule based on characteristics of the FTP server connections (e.g., source IP ranges, destination port), the security engineer can automatically archive these findings, effectively suppressing the false positives.
Option A (disabling the FTP rule) is incorrect because it would eliminate the detection of any FTP-related attacks, not just the false positives. This reduces overall security visibility.
Option B (using a trusted IP list) is not suitable in this scenario. Trusted IP lists tell GuardDuty not to generate findings for traffic involving listed external IP addresses; they are meant for a known, fixed set of safe IPs, not for a large and changing population of clients. The problem here is not a malicious source IP but the high volume of connections, and the FTP clients may connect from varying source addresses, so adding the server itself to a trusted list would not suppress the finding.
Option D (Lambda function to delete findings) is an unnecessary and inefficient solution. Deleting the finding programmatically doesn’t address the root cause (the conditions triggering the false positive) and requires more complex configuration and maintenance than using built-in GuardDuty suppression rules. Suppression rules are the intended mechanism for addressing false positives in GuardDuty.
Suppression rules allow the security engineer to focus on genuine threats while acknowledging and automatically managing known exceptions. This approach balances security monitoring with practicality.
For more information on GuardDuty suppression rules:
AWS Documentation: https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_suppression-rules.html
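To make the suppression concrete, here is a minimal sketch of such a rule expressed as parameters for the GuardDuty CreateFilter API (the mechanism behind suppression rules). The detector ID, instance ID, and the exact finding type shown are illustrative placeholders, not values from the scenario:

```python
# Sketch: a GuardDuty suppression rule is a filter whose Action is ARCHIVE.
# All identifiers below (detector ID, instance ID, finding type) are
# placeholders for illustration only.

def build_suppression_rule(detector_id: str, instance_id: str) -> dict:
    """Return CreateFilter parameters that auto-archive brute-force findings
    raised against one specific instance (the known-good FTP server)."""
    return {
        "DetectorId": detector_id,
        "Name": "suppress-ftp-server-false-positive",
        "Action": "ARCHIVE",  # automatically archive new matching findings
        "FindingCriteria": {
            "Criterion": {
                # hypothetical finding type -- match whatever type GuardDuty
                # actually reports for the FTP server's traffic
                "type": {"Equals": ["UnauthorizedAccess:EC2/SSHBruteForce"]},
                # scope the rule to the FTP server only, preserving visibility
                # into the same finding type on every other instance
                "resource.instanceDetails.instanceId": {"Equals": [instance_id]},
            }
        },
    }

params = build_suppression_rule("12abc34d567e8fa901bc2d34e56789f0",
                                "i-0123456789abcdef0")
# A real deployment would then call:
# boto3.client("guardduty").create_filter(**params)
```

Because the criteria include the instance ID, only findings for this one server are archived; the same finding type elsewhere still surfaces normally.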
Question.37 A company is running internal microservices on Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. The company is using Amazon Elastic Container Registry (Amazon ECR) private repositories. A security engineer needs to encrypt the private repositories by using AWS Key Management Service (AWS KMS). The security engineer also needs to analyze the container images for any common vulnerabilities and exposures (CVEs). Which solution will meet these requirements? (A) Enable KMS encryption on the existing ECR repositories. Install Amazon Inspector Agent from the ECS container instances’ user data. Run an assessment with the CVE rules. (B) Recreate the ECR repositories with KMS encryption and ECR scanning enabled. Analyze the scan report after the next push of images. (C) Recreate the ECR repositories with KMS encryption and ECR scanning enabled. Install AWS Systems Manager Agent on the ECS container instances. Run an inventory report. (D) Enable KMS encryption on the existing ECR repositories. Use AWS Trusted Advisor to check the ECS container instances and to verify the findings against a list of current CVEs.
Answer: B
Explanation:
The correct answer is B. Here’s why:
Requirement 1: KMS Encryption of ECR Repositories
ECR repositories can be configured to use KMS for encrypting the image layers stored within them. This encryption is performed at rest, adding an extra layer of security to the container images. ECR uses KMS customer-managed keys (CMKs) or AWS-managed keys to encrypt the data. Enabling KMS encryption is critical for protecting sensitive data within the containers. Existing repositories cannot be directly modified to add KMS encryption; instead, they must be recreated.
Requirement 2: CVE Analysis of Container Images
ECR provides built-in image scanning capabilities that allow for the identification of common vulnerabilities and exposures (CVEs) in container images. When enabled, ECR automatically scans images after they are pushed to the repository. This scanning uses a vulnerability database to identify potential security flaws.
Why other options are incorrect:
- A: You cannot enable KMS encryption on an existing ECR repository; the encryption configuration can only be set when the repository is created. In addition, the agent-based Amazon Inspector assessments referenced here scan EC2 instances, not container images stored in ECR.
- C: Systems Manager Agent and inventory reports are not the right tools for finding CVEs in container images. Systems Manager is more about instance management.
- D: You cannot enable KMS encryption on existing ECR repositories. Trusted Advisor is a high-level service that analyzes your AWS environment and recommends ways to reduce cost, increase performance, or improve security. It cannot be used to find CVEs in container images.
Justification for option B:
Option B directly addresses both requirements efficiently:
- Recreation with KMS Encryption: Creating new ECR repositories allows for enabling KMS encryption during the creation process. This ensures that all subsequent images stored in the repository will be encrypted at rest using the specified KMS key.
- ECR Scanning Enabled: Enabling ECR scanning on the newly created repositories ensures that every image pushed to the repository is automatically scanned for CVEs. The vulnerability scan results are available for review, allowing security engineers to identify and address potential security risks within the container images.
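As a sketch, the recreated repository from option B might be defined with the following CreateRepository parameters; the repository name and KMS key ARN are placeholders:

```python
# Sketch: request parameters for recreating an ECR private repository with
# KMS encryption and scan-on-push enabled. The repository name and key ARN
# below are placeholders.

def build_repository_request(repo_name: str, kms_key_arn: str) -> dict:
    return {
        "repositoryName": repo_name,
        # the encryption type can only be chosen at creation time,
        # which is why the repositories must be recreated
        "encryptionConfiguration": {
            "encryptionType": "KMS",
            "kmsKey": kms_key_arn,
        },
        # scan each image for CVEs as soon as it is pushed
        "imageScanningConfiguration": {"scanOnPush": True},
    }

params = build_repository_request(
    "internal/payments-service",
    "arn:aws:kms:us-east-1:111122223333:key/placeholder-key-id",
)
# boto3.client("ecr").create_repository(**params)
```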
Question.38 A company’s security engineer has been tasked with restricting a contractor’s IAM account access to the company’s Amazon EC2 console without providing access to any other AWS services. The contractor’s IAM account must not be able to gain access to any other AWS service, even if the IAM account is assigned additional permissions based on IAM group membership. What should the security engineer do to meet these requirements? (A) Create an inline IAM user policy that allows for Amazon EC2 access for the contractor’s IAM user. (B) Create an IAM permissions boundary policy that allows Amazon EC2 access. Associate the contractor’s IAM account with the IAM permissions boundary policy. (C) Create an IAM group with an attached policy that allows for Amazon EC2 access. Associate the contractor’s IAM account with the IAM group. (D) Create an IAM role that allows EC2 access and explicitly denies all other services. Instruct the contractor to always assume this role.
Answer: B
Explanation:
The correct answer is B. Create an IAM permissions boundary policy that allows Amazon EC2 access. Associate the contractor’s IAM account with the IAM permissions boundary policy.
Here’s why:
- Permissions Boundaries: IAM permissions boundaries are advanced features used to limit the maximum permissions that an IAM entity (user or role) can have. A permissions boundary does not grant permissions; it limits them. Even if an IAM user or role is granted excessive permissions through policies attached directly or via groups, the permissions boundary acts as a constraint.
- Meeting the Requirement: The problem states the contractor’s IAM account must not be able to access any other AWS service, even if granted additional permissions via group membership. This is precisely what a permissions boundary achieves. By creating a boundary that only allows EC2 access, any attempts to grant additional permissions through IAM groups will be ineffective, as the boundary will restrict the effective permissions to only EC2.
- Why other options are incorrect:
- A. Inline IAM user policy: While an inline policy can grant EC2 access, it doesn’t prevent the user from gaining additional permissions through other policies or group memberships, thus failing to meet the “no other AWS service” requirement.
- C. IAM group with attached policy: Similar to option A, this grants EC2 access but doesn’t prevent additional access from being granted through other group memberships or direct policies, failing the core constraint.
- D. IAM role: A role restricts access only while it is assumed. Instructing the contractor to “always assume” the role doesn’t guarantee they will; they could use their IAM user credentials directly and bypass the role and its restrictions entirely. The requirement is to prevent access, not just advise against it, and a role does nothing to cap additional permissions that are later attached to the contractor’s IAM user directly or granted through group membership.
- Justification for Permissions Boundaries: Permissions boundaries offer a strong, centralized way to ensure an IAM entity never exceeds its intended permissions. This aligns perfectly with the requirement to limit the contractor’s access solely to the EC2 console, regardless of other IAM configurations.
Therefore, using a permissions boundary is the only method that guarantees the contractor’s IAM account will be restricted to EC2 access only, fulfilling the problem statement’s requirement.
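A minimal sketch of such a boundary follows; the policy document is the EC2-only ceiling, and the attachment call uses a hypothetical policy ARN and user name:

```python
# Sketch: an EC2-only permissions boundary document, plus the call that
# attaches it to the contractor's user. The policy ARN and user name in
# the comments are placeholders.
import json

def ec2_only_boundary() -> dict:
    """The maximum permissions the contractor can ever have, regardless of
    any group memberships or directly attached policies: EC2 only."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}
        ],
    }

policy_json = json.dumps(ec2_only_boundary())
# After creating a managed policy from policy_json, attach it as the boundary:
# iam = boto3.client("iam")
# iam.put_user_permissions_boundary(
#     UserName="contractor",
#     PermissionsBoundary="arn:aws:iam::111122223333:policy/ec2-only-boundary",
# )
```

Note that the boundary grants nothing by itself; the contractor still needs an identity-based policy allowing EC2, and the effective permissions are the intersection of the two.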
Question.39 A company manages multiple AWS accounts using AWS Organizations. The company’s security team notices that some member accounts are not sending AWS CloudTrail logs to a centralized Amazon S3 logging bucket. The security team wants to ensure there is at least one trail configured for all existing accounts and for any account that is created in the future. Which set of actions should the security team implement to accomplish this? (A) Create a new trail and configure it to send CloudTrail logs to Amazon S3. Use Amazon EventBridge to send notification if a trail is deleted or stopped. (B) Deploy an AWS Lambda function in every account to check if there is an existing trail and create a new trail, if needed. (C) Edit the existing trail in the Organizations management account and apply it to the organization. (D) Create an SCP to deny the cloudtrail:Delete* and cloudtrail:Stop* actions. Apply the SCP to all accounts.
Answer: C
Explanation:
The correct answer is C. Edit the existing trail in the Organizations management account and apply it to the organization.
Here’s why this is the best approach, along with a detailed justification:
Justification:
- AWS Organizations and Organization Trails: AWS Organizations allows centralized management of multiple AWS accounts. CloudTrail integrates seamlessly with Organizations. A key feature is the ability to create an organization trail within the management account (formerly known as the master account).
- Centralized Logging with Organization Trails: An organization trail ensures that logs from all member accounts in the organization are captured and stored in a designated S3 bucket. This provides a single pane of glass for auditing and security analysis across the entire AWS environment.
- Automatic Application to New Accounts: When you configure a trail to apply to the entire organization, CloudTrail automatically creates that trail in any new AWS accounts added to the organization. This guarantees that all accounts, both present and future, are covered.
- Ease of Implementation: Editing the existing trail in the management account is the simplest and most efficient way to achieve the desired outcome. It requires no custom code or scripting.
- Scalability and Maintainability: This approach scales effectively as the organization grows because the trail is automatically deployed to new accounts. Maintenance is also simplified since you only need to manage a single trail in the management account.
- Control and Consistency: An organization trail ensures that logging configuration is consistent across all accounts, reducing the risk of misconfiguration and improving overall security posture.
- Centralized Control: The organization trail is managed from the management account, which centralizes administrative control and limits who can change the logging configuration.
Why other options are not as suitable:
- A: While creating a new trail and using EventBridge is partially helpful, it doesn’t solve the core problem of automatically deploying the trail to all accounts, especially new ones.
- B: Deploying a Lambda function in every account is an unnecessarily complex and difficult-to-maintain approach. It requires managing and updating the function in each account, increasing operational overhead.
- D: While SCPs can prevent deletion/stopping of trails, they don’t ensure a trail actually exists and is correctly configured. Moreover, simply denying deletion/stopping without ensuring the trail’s existence can lead to unexpected logging gaps if the trail was never properly provisioned.
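A minimal sketch of option C, converting an existing trail into an organization trail via the UpdateTrail API, assuming a hypothetical trail name:

```python
# Sketch: UpdateTrail parameters that turn the management account's existing
# trail into an organization trail. The trail name is a placeholder, and
# CloudTrail must already be enabled as a trusted service in Organizations.

def build_org_trail_update(trail_name: str) -> dict:
    return {
        "Name": trail_name,
        # apply the trail to every current and future member account
        "IsOrganizationTrail": True,
    }

params = build_org_trail_update("central-audit-trail")
# Run from the Organizations management account:
# boto3.client("cloudtrail").update_trail(**params)
```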
Authoritative Links:
AWS Organizations: https://aws.amazon.com/organizations/
Creating a Trail For an Organization: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
Question.40 A company recently had a security audit in which the auditors identified multiple potential threats. These potential threats can cause usage pattern changes such as DNS access peak, abnormal instance traffic, abnormal network interface traffic, and unusual Amazon S3 API calls. The threats can come from different sources and can occur at any time. The company needs to implement a solution to continuously monitor its system and identify all these incoming threats in near-real time. Which solution will meet these requirements? (A) Enable AWS CloudTrail logs, VPC flow logs, and DNS logs. Use Amazon CloudWatch Logs to manage these logs from a centralized account. (B) Enable AWS CloudTrail logs, VPC flow logs, and DNS logs. Use Amazon Macie to monitor these logs from a centralized account. (C) Enable Amazon GuardDuty from a centralized account. Use GuardDuty to manage AWS CloudTrail logs, VPC flow logs, and DNS logs. (D) Enable Amazon Inspector from a centralized account. Use Amazon Inspector to manage AWS CloudTrail logs, VPC flow logs, and DNS logs.
Answer: C
Explanation:
The correct answer is C. Enable Amazon GuardDuty from a centralized account. Use GuardDuty to manage AWS CloudTrail logs, VPC flow logs, and DNS logs.
Here’s why:
The scenario describes a need for continuous monitoring and near real-time threat detection based on various log sources like CloudTrail, VPC Flow Logs, and DNS Logs. Amazon GuardDuty is specifically designed to fulfill this requirement. It’s a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and unauthorized behavior.
GuardDuty analyzes these data sources:
- AWS CloudTrail logs (management events): Detects unusual API calls and actions within your AWS environment.
- VPC Flow Logs: Identifies suspicious network traffic patterns, such as unusual connection attempts or data transfers.
- DNS logs: Detects malicious domain requests.
GuardDuty uses threat intelligence feeds, machine learning, and anomaly detection to identify threats. It automatically scales and requires no infrastructure management. A centralized account allows for consolidated monitoring across multiple AWS accounts, simplifying security administration.
Option A is incorrect because CloudWatch Logs is a log management service but doesn’t provide automated threat detection capabilities based on anomaly detection and threat intelligence. It needs additional configuration and manual analysis.
Option B is incorrect because Amazon Macie is a data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. It focuses on identifying and classifying sensitive data, not threat detection from logs and network traffic.
Option D is incorrect because Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It focuses on finding vulnerabilities in applications, not on real-time threat detection based on log analysis.
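A minimal sketch of enabling GuardDuty via the CreateDetector API, assuming the call is made in the centralized (delegated administrator) account:

```python
# Sketch: CreateDetector parameters for enabling GuardDuty. In a
# multi-account setup this runs in the centralized administrator account,
# which then manages GuardDuty for member accounts.

def build_detector_request() -> dict:
    return {
        "Enable": True,
        # publish findings to EventBridge at the fastest supported interval,
        # matching the near-real-time requirement
        "FindingPublishingFrequency": "FIFTEEN_MINUTES",
    }

params = build_detector_request()
# boto3.client("guardduty").create_detector(**params)
```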
Authoritative Links:
Amazon Inspector: https://aws.amazon.com/inspector/
Amazon GuardDuty: https://aws.amazon.com/guardduty/
Amazon CloudTrail: https://aws.amazon.com/cloudtrail/
VPC Flow Logs: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
Amazon Macie: https://aws.amazon.com/macie/