Question.6 A company has an application that uses dozens of Amazon DynamoDB tables to store data. Auditors find that the tables do not comply with the company’s data protection policy. The company’s retention policy states that all data must be backed up twice each month: once at midnight on the 15th day of the month and again at midnight on the 25th day of the month. The company must retain the backups for 3 months. Which combination of steps should a security engineer take to meet these requirements? (Choose two.)
(A) Use the DynamoDB on-demand backup capability to create a backup plan. Configure a lifecycle policy to expire backups after 3 months.
(B) Use AWS DataSync to create a backup plan. Add a backup rule that includes a retention period of 3 months.
(C) Use AWS Backup to create a backup plan. Add a backup rule that includes a retention period of 3 months.
(D) Set the backup frequency by using a cron schedule expression. Assign each DynamoDB table to the backup plan.
(E) Set the backup frequency by using a rate schedule expression. Assign each DynamoDB table to the backup plan.
Answer: C, D
Explanation:
The correct answers are C and D. Here’s why:
C. Use AWS Backup to create a backup plan. Add a backup rule that includes a retention period of 3 months. AWS Backup is the service specifically designed to centrally manage and automate backups across various AWS services, including DynamoDB. It allows defining backup plans with specific schedules and retention policies, fitting the company’s needs precisely. By creating a backup rule with a 3-month retention period, the company ensures backups are stored for the required duration, meeting the retention policy.
D. Set the backup frequency by using a cron schedule expression. Assign each DynamoDB table to the backup plan. AWS Backup allows scheduling backups using cron expressions, providing precise control over the backup timing. A cron expression can easily define backups to occur at midnight on the 15th and 25th of each month, meeting the company’s twice-monthly backup requirement. Assigning the DynamoDB tables to the backup plan ensures all tables are included in the scheduled backups.
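The two correct options combine into a single AWS Backup plan. The sketch below is a minimal illustration (the plan and rule names are hypothetical): it builds a plan document with a cron expression for midnight UTC on the 15th and 25th and a 90-day (roughly 3-month) lifecycle, which would then be passed to the AWS Backup API.

```python
import json

def build_backup_plan(vault_name="Default"):
    """Build an AWS Backup plan document: backups at midnight UTC on the
    15th and 25th of every month, deleted after roughly 3 months (90 days)."""
    return {
        "BackupPlanName": "dynamodb-bimonthly",  # hypothetical name
        "Rules": [
            {
                "RuleName": "bimonthly-midnight",
                "TargetBackupVaultName": vault_name,
                # AWS Backup cron fields: minute hour day-of-month month day-of-week year
                "ScheduleExpression": "cron(0 0 15,25 * ? *)",
                "Lifecycle": {"DeleteAfterDays": 90},  # 3-month retention
            }
        ],
    }

plan = build_backup_plan()
print(json.dumps(plan, indent=2))
# The plan would then be created with boto3, e.g.:
# boto3.client("backup").create_backup_plan(BackupPlan=plan)
```

Each DynamoDB table (or a tag covering all of them) would then be assigned to this plan with a backup selection, completing option D.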
Why the other options are incorrect:
- A. Use the DynamoDB on-demand backup capability to create a backup plan. Configure a lifecycle policy to expire backups after 3 months. DynamoDB on-demand backups are manual, one-off snapshots: they do not support backup plans, schedules, or lifecycle policies, and they are retained until deleted manually. A centralized solution such as AWS Backup is needed to schedule backups across dozens of tables and expire them automatically after 3 months.
- B. Use AWS DataSync to create a backup plan. Add a backup rule that includes a retention period of 3 months. AWS DataSync is designed for moving large amounts of data between on-premises storage and AWS storage services or between AWS storage services. It is not primarily intended for backing up DynamoDB tables for compliance and retention purposes. AWS Backup is a better fit for these specific requirements.
- E. Set the backup frequency by using a rate schedule expression. Assign each DynamoDB table to the backup plan. A rate expression defines a fixed interval (e.g., rate(12 hours)). A fixed interval cannot target specific calendar dates such as the 15th and 25th of each month, so a cron expression is the right choice for this requirement.
Supporting Links:
DynamoDB On-Demand Backup: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/backuprestore_HowItWorks.html
AWS Backup: https://aws.amazon.com/backup/
AWS Backup – Scheduling Expressions: https://docs.aws.amazon.com/aws-backup/latest/devguide/schedule-expression.html
AWS DataSync: https://aws.amazon.com/datasync/
Question.7 A company needs a security engineer to implement a scalable solution for multi-account authentication and authorization. The solution should not introduce additional user-managed architectural components. Native AWS features should be used as much as possible. The security engineer has set up AWS Organizations with all features activated and AWS IAM Identity Center (AWS Single Sign-On) enabled. Which additional steps should the security engineer take to complete the task?
(A) Use AD Connector to create users and groups for all employees that require access to AWS accounts. Assign AD Connector groups to AWS accounts and link to the IAM roles in accordance with the employees’ job functions and access requirements. Instruct employees to access AWS accounts by using the AWS Directory Service user portal.
(B) Use an IAM Identity Center default directory to create users and groups for all employees that require access to AWS accounts. Assign groups to AWS accounts and link to permission sets in accordance with the employees’ job functions and access requirements. Instruct employees to access AWS accounts by using the IAM Identity Center user portal.
(C) Use an IAM Identity Center default directory to create users and groups for all employees that require access to AWS accounts. Link IAM Identity Center groups to the IAM users present in all accounts to inherit existing permissions. Instruct employees to access AWS accounts by using the IAM Identity Center user portal.
(D) Use AWS Directory Service for Microsoft Active Directory to create users and groups for all employees that require access to AWS accounts. Enable AWS Management Console access in the created directory and specify IAM Identity Center as a source of information for integrated accounts and permission sets. Instruct employees to access AWS accounts by using the AWS Directory Service user portal.
Answer: B
Explanation:
The correct answer is B. Here’s a detailed justification:
The scenario requires a scalable, multi-account authentication and authorization solution using native AWS features, avoiding additional user-managed components. AWS IAM Identity Center (successor to AWS SSO) is already enabled within AWS Organizations.
Option B leverages IAM Identity Center’s built-in directory for user and group management. This fulfills the “native AWS features” requirement and avoids the need for external directories or custom user management systems. The process involves creating users and groups directly within IAM Identity Center, aligning user access based on job functions. Crucially, assigning these groups to AWS accounts and linking them to permission sets defined within IAM Identity Center allows for centralized access control management. This aligns with the requirement for scalability. Employees then use the IAM Identity Center user portal for a streamlined single sign-on experience. This approach centralizes authentication and authorization. Permission sets are sets of AWS permissions defining the access level within linked AWS accounts, offering granular control and role-based access.
Option A is incorrect because AD Connector is a directory gateway that routes authentication requests to an existing on-premises Active Directory without replicating any directory data in the AWS Cloud. The scenario does not mention an on-premises directory, and introducing one would add exactly the kind of user-managed component the requirements rule out, while failing to take full advantage of IAM Identity Center’s single sign-on capabilities.
Option C is incorrect because linking IAM Identity Center groups to existing IAM users in each account undermines the centralized access control management that IAM Identity Center is designed to provide. It doesn’t scale well and doesn’t leverage permission sets for controlled access. Relying on pre-existing IAM users in accounts misses the centralized identity management IAM Identity Center offers.
Option D is incorrect because AWS Directory Service for Microsoft Active Directory, while capable of managing users and groups, introduces a separate managed Active Directory instance, which contradicts the requirement to minimize user-managed architectural components. It is an architectural design for a different use case. Specifying IAM Identity Center as a “source of information” also inverts the integration: IAM Identity Center would act as the identity provider (IdP) and identity source itself, not merely a data source for the created directory.
Therefore, option B presents the most suitable and efficient solution by leveraging IAM Identity Center’s directory and permission sets for multi-account authentication and authorization.
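The group-to-account link described in option B corresponds to an account assignment in the IAM Identity Center (sso-admin) API. The sketch below, with all ARNs and IDs as placeholders, builds the parameters for one such assignment: a group receives a permission set in a member account.

```python
def build_account_assignment(instance_arn, account_id, permission_set_arn, group_id):
    """Parameters for the sso-admin CreateAccountAssignment call: grant an
    IAM Identity Center group a permission set in one AWS account."""
    return {
        "InstanceArn": instance_arn,
        "TargetId": account_id,
        "TargetType": "AWS_ACCOUNT",
        "PermissionSetArn": permission_set_arn,
        "PrincipalType": "GROUP",
        "PrincipalId": group_id,
    }

# All identifiers below are placeholders for illustration.
params = build_account_assignment(
    "arn:aws:sso:::instance/ssoins-EXAMPLE",
    "111122223333",
    "arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    "group-id-EXAMPLE",
)
# boto3.client("sso-admin").create_account_assignment(**params)
```

Repeating this assignment per group/account pair (or automating it in a loop) is what makes the approach scale across an Organization.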
Relevant links:
IAM Identity Center Permission Sets: https://docs.aws.amazon.com/singlesignon/latest/userguide/permission-sets.html
AWS IAM Identity Center: https://aws.amazon.com/iam/identity/
AWS Organizations: https://aws.amazon.com/organizations/
Question.8 A company has deployed Amazon GuardDuty and now wants to implement automation for potential threats. The company has decided to start with RDP brute force attacks that come from Amazon EC2 instances in the company’s AWS environment. A security engineer needs to implement a solution that blocks the detected communication from a suspicious instance until investigation and potential remediation can occur. Which solution will meet these requirements?
(A) Configure GuardDuty to send the event to an Amazon Kinesis data stream. Process the event with an Amazon Kinesis Data Analytics for Apache Flink application that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS). Add rules to the network ACL to block traffic to and from the suspicious instance.
(B) Configure GuardDuty to send the event to Amazon EventBridge. Deploy an AWS WAF web ACL. Process the event with an AWS Lambda function that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS) and adds a web ACL rule to block traffic to and from the suspicious instance.
(C) Enable AWS Security Hub to ingest GuardDuty findings and send the event to Amazon EventBridge. Deploy AWS Network Firewall. Process the event with an AWS Lambda function that adds a rule to a Network Firewall firewall policy to block traffic to and from the suspicious instance.
(D) Enable AWS Security Hub to ingest GuardDuty findings. Configure an Amazon Kinesis data stream as an event destination for Security Hub. Process the event with an AWS Lambda function that replaces the security group of the suspicious instance with a security group that does not allow any connections.
Answer: C
Explanation:
Here’s a detailed justification for why option C is the most suitable solution for automating the response to RDP brute force attacks detected by GuardDuty, along with why the other options are less appropriate:
Why Option C is Correct:
- GuardDuty and Security Hub Integration: Option C leverages the integration between GuardDuty and AWS Security Hub. GuardDuty detects threats, and Security Hub provides a centralized view and management of security findings across AWS services, making it a good starting point.
- Event-Driven Architecture: Using Amazon EventBridge, the solution adopts an event-driven approach. EventBridge receives findings from Security Hub and triggers a Lambda function. This makes the response automated and scalable.
- AWS Network Firewall for Blocking Traffic: The core of the solution involves using AWS Network Firewall, a fully managed network security service. The Lambda function dynamically adds a rule to the Network Firewall firewall policy to block traffic to and from the suspicious EC2 instance. Network Firewall operates at the network layer, providing centralized control and visibility over network traffic, making it ideal for blocking RDP brute force attempts.
- Lambda Function for Automation: The Lambda function orchestrates the entire process. It receives the event from EventBridge, extracts the relevant information (e.g., the IP address of the attacking EC2 instance), and uses the AWS SDK to programmatically update the Network Firewall policy.
Why other options are incorrect:
- Option A: Kinesis Data Streams and Kinesis Data Analytics add unnecessary streaming complexity for a simple event-driven response; EventBridge with Lambda is the idiomatic pattern. In addition, network ACLs (Network Access Control Lists) are stateless and support only a limited number of numbered rules per subnet, so each block requires paired inbound and outbound rules, and maintaining per-instance deny rules across subnets quickly becomes error-prone and does not scale.
- Option B: AWS WAF (Web Application Firewall) protects web applications (HTTP/HTTPS traffic) at Layer 7 of the OSI model. RDP brute force attacks occur over TCP on port 3389, which is not HTTP/HTTPS traffic. Therefore, WAF is not the appropriate tool for this type of attack.
- Option D: Replacing the security group of the suspicious instance is a viable option to isolate the instance. However, it might cause disruption to other applications if they rely on the same security group. Also, it is not as surgical as using Network Firewall to only block malicious traffic.
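The Lambda step in option C can be sketched as follows. This is a minimal illustration, not a complete remediation function: the finding field path is assumed from the GuardDuty EC2 finding schema, and the generated Suricata-compatible rules would be pushed into a stateful Network Firewall rule group (via DescribeRuleGroup for the update token, then UpdateRuleGroup) in the real implementation.

```python
def block_rules_for_ip(ip):
    """Build Suricata-compatible stateful rules that drop all traffic to and
    from the given IP address. Rule SIDs here are arbitrary examples."""
    return "\n".join([
        f'drop ip {ip} any -> any any (msg:"block suspicious instance"; sid:1000001; rev:1;)',
        f'drop ip any any -> {ip} any (msg:"block suspicious instance"; sid:1000002; rev:1;)',
    ])

def handler(event, context=None):
    # Field path assumed from the GuardDuty EC2 finding as delivered by EventBridge.
    ip = (event["detail"]["resource"]["instanceDetails"]
          ["networkInterfaces"][0]["privateIpAddress"])
    rules = block_rules_for_ip(ip)
    # In the real function: fetch the rule group's UpdateToken with
    # describe_rule_group, call update_rule_group with the new rules,
    # and publish an SNS notification so the team can investigate.
    return rules
```

Because the rules are stateful drops in Network Firewall, they also terminate established RDP sessions, which the NACL approach in option A handles far less cleanly.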
Key Concepts and Links:
AWS Network Firewall: Managed network security service for protecting your virtual private clouds (VPCs). https://aws.amazon.com/network-firewall/
Amazon GuardDuty: Intelligent threat detection service. https://aws.amazon.com/guardduty/
AWS Security Hub: Centralized security management and compliance service. https://aws.amazon.com/security-hub/
Amazon EventBridge: Serverless event bus for building event-driven applications. https://aws.amazon.com/eventbridge/
AWS Lambda: Serverless compute service for running code without managing servers. https://aws.amazon.com/lambda/
Question.9 A company has an AWS account that hosts a production application. The company receives an email notification that Amazon GuardDuty has detected an Impact:IAMUser/AnomalousBehavior finding in the account. A security engineer needs to run the investigation playbook for this security incident and must collect and analyze the information without affecting the application. Which solution will meet these requirements MOST quickly?
(A) Log in to the AWS account by using read-only credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
(B) Log in to the AWS account by using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use Amazon Detective to review the API calls in context.
(C) Log in to the AWS account by using administrator credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
(D) Log in to the AWS account by using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use AWS CloudTrail Insights and AWS CloudTrail Lake to review the API calls in context.
Answer: B
Explanation:
The best solution is B because it provides a quick and non-disruptive method for investigating the GuardDuty finding.
Here’s a detailed justification:
- Read-only access: Logging in with read-only credentials (as suggested in options A, B, and D) is crucial for the investigation. This prevents accidental modifications to the production environment, minimizing the risk of disrupting the application.
- GuardDuty finding details: The GuardDuty finding provides initial information about the IAM user and the anomalous activity detected. Reviewing these details is the first step in understanding the scope of the potential security incident.
- Amazon Detective: Amazon Detective is specifically designed for security investigations. It automatically collects and analyzes log data (like CloudTrail logs) to provide a comprehensive view of security events. It allows security engineers to visualize relationships between users, roles, and resources, thus simplifying the investigation of the anomalous API calls identified in the GuardDuty finding. Detective helps to quickly understand the context surrounding the finding.
- Option A’s immediate DenyAll policy is premature: Immediately applying a DenyAll policy (as suggested in options A and C) is too drastic and can potentially disrupt legitimate application functionality. A thorough investigation should precede any restrictive actions.
- Option C’s administrator credentials are not necessary: Administrator credentials are not required to investigate GuardDuty findings and use Detective. Using read-only credentials adheres to the principle of least privilege.
- Option D’s CloudTrail Insights and Lake are less efficient: While CloudTrail Insights and Lake are powerful tools for analyzing CloudTrail data, they are not as purpose-built for security investigations as Amazon Detective. They may require more manual effort to correlate events and understand the context, making them less efficient in a situation where time is a factor. Detective provides an integrated and easier-to-use interface to examine the security-related logs.
In summary, option B combines the best practices of using read-only access for investigation with the efficiency of Amazon Detective to quickly understand the context of the anomalous API calls without impacting the production application.
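The first step of the playbook, pulling the relevant findings, uses only read-only GuardDuty calls. As a small sketch (the detector ID is a placeholder), the filter below restricts ListFindings to the finding type named in the notification:

```python
def anomalous_behavior_criteria():
    """FindingCriteria for guardduty:ListFindings, limited to the finding
    type from the notification. ListFindings is read-only, so running it
    cannot affect the production application."""
    return {
        "Criterion": {
            "type": {"Eq": ["Impact:IAMUser/AnomalousBehavior"]},
        }
    }

criteria = anomalous_behavior_criteria()
# detector_id is a placeholder; in practice, enumerate detectors first:
# gd = boto3.client("guardduty")
# gd.list_findings(DetectorId=detector_id, FindingCriteria=criteria)
```

From the returned finding IDs, the engineer pivots into Amazon Detective, which links the finding to the IAM user, the API calls, and the source IP addresses involved.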
Supporting Links:
IAM Best Practices: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Amazon GuardDuty: https://aws.amazon.com/guardduty/
Amazon Detective: https://aws.amazon.com/detective/
AWS CloudTrail: https://aws.amazon.com/cloudtrail/
Question.10 Company A has an AWS account that is named Account A. Company A recently acquired Company B, which has an AWS account that is named Account B. Company B stores its files in an Amazon S3 bucket. The administrators need to give a user from Account A full access to the S3 bucket in Account B. After the administrators adjust the IAM permissions for the user in Account A to access the S3 bucket in Account B, the user still cannot access any files in the S3 bucket. Which solution will resolve this issue?
(A) In Account B, create a bucket ACL to allow the user from Account A to access the S3 bucket in Account B.
(B) In Account B, create an object ACL to allow the user from Account A to access all the objects in the S3 bucket in Account B.
(C) In Account B, create a bucket policy to allow the user from Account A to access the S3 bucket in Account B.
(D) In Account B, create a user policy to allow the user from Account A to access the S3 bucket in Account B.
Answer: C
Explanation:
The correct answer is C: In Account B, create a bucket policy to allow the user from Account A to access the S3 bucket in Account B. Here’s why:
The problem describes a cross-account access scenario where a user in one AWS account (Account A) needs to access resources (an S3 bucket) in another AWS account (Account B). While IAM permissions in Account A allow the user to attempt to access the bucket, S3 buckets operate independently and have their own access control mechanisms. Therefore, even with proper IAM permissions in Account A, the user in Account A will be denied access to the S3 bucket in Account B unless explicit permission is granted within Account B.
Bucket policies are the primary mechanism for granting permissions to an S3 bucket itself. A bucket policy is a resource-based policy attached to the bucket in Account B. This policy explicitly allows the user from Account A to perform actions on the bucket. The policy would specify the Account A user’s ARN and the allowed S3 actions (e.g., s3:GetObject, s3:PutObject).
Option A is incorrect because bucket ACLs are an older, less capable access-control mechanism. They grant basic read/write access at the AWS account level, not to specific IAM users within an account, and they lack the granularity and condition support that bucket policies offer.
Option B is incorrect because Object ACLs require updating permissions for each individual object in the bucket, which is impractical and unmanageable, especially for buckets with many objects. While object ACLs can grant permissions, they don’t scale well.
Option D is incorrect because creating a user policy within Account B is not the correct approach for cross-account access to an S3 bucket. Policies attach to IAM entities within that account (Account B), but are not designed to grant access to users that exist in other AWS accounts (Account A). The user’s existing policy in Account A is already granting permission; the issue is the bucket in Account B has no policy allowing the user from Account A to access it.
Therefore, the most effective and scalable solution is to use a bucket policy within Account B that specifically allows the user from Account A to access the S3 bucket. This approach provides centralized and granular control over S3 bucket access.
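A minimal sketch of such a bucket policy follows; the bucket name, account ID, and user name are placeholders, and the action list would be widened (e.g., to s3:*) for truly full access.

```python
import json

def bucket_policy(bucket, account_a_user_arn):
    """Resource-based policy for Account B's bucket granting the Account A
    user access to the bucket (ListBucket) and its objects (Get/PutObject)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowAccountAUser",
            "Effect": "Allow",
            "Principal": {"AWS": account_a_user_arn},
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # bucket-level actions
                f"arn:aws:s3:::{bucket}/*",    # object-level actions
            ],
        }],
    }

# Placeholders for illustration only.
policy = bucket_policy("companyb-files", "arn:aws:iam::111122223333:user/analyst")
print(json.dumps(policy, indent=2))
```

Note that both the Account A identity policy and this Account B bucket policy must allow the access; cross-account requests succeed only when neither side denies them.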
For further research:
Cross-Account Access: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
AWS S3 Bucket Policies: https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html
AWS IAM Policies: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html