Question.56 A company that operates in a hybrid cloud environment must meet strict compliance requirements. The company wants to create a report that includes evidence from on-premises workloads alongside evidence from AWS resources. A security engineer must implement a solution to collect, review, and manage the evidence to demonstrate compliance with company policy. Which solution will meet these requirements?
(A) Create an assessment in AWS Audit Manager from a prebuilt framework or a custom framework. Upload manual evidence from the on-premises workloads. Add the evidence to the assessment. Generate an assessment report after Audit Manager collects the necessary evidence from the AWS resources.
(B) Install the Amazon CloudWatch agent on the on-premises workloads. Use AWS Config to deploy a conformance pack from a sample conformance pack template or a custom YAML template. Generate an assessment report after AWS Config identifies noncompliant workloads and resources.
(C) Set up the appropriate security standard in AWS Security Hub. Upload manual evidence from the on-premises workloads. Wait for Security Hub to collect the evidence from the AWS resources. Download the list of controls as a .csv file.
(D) Install the Amazon CloudWatch agent on the on-premises workloads. Create a CloudWatch dashboard to monitor the on-premises workloads and the AWS resources. Run a query on the workloads and resources. Download the results.
Answer: A
Explanation:
The correct answer is A because AWS Audit Manager is specifically designed for automating compliance assessments and managing evidence across AWS resources. Option A leverages Audit Manager’s capabilities to address both on-premises and AWS environments. You can create assessments in Audit Manager based on predefined or customized frameworks aligned with your compliance policies. A critical aspect is the ability to manually upload evidence from on-premises workloads directly into Audit Manager. This allows you to centralize evidence from all environments within a single compliance management system. Audit Manager automates evidence collection from AWS resources, saving significant time and effort compared to manual gathering. Finally, generating an assessment report provides a consolidated view of your compliance posture, including both AWS and on-premises evidence, which is exactly what the company needs.
Option B is incorrect. While AWS Config can identify non-compliant resources using conformance packs, it doesn’t directly support uploading evidence from on-premises workloads. Conformance packs primarily focus on evaluating AWS resource configurations against defined rules.
Option C is also incorrect. Security Hub focuses on security posture management and threat detection. While it provides valuable insights into security findings, it is not designed for comprehensive compliance assessments or managing evidence from on-premises environments. It primarily aggregates findings from various AWS services and partner integrations.
Option D is incorrect. CloudWatch is a monitoring and observability service. While it can monitor on-premises workloads through the CloudWatch agent, it doesn’t provide a structured framework for compliance assessments, evidence management, or generating compliance reports. Using CloudWatch alone would require significant manual effort to compile and analyze the data for compliance purposes.
Therefore, only Audit Manager offers the necessary capabilities to collect, review, manage, and report on evidence from both on-premises and AWS resources to demonstrate compliance with company policy.
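To make the manual-evidence step concrete, the sketch below builds the request payload that the Audit Manager BatchImportEvidenceToAssessmentControl API expects, pointing the assessment at evidence files that were first uploaded to S3 from the on-premises hosts. All IDs and paths here are hypothetical placeholders, not values from the question.

```python
# Sketch: attaching on-premises evidence to an Audit Manager assessment control.
# In practice this payload would be passed to boto3's auditmanager client via
# batch_import_evidence_to_assessment_control; here we only build and inspect it.

def build_manual_evidence_request(assessment_id, control_set_id, control_id, s3_paths):
    """Build the request that points Audit Manager at evidence files
    previously uploaded to S3 (e.g. reports exported from on-prem workloads)."""
    return {
        "assessmentId": assessment_id,
        "controlSetId": control_set_id,
        "controlId": control_id,
        "manualEvidence": [{"s3ResourcePath": path} for path in s3_paths],
    }

request = build_manual_evidence_request(
    "11111111-2222-3333-4444-555555555555",   # hypothetical assessment ID
    "compliance-controls",                     # hypothetical control set ID
    "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",    # hypothetical control ID
    ["s3://onprem-evidence/patch-report-2024-06.pdf"],
)
```

Once the evidence is attached, Audit Manager treats it alongside the automatically collected AWS evidence when generating the assessment report.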
Relevant Link: https://aws.amazon.com/audit-manager/
Question.57 To meet regulatory requirements, a security engineer needs to implement an IAM policy that restricts the use of AWS services to the us-east-1 Region. What policy should the engineer implement?
A. ![]()
B. ![]()
C. ![]()
D. ![]()
Answer: C
Explanation:
The policy options appear only as images in the source material. The standard pattern for restricting AWS usage to a single Region is an identity-based policy with an explicit Deny statement conditioned on the aws:RequestedRegion global condition key: when the requested Region is not us-east-1 (StringNotEquals), the request is denied. Global services such as IAM, AWS STS, CloudFront, and Route 53 are typically excluded from the Deny (for example, via NotAction) because their endpoints are not tied to a specific Region.
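Since the policy options are not reproduced in this text, the following is a sketch of the commonly documented Region-restriction pattern, built here as a Python dict so the structure can be inspected. The service exclusions in NotAction are illustrative; the exact list depends on which global services the company uses.

```python
import json

# Illustrative Region-restriction policy: deny any request whose
# aws:RequestedRegion is not us-east-1, while exempting global services
# whose endpoints are not Region-specific.
region_restriction_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideUsEast1",
            "Effect": "Deny",
            "NotAction": [          # global services left reachable (illustrative list)
                "iam:*",
                "sts:*",
                "cloudfront:*",
                "route53:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "us-east-1"}
            },
        }
    ],
}

print(json.dumps(region_restriction_policy, indent=2))
```

Because an explicit Deny always wins over any Allow, attaching this policy caps every other permission the identity has to the us-east-1 Region.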
Question.58 A company has a web server in the AWS Cloud. The company will store the content for the web server in an Amazon S3 bucket. A security engineer must use an Amazon CloudFront distribution to speed up delivery of the content. None of the files can be publicly accessible from the S3 bucket directly. Which solution will meet these requirements?
(A) Configure the permissions on the individual files in the S3 bucket so that only the CloudFront distribution has access to them.
(B) Create an origin access control (OAC). Associate the OAC with the CloudFront distribution. Configure the S3 bucket permissions so that only the OAC can access the files in the S3 bucket.
(C) Create an S3 role in AWS Identity and Access Management (IAM). Allow only the CloudFront distribution to assume the role to access the files in the S3 bucket.
(D) Create an S3 bucket policy that uses only the CloudFront distribution ID as the principal and the Amazon Resource Name (ARN) as the target.
Answer: B
Explanation:
The correct solution is B, creating an Origin Access Control (OAC). OAC is the modern and recommended way to securely access S3 content through CloudFront. It enhances security compared to Origin Access Identities (OAI).
Here’s why option B is the best choice:
- Enhanced Security: OAC uses AWS Signature Version 4 (SigV4) authentication, which is more secure than the older OAI method. This ensures that only CloudFront, properly authenticated, can access the S3 bucket.
- Granular Control: OAC allows you to specify the CloudFront distribution that is allowed to access the S3 bucket. This prevents unauthorized access from other sources.
- Simplicity: It simplifies the process of granting permissions. You associate the OAC with the CloudFront distribution and then grant the OAC permission to access the S3 bucket. This eliminates the need to manage individual file permissions or complex IAM roles.
Why other options are not ideal:
- A: Configuring permissions on individual files is cumbersome and difficult to manage, especially with a large number of files.
- C: Creating an S3 role and having CloudFront assume it is an overly complex solution. CloudFront is not designed to assume roles in this manner for S3 access. Also, managing trust relationships and role assumptions adds unnecessary overhead.
- D: A CloudFront distribution ID is not a valid IAM principal, so a bucket policy written this way would not grant CloudFront access. Bucket policies that permit CloudFront reads must reference the cloudfront.amazonaws.com service principal (scoped to one distribution with an AWS:SourceArn condition) or, in the legacy OAI model, the OAI's canonical user.
In summary, OAC provides a secure, simple, and manageable way to allow CloudFront to access your S3 content without making the content publicly accessible. It aligns with AWS best practices for security and simplifies access management.
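The bucket-policy side of the OAC setup follows a documented shape: allow s3:GetObject to the CloudFront service principal, restricted to one distribution via an AWS:SourceArn condition. The bucket name, account ID, and distribution ID below are hypothetical placeholders.

```python
# Sketch of the S3 bucket policy that pairs with a CloudFront OAC.
# Only SigV4-signed requests originating from the named distribution
# can read objects; direct public access stays blocked.
bucket = "example-web-content"  # hypothetical bucket name
distribution_arn = "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"

oac_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipalReadOnly",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Scope the service principal down to this one distribution:
            "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
        }
    ],
}
```

Together with S3 Block Public Access on the bucket, this leaves CloudFront as the only read path to the content.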
Authoritative Links:
Securing Amazon S3 content with Amazon CloudFront OAC: https://aws.amazon.com/blogs/networking-and-content-delivery/securing-amazon-s3-content-with-amazon-cloudfront-oac/
Using Origin Access Control (OAC): https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Question.59 A security engineer logs in to the AWS Lambda console with administrator permissions. The security engineer is trying to view logs in Amazon CloudWatch for a Lambda function that is named myFunction. When the security engineer chooses the option in the Lambda console to view logs in CloudWatch, an "error loading Log Streams" message appears. The IAM policy for the Lambda function's execution role contains the following: ![]() How should the security engineer correct the error?
(A) Move the logs:CreateLogGroup action to the second Allow statement.
(B) Add the logs:PutDestination action to the second Allow statement.
(C) Add the logs:GetLogEvents action to the second Allow statement.
(D) Add the logs:CreateLogStream action to the second Allow statement.
Answer: D
Explanation:
Reference:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-identity-based-access-control-cwl.html
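The policy image is not reproduced here, but the standard Lambda logging policy has the shape sketched below: one statement for logs:CreateLogGroup and a second, log-group-scoped statement that must contain both logs:CreateLogStream and logs:PutLogEvents. Without logs:CreateLogStream (option D), no log streams are ever created, which is why the console reports "error loading Log Streams". The Region and account ID are hypothetical placeholders.

```python
# Sketch of the corrected execution-role policy for myFunction, with
# logs:CreateLogStream added to the second Allow statement (option D).
lambda_logging_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:111122223333:*",  # hypothetical account
        },
        {
            "Effect": "Allow",
            # CreateLogStream is the action whose absence causes the error:
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": (
                "arn:aws:logs:us-east-1:111122223333:"
                "log-group:/aws/lambda/myFunction:*"
            ),
        },
    ],
}
```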
Question.60 A company has a new partnership with a vendor. The vendor will process data from the company's customers. The company will upload data files as objects into an Amazon S3 bucket. The vendor will download the objects to perform data processing. The objects will contain sensitive data. A security engineer must implement a solution that prevents objects from residing in the S3 bucket for longer than 72 hours. Which solution will meet these requirements?
(A) Use Amazon Macie to scan the S3 bucket for sensitive data every 72 hours. Configure Macie to delete the objects that contain sensitive data when they are discovered.
(B) Configure an S3 Lifecycle rule on the S3 bucket to expire objects that have been in the S3 bucket for 72 hours.
(C) Create an Amazon EventBridge scheduled rule that invokes an AWS Lambda function every day. Program the Lambda function to remove any objects that have been in the S3 bucket for 72 hours.
(D) Use the S3 Intelligent-Tiering storage class for all objects that are uploaded to the S3 bucket. Use S3 Intelligent-Tiering to expire objects that have been in the S3 bucket for 72 hours.
Answer: B
Explanation:
The correct answer is B. Configure an S3 Lifecycle rule on the S3 bucket to expire objects that have been in the S3 bucket for 72 hours.
Here’s a detailed justification:
The core requirement is to automatically remove objects from the S3 bucket after 72 hours, regardless of their content. S3 Lifecycle policies are designed precisely for managing object lifecycles, including automatic deletion based on age. A lifecycle rule can be configured to permanently delete objects after a specified number of days, directly addressing the 72-hour retention requirement.
Option A is incorrect because Amazon Macie is a data security and privacy service that uses machine learning and pattern matching to discover and protect sensitive data in AWS. While Macie can identify sensitive data, it’s primarily for discovery and reporting rather than automatic deletion based on time. Configuring Macie to delete objects would be complex, less efficient, and not its intended use case. The scenario prioritizes automatic deletion based on time, regardless of whether the data is sensitive.
Option C is also incorrect because, although it would function, it is a more complex solution than necessary. It involves setting up an EventBridge rule, writing and deploying a Lambda function, and granting it the appropriate S3 permissions. While Lambda functions offer flexibility, they introduce additional overhead in terms of management, cost, and potential points of failure compared to a simple S3 Lifecycle rule. Lambda is a general-purpose compute service; using it for a task that S3 Lifecycle is specifically designed for is an anti-pattern.
Option D is incorrect because S3 Intelligent-Tiering is an S3 storage class designed to optimize storage costs by automatically moving data to the most cost-effective access tier based on access patterns. S3 Intelligent-Tiering does not provide a feature to automatically delete objects based on their age. It focuses on cost optimization rather than data retention policies. Furthermore, even if Intelligent-Tiering could delete objects (which it can't), the primary goal is time-based deletion, not cost optimization based on access patterns.
In summary, S3 Lifecycle rules provide the simplest, most direct, and most cost-effective way to implement automatic deletion of objects based on age. This makes option B the best solution.
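The lifecycle configuration itself is small; the sketch below shows its structure (the rule ID and comments are illustrative). One caveat worth knowing: S3 expiration is specified in whole days, so 72 hours maps to Days=3, and lifecycle rules are evaluated asynchronously, roughly once per day, so deletion is not guaranteed at exactly the 72-hour mark.

```python
# Sketch of an S3 Lifecycle configuration that expires every object
# three days (72 hours) after creation.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "expire-vendor-exchange-objects",  # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # empty prefix = apply to all objects
            "Expiration": {"Days": 3},  # S3 counts expiration in whole days
        }
    ]
}

# With boto3, this structure would be applied via
# s3.put_bucket_lifecycle_configuration(Bucket=...,
#     LifecycleConfiguration=lifecycle_configuration)
```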
Authoritative Links:
- S3 Lifecycle Policies: https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-configuration-concept.html
- Amazon Macie: https://aws.amazon.com/macie/
- Amazon EventBridge: https://aws.amazon.com/eventbridge/
- AWS Lambda: https://aws.amazon.com/lambda/
- S3 Intelligent-Tiering: https://aws.amazon.com/s3/storage-classes/intelligent-tiering/