Question.26 A company developed an application by using AWS Lambda, Amazon S3, Amazon Simple Notification Service (Amazon SNS), and Amazon DynamoDB. An external application puts objects into the company’s S3 bucket and tags the objects with date and time. A Lambda function periodically pulls data from the company’s S3 bucket based on date and time tags and inserts specific values into a DynamoDB table for further processing. The data includes personally identifiable information (PII). The company must remove data that is older than 30 days from the S3 bucket and the DynamoDB table. Which solution will meet this requirement with the MOST operational efficiency?
(A) Update the Lambda function to add a TTL S3 flag to S3 objects. Create an S3 Lifecycle policy to expire objects that are older than 30 days by using the TTL S3 flag.
(B) Create an S3 Lifecycle policy to expire objects that are older than 30 days. Update the Lambda function to add the TTL attribute in the DynamoDB table. Enable TTL on the DynamoDB table to expire entries that are older than 30 days based on the TTL attribute.
(C) Create an S3 Lifecycle policy to expire objects that are older than 30 days and to add all prefixes to the S3 bucket. Update the Lambda function to delete entries that are older than 30 days.
(D) Create an S3 Lifecycle policy to expire objects that are older than 30 days by using object tags. Update the Lambda function to delete entries that are older than 30 days.
Answer: B
Explanation:
Option B provides the most operationally efficient solution for automatically removing data older than 30 days from both the S3 bucket and the DynamoDB table.
Here’s why:
- S3 Lifecycle Policy: Creating an S3 Lifecycle policy allows for automated object expiration based on age. The policy can be configured to automatically delete objects older than 30 days. This eliminates the need for custom code to identify and delete these objects, thus reducing operational overhead. https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-management.html
- DynamoDB TTL: DynamoDB Time To Live (TTL) enables automatic removal of expired items from a table. By adding a TTL attribute to the DynamoDB table (populated by the Lambda function when it inserts data), you can instruct DynamoDB to automatically delete entries that are older than 30 days based on the TTL attribute. This significantly simplifies data retention management and reduces operational complexity. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
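As a rough illustration, here is a minimal Python (boto3) sketch of both pieces: the one-time TTL and Lifecycle configuration, and the per-item write the Lambda function would perform. The table name, bucket name, and attribute name are hypothetical placeholders, not values from the question.

```python
import time
import boto3

TABLE_NAME = "ProcessedRecords"   # hypothetical table name
BUCKET_NAME = "example-bucket"    # hypothetical bucket name

dynamodb = boto3.client("dynamodb")
table = boto3.resource("dynamodb").Table(TABLE_NAME)
s3 = boto3.client("s3")

# One-time setup: tell DynamoDB which attribute holds the expiry timestamp.
dynamodb.update_time_to_live(
    TableName=TABLE_NAME,
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# One-time setup: expire every object in the bucket 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET_NAME,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # applies to all objects
                "Expiration": {"Days": 30},
            }
        ]
    },
)

def put_item_with_ttl(item: dict) -> None:
    """Inside the Lambda function: stamp each item with an epoch timestamp
    30 days in the future so DynamoDB TTL deletes it automatically."""
    item["expires_at"] = int(time.time()) + 30 * 24 * 60 * 60
    table.put_item(Item=item)
```

After this setup, no custom cleanup code runs at all: S3 Lifecycle and DynamoDB TTL handle the 30-day deletion in both data stores.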
The other options are less efficient:
- Option A: There is no “TTL S3 flag”; S3 objects have no built-in TTL attribute. S3 Lifecycle policies can filter on object tags, but they expire objects based on age or a tag filter, not a TTL flag.
- Option C: Although an S3 Lifecycle policy is used, the Lambda function is still responsible for deleting DynamoDB entries, which increases operational overhead and the potential for errors. Adding all prefixes to the Lifecycle policy is also unnecessary when objects can be expired based on age alone.
- Option D: Like option C, this requires Lambda to handle DynamoDB deletion, adding operational burden.
Therefore, option B leverages built-in AWS services to automate data expiration in both S3 and DynamoDB, resulting in the most operationally efficient solution and minimal custom code.
Question.27 What are the MOST secure ways to protect the AWS account root user of a recently opened AWS account? (Choose two.)
(A) Use the AWS account root user access keys instead of the AWS Management Console.
(B) Enable multi-factor authentication for the AWS IAM users with the AdministratorAccess managed policy attached to them.
(C) Use AWS KMS to encrypt all AWS account root user and AWS IAM access keys and set automatic rotation to 30 days.
(D) Do not create access keys for the AWS account root user; instead, create AWS IAM users.
(E) Enable multi-factor authentication for the AWS account root user.
Answer: DE
Explanation:
Here’s a detailed justification for why options D and E are the most secure ways to protect the AWS account root user, along with supporting explanations and links:
The AWS account root user has unrestricted access to all resources in your AWS account. Its compromise represents a significant security risk. Therefore, securing it is paramount.
Option D: Do not create access keys for the AWS account root user; instead, create AWS IAM users.
- Justification: Creating access keys for the root user provides a persistent and readily exploitable credential. If compromised, these keys grant full, unfettered access to the entire AWS environment. By avoiding the creation of root user access keys, you eliminate this specific attack vector. Instead, IAM users should be created and assigned specific permissions based on the principle of least privilege. This limits the impact of a compromised IAM user.
- Concept: Least privilege is a security principle that dictates that users (or processes) should only have the minimum level of access needed to perform their job functions.
- Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
- Why it’s secure: If you’re not storing root user credentials anywhere (since you’re not making keys), that removes a massive attack vector.
Option E: Enable multi-factor authentication for the AWS account root user.
- Justification: Multi-factor authentication (MFA) adds an extra layer of security beyond a username and password. Even if the root user’s password is compromised, an attacker would still need access to the second factor (e.g., a code from a mobile app or a security key) to gain access. This significantly reduces the risk of unauthorized access. AWS strongly recommends enabling MFA for the root user.
- Concept: MFA requires users to provide multiple verification factors to prove their identity, bolstering protection against unauthorized access.
- Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html
- Why it’s secure: MFA protects against password compromise, which is one of the most common security issues.
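For illustration, a small Python (boto3) sketch, assuming credentials for an IAM principal with iam:GetAccountSummary permission, that verifies the two recommendations above (root MFA enabled, no root access keys):

```python
import boto3

iam = boto3.client("iam")

# GetAccountSummary reports account-level indicators, including whether the
# root user has MFA enabled and whether any root access keys exist.
summary = iam.get_account_summary()["SummaryMap"]

root_mfa_enabled = summary.get("AccountMFAEnabled") == 1
root_access_keys_present = summary.get("AccountAccessKeysPresent", 0) > 0

if not root_mfa_enabled:
    print("WARNING: the root user does not have MFA enabled.")
if root_access_keys_present:
    print("WARNING: root user access keys exist and should be removed.")
```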
Why other options are incorrect:
- Option C: Use AWS KMS to encrypt all AWS account root user and AWS IAM access keys and set automatic rotation to 30 days. Storing the root user access key anywhere is dangerous. Encrypting and rotating the root user access key is a complex operation and not recommended. Focus instead on eliminating the root user access key completely. While encrypting IAM access keys could be part of a comprehensive security strategy, it’s not directly related to protecting the root user, and again, you should not store root credentials at all. It also doesn’t address the fundamental problem of having the root access keys to begin with.
- Option A: Use the AWS account root user access keys instead of the AWS Management Console. This is explicitly bad advice. Never use root user access keys unless absolutely necessary. The AWS Management Console with MFA is the better path for the root user.
- Option B: Enable multi-factor authentication for the AWS IAM users with the AdministratorAccess managed policy attached to them. While enabling MFA for IAM users is generally good, this is irrelevant to securing the root user, which is the focus of the question. Also, granting the AdministratorAccess policy should be avoided if possible.
Question.28 A company is expanding its group of stores. On the day that each new store opens, the company wants to launch a customized web application for that store. Each store’s application will have a non-production environment and a production environment. Each environment will be deployed in a separate AWS account. The company uses AWS Organizations and has an OU that is used only for these accounts. The company distributes most of the development work to third-party development teams. A security engineer needs to ensure that each team follows the company’s deployment plan for AWS resources. The security engineer also must limit access to the deployment plan to only the developers who need access. The security engineer already has created an AWS CloudFormation template that implements the deployment plan. What should the security engineer do next to meet the requirements in the MOST secure way?
(A) Create an AWS Service Catalog portfolio in the organization’s management account. Upload the CloudFormation template. Add the template to the portfolio’s product list. Share the portfolio with the OU.
(B) Use the CloudFormation CLI to create a module from the CloudFormation template. Register the module as a private extension in the CloudFormation registry. Publish the extension. In the OU, create an SCP that allows access to the extension.
(C) Create an AWS Service Catalog portfolio in the organization’s management account. Upload the CloudFormation template. Add the template to the portfolio’s product list. Create an IAM role that has a trust policy that allows cross-account access to the portfolio for users in the OU accounts. Attach the AWSServiceCatalogEndUserFullAccess managed policy to the role.
(D) Use the CloudFormation CLI to create a module from the CloudFormation template. Register the module as a private extension in the CloudFormation registry. Publish the extension. Share the extension with the OU.
Answer: A
Explanation:
The correct answer is A. Here’s why:
- Requirement 1: Standardized Deployment Plan: The company needs to ensure all development teams adhere to a specific deployment plan defined by the CloudFormation template. AWS Service Catalog is specifically designed to provide centrally managed and governed catalogs of IT services that can be deployed across AWS accounts. By using Service Catalog, the security engineer can ensure consistency in deployments.
- Requirement 2: Limited Access to Deployment Plan: The deployment plan (CloudFormation template) should be accessible only to authorized developers. Service Catalog allows you to control who has access to specific products (in this case, the CloudFormation template) through portfolio sharing and IAM permissions.
Why Option A is the Best Approach
- AWS Service Catalog Portfolio: Creating a portfolio in the management account centralizes the CloudFormation template. This allows for easier management and versioning of the deployment plan.
- Uploading the CloudFormation template as a product: This makes the deployment plan available as a self-service product that developers can launch.
- Sharing the portfolio with the OU: Sharing the portfolio with the OU that contains the new store accounts makes the product available to all accounts within that OU, which aligns with the requirement that all store accounts have access to the deployment plan.
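A minimal Python (boto3) sketch of the sequence Option A describes: create the portfolio, register the CloudFormation template as a product, and share the portfolio with the OU. The template URL, OU ID, and display names are placeholders, and organization sharing for Service Catalog must already be enabled in the management account.

```python
import boto3

sc = boto3.client("servicecatalog")

# Hypothetical values; replace with the real template URL and OU ID.
TEMPLATE_URL = "https://example-bucket.s3.amazonaws.com/deployment-plan.yaml"
OU_ID = "ou-example-id"

portfolio = sc.create_portfolio(
    DisplayName="Store Deployment Plan",
    ProviderName="Security Engineering",
)["PortfolioDetail"]

product = sc.create_product(
    Name="StoreWebApplication",
    Owner="Security Engineering",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {"LoadTemplateFromURL": TEMPLATE_URL},
    },
)["ProductViewDetail"]["ProductViewSummary"]

# Put the product (the deployment plan) into the portfolio.
sc.associate_product_with_portfolio(
    ProductId=product["ProductId"],
    PortfolioId=portfolio["Id"],
)

# Share the portfolio with every account in the OU.
sc.create_portfolio_share(
    PortfolioId=portfolio["Id"],
    OrganizationNode={"Type": "ORGANIZATIONAL_UNIT", "Value": OU_ID},
)
```

Within each store account, access can then be narrowed further by granting only the relevant developers permission to launch products from the shared portfolio.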
Why Other Options Are Less Suitable
- Option B & D (CloudFormation Modules/Private Extension): While CloudFormation Modules allow you to create reusable CloudFormation code packages and Custom Resources allow you to extend CloudFormation’s provisioning capabilities, they are more complex to manage for the given requirements than AWS Service Catalog. Distributing code and managing versions across different AWS accounts and development teams will become an administrative burden in the long run. The overhead is higher compared to leveraging Service Catalog. Also, Option B uses Service Control Policies (SCPs) to allow access, but SCPs are best used to set guardrails and prevent actions, rather than grant access.
- Option C (IAM Role with Cross-Account Access): While creating an IAM role that allows users in the OU accounts to access the portfolio in the management account may seem like a reasonable choice, it is not ideal. The AWSServiceCatalogEndUserFullAccess policy grants a broader range of permissions than necessary. It is best to grant the least privilege, which sharing via Service Catalog inherently provides. Additionally, managing cross-account IAM roles across multiple accounts and developers increases the administrative burden.
Supporting Concepts and Links:
- AWS Service Catalog: AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. It enables you to centrally manage commonly deployed IT services and helps you achieve consistent governance and meet compliance requirements while enabling users to self-provision approved services. https://aws.amazon.com/servicecatalog/
- AWS Organizations: AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. https://aws.amazon.com/organizations/
- CloudFormation Modules: CloudFormation Modules allow you to package and reuse parts of your CloudFormation templates. https://aws.amazon.com/blogs/aws/new-cloudformation-modules-build-reusable-templates-faster/
In summary, Option A provides the most secure and efficient way to meet the requirements by leveraging AWS Service Catalog to centrally manage the CloudFormation template, control access, and ensure consistent deployment across AWS accounts.
Question.29 A team is using AWS Secrets Manager to store an application database password. Only a limited number of IAM principals within the account can have access to the secret. The principals who require access to the secret change frequently. A security engineer must create a solution that maximizes flexibility and scalability. Which solution will meet these requirements?
(A) Use a role-based approach by creating an IAM role with an inline permissions policy that allows access to the secret. Update the IAM principals in the role trust policy as required.
(B) Deploy a VPC endpoint for Secrets Manager. Create and attach an endpoint policy that specifies the IAM principals that are allowed to access the secret. Update the list of IAM principals as required.
(C) Use a tag-based approach by attaching a resource policy to the secret. Apply tags to the secret and the IAM principals. Use the aws:PrincipalTag and aws:ResourceTag IAM condition keys to control access.
(D) Use a deny-by-default approach by using IAM policies to deny access to the secret explicitly. Attach the policies to an IAM group. Add all IAM principals to the IAM group. Remove principals from the group when they need access. Add the principals to the group again when access is no longer allowed.
Answer: C
Explanation:
The correct answer is C. Here’s a detailed justification:
Why Option C is Correct (Tag-Based Approach):
- Flexibility and Scalability: Tags provide a highly flexible and scalable method for managing access control in AWS. Tags are key-value pairs that you can attach to AWS resources, including Secrets Manager secrets and IAM principals (users or roles).
- Dynamic Access Control: The aws:PrincipalTag and aws:ResourceTag condition keys allow you to define fine-grained access control rules based on tags. As the IAM principals needing access to the secret change frequently, you can simply update the tags on the principals to grant or revoke access without modifying the secret’s resource policy itself. This makes the solution very adaptable.
- Reduced Policy Updates: This approach minimizes the need to modify the secret’s resource policy. Instead of continuously updating the policy to add or remove individual IAM principals, you manage access through tag updates, simplifying administration and reducing the risk of errors.
- Centralized Access Control: Resource policies associated with Secrets Manager secrets provide a centralized place to define access rules. By using tag-based conditions within the resource policy, you can enforce consistent access control across the organization.
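As an illustration, a Python (boto3) sketch of a tag-based resource policy that could be attached to the secret. The secret name and the access-team tag key are hypothetical; the condition grants secretsmanager:GetSecretValue only when the calling principal’s tag matches the tag on the secret, so access follows tag assignments rather than a hard-coded principal list.

```python
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

SECRET_ID = "prod/app/db-password"  # hypothetical secret name

resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # In practice the Principal could be scoped to the account root
            # instead of "*"; the tag condition still gates individual access.
            "Principal": {"AWS": "*"},
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    # Allow only callers whose access-team tag matches the
                    # access-team tag applied to this secret.
                    "aws:ResourceTag/access-team": "${aws:PrincipalTag/access-team}"
                }
            },
        }
    ],
}

secretsmanager.put_resource_policy(
    SecretId=SECRET_ID,
    ResourcePolicy=json.dumps(resource_policy),
)
```

Granting or revoking access then becomes a matter of tagging or untagging the IAM principals; the policy above never needs to change.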
Why Other Options are Incorrect:
- Option A (Role-Based Approach with Inline Policy): While role-based access is generally a good practice, updating the role’s trust policy whenever principals change is cumbersome and prone to errors. The trust policy identifies who can assume the role, and constantly modifying this policy would be operationally difficult.
- Option B (VPC Endpoint Policy): VPC endpoint policies primarily control access to the Secrets Manager service through the VPC endpoint itself. They are not the appropriate mechanism for managing granular access to individual secrets based on IAM principals. They are better suited for controlling network access to the Secrets Manager service.
- Option D (Deny-by-Default Approach): While a deny-by-default approach can be a good security practice, using IAM groups to manage access in this scenario is inefficient. Constantly adding and removing principals from the group would be operationally complex and increase the risk of accidental misconfiguration.
Authoritative Links for Further Research:
Tagging AWS Resources: https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html
AWS Secrets Manager Resource Policies: https://docs.aws.amazon.com/secretsmanager/latest/userguide/security_iam_id-based-policy-examples.html
IAM Condition Keys: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html
Question.30 A company is hosting a web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The application has become the target of a DoS attack. Application logging shows that requests are coming from a small number of client IP addresses, but the addresses change regularly. The company needs to block the malicious traffic with a solution that requires the least amount of ongoing effort. Which solution meets these requirements?
(A) Create an AWS WAF rate-based rule, and attach it to the ALB.
(B) Update the security group that is attached to the ALB to block the attacking IP addresses.
(C) Update the ALB subnet’s network ACL to block the attacking client IP addresses.
(D) Create an AWS WAF rate-based rule, and attach it to the security group of the EC2 instances.
Answer: A
Explanation:
The correct answer is A: Create an AWS WAF rate-based rule, and attach it to the ALB.
Here’s a detailed justification:
AWS WAF (Web Application Firewall) is designed to protect web applications from common web exploits and bots that can affect availability, compromise security, or consume excessive resources. A rate-based rule in AWS WAF counts the requests coming from each IP address and blocks IP addresses that exceed a specified threshold within a defined period. This perfectly addresses the scenario where the attack originates from a changing set of IP addresses. By configuring a rate-based rule, the company can automatically block IPs that are sending an unusually high volume of requests, mitigating the DoS attack with minimal manual intervention. The ALB is the entry point for the web traffic, so attaching the WAF to it is the correct place.
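As a sketch of how such a rule could be set up with Python (boto3) and the WAFv2 API — the ALB ARN, names, and the 1,000-request threshold are placeholders chosen for illustration:

```python
import boto3

# The web ACL must be created in the ALB's Region with Scope=REGIONAL.
wafv2 = boto3.client("wafv2")

ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example/abc123"  # placeholder

acl = wafv2.create_web_acl(
    Name="dos-protection",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "dos-protection",
    },
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            "Action": {"Block": {}},
            "Statement": {
                "RateBasedStatement": {
                    # Block any client IP exceeding this request count within
                    # the rolling evaluation window.
                    "Limit": 1000,
                    "AggregateKeyType": "IP",
                }
            },
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit-per-ip",
            },
        }
    ],
)["Summary"]

# Attach the web ACL to the Application Load Balancer.
wafv2.associate_web_acl(WebACLArn=acl["ARN"], ResourceArn=ALB_ARN)
```

Once associated, offending IPs are blocked and unblocked automatically as their request rates cross the threshold, with no ongoing manual updates.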
Option B is incorrect because updating security groups manually with changing IP addresses is a reactive and labor-intensive approach that will not scale. Security groups are stateful firewalls operating at the instance level and aren’t optimized for rapidly changing IP blocking scenarios.
Option C is incorrect because network ACLs (NACLs) are stateless firewalls that operate at the subnet level. While NACLs can block traffic, updating them frequently with changing IPs is cumbersome and error-prone. Moreover, NACLs are not as easily managed and do not offer the advanced features of WAF such as rate limiting.
Option D is incorrect because AWS WAF cannot be attached to a security group. Web ACLs are associated with resources such as an ALB, an Amazon CloudFront distribution, or an Amazon API Gateway stage, not with EC2 security groups. The ALB is the entry point that distributes traffic across the instances, so associating the web ACL with the ALB provides centralized protection where the malicious requests arrive.
In summary, AWS WAF’s rate-based rule feature provides an automated and efficient way to block DoS attacks originating from a changing set of IP addresses with minimal ongoing effort. This aligns with the company’s requirements and makes it the most suitable solution.
Here are some helpful links for further research:
Application Load Balancer: https://aws.amazon.com/elasticloadbalancing/application-load-balancer/
AWS WAF: https://aws.amazon.com/waf/
AWS WAF Rate-Based Rules: https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html