Question.41 A company that uses AWS Organizations is using AWS IAM Identity Center (AWS Single Sign-On) to administer access to AWS accounts. A security engineer is creating a custom permission set in IAM Identity Center. The company will use the permission set across multiple accounts. An AWS managed policy and a customer managed policy are attached to the permission set. The security engineer has full administrative permissions and is operating in the management account. When the security engineer attempts to assign the permission set to an IAM Identity Center user who has access to multiple accounts, the assignment fails. What should the security engineer do to resolve this failure? (A) Create the customer managed policy in every account where the permission set is assigned. Give the customer managed policy the same name and same permissions in each account. (B) Remove either the AWS managed policy or the customer managed policy from the permission set. Create a second permission set that includes the removed policy. Apply the permission sets separately to the user. (C) Evaluate the logic of the AWS managed policy and the customer managed policy. Resolve any policy conflicts in the permission set before deployment. (D) Do not add the new permission set to the user. Instead, edit the user’s existing permission set to include the AWS managed policy and the customer managed policy.
Answer: A
Explanation:
The correct answer is A. The assignment fails because customer managed policies are account-specific resources. A permission set references a customer managed policy by name only; when IAM Identity Center provisions the permission set into an account, it looks for a policy with that exact name in that account. IAM Identity Center cannot reference a policy that exists only in a different account. Therefore, the security engineer must create the customer managed policy in every account where the permission set will be assigned, giving it the same name and identical permissions in each account.
Option B is incorrect because splitting the policies across multiple permission sets does not address the root cause and only adds administrative overhead; the assignment would still fail in any account that lacks the customer managed policy. Option C focuses on policy conflicts, which are not the cause of this assignment failure. While policy evaluation and conflict resolution are important security practices, they do not address the underlying problem of the customer managed policy not existing in the target accounts. Option D is not viable either: editing the user’s existing permission set does not change the fact that the customer managed policy must still exist in every account where the permission set is provisioned.
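The remediation in option A can be sketched as defining the policy document once and creating it under the same name in every target account. The sketch below is stdlib-only and builds one identical CreatePolicy request per account; the account IDs, policy name, and bucket ARN are hypothetical. In practice, each request would be executed with credentials for that member account (for example, via an automation role or CloudFormation StackSets).

```python
import json

# Hypothetical customer managed policy; name and statement are examples only.
POLICY_NAME = "AppReadOnlyAccess"
POLICY_DOCUMENT = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-bucket",
                "arn:aws:s3:::example-app-bucket/*",
            ],
        }
    ],
}

# Accounts where the permission set will be provisioned (hypothetical IDs).
TARGET_ACCOUNTS = ["111111111111", "222222222222", "333333333333"]


def build_create_policy_requests(accounts):
    """Build one identical CreatePolicy request per target account.

    The permission set matches the customer managed policy by name, so the
    name and document must be byte-identical in every account.
    """
    return {
        account: {
            "PolicyName": POLICY_NAME,
            "PolicyDocument": json.dumps(POLICY_DOCUMENT),
        }
        for account in accounts
    }


requests = build_create_policy_requests(TARGET_ACCOUNTS)
```

Because the permission set resolves the policy purely by name, any drift in the name (or missing policy) in a single account is enough to make provisioning fail there.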
Further reading on AWS IAM Identity Center and permission sets:
AWS documentation on Customer managed policies: https://docs.aws.amazon.com/IAM/latest/UserGuide/access-policies-managed-vs-inline.html
AWS documentation on Permission Sets: https://docs.aws.amazon.com/singlesignon/latest/userguide/permission-sets.html
Question.42 A company has thousands of AWS Lambda functions. While reviewing the Lambda functions, a security engineer discovers that sensitive information is being stored in environment variables and is viewable as plaintext in the Lambda console. The values of the sensitive information are only a few characters long. What is the MOST cost-effective way to address this security issue? (A) Set up IAM policies from the Lambda console to hide access to the environment variables. (B) Use AWS Step Functions to store the environment variables. Access the environment variables at runtime. Use IAM permissions to restrict access to the environment variables to only the Lambda functions that require access. (C) Store the environment variables in AWS Secrets Manager, and access them at runtime. Use IAM permissions to restrict access to the secrets to only the Lambda functions that require access. (D) Store the environment variables in AWS Systems Manager Parameter Store as secure string parameters, and access them at runtime. Use IAM permissions to restrict access to the parameters to only the Lambda functions that require access.
Answer: D
Explanation:
Here’s a detailed justification for why option D is the most cost-effective solution for securely storing and accessing sensitive information within a large number of AWS Lambda functions:
The problem lies in storing sensitive information (environment variables) in plaintext within the Lambda console, exposing them to unauthorized access. To remediate this, the environment variables must be encrypted and access to them needs to be controlled. The requirement is to implement the solution in the most cost-effective way.
Option A is incorrect because IAM policies in the Lambda console can restrict who can view the environment variables, but they do not encrypt the values at rest. The variables are still stored as plaintext, so they would not resolve the security issue.
Option B is not ideal because AWS Step Functions is designed for orchestrating serverless workflows, not primarily for secret storage. Using Step Functions solely for environment variable storage would be an unnecessarily complex and expensive solution compared to alternatives.
Option C, using AWS Secrets Manager, is a valid solution. Secrets Manager is designed for securely storing secrets (API keys, passwords, etc.). However, it’s generally more expensive than Parameter Store. Secrets Manager offers features like automatic rotation, which aren’t necessary if the environment variables being stored don’t require such functionality.
Option D, using AWS Systems Manager Parameter Store with secure string parameters, provides both encryption and access control at a lower cost than Secrets Manager, particularly for relatively static secrets. Secure string parameters are encrypted with AWS KMS, so the sensitive values are no longer stored in plaintext. Access can be controlled with IAM policies that grant each Lambda function access only to the parameters it needs. Standard-tier parameters incur no additional storage charge; the cost is primarily KMS usage, which is typically minimal for infrequent access. Because the values are only a few characters long and no rotation requirement is stated, Parameter Store is the better fit; Secrets Manager is better suited to secrets that need built-in automatic rotation. The prompt’s emphasis on “most cost-effective” points toward Parameter Store.
In summary, while both Secrets Manager and Parameter Store provide secure storage and access control, Parameter Store is the more cost-effective option in this specific scenario due to the small size and assumed static nature of the secrets. It addresses the security issue effectively by encrypting the data and using IAM for granular access control.
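The IAM side of option D can be sketched as a least-privilege policy on each function’s execution role, scoped to the single parameter that function needs. The parameter path, Region, and account ID below are hypothetical; at runtime the function would then retrieve the value with the SSM GetParameter API (in boto3, `get_parameter(Name=..., WithDecryption=True)`).

```python
import json

# Hypothetical ARN of the one secure string parameter this function reads.
PARAMETER_ARN = (
    "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/prod/db-password"
)

# Least-privilege inline policy for the Lambda execution role: the function
# may read only its own parameter, nothing else under /myapp.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOwnSecureString",
            "Effect": "Allow",
            "Action": ["ssm:GetParameter"],
            "Resource": [PARAMETER_ARN],
        }
    ],
}

policy_json = json.dumps(execution_role_policy, indent=2)
```

Note that decrypting a secure string also requires `kms:Decrypt` on the KMS key used by Parameter Store; with the default AWS managed key this is typically permitted via the key policy, but a customer managed key would need an explicit grant.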
Relevant links:
AWS KMS: https://aws.amazon.com/kms/
AWS Systems Manager Parameter Store: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
AWS Secrets Manager: https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
Question.43 A security engineer is using AWS Organizations and wants to optimize SCPs. The security engineer needs to ensure that the SCPs conform to best practices. Which approach should the security engineer take to meet this requirement? (A) Use AWS IAM Access Analyzer to analyze the polices. View the findings from policy validation checks. (B) Review AWS Trusted Advisor checks for all accounts in the organization. (C) Set up AWS Audit Manager. Run an assessment for all AWS Regions for all accounts. (D) Ensure that Amazon Inspector agents are installed on all Amazon EC2 instances in all accounts.
Answer: A
Explanation:
The correct answer is A. Here’s why:
IAM Access Analyzer is a service specifically designed to analyze IAM policies, including SCPs, and identify potential security issues. It leverages automated reasoning to validate policies against AWS best practices and generates findings on policy validation checks. These findings highlight deviations from best practices, such as overly permissive policies or policies that grant unintended access. By analyzing the SCPs with IAM Access Analyzer, the security engineer can proactively identify and remediate any security gaps, ensuring the SCPs conform to AWS’s recommended guidelines. This provides a direct assessment of policy effectiveness.
Option B, reviewing AWS Trusted Advisor, offers high-level recommendations across cost optimization, security, fault tolerance, and performance, but its checks are broader and less granular than the policy-specific analysis provided by IAM Access Analyzer. While Trusted Advisor includes some security checks, it doesn’t delve into the specific logic of individual SCPs.
Option C, setting up AWS Audit Manager, focuses on auditing compliance against regulatory standards and industry frameworks. While Audit Manager is valuable for compliance, it does not directly provide feedback on optimizing SCPs or ensuring they conform to best practices related to access control.
Option D, ensuring Amazon Inspector agents are installed, is relevant for identifying vulnerabilities in EC2 instances, but does not address SCP optimization. Inspector assesses the security posture of your EC2 instances, but not the organizational-level access control defined by SCPs.
Therefore, using IAM Access Analyzer provides the most direct and effective approach to ensure SCPs conform to best practices by providing policy-level findings.
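Access Analyzer policy validation can also be invoked programmatically (the `ValidatePolicy` API in the `accessanalyzer` service). As a stdlib-only illustration of the kind of finding it produces, the toy validator below flags two basic structural issues in a policy document; this is a simplified stand-in for demonstration, not the service’s actual logic, which covers a much larger set of grammar, security, and best-practice checks.

```python
import json


def validate_policy_sketch(policy_json: str) -> list:
    """A toy validator mimicking a few checks policy validation performs.

    Returns a list of finding strings; an empty list means no findings.
    """
    findings = []
    policy = json.loads(policy_json)
    if policy.get("Version") != "2012-10-17":
        findings.append("SUGGESTION: use policy language version 2012-10-17")
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare object
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") not in ("Allow", "Deny"):
            findings.append(f"ERROR: statement {i} has a missing or invalid Effect")
        if "Action" not in stmt and "NotAction" not in stmt:
            findings.append(f"ERROR: statement {i} has no Action or NotAction")
    return findings


# Example SCP fragment that is valid except for the missing Version element.
sample_scp = json.dumps(
    {"Statement": [{"Effect": "Deny",
                    "Action": "cloudtrail:StopLogging",
                    "Resource": "*"}]}
)
findings = validate_policy_sketch(sample_scp)
```

In practice the engineer would feed each SCP document to the real ValidatePolicy API (or view the findings inline in the console policy editor) and remediate every error and warning before attaching the SCP.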
Relevant links:
AWS Organizations SCPs: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
AWS IAM Access Analyzer: https://aws.amazon.com/iam/features/access-analyzer/
Question.44 A company uses Amazon RDS for MySQL as a database engine for its applications. A recent security audit revealed an RDS instance that is not compliant with company policy for encrypting data at rest. A security engineer at the company needs to ensure that all existing RDS databases are encrypted using server-side encryption and that any future deviations from the policy are detected. Which combination of steps should the security engineer take to accomplish this? (Choose two.) (A) Create an AWS Config rule to detect the creation of unencrypted RDS databases. Create an Amazon EventBridge rule to trigger on the AWS Config rules compliance state change and use Amazon Simple Notification Service (Amazon SNS) to notify the security operations team. (B) Use AWS System Manager State Manager to detect RDS database encryption configuration drift. Create an Amazon EventBridge rule to track state changes and use Amazon Simple Notification Service (Amazon SNS) to notify the security operations team. (C) Create a read replica for the existing unencrypted RDS database and enable replica encryption in the process. Once the replica becomes active, promote it into a standalone database instance and terminate the unencrypted database instance. (D) Take a snapshot of the unencrypted RDS database. Copy the snapshot and enable snapshot encryption in the process. Restore the database instance from the newly created encrypted snapshot. Terminate the unencrypted database instance. (E) Enable encryption for the identified unencrypted RDS instance by changing the configurations of the existing database.
Answer: AD
Explanation:
Let’s analyze why options A and D are the correct choices.
Option A: AWS Config and EventBridge for Detection and Notification
This option effectively addresses the requirement for detecting deviations from the encryption policy in both existing and future RDS instances. AWS Config can be configured with a rule to continuously assess RDS instances and determine whether they are encrypted. If an unencrypted instance is detected, the rule will mark it as non-compliant. Then, Amazon EventBridge monitors the state changes of the AWS Config rule. Whenever the compliance status changes (e.g., from compliant to non-compliant), EventBridge triggers an Amazon SNS notification, alerting the security operations team. This ensures prompt awareness of policy violations.
- Justification: AWS Config provides a centralized way to define and enforce configuration policies. EventBridge offers real-time event monitoring and routing, allowing for automated responses to configuration changes.
- Relevant Concepts: Infrastructure as Code (IaC), Configuration Management, Event-Driven Architecture, Continuous Monitoring.
- Authoritative Links:
- AWS Config: https://aws.amazon.com/config/
- Amazon EventBridge: https://aws.amazon.com/eventbridge/
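The detection half of option A can be sketched as an EventBridge event pattern that matches AWS Config compliance state changes for the encryption rule. `aws.config` and the detail-type below are the values AWS Config publishes for compliance change events; the rule name assumes the `rds-storage-encrypted` managed rule and is otherwise hypothetical.

```python
import json

# EventBridge event pattern: fire only when the RDS encryption Config rule
# reports a resource as newly NON_COMPLIANT.
event_pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "configRuleName": ["rds-storage-encrypted"],
        "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
    },
}

# Serialized form, as it would be supplied to an EventBridge rule definition.
pattern_json = json.dumps(event_pattern)
```

An EventBridge rule using this pattern would have the security operations team’s SNS topic as its target, completing the notify-on-deviation flow the question asks for.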
Option D: Snapshot Copy with Encryption and Instance Restoration
This option outlines the standard procedure for encrypting an existing unencrypted RDS instance. RDS for MySQL does not allow encryption to be enabled in place, so you must create an encrypted copy of the data: take a snapshot of the unencrypted RDS instance, copy that snapshot while enabling encryption on the copy, and then restore a new encrypted RDS instance from the encrypted snapshot. Once applications are repointed to the new instance, the original unencrypted instance can be safely terminated, ensuring all data at rest is encrypted.
- Justification: This method fulfills the requirement of encrypting existing RDS instances. By copying the snapshot, you’re effectively creating a new, encrypted instance from the old, unencrypted one.
- Relevant Concepts: Database Snapshots, Encryption at Rest, Data Migration.
- Authoritative Links:
- Encrypting Amazon RDS resources: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
- Copying a DB snapshot: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html
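The three-step flow in option D can be sketched as the request parameters for the corresponding RDS API calls (CreateDBSnapshot, CopyDBSnapshot, RestoreDBInstanceFromDBSnapshot). This is a stdlib-only sketch; the instance identifiers and KMS key ARN are hypothetical, and in practice each dict would be passed to the matching API call in order.

```python
# Hypothetical KMS key used to encrypt the snapshot copy.
KMS_KEY_ARN = (
    "arn:aws:kms:us-east-1:123456789012:key/"
    "1234abcd-0000-0000-0000-000000000000"
)

steps = [
    # 1. Snapshot the unencrypted instance (CreateDBSnapshot).
    {"DBInstanceIdentifier": "legacy-mysql",
     "DBSnapshotIdentifier": "legacy-mysql-unencrypted"},
    # 2. Copy the snapshot; specifying a KMS key makes the copy encrypted
    #    (CopyDBSnapshot).
    {"SourceDBSnapshotIdentifier": "legacy-mysql-unencrypted",
     "TargetDBSnapshotIdentifier": "legacy-mysql-encrypted",
     "KmsKeyId": KMS_KEY_ARN},
    # 3. Restore a new instance from the encrypted copy; the restored
    #    instance inherits the snapshot's encryption state
    #    (RestoreDBInstanceFromDBSnapshot).
    {"DBInstanceIdentifier": "legacy-mysql-v2",
     "DBSnapshotIdentifier": "legacy-mysql-encrypted"},
]
```

The cutover step, repointing applications to `legacy-mysql-v2` and then deleting `legacy-mysql`, happens outside these calls and should be scheduled in a maintenance window.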
Why other options are incorrect:
Option E: Directly enabling encryption by changing the configuration of an existing unencrypted RDS for MySQL instance is not supported. Encryption can be enabled only when an instance is created, so the only path is to restore a new instance from an encrypted snapshot copy.
Option B: AWS Systems Manager State Manager manages the configuration of instances and nodes; it is not designed to detect RDS encryption configuration drift. AWS Config is the appropriate service for this compliance check.
Option C: RDS does not support enabling encryption when creating a read replica; a read replica of an unencrypted instance is itself unencrypted. Promoting such a replica would therefore still leave the data unencrypted, so this approach does not work.
Question.45 A company has recently recovered from a security incident that required the restoration of Amazon EC2 instances from snapshots. The company uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt all Amazon Elastic Block Store (Amazon EBS) snapshots. The company performs a gap analysis of its disaster recovery procedures and backup strategies. A security engineer needs to implement a solution so that the company can recover the EC2 instances if the AWS account is compromised and the EBS snapshots are deleted. Which solution will meet this requirement? (A) Create a new Amazon S3 bucket. Use EBS lifecycle policies to move EBS snapshots to the new S3 bucket. Use lifecycle policies to move snapshots to the S3 Glacier Instant Retrieval storage class. Use S3 Object Lock to prevent deletion of the snapshots. (B) Use AWS Systems Manager to distribute a configuration that backs up all attached disks to Amazon S3. (C) Create a new AWS account that has limited privileges. Allow the new account to access the KMS key that encrypts the EBS snapshots. Copy the encrypted snapshots to the new account on a recurring basis. (D) Use AWS Backup to copy EBS snapshots to Amazon S3. Use S3 Object Lock to prevent deletion of the snapshots.
Answer: D
Explanation:
The correct answer is D. Here’s a detailed justification:
The scenario requires a solution that protects EBS snapshots from deletion in case of account compromise, allowing EC2 instance recovery even if the primary account is compromised. The key is to store backups in a secure, isolated location with immutability features.
Why D is the best solution:
- AWS Backup: AWS Backup is a fully managed backup service designed to centralize and automate data protection across AWS services. It integrates with EBS and allows for consistent snapshot creation and management. https://aws.amazon.com/backup/
- Copying Snapshots: AWS Backup allows copying snapshots to Amazon S3. Storing backups in S3 provides an isolated location, separate from the compromised EBS environment.
- S3 Object Lock: This feature makes objects immutable for a specified retention period or indefinitely. By enabling S3 Object Lock on the S3 bucket where the snapshots are stored, it becomes impossible to delete the snapshots, even with compromised credentials. This directly addresses the requirement of protecting snapshots from deletion due to account compromise. https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
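Where S3 Object Lock is used, the immutability guarantee comes from a default retention rule on the bucket. The sketch below shows the Object Lock configuration structure the S3 API accepts (as passed to PutObjectLockConfiguration); the 90-day retention period is a hypothetical choice. In compliance mode, the retention period cannot be shortened or removed by any user, including the account root user, which is what protects the backups from a compromised account.

```python
# Default Object Lock rule for the backup bucket. COMPLIANCE mode makes
# every new object immutable for the retention period; GOVERNANCE mode
# would allow specially-privileged principals to override it.
object_lock_configuration = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",
            "Days": 90,  # hypothetical retention; size to the recovery policy
        }
    },
}
```

Object Lock must be enabled when the bucket is created (it also requires versioning), so the destination bucket should be provisioned with this in mind before any backup copies are written to it.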
Why other options are not as suitable:
- A: EBS snapshots are stored in Amazon S3 storage that AWS manages, not as objects in a customer-owned bucket, so EBS lifecycle policies cannot move snapshots into a new S3 bucket, assign them a storage class such as S3 Glacier Instant Retrieval, or apply S3 Object Lock to them. The option is not technically feasible as described, and it also lacks the centralized backup management that AWS Backup provides.
- B: Using AWS Systems Manager to back up attached disks to S3 is a valid approach for file-level backups, but it does not create EBS snapshots. Snapshots are block-level copies of your EBS volumes, optimized for EC2 instance recovery. Moreover, it lacks centralized management and object lock features that address the immutability requirement.
- C: Creating a new AWS account is a good practice for isolation, but simply copying snapshots doesn’t protect against deletion within the source account if the KMS key is compromised there. It also introduces complexity in key management and snapshot synchronization. It doesn’t utilize immutable storage like S3 Object Lock for safeguarding against deletion.
Therefore, using AWS Backup to copy snapshots to S3 with Object Lock enabled is the most effective and secure solution for protecting EBS snapshots from deletion due to account compromise, thus ensuring EC2 instance recovery is possible.