Question.1 A company has an AWS Lambda function that creates image thumbnails from larger images. The Lambda function needs read and write access to an Amazon S3 bucket in the same AWS account. Which solutions will provide the Lambda function this access? (Choose two.)
(A) Create an IAM user that has only programmatic access. Create a new access key pair. Add environmental variables to the Lambda function with the access key ID and secret access key. Modify the Lambda function to use the environmental variables at run time during communication with Amazon S3.
(B) Generate an Amazon EC2 key pair. Store the private key in AWS Secrets Manager. Modify the Lambda function to retrieve the private key from Secrets Manager and to use the private key during communication with Amazon S3.
(C) Create an IAM role for the Lambda function. Attach an IAM policy that allows access to the S3 bucket.
(D) Create an IAM role for the Lambda function. Attach a bucket policy to the S3 bucket to allow access. Specify the function’s IAM role as the principal.
(E) Create a security group. Attach the security group to the Lambda function. Attach a bucket policy that allows access to the S3 bucket through the security group ID.
Answer: C, D
Explanation:
The correct answers are C and D.
Justification:
- Option C: Create an IAM role for the Lambda function. Attach an IAM policy that allows access to the S3 bucket. This is the recommended and most secure approach for granting Lambda functions access to AWS resources. IAM roles provide temporary credentials to the Lambda function, eliminating the need to store long-term access keys. By attaching an IAM policy to the role, you can precisely define the permissions the Lambda function has on the S3 bucket (e.g., s3:GetObject, s3:PutObject). This follows the principle of least privilege. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
- Option D: Create an IAM role for the Lambda function. Attach a bucket policy to the S3 bucket to allow access. Specify the function’s IAM role as the principal. This is another valid approach. While attaching an IAM policy to the role itself (option C) is generally preferred for centralizing permissions management, bucket policies can also grant access to resources. Here, the bucket policy explicitly allows the Lambda function’s IAM role to perform actions on the bucket; crucially, the policy must name the role’s ARN as the principal. This option still relies on the role being properly configured, and resource-based policies can be harder to debug. A brief sketch of both approaches follows this list. https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-policy-language-overview.html
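The sketch below shows one way to set up both options with boto3. The role name, bucket name, and policy names are hypothetical placeholders, and the Lambda function creation itself is omitted; treat it as an illustration rather than a complete deployment.

```python
import json
import boto3

iam = boto3.client("iam")

ROLE_NAME = "thumbnail-lambda-role"        # hypothetical role name
BUCKET = "example-thumbnail-bucket"        # hypothetical bucket name

# Trust policy so the Lambda service can assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
role = iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Option C: identity-based policy on the role granting read/write to the bucket.
s3_access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="thumbnail-s3-access",
    PolicyDocument=json.dumps(s3_access_policy),
)

# Option D (alternative): bucket policy that names the role's ARN as the principal.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": role["Role"]["Arn"]},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```

Either grant is sufficient on its own for same-account access; the key point is that the Lambda function authenticates with the role's temporary credentials, not with stored access keys.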
Why the other options are incorrect:
Option E: Create a security group. Attach the security group to the Lambda function. Attach a bucket policy that allows access to the S3 bucket through the security group ID. Security groups control network traffic; they do not grant authorization to S3 buckets. Lambda functions do not reside in a VPC by default, and even if the function were in a VPC, a security group ID is not a valid principal or condition in an S3 bucket policy. Security groups authorize inbound and outbound traffic for compute resources; they do not govern access to S3. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
Option A: Create an IAM user that has only programmatic access. Create a new access key pair. Add environmental variables to the Lambda function with the access key ID and secret access key. Modify the Lambda function to use the environmental variables at run time during communication with Amazon S3. Storing access keys directly in environment variables of a Lambda function is highly discouraged. This is a security risk because access keys can be inadvertently exposed in logs or other application data. IAM roles offer a more secure, managed solution for granting temporary credentials.
Option B: Generate an Amazon EC2 key pair. Store the private key in AWS Secrets Manager. Modify the Lambda function to retrieve the private key from Secrets Manager and to use the private key during communication with Amazon S3. EC2 key pairs are for SSH access to EC2 instances, not for authenticating Lambda functions to S3. They are irrelevant in this context. Using them would introduce unnecessary complexity and security vulnerabilities.
Question.2 A security engineer is configuring a new website that is named example.com. The security engineer wants to secure communications with the website by requiring users to connect to example.com through HTTPS. Which of the following is a valid option for storing SSL/TLS certificates?
(A) Custom SSL certificate that is stored in AWS Key Management Service (AWS KMS)
(B) Default SSL certificate that is stored in Amazon CloudFront
(C) Custom SSL certificate that is stored in AWS Certificate Manager (ACM)
(D) Default SSL certificate that is stored in Amazon S3
Answer: C
Explanation:
The correct answer is C: Custom SSL certificate that is stored in AWS Certificate Manager (ACM). Here’s why:
ACM is the preferred service for provisioning, managing, and deploying SSL/TLS certificates for use with AWS services and internally connected servers. It is designed to easily integrate with services like Elastic Load Balancing, CloudFront, API Gateway, and others, making it straightforward to secure websites and applications with HTTPS. Storing custom SSL/TLS certificates in ACM centralizes management, automates renewal, and helps ensure proper security practices.
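As an illustration, requesting a public certificate in ACM is a single API call. The boto3 sketch below assumes DNS validation and adds www.example.com as an alternate name purely for the example; note that certificates intended for CloudFront must be requested in the us-east-1 Region.

```python
import boto3

# ACM certificates used with CloudFront must live in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="example.com",
    ValidationMethod="DNS",  # DNS validation enables automatic renewal
    SubjectAlternativeNames=["www.example.com"],  # assumption for illustration
)
print("Certificate ARN:", response["CertificateArn"])
```

The returned ARN is what you attach to the CloudFront distribution, load balancer, or API Gateway custom domain that fronts the website.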
Option A is incorrect because while AWS KMS can store cryptographic keys, it’s not specifically designed for storing SSL/TLS certificates. ACM is the dedicated service for this purpose, offering features like automatic renewal and integration with other AWS services that KMS doesn’t provide for SSL/TLS certificates.
Option B is incorrect because CloudFront does not act as a certificate store. Certificates for CloudFront distributions are either provisioned through ACM or imported into ACM. CloudFront does offer a default certificate, but it only covers the *.cloudfront.net domain name, so it cannot secure a custom domain such as example.com, and it is managed internally by CloudFront rather than stored or managed by the user.
Option D is incorrect because Amazon S3 is an object storage service, not designed for storing and managing SSL/TLS certificates. While you could technically store a certificate file in S3, it would not provide the necessary management features like automatic renewal or integration with other AWS services. Furthermore, S3 is not intended to be a secure repository for private keys without significant additional configuration and safeguards. ACM is specifically built for secure certificate management.
Here are some resources for further reading:
Using SSL/TLS Certificates with CloudFront: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-cloudfront-to-s3.html
AWS Certificate Manager (ACM): https://aws.amazon.com/certificate-manager/
Question.3 A security engineer needs to develop a process to investigate and respond to potential security events on a company’s Amazon EC2 instances. All the EC2 instances are backed by Amazon Elastic Block Store (Amazon EBS). The company uses AWS Systems Manager to manage all the EC2 instances and has installed Systems Manager Agent (SSM Agent) on all the EC2 instances. The process that the security engineer is developing must comply with AWS security best practices and must meet the following requirements:
- A compromised EC2 instance’s volatile memory and non-volatile memory must be preserved for forensic purposes.
- A compromised EC2 instance’s metadata must be updated with corresponding incident ticket information.
- A compromised EC2 instance must remain online during the investigation but must be isolated to prevent the spread of malware.
- Any investigative activity during the collection of volatile data must be captured as part of the process.
Which combination of steps should the security engineer take to meet these requirements with the LEAST operational overhead? (Choose three.)
(A) Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Isolate the instance by updating the instance’s security groups to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
(B) Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Move the instance to an isolation subnet that denies all source and destination traffic. Associate the instance with the subnet to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
(C) Use Systems Manager Run Command to invoke scripts that collect volatile data.
(D) Establish a Linux SSH or Windows Remote Desktop Protocol (RDP) session to the compromised EC2 instance to invoke scripts that collect volatile data.
(E) Create a snapshot of the compromised EC2 instance’s EBS volume for follow-up investigations. Tag the instance with any relevant metadata and incident ticket information.
(F) Create a Systems Manager State Manager association to generate an EBS volume snapshot of the compromised EC2 instance. Tag the instance with any relevant metadata and incident ticket information.
Answer: ACE
Explanation:
Here’s a detailed justification for choosing options A, C, and E to address the security incident response scenario with the least operational overhead, while meeting all the requirements:
- A – Metadata, Termination Protection, Isolation: Gathering metadata is crucial for context. Enabling termination protection prevents accidental deletion. Updating security groups to restrict access isolates the instance to contain the incident and prevent lateral movement. Removing the instance from Auto Scaling and ELB ensures it’s no longer serving production traffic and won’t be automatically replaced. This option directly addresses the requirement to isolate the instance.
- C – Systems Manager Run Command for Volatile Data: Using Systems Manager Run Command is the most efficient and auditable way to collect volatile data (memory dumps, running processes, etc.). SSM Agent is already installed, making this method straightforward. It avoids the need to establish interactive SSH/RDP sessions, improving security and automation. Systems Manager also logs the commands executed, fulfilling the requirement to capture investigative activity. https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html
- E – EBS Snapshot and Tagging: Creating an EBS snapshot preserves the non-volatile memory (disk contents) for later forensic analysis. Tagging the instance with metadata and incident ticket information ensures proper tracking and context. This directly addresses the preservation of non-volatile memory and the updating of metadata.
- Why other options are not optimal:
- B: Moving to an isolation subnet requires network configuration changes, increasing operational overhead compared to simply adjusting security groups.
- D: Establishing SSH/RDP sessions is less secure and less auditable than using Systems Manager. It requires more manual intervention and increases the attack surface.
- F: While State Manager can automate EBS snapshot creation, it adds complexity compared to a simple Run Command or manual snapshot. The question specifies least operational overhead.
In summary, ACE provide the most direct, automated, and auditable solution, leveraging existing Systems Manager infrastructure to collect volatile data, isolate the instance, and preserve non-volatile data, all while ensuring proper tracking through tagging. They follow AWS security best practices and minimize operational overhead.
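As a rough sketch of the API calls behind steps C and E, the boto3 snippet below runs a Run Command document for volatile data collection, snapshots the EBS volume, and tags the instance. The instance ID, volume ID, ticket number, and collection commands are placeholders; a real runbook would invoke vetted forensic scripts.

```python
import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder compromised instance
VOLUME_ID = "vol-0123456789abcdef0"   # placeholder EBS volume
TICKET = "SEC-1234"                   # placeholder incident ticket

# Step C: collect volatile data via Run Command (commands are illustrative only).
ssm = boto3.client("ssm")
ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["uptime", "ps aux", "netstat -antp"]},
    Comment=f"Volatile data collection for {TICKET}",
)

# Step E: preserve non-volatile data and record the incident ticket as metadata.
ec2 = boto3.client("ec2")
ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description=f"Forensic snapshot for {TICKET}",
)
ec2.create_tags(
    Resources=[INSTANCE_ID],
    Tags=[
        {"Key": "IncidentTicket", "Value": TICKET},
        {"Key": "Status", "Value": "Under-Investigation"},
    ],
)
```

Because every Run Command invocation is recorded by Systems Manager (and by CloudTrail), this approach also satisfies the requirement to capture investigative activity during volatile data collection.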
Question.4 A company has an organization in AWS Organizations. The company wants to use AWS CloudFormation StackSets in the organization to deploy various AWS design patterns into environments. These patterns consist of Amazon EC2 instances, Elastic Load Balancing (ELB) load balancers, Amazon RDS databases, and Amazon Elastic Kubernetes Service (Amazon EKS) clusters or Amazon Elastic Container Service (Amazon ECS) clusters. Currently, the company’s developers can create their own CloudFormation stacks to increase the overall speed of delivery. A centralized CI/CD pipeline in a shared services AWS account deploys each CloudFormation stack. The company’s security team has already provided requirements for each service in accordance with internal standards. If there are any resources that do not comply with the internal standards, the security team must receive notification to take appropriate action. The security team must implement a notification solution that gives developers the ability to maintain the same overall delivery speed that they currently have. Which solution will meet these requirements in the MOST operationally efficient way?
(A) Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team’s email addresses to the SNS topic. Create a custom AWS Lambda function that will run the aws cloudformation validate-template AWS CLI command on all CloudFormation templates before the build stage in the CI/CD pipeline. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
(B) Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team’s email addresses to the SNS topic. Create custom rules in CloudFormation Guard for each resource configuration. In the CI/CD pipeline, before the build stage, configure a Docker image to run the cfn-guard command on the CloudFormation template. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
(C) Create an Amazon Simple Notification Service (Amazon SNS) topic and an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the security team’s email addresses to the SNS topic. Create an Amazon S3 bucket in the shared services AWS account. Include an event notification to publish to the SQS queue when new objects are added to the S3 bucket. Require the developers to put their CloudFormation templates in the S3 bucket. Launch EC2 instances that automatically scale based on the SQS queue depth. Configure the EC2 instances to use CloudFormation Guard to scan the templates and deploy the templates if there are no issues. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
(D) Create a centralized CloudFormation stack set that includes a standard set of resources that the developers can deploy in each AWS account. Configure each CloudFormation template to meet the security requirements. For any new resources or configurations, update the CloudFormation template and send the template to the security team for review. When the review is completed, add the new CloudFormation stack to the repository for the developers to use.
Answer: B
Explanation:
The best solution is B because it provides the most operationally efficient way to validate CloudFormation templates against internal standards before deployment, while maintaining the developers’ speed of delivery.
Here’s why:
- CloudFormation Guard: cfn-guard allows you to define rules (policies) to validate your CloudFormation templates. This allows for declarative validation, which is easier to maintain and update as standards evolve compared to custom scripts.
- Docker Image for Validation: Packaging cfn-guard and its dependencies into a Docker image ensures a consistent and reproducible validation environment across different CI/CD pipeline runners.
- CI/CD Integration: Integrating the validation process directly into the CI/CD pipeline before the build stage ensures that only compliant templates are deployed, preventing non-compliant resources from being created.
- SNS Notification: Using SNS to notify the security team provides a flexible and scalable way to alert them when non-compliant templates are detected.
- Operational Efficiency: This approach avoids the operational overhead of managing EC2 instances for validation (option C) and the manual review process (option D). It also doesn’t rely on potentially complex custom validation scripts (option A). CloudFormation Guard is purpose built for this scenario and therefore likely to require less custom scripting.
Here’s why the other options are less suitable:
- A: Using aws cloudformation validate-template only validates the syntax of the template, not compliance with internal standards. It also requires custom scripting and isn’t as declarative as CloudFormation Guard.
- C: Using SQS and EC2 instances adds unnecessary complexity and operational overhead. It also introduces potential scaling issues and latency, and requiring developers to put their templates in an S3 bucket adds unnecessary friction.
- D: A centralized CloudFormation StackSet is a good idea for deploying standard resources, but it doesn’t address the need to validate custom templates created by developers. This also severely hinders the speed of delivery.
Option B strikes the right balance between automation, compliance, and developer velocity. It leverages a purpose-built tool (cfn-guard) for validation, integrates seamlessly into the CI/CD pipeline, and provides timely notifications to the security team.
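As an illustration of how the pipeline step might look, the sketch below shells out to cfn-guard and publishes to SNS when validation fails. The topic ARN, template name, and rules file are placeholders, and the CLI flags assume the cfn-guard 2.x syntax; adjust for whichever version is baked into the Docker image.

```python
import subprocess
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-findings"  # placeholder
TEMPLATE = "template.yaml"          # placeholder template path
RULES = "security-rules.guard"      # placeholder Guard rules file

# Run CloudFormation Guard against the template (cfn-guard 2.x CLI assumed).
result = subprocess.run(
    ["cfn-guard", "validate", "--data", TEMPLATE, "--rules", RULES],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    # Notify the security team, then fail this pipeline stage.
    boto3.client("sns").publish(
        TopicArn=TOPIC_ARN,
        Subject=f"Non-compliant CloudFormation template: {TEMPLATE}",
        Message=result.stdout or result.stderr,
    )
    raise SystemExit(1)
```

Because the check runs before the build stage and exits non-zero on findings, non-compliant stacks never reach deployment, while compliant templates flow through the pipeline at the developers' usual pace.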
Question.5 A company is migrating one of its legacy systems from an on-premises data center to AWS. The application server will run on AWS, but the database must remain in the on-premises data center for compliance reasons. The database is sensitive to network latency. Additionally, the data that travels between the on-premises data center and AWS must have IPsec encryption. Which combination of AWS solutions will meet these requirements? (Choose two.)
(A) AWS Site-to-Site VPN
(B) AWS Direct Connect
(C) AWS VPN CloudHub
(D) VPC peering
(E) NAT gateway
Answer: AB
Explanation:
The question requires establishing a secure, low-latency connection between an AWS application server and an on-premises database. The connection needs IPsec encryption.
Option A, AWS Site-to-Site VPN, is a correct choice because it creates an encrypted IPsec tunnel between the AWS environment and the on-premises data center. This satisfies the requirement for IPsec encryption. Site-to-Site VPN can be established over the internet or a dedicated connection, though the internet option may increase latency.
Option B, AWS Direct Connect, is also a correct choice. Direct Connect establishes a private, dedicated network connection from your on-premises data center to AWS. This dedicated connection bypasses the public internet, leading to lower latency and more consistent network performance, which is crucial for latency-sensitive database interactions. Direct Connect does not provide IPsec encryption on its own (it offers MACsec on supported connections), so a Site-to-Site VPN running over the Direct Connect link is what satisfies the IPsec requirement.
Option C, AWS VPN CloudHub, is incorrect because it’s designed for connecting multiple remote sites using VPN connections, not for optimizing latency between a single on-premises data center and AWS.
Option D, VPC peering, is incorrect because it establishes a direct networking connection between two VPCs. It cannot connect an AWS VPC to an on-premises data center.
Option E, NAT gateway, is incorrect because it allows instances in a private subnet to connect to the internet or other AWS services, but it does not establish a secure, low-latency connection to an on-premises environment.
In summary, using AWS Direct Connect coupled with an IPsec VPN tunnel over Direct Connect offers the best combination of low latency and IPsec encryption for the required connection between AWS and the on-premises database. Site-to-Site VPN could be used alone, but may not provide the required low latency.
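For the IPsec piece, provisioning a Site-to-Site VPN connection is a single API call once the customer gateway and virtual private gateway exist. The boto3 sketch below uses placeholder IDs; in the Direct Connect scenario, the resulting tunnels would typically be routed over the Direct Connect link (for example, via a public virtual interface) rather than over the public internet.

```python
import boto3

ec2 = boto3.client("ec2")

CUSTOMER_GATEWAY_ID = "cgw-0123456789abcdef0"  # placeholder: on-premises router
VPN_GATEWAY_ID = "vgw-0123456789abcdef0"       # placeholder: attached to the app VPC

# "ipsec.1" is the supported type; the connection provides two IPsec tunnels.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=CUSTOMER_GATEWAY_ID,
    VpnGatewayId=VPN_GATEWAY_ID,
    Options={"StaticRoutesOnly": False},  # use BGP for dynamic routing
)
print("VPN connection ID:", vpn["VpnConnection"]["VpnConnectionId"])
```

The on-premises side then configures its IPsec endpoints from the tunnel details in the VPN connection's configuration, completing the encrypted path between the application server in AWS and the on-premises database.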
Relevant links:
AWS Direct Connect: https://aws.amazon.com/directconnect/
AWS Site-to-Site VPN: https://aws.amazon.com/vpn/site-to-site-vpn/