Question.31 A company has migrated its container-based applications to Amazon EKS and wants to establish automated email notifications. The notifications sent to each email address are for specific activities related to EKS components. The solution will include Amazon SNS topics and an AWS Lambda function to evaluate incoming log events and publish messages to the correct SNS topic. Which logging solution will support these requirements? (A) Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination. (B) Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon EventBridge events that invoke Lambda. (C) Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination. (D) Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination.
Answer: A
Explanation:
Here’s why option A is the best solution for implementing automated email notifications for EKS components using SNS topics and Lambda, and why the other options are less suitable:
Why Option A is the Best Choice:
Option A proposes using Amazon CloudWatch Logs to capture logs from EKS components. This is the most logical starting point since CloudWatch Logs is a centralized logging service that readily integrates with AWS services like EKS. By enabling CloudWatch Logs, the logs can be aggregated and made searchable.
The critical part of option A is the use of CloudWatch subscription filters. Subscription filters act like real-time event detectors, scanning incoming log data for specific patterns or keywords related to each EKS component’s activity. These filters are the “glue” that connects log events to specific SNS topics.
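As a rough illustration of that wiring, the sketch below uses boto3 to grant CloudWatch Logs permission to invoke a routing Lambda function and to create a subscription filter with that function as the destination. The log group names, filter pattern, ARNs, and account/region values are placeholders, not details from the scenario.

```python
# Hypothetical sketch: one CloudWatch Logs subscription filter per EKS component,
# each pointing at a routing Lambda function. All names and ARNs are placeholders.
import boto3

logs = boto3.client("logs")
lambda_client = boto3.client("lambda")

ROUTER_LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:eks-log-router"

# Illustrative component-to-log-group mapping; adjust to how your EKS logs
# actually land in CloudWatch Logs.
log_groups = {
    "api-server": "/aws/eks/prod-cluster/cluster",
}

for component, log_group in log_groups.items():
    # Allow CloudWatch Logs to invoke the routing Lambda for this log group.
    lambda_client.add_permission(
        FunctionName=ROUTER_LAMBDA_ARN,
        StatementId=f"cwlogs-{component}",
        Action="lambda:InvokeFunction",
        Principal="logs.amazonaws.com",
        SourceArn=f"arn:aws:logs:us-east-1:123456789012:log-group:{log_group}:*",
    )
    # Create the subscription filter; the pattern narrows which events are forwarded.
    logs.put_subscription_filter(
        logGroupName=log_group,
        filterName=f"{component}-to-lambda",
        filterPattern='"error"',  # example pattern only
        destinationArn=ROUTER_LAMBDA_ARN,
    )
```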
The Lambda function is configured as the target (or “destination”) of the subscription filter. When a log event matches a subscription filter’s criteria, CloudWatch Logs automatically invokes the Lambda function, passing the log event data as input.
Inside the Lambda function, logic is implemented to parse the log event data, determine which SNS topic is relevant based on the event’s content, and then publish a message to that SNS topic. Subscribers to those SNS topics will then receive the email notifications.
CloudWatch subscription filters provide near real-time event processing, which satisfies the requirement for timely email notifications. The solution leverages the native integration between CloudWatch Logs and Lambda, providing an efficient and cost-effective mechanism.
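A minimal sketch of the routing Lambda function described above follows; the topic ARNs and keyword-based routing rules are illustrative assumptions. CloudWatch Logs delivers subscription data to Lambda base64-encoded and gzip-compressed under event["awslogs"]["data"].

```python
# Minimal sketch of the routing Lambda. Topic ARNs and routing rules are
# illustrative assumptions, not values from the scenario.
import base64
import gzip
import json

import boto3

sns = boto3.client("sns")

# Map a component keyword to the SNS topic whose subscribers should be emailed.
TOPIC_BY_KEYWORD = {
    "scheduler": "arn:aws:sns:us-east-1:123456789012:eks-scheduler-alerts",
    "authenticator": "arn:aws:sns:us-east-1:123456789012:eks-auth-alerts",
}


def handler(event, context):
    # CloudWatch Logs sends subscription data base64-encoded and gzip-compressed.
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    for log_event in payload["logEvents"]:
        message = log_event["message"]
        for keyword, topic_arn in TOPIC_BY_KEYWORD.items():
            if keyword in message:
                # Publish to the topic for this component; email subscribers
                # of the topic receive the notification.
                sns.publish(
                    TopicArn=topic_arn,
                    Subject=f"EKS alert: {keyword}",
                    Message=message,
                )
```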
Why Other Options are Less Suitable:
- Option B: While CloudWatch Logs Insights can query logs, it is an on-demand analysis tool for data that has already been stored, not a mechanism that reacts to events as they arrive. Insights queries are not continuously evaluated and cannot directly emit EventBridge events for each matching log line, so linking them to EventBridge is not a practical real-time solution for triggering notifications.
- Option C: S3 logging is suited to archiving logs, not real-time event processing. More importantly, CloudWatch subscription filters operate on CloudWatch Logs log groups, not on objects stored in S3, so this combination does not work as described. CloudWatch Logs is designed for real-time log processing; routing logs through S3 only adds complexity and delay.
- Option D: S3 PUT Object event notifications can trigger Lambda functions when new log files are written to S3. However, this approach only reacts to the creation of a new log file, not individual log events within the files. To process specific log events, the Lambda function would have to parse the entire log file each time a new one is written, which is highly inefficient and doesn’t provide timely, event-driven notifications. Furthermore, EKS components typically don’t directly write logs to S3 in a way that would make S3 event notifications the primary means of consuming logs. They generally write to stdout/stderr, which is then collected by a logging agent and sent to CloudWatch Logs or other logging backends.
In Summary:
Option A offers the most efficient and appropriate solution for implementing real-time, event-driven email notifications for EKS components using SNS topics and Lambda. It leverages the native capabilities of CloudWatch Logs for log aggregation, filtering, and integration with Lambda.
Authoritative Links:
AWS Lambda: https://aws.amazon.com/lambda/
Amazon CloudWatch Logs: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
CloudWatch Subscription Filters: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
Amazon SNS: https://aws.amazon.com/sns/
Question.32 A company is implementing an Amazon Elastic Container Service (Amazon ECS) cluster to run its workload. The company architecture will run multiple ECS services on the cluster. The architecture includes an Application Load Balancer on the front end and uses multiple target groups to route traffic. A DevOps engineer must collect application and access logs. The DevOps engineer then needs to send the logs to an Amazon S3 bucket for near-real-time analysis. Which combination of steps must the DevOps engineer take to meet these requirements? (Choose three.) (A) Download the Amazon CloudWatch Logs container instance from AWS. Configure this instance as a task. Update the application service definitions to include the logging task. (B) Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs. (C) Use Amazon EventBridge to schedule an AWS Lambda function that will run every 60 seconds and will run the Amazon CloudWatch Logs create-export-task command. Then point the output to the logging S3 bucket. (D) Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket. (E) Activate access logging on the target groups that the ECS services use. Then send the logs directly to the logging S3 bucket. (F) Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket. Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose.
Answer: BDF
Explanation:
The correct solution for collecting application and access logs from an ECS cluster and sending them to an S3 bucket for near-real-time analysis involves several key steps focusing on proper log capture, transportation, and access configuration.
Option B, “Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs,” is crucial for the application logs. The awslogs logging driver sends container logs directly to CloudWatch Logs, making them easily accessible, and installing the CloudWatch Logs agent on the instances ensures log ingestion. This is more efficient than running a separate logging container as a task (Option A), which would add unnecessary complexity. [https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html]
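For illustration, a task definition using the awslogs driver could be registered with boto3 roughly as follows; the family, image, log group, and region values are placeholders.

```python
# Illustrative sketch: registering a task definition that uses the awslogs
# log driver. Family, image, region, and log group names are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web-service",
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            "memory": 512,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/web-service",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "web",
                },
            },
        }
    ],
)
```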
Option D, “Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket,” correctly captures access logs generated by the Application Load Balancer (ALB). ALBs provide native access logging, which directly records requests made to the load balancer. Configuring the ALB to directly write these logs to the S3 bucket streamlines the process. [https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html]
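A hedged sketch of enabling ALB access logs with boto3 is shown below; the load balancer ARN, bucket, and prefix are placeholders, and the bucket policy must separately allow the Elastic Load Balancing log-delivery service to write to the bucket.

```python
# Illustrative sketch: enabling ALB access logs to an S3 bucket. The load
# balancer ARN, bucket, and prefix are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web-alb/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "logging-bucket"},
        {"Key": "access_logs.s3.prefix", "Value": "alb/web-alb"},
    ],
)
```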
Option F, “Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket. Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose,” establishes a real-time pipeline for log delivery. The CloudWatch Logs subscription filter directs logs matching a specific pattern from CloudWatch Logs to the Kinesis Data Firehose delivery stream. Kinesis Data Firehose then efficiently streams these logs to the designated S3 bucket. This offers low latency and ensures logs are available for near-real-time analysis. [https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_KinesisDataFirehose.html]
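As a rough sketch, the subscription filter for option F could be created as follows; the log group, delivery stream ARN, and IAM role (which CloudWatch Logs assumes to write to Kinesis Data Firehose) are placeholders.

```python
# Illustrative sketch: subscribing a CloudWatch Logs log group to a Kinesis Data
# Firehose delivery stream that lands in the logging S3 bucket. ARNs are placeholders.
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/ecs/web-service",
    filterName="to-firehose",
    filterPattern="",  # empty pattern forwards every log event
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/ecs-logs-to-s3",
    roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",
)
```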
Option A is incorrect because managing a separate logging container is overly complex. Option C relies on a scheduled Lambda function running create-export-task, which is a batch export, so it is inefficient and not near real time. Option E is incorrect because access logging is an ALB feature; target groups do not provide their own access logs.
Question.33 A company that uses electronic health records is running a fleet of Amazon EC2 instances with an Amazon Linux operating system. As part of patient privacy requirements, the company must ensure continuous compliance for patches for operating system and applications running on the EC2 instances. How can the deployments of the operating system and application patches be automated using a default and custom repository? (A) Use AWS Systems Manager to create a new patch baseline including the custom repository. Run the AWS-RunPatchBaseline document using the run command to verify and install patches. (B) Use AWS Direct Connect to integrate the corporate repository and deploy the patches using Amazon CloudWatch scheduled events, then use the CloudWatch dashboard to create reports. (C) Use yum-config-manager to add the custom repository under /etc/yum.repos.d and run yum-config-manager-enable to activate the repository. (D) Use AWS Systems Manager to create a new patch baseline including the corporate repository. Run the AWS-AmazonLinuxDefaultPatchBaseline document using the run command to verify and install patches.
Answer: A
Explanation:
The correct answer is A because it leverages AWS Systems Manager Patch Manager, a service designed for automating OS and application patching on EC2 instances. Creating a new patch baseline allows the company to define the specific patches to approve or reject and, importantly, to include custom repositories where its application patches reside. The AWS-RunPatchBaseline document, when executed via Systems Manager Run Command, handles the patching process according to the defined baseline, ensuring continuous compliance. This approach addresses the core requirement of automating patch deployments using both the default and custom repositories.
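A hedged sketch of this setup with boto3 follows; the repository configuration, approval rules, patch group, and tag values are illustrative assumptions rather than values from the scenario.

```python
# Hedged sketch: a patch baseline that includes a custom repository, plus a
# Run Command invocation of AWS-RunPatchBaseline. Repo details, tags, and
# approval rules are illustrative placeholders.
import boto3

ssm = boto3.client("ssm")

baseline = ssm.create_patch_baseline(
    Name="amazon-linux-with-corp-repo",
    OperatingSystem="AMAZON_LINUX_2",
    ApprovalRules={
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["Security", "Bugfix"]}
                    ]
                },
                "ApproveAfterDays": 0,
            }
        ]
    },
    Sources=[  # custom/corporate repository definition
        {
            "Name": "corp-app-repo",
            "Products": ["AmazonLinux2"],
            "Configuration": (
                "[corp-app-repo]\n"
                "name=Corporate application repo\n"
                "baseurl=https://repo.example.com/al2/$basearch\n"
                "enabled=1\n"
            ),
        }
    ],
)

# Associate the baseline with a patch group, then patch the fleet via Run Command.
ssm.register_patch_baseline_for_patch_group(
    BaselineId=baseline["BaselineId"], PatchGroup="ehr-fleet"
)
ssm.send_command(
    Targets=[{"Key": "tag:Patch Group", "Values": ["ehr-fleet"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
)
```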
Option B is incorrect. While AWS Direct Connect provides connectivity to corporate resources, it doesn’t directly automate patch deployment. CloudWatch Scheduled Events could be used to trigger a patching process, but the process itself isn’t defined, nor does it integrate custom repositories.
Option C is incorrect because, while yum-config-manager is a valid tool for managing YUM repositories on Amazon Linux instances, this method does not provide the centralized control, auditability, and automated patching capabilities of Systems Manager Patch Manager. It also doesn’t readily scale to a fleet of EC2 instances or provide compliance reporting.
Option D is incorrect because AWS-AmazonLinuxDefaultPatchBaseline is the predefined default baseline, which only uses the default Amazon Linux repositories. It does not support the inclusion of custom repositories, failing to meet the requirement of deploying application patches alongside OS patches.
Systems Manager Patch Manager offers several advantages, including centralized control, automated patching schedules, compliance reporting, and support for both default and custom patch sources. This aligns perfectly with the company’s need for continuous compliance and automation of patch deployments using custom repositories for applications.
Supporting Links:
Working with Patch Baselines: https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-baselines.html
AWS Systems Manager Patch Manager: https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html
AWS Systems Manager Run Command: https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html
Question.34 A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being used in the pipeline to deploy an application to Amazon Elastic Container Service (Amazon ECS) using the blue/green deployment model. The company wants to implement scripts to test the green version of the application before shifting traffic. These scripts will complete in 5 minutes or less. If errors are discovered during these tests, the application must be rolled back. Which strategy will meet these requirements? (A) Add a stage to the CodePipeline pipeline between the source and deploy stages. Use AWS CodeBuild to create a runtime environment and build commands in the buildspec file to invoke test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment. (B) Add a stage to the CodePipeline pipeline between the source and deploy stages. Use this stage to invoke an AWS Lambda function that will run the test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment. (C) Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to initiate rollback. (D) Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTraffic lifecycle event to invoke the test scripts. If errors are found, use the aws deploy stop-deployment CLI command to stop the deployment.
Answer: C
Explanation:
The correct answer is C because it leverages CodeDeploy’s lifecycle hooks, specifically AfterAllowTestTraffic, to run the tests after test traffic has been routed to the green environment but before production traffic is shifted. This is the ideal point to validate the new deployment.
Here’s a breakdown of why C is the best option and why the others are less suitable:
- Option A: Adding a CodeBuild stage between the source and deploy stages is too early. The application isn’t deployed to the green environment yet, so testing would be based on the built artifacts, not the running application in its target environment. Using aws deploy stop-deployment from CodeBuild to stop the entire deployment is also awkward and more complex to implement.
- Option B: Using a Lambda function in a CodePipeline stage offers more flexibility than CodeBuild for certain testing scenarios, but it shares the same timing problem as Option A: it runs before the application is deployed to the green environment. Also, calling aws deploy stop-deployment from the Lambda function would affect the entire deployment process.
- Option C: Using the AfterAllowTestTraffic hook in the CodeDeploy AppSpec file aligns perfectly with the requirement. The hook runs after test traffic is routed to the green task set, so the scripts validate the running replacement version before production traffic is shifted. If the Lambda function finds errors and signals a failure, CodeDeploy fails the lifecycle event and automatically rolls the deployment back. This solution tightly integrates with the blue/green deployment model (see the sketch after this list).
- Option D: While using lifecycle hooks is the right idea, AfterAllowTraffic runs after all production traffic has shifted to the green environment. If errors are found at that point, rolling back could disrupt all users. The AfterAllowTestTraffic event is the one designed for pre-production validation. In addition, stopping the deployment with the CLI from within a lifecycle event hook is not the intended rollback mechanism.
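Below is a hedged sketch of what the AfterAllowTestTraffic hook function for option C might look like; the test endpoint and validation logic are placeholders. The AppSpec hooks section would reference this function by name, and the function reports its result back to CodeDeploy with PutLifecycleEventHookExecutionStatus.

```python
# Hedged sketch of an AfterAllowTestTraffic hook Lambda. The test endpoint and
# validation logic are placeholders; the status call tells CodeDeploy whether
# to continue the deployment or roll back.
import urllib.request

import boto3

codedeploy = boto3.client("codedeploy")

TEST_ENDPOINT = "http://green-test-listener.example.internal/health"  # placeholder


def handler(event, context):
    deployment_id = event["DeploymentId"]
    hook_execution_id = event["LifecycleEventHookExecutionId"]

    try:
        # Run the smoke test against the green task set via the test listener.
        with urllib.request.urlopen(TEST_ENDPOINT, timeout=10) as resp:
            status = "Succeeded" if resp.status == 200 else "Failed"
    except Exception:
        status = "Failed"

    # A "Failed" status fails the lifecycle event, and CodeDeploy rolls back.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status=status,
    )
    return status
```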
In summary, option C ensures tests are executed in the correct environment (green), at the correct point in the deployment lifecycle (after test traffic is shifted), and leverages the built-in rollback mechanism of CodeDeploy upon test failure. This strategy minimizes potential disruption and provides a robust mechanism for verifying the new deployment.
Relevant links:
AWS Lambda: https://aws.amazon.com/lambda/
AWS CodeDeploy AppSpec File Reference: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
Blue/Green Deployments with CodeDeploy: https://docs.aws.amazon.com/codedeploy/latest/userguide/traffic-management-blue-green.html
Question.35 A company uses AWS Storage Gateway in file gateway mode in front of an Amazon S3 bucket that is used by multiple resources. In the morning when business begins, users do not see the objects processed by a third party the previous evening. When a DevOps engineer looks directly at the S3 bucket, the data is there, but it is missing in Storage Gateway. Which solution ensures that all the updated third-party files are available in the morning? (A) Configure a nightly Amazon EventBridge event to invoke an AWS Lambda function to run the RefreshCache command for Storage Gateway. (B) Instruct the third party to put data into the S3 bucket using AWS Transfer for SFTP. (C) Modify Storage Gateway to run in volume gateway mode. (D) Use S3 Same-Region Replication to replicate any changes made directly in the S3 bucket to Storage Gateway.
Answer: A
Explanation:
The problem describes a situation where files added to an S3 bucket by a third party overnight are not visible to users accessing the bucket through AWS Storage Gateway’s file gateway mode the next morning. The root cause is that Storage Gateway’s cache is not automatically updated with the new objects written directly to S3.
Option A is the correct solution because it proactively addresses the cache staleness issue. Using Amazon EventBridge to schedule a nightly event that triggers an AWS Lambda function to execute the RefreshCache command for the Storage Gateway file share ensures that the gateway’s cache is synchronized with the latest state of the S3 bucket every morning. This is a suitable automated solution for keeping the gateway’s view of the S3 bucket consistent, and the command is designed for exactly this scenario.
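A minimal sketch of the scheduled Lambda function, assuming a placeholder file share ARN, could look like this:

```python
# Minimal sketch of the nightly Lambda for Option A, invoked by an EventBridge
# schedule. The file share ARN is a placeholder.
import boto3

storagegateway = boto3.client("storagegateway")

FILE_SHARE_ARN = "arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE"


def handler(event, context):
    # Ask the file gateway to rescan the S3 bucket so objects written directly
    # to S3 (for example, by the third party overnight) appear in the file share.
    return storagegateway.refresh_cache(
        FileShareARN=FILE_SHARE_ARN,
        FolderList=["/"],  # refresh from the root of the share
        Recursive=True,
    )
```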
Option B is incorrect because even if the third party uses AWS Transfer for SFTP, the data will still be written directly to the S3 bucket, and Storage Gateway’s cache will still need to be refreshed to reflect these changes. Transfer for SFTP does not directly address the cache synchronization problem.
Option C, changing to volume gateway mode, is not the correct approach. Volume gateway mode presents block-based storage to on-premises applications, which is not the required functionality. The file gateway is appropriate since the resources utilize a file system interface. Switching mode would necessitate a significant architectural change unrelated to the cache refresh issue.
Option D, using S3 Same-Region Replication, is also incorrect in this scenario. S3 replication is designed to replicate objects between S3 buckets, but it does not automatically update Storage Gateway’s cache. Replication would only copy the data to another bucket; the original problem of the gateway’s stale cache would persist.
Therefore, the best solution is to schedule a periodic refresh of the Storage Gateway’s cache to ensure consistency between the S3 bucket and the files accessible through the gateway.
Supporting documentation:
Amazon EventBridge: https://aws.amazon.com/eventbridge/
AWS Storage Gateway RefreshCache: https://docs.aws.amazon.com/storagegateway/latest/userguide/HowFileGatewayWorks.html (See section about “How File Gateway Works” and caching behaviour.)
AWS Lambda: https://aws.amazon.com/lambda/