Question.1 A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB). The ALB routes requests to an AWS Lambda function. Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users. The version of the application is defined in the user-agent header that is sent with all requests to the API. After a series of recent changes to the API, the company has observed issues with the application. The company needs to gather a metric for each API operation by response code for each version of the application that is in use. A DevOps engineer has modified the Lambda function to extract the API operation name, the version information from the user-agent header, and the response code. Which additional set of actions should the DevOps engineer take to gather the required metrics?
(A) Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
(B) Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric.
(C) Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
(D) Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.
Answer: A
Explanation:
The correct answer is A. Here’s why:
- Requirements: The company needs to gather metrics for each API operation by response code and application version.
- Why Option A is Correct:
- Logging: Writing the API operation name, response code, and version number to CloudWatch Logs captures all necessary data points. This is a common and efficient practice for detailed application monitoring.
- CloudWatch Logs Metric Filters: Metric filters allow you to define patterns to search for within your logs and increment CloudWatch metrics based on those patterns. Critically, they also allow you to specify dimensions, which break the metric down by different categories. In this case, the dimensions are response code and application version, precisely what the requirements ask for. This aggregates the log data into meaningful metrics.
- Scalability: CloudWatch Logs is designed to handle large volumes of log data.
- Why Option B is Incorrect:
- CloudWatch Logs Insights: While powerful for ad-hoc log analysis, CloudWatch Logs Insights isn’t designed for continuous metric population. It’s more suited for investigative analysis rather than regularly generating metrics. Additionally, it cannot create dimensions in the way that metric filters can.
- Why Option C is Incorrect:
- ALB Access Logs: ALB access logs can be delivered only to an Amazon S3 bucket, not to a CloudWatch Logs log group, so the first step of this option is not possible. The access logs also would not contain the internal API operation name, which the question specifies is extracted inside the Lambda function.
- Response Metadata: Returning the values to the ALB as response metadata does not write them to a log stream in a form that a CloudWatch Logs metric filter with dimensions can consume.
- Why Option D is Incorrect:
- AWS X-Ray: X-Ray is primarily used for tracing requests through a distributed system. While X-Ray can provide insights, it’s not the most direct way to gather aggregated metrics broken down by dimensions.
- Complexity: Configuring X-Ray and its insights to extract and publish metrics with dimensions to CloudWatch would be more complex than the straightforward approach offered by option A.
In essence, option A provides the most direct, scalable, and cost-effective solution using the available information, focusing on leveraging CloudWatch Logs metric filters with dimensions to meet the stated requirements.
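To make option A concrete, here is a minimal sketch (assuming a hypothetical log group name, metric namespace, and JSON format for the log line the Lambda function writes) of how a metric filter with dimensions could be created with boto3:

```python
import boto3

logs = boto3.client("logs")

# Assumed JSON log line written by the Lambda function, e.g.:
# {"operation": "GetOrder", "responseCode": 200, "appVersion": "2.4.1"}
logs.put_metric_filter(
    logGroupName="/aws/lambda/mobile-api",   # hypothetical log group name
    filterName="ApiOperationByCodeAndVersion",
    filterPattern="{ $.operation = * }",     # match log events that contain an operation field
    metricTransformations=[
        {
            "metricName": "ApiOperationCount",
            "metricNamespace": "MobileApp/API",  # hypothetical namespace
            "metricValue": "1",                  # increment by 1 per matching log line
            "unit": "Count",
            # Dimension values are pulled from fields in the JSON log line.
            "dimensions": {
                "Operation": "$.operation",
                "ResponseCode": "$.responseCode",
                "AppVersion": "$.appVersion",
            },
        }
    ],
)
```

With this filter in place, each matching log event increments the ApiOperationCount metric, broken down by the Operation, ResponseCode, and AppVersion dimensions.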
Supporting Documentation:
AWS X-Ray: https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html
CloudWatch Logs Metric Filters: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringPolicyExamples.html
CloudWatch Metrics: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html
CloudWatch Logs Insights: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html
Question.2 A company provides an application to customers. The application has an Amazon API Gateway REST API that invokes an AWS Lambda function. On initialization, the Lambda function loads a large amount of data from an Amazon DynamoDB table. The data load process results in long cold-start times of 8-10 seconds. The DynamoDB table has DynamoDB Accelerator (DAX) configured. Customers report that the application intermittently takes a long time to respond to requests. The application receives thousands of requests throughout the day. In the middle of the day, the application experiences 10 times more requests than at any other time of the day. Near the end of the day, the application's request volume decreases to 10% of its normal total. A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day. Which solution will meet these requirements?
(A) Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.
(B) Configure reserved concurrency on the Lambda function with a concurrency value of 0.
(C) Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
(D) Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.
Answer: C
Explanation:
The correct answer is C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
Here’s why:
The problem describes a Lambda function with long cold start times due to loading data from DynamoDB. Cold starts occur when a Lambda function is invoked for the first time or after a period of inactivity, requiring the function’s code to be loaded and initialized. This initialization includes fetching the large dataset from DynamoDB.
Provisioned concurrency addresses this issue directly. It pre-initializes a specified number of Lambda function instances and keeps them warm, ready to respond to requests. This eliminates the cold start latency for requests routed to these pre-initialized instances.
Option A is incorrect because deleting the DAX cluster would likely increase DynamoDB read latency, negatively impacting application performance. DAX is designed to accelerate DynamoDB reads, especially for frequently accessed data, which shortens the data-load portion of the function's initialization. Also, a fixed provisioned concurrency value of 1 cannot absorb the workload's large daily fluctuations.
Option B is incorrect because Reserved Concurrency is used to limit the maximum number of concurrent executions for a function. Setting it to 0 would effectively prevent the function from running at all. Reserved concurrency doesn’t pre-initialize instances like provisioned concurrency does.
Option D is incorrect because reserved concurrency only caps the maximum concurrency of a function and does not solve cold starts. Applying Application Auto Scaling to the API Gateway API does not address Lambda cold starts either; the latency originates in the Lambda function's initialization, not in the number of API requests being served, so initial invocations would still be slow. In addition, API Gateway is not a supported Application Auto Scaling target, and reserved concurrency is a Lambda setting, not an API Gateway setting.
Using Application Auto Scaling to manage provisioned concurrency is crucial because the application experiences significant fluctuations in traffic throughout the day. Scaling the provisioned concurrency between 1 and 100 allows the application to handle the increased traffic during peak hours without experiencing cold starts, while also reducing costs during periods of low traffic. A minimum of 1 ensures that at least one instance is always warm, preventing cold starts during off-peak times. The maximum of 100 handles the sudden tenfold increase in requests during peak times. This dynamic scaling optimizes resource utilization and minimizes latency at all times of the day.
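As a sketch of answer C (using a hypothetical function name and alias; provisioned concurrency must target a published version or alias rather than $LATEST), the scaling could be configured with boto3 as follows:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

FUNCTION = "order-api"   # hypothetical function name
ALIAS = "live"           # hypothetical alias

# Register the alias's provisioned concurrency as a scalable target (min 1, max 100).
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=f"function:{FUNCTION}:{ALIAS}",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# Target-tracking policy that scales provisioned concurrency with utilization.
autoscaling.put_scaling_policy(
    PolicyName="scale-provisioned-concurrency",
    ServiceNamespace="lambda",
    ResourceId=f"function:{FUNCTION}:{ALIAS}",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.7,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)
```

The target-tracking policy keeps provisioned concurrency between 1 and 100 as utilization changes, so warm instances are available during the midday peak and capacity scales back down near the end of the day.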
Relevant Documentation:
Amazon DynamoDB Accelerator (DAX): https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html
AWS Lambda Provisioned Concurrency: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
AWS Application Auto Scaling: https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html
Question.3 A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache Webserver. The development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application. After completion, the team will create additional deployment groups for staging and production. The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when the deployment occurs, so that they can set different log level configurations depending on the deployment group without having a different application revision for each group. How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?
(A) Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.
(B) Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
(C) Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.
(D) Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.
Answer: B
Explanation:
The correct answer is B. Here’s a detailed justification:
The requirement is to dynamically adjust the Apache log level based on the CodeDeploy deployment group without creating different application revisions or script versions for each group. This necessitates a mechanism to identify the deployment group during the deployment process.
Option B leverages the DEPLOYMENT_GROUP_NAME environment variable, which CodeDeploy automatically provides to lifecycle hook scripts during deployments. This variable holds the name of the deployment group to which the instance is being deployed. A script can readily access this variable and use it to determine the appropriate log level configuration. Placing this script in the BeforeInstall lifecycle hook allows it to run before the application is installed, ensuring that the log level is configured before the application starts using it. This approach avoids the overhead of managing EC2 tags or custom CodeDeploy environment variables.
Option A is less efficient. While EC2 tags can identify deployment groups, tagging instances per deployment group and then querying the instance metadata service and the EC2 API from a script introduces extra moving parts, additional IAM permissions, and ongoing management overhead compared with reading a variable that CodeDeploy already provides. The AfterInstall hook itself would work for configuration (it runs before ApplicationStart), but the tagging approach still adds unnecessary overhead.
Option C adds unnecessary management overhead, and CodeDeploy does not provide a way to define custom environment variables per deployment group; each environment would need its own configuration, which defeats the purpose of a single deployment process. Also, the ValidateService lifecycle hook is designed for validating the service's integrity after deployment, not for configuration.
Option D is close, but DEPLOYMENT_GROUP_ID is an opaque identifier that is harder to map to a configuration than the human-readable group name. More importantly, the Install lifecycle event is reserved for the CodeDeploy agent to copy the revision files and cannot be used to run scripts.
In summary, Option B provides the least management overhead by leveraging the built-in CodeDeploy environment variable and executing the configuration script during the BeforeInstall lifecycle hook, fulfilling the requirements efficiently and effectively.
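A minimal sketch of such a BeforeInstall hook script follows, assuming a Python runtime on the instances and a hypothetical Apache include file and log-level mapping; the script would be referenced under the BeforeInstall hooks section of the appspec.yml file.

```python
#!/usr/bin/env python3
"""BeforeInstall hook: set the Apache LogLevel based on the CodeDeploy deployment group."""
import os

# CodeDeploy exposes the deployment group name to lifecycle hook scripts.
deployment_group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")

# Hypothetical mapping of deployment groups to Apache log levels.
LOG_LEVELS = {
    "developer-env": "debug",
    "staging-env": "info",
    "production-env": "warn",
}
log_level = LOG_LEVELS.get(deployment_group, "warn")

# Hypothetical include file picked up by the main Apache configuration.
with open("/etc/httpd/conf.d/loglevel.conf", "w") as config_file:
    config_file.write(f"LogLevel {log_level}\n")

print(f"Configured Apache LogLevel '{log_level}' for deployment group '{deployment_group}'")
```

Because the mapping lives in one script and the deployment group name is supplied by CodeDeploy, the same application revision and the same script work unchanged across the developer, staging, and production deployment groups.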
Relevant links for further research:
AWS CodeDeploy Environment Variables: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-environment-variables.html
AWS CodeDeploy Lifecycle Events: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
Question.4 A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an account to indicate a desired backup frequency. This requirement includes EBS volumes that do not require backups. The company uses custom tags named Backup_Frequency that have values of none, daily, or weekly that correspond to the desired backup frequency. An audit finds that developers are occasionally not tagging the EBS volumes. A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so that the company can perform backups at least weekly unless a different value is specified. Which solution will meet these requirements?
(A) Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
(B) Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
(C) Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
(D) Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
Answer: B
Explanation:
The correct answer is B. Here’s a detailed justification:
AWS Config continuously monitors and assesses the configuration of your AWS resources. It's ideal for enforcing compliance rules, as required by the scenario. Option B leverages AWS Config with a managed rule, specifically targeting EC2::Volume resources. This ensures that all EBS volumes are checked for the presence of the Backup_Frequency tag. Managed rules are pre-built and readily available within AWS Config, simplifying the implementation. If a volume lacks the required tag, the rule marks it as non-compliant.
Crucially, Option B also includes a remediation action using a Systems Manager Automation runbook. This runbook automatically applies the missing Backup_Frequency tag with a default value of weekly. This automated remediation addresses the problem of developers occasionally forgetting to tag volumes, ensuring that all volumes are tagged and receive at least weekly backups unless explicitly configured otherwise. Systems Manager Automation provides a secure and auditable way to apply the tag.
Option A is similar but suggests using a custom Config rule, which is unnecessary since a managed rule already exists for checking tags on EBS volumes (EC2::Volume). Managed rules are generally preferred for simplicity and reduced overhead.
Options C and D rely on AWS CloudTrail and EventBridge, triggered by CreateVolume (or ModifyVolume) events. While this approach can work, it has drawbacks. It reacts after the volume has been created, potentially allowing untagged volumes to exist briefly. Also, reacting to ModifyVolume events is unnecessary, as the requirement is to ensure tags are present from the start. AWS Config continuously monitors and remediates, offering a more proactive and consistent approach for ensuring ongoing compliance. Config provides historical configuration data and continuous compliance checks, features absent in event-driven, post-creation tagging.
In essence, AWS Config, with its managed rule and automated remediation via Systems Manager, provides a comprehensive and proactive solution for ensuring all EBS volumes are tagged with the desired Backup_Frequency, meeting the company's compliance requirements.
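The following sketch (with a hypothetical rule name, Automation runbook name, and IAM role ARN) shows how the managed required-tags rule and its automatic remediation could be wired up with boto3:

```python
import boto3

config = boto3.client("config")

# Managed rule: flag EBS volumes that are missing the Backup_Frequency tag.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-backup-frequency-tag",   # hypothetical rule name
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": '{"tag1Key": "Backup_Frequency"}',
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)

# Automatic remediation: run a custom Automation runbook that applies
# Backup_Frequency=weekly to each non-compliant volume.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "ebs-backup-frequency-tag",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "ApplyBackupFrequencyTag",        # hypothetical custom runbook
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                "AutomationAssumeRole": {
                    "StaticValue": {
                        "Values": ["arn:aws:iam::123456789012:role/ConfigRemediationRole"]  # hypothetical role
                    }
                },
                # Pass the non-compliant volume ID to the runbook.
                "ResourceId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            },
        }
    ]
)
```

The custom runbook itself would simply call the EC2 CreateTags action against the volume ID it receives, applying Backup_Frequency=weekly.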
Here are authoritative links for further research:
Tagging AWS Resources: https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html
AWS Config: https://aws.amazon.com/config/
AWS Config Managed Rules: https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws.html
AWS Systems Manager Automation: https://aws.amazon.com/systems-manager/automation/
Question.5 A company is using an Amazon Aurora cluster as the data store for its application. The Aurora cluster is configured with a single DB instance. The application performs read and write operations on the database by using the cluster's instance endpoint. The company has scheduled an update to be applied to the cluster during an upcoming maintenance window. The cluster must remain available with the least possible interruption during the maintenance window. What should a DevOps engineer do to meet these requirements?
(A) Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads.
(B) Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
(C) Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads.
(D) Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
Answer: A
Explanation:
The best approach to minimize interruption during an Aurora cluster update while maintaining availability is option A. Here’s why:
- Read Scalability and Availability: Adding a reader instance provides a read replica. During the maintenance window, the primary instance might be unavailable temporarily. The reader instance continues to serve read requests, improving availability for read operations.
- Cluster Endpoint for Writes: Using the cluster endpoint for writes ensures that write operations are automatically directed to the primary instance in the cluster, even during the maintenance process when the primary may fail over. Aurora handles the redirection behind the scenes.
- Reader Endpoint for Reads: The reader endpoint specifically directs read traffic to available reader instances. This is crucial because, as mentioned before, the primary instance may be unavailable briefly during the update. Directing read traffic to the reader instance prevents read operations from failing.
- Why other options are less ideal:
- Option B: Creating a custom ANY endpoint is not a standard practice for this scenario. While custom endpoints can be created, the default cluster and reader endpoints are more appropriate and easier to manage. Also, a custom ANY endpoint can route connections to any instance in its group, so write traffic could land on a reader instance that cannot accept writes, and it does not automatically follow a new primary after a failover.
- Option C: Turning on Multi-AZ is a good practice for high availability. However, by itself, it only provides failover capabilities. During the failover itself, there will be a short period of unavailability. It does not address read availability explicitly.
- Option D: Combining Multi-AZ with a custom ANY endpoint has the same drawbacks as options B and C combined. It does not provide the advantages of the built-in reader endpoint.
- Least Interruption: By using the cluster endpoint for writes and the reader endpoint for reads with a reader instance, the application experiences the least interruption during the maintenance window. Aurora manages the failover to a new primary (if needed), and the reader instance continues serving reads. The application logic itself doesn’t need to be aware of the failover process.
In summary: Option A leverages Aurora’s built-in high availability and read scaling features in the most straightforward and effective way to minimize downtime during a maintenance window. It ensures that reads can continue to be served even if the primary instance is temporarily unavailable, and the cluster endpoint seamlessly handles write redirection.
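A minimal sketch of the application-side change for option A, assuming an Aurora MySQL-compatible cluster, the PyMySQL driver, and hypothetical endpoint names and credentials:

```python
import pymysql

# Hypothetical Aurora endpoints; the cluster endpoint always points at the
# current writer, and the reader endpoint load-balances across reader instances.
WRITER_ENDPOINT = "myapp.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "myapp.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def get_connection(read_only: bool = False):
    """Return a connection to the reader endpoint for reads, the cluster endpoint for writes."""
    host = READER_ENDPOINT if read_only else WRITER_ENDPOINT
    return pymysql.connect(
        host=host,
        user="app_user",          # hypothetical credentials
        password="app_password",
        database="appdb",
        connect_timeout=5,
    )

# Writes use the cluster endpoint, so they follow the writer through a failover.
conn = get_connection(read_only=False)
try:
    with conn.cursor() as cur:
        cur.execute("UPDATE orders SET status = %s WHERE id = %s", ("SHIPPED", 42))
    conn.commit()
finally:
    conn.close()

# Reads use the reader endpoint and keep being served by the reader instance
# while the writer is being updated.
conn = get_connection(read_only=True)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT status FROM orders WHERE id = %s", (42,))
        print(cur.fetchone())
finally:
    conn.close()
```

The key change from the original setup is that the application stops using the single instance endpoint and instead lets the cluster and reader endpoints route traffic to whichever instance is healthy at the time.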
Relevant Links: