Question.31 An enterprise company wants to allow its developers to purchase third-party software through AWS Marketplace. The company uses an AWS Organizations account structure with full features enabled, and has a shared services account in each organizational unit (OU) that will be used by procurement managers. The procurement team’s policy indicates that developers should be able to obtain third-party software from an approved list only and use Private Marketplace in AWS Marketplace to achieve this requirement. The procurement team wants administration of Private Marketplace to be restricted to a role named procurement-manager-role, which could be assumed by procurement managers. Other IAM users, groups, roles, and account administrators in the company should be denied Private Marketplace administrative access. What is the MOST efficient way to design an architecture to meet these requirements? (A) Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the PowerUserAccess managed policy to the role. Apply an inline policy to all IAM users and roles in every AWS account to deny permissions on the AWSPrivateMarketplaceAdminFullAccess managed policy. (B) Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the AdministratorAccess managed policy to the role. Define a permissions boundary with the AWSPrivateMarketplaceAdminFullAccess managed policy and attach it to all the developer roles. (C) Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny permissions to create an IAM role named procurement-manager-role to everyone in the organization. (D) Create an IAM role named procurement-manager-role in all AWS accounts that will be used by developers. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an SCP in Organizations to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Apply the SCP to all the shared services accounts in the organization.
Answer: C
Explanation:
The most efficient solution is C. Here’s why:
- Centralized Control: AWS Organizations and Service Control Policies (SCPs) offer centralized governance across all accounts within the organization. This aligns with the enterprise’s requirement for restricted administration.
- Role-Based Access Control: Creating the procurement-manager-role in the shared services accounts and assigning AWSPrivateMarketplaceAdminFullAccess ensures that only procurement managers in designated accounts can administer Private Marketplace.
- Explicit Deny with SCPs: SCPs with explicit deny statements are crucial. The first SCP denies Private Marketplace administration to everyone at the root level of the organization, a broad restriction that is then selectively relaxed for the procurement-manager-role. The second SCP denies creation of the procurement-manager-role itself, mitigating the risk of unauthorized users creating the role in order to assume it. (A sketch of both SCPs appears after this list.)
- Least Privilege: The AWSPrivateMarketplaceAdminFullAccess policy grants the minimum permissions required to manage Private Marketplace, adhering to the principle of least privilege.
Why the other options are incorrect:
- A: Adding PowerUserAccess is overly permissive, and applying inline deny policies to all IAM users and roles is cumbersome and difficult to maintain across a large organization.
- B: AdministratorAccess is far too broad and violates the principle of least privilege. A permissions boundary only caps what the roles it is attached to can do; it does nothing to stop other users, roles, or account administrators from administering Private Marketplace.
- D: Applying the SCP only to the shared services accounts is ineffective, because it leaves every other account unrestricted; the SCP needs to restrict access everywhere else. Furthermore, the procurement-manager-role belongs in the shared services accounts used by procurement managers, not in every developer account.
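For illustration, here is a hedged sketch of what the two root-level SCPs from option C might look like, created from the management account with boto3. The Private Marketplace action names are illustrative rather than exhaustive (the authoritative list is in the AWS Marketplace documentation), and the policy names are placeholders.

```python
import json
import boto3

org = boto3.client("organizations")

# SCP 1: deny Private Marketplace administration to every principal except the
# procurement-manager-role. The action list is illustrative, not exhaustive.
deny_pmp_admin = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPrivateMarketplaceAdmin",
        "Effect": "Deny",
        "Action": [
            "aws-marketplace-management:*",
            "aws-marketplace:AssociateProductsWithPrivateMarketplace",
            "aws-marketplace:DisassociateProductsFromPrivateMarketplace",
        ],
        "Resource": "*",
        "Condition": {
            "ArnNotLike": {
                "aws:PrincipalARN": "arn:aws:iam::*:role/procurement-manager-role"
            }
        },
    }],
}

# SCP 2: deny creation of any role named procurement-manager-role so the name
# cannot be recreated or hijacked in member accounts.
deny_role_creation = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyProcurementManagerRoleCreation",
        "Effect": "Deny",
        "Action": "iam:CreateRole",
        "Resource": "arn:aws:iam::*:role/procurement-manager-role",
    }],
}

# Attach both SCPs at the organization root so they apply to every account.
root_id = org.list_roots()["Roots"][0]["Id"]
for name, document in [("DenyPrivateMarketplaceAdmin", deny_pmp_admin),
                       ("DenyProcurementManagerRoleCreation", deny_role_creation)]:
    policy = org.create_policy(
        Name=name,
        Description="Private Marketplace governance",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(document),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId=root_id,
    )
```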
Authoritative Links:
IAM Roles: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
AWS Organizations: https://aws.amazon.com/organizations/
Service Control Policies (SCPs): https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
AWS Marketplace Private Marketplace: https://docs.aws.amazon.com/marketplace/latest/userguide/private-marketplace.html
Question.32 A company is in the process of implementing AWS Organizations to constrain its developers to use only Amazon EC2, Amazon S3, and Amazon DynamoDB. The developers account resides in a dedicated organizational unit (OU). The solutions architect has implemented the following SCP on the developers account: [SCP exhibit not shown] When this policy is deployed, IAM users in the developers account are still able to use AWS services that are not listed in the policy. What should the solutions architect do to eliminate the developers’ ability to use services outside the scope of this policy? (A) Create an explicit deny statement for each AWS service that should be constrained. (B) Remove the FullAWSAccess SCP from the developers account’s OU. (C) Modify the FullAWSAccess SCP to explicitly deny all services. (D) Add an explicit deny statement using a wildcard to the end of the SCP.
Answer: B
Explanation:
Initially, 'A' looks tempting, but the documentation settles it: “AWS services that aren’t explicitly allowed by the SCPs associated with an AWS account or its parent OUs are denied access to the AWS accounts or OUs associated with the SCP. SCPs associated to an OU are inherited by all AWS accounts in that OU.” Because the AWS managed FullAWSAccess SCP is attached to the root, every OU, and every account by default and allows all actions, the new allow-list SCP has no practical effect until FullAWSAccess is removed.
'B' is the best answer, but not the only workable one. 'D' is also technically viable: an explicit deny statement covering everything outside the three allowed services takes precedence over the FullAWSAccess allow and produces the same end result. 'B', however, is the cleaner approach and the documented best practice, which is what a Professional-level exam is testing. (Both mechanisms are sketched below.)
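A minimal sketch of the two mechanisms, assuming boto3 and placeholder identifiers (the OU ID is made up; p-FullAWSAccess is the ID AWS uses for the managed FullAWSAccess SCP, but verify it in your own organization):

```python
import json
import boto3

org = boto3.client("organizations")
developers_ou = "ou-root-EXAMPLE"  # placeholder ID for the developers OU

# Option B: detach the default FullAWSAccess SCP from the developers OU so the
# allow-list SCP becomes the only source of permissions there.
org.detach_policy(PolicyId="p-FullAWSAccess", TargetId=developers_ou)

# Option D (workable, but less clean): an explicit deny appended to the SCP
# that blocks every action outside the three allowed services.
deny_everything_else = {
    "Effect": "Deny",
    "NotAction": ["ec2:*", "s3:*", "dynamodb:*"],
    "Resource": "*",
}
print(json.dumps(deny_everything_else, indent=2))
```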
Question.33 A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases to traffic. The app has not been able to keep up with the traffic. A solutions architect needs to implement a solution so that the app can handle the new and varying load. Which solution will meet these requirements with the LEAST operational overhead? (A) Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API. (B) Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress. (C) Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record. (D) Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
Answer: A
Explanation:
The best solution is A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
Here’s why:
- Scalability: Lambda functions scale automatically and independently based on the number of incoming requests. This provides the best elasticity for handling sudden and large traffic spikes. API Gateway can also handle a massive amount of concurrent API calls.
- Operational Overhead: Lambda and API Gateway are serverless services. This significantly reduces operational overhead, as you don’t need to manage servers, operating systems, or scaling infrastructure. The overhead of managing EC2 instances, EKS clusters, or even Auto Scaling groups is considerably higher.
- Cost Efficiency: You only pay for the compute time consumed by Lambda functions when they are running. API Gateway also has a pay-per-use pricing model. This can be more cost-effective than running dedicated EC2 instances, especially during periods of low traffic.
- Integration: API Gateway integrates seamlessly with Lambda functions, making it easy to create and manage RESTful APIs. It also provides features like request validation, authentication, and authorization.
- Route 53 integration: Updating the Route 53 record to point to the API Gateway endpoint is a straightforward process.
Here’s why the other options are less suitable:
- B (EKS): EKS is a powerful container orchestration platform, but it adds significant operational overhead. Managing a Kubernetes cluster requires expertise and ongoing effort, so it is far from the least operationally heavy way to meet the scalability requirement.
- C (Auto Scaling group): While Auto Scaling helps with scaling, it still requires you to manage EC2 instances, with the associated overhead of patching, security updates, and instance sizing. The custom Lambda function needed to update the Route 53 record on every scaling event adds further complexity.
- D (ALB): An ALB would improve availability and load distribution, but the API would still run on a fixed fleet of EC2 instances that require ongoing management and, without Auto Scaling, cannot absorb sudden traffic spikes.
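To give a sense of how small each backend unit becomes after such a refactor, here is a minimal, hypothetical Lambda handler written for the API Gateway Lambda proxy integration; the /items route is invented purely for illustration.

```python
import json


def lambda_handler(event, context):
    """Handle one route of the REST API behind API Gateway (proxy integration)."""
    method = event.get("httpMethod")
    path = event.get("path")

    if method == "GET" and path == "/items":
        status, body = 200, {"items": []}  # placeholder: read from a data store
    else:
        status, body = 404, {"message": "Not found"}

    # API Gateway expects this response shape from a proxy-integrated Lambda.
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```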
Authoritative Links:
Amazon Route 53: https://aws.amazon.com/route53/
AWS Lambda: https://aws.amazon.com/lambda/
Amazon API Gateway: https://aws.amazon.com/api-gateway/
Question.34 A company has created an OU in AWS Organizations for each of its engineering teams. Each OU owns multiple AWS accounts. The organization has hundreds of AWS accounts. A solutions architect must design a solution so that each OU can view a breakdown of usage costs across its AWS accounts. Which solution meets these requirements? (A) Create an AWS Cost and Usage Report (CUR) for each OU by using AWS Resource Access Manager. Allow each team to visualize the CUR through an Amazon QuickSight dashboard. (B) Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard. (C) Create an AWS Cost and Usage Report (CUR) in each AWS Organizations member account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard. (D) Create an AWS Cost and Usage Report (CUR) by using AWS Systems Manager. Allow each team to visualize the CUR through Systems Manager OpsCenter dashboards.
Answer: B
Explanation:
The correct answer is B because it leverages the central management capabilities of AWS Organizations for cost reporting. Creating a single Cost and Usage Report (CUR) from the management account allows for consolidated billing and cost tracking across the entire organization, including all OUs and their associated accounts. This centralized approach simplifies administration and avoids the complexity of managing multiple CURs, as would be the case in options A and C. The CUR generated from the management account contains detailed cost information, which can then be visualized using Amazon QuickSight dashboards. These dashboards can be customized to provide each OU with a breakdown of its usage costs, meeting the requirement of the question.
Option A is incorrect because AWS Resource Access Manager (RAM) is designed for sharing AWS resources between accounts, not creating or managing cost reports. While RAM can share access to the CUR data, it does not solve the problem of generating a consolidated report for each OU. Option C is inefficient and difficult to manage because it would require creating and maintaining a CUR in each member account. This approach would create significant overhead and complexity. Option D is incorrect because AWS Systems Manager is designed for operations management and automation, not cost reporting or visualization. Systems Manager OpsCenter dashboards are not designed for cost analysis. Therefore, option B offers the most efficient and scalable solution to the problem by leveraging the centralized cost management capabilities of AWS Organizations and the visualization capabilities of Amazon QuickSight.
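As one hedged example of the per-account breakdown such a dashboard would surface, the organization-wide CUR can be queried with Amazon Athena before visualization in QuickSight. This assumes the CUR has been delivered to Amazon S3 with Athena integration enabled; the database, table, bucket, and partition values below are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Monthly unblended cost per member account from the organization-wide CUR.
# Column names follow the CUR/Athena integration defaults; database, table,
# bucket, and partition values are placeholders.
query = """
SELECT line_item_usage_account_id,
       SUM(line_item_unblended_cost) AS monthly_cost
FROM cur_database.cur_table
WHERE year = '2024' AND month = '6'
GROUP BY line_item_usage_account_id
ORDER BY monthly_cost DESC
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "cur_database"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```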
Further Reading:
Amazon QuickSight: https://aws.amazon.com/quicksight/
AWS Cost and Usage Reports: https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html
AWS Organizations: https://aws.amazon.com/organizations/
Question.35 A company is storing data on premises on a Windows file server. The company produces 5 GB of new data daily. The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud. The company already has established an AWS Direct Connect connection between the on-premises network and AWS. Which data migration strategy should the company use? (A) Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server, and point the existing file share to the new file gateway. (B) Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx. (C) Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS). (D) Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).
Answer: B
Explanation:
The most suitable data migration strategy is option B: using AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx. Here’s why:
- DataSync’s suitability for the task: AWS DataSync is designed specifically for efficient and automated data transfer between on-premises storage and AWS storage services. It can handle the daily replication requirement effectively. (https://aws.amazon.com/datasync/)
- FSx’s appropriateness: Amazon FSx provides fully managed file systems that are compatible with Windows file servers, making it a natural fit for the on-premises Windows workload. (https://aws.amazon.com/fsx/)
- Direct Connect Integration: The existing AWS Direct Connect connection provides the necessary bandwidth and dedicated network connectivity for reliable data transfer. DataSync is optimized to use the Direct Connect connection.
- Incremental Replication: DataSync supports incremental replication, meaning it only transfers changed data, which is ideal for the daily 5GB of new data and minimizes bandwidth consumption.
- File Gateway Limitation: Option A, using File Gateway, is more suitable when you want to replace your existing file server entirely. The scenario requires keeping the on-premises server and replicating data to AWS.
- Data Pipeline is Not Optimal: Option C, using AWS Data Pipeline, is designed for more complex data workflows and transformations, which are not needed in this simple file replication scenario; it is overkill for straightforward data migration. AWS Data Pipeline is also in maintenance mode, with AWS recommending services such as AWS Glue for new workloads.
- EFS Incompatibility: Option D uses DataSync with Amazon EFS. While technically possible, EFS is an NFS file system and does not support the SMB protocol that the on-premises Windows file server uses, so it is a poor fit for the Windows-based workload. FSx for Windows File Server is the better match.
Therefore, using AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx provides an efficient, automated, and compatible solution for the company’s data migration needs, leveraging the existing Direct Connect connection.
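A hedged boto3 sketch of the DataSync pieces described above, assuming a DataSync agent has already been activated on premises and reaches AWS over the Direct Connect link; every ARN, hostname, and credential below is a placeholder.

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises Windows file server, reached over SMB through an
# already-activated DataSync agent (all identifiers are placeholders).
source = datasync.create_location_smb(
    ServerHostname="fileserver.corp.example.com",
    Subdirectory="/share",
    User="svc-datasync",
    Password="EXAMPLE-PASSWORD",
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"],
)

# Destination: the Amazon FSx for Windows File Server file system.
destination = datasync.create_location_fsx_windows(
    FsxFilesystemArn="arn:aws:fsx:us-east-1:111122223333:file-system/fs-EXAMPLE",
    SecurityGroupArns=["arn:aws:ec2:us-east-1:111122223333:security-group/sg-EXAMPLE"],
    User="svc-datasync",
    Password="EXAMPLE-PASSWORD",
)

# Daily task; DataSync copies only the files that changed since the last run.
datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="daily-fileserver-to-fsx",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
)
```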