Question.1 A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations. Which solution meets these requirements with the LEAST amount of operational overhead? (A) Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy. (B) Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy. (C) Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly. (D) Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Answer is (A) Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
This solution has the least operational overhead because it does not require any additional infrastructure or configuration. AWS Organizations already tracks the organization ID of each account, so you can simply add the aws:PrincipalOrgID condition key to the S3 bucket policy and reference the organization ID. This ensures that only principals from accounts within the organization can access the S3 bucket.
Condition keys: AWS provides condition keys that you can query to provide more granular control over certain actions. The following condition keys are especially useful with AWS Organizations:
aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to listing all the account IDs for all AWS accounts in an organization. Instead of listing all of the accounts that are members of an organization, you can specify the organization ID in the Condition element.
aws:PrincipalOrgPaths – Use this condition key to match members of a specific organization root, an OU, or its children. The aws:PrincipalOrgPaths condition key returns true when the principal (root user, IAM user, or role) making the request is in the specified organization path. A path is a text representation of the structure of an AWS Organizations entity.
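As a minimal sketch of what the chosen answer looks like in practice, the following Python (boto3) snippet applies a bucket policy that denies access to any principal outside a given organization. The bucket name and organization ID below are placeholders, not values from the question.

```python
import json
import boto3

s3 = boto3.client("s3")

# Placeholder values; replace with your bucket name and organization ID.
BUCKET = "example-project-reports"
ORG_ID = "o-exampleorgid"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyOrgPrincipals",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # Deny any request whose principal is not in the organization.
            "Condition": {"StringNotEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```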
Question.2 A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management. What should a solutions architect do to accomplish this goal? (A) Use AWS Secrets Manager. Turn on automatic rotation. (B) Use AWS Systems Manager Parameter Store. Turn on automatic rotation. (C) Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket. (D) Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.
Answer is (A) Use AWS Secrets Manager. Turn on automatic rotation.
AWS Secrets Manager is a service that provides a secure and convenient way to store, manage, and rotate secrets. Secrets Manager can be used to store database credentials, SSH keys, and other sensitive information.
AWS Secrets Manager also supports automatic rotation, which helps minimize the operational overhead of credential management. When automatic rotation is enabled, Secrets Manager automatically generates new secrets and rotates them on a regular schedule.
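As an illustrative boto3 sketch (assuming a rotation Lambda function already exists; its ARN and the secret name below are placeholders), the Aurora credentials could be stored and rotation enabled like this, with the application fetching them at runtime instead of reading a local file:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Store the Aurora credentials as a secret (placeholder values).
secret = secrets.create_secret(
    Name="prod/aurora/app-credentials",
    SecretString=json.dumps({"username": "app_user", "password": "initial-password"}),
)

# Turn on automatic rotation every 30 days.
# The rotation Lambda ARN is a placeholder for a function that knows
# how to set the new password in the Aurora database.
secrets.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-aurora-secret",
    RotationRules={"AutomaticallyAfterDays": 30},
)

# At runtime, the application retrieves the current credentials.
current = json.loads(
    secrets.get_secret_value(SecretId="prod/aurora/app-credentials")["SecretString"]
)
```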
Reference:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-db.html
Question.3 A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud. Which solution will meet these requirements? (A) Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC. (B) Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering. (C) Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC. (D) Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.
Answer is (C) Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
AWS Network Firewall is a managed network firewall service that allows you to define firewall rules to filter and inspect network traffic. You can create rules to define the traffic that should be allowed or blocked based on various criteria such as source/destination IP addresses, protocols, ports, and more. With AWS Network Firewall, you can implement traffic inspection and filtering capabilities within the production VPC, helping to protect the network traffic.
With AWS Network Firewall, you can create custom rule groups to define specific operations for traffic inspection and filtering.
It can perform deep packet inspection and filtering at the network level to enforce security policies, block malicious traffic, and allow or deny traffic based on defined rules.
By integrating AWS Network Firewall with the production VPC, you can achieve similar functionalities as the on-premises inspection server, performing traffic flow inspection and filtering.
In the context of the given scenario, AWS Network Firewall can be a suitable choice if the company wants to implement traffic inspection and filtering directly within the VPC without the need for traffic mirroring. It provides an additional layer of security by enforcing specific rules for traffic filtering, which can help protect the production environment.
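To make this concrete, here is a minimal boto3 sketch that creates a stateful rule group from a Suricata-compatible rule and references it from a firewall policy. The names, CIDR ranges, capacity, and the rule itself are illustrative placeholders; the firewall resource itself would still need to be created in the production VPC's firewall subnets.

```python
import boto3

nfw = boto3.client("network-firewall")

# A Suricata-compatible stateful rule that drops outbound TCP traffic
# to a blocked address range (placeholder rule for illustration).
suricata_rules = (
    'drop tcp 10.0.0.0/16 any -> 198.51.100.0/24 any '
    '(msg:"Block suspicious range"; sid:100001;)'
)

rule_group = nfw.create_rule_group(
    RuleGroupName="production-vpc-inspection",
    Type="STATEFUL",
    Capacity=100,
    Rules=suricata_rules,
)

# Reference the rule group from a firewall policy that forwards
# traffic to the stateful engine for inspection.
nfw.create_firewall_policy(
    FirewallPolicyName="production-vpc-policy",
    FirewallPolicy={
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        "StatefulRuleGroupReferences": [
            {"ResourceArn": rule_group["RuleGroupResponse"]["RuleGroupArn"]}
        ],
    },
)
```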
Question.4 A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company’s management team should have full access to all the visualizations. The rest of the company should have only limited access. Which solution will meet these requirements? (A) Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles. (B) Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups. (C) Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports. (D) Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
Answer is (B) Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
Keywords:
– Data lake on AWS.
– Consists of data in Amazon S3 and Amazon RDS for PostgreSQL.
– The company needs a reporting solution that provides data VISUALIZATION and includes ALL the data sources within the data lake.
Option B involves using Amazon QuickSight, which is a business intelligence tool provided by AWS for data visualization and reporting. With this option, you can connect all the data sources within the data lake, including Amazon S3 and Amazon RDS for PostgreSQL. You can create datasets within QuickSight that pull data from these sources.
The solution allows you to publish dashboards in Amazon QuickSight, which provides the required data visualization capabilities. To control access, you share the dashboards with QuickSight users and groups, granting full access only to the company’s management team and limited access to the rest of the company.
A – Incorrect: Amazon QuickSight dashboards are shared with QuickSight users (Standard edition) and groups (Enterprise edition), which exist within QuickSight itself; dashboards cannot be shared directly with IAM roles.
C – Incorrect: This option does not provide data visualization and does not address how to include the data in Amazon RDS for PostgreSQL.
D – Incorrect: This option does not provide data visualization; publishing static reports to Amazon S3 does not meet the dashboard requirement.
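A minimal boto3 sketch of the sharing step might look like the following, assuming the account ID, dashboard ID, and QuickSight group names are placeholders: the management group receives full (co-owner) permissions, while a general staff group receives read-only access.

```python
import boto3

qs = boto3.client("quicksight")

ACCOUNT_ID = "123456789012"          # placeholder
DASHBOARD_ID = "data-lake-reports"   # placeholder
MGMT_GROUP = f"arn:aws:quicksight:us-east-1:{ACCOUNT_ID}:group/default/management"
ALL_STAFF_GROUP = f"arn:aws:quicksight:us-east-1:{ACCOUNT_ID}:group/default/all-staff"

READ_ONLY_ACTIONS = [
    "quicksight:DescribeDashboard",
    "quicksight:ListDashboardVersions",
    "quicksight:QueryDashboard",
]
OWNER_ACTIONS = READ_ONLY_ACTIONS + [
    "quicksight:UpdateDashboard",
    "quicksight:DeleteDashboard",
    "quicksight:DescribeDashboardPermissions",
    "quicksight:UpdateDashboardPermissions",
    "quicksight:UpdateDashboardPublishedVersion",
]

qs.update_dashboard_permissions(
    AwsAccountId=ACCOUNT_ID,
    DashboardId=DASHBOARD_ID,
    GrantPermissions=[
        {"Principal": MGMT_GROUP, "Actions": OWNER_ACTIONS},           # full access
        {"Principal": ALL_STAFF_GROUP, "Actions": READ_ONLY_ACTIONS},  # limited access
    ],
)
```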
Reference:
https://docs.aws.amazon.com/quicksight/latest/user/share-a-dashboard-grant-access-users.html
Question.5 A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket. What should the solutions architect do to meet this requirement? (A) Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances. (B) Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances. (C) Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances. (D) Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.
Answer is (A) Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
Option A is the correct approach because IAM roles are designed to provide temporary credentials to AWS resources such as EC2 instances. By creating an IAM role, you can define the necessary permissions and policies that allow the EC2 instances to access the S3 bucket securely. Attaching the IAM role to the EC2 instances will automatically provide the necessary credentials to access the S3 bucket without the need for explicit access keys or secrets.
Option B is not recommended in this case because IAM policies alone cannot be directly attached to EC2 instances. Policies are usually attached to IAM users, groups, or roles.
Option C is not the most appropriate choice because IAM groups are used to manage collections of IAM users and their permissions, rather than granting access to specific resources like S3 buckets.
Option D is not the optimal solution because IAM users are intended for individual people and long-lived credentials; a user account cannot be attached to an EC2 instance and is not the recommended way to grant applications on EC2 access to AWS resources.
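As a rough boto3 sketch of option A (the bucket name, role name, and instance ID are placeholders), the role trusts the EC2 service, carries an inline S3 policy, and is attached to the instance through an instance profile:

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

ROLE_NAME = "app-s3-access-role"        # placeholder
BUCKET = "example-document-bucket"      # placeholder
INSTANCE_ID = "i-0123456789abcdef0"     # placeholder

# The role trusts the EC2 service so instances can assume it.
iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Inline policy granting access to the document bucket.
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="s3-document-access",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        }],
    }),
)

# EC2 consumes the role through an instance profile.
iam.create_instance_profile(InstanceProfileName=ROLE_NAME)
iam.add_role_to_instance_profile(InstanceProfileName=ROLE_NAME, RoleName=ROLE_NAME)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": ROLE_NAME},
    InstanceId=INSTANCE_ID,
)
```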