The AWS Solutions Architect Professional exam is intended for those performing the role of a solutions architect. The AWS CSAP exam recognizes and validates a candidate's advanced technical knowledge and expertise in designing distributed systems and applications on the AWS platform.
The AWS CSAP exam validates the candidate's knowledge and skills in:
- Designing and deploying scalable, highly available, reliable, and robust applications on the AWS platform
- Selecting suitable services for designing and deploying applications as per requirements
- Migrating multi-tier, complex applications to the AWS platform
- Implementing solutions for cost control
So, the AWS Certified Solutions Architect Professional certification is a credential that demonstrates your skills in designing and deploying AWS systems and applications.
Practice with Free AWS Solutions Architect Professional Exam Questions
While preparing for the AWS CSAP exam, it is recommended to go through various resources, including AWS whitepapers, documentation, books, and online training. But nothing beats practicing with questions in the same format as those on the real exam. That's why we've prepared this blog, where you will get 10 free AWS Solutions Architect Professional Exam Questions. They will help you understand the pattern of the AWS CSAP exam.
These practice questions have been prepared by our team of certified professionals and subject matter experts. These free AWS Solutions Architect Professional Exam Questions come with a detailed explanation for the correct as well as the incorrect options, which will clear your doubts about why a particular option is correct or incorrect. What are you waiting for? Just go through these AWS CSAP exam questions and get ready for the real exam.
1. Two departments A and B have been added into a consolidated billing organization. Department A has 5 reserved RDS instances with DB Engine as MySQL. During a particular hour, department A used three DB Instances and department B used two RDS instances, for a total of 5 DB Instances on the consolidated bill. How should the RDS instances in department B be configured so that all five instances are charged as Reserved DB Instances?
A. Department B should launch DB instances in the same availability zone as a Reserved Instance in department A.
B. The DB engine in Department B should be MySQL.
C. The DB Instance Class should be the same in both departments such as m1.large.
D. The deployment type such as Multi-AZ should be the same in both department A and department B.
E. All of the above are needed.
Correct Answer: E
In order to receive the cost benefit from Reserved DB Instances, all the attributes of the DB Instances (DB Instance class, DB engine, license model, and deployment type) in the other account have to match the attributes of the Reserved DB Instances.
Options A to D are incorrect: Refer to the reasoning in Option E.
Option E is CORRECT: Because all of the other options are needed. The reference is in https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidatedbilling-other.html.
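To see how the attributes line up in practice, you can list the Reserved DB Instances and compare them with the running instances. This is a quick sketch using the AWS CLI; the output in your account will of course differ:

```shell
# List each Reserved DB Instance with the attributes that must match
# the running DB instances for the discount to apply.
aws rds describe-reserved-db-instances \
  --query 'ReservedDBInstances[*].[ReservedDBInstanceId,DBInstanceClass,ProductDescription,MultiAZ,State]' \
  --output table

# Compare against the running instances' class, engine, and Multi-AZ setting.
aws rds describe-db-instances \
  --query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceClass,Engine,MultiAZ]' \
  --output table
```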
2. As an AWS specialist, you are in charge of configuring consolidated billing in a multinational IT company. In the linked accounts, users have set up AWS resources using a tag called Department, which is used to differentiate resources. There are some other user-created tags such as Phase, CICD, Trial, etc. In the cost allocation report, you only want to filter it using the tag of Department and other tags are excluded in the report. How should you implement this so that the cost report is properly set up?
A. In the Cost Allocation Tags console of master account, select the Department tag in the User-Defined Cost Allocation Tags area and activate it. The tag starts appearing on the cost allocation report after it is applied but does not appear on earlier reports.
B. In the Cost Explorer console of master account, deactivate all the other tags except the Department tag in the User-Defined Cost Allocation Tags area. By default, all user-defined tags are activated.
C. In the Cost Explorer console of master account, select the Department tag in the User-Defined Cost Allocation Tags area and activate it. Make sure that other tags are inactive at the same time.
D. In the Cost Allocation Tags console of master account and linked accounts, select the Department tag in the User-Defined Cost Allocation Tags area and activate it. The tag starts appearing on the cost allocation report after it is applied and also appears on earlier reports after 1 hour.
Correct Answer: A
User-Defined Cost Allocation Tags can be selected and activated in the Cost Allocation Tags console.
Option A is CORRECT: Because using this method, only the user-defined tag Department will appear in the cost allocation report.
Option B is incorrect: Because it should be the Cost Allocation Tags console rather than the Cost Explorer console. Moreover, by default, all user-defined tags are deactivated.
Option C is incorrect: Similar to Option B.
Option D is incorrect: Because only the master account can activate or deactivate the user-defined tags. Besides, the tag does not appear on earlier reports before it is activated.
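The same activation the console performs can also be scripted. Assuming the Department tag has already been applied to resources, a sketch using the Cost Explorer CLI, run from the master (management) account:

```shell
# Activate only the Department tag for cost allocation reporting.
# User-defined tags are inactive by default, so the other tags
# (Phase, CICD, Trial, ...) simply stay out of the report.
aws ce update-cost-allocation-tags-status \
  --cost-allocation-tags-status TagKey=Department,Status=Active

# Confirm the activation status of the user-defined tags.
aws ce list-cost-allocation-tags --type UserDefined
```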
Preparing for an AWS Architect Interview? Check out these top AWS Solutions Architect Interview Questions and get yourself ready to crack the interview.
3. You are an AWS solutions architect and are in charge of the maintenance of an RDS on VMware database that is deployed on-premises. You have created a read replica in the ap-south-1 region to share some read traffic. The system has run smoothly for a while; then the company decides to migrate all of its products to AWS, including the on-premises RDS instance. Other than that, the instance needs another replica in the ap-southeast-1 region. What actions should you take to fulfill this requirement?
A. Use Data Migration Service to migrate the on-premises database to an RDS instance in AWS. Create a read replica in the ap-southeast-1 region afterwards.
B. In the RDS console, click “migrating the instance” to create a new RDS instance. Then create a new read replica in the ap-southeast-1 region.
C. Create another read replica in the ap-southeast-1 region to share the read traffic for the RDS instance on VMware. Promote the RDS read replica in ap-south-1 to be the new RDS instance so that the original on-premises database is migrated to AWS with a replica in ap-southeast-1.
D. Promote the RDS read replica in ap-south-1 to be the new RDS instance. Create another read replica in ap-southeast-1 for this new instance.
Correct Answer: D
Amazon RDS on VMware database instances can be easily migrated to Amazon RDS database instances in AWS with no impact on uptime, giving you the ability to rapidly deploy databases in all AWS Regions without interrupting the customer experience.
Option A is incorrect: Because Data Migration Service is not needed. You just need to promote the read-replica to be the new RDS instance.
Option B is incorrect: Same reason as Option A. Also, “migrating the instance” is not an actual RDS console operation.
Option C is incorrect: Because the read replica in ap-southeast-1 would still sync with the original on-premises RDS instance. A new read replica should be created from the promoted instance in ap-south-1.
Option D is CORRECT: Because the database can be easily migrated by promoting the read replica in ap-south-1.
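The promotion and the new cross-region replica from Option D can be sketched with the AWS CLI (the instance identifiers and account ID below are placeholders):

```shell
# Step 1: promote the existing read replica in ap-south-1
# to a standalone RDS instance.
aws rds promote-read-replica \
  --db-instance-identifier mydb-replica \
  --region ap-south-1

# Step 2: once the promoted instance is available, create a new
# cross-region read replica in ap-southeast-1 from it.
aws rds create-db-instance-read-replica \
  --db-instance-identifier mydb-replica-sgp \
  --source-db-instance-identifier arn:aws:rds:ap-south-1:123456789012:db:mydb-replica \
  --region ap-southeast-1
```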
4. There are two departments in a company. Both departments own several EC2 instances. Department A has a requirement to back up EBS volumes every 12 hours, and the administrator has set up a Data Lifecycle Policy in DLM for their instances. Department B requires a similar Data Lifecycle Policy for their instances as well, but they prefer the schedule to run every 24 hours. The administrator has noticed that two EBS volumes are owned by both departments at the same time. How can the administrator set up the Data Lifecycle Policy for Department B?
A. Add a tag to the EBS volumes owned by Department B. Set up a Data Lifecycle Policy based on the tag. For the EBS volumes owned by both departments, snapshots will be taken every 12 hours and every 24 hours.
B. Add a tag to the EBS volumes owned by Department B. Set up a Data Lifecycle Policy based on the tag. For the EBS volumes owned by both departments, snapshots will not be taken, as there is a schedule conflict between the two policies. However, other EBS volumes are not affected.
C. Add a tag to the EBS volumes owned by Department B. Set up a Data Lifecycle Policy based on the tag. For the EBS volumes owned by both departments, snapshots will be taken every 12 hours, as the 12-hour schedule takes priority.
D. Add a tag to the EBS volumes owned by Department B, except the EBS volumes owned by both departments. Set up a Data Lifecycle Policy based on this tag. For the EBS volumes owned by both departments, snapshots are taken every 12 hours due to the policy of Department A.
Correct Answer: A
Multiple policies can be created to take snapshots of an EBS volume, as long as each policy targets a unique tag on the volume. In this case, the EBS volumes owned by both departments should have two tags: tag A is the target for policy A, which creates a snapshot every 12 hours for Department A, and tag B is the target for policy B, which creates a snapshot every 24 hours for Department B. Amazon DLM creates snapshots according to the schedules of both policies.
Option A is CORRECT: Because when an EBS volume has two tags, multiple policies can run at the same time.
Option B is incorrect: Because there is no schedule conflict for this scenario.
Option C is incorrect: Because the 12-hour schedule does not take priority over the 24-hour one. Both schedules run in parallel.
Option D is incorrect: Because the EBS volumes owned by two departments can add another tag and be included in the policy for Department B.
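As a sketch, the policy for Department B could be created like this with the AWS CLI (the tag key and value, role ARN, and schedule details are illustrative):

```shell
# Create a DLM lifecycle policy that targets volumes tagged Dept=B
# and snapshots them every 24 hours, keeping the last 7 snapshots.
aws dlm create-lifecycle-policy \
  --description "Department B - snapshot every 24 hours" \
  --state ENABLED \
  --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
  --policy-details '{
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Dept", "Value": "B"}],
    "Schedules": [{
      "Name": "Every24Hours",
      "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
      "RetainRule": {"Count": 7}
    }]
  }'
```

A volume carrying both departments' tags is then matched by both policies, which is exactly why Option A works.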
Preparing for AWS Solutions Architect Associate exam? Go through these Free AWS Certified Solutions Architect Exam Questions and get ready for the real exam.
5. You work at an AWS consulting company. A customer plans to migrate all of its products to AWS, and you are required to provide a detailed plan. The company has good experience with Chef and prefers to continue using it. They want their EC2 instances to use a blue/green deployment method. Moreover, their infrastructure setup, such as the network layer, should be easily reproducible with scripts. Automatic scaling is also required for EC2. Which of the options below should you choose for the migration plan? Choose 3.
A. As Blue/Green deployment is not supported in OpsWorks, use Elastic Beanstalk Swap Url feature to deploy the application. Swap CNAMEs of the two environments to redirect traffic to the new version instantly.
B. Use Chef/Recipes in OpsWorks to add/deploy/edit the app in EC2 instances. The Blue/Green deployment in OpsWorks would require the Route 53 weighted routing feature.
C. In OpsWorks, set up a set of load-based EC2 instances, which AWS OpsWorks Stacks starts and stops to handle unpredictable traffic variations.
D. Create an autoscaling group with a suitable configuration based on CPU usage. Add the autoscaling group in OpsWorks stack so that its EC2 instances can scale up and down according to the CPU level automatically.
E. Edit CloudFormation templates and create stacks for the infrastructure. Add a dedicated CloudFormation stack for the OpsWorks deployment and use the nested infrastructure stacks.
F. Create CloudFormation stacks for infrastructure. For the OpsWorks configurations, use AWS CLI commands such as “AWS Opsworks create-app”.
Correct Answer: B, C, E
In this scenario, as Chef is needed, OpsWorks should be the first choice unless there are requirements it cannot meet.
OpsWorks has a key feature to scale instances based on time or load.
In terms of infrastructure, CloudFormation stacks should be used. Besides, CloudFormation supports OpsWorks, which means an OpsWorks stack can work together with other nested CloudFormation stacks. In this way, the whole deployment is implemented as code.
Nested stacks are stacks created as part of other stacks. A nested stack is created within another stack by using the “AWS::CloudFormation::Stack” resource.
Option A is incorrect: Because OpsWorks supports Blue/Green Deployment. It needs the involvement of Route53. Refer to https://d1.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf.
Option B is CORRECT: Because OpsWorks can meet the need of Blue/Green Deployment and also use Chef which the customer prefers to use.
Option C is CORRECT: Because AWS OpsWorks supports scaling based on load, including:
- CPU: the average CPU consumption, such as 80%
- Memory: the average memory consumption, such as 60%
- Load: the average computational work the system performs in one minute
Option D is incorrect: Because it is not straightforward to add an Auto Scaling group to OpsWorks, although it may work. Refer to https://aws.amazon.com/blogs/devops/auto-scaling-aws-opsworks-instances/ on how to do that. The native OpsWorks scaling feature in Option C should be chosen instead, as it already meets the customer's need.
Option E is CORRECT: Because nested stacks are suitable for infrastructure and OpsWorks to work together.
Option F is incorrect: Because using AWS CLI commands to configure OpsWorks is not an automated, repeatable method. An OpsWorks stack defined in CloudFormation should be considered instead.
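A minimal sketch of the nested-stack idea from Option E: a parent template that nests the network stack and a dedicated OpsWorks stack (the S3 bucket name and child template URLs are placeholders, and the child templates are assumed to exist):

```shell
# Parent template: each child is an AWS::CloudFormation::Stack resource.
cat > parent-template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/network.yaml
  OpsWorksStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/opsworks.yaml
      Parameters:
        # Assumes network.yaml declares a VpcId output.
        VpcId: !GetAtt NetworkStack.Outputs.VpcId
EOF

# Create (or re-run) the whole environment as one stack.
aws cloudformation create-stack \
  --stack-name app-parent \
  --template-body file://parent-template.yaml \
  --capabilities CAPABILITY_IAM
```

Re-running the same template reproduces the infrastructure, which satisfies the customer's requirement that the setup be script-driven.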