
AWS Certified DevOps Engineer - Professional Certification - Part 5

Mary Smith

Mon, 17 Mar 2025


1. After a daily scrum with your development team, you've agreed that Blue/Green-style deployments would benefit the team. Which technique should you use to deliver this new requirement? Please select:

A) Using an AWS CloudFormation template, re-deploy your application behind a load balancer. Launch a new CloudFormation stack during each deployment, update your load balancer to send half your traffic to the new stack while you test, and after verification update the load balancer to send 100% of traffic to the new stack, then terminate the old stack.
B) Re-deploy your application on AWS Elastic Beanstalk, and take advantage of Elastic Beanstalk deployment types.
C) Using an AWS OpsWorks stack, re-deploy your application behind an Elastic Load Balancing load balancer and take advantage of OpsWorks stack versioning. During deployment, create a new version of your application, tell OpsWorks to launch the new version behind your load balancer, and when the new version is launched, terminate the old OpsWorks stack.
D) Re-deploy your application behind a load balancer that uses Auto Scaling groups, create a new identical Auto Scaling group, and associate it to the load balancer. During deployment, set the desired number of instances on the old Auto Scaling group to zero, and when all instances have terminated, delete the old Auto Scaling group.



2. You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. Which two approaches will meet these requirements? Choose two answers from the options given below.

A) On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a log transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to process the data in Amazon Glacier and run reports every hour.
B) Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define Metric Filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.
C) Install an Amazon CloudWatch Logs agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics.
D) On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a log transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour.



3. After reviewing the last quarter's monthly bills, management has noticed an increase in the overall bill from Amazon. After researching this increase in cost, you discovered that one of your new services is making a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application's bucket. Your boss has asked you to come up with a new cost-effective way to reduce the number of these GET Bucket API calls. What process should you use to help mitigate the cost?

A) Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
B) Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3. Any time a new object is uploaded, update the application's internal Amazon S3 object metadata cache from DynamoDB.
C) Update your Amazon S3 bucket's lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application's bucket.
D) Upload all files to an ElastiCache file cache server. Update your application to read all file metadata from the ElastiCache file cache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.



4. You are using CloudFormation to launch an EC2 instance and then configure an application after the instance is launched. You need the stack creation of the ELB and Auto Scaling group to wait until the EC2 instance is launched and configured properly. How do you do this? Please select:

A) It is not possible for the stack creation to wait until one service is created and launched.
B) Use a WaitCondition resource to hold the creation of the other dependent resources.
C) Use a CreationPolicy to wait for the creation of the other dependent resources.
D) Use a Hold Condition resource to hold the creation of the other dependent resources.



5. The operations team and the development team want a single place to view both operating system and application logs. How should you implement this using AWS services? Choose two from the options below (select 2 answers).

A) Using AWS CloudFormation, create a CloudWatch Logs log group and send the operating system and application logs of interest using the CloudWatch Logs agent.
B) Using configuration management, set up remote logging to send events to Amazon Kinesis and insert them into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools.
C) Using AWS CloudFormation, merge the application logs with the operating system logs, and use IAM roles to allow both teams to have access to view console output from Amazon EC2.
D) Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail.



1. Right Answer: D
Explanation:
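Creating a second, identical Auto Scaling group behind the same load balancer gives you a parallel ("green") environment that can take traffic alongside the existing ("blue") group; once the new instances are healthy, you drain and delete the old group. Below is a minimal boto3 sketch of that swap, assuming placeholder group names, a launch template, a target group ARN, and a subnet list; it illustrates the approach rather than a production deployment script.

```python
import boto3

autoscaling = boto3.client("autoscaling")

TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/abc123"  # placeholder
SUBNETS = "subnet-aaaa,subnet-bbbb"  # placeholder subnet IDs

# 1. Launch the "green" group as an identical copy of the "blue" group,
#    registered with the same load balancer target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-green",
    LaunchTemplate={"LaunchTemplateName": "app-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    TargetGroupARNs=[TARGET_GROUP_ARN],
    VPCZoneIdentifier=SUBNETS,
)

# 2. After the green instances pass health checks, scale the "blue" group to zero.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-blue", MinSize=0, MaxSize=0, DesiredCapacity=0
)

# 3. Once all blue instances have terminated, remove the old group.
autoscaling.delete_auto_scaling_group(AutoScalingGroupName="app-blue")
```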

2. Right Answer: C, D
Explanation:
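Streaming log lines off each instance with the CloudWatch Logs agent at bootstrap keeps data safe regardless of when Auto Scaling terminates an instance, and shipping rotated logs to Amazon S3 with a shutdown hook achieves the same durability, with AWS Data Pipeline loading the S3 data into Amazon Redshift for the hourly reports. The sketch below illustrates the S3 log-shipping script from option D; the bucket name and log directory are placeholders, and the script is assumed to run hourly (for example from cron) and again from a shutdown hook.

```python
import boto3
import gzip
import socket
import time
from pathlib import Path

s3 = boto3.client("s3")
BUCKET = "example-access-logs"      # placeholder bucket
LOG_DIR = Path("/var/log/nginx")    # placeholder log location

def ship_logs():
    """Compress and copy rotated web logs to S3 so nothing is lost
    when Auto Scaling or a user stops/terminates the instance."""
    host = socket.gethostname()
    for log_file in LOG_DIR.glob("access.log*"):
        key = f"raw/{host}/{int(time.time())}-{log_file.name}.gz"
        s3.put_object(Bucket=BUCKET, Key=key,
                      Body=gzip.compress(log_file.read_bytes()))

if __name__ == "__main__":
    # Invoked hourly by the scheduler and once more from the OS shutdown
    # procedure so the final partial log is flushed before termination.
    ship_logs()
```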

3. Right Answer: A
Explanation:
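An S3 event notification published to SNS lets the metadata cache be updated incrementally as objects arrive, so the application never has to enumerate the bucket with GET Bucket (List Objects) calls. A minimal boto3 sketch of that wiring, assuming a placeholder bucket, topic ARN, and DynamoDB table name:

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

BUCKET = "example-app-bucket"                                   # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:new-objects"    # placeholder
table = dynamodb.Table("object-metadata")                       # placeholder table

# Publish an SNS notification for every new object instead of listing the bucket.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": TOPIC_ARN, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)

def handle_s3_event(record):
    """Called by the SNS subscriber for each entry in the S3 event's
    'Records' list; caches the object's metadata in DynamoDB so the
    application reads from the table rather than issuing GET Bucket calls."""
    obj = record["s3"]["object"]
    table.put_item(Item={
        "key": obj["key"],
        "size": obj["size"],
        "etag": obj["eTag"],
    })
```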

4. Right Answer: C
Explanation:
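A CreationPolicy attribute on the EC2 instance makes CloudFormation wait for a cfn-signal from the instance's bootstrap script before marking the resource CREATE_COMPLETE, so resources that depend on it (the ELB and Auto Scaling group) are not created until the application is configured. A sketch of the relevant template fragment, expressed as a Python dictionary with a placeholder AMI and bootstrap script:

```python
import json

# Hypothetical template fragment: the instance carries a CreationPolicy, and
# the stack waits until cfn-signal reports success from the configured
# instance before dependent resources are created.
template = {
    "Resources": {
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "CreationPolicy": {
                "ResourceSignal": {"Count": 1, "Timeout": "PT15M"}
            },
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
                "InstanceType": "t3.micro",
                "UserData": {
                    "Fn::Base64": {"Fn::Sub": (
                        "#!/bin/bash\n"
                        "/opt/app/configure.sh\n"      # placeholder bootstrap step
                        "/opt/aws/bin/cfn-signal -e $? "
                        "--stack ${AWS::StackName} "
                        "--resource AppInstance "
                        "--region ${AWS::Region}\n"
                    )}
                },
            },
        },
        # The ELB and Auto Scaling group reference AppInstance, so CloudFormation
        # does not create them until the signal above succeeds.
    }
}

print(json.dumps(template, indent=2))
```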

5. Right Answer: A,B
Explanation:
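A shared CloudWatch Logs log group gives both teams one console to search, with the log agent shipping operating system and application log files into it; the Kinesis route in option B offers the same consolidation when the teams prefer CloudSearch or Redshift for analysis. Below is a minimal sketch of option A, assuming a placeholder log group name and log file paths, and using the unified CloudWatch agent's JSON configuration format as the illustration:

```python
import json
import boto3

logs = boto3.client("logs")

# One shared log group gives operations and development a single place to look.
logs.create_log_group(logGroupName="/app/combined")  # placeholder name

# Minimal agent configuration (normally written to the CloudWatch agent's
# config file on the instance) that ships both OS and application logs
# into the same log group.
agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {"file_path": "/var/log/messages",      # operating system logs
                     "log_group_name": "/app/combined",
                     "log_stream_name": "{instance_id}/os"},
                    {"file_path": "/var/log/app/app.log",   # application logs (placeholder path)
                     "log_group_name": "/app/combined",
                     "log_stream_name": "{instance_id}/app"},
                ]
            }
        }
    }
}

print(json.dumps(agent_config, indent=2))
```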
