1. You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. Which two approaches will meet these requirements? (Select 2 answers; a brief code sketch follows the answer options.)
A) On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a log transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to process the data in Amazon Glacier and run reports every hour.
B) Install an Amazon CloudWatch Logs agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics.
C) On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a log transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour.
D) Install an AWS Data Pipeline Logs agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define Metric Filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.
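For reference, here is a minimal boto3 sketch of the CloudWatch Logs setup described in option B: create a log group and define a metric filter that turns matching log lines into a custom metric. The log group name, metric namespace, and filter pattern are hypothetical and would need to match your actual access-log format.

```python
import boto3

logs = boto3.client("logs")  # assumes credentials and region are configured

# Hypothetical log group that the CloudWatch Logs agent on each web server writes to.
LOG_GROUP = "web-server-logs"

# Create the log group; ignore the error if it already exists.
try:
    logs.create_log_group(logGroupName=LOG_GROUP)
except logs.exceptions.ResourceAlreadyExistsException:
    pass

# Define a metric filter that emits a custom metric for every matching request line.
# The space-delimited pattern below is only illustrative.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="UniqueVisitorHits",
    filterPattern='[ip, identity, user, timestamp, request="GET*", status, bytes]',
    metricTransformations=[{
        "metricName": "VisitorHits",
        "metricNamespace": "WebApp",
        "metricValue": "1",
    }],
)
```

An hourly reporting task could then read the resulting custom metric from CloudWatch, as the option describes.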
2. Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure? (A brief code sketch follows the answer options.)
A) Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
B) Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website.
C) Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials.
D) Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.
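The application in the question uses the AWS SDK for JavaScript in the browser, but the web identity federation flow mentioned in option C can be sketched in Python/boto3 for illustration: exchange an identity-provider token for temporary credentials, then use those credentials against DynamoDB. The role ARN, provider token, table name, and key below are hypothetical.

```python
import boto3

# Hypothetical values: the IAM role configured for web identity federation and a
# token obtained from an identity provider after the user signs in.
ROLE_ARN = "arn:aws:iam::123456789012:role/WebAppDynamoDBAccess"
ID_TOKEN = "<token-from-identity-provider>"

sts = boto3.client("sts")

# Exchange the provider token for temporary AWS credentials; no long-lived
# API keys are embedded in the application.
resp = sts.assume_role_with_web_identity(
    RoleArn=ROLE_ARN,
    RoleSessionName="web-app-session",
    WebIdentityToken=ID_TOKEN,
)
creds = resp["Credentials"]

# Use the temporary credentials to read from the DynamoDB table the role allows.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
item = dynamodb.get_item(
    TableName="Quotes",                      # hypothetical table name
    Key={"QuoteId": {"S": "example-id"}},    # hypothetical key
)
```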
3. As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) Provisioned IOPS (PIOPS) volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner? (A brief code sketch follows the answer options.)
A) Ensure that snapshots of the Amazon EBS volumes are created as a backup.
B) Ensure that the Amazon EBS volume is encrypted.
C) Ensure that the I/O block sizes for the test are randomly selected.
D) Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
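As a rough illustration of the pre-warming that option D describes, the sketch below reads every block of the volume once before the test begins. It assumes the volume is attached at /dev/xvdf and that the script runs as root on the instance; in practice a tool such as dd or fio is typically used for this step.

```python
# Minimal sketch of pre-warming (initializing) an EBS volume by reading every
# block once. DEVICE is a hypothetical attachment point.
DEVICE = "/dev/xvdf"
CHUNK = 1024 * 1024  # read 1 MiB at a time

with open(DEVICE, "rb") as dev:
    while dev.read(CHUNK):
        pass  # discard the data; the read itself touches every block
```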
4. As an architect, you have decided to use CloudFormation instead of OpsWorks or Elastic Beanstalk for deploying the applications in your company. Unfortunately, you have discovered that there is a resource type that is not supported by CloudFormation. What can you do to get around this? (A brief code sketch follows the answer options.) Please select:
A) Create a custom resource type using a template developer, a custom resource template, and CloudFormation.
B) Use a configuration management tool such as Chef, Puppet, or Ansible.
C) Specify more mappings and separate your template into multiple templates by using nested stacks.
D) Specify the custom resource by separating your template into multiple templates by using nested stacks.
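Unsupported resource types are commonly handled with a CloudFormation custom resource backed by an AWS Lambda function, the mechanism option A alludes to. Below is a minimal Python handler sketch: the provisioning step is left as a placeholder, and the function signals success or failure back to CloudFormation by PUTting a response to the pre-signed ResponseURL included in the event.

```python
import json
import urllib.request

def handler(event, context):
    """Lambda handler for a CloudFormation custom resource (Create/Update/Delete)."""
    try:
        # Provision, update, or delete the unsupported resource here, e.g. by
        # calling a third-party API. This sketch just echoes a hypothetical ID.
        physical_id = event.get("PhysicalResourceId", "my-unsupported-resource")
        status = "SUCCESS"
    except Exception:
        physical_id = "my-unsupported-resource"
        status = "FAILED"

    body = json.dumps({
        "Status": status,
        "Reason": "See CloudWatch Logs for details",
        "PhysicalResourceId": physical_id,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": {},
    }).encode("utf-8")

    # Tell CloudFormation the result so the stack operation can continue.
    req = urllib.request.Request(
        event["ResponseURL"], data=body, method="PUT",
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(req)
```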
5. You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements:
- All log entries must be retained by the system, even during unplanned instance failure.
- The customer insight team requires immediate access to the logs from the past seven days.
- The fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available.
How would you meet these requirements in a cost-effective manner? (Select 3 answers; a brief code sketch follows the answer options.)
A) Configure your application to write logs to a separate Amazon EBS volume with the delete-on-termination field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour.
B) Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance. Create a script that moves the logs from the instance to Amazon S3 once an hour.
C) Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that moves the logs from the instance to Amazon S3 once an hour.
D) Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days.
E) Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3.
F) Create a housekeeping script that runs on a t2.micro instance managed by an Auto Scaling group for high availability.
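For reference, the lifecycle rule described in option D can be created with a single API call; the boto3 sketch below transitions objects under a hypothetical logs/ prefix in a hypothetical bucket to Glacier seven days after creation.

```python
import boto3

s3 = boto3.client("s3")

# Move log objects to Glacier seven days after creation. Bucket name and
# prefix are hypothetical.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-application-logs",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }],
    },
)
```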