
AWS Certified Big Data - Specialty Certification - Part 42

Mary Smith

Wed, 09 Jul 2025


1. Your application currently uses DynamoDB as its data store. You also have a test environment where you run stress tests against the application, and there is a recurring need to restore the data in the DynamoDB tables after each test. How can this be achieved? Select two of the following options; each answer is part of the solution. (Select 2 answers)

A) Use the DynamoDB import function to copy the data back after the test completes.
B) Use AWS Data Pipeline to export the data from the DynamoDB table to a file in an Amazon S3 bucket before the test begins.
C) Use AWS Data Pipeline to import the data into the DynamoDB table from the file in the Amazon S3 bucket after the test ends.
D) Use the DynamoDB export function to copy the data before the test begins.



2. You have a DynamoDB table. You are required to run complex analytical queries against the data stored in the DynamoDB tables. How can this be achieved?

A) None
B) Copy the data to Amazon Redshift and then run the complex queries there
C) Query the DynamoDB table directly, because it supports complex queries
D) Copy the data to AWS CSOs and then run the complex queries there
E) Copy the data to Amazon QuickSight and then run the complex queries there


3. EC2 instances in a private subnet need to access DynamoDB tables. How can this be achieved?

A) Use a VPC endpoint
B) Convert the private subnet into a public subnet, because that is the only way access can be achieved
C) There is no way for instances in a private subnet to access DynamoDB tables
D) None
E) Attach a virtual private gateway to the VPC


4. Your company runs an e-commerce application on AWS. They want to use AWS Machine Learning to predict how many units of a particular product will be sold. Which machine learning model would you use for this purpose?

A) Regression
B) Binary classification
C) None
D) Multiclass classification
E) Simple classification


5. You enable encryption when you launch a cluster. To move from an unencrypted cluster to an encrypted one, you unload the data from the existing source cluster and then load it into a new, encrypted target cluster. What is the purpose of the Hadoop Encrypted Shuffle feature?

A) None
B) Files are shuffled across the nodes in the cluster
C) EC2 instances in the cluster are shuffled to improve performance
D) The encryption keys used in the cluster are rotated periodically
E) Data transmitted between nodes in the cluster is encrypted


1. Right Answer: B, C
Explanation: You can use AWS Data Pipeline to export data from a DynamoDB table to a file in an Amazon S3 bucket. You can also use the console to import data from Amazon S3 back into a DynamoDB table, in the same AWS Region or in a different one.
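
The accepted answer uses AWS Data Pipeline, which is set up through the console rather than code. As a rough illustration of the same export-before / restore-after pattern, here is a minimal boto3 sketch that uses DynamoDB's native export-to-S3 and import-from-S3 APIs instead; the table ARN, bucket name, and key schema are hypothetical placeholders, and the export call assumes point-in-time recovery is enabled on the table.

```python
import boto3

# Hypothetical identifiers -- replace with your own table and bucket.
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/OrdersTest"
BUCKET = "my-dynamodb-backups"

ddb = boto3.client("dynamodb")

# Before the stress test: snapshot the table's data to S3.
export = ddb.export_table_to_point_in_time(
    TableArn=TABLE_ARN,
    S3Bucket=BUCKET,
    S3Prefix="pre-test/",
    ExportFormat="DYNAMODB_JSON",
)
print("Export started:", export["ExportDescription"]["ExportArn"])

# After the stress test: load the exported data into a fresh table.
ddb.import_table(
    S3BucketSource={"S3Bucket": BUCKET, "S3KeyPrefix": "pre-test/"},
    InputFormat="DYNAMODB_JSON",
    TableCreationParameters={
        "TableName": "OrdersTest-restored",
        "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST",
    },
)
```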

2. Right Answer: B
Explanation: Amazon Redshift complements Amazon DynamoDB with advanced business intelligence capabilities and a powerful SQL-based interface. When you copy data from a DynamoDB table into Amazon Redshift, you can perform complex data analysis queries on that data, including joins with other tables in your Amazon Redshift cluster.
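
As a hedged sketch of what "copy to Redshift, then query" can look like in practice, the snippet below runs Redshift's COPY command against a DynamoDB table through the Redshift Data API (boto3 "redshift-data" client). The cluster identifier, database, user, IAM role, and table names are assumptions made for illustration.

```python
import boto3

rsd = boto3.client("redshift-data")

# Hypothetical identifiers -- substitute your own.
CLUSTER_ID = "analytics-cluster"
DATABASE = "dev"
IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftCopyRole"

# COPY pulls the DynamoDB table into a Redshift table; READRATIO caps the
# share of the table's provisioned read capacity the load may consume.
copy_sql = f"""
COPY orders_analysis
FROM 'dynamodb://Orders'
IAM_ROLE '{IAM_ROLE}'
READRATIO 50;
"""

resp = rsd.execute_statement(
    ClusterIdentifier=CLUSTER_ID,
    Database=DATABASE,
    DbUser="awsuser",
    Sql=copy_sql,
)
print("Statement id:", resp["Id"])

# Once loaded, complex SQL (joins, aggregations) can run against the copy.
rsd.execute_statement(
    ClusterIdentifier=CLUSTER_ID,
    Database=DATABASE,
    DbUser="awsuser",
    Sql="SELECT product_id, SUM(quantity) FROM orders_analysis GROUP BY product_id;",
)
```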

3. Right Answer: A
Explanation: A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public Internet. Your EC2 instances do not require public IP addresses, and you do not need an Internet gateway, a NAT device, or a virtual private gateway in your VPC. You can use endpoint policies to control access to DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network.
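
A minimal boto3 sketch of creating such a gateway endpoint is shown below; the VPC ID, route table ID, and Region are hypothetical and would be replaced by the ones backing your private subnet.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs -- use the VPC and the route table associated with the
# private subnet.
VPC_ID = "vpc-0abc1234def567890"
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"

# A gateway endpoint adds a DynamoDB route to the route table, so instances
# in the private subnet reach DynamoDB over private IPs only.
resp = ec2.create_vpc_endpoint(
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",
    RouteTableIds=[ROUTE_TABLE_ID],
)
print("Endpoint:", resp["VpcEndpoint"]["VpcEndpointId"])
```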

4. Right Answer: A
Explanation: Regression is used when the problem is to predict a numeric value. Amazon ML supports three types of models: binary classification, multiclass classification, and regression.
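
For illustration only, here is a minimal boto3 sketch of requesting a regression model from the (now legacy) Amazon Machine Learning API; the model and datasource IDs are hypothetical, and the training datasource is assumed to already point at historical sales data with a numeric "units sold" target.

```python
import boto3

ml = boto3.client("machinelearning")

# Hypothetical IDs -- the datasource would reference past sales records.
ml.create_ml_model(
    MLModelId="units-sold-forecast-v1",
    MLModelName="Units sold regression",
    MLModelType="REGRESSION",          # numeric target => regression
    TrainingDataSourceId="ds-sales-history",
)

# "BINARY" and "MULTICLASS" models answer yes/no or category questions;
# only "REGRESSION" predicts a continuous number such as units sold.
```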

5. Right Answer: E
Explanation: The process of transferring data from the mappers to the reducers is known as shuffling, i.e., the process by which the system sorts the map output and transfers it to the reducers as their input. During the shuffle phase, Hadoop MapReduce (MRv2) transfers the output of each map task to the reducers on different nodes over HTTP; the Encrypted Shuffle feature allows this traffic to be encrypted in flight.
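
On Amazon EMR, encrypted shuffle is enabled through in-transit encryption in a security configuration. The boto3 sketch below shows one way that might look; the configuration name and the S3 location of the TLS certificate bundle are assumptions for illustration.

```python
import boto3
import json

emr = boto3.client("emr")

# Hypothetical certificate bundle -- a zip of PEM certificates that must
# already exist in your account.
security_config = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": True,
        "EnableAtRestEncryption": False,
        "InTransitEncryptionConfiguration": {
            "TLSCertificateConfiguration": {
                "CertificateProviderType": "PEM",
                "S3Object": "s3://my-emr-config/certs.zip",
            }
        },
    }
}

emr.create_security_configuration(
    Name="in-transit-encryption",
    SecurityConfiguration=json.dumps(security_config),
)
# Reference this configuration when launching the cluster (run_job_flow's
# SecurityConfiguration parameter) so node-to-node shuffle traffic is
# encrypted in flight.
```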
