Amazon DBS-C01 Dumps - AWS Certified Database - Specialty PDF Sample Questions

Exam Code: DBS-C01
Exam Name: AWS Certified Database - Specialty
324 Questions
Last Update Date: 16 March, 2024
PDF + Test Engine: $60 (was $78)
Test Engine Only: $50 (was $65)
PDF Only: $35 (was $45.50)


Best Amazon DBS-C01 Dumps - Pass Your Exam on the First Attempt

Our DBS-C01 dumps are better than all other cheap DBS-C01 study materials.

The best way to pass your Amazon DBS-C01 exam is to get reliable exam study material. Realexamdumps is one of the most authentic websites for Amazon AWS Certified Database exam questions and answers. Pass your DBS-C01 AWS Certified Database - Specialty exam with full confidence. You can get a free AWS Certified Database - Specialty demo from realexamdumps. We ensure 100% success in the DBS-C01 exam with the help of our Amazon dumps, and you will feel proud to become a part of the realexamdumps family.

Our success rate over the past five years has been very impressive, and our customers have been able to build their careers in the IT field.

Search from 45000+ exams, buy your desired exam, download it, and pass your exam...

Sample Questions

Realexamdumps provides the most up-to-date AWS Certified Database questions and answers. Here are a few sample questions:

Amazon DBS-C01 Sample Question 1

A company is running an Amazon RDS for MySQL Multi-AZ DB instance for a business-critical workload. RDS encryption for the DB instance is disabled. A recent security audit concluded that all business-critical applications must encrypt data at rest. The company has asked its database specialist to formulate a plan to accomplish this for the DB instance.

Which process should the database specialist recommend?


Options:

A. Create an encrypted snapshot of the unencrypted DB instance. Copy the encrypted snapshot to Amazon S3. Restore the DB instance from the encrypted snapshot using Amazon S3.
B. Create a new RDS for MySQL DB instance with encryption enabled. Restore the unencrypted snapshot to this DB instance.
C. Create a snapshot of the unencrypted DB instance. Create an encrypted copy of the snapshot. Restore the DB instance from the encrypted snapshot.
D. Temporarily shut down the unencrypted DB instance. Enable AWS KMS encryption in the AWS Management Console using an AWS managed CMK. Restart the DB instance in an encrypted state.

Answer: C Explanation: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Limitations
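
The snapshot-copy approach in option C can be scripted; the following is a minimal boto3 sketch under assumed placeholder identifiers (the instance name, snapshot names, and KMS key are not from the question):

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # 1. Snapshot the unencrypted DB instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="mydb-unencrypted",
        DBSnapshotIdentifier="mydb-plain-snap",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-plain-snap")

    # 2. Copy the snapshot with encryption enabled (KMS key alias is a placeholder).
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="mydb-plain-snap",
        TargetDBSnapshotIdentifier="mydb-encrypted-snap",
        KmsKeyId="alias/aws/rds",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-encrypted-snap")

    # 3. Restore a new, encrypted DB instance from the encrypted snapshot copy.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="mydb-encrypted",
        DBSnapshotIdentifier="mydb-encrypted-snap",
    )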

Amazon DBS-C01 Sample Question 2

A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.

Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)


Options:

A. Review the stack drift before modifying the template
B. Create and review a change set before applying it
C. Export the database resources as stack outputs
D. Define the database resources in a nested stack
E. Set a stack policy for the database resources

Answer: B, E Explanation: https://docs.amazonaws.cn/en_us/AWSCloudFormation/latest/UserGuide/best-practices.html#cfn-best-practices-changesets
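
To make options B and E concrete, here is a hedged boto3 sketch that sets a stack policy protecting the database resource and then creates a change set for review; the stack name, logical resource ID, and template URL are placeholders:

    import json
    import boto3

    cfn = boto3.client("cloudformation")

    # Option E: stack policy that denies updates to the RDS resource only.
    stack_policy = {
        "Statement": [
            {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
            {
                "Effect": "Deny",
                "Action": "Update:*",
                "Principal": "*",
                "Resource": "LogicalResourceId/ProductionDatabase",  # hypothetical logical ID
            },
        ]
    }
    cfn.set_stack_policy(StackName="webapp-stack", StackPolicyBody=json.dumps(stack_policy))

    # Option B: create a change set so the Application team's changes can be reviewed first.
    cfn.create_change_set(
        StackName="webapp-stack",
        ChangeSetName="load-test-capacity",
        TemplateURL="https://example-bucket.s3.amazonaws.com/updated-template.yaml",  # placeholder
        ChangeSetType="UPDATE",
    )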

Amazon DBS-C01 Sample Question 3

A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. This application is very popular, and the company expects a tenfold increase in its user base over the next few months. The application experiences more traffic during the morning and evening hours.

This application has two parts:

  • An in-house booking component that accepts online bookings that directly correspond to simultaneous requests from users.
  • A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.

A database specialist needs to design a cost-effective database solution to handle this workload. Which solution meets these requirements?


Options:

A. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.
B. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.
C. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.
D. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

Answer: B
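
For option B, the stream-to-queue hop is typically a small AWS Lambda function; a minimal sketch (the queue URL is a placeholder) might look like this:

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/bookings-queue"  # placeholder

    def handler(event, context):
        # Each record is a booking change captured by DynamoDB Streams.
        for record in event["Records"]:
            if record["eventName"] in ("INSERT", "MODIFY"):
                booking = record["dynamodb"]["NewImage"]
                sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(booking))

A second Lambda function subscribed to the queue would then write the items to the RDS for MySQL DB instance used by the CRM.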

Amazon DBS-C01 Sample Question 4

A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.

Which solution would meet these requirements and deploy the DynamoDB tables?


Options:

A. Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
B. Create an AWS CloudFormation template and deploy the template to all the Regions.
C. Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.
D. Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.

Answer: C Explanation: https://aws.amazon.com/blogs/aws/use-cloudformation-stacksets-to-provision-resources-across-multiple-aws-accounts-and-regions/ https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html
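
Option C can be automated with boto3; a hedged sketch assuming a placeholder stack set name, template URL, account ID, and Region list:

    import boto3

    cfn = boto3.client("cloudformation")

    # Create the stack set once, then push stack instances into each Region.
    cfn.create_stack_set(
        StackSetName="game-scores",
        TemplateURL="https://example-bucket.s3.amazonaws.com/dynamodb-table.yaml",  # placeholder
    )
    cfn.create_stack_instances(
        StackSetName="game-scores",
        Accounts=["123456789012"],  # placeholder account ID
        Regions=["us-east-1", "eu-west-1", "ap-southeast-1"],
    )

Later configuration changes are made by updating the stack set, which propagates them to every Region.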

Amazon DBS-C01 Sample Question 5

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.

Which combination of actions should a database specialist take to meet these requirements? (Choose two.)


Options:

A. Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.
B. Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.
C. Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.
D. Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.
E. Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.

Answer: B, D Explanation: An existing unencrypted Aurora DB cluster cannot be encrypted in place and cannot have an encrypted replica added, so the data at rest is encrypted by restoring an encrypted snapshot copy, and SSL/TLS secures the data in transit. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.Encryption.html
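
For the in-transit half of the requirement (option B), here is a client connection sketch using the third-party PyMySQL driver and the downloaded Amazon RDS certificate bundle; the endpoint, credentials, and local certificate path are placeholders:

    import pymysql

    conn = pymysql.connect(
        host="financial-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        user="app_user",
        password="example-password",                 # placeholder credential
        database="transactions",
        ssl={"ca": "/opt/certs/global-bundle.pem"},  # RDS CA bundle downloaded locally
    )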

Amazon DBS-C01 Sample Question 6

A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so that each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.

Which solution meets these requirements?


Options:

A. Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.
B. Use reader endpoints for both the read-only workload applications.
C. Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.
D. Use custom endpoints for the two read-only applications.

Answer: D Explanation: https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-aurora-simplifies-workload-management-with-custom-endpoints/
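
Option D maps to the CreateDBClusterEndpoint API; a hedged boto3 sketch with placeholder cluster, endpoint, and replica identifiers:

    import boto3

    rds = boto3.client("rds")

    # One custom endpoint per read-only application, each pinned to a dedicated replica.
    rds.create_db_cluster_endpoint(
        DBClusterIdentifier="reporting-cluster",
        DBClusterEndpointIdentifier="app1-reader",
        EndpointType="READER",
        StaticMembers=["reporting-cluster-replica-1"],
    )
    rds.create_db_cluster_endpoint(
        DBClusterIdentifier="reporting-cluster",
        DBClusterEndpointIdentifier="app2-reader",
        EndpointType="READER",
        StaticMembers=["reporting-cluster-replica-2"],
    )

Adding another replica to an endpoint's static members later would give that application load balancing and high availability behind the same endpoint.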

Amazon DBS-C01 Sample Question 7

A company is building a software as a service application. As part of the new user sign-on workflow, a Python script invokes the CreateTable operation using the Amazon DynamoDB API. After the call returns, the script attempts to call PutItem.

Occasionally, the PutItem request fails with a ResourceNotFoundException error, which causes the workflow to fail. The development team has confirmed that the same table name is used in the two API calls.

How should a database specialist fix this issue?


Options:

A. Add an allow statement for the dynamodb:PutItem action in a policy attached to the role used by the application creating the table.
B. Set the StreamEnabled property of the StreamSpecification parameter to true, then call PutItem.
C. Change the application to call DescribeTable periodically until the TableStatus is ACTIVE, then call PutItem.
D. Add a ConditionExpression parameter in the PutItem request.

Answer: C Explanation: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html
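
Option C in code form - a minimal sketch that polls DescribeTable until the table is ACTIVE before calling PutItem (the table name and item are placeholders; boto3 also ships a table_exists waiter that wraps the same polling):

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")
    TABLE_NAME = "tenant-signup"  # placeholder table name

    dynamodb.create_table(
        TableName=TABLE_NAME,
        AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )

    # Poll until the table is ACTIVE; only then is it safe to call PutItem.
    while dynamodb.describe_table(TableName=TABLE_NAME)["Table"]["TableStatus"] != "ACTIVE":
        time.sleep(5)

    dynamodb.put_item(TableName=TABLE_NAME, Item={"pk": {"S": "user-001"}})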

Amazon DBS-C01 Sample Question 8

A company’s Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.

Which combination of actions should the Database Specialist take? (Choose three.)


Options:

A. Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.
B. Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.
C. Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.
D. Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.
E. Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.
F. Configure the AWS Managed Microsoft AD domain controller Security Group.

Answer: B, C, F Explanation: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServerWinAuth.html
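
The modification in option B is a single ModifyDBInstance call once the directory exists; a hedged sketch with a placeholder instance identifier, directory ID, and IAM role name:

    import boto3

    rds = boto3.client("rds")

    # Join the RDS for SQL Server instance to the AWS Managed Microsoft AD directory
    # so internal users can authenticate with their corporate AD credentials.
    rds.modify_db_instance(
        DBInstanceIdentifier="sqlserver-prod",
        Domain="d-1234567890",                          # placeholder directory ID
        DomainIAMRoleName="rds-directoryservice-role",  # placeholder role for Directory Service access
        ApplyImmediately=True,
    )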

Amazon DBS-C01 Sample Question 9

A company is running an on-premises application comprised of a web tier, an application tier, and a MySQL database tier. The database is used primarily during business hours with random activity peaks throughout the day. A database specialist needs to improve the availability and reduce the cost of the MySQL database tier as part of the company’s migration to AWS.

Which MySQL database option would meet these requirements?


Options:

A. Amazon RDS for MySQL with Multi-AZ
B. Amazon Aurora Serverless MySQL cluster
C. Amazon Aurora MySQL cluster
D. Amazon RDS for MySQL with read replica

Answer: B

Amazon DBS-C01 Sample Question 10

Recently, an ecommerce business transferred one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition database instance. The corporation anticipates an increase in read traffic as a result of an approaching sale. To accommodate the projected read load, a database professional must establish a read replica of the database instance.

Which procedures should the database professional do prior to establishing the read replica? (Select two.)


Options:

A. Identify a potential downtime window and stop the application calls to the source DB instance.
B. Ensure that automatic backups are enabled for the source DB instance.
C. Ensure that the source DB instance is a Multi-AZ deployment with Always ON Availability Groups.
D. Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM).
E. Modify the read replica parameter group setting and set the value to 1.

Answer: B, C Explanation: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.ReadReplicas.html
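
A hedged boto3 sketch of option B followed by the replica creation itself (identifiers are placeholders; the Multi-AZ/Always On prerequisite from option C is configured when the source instance is created or modified):

    import boto3

    rds = boto3.client("rds")

    # Option B: enable automated backups on the source instance (any retention > 0).
    rds.modify_db_instance(
        DBInstanceIdentifier="sqlserver-ecom",
        BackupRetentionPeriod=7,
        ApplyImmediately=True,
    )

    # With the prerequisites in place, create the read replica.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="sqlserver-ecom-replica",
        SourceDBInstanceIdentifier="sqlserver-ecom",
    )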

Amazon DBS-C01 Sample Question 11

A database professional is establishing a test graph database on Amazon Neptune for the first time. The database professional must load millions of rows of test observations from a .csv file stored in Amazon S3 into the Neptune DB instance through a series of API calls.

Which sequence of actions enables the database professional to upload the data most quickly? (Select three.)


Options:

A. Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.
B. Ensure the vertices and edges are specified in different .csv files with proper header column formatting.
C. Use AWS DMS to move data from Amazon S3 to the Neptune Loader.
D. Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.
E. Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.
F. Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.

Answer: B, E, F Explanation: https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-optimize.html
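
Option F is an HTTP POST to the cluster's loader endpoint; a hedged sketch using the third-party requests library, with a placeholder cluster endpoint, S3 URI, IAM role ARN, and Region:

    import requests

    LOADER_URL = "https://neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/loader"  # placeholder

    payload = {
        "source": "s3://example-bucket/observations/",  # placeholder .csv location
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # placeholder role from option E
        "region": "us-east-1",
        "failOnError": "FALSE",
    }

    response = requests.post(LOADER_URL, json=payload)
    print(response.json())  # returns a loadId that can be polled for load status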

Amazon DBS-C01 Sample Question 12

A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.

Which step will provide additional security?


Options:

A. Set up NACLs that allow the entire EC2 subnet to access the DB instance
B. Disable the master user account
C. Set up a security group that blocks SSH to the DB instance
D. Set up RDS to use SSL for data in transit

Answer: D Explanation: Reference: https://aws.amazon.com/blogs/database/applying-best-practices-for-securing-sensitive-data-in-amazon-rds/
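
One way to apply option D for RDS for MySQL is the require_secure_transport parameter in a custom parameter group; a hedged boto3 sketch with placeholder group and instance names:

    import boto3

    rds = boto3.client("rds")

    # Custom parameter group that rejects non-TLS connections (MySQL 5.7+).
    rds.create_db_parameter_group(
        DBParameterGroupName="finance-mysql-ssl",
        DBParameterGroupFamily="mysql8.0",  # placeholder engine family
        Description="Require TLS for all connections",
    )
    rds.modify_db_parameter_group(
        DBParameterGroupName="finance-mysql-ssl",
        Parameters=[{
            "ParameterName": "require_secure_transport",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",
        }],
    )
    rds.modify_db_instance(
        DBInstanceIdentifier="finance-db",
        DBParameterGroupName="finance-mysql-ssl",
        ApplyImmediately=True,
    )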

Amazon DBS-C01 Sample Question 13

A company is running a website on Amazon EC2 instances deployed in multiple Availability Zones (AZs). The site performs a high number of repetitive reads and writes each second on an Amazon RDS for MySQL Multi-AZ DB instance with General Purpose SSD (gp2) storage. After comprehensive testing and analysis, a database specialist discovers that there is high read latency and high CPU utilization on the DB instance.

Which approach should the database specialist take to resolve this issue without changing the application?


Options:

A. Implement sharding to distribute the load to multiple RDS for MySQL databases.
B. Use the same RDS for MySQL instance class with Provisioned IOPS (PIOPS) storage.
C. Add an RDS for MySQL read replica.
D. Modify the RDS for MySQL database class to a bigger size and implement Provisioned IOPS (PIOPS).

Answer: D
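
Option D translates to a single ModifyDBInstance call; a hedged sketch with a placeholder instance identifier, instance class, and IOPS value:

    import boto3

    rds = boto3.client("rds")

    # Move to a larger instance class and Provisioned IOPS (io1) storage.
    rds.modify_db_instance(
        DBInstanceIdentifier="website-mysql",
        DBInstanceClass="db.r5.2xlarge",  # placeholder larger class
        StorageType="io1",
        Iops=10000,                       # placeholder provisioned IOPS
        ApplyImmediately=True,
    )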

Amazon DBS-C01 Sample Question 14

A company is running a two-tier ecommerce application in one AWS account. The database tier uses an Amazon RDS for MySQL Multi-AZ DB instance. A Developer mistakenly deleted the database in the production environment. The database has been restored, but this resulted in hours of downtime and lost revenue.

Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)


Options:

A. Grant least privilege to groups, users, and roles
B. Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database
C. Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations
D. Use policy conditions to restrict access to selective IP addresses
E. Use AccessList Controls policy type to restrict users for database instance deletion
F. Enable AWS CloudTrail logging and Enhanced Monitoring

Answer: A, C, D Explanation: https://aws.amazon.com/blogs/database/using-iam-multifactor-authentication-with-amazon-rds/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/security_iam_id-based-policy.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/DataDurability.html
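
For option C, a hedged sketch of an identity-based policy that denies DB instance deletion unless MFA is present (the policy name is a placeholder):

    import json
    import boto3

    iam = boto3.client("iam")

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }],
    }

    iam.create_policy(
        PolicyName="DenyRdsDeleteWithoutMFA",  # placeholder policy name
        PolicyDocument=json.dumps(policy_document),
    )

An IP-based restriction (option D) would use a similar Condition block with aws:SourceIp.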

Amazon DBS-C01 Sample Question 15

A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.

What is the quickest way for the company to gather data on the migration compatibility?


Options:

A. Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.
B. Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.
C. Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.
D. Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.

Answer: D Explanation: Reference: https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/Schema-Conversion-Tool.pdf. AWS SCT converts the DB/DW schema from source to target (including procedures, views, secondary indexes, foreign keys, and constraints) and is intended mainly for heterogeneous DB and DW migrations.

Amazon DBS-C01 Sample Question 16

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connections logging.

Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)


Options:

A. Update the log_connections parameter in the default parameter group
B. Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance
C. Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days
D. Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days
E. Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file

Answer: B, C Explanation: The default parameter group cannot be modified and direct host access to edit postgresql.conf is not available on Amazon RDS, so a custom parameter group is required; publishing the engine logs to CloudWatch Logs with a 180-day retention setting meets the retention requirement. Reference: https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/
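
A hedged boto3 sketch of options B and C together; the parameter group, engine family, instance identifier, and log group name are placeholders:

    import boto3

    rds = boto3.client("rds")
    logs = boto3.client("logs")

    # Option B: custom parameter group with connection logging enabled.
    rds.create_db_parameter_group(
        DBParameterGroupName="pg-conn-logging",
        DBParameterGroupFamily="postgres13",  # placeholder engine family
        Description="log_connections enabled",
    )
    rds.modify_db_parameter_group(
        DBParameterGroupName="pg-conn-logging",
        Parameters=[{"ParameterName": "log_connections", "ParameterValue": "1", "ApplyMethod": "immediate"}],
    )
    rds.modify_db_instance(
        DBInstanceIdentifier="pg-prod",
        DBParameterGroupName="pg-conn-logging",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},  # option C: publish engine logs
        ApplyImmediately=True,
    )

    # Option C: retain the exported log group for 180 days.
    logs.put_retention_policy(
        logGroupName="/aws/rds/instance/pg-prod/postgresql",
        retentionInDays=180,
    )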

Amazon DBS-C01 Sample Question 17

A large company has a variety of Amazon DB clusters. Each of these clusters has various configurations that adhere to various requirements. Depending on the team and use case, these configurations can be organized into broader categories.

A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.

Which AWS service or feature will help automate and achieve this objective?


Options:

A. AWS Systems Manager Parameter Store
B. DB parameter group
C. AWS Config
D. AWS Secrets Manager

Answer: B Explanation: Reference: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.html

Amazon DBS-C01 Sample Question 18

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered that there is a period of time every day around 3:00 PM when the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.

Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?


Options:

A. Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.
B. Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
C. Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
D. Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Answer: D
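
Option D in code form - a hedged sketch that enables Performance Insights and then pulls the top SQL statements by database load for the slow window (the instance identifier and the DbiResourceId passed to the pi client are placeholders):

    from datetime import datetime, timedelta
    import boto3

    rds = boto3.client("rds")
    pi = boto3.client("pi")

    # Enable Performance Insights on the Aurora PostgreSQL instance.
    rds.modify_db_instance(
        DBInstanceIdentifier="aurora-pg-instance-1",
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=7,
        ApplyImmediately=True,
    )

    # After the next slow period, group database load by SQL statement.
    end = datetime.utcnow()
    metrics = pi.get_resource_metrics(
        ServiceType="RDS",
        Identifier="db-ABCDEFGHIJKLMNOP",  # placeholder DbiResourceId
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        PeriodInSeconds=60,
        MetricQueries=[{"Metric": "db.load.avg", "GroupBy": {"Group": "db.sql"}}],
    )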

Amazon DBS-C01 Sample Question 19

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with milliseconds precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.

Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?


Options:

A. Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
B. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.
C. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
D. Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.

Answer: D Explanation: Use the plant identifier as the partition key and the sensor identifier as the sort key. Faulty sensors can then be found quickly through the local secondary index on the fault attribute, and the associated plant and sensor are identified directly by the table keys.
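
A hedged boto3 sketch of the schema in option D (table, attribute, and index names are placeholders; note that a local secondary index must be defined at table creation):

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="PlantSensorReadings",
        AttributeDefinitions=[
            {"AttributeName": "PlantId", "AttributeType": "S"},
            {"AttributeName": "SensorId", "AttributeType": "S"},
            {"AttributeName": "Fault", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "PlantId", "KeyType": "HASH"},
            {"AttributeName": "SensorId", "KeyType": "RANGE"},
        ],
        LocalSecondaryIndexes=[{
            "IndexName": "FaultIndex",
            "KeySchema": [
                {"AttributeName": "PlantId", "KeyType": "HASH"},
                {"AttributeName": "Fault", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }],
        BillingMode="PAY_PER_REQUEST",
    )

Querying FaultIndex with a plant identifier then returns only the faulty sensors for that plant.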

Amazon DBS-C01 Sample Question 20

A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?


Options:

A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.

Answer: B Explanation: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Procedural.UploadtoCloudWatch.html https://aws.amazon.com/premiumsupport/knowledge-center/rds-aurora-mysql-logs-cloudwatch/ https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutRetentionPolicy.html

Amazon DBS-C01 Sample Question 21

A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.

Which solution meets these requirements?


Options:

A. Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.
B. Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.
C. Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.
D. Change the DB clusters to the burstable instance family.

Answer: A Explanation: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html
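
A hedged sketch of option A, creating an Aurora Serverless (v1) MySQL cluster that scales with sporadic ad-hoc use and pauses when idle; the cluster name, credentials, engine version, and capacity bounds are placeholders:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_cluster(
        DBClusterIdentifier="dev-reporting-serverless",
        Engine="aurora-mysql",
        EngineVersion="5.7.mysql_aurora.2.08.3",  # placeholder; must be a version that supports Serverless v1
        EngineMode="serverless",
        MasterUsername="admin",
        MasterUserPassword="example-password",    # placeholder credential
        ScalingConfiguration={
            "MinCapacity": 1,
            "MaxCapacity": 8,
            "AutoPause": True,            # pause during idle periods to reduce cost
            "SecondsUntilAutoPause": 600,
        },
    )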


and so much more...