- 144 Actual Exam Questions
- Compatible with all Devices
- Printable Format
- No Download Limits
- 90 Days Free Updates
Get All Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB Exam Questions with Validated Answers
| Vendor: | Microsoft |
|---|---|
| Exam Code: | DP-420 |
| Exam Name: | Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB |
| Exam Questions: | 144 |
| Last Updated: | January 14, 2026 |
| Related Certifications: | Azure Cosmos DB Developer Specialty |
| Exam Tags: | Cloud Certifications, Microsoft Azure Certifications, Data Management Certifications, Data and AI Certifications, Intermediate Microsoft Developers |
Looking for a hassle-free way to pass the Microsoft Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB exam? DumpsProvider provides the most reliable dumps questions and answers, designed by Microsoft-certified experts to help you succeed in record time. Available in both PDF and Online Practice Test formats, our study materials cover every major exam topic, making it possible for you to pass in as little as one day!
DumpsProvider is a leading provider of high-quality exam dumps, trusted by professionals worldwide. Our Microsoft DP-420 exam questions give you the knowledge and confidence needed to succeed on the first attempt.
Train with our Microsoft DP-420 exam practice tests, which simulate the actual exam environment. This real-test experience helps you get familiar with the format and timing of the exam, ensuring you're 100% prepared for exam day.
Your success is our commitment! That's why DumpsProvider offers a 100% money-back guarantee. If you don't pass the Microsoft DP-420 exam, we'll refund your payment within 24 hours, no questions asked.
Don’t waste time with unreliable exam prep resources. Get started with DumpsProvider’s Microsoft DP-420 exam dumps today and achieve your certification effortlessly!
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1 exceeds a specific value.
Solution: You configure an application to use the change feed processor to read the change feed and you configure the application to trigger the function.
Does this meet the goal?
No, this does not meet the goal. The change feed contains only inserts and updates to items in a container; it does not surface throughput metrics such as normalized RU consumption, so a change feed processor cannot detect when that value exceeds a threshold. Instead, configure an Azure Monitor alert on the container's normalized RU consumption metric to trigger the function.
You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.
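As a rough illustration of that alternative, here is a minimal sketch of creating such a metric alert with the azure-mgmt-monitor Python SDK. The subscription, resource group, account and container names, action group, and the 80% threshold are assumptions for the example; NormalizedRUConsumption is the Azure Monitor metric that reports normalized RU/s for an Azure Cosmos DB account.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
    MetricDimension,
    MetricAlertAction,
)

# All identifiers below are placeholders for the example.
subscription_id = "<subscription-id>"
account_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/account1"
)
action_group_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/microsoft.insights/actionGroups/<group-that-invokes-the-function>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

alert = MetricAlertResource(
    location="global",
    description="Normalized RU/s for container1 exceeded the threshold",
    severity=3,
    enabled=True,
    scopes=[account_id],
    evaluation_frequency="PT5M",
    window_size="PT5M",
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="HighNormalizedRU",
                metric_name="NormalizedRUConsumption",
                metric_namespace="Microsoft.DocumentDB/databaseAccounts",
                operator="GreaterThan",
                threshold=80,  # assumed threshold (percent)
                time_aggregation="Maximum",
                dimensions=[
                    MetricDimension(
                        name="CollectionName", operator="Include", values=["container1"]
                    )
                ],
            )
        ]
    ),
    # The action group is assumed to invoke the Azure function when the alert fires.
    actions=[MetricAlertAction(action_group_id=action_group_id)],
)

client.metric_alerts.create_or_update("<rg>", "container1-normalized-ru-alert", alert)
```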
You have an Azure Cosmos DB database that hosts a container named container1.
You need to ensure that the items stored in container1 will NOT expire unless their TTL value is set explicitly. What should you do?
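This requirement is met by enabling Time to Live (TTL) on container1 with a default of -1: items then never expire on their own, but any item that explicitly sets its own ttl property still expires after that many seconds. A minimal sketch, assuming the azure-cosmos Python SDK and hypothetical database and partition-key names:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<account-key>")  # placeholders
database = client.get_database_client("appdb")  # hypothetical database name

# default_ttl=-1 turns TTL on for the container, but items never expire
# unless an individual item explicitly sets its own "ttl" property.
container1 = database.create_container_if_not_exists(
    id="container1",
    partition_key=PartitionKey(path="/pk"),  # assumed partition key path
    default_ttl=-1,
)

# Expires 3600 seconds after its last write because it sets "ttl" explicitly.
container1.upsert_item({"id": "session-1", "pk": "user-42", "ttl": 3600})

# Never expires; it relies on the container default of -1.
container1.upsert_item({"id": "profile-1", "pk": "user-42"})
```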
You need to create a database in an Azure Cosmos DB for NoSQL account. The database will contain three containers named coll1, coll2, and coll3. The coll1 container will have unpredictable read and write volumes. The coll2 and coll3 containers will have predictable read and write volumes. The expected maximum throughput for coll1 and coll2 is 50,000 request units per second (RU/s) each.
How should you provision the containers while minimizing costs?
To create a database that minimizes costs, you should consider the following factors:
The read and write volumes of your containers
The predictability and variability of your traffic
The latency and throughput requirements of your application
The geo-distribution and availability needs of your data
Based on these factors, one possible option that you could choose is B: Create a provisioned throughput account. Set the throughput for coll1 to Autoscale. Set the throughput for coll2 and coll3 to Manual.
This option has the following advantages:
It allows you to optimize your costs by paying only for the throughput you need for each container.
This option also has some limitations, such as:
Autoscale throughput for coll1 is billed at a higher rate per provisioned RU/s than manual throughput, and the manual throughput for coll2 and coll3 must be monitored and adjusted yourself if their volumes change.
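As a rough sketch of that option, the containers could be provisioned as follows with the azure-cosmos Python SDK (ThroughputProperties autoscale support requires a recent SDK version). The database name, partition key paths, and the 400 RU/s figure for coll3 are assumptions, since the question gives no expected throughput for coll3:

```python
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

client = CosmosClient("<account-endpoint>", credential="<account-key>")  # placeholders
database = client.create_database_if_not_exists("smbdata")  # hypothetical database name

# coll1: unpredictable volume -> autoscale up to the expected 50,000 RU/s peak.
database.create_container_if_not_exists(
    id="coll1",
    partition_key=PartitionKey(path="/pk"),  # assumed partition key path
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=50000),
)

# coll2: predictable volume -> manual (standard) throughput at the known 50,000 RU/s.
database.create_container_if_not_exists(
    id="coll2",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=50000,
)

# coll3: predictable volume -> manual throughput sized to its expected load (assumed 400 RU/s).
database.create_container_if_not_exists(
    id="coll3",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=400,
)
```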
You need to create a data store for a directory of small and medium-sized businesses (SMBs). The data store must meet the following requirements:
* Store companies and the users employed by them. Each company will have less than 1,000 users.
* Some users have data that is greater than 2 KB.
* Associate each user to only one company.
* Provide the ability to browse by company.
* Provide the ability to browse the users by company.
* Whenever a company or user profile is selected, show a details page for the company and all the related users.
* Be optimized for reading data.
Which design should you implement to optimize the data store for reading data?
To optimize the data store for reading data, you should consider the following factors:
The size and shape of your data
The frequency and complexity of your queries
The latency and throughput requirements of your application
The trade-offs between storage efficiency and query performance
Based on these factors, one possible design that you could implement is B: In a company container, create a document for each company. Embed the users into company documents. Use the company ID as the partition key.
This design has the following advantages:
It avoids storing redundant data or creating additional containers for users.
It allows you to browse by company and browse the users by company with simple queries.
It shows a details page for the company and all the related users by fetching a single document.
This design also has some limitations, such as:
Because each company can have up to 1,000 users and some user profiles exceed 2 KB, an embedded company document can grow toward the 2 MB maximum item size, and updating a single user rewrites the entire company document.
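To make the chosen design concrete, here is a minimal sketch using the azure-cosmos Python SDK; the database name, container name, field names, and sample values are all assumptions for illustration:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<account-key>")  # placeholders
database = client.get_database_client("smbdirectory")  # hypothetical database name

# One container for companies, partitioned on the company ID; users are embedded.
companies = database.create_container_if_not_exists(
    id="company",
    partition_key=PartitionKey(path="/companyId"),
)

companies.upsert_item({
    "id": "contoso",
    "companyId": "contoso",   # partition key value
    "name": "Contoso Ltd.",
    "users": [                # embedded users (fewer than 1,000 per company)
        {"userId": "u1", "name": "Alice", "title": "Owner"},
        {"userId": "u2", "name": "Bob", "title": "Accountant"},
    ],
})

# The details page for a company and all of its users is a single point read.
company_page = companies.read_item(item="contoso", partition_key="contoso")
```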
You have a container in an Azure Cosmos DB for NoSQL account that stores data about orders. The following is a sample of an order document.

Documents are up to 2 KB.
You plan to receive one million orders daily.
Customers will frequently view their past order history.
You are evaluating whether to use orderDate as the partition key.
What are two effects of using orderDate as the partition key? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
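For context on the trade-off being tested: every order written on a given day shares the same partition key value, so the one million daily writes (roughly 2 GB at 2 KB per document) are concentrated on a single logical partition, while a customer's order history spans many orderDate values and therefore fans out across partitions. A minimal sketch of that fan-out query, assuming the azure-cosmos Python SDK and hypothetical database, container, and customer names:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("<account-endpoint>", credential="<account-key>")  # placeholders
orders = client.get_database_client("sales").get_container_client("orders")  # hypothetical names

# With /orderDate as the partition key, customerId is not the partition key,
# so an order-history lookup must fan out across every partition key value.
history = orders.query_items(
    query="SELECT * FROM o WHERE o.customerId = @cid ORDER BY o.orderDate DESC",
    parameters=[{"name": "@cid", "value": "customer-123"}],
    enable_cross_partition_query=True,  # required for this cross-partition query
)

for order in history:
    print(order["id"], order["orderDate"])
```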