Overview of partitioning

[Diagram: a client application writes to a container provisioned with 15,000 RU/s, spread across physical partitions; another client application reads from it.]

• The application writes data and provides a partition key value with every item.
• Cosmos DB uses the partition key value to route the data to a partition.
• Every physical partition can store up to 50 GB of data and serve up to 10,000 RU/s.
• The total throughput for the container is divided evenly across all partitions (15,000 RU/s over two partitions gives 7,500 RU/s each).
• If more data or throughput is needed, Cosmos DB adds a new physical partition automatically.
• The data is redistributed as a result.
• The total throughput capacity is again divided evenly between all partitions (15,000 RU/s over three partitions gives 5,000 RU/s each).
• To read data efficiently, the app must provide the partition key of the documents it is requesting.
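A minimal sketch of this write-then-point-read flow using the azure-cosmos Python SDK; the endpoint, key, database and container names, and the /userId partition key path are illustrative assumptions:

from azure.cosmos import CosmosClient, PartitionKey

# Hypothetical account endpoint and key.
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists("appdb")

# Container provisioned with 15,000 RU/s, partitioned on /userId.
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/userId"),
    offer_throughput=15000,
)

# Write: every item carries a partition key value ("userId" here),
# which Cosmos DB uses to route the item to a physical partition.
container.create_item(body={"id": "order-1", "userId": "tim", "total": 42})

# Efficient read: supplying both the id and the partition key lets the
# request go straight to the owning partition (a point read).
item = container.read_item(item="order-1", partition_key="tim")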
How is data distributed?

[Diagram: data with partition keys → hashing algorithm → range of partition addresses → physical partitions.]

• Whenever a document is inserted, its partition key value is hashed and the item is assigned to a physical partition based on that value.
• All partition key values are distributed among the physical partitions.
• Items with the exact same partition key value, however, are always co-located on the same physical partition.
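Cosmos DB's actual hash scheme is internal to the service; the toy sketch below (plain Python, with a hypothetical partition count) only illustrates why equal partition key values always land on the same partition:

import hashlib

PHYSICAL_PARTITIONS = 3  # hypothetical count, for illustration only

def route(partition_key_value: str) -> int:
    """Hash the partition key value and map it onto a partition.
    The real service maps hash output onto ranges of partition
    addresses, but the routing property is the same."""
    digest = hashlib.md5(partition_key_value.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "little") % PHYSICAL_PARTITIONS

# Identical partition key values are always co-located...
assert route("pk=1") == route("pk=1")
# ...while distinct values spread across the physical partitions.
print({k: route(k) for k in ("sri", "tim", "thomas")})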
Partitioning dynamics

[Diagram, Scenario 1: a client application writes items with partition key values Sri, Tim, and Thomas; when a split occurs, the largest partition is taken and re-balanced with the new one.]
An efficient partitioning strategy has a close-to-even distribution of data and throughput. An inefficient partitioning strategy is the main source of cost and performance challenges. A random partition key can provide an even data distribution.
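For instance, a synthetic value such as a GUID makes an effectively random partition key; a sketch follows (the item shape is a hypothetical example), with the caveat that the application must then keep the stored key value around, since lookups without it fan out to every partition:

import uuid

# A random partition key value per item yields an even distribution
# of data and writes across physical partitions.
item = {
    "id": str(uuid.uuid4()),
    "pk": str(uuid.uuid4()),  # random partition key value
    "payload": "...",
}
# Trade-off: to read this item efficiently later, the application must
# remember item["pk"], because queries without it are cross-partition.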
Database Account (per tenant)
• Isolation knobs: independent geo-replication knobs; multiple throughput knobs (dedicated throughput, eliminating noisy neighbors)
• Throughput requirements: >400 RUs per tenant (> $24 per tenant)
• T-shirt size: Large. Example: premium offer for B2B apps

Container w/ Dedicated Throughput (per tenant)
• Isolation knobs: independent throughput knobs (dedicated throughput, eliminating noisy neighbors); group tenants within database account(s) based on regional needs
• Throughput requirements: >400 RUs per tenant (> $24 per tenant)
• T-shirt size: Large. Example: premium offer for B2B apps

Container w/ Shared Throughput (per tenant)
• Isolation knobs: share throughput across tenants grouped by database (great for lowering cost on "spiky" tenants); easy management of tenants (drop container when tenant leaves); mitigate noisy-neighbor blast radius (group tenants by database)
• Throughput requirements: >100 RUs per tenant (> $6 per tenant)
• T-shirt size: Medium. Example: standard offer for B2B apps

Partition Key (per tenant)
• Isolation knobs: share throughput across tenants grouped by container (great for lowering cost on "spiky" tenants); enables easy queries across tenants (containers act as boundary for queries); mitigate noisy-neighbor blast radius (group tenants by container)
• Throughput requirements: >0 RUs per tenant (> $0 per tenant)
• T-shirt size: Small. Example: B2C apps
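As an illustration of the cheapest model above, a partition-key-per-tenant sketch with the azure-cosmos Python SDK (the endpoint, names, and the /tenantId path are assumptions):

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists("saasdb")

# All tenants share one container; tenantId is the partition key, so no
# throughput has to be reserved per tenant (the ">0 RUs" tier above).
container = database.create_container_if_not_exists(
    id="tenant-data",
    partition_key=PartitionKey(path="/tenantId"),
)
container.create_item(body={"id": "doc-1", "tenantId": "contoso", "plan": "b2c"})

# Pinning the tenant's partition key keeps queries scoped to one tenant.
docs = list(container.query_items(
    query="SELECT * FROM c WHERE c.plan = 'b2c'",
    partition_key="contoso",
))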
PACELC: in the case of a network Partition in a distributed computer system, one has to choose between Availability and Consistency; Else, even when the system is running normally in the absence of partitions, one has to choose between Latency and Consistency.
[Diagram: Azure Traffic Manager directing traffic across Region A, Region B, and Region C; each region hosts masters (read/write) and replicas (read).]
Consistency Level | Quorum Reads | Quorum Writes
Strong | Local Minority (2 RU) | Global Majority (1 RU)
Bounded Staleness | Local Minority (2 RU) | Local Majority (1 RU)
Session | Single replica using session token (1 RU) | Local Majority (1 RU)
Consistent Prefix | Single replica (1 RU) | Local Majority (1 RU)
Eventual | Single replica (1 RU) | Local Majority (1 RU)
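The account sets a default level, and a client can request a weaker one; a sketch assuming the azure-cosmos Python SDK's consistency_level option and a hypothetical endpoint:

from azure.cosmos import CosmosClient

# Session consistency: point reads use a single replica plus a session
# token (1 RU in the table above) instead of a quorum read.
client = CosmosClient(
    "https://myaccount.documents.azure.com:443/",
    credential="<key>",
    consistency_level="Session",
)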
[Diagram: globally distributed application — devices (mobile, browser) connect over the internet to Traffic Manager, which routes to West US 2, North Europe, or Southeast Asia; each region contains an Application Gateway, Web Tier, Middle Tier, Load Balancer, and Cosmos DB.]
Azure Cosmos DB’s schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing-fast queries.

• Automatic index management
• Synchronous auto-indexing
• No schemas or secondary indices needed
• Works across every data model

[Example: heterogeneous items in one container — a Geek mug (graphite, microwave safe, 16 oz liquid capacity) and a Coffee Bean mug (tan, not microwave safe, 12 oz) alongside a Surface Book (gray, 3.4 GHz Intel Skylake Core i7-6600U CPU, 16 GB memory, 1 TB SSD); every property of every item is indexed and queryable.]
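Because every path is indexed automatically, a filter on any property works without declaring a schema or secondary index; a sketch with the azure-cosmos Python SDK (endpoint and names assumed, bracket syntax for the property name containing a space):

from azure.cosmos import CosmosClient

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("appdb").get_container_client("catalog")

# No schema or secondary index was declared, yet any property is queryable,
# even across heterogeneous items like the mugs and the Surface Book above.
mugs = list(container.query_items(
    query="SELECT c.Item FROM c WHERE c['Microwave safe'] = 'Yes'",
    enable_cross_partition_query=True,
))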
Custom Indexing Policies

Though all Azure Cosmos DB data is indexed by default, you can specify a custom indexing policy for your collections. Custom indexing policies allow you to design and customize the shape of your index while maintaining schema flexibility.

• Define trade-offs between storage, write and query performance, and query consistency
• Include or exclude documents and paths to and from the index
• Configure various index types

{
  "automatic": true,
  "indexingMode": "Consistent",
  "includedPaths": [{
    "path": "/*",
    "indexes": [{
      "kind": "Range",
      "dataType": "String",
      "precision": -1
    }, {
      "kind": "Range",
      "dataType": "Number",
      "precision": -1
    }, {
      "kind": "Spatial",
      "dataType": "Point"
    }]
  }],
  "excludedPaths": [{
    "path": "/nonIndexedContent/*"
  }]
}
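A policy like the one above can be supplied when the container is created; a sketch with the azure-cosmos Python SDK (endpoint and names are assumptions; the v4 SDK expects the indexing mode in lower case):

from azure.cosmos import CosmosClient, PartitionKey

indexing_policy = {
    "automatic": True,
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": "/nonIndexedContent/*"}],
}

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists("appdb")
container = database.create_container_if_not_exists(
    id="catalog",
    partition_key=PartitionKey(path="/id"),
    indexing_policy=indexing_policy,
)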
[Diagram: tree representations of two sample JSON documents — paths such as locations/0/country: Germany, locations/0/city: Berlin or Bonn, locations/1/country: France, locations/1/city: Paris, headquarter: Belgium or Italy, exports with cities Moscow, Athens, and Berlin, revenue: 200, dealers/0/name: Hans — illustrating how every path in an item's JSON tree becomes part of the index.]
On-the-fly Index Changes

In Azure Cosmos DB, you can make changes to the indexing policy of a collection on the fly. Changes can affect the shape of the index, including paths, precision values, and its consistency model. A change in indexing policy effectively requires a transformation of the old index into a new index.
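A sketch of such an online policy change via the Python SDK's replace_container (endpoint and names assumed); the index transformation then runs while the container stays online:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("appdb")

# Replace the live container's indexing policy; Cosmos DB transforms the
# old index into the new one without taking the container offline.
database.replace_container(
    "catalog",
    partition_key=PartitionKey(path="/id"),
    indexing_policy={
        "indexingMode": "consistent",
        "includedPaths": [{"path": "/*"}],
        "excludedPaths": [{"path": "/nonIndexedContent/*"},
                          {"path": "/archived/*"}],
    },
)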
Metrics Analysis

The SQL API provides information about performance metrics, such as the index storage used and the throughput cost (request units) of every operation. When running a HEAD or GET request against a collection resource, the x-ms-request-quota and x-ms-request-usage headers provide the storage quota and usage of the collection. You can use this information to compare various indexing policies and for performance tuning.
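A sketch of inspecting those headers, assuming the azure-cosmos Python SDK surfaces them on last_response_headers (endpoint and names are hypothetical; exact header names may vary by API version, so the loop matches on substrings rather than hard-coding them):

from azure.cosmos import CosmosClient

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("appdb").get_container_client("catalog")

container.read()  # GET against the collection resource
headers = container.client_connection.last_response_headers

# Print the quota/usage headers without hard-coding exact names.
for name, value in headers.items():
    if "quota" in name.lower() or "usage" in name.lower():
        print(name, value)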