Commit 29b98ba

colloacted ---> collocated
I suppose it is a typo.
1 parent bd3a517 commit 29b98ba

File tree

1 file changed: +1 -1 lines changed

articles/cosmos-db/hierarchical-partition-keys.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ When you choose each level of your hierarchical partition key, it's important to
31 31
32 32     - **Have a high cardinality**. The first, second, and third (if applicable) keys of the hierarchical partition should all have a wide range of possible values.
33 33
34    -     - Having low cardinality at the first level of the hierarchical partition key will limit all of your write operations at the time of ingestion to just one physical partition until it reaches 50 GB and splits into two physical partitions. For example, suppose your first level key is on `TenantId` and only have 5 unique tenants. Each of these tenants' operations will be scoped to just one physical partition, limiting your throughput consumption to just what is on that one physical partition. This is because hierarchical partitions optimize for all documents with the same first-level key to be colloacted on the same physical partition to avoid full-fanout queries.
   34 +     - Having low cardinality at the first level of the hierarchical partition key will limit all of your write operations at the time of ingestion to just one physical partition until it reaches 50 GB and splits into two physical partitions. For example, suppose your first level key is on `TenantId` and only have 5 unique tenants. Each of these tenants' operations will be scoped to just one physical partition, limiting your throughput consumption to just what is on that one physical partition. This is because hierarchical partitions optimize for all documents with the same first-level key to be collocated on the same physical partition to avoid full-fanout queries.
35 35       - While this may be okay for workloads where we do a one-time ingest of all our tenants' data and the following operations are primarily read-heavy afterwards, this can be unideal for workloads where your business requirements involve ingestion of data within a specific time. For example, if you have strict business requirements to avoid latencies, the maximum throughput your workload can theoretically achieve to ingest data is number of physical partitions * 10k. If your top-level key has low cardinality, your number of physical partitions will likely be 1, unless there is sufficient data for the level 1 key for it to be spread across multiple partitions after splits which can take between 4-6 hours to complete.
36 36
37 37     - **Spread request unit (RU) consumption and data storage evenly across all logical partitions**. This spread ensures even RU consumption and storage distribution across your physical partitions.
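The passage changed above reasons about hierarchical partition keys and why the first level should have high cardinality. As a concrete illustration, here is a minimal sketch of creating a container with a hierarchical (multi-hash) partition key using the azure-cosmos Python SDK; this assumes a recent SDK version with hierarchical partition key support, and the account endpoint, credential, database/container names, and the `/tenantId`, `/userId`, `/sessionId` paths are placeholders for illustration, not part of the commit.

```python
# Minimal sketch: hierarchical ("MultiHash") partition key with the
# azure-cosmos Python SDK. Endpoint, credential, and all names below
# are illustrative placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    url="https://<your-account>.documents.azure.com:443/",
    credential="<your-key>",
)
database = client.create_database_if_not_exists(id="AppDatabase")

# TenantId alone may have low cardinality (e.g., only 5 tenants); adding
# UserId and SessionId as lower levels gives each tenant room to spread
# across physical partitions after splits, while documents sharing the
# same first-level key are still kept together where possible.
container = database.create_container_if_not_exists(
    id="UserSessions",
    partition_key=PartitionKey(
        path=["/tenantId", "/userId", "/sessionId"],
        kind="MultiHash",
    ),
)

# Writes carry the full hierarchy in the document body; the SDK derives
# the partition key value from these fields.
container.upsert_item({
    "id": "session-001",
    "tenantId": "contoso",
    "userId": "alice",
    "sessionId": "session-001",
})
```

As the changed paragraph notes, ingestion throughput is bounded by roughly 10K RU/s per physical partition, so a low-cardinality first level that keeps all writes on a single physical partition caps ingestion at that one partition's throughput until splits occur.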

0 commit comments
