Commit 164542e

Merge pull request #252886 from seesharprun/cosmos-issue-resolution-1
Cosmos DB | Bulk issue resolutions #1
2 parents: e75a269 + 687c9aa

File tree

7 files changed (+19, -15 lines)


articles/cosmos-db/burst-capacity-faq.yml

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ sections:
   - question: |
       How can I enable burst capacity on an account programmatically?
     answer: |
-      You can use the [Azure Cosmos DB Resource Provider REST API version 2022-11-15-preview](/rest/api/cosmos-db-resource-provider/) or a [Resource Manager template](manage-with-templates.md) with API version 2022-11-15-preview to set the property **enableBurstCapacity** to true.
+      You can use the [Azure Cosmos DB Resource Provider REST API version `2023-03-15-preview`](/rest/api/cosmos-db-resource-provider/2023-03-15-preview/database-accounts/create-or-update) or a [Resource Manager template with API version `2023-03-01-preview`](/azure/templates/microsoft.documentdb/2023-03-01-preview/databaseaccounts) to set the property `enableBurstCapacity` to true.
       You can also use the Azure CLI or PowerShell.

       #### [PowerShell](#tab/azure-powershell)
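The answer above boils down to one account property, `enableBurstCapacity`. As a minimal illustrative sketch (not the official SDK; the helper name is hypothetical), this builds the Resource Manager request body that the REST API or a template would carry:

```python
import json

def burst_capacity_body() -> str:
    """Build the ARM request-body fragment that enables burst capacity on a
    Microsoft.DocumentDB/databaseAccounts resource (API version
    2023-03-15-preview per the FAQ answer above)."""
    body = {
        "properties": {
            "enableBurstCapacity": True,
        }
    }
    return json.dumps(body)

print(burst_capacity_body())
```

The same property can also be set through the Azure CLI or PowerShell, as the tabs below show.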

articles/cosmos-db/gremlin/quickstart-python.md

Lines changed: 1 addition & 7 deletions
@@ -32,17 +32,11 @@ In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph)
 You can also install the Python driver for Gremlin by using the `pip` command line:

 ```bash
-pip install gremlinpython==3.4.13
+pip install gremlinpython==3.7.*
 ```

 - [Git](https://git-scm.com/downloads).

-> [!NOTE]
-> This quickstart requires a graph database account created after December 20, 2017. Existing accounts will support Python once they're migrated to general availability.
-
-> [!NOTE]
-> We currently recommend using gremlinpython==3.4.13 with Gremlin (Graph) API as we haven't fully tested all language-specific libraries of version 3.5.* for use with the service.
-
 ## Create a database account

 Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
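This hunk pins the driver to `gremlinpython==3.7.*`. As a small sanity-check sketch (the helper name is an assumption, not part of the quickstart), you can confirm which driver version is actually installed before running the samples:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_gremlinpython_version() -> str:
    """Return the installed gremlinpython version, or '' if it is absent."""
    try:
        return version("gremlinpython")
    except PackageNotFoundError:
        return ""

# Prints the version string (e.g. one in the 3.7 series) or a hint to install.
print(installed_gremlinpython_version() or "gremlinpython is not installed")
```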

articles/cosmos-db/mongodb/connect-using-mongoose.md

Lines changed: 3 additions & 3 deletions
@@ -56,10 +56,10 @@ After you create the database, you'll use the name in the `COSMOSDB_DBNAME` envi

 3. Install the necessary packages using one of the ```npm install``` options:

-   * **Mongoose**: ```npm install mongoose@5.13.15 --save```
+   * **Mongoose**: ```npm install mongoose --save```

-   > [!IMPORTANT]
-   > The Mongoose example connection below is based on Mongoose 5+, which has changed since earlier versions. Azure Cosmos DB for MongoDB is compatible with up to version `5.13.15` of Mongoose. For more information, please see the [issue discussion](https://github.com/Automattic/mongoose/issues/11072) in the Mongoose GitHub repository.
+   > [!NOTE]
+   > For more information on which version of Mongoose is compatible with your API for MongoDB server version, see [Mongoose compatibility](https://mongoosejs.com/docs/compatibility.html).

   * **Dotenv** *(if you'd like to load your secrets from an .env file)*: ```npm install dotenv --save```

articles/cosmos-db/monitor-resource-logs.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ Here, we walk through the process of creating diagnostic settings for your accou
 | **GremlinRequests** | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
 | **QueryRuntimeStatistics** | NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data, with full-text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
 | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. The PartitionKeyStatistics log is emitted only if the following conditions are true: 1. At least 1% of the documents in the physical partition have the same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with the largest storage size. If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
-| **PartitionKeyRUConsumption** | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
+| **PartitionKeyRUConsumption** | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write, query, and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
 | **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include creating an account, adding or removing a region, updating account replication settings, etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
 | **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |

articles/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md

Lines changed: 1 addition & 1 deletion
@@ -564,7 +564,7 @@ The following code shows how to call a pre-trigger using the Python SDK:
 ```python
 item = {'category': 'Personal', 'name': 'Groceries',
         'description': 'Pick up strawberries', 'isComplete': False}
-container.create_item(item, {'post_trigger_include': 'trgPreValidateToDoItemTimestamp'})
+container.create_item(item, pre_trigger_include='trgPreValidateToDoItemTimestamp')
 ```

 ---

articles/cosmos-db/nosql/performance-tips-query-sdk.md

Lines changed: 11 additions & 1 deletion
@@ -91,7 +91,17 @@ The SQL SDK includes a native ServiceInterop.dll to parse and optimize queries l

 # [V3 .NET SDK](#tab/v3)

-For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.partitionkey) property in `QueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By):
+For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.partitionkey) property in `QueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By), the partition key can be filtered directly in the query text. In this example, the partition key field of `/state` is filtered on the value `Washington`:
+
+```csharp
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+    "SELECT * FROM c WHERE c.city = 'Seattle' AND c.state = 'Washington'"))
+{
+    // ...
+}
+```
+
+Optionally, you can provide the partition key as a part of the request options object.

 ```cs
 using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(

articles/cosmos-db/nosql/quickstart-spark.md

Lines changed: 1 addition & 1 deletion
@@ -675,7 +675,7 @@ The Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL has a complete conf
 az cosmosdb sql role assignment create --account-name $accountName --resource-group $resourceGroupName --scope "/" --principal-id $principalId --role-definition-id $readOnlyRoleDefinitionId
 ```

-1. Now that you have created an Azure Active Directory application and service principle, created a custom role, and assigned that role permissions to your Cosmos DB account, you should be able to run your notebook.
+1. Now that you have created an Azure Active Directory application and service principal, created a custom role, and assigned that role permissions to your Cosmos DB account, you should be able to run your notebook.

 ## Migrate to Spark 3 Connector
