`articles/cosmos-db/cassandra/support.md` (6 lines changed: 0 additions & 6 deletions)
@@ -240,19 +240,13 @@ Azure Cosmos DB for Apache Cassandra is a managed service platform. The platform
## CQL shell
- <!-- You can open a hosted native Cassandra shell (CQLSH v5.0.1) directly from the Data Explorer in the [Azure portal](../data-explorer.md) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/). Before enabling the CQL shell, you must [enable the Notebooks](../notebooks-overview.md) feature in your account (if not already enabled, you will be prompted when clicking on `Open Cassandra Shell`).
You can connect to the API for Cassandra in Azure Cosmos DB by using CQLSH installed on a local machine. CQLSH ships with Apache Cassandra 3.11 and works out of the box after you set the environment variables. The following sections include instructions to install, configure, and connect to the API for Cassandra in Azure Cosmos DB, on Windows or Linux, using CQLSH.
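As a minimal sketch of the configuration described above, the following sets the environment variables CQLSH reads before connecting over TLS. The account name and key are placeholders, and the exact variable names should be verified against your CQLSH 3.11 installation:

```shell
# The API for Cassandra requires TLS; these variables tell CQLSH
# (open-source Apache Cassandra 3.11) how to negotiate it.
export SSL_VERSION=TLSv1_2
export SSL_VALIDATE=false

# Placeholder connection command (requires a live account; 10350 is
# the API for Cassandra endpoint port):
# cqlsh <account-name>.cassandra.cosmos.azure.com 10350 \
#     -u <account-name> -p <account-primary-key> --ssl

echo "SSL_VERSION=$SSL_VERSION SSL_VALIDATE=$SSL_VALIDATE"
# prints: SSL_VERSION=TLSv1_2 SSL_VALIDATE=false
```

When connecting, the `-u` username is the Azure Cosmos DB account name and the password is the account's primary key.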
> [!WARNING]
> Connections to Azure Cosmos DB for Apache Cassandra don't work with the DataStax Enterprise (DSE) or Cassandra 4.0 versions of CQLSH. Make sure you use only the v3.11 open-source Apache Cassandra version of CQLSH when connecting to the API for Cassandra.
**Windows:**
- <!-- If using Windows, we recommend you enable the [Windows Subsystem for Linux](/windows/wsl/install-win10#install-the-windows-subsystem-for-linux). You can then follow the Linux commands below. -->
`articles/cosmos-db/includes/cosmos-db-tutorial-global-distribution-portal.md` (18 lines changed: 0 additions & 18 deletions)
@@ -33,28 +33,10 @@ For delivering low-latency to end users, it is recommended that you deploy both
For BCDR, it is recommended to add regions based on the region pairs described in the [Cross-region replication in Azure: Business continuity and disaster recovery](../../availability-zones/cross-region-replication-azure.md) article.
- <!--
- ## <a id="selectwriteregion"></a>Select the write region
-
- While all regions associated with your Azure Cosmos DB database account can serve reads (both single-item and multi-item paginated reads) and queries, only one region can actively receive write (insert, upsert, replace, delete) requests. To set the active write region, do the following:
-
- 1. In the **Azure Cosmos DB** blade, select the database account to modify.
- 2. In the account blade, click **Replicate data globally** from the menu.
- 3. In the **Replicate data globally** blade, click **Manual Failover** from the top bar.
-
-    ![Change the write region under Azure Cosmos DB Account > Replicate data globally > Manual Failover][2]
-
- 4. Select a read region to become the new write region, select the checkbox to confirm triggering a failover, and click **OK**.
-
-    ![Change the write region by selecting a new region in list under Azure Cosmos DB Account > Replicate data globally > Manual Failover][3]
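The portal steps above can also be scripted. This is a minimal, hypothetical template using the Azure CLI's `az cosmosdb failover-priority-change` command; the account name, resource group, and region list are placeholder assumptions, and priority `0` designates the write region:

```shell
# Hypothetical account/resource names -- replace with your own.
# Moving westus to priority 0 triggers a manual failover that makes it
# the write region; the other regions continue to serve reads.
az cosmosdb failover-priority-change \
    --name my-cosmos-account \
    --resource-group my-resource-group \
    --failover-policies westus=0 eastus=1
```

Running this requires a live Azure subscription and a multi-region account, so it is shown as a template rather than an executable example.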
`articles/cosmos-db/mongodb/vcore/ai-advertisement-generation.md` (2 lines changed: 0 additions & 2 deletions)
@@ -19,8 +19,6 @@ In this guide, we demonstrate how to create dynamic advertising content that res
- **OpenAI Embeddings**: Utilizes the cutting-edge embeddings from OpenAI to generate vectors for inventory descriptions. This approach allows for more nuanced and semantically rich matches between the inventory and the advertisement content.
- **Content Generation**: Employs OpenAI's advanced language models to generate engaging, trend-focused advertisements. This method ensures that the content is not only relevant but also captivating to the target audience.
- Azure OpenAI: Let's set up the Azure OpenAI resource. Access to this service is currently available by application only. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Once you have access, complete the following steps:
- Create an Azure OpenAI resource following this [quickstart](../../../ai-services/openai/how-to/create-resource.md?pivots=web-portal).
<!-- The importAll method accepts the following parameters:

|**Parameter** |**Description** |
|---------|---------|
|isUpsert | A flag to enable upsert of the documents. If a document with the given ID already exists, it's updated. |
|disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it is set to true. |
|maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |

**Bulk import response object definition**

The result of the bulk import API call contains the following get methods:

|**Parameter** |**Description** |
|---------|---------|
|int getNumberOfDocumentsImported() | The total number of documents that were successfully imported out of the documents supplied to the bulk import API call. |
|double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk import API call. |
|Duration getTotalTimeTaken() | The total time taken by the bulk import API call to complete execution. |
|List\<Exception> getErrors() | Gets the list of errors if some documents out of the batch supplied to the bulk import API call failed to get inserted. |
|List\<Object> getBadInputDocuments() | The list of bad-format documents that were not successfully imported in the bulk import API call. User should fix the documents returned and retry import. Bad-format documents include documents whose ID value is not a string (null or any other datatype is considered invalid). |
-->
<!-- 5. After you have the bulk import application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:

   ```bash
   mvn clean package
   ```

6. After the target dependencies are generated, you can invoke the bulk importer application by using the following command:

   ```bash
   java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint *<Fill in your Azure Cosmos DB's endpoint>* -masterKey *<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkImportDb -collectionId bulkImportColl -operation import -shouldCreateCollection -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
   ```

   The bulk importer creates a new database and a collection with the database name, collection name, and throughput values specified in the App.config file.
## Bulk update data in Azure Cosmos DB
You can update existing documents by using the BulkUpdateAsync API. In this example, you will set the Name field to a new value and remove the Description field from the existing documents. For the full set of supported field update operations, see [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
1. Define the update items along with the corresponding field update operations. In this example, you use SetUpdateOperation to update the Name field and UnsetUpdateOperation to remove the Description field from all the documents. You can also perform other operations, such as incrementing a document field by a specific value, pushing specific values into an array field, or removing a specific value from an array field. To learn about the different methods provided by the bulk update API, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
   ```java
   SetUpdateOperation<String> nameUpdate = new SetUpdateOperation<>("Name","UpdatedDocValue");
   UnsetUpdateOperation descriptionUpdate = new UnsetUpdateOperation("description");

   ArrayList<UpdateOperationBase> updateOperations = new ArrayList<>();
   updateOperations.add(nameUpdate);
   updateOperations.add(descriptionUpdate);

   List<UpdateItem> updateItems = new ArrayList<>(cfg.getNumberOfDocumentsForEachCheckpoint());
   ```

2. Call the updateAll API that generates random documents to be then bulk updated in an Azure Cosmos DB container. You can configure the command-line options to be passed in the CmdLineConfiguration.java file.

   The bulk update API accepts a collection of items to be updated. Each update item specifies the list of field update operations to be performed on a document identified by an ID and a partition key value. For more information, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).

   The updateAll method accepts the following parameters:
|**Parameter** |**Description** |
|---------|---------|
|maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |

**Bulk update response object definition**

The result of the bulk update API call contains the following get methods:

|**Parameter** |**Description** |
|---------|---------|
|int getNumberOfDocumentsUpdated() | The total number of documents that were successfully updated out of the documents supplied to the bulk update API call. |
|double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk update API call. |
|Duration getTotalTimeTaken() | The total time taken by the bulk update API call to complete execution. |
|List\<Exception> getErrors() | Gets the list of operational or networking issues related to the update operation. |
|List\<BulkUpdateFailure> getFailedUpdates() | Gets the list of updates that could not be completed, along with the specific exceptions leading to the failures.|
3. After you have the bulk update application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:

   ```bash
   mvn clean package
   ```

4. After the target dependencies are generated, you can invoke the bulk update application by using the following command:

   ```bash
   java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint *<Fill in your Azure Cosmos DB's endpoint>* -masterKey *<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkUpdateDb -collectionId bulkUpdateColl -operation update -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
   ```
-->
## Performance tips
Consider the following points for better performance when using the bulk executor library: