
Commit ec117dc

Merge pull request #280412 from seesharprun/cosmos-remove-html-comments-2
Cosmos DB | Remove HTML comments from articles (2/2)
2 parents e6b7bda + d54b428 commit ec117dc

File tree

5 files changed: +0 -127 lines changed


articles/cosmos-db/cassandra/access-data-spring-data-app.md

Lines changed: 0 additions & 4 deletions
@@ -134,8 +134,6 @@ To learn more about Spring and Azure, continue to the Spring on Azure documentat
 
 For more information about using Azure with Java, see the [Azure for Java Developers] and the [Working with Azure DevOps and Java].
 
-<!-- URL List -->
-
 [Azure for Java Developers]: ../index.yml
 [free Azure account]: https://azure.microsoft.com/pricing/free-trial/
 [Working with Azure DevOps and Java]: /azure/devops/

@@ -145,8 +143,6 @@ For more information about using Azure with Java, see the [Azure for Java Develo
 [Spring Initializr]: https://start.spring.io/
 [Spring Framework]: https://spring.io/
 
-<!-- IMG List -->
-
 [COSMOSDB01]: media/access-data-spring-data-app/create-cosmos-db-01.png
 [COSMOSDB02]: media/access-data-spring-data-app/create-cosmos-db-02.png
 [COSMOSDB03]: media/access-data-spring-data-app/create-cosmos-db-03.png

articles/cosmos-db/cassandra/support.md

Lines changed: 0 additions & 6 deletions
@@ -240,19 +240,13 @@ Azure Cosmos DB for Apache Cassandra is a managed service platform. The platform
 
 ## CQL shell
 
-<!-- You can open a hosted native Cassandra shell (CQLSH v5.0.1) directly from the Data Explorer in the [Azure portal](../data-explorer.md) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/). Before enabling the CQL shell, you must [enable the Notebooks](../notebooks-overview.md) feature in your account (if not already enabled, you will be prompted when clicking on `Open Cassandra Shell`).
-
-:::image type="content" source="./media/support/cqlsh.png" alt-text="Open CQLSH"::: -->
-
 You can connect to the API for Cassandra in Azure Cosmos DB by using the CQLSH installed on a local machine. It comes with Apache Cassandra 3.11 and works out of the box by setting the environment variables. The following sections include the instructions to install, configure, and connect to API for Cassandra in Azure Cosmos DB, on Windows or Linux using CQLSH.
 
 > [!WARNING]
 > Connections to Azure Cosmos DB for Apache Cassandra will not work with DataStax Enterprise (DSE) or Cassandra 4.0 versions of CQLSH. Please ensure you use only v3.11 open source Apache Cassandra versions of CQLSH when connecting to API for Cassandra.
 
 **Windows:**
 
-<!-- If using windows, we recommend you enable the [Windows filesystem for Linux](/windows/wsl/install-win10#install-the-windows-subsystem-for-linux). You can then follow the linux commands below. -->
-
 1. Install [Python 3](https://www.python.org/downloads/windows/)
 1. Install PIP
 1. Before install PIP, download the get-pip.py file.
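The hunk above keeps the guidance that CQLSH from Apache Cassandra 3.11 "works out of the box by setting the environment variables." As a rough sketch of what that configuration amounts to (the TLS variable names, the 10350 port, and the sample account name follow common Cosmos DB for Apache Cassandra conventions rather than anything stated in this commit, so treat them as assumptions):

```shell
# Sketch only: SSL_VERSION and SSL_VALIDATE are the TLS settings CQLSH 3.11
# reads from the environment; 10350 is the conventional Cosmos DB Cassandra
# port. ACCOUNT_NAME is a placeholder, not a value from this commit.
export SSL_VERSION=TLSv1_2
export SSL_VALIDATE=false
ACCOUNT_NAME="myaccount"

# Compose the connection command; in practice you would run it directly,
# adding -p with the account's primary key.
CMD="cqlsh $ACCOUNT_NAME.cassandra.cosmos.azure.com 10350 -u $ACCOUNT_NAME --ssl"
echo "$CMD"
```

Per the warning in the hunk, this only applies to the v3.11 open-source CQLSH; DSE or Cassandra 4.0 builds will not connect.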

articles/cosmos-db/includes/cosmos-db-tutorial-global-distribution-portal.md

Lines changed: 0 additions & 18 deletions
@@ -33,28 +33,10 @@ For delivering low-latency to end users, it is recommended that you deploy both
 
 For BCDR, it is recommended to add regions based on the region pairs described in the [Cross-region replication in Azure: Business continuity and disaster recovery](../../availability-zones/cross-region-replication-azure.md) article.
 
-<!--
-
-## <a id="selectwriteregion"></a>Select the write region
-
-While all regions associated with your Azure Cosmos DB database account can serve reads (both, single item as well as multi-item paginated reads) and queries, only one region can actively receive the write (insert, upsert, replace, delete) requests. To set the active write region, do the following
-
-
-1. In the **Azure Cosmos DB** blade, select the database account to modify.
-2. In the account blade, click **Replicate data globally** from the menu.
-3. In the **Replicate data globally** blade, click **Manual Failover** from the top bar.
-![Change the write region under Azure Cosmos DB Account > Replicate data globally > Manual Failover][2]
-4. Select a read region to become the new write region, click the checkbox to confirm triggering a failover, and click OK
-![Change the write region by selecting a new region in list under Azure Cosmos DB Account > Replicate data globally > Manual Failover][3]
-
---->
-
-<!--Image references-->
 [1]: ./media/cosmos-db-tutorial-global-distribution-portal/azure-cosmos-db-add-region.png
 [2]: ./media/cosmos-db-tutorial-global-distribution-portal/azure-cosmos-db-manual-failover-1.png
 [3]: ./media/cosmos-db-tutorial-global-distribution-portal/azure-cosmos-db-manual-failover-2.png
 
-<!--Reference style links - using these makes the source content way more readable than using inline links-->
 [consistency]: ../consistency-levels.md
 [azureregions]: https://azure.microsoft.com/regions/#services
 [offers]: https://azure.microsoft.com/pricing/details/cosmos-db/

articles/cosmos-db/mongodb/vcore/ai-advertisement-generation.md

Lines changed: 0 additions & 2 deletions
@@ -19,8 +19,6 @@ In this guide, we demonstrate how to create dynamic advertising content that res
 - **OpenAI Embeddings**: Utilizes the cutting-edge embeddings from OpenAI to generate vectors for inventory descriptions. This approach allows for more nuanced and semantically rich matches between the inventory and the advertisement content.
 - **Content Generation**: Employs OpenAI's advanced language models to generate engaging, trend-focused advertisements. This method ensures that the content is not only relevant but also captivating to the target audience.
 
-<!-- > [!VIDEO https://www.youtube.com/live/MLY5Pc_tSXw?si=fQmAuQcZkVauhmu-&t=1078] -->
-
 ## Prerequisites
 - Azure OpenAI: Let's setup the Azure OpenAI resource. Access to this service is currently available by application only. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Once you have access, complete the following steps:
 - Create an Azure OpenAI resource following this [quickstart](../../../ai-services/openai/how-to/create-resource.md?pivots=web-portal).

articles/cosmos-db/nosql/bulk-executor-java.md

Lines changed: 0 additions & 97 deletions
@@ -85,103 +85,6 @@ com.azure.cosmos.examples.bulk.async.SampleBulkQuickStartAsync
 
 [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItemsWithResponseProcessingAndExecutionOptions)]
 
-
-<!-- The importAll method accepts the following parameters:
-
-|**Parameter** |**Description** |
-|---------|---------|
-|isUpsert | A flag to enable upsert of the documents. If a document with given ID already exists, it's updated. |
-|disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it is set to true. |
-|maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
-
-**Bulk import response object definition**
-The result of the bulk import API call contains the following get methods:
-
-|**Parameter** |**Description** |
-|---------|---------|
-|int getNumberOfDocumentsImported() | The total number of documents that were successfully imported out of the documents supplied to the bulk import API call. |
-|double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk import API call. |
-|Duration getTotalTimeTaken() | The total time taken by the bulk import API call to complete execution. |
-|List\<Exception> getErrors() | Gets the list of errors if some documents out of the batch supplied to the bulk import API call failed to get inserted. |
-|List\<Object> getBadInputDocuments() | The list of bad-format documents that were not successfully imported in the bulk import API call. User should fix the documents returned and retry import. Bad-formatted documents include documents whose ID value is not a string (null or any other datatype is considered invalid). |
-
-<!-- 5. After you have the bulk import application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
-
-```bash
-mvn clean package
-```
-
-6. After the target dependencies are generated, you can invoke the bulk importer application by using the following command:
-
-```bash
-java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint *<Fill in your Azure Cosmos DB's endpoint>* -masterKey *<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkImportDb -collectionId bulkImportColl -operation import -shouldCreateCollection -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
-```
-
-The bulk importer creates a new database and a collection with the database name, collection name, and throughput values specified in the App.config file.
-
-## Bulk update data in Azure Cosmos DB
-
-You can update existing documents by using the BulkUpdateAsync API. In this example, you will set the Name field to a new value and remove the Description field from the existing documents. For the full set of supported field update operations, see [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
-
-1. Defines the update items along with corresponding field update operations. In this example, you will use SetUpdateOperation to update the Name field and UnsetUpdateOperation to remove the Description field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
-
-```java
-SetUpdateOperation<String> nameUpdate = new SetUpdateOperation<>("Name","UpdatedDocValue");
-UnsetUpdateOperation descriptionUpdate = new UnsetUpdateOperation("description");
-
-ArrayList<UpdateOperationBase> updateOperations = new ArrayList<>();
-updateOperations.add(nameUpdate);
-updateOperations.add(descriptionUpdate);
-
-List<UpdateItem> updateItems = new ArrayList<>(cfg.getNumberOfDocumentsForEachCheckpoint());
-IntStream.range(0, cfg.getNumberOfDocumentsForEachCheckpoint()).mapToObj(j -> {
-return new UpdateItem(Long.toString(prefix + j), Long.toString(prefix + j), updateOperations);
-}).collect(Collectors.toCollection(() -> updateItems));
-```
-
-2. Call the updateAll API that generates random documents to be then bulk imported into an Azure Cosmos DB container. You can configure the command-line configurations to be passed in CmdLineConfiguration.java file.
-
-```java
-BulkUpdateResponse bulkUpdateResponse = bulkExecutor.updateAll(updateItems, null)
-```
-
-The bulk update API accepts a collection of items to be updated. Each update item specifies the list of field update operations to be performed on a document identified by an ID and a partition key value. for more information, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor):
-
-```java
-public BulkUpdateResponse updateAll(
-Collection<UpdateItem> updateItems,
-Integer maxConcurrencyPerPartitionRange) throws DocumentClientException;
-```
-
-The updateAll method accepts the following parameters:
-
-|**Parameter** |**Description** |
-|---------|---------|
-|maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
-
-**Bulk import response object definition**
-The result of the bulk import API call contains the following get methods:
-
-|**Parameter** |**Description** |
-|---------|---------|
-|int getNumberOfDocumentsUpdated() | The total number of documents that were successfully updated out of the documents supplied to the bulk update API call. |
-|double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk update API call. |
-|Duration getTotalTimeTaken() | The total time taken by the bulk update API call to complete execution. |
-|List\<Exception> getErrors() | Gets the list of operational or networking issues related to the update operation. |
-|List\<BulkUpdateFailure> getFailedUpdates() | Gets the list of updates, which could not be completed along with the specific exceptions leading to the failures.|
-
-3. After you have the bulk update application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
-
-```bash
-mvn clean package
-```
-
-4. After the target dependencies are generated, you can invoke the bulk update application by using the following command:
-
-```bash
-java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint **<Fill in your Azure Cosmos DB's endpoint>* -masterKey **<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkUpdateDb -collectionId bulkUpdateColl -operation update -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
-``` -->
-
 ## Performance tips
 
 Consider the following points for better performance when using bulk executor library:
