## Key differences
* Azure Cosmos DB has an `id` field within each document, whereas Couchbase stores the document ID as part of the bucket. The `id` field is unique within a partition.
* Azure Cosmos DB scales by using partitioning, also called sharding: it splits the data into multiple partitions based on the partition key property that you provide. You can choose a partition key that optimizes read operations, write operations, or both. To learn more, see the [partitioning](./partition-data.md) article.
* In Azure Cosmos DB, the top-level hierarchy doesn't need to denote the collection, because the collection name already exists. This feature makes the JSON structure much simpler. The following example shows the differences in the data model between Couchbase and Azure Cosmos DB:
**Couchbase**: Document ID = "99FF4444"
```json
{
    ...
}
```
**Azure Cosmos DB**: Refer to the `id` field within the document, as shown below:
```json
{
    ...
}
```
Azure Cosmos DB has the following SDKs to support different Java frameworks:
* Async SDK
* Spring Boot SDK
The following sections describe when to use each of these SDKs. Consider an example where we have three types of workloads:
## Couchbase as a document repository & Spring Data-based custom queries
If the workload that you're migrating is based on the Spring Boot SDK, use the following steps:
1. Add application properties under resources and specify the following. Make sure to replace the URL, key, and database name parameters:
```properties
azure.cosmosdb.uri=<your-cosmosDB-URL>
azure.cosmosdb.key=<your-cosmosDB-key>
azure.cosmosdb.database=<your-cosmosDB-dbName>
```
1. Define the name of the collection in the model. You can also specify further annotations, for example, the ID and partition key, to denote them explicitly:
```java
// Spring Data Cosmos DB model: @Document and @PartitionKey come from the SDK,
// @Id is Spring Data's identifier annotation.
@Document(collection = "mycollection")
public class User {

    @Id
    private String id;

    private String firstName;

    @PartitionKey
    private String lastName;
}
```
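The CRUD snippets below work against a Spring Data repository for this model. The article's own repository definition isn't visible in this excerpt; the following is a minimal sketch, assuming the `spring-data-cosmosdb` `DocumentDbRepository` base interface (the name `UserRepository` is illustrative):

```java
import com.microsoft.azure.spring.data.cosmosdb.repository.DocumentDbRepository;
import org.springframework.stereotype.Repository;

// _repo in the snippets below is an injected instance of a repository like this one.
@Repository
public interface UserRepository extends DocumentDbRepository<User, String> {
}
```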
The following are the code snippets for CRUD operations:
### Insert and update operations
Here *_repo* is the repository object and *doc* is an object of the POJO class. You can use `.save` to insert, or to upsert if a document with the specified ID is found. The following code snippet shows how to insert or update the doc object:
```_repo.save(doc);```
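For instance, a minimal sketch of an insert/upsert, assuming standard getters and setters are generated for the `User` model (the values are placeholders):

```java
// Populate the document; the id plus the partition key field (lastName) identify the record.
User doc = new User();
doc.setId("99FF4444");
doc.setFirstName("Franklin");
doc.setLastName("Ikeda");

// Inserts the document, or upserts it if a document with this id already exists.
_repo.save(doc);
```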
### Delete operation
Consider the following code snippet, where the doc object must have the ID and the partition key set so that the object can be located and deleted:
```_repo.delete(doc);```
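For example, a sketch of a delete, again assuming generated setters on the `User` model and placeholder values:

```java
// Only the id and the partition key field (lastName) need to be set
// for the repository to locate and delete the document.
User doc = new User();
doc.setId("99FF4444");
doc.setLastName("Ikeda");
_repo.delete(doc);
```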
### Read operation
You can read the document with or without specifying the partition key. If you don't specify the partition key, it is treated as a cross-partition query. Consider the following code samples: the first one performs the operation by using the ID and the partition key field, and the second one uses a regular field without specifying the partition key field.
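The article's samples aren't visible in this excerpt; the following is a rough sketch of the two read patterns, assuming Spring Data derived query methods declared on the hypothetical `UserRepository` above (method names and values are illustrative):

```java
// Declared on the repository interface as derived query methods:
//   List<User> findByIdAndLastName(String id, String lastName);
//   List<User> findByFirstName(String firstName);

// Read by id and the partition key field (lastName): targets a single partition.
List<User> byIdAndPartitionKey = _repo.findByIdAndLastName("99FF4444", "Ikeda");

// Read by a regular field without the partition key: runs as a cross-partition query.
List<User> byRegularField = _repo.findByFirstName("Franklin");
```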
N1QL queries are the way to define queries in Couchbase.
You will notice the following changes in your N1QL queries:
* You don't need to use the META keyword or refer to the first-level document. Instead, you can create your own reference to the container. In this example, it is "c" (it can be anything). This reference is used as a prefix for all the first-level fields, for example, c.id, c.country, and so on.
* Instead of "ANY", you can now do a join on the subdocument and refer to it with a dedicated alias such as "m". Once you have created an alias for a subdocument, you need to use that alias, for example, m.Country.
* The sequence of OFFSET is different in an Azure Cosmos DB query: you specify OFFSET first and then LIMIT, as shown in the sketch after this list.
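As a rough illustration of these points, here is a hypothetical Azure Cosmos DB SQL query wrapped in a `SqlQuerySpec`, assuming the `azure-cosmos` 4.x SDK types (the container alias "c", the subdocument field `addresses`, the alias "m", and the filter value are assumptions, not taken from the original article):

```java
import com.azure.cosmos.models.SqlQuerySpec;

// "c" is our own reference to the container, "m" aliases the joined subdocument,
// and OFFSET is specified before LIMIT.
SqlQuerySpec querySpec = new SqlQuerySpec(
    "SELECT c.id, c.country, m.Country " +
    "FROM c JOIN m IN c.addresses " +
    "WHERE m.Country = 'United States' " +
    "OFFSET 0 LIMIT 10");
```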
It is recommended not to use the Spring Data SDK if you mostly rely on custom-defined queries, because it adds unnecessary overhead on the client side while passing the query to Azure Cosmos DB. Instead, use the direct Async Java SDK, which is more efficient in this case.
### Read operation
Use the Async Java SDK with the following steps:
1. Configure the following dependency in the pom.xml file:
1. Create a connection object for Azure Cosmos DB by using the `ConnectionBuilder` method as shown in the following example. Make sure you put this declaration into a bean so that the following code runs only once:
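The article's `ConnectionBuilder` example isn't included in this excerpt. As a rough, non-authoritative sketch of the same idea using the `azure-cosmos` 4.x async client (a swapped-in API: `CosmosClientBuilder` is that SDK's builder; the endpoint and key values are placeholders):

```java
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosClientBuilder;

// Build a single shared async client; declaring it once in a bean avoids
// re-establishing connections on every request.
CosmosAsyncClient cosmosAsyncClient = new CosmosClientBuilder()
        .endpoint("<your-cosmosDB-URL>")
        .key("<your-cosmosDB-key>")
        .buildAsyncClient();
```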
Now, with the help of the above method, you can pass multiple queries and execute them without any hassle. If you need to execute one large query that can be split into multiple queries, try the following code snippet instead of the previous one:
```java
for (SqlQuerySpec query : queries) {
    // The body of the loop (executing each split query) is elided in this excerpt.
}
```
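One possible shape for that loop body, assuming the `azure-cosmos` 4.x `CosmosAsyncContainer` API and a `container` reference obtained from the client above (all names are illustrative):

```java
import com.azure.cosmos.models.CosmosQueryRequestOptions;

// Run each split query against the container and consume the results asynchronously;
// queryItems returns a reactive CosmosPagedFlux<User>.
for (SqlQuerySpec query : queries) {
    container.queryItems(query, new CosmosQueryRequestOptions(), User.class)
             .byPage()
             .subscribe(page -> System.out.println(page.getResults().size()));
}
```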
This is a simple type of workload in which you can perform lookups instead of queries. Use the following steps for key/value pairs:
1. Consider having "/id" as the primary key, which makes sure that you can perform the lookup operation directly in the specific partition. Create a collection and specify "/id" as the partition key.
1. Switch off indexing completely. Because you will execute lookup operations, there is no point in carrying the indexing overhead. To turn off indexing, sign in to the Azure portal and go to your Azure Cosmos DB account. Open the **Data Explorer**, select your **Database** and the **Container**. Open the **Scale & Settings** tab and select the **Indexing Policy**. Currently, the indexing policy looks like the following:

```json
{
    "indexingMode": "consistent",
    "includedPaths": [
        {
            "path": "/*"
        }
    ]
}
```
Now you can execute the CRUD operations as follows:
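The article's snippets for this workload aren't shown in this excerpt. The following is a hedged sketch of point operations keyed on `/id`, assuming the `azure-cosmos` 4.x async API and the `cosmosAsyncClient` from the earlier connection sketch (database, container, and document values are placeholders):

```java
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.PartitionKey;

CosmosAsyncContainer container = cosmosAsyncClient
        .getDatabase("<your-cosmosDB-dbName>")
        .getContainer("mycollection");

// Create: insert a new document. doc is a populated User whose id is also the partition key.
container.createItem(doc).subscribe();

// Read: a point lookup by id and partition key; no query (and no index) is involved.
container.readItem("99FF4444", new PartitionKey("99FF4444"), User.class)
         .subscribe(response -> System.out.println(response.getStatusCode()));

// Delete: remove the document by id and partition key.
container.deleteItem("99FF4444", new PartitionKey("99FF4444"), new CosmosItemRequestOptions())
         .subscribe();
```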
There are two ways to migrate data:
* **Use Azure Data Factory:** This is the most recommended method to migrate the data. Configure the source as Couchbase and the sink as Azure Cosmos DB SQL API. For detailed steps, see the [Azure Cosmos DB Data Factory connector](../data-factory/connector-azure-cosmos-db.md) article.
* **Use the Azure Cosmos DB data import tool:** This option is recommended for migrating smaller amounts of data by using VMs. For detailed steps, see the [Data importer](./import-data.md) article.