articles/cosmos-db/cassandra/materialized-views.md (1 addition, 1 deletion)

```diff
@@ -244,7 +244,7 @@ You'll be able to add a column to the base table, but you won't be able to remov
 
 ### Can we create MV on existing base table?
 
-No. Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. Create new table after account is onboarded on which materialized views can be defined. MV on existing table is planned for the future.
+No. Materialized Views can't be created on a table that existed before the account was onboarded to support materialized views. You would need to create a new table with materialized views defined and move the existing data using [container copy jobs](../intra-account-container-copy.md). MV on existing table is planned for the future.
 
 ### What are the conditions on which records won't make it to MV and how to identify such records?
```
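Not part of this change set, but as a conceptual aid to the FAQ above: a materialized view is a projection of the base table, keyed differently and maintained by the service on every write. The sketch below is a hypothetical in-memory model of that behavior (all names are illustrative; real MVs are defined in CQL and maintained server-side):

```python
# Hypothetical in-memory model of a materialized view: a projection of the
# base table that is updated automatically on every write, keyed by a
# different column than the base table's primary key.
class BaseTableWithView:
    def __init__(self, view_key):
        self.rows = {}        # base table: primary key -> row
        self.view = {}        # "materialized view": view_key value -> rows
        self.view_key = view_key

    def upsert(self, pk, row):
        # Remove any stale projection for this primary key before re-adding,
        # mirroring how the service keeps the view consistent with the base.
        old = self.rows.get(pk)
        if old is not None:
            self.view[old[self.view_key]].remove(old)
        self.rows[pk] = row
        self.view.setdefault(row[self.view_key], []).append(row)

t = BaseTableWithView(view_key="city")
t.upsert(1, {"id": 1, "name": "Ada", "city": "Paris"})
t.upsert(2, {"id": 2, "name": "Bob", "city": "Paris"})
t.upsert(1, {"id": 1, "name": "Ada", "city": "Lyon"})  # move Ada to Lyon
```

After the third upsert, the "Paris" view entry contains only Bob, illustrating why the view must be maintained together with the base table rather than bolted onto a pre-existing one.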
articles/cosmos-db/hierarchical-partition-keys.md (1 addition, 1 deletion)

```diff
@@ -408,7 +408,7 @@ For more information, see [Azure Cosmos DB emulator](./local-emulator.md).
 * Support for automation platforms (Azure PowerShell, Azure CLI) is planned and not yet available.
 * In the Data Explorer in the portal, you currently can't view documents in a container with hierarchical partition keys. You can read or edit these documents with the supported .NET v3 or Java v4 SDK version\[s\].
 * You can only specify hierarchical partition keys up to three layers in depth.
-* Hierarchical partition keys can currently only be enabled on new containers. The desired partition key paths must be specified at the time of container creation and can't be changed later.
+* Hierarchical partition keys can currently only be enabled on new containers. The desired partition key paths must be specified at the time of container creation and can't be changed later. To use hierarchical partitions on existing containers, you should create a new container with the hierarchical partition keys set and move the data using [container copy jobs](intra-account-container-copy.md).
 * Hierarchical partition keys are currently supported only for API for NoSQL accounts (API for MongoDB and Cassandra aren't currently supported).
```
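The "up to three layers" limit in the hunk above can be sketched as follows. This is not the Azure SDK; it is an illustrative model of how a document's hierarchical partition key value is the ordered tuple of its path values, with the documented depth limit enforced:

```python
# Illustrative only (not the azure-cosmos SDK): model the documented limit of
# at most three levels for a hierarchical partition key, and show that a
# document's full partition key value is the ordered tuple of its path values.
MAX_LEVELS = 3  # current service limit per the docs

def hierarchical_key(doc, paths):
    if not 1 <= len(paths) <= MAX_LEVELS:
        raise ValueError("hierarchical partition keys support up to three levels")
    # A path like "/tenantId" maps to a top-level property of the document.
    return tuple(doc[p.lstrip("/")] for p in paths)

doc = {"tenantId": "contoso", "userId": "alice", "sessionId": "s1"}
key = hierarchical_key(doc, ["/tenantId", "/userId", "/sessionId"])
```

Because the paths are fixed at container creation, changing them later amounts to recomputing every document's key tuple, which is why the doc change points at moving data to a new container.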
articles/cosmos-db/import-data.md (2 additions, 0 deletions)

```diff
@@ -61,6 +61,8 @@ The Data Migration tool is an open-source solution that imports data to Azure Co
 While the import tool includes a graphical user interface (dtui.exe), it can also be driven from the command-line (dt.exe). In fact, there's an option to output the associated command after setting up an import through the UI. You can transform tabular source data, such as SQL Server or CSV files, to create hierarchical relationships (subdocuments) during import. Keep reading to learn more about source options, sample commands to import from each source, target options, and viewing import results.
 
 > [!NOTE]
+> We recommend using [container copy jobs](intra-account-container-copy.md) for copying data within the same Azure Cosmos DB account.
+>
 > You should only use the Azure Cosmos DB migration tool for small migrations. For large migrations, view our [guide for ingesting data](migration-choices.md).
```
articles/cosmos-db/migration-choices.md (4 additions, 3 deletions)

```diff
@@ -14,9 +14,8 @@ ms.date: 04/02/2022
 
 You can load data from various data sources to Azure Cosmos DB. Since Azure Cosmos DB supports multiple APIs, the targets can be any of the existing APIs. The following are some scenarios where you migrate data to Azure Cosmos DB:
 
-* Move data from one Azure Cosmos DB container to another container in the same database or a different databases.
-* Moving data between dedicated containers to shared database containers.
-* Move data from an Azure Cosmos DB account located in region1 to another Azure Cosmos DB account in the same or a different region.
+* Move data from one Azure Cosmos DB container to another container within the Azure Cosmos DB account (could be in the same database or a different database).
+* Move data from one Azure Cosmos DB account to another Azure Cosmos DB account (could be in the same region or a different region, same subscription or a different one).
 * Move data from a source such as Azure blob storage, a JSON file, Oracle database, Couchbase, DynamoDB to Azure Cosmos DB.
 
 In order to support migration paths from the various sources to the different Azure Cosmos DB APIs, there are multiple solutions that provide specialized handling for each migration path. This document lists the available solutions and describes their advantages and limitations.
@@ -45,6 +44,7 @@ If you need help with capacity planning, consider reading our [guide to estimati
+|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB for NoSQL|Azure Cosmos DB for NoSQL|• CLI-based; No set up needed. <br/>• Supports large datasets.|
 |Offline|[Data Migration Tool](import-data.md)|•JSON/CSV Files<br/>•Azure Cosmos DB for NoSQL<br/>•MongoDB<br/>•SQL Server<br/>•Table Storage<br/>•AWS DynamoDB<br/>•Azure Blob Storage|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB Tables API<br/>•JSON Files |• Easy to set up and supports multiple sources. <br/>• Not suitable for large datasets.|
 |Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)|•JSON/CSV Files<br/>•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•MongoDB <br/>•SQL Server<br/>•Table Storage<br/>•Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |• Easy to set up and supports multiple sources.<br/>• Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>• Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.|
 |Offline|[Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md)|Azure Cosmos DB for NoSQL. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.|• Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Needs a custom Spark setup. <br/>• Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
@@ -83,6 +83,7 @@ If you need help with capacity planning, consider reading our [guide to estimati
+|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB API for Cassandra | Azure Cosmos DB API for Cassandra|• CLI-based; No set up needed. <br/>• Supports large datasets.|
 |Offline|[cqlsh COPY command](cassandra/migrate-data.md#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB API for Cassandra|• Easy to set up. <br/>• Not suitable for large datasets. <br/>• Works only when the source is a Cassandra table.|
 |Offline|[Copy table with Spark](cassandra/migrate-data.md#migrate-data-by-using-spark)|•Apache Cassandra<br/> | Azure Cosmos DB API for Cassandra |• Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>• Needs configuration with a custom retry policy to handle throttles.|
 |Online|[Dual-write proxy + Spark](cassandra/migrate-data-dual-write-proxy.md)|•Apache Cassandra<br/>|•Azure Cosmos DB API for Cassandra <br/>|• Supports larger datasets, but careful attention required for setup and validation. <br/>• Open-source tools, no purchase required.|
```
articles/cosmos-db/partitioning-overview.md (1 addition, 1 deletion)

```diff
@@ -88,7 +88,7 @@ A partition key has two components: **partition key path** and the **partition k
 
 To learn about the limits on throughput, storage, and length of the partition key, see the [Azure Cosmos DB service quotas](concepts-limits.md) article.
 
-Selecting your partition key is a simple but important design choice in Azure Cosmos DB. Once you select your partition key, it is not possible to change it in-place. If you need to change your partition key, you should move your data to a new container with your new desired partition key.
+Selecting your partition key is a simple but important design choice in Azure Cosmos DB. Once you select your partition key, it is not possible to change it in-place. If you need to change your partition key, you should move your data to a new container with your new desired partition key. ([Container copy jobs](intra-account-container-copy.md) help with this process.)
 
 For **all** containers, your partition key should:
```
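The "move your data to a new container" step in the hunk above is, conceptually, a regroup of every document under its value for the new partition key path. The sketch below models containers as plain dicts; in practice this is done server-side by container copy jobs or client-side with the Azure Cosmos DB SDK, neither of which is shown here:

```python
# Client-side sketch (not the SDK, not container copy jobs) of re-partitioning:
# every document from the old container is regrouped under its value for the
# new partition key path in a fresh container.
def repartition(items, new_pk_path):
    prop = new_pk_path.lstrip("/")
    new_container = {}
    for item in items:
        # Group each document by its value at the new partition key path.
        new_container.setdefault(item[prop], []).append(item)
    return new_container

old_items = [
    {"id": "1", "userId": "alice", "region": "eu"},
    {"id": "2", "userId": "bob", "region": "us"},
    {"id": "3", "userId": "alice", "region": "us"},
]
by_user = repartition(old_items, "/userId")  # was partitioned by /region
```

The regrouping is why the change can't happen in place: the physical placement of every item depends on the key, so a new container must be populated from scratch.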
articles/cosmos-db/set-throughput.md (1 addition, 1 deletion)

```diff
@@ -85,7 +85,7 @@ You can combine the two models. Provisioning throughput on both the database and
 * The container named *B* is guaranteed to get the *"P"* RUs throughput all the time. It's backed by SLAs.
 
 > [!NOTE]
-> A container with provisioned throughput cannot be converted to shared database container. Conversely a shared database container cannot be converted to have a dedicated throughput.
+> A container with provisioned throughput cannot be converted to shared database container. Conversely a shared database container cannot be converted to have a dedicated throughput. You will need to move the data to a container with the desired throughput setting. ([Container copy jobs](intra-account-container-copy.md) for NoSQL and Cassandra APIs help with this process.)
```
articles/cosmos-db/unique-keys.md (1 addition, 1 deletion)

```diff
@@ -41,7 +41,7 @@ You can define unique keys only when you create an Azure Cosmos DB container. A
 
 * You can't update an existing container to use a different unique key. In other words, after a container is created with a unique key policy, the policy can't be changed.
 
-* To set a unique key for an existing container, create a new container with the unique key constraint. Use the appropriate data migration tool to move the data from the existing container to the new container. For SQL containers, use the [Data Migration tool](import-data.md) to move data. For MongoDB containers, use [mongoimport.exe or mongorestore.exe](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) to move data.
+* To set a unique key for an existing container, create a new container with the unique key constraint. Use the appropriate data migration tool to move the data from the existing container to the new container. For SQL containers, use [container copy jobs](intra-account-container-copy.md) to move data. For MongoDB containers, use [mongoimport.exe or mongorestore.exe](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) to move data.
 
 * A unique key policy can have a maximum of 16 path values. For example, the values can be `/firstName`, `/lastName`, and `/address/zipCode`. Each unique key policy can have a maximum of 10 unique key constraints or combinations. In the previous example, first name, last name, and email address together are one constraint. This constraint uses 3 out of the 16 possible paths.
```
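The unique key limits quoted in the last context line above (at most 10 constraints, at most 16 paths per policy) can be modeled with a small sketch. This is an illustrative in-memory stand-in, not the Cosmos DB SDK, and it enforces uniqueness in a single scope rather than per logical partition as the real service does:

```python
# Illustrative in-memory model of a unique key policy (not the Cosmos DB SDK):
# a policy is a list of constraints, each constraint a list of paths; the
# combination of values at a constraint's paths must be unique on insert.
class UniqueKeyContainer:
    def __init__(self, unique_keys):
        total_paths = sum(len(k) for k in unique_keys)
        if len(unique_keys) > 10 or total_paths > 16:
            raise ValueError("max 10 constraints and 16 paths per policy")
        self.unique_keys = unique_keys
        self.seen = {i: set() for i in range(len(unique_keys))}
        self.items = []

    def insert(self, doc):
        combos = []
        for i, paths in enumerate(self.unique_keys):
            combo = tuple(doc.get(p.lstrip("/")) for p in paths)
            if combo in self.seen[i]:
                raise ValueError(f"unique key violation on {paths}")
            combos.append((i, combo))
        # Only record the combinations once every constraint has passed.
        for i, combo in combos:
            self.seen[i].add(combo)
        self.items.append(doc)

c = UniqueKeyContainer([["/firstName", "/lastName", "/email"]])
c.insert({"firstName": "Ada", "lastName": "Lovelace", "email": "ada@example.com"})
```

Because the `seen` index is built as documents arrive, it also shows why the policy can't be added to a populated container: existing data may already violate it, so the documented path is a new container plus a data move.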
0 commit comments