|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB for NoSQL|Azure Cosmos DB for NoSQL|• CLI-based; no setup needed. <br/>• Supports large datasets.|
|Offline|[Data Migration Tool](import-data.md)|•JSON/CSV Files<br/>•Azure Cosmos DB for NoSQL<br/>•MongoDB<br/>•SQL Server<br/>•Table Storage<br/>•AWS DynamoDB<br/>•Azure Blob Storage|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB Tables API<br/>•JSON Files |• Easy to set up and supports multiple sources. <br/>• Not suitable for large datasets.|
|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)|•JSON/CSV Files<br/>•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•MongoDB <br/>•SQL Server<br/>•Table Storage<br/>•Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |• Easy to set up and supports multiple sources.<br/>• Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Lack of checkpointing, which means that if an issue occurs during the migration, you need to restart the whole migration process.<br/>• Lack of a dead letter queue, which means that a few erroneous files can stop the entire migration process.|
|Offline|[Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md)|Azure Cosmos DB for NoSQL. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.|• Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Needs a custom Spark setup. <br/>• Spark is sensitive to schema inconsistencies, which can be a problem during migration. <br/>• A minimal PySpark write sketch follows this table.|
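As a minimal sketch of the Spark connector option in the last row above, the following PySpark snippet bulk-loads a DataFrame into an API for NoSQL container. It assumes the Spark 3 OLTP connector package (for example, `com.azure.cosmos.spark:azure-cosmos-spark_3-3_2-12`) is already installed on the cluster; the endpoint, key, database, container, and source path are placeholders, not values from this article.

```python
# Minimal sketch: bulk-load a DataFrame into Azure Cosmos DB for NoSQL with the
# Spark 3 OLTP connector. Endpoint, key, database, container, and source path
# below are placeholders -- substitute your own values.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cosmos-nosql-migration").getOrCreate()

write_config = {
    "spark.cosmos.accountEndpoint": "https://<account-name>.documents.azure.com:443/",
    "spark.cosmos.accountKey": "<account-key>",
    "spark.cosmos.database": "<database-name>",
    "spark.cosmos.container": "<container-name>",
}

# Any Spark-readable source works; a staged CSV file is used here as an example.
source_df = spark.read.option("header", "true").csv("/mnt/staging/source-data.csv")

# Write through the connector; throughput scales with the provisioned RU/s.
(source_df.write
    .format("cosmos.oltp")
    .options(**write_config)
    .mode("APPEND")
    .save())
```

Schema inconsistencies in the source surface as write errors here, which is the sensitivity called out in the table; cleaning or casting columns before the write avoids most of them.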
## Azure Cosmos DB API for Cassandra
|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB API for Cassandra | Azure Cosmos DB API for Cassandra|• CLI-based; no setup needed. <br/>• Supports large datasets.|
|Offline|[cqlsh COPY command](cassandra/migrate-data.md#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB API for Cassandra|• Easy to set up. <br/>• Not suitable for large datasets. <br/>• Works only when the source is a Cassandra table.|
|Offline|[Copy table with Spark](cassandra/migrate-data.md#migrate-data-by-using-spark)|•Apache Cassandra| Azure Cosmos DB API for Cassandra |• Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>• Needs configuration with a custom retry policy to handle throttles. <br/>• A minimal PySpark sketch follows this table.|
|Online|[Dual-write proxy + Spark](cassandra/migrate-data-dual-write-proxy.md)|•Apache Cassandra|•Azure Cosmos DB API for Cassandra|• Supports larger datasets, but requires careful attention to setup and validation. <br/>• Open-source tools, no purchase required.|
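For the "Copy table with Spark" row, the following is a minimal PySpark sketch using the open-source Spark Cassandra connector (assuming, for example, `com.datastax.spark:spark-cassandra-connector_2.12` is installed on the cluster). Host names, keyspace, table names, and credentials are placeholders, and production runs should also configure the custom retry policy described in the linked article to handle rate limiting.

```python
# Minimal sketch: copy a table from an Apache Cassandra source into Azure Cosmos DB
# for Apache Cassandra with the Spark Cassandra connector. All connection values,
# keyspace, and table names below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cassandra-table-copy").getOrCreate()

# Read the source table from the Apache Cassandra cluster.
source_df = (spark.read
    .format("org.apache.spark.sql.cassandra")
    .option("spark.cassandra.connection.host", "<source-cassandra-host>")
    .options(keyspace="<keyspace>", table="<table>")
    .load())

# Target: the Azure Cosmos DB for Apache Cassandra endpoint (port 10350, TLS,
# account name and key used as username/password). The target keyspace and table
# must already exist.
target_options = {
    "spark.cassandra.connection.host": "<account-name>.cassandra.cosmos.azure.com",
    "spark.cassandra.connection.port": "10350",
    "spark.cassandra.connection.ssl.enabled": "true",
    "spark.cassandra.auth.username": "<account-name>",
    "spark.cassandra.auth.password": "<account-key>",
    "keyspace": "<keyspace>",
    "table": "<table>",
}

(source_df.write
    .format("org.apache.spark.sql.cassandra")
    .options(**target_options)
    .mode("append")
    .save())
```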