README.md: 4 additions & 4 deletions
```diff
@@ -8,7 +8,7 @@
 Migrate and Validate Tables between Origin and Target Cassandra Clusters.
 
 > [!IMPORTANT]
-> Please note this job has been tested with spark version [3.5.5](https://archive.apache.org/dist/spark/spark-3.5.5/)
+> Please note this job has been tested with spark version [3.5.6](https://archive.apache.org/dist/spark/spark-3.5.6/)
 
 ## Install as a Container
 - Get the latest image that includes all dependencies from [DockerHub](https://hub.docker.com/r/datastax/cassandra-data-migrator)
@@ -22,14 +22,14 @@ Migrate and Validate Tables between Origin and Target Cassandra Clusters.
 ### Prerequisite
 - **Java11** (minimum) as Spark binaries are compiled with it.
 - **Spark `3.5.x` with Scala `2.13` and Hadoop `3.3`**
-  - Typically installed using [this binary](https://archive.apache.org/dist/spark/spark-3.5.5/spark-3.5.5-bin-hadoop3-scala2.13.tgz) on a single VM (no cluster necessary) where you want to run this job. This simple setup is recommended for most one-time migrations.
+  - Typically installed using [this binary](https://archive.apache.org/dist/spark/spark-3.5.6/spark-3.5.6-bin-hadoop3-scala2.13.tgz) on a single VM (no cluster necessary) where you want to run this job. This simple setup is recommended for most one-time migrations.
 - However we recommend using a Spark Cluster or a Spark Serverless platform like `Databricks` or `Google Dataproc` (that supports the above mentioned versions) for large (e.g. several terabytes) complex migrations OR when CDM is used as a long-term data-transfer utility and not a one-time job.
 
 Spark can be installed by running the following: -
```
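The single-VM install the README points at can be sketched as follows. The version, archive name, and download URL come from the updated link in this diff; the `/opt` install prefix and the environment-variable setup are assumptions for illustration (the actual install commands in the README are truncated in this diff view), so the download and unpack steps are left commented out.

```shell
# Sketch of the single-VM Spark install described above.
# URL components are taken from the diff; /opt is an assumed prefix.
SPARK_VERSION=3.5.6
ARCHIVE="spark-${SPARK_VERSION}-bin-hadoop3-scala2.13"
URL="https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/${ARCHIVE}.tgz"

echo "Would download: ${URL}"
# wget "${URL}"                        # fetch the binary distribution
# tar -xzf "${ARCHIVE}.tgz" -C /opt   # unpack (assumed install prefix)
# export SPARK_HOME="/opt/${ARCHIVE}" # assumed environment setup
# export PATH="${SPARK_HOME}/bin:${PATH}"
```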