Commit b510c20

DOC-3707 - Attributes replacement and nav fixes (#193)

* begin attribute replacement
* continuing find and replace
* finish replacement
* add one page missing from nav
* separate nav

1 parent 126ad97 commit b510c20


47 files changed (+822, -838 lines)

antora.yml

Lines changed: 17 additions & 13 deletions
@@ -8,19 +8,23 @@ nav:

 asciidoc:
   attributes:
+    company: 'DataStax'
     product: 'Zero Downtime Migration'
-    zdm-product: 'Zero Downtime Migration'
-    zdm-shortproduct: 'ZDM'
-    zdm-proxy: 'ZDM Proxy'
-    zdm-utility: 'ZDM Utility'
-    zdm-automation: 'ZDM Proxy Automation'
-    cstar-data-migrator: 'Cassandra Data Migrator'
+    product-short: 'ZDM'
+    product-proxy: 'ZDM Proxy'
+    product-utility: 'ZDM Utility'
+    product-automation: 'ZDM Proxy Automation'
+    product-demo: 'ZDM Demo Client'
     dsbulk-migrator: 'DSBulk Migrator'
     dsbulk-loader: 'DSBulk Loader'
-    db-serverless: 'Serverless (Non-Vector)'
-    db-serverless-vector: 'Serverless (Vector)'
-    db-classic: 'Classic'
-    astra-cli: 'Astra CLI'
-    url-astra: 'https://astra.datastax.com'
-    link-astra-portal: '{url-astra}[{astra_ui}]'
-    astra-db-serverless: 'Astra DB Serverless'
+    cass: 'Apache Cassandra'
+    cass-short: 'Cassandra'
+    cass-reg: 'Apache Cassandra(R)'
+    cass-migrator: 'Cassandra Data Migrator'
+    cass-migrator-short: 'CDM'
+    dse: 'DataStax Enterprise (DSE)'
+    dse-short: 'DSE'
+    astra-db: 'Astra DB'
+    astra-ui: 'Astra Portal'
+    astra-url: 'https://astra.datastax.com'
+    support-url: 'https://support.datastax.com'
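
For context on what these renames do: AsciiDoc substitutes each `{name}` reference at build time with the value defined here, so renaming the `zdm-*` attributes to `product-*` has to be mirrored in every page that references them, which is what the bulk of this commit does. As a minimal sketch of how a page consumes the new names (the sentence below is illustrative, not taken from this commit):

----
// Hypothetical page source written against the new attribute names
Deploy the {product-proxy} with the {product-automation}, then open the {astra-ui} at {astra-url}.
----

With the values above, this renders as "Deploy the ZDM Proxy with the ZDM Proxy Automation, then open the Astra Portal at https://astra.datastax.com."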

modules/ROOT/nav.adoc

Lines changed: 39 additions & 43 deletions
@@ -1,56 +1,52 @@
-* Zero Downtime Migration
-** xref:introduction.adoc[]
-** xref:components.adoc[]
+.{product}
+* xref:introduction.adoc[]
+* xref:components.adoc[]
+* Planning
 ** xref:preliminary-steps.adoc[]
-*** xref:feasibility-checklists.adoc[]
-*** xref:deployment-infrastructure.adoc[]
-*** xref:create-target.adoc[]
-*** xref:rollback.adoc[]
-//phase 1
+** xref:feasibility-checklists.adoc[]
+** xref:deployment-infrastructure.adoc[]
+** xref:create-target.adoc[]
+** xref:rollback.adoc[]
+* Phase 1
 ** xref:phase1.adoc[]
-*** xref:setup-ansible-playbooks.adoc[]
-*** xref:deploy-proxy-monitoring.adoc[]
-*** xref:tls.adoc[]
-*** xref:connect-clients-to-proxy.adoc[]
-*** xref:metrics.adoc[]
-*** xref:manage-proxy-instances.adoc[]
-//phase 2
+** xref:setup-ansible-playbooks.adoc[]
+** xref:deploy-proxy-monitoring.adoc[]
+** xref:tls.adoc[]
+** xref:connect-clients-to-proxy.adoc[]
+** xref:metrics.adoc[]
+** xref:manage-proxy-instances.adoc[]
+* Phase 2
 ** xref:migrate-and-validate-data.adoc[]
-*** xref:cassandra-data-migrator.adoc[]
-*** https://docs.datastax.com/en/dsbulk/overview/dsbulk-about.html[DSBulk Loader]
-//phase 3
+** xref:cassandra-data-migrator.adoc[{cass-migrator}]
+** xref:dsbulk-migrator.adoc[{dsbulk-migrator}]
+** https://docs.datastax.com/en/dsbulk/overview/dsbulk-about.html[{dsbulk-loader}]
+* Phase 3
 ** xref:enable-async-dual-reads.adoc[]
-//phase 4
+* Phase 4
 ** xref:change-read-routing.adoc[]
-//phase 5
+* Phase 5
 ** xref:connect-clients-to-target.adoc[]
-
+* References
 ** Troubleshooting
-*** xref:troubleshooting.adoc[]
+*** xref:troubleshooting.adoc[]
 *** xref:troubleshooting-tips.adoc[]
 *** xref:troubleshooting-scenarios.adoc[]
-
+** xref:contributions.adoc[]
 ** xref:faqs.adoc[]
-
 ** xref:glossary.adoc[]
-
-** xref:contributions.adoc[]
-
 ** xref:release-notes.adoc[]
 
-* {cstar-data-migrator}
-** xref:cdm-overview.adoc[]
-** xref:cdm-steps.adoc[Migrate data]
-
-* {dsbulk-loader}
-** https://docs.datastax.com/en/dsbulk/overview/dsbulk-about.html[Overview]
-** https://docs.datastax.com/en/dsbulk/installing/install.html[Installing DataStax Bulk Loader]
-** Loading and unloading data
-*** https://docs.datastax.com/en/dsbulk/getting-started/simple-load.html[Loading data without a configuration file]
-*** https://docs.datastax.com/en/dsbulk/getting-started/simple-unload.html[Unloading data without a configuration file]
-*** https://docs.datastax.com/en/dsbulk/developing/loading-unloading-vector-data.html[Loading and unloading vector data]
-** Loading and unloading data examples
-*** https://docs.datastax.com/en/dsbulk/reference/load.html[Loading data examples]
-*** https://docs.datastax.com/en/dsbulk/reference/unload.html[Unloading data examples]
-** https://docs.datastax.com/en/dsbulk/reference/dsbulk-cmd.html#escaping-and-quoting-command-line-arguments[Escaping and quoting command line arguments]
-
+.{cass-migrator}
+* xref:cdm-overview.adoc[{cass-migrator-short} overview]
+* xref:cdm-steps.adoc[Use {cass-migrator-short} to migrate data]
+
+.{dsbulk-loader}
+* https://docs.datastax.com/en/dsbulk/overview/dsbulk-about.html[{dsbulk-loader}]
+* https://docs.datastax.com/en/dsbulk/installing/install.html[Installing {dsbulk-loader}]
+* Loading and unloading data
+** https://docs.datastax.com/en/dsbulk/getting-started/simple-load.html[Loading data without a configuration file]
+** https://docs.datastax.com/en/dsbulk/getting-started/simple-unload.html[Unloading data without a configuration file]
+** https://docs.datastax.com/en/dsbulk/developing/loading-unloading-vector-data.html[Loading and unloading vector data]
+** https://docs.datastax.com/en/dsbulk/reference/load.html[Loading data examples]
+** https://docs.datastax.com/en/dsbulk/reference/unload.html[Unloading data examples]
+* https://docs.datastax.com/en/dsbulk/reference/dsbulk-cmd.html#escaping-and-quoting-command-line-arguments[Escaping and quoting command line arguments]
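
A note on the nav syntax that drives this restructuring: in an Antora nav file, a line starting with `.` is an AsciiDoc list title, and each titled list becomes its own tree in the component's navigation menu. Splitting the single flat list into three titled lists is what makes {cass-migrator} and {dsbulk-loader} appear as separate nav sections instead of nested bullets. The general pattern (title and targets here are placeholders):

----
.List title shown as a nav section heading
* xref:page-a.adoc[]
** xref:page-b.adoc[Optional custom link text]
----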

modules/ROOT/pages/cassandra-data-migrator.adoc

Lines changed: 8 additions & 8 deletions
@@ -1,35 +1,35 @@
-= {cstar-data-migrator}
+= {cass-migrator}
 :page-aliases: cdm-parameters.adoc

-Use {cstar-data-migrator} to migrate and validate tables between origin and target Cassandra clusters, with available logging and reconciliation support.
+Use {cass-migrator} to migrate and validate tables between origin and target {cass-short} clusters, with available logging and reconciliation support.

 [[cdm-prerequisites]]
-== {cstar-data-migrator} prerequisites
+== {cass-migrator} prerequisites

 include::partial$cdm-prerequisites.adoc[]

 [[cdm-install-as-container]]
-== Install {cstar-data-migrator} as a Container
+== Install {cass-migrator} as a Container

 include::partial$cdm-install-as-container.adoc[]

 [[cdm-install-as-jar]]
-== Install {cstar-data-migrator} as a JAR file
+== Install {cass-migrator} as a JAR file

 include::partial$cdm-install-as-jar.adoc[]

 [[cdm-build-jar-local]]
-== Build {cstar-data-migrator} JAR for local development (optional)
+== Build {cass-migrator} JAR for local development (optional)

 include::partial$cdm-build-jar-local.adoc[]

 [[cdm-steps]]
-== Use {cstar-data-migrator}
+== Use {cass-migrator}

 include::partial$use-cdm-migrator.adoc[]

 [[cdm-validation-steps]]
-== Use {cstar-data-migrator} steps in validation mode
+== Use {cass-migrator} steps in validation mode

 include::partial$cdm-validation-steps.adoc[]
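
This page is assembled almost entirely from partials: each `[[id]]` line defines an anchor that other pages can target, for example `xref:cassandra-data-migrator.adoc#cdm-steps[]`, and the `partial$` prefix resolves to the module's `partials/` directory. A generic sketch of the pattern (names are placeholders):

----
[[my-anchor]]
== Section heading

include::partial$my-content.adoc[]
----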

modules/ROOT/pages/cdm-overview.adoc

Lines changed: 7 additions & 7 deletions
@@ -1,25 +1,25 @@
-= Overview
+= {cass-migrator} ({cass-migrator-short}) overview

-Cassandra Data Migrator (CDM) is a tool designed for migrating and validating data between origin and target Apache Cassandra-compatible clusters. It facilitates the transfer of data, creating multiple jobs at once that can access the Cassandra cluster concurrently. This tool is also useful when dealing with large datasets and requires careful configuration to balance performance impact and migration speed.
+{cass-migrator} ({cass-migrator-short}) is a tool designed for migrating and validating data between origin and target {cass-reg}-compatible clusters. It facilitates the transfer of data, creating multiple jobs at once that can access the {cass-short} cluster concurrently. This tool is also useful when dealing with large datasets and requires careful configuration to balance performance impact and migration speed.

-The information below explains how to get started with CDM. Review your prerequisites and decide between the two installation options: as a container or as a JAR file.
+The information below explains how to get started with {cass-migrator-short}. Review your prerequisites and decide between the two installation options: as a container or as a JAR file.

 [[cdm-prerequisites]]
-== {cstar-data-migrator} prerequisites
+== {cass-migrator} prerequisites

 include::partial$cdm-prerequisites.adoc[]

-== {cstar-data-migrator} installation methods
+== {cass-migrator} installation methods

 Both installation methods require attention to version compatibility, especially with the `cdm.properties` files.
 Both environments also use `spark-submit` to run the jobs.

 [[cdm-install-as-container]]
-=== Install {cstar-data-migrator} as a Container
+=== Install {cass-migrator} as a Container

 include::partial$cdm-install-as-container.adoc[]

 [[cdm-install-as-jar]]
-=== Install {cstar-data-migrator} as a JAR file
+=== Install {cass-migrator} as a JAR file

 include::partial$cdm-install-as-jar.adoc[]
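
The `spark-submit` step mentioned above is not shown in this commit. As a rough sketch only, a {cass-migrator-short} migration job is typically launched along these lines; the properties file, keyspace and table, job class, and jar version are illustrative and should be checked against the {cass-migrator} README for your release:

[source,bash]
----
# Illustrative invocation only; verify the class name, jar version, and properties file
spark-submit --properties-file cdm.properties \
  --conf spark.cdm.schema.origin.keyspaceTable="my_keyspace.my_table" \
  --master "local[*]" \
  --class com.datastax.cdm.job.Migrate cassandra-data-migrator-x.y.z.jar
----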

modules/ROOT/pages/cdm-steps.adoc

Lines changed: 4 additions & 4 deletions
@@ -1,14 +1,14 @@
-= {cstar-data-migrator}
+= {cass-migrator}

-Use {cstar-data-migrator} to migrate and validate tables between the origin and target Cassandra clusters, with available logging and reconciliation support.
+Use {cass-migrator} to migrate and validate tables between the origin and target {cass-short} clusters, with available logging and reconciliation support.

 [[cdm-steps]]
-== Use {cstar-data-migrator}
+== Use {cass-migrator}

 include::partial$use-cdm-migrator.adoc[]

 [[cdm-validation-steps]]
-== Use {cstar-data-migrator} steps in validation mode
+== Use {cass-migrator} steps in validation mode

 include::partial$cdm-validation-steps.adoc[]

modules/ROOT/pages/change-read-routing.adoc

Lines changed: 12 additions & 12 deletions
@@ -3,11 +3,11 @@
 ifdef::env-github,env-browser,env-vscode[:imagesprefix: ../images/]
 ifndef::env-github,env-browser,env-vscode[:imagesprefix: ]

-This topic explains how you can configure the {zdm-proxy} to route all reads to Target instead of Origin.
+This topic explains how you can configure the {product-proxy} to route all reads to Target instead of Origin.

 //include::partial$lightbox-tip.adoc[]

-image::{imagesprefix}migration-phase4ra9.png["Phase 4 diagram shows read routing on ZDM Proxy was switched to Target."]
+image::{imagesprefix}migration-phase4ra9.png["Phase 4 diagram shows read routing on {product-proxy} was switched to Target."]

 For illustrations of all the migration phases, see the xref:introduction.adoc#_migration_phases[Introduction].

@@ -28,7 +28,7 @@ Example:
 read_mode: PRIMARY_ONLY
 ----

-Otherwise, if you don't disable async dual reads, {zdm-proxy} instances would continue to send async reads to Origin, which, although harmless, is unnecessary.
+Otherwise, if you don't disable async dual reads, {product-proxy} instances would continue to send async reads to Origin, which, although harmless, is unnecessary.
 ====

 == Changing the read routing configuration

@@ -56,16 +56,16 @@ Now open the configuration file `vars/zdm_proxy_core_config.yml` for editing.

 Change the variable `primary_cluster` to `TARGET`.

-Run the playbook that changes the configuration of the existing {zdm-proxy} deployment:
+Run the playbook that changes the configuration of the existing {product-proxy} deployment:

 [source,bash]
 ----
 ansible-playbook rolling_update_zdm_proxy.yml -i zdm_ansible_inventory
 ----

-Wait for the {zdm-proxy} instances to be restarted by Ansible, one by one.
+Wait for the {product-proxy} instances to be restarted by Ansible, one by one.
 All instances will now send all reads to Target instead of Origin.
-In other words, Target is now the primary cluster, but the {zdm-proxy} is still keeping Origin up-to-date via dual writes.
+In other words, Target is now the primary cluster, but the {product-proxy} is still keeping Origin up-to-date via dual writes.

 == Verifying the read routing change

@@ -74,21 +74,21 @@ This is not a required step, but you may wish to do it for peace of mind.

 [TIP]
 ====
-Issuing a `DESCRIBE` or a read to any system table through the {zdm-proxy} is *not* a valid verification.
+Issuing a `DESCRIBE` or a read to any system table through the {product-proxy} is *not* a valid verification.

-The {zdm-proxy} handles reads to system tables differently, by intercepting them and always routing them to Origin, in some cases partly populating them at proxy level.
+The {product-proxy} handles reads to system tables differently, by intercepting them and always routing them to Origin, in some cases partly populating them at proxy level.

-This means that system reads are *not representative* of how the {zdm-proxy} routes regular user reads: even after you switched the configuration to read from Target as the primary cluster, all system reads will still go to Origin.
+This means that system reads are *not representative* of how the {product-proxy} routes regular user reads: even after you switched the configuration to read from Target as the primary cluster, all system reads will still go to Origin.

 Although `DESCRIBE` requests are not system requests, they are also generally resolved in a different way to regular requests, and should not be used as a means to verify the read routing behavior.

 ====

-Verifying that the correct routing is taking place is a slightly cumbersome operation, due to the fact that the purpose of the ZDM process is to align the clusters and therefore, by definition, the data will be identical on both sides.
+Verifying that the correct routing is taking place is a slightly cumbersome operation, due to the fact that the purpose of the {product-short} process is to align the clusters and therefore, by definition, the data will be identical on both sides.

 For this reason, the only way to do a manual verification test is to force a discrepancy of some test data between the clusters.
 To do this, you could consider using the xref:connect-clients-to-proxy.adoc#_themis_client[Themis sample client application].
-This client application connects directly to Origin, Target and the {zdm-proxy}, inserts some test data in its own table and allows you to view the results of reads from each source.
+This client application connects directly to Origin, Target and the {product-proxy}, inserts some test data in its own table and allows you to view the results of reads from each source.
 Please refer to its README for more information.

 Alternatively, you could follow this manual procedure:

@@ -99,5 +99,5 @@ For example `CREATE TABLE test_keyspace.test_table(k TEXT PRIMARY KEY, v TEXT);`
 Insert a row with any key, and with a value specific to Origin, for example `INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Origin!');`.
 * Now, use `cqlsh` to connect *directly to Target*.
 Insert a row with the same key as above, but with a value specific to Target, for example `INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Target!');`.
-* Now, use `cqlsh` to connect to the {zdm-proxy} (see xref:connect-clients-to-proxy.adoc#_connecting_cqlsh_to_the_zdm_proxy[here] for how to do this) and issue a read request for this test table: `SELECT * FROM test_keyspace.test_table WHERE k = '1';`.
+* Now, use `cqlsh` to connect to the {product-proxy} (see xref:connect-clients-to-proxy.adoc#_connecting_cqlsh_to_the_zdm_proxy[here] for how to do this) and issue a read request for this test table: `SELECT * FROM test_keyspace.test_table WHERE k = '1';`.
 The result will clearly show you where the read actually comes from.
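
Taken together, the change this page documents is one variable edit plus a rolling restart. A minimal sketch of the relevant fragment of `vars/zdm_proxy_core_config.yml` after the edit (all other keys omitted):

[source,yaml]
----
# Route all synchronous reads to the target cluster; Origin still receives dual writes
primary_cluster: TARGET
----

Running `ansible-playbook rolling_update_zdm_proxy.yml -i zdm_ansible_inventory` then applies the change to each {product-proxy} instance in turn, as the diff above describes.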
