
Commit 57f1046

Remove all mention of RAID from docs
Fixes DOC-15399

Summary of changes:

- Remove all mentions of RAID from various places
- Also update the "don't use LVM" guidance to point users to multi-store instead
1 parent f07eaea commit 57f1046

4 files changed: 4 additions, 8 deletions
Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-Do not use LVM in the I/O path. Dynamically resizing CockroachDB store volumes can result in significant performance degradation. Using LVM snapshots in lieu of CockroachDB <a href="{% link {{ page.version.version }}/take-full-and-incremental-backups.md %}">backup and restore</a> is also not supported.
+Do not use LVM in the I/O path. Dynamically resizing CockroachDB store volumes can result in significant performance degradation. Using LVM snapshots in lieu of CockroachDB <a href="{% link {{ page.version.version }}/take-full-and-incremental-backups.md %}">backup and restore</a> is also not supported. Use [multiple stores per node]({% link {{ page.version.version }}/cockroach-start.md %}#store) instead.
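For reference, the multi-store setup the updated guidance points to amounts to passing one `--store` flag per disk when starting the node. A minimal sketch, assuming hypothetical mount points `/mnt/cockroach-ssd1` and `/mnt/cockroach-ssd2` and placeholder certificate, address, and join flags:

```shell
# One --store flag per physical disk or SSD, instead of striping the
# disks into a single volume. All paths and hosts below are placeholders.
cockroach start \
  --certs-dir=certs \
  --advertise-addr=node1.example.com \
  --join=node1.example.com,node2.example.com,node3.example.com \
  --store=/mnt/cockroach-ssd1 \
  --store=/mnt/cockroach-ssd2
```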

src/current/_includes/v25.4/prod-deployment/topology-recommendations.md

Lines changed: 1 addition & 2 deletions

@@ -1,6 +1,5 @@
 - Do not run multiple node processes on the same VM or machine. This defeats CockroachDB's replication and causes the system to be a single point of failure. Instead, start each node on a separate VM or machine.
-- To start a node with multiple disks or SSDs, you can use either of these approaches:
-    - Configure the disks or SSDs as a single RAID volume, then pass the RAID volume to the `--store` flag when starting the `cockroach` process on the node.
+- To start a node with multiple disks or SSDs, use the following approach:
     - Provide a separate `--store` flag for each disk when starting the `cockroach` process on the node. For more details about stores, see [Start a Node]({% link {{ page.version.version }}/cockroach-start.md %}#store).

 {{site.data.alerts.callout_danger}}

src/current/v25.4/cockroach-start.md

Lines changed: 1 addition & 2 deletions

@@ -205,9 +205,8 @@ The `--storage-engine` flag is used to choose the storage engine used by the nod

 The `--store` flag allows you to specify details about a node's storage.

-To start a node with multiple disks or SSDs, you can use either of these approaches:
+To start a node with multiple disks or SSDs, use the following approach:

-- Configure the disks or SSDs as a single RAID volume, then pass the RAID volume to the `--store` flag when starting the `cockroach` process on the node.
 - Provide a separate `--store` flag for each disk when starting the `cockroach` process on the node. For more details about stores, see [Start a Node]({% link {{ page.version.version }}/cockroach-start.md %}#store).

 {{site.data.alerts.callout_danger}}

src/current/v25.4/recommended-production-settings.md

Lines changed: 1 addition & 3 deletions

@@ -150,7 +150,7 @@ We recommend provisioning volumes with {% include {{ page.version.version }}/pro

 - Use [zone configs]({% link {{ page.version.version }}/configure-replication-zones.md %}) to increase the replication factor from 3 (the default) to 5 (across at least 5 nodes).

-    This is especially recommended if you are using local disks with no RAID protection rather than a cloud provider's network-attached disks that are often replicated under the hood, because local disks have a greater risk of failure. You can do this for the [entire cluster]({% link {{ page.version.version }}/configure-replication-zones.md %}#edit-the-default-replication-zone) or for specific [databases]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-database), [tables]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-table), or [rows]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-partition).
+    This is especially recommended if you are using local disks rather than a cloud provider's network-attached disks that are often replicated under the hood, because local disks have a greater risk of failure. You can do this for the [entire cluster]({% link {{ page.version.version }}/configure-replication-zones.md %}#edit-the-default-replication-zone) or for specific [databases]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-database), [tables]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-table), or [rows]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-partition).

 {{site.data.alerts.callout_info}}
 Under-provisioning storage leads to node crashes when the disks fill up. Once this has happened, it is difficult to recover from. To prevent your disks from filling up, provision enough storage for your workload, monitor your disk usage, and use a [ballast file]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#automatic-ballast-files). For more information, see [capacity planning issues]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#capacity-planning-issues) and [storage issues]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#storage-issues).
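The zone-config recommendation kept in the hunk above maps to a single statement; a sketch, assuming placeholder certificate and host flags:

```shell
# Raise the default replication factor from 3 to 5 across the cluster;
# this assumes at least 5 nodes. Certs dir and host are placeholders.
cockroach sql --certs-dir=certs --host=node1.example.com \
  --execute="ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;"
```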
@@ -170,8 +170,6 @@ Disks must be able to achieve {% include {{ page.version.version }}/prod-deploym

 - {% include {{ page.version.version }}/prod-deployment/prod-guidance-lvm.md %}

-- The optimal configuration for striping more than one device is [RAID 10](https://wikipedia.org/wiki/Nested_RAID_levels#RAID_10_(RAID_1+0)). RAID 0 and 1 are also acceptable from a performance perspective.
-
 {{site.data.alerts.callout_info}}
 Disk I/O especially affects [performance on write-heavy workloads]({% link {{ page.version.version }}/architecture/reads-and-writes-overview.md %}#network-and-i-o-bottlenecks). For more information, see [capacity planning issues]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#capacity-planning-issues).
 {{site.data.alerts.end}}
