</TabItem>
</Tabs>
### Shared Storage Configuration
:::note
As of Nebari 2024.9.1, alpha support for [Ceph](https://docs.ceph.com/en/latest/) shared file systems as an alternative to NFS is available.
:::
Nebari includes shared file systems for JupyterHub user storage, JupyterHub shared storage, and conda-store shared storage. By default, NFS drives are used for all three.
The immediate benefit of using Ceph is improved read/write performance compared to NFS, with further benefits expected as development continues. Ceph is a distributed storage system with the potential to bring higher performance, high availability, data redundancy, storage consolidation, and scalability to Nebari.
:::danger
Do not switch from one storage type to another on an existing Nebari deployment; any files in user home directories and conda environments will be lost if you do. On GCP, all node groups in the cluster will be destroyed and recreated. Only change the storage type before the initial deployment.
:::
Storage is configured in the `nebari-config.yaml` file under the `storage` section.
```yaml
storage:
  type: nfs
  conda_store: 200Gi
  shared_filesystem: 200Gi
```
Supported values for `storage.type` are `nfs` (default on most cloud providers), `efs` (default on AWS), and `cephfs`.
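As a minimal sketch, opting in to Ceph-backed storage on a deployment that has not yet been created only requires changing the `type` field; the sizes below are illustrative, not prescriptive:

```yaml
storage:
  type: cephfs
  conda_store: 200Gi
  shared_filesystem: 200Gi
```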
When using the `cephfs` storage type, the block storage underlying all Ceph storage is provisioned through a single Kubernetes storage class. Unless a specific one is provided, Kubernetes uses the cluster's default storage class. For better performance, some cloud providers offer premium storage class options.
You can specify the desired storage class under the `ceph.storage_class_name` key in the configuration file. Below are examples of potential storage class values for various cloud providers:
<Tabs>
<TabItem label="AWS" value="AWS" default="true">
Premium storage is not available on AWS.
</TabItem>
<TabItem label="Azure" value="Azure">
```yaml
ceph:
  storage_class_name: managed-premium
```
</TabItem>
<TabItem label="GCP" value="GCP">
```yaml
ceph:
  storage_class_name: premium-rwo
```
</TabItem>
<TabItem label="Existing" value="Existing">
```yaml
ceph:
  storage_class_name: some-cluster-storage-class
```
</TabItem>
<TabItem label="Local" value="Local">
Ceph is not supported on local deployments.
</TabItem>
</Tabs>
:::note
On some cloud providers, premium storage is not available for all node types. Check the documentation for your specific cloud provider to confirm which node types are compatible with which storage classes.
:::
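Putting the pieces together, a hypothetical GCP deployment opting in to Ceph before its first deploy might combine the two sections like this (the `premium-rwo` class comes from the GCP example above; the storage sizes are illustrative):

```yaml
storage:
  type: cephfs
  conda_store: 200Gi
  shared_filesystem: 200Gi

ceph:
  storage_class_name: premium-rwo
```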
## More configuration options
Learn to configure more aspects of your Nebari deployment with the following topic guides: