
Commit 2536d43

Merge pull request ceph#57900 from zdover23/wip-doc-2024-06-06-start-intro-to-index
doc/start: s/intro.rst/index.rst/

Reviewed-by: Anthony D'Atri <[email protected]>
2 parents: fa83b90 + 84ce221

2 files changed: +104 -1 lines

doc/index.rst

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@ about Ceph, see our `Architecture`_ section.
    :maxdepth: 3
    :hidden:

-   start/intro
+   start/index
    install/index
    cephadm/index
    rados/index

doc/start/index.rst

Lines changed: 103 additions & 0 deletions
@@ -0,0 +1,103 @@
===============
Intro to Ceph
===============

Ceph can provide :term:`Ceph Object Storage` and :term:`Ceph Block Device`
services to :term:`Cloud Platforms`, and it can be used to deploy a
:term:`Ceph File System`. All :term:`Ceph Storage Cluster` deployments begin
with setting up each :term:`Ceph Node` and then setting up the network.

A Ceph Storage Cluster requires at least one Ceph Monitor, at least one Ceph
Manager, and at least as many :term:`Ceph Object Storage Daemon<Ceph OSD>`\s
(OSDs) as there are copies of a given object stored in the Ceph cluster (for
example, if three copies of a given object are stored in the Ceph cluster,
then at least three OSDs must exist in that Ceph cluster).

The Ceph Metadata Server is necessary to run Ceph File System clients.

.. note::

   It is a best practice to have a Ceph Manager for each Monitor, but it is
   not necessary.
.. ditaa::

   +---------------+ +------------+ +------------+ +---------------+
   |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
   +---------------+ +------------+ +------------+ +---------------+

- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps of the
  cluster state, including the :ref:`monitor map<display-mon-map>`, the
  manager map, the OSD map, the MDS map, and the CRUSH map. These maps are
  critical cluster state required for Ceph daemons to coordinate with each
  other. Monitors are also responsible for managing authentication between
  daemons and clients. At least three monitors are normally required for
  redundancy and high availability.

- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is responsible
  for keeping track of runtime metrics and the current state of the Ceph
  cluster, including storage utilization, current performance metrics, and
  system load. The Ceph Manager daemons also host Python-based modules to
  manage and expose Ceph cluster information, including a web-based
  :ref:`mgr-dashboard` and a `REST API`_. At least two managers are normally
  required for high availability.

- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`, ``ceph-osd``)
  stores data, handles data replication, recovery, and rebalancing, and
  provides some monitoring information to Ceph Monitors and Managers by
  checking other Ceph OSD Daemons for a heartbeat. At least three Ceph OSDs
  are normally required for redundancy and high availability.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
  for the :term:`Ceph File System`. Ceph Metadata Servers allow CephFS users
  to run basic commands (like ``ls``, ``find``, etc.) without placing a
  burden on the Ceph Storage Cluster.
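As a rough sketch of how a client interacts with these daemons, the example
below uses the Python ``rados`` bindings to connect to a cluster and print a
brief status report. It assumes a running cluster whose ``ceph.conf`` and
``client.admin`` keyring are readable at their default locations; adjust the
paths and client name for your deployment.

.. code-block:: python

   import json

   import rados  # provided by the python3-rados package

   # Connect as client.admin using the local ceph.conf; the Monitors
   # authenticate the client and hand it the cluster maps it needs.
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
   cluster.connect()

   try:
       # Ask the Monitors for the cluster status (the librados equivalent
       # of running `ceph status`). The JSON layout varies by release,
       # so only a stable field is read here.
       ret, outbuf, errs = cluster.mon_command(
           json.dumps({'prefix': 'status', 'format': 'json'}), b'')
       status = json.loads(outbuf)
       print('fsid:', cluster.get_fsid())
       print('monitors in quorum:', status.get('quorum_names'))
       print('cluster usage:', cluster.get_cluster_stats())
   finally:
       cluster.shutdown()

The Monitors authenticate the client and supply the cluster maps; the same
summary is available from the command line with ``ceph status``.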
Ceph stores data as objects within logical storage pools. Using the
:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
contain the object, and which OSD should store the placement group. The CRUSH
algorithm enables the Ceph Storage Cluster to scale, rebalance, and recover
dynamically.
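Placement is computed rather than looked up. As a minimal sketch of what this
means for client code (again using the Python ``rados`` bindings, and assuming
an existing pool named ``mypool``, created for example with
``ceph osd pool create mypool``), the snippet below writes and reads back a
single object; librados and CRUSH decide which placement group and which OSDs
hold it, with no central lookup table involved.

.. code-block:: python

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
   cluster.connect()

   try:
       # Open an I/O context on an existing pool; CRUSH decides which PG
       # and which OSDs hold each object written through it.
       ioctx = cluster.open_ioctx('mypool')
       try:
           ioctx.write_full('hello-object', b'hello from librados')
           print(ioctx.read('hello-object'))
       finally:
           ioctx.close()
   finally:
       cluster.shutdown()

To see where CRUSH placed the object, ``ceph osd map mypool hello-object``
reports the placement group and the acting set of OSDs.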
.. _REST API: ../../mgr/restful

.. container:: columns-2

   .. container:: column

      .. raw:: html

         <h3>Recommendations</h3>

      To begin using Ceph in production, you should review our hardware
      recommendations and operating system recommendations.

      .. toctree::
         :maxdepth: 2

         Beginner's Guide <beginners-guide>
         Hardware Recommendations <hardware-recommendations>
         OS Recommendations <os-recommendations>

   .. container:: column

      .. raw:: html

         <h3>Get Involved</h3>

      You can get help, or contribute documentation, source code, or bug
      reports, by getting involved in the Ceph community.

      .. toctree::
         :maxdepth: 2

         get-involved
         documenting-ceph

.. toctree::
   :maxdepth: 2

   intro
