@@ -23,9 +23,9 @@ The Ceph Metadata Server is necessary to run Ceph File System clients.
 
 .. ditaa::
 
-   +---------------+ +------------+ +------------+ +---------------+
-   | OSDs          | | Monitors   | | Managers   | | MDSs          |
-   +---------------+ +------------+ +------------+ +---------------+
+   +------+ +----------+ +----------+ +-------+ +------+
+   | OSDs | | Monitors | | Managers | | MDSes | | RGWs |
+   +------+ +----------+ +----------+ +-------+ +------+
 
 - **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps of the
   cluster state, including the :ref:`monitor map<display-mon-map>`, manager
@@ -51,11 +51,15 @@ The Ceph Metadata Server is necessary to run Ceph File System clients.
   heartbeat. At least three Ceph OSDs are normally required for
   redundancy and high availability.
 
-- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
+- **MDSes**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
   for the :term:`Ceph File System`. Ceph Metadata Servers allow CephFS users to
   run basic commands (like ``ls``, ``find``, etc.) without placing a burden on
   the Ceph Storage Cluster.
 
+- **RGWs**: A :term:`Ceph Object Gateway` (RGW, ``ceph-radosgw``) daemon provides
+  a RESTful gateway between applications and Ceph storage clusters. The
+  S3-compatible API is most commonly used, though Swift is also available.
+
 Ceph stores data as objects within logical storage pools. Using the
 :term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
 contain the object, and which OSD should store the placement group. The
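
The Ceph Object Gateway added in this change exposes an S3-compatible REST
API, so ordinary S3 clients can talk to it directly. The following is a
minimal sketch using the ``boto3`` library; the endpoint URL, access key, and
secret key are hypothetical placeholders for a running RGW instance and a
user created with ``radosgw-admin``:

.. code-block:: python

   # Minimal sketch: upload an object through RGW's S3-compatible API.
   # The endpoint and credentials below are placeholders, not real values.
   import boto3

   s3 = boto3.client(
       "s3",
       endpoint_url="http://rgw.example.com:7480",  # hypothetical RGW endpoint
       aws_access_key_id="ACCESS_KEY",
       aws_secret_access_key="SECRET_KEY",
   )

   s3.create_bucket(Bucket="demo-bucket")
   s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello ceph")
   print(s3.list_objects_v2(Bucket="demo-bucket")["KeyCount"])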
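
The closing paragraph of the hunk describes the object-to-PG-to-OSD mapping.
Below is a conceptual sketch of that two-step lookup, not Ceph's actual
implementation: a real cluster uses the rjenkins hash and the full CRUSH
algorithm, and the PG count, OSD count, and replica count here are assumed
values chosen only for illustration.

.. code-block:: python

   # Conceptual sketch of the object -> PG -> OSD lookup, NOT the real
   # CRUSH implementation. PG_NUM, OSDS, and REPLICAS are assumed values.
   import hashlib
   import random

   PG_NUM = 128                # placement groups in the pool (assumed)
   OSDS = list(range(12))      # hypothetical cluster with 12 OSDs
   REPLICAS = 3

   def object_to_pg(name: str) -> int:
       """Hash the object name to pick its placement group."""
       return int(hashlib.sha256(name.encode()).hexdigest(), 16) % PG_NUM

   def pg_to_osds(pg_id: int) -> list:
       """Deterministic stand-in for CRUSH: map a PG to a set of OSDs."""
       return random.Random(pg_id).sample(OSDS, REPLICAS)

   pg = object_to_pg("my-object")
   print(f"my-object -> PG {pg} -> OSDs {pg_to_osds(pg)}")

Because both steps are deterministic functions of the object name and the
cluster map, any client can compute where an object lives without consulting
a central lookup table.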