
Commit 82a4534

Merge pull request ceph#61883 from anthonyeleven/rgw-into-intro
doc/start: Mention RGW in Intro to Ceph
2 parents 590d192 + 4a6e9b0 commit 82a4534

2 files changed: +9 −5 lines changed


doc/glossary.rst

Lines changed: 1 addition & 1 deletion
@@ -224,7 +224,7 @@
        Architecture document<architecture_cluster_map>` for details.

    Crimson
-       A next-generation OSD architecture whose main aim is the
+       A next-generation OSD architecture whose aim is the
        reduction of latency costs incurred due to cross-core
        communications. A re-design of the OSD reduces lock
        contention by reducing communication between shards in the data

doc/start/index.rst

Lines changed: 8 additions & 4 deletions
@@ -23,9 +23,9 @@ The Ceph Metadata Server is necessary to run Ceph File System clients.

 .. ditaa::

-            +---------------+ +------------+ +------------+ +---------------+
-            |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
-            +---------------+ +------------+ +------------+ +---------------+
+            +------+ +----------+ +----------+ +-------+ +------+
+            | OSDs | | Monitors | | Managers | | MDSes | | RGWs |
+            +------+ +----------+ +----------+ +-------+ +------+

 - **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps of the
   cluster state, including the :ref:`monitor map<display-mon-map>`, manager
@@ -51,11 +51,15 @@ The Ceph Metadata Server is necessary to run Ceph File System clients.
   heartbeat. At least three Ceph OSDs are normally required for
   redundancy and high availability.

-- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
+- **MDSes**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
   for the :term:`Ceph File System`. Ceph Metadata Servers allow CephFS users to
   run basic commands (like ``ls``, ``find``, etc.) without placing a burden on
   the Ceph Storage Cluster.

+- **RGWs**: A :term:`Ceph Object Gateway` (RGW, ``ceph-radosgw``) daemon provides
+  a RESTful gateway between applications and Ceph storage clusters. The
+  S3-compatible API is most commonly used, though Swift is also available.
+
 Ceph stores data as objects within logical storage pools. Using the
 :term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
 contain the object, and which OSD should store the placement group. The
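The placement sentence in the final hunk can be illustrated with a toy sketch. Note the hedge: Ceph actually uses the rjenkins1 hash and a "stable mod" so that ``pg_num`` can change without remapping every object, and CRUSH (not shown here) then maps each PG to a set of OSDs; this sketch substitutes plain SHA-1 modulo ``pg_num``, and the pool id and PG count are made-up values.

```python
import hashlib

def toy_pg_for_object(pool_id: int, object_name: str, pg_num: int) -> str:
    """Toy placement: hash the object name, then mod by the pool's PG count.

    This only illustrates deterministic object -> PG placement; it is NOT
    Ceph's real algorithm (which uses rjenkins1 and a stable mod).
    """
    # Take the first 4 bytes of SHA-1 as a 32-bit placement seed.
    h = int.from_bytes(hashlib.sha1(object_name.encode()).digest()[:4], "little")
    ps = h % pg_num
    # PG ids are conventionally written "<pool-id>.<placement-seed-hex>".
    return f"{pool_id}.{ps:x}"

# The same object name always maps to the same PG; a second step
# (CRUSH, not modeled here) would map that PG to specific OSDs.
print(toy_pg_for_object(1, "my-object", 128))
```

Because the mapping is a pure function of the name and the pool parameters, any client can compute an object's PG locally without consulting a central lookup table, which is the point the paragraph above is making.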
