@@ -16,24 +16,29 @@ consistent, but you can add, remove or replace a monitor in a cluster. See
 Background
 ==========
 
-Ceph Monitors maintain a "master copy" of the :term:`Cluster Map`, which means a
-:term:`Ceph Client` can determine the location of all Ceph Monitors, Ceph OSD
-Daemons, and Ceph Metadata Servers just by connecting to one Ceph Monitor and
+Ceph Monitors maintain a "master copy" of the :term:`Cluster Map`.
+
+Because Ceph Monitors maintain this map, a :term:`Ceph Client` can determine
+the location of all Ceph Monitors, Ceph OSD Daemons, and Ceph Metadata Servers
+by connecting to one Ceph Monitor and
 retrieving a current cluster map. Before Ceph Clients can read from or write to
-Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor
-first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph
-Client can compute the location for any object. The ability to compute object
-locations allows a Ceph Client to talk directly to Ceph OSD Daemons, which is a
-very important aspect of Ceph's high scalability and performance. See
-`Scalability and High Availability`_ for additional details.
-
-The primary role of the Ceph Monitor is to maintain a master copy of the cluster
-map. Ceph Monitors also provide authentication and logging services. Ceph
-Monitors write all changes in the monitor services to a single Paxos instance,
-and Paxos writes the changes to a key/value store for strong consistency. Ceph
-Monitors can query the most recent version of the cluster map during sync
-operations. Ceph Monitors leverage the key/value store's snapshots and iterators
-(using leveldb) to perform store-wide synchronization.
+Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor.
+When a Ceph Client has a current copy of the cluster map and the CRUSH
+algorithm, it can compute the location of any RADOS object in the cluster. This
+ability to compute object locations makes it possible for Ceph Clients to talk
+directly to Ceph OSD Daemons. This direct communication with Ceph OSD Daemons
+is an improvement over traditional storage architectures, in which clients were
+required to communicate with a central component, and that improvement
+contributes to Ceph's high scalability and performance. See
+`Scalability and High Availability`_ for additional details.
+
+The Ceph Monitor's primary function is to maintain a master copy of the cluster
+map. Monitors also provide authentication and logging services. All changes in
+the monitor services are written by the Ceph Monitor to a single Paxos
+instance, and Paxos writes the changes to a key/value store for strong
+consistency. Ceph Monitors are able to query the most recent version of the
+cluster map during sync operations, and they use the key/value store's
+snapshots and iterators (using leveldb) to perform store-wide synchronization.
 
 .. ditaa::
  /-------------\ /-------------\
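
As a concrete illustration of the object-location computation described above,
the ``ceph osd map`` command reports the placement group and the up/acting OSD
sets that CRUSH selects for a named object; ``mypool`` and ``myobject`` below
are placeholder names::

    ceph osd map mypool myobject

The mapping shown is the same one a client computes for itself from its copy of
the cluster map, which is what allows it to contact the responsible Ceph OSD
Daemons directly.
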
@@ -56,12 +61,6 @@ operations. Ceph Monitors leverage the key/value store's snapshots and iterators
 | cCCC |*---------------------+
 \-------------/
 
-
-.. deprecated:: version 0.58
-
-   In Ceph versions 0.58 and earlier, Ceph Monitors use a Paxos instance for
-   each service and store the map as a file.
-
 .. index:: Ceph Monitor; cluster map
 
 Cluster Maps
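
The maps that clients retrieve from a Ceph Monitor can also be inspected
directly. As a rough sketch (output fields vary between releases), the
following commands ask a monitor for the monitor map, the current quorum, and
the OSD map::

    ceph mon dump
    ceph quorum_status
    ceph osd dump

Because every monitor commits its changes through Paxos to the same key/value
store, these maps are consistent regardless of which monitor in the quorum
answers the query.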