@@ -206,7 +206,7 @@ monitor, two in a cluster that contains three monitors, three in a cluster that
 contains five monitors, four in a cluster that contains six monitors, and so
 on).

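For instance, the size of the required majority is simple integer arithmetic;
a minimal Python sketch (illustrative only, not Ceph code)::

   def quorum_majority(num_monitors: int) -> int:
       """Smallest number of monitors that constitutes a majority."""
       return num_monitors // 2 + 1

   # Matches the examples above: 1 of 1, 2 of 3, 3 of 5, 4 of 6.
   assert [quorum_majority(n) for n in (1, 3, 5, 6)] == [1, 2, 3, 4]
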
-See the `Monitor Config Reference`_ for more detail on configuring monitors.
+See the :ref:`monitor-config-reference` for more detail on configuring monitors.

 .. index:: architecture; high availability authentication

@@ -368,9 +368,9 @@ daemons. The authentication is not extended beyond the Ceph client. If a user
 accesses the Ceph client from a remote host, cephx authentication will not be
 applied to the connection between the user's host and the client host.

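In code terms, cephx is exercised when a client opens its cluster handle; a
minimal python-rados sketch, assuming a standard ``/etc/ceph/ceph.conf`` and
an existing ``client.admin`` keyring::

   import rados

   # cephx authenticates this client process to the monitors and OSDs;
   # it does not protect the hop between a remote user and this host.
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='admin')
   cluster.connect()
   print(cluster.get_fsid())  # proof of an authenticated session
   cluster.shutdown()
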
-See `Cephx Config Guide`_ for more on configuration details.
+See :ref:`rados-cephx-config-ref` for more on configuration details.

-See `User Management`_ for more on user management.
+See :ref:`user-management` for more on user management.

 See :ref:`A Detailed Description of the Cephx Authentication Protocol
 <cephx_2012_peter>` for more on the distinction between authorization and
@@ -433,8 +433,8 @@ the greater cluster provides several benefits:
    mismatches in object size and finds metadata mismatches, and is usually
    performed daily. Ceph OSD Daemons perform deeper scrubbing by comparing the
    data in objects, bit-for-bit, against their checksums. Deep scrubbing finds
-   bad sectors on drives that are not detectable with light scrubs. See `Data
-   Scrubbing`_ for details on configuring scrubbing.
+   bad sectors on drives that are not detectable with light scrubs. See :ref:`Data
+   Scrubbing <rados_config_scrubbing>` for details on configuring scrubbing.

 #. **Replication:** Data replication involves collaboration between Ceph
    Clients and Ceph OSD Daemons. Ceph OSD Daemons use the CRUSH algorithm to
@@ -525,7 +525,7 @@ Pools set at least the following parameters:
 - The Number of Placement Groups, and
 - The CRUSH Rule to Use.

-See `Set Pool Values`_ for details.
+See :ref:`setpoolvalues` for details.

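As a sketch, pool creation and per-pool values can also be driven through the
python-rados bindings; the pool name and settings below are hypothetical
examples, not recommendations::

   import json
   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()

   # Create a pool, then adjust a per-pool parameter such as the replica
   # count ("size") through the monitor command interface.
   if not cluster.pool_exists('example-pool'):
       cluster.create_pool('example-pool')
   cluster.mon_command(json.dumps({'prefix': 'osd pool set',
                                   'pool': 'example-pool',
                                   'var': 'size', 'val': '3'}), b'')
   cluster.shutdown()
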
 .. index:: architecture; placement group mapping
@@ -626,7 +626,7 @@ which is the process of bringing all of the OSDs that store a Placement Group
 (PG) into agreement about the state of all of the RADOS objects (and their
 metadata) in that PG. Ceph OSD Daemons `Report Peering Failure`_ to the Ceph
 Monitors. Peering issues usually resolve themselves; however, if the problem
-persists, you may need to refer to the `Troubleshooting Peering Failure`_
+persists, you may need to refer to the :ref:`Troubleshooting Peering Failure <failures-osd-peering>`
 section.

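One hedged way to watch peering from code is to ask the monitors for the
cluster status; a minimal python-rados sketch::

   import json
   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   # Equivalent to `ceph status --format=json`.
   ret, outbuf, outs = cluster.mon_command(
       json.dumps({'prefix': 'status', 'format': 'json'}), b'')
   status = json.loads(outbuf)
   # 'pgs_by_state' lists PG counts per state; PGs stuck in 'peering'
   # here would warrant the troubleshooting steps referenced above.
   print(status['pgmap']['pgs_by_state'])
   cluster.shutdown()
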
 .. Note:: PGs that agree on the state of the cluster do not necessarily have
@@ -721,7 +721,7 @@ scrubbing by comparing data in objects bit-for-bit. Deep scrubbing (by default
 performed weekly) finds bad blocks on a drive that weren't apparent in a light
 scrub.

-See `Data Scrubbing`_ for details on configuring scrubbing.
+See :ref:`Data Scrubbing <rados_config_scrubbing>` for details on configuring scrubbing.

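Scrubbing runs on its own schedule, but a deep scrub can also be requested on
demand with the ``ceph pg deep-scrub`` CLI command; a small Python wrapper
sketch (the PG id ``1.0`` is a placeholder)::

   import subprocess

   # Ask the cluster to deep-scrub one placement group now, rather than
   # waiting for the regular (by default weekly) deep-scrub cycle.
   subprocess.run(['ceph', 'pg', 'deep-scrub', '1.0'], check=True)
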
@@ -1219,8 +1219,8 @@ appliances do not fully utilize the CPU and RAM of a typical commodity server,
 Ceph does. From heartbeats, to peering, to rebalancing the cluster or
 recovering from faults, Ceph offloads work from clients (and from a centralized
 gateway which doesn't exist in the Ceph architecture) and uses the computing
-power of the OSDs to perform the work. When referring to `Hardware
-Recommendations`_ and the `Network Config Reference`_, be cognizant of the
+power of the OSDs to perform the work. When referring to :ref:`hardware-recommendations`
+and the `Network Config Reference`_, be cognizant of the
 foregoing concepts to understand how Ceph utilizes computing resources.

 .. index:: Ceph Protocol, librados
@@ -1574,7 +1574,7 @@ another application.
    correspond in a 1:1 manner with an object stored in the storage cluster. It
    is possible for an S3 or Swift object to map to multiple Ceph objects.

-See `Ceph Object Storage`_ for details.
+See :ref:`object-gateway` for details.

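To make that mapping concrete: a single S3 PUT through the gateway may be
striped across several RADOS objects inside the cluster. A boto3 sketch
against a hypothetical RGW endpoint with placeholder credentials::

   import boto3

   s3 = boto3.client('s3',
                     endpoint_url='http://rgw.example.com:8080',
                     aws_access_key_id='ACCESS_KEY',
                     aws_secret_access_key='SECRET_KEY')
   s3.create_bucket(Bucket='demo')
   # One S3 object here; the gateway may store it as multiple RADOS
   # objects in the storage cluster.
   s3.put_object(Bucket='demo', Key='big-object',
                 Body=b'x' * (64 * 1024 * 1024))
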
 .. index:: Ceph Block Device; block device; RBD; Rados Block Device
@@ -1671,26 +1671,15 @@ instance for high availability.

 .. _RADOS - A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters: https://ceph.io/assets/pdfs/weil-rados-pdsw07.pdf
 .. _Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science)
-.. _Monitor Config Reference: ../rados/configuration/mon-config-ref
-.. _Monitoring OSDs and PGs: ../rados/operations/monitoring-osd-pg
 .. _Heartbeats: ../rados/configuration/mon-osd-interaction
 .. _Monitoring OSDs: ../rados/operations/monitoring-osd-pg/#monitoring-osds
 .. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.io/assets/pdfs/weil-crush-sc06.pdf
-.. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing
 .. _Report Peering Failure: ../rados/configuration/mon-osd-interaction#osds-report-peering-failure
-.. _Troubleshooting Peering Failure: ../rados/troubleshooting/troubleshooting-pg#placement-group-down-peering-failure
-.. _Ceph Authentication and Authorization: ../rados/operations/auth-intro/
-.. _Hardware Recommendations: ../start/hardware-recommendations
 .. _Network Config Reference: ../rados/configuration/network-config-ref
-.. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing
 .. _striping: https://en.wikipedia.org/wiki/Data_striping
 .. _RAID: https://en.wikipedia.org/wiki/RAID
 .. _RAID 0: https://en.wikipedia.org/wiki/RAID_0#RAID_0
-.. _Ceph Object Storage: ../radosgw/
 .. _RESTful: https://en.wikipedia.org/wiki/RESTful
 .. _Erasure Code Notes: https://github.com/ceph/ceph/blob/40059e12af88267d0da67d8fd8d9cd81244d8f93/doc/dev/osd_internals/erasure_coding/developer_notes.rst
 .. _Cache Tiering: ../rados/operations/cache-tiering
-.. _Set Pool Values: ../rados/operations/pools#set-pool-values
 .. _Kerberos: https://en.wikipedia.org/wiki/Kerberos_(protocol)
-.. _Cephx Config Guide: ../rados/configuration/auth-config-ref
-.. _User Management: ../rados/operations/user-management