
Commit 8b42c18

doc/radosgw: Fix capitalization, tab use, punctuation in two files
Use title case consistently in section titles. Capitalize first letter for
Ceph, Unix, Luminous. Capitalize RGW and NFS-Ganesha consistently.

Remove a colon from end of a section title in nfs.rst. Add full stops at end
of two sentences in sync-modules.rst.

Change tabs into four spaces in nfs.rst.

Also use comments more sensibly in logging example in nfs.rst:

- Indent the comments consistently, fixing a leading space in the beginning
  of the rendered preformatted block.
- Also comment out the closing brace.

Signed-off-by: Ville Ojamo <[email protected]>
1 parent 04893ca commit 8b42c18

2 files changed, 50 insertions(+), 50 deletions(-)

doc/radosgw/nfs.rst

Lines changed: 45 additions & 45 deletions
@@ -13,7 +13,7 @@ protocols (S3 and Swift).
 In particular, the Ceph Object Gateway can now be configured to
 provide file-based access when embedded in the NFS-Ganesha NFS server.
 
-The simplest and preferred way of managing nfs-ganesha clusters and rgw exports
+The simplest and preferred way of managing NFS-Ganesha clusters and RGW exports
 is using ``ceph nfs ...`` commands. See :doc:`/mgr/nfs` for more details.
 
 librgw
@@ -34,7 +34,7 @@ Namespace Conventions
 =====================
 
 The implementation conforms to Amazon Web Services (AWS) hierarchical
-namespace conventions which map UNIX-style path names onto S3 buckets
+namespace conventions which map Unix-style path names onto S3 buckets
 and objects.
 
 The top level of the attached namespace consists of S3 buckets,
@@ -103,7 +103,7 @@ following characteristics:
 
 * additional RGW authentication types such as Keystone are not currently supported
 
-Manually configuring an NFS-Ganesha Instance
+Manually Configuring an NFS-Ganesha Instance
 ============================================
 
 Each NFS RGW instance is an NFS-Ganesha server instance *embedding*
@@ -191,8 +191,8 @@ variables in the RGW config section::
 ``ceph_conf`` gives a path to a non-default ceph.conf file to use
 
 
-Other useful NFS-Ganesha configuration:
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Other Useful NFS-Ganesha Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Any EXPORT block which should support NFSv3 should include version 3
 in the NFS_Protocols setting. Additionally, NFSv3 is the last major
@@ -239,45 +239,45 @@ Example::
 
 LOG {
 
-	Components {
-		MEMLEAKS = FATAL;
-		FSAL = FATAL;
-		NFSPROTO = FATAL;
-		NFS_V4 = FATAL;
-		EXPORT = FATAL;
-		FILEHANDLE = FATAL;
-		DISPATCH = FATAL;
-		CACHE_INODE = FATAL;
-		CACHE_INODE_LRU = FATAL;
-		HASHTABLE = FATAL;
-		HASHTABLE_CACHE = FATAL;
-		DUPREQ = FATAL;
-		INIT = DEBUG;
-		MAIN = DEBUG;
-		IDMAPPER = FATAL;
-		NFS_READDIR = FATAL;
-		NFS_V4_LOCK = FATAL;
-		CONFIG = FATAL;
-		CLIENTID = FATAL;
-		SESSIONS = FATAL;
-		PNFS = FATAL;
-		RW_LOCK = FATAL;
-		NLM = FATAL;
-		RPC = FATAL;
-		NFS_CB = FATAL;
-		THREAD = FATAL;
-		NFS_V4_ACL = FATAL;
-		STATE = FATAL;
-		FSAL_UP = FATAL;
-		DBUS = FATAL;
-	}
-	# optional: redirect log output
-# Facility {
-#	name = FILE;
-#	destination = "/tmp/ganesha-rgw.log";
-#	enable = active;
-	}
-    }
+    Components {
+        MEMLEAKS = FATAL;
+        FSAL = FATAL;
+        NFSPROTO = FATAL;
+        NFS_V4 = FATAL;
+        EXPORT = FATAL;
+        FILEHANDLE = FATAL;
+        DISPATCH = FATAL;
+        CACHE_INODE = FATAL;
+        CACHE_INODE_LRU = FATAL;
+        HASHTABLE = FATAL;
+        HASHTABLE_CACHE = FATAL;
+        DUPREQ = FATAL;
+        INIT = DEBUG;
+        MAIN = DEBUG;
+        IDMAPPER = FATAL;
+        NFS_READDIR = FATAL;
+        NFS_V4_LOCK = FATAL;
+        CONFIG = FATAL;
+        CLIENTID = FATAL;
+        SESSIONS = FATAL;
+        PNFS = FATAL;
+        RW_LOCK = FATAL;
+        NLM = FATAL;
+        RPC = FATAL;
+        NFS_CB = FATAL;
+        THREAD = FATAL;
+        NFS_V4_ACL = FATAL;
+        STATE = FATAL;
+        FSAL_UP = FATAL;
+        DBUS = FATAL;
+    }
+    # optional: redirect log output
+    # Facility {
+    #     name = FILE;
+    #     destination = "/tmp/ganesha-rgw.log";
+    #     enable = active;
+    # }
+}
 
 Running Multiple NFS Gateways
 =============================
@@ -315,7 +315,7 @@ if a Swift container name contains underscores, it is not a valid S3
 bucket name and will be rejected unless ``rgw_relaxed_s3_bucket_names``
 is set to true.
 
-Configuring NFSv4 clients
+Configuring NFSv4 Clients
 =========================
 
 To access the namespace, mount the configured NFS-Ganesha export(s)
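The last hunk above touches the "Configuring NFSv4 Clients" section of nfs.rst. For orientation only, mounting such an NFS-Ganesha export from a Linux client usually looks like the sketch below; the host name, mount point, and option set are illustrative placeholders and are not taken from this patch:

    # Mount the RGW namespace exported by NFS-Ganesha over NFSv4.1.
    # ganesha.example.net and /mnt/rgw are hypothetical names.
    mount -t nfs -o nfsvers=4.1,proto=tcp,sync ganesha.example.net:/ /mnt/rgw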

doc/radosgw/sync-modules.rst

Lines changed: 5 additions & 5 deletions
@@ -9,20 +9,20 @@ create multiple zones and mirror data and metadata between them. ``Sync Modules``
 are built atop of the multisite framework that allows for forwarding data and
 metadata to a different external tier. A sync module allows for a set of actions
 to be performed whenever a change in data occurs (metadata ops like bucket or
-user creation etc. are also regarded as changes in data). As the rgw multisite
+user creation etc. are also regarded as changes in data). As the RGW multisite
 changes are eventually consistent at remote sites, changes are propagated
 asynchronously. This would allow for unlocking use cases such as backing up the
 object storage to an external cloud cluster or a custom backup solution using
 tape drives, indexing metadata in ElasticSearch etc.
 
 A sync module configuration is local to a zone. The sync module determines
 whether the zone exports data or can only consume data that was modified in
-another zone. As of luminous the supported sync plugins are `elasticsearch`_,
+another zone. As of Luminous the supported sync plugins are `elasticsearch`_,
 ``rgw``, which is the default sync plugin that synchronizes data between the
 zones and ``log`` which is a trivial sync plugin that logs the metadata
 operation that happens in the remote zones. The following docs are written with
 the example of a zone using `elasticsearch sync module`_, the process would be similar
-for configuring any sync plugin
+for configuring any sync plugin.
 
 .. toctree::
    :maxdepth: 1
@@ -40,7 +40,7 @@ Requirements and Assumptions
 Let us assume a simple multisite configuration as described in the :ref:`multisite`
 docs, of 2 zones ``us-east`` and ``us-west``, let's add a third zone
 ``us-east-es`` which is a zone that only processes metadata from the other
-sites. This zone can be in the same or a different ceph cluster as ``us-east``.
+sites. This zone can be in the same or a different Ceph cluster as ``us-east``.
 This zone would only consume metadata from other zones and RGWs in this zone
 will not serve any end user requests directly.
 
@@ -71,7 +71,7 @@ For example in the ``elasticsearch`` sync module
 --tier-config=endpoint=http://localhost:9200,num_shards=10,num_replicas=1
 
 
-For the various supported tier-config options refer to the `elasticsearch sync module`_ docs
+For the various supported tier-config options refer to the `elasticsearch sync module`_ docs.
 
 Finally update the period
 
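The last two hunks' context lines mention the ``--tier-config`` settings and the final period update. As a rough sketch of how those steps are usually run (the zone name comes from the surrounding docs; the endpoint, shard, and replica values simply mirror the example line above and are not prescriptive):

    # Apply elasticsearch tier configuration to the metadata-only zone,
    # then commit the period so the change propagates to the other zones.
    radosgw-admin zone modify --rgw-zone=us-east-es \
        --tier-config=endpoint=http://localhost:9200,num_shards=10,num_replicas=1
    radosgw-admin period update --commit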
