
Commit 202d063

Merge pull request ceph#65920 from nh2/docs-cephfs-inodes-on-replicated
doc/cephfs/createfs: Recommend default data pool on SSDs for non-EC
2 parents: 17244c0 + c891639


doc/cephfs/createfs.rst

Lines changed: 17 additions & 1 deletion
@@ -22,12 +22,28 @@ There are important considerations when planning these pools:
 - The data pool used to create the file system is the "default" data pool and
   the location for storing all inode backtrace information, which is used for hard link
   management and disaster recovery. For this reason, all CephFS inodes
-  have at least one object in the default data pool. If erasure-coded
+  have at least one RADOS object in the default data pool. If erasure-coded
   pools are planned for file system data, it is best to configure the default as
   a replicated pool to improve small-object write and
   read performance when updating backtraces. Separately, another erasure-coded
   data pool can be added (see also :ref:`ecpool`) that can be used on an entire
   hierarchy of directories and files (see also :ref:`file-layouts`).
+- For the same reason, even if you are not using erasure coding
+  and plan to store all or most of your files on HDDs,
+  it is recommended to set the default data pool to an SSD pool
+  and set a file layout for top-level directories to an HDD pool.
+  This gives you the option to move small files and inode objects completely off HDDs
+  in the future using file layouts, without having to re-create the pool from scratch.
+  That reduces scrub and recovery times when you have many small files,
+  as those operations cause at least one HDD seek per RADOS object.
+  This optimization cannot be retrofitted in place when deploying a CephFS
+  file system with an HDD default data pool,
+  and the default data pool cannot be subsequently removed without creating
+  an entirely new CephFS filesystem and migrating all files.
+  This strategy requires only modest capacity in the SSD default data pool
+  when subdirectories are aligned with an HDD data pool,
+  but accelerates various operations and sets your file system up for
+  future flexibility.
 
 Refer to :doc:`/rados/operations/pools` to learn more about managing pools. For
 example, to create two pools with default settings for use with a file system, you
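
As a rough sketch of the workflow the added paragraph recommends: create the default data pool on SSDs, attach an HDD pool as an additional data pool, and point the file layout of top-level directories at the HDD pool. The pool names (cephfs_metadata, cephfs_ssd_default, cephfs_hdd_data), CRUSH rule names, PG counts, and the mount point /mnt/cephfs below are illustrative assumptions, not part of this commit.

    # Illustrative only: CRUSH rules placing replicated pools on SSDs and HDDs.
    ceph osd crush rule create-replicated ssd_rule default host ssd
    ceph osd crush rule create-replicated hdd_rule default host hdd

    # Metadata and default data pool on SSDs; bulk data pool on HDDs.
    ceph osd pool create cephfs_metadata 64 64 replicated ssd_rule
    ceph osd pool create cephfs_ssd_default 64 64 replicated ssd_rule
    ceph osd pool create cephfs_hdd_data 128 128 replicated hdd_rule

    # Create the file system with the SSD pool as the default data pool,
    # then attach the HDD pool as an additional data pool.
    ceph fs new cephfs cephfs_metadata cephfs_ssd_default
    ceph fs add_data_pool cephfs cephfs_hdd_data

    # On a mounted client, direct a top-level directory's file layout at the
    # HDD pool: data for new files lands on HDDs, while the per-inode
    # backtrace objects stay in the SSD default data pool.
    setfattr -n ceph.dir.layout.pool -v cephfs_hdd_data /mnt/cephfs/data

    # Verify which pool the directory layout points at.
    getfattr -n ceph.dir.layout.pool /mnt/cephfs/data

Setting the directory layout before files are written is what preserves the flexibility described above: existing file data stays in whatever pool it was written to, while new files follow the layout of their parent directory.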
