Commit 431f39f

Merge pull request ceph#65052 from anthonyeleven/pending
doc: Improve PendingReleaseNotes
2 parents f0b51ed + 32d9744

PendingReleaseNotes

Lines changed: 61 additions & 62 deletions
@@ -1,67 +1,66 @@
-* RGW: OpenSSL engine support deprecated in favor of provider support.
-  - Removed `openssl_engine_opts` configuration option. OpenSSL engine configurations in string format are no longer supported.
-  - Added `openssl_conf` configuration option for loading specified providers as default providers.
+* RGW: OpenSSL engine support is deprecated in favor of provider support.
+  - Removed the `openssl_engine_opts` configuration option. OpenSSL engine configuration in string format is no longer supported.
+  - Added the `openssl_conf` configuration option for loading specified providers as default providers.
   Configuration file syntax follows the OpenSSL standard (see https://github.com/openssl/openssl/blob/master/doc/man5/config.pod).
-  If the default provider is still required when using custom providers,
+  If the default provider is required when also using custom providers,
   it must be explicitly loaded in the configuration file or code (see https://github.com/openssl/openssl/blob/master/README-PROVIDERS.md).

 >=20.0.0

-* RADOS: lead Monitor and stretch mode status are now included in the `ceph status` output.
+* RADOS: The lead Monitor and stretch mode status are now displayed by `ceph status`.
   Related Tracker: https://tracker.ceph.com/issues/70406
 * RGW: The User Account feature introduced in Squid provides first-class support for
-  IAM APIs and policy. Our preliminary STS support was instead based on tenants, and
+  IAM APIs and policy. Our preliminary STS support was based on tenants, and
   exposed some IAM APIs to admins only. This tenant-level IAM functionality is now
   deprecated in favor of accounts. While we'll continue to support the tenant feature
   itself for namespace isolation, the following features will be removed no sooner
   than the V release:
-  * tenant-level IAM APIs like CreateRole, PutRolePolicy and PutUserPolicy,
-  * use of tenant names instead of accounts in IAM policy documents,
-  * interpretation of IAM policy without cross-account policy evaluation,
+  * Tenant-level IAM APIs including CreateRole, PutRolePolicy and PutUserPolicy,
+  * Use of tenant names instead of accounts in IAM policy documents,
+  * Interpretation of IAM policy without cross-account policy evaluation,
   * S3 API support for cross-tenant names such as `Bucket='tenant:bucketname'`
 * RGW: Lua scripts will not run against health checks.
 * RGW: For compatibility with AWS S3, LastModified timestamps are now truncated
-  to the second. Note that during upgrade, users may observe these timestamps
+  to the second. Note that during an upgrade, users may observe these timestamps
   moving backwards as a result.
-* RGW: IAM policy evaluation now supports conditions ArnEquals and ArnLike, along
+* RGW: IAM policy evaluation now supports the conditions ArnEquals and ArnLike, along
   with their Not and IfExists variants.
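For illustration, a hedged boto3 sketch of an ArnLike condition in a bucket policy; the endpoint, credentials, bucket name, and source ARN are placeholders, not part of this change::

    import json
    import boto3

    s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8000',
                      aws_access_key_id='ACCESS', aws_secret_access_key='SECRET')
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::testbucket/*",
            # ArnLike matches the supplied ARN against a wildcard pattern.
            "Condition": {"ArnLike": {"aws:SourceArn": "arn:aws:sns:*:123456789012:*"}},
        }],
    }
    s3.put_bucket_policy(Bucket='testbucket', Policy=json.dumps(policy))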
-* RGW: Adding missing quotes to the ETag values returned by S3 CopyPart,
+* RGW: Adding missing quotes to ETag values in S3 CopyPart,
   PostObject and CompleteMultipartUpload responses.
-* RGW: Added support for S3 GetObjectAttributes.
-* RGW: Added BEAST frontend option 'so_reuseport' which facilitates running multiple
-  RGW instances on the same host by sharing a single TCP port.
-
+* RGW: Add support for S3 GetObjectAttributes.
+* RGW: Add the Beast front end option 'so_reuseport', which facilitates running multiple
+  RGW instances on the same host that share a single TCP port.
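For the GetObjectAttributes change above, a minimal boto3 sketch; the endpoint, credentials, bucket, and key are placeholders::

    import boto3

    s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8000',
                      aws_access_key_id='ACCESS', aws_secret_access_key='SECRET')
    resp = s3.get_object_attributes(
        Bucket='testbucket',
        Key='obj0',
        ObjectAttributes=['ETag', 'ObjectSize', 'StorageClass'],
    )
    print(resp['ObjectSize'], resp['ETag'])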
 * RBD: All Python APIs that produce timestamps now return "aware" `datetime`
   objects instead of "naive" ones (i.e. those including time zone information
   instead of those not including it). All timestamps remain in UTC, but
   including `timezone.utc` makes it explicit and avoids the potential of the
   returned timestamp getting misinterpreted -- in Python 3, many `datetime`
   methods treat "naive" `datetime` objects as local times.
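To illustrate, a minimal sketch using the `rbd` Python binding; the config path, pool, and image name are placeholders::

    import rados
    import rbd
    from datetime import timezone

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')    # placeholder pool name
    image = rbd.Image(ioctx, 'img0')     # placeholder image name
    ts = image.create_timestamp()
    # Previously ts.tzinfo was None ("naive"); it is now timezone.utc ("aware").
    assert ts.tzinfo == timezone.utc
    print(ts.isoformat())
    image.close()
    ioctx.close()
    cluster.shutdown()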
-* RBD: `rbd group info` and `rbd group snap info` commands are introduced to
+* RBD: The `rbd group info` and `rbd group snap info` commands are introduced to
   show information about a group and a group snapshot respectively.
-* RBD: `rbd group snap ls` output now includes the group snapshot IDs. The header
-  of the column showing the state of a group snapshot in the unformatted CLI
-  output is changed from 'STATUS' to 'STATE'. The state of a group snapshot
-  that was shown as 'ok' is now shown as 'complete', which is more descriptive.
+* RBD: The output of `rbd group snap ls` now includes the group snapshot IDs. The
+  heading for group snapshot status in the unformatted CLI
+  output is changed from `STATUS` to `STATE`. A group snapshot
+  that was previously shown as `ok` is now shown as `complete`, which is more descriptive.
 * CephFS: Directories may now be configured with case-insensitive or
-  normalized directory entry names. This is an inheritable configuration making
-  it apply to an entire directory tree. For more information, see
+  normalized directory entry names. This is inheritable and
+  applies to an entire directory tree. For more information, see
   https://docs.ceph.com/en/latest/cephfs/charmap/
 * Based on tests performed at scale on an HDD-based Ceph cluster, it was found
   that scheduling with mClock was not optimal with multiple OSD shards. For
-  example, in the test cluster with multiple OSD node failures, the client
-  throughput was found to be inconsistent across test runs coupled with multiple
-  reported slow requests. However, the same test with a single OSD shard and
+  example, when multiple OSD nodes failed, client
+  throughput was found to be inconsistent across test runs and multiple
+  slow requests were reported. However, the same test with a single OSD shard and
   with multiple worker threads yielded significantly better results in terms of
-  consistency of client and recovery throughput across multiple test runs.
-  Therefore, as an interim measure until the issue with multiple OSD shards
+  consistent client and recovery throughput across multiple test runs.
+  As an interim measure until the issue with multiple OSD shards
   (or multiple mClock queues per OSD) is investigated and fixed, the following
-  changes to the default option values have been made:
+  changes to option value defaults have been made:
   - osd_op_num_shards_hdd = 1 (was 5)
   - osd_op_num_threads_per_shard_hdd = 5 (was 1)
   For more details see https://tracker.ceph.com/issues/66289.
 * MGR: The Ceph Manager's always-on modules/plugins can now be force-disabled.
-  This can be necessary in cases where we wish to prevent the manager from being
+  This can be necessary when we wish to prevent the Manager from being
   flooded by module commands when Ceph services are down or degraded.

 * CephFS: It is now possible to pause the threads that asynchronously purge
@@ -71,21 +70,21 @@
   the subvolume snapshots by using the config option
   "mgr/volumes/pause_cloning".

-* CephFS: Modifying the setting "max_mds" when a cluster is
+* CephFS: Modifying the setting `max_mds` when a cluster is
   unhealthy now requires users to pass the confirmation flag
-  (--yes-i-really-mean-it). This has been added as a precaution to tell the
-  users that modifying "max_mds" may not help with troubleshooting or recovery
-  effort. Instead, it might further destabilize the cluster.
+  (--yes-i-really-mean-it). This has been added as a precaution to inform
+  admins that modifying `max_mds` may not help with troubleshooting or recovery
+  efforts. Instead, it might further destabilize the cluster.
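A hedged python-rados sketch of passing the confirmation flag; the filesystem name and value are placeholders, and the JSON argument names mirror the CLI and are assumptions here::

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({
        'prefix': 'fs set',
        'fs_name': 'cephfs',           # placeholder filesystem name
        'var': 'max_mds',
        'val': '2',
        'yes_i_really_mean_it': True,  # required while the cluster is unhealthy
    })
    ret, out, err = cluster.mon_command(cmd, b'')
    assert ret == 0, err
    cluster.shutdown()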
 * RADOS: Added convenience function `librados::AioCompletion::cancel()` with
   the same behavior as `librados::IoCtx::aio_cancel()`.

 * mgr/restful, mgr/zabbix: both modules, already deprecated since 2020, have been
-  finally removed. They have not been actively maintenance in the last years,
-  and started suffering from vulnerabilities in their dependency chain (e.g.:
+  finally removed. They have not been actively maintained and
+  suffer from vulnerabilities in their dependency chain (e.g.:
   CVE-2023-46136). As alternatives, for the `restful` module, the `dashboard` module
   provides a richer and better maintained RESTful API. Regarding the `zabbix` module,
-  there are alternative monitoring solutions, like `prometheus`, which is the most
-  widely adopted among the Ceph user community.
+  there are alternative monitoring solutions, notably the Prometheus Alertmanager,
+  which scales more readily and is widely adopted within the Ceph community.

 * CephFS: EOPNOTSUPP (Operation not supported) is now returned by the CephFS
   fuse client for `fallocate` for the default case (i.e. mode == 0) since
@@ -103,15 +102,15 @@
 * RGW: PutObjectLockConfiguration can now be used to enable S3 Object Lock on an
   existing versioning-enabled bucket that was not created with Object Lock enabled.
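A hedged boto3 sketch of enabling Object Lock on such a bucket; the endpoint, credentials, bucket name, and retention rule are placeholders::

    import boto3

    s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8000',
                      aws_access_key_id='ACCESS', aws_secret_access_key='SECRET')
    s3.put_object_lock_configuration(
        Bucket='testbucket',
        ObjectLockConfiguration={
            'ObjectLockEnabled': 'Enabled',
            # Optional default retention; 30 days COMPLIANCE is only an example.
            'Rule': {'DefaultRetention': {'Mode': 'COMPLIANCE', 'Days': 30}},
        },
    )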

-* RADOS: The ceph df command reports incorrect MAX AVAIL for stretch mode pools when
+* RADOS: The `ceph df` command reports incorrect `MAX AVAIL` values for stretch mode pools when
   CRUSH rules use multiple take steps for datacenters. PGMap::get_rule_avail
   incorrectly calculates available space from only one datacenter.
-  As a workaround, define CRUSH rules with take default and choose firstn 0 type
-  datacenter. See https://tracker.ceph.com/issues/56650#note-6 for details.
-  Upgrading a cluster configured with a crush rule with multiple take steps
-  can lead to data shuffling, as the new crush changes may necessitate data
-  redistribution. In contrast, a stretch rule with a single-take configuration
-  will not cause any data movement during the upgrade process.
+  As a workaround, define CRUSH rules with `take default` and `choose firstn 0 type
+  datacenter`. See https://tracker.ceph.com/issues/56650#note-6 for details.
+  Upgrading a cluster configured with a CRUSH rule that includes multiple `take` steps
+  can lead to data shuffling, as CRUSH changes may necessitate data
+  redistribution. In contrast, a stretch rule with a single `take` step
+  will not cause data movement during the upgrade process.

 * RGW: The `x-amz-confirm-remove-self-bucket-access` header is now supported by
   `PutBucketPolicy`. Additionally, the root user will always have access to modify
@@ -126,28 +125,28 @@
   is not passed or if either is a non-empty pool, the command will abort.

 * RADOS: A new command, `ceph osd rm-pg-upmap-primary-all`, has been added that allows
-  users to clear all pg-upmap-primary mappings in the osdmap when desired.
+  admins to clear all pg-upmap-primary mappings in the osdmap when desired.
   Related trackers:
   - https://tracker.ceph.com/issues/67179
   - https://tracker.ceph.com/issues/66867

 * RADOS: The default plugin for erasure coded pools has been changed
-  from Jerasure to ISA-L. Clusters created on T or later releases will
-  use ISA-L as the default plugin when creating a new pool. Clusters that upgrade
-  to the T release will continue to use their existing default values.
+  from Jerasure to ISA-L. Pools created on Tentacle or later releases will
+  use ISA-L as the default plugin. Pools within clusters upgraded
+  to Tentacle or Umbrella will continue to use their existing plugin.
   The default values can be overridden by creating a new erasure code profile and
-  selecting it when creating a new pool.
+  selecting it when creating a pool.
   ISA-L is recommended for new pools because the Jerasure library is
   no longer maintained.
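To pin a plugin explicitly, a hedged python-rados sketch of creating a profile; the CLI equivalent is `ceph osd erasure-code-profile set`, and the JSON argument names mirror the CLI and are assumptions here::

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({
        'prefix': 'osd erasure-code-profile set',
        'name': 'myprofile',                      # placeholder profile name
        'profile': ['plugin=isa', 'k=4', 'm=2'],  # 'isa' selects the ISA-L plugin
    })
    ret, out, err = cluster.mon_command(cmd, b'')
    assert ret == 0, err
    # A pool can then select it, e.g. `ceph osd pool create ecpool erasure myprofile`.
    cluster.shutdown()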

-* CephFS: Format of name of pool namespace for CephFS volumes has been changed
+* CephFS: The format of pool namespaces for CephFS volumes has been changed
   from `fsvolumens__<subvol-name>` to `fsvolumens__<subvol-grp-name>_<subvol-name>`
   to avoid namespace collision when two subvolumes located in different
   subvolume groups have the same name. Even with namespace collision, there were
   no security issues since the MDS auth cap is restricted to the subvolume path.
-  Now, with this change, the namespaces are completely isolated.
+  Now, with this change, namespaces are completely isolated.

-* RGW: Added support for the `RestrictPublicBuckets` property of the S3 `PublicAccessBlock`
+* RGW: Add support for the `RestrictPublicBuckets` property of the S3 `PublicAccessBlock`
   configuration.
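A hedged boto3 sketch of setting `RestrictPublicBuckets`; the endpoint, credentials, and bucket name are placeholders::

    import boto3

    s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8000',
                      aws_access_key_id='ACCESS', aws_secret_access_key='SECRET')
    s3.put_public_access_block(
        Bucket='testbucket',
        PublicAccessBlockConfiguration={
            'BlockPublicAcls': False,
            'IgnorePublicAcls': False,
            'BlockPublicPolicy': False,
            # Only public policies are restricted here; the other knobs are examples.
            'RestrictPublicBuckets': True,
        },
    )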

 * RBD: Moving an image that is a member of a group to trash is no longer
@@ -170,10 +169,10 @@
   Replication of tags is controlled by the `s3:GetObject(Version)Tagging` permission.

 * RADOS: A new command, ``ceph osd pool availability-status``, has been added that allows
-  users to view the availability score for each pool in a cluster. A pool is considered
-  unavailable if any PG in the pool is not in active state or if there are unfound
-  objects. Otherwise the pool is considered available. The score is updated every
-  5 seconds. The feature is on by default. A new config option ``enable_availability_tracking``
+  users to view the availability score for each pool. A pool is considered
+  `unavailable` if any PG in the pool is not in active state or if there are unfound
+  objects, otherwise the pool is considered `available`. The score is updated every
+  five seconds and the feature is enabled by default. A new config option ``enable_availability_tracking``
   can be used to turn off the feature if required. Another command is added to clear the
   availability status for a specific pool, ``ceph osd pool clear-availability-status <pool-name>``.
   This feature is in tech preview.
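A hedged sketch of querying the new command from python-rados; the JSON `prefix` mirrors the CLI name and is an assumption based on this note::

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # 'osd pool availability-status' is assumed to match the CLI command above.
    cmd = json.dumps({'prefix': 'osd pool availability-status', 'format': 'json'})
    ret, out, err = cluster.mon_command(cmd, b'')
    print(ret, out.decode() if out else err)
    cluster.shutdown()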
@@ -197,7 +196,7 @@

 >=19.2.1

-* CephFS: Command `fs subvolume create` now allows tagging subvolumes through option
+* CephFS: The `fs subvolume create` command now allows tagging subvolumes through the option
   `--earmark` with a unique identifier needed for NFS or SMB services. The earmark
   string for a subvolume is empty by default. To remove an already present earmark,
   an empty string can be assigned to it. Additionally, commands
@@ -742,8 +741,8 @@ It is also adding 2 new mon commands, to notify monitor about the gateway creati
   - nvme-gw delete
   Relevant tracker: https://tracker.ceph.com/issues/64777

-* MDS now uses host errors, as defined in errno.cc, for current platform.
-  errorcode32_t is converting, internally, the error code from host to ceph, when encoding, and vice versa,
-  when decoding, resulting having LINUX codes on the wire, and HOST code on the receiver.
-  All CEPHFS_E* defines have been removed across Ceph (including the python binding).
+* MDS now uses host error numbers, as defined in errno.cc, for the current platform.
+  `errorcode32_t` internally converts error codes from host to Ceph when encoding, and vice versa
+  when decoding, resulting in Linux codes on the wire and host codes on the receiver.
+  All CEPHFS_E* defines have been removed across Ceph (including the Python binding).
   Relevant tracker: https://tracker.ceph.com/issues/64611
