|
1 | | -* RGW: OpenSSL engine support deprecated in favor of provider support. |
2 | | - - Removed `openssl_engine_opts` configuration option. OpenSSL engine configurations in string format are no longer supported. |
3 | | - - Added `openssl_conf` configuration option for loading specified providers as default providers. |
| 1 | +* RGW: OpenSSL engine support is deprecated in favor of provider support. |
| 2 | + - Removed the `openssl_engine_opts` configuration option. OpenSSL engine configuration in string format is no longer supported. |
| 3 | + - Added the `openssl_conf` configuration option for loading specified providers as default providers. |
4 | 4 | Configuration file syntax follows the OpenSSL standard (see https://github.com/openssl/openssl/blob/master/doc/man5/config.pod). |
5 | | - If the default provider is still required when using custom providers, |
| 5 | + If the default provider is required when also using custom providers, |
6 | 6 | it must be explicitly loaded in the configuration file or code (see https://github.com/openssl/openssl/blob/master/README-PROVIDERS.md). |
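A minimal `openssl_conf` file that loads a custom provider while explicitly keeping the default provider might look like the following sketch (section names are arbitrary labels, chosen here for illustration; see the config.pod and README-PROVIDERS.md links above for the authoritative syntax):

```ini
# Illustrative openssl_conf contents; section names are example labels.
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
# Load a custom provider and explicitly activate the default provider too.
custom = custom_sect
default = default_sect

[custom_sect]
activate = 1

[default_sect]
activate = 1
```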
7 | 7 |
|
8 | 8 | >=20.0.0 |
9 | 9 |
|
10 | | -* RADOS: lead Monitor and stretch mode status are now included in the `ceph status` output. |
| 10 | +* RADOS: The lead Monitor and stretch mode status are now displayed by `ceph status`. |
11 | 11 | Related Tracker: https://tracker.ceph.com/issues/70406 |
12 | 12 | * RGW: The User Account feature introduced in Squid provides first-class support for |
13 | | - IAM APIs and policy. Our preliminary STS support was instead based on tenants, and |
| 13 | + IAM APIs and policy. Our preliminary STS support was based on tenants, and |
14 | 14 | exposed some IAM APIs to admins only. This tenant-level IAM functionality is now |
15 | 15 | deprecated in favor of accounts. While we'll continue to support the tenant feature |
16 | 16 | itself for namespace isolation, the following features will be removed no sooner |
17 | 17 | than the V release: |
18 | | - * tenant-level IAM APIs like CreateRole, PutRolePolicy and PutUserPolicy, |
19 | | - * use of tenant names instead of accounts in IAM policy documents, |
20 | | - * interpretation of IAM policy without cross-account policy evaluation, |
| 18 | + * Tenant-level IAM APIs including CreateRole, PutRolePolicy and PutUserPolicy, |
| 19 | + * Use of tenant names instead of accounts in IAM policy documents, |
| 20 | + * Interpretation of IAM policy without cross-account policy evaluation, |
21 | 21 | * S3 API support for cross-tenant names such as `Bucket='tenant:bucketname'` |
22 | 22 | * RGW: Lua scripts will not run against health checks. |
23 | 23 | * RGW: For compatibility with AWS S3, LastModified timestamps are now truncated |
24 | | - to the second. Note that during upgrade, users may observe these timestamps |
| 24 | + to the second. Note that during an upgrade, users may observe these timestamps |
25 | 25 | moving backwards as a result. |
26 | | -* RGW: IAM policy evaluation now supports conditions ArnEquals and ArnLike, along |
| 26 | +* RGW: IAM policy evaluation now supports the conditions ArnEquals and ArnLike, along |
27 | 27 | with their Not and IfExists variants. |
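A policy statement using one of the newly supported condition operators might look like this hypothetical fragment (the ARN and action are examples, not taken from the release notes):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "*",
    "Condition": {
      "ArnLike": { "aws:SourceArn": "arn:aws:s3:::example-bucket" }
    }
  }]
}
```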
28 | | -* RGW: Adding missing quotes to the ETag values returned by S3 CopyPart, |
| 28 | +* RGW: Add missing quotes to ETag values in S3 CopyPart,
29 | 29 | PostObject and CompleteMultipartUpload responses. |
30 | | -* RGW: Added support for S3 GetObjectAttributes. |
31 | | -* RGW: Added BEAST frontend option 'so_reuseport' which facilitates running multiple |
32 | | - RGW instances on the same host by sharing a single TCP port. |
33 | | - |
| 30 | +* RGW: Add support for S3 GetObjectAttributes. |
| 31 | +* RGW: Add the Beast frontend option `so_reuseport`, which facilitates running multiple
| 32 | + RGW instances on the same host by sharing a single TCP port.
34 | 33 | * RBD: All Python APIs that produce timestamps now return "aware" `datetime` |
35 | 34 | objects instead of "naive" ones (i.e. those including time zone information |
36 | 35 | instead of those not including it). All timestamps remain to be in UTC but |
37 | 36 | including `timezone.utc` makes it explicit and avoids the potential of the |
38 | 37 | returned timestamp getting misinterpreted -- in Python 3, many `datetime` |
39 | 38 | methods treat "naive" `datetime` objects as local times. |
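The "aware" vs "naive" distinction can be sketched with plain `datetime` objects (no Ceph cluster involved; this only illustrates the behavior described above):

```python
from datetime import datetime, timezone

# A "naive" timestamp carries no time zone; many datetime methods treat
# it as local time.
naive = datetime(2025, 1, 1, 12, 0, 0)

# An "aware" timestamp pins the value to UTC explicitly, as the RBD Python
# APIs now do.
aware = datetime(2025, 1, 1, 12, 0, 0, tzinfo=timezone.utc)

assert naive.tzinfo is None
assert aware.tzinfo is timezone.utc
assert aware.utcoffset().total_seconds() == 0
# naive.timestamp() would be interpreted in local time, so the two values
# can differ on any host whose local zone is not UTC.
```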
40 | | -* RBD: `rbd group info` and `rbd group snap info` commands are introduced to |
| 39 | +* RBD: The `rbd group info` and `rbd group snap info` commands are introduced to |
41 | 40 | show information about a group and a group snapshot respectively. |
42 | | -* RBD: `rbd group snap ls` output now includes the group snapshot IDs. The header |
43 | | - of the column showing the state of a group snapshot in the unformatted CLI |
44 | | - output is changed from 'STATUS' to 'STATE'. The state of a group snapshot |
45 | | - that was shown as 'ok' is now shown as 'complete', which is more descriptive. |
| 41 | +* RBD: The output of `rbd group snap ls` now includes the group snapshot IDs. The |
| 42 | + heading for group snapshot status in the unformatted CLI |
| 43 | + output is changed from `STATUS` to `STATE`. A group snapshot |
| 44 | + that was previously shown as `ok` is now shown as `complete`, which is more descriptive. |
46 | 45 | * CephFS: Directories may now be configured with case-insensitive or |
47 | | - normalized directory entry names. This is an inheritable configuration making |
48 | | - it apply to an entire directory tree. For more information, see |
| 46 | + normalized directory entry names. This is inheritable and |
| 47 | + applies to an entire directory tree. For more information, see |
49 | 48 | https://docs.ceph.com/en/latest/cephfs/charmap/ |
50 | 49 | * Based on tests performed at scale on an HDD-based Ceph cluster, it was found
51 | 50 | that scheduling with mClock was not optimal with multiple OSD shards. For |
52 | | - example, in the test cluster with multiple OSD node failures, the client |
53 | | - throughput was found to be inconsistent across test runs coupled with multiple |
54 | | - reported slow requests. However, the same test with a single OSD shard and |
| 51 | + example, when multiple OSD nodes failed, client |
| 52 | + throughput was found to be inconsistent across test runs and multiple |
| 53 | + slow requests were reported. However, the same test with a single OSD shard and |
55 | 54 | with multiple worker threads yielded significantly better results in terms of |
56 | | - consistency of client and recovery throughput across multiple test runs. |
57 | | - Therefore, as an interim measure until the issue with multiple OSD shards |
| 55 | + consistent client and recovery throughput across multiple test runs. |
| 56 | + As an interim measure until the issue with multiple OSD shards |
58 | 57 | (or multiple mClock queues per OSD) is investigated and fixed, the following |
59 | | - changes to the default option values have been made: |
| 58 | + changes to option value defaults have been made: |
60 | 59 | - osd_op_num_shards_hdd = 1 (was 5) |
61 | 60 | - osd_op_num_threads_per_shard_hdd = 5 (was 1) |
62 | 61 | For more details see https://tracker.ceph.com/issues/66289. |
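Operators who want the previous multi-shard behavior back can override the new defaults, for example with a `ceph.conf` fragment like the sketch below (the values shown are the pre-change defaults; use with care until the multi-shard issue tracked above is resolved):

```ini
[osd]
# Restore the previous HDD defaults (sketch; not a recommendation).
osd_op_num_shards_hdd = 5
osd_op_num_threads_per_shard_hdd = 1
```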
63 | 62 | * MGR: The Ceph Manager's always-on modules/plugins can now be force-disabled.
64 | | - This can be necessary in cases where we wish to prevent the manager from being |
| 63 | + This can be necessary when we wish to prevent the Manager from being |
65 | 64 | flooded by module commands when Ceph services are down or degraded. |
66 | 65 |
|
67 | 66 | * CephFS: It is now possible to pause the threads that asynchronously purge |
|
71 | 70 | the subvolume snapshots by using the config option |
72 | 71 | "mgr/volumes/pause_cloning". |
73 | 72 |
|
74 | | -* CephFS: Modifying the setting "max_mds" when a cluster is |
| 73 | +* CephFS: Modifying the setting `max_mds` when a cluster is |
75 | 74 | unhealthy now requires users to pass the confirmation flag |
76 | | - (--yes-i-really-mean-it). This has been added as a precaution to tell the |
77 | | - users that modifying "max_mds" may not help with troubleshooting or recovery |
78 | | - effort. Instead, it might further destabilize the cluster. |
| 75 | + (`--yes-i-really-mean-it`). This has been added as a precaution to inform
| 76 | + admins that modifying `max_mds` may not help with troubleshooting or recovery |
| 77 | + efforts. Instead, it might further destabilize the cluster. |
79 | 78 | * RADOS: Added convenience function `librados::AioCompletion::cancel()` with |
80 | 79 | the same behavior as `librados::IoCtx::aio_cancel()`. |
81 | 80 |
|
82 | 81 | * mgr/restful, mgr/zabbix: both modules, already deprecated since 2020, have been |
83 | | - finally removed. They have not been actively maintenance in the last years, |
84 | | - and started suffering from vulnerabilities in their dependency chain (e.g.: |
| 82 | + finally removed. They have not been actively maintained and |
| 83 | + suffer from vulnerabilities in their dependency chain (e.g.,
85 | 84 | CVE-2023-46136). As alternatives, for the `restful` module, the `dashboard` module |
86 | 85 | provides a richer and better maintained RESTful API. Regarding the `zabbix` module, |
87 | | - there are alternative monitoring solutions, like `prometheus`, which is the most |
88 | | - widely adopted among the Ceph user community. |
| 86 | + there are alternative monitoring solutions, notably the Prometheus Alertmanager, |
| 87 | + which scales more readily and is widely adopted within the Ceph community. |
89 | 88 |
|
90 | 89 | * CephFS: EOPNOTSUPP (Operation not supported) is now returned by the CephFS
91 | 90 | fuse client for `fallocate` for the default case (i.e. mode == 0) since |
|
103 | 102 | * RGW: PutObjectLockConfiguration can now be used to enable S3 Object Lock on an |
104 | 103 | existing versioning-enabled bucket that was not created with Object Lock enabled. |
105 | 104 |
|
106 | | -* RADOS: The ceph df command reports incorrect MAX AVAIL for stretch mode pools when |
| 105 | +* RADOS: The `ceph df` command reports incorrect `MAX AVAIL` values for stretch mode pools when |
107 | 106 | CRUSH rules use multiple `take` steps for datacenters. `PGMap::get_rule_avail`
108 | 107 | incorrectly calculates available space from only one datacenter. |
109 | | - As a workaround, define CRUSH rules with take default and choose firstn 0 type |
110 | | - datacenter. See https://tracker.ceph.com/issues/56650#note-6 for details. |
111 | | - Upgrading a cluster configured with a crush rule with multiple take steps |
112 | | - can lead to data shuffling, as the new crush changes may necessitate data |
113 | | - redistribution. In contrast, a stretch rule with a single-take configuration |
114 | | - will not cause any data movement during the upgrade process. |
| 108 | + As a workaround, define CRUSH rules with `take default` and `choose firstn 0 type |
| 109 | + datacenter`. See https://tracker.ceph.com/issues/56650#note-6 for details. |
| 110 | + Upgrading a cluster configured with a CRUSH rule that includes multiple `take` steps |
| 111 | + can lead to data shuffling, as CRUSH changes may necessitate data |
| 112 | + redistribution. In contrast, a stretch rule with a single `take` step |
| 113 | + will not cause data movement during the upgrade process. |
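The recommended workaround can be sketched as a single-`take` stretch rule (the rule name, id, and replica counts here are placeholders; see the tracker note above for the authoritative example):

```
rule stretch_rule {
    id 1
    type replicated
    step take default
    step choose firstn 0 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}
```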
115 | 114 |
|
116 | 115 | * RGW: The `x-amz-confirm-remove-self-bucket-access` header is now supported by |
117 | 116 | `PutBucketPolicy`. Additionally, the root user will always have access to modify |
|
126 | 125 | is not passed or if either is a non-empty pool, the command will abort. |
127 | 126 |
|
128 | 127 | * RADOS: A new command, `ceph osd rm-pg-upmap-primary-all`, has been added that allows |
129 | | - users to clear all pg-upmap-primary mappings in the osdmap when desired. |
| 128 | + admins to clear all pg-upmap-primary mappings in the osdmap when desired. |
130 | 129 | Related trackers: |
131 | 130 | - https://tracker.ceph.com/issues/67179 |
132 | 131 | - https://tracker.ceph.com/issues/66867 |
133 | 132 |
|
134 | 133 | * RADOS: The default plugin for erasure coded pools has been changed |
135 | | - from Jerasure to ISA-L. Clusters created on T or later releases will |
136 | | - use ISA-L as the default plugin when creating a new pool. Clusters that upgrade |
137 | | - to the T release will continue to use their existing default values. |
| 134 | + from Jerasure to ISA-L. Pools created on Tentacle or later releases will |
| 135 | + use ISA-L as the default plugin. Pools within clusters upgraded |
| 136 | + to Tentacle or Umbrella will continue to use their existing plugin. |
138 | 137 | The default values can be overridden by creating a new erasure code profile and
139 | | - selecting it when creating a new pool. |
| 138 | + selecting it when creating a pool. |
140 | 139 | ISA-L is recommended for new pools because the Jerasure library is |
141 | 140 | no longer maintained. |
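Overriding the default plugin as described above could look like the following sketch (profile and pool names, and the k/m values, are examples):

```
# Create a profile that pins a plugin explicitly, then use it for a new pool.
ceph osd erasure-code-profile set myprofile plugin=isa k=4 m=2
ceph osd pool create ecpool 64 erasure myprofile
```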
142 | 141 |
|
143 | | -* CephFS: Format of name of pool namespace for CephFS volumes has been changed |
| 142 | +* CephFS: The format of pool namespaces for CephFS volumes has been changed |
144 | 143 | from `fsvolumens__<subvol-name>` to `fsvolumens__<subvol-grp-name>_<subvol-name>` |
145 | 144 | to avoid namespace collision when two subvolumes located in different |
146 | 145 | subvolume groups have the same name. Even with namespace collision, there were |
147 | 146 | no security issues since the MDS auth cap is restricted to the subvolume path. |
148 | | - Now, with this change, the namespaces are completely isolated. |
| 147 | + Now, with this change, namespaces are completely isolated. |
149 | 148 |
|
150 | | -* RGW: Added support for the `RestrictPublicBuckets` property of the S3 `PublicAccessBlock` |
| 149 | +* RGW: Add support for the `RestrictPublicBuckets` property of the S3 `PublicAccessBlock` |
151 | 150 | configuration. |
152 | 151 |
|
153 | 152 | * RBD: Moving an image that is a member of a group to trash is no longer |
|
170 | 169 | Replication of tags is controlled by the `s3:GetObject(Version)Tagging` permission. |
171 | 170 |
|
172 | 171 | * RADOS: A new command, ``ceph osd pool availability-status``, has been added that allows |
173 | | - users to view the availability score for each pool in a cluster. A pool is considered |
174 | | - unavailable if any PG in the pool is not in active state or if there are unfound |
175 | | - objects. Otherwise the pool is considered available. The score is updated every |
176 | | - 5 seconds. The feature is on by default. A new config option ``enable_availability_tracking`` |
| 172 | + users to view the availability score for each pool. A pool is considered |
| 173 | + `unavailable` if any PG in the pool is not in an active state or if there are unfound
| 174 | + objects; otherwise the pool is considered `available`. The score is updated every
| 175 | + five seconds and the feature is enabled by default. A new config option ``enable_availability_tracking``
177 | 176 | can be used to turn off the feature if required. Another command is added to clear the |
178 | 177 | availability status for a specific pool, ``ceph osd pool clear-availability-status <pool-name>``. |
179 | 178 | This feature is in tech preview. |
|
197 | 196 |
|
198 | 197 | >=19.2.1 |
199 | 198 |
|
200 | | -* CephFS: Command `fs subvolume create` now allows tagging subvolumes through option |
| 199 | +* CephFS: The `fs subvolume create` command now allows tagging subvolumes through option |
201 | 200 | `--earmark` with a unique identifier needed for NFS or SMB services. The earmark |
202 | 201 | string for a subvolume is empty by default. To remove an already present earmark, |
203 | 202 | an empty string can be assigned to it. Additionally, commands |
@@ -742,8 +741,8 @@ It is also adding 2 new mon commands, to notify monitor about the gateway creati |
742 | 741 | - nvme-gw delete |
743 | 742 | Relevant tracker: https://tracker.ceph.com/issues/64777 |
744 | 743 |
|
745 | | -* MDS now uses host errors, as defined in errno.cc, for current platform. |
746 | | -errorcode32_t is converting, internally, the error code from host to ceph, when encoding, and vice versa, |
747 | | -when decoding, resulting having LINUX codes on the wire, and HOST code on the receiver. |
748 | | -All CEPHFS_E* defines have been removed across Ceph (including the python binding). |
| 744 | +* MDS now uses host error numbers, as defined in errno.cc, for the current platform.
| 745 | +`errorcode32_t` internally converts error codes from host to Ceph when encoding, and vice versa
| 746 | +when decoding, resulting in Linux codes on the wire and host codes on the receiver.
| 747 | +All CEPHFS_E* defines have been removed across Ceph (including the Python binding). |
749 | 748 | Relevant tracker: https://tracker.ceph.com/issues/64611 |
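The encode/decode translation described above can be sketched in Python (the lookup table and function are hypothetical illustrations of the idea, not the actual `errorcode32_t` implementation):

```python
import errno

# Hypothetical subset of the Linux numbering used on the wire.
LINUX_WIRE_CODES = {1: "EPERM", 2: "ENOENT", 95: "EOPNOTSUPP"}

def wire_to_host(code: int) -> int:
    """Map a Linux wire errno number to the local host's value by name."""
    return getattr(errno, LINUX_WIRE_CODES[code])

# On a Linux receiver the mapping is the identity for these codes; on other
# platforms the numeric value may differ while the symbolic name is kept.
assert wire_to_host(2) == errno.ENOENT
```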