[id="cluster-logging-release-notes-5-1-0"]
= OpenShift Logging 5.1.0

This release includes link:https://access.redhat.com/errata/RHSA-2021:2112[RHSA-2021:2112 OpenShift Logging Bug Fix Release 5.1.0].

[id="openshift-logging-5-1-0-new-features-and-enhancements"]
== New features and enhancements

OpenShift Logging 5.1 now supports {product-title} 4.7 and later running on:

* IBM Power Systems
* IBM Z and LinuxONE

This release adds improvements related to the following components and concepts.

* As a cluster administrator, you can use Kubernetes pod labels to gather log data from an application and send it to a specific log store. You can gather log data by configuring the `inputs[].application.selector.matchLabels` element in the `ClusterLogForwarder` custom resource (CR) YAML file. You can also filter the gathered log data by namespace.
(link:https://issues.redhat.com/browse/LOG-883[*LOG-883*])
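+
For example, a minimal `ClusterLogForwarder` CR that uses this selector might look like the following sketch. The input name, label values, and namespace shown here are illustrative assumptions, not required values:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - name: selected-app-logs            # hypothetical input name
    application:
      selector:
        matchLabels:
          environment: production      # example pod label to match
      namespaces:
      - my-app-namespace               # optional namespace filter (example value)
  pipelines:
  - name: forward-selected-app-logs
    inputRefs:
    - selected-app-logs
    outputRefs:
    - default                          # forward to the default log store
----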

* This release adds the following new `ElasticsearchNodeDiskWatermarkReached` warnings to the OpenShift Elasticsearch Operator (EO):
** Elasticsearch Node Disk Low Watermark Reached
** Elasticsearch Node Disk High Watermark Reached
** Elasticsearch Node Disk Flood Watermark Reached
+
The alert applies when it predicts that an Elasticsearch node will reach the `Disk Low Watermark`, `Disk High Watermark`, or `Disk Flood Stage Watermark` thresholds in the next 6 hours. This warning period gives you time to respond before the node reaches the disk watermark thresholds. The warning messages also provide links to troubleshooting steps that you can follow to help mitigate the issue. The EO applies the past several hours of disk space data to a linear model to generate these warnings.
(link:https://issues.redhat.com/browse/LOG-1100[*LOG-1100*])

* JSON logs can now be forwarded as JSON objects, rather than quoted strings, to either Red Hat's managed Elasticsearch cluster or any of the other supported third-party systems. Additionally, you can now query individual fields from a JSON log message inside Kibana, which increases the discoverability of specific logs.
(link:https://issues.redhat.com/browse/LOG-785[*LOG-785*], link:https://issues.redhat.com/browse/LOG-1148[*LOG-1148*])
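+
A minimal sketch of how this might be enabled in the `ClusterLogForwarder` CR, assuming the `parse: json` pipeline field; the pipeline name is illustrative:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: application-json-logs        # hypothetical pipeline name
    inputRefs:
    - application
    outputRefs:
    - default                          # the default Elasticsearch log store
    parse: json                        # forward JSON log messages as JSON objects
----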

[id="openshift-logging-5-1-0-deprecated-removed-features"]
== Deprecated and removed features

Some features available in previous releases have been deprecated or removed.

Deprecated functionality is still included in OpenShift Logging and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

[id="openshift-logging-5-1-0-elasticsearch-curator"]
=== Elasticsearch Curator has been removed

With this update, the Elasticsearch Curator has been removed and is no longer supported. Elasticsearch Curator helped you curate or manage your indices on OpenShift Container Platform 4.4 and earlier. Instead of using Elasticsearch Curator, configure the log retention time.
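For example, log retention can be configured in the `ClusterLogging` custom resource. This is a minimal sketch; the retention periods and node settings are illustrative values, not defaults:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    retentionPolicy:
      application:
        maxAge: 7d        # example retention period for application logs
      infra:
        maxAge: 3d        # example retention period for infrastructure logs
      audit:
        maxAge: 1d        # example retention period for audit logs
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
----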

[id="openshift-logging-5-1-0-bug-fixes"]
== Bug fixes

* Before this update, the `ClusterLogForwarder` CR did not show the `inputs[].selector` element after it had been created. With this update, when you specify a `selector` in the `ClusterLogForwarder` CR, it remains. Fixing this bug was necessary for link:https://issues.redhat.com/browse/LOG-883[LOG-883], which enables using pod label selectors to forward application log data.
(link:https://issues.redhat.com/browse/LOG-1338[*LOG-1338*])

* Before this update, an update to the cluster service version (CSV) accidentally introduced resources and limits for the OpenShift Elasticsearch Operator container. Under specific conditions, this caused an out-of-memory condition that terminated the Elasticsearch Operator pod. This update fixes the issue by removing the CSV resources and limits for the Operator container. Now, the Operator gets scheduled without issues.
(link:https://issues.redhat.com/browse/LOG-1254[*LOG-1254*])

* Before this update, forwarding logs to Kafka using chained certificates failed with the following error message:
+
`state=error: certificate verify failed (unable to get local issuer certificate)`
+
Logs could not be forwarded to a Kafka broker with a certificate signed by an intermediate CA. This happened because the Fluentd Kafka plug-in could only handle a single CA certificate supplied in the `ca-bundle.crt` entry of the corresponding secret. This update fixes the issue by enabling the Fluentd Kafka plug-in to handle multiple CA certificates supplied in the `ca-bundle.crt` entry of the corresponding secret. Now, logs can be forwarded to a Kafka broker with a certificate signed by an intermediate CA.
(link:https://issues.redhat.com/browse/LOG-1218[*LOG-1218*], link:https://issues.redhat.com/browse/LOG-1216[*LOG-1216*])
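+
For reference, a sketch of a Kafka output definition that references such a secret; the output name, broker URL, topic, and secret name are illustrative assumptions:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: kafka-app-logs                          # hypothetical output name
    type: kafka
    url: tls://kafka.example.com:9093/app-topic   # example broker URL and topic
    secret:
      name: kafka-secret                          # secret whose ca-bundle.crt holds the full CA chain
  pipelines:
  - name: app-logs-to-kafka
    inputRefs:
    - application
    outputRefs:
    - kafka-app-logs
----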

* Before this update, while under load, Elasticsearch responded to some requests with an HTTP 500 error, even though there was nothing wrong with the cluster. Retrying the request was successful. This update fixes the issue by updating the index management cron jobs to be more resilient when they encounter temporary HTTP 500 errors. The updated index management cron jobs retry a request multiple times before failing.
(link:https://issues.redhat.com/browse/LOG-1215[*LOG-1215*])

* Before this update, if you did not set the `.proxy` value in the cluster installation configuration, and then configured a global proxy on the installed cluster, a bug prevented Fluentd from forwarding logs to Elasticsearch. The workaround was to set the `no_proxy` value to `.svc.cluster.local` in the proxy or cluster configuration so that internal traffic skipped the proxy. This update fixes the proxy configuration issue: if you configure the global proxy after installing an {product-title} cluster, Fluentd now forwards logs to Elasticsearch.
(link:https://issues.redhat.com/browse/LOG-1187[*LOG-1187*], link:https://bugzilla.redhat.com/show_bug.cgi?id=1915448[*BZ#1915448*])
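+
For reference, the workaround described above corresponded to the following cluster-wide proxy configuration. This is a sketch only; the proxy URLs are illustrative values:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:3128    # example proxy URL
  httpsProxy: http://proxy.example.com:3128   # example proxy URL
  noProxy: .svc.cluster.local                 # skip the proxy for internal service traffic
----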

* Before this update, the logging collector created more socket connections than necessary. With this update, the logging collector reuses the existing socket connection to send logs.
(link:https://issues.redhat.com/browse/LOG-1186[*LOG-1186*])

* Before this update, if a cluster administrator tried to add or remove storage from an Elasticsearch cluster, the OpenShift Elasticsearch Operator (EO) incorrectly tried to upgrade the Elasticsearch cluster, displaying `scheduledUpgrade: "True"` and `shardAllocationEnabled: primaries`, and changing the volumes. With this update, the EO does not try to upgrade the Elasticsearch cluster.
+
The EO status now displays the following new status conditions to indicate when it has ignored an unsupported change to the Elasticsearch storage:
+
** `StorageStructureChangeIgnored` when you try to change between using ephemeral and persistent storage structures.
** `StorageClassNameChangeIgnored` when you try to change the storage class name.
** `StorageSizeChangeIgnored` when you try to change the storage size.
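+
For context, the following fragment of the `ClusterLogging` CR shows the storage fields whose modification triggers these statuses. The storage class name and size are illustrative values, not recommendations:
+
[source,yaml]
----
spec:
  logStore:
    elasticsearch:
      storage:
        storageClassName: gp2   # changing this value is reported as StorageClassNameChangeIgnored
        size: 200G              # changing this value is reported as StorageSizeChangeIgnored
      # Omitting the storage block selects ephemeral storage; switching between
      # ephemeral and persistent storage is reported as StorageStructureChangeIgnored.
----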
+
[NOTE]
====
If you configure the `ClusterLogging` custom resource (CR) to switch from ephemeral to persistent storage, the EO creates a persistent volume claim (PVC) but does not create a persistent volume (PV). To clear the `StorageStructureChangeIgnored` status, you must revert the change to the `ClusterLogging` CR and delete the PVC.
====
+
(link:https://issues.redhat.com/browse/LOG-1351[*LOG-1351*])

* Before this update, if you redeployed a full Elasticsearch cluster, it got stuck in an unhealthy state, with one non-data node running and all other data nodes shut down. This happened because new certificates prevented the Elasticsearch Operator from scaling down the non-data nodes of the Elasticsearch cluster. With this update, the Elasticsearch Operator can scale all the data and non-data nodes down and then back up again, so that they load the new certificates. The Elasticsearch Operator can reach the new nodes after they load the new certificates.
(link:https://issues.redhat.com/browse/LOG-1536[*LOG-1536*])