Commit f1e345e

kyanny and vgrl authored
Update GHES 3.13.12, 3.14.9, 3.15.4, 3.16.0 release notes (Elasticsearch data loss with ghe-repl-teardown | ghe-repl-promote) (#54912)
Co-authored-by: vgrl <[email protected]>
1 parent 7e0b947 commit f1e345e

File tree

4 files changed: +15 -5 lines changed


data/release-notes/enterprise-server/3-13/12.yml

Lines changed: 4 additions & 0 deletions

@@ -44,6 +44,10 @@ sections:
       When restoring data originally backed up from a 3.13 or greater appliance version, the elasticsearch indices need to be reindexed before some of the data will show up. This happens via a nightly scheduled job. It can also be forced by running `/usr/local/share/enterprise/ghe-es-search-repair`.
     - |
       When restoring from a backup snapshot, a large number of `mapper_parsing_exception` errors may be displayed.
+    - |
+      {% data reusables.release-notes.2025-03-03-elasticsearch-data-loss %}
+
+      [Updated: 2025-03-19]
     - |
       After a restore, existing outside collaborators are unable to be added to repositories in a new organization. This issue can be resolved by running `/usr/local/share/enterprise/ghe-es-search-repair` on the appliance.
     - |
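
This hunk adds one new entry to the known-issues list of the 3.13.12 release notes; the next two files receive the same entry. A minimal sketch of where the new item lands in the YAML, assuming the list uses the usual `known_issues:` key (the hunk header only shows the enclosing `sections:` key):

```yaml
# Sketch of data/release-notes/enterprise-server/3-13/12.yml after this commit.
# The known_issues key and exact indentation are assumed; only nearby entries are shown.
sections:
  known_issues:
    - |
      When restoring from a backup snapshot, a large number of `mapper_parsing_exception` errors may be displayed.
    - |
      {% data reusables.release-notes.2025-03-03-elasticsearch-data-loss %}

      [Updated: 2025-03-19]
    - |
      After a restore, existing outside collaborators are unable to be added to repositories in a new organization. This issue can be resolved by running `/usr/local/share/enterprise/ghe-es-search-repair` on the appliance.
```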

data/release-notes/enterprise-server/3-14/9.yml

Lines changed: 4 additions & 0 deletions

@@ -56,6 +56,10 @@ sections:
       When enabling automatic update checks for the first time in the Management Console, the status is not dynamically reflected until the "Updates" page is reloaded.
     - |
       When restoring from a backup snapshot, a large number of `mapper_parsing_exception` errors may be displayed.
+    - |
+      {% data reusables.release-notes.2025-03-03-elasticsearch-data-loss %}
+
+      [Updated: 2025-03-19]
     - |
       After a restore, existing outside collaborators are unable to be added to repositories in a new organization. This issue can be resolved by running `/usr/local/share/enterprise/ghe-es-search-repair` on the appliance.
     - |

data/release-notes/enterprise-server/3-15/4.yml

Lines changed: 4 additions & 0 deletions

@@ -60,5 +60,9 @@ sections:
       When initializing a new GHES cluster, nodes with the `consul-server` role should be added to the cluster before adding additional nodes. Adding all nodes simultaneously creates a race condition between nomad server registration and nomad client registration.
     - |
       Admins setting up cluster high availability (HA) may encounter a spokes error when running ghe-cluster-repl-status if a new organization and repositories are created before using the ghe-cluster-repl-bootstrap command. To avoid this issue, complete the cluster HA setup with ghe-cluster-repl-bootstrap before creating new organizations and repositories.
+    - |
+      {% data reusables.release-notes.2025-03-03-elasticsearch-data-loss %}
+
+      [Updated: 2025-03-19]
     - |
       After a restore, existing outside collaborators are unable to be added to repositories in a new organization. This issue can be resolved by running `/usr/local/share/enterprise/ghe-es-search-repair` on the appliance.

data/release-notes/enterprise-server/3-16/0.yml

Lines changed: 3 additions & 5 deletions

@@ -7,7 +7,7 @@ intro: |
   **Warning**: Customers who have security products enabled by default at the organization level will experience issues when upgrading from 3.14 to 3.16.0. We recommend waiting for the next 3.16 patch to upgrade.
 
   {% endwarning %}
-
+
   For upgrade instructions, see [AUTOTITLE](/admin/upgrading-your-instance/preparing-to-upgrade/overview-of-the-upgrade-process).
 sections:
 

@@ -230,11 +230,9 @@ sections:
     - |
       After a geo-replica is promoted to primary by running `ghe-repl-promote`, the actions workflow of a repository does not have any suggested workflows.
     - |
-      For appliances in a high availability configuration, Elasticsearch indices are deleted in two situations:
-      * On failover
-      * When running `ghe-repl-teardown <REPLICA_HOSTNAME>` from the primary instance
+      {% data reusables.release-notes.2025-03-03-elasticsearch-data-loss %}
 
-      All indices are recoverable, except for Audit Log indices. Since Elasticsearch itself is the source of truth for these logs, they may only be recoverable from a backup. If you need assistance, visit {% data variables.contact.contact_ent_support %}.
+      [Updated: 2025-03-19]
 
   closing_down:
     # https://github.com/github/releases/issues/4683
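
The prose removed from 3.16.0 above is the same warning that the new `{% data %}` reference pulls in as a shared reusable. In the docs repository, a tag of the form `{% data reusables.release-notes.NAME %}` is normally resolved from a Markdown file under `data/reusables/release-notes/`. The reusable itself is not part of this commit, so the path and wording below are a reconstruction from the deleted lines, not the actual file:

```markdown
<!-- Assumed path: data/reusables/release-notes/2025-03-03-elasticsearch-data-loss.md -->
<!-- Contents reconstructed from the text deleted in 3-16/0.yml; the real reusable may differ. -->
For appliances in a high availability configuration, Elasticsearch indices are deleted in two situations:

* On failover
* When running `ghe-repl-teardown <REPLICA_HOSTNAME>` from the primary instance

All indices are recoverable, except for Audit Log indices. Since Elasticsearch itself is the source of truth for these logs, they may only be recoverable from a backup. If you need assistance, visit {% data variables.contact.contact_ent_support %}.
```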
