
Commit 3f51f32

Merge pull request #62748 from laubai/osdocs-5904-tshoot-kcs-docs
[OSDOCS#5904]: Adding links to priority ROSA troubleshooting material
2 parents f55108e + 42ba6b1 commit 3f51f32

9 files changed: +32 -4 lines changed

modules/cluster-logging-deploy-console.adoc

Lines changed: 5 additions & 2 deletions
@@ -93,7 +93,7 @@ This option sets the `openshift.io/cluster-monitoring: "true"` label in the Name
 
 .. Choose *Red Hat OpenShift Logging* from the list of available Operators, and click *Install*.
 
-.. Ensure that the *A specific namespace on the cluster* is selected under *Installation Mode*.
+.. Ensure that *A specific namespace on the cluster* is selected under *Installation Mode*.
 
 .. Ensure that *Operator recommended namespace* is *openshift-logging* under *Installed Namespace*.
 
@@ -241,4 +241,7 @@ You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and K
 * fluentd-br7r2
 * fluentd-fn2sb
 * fluentd-zqgqx
-* kibana-7fb4fd4cc9-bvt4p
+* kibana-7fb4fd4cc9-bvt4p
+
+.Troubleshooting
+* If Alertmanager logs alerts such as `Prometheus could not scrape fluentd for more than 10m`, make sure that `openshift.io/cluster-monitoring` is set to `"true"` for the OpenShift Elasticsearch Operator and OpenShift Logging Operator. See the Red Hat KnowledgeBase for more information: link:https://access.redhat.com/solutions/5692801[Prometheus could not scrape fluentd for more than 10m alert in Alertmanager]
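The troubleshooting bullet added above assumes the `openshift.io/cluster-monitoring: "true"` label is applied to the namespaces that the OpenShift Logging and OpenShift Elasticsearch Operators run in. A minimal sketch of checking and, if needed, setting that label, assuming cluster-admin access and the default `openshift-logging` and `openshift-operators-redhat` namespaces (the namespace names are assumptions, not part of this diff):

[source,terminal]
----
# Verify whether the label is already present.
$ oc get namespace openshift-logging openshift-operators-redhat --show-labels

# Apply (or overwrite) the label so Prometheus scrapes the operator metrics again.
$ oc label namespace openshift-logging openshift.io/cluster-monitoring="true" --overwrite
$ oc label namespace openshift-operators-redhat openshift.io/cluster-monitoring="true" --overwrite
----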

modules/life-cycle-mandatory-upgrades.adoc

Lines changed: 3 additions & 1 deletion
@@ -2,6 +2,7 @@
 // * rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc
 // * osd_architecture/osd_policy/osd-life-cycle.adoc
 
+:_content-type: REFERENCE
 [id="rosa-mandatory-upgrades_{context}"]
 = Mandatory upgrades
 
@@ -10,4 +11,5 @@ impacts the security or stability of the cluster, the customer must upgrade to t
 patch release within two link:https://access.redhat.com/articles/2623321[business days].
 
 In extreme circumstances and based on Red Hat's assessment of the CVE criticality to the
-environment, Red Hat will notify customers that they have two link:https://access.redhat.com/articles/2623321[business days] to schedule or manually update their cluster to the latest, secure patch release. In the case that an update has not been performed, Red Hat will automatically update the cluster to the latest, secure patch release to mitigate potential security breach(es) or instability. Red Hat may, at its own discretion, temporarily delay an automated update if requested by a customer through a link:https://access.redhat.com/support[support case].
+environment, Red Hat will notify customers that they have two link:https://access.redhat.com/articles/2623321[business days] to schedule or manually update their cluster to the latest, secure patch release.
+In the case that an update is not performed after two link:https://access.redhat.com/articles/2623321[business days], Red Hat will automatically update the cluster to the latest, secure patch release to mitigate potential security breach(es) or instability. Red Hat may, at its own discretion, temporarily delay an automated update if requested by a customer through a link:https://access.redhat.com/support[support case].

modules/osd-applications-config-custom-domains.adoc

Lines changed: 4 additions & 0 deletions
@@ -122,3 +122,7 @@ $ oc get route -n my-project
 $ curl https://hello-openshift-tls-my-project.apps.<company_name>.io
 Hello OpenShift!
 ----
+
+.Troubleshooting
+* link:https://access.redhat.com/solutions/5419501[Error creating TLS secret]
+* link:https://access.redhat.com/solutions/6546011[Troubleshooting: CustomDomain in NotReady state]
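For the `Error creating TLS secret` article linked above, a frequent culprit is a certificate and private key that do not match. A minimal sketch of verifying the pair before recreating the secret, assuming an RSA key and the placeholder file names `fullchain.pem` and `privkey.pem` (neither is taken from this module):

[source,terminal]
----
# The two digests must match; differing output means the certificate and key do not belong together.
$ openssl x509 -noout -modulus -in fullchain.pem | openssl md5
$ openssl rsa -noout -modulus -in privkey.pem | openssl md5

# Recreate the TLS secret from the matching pair.
$ oc create secret tls <company_name>-tls --cert=fullchain.pem --key=privkey.pem -n <my_project>
----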

modules/osd-applications-renew-custom-domains.adoc

Lines changed: 4 additions & 1 deletion
@@ -32,4 +32,7 @@ $ oc patch customdomain <company_name> --type='merge' -p '{"spec":{"certificate"
 [source,terminal]
 ----
 $ oc delete secret <secret-old> -n <my_project>
-----
+----
+
+.Troubleshooting
+* link:https://access.redhat.com/solutions/5419501[Error creating TLS secret]
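After patching the `customdomain` resource and deleting the old secret as shown above, it can help to confirm that the new certificate is actually referenced. A minimal sketch; the exact fields of the CustomDomain resource are an assumption here, so inspect the full YAML rather than relying on a specific JSONPath:

[source,terminal]
----
# Check that the spec points at the new TLS secret and that the status reports a Ready condition.
$ oc get customdomain <company_name> -o yaml
----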

modules/registry-exposing-secure-registry-manually.adoc

Lines changed: 3 additions & 0 deletions
@@ -80,3 +80,6 @@ spec:
 Only set `secretName` if you are providing a custom TLS configuration for the
 registry's route.
 ====
+
+.Troubleshooting
+* link:https://access.redhat.com/solutions/5419501[Error creating TLS secret]
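If the linked `Error creating TLS secret` article applies here, the secret referenced by `secretName` is expected to be a `kubernetes.io/tls` secret in the `openshift-image-registry` namespace. A minimal sketch of creating it; the secret name `public-route-tls` and the file paths are placeholders, not values from this module:

[source,terminal]
----
# Create the TLS secret that the registry route's secretName field will reference.
$ oc create secret tls public-route-tls \
    --cert=</path/to/tls.crt> \
    --key=</path/to/tls.key> \
    -n openshift-image-registry
----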

modules/rosa-deleting-cluster.adoc

Lines changed: 6 additions & 0 deletions
@@ -143,3 +143,9 @@ $ rosa delete operator-roles -c <cluster_id> --mode auto <1>
 ----
 <1> Replace `<cluster_id>` with the ID of the cluster.
 endif::sts[]
+
+.Troubleshooting
+* If the cluster cannot be deleted because of missing IAM roles, see xref:../sd_support/rosa-troubleshooting-deployments.adoc#rosa-troubleshooting-cluster-deletion_rosa-troubleshooting-cluster-deployments[Repairing a cluster that cannot be deleted].
+* If the cluster cannot be deleted for other reasons:
+** Check that there are no Add-ons for your cluster pending in the link:https://console.redhat.com/openshift[Hybrid Cloud Console].
+** Check that all AWS resources and dependencies have been deleted in the Amazon Web Console.
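Before opening a support case for a deletion that hangs, the remaining state can be inspected from the `rosa` CLI. A minimal sketch, assuming you are logged in with `rosa login` and replace `<cluster_id>` as in the module above:

[source,terminal]
----
# Check the overall cluster state; a deletion in progress normally reports an uninstalling state.
$ rosa describe cluster -c <cluster_id>

# List add-ons that might still be pending removal for the cluster.
$ rosa list addons -c <cluster_id>
----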

modules/rosa-upgrading-cli-tutorial.adoc

Lines changed: 3 additions & 0 deletions
@@ -71,3 +71,6 @@ You will receive an email when the upgrade is complete. You can also check the s
 ifeval::["{context}" == "rosa-upgrading-sts"]
 :!sts:
 endif::[]
+
+.Troubleshooting
+* Sometimes a scheduled upgrade does not trigger. See link:https://access.redhat.com/solutions/6648291[Upgrade maintenance cancelled] for more information.
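When a scheduled upgrade never seems to start, checking what the CLI currently reports for the cluster is a reasonable first step before following the linked article. A minimal sketch, assuming a recent `rosa` CLI; the presence of a scheduled-upgrade line in the output is an expectation, not a guarantee:

[source,terminal]
----
# The cluster description should include the current version and any upgrade that is still scheduled.
$ rosa describe cluster -c <cluster_name>
----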

modules/rosa-upgrading-manual-ocm.adoc

Lines changed: 3 additions & 0 deletions
@@ -57,3 +57,6 @@ The status is displayed in the *Update status* pane.
 ifeval::["{context}" == "rosa-upgrading-sts"]
 :!sts:
 endif::[]
+
+.Troubleshooting
+* Sometimes a scheduled upgrade does not trigger. See link:https://access.redhat.com/solutions/6648291[Upgrade maintenance cancelled] for more information.

rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc

Lines changed: 1 addition & 0 deletions
@@ -72,6 +72,7 @@ endif::[]
 
 [id="nodes-about-autoscaling-nodes-additional-resources"]
 == Additional resources
+* link:https://access.redhat.com/solutions/6821651[Troubleshooting: Autoscaling is not scaling down nodes]
 * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-machinepools-about.adoc#rosa-nodes-machinepools-about[About machinepools]
 ifdef::openshift-rosa[]
 * xref:../../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing worker nodes]
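For the autoscaler article added above, scale-down problems are often explained by the autoscaler's own configuration (scale-down disabled, utilization thresholds, or machine pool minimums). A minimal sketch of where to look, assuming the default `ClusterAutoscaler` name and read access to the `openshift-machine-api` namespace:

[source,terminal]
----
# Review scale-down settings such as spec.scaleDown.enabled and the utilization threshold.
$ oc get clusterautoscaler default -o yaml

# Check each machine autoscaler's minimum replica count, which caps how far nodes can scale down.
$ oc get machineautoscaler -n openshift-machine-api
----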
