This glossary defines common terms that are used in the {logging} documentation.

Annotation::
You can use annotations to attach metadata to objects.
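+
As an illustration (the object name and annotation key here are hypothetical), annotations are set under `metadata.annotations`:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config              # hypothetical name
  annotations:
    example.com/notes: "Free-form metadata; not used to select objects"
----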

{clo}::
The {clo} provides a set of APIs to control the collection and forwarding of application, infrastructure, and audit logs.

Custom resource (CR)::
A CR is an extension of the Kubernetes API. To configure the {logging} and log forwarding, you can customize the `ClusterLogging` and the `ClusterLogForwarder` custom resources.
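+
For example, a minimal `ClusterLogging` CR might look like the following sketch. The exact fields depend on your {logging} version, so verify them against the API reference for your release:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                    # the CR is conventionally named "instance"
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch             # the default internal log store
  collection:
    logs:
      type: fluentd                 # the logging collector
----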

Event router::
The event router is a pod that watches {product-title} events. It collects logs by using the {logging}.

Fluentd::
Fluentd is a log collector that resides on each {product-title} node. It gathers application, infrastructure, and audit logs and forwards them to different outputs.

Garbage collection::
Garbage collection is the process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods.

Elasticsearch::
Elasticsearch is a distributed search and analytics engine. {product-title} uses Elasticsearch as a default log store for the {logging}.

{es-op}::
The {es-op} is used to run an Elasticsearch cluster on {product-title}. The {es-op} provides self-service for the Elasticsearch cluster operations and is used by the {logging}.

Indexing::
Indexing is a data structure technique that is used to quickly locate and access data. Indexing optimizes performance by minimizing the amount of disk access required when a query is processed.

JSON logging::
The Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either the {logging}-managed Elasticsearch or any other third-party system supported by the Log Forwarding API.
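+
For example, a `ClusterLogForwarder` pipeline that parses JSON might look like the following sketch; the pipeline name is hypothetical, and you should verify the fields against your {logging} version:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: parse-json-app-logs       # hypothetical pipeline name
    inputRefs:
    - application                   # collect application logs
    outputRefs:
    - default                       # "default" sends logs to the internal log store
    parse: json                     # parse the message body as structured JSON
----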

Kibana::
Kibana is a browser-based console interface to query, discover, and visualize your Elasticsearch data through histograms, line graphs, and pie charts.

Labels::
Labels are key-value pairs that you can use to organize and select subsets of objects, such as a pod.
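+
For example, a pod with two labels (the names and values here are hypothetical) can later be selected with a label selector such as `app=log-demo`:
+
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                 # hypothetical name
  labels:
    app: log-demo
    tier: frontend
spec:
  containers:
  - name: app
    image: example.com/app:latest   # hypothetical image
----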

Logging::
With the {logging}, you can aggregate application, infrastructure, and audit logs throughout your cluster. You can also store them to a default log store, forward them to third party systems, and query and visualize the stored logs in the default log store.

Logging collector::
A logging collector collects logs from the cluster, formats them, and forwards them to the log store or third party systems.

Log store::
A log store is used to store aggregated logs. You can use an internal log store or forward logs to external log stores.

Log visualizer::
Log visualizer is the user interface (UI) component you can use to view information such as logs, graphs, charts, and other metrics.

Node::
A node is a worker machine in the {product-title} cluster. A node is either a virtual machine (VM) or a physical machine.

Operators::
Operators are the preferred method of packaging, deploying, and managing a Kubernetes application in an {product-title} cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers.

Pod::
A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers and runs on a worker node.

Role-based access control (RBAC)::
RBAC is a key security control to ensure that cluster users and workloads have access only to resources required to execute their roles.
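+
For example, the following role (the role name is hypothetical) grants read-only access to pods and their logs in the `openshift-logging` namespace:
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-reader                  # hypothetical role name
  namespace: openshift-logging
rules:
- apiGroups: [""]                   # "" is the core API group
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
----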

Shards::
Elasticsearch organizes log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards.

Taint::
Taints ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node.
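+
For example, the following node spec (the node name and taint key are hypothetical) repels any pod that does not tolerate the taint:
+
[source,yaml]
----
apiVersion: v1
kind: Node
metadata:
  name: example-node                # hypothetical node name
spec:
  taints:
  - key: logging-only               # hypothetical taint key
    value: "true"
    effect: NoSchedule              # do not schedule pods without a matching toleration
----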

Toleration::
You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints.
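+
For example, the following pod tolerates the hypothetical `logging-only` taint from the previous sketch and can therefore be scheduled onto the tainted node:
+
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-collector           # hypothetical pod name
spec:
  tolerations:
  - key: logging-only               # matches the taint key on the node
    operator: Equal
    value: "true"
    effect: NoSchedule
  containers:
  - name: collector
    image: example.com/collector:latest   # hypothetical image
----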

Web console::
A user interface (UI) to manage {product-title}.
ifdef::openshift-rosa,openshift-dedicated[]
The web console for {product-title} can be found at link:https://console.redhat.com/openshift[https://console.redhat.com/openshift].
endif::[]

// modules/cluster-logging-uninstall.adoc
:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-uninstall_{context}"]
= Uninstalling the {logging}

You can stop log aggregation by deleting the `ClusterLogging` custom resource (CR). After deleting the CR, there are other {logging} components that remain, which you can optionally remove.

Deleting the `ClusterLogging` CR does not remove the persistent volume claims (PVCs). To preserve or delete the remaining PVCs, persistent volumes (PVs), and associated data, you must take further action.

.Prerequisites

* The {clo} and {es-op} are installed.

.Procedure

. Use the
ifndef::openshift-rosa,openshift-dedicated[]
{product-title} web console
endif::[]

// ... (intermediate steps not shown in the diff)

.. Click the Options menu {kebab} next to *Elasticsearch* and select *Delete Custom Resource Definition*.

. Optional: Remove the {clo} and {es-op}:

.. Switch to the *Operators* -> *Installed Operators* page.

.. Click the Options menu {kebab} next to the {clo} and select *Uninstall Operator*.

.. Click the Options menu {kebab} next to the {es-op} and select *Uninstall Operator*.

. Optional: Remove the `openshift-logging` and `openshift-operators-redhat` projects.
+
[IMPORTANT]
====
Do not delete the `openshift-operators-redhat` project if other global Operators are installed in this namespace.
====

.. Switch to the *Home* -> *Projects* page.

.. Click the Options menu {kebab} next to the *openshift-logging* project and select *Delete Project*.

.. Confirm the deletion by typing `openshift-logging` in the dialog box and click *Delete*.

.. Click the Options menu {kebab} next to the *openshift-operators-redhat* project and select *Delete Project*.

.. Confirm the deletion by typing `openshift-operators-redhat` in the dialog box and click *Delete*.