
Commit 1e71a17

Add common terms for logging book
1 parent b4542e1 commit 1e71a17

File tree

2 files changed: +85 -0 lines


logging/cluster-logging.adoc

Lines changed: 1 addition & 0 deletions
@@ -30,6 +30,7 @@ endif::[]
// modules required to cover the user story. You can also include other
// assemblies.

+include::modules/logging-common-terms.adoc[leveloffset=+1]
include::modules/cluster-logging-about.adoc[leveloffset=+1]

For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Configuring the log collector].

modules/logging-common-terms.adoc

Lines changed: 84 additions & 0 deletions
@@ -0,0 +1,84 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging.adoc

:_content-type: REFERENCE
[id="openshift-logging-common-terms_{context}"]
= Common {product-title} Logging terms

This glossary defines common terms that are used in the {product-title} Logging content.

annotation::
You can use annotations to attach metadata to objects.

Cluster Logging Operator (CLO)::
The Cluster Logging Operator provides a set of APIs to control the collection and forwarding of application, infrastructure, and audit logs.

Custom Resource (CR)::
A CR is an extension of the Kubernetes API. To configure {product-title} Logging and log forwarding, you can customize the `ClusterLogging` and the `ClusterLogForwarder` custom resources.
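+
A minimal `ClusterLogging` custom resource might look like the following sketch. The instance name, namespace, and component choices are illustrative and not a complete, production-ready configuration:
+
[source,yaml]
----
# Illustrative values only; adjust the log store, visualization, and collector for your environment.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
  visualization:
    type: kibana
  collection:
    logs:
      type: fluentd
----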

event router::
The event router is a pod that watches {product-title} events. It collects logs by using {product-title} Logging.

Fluentd::
Fluentd is a log collector that resides on each {product-title} node. It gathers application, infrastructure, and audit logs and forwards them to different outputs.

garbage collection::
Garbage collection is the process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods.

Elasticsearch::
Elasticsearch is a distributed search and analytics engine. {product-title} uses Elasticsearch as the default log store for {product-title} Logging.

Elasticsearch Operator::
The Elasticsearch Operator is used to run an Elasticsearch cluster on top of {product-title}. The Elasticsearch Operator provides self-service for Elasticsearch cluster operations and is used by {product-title} Logging.

indexing::
Indexing is a data structure technique that is used to quickly locate and access data. Indexing optimizes performance by minimizing the amount of disk access required when a query is processed.

JSON logging::
The {product-title} Logging Log Forwarding API enables you to parse JSON logs into a structured object and forward them either to {product-title} Logging-managed Elasticsearch or to any other third-party system that is supported by the Log Forwarding API.
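+
As an illustrative sketch, assuming the `parse: json` pipeline field of the `ClusterLogForwarder` API, a pipeline that parses application logs as JSON and forwards them to the default log store might look like the following. The pipeline name is an arbitrary example:
+
[source,yaml]
----
# Illustrative values only; the pipeline name is a placeholder.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: parse-json-app-logs
    inputRefs:
    - application
    outputRefs:
    - default
    parse: json
----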

Kibana::
Kibana is a browser-based console interface to query, discover, and visualize your Elasticsearch data through histograms, line graphs, and pie charts.

Kubernetes API server::
The Kubernetes API server validates and configures data for the API objects.

Labels::
Labels are key-value pairs that you can use to organize and select subsets of objects, such as a pod.
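+
For example, the following sketch attaches a label to a pod; the pod name, label key and value, and image are illustrative assumptions:
+
[source,yaml]
----
# Illustrative values only.
apiVersion: v1
kind: Pod
metadata:
  name: logging-demo
  labels:
    app: logging-demo
spec:
  containers:
  - name: demo
    image: registry.example.com/demo:latest
----
+
You can then select such pods with a label selector, for example `oc get pods -l app=logging-demo`.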

Logging::
With {product-title} Logging, you can aggregate application, infrastructure, and audit logs throughout your cluster. You can also store them in a default log store, forward them to third-party systems, and query and visualize the stored logs in the default log store.

logging collector::
A logging collector collects logs from the cluster, formats them, and forwards them to the log store or to third-party systems.

log store::
A log store is used to store aggregated logs. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage.

log visualizer::
The log visualizer is the user interface (UI) component that you can use to view information such as logs, graphs, charts, and other metrics. The current implementation is Kibana.

node::
A node is a worker machine in the {product-title} cluster. A node is either a virtual machine (VM) or a physical machine.

Operators::
Operators are the preferred method of packaging, deploying, and managing a Kubernetes application in an {product-title} cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers.

pod::
A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers and runs on a worker node.

Role-based access control (RBAC)::
RBAC is a key security control to ensure that cluster users and workloads have access only to the resources required to execute their roles.

shards::
Elasticsearch organizes the log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards.

taint::
Taints ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints to a node.

toleration::
You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints.
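+
For example, after a node is tainted (such as with `oc adm taint nodes <node-name> logging=reserved:NoSchedule`), a pod needs a matching toleration to be scheduled onto it. The key, value, effect, pod name, and image in the following sketch are illustrative assumptions:
+
[source,yaml]
----
# Illustrative values only; the toleration key, value, and effect must match the taint on the node.
apiVersion: v1
kind: Pod
metadata:
  name: collector-demo
spec:
  containers:
  - name: demo
    image: registry.example.com/demo:latest
  tolerations:
  - key: "logging"
    operator: "Equal"
    value: "reserved"
    effect: "NoSchedule"
----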

web console::
A user interface (UI) to manage {product-title}.
