
Commit e923c4e

Updating kubernetes install docs for style and grammar
Signed-off-by: Lynette Miles <[email protected]>
1 parent 9db958d commit e923c4e

2 files changed: +85 −49 lines changed


installation/kubernetes.md

Lines changed: 79 additions & 49 deletions
@@ -6,93 +6,118 @@ description: Kubernetes Production Grade Log Processor
![](<../.gitbook/assets/fluentbit\_kube\_logging (1).png>)

-[Fluent Bit](http://fluentbit.io) is a lightweight and extensible **Log Processor** that comes with full support for Kubernetes:
+[Fluent Bit](http://fluentbit.io) is a lightweight and extensible log processor
+with full support for Kubernetes:

-* Process Kubernetes containers logs from the file system or Systemd/Journald.
-* Enrich logs with Kubernetes Metadata.
-* Centralize your logs in third party storage services like Elasticsearch, InfluxDB, HTTP, etc.
+- Process Kubernetes containers logs from the file system or Systemd/Journald.
+- Enrich logs with Kubernetes Metadata.
+- Centralize your logs in third party storage services like Elasticsearch, InfluxDB,
+HTTP, and so on.

-## Concepts <a href="#concepts" id="concepts"></a>
+## Concepts

-Before getting started it is important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of _nodes_, so our log agent tool will need to run on every node to collect logs from every _POD_, hence Fluent Bit is deployed as a DaemonSet (a POD that runs on every _node_ of the cluster).
+Before getting started it's important to understand how Fluent Bit will be deployed.
+Kubernetes manages a cluster of nodes. The Fluent Bit log agent tool needs to run
+on every node to collect logs from every pod. Fluent Bit is deployed as a
+DaemonSet, which is a pod that runs on every node of the cluster.

-When Fluent Bit runs, it will read, parse and filter the logs of every POD and will enrich each entry with the following information (metadata):
+When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In
+addition, Fluent Bit adds metadata to each entry using the
+[Kubernetes](../pipeline/filters/kubernetes) filter
+plugin.

-* Pod Name
-* Pod ID
-* Container Name
-* Container ID
-* Labels
-* Annotations
+The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant
+information such as the `pod_id`, `labels`, and `annotations`. Other fields such as
+`pod_name`, `container_id`, and `container_name` are retrieved locally from the log
+file names. All of this is handled automatically, no intervention is required from a
+configuration aspect.

-To obtain this information, a built-in filter plugin called _kubernetes_ talks to the Kubernetes API Server to retrieve relevant information such as the _pod\_id_, _labels_ and _annotations_, other fields such as _pod\_name_, _container\_id_ and _container\_name_ are retrieved locally from the log file names. All of this is handled automatically, no intervention is required from a configuration aspect.
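
As an illustrative sketch (not part of this commit): a Kubernetes filter stanza in Fluent Bit's classic configuration format typically looks like the following. The match tag is an assumption that depends on how the tail input tags container logs; the remaining options are documented `filter_kubernetes` settings.

```python
[FILTER]
    # Attach Kubernetes metadata (pod_id, labels, annotations) to each record
    Name                kubernetes
    # Tag prefix used by the tail input that reads container logs (assumed)
    Match               kube.*
    # In-cluster API server endpoint (the documented default)
    Kube_URL            https://kubernetes.default.svc:443
    # Merge the "log" field into structured fields when it contains JSON
    Merge_Log           On
    # Let pods suggest a parser or exclusion through annotations
    K8S-Logging.Parser  On
    K8S-Logging.Exclude On
```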
+## Installation

-> Our Kubernetes Filter plugin is fully inspired by the [Fluentd Kubernetes Metadata Filter](https://github.com/fabric8io/fluent-plugin-kubernetes\_metadata\_filter) written by [Jimmi Dyson](https://github.com/jimmidyson).
+[Fluent Bit](http://fluentbit.io) should be deployed as a DaemonSet, so it will
+be available on every node of your Kubernetes cluster.

-## Installation <a href="#installation" id="installation"></a>
-
-[Fluent Bit](http://fluentbit.io) should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
-
-The recommended way to deploy Fluent Bit is with the official Helm Chart: <https://github.com/fluent/helm-charts>
+The recommended way to deploy Fluent Bit for Kubernetes is with the official Helm
+Chart: <https://github.com/fluent/helm-charts>

### Note for OpenShift

-If you are using Red Hat OpenShift you will also need to set up security context constraints (SCC) using the relevant option in the helm chart.
+If you are using Red Hat OpenShift you must set up Security Context Constraints (SCC)
+using the relevant option in the helm chart.
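
As an illustrative sketch (not part of this commit): the OpenShift support in the official Helm chart is toggled through chart values. The value key below is an assumption; confirm the exact name in the chart's `values.yaml`.

```shell
# Assumed value key; verify the exact name in the chart's values.yaml
helm upgrade --install fluent-bit fluent/fluent-bit \
  --set openShift.enabled=true
```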

### Installing with Helm Chart

-[Helm](https://helm.sh) is a package manager for Kubernetes and allows you to quickly deploy application packages into your running cluster. Fluent Bit is distributed via a helm chart found in the Fluent Helm Charts repo: [https://github.com/fluent/helm-charts](https://github.com/fluent/helm-charts).
+[Helm](https://helm.sh) is a package manager for Kubernetes and lets you deploy
+application packages into your running cluster. Fluent Bit is distributed using a Helm
+chart found in the [Fluent Helm Charts repository](https://github.com/fluent/helm-charts).

-To add the Fluent Helm Charts repo use the following command
+Use the following command to add the Fluent Helm charts repository

```shell
helm repo add fluent https://fluent.github.io/helm-charts
```
-To validate that the repo was added you can run `helm search repo fluent` to ensure the charts were added. The default chart can then be installed by running the following
+To validate that the repository was added you can run `helm search repo fluent` to
+ensure the charts were added. The default chart can then be installed by running the
+following

```shell
helm upgrade --install fluent-bit fluent/fluent-bit
```
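
As an illustrative sketch (not part of this commit): after the install you can confirm that the DaemonSet is running a pod on every node. The DaemonSet name follows the release name used above; the label selector is an assumption and may differ by chart version.

```shell
# Inspect the DaemonSet created by the Helm release (name follows the release above)
kubectl get daemonset fluent-bit

# List the Fluent Bit pods and the nodes they run on;
# the label selector is an assumption, adjust to match your chart's labels
kubectl get pods -o wide -l app.kubernetes.io/name=fluent-bit
```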

### Default Values

-The default chart values include configuration to read container logs, with Docker parsing, systemd logs apply Kubernetes metadata enrichment and finally output to an Elasticsearch cluster. You can modify the values file included [https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml](https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml) to specify additional outputs, health checks, monitoring endpoints, or other configuration options.
+The default chart values include configuration to read container logs with Docker
+parsing, read systemd logs, apply Kubernetes metadata enrichment, and output to an
+Elasticsearch cluster. You can modify the
+[included values file](https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/values.yaml)
+to specify additional outputs, health checks, monitoring endpoints, or other
+configuration options.
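
As an illustrative sketch (not part of this commit): overrides from a modified values file are usually passed at install or upgrade time; `my-values.yaml` is a hypothetical local copy of the chart's values file.

```shell
# Install or upgrade the release with overrides from a local values file
# (my-values.yaml is a hypothetical file based on the chart's values.yaml)
helm upgrade --install fluent-bit fluent/fluent-bit -f my-values.yaml
```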

## Details

-The default configuration of Fluent Bit makes sure of the following:
+The default configuration of Fluent Bit ensures the following:

-* Consume all containers logs from the running Node and parse them with either the `docker` or `cri` multiline parser.
-* Persist how far it got into each file it is tailing so if a pod is restarted it picks up from where it left off.
-* The Kubernetes filter will enrich the logs with Kubernetes metadata, specifically _labels_ and _annotations_. The filter only goes to the API Server when it cannot find the cached info, otherwise it uses the cache.
-* The default backend in the configuration is Elasticsearch set by the [Elasticsearch Output Plugin](../pipeline/outputs/elasticsearch.md). It uses the Logstash format to ingest the logs. If you need a different Index and Type, please refer to the plugin option and do your own adjustments.
-* There is an option called **Retry\_Limit** set to False, that means if Fluent Bit cannot flush the records to Elasticsearch it will re-try indefinitely until it succeed.
+- Consume all container logs from the running node and parse them with either
+the `docker` or `cri` multi-line parser.
+- Persist how far it got into each file it's tailing so if a pod is restarted it
+picks up from where it left off.
+- The Kubernetes filter adds Kubernetes metadata, specifically `labels` and
+`annotations`. The filter only contacts the API Server when it can't find the
+cached information, otherwise it uses the cache.
+- The default backend in the configuration is Elasticsearch set by the
+[Elasticsearch Output Plugin](../pipeline/outputs/elasticsearch.md).
+It uses the Logstash format to ingest the logs. If you need a different `Index`
+and `Type`, refer to the plugin option and update as needed.
+- There is an option called `Retry_Limit`, which is set to `False`. If Fluent Bit
+can't flush the records to Elasticsearch it will retry indefinitely until it
+succeeds.
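
As an illustrative sketch (not part of this commit): the Elasticsearch bullet above corresponds to an output stanza along these lines. The host is an assumption; the actual default lives in the chart's `values.yaml`.

```python
[OUTPUT]
    # Default backend described in the list above: Elasticsearch
    Name            es
    Match           *
    # Assumed in-cluster Elasticsearch service; the chart default may differ
    Host            elasticsearch-master
    Port            9200
    # Ingest using Logstash-style index names
    Logstash_Format On
    # Retry failed flushes indefinitely
    Retry_Limit     False
```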

-## Windows Deployment
+## Windows deployment

-Since v1.5.0, Fluent Bit supports deployment to Windows pods.
+Fluent Bit v1.5.0 and later supports deployment to Windows pods.

### Log files overview

When deploying Fluent Bit to Kubernetes, there are three log files that you need to pay attention to.
-`C:\k\kubelet.err.log`
+- `C:\k\kubelet.err.log`

-* This is the error log file from kubelet daemon running on host.
-* You will need to retain this file for future troubleshooting (to debug deployment failures etc.)
+This is the error log file from kubelet daemon running on host. Retain this file
+for future troubleshooting like debugging deployment failures.

-`C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log`
+- `C:\var\log\containers\<pod>_<namespace>_<container>-<docker>.log`

-* This is the main log file you need to watch. Configure Fluent Bit to follow this file.
-* It is actually a symlink to the Docker log file in `C:\ProgramData\`, with some additional metadata on its file name.
+This is the main log file you need to watch. Configure Fluent Bit to follow this
+file. It's a symlink to the Docker log file in `C:\ProgramData\`, with some
+additional metadata on the file's name.

-`C:\ProgramData\Docker\containers\<docker>\<docker>.log`
+- `C:\ProgramData\Docker\containers\<docker>\<docker>.log`

-* This is the log file produced by Docker.
-* Normally you don't directly read from this file, but you need to make sure that this file is visible from Fluent Bit.
+This is the log file produced by Docker. Normally you don't directly read from this
+file, but you need to make sure that this file is visible from Fluent Bit.
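
As an illustrative sketch (not part of this commit): following the main log file described above is done with a `tail` input. The path pattern and parser are assumptions; the full configuration appears in the Configure Fluent Bit section further down.

```python
[INPUT]
    Name    tail
    # Follow the per-container symlinks described above (assumed path pattern)
    Path    C:\var\log\containers\*.log
    # Parse Docker JSON log lines (assumed parser name)
    Parser  docker
    Tag     kube.*
```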

-Typically, your deployment yaml contains the following volume configuration.
+Typically, your deployment YAML contains the following volume configuration.
```yaml
spec:
@@ -120,7 +145,8 @@ spec:

### Configure Fluent Bit

-Assuming the basic volume configuration described above, you can apply the following config to start logging. You can visualize this configuration [here (Sign-up required)](https://calyptia.com/free-trial)
+Assuming the basic volume configuration described previously, you can apply the
+following configuration to start logging.

```yaml
fluent-bit.conf: |
@@ -162,14 +188,18 @@ parsers.conf: |

### Mitigate unstable network on Windows pods

-Windows pods often lack working DNS immediately after boot ([#78479](https://github.com/kubernetes/kubernetes/issues/78479)). To mitigate this issue, `filter_kubernetes` provides a built-in mechanism to wait until the network starts up:
+Windows pods often lack working DNS immediately after boot
+([#78479](https://github.com/kubernetes/kubernetes/issues/78479)). To mitigate this
+issue, `filter_kubernetes` provides a built-in mechanism to wait until the network
+starts up:

-* `DNS_Retries` - Retries N times until the network start working (6)
-* `DNS_Wait_Time` - Lookup interval between network status checks (30)
+- `DNS_Retries`: Retries N times until the network starts working (6)
+- `DNS_Wait_Time`: Lookup interval between network status checks (30)

-By default, Fluent Bit waits for 3 minutes (30 seconds x 6 times). If it's not enough for you, tweak the configuration as follows.
+By default, Fluent Bit waits for 3 minutes (30 seconds x 6 times). If it's not enough
+for you, tweak the configuration as follows.

-```
+```python
[filter]
    Name kubernetes
    ...
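
The configuration block above is cut off at the hunk boundary. As an illustrative sketch (not part of this commit), a filter stanza that tweaks the DNS options could look like the following; the match tag is an assumption and the values shown are the documented defaults.

```python
[filter]
    Name          kubernetes
    # Tag produced by the container-log input (assumed)
    Match         kube.*
    # Retry the lookup up to 6 times before giving up (documented default)
    DNS_Retries   6
    # Wait 30 seconds between network status checks (documented default)
    DNS_Wait_Time 30
```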

pipeline/filters/kubernetes.md

Lines changed: 6 additions & 0 deletions
@@ -469,3 +469,9 @@ By default the Kube\_URL is set to `https://kubernetes.default.svc:443` . Ensure
### I can't see new objects getting metadata

In some cases, you may only see some objects being appended with metadata while other objects are not enriched. This can occur at times when local data is cached and does not contain the correct id for the kubernetes object that requires enrichment. For most Kubernetes objects the Kubernetes API server is updated which will then be reflected in Fluent Bit logs, however in some cases for `Pod` objects this refresh to the Kubernetes API server can be skipped, causing metadata to be skipped.
+
+## Credit
+
+Our Kubernetes Filter plugin is fully inspired by the [Fluentd Kubernetes Metadata
+Filter](https://github.com/fabric8io/fluent-plugin-kubernetes\_metadata\_filter)
+written by [Jimmi Dyson](https://github.com/jimmidyson).
