diff --git a/pipeline/filters/kubernetes.md b/pipeline/filters/kubernetes.md
index 9908dfca6..114907f4a 100644
--- a/pipeline/filters/kubernetes.md
+++ b/pipeline/filters/kubernetes.md
@@ -26,7 +26,7 @@ The plugin supports the following configuration parameters:
 | Key | Description | Default |
 | :--- | :--- | :--- |
 | `Buffer_Size` | Set the buffer size for HTTP client when reading responses from Kubernetes API server. The value must conform to the [unit size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. A value of `0` results in no limit, and the buffer will expand as-needed. If pod specifications exceed the buffer limit, the API response is discarded when retrieving metadata, and some Kubernetes metadata will fail to be injected to the logs. | `32k` |
-| `Kube_URL` | API Server end-point | `https://kubernetes.default.svc:443` |
+| `Kube_URL` | API Server endpoint | `https://kubernetes.default.svc:443` |
 | `Kube_CA_File` | CA certificate file | `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` |
 | `Kube_CA_Path` | Absolute path to scan for certificate files | _none_ |
 | `Kube_Token_File` | Token file | `/var/run/secrets/kubernetes.io/serviceaccount/token` |
diff --git a/pipeline/inputs/fluentbit-metrics.md b/pipeline/inputs/fluentbit-metrics.md
index 4424780c1..d64b80980 100644
--- a/pipeline/inputs/fluentbit-metrics.md
+++ b/pipeline/inputs/fluentbit-metrics.md
@@ -33,7 +33,7 @@ In the following configuration file, the input plugin `node_exporter_metrics` co
 service:
   flush: 1
   log_level: info
-  
+
 pipeline:
   inputs:
     - name: fluentbit_metrics
@@ -54,7 +54,7 @@ pipeline:
 # Fluent Bit Metrics + Prometheus Exporter
 # -------------------------------------------
 # The following example collects Fluent Bit metrics and exposes
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
 #
 # After starting the service try it with:
 #
@@ -83,4 +83,4 @@ You can test the expose of the metrics by using `curl`:
 
 ```shell
 curl http://127.0.0.1:2021/metrics
-```
\ No newline at end of file
+```
diff --git a/pipeline/inputs/node-exporter-metrics.md b/pipeline/inputs/node-exporter-metrics.md
index d4d122162..25158c97e 100644
--- a/pipeline/inputs/node-exporter-metrics.md
+++ b/pipeline/inputs/node-exporter-metrics.md
@@ -100,7 +100,7 @@ In the following configuration file, the input plugin `node_exporter_metrics` co
 # Node Exporter Metrics + Prometheus Exporter
 # -------------------------------------------
 # The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
 #
 # After starting the service try it with:
 #
diff --git a/pipeline/inputs/process-exporter-metrics.md b/pipeline/inputs/process-exporter-metrics.md
index a6f15fd9b..bbb5de1e2 100644
--- a/pipeline/inputs/process-exporter-metrics.md
+++ b/pipeline/inputs/process-exporter-metrics.md
@@ -14,7 +14,6 @@ The Process Exporter Metrics plugin implements collecting of the various metrics
 {% hint style="info" %}
 All metrics including those collected with this plugin flow through a separate pipeline from logs and current filters don't operate on top of metrics. This plugin is only supported on Linux based operating systems as it uses the `proc` filesystem to access the relevant metrics. MacOS doesn't have the `proc` filesystem so this plugin won't work for it.
-
 {% endhint %}
 
 ## Configuration
@@ -58,7 +57,7 @@ In the following configuration file, the input plugin `process_exporter_metrics`
 # Process Exporter Metrics + Prometheus Exporter
 # -------------------------------------------
 # The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
 #
 # After starting the service try it with:
 #
@@ -88,7 +87,7 @@ pipeline:
 # Process Exporter Metrics + Prometheus Exporter
 # -------------------------------------------
 # The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
 #
 # After starting the service try it with:
 #
diff --git a/pipeline/inputs/windows-exporter-metrics.md b/pipeline/inputs/windows-exporter-metrics.md
index bdf368a96..40cbd3d1b 100644
--- a/pipeline/inputs/windows-exporter-metrics.md
+++ b/pipeline/inputs/windows-exporter-metrics.md
@@ -77,7 +77,7 @@ In the following configuration file, the input plugin `windows_exporter_metrics`
 # Node Exporter Metrics + Prometheus Exporter
 # -------------------------------------------
 # The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
 #
 # After starting the service try it with:
 #
@@ -106,7 +106,7 @@ pipeline:
 # Node Exporter Metrics + Prometheus Exporter
 # -------------------------------------------
 # The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
 #
 # After starting the service try it with:
 #
diff --git a/pipeline/outputs/prometheus-exporter.md b/pipeline/outputs/prometheus-exporter.md
index e66a39b6a..40b8361b9 100644
--- a/pipeline/outputs/prometheus-exporter.md
+++ b/pipeline/outputs/prometheus-exporter.md
@@ -31,7 +31,7 @@ The Prometheus exporter works only with metrics captured from metric plugins. In
 # Node Exporter Metrics + Prometheus Exporter
 # -------------------------------------------
 # The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
 #
 # After starting the service try it with:
 #
@@ -65,7 +65,7 @@ pipeline:
 # Node Exporter Metrics + Prometheus Exporter
 # -------------------------------------------
 # The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
 #
 # After starting the service try it with:
 #
diff --git a/pipeline/outputs/stackdriver.md b/pipeline/outputs/stackdriver.md
index 34bd744b0..03ac24dda 100644
--- a/pipeline/outputs/stackdriver.md
+++ b/pipeline/outputs/stackdriver.md
@@ -267,7 +267,10 @@ An upstream connection error means Fluent Bit wasn't able to reach Google servic
 [2019/01/07 23:24:09] [error] [oauth2] could not get an upstream connection
 ```
 
-This is due to a network issue in the environment where Fluent Bit is running. Ensure that the Host, Container, or Pod can reach the Google API endpoints.
+This is due to a network issue in the environment where Fluent Bit is running. Make sure that the Host, Container or Pod can reach the following Google endpoints:
+
+- [https://www.googleapis.com](https://www.googleapis.com)
+- [https://logging.googleapis.com](https://logging.googleapis.com)
 
 {% hint style="warning" %}
diff --git a/vale-styles/FluentBit/Endpoint.yml b/vale-styles/FluentBit/Endpoint.yml
new file mode 100644
index 000000000..69d4da40f
--- /dev/null
+++ b/vale-styles/FluentBit/Endpoint.yml
@@ -0,0 +1,9 @@
+extends: substitution
+message: 'Use "%s" instead of "%s".'
+level: error
+ignorecase: true
+swap:
+  end point: endpoint
+  end-point: endpoint
+  end points: endpoints
+  end-points: endpoints