Vale: endpoint, not end points #1994

Open · wants to merge 5 commits into `master`
2 changes: 1 addition & 1 deletion pipeline/filters/kubernetes.md
@@ -26,7 +26,7 @@ The plugin supports the following configuration parameters:
| Key | Description | Default |
| :--- | :--- | :--- |
| `Buffer_Size` | Set the buffer size for HTTP client when reading responses from Kubernetes API server. The value must conform to the [unit size](../../administration/configuring-fluent-bit/unit-sizes.md) specification. A value of `0` results in no limit, and the buffer will expand as-needed. If pod specifications exceed the buffer limit, the API response is discarded when retrieving metadata, and some Kubernetes metadata will fail to be injected to the logs. | `32k` |
-| `Kube_URL` | API Server end-point | `https://kubernetes.default.svc:443` |
+| `Kube_URL` | API Server endpoint | `https://kubernetes.default.svc:443` |
| `Kube_CA_File` | CA certificate file | `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` |
| `Kube_CA_Path` | Absolute path to scan for certificate files | _none_ |
| `Kube_Token_File` | Token file | `/var/run/secrets/kubernetes.io/serviceaccount/token` |
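The `Buffer_Size` row above uses Fluent Bit's unit-size notation (`32k`, `0` for unlimited). As a rough, hypothetical sketch of how such a value maps to bytes — this is not Fluent Bit's actual parser, and the binary multipliers here are an assumption; the unit-size page linked in the table is authoritative:

```python
def parse_unit_size(value: str) -> int:
    """Convert a size string such as '32k' or '1M' to a byte count.

    Assumes binary multipliers (1024-based); Fluent Bit's own
    unit-size specification may differ.
    """
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    value = value.strip().lower().rstrip("b")  # accept '32kb' as well as '32k'
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # plain byte count; 0 means "no limit"
```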
6 changes: 3 additions & 3 deletions pipeline/inputs/fluentbit-metrics.md
@@ -33,7 +33,7 @@ In the following configuration file, the input plugin `node_exporter_metrics` co
service:
flush: 1
log_level: info

pipeline:
inputs:
- name: fluentbit_metrics
@@ -54,7 +54,7 @@ pipeline:
# Fluent Bit Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collects Fluent Bit metrics and exposes
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
@@ -83,4 +83,4 @@ You can test the expose of the metrics by using `curl`:

```shell
curl http://127.0.0.1:2021/metrics
```
2 changes: 1 addition & 1 deletion pipeline/inputs/node-exporter-metrics.md
@@ -100,7 +100,7 @@ In the following configuration file, the input plugin `node_exporter_metrics` co
# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
5 changes: 2 additions & 3 deletions pipeline/inputs/process-exporter-metrics.md
@@ -14,7 +14,6 @@ The Process Exporter Metrics plugin implements collecting of the various metrics
{% hint style="info" %}

All metrics including those collected with this plugin flow through a separate pipeline from logs and current filters don't operate on top of metrics. This plugin is only supported on Linux based operating systems as it uses the `proc` filesystem to access the relevant metrics. MacOS doesn't have the `proc` filesystem so this plugin won't work for it.

{% endhint %}

## Configuration
@@ -58,7 +57,7 @@ In the following configuration file, the input plugin `process_exporter_metrics`
# Process Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
@@ -88,7 +87,7 @@ pipeline:
# Process Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
4 changes: 2 additions & 2 deletions pipeline/inputs/windows-exporter-metrics.md
@@ -77,7 +77,7 @@ In the following configuration file, the input plugin `windows_exporter_metrics`
# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
@@ -106,7 +106,7 @@ pipeline:
# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
4 changes: 2 additions & 2 deletions pipeline/outputs/prometheus-exporter.md
@@ -31,7 +31,7 @@ The Prometheus exporter works only with metrics captured from metric plugins. In
# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
@@ -65,7 +65,7 @@ pipeline:
# Node Exporter Metrics + Prometheus Exporter
# -------------------------------------------
# The following example collect host metrics on Linux and expose
-# them through a Prometheus HTTP end-point.
+# them through a Prometheus HTTP endpoint.
#
# After starting the service try it with:
#
5 changes: 4 additions & 1 deletion pipeline/outputs/stackdriver.md
@@ -267,7 +267,10 @@ An upstream connection error means Fluent Bit wasn't able to reach Google servic
[2019/01/07 23:24:09] [error] [oauth2] could not get an upstream connection
```

-This is due to a network issue in the environment where Fluent Bit is running. Ensure that the Host, Container, or Pod can reach the Google API endpoints.
+This is due to a network issue in the environment where Fluent Bit is running. Make sure that the Host, Container or Pod can reach the following Google endpoints:
+
+- [https://www.googleapis.com](https://www.googleapis.com)
+- [https://logging.googleapis.com](https://logging.googleapis.com)

{% hint style="warning" %}

9 changes: 9 additions & 0 deletions vale-styles/FluentBit/Endpoint.yml
@@ -0,0 +1,9 @@
extends: substitution
message: 'Use "%s" instead of "%s".'
level: error
ignorecase: true
swap:
end point: endpoint
end-point: endpoint
end points: endpoints
end-points: endpoints
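The swap table in this new rule reads as a case-insensitive find-and-replace. A minimal Python sketch of the behavior the rule enforces — this is an illustration, not Vale's actual implementation:

```python
import re

# The same swap table as the Vale rule above.
SWAPS = {
    "end point": "endpoint",
    "end-point": "endpoint",
    "end points": "endpoints",
    "end-points": "endpoints",
}

def apply_swaps(text: str) -> str:
    """Apply each swap as a case-insensitive substitution.

    Longer keys are applied first so "end points" is handled
    before the shorter "end point" can match inside it.
    """
    for bad in sorted(SWAPS, key=len, reverse=True):
        text = re.sub(re.escape(bad), SWAPS[bad], text, flags=re.IGNORECASE)
    return text
```

Note that Vale itself only reports violations at `level: error`; it doesn't rewrite files, so the suggested replacements still have to be applied by hand, as this PR does.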