Commit 5b47090

Fixing remaining 404s

Signed-off-by: Lynette Miles <[email protected]>
1 parent: 610a646

11 files changed: +15 additions, -17 deletions
.github/workflows/links.yaml

Lines changed: 0 additions & 1 deletion

@@ -3,7 +3,6 @@ name: Links
 on:
   repository_dispatch:
   workflow_dispatch:
-  pull_request:
   schedule:
     - cron: "00 18 * * *"

administration/configuring-fluent-bit/unit-sizes.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Unit sizes
 
-Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail.md), [Forward Input](../../pipeline/inputs/forward.md) or generic properties like [`Mem_Buf_Limit`](../backpressure.md) use unit sizes.
+Some configuration directives in [Fluent Bit](http://fluentbit.io) refer to unit sizes such as when defining the size of a buffer or specific limits. Plugins like [Tail Input](../../pipeline/inputs/tail.md), [Forward Input](../../pipeline/inputs/forward.md), or generic properties like [`Mem_Buf_Limit`](../backpressure.md) use unit sizes.
 
 Fluent Bit v0.11.10 standardized unit sizes across the core and plugins. The following table describes the options that can be used and what they mean:
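For context on the page being fixed: a unit size is a number plus an optional suffix, passed as the value of directives like `Mem_Buf_Limit`. A minimal illustrative sketch (the path and limit are hypothetical values, not from the docs):

```text
[INPUT]
    Name          tail
    Path          /var/log/app.log   # hypothetical path, for illustration only
    Mem_Buf_Limit 5MB                # a unit size: number plus suffix
```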

concepts/data-pipeline/parser.md

Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ The parser converts unstructured data to structured data. As an example, conside
 192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
 ```
 
-This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression) is used, the log entry could be converted to:
+This log line is a raw string without format. Structuring the log makes it easier to process the data later. If the [regular expression parser](../../pipeline/parsers/regular-expression.md) is used, the log entry could be converted to:
 
 ```javascript
 {
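As a sketch of what the regular expression parser referenced above does, the example log line can be structured with a named-group regex. This is illustrative Python, not Fluent Bit's shipped pattern; the group names are assumptions for the example:

```python
import re

# Apache-style access-log pattern with named capture groups
# (illustrative; Fluent Bit's actual pattern lives in its parsers file).
LOG_PATTERN = re.compile(
    r'^(?P<host>\S+) (?P<user>\S+) (?P<auth>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<code>\d+) (?P<size>\d+)$'
)

def parse_log_line(line: str) -> dict:
    """Convert a raw access-log string into a structured record."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else {}

record = parse_log_line(
    '192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] '
    '"GET /cgi-bin/try/ HTTP/1.0" 200 3395'
)
print(record["host"], record["code"])  # → 192.168.2.20 200
```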

local-testing/validating-your-data-and-structure.md

Lines changed: 5 additions & 5 deletions

@@ -4,7 +4,7 @@ Fluent Bit supports multiple sources and formats. In addition, it provides filte
 
 Fluent Bit users are encouraged to integrate data validation in their continuous integration (CI) systems.
 
-In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect) filter, which you can use to validate keys and values from your records and take action when an exception is found.
+In a normal production environment, inputs, filters, and outputs are defined in configuration files. Fluent Bit provides the [Expect](../pipeline/filters/expect.md) filter, which you can use to validate keys and values from your records and take action when an exception is found.
 
 A simplified view of the data processing pipeline is as follows:
 
@@ -20,8 +20,8 @@ IS --> Fil --> OD
 
 Consider the following pipeline, which uses a JSON file as its data source and has two filters:
 
-- [Grep](../pipeline/filters/grep) to exclude certain records.
-- [Record Modifier](../pipeline/filters/record-modifier) to alter records' content by adding and removing specific keys.
+- [Grep](../pipeline/filters/grep.md) to exclude certain records.
+- [Record Modifier](../pipeline/filters/record-modifier.md) to alter records' content by adding and removing specific keys.
 
 ```mermaid
 flowchart LR
@@ -37,7 +37,7 @@ record --> stdout
 
 Add data validation between each step to ensure your data structure is correct.
 
-This example uses the [Expect](../pipeline/filters/expect) filter.
+This example uses the [Expect](../pipeline/filters/expect.md) filter.
 
 ```mermaid
 flowchart LR
@@ -164,7 +164,7 @@ The following is the Fluent Bit classic parsers file:
 {% endtab %}
 {% endtabs %}
 
-If the JSON parser fails or is missing in the [Tail](../pipeline/inputs/tail) input (`parser json`), the Expect filter triggers the `exit` action.
+If the JSON parser fails or is missing in the [Tail](../pipeline/inputs/tail.md) input (`parser json`), the Expect filter triggers the `exit` action.
 
 To extend the pipeline, add a Grep filter to match records where `label` contains a key called `name` with the value `abc`, and add an Expect filter to re-validate that condition:
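The Expect-filter checks touched in this file can be sketched in plain Python. The rule names mirror the filter's `key_exists` and `key_val_eq` rules, but this is an illustration of the idea, not Fluent Bit code:

```python
# Illustrative sketch of Expect-style record validation (not Fluent Bit code).
def key_exists(record: dict, key: str) -> bool:
    # Passes when the record carries the given key at all.
    return key in record

def key_val_eq(record: dict, key: str, expected) -> bool:
    # Passes when the key is present and holds the expected value.
    return record.get(key) == expected

# A record shaped like the pipeline's extended Grep condition:
# `label` contains a key `name` with the value `abc`.
record = {"label": {"name": "abc"}, "color": "blue"}

ok = key_exists(record, "label") and key_val_eq(record["label"], "name", "abc")
# A failed check would map to the filter's `exit` action.
print("validation passed" if ok else "exit")
```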

pipeline/filters/grep.md

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ To start filtering records, run the filter from the command line or through the
 
 When using the command line, pay close attention to quoting the regular expressions. Using a configuration file might be easier.
 
-The following command loads the [tail](../../pipeline/inputs/tail) plugin and reads the content of `lines.txt`. Then the `grep` filter applies a regular expression rule over the `log` field created by the `tail` plugin and only passes records with a field value starting with `aa`:
+The following command loads the [tail](../../pipeline/inputs/tail.md) plugin and reads the content of `lines.txt`. Then the `grep` filter applies a regular expression rule over the `log` field created by the `tail` plugin and only passes records with a field value starting with `aa`:
 
 ```shell
 fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdout
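The effect of the `regex=log aa` rule in that command can be sketched in plain Python. The sample records are hypothetical stand-ins for what tail would produce from `lines.txt`; this is not Fluent Bit internals:

```python
import re

# Sketch of the Grep filter's regex rule: keep only records whose
# given field matches the pattern (illustrative, not Fluent Bit code).
def grep_regex(records, key, pattern):
    rule = re.compile(pattern)
    return [r for r in records if rule.search(str(r.get(key, "")))]

# Hypothetical records, as if produced by tail reading lines.txt.
records = [{"log": "aa line"}, {"log": "bb line"}, {"log": "aa again"}]
kept = grep_regex(records, "log", "aa")
print(kept)
```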

pipeline/inputs/process.md

Lines changed: 2 additions & 2 deletions

@@ -2,7 +2,7 @@
 
 The _Process metrics_ input plugin lets you check how healthy a process is. It does so by performing service checks at specified intervals.
 
-This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](./node-exporter-metrics) input plugin.
+This plugin creates metrics that are log-based, such as JSON payloads. For Prometheus-based metrics, see the [Node exporter metrics](./node-exporter-metrics.md) input plugin.
 
 ## Configuration parameters
 
@@ -16,7 +16,7 @@ The plugin supports the following configuration parameters:
 | `Alert` | If enabled, the plugin will only generate messages if the target process is down. | `false` |
 | `Fd` | If enabled, the number of file descriptors (`fd`) used by the process is appended to each record. | `true` |
 | `Mem` | If enabled, memory usage of the process is appended to each record. | `true` |
-| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading#inputs). | `false` |
+| `Threaded` | Specifies whether to run this input in its own [thread](../../administration/multithreading.md#inputs). | `false` |
 
 ## Get started
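As a rough illustration of how the parameters in that table combine (the plugin name `proc`, the `Proc_Name` key, and the target process are assumptions for this sketch; check the linked page for the authoritative names):

```text
[INPUT]
    Name      proc
    Proc_Name fluent-bit   # hypothetical target process
    Alert     false
    Fd        true
    Mem       true
```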

pipeline/outputs/azure_kusto.md

Lines changed: 2 additions & 2 deletions

@@ -26,7 +26,7 @@ Fluent Bit uses the application's credentials to ingest data into your cluster.
 
 - [Register an application](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application)
 - [Add a client secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#add-a-client-secret)
-- [Authorize the app in your database](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/access-control/principals-and-identity-providers#azure-ad-tenants)
+- [Authorize the app in your database](https://learn.microsoft.com/en-us/azure/app-service/overview-authentication-authorization)
 
 ## Create a table
 
@@ -70,7 +70,7 @@ By default, Kusto will insert incoming ingestion data into a table by inferring
 | `buffering_enabled` | Optional. Enable buffering into disk before ingesting into Azure Kusto. | `Off` |
 | `buffer_dir` | Optional. When buffering is `On`, specifies the location of the directory where the buffered data will be stored. | `/tmp/fluent-bit/azure-kusto/` |
 | `upload_timeout` | Optional. When buffering is `On`, specifies a timeout for uploads. Fluent Bit will start ingesting buffer files which were created more than `x` minutes ago and haven't reached the `upload_file_size` limit. | `30m` |
-| `upload_file_size` | Optional. When buffering is `On`, specifies the size of files to be uploaded in MBs. | `200MB` |
+| `upload_file_size` | Optional. When buffering is `On`, specifies the size of files to be uploaded in megabytes. | `200MB` |
 | `azure_kusto_buffer_key` | Optional. When buffering is `On`, set the Azure Kusto buffer key which must be specified when using multiple instances of Azure Kusto output plugin and buffering is enabled. | `key` |
 | `store_dir_limit_size` | Optional. When buffering is `On`, set the max size of the buffer directory. | `8GB` |
 | `buffer_file_delete_early` | Optional. When buffering is `On`, whether to delete the buffered file early after successful blob creation. | `Off` |
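To show how the buffering parameters in that table relate, an illustrative fragment built from the table's own defaults (the `Match` pattern is an assumption; required credential keys are omitted for brevity):

```text
[OUTPUT]
    Name              azure_kusto
    Match             *
    buffering_enabled On
    buffer_dir        /tmp/fluent-bit/azure-kusto/
    upload_timeout    30m
    upload_file_size  200MB
```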

pipeline/outputs/logdna.md

Lines changed: 0 additions & 1 deletion

@@ -18,7 +18,6 @@ This plugin uses the following configuration parameters:
 | `tags` | A list of comma-separated strings to group records in LogDNA and simplify the query with filters. | _none_ |
 | `file` | Optional name of a file being monitored. This value is only set if the record doesn't contain a reference to it. | _none_ |
 | `app` | Name of the application. This value is automatically discovered on each record. If no value is found, the default value is used. | `Fluent Bit` |
-| `workers` | The number of [workers](../../administration/multithreading#outputs) to perform flush operations for this output. | `0` |
 
 ## Data discovery and enrichment

pipeline/outputs/new-relic.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ The _New Relic_ output plugin lets you send logs to New Relic.
 | `api_key` | Your [New Relic API key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/). Either an `api_key` or `license_key` is required. | _none_ |
 | `license_key` | Your [New Relic license key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/). Either an `api_key` or `license_key` is required. | _none_ |
 | `compress` | Sets the compression mechanism for the payload. Possible values: `gzip` or `false`. | `gzip` |
-| `workers` | Sets the number of [workers](../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
+| `workers` | Sets the number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `0` |
 
 ## Example configuration

pipeline/outputs/observe.md

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ The following HTTP configuration parameters are relevant to Observe:
 
 | Key | Description | Default |
 | --- | ----------- | ------- |
-| `host` | IP address or hostname of the Observe data collection endpoint. Replace `$(OBSERVE_CUSTOMER)` with your [Customer ID](https://docs.observeinc.com/en/latest/content/common-topics/HelpfulHints.html?highlight=customer%20id#customer-id). | `OBSERVE_CUSTOMER.collect.observeinc.com` |
+| `host` | IP address or hostname of the Observe data collection endpoint. Replace `$(OBSERVE_CUSTOMER)` with your [Customer ID](https://docs.observeinc.com/en/latest/content/hints/CustomerId.html). | `OBSERVE_CUSTOMER.collect.observeinc.com` |
 | `port` | TCP port to use when sending data to Observe. | `443` |
 | `tls` | Specifies whether to use TLS. | `on` |
 | `uri` | Specifies the HTTP URI for Observe. | `/v1/http/fluentbit` |
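Combining the four parameters in that table into one fragment, for illustration (the plugin name `http` is an assumption based on the page calling these "HTTP configuration parameters"; `$(OBSERVE_CUSTOMER)` stays a placeholder):

```text
[OUTPUT]
    Name  http
    Match *
    host  $(OBSERVE_CUSTOMER).collect.observeinc.com
    port  443
    tls   on
    uri   /v1/http/fluentbit
```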
