diff --git a/about/sandbox-and-lab-resources.md b/about/sandbox-and-lab-resources.md
index c4efbc8ae..22a249741 100644
--- a/about/sandbox-and-lab-resources.md
+++ b/about/sandbox-and-lab-resources.md
@@ -6,6 +6,8 @@ description: >-
# Sandbox and lab resources
+
+
## Open Source labs - environment required
The following are open source labs where you will need to spin up resources to run through each lab in detail.
@@ -32,4 +34,4 @@ Fluent Bit Workshop for Getting Started with Cloud Native Telemetry Pipelines
This workshop by Amazon goes through common Kubernetes logging patterns, routing data to OpenSearch, and visualizing it with OpenSearch Dashboards.
-{% embed url="https://eksworkshop.com/" %}
+{% embed url="https://eksworkshop.com/" %}
\ No newline at end of file
diff --git a/administration/backpressure.md b/administration/backpressure.md
index 5e9308a02..ab5ca3579 100644
--- a/administration/backpressure.md
+++ b/administration/backpressure.md
@@ -1,5 +1,7 @@
# Backpressure
+
+
It's possible for logs or data to be ingested or created faster than they can be flushed to their destinations. A common scenario is reading from large log files, especially ones with a large backlog, and dispatching the logs to a backend over the network, which takes time to respond. This generates _backpressure_, leading to high memory consumption in the service.
To avoid backpressure, Fluent Bit implements a mechanism in the engine that restricts the amount of data an input plugin can ingest. Restriction is done through the configuration parameters `Mem_Buf_Limit` and `storage.Max_Chunks_Up`.
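
As an illustration, here is a minimal sketch of capping an input's memory buffer in YAML configuration (the `tail` path and the 5 MB value are placeholders, not defaults):

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/app/*.log     # placeholder path
      mem_buf_limit: 5MB           # pause this input once ~5 MB of chunks sit in memory
  outputs:
    - name: stdout
      match: '*'
```

When the limit is reached, the input is paused until buffered chunks are flushed; with filesystem buffering, `storage.max_chunks_up` plays the equivalent role.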
@@ -68,4 +70,4 @@ With `storage.type filesystem` and `storage.max_chunks_up`, the following log me
```text
[input] {input name or alias} paused (storage buf overlimit)
[input] {input name or alias} resume (storage buf overlimit)
-```
+```
\ No newline at end of file
diff --git a/administration/configuring-fluent-bit/classic-mode/configuration-file.md b/administration/configuring-fluent-bit/classic-mode/configuration-file.md
index 6d2bfeeb4..aca31967f 100644
--- a/administration/configuring-fluent-bit/classic-mode/configuration-file.md
+++ b/administration/configuring-fluent-bit/classic-mode/configuration-file.md
@@ -1,5 +1,7 @@
# Configuration file
+
+
One of the ways to configure Fluent Bit is using a main configuration file. Fluent Bit allows the use of one configuration file that works at a global scope and uses the defined [Format and Schema](format-schema.md).
The main configuration file supports four sections:
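
For orientation, here is a sketch of a small classic-mode file that touches each of these sections (plugin choices and values are illustrative only):

```text
[SERVICE]
    Flush        1
    Log_Level    info

[INPUT]
    Name         dummy
    Tag          app.log

[FILTER]
    Name         modify
    Match        app.log
    Add          hostname example-host

[OUTPUT]
    Name         stdout
    Match        app.log
```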
diff --git a/administration/configuring-fluent-bit/classic-mode/variables.md b/administration/configuring-fluent-bit/classic-mode/variables.md
index 3d944eeea..1cfaf3e5b 100644
--- a/administration/configuring-fluent-bit/classic-mode/variables.md
+++ b/administration/configuring-fluent-bit/classic-mode/variables.md
@@ -1,5 +1,7 @@
# Variables
+
+
Fluent Bit supports the usage of environment variables in any value associated with a key when using a configuration file.
The variables are case sensitive and can be used in the following format:
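
As a brief sketch of what this looks like in practice (the variable name, plugins, and tag are illustrative):

```text
# Set the variable in the environment before starting Fluent Bit,
# for example: export MY_TAG=memory

[INPUT]
    Name  mem
    Tag   ${MY_TAG}

[OUTPUT]
    Name  stdout
    Match ${MY_TAG}
```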
diff --git a/administration/configuring-fluent-bit/multiline-parsing.md b/administration/configuring-fluent-bit/multiline-parsing.md
index 3a5fac100..8a0c0abd6 100644
--- a/administration/configuring-fluent-bit/multiline-parsing.md
+++ b/administration/configuring-fluent-bit/multiline-parsing.md
@@ -1,5 +1,7 @@
# Multiline parsing
+
+
In an ideal world, applications might log their messages within a single line, but in reality, applications generate multiple log messages that sometimes belong to the same context. Processing this information can be complex, as in application stack traces, which always have multiple log lines.
Fluent Bit v1.8 implemented a unified Multiline core capability to solve corner cases.
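
As a sketch, built-in multiline parsers can be referenced directly from the `tail` input (the path is a placeholder; `docker` and `cri` are two of the built-in parsers):

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/containers/*.log   # placeholder path
      multiline.parser: docker, cri     # try the docker parser first, then cri
  outputs:
    - name: stdout
      match: '*'
```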
diff --git a/administration/configuring-fluent-bit/yaml.md b/administration/configuring-fluent-bit/yaml.md
index cbba131d4..649bf092f 100644
--- a/administration/configuring-fluent-bit/yaml.md
+++ b/administration/configuring-fluent-bit/yaml.md
@@ -1,5 +1,7 @@
# YAML configuration
+
+
## Before you get started
Fluent Bit traditionally offered a `classic` configuration mode, a custom configuration format that's being phased out. While `classic` mode has served well for many years, it has several limitations. Its basic design only supports grouping sections with key-value pairs and lacks the ability to handle sub-sections or complex data structures like lists.
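
For contrast, here is a minimal sketch of the YAML format, which expresses nested sections and lists naturally (plugin choices are illustrative):

```yaml
service:
  log_level: info
pipeline:
  inputs:
    - name: dummy
      tag: app.logs
  outputs:
    - name: stdout
      match: app.logs
```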
@@ -41,4 +43,4 @@ To access detailed configuration guides for each section, use the following link
- [Environment Variables Section documentation](./yaml/environment-variables-section.md)
- Information on setting environment variables and their scope within Fluent Bit.
- [Includes Section documentation](./yaml/includes-section.md)
- - Description on how to include external YAML files.
+ - Description on how to include external YAML files.
\ No newline at end of file
diff --git a/administration/memory-management.md b/administration/memory-management.md
index b54e63783..14857ea1b 100644
--- a/administration/memory-management.md
+++ b/administration/memory-management.md
@@ -1,5 +1,7 @@
# Memory management
+
+
You might need to estimate how much memory Fluent Bit could be using in scenarios like containerized environments where memory limits are essential.
To make an estimate, in-use input plugins must set the `Mem_Buf_Limit` option. Learn more about it in [Backpressure](backpressure.md).
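
As a sketch, setting the option on every active input gives a rough upper bound for input buffering: the sum of the limits approximates worst-case chunk memory, with the service itself adding some overhead on top (paths and sizes are placeholders):

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/syslog         # placeholder
      mem_buf_limit: 10MB
    - name: tail
      path: /var/log/app/*.log      # placeholder
      mem_buf_limit: 10MB           # at most ~20 MB of chunk memory across both inputs
  outputs:
    - name: stdout
      match: '*'
```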
@@ -33,4 +35,4 @@ FLB_HAVE_PROXY_GO FLB_HAVE_JEMALLOC JEMALLOC_MANGLE FLB_HAVE_REGEX
FLB_HAVE_C_TLS FLB_HAVE_SETJMP FLB_HAVE_ACCEPT4 FLB_HAVE_INOTIFY
```
-If the `FLB_HAVE_JEMALLOC` option is listed in `Build Flags`, jemalloc is enabled.
+If the `FLB_HAVE_JEMALLOC` option is listed in `Build Flags`, jemalloc is enabled.
\ No newline at end of file
diff --git a/pipeline/filters/log_to_metrics.md b/pipeline/filters/log_to_metrics.md
index f5e104605..df988c240 100644
--- a/pipeline/filters/log_to_metrics.md
+++ b/pipeline/filters/log_to_metrics.md
@@ -4,6 +4,8 @@ description: Generate metrics from logs
# Logs to metrics
+
+
The _log to metrics_ filter lets you generate log-derived metrics. It supports modes to count records, provide a gauge for field values, or create a histogram. You can also match or exclude specific records based on regular expression patterns for values or nested values.
This filter doesn't actually act as a record filter and therefore doesn't change or drop records. All records will pass through this filter untouched, and any generated metrics will be emitted into a separate metric pipeline.
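
A hedged sketch of the counter mode follows; the record, tag names, and regex are illustrative, and the exact parameter names should be checked against the configuration parameters documented on this page:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"message": "connection refused", "level": "error"}'
      tag: app.logs
  filters:
    - name: log_to_metrics
      match: app.logs
      metric_mode: counter
      metric_name: error_records_total          # illustrative metric name
      metric_description: Count of error records
      regex: level error                        # only count records whose "level" matches "error"
      tag: app.metrics                          # generated metrics flow under this tag
  outputs:
    - name: stdout
      match: app.metrics
```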
@@ -530,4 +532,4 @@ The `+Inf` bucket will always be included regardless of the buckets you specify.
{% endhint %}
-This filter also attaches Kubernetes labels to each metric, identical to the behavior of `label_field`. This results in two sets for the histogram.
+This filter also attaches Kubernetes labels to each metric, identical to the behavior of `label_field`. This results in two sets for the histogram.
\ No newline at end of file
diff --git a/pipeline/filters/lua.md b/pipeline/filters/lua.md
index e16e460bc..7a242cf88 100644
--- a/pipeline/filters/lua.md
+++ b/pipeline/filters/lua.md
@@ -1,5 +1,7 @@
# Lua
+
+
The _Lua_ filter lets you modify incoming records (or split one record into multiple records) using custom [Lua](https://www.lua.org/) scripts.
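
As a compact sketch, here is a filter entry plus an inline callback (the function name, the added field, and the use of the inline `code` property are illustrative; loading an external file via `script` works the same way):

```yaml
pipeline:
  filters:
    - name: lua
      match: '*'
      call: append_host                 # name of the Lua function to invoke per record
      code: |
        function append_host(tag, timestamp, record)
            record["hostname"] = "example-host"   -- illustrative enrichment
            -- 1 = record was modified; keep the original timestamp and return the record
            return 1, timestamp, record
        end
```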
A Lua-based filter requires two steps:
@@ -804,4 +806,4 @@ pipeline:
```text
test: [[1731990257.781970977, {}], {"my_env"=>{"A"=>"aaa", "C"=>"ccc", "HOSTNAME"=>"monox-2.lan", "B"=>"bbb"}, "rand_value"=>4805047635809401856}]
-```
+```
\ No newline at end of file
diff --git a/pipeline/filters/type-converter.md b/pipeline/filters/type-converter.md
index 0c7a47e8e..a88789f63 100644
--- a/pipeline/filters/type-converter.md
+++ b/pipeline/filters/type-converter.md
@@ -1,5 +1,7 @@
# Type converter
+
+
The _Type converter_ filter plugin converts data types and appends new key-value pairs.
You can use this filter in combination with plugins that expect an incoming string value, such as [Grep](grep.md) and [Modify](modify.md).
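
A hedged sketch of a single conversion rule follows; the record and key names are illustrative, and the rule value follows a `source_key new_key new_type` layout:

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"response_code": "200"}'               # illustrative record with a string value
      tag: app.access
  filters:
    - name: type_converter
      match: 'app.*'
      str_key: response_code response_code_int int    # string source -> new integer key
  outputs:
    - name: stdout
      match: 'app.*'
```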
@@ -88,4 +90,4 @@ The output will be
```text
[0] mem.0: [1639915154.160159749, {"Mem.total"=>8146052, "Mem.used"=>4513564, "Mem.free"=>3632488, "Swap.total"=>1918356, "Swap.used"=>0, "Swap.free"=>1918356, "Mem.total_str"=>"8146052", "Mem.used_str"=>"4513564", "Mem.free_str"=>"3632488"}]
-```
+```
\ No newline at end of file
diff --git a/pipeline/processors/content-modifier.md b/pipeline/processors/content-modifier.md
index f6ee26d33..c4c94b73e 100644
--- a/pipeline/processors/content-modifier.md
+++ b/pipeline/processors/content-modifier.md
@@ -1,5 +1,7 @@
# Content modifier
+
+
The _content modifier_ processor lets you manipulate the content, metadata, and attributes of logs and traces.
Similar to how filters work, this processor uses a unified mechanism to perform operations for data manipulation. The most significant difference is that processors perform better than filters, and when chaining them, there are no encoding/decoding performance penalties.
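
A hedged sketch of attaching the processor to an input and inserting a key (the record, key, and value are illustrative):

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"service": "checkout"}'    # illustrative record
      processors:
        logs:
          - name: content_modifier
            action: insert                # add the key only if it doesn't exist yet
            key: environment
            value: production             # illustrative value
  outputs:
    - name: stdout
      match: '*'
```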
@@ -281,4 +283,4 @@ pipeline:
```
{% endtab %}
-{% endtabs %}
+{% endtabs %}
\ No newline at end of file
diff --git a/pipeline/processors/labels.md b/pipeline/processors/labels.md
index b48b85af5..de8b23aee 100644
--- a/pipeline/processors/labels.md
+++ b/pipeline/processors/labels.md
@@ -1,5 +1,7 @@
# Labels
+
+
The _labels_ processor lets you manipulate the labels of metrics.
Similar to filters, this processor provides a mechanism for enriching and modifying metric labels. The most significant difference is that processors perform better than filters, and when chaining them there are no encoding or decoding performance penalties.
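
A hedged sketch that inserts a static label on internal metrics (the label name and value are illustrative, and the `name value` rule layout is assumed from the examples later on this page):

```yaml
pipeline:
  inputs:
    - name: fluentbit_metrics             # internal Fluent Bit metrics as an example source
      processors:
        metrics:
          - name: labels
            insert: hostname example-host # add label "hostname" with a fixed value
  outputs:
    - name: stdout
      match: '*'
```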
@@ -143,4 +145,4 @@ pipeline:
```
{% endtab %}
-{% endtabs %}
+{% endtabs %}
\ No newline at end of file
diff --git a/pipeline/processors/metrics-selector.md b/pipeline/processors/metrics-selector.md
index e42384f06..d68e64f74 100644
--- a/pipeline/processors/metrics-selector.md
+++ b/pipeline/processors/metrics-selector.md
@@ -1,5 +1,7 @@
# Metrics selector
+
+
The _metrics selector_ processor lets you choose which metrics to include or exclude, similar to the [grep](../filters/grep.md) filter for logs.
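
A hedged sketch that keeps only metrics whose name matches a pattern (the source input and the `/storage/` pattern are illustrative):

```yaml
pipeline:
  inputs:
    - name: fluentbit_metrics       # internal metrics as an example source
      processors:
        metrics:
          - name: metrics_selector
            metric_name: /storage/  # keep metrics whose name matches this pattern
            action: include
  outputs:
    - name: stdout
      match: '*'
```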
## Configuration parameters
diff --git a/pipeline/processors/sql.md b/pipeline/processors/sql.md
index 1b26776a5..a6163b3a6 100644
--- a/pipeline/processors/sql.md
+++ b/pipeline/processors/sql.md
@@ -1,5 +1,7 @@
# SQL
+
+
The _SQL_ processor lets you use conditional expressions to select content from logs. This processor doesn't depend on a database or table. Instead, your queries run on the stream.
This processor differs from the stream processor interface that runs after filters.
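
As a hedged sketch only (the record and query are illustrative, and the simple `SELECT ... FROM STREAM` form is assumed here; see the examples below for the exact syntax):

```yaml
pipeline:
  inputs:
    - name: dummy
      dummy: '{"http_domain": "fluentbit.io", "http_status": 200}'   # illustrative record
      processors:
        logs:
          - name: sql
            query: "SELECT http_domain FROM STREAM;"   # keep only the selected key
  outputs:
    - name: stdout
      match: '*'
```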
@@ -82,4 +84,4 @@ The resulting output resembles the following:
"date": 1711059261.630668,
"http_domain": "fluentbit.io"
}
-```
+```
\ No newline at end of file