diff --git a/stream-processing/changelog.md b/stream-processing/changelog.md
index 21d9c2054..f51dca6f1 100644
--- a/stream-processing/changelog.md
+++ b/stream-processing/changelog.md
@@ -37,7 +37,7 @@ For conditionals, added the new _@record_ functions:
 | `@record.time()` | Returns the record timestamp. |
 | `@record.contains(key)` | Returns `true` if `key` exists in the record, or `false` if not. |
 
-### IS NULL, IS NOT NULL
+### `IS NULL` and `IS NOT NULL`
 
 Added `IS NULL` and `IS NOT NULL` statements to determine whether an existing key in a record has a null value. For example:
 
@@ -45,10 +45,10 @@ Added `IS NULL` and `IS NOT NULL` statements to determine whether an existing ke
 SELECT * FROM STREAM:test WHERE key3['sub1'] IS NOT NULL;
 ```
 
-For more details, see [Check Keys and NULL values](../stream-processing/getting-started/check-keys-null-values.md).
+For more details, see [Check keys and null values](../stream-processing/getting-started/check-keys-null-values.md).
 
 ## Fluent Bit v1.1
 
-> Release date: May 09, 2019
+> Release date: 2019-05-09
 
 Added the stream processor to Fluent Bit.
diff --git a/stream-processing/getting-started/check-keys-null-values.md b/stream-processing/getting-started/check-keys-null-values.md
index 9aa90218a..183da6370 100644
--- a/stream-processing/getting-started/check-keys-null-values.md
+++ b/stream-processing/getting-started/check-keys-null-values.md
@@ -23,7 +23,8 @@ SELECT * FROM STREAM:test WHERE phone IS NOT NULL;
 ```
 
 ## Check if a key exists
 
 You can also confirm whether a certain key exists in a record at all, regardless of its value. Fluent Bit provides specific record functions that you can use in the condition part of the SQL statement. The following function determines whether `key` exists in a record:
-```text
+
+```sql
 @record.contains(key)
 ```
diff --git a/stream-processing/getting-started/fluent-bit-sql.md b/stream-processing/getting-started/fluent-bit-sql.md
index 63270c5c8..35545b085 100644
--- a/stream-processing/getting-started/fluent-bit-sql.md
+++ b/stream-processing/getting-started/fluent-bit-sql.md
@@ -26,7 +26,7 @@ A `SELECT` statement not associated with stream creation will send the results t
 
 You can filter the results of this query by applying a condition using a `WHERE` statement. For information about the `WINDOW` and `GROUP BY` statements, see [Aggregation functions](#aggregation-functions).
 
-#### Examples
+#### Examples [#select-examples]
 
 Selects all keys from records that originate from a stream called `apache`:
 
@@ -50,7 +50,7 @@ CREATE STREAM stream_name
 
 Creates a new stream of data using the results from a `SELECT` statement. If the `Tag` property in the `WITH` statement is set, this new stream can optionally be re-ingested into the Fluent Bit pipeline.
 
-#### Examples
+#### Examples [#create-stream-examples]
 
 Creates a new stream called `hello_` from a stream called `apache`:
 
@@ -101,6 +101,7 @@ Returns the minimum value of a key in a set of records.
 ```sql
 SELECT MAX(key) FROM STREAM:apache;
 ```
+
 Returns the maximum value of a key in a set of records.
 
 ### `SUM`
@@ -111,7 +112,7 @@ SELECT SUM(key) FROM STREAM:apache;
 ```
 
 Calculates the sum of all values of a key in a set of records.
 
-## Time Functions
+## Time functions
 
 Use time functions to add a new key with time data into a record.
 
@@ -131,7 +132,7 @@ SELECT UNIX_TIMESTAMP() FROM STREAM:apache;
 ```
 
 Adds the current Unix time to a record. Output example: `1552196165`.
 
-## Record Functions
+## Record functions
 
 Use record functions to append new keys to a record using values from the record's context.
diff --git a/stream-processing/introduction.md b/stream-processing/introduction.md
index b2f6c7228..1fbea8754 100644
--- a/stream-processing/introduction.md
+++ b/stream-processing/introduction.md
@@ -1,6 +1,6 @@
 # Introduction to stream processing
 
-![](../.gitbook/assets/stream_processor.png)
+![Fluent Bit stream processing](../.gitbook/assets/stream_processor.png)
 
 Fluent Bit is a fast and flexible log processor that collects, parses, filters, and delivers logs to remote databases, where data analysis can then be performed.
 
diff --git a/stream-processing/overview.md b/stream-processing/overview.md
index e3c71a684..7d0cee09e 100644
--- a/stream-processing/overview.md
+++ b/stream-processing/overview.md
@@ -10,7 +10,7 @@ To understand how stream processing works in Fluent Bit, follow this overview of
 
 Most of the phases in the pipeline are implemented through plugins: input, filter, and output.
 
-![](../.gitbook/assets/flb_pipeline.png)
+![Fluent Bit pipeline flow](../.gitbook/assets/flb_pipeline.png)
 
 Filters can perform specific record modifications like appending or removing a key, enriching with metadata (for example, Kubernetes filter), or discarding records based on specific conditions. After data is stored, no further modifications are made, but records can optionally be redirected to the stream processor.
 
@@ -20,7 +20,7 @@ The stream processor is an independent subsystem that checks for new records hit
 
 Every input instance is considered a stream. These streams collect data and ingest records into the pipeline.
 
-![](../.gitbook/assets/flb_pipeline_sp.png)
+![Fluent Bit pipeline flow plus stream processor](../.gitbook/assets/flb_pipeline_sp.png)
 
 By configuring specific SQL queries, you can perform specific tasks like key selections, filtering, and data aggregation. Keep in mind that there is no database; everything is schema-less and happens in memory. Concepts like tables that are common in relational databases don't exist in Fluent Bit.
 
diff --git a/stream-processing/stream-processing.md b/stream-processing/stream-processing.md
index 99eda4b50..bd06b3687 100644
--- a/stream-processing/stream-processing.md
+++ b/stream-processing/stream-processing.md
@@ -1,7 +1,7 @@
 # Introduction
 
-![](../.gitbook/assets/stream_processor.png)
+![Fluent Bit stream processing](../.gitbook/assets/stream_processor.png)
 
-[Fluent Bit](https://fluentbit.io) is a fast and flexible Log processor that aims to collect, parse, filter and deliver logs to remote databases, so Data Analysis can be performed.
+[Fluent Bit](https://fluentbit.io) is a fast and flexible log processor that aims to collect, parse, filter, and deliver logs to remote databases so data analysis can be performed.
 
-Data Analysis usually happens after the data is stored and indexed in a database, but for real-time and complex analysis needs, process the data while it's still in motion in the Log processor brings a lot of advantages and this approach is called **Stream Processing on the Edge**.
+Data analysis usually happens after the data is stored and indexed in a database. However, for real-time and complex analysis needs, processing the data while it's still in motion in the log processor brings a lot of advantages. This approach is called **Stream Processing on the Edge**.
diff --git a/vale-styles/FluentBit/Headings.yml b/vale-styles/FluentBit/Headings.yml
index 1b26d41df..63dda8156 100644
--- a/vale-styles/FluentBit/Headings.yml
+++ b/vale-styles/FluentBit/Headings.yml
@@ -112,6 +112,7 @@ exceptions:
   - SignalFx
   - SIMD
   - Slack
+  - SQL
   - SSL
   - StatsD
   - Studio
diff --git a/vale-styles/FluentBit/Spelling-exceptions.txt b/vale-styles/FluentBit/Spelling-exceptions.txt
index da58b3f4d..28a3f1255 100644
--- a/vale-styles/FluentBit/Spelling-exceptions.txt
+++ b/vale-styles/FluentBit/Spelling-exceptions.txt
@@ -193,6 +193,8 @@ stdout
 strftime
 subcommand
 subcommands
+subkey
+subkeys
 subquery
 subrecord
 substring
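
Taken together, the features this patch documents compose in a single query. The following sketch is illustrative only and is not part of the patch: the stream name `test`, the key `phone`, and the tag `checked` are assumed, borrowed from the examples above.

```sql
-- Minimal sketch combining the documented features. The stream 'test',
-- the key 'phone', and the tag 'checked' are assumed for illustration.

-- Keep only records in which 'phone' exists and holds a non-null value:
SELECT * FROM STREAM:test WHERE @record.contains(phone) AND phone IS NOT NULL;

-- Feed the same condition into a new stream that can be re-ingested into
-- the pipeline under the tag 'checked', per the CREATE STREAM notes above:
CREATE STREAM checked_phones WITH (tag='checked')
    AS SELECT * FROM STREAM:test WHERE phone IS NOT NULL;
```

Because the stream processor is schema-less and runs in memory, neither statement requires a table definition before it runs.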