Vivo Exporter is an output plugin that exposes logs, metrics, and traces through an HTTP endpoint. This plugin is intended to be used in conjunction with the [Vivo project](https://github.com/calyptia/vivo).
This plugin supports the following configuration parameters:

| Key | Description | Default |
| --- | ----------- | ------- |
|`empty_stream_on_read`| If enabled, when an HTTP client consumes the data from a stream, the stream content will be removed. |`Off`|
|`stream_queue_size`| Specify the maximum queue size per stream. Each specific stream for logs, metrics, and traces can hold up to `stream_queue_size` bytes. |`20M`|
|`http_cors_allow_origin`| Specify the value for the HTTP `Access-Control-Allow-Origin` header (CORS). |_none_|
|`workers`| The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. |`1`|
### Get started
The following is an example configuration of Vivo Exporter. This example isn't based on defaults.
{% tabs %}
{% tab title="fluent-bit.yaml" %}

```yaml
pipeline:
  inputs:
    - name: dummy
      tag: events
      rate: 2

  outputs:
    - name: vivo_exporter
      match: '*'
```

{% endtab %}

{% tab title="fluent-bit.conf" %}

```text
[INPUT]
    name  dummy
    tag   events
    rate  2

[OUTPUT]
    name  vivo_exporter
    match *
```

{% endtab %}
{% endtabs %}

### How it works
Vivo Exporter provides buffers that serve as streams for each telemetry data type, in this case, `logs`, `metrics`, and `traces`. Each buffer contains a fixed capacity in terms of size (`20M` by default). When the data arrives at a stream, it's appended to the end. If the buffer is full, it removes the older entries to make room for new data.
The `data` that arrives is stored as a `chunk`. A chunk is a group of events that belong to the same type (logs, metrics, or traces) and share the same `tag`. Every chunk placed in a stream is assigned an auto-incremented `id`.
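As an illustration, the stream buffering and eviction behavior described in the two preceding paragraphs can be modeled as follows. This is a simplified Python sketch, not the plugin's actual C implementation; the `StreamBuffer` class and its method names are invented for the example.

```python
# Illustrative model (not the plugin's real implementation): chunks get
# auto-incremented IDs, and when the buffer exceeds its byte capacity
# the oldest chunks are removed first to make room for new data.
from collections import deque


class StreamBuffer:
    def __init__(self, max_bytes=20 * 1024 * 1024):  # 20M default
        self.max_bytes = max_bytes
        self.chunks = deque()      # (chunk_id, payload) pairs, oldest first
        self.total_bytes = 0
        self.next_id = 1

    def append(self, payload: bytes) -> int:
        chunk_id = self.next_id
        self.next_id += 1
        self.chunks.append((chunk_id, payload))
        self.total_bytes += len(payload)
        # Evict the oldest chunks until the buffer fits its capacity again
        while self.total_bytes > self.max_bytes and self.chunks:
            _, old_payload = self.chunks.popleft()
            self.total_bytes -= len(old_payload)
        return chunk_id


buf = StreamBuffer(max_bytes=10)
buf.append(b"aaaa")  # id 1
buf.append(b"bbbb")  # id 2
buf.append(b"cccc")  # id 3: buffer would hold 12 bytes, so chunk 1 is evicted
print([cid for cid, _ in buf.chunks])  # [2, 3]
```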
#### Requesting data from the streams
By using an HTTP request, you can retrieve data from the streams. The following endpoints are available:

| Endpoint | Description |
| -------- | ----------- |
|`/logs`| Exposes log events in JSON format. Each event contains a timestamp, metadata, and the event content. |
|`/metrics`| Exposes metrics events in JSON format. Each metric contains a name, metadata, metric type, and labels (dimensions). |
|`/traces`| Exposes trace events in JSON format. Each trace contains a name, resource spans, spans, attributes, events information, and so on. |

The following example generates dummy log events that are then consumed using the `curl` HTTP command-line client:

1. Configure and start Fluent Bit:
{% tabs %}
{% tab title="fluent-bit.yaml" %}
```yaml
pipeline:
  inputs:
    - name: dummy
      tag: events
      rate: 2

  outputs:
    - name: vivo_exporter
      match: '*'
```

{% endtab %}

{% tab title="fluent-bit.conf" %}

```text
[INPUT]
    name  dummy
    tag   events
    rate  2

[OUTPUT]
    name  vivo_exporter
    match *
```

{% endtab %}
{% endtabs %}

1. Retrieve the data:

```shell
curl -i http://127.0.0.1:2025/logs
```
The `-i` curl option prints the HTTP response headers.

The `curl` output includes the HTTP response headers, among them `Vivo-Stream-Start-ID` and `Vivo-Stream-End-ID` (for example, `Vivo-Stream-End-ID: 3`), followed by the buffered events in JSON format.

### Streams and IDs

As mentioned previously, each stream buffers chunks, where a `chunk` contains `N` events. Each chunk has its own ID, which is unique inside the stream.
142
142
143
-
When we receive the HTTP response, Vivo Exporter also reports the range of chunk IDs that were served in the response via the HTTP headers `Vivo-Stream-Start-ID` and `Vivo-Stream-End-ID`.
143
+
In the HTTP response, Vivo Exporter also reports the range of chunk IDs that were served, using the HTTP headers `Vivo-Stream-Start-ID` and `Vivo-Stream-End-ID`.
144
144
145
145
The values of these headers can be used by the client application to specify a range between IDs or set limits for the number of chunks to retrieve from the stream.
### Retrieve ranges and use limits
A client might want to always retrieve the latest chunks available and skip previously processed ones. In a first request without any given range, Vivo Exporter provides all the content that exists in the buffer for the specific stream. On that response, the client might want to keep the last ID (`Vivo-Stream-End-ID`) that was received.

To query ranges or start from specific chunk IDs, remember that IDs are incremental. You can use a mix of the following options:
| Query string option | Description |
|---------------------|-------------|
|`from`| Specify the first chunk ID to be retrieved. If the `chunk` ID doesn't exist, the next one in the queue will be provided. |
|`to`| The last chunk ID to be retrieved. If not found, the whole stream will be provided (starting from `from`, if it was set). |
|`limit`| Limit the output to a specific number of chunks. The default value is `0`, which sends everything. |
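
As a sketch of how a client could combine these options with the ID headers from the previous section, the helper below computes the query parameters for the next poll. It's hypothetical (not part of Fluent Bit or Vivo Exporter) and only illustrates the "remember the last `Vivo-Stream-End-ID`" pattern.

```python
# Hypothetical helper (not part of Vivo Exporter): given the headers of
# the previous HTTP response, build the query parameters that request
# only chunks newer than the ones already received.
def next_poll_params(headers: dict, limit: int = 0) -> dict:
    last_served = int(headers["Vivo-Stream-End-ID"])
    params = {"from": last_served + 1}
    if limit:
        params["limit"] = limit
    return params


# After a response that served chunks 1..3, request chunk 4 onwards:
print(next_poll_params({"Vivo-Stream-End-ID": "3"}))  # {'from': 4}
```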

The following example specifies the range from chunk ID `1` to chunk ID `3`, limited to one chunk:
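
Assuming the same address and port as the earlier `curl` example, such a request could look like the following (a sketch; it requires a running Fluent Bit instance with the configuration shown above):

```shell
# Request chunks 1 through 3 from the logs stream, returning at most one chunk
curl -i "http://127.0.0.1:2025/logs?from=1&to=3&limit=1"
```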