|`empty_stream_on_read`| If enabled, when an HTTP client consumes data from a stream, the stream content is removed after the read. | Off |
|`stream_queue_size`| Specify the maximum queue size per stream. Each stream (logs, metrics, and traces) can hold up to `stream_queue_size` bytes. | 20M |
|`http_cors_allow_origin`| Specify the value for the HTTP `Access-Control-Allow-Origin` header (CORS). ||
|`workers`| The number of [workers](../../administration/multithreading.md#outputs) used to perform flush operations for this output. |`1`|
### Getting Started

Here is a simple configuration of Vivo Exporter; note that this example is not based on the defaults.

{% tabs %}
{% tab title="fluent-bit.yaml" %}

```yaml
pipeline:
  inputs:
    - name: dummy
      tag: events
      rate: 2

  outputs:
    - name: vivo_exporter
      match: '*'
      empty_stream_on_read: off
      stream_queue_size: 20M
      http_cors_allow_origin: '*'
```

{% endtab %}
{% tab title="fluent-bit.conf" %}

```text
[INPUT]
    name dummy
    tag events
    rate 2

[OUTPUT]
    name vivo_exporter
    match *
    empty_stream_on_read off
    stream_queue_size 20M
    http_cors_allow_origin *
```

{% endtab %}
{% endtabs %}

### How it works

Vivo Exporter provides buffers that act as streams, one per telemetry data type: `logs`, `metrics`, and `traces`. Each buffer has a fixed capacity (20M bytes by default). When data arrives at a stream, it's appended to the end; if the buffer is full, older entries are removed to make room for the new data.
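The eviction behavior described above can be sketched as a size-bounded FIFO buffer. This is a minimal illustration, not Vivo Exporter's actual implementation; the tiny 10-byte capacity and the chunk IDs are arbitrary values chosen for the example:

```python
from collections import deque

class StreamBuffer:
    """Fixed-capacity stream: new chunks are appended at the end,
    and the oldest chunks are evicted when the size limit is exceeded."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.chunks = deque()  # (chunk_id, payload) pairs, oldest first
        self.used = 0
        self.next_id = 0

    def append(self, payload: bytes):
        self.chunks.append((self.next_id, payload))
        self.used += len(payload)
        self.next_id += 1
        # Drop older entries to make room for the new data.
        while self.used > self.max_bytes and len(self.chunks) > 1:
            _, old = self.chunks.popleft()
            self.used -= len(old)

buf = StreamBuffer(max_bytes=10)
buf.append(b"aaaa")  # buffer holds chunk 0
buf.append(b"bbbb")  # buffer holds chunks 0, 1
buf.append(b"cccc")  # 12 bytes exceeds the limit, so chunk 0 is evicted
print([cid for cid, _ in buf.chunks])  # [1, 2]
```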
The `data` that arrives is a `chunk`. A chunk is a group of events that belongs to the same stream.

You can retrieve the data from the streams with a simple HTTP request. The following endpoints are available:
|`from`| Specify the first chunk ID to retrieve. If that chunk ID doesn't exist, the next one in the queue is provided. |
|`to`| Specify the last chunk ID to retrieve. If not found, the whole stream is provided (starting from `from`, if it was set). |
|`limit`| Limit the output to a specific number of chunks. The default value is `0`, which means send everything. |

The following example requests the range from chunk ID 1 to chunk ID 3, limited to a single chunk:
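As a sketch of how those query parameters combine into a request URL (the port `2025` and the `/logs` stream path are assumptions about a local setup, not values taken from this page):

```python
from urllib.parse import urlencode

# Hypothetical base URL for a locally running Vivo Exporter logs stream.
base = "http://127.0.0.1:2025/logs"

# Retrieve the range from chunk ID 1 to chunk ID 3, but at most 1 chunk.
params = {"from": 1, "to": 3, "limit": 1}
url = f"{base}?{urlencode(params)}"
print(url)  # http://127.0.0.1:2025/logs?from=1&to=3&limit=1
```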