# Disk buffering

This module provides an abstraction
named [SignalStorage](src/main/java/io/opentelemetry/contrib/disk/buffering/storage/SignalStorage.java),
as well as default implementations for each signal type that allow writing signals to disk and
reading them later.

For more detailed information on how the whole process works, take a look at
the [DESIGN.md](DESIGN.md) file.

## Default implementation usage

The default implementations are the following:

* [FileSpanStorage](src/main/java/io/opentelemetry/contrib/disk/buffering/storage/impl/FileSpanStorage.java)
* [FileLogRecordStorage](src/main/java/io/opentelemetry/contrib/disk/buffering/storage/impl/FileLogRecordStorage.java)
* [FileMetricStorage](src/main/java/io/opentelemetry/contrib/disk/buffering/storage/impl/FileMetricStorage.java)

### Set up

To start writing signals to disk, we need to create one signal storage object per signal type.
Each `File*Storage` implementation has a `create()` function that receives:

* A `File` directory to store the signal files in. Note that each signal storage object must have
  a dedicated directory to work properly.
* (Optional) A configuration object.

The available configuration parameters are the following:

* Max file size, defaults to 1MB.
* Max folder size, defaults to 10MB.
* Max age for file writing. It sets the time window during which a file can get signals appended
  to it. Defaults to 30 seconds.
* Min age for file reading. It sets the time to wait before starting to read from a file after
  its creation. Defaults to 33 seconds. It must be greater than the max age for file writing.
* Max age for file reading. After that time passes, the file will be considered stale and will be
  removed when new files are created. No more data will be read from a file past this time.
  Defaults to 18 hours.

```java
// Root dir
File rootDir = new File("/some/root");

// Setting up span storage
SignalStorage.Span spanStorage = FileSpanStorage.create(new File(rootDir, "spans"));

// Setting up metric storage
SignalStorage.Metric metricStorage = FileMetricStorage.create(new File(rootDir, "metrics"));

// Setting up log storage
SignalStorage.LogRecord logStorage = FileLogRecordStorage.create(new File(rootDir, "logs"));
```

### Storing data

While you could manually call your `SignalStorage.write(items)` function, disk buffering
provides convenience exporters that you can plug into your OpenTelemetry instance, so
that all signals are automatically stored as they are created.

* For a span storage, use
  a [SpanToDiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/exporters/SpanToDiskExporter.java).
* For a log storage, use
  a [LogRecordToDiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/exporters/LogRecordToDiskExporter.java).
* For a metric storage, use
  a [MetricToDiskExporter](src/main/java/io/opentelemetry/contrib/disk/buffering/exporters/MetricToDiskExporter.java).

Each will wrap a signal storage for its respective signal type, as well as an optional callback
to notify when it succeeds, fails, and gets shut down.

```java
// Setting up span to disk exporter
SpanToDiskExporter spanToDiskExporter =
    SpanToDiskExporter.builder(spanStorage).setExporterCallback(spanCallback).build();
// Setting up metric to disk exporter
MetricToDiskExporter metricToDiskExporter =
    MetricToDiskExporter.builder(metricStorage).setExporterCallback(metricCallback).build();
// Setting up log to disk exporter
LogRecordToDiskExporter logToDiskExporter =
    LogRecordToDiskExporter.builder(logStorage).setExporterCallback(logCallback).build();

// Using the exporters in your OpenTelemetry instance.
OpenTelemetry openTelemetry =
    OpenTelemetrySdk.builder()
        // Using the span to disk exporter
        .setTracerProvider(
            SdkTracerProvider.builder()
                .addSpanProcessor(BatchSpanProcessor.builder(spanToDiskExporter).build())
                .build())
        // Using the log to disk exporter
        .setLoggerProvider(
            SdkLoggerProvider.builder()
                .addLogRecordProcessor(
                    BatchLogRecordProcessor.builder(logToDiskExporter).build())
                .build())
        // Using the metric to disk exporter
        .setMeterProvider(
            SdkMeterProvider.builder()
                .registerMetricReader(PeriodicMetricReader.create(metricToDiskExporter))
                .build())
        .build();
```

Now, when creating signals using your `OpenTelemetry` instance, those will get stored on disk.

### Reading data

In order to read data, we can iterate through our signal storage objects and forward the stored
signals to a network exporter, as shown in the example for spans below.

```java
// Example of reading and exporting spans from disk
OtlpHttpSpanExporter networkExporter = OtlpHttpSpanExporter.getDefault();
Iterator<Collection<SpanData>> spanCollections = spanStorage.iterator();
while (spanCollections.hasNext()) {
  networkExporter.export(spanCollections.next());
}
```

The `File*Storage` iterators delete the previously returned collection when `next()` is called,
on the assumption that the next collection is requested because the previous one was successfully
consumed.
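
Because of that delete-on-advance behavior, a consumer that wants at-least-once delivery should
only call `next()` again after the current batch has been exported successfully. The pattern can
be sketched with plain collections; the `export` function below is a hypothetical stand-in for a
real network exporter, not part of this module:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;

public class ReadOnSuccess {
  // Hypothetical stand-in for a network exporter; returns whether the batch
  // was delivered successfully.
  static boolean export(Collection<String> batch) {
    return !batch.isEmpty();
  }

  public static void main(String[] args) {
    // Stand-in for spanStorage.iterator(): two stored batches of spans.
    Iterator<Collection<String>> batches =
        Arrays.<Collection<String>>asList(List.of("spanA"), List.of("spanB")).iterator();

    while (batches.hasNext()) {
      // next() deletes the previously returned batch, so it is only called
      // again once the current batch has been exported successfully.
      Collection<String> batch = batches.next();
      if (!export(batch)) {
        break; // keep the failed batch on disk for a later retry
      }
    }
  }
}
```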

Both the writing and reading processes can run in parallel without overlapping,
because each is supposed to happen in different files. We ensure that the reader and writer don't
accidentally meet in the same file by using the configurable parameters. These parameters set
non-overlapping time frames for each action to be done on a single file at a time. On top of that,
there's a mechanism in place to avoid overlapping in edge cases where the time frames have ended
but the resources haven't been released. For that mechanism to work properly, this tool assumes
that both the reading and the writing actions are executed within the same application process.
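
With the default values listed under the configuration parameters, the two time frames for a
single file indeed never overlap: a file created at t = 0 accepts writes until t = 30s, while
reading only starts at t = 33s. The sketch below illustrates that invariant; it is a standalone
model of the window arithmetic (exact boundary inclusivity is an assumption here), not library
code:

```java
// Illustrative model of the per-file time windows, using the default values
// from the configuration section (all times are file ages in seconds).
public class FileWindows {
  static final long MAX_FILE_AGE_FOR_WRITE = 30;          // writes allowed until this age
  static final long MIN_FILE_AGE_FOR_READ = 33;           // reads allowed from this age
  static final long MAX_FILE_AGE_FOR_READ = 18 * 60 * 60; // file is stale after 18 hours

  // True if, at the given file age, the writer may still append to the file.
  static boolean writable(long age) {
    return age <= MAX_FILE_AGE_FOR_WRITE;
  }

  // True if, at the given file age, the reader may read from the file.
  static boolean readable(long age) {
    return age >= MIN_FILE_AGE_FOR_READ && age <= MAX_FILE_AGE_FOR_READ;
  }

  public static void main(String[] args) {
    // No age is both writable and readable, so reader and writer never
    // meet in the same file.
    for (long age = 0; age <= MAX_FILE_AGE_FOR_READ; age++) {
      if (writable(age) && readable(age)) {
        throw new AssertionError("windows overlap at age " + age);
      }
    }
  }
}
```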

## Component owners
