# GreptimeDB Ingester Examples

## Unary Write

The unary write API provides a straightforward way to write data to GreptimeDB in a single request. It returns a `CompletableFuture<Result<WriteOk, Err>>` that completes when the write operation finishes. This asynchronous design enables high-performance data ingestion while providing clear success/failure information through the Result pattern.

This API is suitable for most scenarios and serves as an excellent default choice when you're unsure which API to use.
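For orientation, here is a minimal sketch of a unary write with the low-level API. This is a sketch under stated assumptions rather than a verbatim excerpt: the endpoint, database, table, and column names are placeholders, and the package layout follows the quick-start examples linked below.

```java
import io.greptime.GreptimeDB;
import io.greptime.models.*; // DataType, Err, Result, Table, TableSchema, WriteOk
import io.greptime.options.GreptimeOptions;

import java.util.concurrent.CompletableFuture;

public class UnaryWriteSketch {
    public static void main(String[] args) throws Exception {
        // Connect; the endpoint and database are placeholders for your deployment.
        String[] endpoints = {"127.0.0.1:4001"};
        GreptimeOptions opts = GreptimeOptions.newBuilder(endpoints, "public").build();
        GreptimeDB client = GreptimeDB.create(opts);

        // Describe the table: one tag, one timestamp, one field.
        TableSchema schema = TableSchema.newBuilder("cpu_metric")
                .addTag("host", DataType.String)
                .addTimestamp("ts", DataType.TimestampMillisecond)
                .addField("cpu_user", DataType.Float64)
                .build();

        // Build a row and send it in a single unary request.
        Table table = Table.from(schema);
        table.addRow("host1", System.currentTimeMillis(), 0.42);

        CompletableFuture<Result<WriteOk, Err>> future = client.write(table);

        // The Result pattern makes success and failure explicit.
        Result<WriteOk, Err> result = future.get();
        if (result.isOk()) {
            System.out.println("write ok: " + result.getOk());
        } else {
            System.err.println("write failed: " + result.getErr());
        }

        client.shutdownGracefully();
    }
}
```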
### Performance Recommendations

For optimal performance, we recommend batching your writes whenever possible:

- **Batch multiple rows**: Sending 500 rows in a single call rather than making 500 individual calls will significantly improve throughput and reduce network overhead.
- **Combine multiple tables**: This API allows you to write data to multiple tables in a single call, making it convenient to batch related data before sending it to the database.

These batching approaches can dramatically improve performance compared to making separate calls for each row or table, especially in high-throughput scenarios.
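Both recommendations combine naturally: accumulate rows into several `Table` objects and hand them to one `write` call. A sketch, continuing from the setup above (`memSchema` is a hypothetical second table schema):

```java
// Batch 500 rows per table before writing; `schema` and `client` as above.
Table cpu = Table.from(schema);
Table mem = Table.from(memSchema); // hypothetical second table schema

for (int i = 0; i < 500; i++) {
    long ts = System.currentTimeMillis();
    cpu.addRow("host" + (i % 10), ts, Math.random());
    mem.addRow("host" + (i % 10), ts, Math.random() * 1024);
}

// One network round trip carries 1,000 rows across two tables.
CompletableFuture<Result<WriteOk, Err>> future = client.write(cpu, mem);
```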
### Examples

- [LowLevelApiWriteQuickStart.java](src/main/java/io/greptime/LowLevelApiWriteQuickStart.java)

  This example demonstrates how to use the low-level API to write data to GreptimeDB. It covers:
  * Defining table schemas with tags, timestamps, and fields
  * Writing multiple rows of data to different tables
  * Processing write results using the Result pattern
  * Deleting data using the `WriteOp.Delete` operation
- [HighLevelApiWriteQuickStart.java](src/main/java/io/greptime/HighLevelApiWriteQuickStart.java)

  This example demonstrates how to use the high-level API to write data to GreptimeDB. It covers:
  * Writing data using POJO objects with annotations
  * Handling multiple tables in a single write operation
  * Processing write results asynchronously
  * Deleting data using the `WriteOp.Delete` operation
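For a feel of the high-level API, here is a sketch of an annotated POJO in the style of the quick-start; the table and column names are placeholders.

```java
import io.greptime.models.Column;
import io.greptime.models.DataType;
import io.greptime.models.Metric;

// Each instance maps to one row of the (placeholder) "cpu_metric" table.
@Metric(name = "cpu_metric")
public class Cpu {
    @Column(name = "host", tag = true, dataType = DataType.String)
    private String host;

    @Column(name = "ts", timestamp = true, dataType = DataType.TimestampMillisecond)
    private long ts;

    @Column(name = "cpu_user", dataType = DataType.Float64)
    private double cpuUser;

    public Cpu(String host, long ts, double cpuUser) {
        this.host = host;
        this.ts = ts;
        this.cpuUser = cpuUser;
    }
}
```

A list of such objects is then written with `client.writeObjects(rows)`, and deleted with the `WriteOp.Delete` overload, returning the same `CompletableFuture<Result<WriteOk, Err>>` as the low-level call.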
## Streaming Write

The streaming write API establishes a continuous connection for sending data to GreptimeDB, offering a convenient way to write data over time. This approach allows you to write data from different tables in a single stream, prioritizing ease of use over maximum performance.

This API is particularly well-suited for:
- Low-volume continuous data writing scenarios
- Applications that need to write to multiple tables through a single connection
- Cases where simplicity and convenience are more important than maximum throughput
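Continuing the earlier sketch, the streaming flow looks roughly like this; the `streamWriter` entry point and `completed()` finalizer follow the quick-start examples linked below, with `client`, `cpu`, and `mem` assumed from the sketches above.

```java
// Open one stream and feed it tables over time; different tables may share it.
StreamWriter<Table, WriteOk> writer = client.streamWriter();

writer.write(cpu);
writer.write(mem);

// Finalize the stream; the future completes once the server has
// acknowledged everything written so far.
CompletableFuture<WriteOk> future = writer.completed();
WriteOk ok = future.get();
```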
### Examples

- [LowLevelApiStreamWriteQuickStart.java](src/main/java/io/greptime/LowLevelApiStreamWriteQuickStart.java)

  This example demonstrates how to use the low-level API to write data to GreptimeDB via streaming. It covers:
  * Defining table schemas with tags, timestamps, and fields
  * Writing multiple rows of data to different tables via streaming
  * Finalizing the stream and retrieving write results
  * Deleting data using the `WriteOp.Delete` operation
- [HighLevelApiStreamWriteQuickStart.java](src/main/java/io/greptime/HighLevelApiStreamWriteQuickStart.java)

  This example demonstrates how to use the high-level API to write data to GreptimeDB via streaming. It covers:
  * Writing POJO objects directly to the stream
  * Managing multiple data types in a single stream
  * Finalizing the stream and processing results
  * Deleting data using the `WriteOp.Delete` operation
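A sketch of the POJO variant, continuing from the `Cpu` class above; treat the `objectsStreamWriter` entry point as an assumption if your client version differs, and see the quick-start for the authoritative flow.

```java
// Stream annotated POJOs; each write call takes a batch of objects.
StreamWriter<List<?>, WriteOk> writer = client.objectsStreamWriter();

writer.write(List.of(new Cpu("host1", System.currentTimeMillis(), 0.1)));
writer.write(List.of(new Cpu("host2", System.currentTimeMillis(), 0.2)));

// Deletions can share the stream via the WriteOp overload:
// writer.write(rowsToDelete, WriteOp.Delete);

WriteOk ok = writer.completed().get();
```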
## Bulk Streaming Write

The bulk streaming write API is optimized specifically for high-performance, high-throughput scenarios. Unlike regular streaming, this API allows continuous writing to only one table per stream, but can handle very large data volumes (up to 200MB per write). It features sophisticated adaptive flow control mechanisms that automatically adjust to your data throughput requirements.

This API is ideal for scenarios such as:
- Massive log data ingestion requiring high throughput
- Time-series data collection systems that need to process large volumes of data
- Applications where performance and throughput are critical requirements
### Examples

- [BulkWriteApiQuickStart.java](src/main/java/io/greptime/BulkWriteApiQuickStart.java)

  This example demonstrates how to use the bulk write API to write large volumes of data to a single table with maximum efficiency. It covers:
  * Configuring the bulk writer for optimal performance
  * Writing large batches of data to a single table
  * Leveraging the adaptive flow control mechanisms
  * Processing write results asynchronously
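As a rough sketch of that flow (the method names below are assumptions based on the bulk quick-start and may differ across client versions; BulkWriteApiQuickStart.java is the authoritative version):

```java
// One bulk stream is bound to exactly one table schema.
BulkStreamWriter writer = client.bulkStreamWriter(schema);

// Fill a large buffer for this table; bulk writes tolerate very large batches.
// The column buffer size argument is an assumed tuning parameter.
Table.TableBufferRoot buffer = writer.tableBufferRoot(1024);
for (int i = 0; i < 100_000; i++) {
    buffer.addRow("host" + (i % 100), System.currentTimeMillis(), Math.random());
}
buffer.complete();

// Ship the batch; adaptive flow control decides when this call may proceed.
CompletableFuture<Integer> affectedRows = writer.writeNext();

// Close the stream after the final batch.
writer.completed();
```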
