Hi @vasicvuk, thanks for the questions!
If you're looking for something similar to segmented (but not replicated) local storage: Tansu now supports SQLite as a storage engine. It is really intended for test environments, where the storage is a single file that can easily be copied in and out of an environment. Tansu also includes a message generator, so you can easily generate load with sample data (for any storage engine).

Assuming that you have the latest code on main, spin up a broker using SQLite as the storage engine:

```shell
./target/debug/tansu broker \
  --storage-engine=sqlite://tansu.db \
  --schema-registry=file://./etc/schema | tee broker.log
```

In another shell, create the topic:

```shell
./target/debug/tansu topic create customer
```

Generate some load:

```shell
./target/debug/tansu generator \
  --schema-registry=file://./etc/schema \
  --per-second=160 \
  --producers=4 \
  --batch-size=20 \
  --duration-seconds=180 \
  customer | tee generator.log
```

The above will generate 160 messages per second (using 4 producers with batches of 20 messages) for 180 seconds. If you look at https://github.com/tansu-io/tansu/blob/main/etc/schema/customer.proto, you'll see:

```protobuf
message Value {
  string email_address = 1 [(generate).script = "safe_email()"];
  string full_name = 2 [(generate).script = "first_name() + ' ' + last_name()"];
  Address home = 3;
  repeated string industry = 4 [(generate).repeated = {script: "industry()", range: {min: 1, max: 3}}];
}
```

It uses a scripting language to generate (fake) data. You could adapt the protobuf to fit your data if that makes sense for a simple load test.

Use SQLite to view the data:

```shell
> sqlite3 tansu.db
SQLite version 3.43.2 2023-10-10 13:08:14
Enter ".help" for usage hints.
sqlite> select count(*) from record;
```

You can also run the generator while Tansu is using S3 or PostgreSQL, depending on your setup/environment. I'm planning to put the above into a blog post about the SQLite support and message/load generator... which will hopefully make more sense! I hope this helps, let me know if you have any questions.
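To make the `(generate).script` idea concrete, here is a rough Python sketch of what script-driven fake data looks like. This is *not* Tansu's generator or its scripting language; the function bodies and word lists are invented stand-ins for `safe_email()`, `first_name()`, `last_name()`, and `industry()`, and only the shape (each field driven by a small script, with the repeated field honouring `range: {min: 1, max: 3}`) mirrors `customer.proto`:

```python
import random

# Invented stand-ins for the generator's scripting functions.
# The names come from customer.proto; the bodies are illustrative only.
FIRST_NAMES = ["Ada", "Grace", "Alan"]
LAST_NAMES = ["Lovelace", "Hopper", "Turing"]
INDUSTRIES = ["retail", "finance", "logistics", "health"]

def first_name() -> str:
    return random.choice(FIRST_NAMES)

def last_name() -> str:
    return random.choice(LAST_NAMES)

def safe_email() -> str:
    # "safe" in the sense of never hitting a real mailbox
    return f"{first_name().lower()}@example.com"

def industry() -> str:
    return random.choice(INDUSTRIES)

def generate_value() -> dict:
    """Produce one record shaped like the annotated fields of Value."""
    # repeated field with range {min: 1, max: 3}: between 1 and 3 entries
    n = random.randint(1, 3)
    return {
        "email_address": safe_email(),
        "full_name": f"{first_name()} {last_name()}",
        "industry": [industry() for _ in range(n)],
    }

print(generate_value())

# Sanity-check the run above: --per-second=160 for --duration-seconds=180
# should land 160 * 180 = 28,800 rows in the record table, which you can
# compare against `select count(*) from record;`.
expected_records = 160 * 180
print(expected_records)  # 28800
```

Comparing `expected_records` against the `count(*)` from SQLite is a quick way to confirm the generator sustained the requested rate.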
@shortishly I was actually looking for a resource-friendly Kafka-compatible broker when I found this repo :) But I still need scaling and replication, so I guess Postgres is the best way to go. Having lower performance than Java-based Kafka is not really an issue; as you said, there are definitely a lot of different usage scenarios, so it would be great to have some general benchmarks :) Btw, thank you for the responses :)
Hi :) First, I would like to thank you for the great project you have here. I have some questions regarding the project, and I would be very happy if you could provide me with answers :)