I'm using Vector. Whenever the sink experiences a large number of failures (in my case, a number of "connection reset" errors from ClickHouse Cloud while it scales up), Vector soon stops accepting connections, which results in an unhealthy task that AWS kills off. Vector's memory usage also shoots up, but nothing in the logs (level=info) indicates why. I tried buffering to the filesystem to see how it would perform, and it resulted in a 2-3x throughput decrease, which is considerable at our scale, so I haven't tried it against ClickHouse Cloud scaling up. Example error:
Vector config:
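(The original config isn't captured in this snapshot; a minimal sketch of the setup described above, an `http_server` source feeding a `clickhouse` sink, might look like the following. The component names, address, endpoint, and table are all placeholders.)

```toml
# Placeholder source: receives events over HTTP
[sources.http_in]
type = "http_server"
address = "0.0.0.0:8080"

# Placeholder sink: forwards to ClickHouse Cloud
[sinks.clickhouse_out]
type = "clickhouse"
inputs = ["http_in"]
endpoint = "https://example.clickhouse.cloud:8443"  # placeholder endpoint
database = "default"
table = "events"
```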
The behavior you are experiencing does sound expected to me. When the `clickhouse` sink cannot successfully send data, it'll start applying back-pressure to the source (in this case the `http_server` source). The `http_server` source, in the presence of back-pressure, will accept requests but hold them open until it can flush the data. It sounds like you hit a cap on the number of open requests, at which point Vector stops responding.

A few other ideas you could try:
- Configuring the sink's buffer to drop data when full (`buffer.when_full: "drop_newest"`) to shed load when data cannot be sent
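For illustration, a drop-when-full buffer is set on the sink itself; a minimal sketch (the sink name and capacity below are placeholders):

```toml
# Buffer settings for a placeholder sink named "clickhouse_out"
[sinks.clickhouse_out.buffer]
type = "memory"
max_events = 10000            # placeholder capacity; size for your throughput
when_full = "drop_newest"     # shed load instead of applying back-pressure
```

With `drop_newest`, once the buffer fills, Vector discards incoming events rather than holding HTTP requests open, so the `http_server` source keeps responding during a ClickHouse outage at the cost of losing the dropped data.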