Replies: 1 comment
The exact same question: #3548
Dear experts,
Hello! I am a Linux server-side developer. In my typical use of the spdlog library, one logger instance is bound to only one daily_file_sink.
I noticed that in spdlog’s asynchronous logging implementation, all loggers share the same message queue.
Suppose my program has high logging demands, and I want to start multiple logging worker threads (async threads) to process log messages in the queue, thereby accelerating consumption speed.
However, the vast majority of log messages in my program must be output through the same daily_file_sink instance, i.e., the number of "hot" daily_file_sink instances is smaller than the number of worker threads. In this case, multiple logging threads will frequently contend for the mutex of the same sink instance, which leads to the following problem:
Increasing the number of async threads from 1 to several does not significantly improve log consumption speed, because the threads serialize on the hot sink's lock.
For this scenario, might the following design be better?
Each logger has its own internal message queue (called a secondary message queue).
Add a global queue (called a primary message queue), which only stores loggers that have pending messages in their secondary queues.
Thread pool threads are scheduled based on the primary queue:
When a thread works, it fetches a logger from the primary queue and processes the accumulated messages in its secondary queue.
To prevent a thread from continuously consuming messages from a single hot logger (causing messages from other loggers to backlog), assign a "weight" to each worker thread.
When a logging thread fetches a logger and processes its secondary queue, it decides, based on the thread's assigned weight, whether to keep draining that logger's messages or to re-enqueue the logger into the primary queue and move on to the next one.
This ensures that "cold" loggers also receive relatively fair and timely processing.
Both enqueue and dequeue operations for the primary and secondary queues must be protected by locks (e.g., std::mutex).
I would like to ask for your thoughts on this problem. How did you consider such scenarios when designing the spdlog library?