Support structured logging #5710
Codecov Report

❌ Patch coverage details and impacted files:

```
@@           Coverage Diff            @@
##           develop    #5710   +/-   ##
=========================================
- Coverage     78.9%    78.7%   -0.2%
=========================================
  Files          816      816
  Lines        72246    72709    +463
  Branches      8418     8486     +68
=========================================
+ Hits         56993    57210    +217
- Misses       15253    15499    +246
```
This is a big PR, it'll take some time for me to review it. This is part 1 :)
All in all, I think it would be great to have a side-by-side performance comparison of the current logger and the one you're proposing. Specifically:
- Execution time
- Log file size (i.e. how does the log file size differ between structured and unstructured logging)
The log file size is going to be 100% to 300% larger than before, I believe, as we'll be logging extra information.
Performance test results

Code used for testing:

```cpp
TEST_CASE("Performance test")
{
    std::string logStream;
    logStream.reserve(1024 * 5);
    MockLogs logs{logStream, beast::severities::kAll};
    beast::Journal::enableStructuredJournal();
    beast::Journal::addGlobalAttributes(log::attributes(
        log::attr("Field1", "Value1"),
        log::attr("Field2", "Value2")));
    auto j = logs.journal("Test", log::attributes(
        log::attr("Field1", "Value1"),
        log::attr("Field2", "Value2")));

    std::chrono::steady_clock::time_point startTime = std::chrono::steady_clock::now();
    auto iterations = 10'000'000;
    if (beast::Journal::isStructuredJournalEnabled())
    {
        for (std::int64_t i = 0; i < iterations; i++)
        {
            j.debug()
                << "Test, "
                << log::param("Field1", 1)
                << log::param("Field2", "2")
                << log::param("Field3", 121.2);
            logStream.clear();
        }
    }
    else
    {
        for (std::int64_t i = 0; i < iterations; i++)
        {
            j.debug()
                << R"AAA({"JournalParams":{"Field1":"Value1","Field2":"Value2"},"GlobalParams":{"Field1":"Value1","Field2":"Value2"},"MessageParams":{"Function":"void DOCTEST_ANON_FUNC_14()","File":"rippled/src/tests/libxrpl/basics/log.cpp","Line":249,"ThreadId":"0x1f4fd20c0","Level":"debug","Time":1756807007389,"Field1":1,"Field2":"2","Field3":1.2},"Message":"Test, 121.2"})AAA";
            logStream.clear();
        }
    }
    std::chrono::steady_clock::time_point endTime = std::chrono::steady_clock::now();
    std::cout << "Time: " << endTime - startTime << std::endl;
}
```

Results:
- Total time when JSON logging is enabled: 26493392625 ns
- Total time when JSON logging is disabled: 32366780250 ns

Conclusion: This feature uses …
There are some small issues with my latest change. Going to make this PR available tomorrow after I fix them.
Add a human-readable time field.
@a1q123456 regarding your previous comment:
Could you also run a benchmark comparing the size of the logs? Operators are already concerned that …
Signed-off-by: JCW <[email protected]>
The performance test shows around a 0.5% throughput drop when JSON logging is enabled. Given the isolated test, I believe this is because we're logging more text. I can optimise logging performance in a separate PR.
High Level Overview of Change
This PR adds support for structured logs to rippled.
Before:
After:
To enable it, we'll need to add the following to the config file.
Context of Change
rippled currently produces plain-text logs. These are human-readable, but they make debugging harder: contextual values are missing from the logs (e.g. the ID of the transaction or account we're currently processing), and parameters pushed to logs get formatted directly into the message, so we have no chance to extract them programmatically later when debugging.
For example, if we'd like to debug an error and we have
AMM Bid: not in range 10 5
hiding in millions of lines of logs, we'd have to write a regex to match such messages and then go through each matching line to find the related error. With structured logging, we can filter logs using JavaScript or any other query language, so we could do:
if we have logs like:
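For illustration only (the envelope mirrors the JSON shape shown in the performance test output earlier in this thread; the `ComputedPrice` and `BidMax` field names here are hypothetical), a structured version of the same message might look like:

```json
{"MessageParams":{"Level":"error","Time":1756807007389,"ComputedPrice":10,"BidMax":5},"Message":"AMM Bid: not in range"}
```

A log query could then match on `MessageParams.ComputedPrice` directly instead of regex-parsing the message text.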
Type of Change
- Chore (no impact to binary, e.g. `.gitignore`, formatting, dropping support for older tooling)

API Impact
- `libxrpl` change (any change that may affect `libxrpl` or dependents of `libxrpl`)

Future Tasks
- `beast::detail::SimpleJsonWriter` only pushes strings into the final JSON structure; it doesn't (and can't) ensure the JSON object is valid (i.e. you can add duplicate keys to an object and it won't complain). We should be careful when adding parameters to ensure we don't produce duplicate keys.
- We should change code like the following:
  `JLOG(ctx_.journal.debug()) << "AMM Bid: not in range " << computedPrice << " " << *bidMax;`
  to a parameterised form, so that we get logs with extractable parameters instead of values formatted directly into the message text.
- Add a `trace id` to logs, to make it easier to extract all the logs related to a transaction or an action.