
Conversation

wvanlint
Contributor

@wvanlint wvanlint commented May 2, 2025

This change allows users to create hierarchical span objects through the Logger interface for specific computations, such as the handling of HTLCs. These span objects will be held in LDK across the corresponding lifetimes before being dropped, providing insight into durations and latencies.

This API is designed to be compatible with https://docs.rs/opentelemetry/latest/opentelemetry/trace/trait.Tracer.html, but can also be used more directly to derive time-based metrics or to log durations.

Hierarchical RAII spans are currently added for:

  • HTLC lifetimes, HTLC state transitions, and inbound-to-outbound HTLC forwarding (see functional_tests.rs).
  • Ping/pong request-response pairs.
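
For a rough illustration of the intended usage (the TimingSpan type below is hypothetical and not part of this PR), a user-defined span object returned from the Logger could simply record its start time and report the elapsed duration when LDK drops it:

use std::time::Instant;

// Hypothetical user-defined span object. LDK would hold it for the lifetime of
// the measured computation and drop it when the span ends.
struct TimingSpan {
	name: &'static str,
	started_at: Instant,
}

impl TimingSpan {
	fn start(name: &'static str) -> Self {
		TimingSpan { name, started_at: Instant::now() }
	}
}

impl Drop for TimingSpan {
	fn drop(&mut self) {
		// An OpenTelemetry-backed implementation would end a tracer span here
		// instead of (or in addition to) logging the duration.
		println!("span {} took {:?}", self.name, self.started_at.elapsed());
	}
}

fn main() {
	// Simulates LDK holding the span object across a computation.
	let _span = TimingSpan::start("htlc_forward");
	// ... computation being measured ...
}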

@ldk-reviews-bot

ldk-reviews-bot commented May 2, 2025

👋 I see @tnull was un-assigned.
If you'd like another reviewer assignment, please click here.

@wvanlint wvanlint force-pushed the trace_spans branch 2 times, most recently from d115f88 to 40a6022 on May 2, 2025 22:17

codecov bot commented May 2, 2025

Codecov Report

❌ Patch coverage is 88.40749% with 99 lines in your changes missing coverage. Please review.
✅ Project coverage is 88.99%. Comparing base (ed42300) to head (a7f103c).
⚠️ Report is 71 commits behind head on main.

Files with missing lines Patch % Lines
lightning/src/ln/channel.rs 87.36% 28 Missing and 40 partials ⚠️
lightning/src/ln/channelmanager.rs 71.69% 15 Missing ⚠️
lightning/src/util/logger.rs 72.72% 6 Missing ⚠️
lightning/src/chain/channelmonitor.rs 0.00% 3 Missing ⚠️
lightning/src/ln/invoice_utils.rs 0.00% 3 Missing ⚠️
lightning-dns-resolver/src/lib.rs 0.00% 1 Missing ⚠️
lightning-net-tokio/src/lib.rs 0.00% 1 Missing ⚠️
lightning/src/ln/onion_payment.rs 83.33% 0 Missing and 1 partial ⚠️
lightning/src/util/test_utils.rs 95.65% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3766      +/-   ##
==========================================
+ Coverage   88.94%   88.99%   +0.05%     
==========================================
  Files         173      174       +1     
  Lines      123876   124808     +932     
  Branches   123876   124808     +932     
==========================================
+ Hits       110177   111077     +900     
+ Misses      11248    11247       -1     
- Partials     2451     2484      +33     
Flag Coverage Δ
fuzzing 22.90% <44.85%> (+0.72%) ⬆️
tests 88.82% <88.40%> (+0.05%) ⬆️

@wvanlint wvanlint force-pushed the trace_spans branch 3 times, most recently from f7c48f7 to bd59e5e on May 5, 2025 06:31
@wvanlint wvanlint marked this pull request as ready for review May 5, 2025 06:58
Collaborator

@TheBlueMatt TheBlueMatt left a comment

Oops, okay so there was def a miscommunication when we spoke about switching to an RAII span concept 😅. At a high level I wasn't anticipating we'd give spans for total HTLC forwards - I was assuming we'd only do spans for basically function runtime in LDK - "how long did it take us to process events", "how long did it take us to process the pending forwards", etc. From there, we might give spans for things like monitor updates (including the parent idea, which is cool), so you could answer questions like "how much of the time spent processing pending forwards is spent in monitor updates".

By focusing on answering questions about specific function runtime, the "parent" thing is super well-defined. But once we switch to non-functional spans, it becomes a bit murky - a monitor update might block several payment forwards, so should it have all of them as "parents"?

Of course using only functional blocks as spans makes it less able to answer the types of questions I think you specifically want, eg "how long is it taking us to forward most HTLCs", but as it stands it's hard to take that and answer "why is it taking us so long to forward some HTLCs".

We could also split the difference and have functional spans for LDK things, but also non-functional spans for outside-LDK things - ie the above spans plus a span for how long each async monitor update took to persist, as well as how long it took to get a response from a peer once we send a message that needs a response. Sadly the latter is a bit annoying to build - commitment_signed -> revoke_and_ack latency is pretty easy to measure, but once we get the RAA we're still waiting on another commitment_signed in many cases. There may be a way to do it specific to an HTLC, kinda like you have here, but just around messages themselves.

WDYT?
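
(Purely as an illustration of the message-latency idea above, and not any LDK API: a minimal sketch of measuring commitment_signed -> revoke_and_ack latency per channel, with ChannelId and the method names as placeholders.)

use std::collections::HashMap;
use std::time::{Duration, Instant};

// Placeholder channel identifier; in LDK this would be the real ChannelId type.
type ChannelId = [u8; 32];

// Records when a commitment_signed was sent so the latency until the peer's
// revoke_and_ack can be attributed to the peer rather than to local processing.
#[derive(Default)]
struct PeerLatencyTracker {
	pending: HashMap<ChannelId, Instant>,
}

impl PeerLatencyTracker {
	fn commitment_signed_sent(&mut self, channel_id: ChannelId) {
		self.pending.insert(channel_id, Instant::now());
	}

	fn revoke_and_ack_received(&mut self, channel_id: ChannelId) -> Option<Duration> {
		// As noted above, in many cases we are still waiting on a further
		// commitment_signed after the RAA, so this only captures part of the
		// round trip.
		self.pending.remove(&channel_id).map(|sent_at| sent_at.elapsed())
	}
}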

cltv_expiry: u32,
payment_hash: PaymentHash,
state_wrapper: InboundHTLCStateWrapper,
span: BoxedSpan,
Collaborator

Oh man, it's a bit weird to start storing user objects in all our structs. I was kinda anticipating an RAII struct of our own that calls an end method on the Logger, though thinking about it now it's definitely obvious that letting the user define the struct is easier for them (I guess otherwise they'd have to keep a map of pending spans and drop the opentelemetry::Tracer by hand?).

Contributor Author

Yeah exactly, this was to avoid reconciliation on the user side.
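
(For context, the user-side reconciliation that this avoids would look roughly like the sketch below; SpanId and the callback names are hypothetical, not the PR's API.)

use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical opaque span id that LDK would pass to both start and end
// callbacks if it did not hold the user's span object itself.
type SpanId = u64;

// The bookkeeping a user would need without the RAII approach: a map of
// pending tracer spans, inserted on start and removed/ended on end.
struct SpanTracker<S> {
	pending: Mutex<HashMap<SpanId, S>>,
}

impl<S> SpanTracker<S> {
	fn on_span_start(&self, id: SpanId, span: S) {
		self.pending.lock().unwrap().insert(id, span);
	}

	fn on_span_end(&self, id: SpanId) -> Option<S> {
		// The caller would end/drop the returned tracer span here.
		self.pending.lock().unwrap().remove(&id)
	}
}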

@TheBlueMatt TheBlueMatt removed the request for review from valentinewallace May 5, 2025 14:33
@wvanlint
Contributor Author

wvanlint commented May 5, 2025

Haha no worries, I imagined it wasn't the same but wanted to write this up as a starting point.

I agree on the difference between function scope spans and higher-level state machine spans and how they answer different questions. Given the amount of logic that depends on messages, timers, and asynchronous persistence completions, I think we would need both complementing each other to have a complete understanding.

  • Function scope spans are clear and easy to optimize for, but don't cross certain boundaries and would not give a complete picture. User trait implementations are already easy to measure but it's difficult to decide when it's impactful to improve them.
  • Higher-level spans around the Lightning state machine would give an end-to-end view for Lightning payments and can identify rough areas of investigation.

If we're building higher-level state machine spans on top of messages outside of LDK as in https://discord.com/channels/915026692102316113/978829624635195422/1296877062015029258, would it lead to rebuilding/duplicating something similar to the Lightning state machine to get the full picture? It seems like channel.rs already has all the necessary information.

I'm also curious what we should optimize for. I'm assuming we would like to have maximal observability, while minimizing overhead. Is maintaining span objects in the Lightning state machine too invasive? Would there be a more minimal event API that can be added to channel.rs?

Having start(span: Span, parent: Option<&Span>) -> () and end(span: Span) -> () methods without the RAII concept could be less invasive, possibly with some helpers. We could also still verify in TestLogger whether all spans are ended correctly during tests.
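
(A rough sketch of what that non-RAII variant plus a small internal helper might look like; Span, SpanLogger, and SpanGuard are illustrative names rather than proposed API.)

// The Logger only receives start/end notifications; a small helper guard
// inside LDK pairs them up so call sites still get RAII ergonomics without
// storing any user-defined object in LDK's structs.
struct Span {
	name: &'static str,
}

trait SpanLogger {
	fn span_start(&self, span: &Span, parent: Option<&Span>);
	fn span_end(&self, span: &Span);
}

struct SpanGuard<'a, L: SpanLogger> {
	logger: &'a L,
	span: Span,
}

impl<'a, L: SpanLogger> SpanGuard<'a, L> {
	fn new(logger: &'a L, span: Span, parent: Option<&Span>) -> Self {
		logger.span_start(&span, parent);
		SpanGuard { logger, span }
	}
}

impl<'a, L: SpanLogger> Drop for SpanGuard<'a, L> {
	fn drop(&mut self) {
		self.logger.span_end(&self.span);
	}
}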

@wvanlint wvanlint requested a review from TheBlueMatt May 13, 2025 18:09
@TheBlueMatt
Collaborator

Oops, was travelling last week.

Of course using only functional blocks as spans makes it less able to answer the types of questions I think you specifically want, eg "how long is it taking us to forward most HTLCs", but as it stands it's hard to take that and answer "why is it taking us so long to forward some HTLCs".

I think this is somewhat the key point - we want to be able to answer questions like how long it's taking to send/receive/forward an HTLC, but also, importantly, why.

That may imply we need a bit of both - functional spans for CPU-blocked blocks (like process_pending_htlc_forwards) as well as "state step" notifications for HTLCs - ie something more like the following on the HTLC side:

enum HTLC {
	Inbound(InboundHTLCStateDetails),
	Outbound(OutboundHTLCStateDetails + AddedToOutboundQueue),
}
fn htlc_state_transitioned(channel_id, htlc_id: u64, payment_hash, new_state: HTLC);

This still wouldn't quite answer the "why is it taking so long to forward HTLCs", but maybe if we joined it with blame on wait-time inside a channel:

fn channel_waiting_on_peer_response(channel_id);
fn channel_peer_responded(channel_id);
fn channel_monitor_persist_start(channel_id, monitor_update_id);
fn channel_monitor_persist_finish(channel_id, monitor_update_id);
fn waiting_on_async_signer(channel_id);
fn async_signer_complete(channel_id);

From there we could answer the above question by looking at how long individual state transitions took - from InboundHTLCStateDetails::AwaitingRemoteRevokeToAdd to InboundHTLCStateDetails::Committed we can look at how much time was spent waiting on a peer vs waiting on a monitor persist or an async signing op. Then the latency from there to a corresponding HTLC::Outbound::AddedToOutboundQueue is on us (visible via monitor and async signer latency), and then the time from there to OutboundHTLCStateDetails::Committed can be seen in the ratio between waiting-on-peer and monitor/async-signing.
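
(As a sketch of the accounting this would enable, with purely illustrative types: per-state durations could be derived from the transition notifications above roughly as follows.)

use std::collections::HashMap;
use std::time::{Duration, Instant};

// Placeholder key; in the scheme above this would be (htlc_id, channel_id).
type HtlcKey = (u64, [u8; 32]);

#[derive(Default)]
struct HtlcStateDurations {
	// Last observed state label for each HTLC and when it was entered.
	current: HashMap<HtlcKey, (&'static str, Instant)>,
	// Accumulated time spent per state across all observed HTLCs.
	per_state: HashMap<&'static str, Duration>,
}

impl HtlcStateDurations {
	fn on_transition(&mut self, key: HtlcKey, new_state: &'static str) {
		let now = Instant::now();
		if let Some((prev_state, entered_at)) = self.current.insert(key, (new_state, now)) {
			*self.per_state.entry(prev_state).or_default() += now - entered_at;
		}
	}
}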

It's pretty gnarly to actually answer the why here, sadly, though - you have to track the HTLC:channel relation and then track time in the channel "states" during the time the HTLC is in different states... I suppose we could break out the HTLC state transitions into channel sub-states like

enum ChanState {
	WaitingOnPeerResponse,
	WaitingOnMonitorPersist,
	WaitingOnAsyncSigner,
	NotWaiting,
}
fn htlc_state_transition(channel_id, htlc_id: u64, payment_hash, state: HTLC, channel_state: ChanState);

but that would require calling htlc_state_transition once for each HTLC in a channel every time the channel updates, which seems kinda ugly (I guess we could skip it if the HTLC is in the Committed state which would reduce it to only HTLCs that are changing) and it would still require tracking the HTLCs as they cross channel boundaries when forwarded.

Of course this is all somewhat orthogonal to whether to do things RAII - either we require the user to do the mapping on their side or we store the RAII objects and forward them around. Personally I prefer not to store user objects in our state machine, but I admit I don't really have a strong argument for not doing so - there's not really much difference in terms of runtime switching, but it does seem maybe-useful for the user to get a &self on the span closure. It's also less clear to me how to map the state transitions onto a Span - I guess we'd just be calling htlc_state_transition on the span object.

@TheBlueMatt TheBlueMatt removed their request for review May 13, 2025 19:19
@tnull tnull self-requested a review July 27, 2025 09:26
@ldk-reviews-bot

🔔 1st Reminder

Hey @tnull! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

fn log(&self, record: Record);
/// Indicates the start of a span of computation.
/// The returned object will be dropped when the span ends.
fn start(&self, span: Span, parent: Option<&Self::UserSpan>) -> Self::UserSpan;
Collaborator

I'm curious what you thought of my comment above? The simplified "parent" approach here isn't gonna allow us to answer the "why" question on an HTLC, and is kinda confusing (like, I'd naively expect a parent OutboundHTLC span on a WaitingOnMonitorPersist span because the outbound HTLC is waiting on a monitor persist for some part of its life).

At a minimum we need to provide enough info to see the mapping to "real" parents, I think.

Contributor Author

I was thinking a tree of spans would allow us to fully account for the entire lifetime duration of an HTLC, given enough spans. It would indeed not contain real/unique channel monitor persistence spans due to the HTLC:channel relation; it would only be from the perspective of each HTLC. Would that miss part of the "why"?

The tree-based approach was chosen to be compatible with tracing implementations and get out-of-the-box visualizations, but additional relationships could be represented as id attributes on the Span variants I think. To your point about the mapping, perhaps HTLC traces and channel monitor persistence traces can be created independently, and correlated based on monitor update id attributes.

I'd naively expect a parent OutboundHTLC span on a WaitingOnMonitorPersist span because the outbound HTLC is waiting on a monitor persist for some part of its life

This is mostly the approach I took for the latest commits. Is the confusing part that each HTLC would have its own denormalized WaitingOnMonitorPersist span?
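
(A minimal sketch of the attribute-based correlation idea from the previous reply; SpanDescription and the attribute key are illustrative and not the Span type in this PR.)

use std::collections::HashMap;

// Illustrative only: span descriptions carrying id attributes so that
// independently-created HTLC traces and channel monitor persistence traces
// can be joined after the fact on a shared monitor update id.
struct SpanDescription {
	name: &'static str,
	attributes: HashMap<&'static str, u64>,
}

fn htlc_waiting_on_persist_span(monitor_update_id: u64) -> SpanDescription {
	let mut attributes = HashMap::new();
	attributes.insert("monitor_update_id", monitor_update_id);
	SpanDescription { name: "htlc_waiting_on_monitor_persist", attributes }
}

fn monitor_persist_span(monitor_update_id: u64) -> SpanDescription {
	let mut attributes = HashMap::new();
	attributes.insert("monitor_update_id", monitor_update_id);
	SpanDescription { name: "channel_monitor_persist", attributes }
}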

@wvanlint
Contributor Author

wvanlint commented Aug 5, 2025

Apologies for the delay here, a project at work came up that needed to be prioritized.

I rebased the PR and added additional WaitingOn* spans as leaves in the HTLC span hierarchy. I still need to verify completeness there.

It's pretty gnarly to actually answer the why here, sadly, though - you have to track the HTLC:channel relation and then track time in the channel "states" during the time the HTLC is in different states... I suppose we could break out the HTLC state transitions into channel sub-states

Right, the HTLC:channel relation can't be represented simply, but you can indeed break out the spans from the viewpoint of an HTLC, which feels like the right approach to me. If an HTLC needs to wait for a remote revocation, an HTLC-specific span can be started at that point in time until it arrives, even though other HTLCs might have started waiting for that revocation beforehand. These spans would be specifically from the viewpoint of each HTLC, but I believe they could explain the why? When complete, the WaitingOn* leaf spans should fully cover the lifecycle of an HTLC.

The spans from the viewpoint of each HTLC might be complemented by other metrics/spans, such as measuring the duration of Persist calls more directly.

@TheBlueMatt TheBlueMatt removed the request for review from tnull August 20, 2025 19:34
@TheBlueMatt
Collaborator

We discussed this a bit at the meeting last Monday, and I suggested looking into a more event-based approach, which is simpler to track on the LDK end and, I think, done right, doesn't require a lot more tracking on the client end to get the same results, but also might allow other types of results for users who want them. I'm happy to discuss further here if you prefer the tracing approach we have here.
