Release monitor write lock in between update iterations #2528
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main    #2528      +/-   ##
==========================================
+ Coverage   90.56%   90.81%   +0.24%
==========================================
  Files         109      109
  Lines       57304    59861    +2557
  Branches    57304    59861    +2557
==========================================
+ Hits        51899    54362    +2463
- Misses       5405     5499      +94
```

☔ View full report in Codecov by Sentry.
Force-pushed e9b3355 to 6237706
thanks
Force-pushed b6e7486 to bb028a3
    Previously, updating block data on a chain monitor would acquire a write lock on all of its associated channel monitors and not release it until the loop completed. Now, we instead acquire it on each iteration, fixing lightningdevkit#2470.
Releasing write locks in between monitor updates requires storing a set of cloned keys to iterate over. For efficiency, that set of keys is an actual set, as opposed to an array, which means the iteration order may not be consistent. The test previously relied on an event array index to access the revocation transaction; we change it to access a hash map keyed by txid, fixing the test.
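The locking pattern described above can be sketched as follows. This is a minimal sketch using std types only; `OutPoint` and `MonitorState` are stand-ins for the real rust-lightning structures, and the method name is hypothetical:

```rust
use std::collections::{HashMap, HashSet};
use std::sync::RwLock;

// Stand-ins for LDK's `OutPoint` and per-channel monitor state.
type OutPoint = u64;
struct MonitorState;

struct ChainMonitor {
    monitors: RwLock<HashMap<OutPoint, MonitorState>>,
}

impl ChainMonitor {
    // Applies an update to each monitor, re-acquiring the map lock on
    // every iteration instead of holding it across the whole loop.
    fn update_all(&self) -> usize {
        // Clone the keys into a set so the lock can be dropped while iterating.
        let funding_outpoints: HashSet<OutPoint> =
            self.monitors.read().unwrap().keys().cloned().collect();
        let mut processed = 0;
        for funding_outpoint in funding_outpoints.iter() {
            // The lock is re-taken here and released at the end of each
            // iteration, letting other threads interleave between iterations.
            let monitors = self.monitors.read().unwrap();
            if let Some(_monitor) = monitors.get(funding_outpoint) {
                // ... apply the update to this monitor here ...
                processed += 1;
            }
        }
        processed
    }
}
```

Because the keys are snapshotted up front, monitors registered after the snapshot is taken are not visited by this pass, which is what motivates the follow-up re-iteration discussed later in the review.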
Force-pushed bb028a3 to f80284c
```rust
{
	let funding_outpoints: HashSet<OutPoint> = HashSet::from_iter(self.monitors.read().unwrap().keys().cloned());
	for funding_outpoint in funding_outpoints.iter() {
		let monitor_lock = self.monitors.read().unwrap();
```
nit: rename monitor_lock -> monitors
```rust
	let funding_outpoints: HashSet<OutPoint> = HashSet::from_iter(self.monitors.read().unwrap().keys().cloned());
	for funding_outpoint in funding_outpoints.iter() {
		let monitor_lock = self.monitors.read().unwrap();
		if let Some(monitor_state) = monitor_lock.get(funding_outpoint) {
```
nit: rename monitor_state -> monitor
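Since the lock is dropped between iterations, a monitor can be removed by another thread after the key snapshot is taken; the `if let Some(..)` guard quoted above skips such entries rather than panicking. A small illustration with std types only (the keys are sorted here purely to make the example deterministic — the real code iterates a `HashSet`):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

type OutPoint = u64;

// Iterates over a snapshot of keys; entries removed in between
// iterations are skipped by the `if let Some(..)` guard.
fn process(monitors: &RwLock<HashMap<OutPoint, &'static str>>) -> Vec<OutPoint> {
    let mut keys: Vec<OutPoint> = monitors.read().unwrap().keys().cloned().collect();
    keys.sort(); // only to make this illustration deterministic
    let mut seen = Vec::new();
    for k in keys {
        // Simulate a concurrent removal of outpoint 2 happening mid-loop.
        if k == 1 {
            monitors.write().unwrap().remove(&2);
        }
        let guard = monitors.read().unwrap();
        if let Some(_state) = guard.get(&k) {
            seen.push(k);
        }
    }
    seen
}
```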
```rust
		}
	}

	// do some followup cleanup if any funding outpoints were added in between iterations
```
"cleanup" doesn't sound right in this case.
Re-iterate if any funding outpoints were added in between write locked iterations above.
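The follow-up pass the comment describes could take the shape below. This is a hedged sketch, not the PR's actual code: the helper name `newly_added_outpoints` is hypothetical, and std types stand in for LDK's:

```rust
use std::collections::{HashMap, HashSet};
use std::sync::RwLock;

type OutPoint = u64;

// Hypothetical helper: returns outpoints that appeared in `monitors`
// after the initial key snapshot, i.e. ones added in between the
// write-locked iterations, so the caller can re-iterate over just those.
fn newly_added_outpoints(
    monitors: &RwLock<HashMap<OutPoint, ()>>,
    already_processed: &HashSet<OutPoint>,
) -> HashSet<OutPoint> {
    monitors
        .read()
        .unwrap()
        .keys()
        .copied()
        .filter(|k| !already_processed.contains(k))
        .collect()
}
```

The caller would loop, processing the returned set with the same per-iteration locking, until this helper returns an empty set.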
There are some hidden implications,
Fixes #2470.