
Conversation

@plebhash (Member) commented Dec 24, 2025

@plebhash changed the title from "add Group Channel adaptations on apps" to "add Group Channel adaptations on apps 🎅" on Dec 24, 2025
@plebhash force-pushed the 2025-12-18-adapt-group-channels branch from 4a0fa49 to c7b55c2 on December 28, 2025 at 01:16
@plebhash changed the title from "add Group Channel adaptations on apps 🎅" to "add Group Channel adaptations on apps" on Dec 28, 2025
@plebhash force-pushed the 2025-12-18-adapt-group-channels branch from c7b55c2 to 6a9c060 on December 28, 2025 at 20:29
@plebhash force-pushed the 2025-12-18-adapt-group-channels branch 5 times, most recently from d61670c to b907454, on December 29, 2025 at 17:33
@plebhash (Member, Author)

@lucasbalieiro can you please do some testing with the changes from this PR + stratum-mining/stratum#2044?

please make sure you cover:

  • no JD + tProxy in aggregated + at least 2 cpu miners
  • no JD + tProxy in non-aggregated + at least 2 cpu miners
  • JD + tProxy in non-aggregated + at least 2 cpu miners

@lucasbalieiro (Contributor)

> @lucasbalieiro can you please do some testing with the changes from this PR + stratum-mining/stratum#2044?
>
> please make sure you cover:
>
>   • no JD + tProxy in aggregated + at least 2 cpu miners
>   • no JD + tProxy in non-aggregated + at least 2 cpu miners
>   • JD + tProxy in non-aggregated + at least 2 cpu miners

Working on it.

@lucasbalieiro (Contributor) commented Jan 2, 2026

I was able to do some testing across all the scenarios you suggested: I mined some blocks and checked whether it opens more channels than necessary upstream. Everything seems to be working fine.

I wasn't able to run longer testing sessions because I need the patch from #138 for my setup to behave better.

@GitGab19 linked an issue on Jan 21, 2026 that may be closed by this pull request
@GitGab19 (Member) commented Jan 21, 2026

I was testing this PR by running Pool + tProxy in aggregated mode. After the channel is opened on the first downstream connection, if we connect a second cpu-miner, it doesn't receive the mining.notify right after it connects; it only receives it when the next job is created by the Pool.

This doesn't happen on current main; I tested the same thing there and everything works as expected.

To reproduce:

  • launch Pool
  • launch tProxy in aggregated mode
  • connect the first cpu-miner
  • check that it receives the job right after the initial sv1 messages
  • connect a second cpu-miner
  • check that it doesn't receive the job right after the initial sv1 messages, but only after the next job is created and sent by the Pool

To make this test clearer, I would suggest changing the min_interval value in the Pool's config file to 30s or so, as sketched below.
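
For reference, a minimal sketch of that tweak, assuming the Pool's config is TOML and that min_interval is a top-level key expressed in seconds (both placement and units are assumptions; check the actual config file):

```toml
# hypothetical excerpt of the Pool's config file; the key name comes from the
# suggestion above, but its placement and units are assumed
min_interval = 30
```

With a 30-second job interval, the gap between the second cpu-miner connecting and the next job arriving becomes long enough to observe the missing initial mining.notify reliably.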

@GitGab19 (Member)

Ok, I guess I found the issue.

In the ChannelManager we are not setting the channel_id of the NewExtendedMiningJob to AGGREGATED_CHANNEL_ID before sending the job to the Sv1Server.

And in this PR we introduced the following in Sv1Server (on these lines):

```rust
if let Some(prevhash) = self
    .sv1_server_data
    .super_safe_lock(|v| v.get_prevhash(m.channel_id))
{
    // build the very first mining.notify for the new downstream
    // (remainder of the handler elided in this excerpt)
}
```

But if we don't use AGGREGATED_CHANNEL_ID as m.channel_id, we won't find the prevhash in the hashmap, and so we won't create the very first mining.notify for the second cpu-miner.
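
To illustrate the miss, here's a minimal, self-contained sketch (all names, types, and the AGGREGATED_CHANNEL_ID value are hypothetical stand-ins for the real tProxy code):

```rust
use std::collections::HashMap;

// Hypothetical stand-in value; the real constant lives in the tProxy code.
const AGGREGATED_CHANNEL_ID: u32 = u32::MAX;

// Hypothetical stand-in for the Sv1Server's shared state.
struct Sv1ServerData {
    // prevhash stored per channel id, as in the hashmap mentioned above
    prevhashes: HashMap<u32, [u8; 32]>,
}

impl Sv1ServerData {
    fn get_prevhash(&self, channel_id: u32) -> Option<[u8; 32]> {
        self.prevhashes.get(&channel_id).copied()
    }
}

fn main() {
    let mut data = Sv1ServerData { prevhashes: HashMap::new() };

    // In aggregated mode the prevhash is keyed under AGGREGATED_CHANNEL_ID...
    data.prevhashes.insert(AGGREGATED_CHANNEL_ID, [0u8; 32]);

    // ...but the NewExtendedMiningJob forwarded to the Sv1Server still
    // carried the per-downstream channel id (e.g. the second cpu-miner's):
    let second_miner_channel_id = 2u32;
    assert!(data.get_prevhash(second_miner_channel_id).is_none());
    // -> the lookup misses, so no initial mining.notify is built

    // With the fix below (job.channel_id = AGGREGATED_CHANNEL_ID),
    // the lookup hits and the first mining.notify can be created:
    assert!(data.get_prevhash(AGGREGATED_CHANNEL_ID).is_some());
}
```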

@plebhash (Member, Author)

thanks for spotting this @GitGab19

I reproduced it, and based on your investigation I introduced the following adaptation at line 407 of channel_manager.rs.

more specifically, the context of this change is:

  • inside ChannelManager::handle_downstream_message
  • inside the Mining::OpenExtendedMiningChannel(m) => { ... } arm of the match on message
  • inside an if mode == ChannelMode::Aggregated { ... } block
  • inside an if self.channel_manager_data.super_safe_lock(|c| c.upstream_extended_channel.is_some()) { ... } block
  • inside an if let Some(mut job) = last_active_job { ... } block
```diff
 // update the downstream channel with the active job and the chain tip
 if let Some(mut job) = last_active_job {
     if let Some(last_chain_tip) = last_chain_tip {
         // update the downstream channel with the active chain tip
         self.channel_manager_data.super_safe_lock(|c| {
             if let Some(ch) = c.extended_channels.get(&next_channel_id) {
                 ch.write()
                     .unwrap()
                     .set_chain_tip(last_chain_tip.clone());
             }
         });
     }
     job.channel_id = next_channel_id;
     // update the downstream channel with the active job
     self.channel_manager_data.super_safe_lock(|c| {
         if let Some(ch) = c.extended_channels.get(&next_channel_id) {
             let _ = ch
                 .write()
                 .unwrap()
                 .on_new_extended_mining_job(job.clone());
         }
     });

+    // set the channel id to the aggregated channel id
+    // before sending the message to the SV1Server
+    job.channel_id = AGGREGATED_CHANNEL_ID;

     self.channel_state
         .sv1_server_sender
         .send((Mining::NewExtendedMiningJob(job.clone()), None))
         .await
         .map_err(|e| {
             error!("Failed to send last new extended mining job to SV1Server: {:?}", e);
             TproxyError::shutdown(TproxyErrorKind::ChannelErrorSender)
         })?;
 }
```

after this modification, the issue is no longer reproducible

I'm squashing this modification into a80cba2

@plebhash force-pushed the 2025-12-18-adapt-group-channels branch from ab33bfb to 28020fc on January 21, 2026 at 14:01
@GitGab19 (Member) left a comment

tACK

@Shourya742 (Collaborator) left a comment

ACK

@plebhash force-pushed the 2025-12-18-adapt-group-channels branch from 28020fc to c9118d4 on January 21, 2026 at 15:04
@plebhash force-pushed the 2025-12-18-adapt-group-channels branch from c9118d4 to 71068aa on January 21, 2026 at 15:14
@plebhash force-pushed the 2025-12-18-adapt-group-channels branch from 71068aa to fc5ff90 on January 21, 2026 at 15:23
@plebhash merged commit 4bb6f1c into stratum-mining:main on Jan 21, 2026 (10 checks passed)
@plebhash deleted the 2025-12-18-adapt-group-channels branch on January 21, 2026 at 15:43
Linked issues that may be closed by this pull request:

  • Group Channel adaptations on apps
  • CloseChannel should trigger fallback for aggregated tProxy
