Replies: 2 comments 7 replies
-
We are chasing some potential harvester issues when you have a lot of plots on the same harvester - and I imagine that with 30+ 16TB drives on a machine, you qualify for this.
Could you try setting interval_seconds to something high, like 3600 or even higher?
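For reference, `interval_seconds` controls how often the harvester rescans its plot directories; it sits under `plots_refresh_parameter` in the `harvester` section of `~/.chia/mainnet/config/config.yaml` (this block was introduced with the reworked plot manager in Chia 1.3.0, so it won't exist on 1.2.11). A sketch of the relevant block - the sibling values shown are my recollection of the shipped defaults, so double-check them against your own config:

```yaml
harvester:
  plots_refresh_parameter:
    interval_seconds: 3600        # default 120; raised as suggested above
    retry_invalid_seconds: 1200   # wait before retrying plots that failed to load
    batch_size: 300               # plots loaded per refresh batch
    batch_sleep_milliseconds: 1   # pause between batches
```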
-
Summary: Changing interval_seconds didn't impact performance at all.
================================================================
Current state: SELF_POOLING
================================================================
2022-04-02T12:36:21.981 harvester chia.harvester.harvester: INFO 6 plots were eligible for farming 216d58fab0... Found 0 proofs. Time: 0.19600 s. Total 3822 plots
================================================================
chia plotnft show

Log2: 36 disks per harvester now. You can see no valuable WARNINGs or ERRORs in the log... (but I know they are already there... stale plots)

Harvester log, solo farming (/home/user5/.chia/mainnet/log/debug.log): latest error in the log at 13:26:53 as of 13:38, then at 13:34:53 as of 13:44:
2022-04-02T13:33:54.620 full_node full_node_server : INFO Connection closed: 50.35.187.180, node id: aaaae730d4707ccaacb2cbf8f7c7dxxxxxxxxxxxxxxxxxxxxxxxxx

Issue: Absolutely nothing tells you that plots are stale. Both harvester and farmer logs look clean to me...

================================================================
Log3: Now connecting to FLEXPOOL to see some error codes/stale plots (it's the only way to debug the issue).
Wallet height: 1782219
2022-04-02T16:16:04.225 farmer chia.farmer.farmer : INFO Submitting partial for a5205ee3eda9b0502fcfc2833acxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx to https://xch-us-east.flexpool.io

================================================================
Log4: Changed interval to 3600 sec:
user5@user5-harvester:
2022-04-02T16:25:31.425 daemon chia.daemon.server : INFO sending term signal to chia_harvester

================================================================
Log5: Farmer log (note that delays are growing: 600-1200 seconds now):
VDF Client: Got stop signal!
Farmer (huge delays reported... 1,813 secs):
Harvester at the same time.
================================================================
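To quantify the growing delays without relying on the pool's dashboard, one option is to measure the gaps between consecutive "Submitting partial" lines in the farmer log directly. A minimal sketch - the log-line format is taken from the excerpt above, while the function name and structure are my own:

```python
import re
from datetime import datetime

# Match the timestamp of a farmer "Submitting partial" line,
# e.g. "2022-04-02T16:16:04.225 farmer chia.farmer.farmer : INFO Submitting partial for ..."
PARTIAL_RE = re.compile(
    r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}) farmer .* Submitting partial"
)

def partial_gaps(lines):
    """Return the gap in seconds between each pair of consecutive partials."""
    stamps = [
        datetime.fromisoformat(m.group(1))
        for m in map(PARTIAL_RE.match, lines) if m
    ]
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
```

On a healthy farm with thousands of plots, partials should arrive at a fairly steady cadence; long gaps here should line up one-to-one with the stale shares the pool reports.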
-
Hi,
Issue: Harvesters are reporting too late.
Chia 1.2.11
I have a set of harvesters with 20-25 disks each, with 6-7 SATA ports used per PCIe 3.0 x4 expansion card (the cards have 10 or 16 SATA ports). But even with a top-of-the-line CPU (12900K) and plenty of PCIe lanes I'm hitting this problem: as soon as I increase the count to 30-48 disks per harvester, the chia farmer reports every minute that replies are coming too late. The CPU is not loaded. The error in my case comes from Flexpool/Spacefarm and is reported in the main farmer log, and as stale plots on the web portal. Sometimes replies are 30-60 seconds late, sometimes 200-400 seconds late. With fewer than 24 disks I see only very rare cases where a reply comes with a delay (which is OK).

I've tried increasing the number of threads on the harvester - no progress. The harvester's logs show that requests are processed fast enough, 0.1-2 seconds in the worst case (I'm afraid it reports only the actual processing time, not the time a message sits in the incoming or outgoing queue at the framework level - otherwise I cannot explain why the farmer reports delays while the harvester logs say the opposite). Xeon CPUs and another motherboard show the same problem.
Does anybody know the cause? Has anyone seen any suggestions on the web? Is there a way to track where the delay is formed (maybe with Wireshark's help)? Any official comments on this from Chia?
Disks are 16TB SATA Exos drives formatted NTFS with 64k clusters, mounted in read-only mode.
I also tested mixing the existing expansion cards with an Adaptec ASR-71605 expansion card (16 disks connected). Also no progress.
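One way to check whether the delay forms on the harvester itself, before reaching for Wireshark: parse the harvester's own "plots were eligible for farming" lines and look at the wall-clock gap between consecutive signage points, rather than the reported lookup time. A sketch under my own assumptions (the log-line format is copied from the excerpts in this thread; `scan` and the 30-second threshold are hypothetical):

```python
import re
from datetime import datetime

# Match a harvester signage-point line, e.g.
# "2022-04-02T12:36:21.981 harvester chia.harvester.harvester: INFO 6 plots
#  were eligible for farming 216d58fab0... Found 0 proofs. Time: 0.19600 s. ..."
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}) harvester .*?"
    r"(?P<eligible>\d+) plots were eligible for farming .*? "
    r"Time: (?P<secs>[\d.]+) s\."
)

def scan(lines, max_gap=30.0):
    """Yield (timestamp, lookup_seconds, gap_seconds) for suspicious gaps
    between consecutive signage-point lookups in the harvester log."""
    prev = None
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.fromisoformat(m.group("ts"))
        gap = (ts - prev).total_seconds() if prev else 0.0
        prev = ts
        if gap > max_gap:
            yield ts, float(m.group("secs")), gap
```

If the per-lookup `Time:` values stay small but the gaps between lines balloon, that would support the theory that messages are queueing between harvester and farmer rather than being slow to process.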