A small, incremental release: gaining Elo at this level is getting harder without a new architecture and much more compute. The new version focuses on new nets, search cleanups, and the first proper SMP tuning, with especially nice gains in DFRC.
Changes
Evaluation
- New neural net trained on 17B positions.
- Added a castling-rights bonus for FRC/DFRC; it combines well with the new nets, yielding larger gains there than in regular chess.
- Removed the KRvKR and KQvKQ specialty code (it was causing low-time blunders).
Search & TT
- Store the static eval in the TT as early as possible in QSearch.
- In PV nodes, skip QSearch on the TT move (idea by Viz from Stockfish).
- Increased the null-move pruning (NMP) start depth (more conservative at shallow depths).
- Added a recapture extension.
- Increased the “improving” factor in reverse futility pruning (RFP).
- Simplified move ordering and eval adjustment code paths.
- TT now updates the move field even when the entry is not overwritten.
- Removed current-move reporting from the search output.
- First dedicated multithreaded tuning run (≈31k games at 20+0.2 with 8 threads), worth about +5 Elo on 4 threads.
Time management & SMP
- Time-management tweak with the biggest gain in high-increment games (≈+4 Elo).
Progression tests
SMP LTC (20+0.2, 4 threads)
Elo | 17.33 +- 3.96 (95%)
Conf | 20.0+0.20s Threads=4 Hash=128MB
Games | N: 6964 W: 1758 L: 1411 D: 3795
Penta | [5, 639, 1847, 986, 5]
https://furybench.com/test/3969/
LTC (60+0.6)
Elo | 8.84 +- 2.38 (95%)
Conf | 60.0+0.60s Threads=1 Hash=128MB
Games | N: 20010 W: 4835 L: 4326 D: 10849
Penta | [19, 2133, 5185, 2656, 12]
https://furybench.com/test/3935/
DFRC LTC (60+0.6)
Elo | 10.33 +- 2.17 (95%)
Conf | 60.0+0.60s Threads=1 Hash=128MB
Games | N: 17118 W: 2595 L: 2086 D: 12437
Penta | [28, 1110, 5792, 1583, 46]