==========================================

The engine module provides AI opponents for playing draughts. The main implementation
- is the ``AlphaBetaEngine``, which uses minimax search with alpha-beta pruning.
+ is the ``AlphaBetaEngine``, which uses Negamax search with alpha-beta pruning and advanced optimizations.

Engine Interface
----------------
@@ -19,35 +19,82 @@ AlphaBetaEngine
Algorithm Overview
------------------

- The ``AlphaBetaEngine`` implements a minimax search with several optimizations:
+ The ``AlphaBetaEngine`` implements Negamax search with comprehensive optimizations:

- **Alpha-Beta Pruning**
-     Eliminates branches that cannot affect the final decision, significantly reducing
-     the number of positions evaluated.
+ **Negamax Architecture**
+     Simplifies alpha-beta pruning by using the principle that ``max(a,b) = -min(-a,-b)``,
+     reducing code complexity while maintaining efficiency.

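The Negamax fold described above can be sketched as a toy example over a nested-dict game tree; the tree shape and ``score`` field are invented for illustration and are not the engine's actual code or API:

```python
# Toy negamax with alpha-beta pruning. Leaf scores are stored from the
# root player's perspective and flipped via `color`, using the identity
# max(a, b) == -min(-a, -b) to fold both players into one routine.

def negamax(node, depth, alpha, beta, color):
    if depth == 0 or not node.get("children"):
        return color * node["score"]
    best = float("-inf")
    for child in node["children"]:
        best = max(best, -negamax(child, depth - 1, -beta, -alpha, -color))
        alpha = max(alpha, best)
        if alpha >= beta:  # beta cutoff: the opponent will avoid this line
            break
    return best

# Two-ply tree: root to move, opponent replies, then static scores.
tree = {
    "children": [
        {"children": [{"score": 3}, {"score": 5}]},
        {"children": [{"score": -2}, {"score": 9}]},
    ]
}
print(negamax(tree, 2, float("-inf"), float("inf"), 1))  # -> 3
```

The result matches plain minimax (the opponent holds the first branch to 3 and the second to -2), but each node runs the same maximizing loop.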
- **Move Ordering**
-     Evaluates capture moves first, which improves pruning efficiency since captures
-     are often the strongest moves in draughts.
+ **Iterative Deepening**
+     Progressively deepens the search from depth 1 to the target depth, allowing the search
+     to be interrupted by time limits while still returning the best move found so far.
+
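A minimal sketch of the iterative-deepening loop under a wall-clock budget; ``search_at_depth`` is a stand-in for a real fixed-depth search, and the move/score shapes are invented for the example:

```python
import time

def search_at_depth(moves, depth):
    # Stand-in for a fixed-depth search: deeper search reads a more
    # refined score for each candidate move.
    return max(moves, key=lambda m: m["scores"][depth - 1])

def iterative_deepening(moves, max_depth, time_limit):
    start = time.monotonic()
    best = None
    for depth in range(1, max_depth + 1):
        if best is not None and time.monotonic() - start >= time_limit:
            break  # out of time: keep the result of the last completed depth
        best = search_at_depth(moves, depth)
    return best

# Depth 1 prefers "b", but the deeper iteration corrects to "a".
moves = [{"name": "a", "scores": [1, 5]}, {"name": "b", "scores": [4, 2]}]
print(iterative_deepening(moves, 2, 1.0)["name"])  # -> a
```

Because depth 1 always completes before the time check, the loop can always return some legal move even under a very tight budget.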
+ **Zobrist Hashing**
+     Computes incremental 64-bit position hashes for efficient transposition table lookups.
+     Supports turn-aware hashing and handles piece promotion detection.

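The incremental XOR update behind Zobrist hashing can be sketched as follows; the board representation, square count, and key layout are illustrative assumptions, not the engine's internals:

```python
import random

random.seed(7)
SQUARES, PIECES = 50, 4  # 50 playable squares; man/king for each colour
KEYS = [[random.getrandbits(64) for _ in range(PIECES)] for _ in range(SQUARES)]
TURN_KEY = random.getrandbits(64)

def full_hash(board, white_to_move):
    # board maps square index -> piece index
    h = TURN_KEY if white_to_move else 0
    for square, piece in board.items():
        h ^= KEYS[square][piece]
    return h

def hash_after_move(h, piece, src, dst):
    # Incremental update: XOR the piece out of src, into dst, flip the turn.
    return h ^ KEYS[src][piece] ^ KEYS[dst][piece] ^ TURN_KEY

before = {10: 0, 30: 2}  # a white man on square 10, a black man on 30
after = {14: 0, 30: 2}   # the white man has moved 10 -> 14
h = full_hash(before, True)
assert hash_after_move(h, 0, 10, 14) == full_hash(after, False)
```

The incremental update touches only the moved piece and the turn key, so it is O(1) per move instead of a full board rescan.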
**Transposition Table**
-     Caches previously evaluated positions to avoid redundant calculations when the
-     same position is reached through different move orders.
+     Caches previously evaluated positions with depth-aware entries (exact scores, lower bounds,
+     upper bounds) to avoid redundant calculations when the same position is reached through
+     different move orders. Stores the principal variation move for each position.
+
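A sketch of depth-aware transposition entries with bound flags and a stored PV move; the class, flag names, and move strings are invented for illustration:

```python
EXACT, LOWER, UPPER = 0, 1, 2

class TranspositionTable:
    def __init__(self):
        self.entries = {}

    def store(self, key, depth, score, flag, best_move):
        self.entries[key] = (depth, score, flag, best_move)

    def probe(self, key, depth, alpha, beta):
        """Return a usable score, or None if the entry cannot help here."""
        entry = self.entries.get(key)
        if entry is None or entry[0] < depth:
            return None  # missing, or searched too shallow to trust
        _, score, flag, _ = entry
        if flag == EXACT:
            return score
        if flag == LOWER and score >= beta:   # fail-high bound still cuts off
            return score
        if flag == UPPER and score <= alpha:  # fail-low bound still cuts off
            return score
        return None

tt = TranspositionTable()
tt.store(key=0xBEEF, depth=5, score=40, flag=EXACT, best_move="32-28")
print(tt.probe(0xBEEF, depth=4, alpha=-100, beta=100))  # -> 40 (deep enough)
print(tt.probe(0xBEEF, depth=6, alpha=-100, beta=100))  # -> None (too shallow)
```

Even when a bound entry cannot produce a cutoff, the stored ``best_move`` is still useful for move ordering.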
+ **Quiescence Search**
+     Extends the search beyond the main depth limit to evaluate only capturing sequences,
+     eliminating horizon effects that would cause poor move evaluation at depth boundaries.
+
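A toy quiescence search over nested position dicts (invented for the example): ``eval`` is the static score for the side to move and ``captures`` lists the positions reachable by capture; the stand-pat score lets the mover decline further exchanges.

```python
def quiescence(pos, alpha, beta):
    stand_pat = pos["eval"]  # score if the side to move stops capturing now
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for child in pos.get("captures", []):  # search forcing moves only
        score = -quiescence(child, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# A capture that wins a piece but allows a recapture leaving us worse off:
# each "eval" is from the perspective of the side to move at that node.
pos = {"eval": 0, "captures": [{"eval": -3, "captures": [{"eval": -2}]}]}
print(quiescence(pos, float("-inf"), float("inf")))  # -> 0: stand pat
```

A fixed-depth search that stopped right after the first capture would see the material gain and miss the recapture; quiescence resolves the whole exchange first.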
+ **Move Ordering**
+     Orders moves to maximize pruning efficiency:
+
+     - Principal Variation (PV) moves from the transposition table
+     - Captures, scored by capture chain length
+     - Killer moves (moves that caused cutoffs in sibling nodes)
+     - History heuristic (rewarding moves that have caused cutoffs previously)

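The ordering tiers above can be sketched as a single priority function; the score offsets and move dicts are invented for illustration:

```python
# PV move first, then captures by chain length, then killers, then history.

def order_moves(moves, pv_move, killers, history):
    def priority(move):
        if move == pv_move:
            return 1_000_000                        # PV move searched first
        if move["captures"]:
            return 100_000 + len(move["captures"])  # longer chains earlier
        if move in killers:
            return 10_000                           # killer moves next
        return history.get(move["name"], 0)         # history heuristic last
    return sorted(moves, key=priority, reverse=True)

pv     = {"name": "32-28", "captures": []}
double = {"name": "28x19x10", "captures": [19, 10]}
single = {"name": "33x24", "captures": [24]}
killer = {"name": "31-27", "captures": []}
quiet  = {"name": "34-30", "captures": []}

ordered = order_moves([quiet, single, killer, double, pv],
                      pv_move=pv, killers=[killer], history={"34-30": 17})
print([m["name"] for m in ordered])
```

With good ordering the first move searched usually turns out best, which is exactly what makes alpha-beta cutoffs (and the PVS null windows below) cheap.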
- **Enhanced Evaluation**
-     The evaluation function considers:
+ **Principal Variation Search (PVS)**
+     Searches moves after the first with a null (zero-width) window, falling back to a
+     full-window re-search only when the null-window probe fails high, which reduces the
+     number of full-window searches required.
+
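A toy PVS over the same kind of nested-dict tree as above (leaf scores are from the perspective of the side to move at the leaf; all names are illustrative):

```python
# First child gets the full window; later children get a null window
# (alpha, alpha + 1), and only a score inside (alpha, beta) forces a
# full-window re-search.

def pvs(node, alpha, beta):
    if not node.get("children"):
        return node["score"]
    for i, child in enumerate(node["children"]):
        if i == 0:
            score = -pvs(child, -beta, -alpha)
        else:
            score = -pvs(child, -alpha - 1, -alpha)  # null-window probe
            if alpha < score < beta:
                score = -pvs(child, -beta, -score)   # fail-high: re-search wider
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff
    return alpha

tree = {
    "children": [
        {"children": [{"score": 3}, {"score": 5}]},
        {"children": [{"score": -2}, {"score": 9}]},
    ]
}
print(pvs(tree, float("-inf"), float("inf")))  # -> 3
```

When move ordering is good, the null-window probes almost always fail low cheaply, confirming the first move as best without full re-searches.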
+ **Late Move Reductions (LMR)**
+     Reduces the search depth for moves late in the move ordering at depth ≥ 3, on the
+     assumption that the well-ordered earlier moves (PV, captures, killer and history
+     moves) are the ones most likely to fail high.
+
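The LMR depth rule can be sketched as follows; the index threshold and reduction amount are illustrative choices, not the engine's tuned values:

```python
# Late, quiet moves at sufficient depth are searched one ply shallower;
# everything else gets the normal depth - 1 step.

def reduced_depth(depth, move_index, is_capture, is_killer):
    late_and_quiet = move_index >= 3 and not is_capture and not is_killer
    if depth >= 3 and late_and_quiet:
        return depth - 2  # reduced: re-search at full depth only if it beats alpha
    return depth - 1

print(reduced_depth(5, move_index=4, is_capture=False, is_killer=False))  # -> 3
print(reduced_depth(5, move_index=0, is_capture=False, is_killer=False))  # -> 4
```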
+ **Enhanced Evaluation Function**
+     Considers multiple factors:

-     - Material balance (pieces and kings)
-     - Piece positioning and advancement
-     - King promotion potential
+     - Material balance (men and kings with different values)
+     - Piece-Square Tables (PST) for both men and kings
+     - King advancement and centralization bonuses
+     - Man advancement toward the promotion zone

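The evaluation terms above can be sketched on a toy 4x4 board; the weights, the PST, and the board size are all invented for illustration:

```python
# Material with different man/king values plus a piece-square bonus
# that rewards man advancement toward the promotion zone.

MAN, KING = 100, 300
MAN_PST = [            # indexed [row][col] from White's back rank
    [0, 0, 0, 0],
    [2, 4, 4, 2],
    [6, 8, 8, 6],
    [10, 12, 12, 10],  # one step from promotion: advancement pays off
]

def evaluate(white_men, white_kings, black_men, black_kings, size=4):
    """Positive scores favour White; pieces are (row, col) tuples."""
    score = sum(MAN + MAN_PST[r][c] for r, c in white_men)
    score -= sum(MAN + MAN_PST[size - 1 - r][c] for r, c in black_men)  # mirrored
    score += KING * (len(white_kings) - len(black_kings))
    return score

print(evaluate(white_men=[(2, 1)], white_kings=[],
               black_men=[(1, 2)], black_kings=[]))  # -> 0 (symmetric position)
```

Mirroring the PST row index for Black keeps the function symmetric, so a mirrored position always scores zero.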
Performance Characteristics
---------------------------

- The engine's strength and speed depend on the search depth:
-
- - **Depth 3-4**: Fast response, suitable for casual play
- - **Depth 5-6**: Strong play with reasonable response time
- - **Depth 7+**: Very strong but slower, best for analysis
+ Benchmark results from standard draughts positions show the engine's scaling across depths:
+
+ ============ ============ ============ ============
+ Depth        Avg Time     Avg Nodes    Notes
+ ============ ============ ============ ============
+ 1            0.66 ms      24
+ 2            3.88 ms      102
+ 3            7.73 ms      269
+ 4            20.24 ms     777
+ 5            86.55 ms     2,896        Recommended for casual play
+ 6            249.85 ms    9,163        Recommended for strong play
+ 7            733.79 ms    24,528       Strong analysis
+ 8            1.63 s       51,382       Extended analysis
+ 9            5.63 s       141,284      Deep analysis
+ 10           ~20 s        ~400,000     Still playable (with time limits)
+ ============ ============ ============ ============
+
+ **Recommendations:**
+
+ - **Depth 3-4**: Fast response, suitable for casual play (< 50 ms per move)
+ - **Depth 5-6**: Strong play with reasonable response time (< 1 second per move)
+ - **Depth 7-8**: Very strong play, recommended for analysis (< 2 seconds per move)
+ - **Depth 9-10**: Expert-level play with extended time (5-20 seconds per move)
+
+ The engine can be configured with a ``time_limit`` parameter to constrain search time
+ across all depths using iterative deepening.

Example Usage
-------------
@@ -68,6 +115,13 @@ Basic usage::
    move, score = engine.get_best_move(board, with_evaluation=True)
    print(f"Best move: {move}, Score: {score}")

+ With time limits::
+
+     # Search with a 1-second time limit instead of a fixed depth
+     engine = AlphaBetaEngine(depth=20, time_limit=1.0)
+     move, score = engine.get_best_move(board, with_evaluation=True)
+     # The engine will iteratively deepen up to depth 20 or until time expires
+
Custom Engine Implementation
-----------------------------
