Add RuleRef expansion infrastructure to SLL prediction engine
Add a return stack to SllConfig, enabling the prediction engine to expand
multi-token RuleRefs by entering referenced rules during SLL advancement.
The stack records continuation points so the engine can return to the
caller after advancing through a sub-rule.
Infrastructure added (disabled, zero runtime overhead):
- SllReturn struct and return_stack field on SllConfig
- push_return/pop_return helpers for stack management
- sll_expand_rule_ref: expands multi-token RuleRefs with depth/alt guards
- try_expand_opaque: attempts to resolve opaque prediction groups
- strip_all_consume: removes Consume nodes from expanded prediction trees
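A minimal sketch of the return-stack shape and helpers listed above. The names SllReturn, SllConfig, return_stack, push_return, and pop_return come from this commit; the field types, the bool return, and the depth limit are illustrative assumptions, not the engine's actual definitions:

```rust
// Hypothetical sketch; names follow the commit, details are assumed.
#[derive(Debug, PartialEq)]
struct SllReturn {
    rule: usize,       // referenced rule being entered
    resume_pos: usize, // continuation point in the calling rule
}

#[derive(Default)]
struct SllConfig {
    return_stack: Vec<SllReturn>,
}

impl SllConfig {
    /// Push a continuation; refuse past an assumed depth guard so
    /// recursive RuleRefs cannot grow the stack without bound.
    fn push_return(&mut self, rule: usize, resume_pos: usize) -> bool {
        const MAX_DEPTH: usize = 8; // illustrative limit
        if self.return_stack.len() >= MAX_DEPTH {
            return false;
        }
        self.return_stack.push(SllReturn { rule, resume_pos });
        true
    }

    /// Pop the most recent continuation when a sub-rule completes.
    fn pop_return(&mut self) -> Option<SllReturn> {
        self.return_stack.pop()
    }
}

fn main() {
    let mut cfg = SllConfig::default();
    assert!(cfg.push_return(3, 7)); // enter rule 3, resume caller at pos 7
    assert_eq!(cfg.pop_return(), Some(SllReturn { rule: 3, resume_pos: 7 }));
    assert_eq!(cfg.pop_return(), None);
}
```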
The expansion is currently disabled (try_expand_opaque is not called)
because dispatching on tokens from inside expanded sub-rules can produce
incorrect prediction branches. Specifically:
- Consume nodes from sub-rules incorrectly consume tokens at the decision point
- Dispatch branches mix tokens from different rule depths
- Rules sharing prefixes (e.g., with_clause) create false disambiguation
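The first failure mode is what strip_all_consume targets: Consume nodes carried in from a sub-rule must not eat tokens at the decision point. A sketch under an assumed prediction-tree shape (the Pred enum and its variants are illustrative, not the engine's real types):

```rust
// Hypothetical prediction-tree shape; only strip_all_consume's intent
// (dropping Consume wrappers) comes from the commit.
#[derive(Debug, PartialEq)]
enum Pred {
    Consume(Box<Pred>),  // consume one token, then continue
    Dispatch(Vec<Pred>), // branch on the lookahead token
    Accept(usize),       // predicted alternative
}

/// Remove every Consume wrapper so an expanded sub-rule tree cannot
/// consume tokens at the decision point.
fn strip_all_consume(p: Pred) -> Pred {
    match p {
        Pred::Consume(inner) => strip_all_consume(*inner),
        Pred::Dispatch(kids) => {
            Pred::Dispatch(kids.into_iter().map(strip_all_consume).collect())
        }
        Pred::Accept(a) => Pred::Accept(a),
    }
}

fn main() {
    let t = Pred::Dispatch(vec![
        Pred::Consume(Box::new(Pred::Accept(1))),
        Pred::Accept(2),
    ]);
    let s = strip_all_consume(t);
    assert_eq!(s, Pred::Dispatch(vec![Pred::Accept(1), Pred::Accept(2)]));
}
```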
The infrastructure is ready for activation once a correct dispatch strategy
is implemented (e.g., computing FIRST sets at the decision-point level
rather than at the expanded-position level).
https://claude.ai/code/session_01ACVN5Rr7waUZWXtv8MFN2C