Commit fb20992
refactor(chat): remove openai_convert, add per-thread sampling params & mode transitions
Remove the legacy openai_convert.rs (676 lines) and the subchat HTTP endpoints, as message
conversion is now handled directly in the LLM adapters (OpenAI Chat, Anthropic, Refact).
Add per-thread sampling parameters (temperature, frequency_penalty, max_tokens,
parallel_tool_calls, reasoning_effort) with GUI controls and backend persistence.
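The five parameter names above come from the commit message; everything else in this sketch — the struct name, the defaults-merging helper, and its precedence rule (per-thread value wins over the model default) — is an assumption for illustration, not the actual engine code.

```rust
// Hypothetical sketch of per-thread sampling parameters with model-default
// fallback. Field names are from the commit; the rest is illustrative.
#[derive(Debug, Clone, Default, PartialEq)]
pub struct ThreadSamplingParams {
    pub temperature: Option<f32>,
    pub frequency_penalty: Option<f32>,
    pub max_tokens: Option<u32>,
    pub parallel_tool_calls: Option<bool>,
    pub reasoning_effort: Option<String>, // e.g. "low" | "medium" | "high"
}

impl ThreadSamplingParams {
    /// Fill unset fields from the provider's model-specific defaults
    /// (default_max_tokens, default_frequency_penalty); set fields win.
    pub fn with_defaults(&self, default_max_tokens: u32, default_frequency_penalty: f32) -> Self {
        Self {
            temperature: self.temperature,
            frequency_penalty: Some(self.frequency_penalty.unwrap_or(default_frequency_penalty)),
            max_tokens: Some(self.max_tokens.unwrap_or(default_max_tokens)),
            parallel_tool_calls: self.parallel_tool_calls,
            reasoning_effort: self.reasoning_effort.clone(),
        }
    }
}

fn main() {
    // A thread that only overrides temperature picks up the model defaults.
    let per_thread = ThreadSamplingParams { temperature: Some(0.2), ..Default::default() };
    let resolved = per_thread.with_defaults(4096, 0.0);
    println!("{:?}", resolved);
}
```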
Add mode transition system:
- New `mode_transition` subagent extracts context from existing chats
- `/trajectory/mode-transition/apply` endpoint creates linked child chats
- GUI mode selector now supports switching/restarting with context preservation
- New `canonical_mode_id()`/`normalize_mode_id()`/`is_agentic_mode_id()` utilities
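The three utility names above are from the commit; a minimal sketch of what they might look like follows. The concrete mode ids ("explore", "agent") and the alias table are assumptions — the real alias set lives in the engine's mode YAML files.

```rust
// Hypothetical sketch of the mode-id utilities named in this commit.
// Mode ids and aliases here are illustrative, not the actual tables.

/// Lowercase and trim a user- or GUI-supplied mode id.
pub fn normalize_mode_id(id: &str) -> String {
    id.trim().to_ascii_lowercase()
}

/// Map a possibly-aliased or legacy mode id to its canonical string id.
pub fn canonical_mode_id(id: &str) -> String {
    match normalize_mode_id(id).as_str() {
        "chat" | "explore" => "explore".to_string(),
        "agent" | "agentic" => "agent".to_string(),
        other => other.to_string(),
    }
}

/// Whether the (canonicalized) mode drives the agentic tool-use pipeline.
pub fn is_agentic_mode_id(id: &str) -> bool {
    canonical_mode_id(id) == "agent"
}

fn main() {
    println!("{}", canonical_mode_id("  Agentic "));
}
```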
Update provider configs with model-specific defaults (default_max_tokens,
default_frequency_penalty) and comprehensive chat model definitions.
Other:
- Extract LLM logging/sanitization utilities
- Add `eof_is_done` model capability
- Preserve citations in chat sanitization
- Mode badges with consistent colors in UI
- Update mode YAML files to schema_version: 4 with enhanced `ask_questions()`
guidance and subagent delegation patterns
BREAKING CHANGE: Legacy ChatMode enum removed - all modes now use string IDs.
Update any hardcoded enum usage to use canonical_mode_id().

1 parent 71b3cfd
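A sketch of the migration the breaking change implies. The old `ChatMode` variant names and the dispatch targets below are invented for illustration; only the move from enum matching to `canonical_mode_id()` on string ids is stated by the commit.

```rust
// Before (removed in this commit) — matching on the legacy ChatMode enum;
// variant names here are illustrative:
//
// match chat_mode {
//     ChatMode::Agent => run_agentic(),
//     ChatMode::Explore => run_explore(),
//     _ => run_default(),
// }

// Minimal stand-in for the real utility so this sketch is self-contained.
fn canonical_mode_id(id: &str) -> String {
    match id.trim().to_ascii_lowercase().as_str() {
        "chat" | "explore" => "explore".to_string(),
        "agent" | "agentic" => "agent".to_string(),
        other => other.to_string(),
    }
}

// After — modes are plain string ids, normalized through canonical_mode_id()
// before dispatch so legacy aliases keep working:
fn dispatch(mode_id: &str) -> &'static str {
    match canonical_mode_id(mode_id).as_str() {
        "agent" => "agentic pipeline",
        "explore" => "read-only exploration",
        _ => "default chat",
    }
}

fn main() {
    println!("{}", dispatch("Agentic"));
}
```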
File tree (1 file changed, +1 −1):
refact-agent/engine/src/yaml_configs/defaults/subagents

Lines changed: 1 addition & 1 deletion