fix: add v2 type-safe tool system with response formatter and confidence scoring #362

cleyesode wants to merge 3 commits into Nano-Collective:main
Conversation
…nce scoring

Resolves `split is not a function` errors from LLM non-string responses by implementing comprehensive type-safe handling across all tools.

## Key Features

### New Response Formatter Layer

Normalizes LLM responses of ANY type (string, object, array, null, undefined) into a format suitable for parsing, with intelligent confidence scoring.

### Confidence Grading System

- HIGH: Valid responses (plain text, successfully parsed tool calls)
- MEDIUM: Malformed responses with detectable format indicators
- LOW: Valid plain text without format indicators

**Rationale**: Plain text is a valid response type, not malformed. Only when we EXPECT tool calls (malformed patterns detected) do we mark the response as MEDIUM confidence, preventing false positives.

### Type-Safe Parser Updates

- JSON Parser: Accepts `unknown`, preserves types in `ToolCall.arguments`
- XML Parser: Accepts `unknown`, preserves types in `ParsedToolCall.parameters`
- Type helpers: `ensureString()`, `toRequiredString()`, type guards

### Enhanced Write File Tool

- Accepts non-string content (object, array, number, boolean)
- Preserves types in memory for type-safe operations
- Validates null/undefined and empty content

## Test Coverage

All tests passing (204/204):

- Response Formatter: 59/59 (100%)
- JSON Parser: 59/59 (100%)
- XML Parser: 64/64 (100%)
- Write File Tool: All tests passing

## Breaking Changes

None. Fully backward compatible. Existing string content works identically.
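The type-helper idea described above can be sketched in a few lines. This is a hedged illustration only: `ensureString` is named in the PR description, but the body below is an assumption about its behavior, not the PR's actual code.

```typescript
// Sketch: coerce any LLM-produced value to a string so downstream
// formatters never crash calling .split() on a non-string.
// (Illustrative implementation - the real helper may differ.)
function ensureString(value: unknown): string {
  if (typeof value === "string") return value;
  if (value === null || value === undefined) return "";
  if (typeof value === "object") {
    try {
      return JSON.stringify(value);
    } catch {
      return String(value); // e.g. circular structures
    }
  }
  return String(value); // number, boolean, etc.
}

// A formatter can now split safely, whatever the LLM returned:
const lines = ensureString({ path: "a.txt" }).split("\n");
```

The key design point is that every branch returns a string, so `.split()` is always safe at the call site regardless of what the model emitted.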
… test

The 'Searching for' display was removed from the error formatter in commit d79b7e9 to reduce verbosity. Update the test to match the new implementation.

Related to Nano-Collective#356 (a PR that merged main after this v2 PR was created).
@will-lamerton This implementation has been field-tested locally for 2 days - it fully resolves 'split is not a function' errors and aims to be the most robust possible handling of local LLM responses. The goal is to provide an error-free function-calling experience for local models that rivals the AI SDK. This fix also puts in place the logic needed for the next phase of development: a stream handler hook. Let me know what you think!
Hey! Thanks for this! Excited to take a look! I'll be home a bit later and I plan to go through all PRs 😎
Hey @cleyesode - sorry for the delay in looking at this! This looks great. The "split is not a function" crashes from non-string LLM responses are a real pain, and I'm glad you could tackle them! All looking great from my perspective; just a few suggestions to tighten this up before merging:

1. The new validation at …
2. Changing …
3. These two functions have the exact same implementation. Could you consolidate them into one function? If you want to keep both names for semantic clarity, one could simply re-export the other.
4. Consider trimming unused code.
5. The …
6. The regex …

This is just from my review, so please let me know if you think I'm wrong on any of the points - this is great work and sorts a lot! I'm just ensuring we're covering all bases :D
You are absolutely right. TL;DR: these commits are all wrong. Ironically, the code does work - but this is not the intended PR. I'll address this shortly as a new PR for comparison. Thanks for the review!
To your point:
Thanks for the constructive feedback - I'll push updates before too long |
Title: fix: add v2 type-safe tool system with response formatter and confidence scoring
Description:
Resolves `split is not a function` errors from LLM non-string responses by implementing comprehensive type-safe handling across all tools.

Problem: Local LLMs frequently pass objects/arrays/null/undefined directly to tools (not just strings), causing runtime crashes when formatters try to call `.split()` on non-string values.

Solution: Two-layer type-safe architecture:

- Accept `unknown` types, preserve types in memory for direct property access

Key Features:
✅ Type Preservation in Memory: `ToolCall.arguments` stored as objects, avoiding redundant `JSON.parse()` calls. Types are preserved in memory and converted to strings only during parsing/display, reducing unnecessary serialization overhead.

Example:

Clarification: Types aren't removed - they're preserved as objects in memory for type-safe operations, then converted to strings only when needed for parsing or display. This keeps the parsing logic simple while maintaining type safety in the application layer.
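The preserve-then-stringify flow described in the clarification can be illustrated with a small sketch. The `ToolCall` shape here is an assumption based on the description above, not the PR's actual type definition.

```typescript
// Illustrative only: arguments stay a typed object in memory;
// stringification happens only at the parsing/display boundary.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>; // preserved as an object, not a string
}

const call: ToolCall = {
  name: "write_file",
  arguments: { path: "a.txt", content: [1, 2] },
};

// Direct property access - no redundant JSON.parse() round-trip:
const path = call.arguments["path"];

// String conversion happens only when a display/parse layer needs it:
const display = JSON.stringify(call.arguments);
```

The application layer works with the object directly; serialization cost is paid once, at the edge, instead of on every access.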
✅ All LLM Response Types: Handles plain text, JSON, XML, mixed content, null, undefined, arrays
✅ Confidence Scoring: HIGH/MEDIUM/LOW grading with malformed detection (empty JSON objects, invalid XML, etc.)
✅ 100% Backward Compatible: No breaking changes - existing string content works identically
✅ Comprehensive Testing: 204/204 tests passing
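The confidence-scoring rationale above (plain text is valid; only tool-call-like but unparseable responses are downgraded) can be sketched as follows. The function name and the indicator check are illustrative assumptions, not the PR's actual implementation, and the sketch covers only the HIGH/MEDIUM path.

```typescript
type Confidence = "HIGH" | "MEDIUM" | "LOW";

// Hypothetical grader: plain text with no format indicators is a valid
// response (HIGH); indicators that fail to parse suggest a malformed
// tool call (MEDIUM).
function gradeResponse(raw: string): Confidence {
  const hasIndicators = /[{<]/.test(raw); // crude stand-in for real detection
  if (!hasIndicators) return "HIGH"; // plain text is valid, not malformed
  try {
    JSON.parse(raw);
    return "HIGH"; // well-formed tool call payload
  } catch {
    return "MEDIUM"; // looks like a tool call but didn't parse
  }
}
```

This ordering is what prevents the false positives mentioned in the rationale: a response is only downgraded when it both looks like a tool call and fails to parse.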
Files Changed:

- `source/utils/type-helpers.ts` - Type guards and conversion utilities
- `source/utils/response-formatter.ts` - Response normalization layer
- `source/tool-calling/json-parser.ts` - Type-safe JSON parsing
- `source/tool-calling/xml-parser.ts` - Type-safe XML parsing
- `source/tools/write-file.tsx` - Accepts non-string content
- `source/tools/string-replace.tsx` - Type-safe execution

Type Preservation Flow: