[Obs AI Assistant] Improved error handling for tool response #241425
base: main
Conversation
Pinging @elastic/obs-ai-assistant (Team:Obs AI Assistant)
Pull Request Overview
This PR refactors error handling for tool/function calls in the Observability AI Assistant by migrating from custom "function not found" errors to using standardized error types from the @kbn/inference-plugin. The changes also add comprehensive test coverage for error scenarios when tools are called incorrectly.
Key Changes
- Replaces custom `FunctionNotFoundError` with `ToolNotFoundError` from `@kbn/inference-plugin`
- Adds new `FunctionArgsValidationError` to handle invalid function arguments
- Removes obsolete error handling operators (`fail_on_non_existing_function_call.ts`, `catch_function_limit_exceeded_error.ts`)
- Adds extensive test coverage for scenarios where the LLM calls non-existent tools or tools with invalid arguments
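The standardized errors from `@kbn/inference-common` are consumed through type guards such as `isToolNotFoundError`, so operators can branch on the error kind without string matching. The following is a minimal local sketch of that pattern; the class and guard are redefined here for illustration and are not the actual Kibana implementations.

```typescript
// Hypothetical sketch of the type-guard pattern behind isToolNotFoundError.
// Names mirror @kbn/inference-common but are defined locally.

class ToolNotFoundError extends Error {
  readonly code = 'toolNotFoundError';
  constructor(toolName: string) {
    super(`Tool '${toolName}' called but was not available`);
  }
}

// Type guard: narrows `unknown` to ToolNotFoundError so error handlers
// can recover from this specific failure and let others propagate.
function isToolNotFoundError(error: unknown): error is ToolNotFoundError {
  return error instanceof ToolNotFoundError;
}
```

A `catchError` handler can then call `isToolNotFoundError(error)` and, only when it matches, emit a structured message instead of failing the stream.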
Reviewed Changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| complete.spec.ts | Adds comprehensive test suites for tool calling error scenarios (non-existent tools and invalid arguments) |
| tsconfig.json | Adds trailing comma to dependencies array for consistency |
| catch_function_limit_exceeded_error.ts | Deleted - functionality consolidated elsewhere |
| fail_on_non_existing_function_call.ts | Deleted - replaced by standardized error handling |
| continue_conversation.ts | Migrates to createToolNotFoundError and adds handling for FunctionArgsValidationError |
| catch_function_not_found_error.ts | Updates to use isToolNotFoundError from @kbn/inference-common and returns structured message instead of empty observable |
| index.ts | Removes unused failOnNonExistingFunctionCall operator |
| chat_function_client/index.ts | Replaces custom FunctionArgsValidationError class with standardized createFunctionArgsValidationError |
| common/index.ts | Removes export of deprecated createFunctionNotFoundError |
| conversation_complete.ts | Replaces FunctionNotFoundError with FunctionArgsValidationError and adds helper functions |
```diff
  "@kbn/inference-endpoint-plugin",
  "@kbn/spaces-utils",
- "@kbn/usage-collection-plugin"
+ "@kbn/usage-collection-plugin",
```
Copilot AI commented on Nov 3, 2025:
A trailing comma has been added after `@kbn/usage-collection-plugin`, but the last item in a JSON array must not have a trailing comma according to the JSON specification. While TypeScript config files (which allow JSON5-style syntax) tolerate trailing commas, this one should be removed to maintain strict JSON compatibility.
Suggested change:
```diff
- "@kbn/usage-collection-plugin",
+ "@kbn/usage-collection-plugin"
```
I was about to yell at the bot, but it's actually right. We don't use trailing commas in JSON files (they're invalid JSON in most parsers).
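The distinction the reviewers discuss is easy to demonstrate: a spec-compliant `JSON.parse` rejects the trailing comma that a JSON5/JSONC-style tsconfig parser would accept.

```javascript
// Strict JSON (RFC 8259) forbids trailing commas in arrays.
const strict = '["@kbn/usage-collection-plugin"]';
const trailing = '["@kbn/usage-collection-plugin",]';

JSON.parse(strict); // parses fine

let rejected = false;
try {
  JSON.parse(trailing);
} catch (e) {
  rejected = true; // SyntaxError in spec-compliant parsers
}
// rejected === true
```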
Co-authored-by: Copilot <[email protected]>
```ts
TokenLimitReachedError = 'tokenLimitReachedError',
FunctionNotFoundError = 'functionNotFoundError',
FunctionLimitExceededError = 'functionLimitExceededError',
FunctionArgsValidationError = 'functionArgsValidationError',
```
Isn't the inference plugin handling this? Do we need our own error codes for this?
Thanks @sorenlouv, I have removed `FunctionArgsValidationError` and used `isToolValidationError` from `@kbn/inference-common` instead.
```ts
    });
  });

  describe('when the LLM calls a function that is not available', function () {
```
You used the word "tool" above but "function" here.
Done, I did have a mix of "functions" and "tools".
…tionError with createToolValidationError and update related tests for tool request terminology.
💔 Build Failed
Closes #234193
Summary
Fixes an issue where the `chat/complete` stream could unexpectedly terminate when the LLM attempted to call a tool that resulted in one of the following errors:
- The tool was not available (e.g., "Tool 'execute_query' called but was not available").
- The tool was called with invalid or missing arguments.
Instead of terminating the stream, these errors are now captured and returned as part of the tool response, allowing the chat to continue normally.
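The catch-and-continue behavior described above can be sketched as follows. This is a minimal illustration, not the actual Kibana code: `executeToolCall`, `ToolMessage`, and the local error classes are hypothetical names standing in for the real operators and the `@kbn/inference-common` errors.

```typescript
// Illustrative sketch: tool-call errors are captured and returned as a
// tool response message instead of erroring out the completion stream.

class ToolNotFoundError extends Error {}
class ToolValidationError extends Error {}

interface ToolMessage {
  role: 'tool';
  name: string;
  response: { result?: unknown; error?: string };
}

// Hypothetical registry of available tools.
const availableTools: Record<string, (args: unknown) => unknown> = {
  get_alerts: () => ({ alerts: [] }),
};

function callTool(name: string, args: unknown): unknown {
  const tool = availableTools[name];
  if (!tool) {
    throw new ToolNotFoundError(`Tool '${name}' called but was not available`);
  }
  return tool(args);
}

function executeToolCall(name: string, args: unknown): ToolMessage {
  try {
    return { role: 'tool', name, response: { result: callTool(name, args) } };
  } catch (error) {
    if (error instanceof ToolNotFoundError || error instanceof ToolValidationError) {
      // Surface the failure to the LLM as tool output; the chat continues.
      return { role: 'tool', name, response: { error: error.message } };
    }
    throw error; // unexpected errors still terminate the stream
  }
}
```

Because the error text is fed back to the model as a normal tool response, the LLM can recover, for example by retrying with a valid tool name or corrected arguments.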