Replies: 4 comments 3 replies
-
To dynamically set the `return_direct` flag, you can update it inside `_run` based on the computed result:

```python
from typing import Optional, Type

from langchain.pydantic_v1 import BaseModel, Field
from langchain_core.callbacks import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langchain_core.tools import BaseTool


class CalculatorInput(BaseModel):
    a: int = Field(description="first number")
    b: int = Field(description="second number")


class CustomCalculatorTool(BaseTool):
    name = "Calculator"
    description = "useful for when you need to answer questions about math"
    args_schema: Type[BaseModel] = CalculatorInput
    return_direct: bool = False  # Default value

    def _run(
        self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        result = a * b
        # Dynamically set return_direct based on the result (example condition)
        self.return_direct = result > 100
        return str(result)

    async def _arun(
        self,
        a: int,
        b: int,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        # Guard against run_manager being None before calling get_sync()
        sync_manager = run_manager.get_sync() if run_manager else None
        return self._run(a, b, run_manager=sync_manager)


# Example usage
multiply = CustomCalculatorTool()
print(multiply.invoke({"a": 2, "b": 3}))  # 6 -> sets return_direct to False
print(multiply.return_direct)
print(multiply.invoke({"a": 50, "b": 3}))  # 150 -> sets return_direct to True
print(multiply.return_direct)
```

In this example, the `return_direct` flag is recomputed on every call, so the agent returns the tool's output directly only when the result exceeds 100.
-
Is it possible to do this if I am using an implementation like the `@tool` decorator?
-
I cannot pass this tool to AgentExecutor like this; see below: `AgentExecutor(agent=agent, llm=llm, tools=tools, verbose=True)`
-
Okay, this works, but one additional question unrelated to the above: my Python debugger now fails to stop inside my function. Any reason why? Here is my launch.json snippet: {
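Not from the thread, but a common cause of this: VS Code's Python debugger skips breakpoints in code reached through library frames when `justMyCode` is enabled (it defaults to true). A hypothetical launch.json with it disabled:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": false
        }
    ]
}
```

With `"justMyCode": false`, breakpoints inside functions invoked by LangChain's agent machinery should be hit again.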
-
Description
I am trying to build a tool and I want to dynamically set the `return_direct` flag. For some responses I want the LLM to fetch data from the tool and then return insights on the data; otherwise, when the data is sufficient to be shown directly to the user, I want the tool output returned directly.
System Info
System Information
OS: Windows
OS Version: 10.0.22621
Python Version: 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
Package Information
langchain_core: 0.2.34
langchain: 0.2.14
langchain_community: 0.2.12
langsmith: 0.1.103
langchain_chroma: 0.1.3
langchain_cli: 0.0.30
langchain_openai: 0.1.22
langchain_text_splitters: 0.2.2
langserve: 0.2.2
Optional packages not installed
langgraph