Replies: 1 comment
🤖 You're correct in your understanding that the number of tools you can add to the LangChain agent is limited by the context length of the language model. Each tool's name, description, and arguments are converted into a string and included in the prompt that is fed to the language model. If the total number of tokens in the prompt exceeds the model's context length, it will not be able to process the prompt correctly. This is evident from the `create_prompt` method:

```python
@classmethod
def create_prompt(
    cls,
    tools: Sequence[BaseTool],
    prefix: str = PREFIX,
    suffix: str = SUFFIX,
    human_message_template: str = HUMAN_MESSAGE_TEMPLATE,
    format_instructions: str = FORMAT_INSTRUCTIONS,
    input_variables: Optional[List[str]] = None,
    memory_prompts: Optional[List[BasePromptTemplate]] = None,
) -> BasePromptTemplate:
    tool_strings = []
    for tool in tools:
        args_schema = re.sub("}", "}}}}", re.sub("{", "{{{{", str(tool.args)))
        tool_strings.append(f"{tool.name}: {tool.description}, args: {args_schema}")
    formatted_tools = "\n".join(tool_strings)
    ...
    template = "\n\n".join([prefix, formatted_tools, format_instructions, suffix])
    ...
```

In this method, each tool's name, description, and arguments are converted into a string and added to the prompt template, so every tool you register consumes part of the context window. The actual maximum context length depends on the specific language model used; for example, GPT-3 has a maximum limit of 2048 tokens. In conclusion, while it might be technically possible to add thousands of tools to the LangChain agent, the practical limit is determined by the context length of the language model. If you add so many tools that the total number of tokens in the prompt exceeds that limit, the model will not be able to process the prompt correctly.
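One detail worth noting in the loop above: every `{` and `}` in a tool's args schema is quadrupled so the braces survive the template's format passes as literals rather than being read as placeholders. A minimal standalone sketch of that escaping (the schema here is illustrative, not a real LangChain tool):

```python
import re

# Illustrative args schema: a plain dict stringified, as str(tool.args) would do.
args_schema = str({"query": {"type": "string"}})

# Same escaping as in create_prompt: "{" -> "{{{{" and "}" -> "}}}}".
escaped = re.sub("}", "}}}}", re.sub("{", "{{{{", args_schema))

# Each .format() pass halves the braces: "{{{{" -> "{{" -> "{".
once = escaped.format()
twice = once.format()
assert twice == args_schema  # the original braces are recovered intact
```

Quadrupling (rather than doubling) suggests the string is expected to pass through two rounds of `str.format`-style substitution before it reaches the model.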
If I have a 2,000-token context length, then I can't add that many tools — like thousands of tools — because the LLM has to read all of them, which would exceed the context length. Am I right?
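Broadly, a 2,000-token window cannot hold thousands of tool descriptions. A rough back-of-envelope sketch of the arithmetic (the `Tool` dataclass and the ~4-characters-per-token heuristic are illustrative assumptions, not LangChain APIs):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Tool:
    # Minimal stand-in for a tool definition; not LangChain's BaseTool.
    name: str
    description: str
    args: Dict[str, str]

def estimate_prompt_tokens(tools: List[Tool], chars_per_token: int = 4) -> int:
    # Mirrors how create_prompt joins "name: description, args: {...}" lines,
    # then applies a crude ~4-characters-per-token estimate.
    tool_strings = [f"{t.name}: {t.description}, args: {t.args}" for t in tools]
    return len("\n".join(tool_strings)) // chars_per_token

tools = [
    Tool(f"search_{i}", "Searches an external index and returns matching records.",
         {"query": "string"})
    for i in range(1000)
]

print(estimate_prompt_tokens(tools))       # tool descriptions alone far exceed 2,048 tokens
print(estimate_prompt_tokens(tools[:10]))  # a handful of tools fits comfortably
```

So even before counting the prefix, format instructions, and the user's question, a thousand tools would blow past a 2,048-token window many times over.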