create_sql_agent is confusing instruction steps (Observation) #24939
Unanswered
telekosmos asked this question in Q&A
Replies: 1 comment 2 replies
-
To address the issue of the word "Observation" being incorrectly included in the action inputs when using `create_sql_agent`, you can customize the stop sequence so that generation halts before the model writes its own observation. Here is an example of how you can customize the stop sequence:

```python
from typing import Any, Dict

import sqlalchemy
from langchain.agents import AgentExecutor, create_sql_agent
from langchain.agents.agent_types import AgentType
from langchain.agents.structured_chat.output_parser import StructuredChatOutputParserWithRetries
from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
from langchain_community.utilities.sql_database import SQLDatabase
from langchain_core.exceptions import OutputParserException
from langchain_huggingface import HuggingFaceEndpoint

# Define your LLM
llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
    temperature=0.01,
)

# Define the SQL database and toolkit
engine = sqlalchemy.create_engine('sqlite:///your_database.db')
db = SQLDatabase(engine=engine)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

# Define the output parser
output_parser = StructuredChatOutputParserWithRetries.from_llm(llm=llm)

# Customize the stop sequence to handle "Observation"
stop_sequence = ["Observation:"]

# Create the agent with the customized stop sequence.
# _AGENT_FORMAT_INSTRUCTIONS is assumed to be defined elsewhere.
agent_executor: AgentExecutor = create_sql_agent(
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    llm=llm,
    toolkit=toolkit,
    format_instructions=_AGENT_FORMAT_INSTRUCTIONS,
    verbose=True,
    agent_executor_kwargs={"handle_parsing_errors": True},
    max_iterations=15,
    top_k=15,
    output_parser=output_parser,
    stop_sequence=stop_sequence,
)

def answer_query_agent(question: str):
    resp: Dict[str, Any] = {}
    try:
        resp = agent_executor.invoke({"input": question})
        print(f'** Executor result **\n\n{resp}')
    except OutputParserException as e:
        print(f"Error parsing output: {e}")
        if e.send_to_llm:
            # Optionally, send the observation and llm_output back to the model
            print(f"Observation: {e.observation}")
            print(f"LLM Output: {e.llm_output}")
    return resp
```

By setting the stop sequence to `["Observation:"]`, the model stops generating before it can emit an observation of its own, so the word "Observation" is no longer appended to the Action Input.
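To see why the stop sequence helps, here is a minimal, framework-free sketch (plain Python; `truncate_at_stop` and `parse_action_input` are hypothetical helper names, not LangChain APIs) of how truncating the completion at the first stop token keeps the fake observation out of the Action Input:

```python
import re

def truncate_at_stop(text: str, stop_sequences: list[str]) -> str:
    """Cut the completion at the first occurrence of any stop sequence."""
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    return text

def parse_action_input(text: str) -> str:
    """Extract the Action Input value from a ReAct-style block."""
    match = re.search(r"Action Input:\s*(.*)", text, re.DOTALL)
    return match.group(1).strip() if match else ""

# A completion where the model runs on and invents its own observation.
raw = (
    "Thought: I should list the tables\n"
    "Action: sql_db_list_tables\n"
    "Action Input: \n"
    "Observation: company, sales"
)

# Without truncation, the invented 'Observation: ...' leaks into the Action Input.
print(repr(parse_action_input(raw)))  # 'Observation: company, sales'

# With the stop sequence applied first, the Action Input is clean (empty here).
print(repr(parse_action_input(truncate_at_stop(raw, ["Observation:"]))))  # ''
```

The same principle applies inside the agent: the executor, not the model, is supposed to produce observations, so anything the model generates after `Observation:` must be discarded before parsing.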
-
Checked other resources
Commit to Help
Example Code
Description
Hi,
I'm trying to query a simple database (just company and sales tables), but with the code outlined above I repeatedly get, across different models, confusing output like the following:
So it is confusing the word "Observation" with the value of the Action Input, and hence starting a chained mess. I've tried many things, like setting
suffix=''
or being more restrictive in the instructions, but the word "Observation" keeps getting stuck to the Action Input. Is there any way to get rid of this issue via a parameter or prompting, or is this a bug or a lack of model performance? It works fine with
AzureChatOpenAI
, but we prefer other models for this. This is very similar (possibly identical) to #21652 (unanswered).
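To make the "chained mess" concrete, here is a minimal plain-Python sketch (all names hypothetical, not LangChain APIs) of the failure mode: the executor passes everything after `Action Input:` to the tool verbatim, so a glued-on `Observation` turns a valid table name into an unknown one, and the resulting error message then becomes the next observation the model sees:

```python
def fake_sql_tool(table_name: str) -> str:
    """Stand-in for a SQL schema tool: fails on unknown table names."""
    known = {"company", "sales"}
    if table_name in known:
        return f"schema of {table_name}"
    return f"Error: table {table_name!r} not found"

# What the tool receives when 'Observation' leaks into the Action Input:
bad_input = "company\nObservation"
good_input = "company"

print(fake_sql_tool(bad_input))   # Error: table 'company\nObservation' not found
print(fake_sql_tool(good_input))  # schema of company
```

Because the error string is fed back to the model as the observation, a weaker model tends to repeat the malformed pattern on the next step, which is why the loop compounds rather than recovers.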
System Info