How can I make the conversation agent? #11126
Replies: 1 comment
🤖 Based on the context provided, it seems like you want to print only the final answer from the output of `agent_chain.run(prompt)`. To print only the final answer, you need to parse this JSON object and extract the value of the "Final Answer" key. Here's how you can modify your code:

```python
import json

response = agent_chain.run(prompt)

# agent_chain.run returns a string, so decode the JSON first
items = json.loads(response)
if isinstance(items, dict):  # a single action object rather than a list
    items = [items]

final_answer = None

# Parse the response
for item in items:
    if item.get('action') == 'Final Answer':
        final_answer = item['action_input']
        break

# Print the final answer
if final_answer is not None:
    print(final_answer)
else:
    print("No final answer found in the response.")
```

This code will loop through the response, check each item for an 'action' key, and if the action is 'Final Answer', it will store the corresponding 'action_input' as the final answer. Then it will print the final answer. Please note that this is a general solution and might need to be adjusted based on the exact structure of the response from `agent_chain.run()`. If you still encounter issues, please provide more details about the structure of that response.
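To make the parsing logic above easy to verify on its own, here is a self-contained sketch that wraps it in a helper and runs it against a hypothetical response string (the sample JSON is made up for illustration; real agent output may differ):

```python
import json

def extract_final_answer(raw_response):
    """Return the 'action_input' of the 'Final Answer' action, or None."""
    items = json.loads(raw_response)
    if isinstance(items, dict):  # a single action object rather than a list
        items = [items]
    for item in items:
        if item.get("action") == "Final Answer":
            return item["action_input"]
    return None

# Hypothetical agent output in a structured JSON action style
raw = ('[{"action": "Search", "action_input": "weather"},'
       ' {"action": "Final Answer", "action_input": "It is sunny."}]')
print(extract_final_answer(raw))  # It is sunny.
```

The helper also returns `None` when no "Final Answer" action is present, matching the fallback branch in the snippet above.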
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot. |
I created an agent chain with AgentExecutor, but when I output an answer with `response = agent_chain.run(prompt)`, the Thought and Action phrases are printed along with it. I just want the final answer to be printed. How can I write the code? Please fix this code:
```python
from langchain.agents import AgentExecutor, ZeroShotAgent
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import LlamaCpp
from langchain.memory import ConversationBufferMemory

prefix = """response style as if talking with a friend. Keep your response concise, within 500 characters. Only print the final answer."""
suffix = """Let's start the conversation!
{chat_history}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["chat_history", "agent_scratchpad"],
)

memory = ConversationBufferMemory(memory_key="chat_history")
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

llm = LlamaCpp(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",
    temperature=0.75,
    max_tokens=10000,
    top_p=1,
    callback_manager=callback_manager,
    verbose=True,
)

llm_chain = LLMChain(prompt=prompt, llm=llm)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=False)
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory
)
```