Commit f7593ef
fix: correct Ollama return statement and variable reference for sequential tool calling
- Replace early return with loop continuation in Ollama handling to match async behavior
- Fix variable reference from reasoning_content to stored_reasoning_content
- Ensures agents can perform sequential tool calls as intended

Co-authored-by: Mervin Praison <[email protected]>
1 parent 39d1470 commit f7593ef

File tree

1 file changed (+11, −4 lines)

  • src/praisonai-agents/praisonaiagents/llm

src/praisonai-agents/praisonaiagents/llm/llm.py

Lines changed: 11 additions & 4 deletions
@@ -917,9 +917,16 @@ def get_response(
                         console=console
                     )
 
-                    # Return the final response after processing Ollama's follow-up
+                    # Update messages and continue the loop instead of returning
                     if final_response_text:
-                        return final_response_text
+                        # Update messages with the response to maintain conversation context
+                        messages.append({
+                            "role": "assistant",
+                            "content": final_response_text
+                        })
+                        # Continue the loop to check if more tools are needed
+                        iteration_count += 1
+                        continue
                     else:
                         logging.warning("[OLLAMA_DEBUG] Ollama follow-up returned empty response")
 
@@ -1006,8 +1013,8 @@ def get_response(
                 display_interaction(original_prompt, response_text, markdown=markdown,
                                     generation_time=time.time() - start_time, console=console)
                 # Return reasoning content if reasoning_steps is True
-                if reasoning_steps and reasoning_content:
-                    return reasoning_content
+                if reasoning_steps and stored_reasoning_content:
+                    return stored_reasoning_content
                 return response_text
 
                 # Handle self-reflection loop
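The first hunk's fix can be illustrated in isolation: returning as soon as a tool's follow-up response arrives ends the conversation after a single tool call, whereas appending the response to the message history and continuing the loop lets the model request further tools. The sketch below is a minimal, hypothetical stand-in (names like `run_agent_loop` and `fake_llm` are invented for illustration; this is not the actual PraisonAI code):

```python
def run_agent_loop(llm_step, max_iterations=5):
    """Drive an LLM step function until it stops requesting tools
    or the iteration budget runs out."""
    messages = [{"role": "user", "content": "start"}]
    iteration_count = 0
    while iteration_count < max_iterations:
        reply = llm_step(messages)
        if reply["tool_call"] is not None:
            # A tool was requested: record the result in the conversation
            # and CONTINUE the loop so the model can request another tool.
            # (The bug fixed in this commit was returning here instead,
            # which cut the agent off after one tool call.)
            messages.append({"role": "assistant", "content": reply["tool_call"]})
            iteration_count += 1
            continue
        # No tool requested: this is the final answer.
        return reply["content"]
    return None

def fake_llm(messages):
    # Simulated model: requests two tools in sequence, then answers.
    tool_turns = sum(1 for m in messages if m["role"] == "assistant")
    if tool_turns < 2:
        return {"tool_call": f"tool_{tool_turns + 1}", "content": None}
    return {"tool_call": None, "content": "done after 2 tools"}

print(run_agent_loop(fake_llm))  # -> done after 2 tools
```

With an early `return` in place of `continue`, the same driver would stop after `tool_1` and never produce the final answer, which is exactly the sequential-tool-calling breakage the commit message describes.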
