I'm using the LangChain Tool Calling Agent (v0.1). If I ask it to perform some operation on a piece of text containing inappropriate or unsafe content, the agent tries to respond on its own without calling any tools, but its response breaks off midway and is returned as-is.
Example Input:

```
You are to rephrase the content "The youth threatened to kill the staff because they were reminding the youth that it's almost time to turn off the ps5 as it was the night time. The staff disengages but the youth still escalated the issue by shouting that he will punch them stating "he will punch them so hard that their teeth will cave in and they will have to drink soups for the rest of their life."
```

Output:

```
"The youth threatened to harm the staff after being reminded that it was nearly time to turn off the PS5 for the night. Although the staff tried to de-es"
```
It does not throw any exception or surface anything in the intermediate steps, so I have no way to handle the failure. I found out this happens because the content filter marks the generation as failed after token output has already started. Hence, instead of returning a message like "Unable to perform your request," it simply returns a half-finished answer.
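One thing I did find while digging: with langchain-openai, the raw AIMessage carries the API's finish_reason in its response_metadata, and a generation cut off by the content filter is reported as "content_filter" instead of the normal "stop". A minimal sketch of checking that on a direct model call, assuming an Azure OpenAI backend (the deployment name and API version are placeholders; endpoint and key come from the usual environment variables):

```python
from langchain_openai import AzureChatOpenAI  # the same idea applies to ChatOpenAI

# Placeholder deployment/version; AZURE_OPENAI_ENDPOINT and
# AZURE_OPENAI_API_KEY are read from the environment.
llm = AzureChatOpenAI(azure_deployment="my-gpt-deployment", api_version="2024-06-01")

msg = llm.invoke('You are to rephrase the content "..."')

# A normal completion ends with finish_reason == "stop"; a mid-generation
# cut by the content filter is reported as "content_filter".
if msg.response_metadata.get("finish_reason") == "content_filter":
    print("Answer was truncated by the content filter")
```

The problem is that once the call is wrapped in the agent, this metadata is no longer visible in the final output.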
My Question:
What other steps can I take to detect, in real time, that the agent has stopped midway? I just want to catch this behavior so I can run another operation on top of it; the callback sketch below is the direction I'm considering.
Thanks.
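Since the agent swallows the metadata, my current idea is a callback handler that inspects finish_reason on every LLM call the agent makes. This is an untested sketch; it assumes generation_info carries finish_reason the same way response_metadata does, and that agent_executor and text come from my existing setup:

```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class ContentFilterWatcher(BaseCallbackHandler):
    """Remembers whether any LLM call in the run was cut off by the filter."""

    def __init__(self) -> None:
        self.filtered = False

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        # Each LLM call can return several generations; flag the run if any
        # of them ended with the content-filter finish reason.
        for gens in response.generations:
            for gen in gens:
                info = gen.generation_info or {}
                if info.get("finish_reason") == "content_filter":
                    self.filtered = True


watcher = ContentFilterWatcher()
result = agent_executor.invoke({"input": text}, config={"callbacks": [watcher]})
if watcher.filtered:
    ...  # this is where I would add my extra operation
```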
P.S. - When I send more overtly violent content, it throws openai.BadRequestError before any generation starts. For the milder case above, however, the answer just breaks off partway through instead.
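Putting the two failure modes together, I imagine the handling would look roughly like this (watcher is the handler sketched above; again just a sketch):

```python
import openai

try:
    result = agent_executor.invoke({"input": text}, config={"callbacks": [watcher]})
except openai.BadRequestError:
    # Harsher inputs: the prompt itself is rejected before generation starts.
    output = "Unable to perform your request."
else:
    if watcher.filtered:
        # Milder case: generation started but was cut off midway.
        output = "Unable to perform your request."
    else:
        output = result["output"]
```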
System Info
langchain==0.3.19