- [x] I searched existing ideas and did not find a similar one
- [x] I added a very descriptive title
- [x] I've clearly described the feature request and motivation for it
### Feature request

There should be a generic `ContentFilterException` class that is raised whenever an LLM provider's content filter is triggered.
### Motivation

Currently, when the filter is triggered while using langchain-openai with `AzureChatOpenAI`, I get an `openai.BadRequestError` saying:

> "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766"

If I were to use another model, I would get a different exception, and there would be no way to catch all exceptions of this type in one place. A common exception would be useful when you need to identify which messages trigger a content filter in an automated system.
### Proposal (If applicable)

Create a generic `ContentFilterException` class that the different provider modules can implement, so that this specific error can be caught uniformly. I looked at the source code and couldn't find anything that does this, so I'm proposing it.

In the case of Azure OpenAI, the error payload reports:
```python
{
    "error": {
        "message": "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766",
        "type": None,
        "param": "prompt",
        "code": "content_filter",
        "status": 400,
        "innererror": {
            "code": "ResponsibleAIPolicyViolation",
            "content_filter_result": {
                "hate": {"filtered": False, "severity": "safe"},
                "self_harm": {"filtered": False, "severity": "safe"},
                "sexual": {"filtered": True, "severity": "high"},
                "violence": {"filtered": False, "severity": "safe"},
            },
        },
    }
}
```
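To make the idea concrete, here is a minimal sketch of what such an exception could look like and how a provider module could translate the Azure payload above into it. The `ContentFilterException` class, its fields, and the `raise_if_content_filtered` helper are all hypothetical names invented for illustration; they are not an existing LangChain or openai-python API.

```python
# Hypothetical sketch only: ContentFilterException and raise_if_content_filtered
# are assumed names, not part of any existing library.

class ContentFilterException(Exception):
    """Raised when a provider's content filter blocks a prompt or response."""

    def __init__(self, message, provider=None, filter_results=None):
        super().__init__(message)
        self.provider = provider
        # Per-category results, e.g. {"sexual": {"filtered": True, ...}}
        self.filter_results = filter_results or {}


def raise_if_content_filtered(error_body, provider="azure-openai"):
    """Translate a provider error payload into ContentFilterException.

    `error_body` is the parsed error dict (e.g. the body attached to an
    openai.BadRequestError in the Azure case shown above).
    """
    error = error_body.get("error", {})
    if error.get("code") == "content_filter":
        inner = error.get("innererror", {})
        raise ContentFilterException(
            error.get("message", "Content filter triggered"),
            provider=provider,
            filter_results=inner.get("content_filter_result", {}),
        )


# Usage with a trimmed version of the Azure payload shown above:
body = {
    "error": {
        "message": "The response was filtered due to the prompt triggering "
                   "Azure OpenAI's content management policy.",
        "code": "content_filter",
        "innererror": {
            "code": "ResponsibleAIPolicyViolation",
            "content_filter_result": {
                "hate": {"filtered": False, "severity": "safe"},
                "sexual": {"filtered": True, "severity": "high"},
            },
        },
    }
}

try:
    raise_if_content_filtered(body)
except ContentFilterException as exc:
    # Collect the categories that actually triggered the filter.
    triggered = [k for k, v in exc.filter_results.items() if v["filtered"]]
    print(triggered)  # ['sexual']
```

The point of the base class is that provider-agnostic code can `except ContentFilterException` without caring whether the underlying error was an `openai.BadRequestError` or some other provider's equivalent.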