Replies: 1 comment
@awonglk can you have a look at #10182 please? Looking for some more info about the request made to the model. Please respond in the issue, thanks.
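If it helps, one way to capture the outgoing payload is to turn on DEBUG logging for the OpenAI SDK that Semantic Kernel uses under the hood. A minimal sketch, assuming the default openai Python client (whose loggers are named `openai` and `httpx`):

```python
import logging

# Surface the raw chat-completion request (URL, headers, JSON body)
# before it reaches APIM. The openai SDK also honours the
# OPENAI_LOG=debug environment variable, which has the same effect.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)
logging.getLogger("httpx").setLevel(logging.DEBUG)
```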
I've followed the article referenced in the following issue to try to get my Semantic Kernel app working with APIM-managed Azure OpenAI:
#7143
If there are no function calls involved, the responses from the LLM work as normal. But as soon as I ask a question that involves a plugin (even the core time_plugin() as an example), the call fails. This is what I get when asking a simple question like "What is the time?":
<class 'semantic_kernel.connectors.ai.open_ai.services.azure_chat_completion.AzureChatCompletion'> service failed to complete the prompt", BadRequestError('Error code: 400 - {'statusCode': 400, 'message': "Unable to parse and estimate tokens from incoming request. Please ensure incoming request does not contain any images and is of one of the following types: 'Chat Completion', 'Completion', 'Embeddings' and works with current prompt estimation mode of 'Auto'."}'))
Is there anything obvious that I may have missed?
I'm using semantic-kernel 1.16.0.
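For reference, here is roughly how the failing setup is wired. This is a sketch only: the endpoint, key, deployment name, and API version are placeholders for my APIM-fronted resource.

```python
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    AzureChatCompletion,
    AzureChatPromptExecutionSettings,
)
from semantic_kernel.core_plugins import TimePlugin
from semantic_kernel.functions import KernelArguments


async def main() -> None:
    kernel = Kernel()

    # The endpoint is the APIM gateway URL fronting Azure OpenAI;
    # deployment, key, and API version below are placeholders.
    kernel.add_service(
        AzureChatCompletion(
            service_id="chat",
            deployment_name="<my-deployment>",
            endpoint="https://<my-apim-instance>.azure-api.net",
            api_key="<apim-subscription-key>",
            api_version="2024-06-01",
        )
    )

    # The core time plugin from semantic_kernel.core_plugins.
    kernel.add_plugin(TimePlugin(), plugin_name="time")

    settings = AzureChatPromptExecutionSettings(
        service_id="chat",
        function_choice_behavior=FunctionChoiceBehavior.Auto(),
    )

    # Plain questions work; this one triggers the 400 above.
    result = await kernel.invoke_prompt(
        prompt="What is the time?",
        arguments=KernelArguments(settings=settings),
    )
    print(result)


asyncio.run(main())
```

As far as I can tell, the only difference from the plain-question case is that the request now carries a tools array describing the plugin functions.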