Replies: 3 comments 3 replies
-
🤖 Hello @Luisotee! I'm Dosu, a friendly bot designed to lend a hand with bugs, answer questions, and assist you with contributing to the langchainjs project. While we're waiting for a human maintainer, I'm here to help out!

Based on the error message you're receiving, it seems the model name you provided is not recognized by the tiktoken encoder that langchainjs uses to count tokens. Here is the relevant code:

```typescript
export const calculateMaxTokens = async ({
  prompt,
  modelName,
}: CalculateMaxTokenProps) => {
  let numTokens;
  try {
    numTokens = (
      await encodingForModel(getModelNameForTiktoken(modelName))
    ).encode(prompt).length;
  } catch (error) {
    console.warn(
      "Failed to calculate number of tokens, falling back to approximate count"
    );
    // fallback to approximate calculation if tiktoken is not available
    // each token is ~4 characters: https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them#
    numTokens = Math.ceil(prompt.length / 4);
  }
  const maxTokens = getModelContextSize(modelName);
  return maxTokens - numTokens;
};
```

To resolve this issue, ensure that the model name you are providing is one that tiktoken recognizes, such as "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "gpt-4-32k", or "gpt-4".

As for the streaming issue, it seems to be related to how streaming is handled in the LangChain framework. To handle tokens as they are generated, you would need to modify the code to resolve the Promise or process the data as soon as each token is received, rather than waiting for the full response.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

Sources
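The fallback branch above is easy to reproduce in isolation. A minimal standalone sketch (the helper names here are mine, not langchainjs exports, and the context size is passed in directly instead of calling `getModelContextSize`):

```typescript
// Approximate token counting, mirroring the fallback in calculateMaxTokens:
// when tiktoken does not recognize the model name, langchainjs assumes
// roughly 4 characters per token.
function approximateTokenCount(prompt: string): number {
  return Math.ceil(prompt.length / 4);
}

// Remaining budget = context window minus the (approximate) prompt tokens.
function approximateRemainingTokens(prompt: string, contextSize: number): number {
  return contextSize - approximateTokenCount(prompt);
}

console.log(approximateRemainingTokens("a".repeat(100), 4096)); // 4071
```

Note this is only a rough estimate; the exact count from tiktoken can differ noticeably for code or non-English text.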
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
@Luisotee, were you able to get this fixed? I am having the same problem with my model on Azure OpenAI.
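With Azure OpenAI, the warning often appears because the Azure deployment name differs from the OpenAI model names that tiktoken knows. A hedged sketch of one workaround, assuming the standard langchainjs Azure configuration fields: pass a tiktoken-recognized `modelName` alongside the deployment name, so token counting can map to a known encoding (all instance/deployment values below are placeholders):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

// Assumption: modelName is used for token counting, while the Azure
// deployment name handles the actual routing. Values are placeholders.
const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo", // recognized by tiktoken
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: "my-instance", // placeholder
  azureOpenAIApiDeploymentName: "my-gpt35-deployment", // placeholder
  azureOpenAIApiVersion: "2023-05-15",
  streaming: true,
});
```

This is a configuration sketch, not a verified fix; it requires valid Azure credentials to run.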
-
I also have the same error with the code below:
-
Hello, I am having a problem where, when I activate the streaming function, it starts giving me the error:

```
Failed to calculate number of tokens, falling back to approximate count
Error: Unknown model
```

My code is:

Initializing the LLM:

I am calling it in:
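For reference, the usual way to consume tokens as they arrive in langchainjs is via the `handleLLMNewToken` callback with `streaming: true`. A minimal sketch (the model name is an assumption; running it requires an OpenAI API key):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

// handleLLMNewToken fires once per generated token when streaming is on.
const chat = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        process.stdout.write(token); // handle each token as it arrives
      },
    },
  ],
});
```

This only addresses consuming the stream; the "Unknown model" warning itself comes from token counting and is harmless when the fallback count is acceptable.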