Integrate NVIDIA NIM Models in chat model #5209
Example Code

```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";
import {
  RunnableWithMessageHistory,
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";
import { UpstashRedisChatMessageHistory } from "langchain/stores/message/upstash_redis";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  temperature: 0,
  top_p: 1,
  max_tokens: 1024,
  modelName: "meta/llama3-70b-instruct",
  apiKey: API_KEY,
  configuration: {
    baseURL: "https://integrate.api.nvidia.com/v1",
  },
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

console.log(model);

// Create a simple runnable which just chains the prompt to the model.
const runnable = prompt.pipe(model).pipe(new StringOutputParser());

// Define your session history store.
const chainWithHistory = new RunnableWithMessageHistory({
  runnable: runnable,
  getMessageHistory: (sessionId) =>
    new UpstashRedisChatMessageHistory({
      sessionId,
      sessionTTL: 300,
      config: {
        url: process.env.REDIS_URL,
        token: process.env.REDIS_TOKEN,
      },
    }),
  inputMessagesKey: "input",
  historyMessagesKey: "history",
});

const result = await chainWithHistory.invoke(
  {
    input: message,
  },
  {
    configurable: {
      sessionId: aiMemories[AI + req.session.id],
    },
  }
);

res.send({
  response: result,
});
```

Description

I am trying to use the NVIDIA NIM API endpoint to call custom models hosted on NIM, since the NIM platform's API documentation is similar to OpenAI's.

System Info

```
[email protected] | MIT | deps: 18 | versions: 267
keywords: llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores
dist-tags: published 9 hours ago by jacoblee93 [email protected]
```
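Since NIM exposes an OpenAI-compatible REST API, the endpoint can be sanity-checked without LangChain at all. A minimal sketch using a plain chat-completions payload; the model name and base URL come from the snippet above, while `NIM_API_KEY` and the prompt text are placeholder assumptions:

```typescript
// OpenAI-style chat-completions request body that NIM accepts.
// Note the wire format uses snake_case keys (max_tokens), unlike
// LangChain's camelCase constructor options.
const payload = {
  model: "meta/llama3-70b-instruct",
  messages: [{ role: "user", content: "Say hello." }],
  max_tokens: 1024,
  temperature: 0,
};

// The actual request (not executed here; needs a valid key and network):
// const res = await fetch("https://integrate.api.nvidia.com/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${process.env.NIM_API_KEY}`,
//   },
//   body: JSON.stringify(payload),
// });

console.log(JSON.stringify(payload));
```

If this raw request succeeds but the LangChain call does not, the problem is in how the client is configured rather than in the endpoint.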
Replies: 1 comment
This fixes the issue:

```typescript
const model = new ChatOpenAI({
  apiKey: "KEY_HERE",
  configuration: {
    baseURL: "https://integrate.api.nvidia.com/v1",
  },
  model: "meta/llama3-70b-instruct",
  maxTokens: 1024,
});
```
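With this constructor, the rest of the original chain (prompt → model → `StringOutputParser`) works unchanged. One detail worth calling out, since the question's snippet mixed conventions: LangChain's `ChatOpenAI` takes camelCase options (`model`, `maxTokens`, `topP`), and the library maps them to the snake_case OpenAI wire format that NIM consumes. A plain-object sketch of that mapping (illustrative only; LangChain does this internally):

```typescript
// camelCase options as ChatOpenAI expects them.
const options = { model: "meta/llama3-70b-instruct", maxTokens: 1024, topP: 1 };

// snake_case body as it goes over the wire to the NIM endpoint.
const wireBody = {
  model: options.model,
  max_tokens: options.maxTokens,
  top_p: options.topP,
};

console.log(JSON.stringify(wireBody));
// → {"model":"meta/llama3-70b-instruct","max_tokens":1024,"top_p":1}
```

Passing `max_tokens` or `top_p` directly to the constructor, as in the original snippet, is silently ignored, which is why the camelCase names matter here.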