Replies: 1 comment
-
Answer generated by a 🤖 AnswerI understand that you're trying to use the asynchronous feature of LangChain to concurrently run multiple chat models using the ChatOpenAI class. You're using the From what I can see, your code seems to be mostly correct. The only thing that appears to be missing are the import statements for import asyncio
import time
from langchain.chat_models import ChatOpenAI
from langchain.schema import AIMessage, HumanMessage, SystemMessage
async def async_chat(chat_model, messages):
resp = await chat_model.agenerate(messages)
print(resp.generations[0].message.content)
async def chat_concurrently():
chat_model = ChatOpenAI(openai_api_key=openai_api_key, temperature=0.9)
messages = [
SystemMessage(content="You are a helpful assistant."),
HumanMessage(content="What's the weather like today?"),
]
tasks = [async_chat(chat_model, messages) for _ in range(10)]
await asyncio.gather(*tasks)
s = time.perf_counter()
asyncio.run(chat_concurrently())
elapsed = time.perf_counter() - s
print("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m") This should allow you to run multiple async calls to the ChatOpenAI object concurrently. However, please note that this is just one potential solution based on the information you've provided. If you're still encountering issues, it would be helpful to have more details about the specific problem you're experiencing. If you find this solution to be correct and feel that it's not covered by the existing documentation, I encourage you to make a pull request to update the documentation. This would be greatly beneficial to other users who might encounter the same issue in the future. SourcesThis response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. |
Beta Was this translation helpful? Give feedback.
Uh oh!
There was an error while loading. Please reload this page.
Uh oh!
There was an error while loading. Please reload this page.
-
Hi! ✌️
https://langchain-langchain.vercel.app/docs/modules/model_io/models/llms/how_to/async_llm
I'm attempting to use LangChain's async feature to concurrently run multiple chat models using ChatOpenAI class. But I can't figure out what to the documentation. Anyone can give a hand, please?
Beta Was this translation helpful? Give feedback.
All reactions