Streamed MultiPromptChain with websocket: undesired #6682
bassameter63 asked this question in Q&A · Unanswered · 0 replies
Hello,
I'm setting up a MultiPromptChain that I call asynchronously as follows:

```python
resp = await multichain.arun(
    input=standalone_question,
    include_run_info=False,
    return_only_outputs=True,
)
```

or:

```python
resp = await multichain.acall(
    inputs={"input": standalone_question},
    include_run_info=False,
    return_only_outputs=True,
)
```
The MultiPromptChain is built on top of a streaming LLM, constructed as follows:

```python
stream_handler = StreamingLLMCallbackHandler(websocket)
stream_manager = BaseCallbackManager([stream_handler])
llm = ChatOpenAI(
    model_name=model,
    temperature=0.9,
    streaming=True,
    callback_manager=stream_manager,
    verbose=False,
    openai_api_key=openai_apik,
)
```
My problem is that an undesired markdown code snippet containing a JSON object gets streamed over the websocket along with the response. For example:

```json
{
    "destination": "......................................",
    "next_inputs": "....................................................."
}
```

followed by the answer to the question.
I don't want that JSON object (the router's output) to be streamed over the websocket; only the answer to the question.
Is there a way to do that?
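If there's no built-in option, the fallback I'm considering is filtering the router JSON out of the token stream myself before it reaches the websocket. Here is a rough sketch (`RouterJSONFilter` is just my own illustration, not a LangChain class; it assumes the router JSON arrives first and, being a naive brace counter, that the JSON contains no braces inside string values):

```python
class RouterJSONFilter:
    """Suppress the leading router JSON object (and any markdown fence
    around it) from a token stream, forwarding only what follows."""

    def __init__(self, forward):
        self.forward = forward    # callable that sends a token onward (e.g. to the websocket)
        self.buffer = ""
        self.state = "suppress"   # suppress -> strip_fence -> forward

    def on_token(self, token: str) -> None:
        if self.state == "forward":
            self.forward(token)
            return
        if self.state == "strip_fence":
            # Drop backticks/whitespace left over from the closing ``` fence.
            stripped = token.lstrip("`\n ")
            if stripped:
                self.state = "forward"
                self.forward(stripped)
            return
        # state == "suppress": accumulate until the first JSON object closes.
        self.buffer += token
        start = self.buffer.find("{")
        if start == -1:
            return  # JSON object has not started yet
        depth = 0
        for i, ch in enumerate(self.buffer[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # Router JSON is complete; reprocess whatever followed it.
                    rest = self.buffer[i + 1:]
                    self.buffer = ""
                    self.state = "strip_fence"
                    self.on_token(rest)
                    return
```

The idea would be to wrap the websocket send in this filter inside the callback handler, so the router's JSON never reaches the client.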
Thanks for answering.
Bassam.