Replies: 1 comment
-
🤖 Hello, Thank you for reaching out and providing a detailed explanation of your issue. It seems like the problem you're facing is related to the quality of the responses generated by the chain. If you're not satisfied with the quality of the responses, you might want to modify the part of the chain that produces the final answer. Here is the part of the code where the response is generated:

```python
answer = self.combine_docs_chain.run(
    input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs
)
```

And here is the part of the code where the document-combining chain is constructed:

```python
combine_docs_chain = StuffDocumentsChain(...)
```

You could replace the `StuffDocumentsChain` with a different document-combining chain if needed. As for the difference between the streamed response and `result['answer']`: the streamed tokens are emitted by the streaming callback as each LLM call runs, while `result['answer']` is the final output returned by the chain, so the two can differ when more than one LLM call streams to the same callback.

I hope this helps! If you have any further questions, please don't hesitate to ask. Best,

Sources

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
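The mismatch between streamed tokens and the final answer can be illustrated without LangChain at all. The sketch below uses made-up names for illustration (it is not LangChain code): it mimics a retrieval chain that makes two LLM calls, a condense-question step and an answer step, with a single streaming callback attached to both, which is roughly what happens when one callback handler is passed to the whole chain.

```python
class StreamingCallback:
    """Collects every token emitted by any LLM call it is attached to."""
    def __init__(self):
        self.tokens = []

    def on_new_token(self, token):
        self.tokens.append(token)


def fake_llm(prompt, callback=None):
    """Stand-in for an LLM: streams its output token by token, then returns it."""
    output = f"response to: {prompt}"
    for token in output.split():
        if callback:
            callback.on_new_token(token + " ")
    return output


def conversational_chain(question, callback):
    # Call 1: condense the follow-up question (its tokens ALSO hit the callback).
    condensed = fake_llm(f"condense {question}", callback)
    # Call 2: produce the final answer from the condensed question.
    answer = fake_llm(condensed, callback)
    return answer


cb = StreamingCallback()
answer = conversational_chain("what is streaming?", cb)
streamed = "".join(cb.tokens)

print(streamed == answer)  # → False: condense-step tokens are mixed in
print(answer in streamed)  # → True: the final answer is there, plus extra text
```

Because the callback sees tokens from both calls, joining the streamed tokens does not reproduce the chain's returned answer.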
-
I'm trying to implement HTTPS streaming for the `ConversationalRetrievalChain` via FastAPI, but am stumbling into some issues. Mainly, the streamed text is full of bad writing. Here's an example of a streamed response ^.
I already had an issue with streaming gibberish before: #9592 (comment), but I seem to have resolved that already, in the sense that I'm no longer streaming the model's thought process.
However, the issue now is the wrong grammar being sent. Here's the code:
However, the streamed response is different from `result['answer']`. How can I stream the text in `result['answer']`? Or at least have the streamed response be similar to the one returned in `result['answer']`?
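One way to make the streamed text match `result['answer']` is to stream only the answer-generating LLM call: push its tokens onto a queue and drain that queue from an async generator, which FastAPI's `StreamingResponse` can wrap. This is the pattern behind LangChain's `AsyncIteratorCallbackHandler`. The sketch below is self-contained with stand-in names (`run_chain`, `token_stream` are illustrative, not real LangChain or FastAPI APIs):

```python
import asyncio

SENTINEL = object()  # marks end of stream


async def run_chain(question, queue):
    """Stand-in for the chain: only final-answer tokens go on the queue."""
    # Condense step: NOT streamed (no queue.put here).
    condensed = f"condensed({question})"
    # Answer step: stream each token as it is produced.
    answer_tokens = ["The", " ", "answer", " ", "to", " ", condensed]
    for tok in answer_tokens:
        await queue.put(tok)
    await queue.put(SENTINEL)
    return "".join(answer_tokens)


async def token_stream(queue):
    """Async generator yielding tokens until the sentinel arrives."""
    while True:
        tok = await queue.get()
        if tok is SENTINEL:
            break
        yield tok


async def main():
    queue = asyncio.Queue()
    chain_task = asyncio.create_task(run_chain("what is streaming?", queue))
    streamed = [tok async for tok in token_stream(queue)]
    answer = await chain_task
    # Because only the answer step was streamed, the two now match.
    assert "".join(streamed) == answer
    return "".join(streamed)


print(asyncio.run(main()))
```

In a real FastAPI endpoint, `token_stream(queue)` would be returned inside a `StreamingResponse` while the chain runs as a background task.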