Replies: 2 comments
-
It might be that the OpenAI calls are printed because we log retries as warnings (this happens on rate limits, API errors, etc.). Can you provide an example of what is being printed?
-
Thanks for the prompt response. Here are a few examples:

a) OpenAI input (the exact same message below shows up 7 times)

b) DEBUG message (the exact same message below shows up 7 times):

    DEBUG:openai:api_version=None data='{"messages": [{"role": "system", "content": "Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n______________________\nSpeaker: So it's, it's really, it's really interesting to hear ...<Long text prompt - Trimmed>

c) DEBUG printed 7 times:

    DEBUG:chromadb.db.index.hnswlib:time to run knn query: 0.00021266937255859375

d) DETAIL message printed 7 times

e) Debug message printed 7 times
-
I see far too much debug output when I run LangChain calls.
For example, even with verbose=False for the following RetrievalQA:
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(
        model_name=chosen_model,
        temperature=0,
        max_tokens=num_output,
        # top_p=0.3,
        frequency_penalty=1,
        presence_penalty=1,
    ),
    verbose=False,
    chain_type="map_reduce",
    retriever=retriever,
)
I notice that the calls to OpenAI are printed in the log, and the same debug messages are printed several times, sometimes 5, 6, or 7 times.
Anyone else seen this? How do I tweak this?
Thanks.