Cannot count tokens because LLMResult.llm_output = None (OpenAI) #15874
-
🤖 Hello @NguyenDrasp! 👋 I'm Dosu, an AI bot here to assist you with any issues or questions you may have regarding the LangChain repository. While you're waiting for a human maintainer to chime in, I'm here to help you out as much as I can. Don't hesitate to ask anything regarding bug solving, contributing, and more. Let's get your problem solved! 🔧

To count tokens when using a custom AsyncCallbackHandler for streaming in LangChain, you can use the get_openai_callback context manager. Here is an example of how you can use it:

import asyncio
from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.1)

# Use the context manager to count tokens
with get_openai_callback() as cb:
    llm("Your custom AsyncCallbackHandler code here")
    # The total number of tokens is stored in cb.total_tokens
    total_tokens = cb.total_tokens
    assert total_tokens > 0

In this example, get_openai_callback tracks the token usage of every OpenAI call made inside the with block. You can find more information about token counting in the token_counting.ipynb notebook in the LangChain repository.

I hope this helps! If you have any other questions, feel free to ask.

Best, Dosu

Sources

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
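For the streaming case specifically, another option is to count tokens inside the callback itself instead of relying on llm_output, which is often None for streamed runs. Below is a minimal sketch (not from this thread) of a custom AsyncCallbackHandler that tallies streamed tokens with tiktoken; the handler name and the model name are assumptions you would adapt to your setup, and re-encoding each chunk only approximates the provider's own count.

import tiktoken
from langchain.callbacks.base import AsyncCallbackHandler

class StreamingTokenCounter(AsyncCallbackHandler):
    """Approximate completion-token counter for streamed runs (sketch)."""

    def __init__(self, model_name: str = "gpt-3.5-turbo"):  # assumed model
        self.encoding = tiktoken.encoding_for_model(model_name)
        self.completion_tokens = 0

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Each streamed chunk is re-encoded; this approximates the true usage.
        self.completion_tokens += len(self.encoding.encode(token))

You would pass an instance via callbacks=[StreamingTokenCounter()] when invoking the agent or LLM and read completion_tokens after the run.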
-
+1 Same issue here after updating from langchain v0.0.278 -> v0.1.11. The AzureChatOpenAI model has been moved from langchain.chat_models -> langchain_openai.
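For anyone hitting the corresponding import error, the change is roughly the following (a minimal sketch; it assumes the langchain-openai package is installed):

# Before (langchain v0.0.x)
from langchain.chat_models import AzureChatOpenAI

# After (langchain v0.1.x, with the langchain-openai package installed)
from langchain_openai import AzureChatOpenAI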
-
Hi Thread, I am using the TypeScript SDK of LangChain. I am still receiving a token count of 0. Can you please help here?
-
Hi! I am facing a similar problem in LLMCompiler with the RunnableBranch class. Here is the planner code:

def create_planner(
    llm: BaseChatModel, tools: Sequence[BaseTool], base_prompt: ChatPromptTemplate
):
    tool_descriptions = "\n".join(
        f"{i+1}. {tool.description}\n"
        for i, tool in enumerate(
            tools
        )  # +1 to offset the 0 starting index, we want it count normally from 1.
    )
    planner_prompt = base_prompt.partial(
        replan="",
        num_tools=len(tools)
        + 1,  # Add one because we're adding the join() tool at the end.
        tool_descriptions=tool_descriptions,
    )
    replanner_prompt = base_prompt.partial(
        replan=' - You are given "Previous Plan" which is the plan that the previous agent created along with the execution results '
        "(given as Observation) of each plan and a general thought (given as Thought) about the executed results."
        'You MUST use these information to create the next plan under "Current Plan".\n'
        ' - When starting the Current Plan, you should start with "Thought" that outlines the strategy for the next plan.\n'
        " - In the Current Plan, you should NEVER repeat the actions that are already executed in the Previous Plan.\n"
        " - You must continue the task index from the end of the previous one. Do not repeat task indices.",
        num_tools=len(tools) + 1,
        tool_descriptions=tool_descriptions,
    )

    def should_replan(state: list):
        # Context is passed as a system message
        return isinstance(state[-1], SystemMessage)

    def wrap_messages(state: list):
        return {"messages": state}

    def wrap_and_get_last_index(state: list):
        next_task = 0
        for message in state[::-1]:
            if isinstance(message, FunctionMessage):
                next_task = message.additional_kwargs["idx"] + 1
                break
        state[-1].content = state[-1].content + f" - Begin counting at : {next_task}"
        return {"messages": state}

    return (
        RunnableBranch(
            (should_replan, wrap_and_get_last_index | replanner_prompt),
            wrap_messages | planner_prompt,
        )
        | llm
        | LLMCompilerPlanParser(tools=tools)
    )

and here is where it is called:

@as_runnable
def plan_and_schedule(messages: List[BaseMessage], config):
    tasks = planner.stream(messages, config)
    # Begin executing the planner immediately
    try:
        tasks = itertools.chain([next(tasks)], tasks)
    except StopIteration:
        # Handle the case where tasks is empty.
        tasks = iter([])
    scheduled_tasks = schedule_tasks.invoke(
        {
            "messages": messages,
            "tasks": tasks,
        },
        config,
    )
    return scheduled_tasks

When I debug the code, the planner LLM's llm_output is None here as well, so the tokens are not counted.
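In case it helps with debugging, one way to check whether token usage is being reported at all for a streamed chat run is to wrap the call in get_openai_callback and ask the model to include usage in the stream. This is only a sketch: it assumes a recent langchain-openai release where ChatOpenAI accepts stream_usage, and whether the callback picks the usage up depends on the installed versions. The same wrapper could go around planner.stream(...) from the snippet above.

from langchain.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

# stream_usage=True asks recent langchain-openai releases to attach token usage
# to the streamed response; without it, usage is often missing when streaming.
llm = ChatOpenAI(model="gpt-3.5-turbo", stream_usage=True)

with get_openai_callback() as cb:
    for chunk in llm.stream("ping"):
        pass

print(cb.total_tokens)  # stays 0 if the provider reported no usage for this run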
-
Hi,
I'm trying to track tokens. I use an Agent with my custom AsyncCallbackHandler for streaming, but I cannot count the tokens. Here's the photo I printed out of the LLMResult:
Here is the code of my custom AsyncCallbackHandler:
And I found that llm_output is only documented as "Arbitrary LLM provider-specific output".
What should I do?
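For context, when llm_output is populated (non-streaming OpenAI calls), the usual pattern inside a callback is to read the token_usage dict from it; when streaming, that field is often None, which is why the count fails. A minimal sketch of the defensive read (the handler name is made up, and the field names assume the OpenAI provider):

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.schema import LLMResult

class UsageLoggingHandler(AsyncCallbackHandler):
    async def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        # llm_output is provider-specific and may be None, e.g. when streaming.
        usage = (response.llm_output or {}).get("token_usage", {})
        print("total_tokens:", usage.get("total_tokens", 0))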