It's JavaScript, so here's a cheap and easy way that seemed to work during debugging:

// Save the original console.log function in originalLog,
// then define a replacement console.log that captures the
// token-count data from each verbose log entry.

const originalLog = console.log;
let tokenCount = 0;

console.log = function (...args) {
    originalLog.apply(console, args);
    const logString = args.join(' ');
    // \s* tolerates any amount of whitespace after the colon
    const match = logString.match(/"totalTokens":\s*(\d+)/);
    if (match) {
        tokenCount += parseInt(match[1], 10);
    }
};
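As a quick sanity check, feeding the patched logger a few sample lines shows the counter accumulating. The sample strings below only mimic the shape of LangChain's verbose output (the `"totalTokens"` key and spacing are assumptions about that format); the patch itself is repeated so the snippet runs standalone:

```javascript
// Reinstall the patch so this snippet is self-contained.
const originalLog = console.log;
let tokenCount = 0;
console.log = function (...args) {
  originalLog.apply(console, args);
  const match = args.join(' ').match(/"totalTokens":\s*(\d+)/);
  if (match) tokenCount += parseInt(match[1], 10);
};

// Hypothetical lines shaped like LangChain's verbose token-usage output.
console.log('{"promptTokens": 10, "completionTokens": 32, "totalTokens": 42}');
console.log('a plain message with no token data');
console.log('{"totalTokens": 8}');

console.log = originalLog;   // undo the patch
console.log(tokenCount);     // → 50
```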

const model = new ChatOpenAI({
  temperature: 0,
  modelName: "gpt-3.5-turbo",
  verbose: true,
});

const executor = PlanAndExecuteAgentExecutor.fromLLMAndTools({
  llm: mod…
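Since the patch replaces `console.log` globally, it's worth undoing it once the run finishes. A minimal sketch of that cleanup, where `executor` is the agent built above and its call shown in the comment is a hypothetical invocation, not a confirmed signature:

```javascript
const originalLog = console.log;
let tokenCount = 0;
// ... install the console.log patch and build the executor as above ...

async function run() {
  try {
    // await executor.call({ input: "..." });  // hypothetical agent invocation
  } finally {
    console.log = originalLog;  // undo the global patch even if the call throws
  }
  console.log(`Total tokens used: ${tokenCount}`);
}

run();
```

The `try`/`finally` matters because an uncaught error in the agent call would otherwise leave every later `console.log` in the process running through the regex match.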

Answer selected by scottsuhy