Conversation

@clavedeluna (Contributor)

Any codemod that uses an LLM and spends prompt and completion tokens will now have its token usage reported, both per codemod and at the end of the codemodder run.

It looks something like this (redacted):

Codemod REDACTED
	completion_tokens = 2309
	prompt_tokens = 13742
...
running codemod SOME OTHER CODEMOD
...
...
[report]
scanned: 8 files
failed: 0 files (0 unique)
changed: 8 files (4 unique)
report file: here.txt
All token usage
	completion_tokens = 5176
	prompt_tokens = 40960
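The per-codemod counts roll up into the run-wide totals shown under "All token usage". A minimal sketch of that aggregation, assuming a simple accumulator type (the class and codemod names are hypothetical; the second codemod's counts are inferred by subtracting the first codemod's counts from the run totals above):

```python
from dataclasses import dataclass


@dataclass
class TokenUsage:
    """Hypothetical accumulator for one codemod's LLM token spend."""
    completion_tokens: int = 0
    prompt_tokens: int = 0

    def __add__(self, other: "TokenUsage") -> "TokenUsage":
        return TokenUsage(
            self.completion_tokens + other.completion_tokens,
            self.prompt_tokens + other.prompt_tokens,
        )


# Per-codemod usage as reported during the run (names redacted/hypothetical).
per_codemod = {
    "codemod-a": TokenUsage(completion_tokens=2309, prompt_tokens=13742),
    "codemod-b": TokenUsage(completion_tokens=2867, prompt_tokens=27218),
}

# Run-wide totals printed in the final [report] section.
total = sum(per_codemod.values(), TokenUsage())
print(f"\tcompletion_tokens = {total.completion_tokens}")  # 5176
print(f"\tprompt_tokens = {total.prompt_tokens}")          # 40960
```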

if not AzureOpenAI:
    logger.info("Azure OpenAI API client not available")
    return None
    logger.info("Azure OpenAI API client not available")
Contributor

I'm guessing this is a mistake; this log is unreachable and will never execute.

Contributor Author

wooops yes very much
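Presumably the fix is just dropping the duplicate call after the early return. A minimal sketch, using a stand-in for the optional import and a hypothetical wrapper function (neither appears in the diff):

```python
import logging

logger = logging.getLogger(__name__)

# Stand-in for the optional dependency; None models "client not installed".
AzureOpenAI = None


def get_azure_client():
    # Log once and bail out early; the duplicate logger.info that sat
    # after `return None` in the original diff is removed.
    if not AzureOpenAI:
        logger.info("Azure OpenAI API client not available")
        return None
    return AzureOpenAI()
```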

@sonarqubecloud
sonarqubecloud bot commented Feb 6, 2025

@clavedeluna clavedeluna added this pull request to the merge queue Feb 6, 2025
Merged via the queue into main with commit b2baa38 Feb 6, 2025
15 checks passed
@clavedeluna clavedeluna deleted the report-token-usage branch February 6, 2025 18:20