
Add flag to show estimated AI token usage before run #57

@uberfresh


Description

It would be useful to have an option to estimate the potential AI token cost before executing the translations.
This would help users decide whether to proceed, especially with large .po files or multiple languages.

Proposed behavior

  • New CLI flag: --estimate-cost (exact name open to discussion)
  • When used, the tool should (a rough sketch follows this list):
    1. Count the approximate tokens in the .po files that would be sent for translation.
    2. Multiply by the number of target languages (when --bulk or multiple languages are used).
    3. Use the selected model’s token pricing to show an estimated cost before starting translation.
    4. Ask for confirmation (e.g., Proceed? (y/n)) unless overridden by --yes.
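
A minimal sketch of how this flow could look, assuming a Python implementation with polib and tiktoken available. The PRICE_PER_TOKEN table, function names, and the example paths/languages are placeholders, not the tool's actual API:

```python
# Rough sketch of the proposed --estimate-cost flow (placeholder names throughout).
import sys

import polib
import tiktoken

# Hypothetical per-token input price table; real values must come from the
# provider's current price list.
PRICE_PER_TOKEN = {
    "gpt-4o-mini": 0.0000275,
}


def estimate_tokens(po_path: str, model: str) -> int:
    """Approximate token count for the untranslated entries of one .po file."""
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base")  # fallback for unrecognized model names
    po = polib.pofile(po_path)
    return sum(len(enc.encode(entry.msgid)) for entry in po.untranslated_entries())


def estimate_cost(po_path: str, model: str, languages: list[str], assume_yes: bool) -> bool:
    """Print the estimate and ask for confirmation; return True if the run should proceed."""
    per_lang_tokens = estimate_tokens(po_path, model)
    total_tokens = per_lang_tokens * len(languages)
    cost = total_tokens * PRICE_PER_TOKEN[model]

    print(f"Estimated total tokens: {total_tokens:,} (all languages combined)")
    print(f"Model: {model}")
    print(f"Estimated cost: ${cost:.2f} (based on ${PRICE_PER_TOKEN[model]:.7f} per token)")

    if assume_yes:  # --yes skips the prompt
        return True
    return input("Proceed? (y/n) ").strip().lower() == "y"


if __name__ == "__main__":
    if not estimate_cost("locales/messages.po", "gpt-4o-mini", ["de", "fr", "es"], assume_yes=False):
        sys.exit(1)
```

This only counts input tokens for the source strings; a real estimate would probably also need to account for prompt overhead and output tokens, which is another reason to present the number as approximate.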

Example output

Estimated total tokens: 45,230 (all languages combined)
Model: gpt-4o-mini
Estimated cost: $1.24 (based on $0.0000275 per token)

Proceed? (y/n)
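
(The figure above is just the per-token rate applied to the combined count: 45,230 × $0.0000275 ≈ $1.24; the rate is illustrative, not a real price quote.)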

Benefits

  • Allows cost control before committing to a run
  • Prevents unexpected high charges
  • Improves transparency for budget-conscious users

Additional considerations

  • Token counting can be approximate (use tiktoken or similar)
  • Should support multiple AI providers if the tool is expanded
  • Could also display an estimated per-language cost breakdown (sketched below)
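
On the last point, a per-language breakdown could reuse the hypothetical estimate_tokens() helper and PRICE_PER_TOKEN table from the earlier sketch; illustrative only, not the tool's real output format:

```python
# Per-language breakdown, reusing estimate_tokens() and PRICE_PER_TOKEN from the
# earlier sketch (placeholder names; the real tool may structure this differently).
def print_breakdown(po_path: str, model: str, languages: list[str]) -> None:
    per_lang_tokens = estimate_tokens(po_path, model)  # same source text for every target language
    rate = PRICE_PER_TOKEN[model]
    for lang in languages:
        print(f"  {lang}: ~{per_lang_tokens:,} tokens  (~${per_lang_tokens * rate:.2f})")
    total = per_lang_tokens * len(languages)
    print(f"  Total: ~{total:,} tokens  (~${total * rate:.2f})")
```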
