carlos-cajina/ai-usage-meter

AI Usage Meter (v0.2.0)

A CLI application that displays a price comparison table of AI model usage across multiple providers. Built with Rust and opentui.

Features

  • Multi-provider support: Compare pricing across OpenAI, Anthropic, Gemini, DeepSeek, MiniMax, Grok, Qwen, Z.ai, Kimi, HuggingFace, and OpenRouter
  • Token estimation: Estimates prompt token counts (currently a character-count heuristic; provider-specific tokenizers are a noted improvement)
  • Currency conversion: Support for MXN, USD, EUR, GBP, JPY, CNY (MXN is default)
  • Interactive mode: Navigate with keyboard arrows, sort columns, refresh prices
  • Price change indicators: Shows ↑/↓ for price changes since last refresh
  • Price caching: Stores pricing data locally for offline use
  • Live price updates: Fetch latest prices from OpenRouter API

Installation

# Clone the repository
git clone https://github.com/yourusername/ai-usage-meter
cd ai-usage-meter

# Build the application
cargo build --release

# Or run directly
cargo run

Usage

Basic Usage

# Show prices with default values (1000 input tokens, 500 output tokens)
cargo run

# Specify input and output tokens
cargo run -- -i 2000 -o 1000
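The figures in the table follow from the token counts and each model's per-million-token prices. A minimal sketch of that arithmetic (the struct and field names here are illustrative, not the app's actual types):

```rust
// Hypothetical per-model pricing, expressed in USD per 1M tokens.
struct ModelPricing {
    input_per_million: f64,
    output_per_million: f64,
}

// Total cost for a request: each side is tokens * (price / 1_000_000).
fn total_cost(p: &ModelPricing, input_tokens: u64, output_tokens: u64) -> f64 {
    let input_cost = input_tokens as f64 * p.input_per_million / 1_000_000.0;
    let output_cost = output_tokens as f64 * p.output_per_million / 1_000_000.0;
    input_cost + output_cost
}

fn main() {
    let p = ModelPricing { input_per_million: 3.0, output_per_million: 15.0 };
    // 2000 input + 1000 output tokens, as in `cargo run -- -i 2000 -o 1000`
    println!("{:.6}", total_cost(&p, 2000, 1000));
}
```

At $3/$15 per million tokens, 2000 input and 1000 output tokens come to $0.021.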

With Prompt Input

# Provide a prompt to calculate input tokens automatically
cargo run -- --prompt "Write a story about a robot"

Currency Options

# Prices in Mexican Pesos (default)
cargo run

# Prices in US Dollars
cargo run -- --currency usd

# Prices in Euros
cargo run -- --currency eur
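Currency handling can be sketched as multiplying a USD base price by an exchange rate. The rates below are made-up placeholders for illustration, not the rates the app uses:

```rust
// Hypothetical conversion: a cached USD price multiplied by a per-currency
// rate. The rates here are placeholders, not live exchange data.
fn to_currency(usd: f64, currency: &str) -> Option<f64> {
    let rate = match currency {
        "usd" => 1.0,
        "mxn" => 17.0, // placeholder rate
        "eur" => 0.92, // placeholder rate
        _ => return None, // unsupported currency
    };
    Some(usd * rate)
}

fn main() {
    println!("{:?}", to_currency(0.021, "mxn"));
}
```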

Sorting

# Sort by total cost (default)
cargo run

# Sort by input cost
cargo run -- --sort-by input

# Sort by output cost
cargo run -- --sort-by output

# Sort by provider name
cargo run -- --sort-by provider

# Sort by model name
cargo run -- --sort-by model
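The `--sort-by` dispatch amounts to a match over the column name. A sketch with an illustrative row type (the app's real types live in `src/models/`):

```rust
// Illustrative table row; field names are assumptions, not the app's API.
#[derive(Debug)]
struct Row {
    provider: String,
    model: String,
    input_cost: f64,
    output_cost: f64,
}

fn sort_rows(rows: &mut Vec<Row>, sort_by: &str) {
    match sort_by {
        "input" => rows.sort_by(|a, b| a.input_cost.total_cmp(&b.input_cost)),
        "output" => rows.sort_by(|a, b| a.output_cost.total_cmp(&b.output_cost)),
        "provider" => rows.sort_by(|a, b| a.provider.cmp(&b.provider)),
        "model" => rows.sort_by(|a, b| a.model.cmp(&b.model)),
        // "total" (the default): sort by combined cost.
        _ => rows.sort_by(|a, b| {
            (a.input_cost + a.output_cost).total_cmp(&(b.input_cost + b.output_cost))
        }),
    }
}

fn main() {
    let mut rows = vec![
        Row { provider: "B".into(), model: "m2".into(), input_cost: 0.02, output_cost: 0.01 },
        Row { provider: "A".into(), model: "m1".into(), input_cost: 0.01, output_cost: 0.01 },
    ];
    sort_rows(&mut rows, "provider");
    println!("{}", rows[0].provider);
}
```

`f64::total_cmp` gives a total ordering over floats, avoiding the panic-prone `partial_cmp(...).unwrap()` pattern.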

Interactive Mode

# Launch interactive TUI with keyboard navigation
# (note: -i is the short flag for --input-tokens, so --interactive is spelled out in full)
cargo run -- --interactive

In interactive mode:

  • p: Enter prompt input mode (type prompt, press Enter to calculate)
  • ↑/↓: Navigate through the table
  • ←/→: Change sort column
  • r: Refresh prices from API
  • q or ESC: Exit
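The bindings above are a small mapping from key events to actions. A sketch with key events modeled as plain strings (the real app reads events from its TUI library):

```rust
// Actions the interactive mode can take; names are illustrative.
#[derive(Debug, PartialEq)]
enum Action {
    PromptInput, // p: enter prompt input mode
    MoveUp,      // ↑
    MoveDown,    // ↓
    PrevSort,    // ←
    NextSort,    // →
    Refresh,     // r
    Quit,        // q or ESC
    None,
}

fn handle_key(key: &str) -> Action {
    match key {
        "p" => Action::PromptInput,
        "up" => Action::MoveUp,
        "down" => Action::MoveDown,
        "left" => Action::PrevSort,
        "right" => Action::NextSort,
        "r" => Action::Refresh,
        "q" | "esc" => Action::Quit,
        _ => Action::None,
    }
}

fn main() {
    println!("{:?}", handle_key("q"));
}
```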

Refresh Prices

# Fetch latest prices from OpenRouter API and cache locally
cargo run -- --refresh

After refreshing, price changes are shown:

  • ↑ (red): Price increased since last refresh
  • ↓ (green): Price decreased since last refresh
  • (blank): No previous data to compare
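Choosing the indicator is a comparison between the cached price and the freshly fetched one. A sketch with illustrative names:

```rust
// Pick a direction indicator from the cached (previous) and fetched
// (current) price; color rendering is left to the TUI layer.
fn change_indicator(previous: Option<f64>, current: f64) -> &'static str {
    match previous {
        Some(prev) if current > prev => "↑", // rendered in red
        Some(prev) if current < prev => "↓", // rendered in green
        Some(_) => " ",                      // unchanged
        None => " ",                         // no previous data to compare
    }
}

fn main() {
    println!("{}", change_indicator(Some(1.0), 2.0));
}
```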

Supported Providers

Provider      Models Tracked
OpenAI        o4-mini, gpt-4.5, gpt-4o
Anthropic     claude-sonnet-4, claude-opus-4, claude-3.5-sonnet
Gemini        gemini-2.5-pro, gemini-2.0-flash, gemini-1.5-pro
DeepSeek      deepseek-chat, deepseek-coder, deepseek-reasoner
MiniMax       MiniMax-M2, MiniMax-Text-01, abab6.5s-chat
Grok          grok-3, grok-3-mini, grok-2
Qwen          qwen3-235b-a22b, qwen2.5-coder-32b, qwen2.5-72b
Z.ai          zephyr-sft, zaibert, zephyr-7b
Kimi          kimi-k2, kimi1.5-sora, kimi1.5-pro
HuggingFace   Qwen2.5-72B, Llama-3.1-405B, Mistral-Large
OpenRouter    Various aggregated models

Configuration

Token Estimation

The application uses different tokenization approaches:

  • Prompt input: Uses character count / 4 as approximation (can be improved with provider-specific tokenizers)
  • API fetching: Uses OpenRouter's live pricing data
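The character-count heuristic can be sketched directly. Rounding up is an assumption here, so short non-empty prompts never estimate zero tokens:

```rust
// Rough token estimate: about 4 characters per token, rounded up.
// (The divisor matches the heuristic documented above; the rounding
// direction is an assumption of this sketch.)
fn estimate_tokens(prompt: &str) -> usize {
    (prompt.chars().count() + 3) / 4
}

fn main() {
    println!("{}", estimate_tokens("Write a story about a robot"));
}
```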

Storage

Pricing data is cached in:

  • Linux/macOS: ~/.local/share/ai-usage-meter/pricing.json
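One way to resolve that path (a sketch; the app's actual lookup may differ, e.g. it may use a platform-directories crate):

```rust
use std::env;
use std::path::PathBuf;

// Build the documented cache path under $HOME. Returns None when HOME
// is unset (e.g. in some minimal container environments).
fn pricing_cache_path() -> Option<PathBuf> {
    let home = env::var_os("HOME")?;
    Some(PathBuf::from(home).join(".local/share/ai-usage-meter/pricing.json"))
}

fn main() {
    println!("{:?}", pricing_cache_path());
}
```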

CLI Options

AI Usage Meter - Compare AI model pricing across providers

Usage: ai-usage-meter [OPTIONS]

Options:
  -p, --prompt <PROMPT>                    Input prompt for token estimation
  -i, --input-tokens <INPUT>               Input token count [default: 1000]
  -o, --output-tokens <OUTPUT>             Output token count [default: 500]
  -e, --estimate-output <ESTIMATE_OUTPUT>  Estimated output tokens
  -c, --currency <CURRENCY>                Currency [default: MXN] [possible values: mxn, usd, eur, gbp, jpy, cny]
      --interactive                        Run in interactive mode
      --refresh                            Refresh prices from API
      --sort-by <SORT_BY>                  Sort by column [possible values: total, input, output, provider, model]
  -h, --help                               Print help

Development

Run Tests

cargo test

Run a Single Test

cargo test test_name

Linting

cargo clippy

Formatting

cargo fmt

Architecture

src/
├── main.rs              # Application entry point and UI
├── cli.rs               # CLI argument parsing
├── error.rs             # Error types
├── models/              # Data models
│   └── mod.rs           # Pricing, Provider, Currency types
├── providers/           # Provider implementations
│   ├── mod.rs           # Provider registry
│   └── sample_data.rs   # Sample pricing data
└── services/            # Business logic
    ├── mod.rs           # Price calculator
    ├── storage.rs       # JSON file storage
    ├── tokenizer.rs     # Token counting
    └── price_service.rs # Price fetching and caching

License

MIT
