A simple, self-hosted React application that helps you calculate the operational cost of using Large Language Models (LLMs) based on your token usage. This tool supports different token types including input tokens, cached tokens, and output tokens, allowing for accurate cost estimation across various LLM providers and models.
- Token-based Cost Calculation: Calculate costs based on input, cached, and output tokens
- Configurable Pricing: Set custom pricing per million tokens for different models
- Popular Model Presets: Quick setup with predefined pricing for popular LLMs (GPT-4o, Claude-3.5, etc.)
- Local Storage: Configuration persists across browser sessions
- Responsive Design: Works seamlessly on desktop and mobile devices
- Self-hosted: Run locally or deploy to your own hosting
- Helpful Resources: Direct links to pricing pages and tokenizer tools
```
Total Cost = (Uncached Input Tokens × Input Token Price
            + Cached Input Tokens × Cached Token Price
            + Output Tokens × Output Token Price) ÷ 1,000,000
```

Where:
- Uncached Input Tokens = Total Input Tokens − Cached Tokens
- Prices are per million tokens (hence the division by 1,000,000)
- Cached tokens typically cost less than regular input tokens
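The formula above can be sketched as a small TypeScript helper. The type and function names below are illustrative, not the app's actual code:

```typescript
// Prices are expressed in USD per 1 million tokens, matching the formula above.
interface Prices {
  input: number;   // per 1M uncached input tokens
  cached: number;  // per 1M cached input tokens
  output: number;  // per 1M output tokens
}

// Hypothetical helper: total cost in USD for a given token usage.
function totalCost(
  inputTokens: number,
  cachedTokens: number,
  outputTokens: number,
  prices: Prices,
): number {
  const uncachedTokens = inputTokens - cachedTokens;
  return (
    (uncachedTokens * prices.input +
      cachedTokens * prices.cached +
      outputTokens * prices.output) /
    1_000_000
  );
}
```

For example, 2M input tokens (1M of them cached) plus 1M output tokens at the GPT-4o rates listed below ($5.00 / $2.50 / $15.00) works out to (1M×5 + 1M×2.5 + 1M×15) ÷ 1M = $22.50.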
- Node.js (version 18 or higher)
- npm or yarn package manager
- Git (for cloning the repository)
- Clone the repository

  ```bash
  git clone https://github.com/yourusername/llm-cost-calculator.git
  cd llm-cost-calculator
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Start the development server

  ```bash
  npm run dev
  ```

- Open your browser

  Navigate to `http://localhost:5173` to use the application.
- Go to the Configuration page
- Either select a popular model preset or enter custom pricing values
- Set prices for input tokens, cached tokens, and output tokens (per million tokens)
- Save your configuration
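The "persists across browser sessions" behavior is typically implemented with `localStorage`. Here is a minimal sketch, assuming a hypothetical storage key and config shape (not the app's actual names), with the storage backend passed in so the logic is not tied to the browser:

```typescript
// Illustrative shape of the saved pricing configuration.
interface PricingConfig {
  model: string;
  inputPricePerM: number;   // USD per 1M input tokens
  cachedPricePerM: number;  // USD per 1M cached tokens
  outputPricePerM: number;  // USD per 1M output tokens
}

// Minimal key/value interface; window.localStorage satisfies it in the browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = "llm-cost-calculator-config"; // assumed key name

function saveConfig(config: PricingConfig, store: KVStore): void {
  store.setItem(STORAGE_KEY, JSON.stringify(config));
}

function loadConfig(store: KVStore): PricingConfig | null {
  const raw = store.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as PricingConfig) : null;
}
```

In the browser you would pass `window.localStorage` as the store; `loadConfig` returns `null` on a first visit, so the app can fall back to a preset.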
- Go to the Calculator page
- Enter your token usage data:
- Model Name: Optional identifier for your calculation
- Input Tokens: Total number of input tokens consumed
- Cached Tokens: Number of input tokens that were cached
- Output Tokens: Number of tokens generated by the model
- Click "Calculate Cost" to see your total operational cost
The calculator will show:
- Total cost breakdown
- Cost per token type
- Helpful links to pricing resources
- React: Frontend library for building user interfaces
- Vite: Fast build tool and development server
- Tailwind CSS: Utility-first CSS framework for styling
- React Router: Client-side routing for navigation
```bash
# Build the application
npm run build

# Preview the production build (optional)
npm run preview
```

The built files will be in the `dist` directory, ready for deployment to any static hosting service.
- Netlify: Drag and drop the `dist` folder
- Vercel: Connect your GitHub repository
- GitHub Pages: Use GitHub Actions to deploy
- Any static hosting: Upload the `dist` folder contents
| Model | Input (per 1M tokens) | Cached (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|---|
| GPT-4o | $5.00 | $2.50 | $15.00 |
| GPT-4o mini | $0.15 | $0.075 | $0.60 |
| GPT-4 Turbo | $10.00 | $5.00 | $30.00 |
| Claude-3.5 Sonnet | $3.00 | $1.50 | $15.00 |
Note: Prices are approximate and may change. Always verify current pricing on the provider's website.
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is open source and available under the MIT License.
If you encounter any issues or have questions, please file an issue on the GitHub repository.
Made with love for the LLM community