
Commit 34d425a (1 parent: 5e03f6f)

Add comprehensive README with usage examples

- Explains what the tool does and its key features
- Shows installation methods (releases and from source)
- Includes detailed usage examples with actual output
- Documents CLI options and table column meanings
- Explains cache pricing concept
- Adds development section with just commands

File tree: 1 file changed (+167, −0 lines)

README.md

# LLM Pricing

A CLI tool to visualize OpenRouter model pricing in a clean, tabular format.

## Features

- 📊 **Tabular display** of model pricing per 1M tokens
- 🔍 **Filter models** by name or provider (e.g., `anthropic`, `sonnet`)
- 💰 **Cache pricing** support for models that offer it
- 📝 **Verbose mode** showing all model details
- 🌐 **Live data** fetched from the OpenRouter API
## Installation

### From Releases

Download the latest binary for your platform from the [releases page](https://github.com/tekacs/llm-pricing/releases).

### From Source

```bash
git clone https://github.com/tekacs/llm-pricing.git
cd llm-pricing
cargo install --path .
```
## Usage

### Basic Usage

Show all models in a table format:

```bash
llm-pricing
```

```
Model                                      | Input | Output | Cache Read | Cache Write
------------------------------------------+-------+--------+------------+------------
anthropic/claude-opus-4                    | 15.00 | 75.00  | 1.50       | 18.75
anthropic/claude-sonnet-4                  | 3.00  | 15.00  | 0.30       | 3.75
google/gemini-2.5-pro                      | 1.25  | 10.00  | N/A        | N/A
x-ai/grok-4                                | 3.00  | 15.00  | 0.75       | N/A
openai/gpt-4o                              | 2.50  | 10.00  | N/A        | N/A
...
```
### Filter by Provider

Show only Anthropic models:

```bash
llm-pricing anthropic
```

```
Model                                      | Input | Output | Cache Read | Cache Write
------------------------------------------+-------+--------+------------+------------
anthropic/claude-opus-4                    | 15.00 | 75.00  | 1.50       | 18.75
anthropic/claude-sonnet-4                  | 3.00  | 15.00  | 0.30       | 3.75
anthropic/claude-3.5-sonnet                | 3.00  | 15.00  | 0.30       | 3.75
anthropic/claude-3.5-haiku                 | 0.80  | 4.00   | 0.08       | 1.00
anthropic/claude-3-opus                    | 15.00 | 75.00  | 1.50       | 18.75
...
```
### Filter by Model Name

Show models containing "sonnet":

```bash
llm-pricing sonnet
```

```
Model                                      | Input | Output | Cache Read | Cache Write
------------------------------------------+-------+--------+------------+------------
anthropic/claude-sonnet-4                  | 3.00  | 15.00  | 0.30       | 3.75
anthropic/claude-3.7-sonnet                | 3.00  | 15.00  | 0.30       | 3.75
anthropic/claude-3.5-sonnet                | 3.00  | 15.00  | 0.30       | 3.75
anthropic/claude-3-sonnet                  | 3.00  | 15.00  | 0.30       | 3.75
```
### Verbose Output

Get detailed information about models with the `-v` flag:

```bash
llm-pricing opus-4 -v
```

```
=== ANTHROPIC ===

Model: anthropic/claude-opus-4
Name: Anthropic: Claude Opus 4
Description: Claude Opus 4 is benchmarked as the world's best coding model, at time of release,
bringing sustained performance on complex, long-running tasks and agent workflows. It sets new
benchmarks in software engineering, achieving leading results on SWE-bench (72.5%) and
Terminal-bench (43.2%).
Pricing:
  Input: $15.00 per 1M tokens
  Output: $75.00 per 1M tokens
  Cache Read: $1.50 per 1M tokens
  Cache Write: $18.75 per 1M tokens
  Per Request: $0
  Image: $0.024
Context Length: 200000 tokens
Modality: text+image->text
Tokenizer: Claude
Max Completion Tokens: 32000
Moderated: true
```
## Understanding the Output

### Table Columns

- **Model**: The model identifier used in API calls
- **Input**: Cost per 1M input tokens (USD)
- **Output**: Cost per 1M output tokens (USD)
- **Cache Read**: Cost per 1M tokens read from cache (when available)
- **Cache Write**: Cost per 1M tokens written to cache (when available)
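The per-1M-token rates multiply directly against token counts, so a request's cost is `(input_tokens / 1M) × Input + (output_tokens / 1M) × Output`. A minimal standalone Rust sketch (not part of this crate) using the `anthropic/claude-sonnet-4` rates from the table above:

```rust
// Standalone sketch: estimate one request's cost from per-1M-token rates.
fn request_cost(input_tokens: u64, output_tokens: u64, input_rate: f64, output_rate: f64) -> f64 {
    (input_tokens as f64 / 1_000_000.0) * input_rate
        + (output_tokens as f64 / 1_000_000.0) * output_rate
}

fn main() {
    // 10k input tokens and 1k output tokens at $3.00 / $15.00 per 1M tokens
    // (the anthropic/claude-sonnet-4 row above):
    let cost = request_cost(10_000, 1_000, 3.00, 15.00);
    println!("${:.3}", cost); // 10k × $3/1M + 1k × $15/1M = $0.045
}
```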
### Cache Pricing

Some providers (such as Anthropic and xAI) offer prompt caching to reduce costs on repeated content:

- **Cache Read**: Much cheaper than regular input tokens (typically about 10× less)
- **Cache Write**: Slightly more expensive than regular input tokens (to build the cache)
- **N/A**: The model doesn't support caching
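To see what this means in dollars, compare one 50k-token prompt prefix under each pricing column. A standalone Rust sketch (not part of this crate) using the `anthropic/claude-sonnet-4` rates shown above:

```rust
// Sketch: cost of a 50k-token prompt prefix, uncached vs. cached
// (anthropic/claude-sonnet-4 rates from the table, in $ per 1M tokens).
const INPUT: f64 = 3.00;
const CACHE_READ: f64 = 0.30;
const CACHE_WRITE: f64 = 3.75;

fn main() {
    let tokens = 50_000.0 / 1_000_000.0; // 50k tokens as a fraction of 1M

    let uncached = tokens * INPUT;        // full input price on every call
    let first_call = tokens * CACHE_WRITE; // one-time premium to build the cache
    let cached = tokens * CACHE_READ;      // each subsequent cache hit

    println!("uncached:    ${:.4} per call", uncached);   // $0.1500
    println!("cache write: ${:.4} once", first_call);     // $0.1875
    println!("cache read:  ${:.4} per call", cached);     // $0.0150
}
```

After the first call pays the write premium, every repeat of that prefix costs a tenth of the uncached price.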
## CLI Options

```bash
llm-pricing [OPTIONS] [FILTER]

Arguments:
  [FILTER]  Filter models by name (e.g., 'anthropic/', 'sonnet')

Options:
  -v, --verbose  Show verbose output with all model information
  -h, --help     Print help
```
## Development

This project uses [just](https://github.com/casey/just) for task running:

```bash
# Show available tasks
just

# Build the project
just build

# Run with arguments
just run anthropic -v

# Format and lint
just fmt
just clippy
```
## License

MIT License - see [LICENSE](LICENSE) for details.
