Real-time token usage monitoring for Claude Code status line | v1.6.0
claude-pulse displays your current Claude Code token usage directly in your status line, helping you stay aware of context consumption without running /context manually.
- Active session names - Conversation names now work from the first message, even before `/rename`, inferred from transcript content via AI
- Opus 4.6 detection - Correctly identifies `claude-opus-4-6` as "Opus 4.6" in the statusline
- 1M context ready - Dynamic detection handles Opus 4.6's extended context window automatically
- Refactored API calls - AI provider logic extracted into reusable functions in both bash and PowerShell
- Conversation names - Shows AI-generated short names for each conversation (e.g., "Statusline Names")
- Multi-provider API - Supports Anthropic, OpenAI, and Gemini APIs for name generation
- Smart fallback - Extracts first 3 words from summary if no API key configured
- Visual clarity - Added emojis (🤖 for model, 💬 for conversation) to distinguish elements
- Dynamic context limits - Automatically detects and displays correct context window size (200k, 1M, etc.)
- Mid-conversation updates - Correctly updates when switching models during a conversation
- Model display - Shows current model in use (e.g., "Sonnet 4.5", "Opus 4.5", "Haiku 3.5")
- Single-line output - Token usage, model name, and working directory on one compact line
- Accurate display - Shows FULL context usage matching the `/context` command (including MCP tools, system prompt, etc.)
Note: >100% is normal when context exceeds the limit - Claude Code will auto-compact.
See RELEASE.md for full release notes.
- ✅ Accurate token counting - Reads actual usage from Claude's API responses
- ✅ Model-aware limits - Automatically detects context limits for different Claude models
- ✅ Model display - Shows which Claude model you're using (e.g., "Sonnet 4.5", "Opus 4.5")
- ✅ Conversation names - AI-generated short names for easy session identification
- ✅ Multi-provider API - Works with Anthropic, OpenAI, or Gemini API keys
- ✅ Compact display - Single line showing usage, model, conversation, and directory
- ✅ Color-coded warnings - Green → Yellow → Red as you approach context limits
- ✅ Lightweight - Pure bash/PowerShell script with minimal dependencies
- ✅ Inspired by ccusage - Uses the same accurate parsing approach
```
🧠 72k/200k (36%) · 🤖 Sonnet 4.5 · 💬 "Statusline Names" 📁 /Users/you/Code/your-project
```
Displays token usage (current/limit), percentage, current model, conversation name, and working directory all on one compact line.
Color changes based on usage:
- 🟢 Green (0-49%): Plenty of context remaining
- 🟡 Yellow (50-79%): Moderate usage
- 🔴 Red (80-100%): High usage, consider compacting
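The thresholds above can be sketched as a small bash helper; the function name is illustrative, not the script's actual internals:

```shell
# Map a usage percentage to an ANSI color code. Thresholds mirror the
# list above: <50% green, <80% yellow, otherwise red.
usage_color() {
  if [ "$1" -lt 50 ]; then
    printf '32'   # green
  elif [ "$1" -lt 80 ]; then
    printf '33'   # yellow
  else
    printf '31'   # red
  fi
}
```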
```bash
git clone https://github.com/omriariav/claude-pulse.git
cd claude-pulse
./install.sh
```
Requirements: `jq` (`brew install jq` / `apt install jq`)
- Clone or download the repository
- Copy `claude-pulse.ps1` to your `.claude` folder:
  ```powershell
  Copy-Item claude-pulse.ps1 "$env:USERPROFILE\.claude\statusline-command.ps1"
  ```
- Add to your Claude Code `settings.json`:
  ```json
  {
    "statusLine": {
      "type": "command",
      "command": "powershell -ExecutionPolicy Bypass -File C:/Users/YOUR_USERNAME/.claude/statusline-command.ps1"
    }
  }
  ```
- Restart Claude Code
- Copy `claude-pulse` to `~/.claude/statusline-command.sh`:
  ```bash
  cp claude-pulse ~/.claude/statusline-command.sh
  chmod +x ~/.claude/statusline-command.sh
  ```
- Add to your Claude Code `settings.json`:
  ```json
  {
    "statusLine": {
      "type": "command",
      "command": "~/.claude/statusline-command.sh"
    }
  }
  ```
- Restart Claude Code
- Claude Code (obviously!)
- macOS/Linux: `jq` JSON parser
  - macOS: `brew install jq`
  - Linux: `sudo apt-get install jq`
- Windows: PowerShell (included in Windows)
claude-pulse reads usage data from Claude's transcript files (JSONL format), which contain the actual billing API response. This includes:
- Message tokens
- System prompt tokens
- Tool definitions (including MCP tools)
- Memory files
- All context overhead
This matches what /context shows - the FULL context usage.
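A rough sketch of that transcript parsing with jq; the usage field names (`input_tokens`, `cache_read_input_tokens`, `cache_creation_input_tokens`, `output_tokens`) are assumptions based on observed Claude Code transcripts, not a documented contract:

```shell
# Hypothetical helper: total context tokens from the newest usage record
# in a transcript file. Each JSONL line is one API event; slurping with
# -s and taking the last entry that carries usage data means the latest
# response wins.
total_tokens() {
  jq -s '
    [.[] | select(.message.usage != null)]
    | last.message.usage
    | (.input_tokens // 0)
      + (.cache_read_input_tokens // 0)
      + (.cache_creation_input_tokens // 0)
      + (.output_tokens // 0)
  ' "$1"
}
```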
If transcript data is unavailable, claude-pulse falls back to the native `context_window` data from Claude Code:
```json
{
  "context_window": {
    "total_input_tokens": 15234,
    "total_output_tokens": 4521,
    "context_window_size": 200000
  }
}
```
The script:
- Tries to read billing API data from the transcript (most accurate)
- Falls back to native `context_window` if the transcript is unavailable
- Extracts and converts the model ID to a friendly name
- Calculates the percentage and applies color coding
- Returns a compact, single-line status display
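The fallback step can be sketched with jq against the `context_window` shape shown above (function name is illustrative):

```shell
# Compute usage percent from the native context_window JSON that
# Claude Code pipes to the status-line command on stdin.
percent_used() {
  jq -r '
    .context_window
    | (.total_input_tokens + .total_output_tokens) * 100
      / .context_window_size
    | floor
  '
}
```

With the example JSON above, this yields 9 (19,755 of 200,000 tokens, rounded down).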
- Claude Opus 4.6 (200k default, 1M with extended context)
- Claude Opus 4.5 (200k context)
- Claude Sonnet 4.x (200k context)
- Claude 3.5 Sonnet (200k context)
- Claude Haiku 3.5 (200k context)
- Unknown models default to 200k
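Dynamic detection can be sketched as: prefer the `context_window_size` Claude Code reports, otherwise default to 200k. A sketch, not the script's actual code:

```shell
# Pick the context limit dynamically. Prefer the context_window_size
# reported on stdin (this is how a 1M Opus 4.6 session would be picked
# up automatically); otherwise fall back to the 200k default.
context_limit() {
  local size
  size=$(jq -r '.context_window.context_window_size // empty' 2>/dev/null)
  printf '%s' "${size:-200000}"
}
```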
To enable AI-generated conversation names, set one of these environment variables in your shell profile (~/.zshrc or ~/.bashrc):
```bash
# Option 1: Anthropic (uses Claude Haiku 4.5)
export ANTHROPIC_API_KEY=sk-ant-...

# Option 2: OpenAI (uses GPT-4o Mini)
export OPENAI_API_KEY=sk-...

# Option 3: Google Gemini (uses Gemini 2.5 Flash)
export GEMINI_API_KEY=...
```
The script tries APIs in this order: Anthropic → OpenAI → Gemini → Fallback (first 3 words).
Without an API key, conversation names will be the first 3 words of the session summary or transcript (still useful!).
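The no-API-key fallback is simple enough to sketch in a line of shell (helper name hypothetical):

```shell
# First three words of the session summary, used as the conversation
# name when no API key is configured.
fallback_name() {
  printf '%s\n' "$1" | cut -d' ' -f1-3
}
```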
- Named sessions: If you've used `/rename`, the saved summary is sent to the AI for a 2-3 word name
- Active sessions: For sessions without a name yet, the first few assistant messages from the transcript are used to infer a topic
- Caching: Generated names are cached in `~/.cache/claude-pulse/` to avoid repeated API calls
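The caching step might look like the sketch below; the one-file-per-session layout inside `~/.cache/claude-pulse/` is an assumption for illustration:

```shell
# Look up a generated name before calling any API; only call the
# provider (and then save_name) on a cache miss.
pulse_cache_dir() {
  printf '%s' "${XDG_CACHE_HOME:-$HOME/.cache}/claude-pulse"
}
save_name() {   # save_name <session-id> <name>
  mkdir -p "$(pulse_cache_dir)"
  printf '%s' "$2" > "$(pulse_cache_dir)/$1"
}
cached_name() { # cached_name <session-id>; prints nothing on a miss
  local f="$(pulse_cache_dir)/$1"
  [ -f "$f" ] && cat "$f"
}
```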
After adding your API key, restart your terminal or run `source ~/.zshrc` to apply.
The script automatically detects your model and sets the appropriate context limit. No additional configuration needed!
You can! But claude-pulse offers:
- Always visible - No need to run `/context` manually
- More accurate - Uses billing API data, which reflects actual context consumption
- Automatic - Updates with every message
- Color-coded - Visual warnings as you approach limits
Status line shows "No token usage yet"
- This is normal for new sessions before the first API response
- Wait for Claude's first response
Script not running
- Verify `~/.claude/statusline-command.sh` exists and is executable
- Check that `jq` is installed: `which jq`
- Restart Claude Code after configuration changes
Token count seems off
- claude-pulse reads from transcript files, which update after each API response
- Small differences (<3%) from `/context` are normal due to timing
Small differences from /context
- Difference is typically 2-3k tokens (~3%)
- Both use context window data, but timing of updates may differ slightly
Inspired by ccusage by @ryoppippi.
MIT License - see LICENSE file for details.
Contributions welcome! Please open an issue or PR.
Made with ❤️ for the Claude Code community
