A local HTML dashboard for multi-provider AI token usage that auto-recalculates on refresh from local session data. Supports Codex, Claude, and PI Coding Agent.
- Provider toggle filtered to providers present on the system (`Combined`, `Codex`, `Claude`, `PI`)
- Range-aware stats cards for total, days, sessions, highest day, explicit input/output/cached token totals, and cost totals
- Today + calendar-week rollups
- Activity Rhythm heatmap + time-of-day summary for the selected provider and date range
- Daily breakdown table with sessions, input, output, cached, total tokens, and total cost columns
- Range-scoped breakdown table grouped by `Agent CLI + Model`, including cost totals
- Horizontal bar chart (with rank, total tokens, and session count) above the table
- Auto-recalc on browser refresh and every 5 minutes via a local `localhost` endpoint
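The recalc endpoint pattern is simple enough to illustrate end to end. The sketch below is a minimal stand-in for the real `scripts/ai_usage_recalc_server.py`, not its actual implementation: it serves `/health` and `/recalc` on localhost using only the standard library, with the real recalc work stubbed out.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RecalcHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
        elif self.path == "/recalc":
            # The real service would rerun the collect/aggregate/render
            # pipeline here before answering; this sketch just acknowledges.
            body = json.dumps({"recalculated": True}).encode()
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for a free port, so the demo never collides with 8765.
server = HTTPServer(("127.0.0.1", 0), RecalcHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/health") as resp:
    health = resp.read().decode()
print(health)  # {"status": "ok"}
server.shutdown()
```

Because the dashboard page calls the same-origin endpoint on refresh, the browser itself acts as the trigger; no polling daemon is strictly required beyond the optional LaunchAgent.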
- `dashboard/index.html`: Dashboard UI
- `scripts/ai_usage_recalc_server.py`: Thin local HTTP recalc service (`/health`, `/recalc`)
- `scripts/dashboard_core/config.py`: Runtime config/env resolution
- `scripts/dashboard_core/collectors.py`: Codex/Claude/PI usage ingestion
- `scripts/dashboard_core/aggregation.py`: Daily aggregation + date window logic
- `scripts/dashboard_core/render.py`: HTML rewrite + dataset injection
- `scripts/dashboard_core/pipeline.py`: End-to-end recalc orchestration
- `scripts/dashboard_core/pricing.py`: Built-in rate card + optional pricing override loader
- `scripts/run_local.sh`: Convenience launcher for local development
- `scripts/tests/test_harness_contracts.py`: Deterministic pipeline/harness invariants
- `launchd/*.plist.example`: Optional macOS LaunchAgent template
- `docs/harness-engineering-adoption.md`: Harness-engineering rationale + validation loop
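The modules above compose into a collect → aggregate → render flow. The following is a hedged sketch of that orchestration with invented stand-in functions and field names; the real `dashboard_core` interfaces may differ.

```python
from collections import defaultdict

def collect_events():
    # Stand-in for the Codex/Claude/PI collectors; each yields
    # (date, provider, input_tokens, output_tokens, cached_tokens) tuples.
    yield ("2024-05-01", "codex", 1200, 300, 400)
    yield ("2024-05-01", "claude", 800, 200, 100)
    yield ("2024-05-02", "codex", 500, 100, 0)

def aggregate_daily(events):
    # Roll per-session events up into one row per (date, provider).
    daily = defaultdict(lambda: {"sessions": 0, "input_tokens": 0,
                                 "output_tokens": 0, "cached_tokens": 0,
                                 "total_tokens": 0})
    for date, provider, inp, out, cached in events:
        row = daily[(date, provider)]
        row["sessions"] += 1
        row["input_tokens"] += inp
        row["output_tokens"] += out
        row["cached_tokens"] += cached
        row["total_tokens"] += inp + out + cached
    return dict(daily)

def render(daily):
    # The real pipeline injects this dataset into the dashboard HTML;
    # here we just return it as a plain payload.
    return {"daily": {f"{d}/{p}": row for (d, p), row in sorted(daily.items())}}

payload = render(aggregate_daily(collect_events()))
print(payload["daily"]["2024-05-01/codex"]["total_tokens"])  # 1900
```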
- macOS or Linux
- Python 3.9+
- Local Codex session logs in `~/.codex/sessions`
- Local Claude project logs in `~/.claude/projects` (optional; dashboard still works without Claude data)
- Local PI agent state in `~/.pi/agent` (optional; dashboard still works without PI data)
- Start the local recalc service:

  ```bash
  cd /path/to/ai-token-usage-dashboard
  chmod +x scripts/run_local.sh
  ./scripts/run_local.sh
  ```

- Open the dashboard:

  ```bash
  open http://127.0.0.1:8765/
  ```

- Refresh the page.
- On refresh, the dashboard calls `/recalc` on the same localhost server
- By default, the service rewrites `tmp/index.runtime.html` (untracked), so the git-tracked `dashboard/index.html` stays unchanged
- The page reloads with fresh values after recalc completes
Environment variables:

- `AI_USAGE_SERVER_HOST` (default: `127.0.0.1`)
- `AI_USAGE_SERVER_PORT` (default: `8765`)
- `AI_USAGE_CODEX_SESSIONS_ROOT` (default: `~/.codex/sessions`)
- `AI_USAGE_CLAUDE_PROJECTS_ROOT` (default: `~/.claude/projects`)
- `AI_USAGE_PI_AGENT_ROOT` (default: `~/.pi/agent`)
- `AI_USAGE_DASHBOARD_HTML` (default via `scripts/run_local.sh`: `<repo>/tmp/index.runtime.html`, seeded from `<repo>/dashboard/index.html`)
- `AI_USAGE_PRICING_FILE` (optional JSON rate-card override file merged over the built-in pricing table)
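To make the `AI_USAGE_PRICING_FILE` override concrete, here is a hedged sketch of a rate-card merge and cost derivation. The key layout, field names, model names, and rates below are all assumptions for illustration, not the project's actual pricing schema; the `cost_complete` flag mirrors the partial-cost behavior described in the notes.

```python
import json

# Hypothetical built-in rate card: USD per 1M tokens, keyed by "provider/model".
BUILTIN_PRICING = {
    "codex/example-model": {"input": 1.25, "output": 10.0, "cached": 0.125},
    "claude/example-model": {"input": 3.0, "output": 15.0, "cached": 0.30},
}

def load_pricing(override_path=None):
    # Later entries win: an override file can reprice existing models
    # or add new ones without touching the built-in table.
    pricing = dict(BUILTIN_PRICING)
    if override_path:
        with open(override_path) as fh:
            pricing.update(json.load(fh))
    return pricing

def cost_usd(pricing, key, input_tokens, output_tokens, cached_tokens):
    rates = pricing.get(key)
    if rates is None:
        # Unknown provider/model: surface a partial cost instead of a
        # silently trusted zero.
        return {"total_cost_usd": 0.0, "cost_complete": False}
    total = (input_tokens * rates["input"]
             + output_tokens * rates["output"]
             + cached_tokens * rates["cached"]) / 1_000_000
    return {"total_cost_usd": round(total, 6), "cost_complete": True}

print(cost_usd(load_pricing(), "codex/example-model", 1_000_000, 0, 0))
```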
Optional macOS LaunchAgent setup:

- Copy and edit the template:

  ```bash
  cp launchd/com.user.ai-token-usage-dashboard-recalc.plist.example \
    ~/Library/LaunchAgents/com.user.ai-token-usage-dashboard-recalc.plist
  ```

- Replace the placeholder absolute paths.
- Load it:

  ```bash
  launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.user.ai-token-usage-dashboard-recalc.plist
  launchctl kickstart -k gui/$(id -u)/com.user.ai-token-usage-dashboard-recalc
  ```

- Verify:

  ```bash
  curl http://127.0.0.1:8765/health
  ```

The dashboard is designed for local use and reads local AI provider session logs from Codex, Claude, and PI when present.
- Daily rows in the injected dataset include `sessions`, `input_tokens`, `output_tokens`, `cached_tokens`, `total_tokens`, `input_cost_usd`, `output_cost_usd`, `cached_cost_usd`, `total_cost_usd`, `cost_complete`, and `breakdown_rows` grouped by `(agent_cli, model)`.
- A built-in versioned pricing table is used to derive Codex and Claude costs, and can be overridden via `AI_USAGE_PRICING_FILE`.
- Codex usage keeps the latest `token_count` snapshot per session, extracts `originator`/`source` for the CLI bucket, uses the latest observed `turn_context.payload.model` when present, and prices uncached input separately from cached tokens.
- Claude request usage is deduplicated by `(sessionId, requestId)`, keeps the highest observed token values for each request, computes `cached_tokens = cache_creation_input_tokens + cache_read_input_tokens`, and derives cost from the model rate card.
- PI usage is read from `~/.pi/agent/sessions/**/*.jsonl`, tracks the active model via `model_change` events, computes `cached_tokens = cacheRead + cacheWrite`, and prefers native `message.usage.cost.*` values when present.
- Unmapped provider/model pricing is surfaced as a partial cost in the API/UI instead of being silently treated as a trusted zero cost.
- No third-party services are required.
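The Claude deduplication rule above (one row per `(sessionId, requestId)`, keeping the highest observed value for each token counter) can be sketched as follows. The field names follow the bullets, but treat the exact record shape as an assumption about the log format.

```python
def dedupe_claude_usage(records):
    """Collapse repeated usage records for the same request, keeping
    the highest observed value for each token counter."""
    best = {}
    for rec in records:
        key = (rec["sessionId"], rec["requestId"])
        seen = best.setdefault(key, {"input_tokens": 0, "output_tokens": 0,
                                     "cache_creation_input_tokens": 0,
                                     "cache_read_input_tokens": 0})
        for field in seen:
            seen[field] = max(seen[field], rec.get(field, 0))
    # Derive cached_tokens exactly as the bullet above describes.
    for seen in best.values():
        seen["cached_tokens"] = (seen["cache_creation_input_tokens"]
                                 + seen["cache_read_input_tokens"])
    return best

rows = dedupe_claude_usage([
    {"sessionId": "s1", "requestId": "r1", "input_tokens": 100,
     "output_tokens": 20, "cache_read_input_tokens": 50},
    # A later, more complete record for the same request wins field-by-field.
    {"sessionId": "s1", "requestId": "r1", "input_tokens": 100,
     "output_tokens": 40, "cache_creation_input_tokens": 30,
     "cache_read_input_tokens": 50},
])
print(rows[("s1", "r1")]["cached_tokens"])  # 80
```

Keeping the field-wise maximum rather than the last record makes the result stable even when partial usage records arrive out of order.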
Run the full local harness checks:

```bash
python3 -m unittest discover -s scripts/tests
```

This repo is skill-ready and includes:

- `SKILL.md` (trigger metadata + workflow instructions)
- `agents/openai.yaml` (skill UI metadata)
Install it into Codex:

```bash
mkdir -p ~/.codex/skills
ln -sfn /absolute/path/to/ai-token-usage-dashboard \
  ~/.codex/skills/ai-token-usage-dashboard
```

Restart Codex after installation.
Ask with the skill name, for example:

```text
Use $ai-token-usage-dashboard to recalc and update my usage dashboard.
```
