Conversation

b3nw (Contributor) commented Jan 28, 2026

Fixes metadata token counting for Anthropic-format API responses (used by the /v1/messages endpoint).

The _log_metadata method in TransactionLogger only supported OpenAI-format usage keys (prompt_tokens, completion_tokens), but Anthropic responses use different keys (input_tokens, output_tokens). This caused null token counts in metadata.json for providers like dedaluslabs and firmware when using the Anthropic-compatible /v1/messages endpoint.

Changes:

  • Add a fallback from OpenAI to Anthropic format for token counts (prompt_tokens → input_tokens, completion_tokens → output_tokens); see the sketch after this list
  • Use explicit None checks instead of 'or' so that legitimate 0 values are preserved
  • Calculate total_tokens if missing from Anthropic responses (sum of input + output)
  • Handle stop_reason (Anthropic format) as well as finish_reason (OpenAI format)
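
A minimal sketch of the intended lookup order, assuming usage arrives as a plain dict (the helper names here are illustrative, not the actual TransactionLogger internals):

    def _extract_token_counts(usage: dict) -> dict:
        # Prefer OpenAI keys, fall back to Anthropic keys; explicit None
        # checks keep a real 0 from triggering the fallback.
        prompt = usage.get("prompt_tokens")
        if prompt is None:
            prompt = usage.get("input_tokens")
        completion = usage.get("completion_tokens")
        if completion is None:
            completion = usage.get("output_tokens")
        total = usage.get("total_tokens")
        if total is None and prompt is not None and completion is not None:
            total = prompt + completion  # Anthropic responses omit total_tokens
        return {
            "prompt_tokens": prompt,
            "completion_tokens": completion,
            "total_tokens": total,
        }

    def _extract_finish_reason(payload: dict):
        # OpenAI reports finish_reason; Anthropic reports stop_reason.
        reason = payload.get("finish_reason")
        return reason if reason is not None else payload.get("stop_reason")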

Testing Done

  • Verified dedaluslabs and firmware providers now log token counts correctly in metadata.json when using /v1/messages endpoint
  • Confirmed OpenAI format responses continue to work unchanged
  • Tested the edge case where token counts are 0 (now correctly logged as 0 instead of falsely falling back to the other format's keys)

b3nw and others added 24 commits January 23, 2026 21:32
- ai-merge-dev.yml: Merges selected branches into dev with AI conflict resolution
- update-branch-list.yml: Auto-updates branch checkboxes on branch create/delete

These files must live on main since dev gets reset to main each merge run.
- Fix $schema escaping by using DOLLAR variable
- Add LLM_PROXY_MODEL secret for configurable model (defaults to claude-sonnet-4-20250514)
- Add reset_dev boolean input (default: false) to preserve existing dev state
- Only reset dev to main when explicitly requested via checkbox
- Prevents regression of previously merged features
- Make custom_branches optional since checkboxes can be used
- Add sync_upstream checkbox to pull from Mirrowel/LLM-API-Key-Proxy
- Add upstream_branch input to choose which branch to sync from (default: dev)
- AI resolves conflicts with upstream if they occur
- Allow running with just upstream sync (no feature branches required)
- Update template generator for consistency
- Check if there are staged changes before attempting commit
- Report "Already merged" status for branches with no new changes
- Prevents failure when re-merging branches that are already in dev
- Remove dev from build.yml and docker-build.yml push triggers
- Builds now only run on main branch pushes
- Add AI-MERGE.md documenting the multi-branch merge system
- Replace bot-setup action with inline Opencode LLM Proxy config
- Update BOT_NAMES_JSON from mirrobot variants to github-actions[bot]
- Change trigger command from /mirrobot-review to /ai-review
- Replace all steps.setup.outputs.token with secrets.GH_PAT
- Uses existing LLM_PROXY_* secrets for AI code review
Users with FIRMWARE_API_KEY_* but no FIRMWARE_API_BASE were getting
errors because LiteLLM doesn't recognize firmware as a native provider.

This adds a specific fallback to use firmware's default api_base
(https://app.firmware.ai/api/v1) when not explicitly configured.
Firmware.ai uses custom model naming (e.g., firmware/anthropic/claude-sonnet-4-5)
that isn't in LiteLLM's pricing database, causing "Provider List" spam and
BadRequestError during cost calculation.

Add skip_cost_calculation=True to FirmwareProvider to prevent these errors.
The provider is OpenAI-compatible and routes through LiteLLM's OpenAI provider
with api_base override, so no custom acompletion() logic is needed.
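
Taken together, a rough sketch of how the two firmware fixes fit (the class shape and method name are assumptions for illustration; only FirmwareProvider, skip_cost_calculation, and the default api_base come from the commits above):

    FIRMWARE_DEFAULT_API_BASE = "https://app.firmware.ai/api/v1"

    class FirmwareProvider:
        # Firmware model IDs such as "firmware/anthropic/claude-sonnet-4-5"
        # are missing from LiteLLM's pricing database, so skip cost
        # calculation rather than letting it raise BadRequestError.
        skip_cost_calculation = True

        def resolve_api_base(self, configured: str | None) -> str:
            # An explicit FIRMWARE_API_BASE wins; otherwise fall back to
            # the default so LiteLLM's OpenAI-compatible route works.
            return configured or FIRMWARE_DEFAULT_API_BASE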
…lume mounts

When writing atomically with tempfile + shutil.move(), writing to a symlink
path causes shutil.move() to replace the symlink with the temp file instead
of writing through the symlink to the target.

This broke persistence for Docker deployments using the entrypoint pattern:
  ln -sf /app/data/key_usage.json /app/key_usage.json

After the first write, the symlink was replaced with a regular file in the
container's overlay filesystem, and the persistent volume was never updated.
On container restart, the symlink was recreated pointing to the stale/empty
file in the volume, causing stats to reset.

The fix resolves symlinks before performing atomic writes in:
- BufferedWriteRegistry._try_write()
- ResilientStateWriter._try_disk_write()
- safe_write_json()
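
As a sketch of the pattern (signature simplified; the real helpers listed above share the same resolve-then-rename sequence):

    import os
    import shutil
    import tempfile

    def safe_write_json(path: str, data: str) -> None:
        # Resolve the symlink first: renaming a temp file onto a symlink
        # path replaces the link itself rather than its target.
        real_path = os.path.realpath(path)
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(real_path))
        try:
            with os.fdopen(fd, "w") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
            shutil.move(tmp_path, real_path)  # atomic rename, same filesystem
        except BaseException:
            os.unlink(tmp_path)
            raise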
- Remove all AI-powered workflows (bot-reply, pr-review, issue-comment, ai-merge-dev, compliance-check)
- Remove upstream CI workflows (build.yml, cleanup.yml, update-branch-list.yml, status-check-init.yml)
- Remove AI-related support files (.github/prompts/, .github/actions/bot-setup/, AI-MERGE.md, cliff.toml)
- Keep only docker-build.yml, configured to build container images on dev branch pushes
- Add jq to Docker production image for runtime JSON parsing
- Add docker-compose.dev.yml for deployments using pre-built GHCR images

Build optimizations:
- Add GitHub Actions cache for Docker layers (cache-from/cache-to)
- Build only linux/amd64 (removes slow QEMU arm64 emulation)
- Expected build time: ~2-3 min (down from ~12 min)
Combines three related fixes for firmware provider:
- Properly configure LiteLLM for Firmware.ai OpenAI-compatible API
- Estimate quota limit from API ratio for display
- Ensure window exists before reading quota stats
The _log_metadata method only supported OpenAI format usage keys
(prompt_tokens, completion_tokens) but Anthropic responses use
different keys (input_tokens, output_tokens). This caused null
token counts in metadata.json for dedaluslabs and firmware providers
when using the /v1/messages endpoint.

Changes:
- Add fallback from OpenAI to Anthropic format for token counts
- Use explicit None checks instead of 'or' to handle 0 values
- Calculate total_tokens if missing from Anthropic responses
- Handle stop_reason (Anthropic) as well as finish_reason (OpenAI)
@b3nw b3nw requested a review from Mirrowel as a code owner January 28, 2026 01:07
@b3nw b3nw closed this Jan 28, 2026