Enhancement: LLM API proxy as first-class use case (LiteLLM, OpenAI, Anthropic) #99

@rodaddy

Description

Summary

Jentic Mini already brokers any HTTP API with credential injection. LLM APIs (OpenAI, Anthropic, self-hosted LiteLLM proxies) are a natural fit -- agents calling LLMs through Jentic would get:

  • Credential isolation -- agents never see raw API keys
  • Per-agent model restrictions -- RBAC policies scoping which models/endpoints each toolkit can hit
  • Centralized LLM audit trail -- every model call across every agent logged with toolkit ID, operation, latency, status
  • Cost visibility -- trace data enables per-agent/per-toolkit usage reporting
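From the agent's side, the credential-isolation flow above is simple: the agent sends an OpenAI-shaped request to the broker's endpoint using only a broker-issued toolkit token, and the broker injects the real provider key before forwarding upstream. A minimal sketch, assuming hypothetical broker URL paths, header names, and env vars (these are placeholders, not Jentic's actual configuration surface):

```python
import json
import os
import urllib.request

# Hypothetical broker endpoint and toolkit credential -- placeholder
# names, not Jentic's real config surface.
BROKER_BASE = os.environ.get("JENTIC_BROKER_URL", "http://localhost:8080")
TOOLKIT_TOKEN = os.environ.get("JENTIC_TOOLKIT_TOKEN", "tk-demo")

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat request routed through the broker.

    The agent authenticates with its toolkit token only; the broker
    swaps in the real provider key before calling upstream.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BROKER_BASE}/proxy/openai/v1/chat/completions",  # hypothetical route
        data=body,
        headers={
            "Content-Type": "application/json",
            # Broker credential -- the raw sk-... key never reaches the agent.
            "Authorization": f"Bearer {TOOLKIT_TOKEN}",
        },
        method="POST",
    )

req = build_chat_request("gpt-4o-mini", "Say hello.")
print(req.full_url)                     # broker URL, not api.openai.com
print(req.get_header("Authorization"))  # toolkit token, not the provider key
```

Because the request shape is unchanged OpenAI JSON, existing agent SDKs can usually point at the broker just by overriding their base URL.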

What Would Make This Better

The broker already handles this today via a spec import plus a credential. A few additions would make LLM proxying a first-class pattern:

  1. Response body size in traces -- token counts or response sizes in the trace table would enable cost tracking without parsing every response
  2. Rate limiting per toolkit -- cap requests/minute per toolkit to prevent runaway agents from burning through quotas
  3. Template spec for common LLM APIs -- pre-built specs for OpenAI, Anthropic, and LiteLLM in the catalog so users don't have to hand-craft them
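For point 2, the mechanism could be as small as a token bucket keyed by toolkit ID in the broker's request path. A sketch of that idea (illustrative only, not Jentic's implementation; class and method names are invented):

```python
import time
from collections import defaultdict

class ToolkitRateLimiter:
    """Token-bucket limiter keyed by toolkit ID (hypothetical sketch).

    Each toolkit gets `rate_per_min` requests per minute with bursts up
    to `burst`; a denied call would map to an HTTP 429 from the broker.
    """

    def __init__(self, rate_per_min: float, burst: int):
        self.rate = rate_per_min / 60.0  # tokens refilled per second
        self.burst = burst
        # toolkit_id -> [available tokens, last refill timestamp]
        self.buckets = defaultdict(lambda: [float(burst), time.monotonic()])

    def allow(self, toolkit_id: str) -> bool:
        tokens, last = self.buckets[toolkit_id]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[toolkit_id] = [tokens - 1, now]
            return True
        self.buckets[toolkit_id] = [tokens, now]
        return False

limiter = ToolkitRateLimiter(rate_per_min=60, burst=2)
print([limiter.allow("agent-a") for _ in range(3)])  # third rapid call denied
print(limiter.allow("agent-b"))  # independent toolkit has its own bucket
```

Keeping the bucket keyed on toolkit ID means one runaway agent exhausts only its own quota, which pairs naturally with the per-toolkit RBAC scoping above.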

Not a bug, just an observation that Jentic is already 90% of the way to being an excellent LLM API gateway for multi-agent setups.

Labels: enhancement