Labels: enhancement (New feature or request)
Summary
Jentic Mini already brokers any HTTP API with credential injection. LLM APIs (OpenAI, Anthropic, self-hosted LiteLLM proxies) are a natural fit -- agents that call LLMs through Jentic would get:
- Credential isolation -- agents never see raw API keys
- Per-agent model restrictions -- RBAC policies scoping which models/endpoints each toolkit can hit
- Centralized LLM audit trail -- every model call across every agent logged with toolkit ID, operation, latency, status
- Cost visibility -- trace data enables per-agent/per-toolkit usage reporting
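As a sketch of what an audit-trail entry covering the points above might look like, here is a small helper that builds one record per LLM call. The field names and schema are illustrative assumptions, not Jentic's actual trace format; the `usage` dict mirrors the shape of OpenAI-style response bodies.

```python
import time

def make_trace_record(toolkit_id, operation, model, status, started_at, usage=None):
    """Build a hypothetical audit-trail entry for one brokered LLM call.

    `usage` would come from the provider's response body (e.g. the
    `usage` object in an OpenAI-style chat completion response).
    """
    record = {
        "toolkit_id": toolkit_id,
        "operation": operation,
        "model": model,
        "status": status,
        "latency_ms": round((time.monotonic() - started_at) * 1000, 1),
    }
    if usage:
        # Storing token counts at trace time enables per-agent cost
        # reporting later without re-parsing every stored response.
        record["prompt_tokens"] = usage.get("prompt_tokens")
        record["completion_tokens"] = usage.get("completion_tokens")
    return record

start = time.monotonic()
rec = make_trace_record(
    "toolkit-abc", "chat.completions", "gpt-4o", 200, start,
    usage={"prompt_tokens": 12, "completion_tokens": 40},
)
```

Because the broker sits in the request path anyway, it can populate a record like this for every call with no agent-side changes.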
What Would Make This Better
The broker already handles this today with a spec import + credential. A few things that would make LLM proxying a first-class pattern:
- Token counts or response sizes in traces -- recording token usage (or at least response body size) in the trace table would enable cost tracking without parsing every stored response
- Rate limiting per toolkit -- cap requests/minute per toolkit to prevent runaway agents from burning through quotas
- Template spec for common LLM APIs -- pre-built specs for OpenAI, Anthropic, and LiteLLM in the catalog so users don't have to hand-craft them
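The per-toolkit rate limiting above could be as simple as a token bucket keyed by toolkit ID. This is a minimal sketch of that idea, not Jentic's implementation; the class name and API are invented for illustration.

```python
import time
from collections import defaultdict

class ToolkitRateLimiter:
    """Token-bucket limiter: at most `rate_per_minute` requests per toolkit."""

    def __init__(self, rate_per_minute):
        self.rate = rate_per_minute
        # Each toolkit starts with a full bucket.
        self.tokens = defaultdict(lambda: float(rate_per_minute))
        self.last = {}

    def allow(self, toolkit_id, now=None):
        """Return True if this toolkit may make a request right now."""
        now = time.monotonic() if now is None else now
        last = self.last.get(toolkit_id, now)
        # Refill proportionally to elapsed time, capped at the bucket size.
        self.tokens[toolkit_id] = min(
            self.rate,
            self.tokens[toolkit_id] + (now - last) * self.rate / 60.0,
        )
        self.last[toolkit_id] = now
        if self.tokens[toolkit_id] >= 1.0:
            self.tokens[toolkit_id] -= 1.0
            return True
        return False

limiter = ToolkitRateLimiter(rate_per_minute=2)
results = [limiter.allow("agent-a", now=0.0) for _ in range(3)]
# → [True, True, False]: the third call is rejected until the bucket refills
refilled = limiter.allow("agent-a", now=60.0)
# → True: a full minute has passed, so the bucket has refilled
```

A broker-side limiter like this caps a runaway agent's burn rate without touching the upstream provider's quota configuration.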
This isn't a bug report, just an observation that Jentic is already 90% of the way to being an excellent LLM API gateway for multi-agent setups.