[FEATURE] LLM Model Management Dashboard #144

@junaidferoz

Description

Issue: LLM Integration Layer & Dashboard

Summary

CARE currently has no built-in way to talk to LLMs. Every LLM-based feature requires custom components, and there's no cost tracking or prompt management. This issue covers adding a generic LLM integration layer so users can plug in their own API keys, write prompts, and track usage -- all without touching code.

Problem

  • Users can't connect to LLMs (OpenAI, Anthropic, Google) without developer intervention.
  • There's no way to track what LLM calls cost or what was sent/received.
  • The existing NLP broker adds unnecessary latency for simple LLM API calls.
  • Prompts are hardcoded per feature instead of being user-configurable.

Proposed Solution

Backend

  • A new LLMService that calls provider APIs directly over HTTP, skipping the NLP broker.
  • Four new database tables: api_key, llm_provider, llm_log, and prompt_template.
  • AES-256-GCM encryption for stored API keys.
  • Full I/O logging of every LLM request for cost tracking and research.
  • Seeded provider entries for OpenAI, Anthropic, and Google out of the box.

Frontend

  • A unified LLM Dashboard page with API key management, prompt template editor, cost breakdown by provider, and a filterable request log.
  • An LLM Providers admin page for enabling/disabling providers and restricting models system-wide.
  • Vuex store integration following the same pattern as the existing NLP service.

Prompt Templates

  • Users write prompts with {{placeholders}} and can preview or test-run them from the dashboard.
  • Templates can be shared system-wide, per study, or per project.
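Filling in the `{{placeholders}}` could be as simple as a regex substitution. This is a minimal sketch; `renderTemplate` is a hypothetical helper name, and the behavior of leaving unknown placeholders untouched is an assumption.

```javascript
// Hypothetical template renderer for the {{placeholder}} syntax above.
// Unknown placeholders are left as-is so a preview makes the gap visible.
function renderTemplate(template, params) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(params, name) ? String(params[name]) : match
  );
}
```

Usage: `renderTemplate('Summarize {{text}} in {{lang}}.', { text: '...', lang: 'German' })`.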

Acceptance Criteria

  • Users can add, edit, enable/disable, and delete API keys from the dashboard.
  • Users can create, edit, duplicate, and delete prompt templates with parameter placeholders.
  • Every LLM call is logged with provider, model, tokens, cost, latency, input, and output.
  • The dashboard shows usage stats (total requests, tokens, estimated cost).
  • The request log supports filtering by provider, status, and time range, plus CSV export.
  • Admins can manage providers and control which models are available.
  • API keys are encrypted at rest and masked in the UI.
  • LLM calls bypass the NLP broker and go directly to provider APIs.
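For the "masked in the UI" criterion, one common convention is to show only the first and last few characters of a key. The helper below is a sketch under that assumption; the exact masking format is not specified in the issue.

```javascript
// Hypothetical masking helper for displaying stored API keys in the UI.
// Shows the first and last 4 characters; fully masks very short keys.
function maskApiKey(key) {
  if (key.length <= 8) return '*'.repeat(key.length);
  return key.slice(0, 4) + '…' + key.slice(-4);
}
```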

Out of Scope (for now)

  • Model browser / comparison page.
  • Automatic input mapping from UI components to prompt template parameters.
  • Backend enforcement of sharing scopes (study/project-level access control).
  • Workflow step integration with prompt templates.

Related Files

Area        Path
Service     backend/webserver/services/llm.js
Encryption  backend/utils/encryption.js
Models      backend/db/models/api_key.js, llm_provider.js, llm_log.js, prompt_template.js
Migrations  backend/db/migrations/20260331100000 through 20260331100006
Dashboard   frontend/src/components/dashboard/LlmDashboard.vue
Admin page  frontend/src/components/dashboard/LlmProviders.vue
Store       frontend/src/store/modules/service.js
Socket      backend/webserver/sockets/service.js
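Since `llm.js` calls provider APIs directly over HTTP (bypassing the NLP broker), the request construction could be sketched as below. `buildChatRequest` is a hypothetical helper; only the OpenAI case is shown, and its endpoint and Bearer-token header follow OpenAI's published API.

```javascript
// Hypothetical request builder for the direct-to-provider HTTP calls.
// Returns a plain description of the request so it can be unit-tested
// without network access; a caller would pass it to fetch().
function buildChatRequest(provider, apiKey, model, prompt) {
  if (provider === 'openai') {
    return {
      url: 'https://api.openai.com/v1/chat/completions',
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model, messages: [{ role: 'user', content: prompt }] }),
    };
  }
  // Anthropic and Google would get their own branches with their own
  // auth headers and request shapes.
  throw new Error(`Unsupported provider: ${provider}`);
}
```

Keeping request construction pure like this also makes it easy to log the exact input sent to each provider, which the full I/O logging requirement needs anyway.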

Metadata

Labels

frontend: requires changes in the frontend of CARE
