Bug: llmConfig for non-OpenAI providers is ignored, framework defaults to OpenAI when VITE_OPENAI_API_KEY is present. #264

@jaylooloomi

Description

There seems to be a critical bug in how the library handles LLM provider configuration, which makes it impossible to use non-OpenAI models if the kaiban-board UI component is also used.

The core of the problem is a catch-22:

1. The kaiban-board UI component requires the VITE_OPENAI_API_KEY environment variable to be present in the .env file. If it is missing, the UI throws a "Missing Keys" error and the application cannot start.
2. However, the mere presence of the VITE_OPENAI_API_KEY variable seems to set OpenAI as a non-overridable, global default LLM provider for the backend library.
3. This global default takes precedence over any agent-specific llmConfig. As a result, even when an agent is explicitly configured to use Google Gemini, the library ignores that configuration and still calls the OpenAI API, leading to a 401 Unauthorized error.

Steps to Reproduce

1. Set up a project with kaibanjs (v0.23.1) and kaiban-board.
2. In the .env file, provide a valid VITE_GOOGLE_API_KEY and a deliberately invalid VITE_OPENAI_API_KEY (e.g., VITE_OPENAI_API_KEY=dummy-key).
3. Configure an agent to use Google Gemini, providing the API key directly in its llmConfig, as per the documentation.
4. Create a team and a simple task for this agent.
5. Run the workflow.
6. Observe the browser's network console: you will see a POST request to https://api.openai.com/v1/chat/completions that fails with a 401 error, using the invalid key from VITE_OPENAI_API_KEY.
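For reference, the reproduction configuration can be sketched as plain objects. This is only a sketch: the field names follow the KaibanJS documentation, but the Gemini model id and placeholder values are assumptions, and `import.meta.env` is stubbed so the snippet runs outside Vite.

```javascript
// Sketch of the reproduction setup, using plain objects.
// Field names follow the KaibanJS docs; the Gemini model id and key
// values are assumptions for illustration only.

// .env contents: a valid Google key, a deliberately bad OpenAI key
//   VITE_GOOGLE_API_KEY=<valid-key>
//   VITE_OPENAI_API_KEY=dummy-key

// Stand-in for Vite's import.meta.env, so the sketch runs anywhere:
const env = { VITE_GOOGLE_API_KEY: '<valid-google-key>' };

// Agent-level llmConfig -- the configuration that ends up being ignored:
const geminiLlmConfig = {
  provider: 'google',
  model: 'gemini-1.5-pro', // any Gemini model id
  apiKey: env.VITE_GOOGLE_API_KEY,
};

// In the real app, these objects would be passed to the KaibanJS
// constructors, roughly:
//   const agent = new Agent({ name: 'Researcher', role: '...', goal: '...',
//                             llmConfig: geminiLlmConfig });
//   const team  = new Team({ name: 'Demo Team', agents: [agent], tasks: [task] });
```

Despite this explicit `provider: 'google'` configuration, the observed request goes to the OpenAI endpoint with the dummy key.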
What was tried

We went through an extensive debugging process, including:

- Scoping the API keys at the Team level (the env object).
- Scoping the API keys at the Agent level (llmConfig).
- Removing the env object from the Team configuration entirely.
- Creating a custom GoogleGeminiTool to bypass the agent's core LLM. This also failed, because the agent's initial "thinking" step (deciding whether to use the tool) is hardcoded to the default OpenAI model and fails before it can even select the custom tool.
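The first two scoping attempts can be sketched as plain config objects. Field names follow the KaibanJS documentation; the key values and model id are placeholders, not real credentials. Neither variant changed which provider the agent's first "thinking" call actually hit.

```javascript
// Two of the scoping attempts, as plain objects (placeholder values).

// Attempt 1: keys scoped at the Team level via the env object,
// i.e. passed as new Team({ ...teamConfig })
const teamConfig = {
  name: 'Demo Team',
  env: { GOOGLE_API_KEY: '<valid-google-key>' },
};

// Attempt 2: key scoped at the Agent level via llmConfig,
// i.e. passed as new Agent({ ...agentConfig })
const agentConfig = {
  name: 'Researcher',
  llmConfig: {
    provider: 'google',
    model: 'gemini-1.5-pro', // assumed model id
    apiKey: '<valid-google-key>',
  },
};
```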
Root Cause Analysis

The kaibanjs library appears to have a configuration priority issue. The detection of VITE_OPENAI_API_KEY sets a global default that cannot be overridden by agent-specific configurations. The agent's core "thinking" process seems hardcoded to use this default provider, making it impossible to use any other LLM for foundational reasoning.

Expected Behavior

The llmConfig provided during an Agent's initialization should have the highest priority. It should always override any global defaults set by environment variables, allowing developers to use different LLMs for different agents within the same project.
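The expected precedence can be expressed as a small resolver. This is a hypothetical sketch of the desired behavior, not KaibanJS internals; none of these function or constant names come from the library.

```javascript
// Hypothetical sketch of the expected precedence:
// agent llmConfig > team-level default > environment-derived global default.
// Names are illustrative, not KaibanJS internals.

const GLOBAL_DEFAULT = { provider: 'openai', model: 'gpt-4o' }; // assumed default

function resolveLlmConfig(agentLlmConfig, teamDefault) {
  // Highest priority: explicit per-agent configuration
  if (agentLlmConfig && agentLlmConfig.provider) return agentLlmConfig;
  // Next: a team-wide default, if one is set
  if (teamDefault && teamDefault.provider) return teamDefault;
  // Last resort: the global default inferred from environment variables
  return GLOBAL_DEFAULT;
}

// An agent configured for Gemini should resolve to Gemini, even when
// VITE_OPENAI_API_KEY is present in the environment:
const resolved = resolveLlmConfig(
  { provider: 'google', model: 'gemini-1.5-pro', apiKey: '<valid-google-key>' },
  null
);
// resolved.provider is 'google'
```

Under this scheme, the environment variable would only ever supply a fallback, never override an explicit llmConfig.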

Thank you for your work on this promising framework. I hope this detailed report helps in resolving this critical issue.
