This guide runs Kelvin with the first-party OpenAI model plugin on the SDK path.
Prerequisites:

- `OPENAI_API_KEY` set in your shell.
- Installed plugin trust policy and plugin home (defaults are fine).
- CLI plugin installed (required preflight in the `kelvin-sdk` runtime composition).
Install the CLI plugin:

```sh
scripts/install-kelvin-cli-plugin.sh
```

Install the OpenAI model plugin:

```sh
scripts/install-kelvin-openai-plugin.sh
```

Default index URL:
`https://raw.githubusercontent.com/agentichighway/kelvinclaw-plugins/main/index.json`

Both installers support the overrides `KELVIN_PLUGIN_HOME` and `KELVIN_TRUST_POLICY_PATH`.
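A minimal sketch of how the two override variables compose with defaults; the fallback paths shown here are illustrative assumptions, not the installers' documented defaults:

```sh
#!/bin/sh
# Sketch only: the fallback paths below are assumptions for
# illustration, not the installers' real default locations.
PLUGIN_HOME="${KELVIN_PLUGIN_HOME:-$HOME/.kelvin/plugins}"
TRUST_POLICY="${KELVIN_TRUST_POLICY_PATH:-$HOME/.kelvin/trust-policy.json}"
echo "plugin home:  $PLUGIN_HOME"
echo "trust policy: $TRUST_POLICY"
```

An exported value takes precedence, so running an installer as `KELVIN_PLUGIN_HOME=/tmp/plugins scripts/install-kelvin-openai-plugin.sh` installs under `/tmp/plugins` instead of the default.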
```sh
export OPENAI_API_KEY="<your_key>"
cargo run -p kelvin-host -- \
  --prompt "Summarize KelvinClaw in one sentence." \
  --model-provider kelvin.openai \
  --memory fallback
```

Expected behavior:
- runtime loads installed plugins through signature + manifest checks
- model provider is selected explicitly by plugin id (`kelvin.openai`)
- request executes through the generic `provider_profile_call` guest ABI
- host resolves the declarative `openai.responses` provider profile object and performs the OpenAI HTTPS call
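Because the provider is selected explicitly by plugin id, a local preflight can catch a missing install before the runtime reports a typed load error. This is only a sketch: the one-directory-per-plugin-id layout under the plugin home is an assumption, not the runtime's actual on-disk format:

```sh
#!/bin/sh
# Hypothetical preflight: check that a plugin id has an install
# directory under the plugin home. The layout is assumed here.
preflight_plugin() {
  plugin_id="$1"
  plugin_home="${KELVIN_PLUGIN_HOME:-$HOME/.kelvin/plugins}"
  if [ -d "$plugin_home/$plugin_id" ]; then
    echo "ok: $plugin_id installed under $plugin_home"
  else
    echo "missing: $plugin_id not found under $plugin_home" >&2
    return 1
  fi
}
```

For example, `preflight_plugin kelvin.openai && cargo run -p kelvin-host -- ...` skips the run entirely when the plugin directory is absent.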
Run the mock-backed SDK integration test:

```sh
cargo test -p kelvin-sdk --lib run_with_sdk_uses_installed_openai_model_plugin_via_mock_server -- --nocapture
```

This validates the full SDK + WASM model-provider path without requiring live network access or secrets.
Error modes:

- missing plugin id or install path: typed configuration/load error
- missing `OPENAI_API_KEY`: typed invalid-input error before the outbound call
- host not in allowlist: typed invalid-input error
- provider 4xx/5xx: typed backend error
- malformed plugin output: typed invalid-input error
No silent fallback is performed when `--model-provider` is set.
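Since a missing `OPENAI_API_KEY` surfaces as a typed invalid-input error before any outbound call, a wrapper script can replicate the check and fail fast before `cargo run` starts. A minimal sketch; the error message is illustrative, not the runtime's actual wording:

```sh
#!/bin/sh
# Sketch: fail fast when the key is absent, mirroring the runtime's
# typed invalid-input error for a missing OPENAI_API_KEY.
require_api_key() {
  if [ -z "${OPENAI_API_KEY:-}" ]; then
    echo "error: OPENAI_API_KEY is not set" >&2
    return 1
  fi
}
```

Usage: `require_api_key && cargo run -p kelvin-host -- --model-provider kelvin.openai ...`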