Bug Description
animaworks anima set-model <agent> <model> only updates the model field in status.json, but leaves the credential field unchanged. When switching between model providers (e.g., from an Anthropic model to a local OpenAI-compatible model), the agent ends up calling the wrong endpoint.
Steps to Reproduce
# Prerequisite: agent is configured with credential for provider A
animaworks anima set-model <agent-name> openai/<some-model>
# Verify: credential still points to old provider
cat ~/.animaworks/animas/<agent-name>/status.json | grep credential
# → old credential remains unchanged
# Result: agent appears to start, but does not respond
animaworks anima restart <agent-name>
# Sending a message to the agent → no response
Expected Behavior
When set-model is called with a model that belongs to a different provider than the current credential, the command should either:
- Automatically update the credential field to match the new model's provider, OR
- Warn the user that the credential was not updated and suggest the correct value, OR
- Require an explicit --credential flag when switching providers
Actual Behavior
Only the model field is updated in status.json. The credential field retains the old value, causing an endpoint mismatch. The agent restarts without errors but silently fails to process messages.
Impact
- Agents with mismatched model/credential become unresponsive after set-model + restart
- Silent failure: the process starts normally but all LLM calls fail internally
- Requires manual status.json editing to recover
Proposed Fix
Option A (Recommended): Add --credential option
animaworks anima set-model <agent> <model> --credential <credential>
- When --credential is specified, update both model and credential atomically
- When --credential is omitted but a provider change is detected, emit a warning:
Warning: model provider changed, but --credential was not specified.
Current credential: anthropic
Suggested credential for openai/* models: <local-endpoint-credential>
Re-run with: animaworks anima set-model <agent> <model> --credential <credential>
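A minimal sketch of Option A's logic, assuming a hypothetical status.json layout and a hypothetical credential-to-provider registry (animaworks' real internals may differ; all names below are illustrative):

```python
import json
from pathlib import Path
from typing import List, Optional

# Hypothetical registry mapping credential names to providers; real
# animaworks credential metadata would replace this table.
CREDENTIAL_PROVIDERS = {
    "anthropic": "anthropic",
    "local-openai": "openai",
}

def model_provider(model: str) -> str:
    """Guess a provider from the model-name prefix (assumed convention)."""
    if model.startswith("openai/"):
        return "openai"
    if model.startswith(("claude-", "anthropic/")):
        return "anthropic"
    if model.startswith("vertex/"):
        return "vertex"
    return "unknown"

def set_model(status_path: Path, model: str,
              credential: Optional[str] = None) -> List[str]:
    """Update model (and optionally credential) in status.json; return warnings."""
    status = json.loads(status_path.read_text())
    warnings: List[str] = []
    if credential is None:
        old = CREDENTIAL_PROVIDERS.get(status.get("credential", ""), "unknown")
        new = model_provider(model)
        if new != old:
            warnings.append(
                f"Warning: model provider changed ({old} -> {new}), "
                "but --credential was not specified."
            )
    else:
        status["credential"] = credential
    status["model"] = model
    # Single write after both changes keeps the update atomic at file level.
    status_path.write_text(json.dumps(status, indent=2))
    return warnings
```

The key design point is that the warning fires only on a detected provider change, so same-provider model swaps keep today's behavior.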
Option B: Auto-infer credential from model name
Infer the appropriate credential from the model name prefix:
openai/* → local OpenAI-compatible endpoint credential
claude-* or anthropic/* → Anthropic credential
vertex/* → Vertex AI credential
Note: Since multiple credentials may exist for the same provider prefix, Option A (explicit + warning) is safer.
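The inference table for Option B could be as simple as a first-match prefix list; the credential names below are placeholders, not real animaworks values, and returning None on no match captures the ambiguity concern noted above:

```python
from typing import Optional

# Hypothetical prefix -> credential table; first match wins.
MODEL_PREFIX_CREDENTIALS = [
    ("openai/", "local-openai-endpoint"),
    ("claude-", "anthropic"),
    ("anthropic/", "anthropic"),
    ("vertex/", "vertex-ai"),
]

def infer_credential(model: str) -> Optional[str]:
    """Return the credential for the first matching prefix, or None if unknown."""
    for prefix, credential in MODEL_PREFIX_CREDENTIALS:
        if model.startswith(prefix):
            return credential
    return None
```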
Option C: Startup validation
At agent startup, validate that the model and credential combination is consistent. Log an error and surface a notification when a known mismatch is detected (e.g., an openai/* model with an Anthropic credential).
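Option C's check can stay deliberately conservative: flag only known-bad combinations and pass everything else through. A sketch, assuming the same prefix conventions as above (the function name and signature are illustrative):

```python
import logging

def validate_model_credential(model: str, credential_provider: str) -> bool:
    """Return True if consistent; log an error on a known mismatch."""
    expected = None
    if model.startswith("openai/"):
        expected = "openai"
    elif model.startswith(("claude-", "anthropic/")):
        expected = "anthropic"
    elif model.startswith("vertex/"):
        expected = "vertex"
    # Unknown prefixes are not flagged, so the check never blocks startup
    # for models it does not recognize.
    if expected is not None and credential_provider != expected:
        logging.error(
            "model %s expects a %s credential, but the configured "
            "credential is for %s", model, expected, credential_provider,
        )
        return False
    return True
```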
Recommended Implementation
Implement Option A + Option C:
- Add --credential to set-model with a provider-change warning when omitted
- Add lightweight model/credential compatibility check at agent startup
Workaround
Manually edit status.json to update the credential field after running set-model, then restart the agent:
{
"model": "openai/some-model",
"credential": "correct-credential-name"
}
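The manual edit above can also be scripted; in this sketch the helper name is illustrative, and the path and credential name in the commented invocation are placeholders to adjust for your agent:

```python
import json
from pathlib import Path

def fix_credential(status_path: Path, credential: str) -> None:
    """Rewrite the credential field in an agent's status.json."""
    status = json.loads(status_path.read_text())
    status["credential"] = credential
    status_path.write_text(json.dumps(status, indent=2))

# Typical invocation (placeholders; restart the agent afterwards):
# fix_credential(
#     Path.home() / ".animaworks/animas/<agent-name>/status.json",
#     "correct-credential-name",
# )
```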