Plugin Version
1.1.0-beta.8
OpenClaw Version
2026.3.12
Bug Description
When running openclaw memory-pro upgrade to convert legacy memories to smart format, the plugin tries to use openai/gpt-oss-120b for LLM enrichment, but this model is not available in my environment.
The plugin logs show:
memory-lancedb-pro: smart extraction enabled (LLM model: openai/gpt-oss-120b, noise bank: ON)
All 51 legacy memories failed enrichment with "LLM enrichment failed ... Error: LLM returned null", falling back to simple mode.
Root Cause: The LLM model for smart extraction is hardcoded to openai/gpt-oss-120b in the plugin source code (index.ts line 1652), and users cannot configure it via OpenClaw config.
Workaround Found: After investigation, I discovered the plugin DOES support llm config fields (apiKey, model, baseURL) in the OpenClaw config schema. However, this is not documented and not obvious. The fix requires:
- Setting plugins.entries.memory-lancedb-pro.config.smartExtraction to true
- Setting plugins.entries.memory-lancedb-pro.config.llm.model to the desired model
- Setting plugins.entries.memory-lancedb-pro.config.llm.baseURL to the provider's API endpoint
Expected Behavior
Users should be able to configure the smart extraction LLM model via OpenClaw config. The plugin should:
- Support configuring the LLM model (e.g. minimax-portal/MiniMax-M2.5 or openai/gpt-4o-mini)
- Support configuring LLM baseURL for different providers (OpenAI, MiniMax, Ollama, etc.)
- Make this configuration option clearly documented
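To make the expected config surface concrete, here is a minimal validation sketch for the llm fields the plugin accepts (apiKey, model, baseURL). The field names come from the workaround above; the validation rules themselves are my assumptions, not the plugin's actual behavior.

```python
def validate_llm_config(llm: dict) -> list[str]:
    """Return a list of problems with an llm config block; empty means it looks usable.

    Checks are assumptions: the plugin's real schema may differ.
    """
    problems = []
    # smart extraction needs a model name to send requests for
    if not llm.get("model"):
        problems.append("llm.model is required for smart extraction")
    # if a baseURL is given, it should be a full http(s) endpoint
    base_url = llm.get("baseURL", "")
    if base_url and not base_url.startswith(("http://", "https://")):
        problems.append("llm.baseURL should be a full http(s) URL")
    return problems
```

For example, the MiniMax config from the workaround passes, while a bare hostname without a model would report two problems.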
Steps to Reproduce
- Have legacy memories that need upgrading
- Run openclaw memory-pro upgrade
- Observe that all LLM enrichment fails with "LLM returned null"
- Check the plugin logs to see it is trying to use the hardcoded openai/gpt-oss-120b
Manual Fix That Works (tested and confirmed):
openclaw config set plugins.entries.memory-lancedb-pro.config.smartExtraction true
openclaw config set plugins.entries.memory-lancedb-pro.config.llm.model "minimax-portal/MiniMax-M2.5"
openclaw config set plugins.entries.memory-lancedb-pro.config.llm.baseURL "https://api.minimax.io/v1"
- Restart the Gateway
- After the restart, the logs show:
smart extraction enabled (LLM model: minimax-portal/MiniMax-M2.5, noise bank: ON)
Error Logs / Screenshots
Before fix (using hardcoded model):
00:51:50 [plugins] memory-lancedb-pro: smart extraction enabled (LLM model: openai/gpt-oss-120b, noise bank: ON)
...
memory-upgrader: LLM enrichment failed for <memory-id>, falling back to simple — Error: LLM returned null
After fix (using config):
00:58:XX [plugins] memory-lancedb-pro: smart extraction enabled (LLM model: minimax-portal/MiniMax-M2.5, noise bank: ON)
The fix works: the plugin correctly uses the configured LLM model.
Embedding Provider
None
OS / Platform
Windows 11