[Bug] Smart extraction LLM uses hardcoded model, cannot configure via OpenClaw config #193

@jlin53882

Description

Plugin Version

1.1.0-beta.8

OpenClaw Version

2026.3.12

Bug Description

When running `openclaw memory-pro upgrade` to convert legacy memories to the smart format, the plugin tries to use `openai/gpt-oss-120b` for LLM enrichment, but this model is not available in my environment.

The plugin logs show:

```
memory-lancedb-pro: smart extraction enabled (LLM model: openai/gpt-oss-120b, noise bank: ON)
```

All 51 legacy memories failed enrichment with `LLM enrichment failed ... Error: LLM returned null` and fell back to simple mode.

Root Cause: The LLM model for smart extraction is hardcoded to `openai/gpt-oss-120b` in the plugin source (`index.ts`, line 1652), and users cannot change it via the OpenClaw config.

Workaround Found: After investigation, I discovered the plugin DOES support `llm` config fields (`apiKey`, `model`, `baseURL`) in the OpenClaw config schema, but this is neither documented nor obvious. The fix requires:

  1. Setting `plugins.entries.memory-lancedb-pro.config.smartExtraction` to `true`
  2. Setting `plugins.entries.memory-lancedb-pro.config.llm.model` to the desired model
  3. Setting `plugins.entries.memory-lancedb-pro.config.llm.baseURL` to the provider's API endpoint
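
For reference, here is roughly what those three settings would look like in the config file itself. This is a sketch inferred from the config key paths above; the exact file layout and nesting in OpenClaw's config are assumptions, not verified against its documentation:

```json
{
  "plugins": {
    "entries": {
      "memory-lancedb-pro": {
        "config": {
          "smartExtraction": true,
          "llm": {
            "model": "minimax-portal/MiniMax-M2.5",
            "baseURL": "https://api.minimax.io/v1"
          }
        }
      }
    }
  }
}
```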

Expected Behavior

Users should be able to configure the smart extraction LLM model via the OpenClaw config. The plugin should:

  1. Support configuring the LLM model (e.g., `minimax-portal/MiniMax-M2.5`, `openai/gpt-4o-mini`, etc.)
  2. Support configuring the LLM `baseURL` for different providers (OpenAI, MiniMax, Ollama, etc.)
  3. Document this configuration option clearly
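
As a minimal sketch of the requested behavior: instead of hardcoding the model at `index.ts` line 1652, the plugin could resolve it from the `llm` config section with the current value as a fallback. The config interface and helper name below are hypothetical, based on the fields observed in the workaround; only the two default strings come from the actual logs (the base URL default is an assumption):

```typescript
// Hypothetical shape of the plugin's llm config section (apiKey/model/baseURL
// are the fields the workaround above shows the schema already accepts).
interface LlmConfig {
  apiKey?: string;
  model?: string;
  baseURL?: string;
}

// Current hardcoded value from the plugin logs.
const DEFAULT_LLM_MODEL = "openai/gpt-oss-120b";
// Assumed default endpoint; the real plugin default is not shown in the logs.
const DEFAULT_BASE_URL = "https://api.openai.com/v1";

// Resolve model/baseURL from config, falling back to the current defaults
// so existing setups keep working.
function resolveLlmSettings(cfg?: { llm?: LlmConfig }) {
  return {
    model: cfg?.llm?.model ?? DEFAULT_LLM_MODEL,
    baseURL: cfg?.llm?.baseURL ?? DEFAULT_BASE_URL,
  };
}
```

With this shape, an unset config keeps today's behavior, while setting `llm.model` in the OpenClaw config switches the enrichment model without touching the source.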

Steps to Reproduce

  1. Have legacy memories that need upgrading
  2. Run `openclaw memory-pro upgrade`
  3. Observe that all LLM enrichment fails with "LLM returned null"
  4. Check the plugin logs to see that it is trying to use the hardcoded `openai/gpt-oss-120b`

Manual Fix That Works (tested and confirmed):

  1. `openclaw config set plugins.entries.memory-lancedb-pro.config.smartExtraction true`
  2. `openclaw config set plugins.entries.memory-lancedb-pro.config.llm.model "minimax-portal/MiniMax-M2.5"`
  3. `openclaw config set plugins.entries.memory-lancedb-pro.config.llm.baseURL "https://api.minimax.io/v1"`
  4. Restart the Gateway
  5. After the restart, the logs show: `smart extraction enabled (LLM model: minimax-portal/MiniMax-M2.5, noise bank: ON)`

Error Logs / Screenshots

Before fix (using the hardcoded model):

```
00:51:50 [plugins] memory-lancedb-pro: smart extraction enabled (LLM model: openai/gpt-oss-120b, noise bank: ON)
...
memory-upgrader: LLM enrichment failed for <memory-id>, falling back to simple — Error: LLM returned null
```


After fix (using the config):

```
00:58:XX [plugins] memory-lancedb-pro: smart extraction enabled (LLM model: minimax-portal/MiniMax-M2.5, noise bank: ON)
```


The fix works! The plugin correctly uses the configured LLM model.

Embedding Provider

None

OS / Platform

Windows 11

Labels

bug (Something isn't working)