Releases: doobidoo/mcp-memory-service
v10.26.2 — OAuth PKCE fix + automated CHANGELOG housekeeping
What's Changed
Fixed

- [#576] OAuth token exchange fails with 500 for public PKCE clients (`authorization.py`): claude.ai and other MCP clients that use the OAuth 2.1 public-client PKCE flow (no `client_secret`) received a `500 Internal Server Error` during token exchange. The endpoint now detects public clients — requests supplying a `code_verifier` but no `client_secret` — and skips secret authentication, using the PKCE verifier as identity proof per OAuth 2.1 §2.1. Confidential clients (with `client_secret`) are unaffected. Closes #576.
- Missing `/.well-known/oauth-protected-resource` endpoint (`discovery.py`): The endpoint required by RFC 9728 and the MCP OAuth spec was returning 404, breaking OAuth discovery for compliant MCP clients. Added an `OAuthProtectedResourceMetadata` Pydantic model and corresponding route, which advertises the resource identifier and authorization server URLs with `token_endpoint_auth_methods_supported: ["none"]`.
- Opaque OAuth error logging: Added `exc_info=True` to exception handlers in the token and authorization endpoints so that full tracebacks are recorded in logs instead of generic error messages, making future debugging significantly easier.
Added

- Automated CHANGELOG housekeeping workflow (`.github/workflows/changelog-housekeeping.yml`): Monthly GitHub Actions workflow (runs on the 1st of each month, also triggerable via `workflow_dispatch`) that automatically archives CHANGELOG entries older than the 8 most recent versions into `docs/archive/CHANGELOG-HISTORIC.md`. Validates that no version entries are lost during archival.
- Changelog housekeeping script (`scripts/maintenance/changelog_housekeeping.py`): Backing Python script with `--dry-run` support and README "Previous Releases" trimming (max 7 entries). SHA-pinned third-party Actions for security.
Upgrade Notes
No breaking changes. Standard upgrade:
```bash
pip install --upgrade mcp-memory-service
# or
uvx mcp-memory-service@latest
```

If you use claude.ai's MCP integration panel and encountered OAuth 500 errors, this release resolves the issue. No configuration changes needed.
Full Changelog
v10.26.1 — Hybrid backend correctly reported in MCP health checks
What's Changed
Fixed
- [#570] Hybrid backend misidentified as sqlite-vec in `memory_health`: `HealthCheckFactory` relied solely on the storage object's class name to select the health-check strategy. When the hybrid backend's storage is accessed through a delegation or wrapper layer, the class name is not `HybridMemoryStorage`, so the factory fell back to the sqlite-vec strategy and reported `"sqlite-vec"` instead of `"hybrid"`, hiding Cloudflare sync status from users. The factory now performs structural detection — if the storage object exposes both a `primary` attribute and either a `secondary` or `sync_service` attribute, it is classified as hybrid regardless of class name. The existing SQLite and Cloudflare strategy paths are unchanged. Fixes #570.
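The structural (duck-typed) detection described above amounts to an attribute check that works regardless of wrapping. A minimal sketch — the function name and fallback branches are illustrative, only the `primary` + `secondary`/`sync_service` rule comes from the release notes:

```python
# Illustrative sketch of structural backend detection; names are
# hypothetical, not the project's actual API.
def detect_backend(storage: object) -> str:
    # A wrapped hybrid backend may not be named HybridMemoryStorage,
    # but it still exposes the hybrid structure: a primary store plus
    # either a secondary store or a sync service.
    if hasattr(storage, "primary") and (
        hasattr(storage, "secondary") or hasattr(storage, "sync_service")
    ):
        return "hybrid"
    # Fall back to class-name matching for the remaining backends.
    name = type(storage).__name__.lower()
    if "sqlite" in name:
        return "sqlite-vec"
    if "cloudflare" in name:
        return "cloudflare"
    return "unknown"
```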
Tests
- Three focused unit tests added for `HealthCheckFactory` strategy selection:
  - SQLite class-name detection path
  - Delegated/wrapped hybrid structural detection path
  - Unknown storage fallback behavior
Upgrade Notes
This is a pure bug fix — no API changes, no configuration changes, no migration steps required. Upgrade by running:
```bash
pip install --upgrade mcp-memory-service
```

Full Changelog
https://github.com/doobidoo/mcp-memory-service/blob/main/CHANGELOG.md#10261---2026-03-08
🙏 Special Thanks
@SergioChan — authored the structural hybrid detection fix in PR #574, correctly diagnosing why class-name matching fails for wrapped/delegated hybrid storage and implementing the primary + secondary/sync_service attribute check. Excellent, well-scoped contribution.
v10.26.0 - Credentials Tab + Settings Restructure + Sync Owner
Summary
This release brings a major dashboard upgrade: a new Credentials tab lets you manage Cloudflare credentials directly from the Settings modal without editing config files. A new Sync Owner setting (`MCP_HYBRID_SYNC_OWNER`) lets the HTTP server own all Cloudflare sync, removing the need for a Cloudflare token in the MCP server config.
Added
- Credentials tab in Settings modal (`GET /api/config/credentials`, `POST /api/config/credentials/test`, `POST /api/config/credentials`): Manage Cloudflare API token, Account ID, D1 Database ID, and Vectorize Index directly from the dashboard. Credentials are shown with partial-reveal (masked) display and an eye-toggle for full reveal.
- Connection test gate (test-gate pattern): Credentials must pass a live connection test before they can be saved, preventing accidental misconfiguration.
- Sync Owner selector (`MCP_HYBRID_SYNC_OWNER`: `http`/`both`/`mcp`): Control which server handles Cloudflare sync in hybrid mode. Default is `http` (recommended) — the HTTP server owns all sync; the MCP server (Claude Desktop) uses SQLite-Vec only, removing the need for a Cloudflare token in the MCP config.
- Settings tabs restructured: The Backup tab is split into three focused tabs — Quality, Backup, and Server — bringing the total to 7 tabs.
Security
- SSRF protection: `account_id` validated against an `[a-f0-9]{32}` regex before Cloudflare API calls
- Newline injection prevention: credential values sanitised to reject embedded newlines
- `sync_owner` allowlist: only `http`, `both`, `mcp` accepted
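The three checks above can be sketched as small validators. This is an illustrative sketch only — the function names are hypothetical; the regex, newline rule, and allowlist come from the release notes:

```python
# Illustrative input-validation sketch; function names are hypothetical.
import re

ACCOUNT_ID_RE = re.compile(r"^[a-f0-9]{32}$")
SYNC_OWNER_ALLOWED = {"http", "both", "mcp"}


def validate_account_id(account_id: str) -> str:
    # A Cloudflare account ID is exactly 32 lowercase hex characters;
    # rejecting anything else keeps attacker-controlled strings out of
    # outbound Cloudflare API URLs (the SSRF guard).
    if not ACCOUNT_ID_RE.fullmatch(account_id):
        raise ValueError("invalid Cloudflare account_id")
    return account_id


def validate_credential(value: str) -> str:
    # Reject embedded newlines so a credential value cannot smuggle
    # extra header or config lines.
    if "\n" in value or "\r" in value:
        raise ValueError("credential must not contain newlines")
    return value


def validate_sync_owner(value: str) -> str:
    if value not in SYNC_OWNER_ALLOWED:
        raise ValueError("sync_owner must be one of http/both/mcp")
    return value
```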
Documentation
- `CLAUDE.md` and `README.md` updated: `MCP_HYBRID_SYNC_OWNER=http` documented as the recommended configuration for hybrid mode
- CI: Claude Code Review workflow disabled (OAuth token expiry issues — will be migrated to API key auth)
Tests
1,420 tests total (no new tests in this release).
Installation
```bash
pip install --upgrade mcp-memory-service
```

Recommended Hybrid Mode Config

```bash
export MCP_HYBRID_SYNC_OWNER=http  # HTTP server owns all Cloudflare sync
# MCP server (Claude Desktop) then needs no Cloudflare token
```

Full changelog: https://github.com/doobidoo/mcp-memory-service/blob/main/CHANGELOG.md
v10.25.3
[10.25.3] - 2026-03-07
Fixed
- Strict stdio eager-init timeout cap — Caps eager storage initialization to 5.0s for non-LM-Studio stdio clients (Claude Desktop, Codex CLI), preventing MCP handshake timeouts (#569, fixes #561)
- Syntax errors in timeout cap — Fixed a duplicate `detect_mcp_client_simple()` call, an orphaned closing paren, and a duplicate return statement from PR #569; extracted named constants, fixed a dead-code guard, clarified warning messages
Changed
- gitignore TLS certificates — Added `*.pem` and `certs/` to `.gitignore`
Full Changelog: v10.25.2...v10.25.3
v10.25.2 - Health check script fix
Fixed
- Health check in `update_and_restart.sh` always reported "unknown" version: The `/api/health` endpoint was stripped of its `version` field in v10.21.0 (security hardening GHSA-73hc-m4hx-79pj). The update script still tried to read `data.get('version')`, causing it to always fall back to "unknown" and wait the full 15-second timeout before giving up. The check now reads the `status` field (`"healthy"`) to confirm the server is up, and reports the already-known pip-installed version instead.
Improved
- Reduced health check from 2 curl calls to 1 (using the `--fail` flag)
- Use `printf` instead of `echo` for better shell portability
Full Changelog: v10.25.1...v10.25.2
v10.25.1 — Security patch: CORS hardening + soft-delete fix
Security Fixes
GHSA-g9rg-8vq5-mpwm — Wildcard CORS Default (HIGH)
Impact: When the HTTP server is enabled with anonymous access, the default wildcard CORS configuration (`*`) allowed any website to silently read, modify, and delete all stored memories via cross-origin JavaScript requests.
Fix:
- `MCP_CORS_ORIGINS` now defaults to `http://localhost:8000,http://127.0.0.1:8000` instead of `*`
- `allow_credentials` is automatically set to `False` when wildcard origins are configured
- A startup warning is logged if wildcard is explicitly set via environment variable
Action required: If you set `MCP_CORS_ORIGINS=*` explicitly, remove it or replace it with your actual dashboard origin.
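The hardened behaviour can be sketched as a small settings helper. Only the env-var name, the default origins, and the wildcard-disables-credentials rule come from the advisory; the parsing function itself is an illustrative assumption:

```python
# Illustrative sketch of the hardened CORS defaults; the helper is
# hypothetical, the defaults and rules are from the advisory.
import logging
import os

log = logging.getLogger(__name__)
DEFAULT_ORIGINS = "http://localhost:8000,http://127.0.0.1:8000"


def cors_settings() -> tuple[list[str], bool]:
    raw = os.environ.get("MCP_CORS_ORIGINS", DEFAULT_ORIGINS)
    origins = [o.strip() for o in raw.split(",") if o.strip()]
    allow_credentials = True
    if "*" in origins:
        # Wildcard origins plus credentials would let any site act as
        # the user, so credentials are force-disabled and a startup
        # warning is emitted.
        allow_credentials = False
        log.warning("MCP_CORS_ORIGINS=* is insecure; set explicit origins")
    return origins, allow_credentials
```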
GHSA-x9r8-q2qj-cgvw — TLS Verification in Peer Discovery (HIGH)
Formally closed. The fix (PEER_VERIFY_SSL=True default) was already present in v10.25.0. Advisory published to document the resolution.
Bug Fix
- Soft-delete leak in `search_by_tag_chronological()`: A missing `AND deleted_at IS NULL` filter caused tombstoned memories to appear in chronological tag search results.
Upgrade
```bash
pip install --upgrade mcp-memory-service
```

Or via git:

```bash
git pull && pip install -e .
```

v10.25.0 — sqlite_vec bug fixes, GLOB security, O(n²) fix, migration script
🙏 Special Thanks
This release is entirely the work of @chriscoey, who contributed 5 meticulously researched and well-tested PRs in a single day. Each one identified real bugs through careful code reading — not just surface-level fixes but root-cause analysis with regression tests proving the fix. Outstanding community contribution.
What's Changed
This release consolidates 5 high-quality PRs from @chriscoey that fix critical bugs in the SQLite-vec storage backend, improve security, and add an embedding migration utility.
🆕 Added
- Embedding model migration script (`scripts/maintenance/migrate_embeddings.py`): Migrate embeddings between any models, including across different dimensions (e.g., 384-dim → 768-dim). Works with any OpenAI-compatible API (Ollama, vLLM, OpenAI, TEI). Features: `--dry-run`, auto-detect dimension, timestamped backup, service detection, cross-platform, batched with progress, post-migration integrity verification. Closes #552.
🐛 Fixed
Soft-delete leaks (data correctness):

- `recall()` — both semantic and time-based paths returned deleted memories
- `get_memories_by_time_range()` — returned deleted memories
- `get_largest_memories()` — returned deleted memories
- `get_memory_timestamps()` — counted deleted memories
- `get_memory_connections()` — tag group counts included deleted memories
- `get_access_patterns()` — returned content hashes of deleted memories
- `update_memory_metadata()` — could modify soft-deleted memories
- `update_memories_batch()` — same issue for the batch update path
- `delete()` error handler — added explicit rollback to prevent dangling embedding DELETEs
Score formula: `recall()` used `1.0 - distance`, but cosine distance ∈ [0, 2], producing negative scores. Fixed to `max(0.0, 1.0 - distance / 2.0)`, which correctly maps to [0, 1].
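The corrected formula is small enough to state directly — cosine distance ranges over [0, 2], so halving it (and clamping) yields a score in [0, 1]:

```python
# The corrected distance-to-score mapping described above.
def similarity_score(distance: float) -> float:
    # distance 0 (identical) -> 1.0; distance 2 (opposite) -> 0.0;
    # distance 1 (orthogonal) -> 0.5; out-of-range values clamp to 0.
    return max(0.0, 1.0 - distance / 2.0)
```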
Tag handling:

- `get_largest_memories()` used `json.loads()` to parse tags, but tags are stored as comma-separated strings
- `get_all_memories()`, `count_all_memories()`, `retrieve()`, `delete_by_timeframe()`, `delete_before_date()` used `LIKE '%tag%'` (substring match) instead of GLOB exact match. A tag query for `"test"` incorrectly matched `"testing"` and `"my-test-tag"`.
- Added an `_escape_glob()` helper to prevent GLOB wildcard injection (`*`, `?`, `[`) from user-supplied tag values.
- `search_by_tag_chronological()` LIMIT/OFFSET is now parameterized instead of f-string interpolated.
Consolidation system:

- `_sample_memory_pairs()` materialized all `combinations(memories, 2)` (~50M pairs for 10k memories) just to sample 100. Now uses random index-pair generation — O(max_pairs).
- `_get_existing_associations()` filtered by `memory_type == "association"`, but associations are stored with `memory_type="observation"` and the tag `"association"`. The filter never matched, so duplicate associations were never prevented. Now uses `search_by_tag(["association"])`.
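The random index-pair approach can be sketched as follows — draw distinct index pairs until enough unique ones are collected, instead of materializing every combination. Names are illustrative, not the project's actual API:

```python
# Illustrative O(max_pairs) pair sampling; names are hypothetical.
import random


def sample_index_pairs(n: int, max_pairs: int) -> list[tuple[int, int]]:
    total = n * (n - 1) // 2  # number of distinct unordered pairs
    if total <= max_pairs:
        # Few enough pairs that enumerating them all is fine.
        return [(i, j) for i in range(n) for j in range(i + 1, n)]
    seen: set[tuple[int, int]] = set()
    while len(seen) < max_pairs:
        i, j = random.sample(range(n), 2)  # two distinct indices
        seen.add((min(i, j), max(i, j)))  # normalize so (i, j) == (j, i)
    return list(seen)
```

For 10k memories this touches only `max_pairs` pairs rather than ~50M, which is the point of the fix.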
⚡ Performance
- Batch access metadata: `retrieve()` now persists access metadata in one `executemany` call per query instead of N individual `UPDATE` + `COMMIT` round-trips.
- Hybrid search O(n+m) dedup: `retrieve_hybrid()` replaced O(n×m) nested-loop deduplication with O(n+m) dict-based merging. BM25-only memories are now batch-fetched in a single SQL query (capped at 999 to respect `SQLITE_MAX_VARIABLE_NUMBER`) instead of N+1 individual `get_by_hash()` calls.
🧪 Tests
- 23 new regression tests covering all fixed methods
- Total: 1,420 tests
v10.24.0 - Fix external embedding API silent fallback (#551)
What's Changed
Bug Fixes
- [#551] Fix external embedding API silent fallback (PR #554): When an external embedding provider (vLLM, Ollama, TEI, OpenAI-compatible) returned an error during startup or embedding generation, `sqlite_vec.py` silently fell back to the local ONNX model. This mixed two incompatible vector spaces in the same database, causing all subsequent semantic searches to return incorrect or irrelevant results — with no warning to the user that anything was wrong. The fix:
  - Replaced the silent `logger.warning` + fallback path with a hard `raise RuntimeError(...)` that clearly reports the API failure reason
  - Added a `_get_existing_db_embedding_dimension()` helper that reads the `FLOAT[N]` column definition from `sqlite_master` to detect the dimension already stored in the database
  - Used `asyncio.to_thread()` for the synchronous DB read inside the async method
  - DRY error message includes the detected existing DB dimension when available, making mismatch diagnosis straightforward
- Corrected a stale integration test that asserted a `version` field on `/api/health` (removed in the GHSA-73hc-m4hx-79pj security hardening in v10.21.0)
Tests
- 10 new regression tests in `tests/storage/test_issue_551_external_embedding_fallback.py` covering: hard failure on API error, dimension mismatch detection, DRY error message format, `asyncio.to_thread` integration, and interaction with the existing DB dimension helper
- 1,397 total tests
Upgrade Notes
This is a PATCH release. No breaking changes. No migration required.
If you were relying on the silent fallback to local ONNX when your external embedding API was unavailable, you will now receive a RuntimeError at startup instead. This is intentional — the previous behaviour silently corrupted your vector database.
Full Changelog: https://github.com/doobidoo/mcp-memory-service/blob/main/CHANGELOG.md#10240---2026-03-05
v10.23.0 — Quality scorer fix, consolidator improvements, two new opt-out flags
What's Changed
Fixed
- [#544] Missing `import asyncio` in `ai_evaluator.py`: A `NameError` crashed batch quality scoring for all users without ONNX Runtime installed, leaving 41%+ of memories unscored.
- [#545] Consolidator used an invalid `memory_type="association"` (not in the ontology) and omitted `skip_semantic_dedup=True`, causing templated association content to be rejected as duplicates; the `store()` failure reason is now captured and logged instead of discarded.
Added
- [#546] `MCP_TYPED_EDGES_ENABLED=false`: Opt-out flag for typed edge inference; when disabled, all inferred relationships return as `"related"` (default: `true`).
- [#547] `MCP_CONSOLIDATION_STORE_ASSOCIATIONS=false`: Opt-out flag to suppress writing association entries to the `memories` table during consolidation; associations remain fully stored in `memory_graph` (default: `true` for backward compatibility).
Tests
- 14 regression tests for all four issues (`tests/consolidation/test_issues_544_545_546_547.py`)
- 1,387 total tests
Upgrade
```bash
pip install --upgrade mcp-memory-service
```

Or with uvx:

```bash
uvx mcp-memory-service@10.23.0
```

Full Changelog: https://github.com/doobidoo/mcp-memory-service/blob/main/CHANGELOG.md
v10.22.0 — Consolidation Engine Stability Fixes
Summary
Three targeted fixes to the consolidation engine that resolve crashes, data corruption, and accuracy issues encountered during repeated consolidation cycles.
Fixed
memory_consolidate status KeyError on empty statistics dict (closes #542)
The memory_consolidate MCP tool's status action raised KeyError when the consolidation engine returned an empty or partial statistics dict — a common state during the first run or immediately after a reset. All dict lookups in the status handler are now replaced with safe .get() calls with sensible defaults (empty lists, zero counts, None timestamps).
10 new tests in tests/consolidation/test_status_handler_issue542.py
Exponential metadata prefix nesting in compression engine (closes #543)
The consolidation compression engine accumulated metadata prefixes (consolidated_from_, compressed_from_) exponentially across repeated consolidation cycles. Each cycle read existing prefixes and prepended new ones, so a memory consolidated three times would have triple-nested prefix strings.
Two changes prevent re-accumulation:
- New `_strip_compression_prefixes()` static method strips all existing compression-related prefixes from source metadata before re-aggregating into the output memory
- An `_INTERNAL_METADATA_KEYS` blocklist excludes consolidation-internal keys from the aggregated metadata entirely
14 new tests in tests/consolidation/test_compression_prefix_nesting.py
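The prefix-stripping step can be sketched as repeatedly peeling leading prefixes off a metadata key. The prefix names come from the release notes; the helper body is an illustrative assumption:

```python
# Illustrative sketch of stripping accumulated compression prefixes
# before re-aggregation; the body is hypothetical.
COMPRESSION_PREFIXES = ("consolidated_from_", "compressed_from_")


def strip_compression_prefixes(key: str) -> str:
    # Peel off every leading compression prefix so a memory that has
    # been consolidated three times does not carry triple-nested
    # prefixes into the next cycle.
    changed = True
    while changed:
        changed = False
        for prefix in COMPRESSION_PREFIXES:
            if key.startswith(prefix):
                key = key[len(prefix):]
                changed = True
    return key
```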
RelationshipInferenceEngine high false positive rate (closes #541)
The RelationshipInferenceEngine produced excessive contradicts relationship labels due to overly broad contradiction detection patterns. Three targeted changes reduce false positives:
- Weak conjunctions (`but`, `yet`, `although`, `however`, `nevertheless`) removed from the contradiction pattern vocabulary
- Minimum confidence threshold raised to `min_typed_confidence=0.75` and minimum semantic similarity raised to `min_typed_similarity=0.65`
- New `_shares_domain_keywords()` cross-content guard requires shared domain keywords before a `contradicts` label is assigned
16 new tests in tests/consolidation/test_relationship_inference_issue541.py
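The cross-content guard's intent can be sketched as a keyword-overlap check: two memories must share domain vocabulary before a `contradicts` edge is even considered. The naive tokenization below is purely illustrative — the stopword list, length cutoff, and threshold are assumptions, not the project's implementation:

```python
# Illustrative keyword-overlap guard; thresholds and tokenization are
# hypothetical simplifications of the real _shares_domain_keywords().
STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "to", "in"}


def shares_domain_keywords(text_a: str, text_b: str, min_shared: int = 2) -> bool:
    def keywords(text: str) -> set[str]:
        # Keep longer, non-stopword tokens as crude "domain" keywords.
        return {w for w in text.lower().split() if len(w) > 3 and w not in STOPWORDS}

    return len(keywords(text_a) & keywords(text_b)) >= min_shared
```

Requiring shared domain keywords means "the deploy succeeded" and "my cat is grumpy" can never be labelled as contradicting each other, whatever the conjunctions in play.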
Test Coverage
- 1,373 tests now passing (40 new tests added in this release)
- All new tests tagged `@pytest.mark.unit` for fast feedback
What's Unchanged
No API changes. No breaking changes. No configuration changes required.
Full Changelog
https://github.com/doobidoo/mcp-memory-service/blob/main/CHANGELOG.md#10220---2026-03-05