**Describe the bug**
`RedisJsonCollection._inner_delete` does not apply the collection-name prefix to keys before calling `JSON.DEL`. When `prefix_collection_name_to_key_names=True`, the upsert path stores keys as `{collection_name}:{key}`, but the delete path sends `JSON.DEL {key}` (without the prefix). The command targets a non-existent key, returns 0, and the record is never deleted.
Wire format captured via `redis-cli MONITOR`:

```text
# Upsert (correct — prefixed):
JSON.SET "sk_cov_xxx:cov-1" "$" "{...}"
# Delete (wrong — not prefixed):
JSON.DEL "cov-1" "."
# Subsequent get still returns the record:
JSON.MGET "sk_cov_xxx:cov-1" "$" → [{"content": "alpha", ...}]
```
The hashset sibling `RedisHashsetCollection._inner_delete` correctly calls `self._get_redis_key(key)`. The JSON version does not.
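The asymmetry can be modeled in a few lines of plain Python (a sketch only; `redis_key` and the dict store are illustrative stand-ins, not SK API). Upsert writes under `{collection_name}:{key}`, while the buggy delete targets the raw key, which was never written:

```python
def redis_key(collection_name: str, key: str, prefix: bool) -> str:
    # Mirrors the prefixing rule the upsert path applies
    return f"{collection_name}:{key}" if prefix else key

store: dict[str, dict] = {}  # stand-in for the Redis keyspace

# Upsert path: key is prefixed
store[redis_key("sk_cov_xxx", "cov-1", prefix=True)] = {"content": "alpha"}

# Buggy delete path: raw key, so the pop misses (analogous to JSON.DEL returning 0)
removed = store.pop("cov-1", None)

print(removed)      # None, nothing was deleted
print(list(store))  # ['sk_cov_xxx:cov-1'], the record survives
```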
**Expected behavior**
`collection.delete("cov-1")` on a `RedisJsonCollection` with `prefix_collection_name_to_key_names=True` should send `JSON.DEL "sk_cov_xxx:cov-1"`, and the record should be removed.
**To Reproduce**
Prerequisites: a Redis Stack server reachable on `localhost:6379`, `redis-cli`, and a working Python SK dev environment.
- Install Python SK per `python/DEV_SETUP.md`:

  ```shell
  cd python && make install-sk PYTHON_VERSION=3.12
  ```

- Start Redis Stack on port 6379:

  ```shell
  podman run -d -p 6379:6379 docker.io/redis/redis-stack:latest
  ```

- In a separate terminal, attach MONITOR:

  ```shell
  redis-cli -h localhost -p 6379 MONITOR
  ```

- From `python/`, run the following script:
```python
# repro_json_delete.py
import asyncio
from dataclasses import dataclass, field
from typing import Annotated
from uuid import uuid4

from semantic_kernel.connectors.redis import RedisJsonCollection
from semantic_kernel.data.vector import VectorStoreField, vectorstoremodel


@vectorstoremodel
@dataclass
class MyModel:
    vector: Annotated[
        list[float] | None,
        VectorStoreField("vector", index_kind="hnsw", dimensions=3,
                         distance_function="cosine_similarity", type="float"),
    ] = None
    id: Annotated[str, VectorStoreField("key", type="str")] = field(
        default_factory=lambda: str(uuid4())
    )
    content: Annotated[str, VectorStoreField("data", type="str")] = "hello"


async def main():
    async with RedisJsonCollection(
        record_type=MyModel,
        collection_name="repro_delete",
        prefix_collection_name_to_key_names=True,
    ) as col:
        await col.ensure_collection_deleted()
        await col.ensure_collection_exists()

        rec = MyModel(id="test-1", content="alpha", vector=[0.1, 0.2, 0.3])
        await col.upsert([rec])

        # Verify record exists
        fetched = await col.get("test-1")
        print("before delete:", fetched)  # MyModel(...)

        # Delete — this silently fails
        await col.delete("test-1")

        # Record is still there
        fetched = await col.get("test-1")
        print("after delete:", fetched)  # MyModel(...) — should be None

        await col.ensure_collection_deleted()


asyncio.run(main())
```
  ```shell
  REDIS_CONNECTION_STRING="redis://localhost:6379" uv run python repro_json_delete.py
  ```

  Output:

  ```text
  before delete: MyModel(vector=[0.1, 0.2, 0.3], id='test-1', content='alpha')
  after delete: MyModel(vector=[0.1, 0.2, 0.3], id='test-1', content='alpha')
  ```
- Check the MONITOR output — the `JSON.DEL` targets the raw key `test-1` instead of `repro_delete:test-1`.
**Root cause**
`RedisJsonCollection._inner_delete` (line 708 on main) passes raw keys directly:

```python
async def _inner_delete(self, keys: Sequence[str], **kwargs: Any) -> None:
    await asyncio.gather(*[self.redis_database.json().delete(key, **kwargs) for key in keys])
```

Compare to `RedisHashsetCollection._inner_delete` (line 580), which correctly prefixes:

```python
async def _inner_delete(self, keys: Sequence[TKey], **kwargs: Any) -> None:
    await self.redis_database.delete(*[self._get_redis_key(key) for key in keys])
```
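The likely fix mirrors the hashset path: route each key through `self._get_redis_key(...)` before calling `JSON.DEL`. Below is a self-contained sketch of that behaviour against a dict-backed stand-in (`FakeJsonClient` and `FixedJsonCollection` are illustrative names, not SK code):

```python
import asyncio


class FakeJsonClient:
    """Dict-backed stand-in for redis_database.json(); models only delete()."""

    def __init__(self, store: dict):
        self.store = store

    async def delete(self, key: str, **kwargs) -> int:
        # Like JSON.DEL: returns the number of paths removed (0 if key absent)
        return 1 if self.store.pop(key, None) is not None else 0


class FixedJsonCollection:
    def __init__(self, store: dict, collection_name: str, prefix: bool = True):
        self.json = FakeJsonClient(store)
        self.collection_name = collection_name
        self.prefix_collection_name_to_key_names = prefix

    def _get_redis_key(self, key: str) -> str:
        if self.prefix_collection_name_to_key_names:
            return f"{self.collection_name}:{key}"
        return key

    async def _inner_delete(self, keys, **kwargs) -> None:
        # The fix: prefix each key exactly as the upsert path does
        await asyncio.gather(
            *[self.json.delete(self._get_redis_key(key), **kwargs) for key in keys]
        )


store = {"repro_delete:test-1": {"content": "alpha"}}
col = FixedJsonCollection(store, "repro_delete")
asyncio.run(col._inner_delete(["test-1"]))
print(store)  # {}, the prefixed key was deleted
```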
**Platform**
- Language: Python
- Source:
  - `main` branch of microsoft/semantic-kernel (SK version 1.41.2)
  - `redis-py` version: 6.4.0
- Backend tested: Redis Stack 7.4.7 (RediSearch v21020, ReJSON v20809)
- IDE: Kiro
- OS: macOS 25.4.0 (Darwin), arm64
**Additional context**
The bug is silent — no error is raised. `JSON.DEL` on a non-existent key returns 0, which `redis-py` does not treat as an error. The caller has no indication the delete failed.

The bug only manifests when `prefix_collection_name_to_key_names=True`. The default for `RedisJsonCollection` is `False`, which is why the existing integration tests (which use the default) do not catch it. However, the parent class `RedisCollection` defaults to `True`, and any user who explicitly enables prefixing (a common pattern for multi-collection deployments) will hit this.
**Test coverage gap**
This was found while trying to use the Redis connector: vector search did not work, and deletes with `prefix_collection_name_to_key_names=True` were silently failing. The existing integration tests (test_vector_store.py) only cover single-record upsert → get → delete with the default prefix setting (`False`) and never call `collection.search()`, so these paths have had zero test coverage. A new test file (test_redis_coverage.py) adds 30 parametrised tests covering the full public surface — vector search, batch CRUD, filters, paging, `include_vectors`, prefix mode, etc. — which is how these issues were found. The new tests should land alongside the fix to prevent regression.