Problem / Motivation
Hey team, amazing work on this plugin. I was looking at your hybrid retrieval (Vector + BM25) and cross-encoder setup.
One issue I constantly run into with LanceDB/memory stores is that vector DBs don't forget, so the LLM ends up retrieving context that is highly semantically relevant but three years deprecated.
Proposed Solution
I recently built an open-source API (Knowledge Universe) that calculates a "Radioactive Decay" score for sources before they hit the LLM (e.g., penalizing old GitHub code but keeping old math papers fresh). It outputs native 384-dim embeddings.
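To make the idea concrete, here is a minimal sketch of what a type-aware decay score could look like. This is my own illustrative assumption (exponential decay with a per-source-type half-life), not necessarily the exact math Knowledge Universe uses; the half-life values and source-type names are made up for the example.

```python
# Hypothetical sketch of a time-decay ("Radioactive Decay") score.
# Assumption: exponential decay with a per-source-type half-life, so
# code ages quickly while math papers stay fresh almost indefinitely.

HALF_LIFE_DAYS = {
    "github_code": 365,    # code deprecates fast
    "math_paper": 36500,   # proofs stay valid for ~a century
    "blog_post": 730,
}

def decay_score(source_type: str, age_days: float) -> float:
    """Return a freshness score in (0, 1]; 1.0 means brand new."""
    half_life = HALF_LIFE_DAYS.get(source_type, 730)  # default half-life
    return 0.5 ** (age_days / half_life)
```

Under these assumed half-lives, a 1-year-old code snippet scores 0.5 while a 1-year-old math paper still scores above 0.99.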
It might be a really useful feature to add a decay_penalty parameter to your reranking pipeline so stale memories get pushed to the bottom. Happy to share how the math works if you're interested: https://github.com/VLSiddarth/Knowledge-Universe
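One possible shape for the integration: blend the cross-encoder relevance score with a freshness score, weighted by decay_penalty. This is only a sketch under my own assumptions (field names, the linear blend, and scores normalized to [0, 1] are all illustrative), not your plugin's actual API.

```python
# Illustrative sketch: fold a freshness score into reranking.
# Assumes each result dict carries 'score' (cross-encoder relevance,
# normalized to [0, 1]) and 'freshness' (decay score in (0, 1]).

def rerank_with_decay(results: list[dict], decay_penalty: float = 0.3) -> list[dict]:
    """Blend relevance with freshness; decay_penalty=0 disables decay."""
    for r in results:
        r["final"] = (1 - decay_penalty) * r["score"] + decay_penalty * r["freshness"]
    return sorted(results, key=lambda r: r["final"], reverse=True)

# Example: a stale high-relevance hit loses to a fresh, slightly less
# relevant one once decay is applied.
ranked = rerank_with_decay([
    {"id": "old", "score": 0.95, "freshness": 0.2},
    {"id": "new", "score": 0.85, "freshness": 0.9},
])
```

A linear blend keeps the parameter easy to reason about; an alternative is multiplicative damping (score * freshness ** decay_penalty), which never promotes a low-relevance result purely for being new.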
Alternatives Considered
No response
Area
None
Additional Context
No response