-
That particular implementation seems overly complicated: why do you need a "frontend" when the goal is an MCP server? But the concept of adding memory to DesktopCommanderMCP sounds (really) good (hint hint).
-
I said: I'm wondering if there is a less complex solution (no frontend needed, no LLM API key needed, etc.). The ultimate goal is to code in Claude + DesktopCommanderMCP and keep context and memories across Claude conversations.

Claude 4 Sonnet said: You're absolutely right; the mem0ai OpenMemory setup is overly complex for your needs. It requires Docker, PostgreSQL, a Qdrant vector database, a web UI, and OpenAI API keys. That's a lot of infrastructure for what should be simple persistent memory. Here's a much simpler solution: Basic Memory by Basic Machines. https://github.com/basicmachines-co/basic-memory

**Why Basic Memory fits your use case:** no Docker, databases, or complex setup, just local Markdown files. It builds a persistent semantic graph from your conversations with AI assistants, stored in standard Markdown files on your computer, and integrates directly with Obsidian.md.

**Installation:**

```
npx -y @smithery/cli install @basicmachines-co/basic-memory --client claude
```

**How it works in practice:** to save information, it creates new Markdown files for the topics you discuss. The files are stored with YAML frontmatter and follow a specific format, but you can view and edit them directly in any text editor or Obsidian.

**Comparison to mem0ai OpenMemory:**

| Feature | mem0ai OpenMemory | Basic Memory |
| --- | --- | --- |
| Setup complexity | High (Docker, PostgreSQL, Qdrant) | Low (single command) |
| LLM API required | Yes (OpenAI) | No |
| Storage | Vector database | Local Markdown files |
| UI | Yes (web interface) | No (file-based) |
| Dependencies | Many | Minimal |

This should give you exactly what you want: persistent memory across Claude conversations using DesktopCommanderMCP, without the complexity of the mem0ai solution.
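To make "standard Markdown files with YAML frontmatter" concrete, here is a minimal sketch of what a Basic Memory note can look like. The frontmatter fields and the Observations/Relations sections follow the repo's documented format as I understand it; the specific note content and values are illustrative, not output copied from the tool:

```markdown
---
title: Project Alpha Deployment
type: note
permalink: project-alpha-deployment
tags:
- deployment
---

# Project Alpha Deployment

## Observations
- [tech] Runs on a single VPS behind nginx
- [decision] Chose SQLite over Postgres to keep setup simple

## Relations
- relates_to [[Project Alpha]]
```

Because it's plain Markdown, the same file stays readable and editable in any text editor or an Obsidian vault, which is what makes the no-database approach workable.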
-
Seems like it would be a powerful combo.
https://levelup.gitconnected.com/how-to-make-your-clients-more-context-aware-with-openmemory-mcp-60057bcc24a3
but one question: would it eat up tokens even faster? Running out of tokens in a conversation seems to be the biggest limitation that keeps sending me back to Windsurf. On the other hand, maybe this lets you keep context across conversations and so actually enables more Claude + DesktopCommanderMCP usage?