-
Thanks @fabb. Good questions - that's what I was initially wrestling with too, thinking an MCP server just for running prompts would be overkill. When I first heard about MCP servers I thought the idea was cool, but people had already built most of the MCP tools that LLMs use day to day, so there seemed to be no need to reinvent the wheel. The more I read the Anthropic documentation, though, the more I get the feeling there's value in it.

Basically an MCP server works much like an API. What we want is a way for LLMs to take our text and turn it into some tangible thing or action. For software developers, LLMs already do a lot of that without us creating extra MCP servers (sorry, repeating myself here). The bigger point is that MCP servers let LLMs interact with the real world, not just text (or "bits" if you prefer): tools that both gather data from the outside world and produce output that "does things" - think physical computing with sensors (input) and actuators (output). Sorry, I just realised I've been "lecturing"; I don't mean to, but I'll keep it here for context and to stimulate discussion. NB: Anthropic's documentation specifically mentions that MCP is not just for tools but also for prompts.

POSSIBLE USE CASE: In my situation I'm continuously "fighting" or re-directing the AI (I'm thinking of the linting issues I keep having, and CI/CD DevOps - I know GitHub works well with Claude Code at PRs). To the point: what if we built an MCP server with RAG that stores our "issues", AI redirects and corrections (for example terminal error output) in a vector DB? That way our coding agent doesn't get bogged down with context that is unnecessary most of the time but occasionally needed. (Gemini's 1M-token context window mitigates this somewhat, as do the compact features of some IDEs.) Then the coding agent can just call the MCP server when it needs that context. The idea needs a bit of refinement - and sorry, I'm waffling a bit (my wife keeps reminding me 😄) and typing from my phone, so I'm not sure I've come across clearly. Anyway, I'd love to hear your thoughts.
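To make the idea a bit more concrete, here is a minimal sketch of what such a "correction memory" MCP server could look like with the TypeScript MCP SDK. The tool names (`store_correction`, `recall_corrections`) and the in-memory array are placeholders I've made up for illustration; a real version would swap the array for an embedding model plus a vector DB.

```typescript
// Hypothetical "correction memory" MCP server (sketch only).
// Assumes @modelcontextprotocol/sdk and zod are installed.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stand-in for a real vector DB: a plain array searched by keyword overlap.
const corrections: { issue: string; fix: string }[] = [];

const server = new McpServer({ name: "correction-memory", version: "0.1.0" });

// Store a redirect/correction the agent was given (e.g. a lint rule it keeps breaking).
server.tool(
  "store_correction",
  { issue: z.string(), fix: z.string() },
  async ({ issue, fix }) => {
    corrections.push({ issue, fix });
    return { content: [{ type: "text", text: `Stored correction #${corrections.length}` }] };
  }
);

// Recall past corrections relevant to the current error output.
server.tool(
  "recall_corrections",
  { query: z.string() },
  async ({ query }) => {
    const words = query.toLowerCase().split(/\s+/);
    const hits = corrections.filter((c) =>
      words.some((w) => c.issue.toLowerCase().includes(w))
    );
    const text = hits.length
      ? hits.map((c) => `- ${c.issue}: ${c.fix}`).join("\n")
      : "No stored corrections match.";
    return { content: [{ type: "text", text }] };
  }
);

// Expose the server over stdio so Claude Code or another MCP client can launch it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

The idea would be that the agent calls `store_correction` whenever I redirect it and `recall_corrections` before retrying, so the recurring fixes live outside the chat context instead of being repeated in every session.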
-
It should not analyze problems and plan solutions inside the MCP itself.
-
How could a BMAD MCP server work?
It could make it easier to integrate the method into any project and workflow, and to update to the latest version of BMAD.
If updates of the MCP server are independent of the docs in the project, how would the MCP server cope with docs from a previous BMAD version?
Would the MCP server do the actual work (A) or return prompts to guide the client AI agent through the BMAD method (B) like sequential-thinking or https://github.com/mettamatt/code-reasoning?
I'd say B makes more sense, because otherwise the MCP server would need an LLM API key of its own, and that's not available in all workflows (e.g. Claude Code or Copilot). A rough sketch of B is below.
MCP servers can provide tools/resources/prompts, but unfortunately a lot of AI agents only support MCP tools, so I'd stick with those for a BMAD MCP server implementation.
Which tools would it provide? One for every task (including free-form tasks with some of the personas, and a generic orchestrator task to guide the agent through the method)? Other ideas?
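To illustrate option B with tools only: a single generic tool could simply hand the relevant BMAD task prompt back to the client agent, which then does the actual work itself. The `.bmad-core/tasks` path and the `get_bmad_task` tool name below are assumptions for the sake of the sketch, not the actual BMAD layout.

```typescript
// Sketch of "option B": the MCP server returns BMAD prompts/instructions
// for the client agent to execute, rather than calling an LLM itself.
import { readFile } from "node:fs/promises";
import { join } from "node:path";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "bmad-method", version: "0.1.0" });

// One generic tool: given a task id, return that task's instructions as text.
// The client agent (Claude Code, Copilot, ...) then carries the task out itself,
// so the server never needs an LLM API key.
server.tool(
  "get_bmad_task",
  { taskId: z.string(), projectRoot: z.string() },
  async ({ taskId, projectRoot }) => {
    // Assumed project layout: <projectRoot>/.bmad-core/tasks/<taskId>.md
    const path = join(projectRoot, ".bmad-core", "tasks", `${taskId}.md`);
    const prompt = await readFile(path, "utf8");
    return {
      content: [
        { type: "text", text: `Follow these BMAD instructions step by step:\n\n${prompt}` },
      ],
    };
  }
);

await server.connect(new StdioServerTransport());
```

Because the prompts are read from the project's own docs at call time, this shape would also sidestep the versioning question above: the server stays thin and whatever BMAD version is checked into the repo is what gets served.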