
Commit d5e5000
Author: Lasim
docs(all): update README to clarify management chaos and token reduction
1 parent: b94bbbd

File tree: 1 file changed (+7, -0 lines)

README.md (7 additions, 0 deletions)
@@ -22,12 +22,14 @@ DeployStack is **The First MCP-as-a-Service Platform**. We turn MCP from "comple
 MCP changes how AI agents use tools, but it has created two critical challenges:
 
 ### Problem 1: Management Chaos
+
 - **Credential Sprawl**: Developers copy and paste sensitive API keys and tokens into insecure local configuration files, creating a huge security risk.
 - **No Governance**: Who is using which tools? Which agent is accessing sensitive customer data? Without a central control plane, companies are blind.
 - **Developer Friction**: Developers spend hours managing complex configurations for dozens of tools, a process that is both tedious and error-prone. Onboarding a new developer is a nightmare of configuration management.
 - **Inconsistent Environments**: Every developer has a slightly different local setup, leading to "it works on my machine" problems and configuration drift.
 
 ### Problem 2: Context Window Consumption Crisis
+
 - **Token Bloat**: Each MCP server adds 5-15 tools to context. With 10 servers, that's 75,000+ tokens consumed before any work begins.
 - **Performance Degradation**: LLM accuracy drops significantly after 20-40 tools are loaded.
 - **Hard Limits**: Tools like Cursor enforce a 40-tool maximum, forcing developers to disable useful servers.
@@ -81,17 +83,20 @@ some-mcp configure --api-key=xxx
 DeployStack includes a **hierarchical router** that reduces MCP token consumption by 90%+:
 
 **Traditional Approach:**
+
 - 10 MCP servers × 15 tools = 150 tools loaded
 - 150 tools × 500 tokens = 75,000 tokens consumed
 - Result: 37.5% of context window gone before you start
 
 **DeployStack Hierarchical Router:**
+
 - Exposes only 2 meta-tools: `discover_mcp_tools` and `execute_mcp_tool`
 - 2 tools × 175 tokens = 350 tokens consumed
 - Result: 0.175% of context window used
 - **Token Reduction: 99.5%**
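The arithmetic behind these bullets can be sanity-checked in a few lines. This is a back-of-the-envelope sketch, not DeployStack code; the 200K-token context window is an assumption inferred from the README's own percentages (75,000 / 200,000 = 37.5%).

```python
# Sanity check of the token math above.
# ASSUMPTION: a 200K-token context window, implied by the README's percentages.
CONTEXT_WINDOW = 200_000

# Traditional approach: every tool schema is loaded up front.
servers, tools_per_server, tokens_per_tool = 10, 15, 500
traditional = servers * tools_per_server * tokens_per_tool  # 150 tools -> 75,000 tokens

# Hierarchical router: only the two meta-tools are exposed.
router = 2 * 175  # 350 tokens

print(f"traditional: {traditional} tokens ({traditional / CONTEXT_WINDOW:.1%} of context)")
print(f"router:      {router} tokens ({router / CONTEXT_WINDOW:.3%} of context)")
print(f"reduction:   {1 - router / traditional:.1%}")
```

Running this reproduces the figures in the list: 75,000 tokens (37.5%), 350 tokens (0.175%), and a 99.5% reduction.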

9498
**How it works:**
99+
95100
1. LLM calls `discover_mcp_tools(query)` - "Find GitHub tools"
96101
2. Router searches across all team MCP servers, returns relevant tool paths
97102
3. LLM calls `execute_mcp_tool(path, args)` with selected tool
@@ -102,13 +107,15 @@ This means you can scale from 3 to 100+ MCP servers without degrading LLM perfor
 ### 🚀 Zero Installation Experience
 
 **Before DeployStack:**
+
 ```bash
 npm install -g some-mcp-cli
 some-mcp configure --api-key=xxx
 # Repeat for every tool, every developer
 ```
 
 **After DeployStack:**
+
 ```json
 {
   "mcpServers": {
0 commit comments