|
1 | | -# Model Context Shell |
| 1 | +<h1 align="center">Model Context Shell</h1> |
2 | 2 |
|
3 | | -[](https://github.com/StacklokLabs/model-context-shell/actions/workflows/ci.yml) |
4 | | -[](LICENSE) |
| 3 | +<p align="center"><b>Unix-style pipelines for MCP tools — compose complex tool workflows as single pipeline requests</b></p> |
5 | 4 |
|
6 | | -**Unix-style pipelines for MCP tools — compose complex tool workflows as single pipeline requests** |
| 5 | +<p align="center"> |
| 6 | +<a href="#introduction">Introduction</a> · |
| 7 | +<a href="#setup">Setup</a> · |
| 8 | +<a href="#security">Security</a> · |
| 9 | +<a href="#development">Development</a> · |
| 10 | +<a href="#specification">Specification</a> · |
| 11 | +<a href="#rfc">RFC</a> · |
| 12 | +<a href="#contributing">Contributing</a> |
| 13 | +</p> |
7 | 14 |
|
8 | 15 | ## Introduction |
9 | 16 |
|
@@ -132,7 +139,38 @@ thv run ghcr.io/stackloklabs/model-context-shell:latest --network host --foreground
132 | 139 | thv run ghcr.io/stackloklabs/model-context-shell:latest --foreground --transport streamable-http |
133 | 140 | ``` |
134 | 141 |
|
135 | | -Once running, Model Context Shell is available to any AI agent that ToolHive supports — no additional integration required. It works with any existing MCP servers running through ToolHive, and relies on ToolHive's authentication model for connected servers. |
| 142 | +Once running, you can find the server's address with `thv list`, which shows the URL and port for each running server. If you've registered your AI client with `thv client setup`, ToolHive configures it to discover running servers automatically — see the [CLI quickstart](https://docs.stacklok.com/toolhive/tutorials/quickstart-cli) for details. |
| 143 | + |
| 144 | +Model Context Shell works with any existing MCP servers running through ToolHive, and relies on ToolHive's authentication model for connected servers. |
| 145 | + |
| 146 | +### Adding MCP servers for testing |
| 147 | + |
| 148 | +Model Context Shell coordinates tools from other MCP servers running through ToolHive. To try it out, start a few servers: |
| 149 | + |
| 150 | +```bash |
| 151 | +# See what's available in the registry |
| 152 | +thv registry list |
| 153 | + |
| 154 | +# Run a simple fetch server (great for testing pipelines) |
| 155 | +thv run fetch |
| 156 | + |
| 157 | +# Check what's running |
| 158 | +thv list |
| 159 | +``` |
| 160 | + |
| 161 | +You can also run servers from npm/PyPI packages directly: |
| 162 | + |
| 163 | +```bash |
| 164 | +thv run npx://@modelcontextprotocol/server-everything |
| 165 | +``` |
| 166 | + |
| 167 | +For servers that need credentials (e.g. GitHub), pass secrets via ToolHive: |
| 168 | + |
| 169 | +```bash |
| 170 | +thv run --secret github,target=GITHUB_PERSONAL_ACCESS_TOKEN github |
| 171 | +``` |
| 172 | + |
| 173 | +See the [ToolHive documentation](https://docs.stacklok.com/toolhive) for the full guide, including [CLI quickstart](https://docs.stacklok.com/toolhive/tutorials/quickstart-cli) and [available integrations](https://docs.stacklok.com/toolhive/integrations). |
136 | 174 |
|
137 | 175 | ### Tips |
138 | 176 |
|
@@ -175,6 +213,22 @@ uv run ruff format --check . |
175 | 213 | uv run pyright |
176 | 214 | ``` |
177 | 215 |
|
| 216 | +## Specification |
| 217 | + |
| 218 | +For now, this project serves as a living specification — the implementation _is_ the spec. As the idea matures, a more formal specification may be extracted from it. |
| 219 | + |
| 220 | +**Execution model.** The current execution model is a scriptable map-reduce pipeline: stages run sequentially, with `for_each` providing the map step over tool calls. This could be extended with a more generic mini-interpreter for evaluating more complex pipelines, but the current thinking is that it should never grow into a full-blown programming language. Past a certain level of complexity, it makes more sense for agents to write a larger piece of code directly, or to combine written code with the shell approach. That said, built-in access to tools like `jq` and `awk` already makes the pipeline model surprisingly capable for most data transformation tasks.
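As a purely illustrative sketch, the sequential map-reduce model can be captured in a few lines of Python. The stage dictionaries and the `for_each` key below are assumptions made up for this example, not the actual pipeline schema (which is defined in `main.py`):

```python
# Illustrative sketch of the sequential map-reduce execution model.
# The stage shape ({"tool": ...} vs {"for_each": ...}) is an assumption
# for this example, not the real pipeline schema.

def run_pipeline(stages, call_tool, initial=None):
    """Run stages in order; a "for_each" stage maps a tool call over
    each item of the previous stage's output (the map step)."""
    data = initial
    for stage in stages:
        if "for_each" in stage:
            # Map step: fan the tool call out over each item.
            data = [call_tool(stage["for_each"], item) for item in data]
        else:
            # Sequential step: run the tool once on the current value.
            data = call_tool(stage["tool"], data)
    return data
```

In this sketch, a stage like `{"tool": "list_items"}` runs once on the current value, while `{"for_each": "transform"}` fans out over a list — the sequential reduce and per-item map described above.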
| 221 | + |
| 222 | +**Pipeline schema.** The pipeline format is defined by the `execute_pipeline` tool in [`main.py`](https://github.com/StacklokLabs/model-context-shell/blob/main/main.py). Since FastMCP generates the JSON Schema from the function signature and docstring, this serves as the canonical schema definition. |
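The idea of deriving a JSON Schema from a function signature can be approximated with stdlib introspection. This is a rough sketch only — FastMCP's actual schema generation is richer, and the `execute_pipeline` signature shown here is invented for illustration (see `main.py` for the real one):

```python
import inspect
from typing import get_type_hints

def execute_pipeline(pipeline: str, timeout: int = 60) -> str:
    """Run a pipeline of MCP tool calls. (Hypothetical signature,
    used only to illustrate schema derivation.)"""
    raise NotImplementedError

def sketch_input_schema(fn):
    """Approximate how a signature and docstring map to a tool's
    JSON Schema: parameters become properties, those without
    defaults become required, and the docstring is the description."""
    hints = get_type_hints(fn)
    type_map = {str: "string", int: "integer", bool: "boolean", float: "number"}
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": type_map.get(hints.get(name), "object")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "type": "object",
        "properties": props,
        "required": required,
        "description": inspect.getdoc(fn),
    }
```

Because the schema is generated rather than hand-written, the function signature and docstring in `main.py` stay authoritative by construction.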
| 223 | + |
| 224 | +**ToolHive and security.** The reliance on ToolHive and container isolation is a practical choice — it was the simplest way to get a working, secure system. ToolHive handles tool discovery, container management, and networking, letting this project focus on the pipeline execution model itself. A different deployment model could be adopted in the future without changing the core concept.
| 225 | + |
| 226 | +## RFC |
| 227 | + |
| 228 | +This project is both a working tech demo and an early-stage RFC for the concept of composable MCP tool pipelines. Rather than writing a detailed specification upfront, the goal is to gather feedback on the idea by providing something concrete to try. |
| 229 | + |
| 230 | +If you have thoughts on the approach, ideas for improvements, or use cases we haven't considered, please share them in the [Discussions](https://github.com/StacklokLabs/model-context-shell/discussions) section. |
| 231 | + |
178 | 232 | ## Contributing |
179 | 233 |
|
180 | 234 | Contributions, ideas, and feedback are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines, including our DCO sign-off requirement. |
|