I'm building a system with 1 main agent + 4 MCP servers (I'll have more of both in the future) and would love feedback on my approach. My containerization experience is limited, so I want to make sure I'm doing this right.
My Philosophy
I want MCPs to be HTTP-reachable from outside - not just from my main agent. The idea: if I solve a problem once (like a data integration), I don't want to copy-paste code everywhere. I want any external tool, agent, or future protocol (agent-to-agent, ag-ui, etc.) to just reach the MCP directly. That's why I chose streamable-http transport with exposed ports.
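To make that concrete: because the servers speak plain HTTP, any client that can send a JSON-RPC POST can call them, no MCP SDK required. Here's a stdlib-only sketch of building such a request (the URL/port match my compose setup; the Accept header is what streamable-http servers expect, since they may answer with JSON or an SSE stream):

```python
import json
import urllib.request


def mcp_request(base_url: str, method: str, params: dict,
                req_id: int = 1) -> urllib.request.Request:
    """Build a JSON-RPC POST for a streamable-http MCP endpoint.

    Any HTTP-capable tool or agent can talk to the server this way --
    that's the whole point of exposing MCPs over HTTP instead of stdio.
    """
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,     # e.g. "tools/list" or "tools/call"
        "params": params,
    }).encode()
    return urllib.request.Request(
        base_url,
        data=body,
        headers={
            "Content-Type": "application/json",
            # streamable-http may reply with JSON or an SSE stream
            "Accept": "application/json, text/event-stream",
        },
        method="POST",
    )


# Build (but don't send) a request against one of my MCP services:
req = mcp_request("http://localhost:8001/mcp", "tools/list", {})
```

Sending it with `urllib.request.urlopen(req)` works the same from the host, from another container, or from any future agent protocol that can speak HTTP.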
Current Setup
Single shared Dockerfile for all services (Python 3.13, FastMCP)
Docker Compose orchestrating 5 services
streamable-http transport with exposed ports (8001:8001, etc.)
Docker internal DNS for service-to-service (http://mcp-1:8001/mcp)
Health checks + restart: unless-stopped
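Concretely, my compose file looks roughly like this (service names, module paths, and the health check command are illustrative):

```yaml
# docker-compose.yml -- sketch of the layout described above
services:
  agent:
    build: .                    # same shared Dockerfile for every service
    command: python -m agent
    environment:
      MCP_1_URL: http://mcp-1:8001/mcp   # Docker's internal DNS resolves mcp-1
    depends_on:
      mcp-1:
        condition: service_healthy

  mcp-1:
    build: .
    command: python -m mcp_1
    ports:
      - "8001:8001"             # hard-mapped host port for external callers
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "python", "-c", "import socket; socket.create_connection(('localhost', 8001), 2)"]
      interval: 30s
      timeout: 5s
      retries: 3

  # mcp-2 .. mcp-4 follow the same pattern on ports 8002-8004
```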
Concerns Raised
A teammate suggested:
Shared Dockerfile = slow deploys when changing one MCP
Separate Dockerfiles per MCP would be better
Hard-mapped ports might be problematic
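For context on the rebuild concern: one middle ground between a fully shared Dockerfile and one Dockerfile per MCP is a single multi-stage Dockerfile with one target per service, so changing one MCP's code only invalidates that stage's layers while the dependency layer stays cached (directory and stage names here are illustrative):

```dockerfile
# Dockerfile -- one file, one build target per service (sketch)
FROM python:3.13-slim AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # shared deps, cached once

FROM base AS mcp-1
COPY mcp_1/ ./mcp_1/       # only this service's code invalidates this stage
CMD ["python", "-m", "mcp_1"]

FROM base AS mcp-2
COPY mcp_2/ ./mcp_2/
CMD ["python", "-m", "mcp_2"]
```

Each compose service then selects its own target, e.g. `build: { context: ., target: mcp-1 }`.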
My Questions
Is Docker Compose + shared Dockerfile reasonable for 4-12 HTTP-reachable MCPs?
For my use case (MCPs reachable externally), is port mapping the right pattern?
At what scale should I consider separate images or different architecture?