Automated configuration, monitoring, and management of a MongoDB replica set in Docker Swarm.
Ensures continuous operation and adapts to topology changes for high availability and data consistency.
Read the full write-up, *Building Production MongoDB Replica Sets in Docker Swarm*, which covers architecture decisions, failover benchmarks, and production lessons learned.
```sh
git clone https://github.com/BitWise-0x/MongoDB-ReplicaSet-Manager && cd MongoDB-ReplicaSet-Manager
./deploy.sh
```

Note: Primary discovery uses MongoDB's `hello` command and checks `isWritablePrimary` for accuracy across server versions.
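As an illustration, primary detection from a `hello` reply reduces to a small pure function. The sketch below is not from this repo (the `is_primary` helper is illustrative); it also falls back to the legacy `ismaster` field reported by older servers:

```python
def is_primary(hello_response: dict) -> bool:
    """Return True if the node that produced this `hello` (or legacy
    `isMaster`) response is a writable primary.

    Modern servers report `isWritablePrimary`; pre-4.4.2 servers report
    `ismaster`. Checking both keeps the test accurate across versions.
    """
    return bool(
        hello_response.get("isWritablePrimary",
                           hello_response.get("ismaster", False))
    )

# Assumed PyMongo usage (client setup omitted):
#   resp = client.admin.command("hello")
#   if is_primary(resp): ...
```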
```mermaid
graph TD
    User["👤 Admin / App"]
    subgraph Docker Swarm
        db["<b>database</b><br>mongo:8.0.13<br>:27017 (internal)<br><i>global mode</i>"]
        ctrl["<b>dbcontroller</b><br>jackietreehorn/mongo-replica-ctrl<br><i>manager node only</i>"]
        nosql["<b>nosqlclient</b><br>mongoclient/mongoclient<br>:3030"]
    end
    subgraph Infrastructure
        net["<b>backend</b><br>encrypted overlay network"]
        secret["<b>mongo-keyfile</b><br>Docker secret"]
        vol_data["<b>mongodata</b><br>/data/db"]
        vol_cfg["<b>mongoconfigdb</b><br>/data/configdb"]
    end
    User -->|":3030"| nosql
    nosql --> db
    ctrl -->|"Docker socket"| db
    ctrl -->|"PyMongo<br>rs.reconfig()"| db
    db --- net
    ctrl --- net
    nosql --- net
    db ---|"keyfile auth"| secret
    db --- vol_data
    db --- vol_cfg
```
| Component | Version |
|---|---|
| MongoDB | 8.0.x (recipe uses 8.0.13; 7.0 compatible) |
| PyMongo Driver | 4.15.x (pinned `>=4.15,<5`, included in controller image) |
| Docker | >= 24.0.5 |
| OS | Linux >= Ubuntu 23.04 (linux/amd64, linux/arm/v7, linux/arm64) |
- A Docker Swarm cluster (local or cloud); tested on a 6-node Swarm
- Docker Stack recipe: see `docker-compose-stack.yml`
- Environment variables: see `mongo-rs.env`
- Deployment script: `deploy.sh`
1. Set all required environment variables in `mongo-rs.env` (see Environment Variables below).
2. Modify `docker-compose-stack.yml` to include your application service. Set your app's MongoDB URI to:
   `mongodb://${MONGO_ROOT_USERNAME}:${MONGO_ROOT_PASSWORD}@database:27017/?replicaSet=${REPLICASET_NAME}`
3. Deploy via `./deploy.sh`, which will:
   - Import environment variables
   - Create a `backend` overlay network with encryption enabled
   - Generate a keyfile for the replica set and add it as a Docker secret
   - Spin up all stack services: `mongo`, `dbcontroller`, `nosqlclient`, and your application
   - Run the `database` service in global mode (one instance per Swarm node); `dbcontroller` runs as a single instance on a manager node
4. Monitor logs for output and errors (see Troubleshooting).
5. To remove the stack:

   ```sh
   ./remove.sh   # or: docker stack rm [stackname]
   ```

   The `_backend` overlay network is not removed automatically (it is external). Leave it in place when redeploying to retain the original subnet and avoid connectivity issues.
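The URI template from step 2 is expanded by Docker at deploy time, but the same string can be assembled in application code. A minimal sketch assuming the defaults from `mongo-rs.env` (the `build_mongo_uri` helper is hypothetical, not part of this repo):

```python
import os
from urllib.parse import quote_plus

def build_mongo_uri(env=None):
    """Assemble the replica-set connection URI from the same variables
    defined in mongo-rs.env; defaults mirror that file."""
    env = dict(os.environ) if env is None else env
    user = quote_plus(env.get("MONGO_ROOT_USERNAME", "root"))
    password = quote_plus(env.get("MONGO_ROOT_PASSWORD", "password123"))
    rs_name = env.get("REPLICASET_NAME", "rs")
    # Inside the stack, the service is reachable by its short name
    # "database" on the internal (unpublished) port 27017.
    return (f"mongodb://{user}:{password}@database:27017/"
            f"?replicaSet={rs_name}")
```

Percent-encoding the credentials with `quote_plus` matters when the password contains URI-reserved characters such as `@` or `:`.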
Defined in `mongo-rs.env`:

| Variable | Default | Purpose |
|---|---|---|
| `STACK_NAME` | `myapp` | Docker stack name |
| `MONGO_VERSION` | `8.0.13` | MongoDB image tag |
| `REPLICASET_NAME` | `rs` | Replica set name |
| `BACKEND_NETWORK_NAME` | `${STACK_NAME}_backend` | Overlay network name |
| `MONGO_SERVICE_URI` | `${STACK_NAME}_database` | MongoDB service name |
| `MONGO_ROOT_USERNAME` | `root` | Admin username |
| `MONGO_ROOT_PASSWORD` | `password123` | Admin password |
| `INITDB_DATABASE` | `myinitdatabase` | Initial database name |
| `INITDB_USER` | `mydbuser` | Initial database user |
| `INITDB_PASSWORD` | `password` | Initial database password |
- Smart Discovery: Identifies and assesses MongoDB services in the Docker Swarm with intelligent node state detection and constraint handling.
- Deployment Intelligence: Distinguishes between fresh deployments, redeployments with IP changes, and dynamic scaling scenarios using advanced configuration analysis.
- Optimized Configuration: Uses current task IPs for primary detection during redeployments, eliminating delays from stale configuration data.
- Adaptive Management: Handles MongoDB startup transitional states with retry logic, ensuring reliable operation across various deployment scenarios.
- Real-time Monitoring: Continuously adapts replica set configuration for network changes, node lifecycle events, and topology updates with minimal downtime.
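The deployment-intelligence step above is essentially a comparison between the member addresses stored in `rs.config()` and the IPs of the tasks Swarm is running right now. A simplified, illustrative classification (the function name and return values are not from the controller source):

```python
def classify_deployment(configured_ips, task_ips):
    """Classify the scenario the controller faces by comparing the
    member IPs recorded in rs.config() with the IPs of the MongoDB
    tasks Docker Swarm is currently running."""
    if not configured_ips:
        return "fresh"      # no config yet: initiate the replica set
    if configured_ips == task_ips:
        return "steady"     # config matches reality: nothing to do
    if configured_ips.isdisjoint(task_ips):
        return "redeploy"   # every IP changed: reconfigure from task IPs
    return "scale"          # partial overlap: members added or removed
```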
The nosqlclient service included in the recipe provides a web UI at `http://<any-swarm-node-ip>:3030` for browsing and managing the database.
The compose YML pulls the latest controller from DockerHub via `jackietreehorn/mongo-replica-ctrl`. You can also pull it manually (`docker pull jackietreehorn/mongo-replica-ctrl:latest`) or build locally using the included `./build.sh`.
Check Docker service logs for the `dbcontroller` service. Set `DEBUG: 1` in the compose YML for verbose output. The controller uses colored ANSI logging:
| Color | Level |
|---|---|
| GREEN | INFO messages |
| YELLOW | WARNING messages and IP addresses |
| RED | ERROR messages |
| MAGENTA (bold) | PRIMARY nodes |
| CYAN | SECONDARY nodes and DEBUG messages |
| BOLD | ReplicaSet-related terms |
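Colored output of this kind is produced with plain ANSI escape sequences; below is a hypothetical formatter in the same palette (the escape codes are standard ANSI, not copied from the controller source):

```python
# ANSI escape codes matching the palette in the table above.
COLORS = {
    "INFO": "\033[32m",       # green
    "WARNING": "\033[33m",    # yellow
    "ERROR": "\033[31m",      # red
    "PRIMARY": "\033[1;35m",  # bold magenta
    "SECONDARY": "\033[36m",  # cyan
    "DEBUG": "\033[36m",      # cyan
}
RESET = "\033[0m"

def colorize(level, message):
    """Wrap a log message in the escape code for its level; unknown
    levels pass through uncolored."""
    code = COLORS.get(level)
    return f"{code}{level}: {message}{RESET}" if code else f"{level}: {message}"
```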
```sh
docker service logs [servicename]_dbcontroller --follow
```
Environment: verify all variables are correctly set in `mongo-rs.env`.

Docker Stack Compose YML: `dbcontroller` must run as a single instance on a Swarm manager node. Multiple instances will perform conflicting actions. A restart policy is included in the service definition for error recovery.

IMPORTANT: The default MongoDB port `27017` is used internally only and is not published outside the Swarm by design. Publishing this port will break replica set management.

Firewalls / SELinux: Linux distributions using SELinux are known to cause issues with MongoDB. Check with `sestatus` and either disable SELinux or configure it for MongoDB. Also verify that your firewall allows MongoDB traffic.

Networking: the `_backend` overlay network is assigned an address space automatically by Docker. To define a custom network space, uncomment the relevant section in `deploy.sh`. Do not remove this network between redeployments.

Persistent Data: the `mongo` service must run in global mode to prevent multiple instances from sharing the same data directory. Volumes are defined as external to prevent accidental deletion between redeployments.

Swarm Nodes: for HA, run more than one Swarm manager so `dbcontroller` can restart on a different node if needed.

Healthchecks: the `mongo-healthcheck` script only verifies MongoDB process health (not cluster status). Cluster status is managed by `dbcontroller`. The script is POSIX sh compatible and uses a `mongosh` ping.

MongoDB Configuration Check: run `./docker-mongodb_config-check.sh` from any manager node to fetch `rs.status()` and `rs.config()` output:
```sh
./docker-mongodb_config-check.sh
```

```
members: [
  {
    _id: 1,
    name: '10.0.26.51:27017',
    health: 1,
    state: 2,
    stateStr: 'SECONDARY',
    ...
  },
  {
    _id: 2,
    name: '10.0.26.48:27017',
    health: 1,
    state: 1,
    stateStr: 'PRIMARY',   <--- should match dbcontroller log output
    ...
  }
]
```
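The check done by eye above can also be automated: scan the `members` array for the entry with state code 1 (PRIMARY) and health 1. An illustrative helper (not part of this repo):

```python
def find_primary(members):
    """Return the host:port of the PRIMARY member from an rs.status()
    members array, or None if no primary is currently elected.
    State code 1 means PRIMARY, 2 means SECONDARY."""
    for member in members:
        if member.get("state") == 1 and member.get("health") == 1:
            return member.get("name")
    return None
```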
Service Start-up: on initial deployment, image downloads and replica set initialization take time. Dependent services (e.g. nosqlclient, your application) may fail and restart until MongoDB is ready; this is expected behavior. MongoDB in replica set mode is not available until a primary is elected.
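Applications can make this startup window explicit by polling until a primary is elected. A generic, illustrative sketch with an injectable probe (in practice the probe would run the `hello` command via PyMongo; all names here are hypothetical):

```python
import time

def wait_for_primary(probe, attempts=30, delay=2.0):
    """Call probe() (a function returning True once a writable primary
    is reachable) until it succeeds or the attempts run out.
    Exceptions from the probe are treated as "not ready yet"."""
    for remaining in range(attempts, 0, -1):
        try:
            if probe():
                return True
        except Exception:
            pass  # server still starting or election in progress
        if remaining > 1:
            time.sleep(delay)
    return False
```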
- PyMongo driver docs: https://www.mongodb.com/docs/languages/python/pymongo-driver/current/
- PyMongo API docs: https://pymongo.readthedocs.io/en/4.15.1/api/
- Pinned versions:
  - PyMongo: `>=4.15,<5` (compatible with MongoDB 7.x/8.x and Python 3.13)
  - Docker SDK for Python: `>=7,<9`
- GitHub: github.com/BitWise-0x
- DockerHub: hub.docker.com/u/jackietreehorn
MIT License β see LICENSE for details.

