Commit 99b4404 (1 parent: ef4d97d)

Merge remote-tracking branch 'origin/v2'

    # Conflicts:
    #	README.md

File tree

4 files changed: +85 −41 lines changed

Dockerfile

Lines changed: 1 addition & 1 deletion
```diff
@@ -26,5 +26,5 @@ ENV GALERA_USER=galera \
     CLUSTER_NAME=docker_cluster \
     MYSQL_ALLOW_EMPTY_PASSWORD=1
 
-
+CMD ["mysqld"]
 
```

README.md

Lines changed: 67 additions & 24 deletions
```diff
@@ -1,12 +1,22 @@
 # docker-mariadb-cluster
-__Version 1.0__
-Version 2 is the current branch and is featured as `:latest` on DockerHub.
-Dockerized MariaDB Galera Cluster
+__Version 2__
+Dockerized Automated MariaDB Galera Cluster
 
+Version 2 is the advanced branch and is featured on DockerHub as `latest` from now on.
+Old version 1.0 can be found here: https://github.com/toughIQ/docker-mariadb-cluster/tree/v1.
+To get V1.0 Docker images, just `docker pull toughiq/mariadb-cluster:1.0`
 
-Build for use with Docker __1.12.1__+
+The idea of this project is to create an automated and fully ephemeral MariaDB Galera cluster.
+No static bindings, no persistent volumes. Like a disk RAID the data gets replicated across the cluster.
+If one node fails, another node will be brought up and the data will be initialized.
 
-# WORK in Progress!!
+__Consider this a POC and not a production ready system!__
+
+Built for use with Docker __1.12.1__+ in __Swarm Mode__
+
+# WORK in Progress!
+
+See ISSUES for known problems.
 
 ## Setup
 ### Init Swarm Nodes/Cluster
```
```diff
@@ -17,39 +27,75 @@ Swarm Master:
 
 Additional Swarm Node(s):
 
-    docker swarm join <MasterNodeIP>:2377
+    docker swarm join <MasterNodeIP>:2377 + join-tokens shown at swarm init
+
+To get the tokens at a later time, run `docker swarm join-token (manager|worker)`
 
 ### Create DB network
 
     docker network create -d overlay mydbnet
 
-### Fire up Bootstrap node
-
-    docker service create --name bootstrap \
+### Init/Bootstrap DB Cluster
+
+At first we start a new service, which is set to `--replicas=1` to turn this instance into a bootstrapping node.
+If there is just one service task running within the cluster, this instance automatically starts with `bootstrapping` enabled.
+
+    docker service create --name dbcluster \
     --network mydbnet \
     --replicas=1 \
-    --env MYSQL_ALLOW_EMPTY_PASSWORD=0 \
-    --env MYSQL_ROOT_PASSWORD=rootpass \
-    --env DB_BOOTSTRAP_NAME=bootstrap \
-    toughiq/mariadb-cluster:1.0 --wsrep-new-cluster
+    --env DB_SERVICE_NAME=dbcluster \
+    toughiq/mariadb-cluster
 
-### Fire up Cluster Members
+Note: the service name provided by `--name` has to match the environment variable __DB_SERVICE_NAME__ set with `--env DB_SERVICE_NAME`.
+
+Of course there are the default MariaDB options to define a root password, create a database, create a user and set a password for this user.
+Example:
 
     docker service create --name dbcluster \
     --network mydbnet \
-    --replicas=3 \
+    --replicas=1 \
     --env DB_SERVICE_NAME=dbcluster \
-    --env DB_BOOTSTRAP_NAME=bootstrap \
-    toughiq/mariadb-cluster:1.0
+    --env MYSQL_ROOT_PASSWORD=rootpass \
+    --env MYSQL_DATABASE=mydb \
+    --env MYSQL_USER=mydbuser \
+    --env MYSQL_PASSWORD=mydbpass \
+    toughiq/mariadb-cluster
+
+### Scale out additional cluster members
+Once the first service instance/task is running, we are good to scale out.
+Check the service with `docker service ps dbcluster`. The result should look like this, with __CURRENT STATE__ reporting something like __Running__.
+
+    ID                         NAME         IMAGE                    NODE    DESIRED STATE  CURRENT STATE           ERROR
+    7c81muy053eoc28p5wrap2uzn  dbcluster.1  toughiq/mariadb-cluster  node01  Running        Running 41 seconds ago
+
+Let's scale out now:
+
+    docker service scale dbcluster=3
+
+These two additional nodes will come up in "cluster join" mode. Let's check again: `docker service ps dbcluster`
 
-### Startup MaxScale Proxy
+    ID                         NAME         IMAGE                    NODE    DESIRED STATE  CURRENT STATE                ERROR
+    7c81muy053eoc28p5wrap2uzn  dbcluster.1  toughiq/mariadb-cluster  node01  Running        Running 6 minutes ago
+    8ht037ka0j4g6lnhc194pxqfn  dbcluster.2  toughiq/mariadb-cluster  node02  Running        Running about a minute ago
+    bgk07betq9pwgkgpd3eoozu6u  dbcluster.3  toughiq/mariadb-cluster  node02  Running        Running about a minute ago
+
+### Create MaxScale Proxy Service and connect to DBCluster
+
+There is no absolute need for a MaxScale Proxy service with this Docker Swarm enabled DB cluster, since Swarm provides a loadbalancer. So it would be possible to connect to the cluster by using the loadbalancer DNS name, which in our case is __dbcluster__. It's the same name that is provided at startup by `--name`.
+
+But MaxScale provides some additional features regarding loadbalancing database traffic. And it's an easy way to get information on the status of the cluster.
+
+Details on this MaxScale image can be found here: https://github.com/toughIQ/docker-maxscale
 
     docker service create --name maxscale \
     --network mydbnet \
     --env DB_SERVICE_NAME=dbcluster \
     --env ENABLE_ROOT_USER=1 \
     --publish 3306:3306 \
     toughiq/maxscale
+
+To disable root access to the database via MaxScale, just set `--env ENABLE_ROOT_USER=0` or remove this line altogether.
+Root access is disabled by default.
 
 ### Check successful startup of Cluster & MaxScale
 Execute the following command. Just use autocompletion to get the `SLOT` and `ID`.
```
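The scale-out step in this hunk waits for every task to report __Running__ in `docker service ps` output. For illustration, the check can be scripted; the helper `count_running_tasks` below is hypothetical and not part of this repository — a sketch only:

```shell
#!/bin/sh
# Hypothetical helper (not part of this repo): count how many tasks in
# `docker service ps <service>` output report DESIRED/CURRENT STATE "Running".
count_running_tasks() {
  # $1 = full `docker service ps` output, including the header line
  printf '%s\n' "$1" | tail -n +2 | grep -c 'Running.*Running'
}

# Sample output as shown in the README above
sample='ID     NAME         IMAGE                    NODE    DESIRED STATE  CURRENT STATE               ERROR
7c81m  dbcluster.1  toughiq/mariadb-cluster  node01  Running        Running 6 minutes ago
8ht03  dbcluster.2  toughiq/mariadb-cluster  node02  Running        Running about a minute ago
bgk07  dbcluster.3  toughiq/mariadb-cluster  node02  Running        Running about a minute ago'

count_running_tasks "$sample"   # prints 3
```

In a live setup one would feed it `"$(docker service ps dbcluster)"` and compare the count to the requested replica number.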
```diff
@@ -61,12 +107,9 @@ The result should report the cluster up and running:
     -------------------+-----------------+-------+-------------+--------------------
     Server             | Address         | Port  | Connections | Status
     -------------------+-----------------+-------+-------------+--------------------
-    10.0.0.9           | 10.0.0.9        |  3306 |           0 | Slave, Synced, Running
-    10.0.0.8           | 10.0.0.8        |  3306 |           0 | Slave, Synced, Running
-    10.0.0.10          | 10.0.0.10       |  3306 |           0 | Master, Synced, Running
+    10.0.0.3           | 10.0.0.3        |  3306 |           0 | Slave, Synced, Running
+    10.0.0.4           | 10.0.0.4        |  3306 |           0 | Slave, Synced, Running
+    10.0.0.5           | 10.0.0.5        |  3306 |           0 | Master, Synced, Running
     -------------------+-----------------+-------+-------------+--------------------
 
 
-### Remove Bootstrap Service
-
-    docker service rm bootstrap
```
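Besides the server listing above, cluster membership can also be read from Galera's own status variables. A sketch, assuming the status row comes from something like `mysql -h dbcluster -uroot -prootpass -Ne "SHOW STATUS LIKE 'wsrep_cluster_size'"` (the helper name `cluster_size_ok` and the connection details are assumptions, not part of this repo):

```shell
#!/bin/sh
# Hypothetical helper: compare Galera's wsrep_cluster_size status row
# against the expected member count.
cluster_size_ok() {
  # $1 = "Variable_name Value" status output, $2 = expected cluster size
  actual=$(printf '%s\n' "$1" | awk '/wsrep_cluster_size/ {print $2}')
  [ "$actual" = "$2" ]
}

# With the 3-node cluster from this README:
if cluster_size_ok "wsrep_cluster_size 3" 3; then
  echo "cluster complete"
else
  echo "cluster degraded"
fi
```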

scripts/init_cluster_conf.sh

Lines changed: 17 additions & 15 deletions
```diff
@@ -2,45 +2,47 @@
 
 set -e
 
+# we set gcomm string with cluster_members via ENV by default
+CLUSTER_ADDRESS="gcomm://$CLUSTER_MEMBERS?pc.wait_prim=no"
 
-# we use dns service discovery to find other members when in node mode
+# we use dns service discovery to find other members when in service mode
+# and set/override cluster_members provided by ENV
 if [ -n "$DB_SERVICE_NAME" ]; then
-    CLUSTER_MEMBERS=`getent hosts tasks.$DB_SERVICE_NAME|awk '{print $1}'|tr '\n' ','`
-fi
-
-if [ -n "$DB_BOOTSTRAP_NAME" ]; then
-    CLUSTER_MEMBERS=$CLUSTER_MEMBERS`getent hosts tasks.$DB_BOOTSTRAP_NAME|awk '{print $1}'`
+
+    # we check, if we have to enable bootstrapping, if we are the only/first node live
+    if [ `getent hosts tasks.$DB_SERVICE_NAME|wc -l` = 1 ]; then
+        # bootstrapping gets enabled by empty gcomm string
+        CLUSTER_ADDRESS="gcomm://"
+    else
+        # we fetch IPs of service members
+        CLUSTER_MEMBERS=`getent hosts tasks.$DB_SERVICE_NAME|awk '{print $1}'|tr '\n' ','`
+        # we set gcomm string with found service members
+        CLUSTER_ADDRESS="gcomm://$CLUSTER_MEMBERS?pc.wait_prim=no"
+    fi
 fi
 
 
 # we create a galera config
 config_file="/etc/mysql/conf.d/galera.cnf"
 
-# we get the current container IP
-# was added for testing. disabled at the moment
-#MYIP=`ip add show eth0 | grep inet | head -1 | awk '{print $2}' | cut -d"/" -f1`
-# We start config file creation
-
 cat <<EOF > $config_file
 # Node specifics
 [mysqld]
 wsrep-node-name = $HOSTNAME
-#wsrep-node-address = $MYIP
 wsrep-sst-receive-address = $HOSTNAME
 wsrep-node-incoming-address = $HOSTNAME
 
 # Cluster settings
 wsrep-on=ON
 wsrep-cluster-name = "$CLUSTER_NAME"
-wsrep-cluster-address = gcomm://$CLUSTER_MEMBERS?pc.wait_prim=no
+wsrep-cluster-address = $CLUSTER_ADDRESS
 wsrep-provider = /usr/lib/galera/libgalera_smm.so
-wsrep-provider-options = "gcache.size=256M;gcache.page_size=128M"
+wsrep-provider-options = "gcache.size=256M;gcache.page_size=128M;debug=no"
 wsrep-sst-auth = "$GALERA_USER:$GALERA_PASS"
 wsrep_sst_method = rsync
 binlog-format = row
 default-storage-engine = InnoDB
 innodb-doublewrite = 1
 innodb-autoinc-lock-mode = 2
 innodb-flush-log-at-trx-commit = 2
-innodb-locks-unsafe-for-binlog = 1
 EOF
```
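The new bootstrap rule in this script (exactly one resolved task → empty gcomm string, otherwise join all resolved members) can be isolated into a function for illustration. `build_cluster_address` is a hypothetical name — the script inlines this logic — and the trailing comma in the member list mirrors the script's `tr '\n' ','` behavior:

```shell
#!/bin/sh
# Sketch of the address-building rule from init_cluster_conf.sh:
# the first/only node bootstraps with an empty gcomm string,
# later nodes join the already-resolved members.
build_cluster_address() {
  # $1 = newline-separated member IPs, as `getent hosts tasks.<service>` yields
  count=$(printf '%s\n' "$1" | grep -c .)
  if [ "$count" -le 1 ]; then
    # bootstrapping gets enabled by an empty gcomm string
    echo "gcomm://"
  else
    members=$(printf '%s\n' "$1" | tr '\n' ',')
    echo "gcomm://$members?pc.wait_prim=no"
  fi
}

build_cluster_address "10.0.0.3"    # -> gcomm://
build_cluster_address "10.0.0.3
10.0.0.4"                           # -> gcomm://10.0.0.3,10.0.0.4,?pc.wait_prim=no
```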

scripts/init_galera_user.sh

Lines changed: 0 additions & 1 deletion
```diff
@@ -3,7 +3,6 @@
 set -e
 
 # we use .sh file to create a .sql file, which will be parsed afterwards due to alphabetical sorting
-
 config_file="/docker-entrypoint-initdb.d/init_galera_user.sql"
 
 # We start config file creation
```
