# docker-mariadb-cluster

__Version 2__

Dockerized Automated MariaDB Galera Cluster

Version 2 is the advanced branch and is featured on DockerHub as `latest` from now on.
Old version 1.0 can be found here: https://github.com/toughIQ/docker-mariadb-cluster/tree/v1.
To get v1.0 Docker images, just `docker pull toughiq/mariadb-cluster:1.0`

The idea of this project is to create an automated and fully ephemeral MariaDB Galera cluster:
no static bindings, no persistent volumes. Like a disk RAID, the data gets replicated across the cluster.
If one node fails, another node is brought up and its data is initialized from the running cluster.

__Consider this a POC and not a production ready system!__

Built for use with Docker __1.12.1__+ in __Swarm Mode__

# WORK in Progress!

See ISSUES for known problems.
## Setup

### Init Swarm Nodes/Cluster

Swarm Master:

    docker swarm init

Additional Swarm Node(s):

    docker swarm join --token <JoinToken> <MasterNodeIP>:2377

The join token is shown at `docker swarm init`. To get the tokens at a later time, run `docker swarm join-token (manager|worker)`
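For scripted setups, the worker token can also be fetched non-interactively. A sketch (run the first command on the manager; `<MasterNodeIP>` stays a placeholder to fill in):

```shell
# On the manager node: capture just the worker join token (-q = quiet).
WORKER_TOKEN=$(docker swarm join-token -q worker)
# On an additional node: join the swarm using that token.
docker swarm join --token "$WORKER_TOKEN" <MasterNodeIP>:2377
```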
### Create DB network

    docker network create -d overlay mydbnet

### Init/Bootstrap DB Cluster

At first we start a new service with `--replicas=1`, which turns this single instance into the bootstrapping node.
If there is just one service task running within the cluster, that instance automatically starts with `bootstrapping` enabled.

    docker service create --name dbcluster \
           --network mydbnet \
           --replicas=1 \
           --env DB_SERVICE_NAME=dbcluster \
           toughiq/mariadb-cluster

Note: the service name provided by `--name` has to match the environment variable __DB_SERVICE_NAME__ set with `--env DB_SERVICE_NAME`.

There are also the default MariaDB options to define a root password, create a database, create a user and set that user's password.
Example:

    docker service create --name dbcluster \
           --network mydbnet \
           --replicas=1 \
           --env DB_SERVICE_NAME=dbcluster \
           --env MYSQL_ROOT_PASSWORD=rootpass \
           --env MYSQL_DATABASE=mydb \
           --env MYSQL_USER=mydbuser \
           --env MYSQL_PASSWORD=mydbpass \
           toughiq/mariadb-cluster

### Scale out additional cluster members

As soon as the first service instance/task is running, we are good to scale out.
Check the service with `docker service ps dbcluster`. The result should look like this, with __CURRENT STATE__ showing something like __Running__:

    ID                         NAME         IMAGE                    NODE    DESIRED STATE  CURRENT STATE           ERROR
    7c81muy053eoc28p5wrap2uzn  dbcluster.1  toughiq/mariadb-cluster  node01  Running        Running 41 seconds ago

Let's scale out now:

    docker service scale dbcluster=3

The two additional nodes will come up in "cluster join" mode. Let's check again: `docker service ps dbcluster`

    ID                         NAME         IMAGE                    NODE    DESIRED STATE  CURRENT STATE               ERROR
    7c81muy053eoc28p5wrap2uzn  dbcluster.1  toughiq/mariadb-cluster  node01  Running        Running 6 minutes ago
    8ht037ka0j4g6lnhc194pxqfn  dbcluster.2  toughiq/mariadb-cluster  node02  Running        Running about a minute ago
    bgk07betq9pwgkgpd3eoozu6u  dbcluster.3  toughiq/mariadb-cluster  node02  Running        Running about a minute ago
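Swarm reporting three running tasks does not by itself prove that Galera formed a three-node cluster; Galera's standard `wsrep_cluster_size` status variable can confirm it. A sketch (assumes the `MYSQL_ROOT_PASSWORD=rootpass` example from above; `<ContainerID>` is a placeholder found via `docker ps` on the node hosting a task):

```shell
# Sketch: ask Galera itself how many nodes joined the cluster.
docker exec -it <ContainerID> \
    mysql -uroot -prootpass -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
# A healthy 3-task cluster should report wsrep_cluster_size = 3.
```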

### Create MaxScale Proxy Service and connect to DBCluster

There is no absolute need for a MaxScale proxy service with this Swarm-enabled DB cluster, since Swarm provides a load balancer. It is possible to connect to the cluster by using the load balancer's DNS name, which in our case is __dbcluster__: the same name provided at startup by `--name`.
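As a sketch of that direct route (no MaxScale), a client container on the same overlay network can use the service name as hostname. This assumes the example credentials from above and, on newer Docker releases, an overlay network created with `--attachable`:

```shell
# Sketch: connect through Swarm's built-in load balancer; "dbcluster"
# resolves to the service's virtual IP on the overlay network.
docker run --rm -it --network mydbnet mariadb \
    mysql -h dbcluster -u mydbuser -pmydbpass mydb
```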

But MaxScale provides additional features for load balancing database traffic, and it is an easy way to get information on the status of the cluster.

Details on this MaxScale image can be found here: https://github.com/toughIQ/docker-maxscale

    docker service create --name maxscale \
           --network mydbnet \
           --env DB_SERVICE_NAME=dbcluster \
           --env ENABLE_ROOT_USER=1 \
           --publish 3306:3306 \
           toughiq/maxscale

To disable root access to the database via MaxScale, set `--env ENABLE_ROOT_USER=0` or remove this line altogether.
Root access is disabled by default.

### Check successful startup of Cluster & MaxScale

Execute the following command. Just use autocompletion to get the `SLOT` and `ID`.
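The check likely takes the following shape; `maxadmin` is MaxScale's admin client inside the `toughiq/maxscale` image, and the task container name `maxscale.<SLOT>.<ID>` is a placeholder pattern to be completed via autocompletion:

```shell
# Sketch: list the backend servers as MaxScale's Galera monitor sees them.
docker exec -it maxscale.<SLOT>.<ID> maxadmin list servers
```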

The result should report the cluster up and running: