@@ -15,7 +15,7 @@ weight: 30

This page shows how to run a replicated stateful application using a
[StatefulSet](/docs/concepts/workloads/controllers/statefulset/) controller.
- The example is a MySQL single-master topology with multiple slaves running
+ The example is a MySQL single-primary topology with multiple secondaries running
asynchronous replication.

{{< note >}}
@@ -69,9 +69,9 @@ kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml
```

This ConfigMap provides `my.cnf` overrides that let you independently control
- configuration on the MySQL master and slaves.
- In this case, you want the master to be able to serve replication logs to slaves
- and you want slaves to reject any writes that don't come via replication.
+ configuration on the MySQL primary and secondaries.
+ In this case, you want the primary to be able to serve replication logs to secondaries
+ and you want secondaries to reject any writes that don't come via replication.

There's nothing special about the ConfigMap itself that causes different
portions to apply to different Pods.
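For illustration, here is a minimal sketch of what such a ConfigMap can look like. The key names `primary.cnf` and `secondary.cnf` match the diff, but the option values and the inline `kubectl apply` invocation are assumptions, not the full manifest from the URL above:

```shell
# Sketch only: the real manifest is at the mysql-configmap.yaml URL above.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  primary.cnf: |
    # Apply on the primary only: enable binary logging so it can
    # serve replication logs to secondaries.
    [mysqld]
    log-bin
  secondary.cnf: |
    # Apply on secondaries only: reject writes that don't come
    # via replication.
    [mysqld]
    super-read-only
EOF
```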
@@ -96,12 +96,12 @@ cluster and namespace.

The Client Service, called `mysql-read`, is a normal Service with its own
cluster IP that distributes connections across all MySQL Pods that report
- being Ready. The set of potential endpoints includes the MySQL master and all
- slaves.
+ being Ready. The set of potential endpoints includes the MySQL primary and all
+ secondaries.

Note that only read queries can use the load-balanced Client Service.
- Because there is only one MySQL master, clients should connect directly to the
- MySQL master Pod (through its DNS entry within the Headless Service) to execute
+ Because there is only one MySQL primary, clients should connect directly to the
+ MySQL primary Pod (through its DNS entry within the Headless Service) to execute
writes.
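As a sketch of the two Services this describes (the port and the `app: mysql` selector are assumptions based on the example manifests, not a copy of them):

```shell
cat <<EOF | kubectl apply -f -
# Headless Service: gives each Pod a stable DNS name such as mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  ports:
  - port: 3306
  selector:
    app: mysql
---
# Client Service: a normal cluster IP that load-balances read connections
# across all Ready MySQL Pods, primary and secondaries alike.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
EOF
```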
### StatefulSet
@@ -167,33 +167,33 @@ This translates the unique, stable identity provided by the StatefulSet
controller into the domain of MySQL server IDs, which require the same
properties.

- The script in the `init-mysql` container also applies either `master.cnf` or
- `slave.cnf` from the ConfigMap by copying the contents into `conf.d`.
- Because the example topology consists of a single MySQL master and any number of
- slaves, the script simply assigns ordinal `0` to be the master, and everyone
- else to be slaves.
+ The script in the `init-mysql` container also applies either `primary.cnf` or
+ `secondary.cnf` from the ConfigMap by copying the contents into `conf.d`.
+ Because the example topology consists of a single MySQL primary and any number of
+ secondaries, the script simply assigns ordinal `0` to be the primary, and everyone
+ else to be secondaries.
Combined with the StatefulSet controller's
[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees),
- this ensures the MySQL master is Ready before creating slaves, so they can begin
+ this ensures the MySQL primary is Ready before creating secondaries, so they can begin
replicating.

### Cloning existing data

- In general, when a new Pod joins the set as a slave, it must assume the MySQL
- master might already have data on it. It also must assume that the replication
+ In general, when a new Pod joins the set as a secondary, it must assume the MySQL
+ primary might already have data on it. It also must assume that the replication
logs might not go all the way back to the beginning of time.
These conservative assumptions are the key to allowing a running StatefulSet
to scale up and down over time, rather than being fixed at its initial size.

The second Init Container, named `clone-mysql`, performs a clone operation on
- a slave Pod the first time it starts up on an empty PersistentVolume.
+ a secondary Pod the first time it starts up on an empty PersistentVolume.
That means it copies all existing data from another running Pod,
- so its local state is consistent enough to begin replicating from the master.
+ so its local state is consistent enough to begin replicating from the primary.

MySQL itself does not provide a mechanism to do this, so the example uses a
popular open-source tool called Percona XtraBackup.
During the clone, the source MySQL server might suffer reduced performance.
- To minimize impact on the MySQL master, the script instructs each Pod to clone
+ To minimize impact on the MySQL primary, the script instructs each Pod to clone
from the Pod whose ordinal index is one lower.
This works because the StatefulSet controller always ensures Pod `N` is
Ready before starting Pod `N+1`.
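To make the ordinal logic concrete, here is a sketch of the kind of shell the `init-mysql` container runs. The mount paths `/mnt/conf.d` and `/mnt/config-map` and the server-id offset are assumptions based on the example manifest, not quoted from it:

```shell
# Derive a MySQL server-id from the Pod's ordinal index, then pick the
# primary config for ordinal 0 and the secondary config for everyone else.
[[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo "[mysqld]" > /mnt/conf.d/server-id.cnf
# Offset the id so it never collides with the reserved value 0.
echo "server-id=$((100 + ordinal))" >> /mnt/conf.d/server-id.cnf
if [[ $ordinal -eq 0 ]]; then
  cp /mnt/config-map/primary.cnf /mnt/conf.d/
else
  cp /mnt/config-map/secondary.cnf /mnt/conf.d/
fi
```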
@@ -206,15 +206,15 @@ server, and an `xtrabackup` container that acts as a
[sidecar](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns).

The `xtrabackup` sidecar looks at the cloned data files and determines if
- it's necessary to initialize MySQL replication on the slave.
+ it's necessary to initialize MySQL replication on the secondary.
If so, it waits for `mysqld` to be ready and then executes the
`CHANGE MASTER TO` and `START SLAVE` commands with replication parameters
extracted from the XtraBackup clone files.

- Once a slave begins replication, it remembers its MySQL master and
+ Once a secondary begins replication, it remembers its MySQL primary and
reconnects automatically if the server restarts or the connection dies.
- Also, because slaves look for the master at its stable DNS name
- (` mysql-0.mysql ` ), they automatically find the master even if it gets a new
+ Also, because secondaries look for the primary at its stable DNS name
+ (`mysql-0.mysql`), they automatically find the primary even if it gets a new
Pod IP due to being rescheduled.

Lastly, after starting replication, the `xtrabackup` container listens for
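The replication bootstrap the sidecar performs looks roughly like this. The log file name and position here are placeholders; the real values are extracted from the XtraBackup clone files:

```shell
# Run on a secondary once mysqld answers; MySQL 5.7 still uses the
# CHANGE MASTER TO / START SLAVE statement names.
mysql -h 127.0.0.1 <<'EOF'
CHANGE MASTER TO
  MASTER_HOST='mysql-0.mysql',
  MASTER_USER='root',
  MASTER_LOG_FILE='mysql-bin.000003',
  MASTER_LOG_POS=4,
  MASTER_CONNECT_RETRY=10;
START SLAVE;
EOF
```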
@@ -224,7 +224,7 @@ case the next Pod loses its PersistentVolumeClaim and needs to redo the clone.

## Sending client traffic

- You can send test queries to the MySQL master (hostname `mysql-0.mysql`)
+ You can send test queries to the MySQL primary (hostname `mysql-0.mysql`)
by running a temporary container with the `mysql:5.7` image and running the
`mysql` client binary.
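For example, a write against the primary followed by a read through the load-balanced Service (the `test.messages` table is illustrative):

```shell
# Write via the primary's stable DNS name.
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never --\
  mysql -h mysql-0.mysql <<EOF
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.messages (message VARCHAR(250));
INSERT INTO test.messages VALUES ('hello');
EOF

# Read via the mysql-read Service, which may land on any Ready Pod.
kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
  mysql -h mysql-read -e "SELECT * FROM test.messages"
```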
@@ -291,7 +291,7 @@ it running in another window so you can see the effects of the following steps.

## Simulating Pod and Node downtime

- To demonstrate the increased availability of reading from the pool of slaves
+ To demonstrate the increased availability of reading from the pool of secondaries
instead of a single server, keep the `SELECT @@server_id` loop from above
running while you force a Pod out of the Ready state.
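One way to force that, assuming the readiness probe execs the `mysql` client inside the container (as the example StatefulSet's probe does), is to temporarily rename the binary:

```shell
# The readiness probe starts failing, so mysql-2 drops out of the
# mysql-read endpoints without killing mysqld itself.
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off
kubectl get pod mysql-2        # READY eventually shows 1/2

# Undo the damage; the Pod becomes Ready again.
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql
```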
@@ -409,9 +409,9 @@ Now uncordon the Node to return it to a normal state:
kubectl uncordon <node-name>
```

- ## Scaling the number of slaves
+ ## Scaling the number of secondaries

- With MySQL replication, you can scale your read query capacity by adding slaves.
+ With MySQL replication, you can scale your read query capacity by adding secondaries.
With StatefulSet, you can do this with a single command:

```shell
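# A sketch of that single command (the replica count is illustrative):
kubectl scale statefulset mysql --replicas=5
```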