@@ -14,7 +14,7 @@ weight: 30
<!-- overview -->

This page shows how to run a replicated stateful application using a
- [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) controller.
+ {{< glossary_tooltip term_id="statefulset" >}}.
This application is a replicated MySQL database. The example topology has a
single primary server and multiple replicas, using asynchronous row-based
replication.
@@ -27,7 +27,7 @@ on general patterns for running stateful applications in Kubernetes.
## {{% heading "prerequisites" %}}


- * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+ * {{< include "task-tutorial-prereqs.md" >}}
* {{< include "default-storage-class-prereqs.md" >}}
* This tutorial assumes you are familiar with
  [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
@@ -44,7 +44,7 @@ on general patterns for running stateful applications in Kubernetes.
## {{% heading "objectives" %}}


- * Deploy a replicated MySQL topology with a StatefulSet controller.
+ * Deploy a replicated MySQL topology with a StatefulSet.
* Send MySQL client traffic.
* Observe resistance to downtime.
* Scale the StatefulSet up and down.
@@ -58,7 +58,7 @@ on general patterns for running stateful applications in Kubernetes.
The example MySQL deployment consists of a ConfigMap, two Services,
and a StatefulSet.

- ### ConfigMap
+ ### Create a ConfigMap {#configmap}

Create the ConfigMap from the following YAML configuration file:
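As elsewhere on this page, the manifest can be applied straight from a URL and then inspected. This is only a sketch: the exact path and the ConfigMap name `mysql` are assumptions based on the `k8s.io/examples/application/mysql/` convention used for the Services manifest shown below.

```
# Apply the ConfigMap; the manifest URL is assumed to follow the same
# k8s.io/examples pattern as the Services manifest applied below.
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml

# Inspect the result; the ConfigMap name "mysql" is also an assumption.
kubectl describe configmap mysql
```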
@@ -78,7 +78,7 @@ portions to apply to different Pods.
Each Pod decides which portion to look at as it's initializing,
based on information provided by the StatefulSet controller.

- ### Services
+ ### Create Services {#services}

Create the Services from the following YAML configuration file:
@@ -88,23 +88,24 @@ Create the Services from the following YAML configuration file:
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml
```

- The Headless Service provides a home for the DNS entries that the StatefulSet
- controller creates for each Pod that's part of the set.
- Because the Headless Service is named `mysql`, the Pods are accessible by
+ The headless Service provides a home for the DNS entries that the StatefulSet
+ {{< glossary_tooltip text="controller" term_id="controller" >}} creates for each
+ Pod that's part of the set.
+ Because the headless Service is named `mysql`, the Pods are accessible by
resolving `<pod-name>.mysql` from within any other Pod in the same Kubernetes
cluster and namespace.

- The Client Service, called `mysql-read`, is a normal Service with its own
+ The client Service, called `mysql-read`, is a normal Service with its own
cluster IP that distributes connections across all MySQL Pods that report
being Ready. The set of potential endpoints includes the primary MySQL server and all
replicas.

- Note that only read queries can use the load-balanced Client Service.
+ Note that only read queries can use the load-balanced client Service.
Because there is only one primary MySQL server, clients should connect directly to the
- primary MySQL Pod (through its DNS entry within the Headless Service) to execute
+ primary MySQL Pod (through its DNS entry within the headless Service) to execute
writes.
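For example, once the StatefulSet below is running, a write can be sent to the primary through its per-Pod DNS name and a read through the load-balanced Service from a throwaway client Pod. This is only a sketch: the client image, and the assumption that the example server accepts passwordless root connections, are not taken from this page.

```
# Writes must target the primary directly via its stable DNS name, mysql-0.mysql.
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "CREATE DATABASE IF NOT EXISTS test"

# Reads can go through the load-balanced mysql-read Service instead.
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT @@server_id"
```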

- ### StatefulSet
+ ### Create the StatefulSet {#statefulset}

Finally, create the StatefulSet from the following YAML configuration file:
@@ -120,7 +121,7 @@ You can watch the startup progress by running:
kubectl get pods -l app=mysql --watch
```

- After a while, you should see all 3 Pods become Running:
+ After a while, you should see all 3 Pods become `Running`:

```
NAME      READY     STATUS    RESTARTS   AGE
@@ -156,19 +157,18 @@ properties to perform orderly startup of MySQL replication.
### Generating configuration

Before starting any of the containers in the Pod spec, the Pod first runs any
- [Init Containers](/docs/concepts/workloads/pods/init-containers/)
+ [init containers](/docs/concepts/workloads/pods/init-containers/)
in the order defined.

- The first Init Container, named `init-mysql`, generates special MySQL config
+ The first init container, named `init-mysql`, generates special MySQL config
files based on the ordinal index.

The script determines its own ordinal index by extracting it from the end of
the Pod name, which is returned by the `hostname` command.
Then it saves the ordinal (with a numeric offset to avoid reserved values)
into a file called `server-id.cnf` in the MySQL `conf.d` directory.
This translates the unique, stable identity provided by the StatefulSet
- controller into the domain of MySQL server IDs, which require the same
- properties.
+ into the domain of MySQL server IDs, which require the same properties.

The script in the `init-mysql` container also applies either `primary.cnf` or
`replica.cnf` from the ConfigMap by copying the contents into `conf.d`.
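A simplified sketch of that logic, as a shell fragment, might look like the following; the mount paths and the numeric offset are assumptions rather than values taken from this page.

```
# Derive the ordinal index from the Pod's hostname, e.g. "mysql-2" -> 2.
[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}

# Write server-id.cnf, offsetting the ordinal so the server-id is never 0,
# which MySQL reserves.
mkdir -p /mnt/conf.d
printf '[mysqld]\nserver-id=%d\n' $((100 + ordinal)) > /mnt/conf.d/server-id.cnf

# Ordinal 0 becomes the primary; everyone else gets the replica config.
if [[ $ordinal -eq 0 ]]; then
  cp /mnt/config-map/primary.cnf /mnt/conf.d/
else
  cp /mnt/config-map/replica.cnf /mnt/conf.d/
fi
```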
@@ -188,7 +188,7 @@ logs might not go all the way back to the beginning of time.
These conservative assumptions are the key to allow a running StatefulSet
to scale up and down over time, rather than being fixed at its initial size.

- The second Init Container, named `clone-mysql`, performs a clone operation on
+ The second init container, named `clone-mysql`, performs a clone operation on
a replica Pod the first time it starts up on an empty PersistentVolume.
That means it copies all existing data from another running Pod,
so its local state is consistent enough to begin replicating from the primary server.
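Conceptually, the clone step amounts to something like the shell fragment below. The port number, the use of `ncat`, `xbstream`, and `xtrabackup`, and the data path are assumptions about how the example manifest wires this together, not details stated above.

```
# Skip the clone on the primary (ordinal 0) or if data already exists.
[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
[[ -d /var/lib/mysql/mysql ]] && exit 0

# Stream a backup from the previous peer, then prepare it for use by mysqld.
ncat --recv-only mysql-$((ordinal - 1)).mysql 3307 | xbstream -x -C /var/lib/mysql
xtrabackup --prepare --target-dir=/var/lib/mysql
```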
@@ -203,7 +203,7 @@ Ready before starting Pod `N+1`.

### Starting replication

- After the Init Containers complete successfully, the regular containers run.
+ After the init containers complete successfully, the regular containers run.
The MySQL Pods consist of a `mysql` container that runs the actual `mysqld`
server, and an `xtrabackup` container that acts as a
[sidecar](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns).
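Once the Pods are up, you can check that a replica is following the primary by querying its replication status. The Pod name and the assumption of passwordless root access are illustrative here, and `SHOW SLAVE STATUS` is the pre-8.0 syntax.

```
# Show replication status on the first replica.
kubectl exec mysql-1 -c mysql -- mysql -e "SHOW SLAVE STATUS\G"
```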
@@ -292,13 +292,13 @@ endpoint might be selected upon each connection attempt:
You can press **Ctrl+C** when you want to stop the loop, but it's useful to keep
it running in another window so you can see the effects of the following steps.

- ## Simulating Pod and Node downtime
+ ## Simulate Pod and Node failure {#simulate-pod-and-node-downtime}

To demonstrate the increased availability of reading from the pool of replicas
instead of a single server, keep the `SELECT @@server_id` loop from above
running while you force a Pod out of the Ready state.
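If the loop is no longer running, a sketch of it looks like this; the client image and flags are assumptions, and the earlier part of this page shows the exact command it uses.

```
# Repeatedly query whichever endpoint mysql-read selects, once per second.
kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never -- \
  bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
```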

- ### Break the Readiness Probe
+ ### Break the Readiness probe

The [readiness probe](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes)
for the `mysql` container runs the command `mysql -h 127.0.0.1 -e 'SELECT 1'`
@@ -372,13 +372,13 @@ NAME READY STATUS RESTARTS AGE IP NODE
mysql-2   2/2       Running   0          15m   10.244.5.27   kubernetes-node-9l2t
```

- Then drain the Node by running the following command, which cordons it so
+ Then, drain the Node by running the following command, which cordons it so
no new Pods may schedule there, and then evicts any existing Pods.
Replace `<node-name>` with the name of the Node you found in the last step.

{{< caution >}}
- Draining a Node can impact other workloads and applications that are
- running on the same node, so only do the following step in a test
+ Draining a Node can impact other workloads and applications
+ running on the same node. Only perform the following step in a test
cluster.
{{< /caution >}}
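A drain invocation typically looks like the following; the exact flags are assumptions and depend on your `kubectl` version, so check `kubectl drain --help`.

```
# Cordon the Node and evict its Pods (flags may vary by kubectl version).
kubectl drain <node-name> --force --delete-emptydir-data --ignore-daemonsets

# When you are done experimenting, allow Pods to schedule on the Node again.
kubectl uncordon <node-name>
```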