The Pending state of a Pod is usually caused by insufficient resources, for example:

- The `StorageClass` of the PVC used by PD, TiKV, TiFlash, Backup, and Restore Pods does not exist or the PV is insufficient.
- No nodes in the Kubernetes cluster can satisfy the CPU or memory resources requested by the Pod.
- The certificates used by TiDB or TiProxy components are not configured.
You can check the specific reason for the Pending state by running the `kubectl describe pod` command:

```shell
kubectl describe po -n ${namespace} ${pod_name}
```
### CPU or memory resources are insufficient

If the CPU or memory resources are insufficient, you can lower the CPU or memory resources requested by the corresponding component for scheduling, or add a new Kubernetes node.
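The scheduler compares a container's resource *requests* against each node's free capacity. As a generic Kubernetes-level sketch (the exact field path inside the TiDB Operator custom resources may differ), the values to lower are of this shape:

```yaml
# Generic Pod-level sketch; the field path in the TiDB Operator CRs may differ.
resources:
  requests:
    cpu: "2"        # lower this if no node has 2 free CPUs
    memory: 4Gi     # lower this if no node has 4 GiB of free memory
  limits:
    cpu: "4"
    memory: 8Gi
```

Only `requests` affect scheduling; `limits` cap runtime usage.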
### StorageClass of the PVC does not exist

If the `StorageClass` of the PVC cannot be found, take the following steps:

1. Get the available `StorageClass` in the cluster:

    ```shell
    kubectl get storageclass
    ```
2. Change `storageClassName` to the name of a `StorageClass` available in the cluster.

3. Update the configuration file:

    If you want to run a backup or restore task, first run `kubectl delete bk ${backup_name} -n ${namespace}` to delete the old backup/restore task, and then run `kubectl apply -f backup.yaml` to create a new one.

4. Delete the corresponding PVCs:

    ```shell
    kubectl delete pvc -n ${namespace} ${pvc_name}
    ```
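For reference, `storageClassName` is the field on the PVC spec that must name an existing class. A minimal PVC of the shape involved might look like this (the object name and class name are illustrative; the Operator normally generates these PVCs, so you usually change the class in the component spec rather than editing a PVC directly):

```yaml
# Illustrative PVC; names are assumptions, not values from your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tikv-basic-tikv-0
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage   # must match a name from `kubectl get storageclass`
  resources:
    requests:
      storage: 100Gi
```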
### Insufficient available PVs

If a `StorageClass` exists in the cluster but the available PVs are insufficient, you need to add PV resources accordingly.
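How you add PVs depends on the provisioner. For statically provisioned local volumes, a PV of roughly this shape would add capacity (the PV name, node name, path, and class name are assumptions to adapt to your cluster):

```yaml
# Illustrative statically provisioned local PV; adapt names and paths.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node-1"]
```

With a dynamic provisioner, no manual PV objects are needed; adding disk capacity to the backing storage is enough.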
## The Pod is in the `CrashLoopBackOff` state

A Pod in the `CrashLoopBackOff` state means that the container in the Pod repeatedly aborts (in a loop of abort, restart by `kubelet`, abort again). There are many potential causes of `CrashLoopBackOff`.
### View the log of the current container

```shell
kubectl -n ${namespace} logs -f ${pod_name}
```
### View the log when the container was last restarted

```shell
kubectl -n ${namespace} logs -p ${pod_name}
```
After checking the error messages in the log, you can refer to [Cannot start `tidb-server`](https://docs.pingcap.com/tidb/stable/troubleshoot-tidb-cluster#cannot-start-tidb-server), [Cannot start `tikv-server`](https://docs.pingcap.com/tidb/stable/troubleshoot-tidb-cluster#cannot-start-tikv-server), and [Cannot start `pd-server`](https://docs.pingcap.com/tidb/stable/troubleshoot-tidb-cluster#cannot-start-pd-server) for further troubleshooting.

### `ulimit` is not large enough

TiKV might fail to start when `ulimit` is not large enough. In this case, you can modify the `/etc/security/limits.conf` file of the Kubernetes node to increase the `ulimit`:
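For example, to raise the maximum number of open files for the root user, you might add lines like the following (the `1000000` value is a commonly used setting, not a requirement):

```
root        soft        nofile        1000000
root        hard        nofile        1000000
```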
# Enable TLS between TiDB Components

This document describes how to enable Transport Layer Security (TLS) between components of the TiDB cluster.

Certificates can be issued in multiple methods. This document describes how to use the `cert-manager` system to issue certificates for the TiDB cluster.

If you need to renew the existing TLS certificate, refer to [Renew and Replace the TLS Certificate](renew-tls-certificate.md).

## Step 1. Generate certificates for components of the TiDB cluster

This section describes how to issue certificates using `cert-manager`.
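With `cert-manager` installed, certificates are requested declaratively through `Certificate` objects. The following is a sketch only (the object name, namespace, DNS names, and Issuer name are illustrative assumptions, with `basic` standing in for the PD group name); because TiDB components use the same certificate set for both server and client connections, both usages must be listed:

```yaml
# Illustrative Certificate for a PD group; adapt names to your deployment.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: basic-pd-cluster-secret
  namespace: tidb-cluster
spec:
  secretName: basic-pd-cluster-secret
  duration: 8760h      # 1 year
  renewBefore: 360h
  commonName: TiDB
  usages:
    - server auth
    - client auth
  dnsNames:
    - "basic-pd"
    - "basic-pd.tidb-cluster"
    - "basic-pd.tidb-cluster.svc"
    - "basic-pd-peer"
    - "basic-pd-peer.tidb-cluster"
    - "basic-pd-peer.tidb-cluster.svc"
    - "*.basic-pd-peer"
    - "*.basic-pd-peer.tidb-cluster"
    - "*.basic-pd-peer.tidb-cluster.svc"
  ipAddresses:
    - 127.0.0.1
    - ::1
  issuerRef:
    name: tidb-cluster-tls-issuer   # illustrative Issuer name
    kind: Issuer
    group: cert-manager.io
```

cert-manager writes the issued certificate and key into the Secret named by `secretName`, which the component Pods then mount.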