@@ -255,10 +255,11 @@ up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
Deployment also ensures that only a certain number of Pods are created above the desired number of Pods.
By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).
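
These percentages come from the Deployment's rolling update strategy. Written out explicitly, the defaults correspond to a spec fragment like the following sketch (omitting these fields yields the same behavior):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # at most 25% extra Pods above the desired count during a rollout
      maxUnavailable: 25%  # at most 25% of the desired Pods may be unavailable
```
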
- For example, if you look at the above Deployment closely, you will see that it first created a new Pod,
- then deleted some old Pods, and created new ones. It does not kill old Pods until a sufficient number of
+ For example, if you look at the above Deployment closely, you will see that it first creates a new Pod,
+ then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of
new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed.
- It makes sure that at least 2 Pods are available and that at max 4 Pods in total are available.
+ It makes sure that at least 3 Pods are available and that at max 4 Pods in total are available. In case of
+ a Deployment with 4 replicas, the number of Pods would be between 3 and 5.
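
The exact bounds follow from how the percentages are rounded: the absolute number for `maxSurge` is rounded up, while `maxUnavailable` is rounded down. A rough worked calculation for the two cases above:

```
replicas = 3:  maxSurge       = ceil(3 * 25%)  = 1  ->  at most  3 + 1 = 4 Pods in total
               maxUnavailable = floor(3 * 25%) = 0  ->  at least 3 - 0 = 3 Pods available
replicas = 4:  maxSurge       = ceil(4 * 25%)  = 1  ->  at most  4 + 1 = 5 Pods in total
               maxUnavailable = floor(4 * 25%) = 1  ->  at least 4 - 1 = 3 Pods available
```
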
* Get details of your Deployment:
```shell
@@ -305,10 +306,17 @@ up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
```
Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211)
and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet
- (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at
- least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down
- the new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas
- in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.
+ (nginx-deployment-1564180365) and scaled it up to 1 and waited for it to come up. Then it scaled down the old ReplicaSet
+ to 2 and scaled up the new ReplicaSet to 2 so that at least 3 Pods were available and at most 4 Pods were created at all times.
+ It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy.
+ Finally, you'll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.
+
+ {{< note >}}
+ Kubernetes doesn't count terminating Pods when calculating the number of `availableReplicas`, which must be between
+ `replicas - maxUnavailable` and `replicas + maxSurge`. As a result, you might notice that there are more Pods than
+ expected during a rollout, and that the total resources consumed by the Deployment are more than `replicas + maxSurge`
+ until the `terminationGracePeriodSeconds` of the terminating Pods expires.
+ {{< /note >}}
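
If you want to watch this interleaved scaling as it happens, one option is to follow the ReplicaSets while a rollout is in progress. A minimal sketch, assuming the example Deployment above with a container named `nginx` (the image tag is only an illustrative value):

```shell
# Trigger a new rollout, for example by changing the container image
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

# In a second terminal, watch both ReplicaSets scale in opposite directions
kubectl get rs --watch
```
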
### Rollover (aka multiple updates in-flight)