content/en/docs/concepts/workloads/controllers/replicationcontroller.md (2 additions, 7 deletions)
@@ -220,9 +220,6 @@ Ideally, the rolling update controller would take application readiness into acc
The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
- Rolling update is implemented in the client tool
- [`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples.
-
### Multiple release tracks
In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.
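A minimal sketch of selecting pods by release track, as described above (the `track` label and the `stable`/`canary` values are illustrative assumptions, not names from the original docs):

```bash
# Hypothetical labels: each track's ReplicationController stamps its pods
# with a distinct "track" label in addition to a shared "app" label.
kubectl get pods -l "app=guestbook,track=stable"   # pods on the stable track
kubectl get pods -l "app=guestbook,track=canary"   # pods on the canary track
kubectl get pods -l "app=guestbook"                # both tracks at once, as a service selector would see them
```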
@@ -246,7 +243,7 @@ The ReplicationController simply ensures that the desired number of pods matches
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
- The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
+ The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
## API Object
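As a rough illustration of the kubectl "macro" operations named in the paragraph above (a sketch only; the `nginx` image and `frontend` controller name are assumptions):

```bash
kubectl run nginx --image=nginx          # create a workload from an image
kubectl scale rc frontend --replicas=3   # resize a ReplicationController by updating its replicas field
```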
@@ -266,9 +263,7 @@ Note that we recommend using Deployments instead of directly using Replica Sets,
### Deployment (Recommended)
- [`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods
- in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality,
- because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features.
+ [`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods. Deployments are recommended if you want rolling update functionality because they are declarative, server-side, and have additional features.
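A hedged sketch of the declarative, server-side workflow the new text recommends (the manifest filename, Deployment name, and image tags are assumptions):

```bash
kubectl apply -f frontend-deployment.yaml           # create or update the Deployment declaratively
kubectl set image deployment/frontend www=image:v2  # the server then performs the rolling update
kubectl rollout status deployment/frontend          # watch the rollout progress
```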
As of version 1.11, `rolling-update` has been deprecated (see [CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md)); use `rollout` instead.
```bash
kubectl set image deployment/frontend www=image:v2 # Rolling update "www" containers of "frontend" deployment, updating the image
@@ -215,12 +214,6 @@ kubectl rollout status -w deployment/frontend # Watch rolling
kubectl rollout restart deployment/frontend # Rolling restart of the "frontend" deployment
- # deprecated starting version 1.11
- kubectl rolling-update frontend-v1 -f frontend-v2.json # (deprecated) Rolling update pods of frontend-v1
- kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 # (deprecated) Change the name of the resource and update the image
- kubectl rolling-update frontend --image=image:v2 # (deprecated) Update the pods image of frontend
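For reference, the deprecated `rolling-update` examples removed above map onto `rollout`-era commands roughly as follows (a sketch; the `frontend` and `www` names are carried over from the examples):

```bash
kubectl set image deployment/frontend www=image:v2  # stands in for rolling-update --image
kubectl rollout undo deployment/frontend            # stands in for rolling-update --rollback
kubectl rollout status deployment/frontend          # observe progress, as rolling-update did inline
```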
`port-forward` | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | Forward one or more local ports to a pod.
`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Run a proxy to the Kubernetes API server.
`replace` | `kubectl replace -f FILENAME` | Replace a resource from a file or stdin.
- `rolling-update` | <code>kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags]</code> | Perform a rolling update by gradually replacing the specified replication controller and its pods.
`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=server|client|none] [--overrides=inline-json] [flags]` | Run a specified image on the cluster.
`scale` | <code>kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version][--current-replicas=count][flags]</code> | Update the size of the specified replication controller.
`version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server.
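A few usage examples for commands in the table above (pod and file names are assumptions):

```bash
kubectl port-forward mypod 8080:80   # forward local port 8080 to port 80 of "mypod"
kubectl proxy --port=8001            # serve the Kubernetes API on localhost:8001
kubectl replace -f ./pod.json        # replace a resource from a file
kubectl version --client             # show only the client version
```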
content/en/docs/tasks/run-application/horizontal-pod-autoscale.md (2 additions, 3 deletions)
@@ -199,13 +199,12 @@ The detailed documentation of `kubectl autoscale` can be found [here](/docs/refe
## Autoscaling during rolling update
- Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) by managing replication controllers directly,
- or by using the deployment object, which manages the underlying replica sets for you.
+ Currently in Kubernetes, it is possible to perform a rolling update by using the deployment object, which manages the underlying replica sets for you.
Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets.
Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
- i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
+ i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update.
The reason this doesn't work is that when rolling update creates a new replication controller,
the Horizontal Pod Autoscaler will not be bound to the new replication controller.
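A minimal sketch of the supported approach, binding a Horizontal Pod Autoscaler to a Deployment rather than a replication controller (the `frontend` name and the thresholds are assumptions):

```bash
# The HPA sets the Deployment's size; the Deployment manages its replica sets.
kubectl autoscale deployment frontend --min=2 --max=10 --cpu-percent=80
```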