Commit be6c0c3

Author: Rajesh Deshpande

Removing references of kubectl rolling-update command (#19449)

* Removing rolling-update command details
* Removing references to kubectl rolling-update command
* Removing rolling-update references

1 parent e937a06 commit be6c0c3

File tree

6 files changed: +5 −278 lines changed

content/en/docs/concepts/workloads/controllers/deployment.md

Lines changed: 1 addition & 9 deletions

@@ -1076,7 +1076,7 @@ All existing Pods are killed before new ones are created when `.spec.strategy.ty
 
 #### Rolling Update Deployment
 
-The Deployment updates Pods in a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/)
+The Deployment updates Pods in a rolling update
 fashion when `.spec.strategy.type==RollingUpdate`. You can specify `maxUnavailable` and `maxSurge` to control
 the rolling update process.
 

@@ -1143,12 +1143,4 @@ a paused Deployment and one that is not paused, is that any changes into the Pod
 Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when
 it is created.
 
-## Alternative to Deployments
-
-### kubectl rolling-update
-
-[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) updates Pods and ReplicationControllers
-in a similar fashion. But Deployments are recommended, since they are declarative, server side, and have
-additional features, such as rolling back to any previous revision even after the rolling update is done.
-
 {{% /capture %}}

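For context, the RollingUpdate behavior the surviving text describes is configured declaratively on the Deployment itself. A minimal illustrative sketch follows; the name `frontend`, the labels, and the image are hypothetical, while `strategy.type`, `maxUnavailable`, and `maxSurge` are the fields named in the documentation above:

```yaml
# Illustrative sketch; Deployment name, labels, and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  strategy:
    type: RollingUpdate        # the strategy discussed in the changed section
    rollingUpdate:
      maxUnavailable: 25%      # at most this fraction of Pods may be down during the update
      maxSurge: 25%            # at most this fraction of extra Pods may be created
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: www
        image: image:v1
```

Updating `image` in this manifest (or via `kubectl set image`) triggers the server-side rolling update, replacing the client-driven `kubectl rolling-update` flow removed by this commit.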
content/en/docs/concepts/workloads/controllers/replicationcontroller.md

Lines changed: 2 additions & 7 deletions

@@ -220,9 +220,6 @@ Ideally, the rolling update controller would take application readiness into acc
 
 The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
 
-Rolling update is implemented in the client tool
-[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples.
-
 ### Multiple release tracks
 
 In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.

@@ -246,7 +243,7 @@ The ReplicationController simply ensures that the desired number of pods matches
 
 The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
 
-The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
+The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
 
 
 ## API Object

@@ -266,9 +263,7 @@ Note that we recommend using Deployments instead of directly using Replica Sets,
 
 ### Deployment (Recommended)
 
-[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods
-in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality,
-because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features.
+[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods. Deployments are recommended if you want this rolling update functionality because they are declarative, server-side, and have additional features.
 
 ### Bare Pods
 

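The "differentiating label" and "multiple release tracks" passages retained above can be sketched as two ReplicationControllers that share an `app` label but differ on a track label and image tag. All names, labels, and images here are hypothetical:

```yaml
# Illustrative sketch of the release-track pattern; names are hypothetical.
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-stable
spec:
  replicas: 3
  selector:
    app: myapp
    track: stable          # differentiating label, as the text describes
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
      - name: myapp
        image: myapp:v1    # the image tag typically differentiates releases
```

A canary controller would be identical except for `track: canary` and a newer image tag; a Service selecting only `app: myapp` would then span both tracks.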
content/en/docs/reference/kubectl/cheatsheet.md

Lines changed: 0 additions & 7 deletions

@@ -204,7 +204,6 @@ kubectl diff -f ./my-manifest.yaml
 
 ## Updating Resources
 
-As of version 1.11 `rolling-update` have been deprecated (see [CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md)), use `rollout` instead.
 
 ```bash
 kubectl set image deployment/frontend www=image:v2 # Rolling update "www" containers of "frontend" deployment, updating the image

@@ -215,12 +214,6 @@ kubectl rollout status -w deployment/frontend # Watch rolling
 kubectl rollout restart deployment/frontend # Rolling restart of the "frontend" deployment
 
 
-# deprecated starting version 1.11
-kubectl rolling-update frontend-v1 -f frontend-v2.json # (deprecated) Rolling update pods of frontend-v1
-kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 # (deprecated) Change the name of the resource and update the image
-kubectl rolling-update frontend --image=image:v2 # (deprecated) Update the pods image of frontend
-kubectl rolling-update frontend-v1 frontend-v2 --rollback # (deprecated) Abort existing rollout in progress
-
 cat pod.json | kubectl replace -f - # Replace a pod based on the JSON passed into std
 
 # Force replace, delete and then re-create the resource. Will cause a service outage.

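The `kubectl set image` line retained in the cheatsheet is the rollout-based replacement for the removed `rolling-update` commands: it patches the container image in the Deployment's Pod template, which triggers a server-side rolling update. A sketch of the fragment it modifies, using the `frontend`/`www` names from the cheatsheet's own examples:

```yaml
# Sketch of the Deployment fragment that
# `kubectl set image deployment/frontend www=image:v2` patches.
spec:
  template:
    spec:
      containers:
      - name: www          # container name referenced in the command
        image: image:v2    # the field `kubectl set image` updates
```

Changing this field through any channel (`set image`, `edit`, `apply`) produces the same rollout, which is why the imperative `rolling-update` entries could be dropped.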
content/en/docs/reference/kubectl/overview.md

Lines changed: 0 additions & 1 deletion

@@ -91,7 +91,6 @@ Operation | Syntax | Description
 `port-forward` | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | Forward one or more local ports to a pod.
 `proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Run a proxy to the Kubernetes API server.
 `replace` | `kubectl replace -f FILENAME` | Replace a resource from a file or stdin.
-`rolling-update` | <code>kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE &#124; -f NEW_CONTROLLER_SPEC) [flags]</code> | Perform a rolling update by gradually replacing the specified replication controller and its pods.
 `run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=server|client|none] [--overrides=inline-json] [flags]` | Run a specified image on the cluster.
 `scale` | <code>kubectl scale (-f FILENAME &#124; TYPE NAME &#124; TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags]</code> | Update the size of the specified replication controller.
 `version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server.

content/en/docs/tasks/run-application/horizontal-pod-autoscale.md

Lines changed: 2 additions & 3 deletions

@@ -199,13 +199,12 @@ The detailed documentation of `kubectl autoscale` can be found [here](/docs/refe
 
 ## Autoscaling during rolling update
 
-Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) by managing replication controllers directly,
-or by using the deployment object, which manages the underlying replica sets for you.
+Currently in Kubernetes, it is possible to perform a rolling update by using the deployment object, which manages the underlying replica sets for you.
 Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
 it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets.
 
 Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
-i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
+i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update.
 The reason this doesn't work is that when rolling update creates a new replication controller,
 the Horizontal Pod Autoscaler will not be bound to the new replication controller.

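The retained text says the Horizontal Pod Autoscaler must be bound to the deployment object, not to a replication controller. A minimal sketch of such a binding, using the stable `autoscaling/v1` API; the name `frontend` and the numeric values are hypothetical:

```yaml
# Illustrative sketch; the Deployment name "frontend" is hypothetical.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment       # bind to the Deployment, not a ReplicationController
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

Because the HPA scales the Deployment, rolling updates that create and delete the underlying replica sets do not break the binding, which is the failure mode the removed text described for replication controllers.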
content/en/docs/tasks/run-application/rolling-update-replication-controller.md

Lines changed: 0 additions & 251 deletions
This file was deleted.
