Commit 4d50294

Merge branch 'kubernetes:main' into ADITYADAS1999-patch-1
2 parents 13e5937 + e6987bd commit 4d50294

77 files changed: +48935 -3436 lines changed

README-hi.md

Lines changed: 2 additions & 2 deletions
@@ -3,11 +3,11 @@
 [![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website)
 [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
 
-स्वागत हे! इस रिपॉजिटरी में [कुबरनेट्स वेबसाइट और दस्तावेज](https://kubernetes.io/) बनाने के लिए आवश्यक सभी संपत्तियां हैं। हम बहुत खुश हैं कि आप योगदान करना चाहते हैं!
+स्वागत है! इस रिपॉजिटरी में [कुबरनेट्स वेबसाइट और दस्तावेज़](https://kubernetes.io/) बनाने के लिए आवश्यक सभी संपत्तियां हैं। हम बहुत खुश हैं कि आप योगदान करना चाहते हैं!
 
 ## डॉक्स में योगदान देना
 
-आप अपने GitHub खाते में इस रिपॉजिटरी की एक copy बनाने के लिए स्क्रीन के ऊपरी-दाएँ क्षेत्र में **Fork** बटन पर क्लिक करें। इस copy को *Fork* कहा जाता है। अपने fork में परिवर्तन करने के बाद जब आप उनको हमारे पास भेजने के लिए तैयार हों, तो अपने fork पर जाएं और हमें इसके बारे में बताने के लिए एक नया pull request बनाएं।
+आप अपने GitHub खाते में इस रिपॉजिटरी की एक copy बनाने के लिए स्क्रीन के ऊपरी-दाएँ क्षेत्र में **Fork** बटन पर क्लिक करें। इस copy को *Fork* कहा जाता है। अपने fork में परिवर्तन करने के बाद जब आप उनको हमारे पास भेजने के लिए तैयार हों, तो अपने fork पर जाएँ और हमें इसके बारे में बताने के लिए एक नया pull request बनाएं।
 
 एक बार जब आपका pull request बन जाता है, तो एक कुबरनेट्स समीक्षक स्पष्ट, कार्रवाई योग्य प्रतिक्रिया प्रदान करने की जिम्मेदारी लेगा। pull request के मालिक के रूप में, **यह आपकी जिम्मेदारी है कि आप कुबरनेट्स समीक्षक द्वारा प्रदान की गई प्रतिक्रिया को संबोधित करने के लिए अपने pull request को संशोधित करें।**
 
content/en/blog/_posts/2022-12-15-dynamic-resource-allocation-alpha/index.md

Lines changed: 24 additions & 24 deletions
@@ -5,7 +5,7 @@ date: 2022-12-15
 slug: dynamic-resource-allocation
 ---
 
-**Authors:** Patrick Ohly (Intel), Kevin Klues (NVIDIA)
+**Authors:** Patrick Ohly (Intel), Kevin Klues (NVIDIA)
 
 Dynamic resource allocation is a new API for requesting resources. It is a
 generalization of the persistent volumes API for generic resources, making it possible to:
@@ -19,11 +19,11 @@ Third-party resource drivers are responsible for interpreting these parameters
 as well as tracking and allocating resources as requests come in.
 
 Dynamic resource allocation is an *alpha feature* and only enabled when the
-`DynamicResourceAllocation` [feature
-gate](/docs/reference/command-line-tools-reference/feature-gates/) and the
-`resource.k8s.io/v1alpha1` {{< glossary_tooltip text="API group"
-term_id="api-group" >}} are enabled. For details, see the
-`--feature-gates` and `--runtime-config` [kube-apiserver
+`DynamicResourceAllocation`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and the
+`resource.k8s.io/v1alpha1`
+{{< glossary_tooltip text="API group" term_id="api-group" >}} are enabled. For details,
+see the `--feature-gates` and `--runtime-config` [kube-apiserver
 parameters](/docs/reference/command-line-tools-reference/kube-apiserver/).
 The kube-scheduler, kube-controller-manager and kubelet components all need
 the feature gate enabled as well.
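A minimal sketch of the configuration described in this hunk (the exact invocation depends on how the control plane is deployed; the trailing `...` stands for whatever other flags your setup already passes):

```console
$ kube-apiserver ... \
    --feature-gates=DynamicResourceAllocation=true \
    --runtime-config=resource.k8s.io/v1alpha1=true
# kube-scheduler, kube-controller-manager and kubelet also need
# --feature-gates=DynamicResourceAllocation=true
```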
@@ -39,8 +39,8 @@ for end-to-end testing, but also can be run manually. See
 
 ## API
 
-The new `resource.k8s.io/v1alpha1` {{< glossary_tooltip text="API group"
-term_id="api-group" >}} provides four new types:
+The new `resource.k8s.io/v1alpha1` {{< glossary_tooltip text="API group" term_id="api-group" >}}
+provides four new types:
 
 ResourceClass
 : Defines which resource driver handles a certain kind of
@@ -77,7 +77,7 @@ this `.spec` (for example, inside a Deployment or StatefulSet) share the same
 ResourceClaim instance. When referencing a ResourceClaimTemplate, each Pod gets
 its own ResourceClaim instance.
 
-For a container defined within a Pod, the `resources.claims` list
+For a container defined within a Pod, the `resources.claims` list
 defines whether that container gets
 access to these resource instances, which makes it possible to share resources
 between one or more containers inside the same Pod. For example, an init container could
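To make the `resources.claims` wiring concrete, a minimal sketch of a Pod that declares a claim and grants one container access to it (the names and the `gpu-template` ResourceClaimTemplate are illustrative, not from this commit):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-claim            # illustrative name
spec:
  resourceClaims:
  - name: gpu                     # claim name, referenced below
    source:
      resourceClaimTemplateName: gpu-template   # assumed ResourceClaimTemplate
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      claims:
      - name: gpu                 # gives this container access to the claim
```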
@@ -89,7 +89,7 @@ will get created for this Pod and each container gets access to one of them.
 Assuming a resource driver called `resource-driver.example.com` was installed
 together with the following resource class:
 
-```
+```yaml
 apiVersion: resource.k8s.io/v1alpha1
 kind: ResourceClass
 name: resource.example.com
@@ -151,8 +151,7 @@ spec:
 
 In contrast to native resources (such as CPU or RAM) and
 [extended resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources)
-(managed by a
-device plugin, advertised by kubelet), the scheduler has no knowledge of what
+(managed by a device plugin, advertised by kubelet), the scheduler has no knowledge of what
 dynamic resources are available in a cluster or how they could be split up to
 satisfy the requirements of a specific ResourceClaim. Resource drivers are
 responsible for that. Drivers mark ResourceClaims as _allocated_ once resources
@@ -227,16 +226,16 @@ It is up to the driver developer to decide how these two components
 communicate. The [KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md) outlines an [approach using
 CRDs](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/3063-dynamic-resource-allocation#implementing-a-plugin-for-node-resources).
 
-Within SIG Node, we also plan to provide a complete [example
-driver](https://github.com/kubernetes-sigs/dra-example-driver) that can serve
+Within SIG Node, we also plan to provide a complete
+[example driver](https://github.com/kubernetes-sigs/dra-example-driver) that can serve
 as a template for other drivers.
 
 ## Running the test driver
 
 The following steps bring up a local, one-node cluster directly from the
 Kubernetes source code. As a prerequisite, your cluster must have nodes with a container
 runtime that supports the
-[Container Device Interface](https://github.com/container-orchestrated-devices/container-device-interface)
+[Container Device Interface](https://github.com/container-orchestrated-devices/container-device-interface)
 (CDI). For example, you can run CRI-O [v1.23.2](https://github.com/cri-o/cri-o/releases/tag/v1.23.2) or later.
 Once containerd v1.7.0 is released, we expect that you can run that or any later version.
 In the example below, we use CRI-O.
@@ -259,15 +258,16 @@ $ RUNTIME_CONFIG=resource.k8s.io/v1alpha1 \
 PATH=$(pwd)/third_party/etcd:$PATH \
 ./hack/local-up-cluster.sh -O
 ...
+```
+
 To start using your cluster, you can open up another terminal/tab and run:
 
-export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
-...
+```console
+$ export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
 ```
 
-Once the cluster is up, in another
-terminal run the test driver controller. `KUBECONFIG` must be set for all of
-the following commands.
+Once the cluster is up, in another terminal run the test driver controller.
+`KUBECONFIG` must be set for all of the following commands.
 
 ```console
 $ go run ./test/e2e/dra/test-driver --feature-gates ContextualLogging=true -v=5 controller
@@ -319,7 +319,7 @@ user_a='b'
 ## Next steps
 
 - See the
-[Dynamic Resource Allocation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)
+[Dynamic Resource Allocation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)
 KEP for more information on the design.
 - Read [Dynamic Resource Allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/)
 in the official Kubernetes documentation.
@@ -328,6 +328,6 @@ user_a='b'
 and / or the [CNCF Container Orchestrated Device Working Group](https://github.com/cncf/tag-runtime/blob/master/wg/COD.md).
 - You can view or comment on the [project board](https://github.com/orgs/kubernetes/projects/95/views/1)
 for dynamic resource allocation.
-- In order to move this feature towards beta, we need feedback from hardware
-vendors, so here's a call to action: try out this feature, consider how it can help
-with problems that your users are having, and write resource drivers…
+- In order to move this feature towards beta, we need feedback from hardware
+vendors, so here's a call to action: try out this feature, consider how it can help
+with problems that your users are having, and write resource drivers…

content/en/blog/_posts/2023-08-15-kubernetes-1.28-blog.md

Lines changed: 37 additions & 17 deletions
@@ -26,19 +26,37 @@ Much like a garden, our release has ever-changing growth, challenges and opportu
 # What's New (Major Themes)
 
 ## Changes to supported skew between control plane and node versions
-This enables testing and expanding the supported skew between core node and control plane components by one version from n-2 to n-3, so that node components (kubelet and kube-proxy) for the oldest supported minor version work with control plane components (kube-apiserver, kube-scheduler, kube-controller-manager, cloud-controller-manager) for the newest supported minor version.
 
-This is valuable for end users as control plane upgrade will be a little faster than node upgrade, which are almost always going to be the longer with running workloads.
+Kubernetes v1.28 expands the supported skew between core node and control plane
+components by one minor version, from _n-2_ to _n-3_, so that node components
+(kubelet and kube-proxy) for the oldest supported minor version work with
+control plane components (kube-apiserver, kube-scheduler, kube-controller-manager,
+cloud-controller-manager) for the newest supported minor version.
 
-The Kubernetes yearly support period already makes annual upgrades possible. Users can upgrade to the latest patch versions to pick up security fixes and do 3 sequential minor version upgrades once a year to "catch up" to the latest supported minor version.
+Some cluster operators avoid node maintenance and especially changes to node
+behavior, because nodes are where the workloads run. For minor version upgrades
+to a kubelet, the supported process includes draining that node, and hence
+disruption to any Pods that had been executing there. For Kubernetes end users
+with very long running workloads, and where Pods should stay running wherever
+possible, reducing the time lost to node maintenance is a benefit.
 
-However, since the tested/supported skew between nodes and control planes is currently limited to 2 versions, a 3-version upgrade would have to update nodes twice to stay within the supported skew.
+The Kubernetes yearly support period already made annual upgrades possible. Users can
+upgrade to the latest patch versions to pick up security fixes and do 3 sequential
+minor version upgrades once a year to "catch up" to the latest supported minor version.
+
+Previously, to stay within the supported skew, a cluster operator planning an annual
+upgrade would have needed to upgrade their nodes twice (perhaps only hours apart). Now,
+with Kubernetes v1.28, you have the option of making a minor version upgrade to
+nodes just once in each calendar year and still staying within upstream support.
+
+If you'd like to stay current and upgrade your clusters more often, that's
+fine and is still completely supported.
 
 ## Generally available: recovery from non-graceful node shutdown
 
-If a node shuts down down unexpectedly or ends up in a non-recoverable state (perhaps due to hardware failure or unresponsive OS), Kubernetes allows you to clean up afterwards and allow stateful workloads to restart on a different node. For Kubernetes v1.28, that's now a stable feature.
+If a node shuts down unexpectedly or ends up in a non-recoverable state (perhaps due to hardware failure or unresponsive OS), Kubernetes allows you to clean up afterward and allow stateful workloads to restart on a different node. For Kubernetes v1.28, that's now a stable feature.
 
-This allows stateful workloads to failover to a different node successfully after the original node is shut down or in a non-recoverable state, such as the hardware failure or broken OS.
+This allows stateful workloads to fail over to a different node successfully after the original node is shut down or in a non-recoverable state, such as the hardware failure or broken OS.
 
 Versions of Kubernetes earlier than v1.20 lacked handling for node shutdown on Linux, the kubelet integrates with systemd
 and implements graceful node shutdown (beta, and enabled by default). However, even an intentional
@@ -136,22 +154,24 @@ CDI provides a standardized way of injecting complex devices into a container (i
 ## API awareness of sidecar containers (alpha) {#sidecar-init-containers}
 
 Kubernetes 1.28 introduces an alpha `restartPolicy` field for [init containers](https://github.com/kubernetes/website/blob/main/content/en/docs/concepts/workloads/pods/init-containers.md),
-and uses that to indicate when an init container is also a _sidecar container_. The will start init containers with `restartPolicy: Always` in the order they are defined, along with other init containers. Instead of waiting for that sidecar container to complete before starting the main container(s) for the Pod, the kubelet only waits for
-the sidecar init container to have started.
-
-The condition for startup completion will be that the startup probe succeeded (or if no startup probe is defined) and postStart handler is completed. This condition is represented with the field Started of ContainerStatus type. See the section "Pod startup completed condition" for considerations on picking this signal.
+and uses that to indicate when an init container is also a _sidecar container_.
+The kubelet will start init containers with `restartPolicy: Always` in the order
+they are defined, along with other init containers.
+Instead of waiting for that sidecar container to complete before starting the main
+container(s) for the Pod, the kubelet only waits for the sidecar init container to have started.
+
+The kubelet will consider the startup for the sidecar container as being completed
+if the startup probe succeeds and the postStart handler is completed.
+This condition is represented with the field Started of ContainerStatus type.
+If you do not define a startup probe, the kubelet will consider the container
+startup to be completed immediately after the postStart handler completion.
 
 For init containers, you can either omit the `restartPolicy` field, or set it to `Always`. Omitting the field
 means that you want a true init container that runs to completion before application startup.
 
 Sidecar containers do not block Pod completion: if all regular containers are complete, sidecar
 containers in that Pod will be terminated.
 
-For sidecar containers, the restart behavior is more complex than for init containers. In a Pod with
-`restartPolicy` set to `Never`, a sidecar container that fails during Pod startup will **not** be restarted
-and the whole Pod is treated as having failed. If the Pod's `restartPolicy` is `Always` or `OnFailure`,
-a sidecar that fails to start will be retried.
-
 Once the sidecar container has started (process running, `postStart` was successful, and
 any configured startup probe is passing), and then there's a failure, that sidecar container will be
 restarted even when the Pod's overall `restartPolicy` is `Never` or `OnFailure`.
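A minimal sketch of the resulting API shape (container names and images are placeholders, not from this commit): a sidecar is declared as an init container whose `restartPolicy` is `Always`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar          # illustrative name
spec:
  initContainers:
  - name: log-shipper             # keeps running for the Pod's lifetime
    image: example.com/logging-agent:1.0      # placeholder image
    restartPolicy: Always         # marks this init container as a sidecar
  containers:
  - name: app
    image: example.com/app:1.0                # placeholder image
```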
@@ -165,7 +185,7 @@ To learn more, read [API for sidecar containers](/docs/concepts/workloads/pods/i
 Kubernetes automatically sets a `storageClassName` for a PersistentVolumeClaim (PVC) if you don't provide
 a value. The control plane also sets a StorageClass for any existing PVC that doesn't have a `storageClassName`
 defined.
-Previous versions of Kubernetes also had this behavior; for Kubernetes v1.28 is is automatic and always
+Previous versions of Kubernetes also had this behavior; for Kubernetes v1.28 it is automatic and always
 active; the feature has graduated to stable (general availability).
 
 To learn more, read about [StorageClass](/docs/concepts/storage/storage-classes/) in the Kubernetes
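For illustration only (a minimal PVC sketch, not part of this commit): a claim that omits `storageClassName`, which the control plane now always fills in with the cluster's default StorageClass:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                      # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # storageClassName omitted on purpose; the default StorageClass is assigned automatically
```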
@@ -294,4 +314,4 @@ Have something you’d like to broadcast to the Kubernetes community? Share your
 
 * Read more about what’s happening with Kubernetes on the [blog](https://kubernetes.io/blog/).
 
-* Learn more about the [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team).
+* Learn more about the [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team).
