Commit fb55bda

Merge pull request #50610 from my-git9/npa-13283

[zh-cn]sync image-volumes debug-service

2 parents d02247e + b0e816b commit fb55bda

File tree: 4 files changed, +86 −15 lines

content/zh-cn/docs/tasks/configure-pod-container/image-volumes.md

Lines changed: 53 additions & 0 deletions
@@ -116,3 +116,56 @@ to a valid reference and consuming it in the `volumeMounts` of the container. Fo
 ## Further reading

 - [`image`](/zh-cn/docs/concepts/storage/volumes/#image)
+
+<!--
+## Use `subPath` (or `subPathExpr`)
+
+It is possible to utilize
+[`subPath`](/docs/concepts/storage/volumes/#using-subpath) or
+[`subPathExpr`](/docs/concepts/storage/volumes/#using-subpath-expanded-environment)
+from Kubernetes v1.33 when using the image volume feature.
+-->
+## Use `subPath` (or `subPathExpr`)
+
+Starting with Kubernetes v1.33, you can use
+[`subPath`](/zh-cn/docs/concepts/storage/volumes/#using-subpath) or
+[`subPathExpr`](/zh-cn/docs/concepts/storage/volumes/#using-subpath-expanded-environment)
+when using the image volume feature (see the `subPathExpr` sketch after the steps below).
+
+{{% code_sample file="pods/image-volumes-subpath.yaml" %}}
+
+<!--
+1. Create the pod on your cluster:
+-->
+1. Create the Pod on your cluster:
+
+   ```shell
+   kubectl apply -f https://k8s.io/examples/pods/image-volumes-subpath.yaml
+   ```
+
+<!--
+1. Attach to the container:
+-->
+2. Attach to the container:
+
+   ```shell
+   kubectl attach -it image-volume bash
+   ```
+
+<!--
+1. Check the content of the file from the `dir` sub path in the volume:
+-->
+3. Check the content of the file from the `dir` sub path in the volume:
+
+   ```shell
+   cat /volume/file
+   ```
+
+<!--
+The output is similar to:
+-->
+
+The output is similar to:
+
+```none
+1
+```
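The sample above exercises only `subPath`. With `subPathExpr`, the sub path is instead expanded from the container's environment at Pod startup. The following is a minimal sketch, not part of this commit; the Pod name `image-volume-subpathexpr` and the `DIR_NAME` variable are illustrative:

```shell
# A hypothetical subPathExpr variant of the sample above: the sub path
# is resolved from an environment variable rather than hard-coded.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-subpathexpr   # illustrative name, not from this commit
spec:
  containers:
  - name: shell
    command: ["sleep", "infinity"]
    image: debian
    env:
    - name: DIR_NAME               # expanded in subPathExpr below
      value: dir
    volumeMounts:
    - name: volume
      mountPath: /volume
      subPathExpr: $(DIR_NAME)     # resolves to the image's "dir" sub path
  volumes:
  - name: volume
    image:
      reference: quay.io/crio/artifact:v2
      pullPolicy: IfNotPresent
EOF
```

Inside this container, `cat /volume/file` should print the same content as in step 3 above.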

content/zh-cn/docs/tasks/debug/debug-application/debug-service.md

Lines changed: 14 additions & 14 deletions
@@ -616,15 +616,15 @@ kubectl get service hostnames -o json
 * Is the port's `protocol` correct for the Pods?

 <!--
-## Does the Service have any Endpoints?
+## Does the Service have any EndpointSlices?

 If you got this far, you have confirmed that your Service is correctly
 defined and is resolved by DNS. Now let's check that the Pods you ran are
 actually being selected by the Service.

 Earlier you saw that the Pods were running. You can re-check that:
 -->
-## Does the Service have any Endpoints? {#does-the-service-have-any-endpoints}
+## Does the Service have any EndpointSlices? {#does-the-service-have-any-endpoints}

 If you have gotten this far, you have confirmed that your Service is correctly defined and is resolved by DNS.
 Now let's check that the Pods you ran are indeed being selected by the Service.
@@ -658,34 +658,34 @@ restarted. Frequent restarts could lead to intermittent connectivity issues.
 If the restart count is high, read more about how to [debug pods](/docs/tasks/debug/debug-application/debug-pods).

 Inside the Kubernetes system is a control loop which evaluates the selector of
-every Service and saves the results into a corresponding Endpoints object.
+every Service and saves the results into a corresponding EndpointSlice object.
 -->
 The "RESTARTS" column shows that the Pods are not crashing or restarting frequently. Frequent crashes can lead to intermittent connectivity issues.
 If the restart count is high, read about how to [debug pods](/zh-cn/docs/tasks/debug/debug-application/debug-pods)
 to learn the relevant techniques.

 Inside the Kubernetes system is a control loop which evaluates the selector of every Service and saves the results into
-a corresponding Endpoints object.
+a corresponding EndpointSlice object.

 ```shell
-kubectl get endpoints hostnames
+kubectl get endpointslices -l kubernetes.io/service-name=hostnames
 ```

 ```
-NAME        ENDPOINTS
-hostnames   10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
+NAME              ADDRESSTYPE   PORTS   ENDPOINTS
+hostnames-ytpni   IPv4          9376    10.244.0.5,10.244.0.6,10.244.0.7
 ```

 <!--
-This confirms that the endpoints controller has found the correct Pods for
+This confirms that the EndpointSlice controller has found the correct Pods for
 your Service. If the `ENDPOINTS` column is `<none>`, you should check that
 the `spec.selector` field of your Service actually selects for
 `metadata.labels` values on your Pods. A common mistake is to have a typo or
 other error, such as the Service selecting for `app=hostnames`, but the
 Deployment specifying `run=hostnames`, as in versions previous to 1.18, where
 the `kubectl run` command could have been also used to create a Deployment.
 -->
-This confirms that the Endpoints controller has found the correct Pods for your Service.
+This confirms that the EndpointSlice controller has found the correct Pods for your Service.
 If the `ENDPOINTS` column is `<none>`, you should check the `spec.selector` field of your Service,
 and the `metadata.labels` values of the Pods you actually intend to select.
 A common mistake is a typo or other error, such as the Service selecting `app=hostnames` but
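When hunting for such a selector mismatch, it helps to put the two sides next to each other. A minimal check, assuming the `hostnames` Service and Pods from this walkthrough:

```shell
# Print the Service's selector ...
kubectl get service hostnames -o jsonpath='{.spec.selector}'
# ... and compare it with the labels actually present on the Pods
kubectl get pods --show-labels
```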
@@ -737,7 +737,7 @@ hostnames-632524106-tlaok
 ```

 <!--
-You expect each Pod in the Endpoints list to return its own hostname. If
+You expect each Pod in the endpoints list to return its own hostname. If
 this is not what happens (or whatever the correct behavior is for your own
 Pods), you should investigate what's happening there.
 -->
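One way to verify this is to query each backend directly. A sketch, assuming the addresses and port reported by the EndpointSlice output above, run from a node or Pod that can reach them:

```shell
# Query every backend address; each Pod should answer with its own hostname.
for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
    wget -qO- "$ep"
done
```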
@@ -747,7 +747,7 @@ Pods), you should investigate what's happening there.
 <!--
 ## Is the kube-proxy working?

-If you get here, your Service is running, has Endpoints, and your Pods
+If you get here, your Service is running, has EndpointSlices, and your Pods
 are actually serving. At this point, the whole Service proxy mechanism is
 suspect. Let's confirm it, piece by piece.
@@ -759,7 +759,7 @@ will have to investigate whatever implementation of Services you are using.
 -->
 ## Is kube-proxy working? {#is-the-kube-proxy-working}

-If you have gotten here, your Service is running, has Endpoints, and your Pods are actually serving.
+If you have gotten here, your Service is running, has EndpointSlices, and your Pods are actually serving.
 At this point, the whole Service proxy mechanism is suspect. Let's confirm it, piece by piece.

 The default implementation of Services (applied on most clusters) is kube-proxy.
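A first step in that confirmation is making sure kube-proxy is running on the node at all. A minimal sketch; that kube-proxy runs as a host process under a systemd unit named `kube-proxy` is an assumption and varies by cluster:

```shell
# Confirm the kube-proxy process is running on the node
ps auxw | grep kube-proxy
# If it runs under systemd (assumption), inspect its recent logs
journalctl -u kube-proxy --since "10 minutes ago"
```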
@@ -1036,7 +1036,7 @@ used and configured properly, you should see:
 ## Seek help

 If you get this far, something very strange is happening. Your Service is
-running, has Endpoints, and your Pods are actually serving. You have DNS
+running, has EndpointSlices, and your Pods are actually serving. You have DNS
 working, and `kube-proxy` does not seem to be misbehaving. And yet your
 Service is not working. Please let us know what is going on, so we can help
 investigate!
@@ -1048,7 +1048,7 @@ Contact us on
 -->
 ## Seek help {#seek-help}

-If you get this far, something really strange is happening. Your Service is running, Endpoints exist,
+If you get this far, something really strange is happening. Your Service is running, EndpointSlices exist,
 and your Pods are indeed serving. Your DNS works, the `iptables` rules are installed, and `kube-proxy` seems fine as well.
 And yet your Service is still not working. In that case, please let us know, so we can help investigate!

content/zh-cn/examples/pods/image-volumes-subpath.yaml

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: image-volume
+spec:
+  containers:
+  - name: shell
+    command: ["sleep", "infinity"]
+    image: debian
+    volumeMounts:
+    - name: volume
+      mountPath: /volume
+      subPath: dir
+  volumes:
+  - name: volume
+    image:
+      reference: quay.io/crio/artifact:v2
+      pullPolicy: IfNotPresent

content/zh-cn/examples/pods/image-volumes.yaml

Lines changed: 1 addition & 1 deletion
@@ -13,5 +13,5 @@ spec:
   volumes:
   - name: volume
     image:
-      reference: quay.io/crio/artifact:v1
+      reference: quay.io/crio/artifact:v2
       pullPolicy: IfNotPresent
