@@ -616,15 +616,15 @@ kubectl get service hostnames -o json
* Does the port's `protocol` match the Pod's?
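To eyeball all of these port fields at once, one option (a sketch, assuming the `hostnames` Service used throughout this page) is a `jsonpath` query over the Service's ports:

```shell
# Print name, port, targetPort, and protocol for every port on the Service.
kubectl get service hostnames \
  -o jsonpath='{range .spec.ports[*]}{.name} {.port} {.targetPort} {.protocol}{"\n"}{end}'
```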
<!--
- ## Does the Service have any Endpoints?
+ ## Does the Service have any EndpointSlices?
If you got this far, you have confirmed that your Service is correctly
defined and is resolved by DNS. Now let's check that the Pods you ran are
actually being selected by the Service.
Earlier you saw that the Pods were running. You can re-check that:
-->
- ## Does the Service have any Endpoints? {#does-the-service-have-any-endpoints}
+ ## Does the Service have any EndpointSlices? {#does-the-service-have-any-endpoints}

If you got this far, you have confirmed that your Service is correctly defined
and is resolved by DNS. Now let's check that the Pods you ran are actually being
selected by the Service.
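A quick re-check might look like this (a sketch, assuming the `app=hostnames` label used by the Deployment earlier in this walkthrough):

```shell
# List the Pods that carry the label the Service is expected to select.
kubectl get pods -l app=hostnames
```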
@@ -658,34 +658,34 @@ restarted. Frequent restarts could lead to intermittent connectivity issues.
If the restart count is high, read more about how to [debug pods](/docs/tasks/debug/debug-application/debug-pods).
Inside the Kubernetes system is a control loop which evaluates the selector of
- every Service and saves the results into a corresponding Endpoints object.
+ every Service and saves the results into a corresponding EndpointSlice object.
-->
The "RESTARTS" column says that these Pods are not crashing frequently or being
restarted. Frequent restarts could lead to intermittent connectivity issues.
If the restart count is high, read more about how to
[debug Pods](/zh-cn/docs/tasks/debug/debug-application/debug-pods).

Inside the Kubernetes system is a control loop which evaluates the selector of
every Service and saves the results into a corresponding
- Endpoints object.
+ EndpointSlice object.

```shell
- kubectl get endpoints hostnames
+ kubectl get endpointslices -l kubernetes.io/service-name=hostnames
```
673
673
674
674
```
- NAME        ENDPOINTS
- hostnames   10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
+ NAME              ADDRESSTYPE   PORTS   ENDPOINTS
+ hostnames-ytpni   IPv4          9376    10.244.0.5,10.244.0.6,10.244.0.7
```
<!--
- This confirms that the endpoints controller has found the correct Pods for
+ This confirms that the EndpointSlice controller has found the correct Pods for
your Service. If the `ENDPOINTS` column is `<none>`, you should check that
the `spec.selector` field of your Service actually selects for
`metadata.labels` values on your Pods. A common mistake is to have a typo or
other error, such as the Service selecting for `app=hostnames`, but the
Deployment specifying `run=hostnames`, as in versions previous to 1.18, where
the `kubectl run` command could have been also used to create a Deployment.
-->
- This confirms that the Endpoints controller has found the correct Pods for your Service.
+ This confirms that the EndpointSlice controller has found the correct Pods for your Service.

If the `ENDPOINTS` column is `<none>`, you should check that the `spec.selector` field
of your Service actually selects for the `metadata.labels` values on your Pods.
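One way to compare the two sides directly (a sketch, assuming the `hostnames` names used on this page):

```shell
# The labels the Service selects on:
kubectl get service hostnames -o jsonpath='{.spec.selector}'

# List all Pods in the namespace with their labels and compare by eye:
kubectl get pods --show-labels
```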
A common mistake is a typo or other error, such as the Service selecting for
`app=hostnames`, but the
@@ -737,7 +737,7 @@ hostnames-632524106-tlaok
```
<!--
- You expect each Pod in the Endpoints list to return its own hostname. If
+ You expect each Pod in the endpoints list to return its own hostname. If
this is not what happens (or whatever the correct behavior is for your own
Pods), you should investigate what's happening there.
-->
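To verify this from inside the cluster, a loop over the addresses reported by the EndpointSlice can hit each backend in turn (a sketch, assuming the endpoint IPs shown above and a shell with `wget` available, for example inside a Pod):

```shell
# Each backend should answer with its own hostname.
for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
    wget -qO- http://$ep
done
```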
@@ -747,7 +747,7 @@ Pods), you should investigate what's happening there.
<!--
## Is the kube-proxy working?
- If you get here, your Service is running, has Endpoints, and your Pods
+ If you get here, your Service is running, has EndpointSlices, and your Pods
are actually serving. At this point, the whole Service proxy mechanism is
suspect. Let's confirm it, piece by piece.
@@ -759,7 +759,7 @@ will have to investigate whatever implementation of Services you are using.
-->
## Is the kube-proxy working? {#is-the-kube-proxy-working}

- If you get here, your Service is running, has Endpoints, and your Pods are actually serving.
+ If you get here, your Service is running, has EndpointSlices, and your Pods are actually serving.
At this point, the whole Service proxy mechanism is suspect. Let's confirm it, piece by piece.

The default implementation of Services, used on most clusters, is kube-proxy.
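Before digging deeper, a quick sanity check is that kube-proxy is actually running on your nodes (a sketch, assuming kube-proxy is deployed as a DaemonSet in `kube-system` with the common `k8s-app=kube-proxy` label; some clusters run it differently or replace it entirely):

```shell
# One kube-proxy Pod per node, all Running, is the expected picture.
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide
```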
@@ -1036,7 +1036,7 @@ used and configured properly, you should see:
## Seek help
If you get this far, something very strange is happening. Your Service is
- running, has Endpoints, and your Pods are actually serving. You have DNS
+ running, has EndpointSlices, and your Pods are actually serving. You have DNS
working, and `kube-proxy` does not seem to be misbehaving. And yet your
Service is not working. Please let us know what is going on, so we can help
investigate!
@@ -1048,7 +1048,7 @@ Contact us on
-->
## Seek help {#seek-help}

- If you get this far, something very strange is happening. Your Service is running, has Endpoints,
+ If you get this far, something very strange is happening. Your Service is running, has EndpointSlices,
your Pods are actually serving, your DNS works, the `iptables` rules are installed,
and `kube-proxy` does not seem to be misbehaving.
And yet your Service is not working. Please let us know what is going on, so we can help investigate!