
Commit b06db3c

Merge remote-tracking branch 'upstream/main' into dev-1.30
Merge main into dev-1.30 to keep in sync
2 parents: c7cd6c5 + fe39fbe

File tree: 30 files changed (+4102, -625 lines)

assets/scss/_custom.scss

Lines changed: 26 additions & 1 deletion
@@ -239,6 +239,31 @@ body.td-404 main .error-details {
   }
 }
 
+.search-item.nav-item {
+  input, input::placeholder {
+    color: black;
+  }
+}
+
+.flip-nav .search-item {
+  .td-search-input, .search-bar {
+    background-color: $medium-grey;
+  }
+  input, input::placeholder, .search-icon {
+    color: white;
+  }
+  textarea:focus, input:focus {
+    color: white;
+  }
+}
+
+@media only screen and (max-width: 1500px) {
+  header nav .search-item {
+    display: none;
+  }
+}
+
+
 /* FOOTER */
 footer {
   background-color: #303030;
@@ -1050,4 +1075,4 @@ div.alert > em.javascript-required {
   border: none;
   outline: none;
   padding: .5em 0 .5em 0;
-}
+}
Lines changed: 210 additions & 0 deletions (new file)
---
layout: blog
title: 'Using Go workspaces in Kubernetes'
date: 2024-03-19T08:30:00-08:00
slug: go-workspaces-in-kubernetes
canonicalUrl: https://www.kubernetes.dev/blog/2024/03/19/go-workspaces-in-kubernetes/
---

**Author:** Tim Hockin (Google)

The [Go programming language](https://go.dev/) has played a huge role in the success of Kubernetes. As Kubernetes has grown, matured, and pushed the bounds of what "regular" projects do, the Go project team has also grown and evolved the language and tools. In recent releases, Go introduced a feature called "workspaces" which was aimed at making projects like Kubernetes easier to manage.

We've just completed a major effort to adopt workspaces in Kubernetes, and the results are great. Our codebase is simpler and less error-prone, and we're no longer off on our own technology island.

## GOPATH and Go modules

Kubernetes is one of the most visible open source projects written in Go. The earliest versions of Kubernetes, dating back to 2014, were built with Go 1.3. Today, 10 years later, Go is up to version 1.22 - and let's just say that a _whole lot_ has changed.

In 2014, Go development was entirely based on [`GOPATH`](https://go.dev/wiki/GOPATH). As a Go project, Kubernetes lived by the rules of `GOPATH`. In the buildup to Kubernetes 1.4 (mid 2016), we introduced a directory tree called `staging`. This allowed us to pretend to be multiple projects, but still exist within one git repository (which had advantages for development velocity). The magic of `GOPATH` allowed this to work.
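
For a sense of what that looks like, here's an abridged peek at the `staging` tree in the kubernetes/kubernetes repository (illustrative, not exhaustive):

```shell
# Each directory under staging/src/k8s.io is published as its own project,
# but developed inside the main repository (listing abridged):
ls staging/src/k8s.io
# api  apimachinery  apiserver  client-go  code-generator  ...
```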

Kubernetes depends on several code-generation tools which have to find, read, and write Go code packages. Unsurprisingly, those tools grew to rely on `GOPATH`. This all worked pretty well until Go introduced modules in Go 1.11 (mid 2018).

Modules were an answer to many issues around `GOPATH`. They gave projects more control over how to track and manage dependencies, and were overall a great step forward. Kubernetes adopted them. However, modules had one major drawback - most Go tools could not work on multiple modules at once. This was a problem for our code-generation tools and scripts.

Thankfully, Go offered a way to temporarily disable modules (`GO111MODULE` to the rescue). We could get the dependency-tracking benefits of modules while keeping the flexibility of `GOPATH` for our tools. We even wrote helper tools to create fake `GOPATH` trees and played tricks with symlinks in our vendor directory (which holds a snapshot of our external dependencies), and we made it all work.
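
A minimal sketch of the kind of trick this involved, with hypothetical paths standing in for our actual helper scripts:

```shell
# Build a throwaway GOPATH whose src/ tree symlinks back into the real
# checkout, then run tools with modules disabled so GOPATH rules apply.
FAKE_GOPATH="$(mktemp -d)"
mkdir -p "${FAKE_GOPATH}/src/k8s.io"
ln -s "$(pwd)" "${FAKE_GOPATH}/src/k8s.io/kubernetes"
GOPATH="${FAKE_GOPATH}" GO111MODULE=off go build k8s.io/kubernetes/cmd/...
```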

And for the last 5 years it _has_ worked pretty well. That is, it worked well unless you looked too closely at what was happening. Woe be upon you if you had the misfortune to work on one of the code-generation tools, or the build system, or the ever-expanding suite of bespoke shell scripts we use to glue everything together.

## The problems

Like any large software project, we Kubernetes developers have all learned to deal with a certain amount of constant low-grade pain. Our custom `staging` mechanism let us bend the rules of Go; it was a little clunky, but when it worked (which was most of the time) it worked pretty well. When it failed, the errors were inscrutable and un-Googleable - nobody else was doing the silly things we were doing. Usually the fix was to re-run one or more of the `update-*` shell scripts in our aptly named `hack` directory.

As time went on we drifted farther and farther from "regular" Go projects. At the same time, Kubernetes got more and more popular. For many people, Kubernetes was their first experience with Go, and it wasn't always a good experience.

Our eccentricities also impacted people who consumed some of our code, such as our client library and the code-generation tools (which turned out to be useful in the growing ecosystem of custom resources). The tools only worked if you stored your code in a particular `GOPATH`-compatible directory structure, even though `GOPATH` had been replaced by modules more than four years prior.

This state persisted because of the confluence of three factors:
1. Most of the time it only hurt a little (punctuated with short moments of more acute pain).
1. Kubernetes was still growing in popularity - we all had other, more urgent things to work on.
1. The fix was not obvious, and whatever we came up with was going to be both hard and tedious.

As a Kubernetes maintainer and long-timer, my fingerprints were all over the build system, the code-generation tools, and the `hack` scripts. While the pain of our mess may have been low _on average_, I was one of the people who felt it regularly.

## Enter workspaces

Along the way, the Go language team saw what we (and others) were doing and didn't love it. They designed a new way of stitching multiple modules together into a new _workspace_ concept. Once enrolled in a workspace, Go tools had enough information to work in any directory structure and across modules, without `GOPATH` or symlinks or other dirty tricks.
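
Mechanically, a workspace is just a `go.work` file at the root listing the enrolled modules. A minimal example (not the exact Kubernetes configuration):

```shell
go work init .                        # create go.work, enrolling the root module
go work use ./staging/src/k8s.io/api  # enroll a nested module
cat go.work
# go 1.22
#
# use (
#     .
#     ./staging/src/k8s.io/api
# )
```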

When I first saw this proposal I knew that this was the way out. This was how to break the logjam. If workspaces were the technical solution, then I would put in the work to make it happen.

## The work

Adopting workspaces was deceptively easy. I very quickly had the codebase compiling and running tests with workspaces enabled. I set out to purge the repository of anything `GOPATH`-related. That's when I hit the first real bump - the code-generation tools.

We had about a dozen tools, totalling several thousand lines of code. All of them were built using an internal framework called [gengo](https://github.com/kubernetes/gengo), which was built on Go's own parsing libraries. There were two main problems:

1. Those parsing libraries didn't understand modules or workspaces.
1. `GOPATH` allowed us to pretend that Go _package paths_ and directories on disk were interchangeable in trivial ways. They are not (see the sketch below).
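
To make the second point concrete: for any dependency, the import path and the on-disk directory are unrelated strings that only the `go` tool can map between. A hypothetical run (the version in the output is made up):

```shell
# The import path says nothing about where the source actually lives:
go list -f '{{.ImportPath}} => {{.Dir}}' golang.org/x/net/html
# golang.org/x/net/html => /home/user/go/pkg/mod/golang.org/x/net@v0.24.0/html
```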

Switching to a [modules- and workspaces-aware parsing](https://pkg.go.dev/golang.org/x/tools/go/packages) library was the first step. Then I had to make a long series of changes to each of the code-generation tools. Critically, I had to find a way to do it that was possible for some other person to review! I knew that I needed reviewers who could cover the breadth of changes and reviewers who could go into great depth on specific topics like gengo and Go's module semantics. Looking at the history for the areas I was touching, I asked Joe Betz and Alex Zielenski (SIG API Machinery) to go deep on gengo and code-generation, Jordan Liggitt (SIG Architecture and all-around wizard) to cover Go modules and vendoring and the `hack` scripts, and Antonio Ojea (wearing his SIG Testing hat) to make sure the whole thing made sense. We agreed that a series of small commits would be easiest to review, even if the codebase might not actually work at each commit.

Sadly, these were not mechanical changes. I had to dig into each tool to figure out where they were processing disk paths versus where they were processing package names, and where those were being conflated. I made extensive use of the [delve](https://github.com/go-delve/delve) debugger, which I just can't say enough good things about.

One unfortunate result of this work was that I had to break compatibility. The gengo library simply did not have enough information to process packages outside of `GOPATH`. After discussion with gengo and Kubernetes maintainers, we agreed to make [gengo/v2](https://github.com/kubernetes/gengo/tree/master/v2). I also used this as an opportunity to clean up some of the gengo APIs and the tools' CLIs to be more understandable and to stop conflating packages and directories. For example, you can't just string-join directory names and assume the result is a valid package name.
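
The `staging` tree is the clearest example of why (a hypothetical run inside the kubernetes/kubernetes repository):

```shell
# Joining directory names would give "staging/src/k8s.io/api/core/v1",
# which is not a package path; the real import path comes from the module:
go list -f '{{.ImportPath}}' ./staging/src/k8s.io/api/core/v1
# k8s.io/api/core/v1
```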

Once I had the code-generation tools converted, I shifted attention to the dozens of scripts in the `hack` directory. One by one I had to run them, debug, and fix failures. Some of them needed minor changes and some needed to be rewritten.

Along the way we hit some cases that Go did not support, like workspace vendoring. Kubernetes depends on vendoring to ensure that our dependencies are always available, even if their source code is removed from the internet (it has happened more than once!). After we discussed the problem with the Go team and looked at possible workarounds, they decided the right path was to [implement workspace vendoring](https://github.com/golang/go/issues/60056).
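
Workspace vendoring shipped in Go 1.22, so with a `go.work` in place the whole dance reduces to a single command (a sketch, requires Go 1.22+):

```shell
# Vendor the combined dependencies of every module in the workspace
# into the root vendor/ directory:
go work vendor
```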

The eventual Pull Request contained over 200 individual commits.

## Results

Now that this work has been merged, what does this mean for Kubernetes users? Pretty much nothing. No features were added or changed. This work was not about fixing bugs (and hopefully none were introduced).

This work was mainly for the benefit of the Kubernetes project, to help and simplify the lives of the core maintainers. In fact, it would not be a lie to say that it was rather self-serving - my own life is a little bit better now.

This effort, while unusually large, is just a tiny fraction of the overall maintenance work that needs to be done. Like any large project, we have lots of "technical debt" - tools that made point-in-time assumptions and need revisiting, internal APIs whose organization doesn't make sense, code which doesn't follow conventions because they didn't exist at the time, and tests which aren't as rigorous as they could be, just to throw out a few examples. This work is often called "grungy" or "dirty", but in reality it's just an indication that the project has grown and evolved. I love this stuff, but there's far more than I can ever tackle on my own, which makes it an interesting way for people to get involved. As our unofficial motto goes: "chop wood and carry water".

Kubernetes used to be a case study of how _not_ to do large-scale Go development, but now our codebase is simpler (and in some cases faster!) and more consistent. Things that previously seemed like they _should_ work, but didn't, now behave as expected.

Our project is now a little more "regular". Not completely so, but we're getting closer.

## Thanks

This effort would not have been possible without tons of support.

First, thanks to the Go team for hearing our pain, taking feedback, and solving the problems for us.

Special mega-thanks goes to Michael Matloob, on the Go team at Google, who designed and implemented workspaces. He guided me every step of the way, and was very generous with his time, answering all my questions, no matter how dumb.

Writing code is just half of the work, so another special thanks to my reviewers: Jordan Liggitt, Joe Betz, Alexander Zielenski, and Antonio Ojea. These folks brought a wealth of expertise and attention to detail, and made this work smarter and safer.

content/en/docs/contribute/new-content/blogs-case-studies.md

Lines changed: 11 additions & 0 deletions
@@ -189,6 +189,17 @@ To submit a blog post follow these directions:
 - **Tutorials** that only apply to specific releases or versions and not all future versions
 - References to pre-GA APIs or features
 
+### Mirroring from the Kubernetes Contributor Blog
+
+To mirror a blog post from the [Kubernetes contributor blog](https://www.kubernetes.dev/blog/), follow these guidelines:
+
+- Keep the blog content the same. If there are changes, they should be made to the original article first, and then to the mirrored article.
+- The mirrored blog should have a `canonicalUrl`, that is, essentially the url of the original blog after it has been published.
+- [Kubernetes contributor blogs](https://kubernetes.dev/blog) have their authors mentioned in the YAML header, while the Kubernetes blog posts mention authors in the blog content itself. This should be changed when mirroring the content.
+- Publication dates stay the same as the original blog.
+
+All of the other guidelines and expectations detailed above apply as well.
+
 ## Submit a case study
 
 Case studies highlight how organizations are using Kubernetes to solve real-world problems. The

content/en/docs/contribute/participate/pr-wranglers.md

Lines changed: 5 additions & 3 deletions
@@ -19,9 +19,6 @@ see [Reviewing changes](/docs/contribute/review/).
 
 Each day in a week-long shift as PR Wrangler:
 
-- Triage and tag incoming issues daily. See
-  [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
-  for guidelines on how SIG Docs uses metadata.
 - Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality
   and adherence to the [Style](/docs/contribute/style/style-guide/) and
   [Content](/docs/contribute/style/content-guide/) guides.
@@ -44,6 +41,11 @@ Each day in a week-long shift as PR Wrangler:
   issues as [good first issues](https://kubernetes.dev/docs/guide/help-wanted/#good-first-issue).
 - Using style fixups as good first issues is a good way to ensure a supply of easier tasks
   to help onboard new contributors.
+- Also check for pull requests against the [reference docs generator](https://github.com/kubernetes-sigs/reference-docs) code, and review those (or bring in help).
+- Support the [issue wrangler](/docs/contribute/participate/issue-wrangler/) to
+  triage and tag incoming issues daily.
+  See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
+  for guidelines on how SIG Docs uses metadata.
 
 {{< note >}}
 PR wrangler duties do not apply to localization PRs (non-English PRs).

content/en/docs/reference/node/_index.md

Lines changed: 5 additions & 1 deletion
@@ -9,11 +9,15 @@ This section contains the following reference topics about nodes:
 * the kubelet's [checkpoint API](/docs/reference/node/kubelet-checkpoint-api/)
 * a list of [Articles on dockershim Removal and on Using CRI-compatible Runtimes](/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/)
 
+* [Kubelet Device Manager API Versions](/docs/reference/node/device-plugin-api-versions)
+
+* [Node Labels Populated By The Kubelet](/docs/reference/node/node-labels)
+
 * [Node `.status` information](/docs/reference/node/node-status/)
 
 You can also read node reference details from elsewhere in the
 Kubernetes documentation, including:
 
 * [Node Metrics Data](/docs/reference/instrumentation/node-metrics).
 
-* [CRI Pod & Container Metrics](/docs/reference/instrumentation/cri-pod-container-metrics).
+* [CRI Pod & Container Metrics](/docs/reference/instrumentation/cri-pod-container-metrics).

content/en/docs/reference/using-api/deprecation-guide.md

Lines changed: 4 additions & 1 deletion
@@ -78,7 +78,8 @@ The **autoscaling/v2beta2** API version of HorizontalPodAutoscaler is no longer
 
 * Migrate manifests and API clients to use the **autoscaling/v2** API version, available since v1.23.
 * All existing persisted objects are accessible via the new API
-
+* Notable changes:
+  * `targetAverageUtilization` is replaced with `target.averageUtilization` and `target.type: Utilization`. See [Autoscaling on multiple metrics and custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics).
 ### v1.25
 
 The **v1.25** release stopped serving the following deprecated API versions:
@@ -130,6 +131,8 @@ The **autoscaling/v2beta1** API version of HorizontalPodAutoscaler is no longer
 
 * Migrate manifests and API clients to use the **autoscaling/v2** API version, available since v1.23.
 * All existing persisted objects are accessible via the new API
+* Notable changes:
+  * `targetAverageUtilization` is replaced with `target.averageUtilization` and `target.type: Utilization`. See [Autoscaling on multiple metrics and custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics).
 
 #### PodDisruptionBudget {#poddisruptionbudget-v125}
content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ Next install the `amqp-tools` so you can work with message queues.
 The next commands show what you need to run inside the interactive shell in that Pod:
 
 ```shell
-apt-get update && apt-get install -y curl ca-certificates amqp-tools python dnsutils
+apt-get update && apt-get install -y curl ca-certificates amqp-tools python3 dnsutils
 ```
 
 Later, you will make a container image that includes these packages.
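
An illustrative follow-up check inside the Pod (not part of the tutorial text):

```shell
# amqp-tools provides amqp-publish and amqp-consume; python3 replaces the
# python package, which newer Debian-based images no longer ship:
python3 --version
command -v amqp-publish amqp-consume
```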

content/en/docs/tasks/job/fine-parallel-processing-work-queue.md

Lines changed: 7 additions & 1 deletion
@@ -59,6 +59,12 @@ You could also download the following files directly:
 - [`rediswq.py`](/examples/application/job/redis/rediswq.py)
 - [`worker.py`](/examples/application/job/redis/worker.py)
 
+To start a single instance of Redis, you need to create the redis pod and redis service:
+
+```shell
+kubectl apply -f https://k8s.io/examples/application/job/redis/redis-pod.yaml
+kubectl apply -f https://k8s.io/examples/application/job/redis/redis-service.yaml
+```
 
 ## Filling the queue with tasks
 
@@ -171,7 +177,7 @@ Since the workers themselves detect when the workqueue is empty, and the Job controller doesn't
 know about the workqueue, it relies on the workers to signal when they are done working.
 The workers signal that the queue is empty by exiting with success. So, as soon as **any** worker
 exits with success, the controller knows the work is done, and that the Pods will exit soon.
-So, you need to set the completion count of the Job to 1. The job controller will wait for
+So, you need to leave the completion count of the Job unset. The job controller will wait for
 the other pods to complete too.
 
 ## Running the Job
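
For illustration (assuming the tutorial's Job is named `job-wq-2`; this is not part of the tutorial text), you can confirm that the completion count really is unset:

```shell
# With .spec.completions unset, the Job completes as soon as any worker
# succeeds; this prints an empty value when completions is unset:
kubectl get job job-wq-2 -o jsonpath='{.spec.completions}'
```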

content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md

Lines changed: 0 additions & 2 deletions
@@ -227,7 +227,6 @@ dengan nama untuk melakukan pemeriksaan _liveness_ HTTP atau TCP:
     ports:
     - name: liveness-port
       containerPort: 8080
-      hostPort: 8080
 
     livenessProbe:
       httpGet:
@@ -251,7 +250,6 @@ Sehingga, contoh sebelumnya menjadi:
     ports:
     - name: liveness-port
       containerPort: 8080
-      hostPort: 8080
 
     livenessProbe:
       httpGet:
