keps/sig-release/1732-artifact-management/README.md
17 additions & 14 deletions
@@ -14,10 +14,10 @@
 <!-- /toc -->
 
 ## Summary
+
 This document describes how official artifacts (Container Images, Binaries) for the Kubernetes
 project are managed and distributed.
 
-
 ## Motivation
 
 The motivation for this KEP is to describe a process by which artifacts (container images, binaries)
@@ -29,8 +29,9 @@ and that anyone in the project is capable (if given the right authority) to dist
 ### Goals
 
 The goals of this process are to enable:
-* Anyone in the community (with the right permissions) to manage the distribution of Kubernetes images and binaries.
-* Fast, cost-efficient access to artifacts around the world through appropriate mirrors and distribution
+
+- Anyone in the community (with the right permissions) to manage the distribution of Kubernetes images and binaries
+- Fast, cost-efficient access to artifacts around the world through appropriate mirrors and distribution
 
 This KEP will have succeeded when artifacts are all managed in the same manner and anyone in the community
 (with the right permissions) can manage these artifacts.
@@ -66,6 +67,7 @@ occur, so that the promotion tool needs to be capable of deleting images and art
 well as promoting them.
 
 ### HTTP Redirector Design
+
 To facilitate world-wide distribution of artifacts from a single (virtual) location we will
 ideally run a replicated redirector service in the United States, Europe and Asia.
 Each of these redirectors
@@ -75,6 +77,7 @@ address and a dns record indicating their location (e.g. `europe.artifacts.k8s.i
 We will use Geo DNS to route requests to `artifacts.k8s.io` to the correct redirector. This is necessary to ensure that we always route to a server which is accessible no matter what region we are in. We will need to extend or enhance the existing DNS synchronization tooling to handle creation of the GeoDNS records.
 
 #### Configuring the HTTP Redirector
+
 The HTTP Redirector service will be driven from a YAML configuration that specifies a path-to-mirror
 mapping. For now the redirector will serve content based on continent, for example:
 
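The configuration example itself falls outside the lines shown above. A minimal sketch of what a continent-keyed path-to-mirror mapping could look like follows; the field names, paths, and mirror URLs are illustrative assumptions, not the KEP's actual configuration:

```yaml
# Hypothetical redirector configuration: each served path maps to per-continent
# mirrors, with a default used when no continent-specific mirror exists.
# All field names and URLs below are illustrative assumptions.
paths:
  - path: /binaries/kops
    mirrors:
      default: https://storage.googleapis.com/k8s-artifacts/binaries/kops
      europe: https://k8s-artifacts-eu.example.org/binaries/kops
      asia: https://k8s-artifacts-asia.example.org/binaries/kops
```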
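The Geo DNS routing described earlier would resolve `artifacts.k8s.io` to the nearest regional redirector. A hedged sketch of that intent, in a hypothetical record format (the real k8s.io DNS synchronization tooling may express this differently), is:

```yaml
# Hypothetical geo-routed DNS entry: clients are answered with the redirector
# closest to their region. Record format and names are illustrative only.
artifacts.k8s.io.:
  type: CNAME
  geo:
    europe: europe.artifacts.k8s.io.
    asia: asia.artifacts.k8s.io.
    default: us.artifacts.k8s.io.
```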
@@ -96,33 +99,33 @@ manage the images for a Kubernetes release.
 
 ### Milestone 0 (MVP): In progress
 
-(Described in terms of kops, our first candidate; other candidates welcome!)
+(Described in terms of kOps, our first candidate; other candidates welcome!)
 
-* k8s-infra creates a "staging" GCS bucket for each project
+- k8s-infra creates a "staging" GCS bucket for each project
  (e.g. `k8s-artifacts-staging-<project>`) and a "prod" GCS bucket for promoted
  artifacts (e.g. `k8s-artifacts`, one bucket for all projects).
-* We grant write-access to the staging GCS bucket to trusted jobs / people in
-  each project (e.g. kops OWNERS and prow jobs can push to
+- We grant write-access to the staging GCS bucket to trusted jobs / people in
+  each project (e.g. kOps OWNERS and prow jobs can push to
  `k8s-artifacts-staging-kops`). We can encourage use of CI & reproducible
  builds, but we do not block on it.
-* We grant write-access to the prod bucket only to the infra-admins & the
+- We grant write-access to the prod bucket only to the infra-admins & the
  promoter process.
-* Promotion of artifacts to the "prod" GCS bucket is via a script / utility (as
+- Promotion of artifacts to the "prod" GCS bucket is via a script / utility (as
  we do today). For v1 we can promote based on a sha256sum file (only copy the
  files listed), similarly to the image promoter. We will experiment to develop
  that script / utility in this milestone, along with prow jobs (?) to publish
  to the staging buckets, and to figure out how best to run the promoter.
  Hopefully we can copy the image-promotion work closely.
-* We create a bucket-backed GCLB for serving, with a single url-map entry for
+- We create a bucket-backed GCLB for serving, with a single url-map entry for
  `binaries/` pointing to the prod bucket. (The URL prefix gives us some
  flexibility to e.g. add dynamic content later)
-* We create the artifacts.k8s.io DNS name pointing to the GCLB. (Unclear whether
+- We create the artifacts.k8s.io DNS name pointing to the GCLB. (Unclear whether
  we want one for staging, or just encourage pulling from GCS directly).
-* Projects start using the mirrors e.g. kops adds the
+- Projects start using the mirrors e.g. kOps adds the
  https://artifacts.k8s.io/binaries/kops mirror into the (upcoming) mirror-list
-  support, so that it will get real traffic but not break kops should this
+  support, so that it will get real traffic but not break kOps should this
  infrastructure break
-* We start to collect data from the GCLB logs. Questions we would like to
+- We start to collect data from the GCLB logs. Questions we would like to
  understand: What are the costs, and what would the costs be for localized
  mirrors? What is the performance impact (latency, throughput) of serving
  everything from GCLB? Is GCLB reachable from everywhere (including China)?
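The access-control bullets above split write access between per-project staging buckets and a locked-down prod bucket. A minimal sketch of how that could look as a GCS bucket IAM policy follows; the group names and bucket are assumptions for illustration, not the project's actual policy:

```yaml
# Hypothetical IAM policy for a staging bucket such as k8s-artifacts-staging-kops.
# Group names are illustrative assumptions; the roles are standard GCS roles.
bindings:
  - role: roles/storage.objectAdmin    # trusted project owners / prow jobs may push
    members:
      - group:kops-staging-writers@example.org
  - role: roles/storage.objectViewer   # anyone may read staging artifacts
    members:
      - allUsers
```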
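The promotion bullet proposes copying only the files listed in a sha256sum file, similarly to the image promoter. A hedged sketch of what such a file-promotion manifest might contain is below; the field names, paths, and checksums are placeholders, not the actual promoter format:

```yaml
# Hypothetical file-promotion manifest: only the files listed here, with matching
# checksums, are copied from the staging bucket to the prod bucket.
source: gs://k8s-artifacts-staging-kops
destination: gs://k8s-artifacts/binaries/kops
files:
  - name: <version>/linux/amd64/kops
    sha256: "<placeholder sha256 of the staged file>"
  - name: <version>/darwin/amd64/kops
    sha256: "<placeholder sha256 of the staged file>"
```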
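For the bucket-backed GCLB bullet, a rough sketch of a URL map that routes `/binaries/*` to a backend bucket is below, in the YAML shape of the GCP URL-map resource; the project and resource names are assumptions:

```yaml
# Hypothetical GCP URL map: requests for artifacts.k8s.io, including /binaries/*,
# are served from a backend bucket in front of the prod GCS bucket.
# Project and resource names are illustrative assumptions.
name: artifacts-k8s-io
defaultService: https://www.googleapis.com/compute/v1/projects/example-project/global/backendBuckets/k8s-artifacts
hostRules:
  - hosts: ["artifacts.k8s.io"]
    pathMatcher: artifacts
pathMatchers:
  - name: artifacts
    defaultService: https://www.googleapis.com/compute/v1/projects/example-project/global/backendBuckets/k8s-artifacts
    pathRules:
      - paths: ["/binaries/*"]
        service: https://www.googleapis.com/compute/v1/projects/example-project/global/backendBuckets/k8s-artifacts
```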