## Summary

This document describes how official artifacts (Container Images, Binaries) for
the Kubernetes project are managed and distributed.

## Motivation

The motivation for this KEP is to describe a process by which artifacts
(container images, binaries) can be distributed by the community. Currently,
the process by which images are distributed is both ad hoc in nature and
limited to an arbitrary set of people who have the keys to the relevant
repositories. Standardized access will ensure that people around the world have
access to the same artifacts by the same names and that anyone in the project
is able (if given the right authority) to distribute images.

### Goals

The goals of this process are to enable:

- Anyone in the community (with the right permissions) to manage the
  distribution of Kubernetes images and binaries
- Fast, cost-efficient access to artifacts around the world through appropriate
  mirrors and distribution

This KEP will have succeeded when artifacts are all managed in the same manner
and anyone in the community (with the right permissions) can manage these
artifacts.

### Non-Goals

The actual process and tooling for promoting images, building packages, or
otherwise assembling artifacts is beyond the scope of this KEP. This KEP deals
with the infrastructure for serving these things via HTTP as well as a generic
description of how promotion will be accomplished.

## Proposal

The top-level design will be to set up a global redirector HTTP service
(`artifacts.k8s.io`) which knows how to serve HTTP and redirect requests to an
appropriate mirror. This redirector will serve both binary and container image
downloads. For container images, the HTTP redirector will redirect users to the
appropriate geo-located container registry. For binary artifacts, the HTTP
redirector will redirect to appropriate geo-located storage buckets.

To facilitate artifact promotion, each project, as necessary, will be given
access to a project staging area relevant to its particular artifacts (either a
storage bucket or an image registry). Each project is free to manage its assets
in the staging area however it sees fit. However, end users are not expected to
access artifacts through the staging area.

For each artifact, there will be a configuration file checked into this
repository. When a project wants to promote an image, they will file a PR in
this repository to update their image promotion configuration to promote an
artifact from staging to production. Once this PR is approved, automation
running in the k8s project infrastructure (built using our
[artifact promotion tooling][promo-tools]) will pick up this new configuration
file and copy the relevant bits out to the production serving locations.

Importantly, if a project needs to roll back or remove an artifact, the same
process will occur, so the promotion tool must be capable of deleting images
and artifacts as well as promoting them.
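
To make the copy step concrete, here is a minimal sketch of what that
automation might do for a single binary artifact, assuming the staging and prod
GCS buckets described under Milestone 0 below and using the GCS Go client. The
`promotion` struct, its fields, and the example bucket and object names are
hypothetical illustrations, not the real promotion tooling or configuration
format:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
)

// promotion describes one artifact to promote from a project's staging bucket
// to the production bucket, or to remove from production on a rollback.
// The struct, its fields, and the example values below are hypothetical.
type promotion struct {
	StagingBucket string // e.g. "k8s-artifacts-staging-kops"
	ProdBucket    string // e.g. "k8s-artifacts"
	ObjectPath    string // e.g. "kops/1.18.0/linux/amd64/kops"
	Delete        bool   // true when rolling back / removing an artifact
}

// apply makes production match the checked-in promotion configuration for a
// single artifact: copy it from staging, or delete it on rollback.
func apply(ctx context.Context, client *storage.Client, p promotion) error {
	dst := client.Bucket(p.ProdBucket).Object(p.ObjectPath)
	if p.Delete {
		return dst.Delete(ctx)
	}
	src := client.Bucket(p.StagingBucket).Object(p.ObjectPath)
	if _, err := dst.CopierFrom(src).Run(ctx); err != nil {
		return fmt.Errorf("promoting %s: %w", p.ObjectPath, err)
	}
	return nil
}

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// In the real flow this entry would come from the configuration file merged
	// via the approved PR, not be hard-coded here.
	if err := apply(ctx, client, promotion{
		StagingBucket: "k8s-artifacts-staging-kops",
		ProdBucket:    "k8s-artifacts",
		ObjectPath:    "kops/1.18.0/linux/amd64/kops",
	}); err != nil {
		log.Fatal(err)
	}
}
```
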
### HTTP Redirector Design

To facilitate world-wide distribution of artifacts from a single (virtual)
location, we will ideally run a replicated redirector service in the United
States, Europe, and Asia.

Each of these redirector services will be deployed in a Kubernetes cluster and
exposed via a public IP address and a DNS record indicating its location
(e.g. `europe.artifacts.k8s.io`).

We will use Geo DNS to route requests to `artifacts.k8s.io` to the correct
redirector. This is necessary to ensure that we always route to a server which
is accessible no matter what region we are in. We will need to extend or
enhance the existing DNS synchronization tooling to handle creation of the
GeoDNS records.

#### Configuring the HTTP Redirector

The HTTP Redirector service will be driven by a YAML configuration that
specifies a path-to-mirror mapping. For now, the redirector will serve content
based on continent, for example:

```yaml
/kops
  - europe: europe.artifacts.k8s.io
  - asia: asia.artifacts.k8s.io
  - default: americas.artifacts.k8s.io
```

The redirector will use this data to redirect a request to the relevant mirror
using HTTP 302 responses. The implementation of the mirrors themselves is a
detail left to the service implementor and may differ depending on the
artifacts being exposed (binaries vs. container images).
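
To illustrate how such a configuration could drive those 302 responses, here is
a minimal sketch of a redirector handler in Go. It is not the actual
implementation: it assumes the YAML above has already been parsed into a map
and that each regional replica is told at deploy time which continent it
serves; the `mirrors` type and `redirectHandler` function are illustrative
names only:

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

// mirrors maps a path prefix (e.g. "/kops") to per-continent mirror hosts,
// mirroring the YAML configuration above; the structure is illustrative only.
type mirrors map[string]map[string]string

// redirectHandler issues an HTTP 302 to the geo-located mirror for the
// continent this replica serves, falling back to the "default" entry.
func redirectHandler(cfg mirrors, continent string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for prefix, byContinent := range cfg {
			if !strings.HasPrefix(r.URL.Path, prefix) {
				continue
			}
			host, ok := byContinent[continent]
			if !ok {
				host = byContinent["default"]
			}
			// Preserve the requested path when redirecting to the mirror.
			http.Redirect(w, r, "https://"+host+r.URL.Path, http.StatusFound)
			return
		}
		http.NotFound(w, r)
	})
}

func main() {
	cfg := mirrors{
		"/kops": {
			"europe":  "europe.artifacts.k8s.io",
			"asia":    "asia.artifacts.k8s.io",
			"default": "americas.artifacts.k8s.io",
		},
	}
	// Each regional replica would be deployed knowing which continent it serves.
	log.Fatal(http.ListenAndServe(":8080", redirectHandler(cfg, "europe")))
}
```
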
## Graduation Criteria

This KEP will graduate when the process is implemented and has been
successfully used to manage the images for a Kubernetes release.

## Implementation History

### Milestone 0 (MVP): In progress

(Described in terms of kOps, our first candidate; other candidates welcome!)

- k8s-infra creates a "staging" GCS bucket for each project (e.g.,
  `k8s-artifacts-staging-<project>`) and a "prod" GCS bucket for promoted
  artifacts (e.g., `k8s-artifacts`, one bucket for all projects)
- We grant write access to the staging GCS bucket to trusted jobs/people in
  each project (e.g., kOps OWNERS and prow jobs can push to
  `k8s-artifacts-staging-kops`). We can encourage use of CI & reproducible
  builds, but we do not block on it.
- We grant write access to the prod bucket only to the infra-admins & the
  promoter process
- Promotion of artifacts to the "prod" GCS bucket is via a script/utility (as
  we do today). For v1, we can promote based on a sha256sum file (only copy the
  files listed), similarly to the image promoter; see the sketch after this
  list. We will experiment to develop that script/utility in this milestone,
  along with prow jobs (?) to publish to the staging buckets, and to figure out
  how best to run the promoter. Hopefully, we can copy the image-promotion work
  closely.
- We create a bucket-backed GCLB for serving, with a single url-map entry for
  `binaries/` pointing to the prod bucket. (The URL prefix gives us some
  flexibility to e.g. add dynamic content later.)
- We create the artifacts.k8s.io DNS name pointing to the GCLB (unclear whether
  we want one for staging, or just encourage pulling from GCS directly)
- Projects start using the mirrors, e.g., kOps adds the
  https://artifacts.k8s.io/binaries/kops mirror into the (upcoming) mirror-list
  support, so that it will get real traffic but not break kOps should this
  infrastructure break
- We start to collect data from the GCLB logs
- Questions we would like to understand:
  - What are the costs, and what would the costs be for localized mirrors?
  - What is the performance impact (latency, throughput) of serving everything
    from GCLB?
  - Is GCLB reachable from everywhere (including China)?
  - Can we support private (non-coordinated) mirrors?
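
As a rough sketch of the sha256sum-based promotion mentioned in the list above,
the promoter could parse the manifest and copy only the files it lists,
verifying digests after the copy. The file name `sha256sums.txt` and the
overall flow are assumptions for illustration; the actual script/utility is
still to be developed in this milestone:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseSHA256Sums reads a sha256sum manifest (as produced by `sha256sum`) and
// returns a map from file name to expected hex digest. Only the files listed
// here would be copied from the staging bucket to the prod bucket.
func parseSHA256Sums(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	files := map[string]string{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}
		// Each line is "<64 hex chars>  <filename>"; a leading '*' on the
		// filename (binary mode) is stripped.
		fields := strings.Fields(line)
		if len(fields) != 2 || len(fields[0]) != 64 {
			return nil, fmt.Errorf("malformed sha256sum line: %q", line)
		}
		files[strings.TrimPrefix(fields[1], "*")] = fields[0]
	}
	return files, scanner.Err()
}

func main() {
	files, err := parseSHA256Sums("sha256sums.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for name, digest := range files {
		// A real promoter would copy each listed file from staging to prod and
		// verify that the promoted object's checksum matches digest.
		fmt.Printf("would promote %s (sha256 %s)\n", name, digest)
	}
}
```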