
Commit d3e3a4a

Author: Timo Reimann
Merge pull request #350 from digitalocean/release-v0.1.28
Prepare release v0.1.28
2 parents: e137ad9 + 861213f

File tree

6 files changed: +189 −23 lines


CHANGELOG.md

Lines changed: 3 additions & 0 deletions
```diff
@@ -1,6 +1,9 @@
 # CHANGELOG
 
+## v0.1.28 (beta) - October 15th 2020
+
 * Fix firewall cache usage (@timoreimann)
+* Create context after retrieving item from worker queue (@MorrisLaw)
 * Fix logging and update Kubernetes dependencies to 1.19.2 (@timoreimann)
 * Expose health check failures (@timoreimann)
 
```

README.md

Lines changed: 27 additions & 20 deletions
````diff
@@ -7,7 +7,7 @@
 ## Releases
 
 Cloud Controller Manager follows [semantic versioning](https://semver.org/).
-The current version is **`v0.1.27`**. This means that the project is still
+The current version is **`v0.1.28`**. This means that the project is still
 under active development and may not be production-ready. The plugin will be
 bumped to **`v1.0.0`** once the [DigitalOcean Kubernetes
 product](https://www.digitalocean.com/products/kubernetes/) is released and
@@ -90,43 +90,50 @@ Please note that if you use a Kubernetes cluster created on DigitalOcean, there
 will be a cloud controller manager running in the cluster already, so your local
 one will compete for API access with it.
 
-### Run Locally (optional features)
+### Optional features
 
 #### Add Public Access Firewall
 
-If you want to add an additional firewall, that allows public access to your
-cluster, you can run a command like this:
+You can have `digitalocean-cloud-controller-manager` manage a DigitalOcean Firewall
+that will dynamically adjust rules for accessing NodePorts: once a Service of type
+`NodePort` is created, the firewall controller will update the firewall to allow
+public access to just that NodePort. Likewise, access is automatically retracted
+if the Service gets deleted or changed to a different type.
+
+Example invocation:
 
 ```bash
 cd cloud-controller-manager/cmd/digitalocean-cloud-controller-manager
-FAKE_REGION=fra1 DO_ACCESS_TOKEN=your_access_token \
+DO_ACCESS_TOKEN=<your_access_token> \
 PUBLIC_ACCESS_FIREWALL_NAME=firewall_name \
-PUBLIC_ACCESS_FIREWALL_TAGS=k8s,k8s:<cluster-uuid>,k8s:worker \
-go run main.go \
+PUBLIC_ACCESS_FIREWALL_TAGS=worker-droplet \
+digitalocean-cloud-controller-manager \
 --kubeconfig <path to your kubeconfig file> \
 --leader-elect=false --v=5 --cloud-provider=digitalocean
 ```
 
-The `PUBLIC_ACCESS_FIREWALL_NAME` environment variable allows you to pass in
-the name of the firewall you plan to use in addition to the already existing
-DOKS managed firewall. It is called public access because you can
-allow access to ports in the NodePort range, whereas this isn't possible with
-the default DOKS managed firewall. Not passing this in will cause your cluster
-to resort to the default behavior of denying all access to ports in the
-NodePort range.
+The `PUBLIC_ACCESS_FIREWALL_NAME` environment variable defines the name of the
+firewall. The firewall is created if no firewall by that name is found.
 
 The `PUBLIC_ACCESS_FIREWALL_TAGS` environment variable refers to the tags
-associated with the public access firewall you provide.
+associated with the droplets that the firewall should apply to. Usually, this
+is a tag attached to the worker node droplets. Multiple tags are applied in
+a logical OR fashion.
+
+No firewall is managed if the environment variables are missing or left
+empty. Once the firewall is created, no public access other than to the NodePorts
+is allowed. Users should create additional firewalls to further extend access.
 
 #### Expose Prometheus Metrics
 
-If you are interested in exposing prometheus metrics, you can pass in a metrics
+If you are interested in exposing Prometheus metrics, you can pass in a metrics
 endpoint that will expose them. The command will look similar to this:
 
 ```bash
 cd cloud-controller-manager/cmd/digitalocean-cloud-controller-manager
-FAKE_REGION=fra1 DO_ACCESS_TOKEN=your_access_token \
-METRICS_ADDR=<host>:<port> go run main.go \
+DO_ACCESS_TOKEN=your_access_token \
+METRICS_ADDR=<host>:<port> \
+digitalocean-cloud-controller-manager \
 --kubeconfig <path to your kubeconfig file> \
 --leader-elect=false --v=5 --cloud-provider=digitalocean
 ```
@@ -135,8 +142,8 @@ The `METRICS_ADDR` environment variable takes a valid endpoint that you'd
 like to use to serve your Prometheus metrics. To be valid it should be in the
 form `<host>:<port>`.
 
-After you have started up CCM, run the following curl command to view the
-Prometheus metrics output:
+After you have started up `digitalocean-cloud-controller-manager`, run the
+following curl command to view the Prometheus metrics output:
 
 ```bash
 curl <host>:<port>/metrics
````
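
The firewall behavior described in the README change above is driven by ordinary `NodePort` Services. A minimal sketch of a Service that would trigger it (the names and port values are illustrative, not taken from this commit):

```yaml
# Hypothetical Service: once applied, the firewall controller would open the
# allocated NodePort on the managed firewall, and close it again if the
# Service is deleted or changed to a different type.
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport   # illustrative name
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080      # must fall within the cluster's NodePort range
```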
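
The `curl` output above is in the standard Prometheus text exposition format. A rough sketch of reading such output in Python (the sample metric names below are made up for illustration, not actual `digitalocean-cloud-controller-manager` metrics):

```python
# Minimal parser for the Prometheus text exposition format, as served at
# <host>:<port>/metrics. Ignores timestamps and other edge cases; the
# sample input is hypothetical.
def parse_metrics(text):
    """Return a dict mapping 'name{labels}' to its float value."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        # The value is the token after the last space on the line.
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP build_info Build information.
# TYPE build_info gauge
build_info{version="v0.1.28"} 1
cloud_provider_api_requests_total 42
"""

parsed = parse_metrics(sample)
print(parsed["cloud_provider_api_requests_total"])  # 42.0
```

In practice you would feed it the body of an HTTP GET against the `METRICS_ADDR` endpoint rather than a hard-coded string.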

VERSION

Lines changed: 1 addition & 1 deletion
```diff
@@ -1 +1 @@
-v0.1.27
+v0.1.28
```

docs/example-manifests/cloud-controller-manager.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -47,7 +47,7 @@ spec:
         operator: Exists
         tolerationSeconds: 300
       containers:
-      - image: digitalocean/digitalocean-cloud-controller-manager:v0.1.27
+      - image: digitalocean/digitalocean-cloud-controller-manager:v0.1.28
         name: digitalocean-cloud-controller-manager
         command:
           - "/bin/digitalocean-cloud-controller-manager"
```

docs/getting-started.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -163,7 +163,7 @@ digitalocean   Opaque   1   18h
 Currently we only support alpha release of the `digitalocean-cloud-controller-manager` due to its active development. Run the first alpha release like so
 
 ```bash
-kubectl apply -f releases/v0.1.27.yml
+kubectl apply -f releases/v0.1.28.yml
 deployment "digitalocean-cloud-controller-manager" created
 ```
 
````

releases/v0.1.28.yml

Lines changed: 156 additions & 0 deletions
New file:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: digitalocean-cloud-controller-manager
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: digitalocean-cloud-controller-manager
  template:
    metadata:
      labels:
        app: digitalocean-cloud-controller-manager
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      dnsPolicy: Default
      hostNetwork: true
      serviceAccountName: cloud-controller-manager
      tolerations:
        # This taint is set by all kubelets running `--cloud-provider=external`,
        # so we should tolerate it to schedule the digitalocean ccm.
        - key: "node.cloudprovider.kubernetes.io/uninitialized"
          value: "true"
          effect: "NoSchedule"
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        # The cloud controller manager should be able to run on masters.
        - key: "node-role.kubernetes.io/master"
          effect: NoSchedule
      containers:
      - image: digitalocean/digitalocean-cloud-controller-manager:v0.1.28
        name: digitalocean-cloud-controller-manager
        command:
          - "/bin/digitalocean-cloud-controller-manager"
          - "--leader-elect=false"
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        env:
          - name: DO_ACCESS_TOKEN
            valueFrom:
              secretKeyRef:
                name: digitalocean
                key: access-token

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: system:cloud-controller-manager
rules:
# The following is necessary to support leader election when running DO-CCM on
# a multi-master cluster. We leave it commented out on the assumption that the
# typical user is not running a multi-master cluster and therefore does not
# need the additional permissions.
#
# - apiGroups:
#   - coordination.k8s.io
#   resources:
#   - leases
#   verbs:
#   - get
#   - watch
#   - list
#   - create
#   - update
#   - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - services/status
  verbs:
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - create
  - get
  - list
  - watch
  - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
- kind: ServiceAccount
  name: cloud-controller-manager
  namespace: kube-system
```
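
The Deployment in this manifest reads the DigitalOcean API token from a Secret named `digitalocean` with key `access-token`. A sketch of creating that Secret beforehand (the token value is a placeholder):

```bash
kubectl -n kube-system create secret generic digitalocean \
  --from-literal=access-token=<your_access_token>
```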
