
Commit 149cf9a

Author: Charly Fontaine
Merge pull request #1 from CharlyF/201-monitoring
Updating 201 monitoring

2 parents 414f9cd + 9d44020

File tree

3 files changed: 67 additions, 6 deletions

02-path-working-with-clusters/201-cluster-monitoring/readme.adoc

Lines changed: 24 additions & 2 deletions
@@ -258,7 +258,17 @@ Prometheus is now scraping metrics from the different scraping targets and we fo
 $ kubectl port-forward $(kubectl get po -l prometheus=prometheus -n monitoring -o jsonpath={.items[0].metadata.name}) 9090 -n monitoring
 Forwarding from 127.0.0.1:9090 -> 9090
 
-Now open the browser at http://localhost:9090/targets and all targets should be shown as `UP` (it might take a couple of minutes until data collectors are up and running for the first time). The browser displays the output as shown:
+Now open the browser at http://localhost:9090/targets.
+
+If you are running this in the Cloud9 IDE, you will need to run the following to be able to visualize your dashboard:
+
+$ kubectl port-forward $(kubectl get po -l prometheus=prometheus -n monitoring -o jsonpath={.items[0].metadata.name}) 8080:9090 -n monitoring
+Forwarding from 127.0.0.1:8080 -> 9090
+Forwarding from [::1]:8080 -> 9090
+
+The dashboard will be available at https://<ENV_ID>.vfs.cloud9.<REGION_ID>.amazonaws.com/targets.
+
+All targets should be shown as `UP` (it might take a couple of minutes until data collectors are up and running for the first time). The browser displays the output as shown:
 
 image::monitoring-grafana-prometheus-dashboard-1.png[]
 image::monitoring-grafana-prometheus-dashboard-2.png[]
@@ -287,7 +297,17 @@ Lets forward the grafana dashboard to a local port:
 $ kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath={.items[0].metadata.name} -n monitoring) 3000 -n monitoring
 Forwarding from 127.0.0.1:3000 -> 3000
 
-Grafana dashboard is now accessible at http://localhost:3000/. The complete list of dashboards is available using the search button at the top:
+Grafana dashboard is now accessible at http://localhost:3000/.
+
+If you are running this in the Cloud9 IDE, you will need to run the following to be able to visualize your dashboard:
+
+$ kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath={.items[0].metadata.name} -n monitoring) 8080:3000 -n monitoring
+Forwarding from 127.0.0.1:8080 -> 3000
+Forwarding from [::1]:8080 -> 3000
+
+The dashboard will be available at https://<ENV_ID>.vfs.cloud9.<REGION_ID>.amazonaws.com/.
+
+The complete list of dashboards is available using the search button at the top:
 
 
 image::monitoring-grafana-prometheus-dashboard-dashboard-home.png[]
@@ -316,6 +336,8 @@ Convenient link for other dashboards are listed below:
 * http://localhost:3000/dashboard/db/kubernetes-resource-requests?orgId=1
 * http://localhost:3000/dashboard/db/pods?orgId=1
 
+(For Cloud9 users, just replace `http://localhost:3000/` by `https://<ENV_ID>.vfs.cloud9.<REGION_ID>.amazonaws.com/`)
+
 === Cleanup
 
 Remove all the installed components:
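Editor's note: the change above maps local port 8080 rather than the service port because Cloud9's application preview only proxies ports 8080-8082. The `<ENV_ID>` and `<REGION_ID>` placeholders are filled in from your Cloud9 environment; the sketch below assembles the preview URL with illustrative values only (not part of the commit):

```shell
# Illustrative only: assemble the Cloud9 preview URL used in the readme.
# ENV_ID and REGION_ID are placeholders; real values come from your own
# Cloud9 environment, not from this commit.
ENV_ID="example1234"
REGION_ID="us-east-1"
URL="https://${ENV_ID}.vfs.cloud9.${REGION_ID}.amazonaws.com/targets"
echo "$URL"
```

With the sample values this prints `https://example1234.vfs.cloud9.us-east-1.amazonaws.com/targets`, matching the pattern in the readme.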

02-path-working-with-clusters/201-cluster-monitoring/templates/prometheus/prometheus-bundle.yaml

Lines changed: 1 addition & 1 deletion
@@ -97,7 +97,7 @@ spec:
       - args:
         - --kubelet-service=kube-system/kubelet
         - --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
-        image: quay.io/coreos/prometheus-operator:v0.14.1
+        image: quay.io/coreos/prometheus-operator:v0.21.0
         name: prometheus-operator
         ports:
         - containerPort: 8080
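Editor's note: this file's only change is pinning a newer operator image tag. A quick way to confirm which tag a manifest pins is to grep for it; the snippet below runs the grep against an inline copy of the changed line (running it against the real prometheus-bundle.yaml works the same way):

```shell
# Extract the pinned prometheus-operator image tag from a manifest line.
# The heredoc stands in for the real prometheus-bundle.yaml file.
cat <<'EOF' | grep -o 'prometheus-operator:v[0-9.]*'
        image: quay.io/coreos/prometheus-operator:v0.21.0
EOF
# -> prometheus-operator:v0.21.0
```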

02-path-working-with-clusters/201-cluster-monitoring/templates/prometheus/prometheus.yaml

Lines changed: 42 additions & 3 deletions
@@ -160,7 +160,7 @@ spec:
       serviceAccountName: kube-state-metrics
       containers:
       - name: kube-state-metrics
-        image: quay.io/coreos/kube-state-metrics:v1.0.1
+        image: quay.io/coreos/kube-state-metrics:v1.3.1
         ports:
         - name: metrics
           containerPort: 8080
@@ -171,7 +171,7 @@ spec:
           initialDelaySeconds: 5
           timeoutSeconds: 5
       - name: addon-resizer
-        image: k8s.gcr.io/addon-resizer:1.0
+        image: k8s.gcr.io/addon-resizer:1.7
         resources:
           limits:
             cpu: 100m
@@ -225,7 +225,7 @@ metadata:
 spec:
   replicas: 2
   version: v2.0.0-rc.1
-  serviceAccountName: prometheus-operator
+  serviceAccountName: prometheus
   serviceMonitorSelector:
     matchExpressions:
     - {key: k8s-app, operator: Exists}
@@ -246,6 +246,45 @@ spec:
     name: alertmanager-main
     port: web
 ---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRole
+metadata:
+  name: prometheus
+  namespace: monitoring
+rules:
+- apiGroups: [""]
+  resources:
+  - nodes
+  - services
+  - endpoints
+  - pods
+  verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+  resources:
+  - configmaps
+  verbs: ["get"]
+- nonResourceURLs: ["/metrics"]
+  verbs: ["get"]
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: prometheus
+  namespace: monitoring
+---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
+metadata:
+  name: prometheus
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: prometheus
+subjects:
+- kind: ServiceAccount
+  name: prometheus
+  namespace: monitoring
+---
 apiVersion: monitoring.coreos.com/v1
 kind: ServiceMonitor
 metadata:
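Editor's note: the bulk of this change gives Prometheus its own ServiceAccount with a dedicated ClusterRole instead of reusing the operator's account. After applying the bundle to a cluster, the granted permissions can be spot-checked by impersonating the ServiceAccount with `kubectl auth can-i` (a sketch, assuming a live cluster with the manifests above applied):

```shell
# Sketch: verify the new ServiceAccount's effective RBAC permissions.
# Assumes a cluster where the ClusterRole/ClusterRoleBinding above exist.
SA=system:serviceaccount:monitoring:prometheus
kubectl auth can-i list pods --as="$SA"        # granted by the ClusterRole: yes
kubectl auth can-i get configmaps --as="$SA"   # granted: yes
kubectl auth can-i get /metrics --as="$SA"     # non-resource URL rule: yes
kubectl auth can-i delete pods --as="$SA"      # not in the rules: no
```

Each command prints `yes` or `no`, so a mismatch with the comments points directly at a missing or mistyped rule.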
