Commit c7746fc

Update Blog “kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise”
1 parent e57358d commit c7746fc

File tree

1 file changed: +22 −125 lines


content/blog/kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise.md

Lines changed: 22 additions & 125 deletions
@@ -169,123 +169,12 @@ With above main.tf file, the working directory can be initialized by running the
 commands will detect it and remind you to do so if necessary.
 ```
 
-#### Initialize working directoryDeploy
+#### Deploy Prometheus and Grafana
 
-Apply the Terraform configuration and deploy Prometheus and Grafana to the K8s cluster by responingd _yes_ at the prompt to confirm the operation.
+Apply the Terraform configuration and deploy Prometheus and Grafana to the K8s cluster by responding _yes_ at the prompt to confirm the operation. You may first do a dry run, by running *terraform plan*, to preview the changes Terraform will make to your infrastructure based on the data you provide in your Terraform files.
 
 ```markdown
-$ terraform plan --var-file=../tfvar_files/variables.tfvars
-
-Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-  + create
-
-Terraform will perform the following actions:
-
-  # helm_release.grafana-dashboard will be created
-  + resource "helm_release" "grafana-dashboard" {
-      + atomic = false
-      + chart = "grafana"
-      + cleanup_on_fail = false
-      + create_namespace = true
-      + dependency_update = false
-      + disable_crd_hooks = false
-      + disable_openapi_validation = false
-      + disable_webhooks = false
-      + force_update = false
-      + id = (known after apply)
-      + lint = false
-      + manifest = (known after apply)
-      + max_history = 0
-      + metadata = (known after apply)
-      + name = "grafana-dashboard"
-      + namespace = "monitoring"
-      + pass_credentials = false
-      + recreate_pods = false
-      + render_subchart_notes = true
-      + replace = false
-      + repository = "https://grafana.github.io/helm-charts"
-      + reset_values = false
-      + reuse_values = false
-      + skip_crds = false
-      + status = "deployed"
-      + timeout = 300
-      + verify = false
-      + version = "6.57.4"
-      + wait = true
-      + wait_for_jobs = false
-
-      + set {
-          + name = "persistence.enabled"
-          + value = "true"
-        }
-      + set {
-          + name = "service.type"
-          + value = "NodePort"
-        }
-    }
-
-  # helm_release.prometheus-stack will be created
-  + resource "helm_release" "prometheus-stack" {
-      + atomic = false
-      + chart = "prometheus"
-      + cleanup_on_fail = false
-      + create_namespace = true
-      + dependency_update = false
-      + disable_crd_hooks = false
-      + disable_openapi_validation = false
-      + disable_webhooks = false
-      + force_update = false
-      + id = (known after apply)
-      + lint = false
-      + manifest = (known after apply)
-      + max_history = 0
-      + metadata = (known after apply)
-      + name = "prometheus-stack"
-      + namespace = "monitoring"
-      + pass_credentials = false
-      + recreate_pods = false
-      + render_subchart_notes = true
-      + replace = false
-      + repository = "https://prometheus-community.github.io/helm-charts"
-      + reset_values = false
-      + reuse_values = false
-      + skip_crds = false
-      + status = "deployed"
-      + timeout = 300
-      + verify = false
-      + version = "23.0.0"
-      + wait = true
-      + wait_for_jobs = false
-
-      + set {
-          + name = "prometheus-node-exporter.hostNetwork"
-          + value = "false"
-        }
-      + set {
-          + name = "prometheus-node-exporter.hostPID"
-          + value = "false"
-        }
-      + set {
-          + name = "prometheus-node-exporter.hostRootFsMount.enabled"
-          + value = "false"
-        }
-      + set {
-          + name = "server.service.type"
-          + value = "NodePort"
-        }
-    }
-
-Plan: 2 to add, 0 to change, 0 to destroy.
-
-───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
-
-Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
-```
-
-* terraform apply
-
-```markdown
-$ terraform apply --var-file=../tfvar_files/variables_cmc.tfvars
+$ terraform apply --var-file=variables.tfvars
 
 Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
   + create
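For context, the plan output above implies a `main.tf` along these lines. This is a sketch reconstructed from the plan: only the chart, repository, version, namespace, and `set` values are taken from the output above; everything else (argument selection, formatting) is an assumption.

```hcl
# Sketch of the helm_release resources implied by the plan output (not the author's exact file).
resource "helm_release" "prometheus-stack" {
  name             = "prometheus-stack"
  namespace        = "monitoring"
  create_namespace = true
  repository       = "https://prometheus-community.github.io/helm-charts"
  chart            = "prometheus"
  version          = "23.0.0"

  set {
    name  = "prometheus-node-exporter.hostNetwork"
    value = "false"
  }
  set {
    name  = "server.service.type"
    value = "NodePort"
  }
}

resource "helm_release" "grafana-dashboard" {
  name             = "grafana-dashboard"
  namespace        = "monitoring"
  create_namespace = true
  repository       = "https://grafana.github.io/helm-charts"
  chart            = "grafana"
  version          = "6.57.4"

  set {
    name  = "persistence.enabled"
    value = "true"
  }
  set {
    name  = "service.type"
    value = "NodePort"
  }
}
```

The `NodePort` service type is what lets the HPE GreenLake gateway map the services to externally reachable ports, as shown later in the post.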
@@ -412,11 +301,11 @@ Apply the Terraform configuration and deploy Prometheus and Grafana to the K8s c
 
 Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
 ```
-#### Prometheus and Grafana access
+#### Check Prometheus and Grafana
 
-After few minutes Terraform run, both Prometheus and Grafana get deployed in the K8s cluster, to its _monitoring_ namespace.
+A few minutes after the Terraform run completes, both Prometheus and Grafana are deployed in the K8s cluster, in its _monitoring_ namespace.
 
-Type the following command to check all the deployed monitoring resources. They should be all in _Running_ and _Ready_ states.
+Run the following command to check the deployed monitoring resources. They should all be in the _Running_ and _Ready_ states.
 
 ```markdown
 $ kubectl get all -n monitoring
@@ -457,7 +346,7 @@ NAME READY AGE
 statefulset.apps/prometheus-stack-alertmanager 1/1 4d17h
 ```
 
-Type _helm list_ command, both Prometheus and Grafana helm charts are deployed to the _monitoring_ namespace:
+Running the _helm list_ command shows that both the Prometheus and Grafana Helm charts are deployed to the _monitoring_ namespace:
 
 ```markdown
 $ helm list -n monitoring
@@ -466,35 +355,43 @@ grafana-dashboard monitoring 1 2023-11-22 15:28:07.986364628 +0100 CET de
 prometheus-stack monitoring 1 2023-11-22 15:28:13.290386574 +0100 CET deployed prometheus-23.0.0 v2.45.0
 ```
 
+### Set up Prometheus and Grafana for K8s monitoring
+
+#### Access Prometheus
+
+The Prometheus application can be accessed by pointing a browser at the URL *gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015*, which can be extracted with the following command:
+
 ```markdown
 $ kubectl get service/prometheus-stack-server -n monitoring -o jsonpath='{.metadata.annotations.hpecp-internal-gateway/80}'
 gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015
 ```
 
 ![](/img/prometheus.png)
 
+#### Access Grafana dashboard
+
+The Grafana dashboard can be accessed by pointing a browser at the URL *gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016*. The URL and the admin password can be extracted with the following commands:
+
 ```markdown
 $ kubectl get service/grafana-dashboard -n monitoring -o jsonpath='{.metadata.annotations.hpecp-internal-gateway/80}'
 gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016
-```
 
-The Grafana dashboard can be accessed by typing the URL *gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016*. Its admin password can be extracted by the following command
-
-```markdown
 $ kubectl get secrets -n monitoring grafana-dashboard -o jsonpath='{.data.admin-password}' | base64 -d
 cs3O6LF2H9m0jLrgdR8UXplmZG22d9Co9WbnJNzx
 ```
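The decoding step in the command above is plain base64: the Grafana chart stores the admin password in a Kubernetes Secret, and Secret `data` fields are base64-encoded. A minimal sketch of just the decoding, using a made-up value rather than a real password:

```shell
# Kubernetes Secret data is base64-encoded; pipe it through base64 -d to decode.
# 'c3VwZXItc2VjcmV0' is a made-up example value, not a real credential.
encoded='c3VwZXItc2VjcmV0'
echo "$encoded" | base64 -d   # prints: super-secret
```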
 
 ![](/img/grafana.png)
 
-The Prometheus can be configured as the data sources to the Grafana dashboard, by specifying the HTTP URL as *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/*.
+#### Configure Grafana dashboard
+
+Prometheus can be configured as a data source from the Grafana dashboard by specifying the HTTP URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/*:
 
 ![](/img/grafana-datasources.png)
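As an alternative to clicking through the UI, Grafana can also pick up the same data source declaratively through its provisioning mechanism. A sketch of such a provisioning file, using the gateway URL from the step above (the file name and placement are assumptions; the Grafana Helm chart exposes this under its `datasources` value):

```yaml
# datasources.yaml — Grafana data source provisioning (sketch)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/
    isDefault: true
```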
 
-From [Grafana Labs](https://grafana.com/grafana/dashboards/), there is a list of dashboard templates you can download and imported them as monitoring dashboards to the Grafana.
+[Grafana Labs](https://grafana.com/grafana/dashboards/) provides a list of Grafana dashboard templates that you can download and then import into Grafana as monitoring dashboards.
 
 ![](/img/grafana-dashboard-import.png)
 
-Here is the imported dashboard for K8s cluster monitoring (via Prometheus):
+Here is the imported dashboard for _K8s cluster monitoring (via Prometheus)_:
 
 ![](/img/grafana-cluster-monitoring.png)
