The following sections will show how to add a monitoring stack using Prometheus and Grafana.
### Deploy Prometheus and Grafana using Terraform
Prometheus and Grafana will be deployed to the K8s cluster using the [HPE GreenLake Terraform provider *hpegl*](https://registry.terraform.io/providers/HPE/hpegl/latest), together with the [Helm provider from HashiCorp](https://registry.terraform.io/providers/hashicorp/helm/latest).
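As a rough sketch (not taken from the original post; exact schemas and version constraints should be checked against the provider registries), the provider section of a *main.tf* combining the two providers might look like:

```hcl
# Sketch only: variable names below are hypothetical placeholders.
terraform {
  required_providers {
    hpegl = {
      source = "HPE/hpegl"
    }
    helm = {
      source = "hashicorp/helm"
    }
  }
}

# The Helm provider is pointed at the target K8s cluster through a kubeconfig.
provider "helm" {
  kubernetes {
    config_path = var.kubeconfig_path # hypothetical variable
  }
}
```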
With the above main.tf config file, the working directory can be initialized by running *terraform init*:

```markdown
$ terraform init
...
commands will detect it and remind you to do so if necessary.
```
#### Deploy Prometheus and Grafana
Apply the Terraform configuration and deploy Prometheus and Grafana to the K8s cluster by responding *yes* at the prompt to confirm the operation. You may first do a dry run, by running *terraform plan*, to preview the changes to your infrastructure based on the data you provide in your Terraform file.
```markdown
$ terraform apply --var-file=variables.tfvars
...
```
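The *variables.tfvars* file referenced above is not shown in this excerpt; it would typically carry the cluster- and environment-specific values consumed by the configuration, for example (all names and values below are hypothetical):

```hcl
# Hypothetical values: the actual variable names are defined in the
# variables.tf that accompanies the original post's configuration.
cluster_name = "my-k8s-cluster"
space_name   = "Default"
```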
#### Check Prometheus and Grafana
After a few minutes of the Terraform run, both Prometheus and Grafana will be deployed in the K8s cluster, to the namespace *monitoring*.
Type the following command to check the deployed monitoring resources. They should all be in the *Running* state.
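The check command itself is elided in this excerpt; assuming standard *kubectl* access to the cluster, a typical check looks like:

```shell
# List the monitoring resources; all pods should show the Running status.
kubectl get pods -n monitoring
```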
Typing the *helm list* command will show both the Prometheus and Grafana Helm charts and the versions deployed to the *monitoring* namespace in the cluster:
```markdown
$ helm list -n monitoring
...
prometheus-stack  monitoring  1  2023-11-22 15:28:13.290386574 +0100 CET  deployed
```
### Set up Prometheus and Grafana for K8s monitoring
#### Access Prometheus
Prometheus can be accessed by pointing the browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015*, which can be extracted by the following command:
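The extraction command is elided in this excerpt; with the kube-prometheus-stack chart, the Prometheus service and its node port can typically be read along these lines (the service name below is an assumption, not from the original post):

```shell
# Hypothetical service name; verify with `kubectl get svc -n monitoring`.
kubectl get svc -n monitoring prometheus-stack-kube-prom-prometheus \
  -o jsonpath='{.spec.ports[0].nodePort}'
```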
You can execute a query using some metrics, e.g., *node_procs_running*:

#### Access Grafana
Grafana can be accessed by pointing the browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016*. The URL and the admin password can be extracted by the following commands:
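Those commands are elided in this excerpt; with the kube-prometheus-stack chart, the Grafana admin password is typically stored in a K8s secret that can be decoded like this (the secret name below is an assumption; verify it in your cluster):

```shell
# Hypothetical secret name; list secrets with `kubectl get secrets -n monitoring`.
kubectl get secret -n monitoring prometheus-stack-grafana \
  -o jsonpath='{.data.admin-password}' | base64 --decode
```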
Prometheus can be configured as a data source from the Grafana Administration page, by specifying the HTTP URL as *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/*:
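As an alternative to the Administration page, Grafana also supports provisioning data sources from a YAML file; a minimal sketch for this Prometheus endpoint would be:

```yaml
# Grafana data source provisioning file,
# e.g. /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/
    isDefault: true
```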
From [Grafana Labs](https://grafana.com/grafana/dashboards/), there is a list of pre-built dashboards that can be imported into Grafana.

Here is the imported dashboard for *K8s cluster monitoring (via Prometheus)*:

Here is another imported dashboard, for *K8s pod metrics*. It shows overall cluster CPU / memory / filesystem usage, as well as statistics for individual pods, containers, systemd services, etc.
![](/img/k8s-pod-monitoring.png)
### Summary
This blog post described the detailed process of deploying and setting up Prometheus and Grafana as a monitoring stack in a K8s cluster in HPE GreenLake for Private Cloud Enterprise. Prometheus excels at collecting and storing time-series data, enabling developers to monitor various aspects of K8s, including metrics, performance, and health. Grafana complements Prometheus by providing intuitive dashboards and visualizations that yield meaningful insights into K8s performance and behavior. Deploying Prometheus and Grafana together in the K8s cluster adds a monitoring stack that empowers users to gain a deep understanding of the cluster's internal states and behaviors, helping them identify potential issues, optimize performance, and enhance overall reliability.
You can keep coming back to the [HPE Developer blog](https://developer.hpe.com/blog) to learn more about HPE GreenLake for Private Cloud Enterprise.