Commit f795026

Update Blog “kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise”
1 parent 79ab0ce commit f795026

File tree

1 file changed: +27 -19 lines changed


content/blog/kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise.md

Lines changed: 27 additions & 19 deletions
@@ -12,13 +12,15 @@ tags:
  - HPE GreenLake for Private Cloud Enterprise
  - HPE GreenLake for Private Cloud Enterprise Containers
---
- In this blog post, I will discuss Kubernetes (K8s) monitoring and show you how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise. By setting up Prometheus as the data source and importing different dashboard templates into Grafana, various resources and applications can be monitored in the K8s cluster.
+ In this blog post, I will discuss Kubernetes (K8s) monitoring and show you how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise. By setting up Prometheus as the data source and importing different dashboard templates into Grafana, various aspects of K8s, including metrics, performance, and health, can be monitored in the K8s cluster.

### Why monitor K8s

- [HPE GreenLake for Private Cloud Enterprise: Containers](https://www.hpe.com/us/en/greenlake/containers.html), one of the HPE GreenLake Cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a K8s cluster, view details about existing clusters, and launch the HPE GreenLake for Container service console. It provides an enterprise-grade container management service using open source K8s.
+ [HPE GreenLake for Private Cloud Enterprise: Containers](https://www.hpe.com/us/en/greenlake/containers.html), one of the HPE GreenLake Cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a K8s cluster, view details about existing clusters, and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.

- Though K8s dramatically simplifies application deployment in containers and across clouds, it adds a new set of complexities for managing, securing and troubleshooting applications. Container-based applications are dynamic and they are being designed using microservices, where the number of components is increased by an order of magnitude. To ensure K8s security, it requires self-configuration that is typically specified in code, whether (K8s) yaml manifests, Helm charts, or templating tools. Properly configuring for workloads, clusters, networks, and infrastructure is crucial for averting issues and limiting the impact if a breach occurs. Dynamic provisioning via Infrastructure as Code (IaC), automated configuration management and orchestration also add to monitoring and troubleshooting complexity. K8s monitoring is critical to managing application performance, service uptime and troubleshooting. Having a good monitoring tool is becoming essential for K8s monitoring. This blog post escribesd how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise.
+ Though K8s dramatically simplifies application deployment in containers and across clouds, it adds a new set of complexities for managing, securing, and troubleshooting applications. Container-based applications are dynamic and designed around microservices, which increases the number of components by an order of magnitude. Securing K8s requires configuration that is typically specified in code, whether K8s YAML manifests, Helm charts, or templating tools. Properly configuring workloads, clusters, networks, and infrastructure is crucial for averting issues and limiting the impact if a breach occurs. Dynamic provisioning via Infrastructure as Code (IaC), automated configuration management, and orchestration also add to monitoring and troubleshooting complexity. K8s monitoring is critical to managing application performance, service uptime, and troubleshooting, so a good monitoring tool is becoming essential.
+
+ This blog post describes how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise.

### Prerequisites

@@ -39,7 +41,7 @@ Before starting, make sure you meet the following requirements:

### Deploy Prometheus and Grafana using Terraform

- Both Prometheus and Gafana can be installed to the K8s cluster using the [HPE GreenLake Terraform provider *hpegl*](https://registry.terraform.io/providers/HPE/hpegl/latest), together with the [Helm provider from Hashicorp](https://registry.terraform.io/providers/hashicorp/helm/latest).
+ Prometheus and Grafana will be deployed to the K8s cluster using the [HPE GreenLake Terraform provider *hpegl*](https://registry.terraform.io/providers/HPE/hpegl/latest), together with the [Helm provider from HashiCorp](https://registry.terraform.io/providers/hashicorp/helm/latest).

#### Create Terraform config

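The top of the main.tf is elided from this diff. For orientation only, a minimal sketch of the provider declarations such a config would typically start with; this is an assumption based on the two provider registry pages linked above, not the post's actual file (version constraints omitted):

```hcl
terraform {
  required_providers {
    # HPE GreenLake provider, used to reach the target K8s cluster
    hpegl = {
      source = "HPE/hpegl"
    }
    # Helm provider, used to install the Prometheus and Grafana charts
    helm = {
      source = "hashicorp/helm"
    }
  }
}
```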
@@ -124,19 +126,20 @@ resource "helm_release" "grafana-dashboard" {
}
```

- There a few things I want to point out in above config file:
+ There are a few things worth noting in the above config file:

<style> li { font-size: 100%; line-height: 23px; max-width: none; } </style>

- * In Grafana, the persistence by default is disabled. In case Grafana pod gets terminated for some reason, you will lose all your data. In production deployment, such as HPE GreenLake for Containers, this needs to be enabled, by setting *persistence.enabled* as *true*, to prevent any data lose.
- * In Prometheus, the *DaemonSet* deployment of the node exporter is trying to mount the *hostPath* volume to the container root “/”, which violates against one deployed OPA (Open Policy Agent) policy to the K8s cluster for FS mount protections. Therefore, the DaemonSet deployment will never be ready, keep showing the warning events as *Warning FailedCreate daemonset-controller Error creating: admission webhook "soft-validate.hpecp.hpe.com" denied the request: Hostpath ("/") referenced in volume is not valid for this namespace because of FS Mount protections.*. You need disable the *hostRootFsMount*, together with *hostNetwork* and *hostPID*, to comply with the security policy in the cluster.
- * Both Prometheus and Grafana services are deployed as *NodePort* service types. Those services will be automatically mapped to the gateway host with assigned ports for easy access.
+ * In Grafana, persistence is disabled by default. If the Grafana pod gets terminated for some reason, you will lose all your data. In a production deployment, such as HPE GreenLake for Containers, persistence needs to be enabled, by setting *persistence.enabled* to *true*, to prevent any data loss.
+ * In Prometheus, the *DaemonSet* deployment of the node exporter tries to mount a *hostPath* volume at the container root “/”, which violates an OPA (Open Policy Agent) policy deployed to the K8s cluster for FS mount protection. The DaemonSet deployment will never become ready, repeatedly showing warning events such as *Warning FailedCreate daemonset-controller Error creating: admission webhook "soft-validate.hpecp.hpe.com" denied the request: Hostpath ("/") referenced in volume is not valid for this namespace because of FS Mount protections.* You need to disable *hostRootFsMount*, together with *hostNetwork* and *hostPID*, to comply with the security policy in the cluster.
+ * Both Prometheus and Grafana services are deployed as *NodePort* service types. Those services will be automatically mapped to the gateway host with automatically assigned ports for easy access and configuration.
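
For reference, a hypothetical sketch of how the settings called out above might be passed inside the corresponding *helm_release* blocks; the value keys are assumed from the community grafana and prometheus Helm charts, not copied from the original main.tf:

```hcl
# Inside the "grafana-dashboard" helm_release: keep data across pod restarts
set {
  name  = "persistence.enabled"
  value = "true"
}

# Inside the "prometheus-stack" helm_release: comply with the FS mount policy
set {
  name  = "prometheus-node-exporter.hostRootFsMount.enabled"
  value = "false"
}
set {
  name  = "prometheus-node-exporter.hostNetwork"
  value = "false"
}
set {
  name  = "prometheus-node-exporter.hostPID"
  value = "false"
}
```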

#### Initialize working directory

- With above main.tf file, the working directory can be initialized by running the following command:
+ With the above main.tf config file, the working directory can be initialized by running the following command:

- ````markdown
+ ```markdown
$ terraform init

Initializing the backend...
@@ -167,7 +170,7 @@ With above main.tf file, the working directory can be initialized by running the
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
- ```
+ ```

#### Deploy Prometheus and Grafana

@@ -300,12 +303,13 @@ Apply the Terraform configuration and deploy Prometheus and Grafana to the K8s c
helm_release.prometheus-stack: Creation complete after 1m18s [id=prometheus-stack]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
- ````
+ ```

#### Check Prometheus and Grafana

After a few minutes of Terraform run, both Prometheus and Grafana will be deployed in the K8s cluster to the namespace _monitoring_.

- Type the following command to check the deployed monitoring resources. They should be all in _Running_ and _Ready_ states.
+ Type the following command to check the deployed monitoring resources. They should all be in _Running_ states.

```markdown
$ kubectl get all -n monitoring
@@ -346,7 +350,7 @@ NAME READY AGE
statefulset.apps/prometheus-stack-alertmanager 1/1 4d17h
```

- Type _helm list_ command, it shows both Prometheus and Grafana helm charts are deployed to the _monitoring_ namespace:
+ Typing the _helm list_ command will show that both the Prometheus and Grafana helm charts are deployed to the _monitoring_ namespace:

```markdown
$ helm list -n monitoring
@@ -359,18 +363,20 @@ prometheus-stack monitoring 1 2023-11-22 15:28:13.290386574 +0100 CET de

#### Access Prometheus

- The Prometheus application can be accessed by pointing the browser to the URL *gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015*, extracted by the following command:
+ Prometheus can be accessed by pointing the browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015*, which can be extracted by the following command:

```markdown
$ kubectl get service/prometheus-stack-server -n monitoring -o jsonpath='{.metadata.annotations.hpecp-internal-gateway/80}'
gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015
```

+ You can execute a query using some metric, e.g., *kube_pod_start_time*:

![](/img/prometheus.png)
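
As an illustration (these example queries are mine, not from the original post), a couple of PromQL expressions that work against metrics this stack exposes, via kube-state-metrics and the node exporter respectively:

```promql
# Pods that started within the last hour (kube-state-metrics)
kube_pod_start_time > time() - 3600

# Per-node CPU usage rate over the last 5 minutes (node exporter)
sum by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))
```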

#### Access Grafana dashboard

- The Grafana dashboard can be accessed by pointing the browser to the URL *gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016*. The URL and the admin password can be extracted by the following commands:
+ The Grafana dashboard can be accessed by pointing the browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016*. The URL and the admin password can be extracted by the following commands:

```markdown
$ kubectl get service/grafana-dashboard -n monitoring -o jsonpath='{.metadata.annotations.hpecp-internal-gateway/80}'
@@ -382,12 +388,14 @@ cs3O6LF2H9m0jLrgdR8UXplmZG22d9Co9WbnJNzx

![](/img/grafana.png)

- #### Configure Grafana dashboard
+ #### Configure Grafana

- The Prometheus can be configured as the data sources from the Grafana dashboard, by specifying the HTTP URL as *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/*:
+ Prometheus can be configured as the data source from the Grafana Administration page, by specifying the HTTP URL as *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/*:

![](/img/grafana-datasources.png)
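
As an alternative to clicking through the UI, the same data source could be declared through Grafana's standard provisioning mechanism. A sketch under that assumption (the file path and keys follow upstream Grafana conventions, not the original post):

```yaml
# e.g. placed at /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/
    isDefault: true
```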

+ #### Import Grafana dashboards
+
From [Grafana Labs](https://grafana.com/grafana/dashboards/), there is a list of Grafana dashboard templates that you can download and then import as monitoring dashboards into Grafana.

![](/img/grafana-dashboard-import.png)
@@ -396,7 +404,7 @@ Here is the imported dashboard for _K8s cluster monitoring (via Prometheus)_:

![](/img/grafana-cluster-monitoring.png)

- Here is another imported dashboard for _K8s pod metrics_. It shows overall cluster CPU / Memory / Filesystem usage as well as individual pod, containers, systemd services statistics, etc.
+ Here is another imported dashboard for _K8s pod metrics_. It shows overall cluster CPU / memory / filesystem usage, as well as statistics for individual pods, containers, and systemd services.

![](/img/grafana-cluster-monitoring.png)