Commit 3b04ba7

Update Blog “kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise”
1 parent c5f614e commit 3b04ba7


content/blog/kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise.md

Lines changed: 22 additions & 20 deletions
@@ -9,18 +9,18 @@ tags:
 - Prometheus
 - Grafana
 - Kubernetes
+- Kubernetes monitoring
 - HPE GreenLake for Private Cloud Enterprise
-- HPE GreenLake for Private Cloud Enterprise Containers
 ---

 ### Introduction

-[HPE GreenLake for Private Cloud Enterprise: Containers](https://www.hpe.com/us/en/greenlake/containers.html), one of the HPE GreenLake Cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a K8s cluster, view details about existing clusters, and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.
+[HPE GreenLake for Private Cloud Enterprise: Containers](https://www.hpe.com/us/en/greenlake/containers.html), one of the HPE GreenLake cloud services available on HPE GreenLake for Private Cloud Enterprise, allows customers to create a Kubernetes (K8s) cluster, view details about existing clusters, and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.

-In this blog post, I will discuss Kubernetes (K8s) monitoring and show you how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise. By setting up Prometheus as the data source and importing different dashboard templates into Grafana, various aspects of K8s, including metrics, performance, and health, can be monitored in the K8s cluster.
+In this blog post, I will discuss K8s monitoring and show you how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise. By setting up Prometheus as the data source and importing different dashboard templates into Grafana, various aspects of K8s, including metrics, performance, and health, can be monitored in the K8s cluster.

 ### Why monitor K8s

-Though K8s dramatically simplifies application deployment in containers and across clouds, it adds a new set of complexities for managing, securing and troubleshooting applications. Container-based applications are dynamic and they are being designed using microservices, where the number of components is increased by an order of magnitude. To ensure K8s security, it requires self-configuration that is typically specified in code, whether (K8s) yaml manifests, Helm charts, or templating tools. Properly configuring for workloads, clusters, networks, and infrastructure is crucial for averting issues and limiting the impact if a breach occurs. Dynamic provisioning via Infrastructure as Code (IaC), automated configuration management and orchestration also add to monitoring and troubleshooting complexity. K8s monitoring is critical to managing application performance, service uptime and troubleshooting. Having a good monitoring tool is becoming essential for K8s monitoring.
+Though K8s dramatically simplifies application deployment in containers and across clouds, it adds a new set of complexities for managing, securing, and troubleshooting applications. Container-based applications are dynamic and designed using microservices, where the number of components increases by an order of magnitude. Ensuring K8s security requires configuration that is typically specified in code, whether K8s YAML manifests, Helm charts, or templating tools. Properly configuring workloads, clusters, networks, and infrastructure is crucial for averting issues and limiting the impact if a breach occurs. Dynamic provisioning via Infrastructure as Code (IaC), automated configuration management, and orchestration also add to monitoring and troubleshooting complexity. K8s monitoring is critical to managing application performance, service uptime, and troubleshooting, so having a good monitoring tool is essential.

 ### Prerequisites

@@ -39,13 +39,13 @@ Before starting, make sure you meet the following requirements:

 [Grafana](https://grafana.com/) is a powerful data visualization and monitoring tool. It serves as the interface for developers to visualize and analyze the data collected by Prometheus. With its rich set of visualization options and customizable dashboards, Grafana empowers developers to gain real-time insights into their systems’ performance, identify trends, and detect anomalies. By leveraging Grafana’s capabilities, developers can create comprehensive visual representations of their systems’ metrics, facilitating informed decision-making and proactive system management.

-In the following sections, I will show how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise.
+In the following sections, I will show you how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise.

 ### Deploy Prometheus and Grafana using Terraform

 Prometheus and Grafana will be deployed to the K8s cluster using the [HPE GreenLake Terraform provider *hpegl*](https://registry.terraform.io/providers/HPE/hpegl/latest), together with the [Helm provider from HashiCorp](https://registry.terraform.io/providers/hashicorp/helm/latest).

-#### Create Terraform config
+#### Create a Terraform config

 Here is the Terraform config file. You can refer to [Infrastructure-as-code on HPE GreenLake using Terraform](https://developer.hpe.com/blog/infrastructure-as-code-on-hpe-greenlake-using-terraform/) for details about the HPE GreenLake Terraform provider and its usage.

@@ -128,17 +128,17 @@ resource "helm_release" "grafana-dashboard" {
 }
 ```

-There are a few things worth noting in above config file:
+There are a few things worth noting in the above Terraform config file:

 <style> li { font-size: 100%; line-height: 23px; max-width: none; } </style>

-* In Grafana, the persistence by default is disabled. In case Grafana pod gets terminated for some reason, you will lose all your data. In production deployment, such as HPE GreenLake for Containers, this needs to be enabled by setting *persistence.enabled* as *true* to prevent any data lose.
-* In Prometheus, the *DaemonSet* deployment of the node exporter is trying to mount the *hostPath* volume to the container root “/”, which violates against one deployed OPA (Open Policy Agent) policy to the K8s cluster for FS mount protections. The DaemonSet deployment will never be ready, keep showing the warning events as *Warning FailedCreate daemonset-controller Error creating: admission webhook "soft-validate.hpecp.hpe.com" denied the request: Hostpath ("/") referenced in volume is not valid for this namespace because of FS Mount protections.*. You need disable the *hostRootFsMount*, together with *hostNetwork* and *hostPID*, to comply with the security policy in the cluster.
-* Both Prometheus and Grafana services are deployed as *NodePort* service types. Those services will be automatically mapped to the gateway host with automatically generated ports for easy access and configuration.
+* In Grafana, persistence is disabled by default. If the Grafana Pod gets terminated for some reason, you will lose all your data. In a production deployment, such as HPE GreenLake for Containers, persistence needs to be enabled, by setting *persistence.enabled* to *true*, to prevent any data loss.
+* In Prometheus, the *DaemonSet* deployment of the node exporter tries to mount the *hostPath* volume to the container root “/”, which violates an OPA (Open Policy Agent) policy deployed to the K8s cluster for filesystem (FS) mount protections. The DaemonSet deployment will never become ready, and keeps showing warning events such as *Warning FailedCreate daemonset-controller Error creating: admission webhook "soft-validate.hpecp.hpe.com" denied the request: Hostpath ("/") referenced in volume is not valid for this namespace because of FS Mount protections.* You need to disable *hostRootFsMount*, together with *hostNetwork* and *hostPID*, to comply with the security policy in the cluster.
+* Both Prometheus and Grafana services are deployed as the *NodePort* service type. These services will be mapped to the gateway host with automatically generated ports for easy access and service configuration.

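As a sketch of how those overrides could be kept outside Terraform, the same settings can live in plain Helm values files. This is illustrative only: the value paths follow the community *prometheus* and *grafana* Helm charts and should be verified against the chart versions you actually deploy.

```shell
# Hypothetical Helm values overrides matching the notes above.
# Value paths are assumptions based on the community charts; verify them
# with `helm show values` for your chart versions.
cat > prometheus-values.yaml <<'EOF'
prometheus-node-exporter:
  hostRootFsMount:
    enabled: false   # do not mount "/" -- complies with the OPA FS mount policy
  hostNetwork: false
  hostPID: false
server:
  service:
    type: NodePort   # exposed through the gateway host
EOF

cat > grafana-values.yaml <<'EOF'
persistence:
  enabled: true      # keep dashboards and settings across Pod restarts
service:
  type: NodePort
EOF
```

Such files could then be passed to a `helm_release` via its `values` argument instead of individual `set` blocks.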
 #### Initialize working directory

-With above main.tf config file, the working directory can be initialized by running the following command:
+With the above *main.tf* config file, the working directory can be initialized by running the following command:

 ```shell
 $ terraform init
@@ -173,9 +173,11 @@ With above main.tf config file, the working directory can be initialized by runn
 commands will detect it and remind you to do so if necessary.
 ```

+From the output of the command run, in addition to the HPE GreenLake Terraform provider *hpegl*, the provider *helm* is also installed into the Terraform working directory.
+
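For reference, a minimal provider declaration that would make *terraform init* download both plugins might look like the following. This is a sketch: the file name and the omitted version constraints are illustrative, not taken from the original post.

```shell
# Hypothetical providers.tf declaring both required providers.
# In practice you would also pin version constraints for each provider.
cat > providers.tf <<'EOF'
terraform {
  required_providers {
    hpegl = {
      source = "HPE/hpegl"
    }
    helm = {
      source = "hashicorp/helm"
    }
  }
}
EOF
```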
 #### Deploy Prometheus and Grafana

-Apply the Terraform configuration and deploy Prometheus and Grafana to the K8s cluster by responding *yes* at the prompt to confirm the operation. You may start first a dry run, by running *terraform plan*, to preview the changes to your infrastructure based on the data you provide in your Terraform file.
+Type the following command to apply the Terraform configuration and deploy Prometheus and Grafana to the K8s cluster, responding *yes* at the prompt to confirm the operation. You may first try a dry run, by running *terraform plan*, to preview the changes to your infrastructure based on the data you provide in your Terraform file.

 ```shell
 $ terraform apply --var-file=variables.tfvars
@@ -310,7 +312,7 @@ Apply the Terraform configuration and deploy Prometheus and Grafana to the K8s c

 After a few minutes of Terraform run, both Prometheus and Grafana will be deployed in the K8s cluster to the namespace *monitoring*.

-Type the following command to check the deployed monitoring resources. They should be all in *Running* states.
+Type the following command to check the deployed monitoring resources. All the Pods should be in the *Running* state.

 ```shell
 $ kubectl get all -n monitoring
@@ -364,20 +366,20 @@ prometheus-stack monitoring 1 2023-11-22 15:28:13.290386574 +0100 CET de

 #### Access Prometheus

-Prometheus can be accessed by pointing the browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015*, extracted by the following command:
+Prometheus can be accessed by pointing your browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015*, which can be extracted with the following command:

 ```shell
 $ kubectl get service/prometheus-stack-server -n monitoring -o jsonpath='{.metadata.annotations.hpecp-internal-gateway/80}'
 gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015
 ```

-You can execute the query by using some metrics, e.g., *node_procs_running*:
+You can run queries in Prometheus using metrics such as *node_procs_running*:

 ![](/img/prometheus.png)
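Besides the web UI, the same instant query can be issued against Prometheus's standard HTTP API under */api/v1/query*. A small sketch, using the example gateway address from this post (substitute the one reported for your cluster):

```shell
# Example gateway URL from this post; replace with your own cluster's value.
PROM_URL="http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015"

# The Prometheus HTTP API serves instant queries at /api/v1/query.
# With network access to the gateway, the metric could be fetched with:
#   curl -s "${PROM_URL}/api/v1/query?query=node_procs_running"
# Here we only print the query URL that would be called.
echo "${PROM_URL}/api/v1/query?query=node_procs_running"
```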

 #### Access Grafana

-Grafana can be accessed by pointing the browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016*. The URL and the admin password can be extracted by the following commands:
+Grafana can be accessed by pointing your browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016*. The URL and the admin password can be extracted with the following commands:

 ```shell
 $ kubectl get service/grafana-dashboard -n monitoring -o jsonpath='{.metadata.annotations.hpecp-internal-gateway/80}'
@@ -389,9 +391,9 @@ cs3O6LF2H9m0jLrgdR8UXplmZG22d9Co9WbnJNzx

 ![](/img/grafana.png)

-#### Configure Grafana
+#### Configure Grafana data sources

-Prometheus can be configured as the data sources from the Grafana Administration page, by specifying the HTTP URL as *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/*:
+From the Grafana Administration page, Prometheus can be configured as the data source by specifying the HTTP URL as *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/*:

 ![](/img/grafana-datasources.png)

@@ -401,11 +403,11 @@ From [Grafana Labs](https://grafana.com/grafana/dashboards/), there is a list

 ![](/img/grafana-dashboard-import.png)

-Here is the imported dashboard for *K8s cluster monitoring (via Prometheus)*:
+Here is the imported dashboard for *K8s cluster monitoring (via Prometheus)*. It shows overall cluster CPU / memory / filesystem usage.

 ![](/img/grafana-cluster-monitoring.png)

-Here is another imported dashboard for *K8s pod metrics*. It shows overall cluster CPU / memory / filesystem usage as well as individual pod, containers, systemd services statistics, etc.
+Here is another imported dashboard for *K8s Pod metrics*. It shows individual Pod CPU / memory usage.

 ![](/img/grafana-pod-metrics.png)
