
Commit aaab076

Update Blog “kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise”
1 parent 5ee79c1 commit aaab076

File tree

1 file changed: 19 additions, 19 deletions


content/blog/kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise.md

Lines changed: 19 additions & 19 deletions
@@ -1,5 +1,5 @@
---
-title: Adding a monitoring stack to a Kubernete cluster using Prometheus and
+title: Adding a monitoring stack to a Kubernetes cluster using Prometheus and
Grafana in HPE GreenLake for Private Cloud Enterprise
date: 2024-01-25T14:13:04.918Z
author: Guoping Jia
@@ -15,28 +15,28 @@ tags:
---
### Introduction

-[HPE GreenLake for Private Cloud Enterprise: Containers](https://www.hpe.com/us/en/greenlake/containers.html)("containers service"), one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a Kubernetes (K8s) cluster, view details about existing clusters, and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.
+[HPE GreenLake for Private Cloud Enterprise: Containers](https://www.hpe.com/us/en/greenlake/containers.html) ("containers service"), one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a Kubernetes (K8s) cluster, view details about existing clusters, and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.

In this blog post, I will discuss K8s monitoring and show you how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise. By setting up Prometheus as the data source and importing different dashboard templates into Grafana, various aspects of K8s, including metrics, performance, and health, can be monitored in the K8s cluster.

### Why monitor K8s

-Though K8s dramatically simplifies application deployment in containers and across clouds, it adds a new set of complexities for managing, securing and troubleshooting applications. Container-based applications are dynamic and they are being designed using microservices, where the number of components is increased by an order of magnitude. To ensure K8s security, it requires self-configuration that is typically specified in code, whether K8s yaml manifests, Helm charts, or templating tools. Properly configuring for workloads, clusters, networks, and infrastructure is crucial for averting issues and limiting the impact if a breach occurs. Dynamic provisioning via Infrastructure as Code (IaC), automated configuration management and orchestration also add to monitoring and troubleshooting complexity. K8s monitoring is critical to managing application performance, service uptime and troubleshooting. Having a good monitoring tool is becoming essential for K8s monitoring.
+Though K8s dramatically simplifies application deployment in containers and across clouds, it adds a new set of complexities for managing, securing and troubleshooting applications. Container-based applications are dynamic and designed using microservices, where the number of components increases by an order of magnitude. K8s security relies on configuration that is typically specified in code, through K8s YAML manifests, Helm charts, or templating tools. Properly configuring workloads, clusters, networks, and infrastructure is crucial for averting issues and limiting the impact if a breach occurs. Dynamic provisioning via Infrastructure as Code (IaC), automated configuration management and orchestration also add to monitoring and troubleshooting complexity. As such, K8s monitoring is critical for managing application performance, maintaining service uptime, and troubleshooting. Having a good monitoring tool is essential.

### Prerequisites

-Before starting, make sure you meet the following requirements:
+Before starting, make sure your setup has the following:

<style> li { font-size: 100%; line-height: 23px; max-width: none; } </style>

* A K8s cluster, provisioned in HPE GreenLake for Private Cloud Enterprise
-* Terraform, being installed by following [these steps](https://learn.hashicorp.com/tutorials/terraform/install-cli)
+* Terraform, installed using [these steps](https://learn.hashicorp.com/tutorials/terraform/install-cli)
* The kubectl CLI tool, together with the kubeconfig file for accessing the K8s cluster
* The Helm CLI tool, version 3.12.0 or later

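A quick way to confirm these prerequisites is to check the tool versions and verify that the kubeconfig file actually reaches the cluster. The snippet below is a minimal sketch; the kubeconfig path is a placeholder for the file downloaded for your own cluster.

```shell
# Check the CLI tool versions (Helm should be 3.12.0 or later)
$ terraform version
$ helm version --short
$ kubectl version --client

# Point kubectl at the downloaded kubeconfig file (placeholder path) and confirm access
$ export KUBECONFIG=./my-cluster-kubeconfig.yaml
$ kubectl get nodes
```
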
### Prometheus and Grafana

-[Prometheus](https://prometheus.io/docs/introduction/overview/) is a robust open-source monitoring and alerting tool used to collect, store, query, and alert on time-series data. It employs a pull-based model to gather metrics from instrumented targets and features a powerful query language (PromQL) for data analysis. It enables developers to monitor various aspects of their systems, including metrics, performance, and health.
+[Prometheus](https://prometheus.io/docs/introduction/overview/) is a robust open-source monitoring and alerting tool used to collect, store, query, and provide alerts on time-series data. It employs a pull-based model to gather metrics from instrumented targets and features a powerful query language (PromQL) for data analysis. It enables developers to monitor various aspects of their systems, including metrics, performance, and health.

[Grafana](https://grafana.com/) is a powerful data visualization and monitoring tool. It serves as the interface for developers to visualize and analyze the data collected by Prometheus. With its rich set of visualization options and customizable dashboards, Grafana empowers developers to gain real-time insights into their systems’ performance, identify trends, and detect anomalies. By leveraging Grafana’s capabilities, developers can create comprehensive visual representations of their systems’ metrics, facilitating informed decision-making and proactive system management.

@@ -129,12 +129,12 @@ resource "helm_release" "grafana-dashboard" {
}
```

-There are a few things worth noting in above Terraform config file:
+There are a few things worth noting in the above Terraform configuration file:

<style> li { font-size: 100%; line-height: 23px; max-width: none; } </style>

-* In Grafana, the persistence by default is disabled. In case Grafana Pod gets terminated for some reason, you will lose all your data. In production deployment, such as HPE GreenLake for Private Cloud Enterprise: Containers, this needs to be enabled, by setting *persistence.enabled* as *true*, to prevent any data lose.
-* In Prometheus, the *DaemonSet* deployment of the node exporter is trying to mount the *hostPath* volume to the container root “/”, which violates against one deployed OPA (Open Policy Agent) policy to the K8s cluster for filesystem (FS) mount protections. The DaemonSet deployment will never be ready, keep showing the warning events as *Warning FailedCreate daemonset-controller Error creating: admission webhook "soft-validate.hpecp.hpe.com" denied the request: Hostpath ("/") referenced in volume is not valid for this namespace because of FS Mount protections.*. You need to disable the *hostRootFsMount*, together with *hostNetwork* and *hostPID*, to comply with the security policy in the cluster.
+* In Grafana, persistence is disabled by default. If the Grafana Pod gets terminated for some reason, you will lose all your data. In production deployments, such as HPE GreenLake for Private Cloud Enterprise: Containers, it needs to be enabled, by setting *persistence.enabled* to *true*, to prevent any data loss.
+* In Prometheus, the *DaemonSet* deployment of the node exporter tries to mount the *hostPath* volume to the container root “/”, which violates an OPA (Open Policy Agent) policy deployed to the K8s cluster for filesystem (FS) mount protection. As such, the DaemonSet deployment will never become ready and keeps showing warning events such as *Warning FailedCreate daemonset-controller Error creating: admission webhook "soft-validate.hpecp.hpe.com" denied the request: Hostpath ("/") referenced in volume is not valid for this namespace because of FS Mount protections.*. You need to disable *hostRootFsMount*, together with *hostNetwork* and *hostPID*, to comply with the security policy in the cluster (see the Helm-level sketch after this list).
* Both Prometheus and Grafana services are deployed as *NodePort* service types. Those services will be mapped to the gateway host with automatically generated ports for easy access and service configuration.

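For readers who think in Helm values rather than Terraform, here is a rough CLI-level sketch of the same settings. It assumes the grafana/grafana and prometheus-community/prometheus charts, with the node exporter settings living under the prometheus-node-exporter subchart; the post itself applies these values through the Terraform *helm_release* resources, and the exact value paths may differ between chart versions, so verify them with *helm show values* first.

```shell
# Hedged Helm-CLI equivalent of the values set in the Terraform configuration
# (assumes the grafana and prometheus-community chart repos are already added;
#  value paths may differ between chart versions; verify with `helm show values`)
$ helm upgrade --install grafana-dashboard grafana/grafana \
    --namespace monitoring --create-namespace \
    --set persistence.enabled=true \
    --set service.type=NodePort

$ helm upgrade --install prometheus-stack prometheus-community/prometheus \
    --namespace monitoring \
    --set prometheus-node-exporter.hostRootFsMount.enabled=false \
    --set prometheus-node-exporter.hostNetwork=false \
    --set prometheus-node-exporter.hostPID=false \
    --set server.service.type=NodePort
```
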
#### Initialize working directory
@@ -174,11 +174,11 @@ With above *main.tf* config file, the working directory can be initialized by ru
commands will detect it and remind you to do so if necessary.
```

-From the ouput of the command run, in addition to the HPE GreenLake Terraform provider *hpegl*, the provider *helm* is also installed to the Terraform working directory.
+The command output shows that, in addition to the HPE GreenLake Terraform provider *hpegl*, the *helm* provider is also installed to the Terraform working directory.

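To double-check which providers ended up in the working directory after initialization, Terraform can list them; this is an optional sanity check, not a step the post requires.

```shell
# List the providers required by the configuration and installed under .terraform/
$ terraform providers
```
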
#### Deploy Prometheus and Grafana

-Typing the following command to apply the Terraform configuration and deploy Prometheus and Grafana to the K8s cluster while responding *yes* at the prompt to confirm the operation. You may try first a dry run, by running *terraform plan*, to preview the changes to your infrastructure based on the data you provide in your Terraform file.
+Type the following command to apply the Terraform configuration and deploy Prometheus and Grafana to the K8s cluster while responding *yes* at the prompt to confirm the operation. You may want to first try a dry run to preview the changes to your infrastructure based on the data you provide in your Terraform file by running *terraform plan*.

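If you do the dry run first, it takes the same variables file as the apply step; a minimal sketch:

```shell
# Preview the planned changes without applying them
$ terraform plan --var-file=variables.tfvars
```
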
```shell
$ terraform apply --var-file=variables.tfvars
@@ -311,9 +311,9 @@ Typing the following command to apply the Terraform configuration and deploy Pro

#### Check Prometheus and Grafana

-After few minutes Terraform run, both Prometheus and Grafana will be deployed in the K8s cluster to the namespace *monitoring*.
+After a few minutes of Terraform running, both Prometheus and Grafana will be deployed in the K8s cluster to the namespace *monitoring*.

-Typing the following command to check the deployed monitoring resources. All the Pods should be in *Running* states.
+Type the following command to check the deployed monitoring resources. All the Pods should be in the *Running* state.

```markdown
$ kubectl get all -n monitoring
@@ -354,7 +354,7 @@ NAME READY AGE
statefulset.apps/prometheus-stack-alertmanager 1/1 4d17h
```
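If you would rather block until everything is up than re-run the command above, one hedged alternative is to wait on Pod readiness in the namespace:

```shell
# Wait up to five minutes for all Pods in the monitoring namespace to become Ready
$ kubectl wait --for=condition=Ready pods --all -n monitoring --timeout=300s
```
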

-Typing *helm list* command, it will show both Prometheus and Grafana helm charts and versions that are deployed to the *monitoring* namespace in the cluster:
+Type the *helm list* command to show the Prometheus and Grafana Helm charts and the versions deployed to the *monitoring* namespace in the cluster:

```markdown
$ helm list -n monitoring
@@ -367,7 +367,7 @@ prometheus-stack monitoring 1 2023-11-22 15:28:13.290386574 +0100 CET de

#### Access Prometheus

-Prometheus can be accessed by pointing your browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015*, extracted by the following command:
+Prometheus can be accessed by pointing your browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015*, extracted by using the following command:

```markdown
$ kubectl get service/prometheus-stack-server -n monitoring -o jsonpath='{.metadata.annotations.hpecp-internal-gateway/80}'
@@ -380,7 +380,7 @@ You can execute the query in Prometheus by using some metrics, e.g., *node_pr
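As a side note to the query example referenced above, Prometheus also exposes an HTTP API that can run the same PromQL expressions from the command line. The sketch below assumes the gateway URL and auto-generated port shown in this post; both will differ in your environment.

```shell
# Run a simple PromQL query (the built-in "up" metric) against the Prometheus HTTP API
$ curl -sG 'http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/api/v1/query' \
    --data-urlencode 'query=up'
```
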

#### Access Grafana

-Grafana can be accessed by pointing your browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016*. The URL and the admin password can be extracted by the following commands:
+Grafana can be accessed by pointing your browser to the URL *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016*. The URL and the admin password can be extracted by using the following commands:

```markdown
$ kubectl get service/grafana-dashboard -n monitoring -o jsonpath='{.metadata.annotations.hpecp-internal-gateway/80}'
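# The password-extraction command is not shown in this diff hunk; a hedged sketch,
# assuming the Grafana chart's default admin secret named after the grafana-dashboard release:
$ kubectl get secret grafana-dashboard -n monitoring -o jsonpath='{.data.admin-password}' | base64 --decode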
@@ -394,13 +394,13 @@ cs3O6LF2H9m0jLrgdR8UXplmZG22d9Co9WbnJNzx

#### Configure Grafana data sources

-From Grafana Administration page, Prometheus can be configured as the data source by specifying the HTTP URL as *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/*:
+From the Grafana Administration page, Prometheus can be configured as the data source by specifying the HTTP URL as *http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/*:

![](/img/grafana-datasources.png)
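
The post configures the data source through the UI, as shown above. Purely for completeness, here is a hedged sketch of the equivalent call to Grafana's HTTP API; the URLs are the gateway endpoints from this post and *&lt;admin-password&gt;* is a placeholder for the value extracted earlier.

```shell
# Create the Prometheus data source via Grafana's HTTP API instead of the UI
$ curl -s -X POST 'http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016/api/datasources' \
    -H 'Content-Type: application/json' \
    -u admin:<admin-password> \
    -d '{"name":"Prometheus","type":"prometheus","access":"proxy","url":"http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/"}'
```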

#### Import Grafana dashboards

-From [Grafana Labs](https://grafana.com/grafana/dashboards/), there is a list of Grafana dashboard templates you can download and then import them as monitoring dashboards to the Grafana.
+From [Grafana Labs](https://grafana.com/grafana/dashboards/), there is a list of Grafana dashboard templates that you can download and then import into Grafana as monitoring dashboards.

![](/img/grafana-dashboard-import.png)

@@ -416,4 +416,4 @@ Here is another imported dashboard for *K8s Pod metrics*. It shows individual

This blog post described the detailed process to deploy and set up Prometheus and Grafana as a monitoring stack in a K8s cluster in HPE GreenLake for Private Cloud Enterprise. Prometheus excels at collecting and storing time-series data, enabling developers to monitor various aspects of K8s, including metrics, performance, and health. Grafana complements Prometheus by providing developers with intuitive dashboards and visualizations, enabling them to gain meaningful insights into K8s performance and behavior. Integration of Prometheus and Grafana by deploying them in the K8s cluster adds a monitoring stack. It empowers users to gain a deep understanding of the cluster’s internal states and behaviors, enabling them to identify potential issues, optimize performance and enhance overall reliability.

-You can keep coming back to the [HPE Developer blog](https://developer.hpe.com/blog) to learn more about HPE GreenLake for Private Cloud Enterprise.
+Please keep coming back to the [HPE Developer blog](https://developer.hpe.com/blog) to learn more about HPE GreenLake for Private Cloud Enterprise.
