
Commit cd28d99

Bump node-exporter chart to 4.14.0 (#143)
* Bump node-exporter chart to 4.14.0
* terraform docs
1 parent 0a1b971

3 files changed: 32 additions & 2 deletions

docs/eks/index.md

Lines changed: 15 additions & 0 deletions
@@ -167,3 +167,18 @@ sum(up{job="custom-metrics"}) by (container_name, cluster, nodename)
```

<img width="2560" alt="Screenshot 2023-01-31 at 11 16 21" src="https://user-images.githubusercontent.com/10175027/215869004-e05f557d-c81a-41fb-a452-ede9f986cb27.png">

## Troubleshooting

When you upgrade the eks-monitoring module from v2.1.0 or earlier, the following error may occur:

```bash
Error: cannot patch "prometheus-node-exporter" with kind DaemonSet: DaemonSet.apps "prometheus-node-exporter" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"prometheus-node-exporter", "app.kubernetes.io/name":"prometheus-node-exporter"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```

This is due to the upgrade of the node-exporter chart from v2 to v4: the chart's selector labels changed, and a DaemonSet's `spec.selector` is immutable. Manually delete the node-exporter DaemonSet as described in [the chart's 3.x-to-4.x upgrade notes](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter#3x-to-4x), and then run `terraform apply` again.

```bash
kubectl -n prometheus-node-exporter delete daemonset -l app=prometheus-node-exporter
terraform apply
```
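As an optional sanity check around the manual deletion, here is a minimal sketch. It assumes the module's default `prometheus-node-exporter` namespace and release name; adjust the names if `ne_config` was overridden.

```bash
# Inspect the existing DaemonSet's immutable selector that the v4 chart cannot patch
kubectl -n prometheus-node-exporter get daemonset prometheus-node-exporter \
  -o jsonpath='{.spec.selector.matchLabels}{"\n"}'

# After deleting it, confirm no node-exporter DaemonSet remains before re-running terraform apply
kubectl -n prometheus-node-exporter get daemonset
```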

modules/eks-monitoring/README.md

Lines changed: 16 additions & 1 deletion
@@ -85,7 +85,7 @@ This module makes use of the open source [kube-prometheus-stack](https://github.
| <a name="input_managed_prometheus_workspace_endpoint"></a> [managed\_prometheus\_workspace\_endpoint](#input\_managed\_prometheus\_workspace\_endpoint) | Amazon Managed Prometheus Workspace Endpoint | `string` | `""` | no |
| <a name="input_managed_prometheus_workspace_id"></a> [managed\_prometheus\_workspace\_id](#input\_managed\_prometheus\_workspace\_id) | Amazon Managed Prometheus Workspace ID | `string` | `null` | no |
| <a name="input_managed_prometheus_workspace_region"></a> [managed\_prometheus\_workspace\_region](#input\_managed\_prometheus\_workspace\_region) | Amazon Managed Prometheus Workspace's Region | `string` | `null` | no |
- | <a name="input_ne_config"></a> [ne\_config](#input\_ne\_config) | Node exporter configuration | <pre>object({<br> create_namespace = bool<br> k8s_namespace = string<br> helm_chart_name = string<br> helm_chart_version = string<br> helm_release_name = string<br> helm_repo_url = string<br> helm_settings = map(string)<br> helm_values = map(any)<br><br> scrape_interval = string<br> scrape_timeout = string<br> })</pre> | <pre>{<br> "create_namespace": true,<br> "helm_chart_name": "prometheus-node-exporter",<br> "helm_chart_version": "2.0.3",<br> "helm_release_name": "prometheus-node-exporter",<br> "helm_repo_url": "https://prometheus-community.github.io/helm-charts",<br> "helm_settings": {},<br> "helm_values": {},<br> "k8s_namespace": "prometheus-node-exporter",<br> "scrape_interval": "60s",<br> "scrape_timeout": "60s"<br>}</pre> | no |
+ | <a name="input_ne_config"></a> [ne\_config](#input\_ne\_config) | Node exporter configuration | <pre>object({<br> create_namespace = bool<br> k8s_namespace = string<br> helm_chart_name = string<br> helm_chart_version = string<br> helm_release_name = string<br> helm_repo_url = string<br> helm_settings = map(string)<br> helm_values = map(any)<br><br> scrape_interval = string<br> scrape_timeout = string<br> })</pre> | <pre>{<br> "create_namespace": true,<br> "helm_chart_name": "prometheus-node-exporter",<br> "helm_chart_version": "4.14.0",<br> "helm_release_name": "prometheus-node-exporter",<br> "helm_repo_url": "https://prometheus-community.github.io/helm-charts",<br> "helm_settings": {},<br> "helm_values": {},<br> "k8s_namespace": "prometheus-node-exporter",<br> "scrape_interval": "60s",<br> "scrape_timeout": "60s"<br>}</pre> | no |
| <a name="input_nginx_config"></a> [nginx\_config](#input\_nginx\_config) | Configuration object for NGINX monitoring | <pre>object({<br> enable_alerting_rules = bool<br> scrape_sample_limit = number<br> prometheus_metrics_endpoint = string<br> })</pre> | <pre>{<br> "enable_alerting_rules": true,<br> "prometheus_metrics_endpoint": "metrics",<br> "scrape_sample_limit": 1000<br>}</pre> | no |
| <a name="input_prometheus_config"></a> [prometheus\_config](#input\_prometheus\_config) | Controls default values such as scrape interval, timeouts and ports globally | <pre>object({<br> global_scrape_interval = string<br> global_scrape_timeout = string<br> })</pre> | <pre>{<br> "global_scrape_interval": "60s",<br> "global_scrape_timeout": "15s"<br>}</pre> | no |
| <a name="input_tags"></a> [tags](#input\_tags) | Additional tags (e.g. `map('BusinessUnit`,`XYZ`) | `map(string)` | `{}` | no |
@@ -99,3 +99,18 @@ This module makes use of the open source [kube-prometheus-stack](https://github.
| <a name="output_eks_cluster_version"></a> [eks\_cluster\_version](#output\_eks\_cluster\_version) | EKS Cluster version |
| <a name="output_grafana_dashboard_urls"></a> [grafana\_dashboard\_urls](#output\_grafana\_dashboard\_urls) | URLs for dashboards created |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

## Troubleshooting

When you upgrade the eks-monitoring module from v2.1.0 or earlier, the following error may occur:

```bash
Error: cannot patch "prometheus-node-exporter" with kind DaemonSet: DaemonSet.apps "prometheus-node-exporter" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"prometheus-node-exporter", "app.kubernetes.io/name":"prometheus-node-exporter"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```

This is due to the upgrade of the node-exporter chart from v2 to v4: the chart's selector labels changed, and a DaemonSet's `spec.selector` is immutable. Manually delete the node-exporter DaemonSet as described in [the chart's 3.x-to-4.x upgrade notes](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter#3x-to-4x), and then run `terraform apply` again.

```bash
kubectl -n prometheus-node-exporter delete daemonset -l app=prometheus-node-exporter
terraform apply
```

modules/eks-monitoring/variables.tf

Lines changed: 1 addition & 1 deletion
@@ -131,7 +131,7 @@ variable "ne_config" {
  default = {
    create_namespace   = true
    helm_chart_name    = "prometheus-node-exporter"
-   helm_chart_version = "2.0.3"
+   helm_chart_version = "4.14.0"
    helm_release_name  = "prometheus-node-exporter"
    helm_repo_url      = "https://prometheus-community.github.io/helm-charts"
    helm_settings      = {}
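Because the default `helm_chart_version` jumps two major chart versions here (2.0.3 to 4.14.0), it can help to confirm what is actually deployed before and after applying. A minimal sketch, assuming the default release name and namespace from `ne_config`:

```bash
# The CHART column shows the deployed prometheus-node-exporter chart version
# (a 2.x version before this change, prometheus-node-exporter-4.14.0 afterwards)
helm list -n prometheus-node-exporter
```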
