This subfolder contains a Helm chart to install [NetApp Harvest](https://github.com/NetApp/harvest/blob/main/README.md)
into an AWS EKS cluster to monitor multiple FSx for ONTAP file systems using the
Grafana + Prometheus stack. It uses the AWS Secrets Manager to obtain
credentials for each of the FSxN file systems so those credentials aren't insecurely stored.

## Introduction

Harvest Helm chart installation will result in the following:
* Collecting metrics about your FSxNs and adding existing Grafana dashboards for better visualization.

### Integration with AWS Secrets Manager
This Harvest installation uses the AWS Secrets Manager to obtain the credentials for each of the FSxN file systems.
The format of the secret string should be a JSON structure with `username` and `password` keys. For example:
```json
{
  "username": "fsxadmin",
  "password": "fsxadmin-password"
}
```
A service account with sufficient permissions to fetch the secrets is created as part of the installation steps below.

### Prerequisites
* `Helm` - for installing resources.
* `kubectl` - for managing Kubernetes resources.
* `eksctl` - for creating and managing EKS clusters.
* `jq` - for parsing JSON data on the command line. This is optional, but recommended for some of the commands below.
* An FSx for ONTAP file system deployed in the same VPC as the EKS cluster.

## Deployment

### Deployment of Prometheus and Grafana
If you don't already have Prometheus and Grafana running in your EKS cluster, you can deploy both of them
using the Helm chart from the Prometheus community repository with the following commands:

:memo: **NOTE:** You need to make a substitution in the command below before running it.
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace <NAMESPACE> --create-namespace
```
Replace `<NAMESPACE>` with the namespace you want the Prometheus stack deployed into. The rest of this README assumes `prometheus`.

Once the deployment completes, you can confirm all the pods are up by running `kubectl get pods -n <NAMESPACE>`. You should see, among others, a pod named `prometheus-kube-prometheus-stack-prometheus-0` with a status of `Running`.

### Deployment of the Harvest Helm chart

#### 1. Download the Harvest Helm chart
Download the Harvest Helm chart by copying the contents of the `harvest` directory found in this repo. The easiest
way to do that is to simply clone the entire repo and change into the `harvest` directory:
```bash
git clone https://github.com/NetApp/FSx-ONTAP-samples-scripts.git
cd FSx-ONTAP-samples-scripts/Monitoring/monitor_fsxn_with_harvest_on_eks/harvest
```
This custom Helm chart includes:
* `deplyment.yaml` - Harvest deployment using the latest Harvest image.
* `harvest-config.yaml` - Harvest backend configuration.
* `harvest-cm.yaml` - Environment variables configuration for the credentials script.
#### 2. Update the values.yaml file
Edit the `values.yaml` file in the chart directory so it lists each FSxN file system you want to monitor. Each entry under `fsxs:` needs the information Harvest requires to reach the file system and its credentials. The keys shown below are illustrative; see the comments in the chart's `values.yaml` for the exact format:
```yaml
fsxs:
  - name: <FSXN_NAME>                  # a label for this file system
    managementIp: <FSXN_MANAGEMENT_IP> # the management endpoint IP
    promPort: <PROM_PORT>              # unique Prometheus port for this poller
    secretArn: <SECRET_ARN>            # ARN of the secret holding its credentials
```
Of course, replace the strings within the <> with your own values.

:memo: **NOTE:** Each FSxN cluster must have a unique `promPort` number, in the range of 12990 to 14000.

#### 3. Create an AWS Secrets Manager secret for the FSxN credentials
If you don't already have an AWS Secrets Manager secret with your FSxN credentials, you can create one using the AWS CLI:
```bash
aws secretsmanager create-secret --region <REGION> --name <SECRET_NAME> \
    --secret-string '{"username":"fsxadmin", "password":"<YOUR_FSX_PASSWORD>"}'
```
Replace `<REGION>` and `<SECRET_NAME>` with appropriate values, and `<YOUR_FSX_PASSWORD>` with the actual password for the `fsxadmin` user on your FSxN file system.
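If you want to confirm the secret was stored with the expected keys, you can retrieve it with the command below. Note that this prints the credentials to your terminal, so use it with care:
```bash
# Fetch the secret string and pretty-print it with jq (from the prerequisites)
aws secretsmanager get-secret-value --region <REGION> --secret-id <SECRET_NAME> \
    --query SecretString --output text | jq .
```
The output should show the `username` and `password` keys in the format described earlier.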

#### 4. Create a Service Account with permissions to read the AWS Secrets Manager secrets

##### 4a. Create Policy
The following IAM policy can be used to grant all the permissions required by Harvest to fetch the secrets.
Note that this example has places for two AWS Secrets Manager ARNs. You should add the ARNs of the secrets
for all the FSxNs you plan to monitor. Typically there is one per FSxN, but it is okay to use the same secret for multiple
FSxNs as long as the credentials are the same.
```json
{
    "Statement": [
        {
            "Action": "secretsmanager:GetSecretValue",
            "Effect": "Allow",
            "Resource": [
                "<SECRET_ARN_1>",
                "<SECRET_ARN_2>"
            ]
        }
    ],
    "Version": "2012-10-17"
}
```
Of course, replace the strings within the <> with your own values. Save the edited policy in a file named `harvest-read-secrets-policy.json`.

You can use the following command to create the policy:
```bash
POLICY_ARN=$(aws iam create-policy --policy-name harvest_read_secrets --policy-document file://harvest-read-secrets-policy.json --query Policy.Arn --output text)
```
Note that this sets a variable named `POLICY_ARN` to the ARN of the policy that is created.
It is done this way to make it easy to pass that policy ARN when you create the service account in the next step.
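You can verify the variable was set before moving on:
```bash
# Should print an ARN of the form arn:aws:iam::<ACCOUNT_ID>:policy/harvest_read_secrets
echo "$POLICY_ARN"
```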

##### 4b. Create ServiceAccount
The following command will create a role, associated with the policy created above, and a Kubernetes service account that Harvest will run under:
```bash
eksctl create iamserviceaccount --name harvest-sa --region=<REGION> --namespace <NAMESPACE> --role-name harvest-role --cluster <YOUR_CLUSTER_NAME> --attach-policy-arn "$POLICY_ARN" --approve
```
Of course, replace all the strings within the <> with your own values. Note that the `<NAMESPACE>` should
be where your Prometheus stack is deployed. If you used the command above to install Prometheus,
then the namespace should be `prometheus`.
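To confirm the service account was created and linked to the IAM role, you can inspect its annotations; `eksctl` adds an `eks.amazonaws.com/role-arn` annotation pointing at the role it created:
```bash
# The output should include an eks.amazonaws.com/role-arn annotation
kubectl get serviceaccount harvest-sa -n <NAMESPACE> -o yaml
```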

#### 5. Install Harvest helm chart
Once you have updated the values.yaml file, created the AWS Secrets Manager secrets,
and created the service account with permissions to read the secrets, you are ready to install the Harvest Helm chart
by running:
```text
helm upgrade --install harvest -f values.yaml ./ --namespace=<NAMESPACE> --set promethues=<your_promethues_release_name>
```
Note that the `<NAMESPACE>` should be where your Prometheus stack is deployed. If you used the command above to install Prometheus,
then it will be `prometheus`. `<your_promethues_release_name>` is the name of your Prometheus release
(e.g. `kube-prometheus-stack` if you followed the previous steps).

Once the deployment is complete, Harvest should be listed as a target on Prometheus. You can check that by running
the following commands. The first one sets up a port forwarder, as a background job, from port 9090 on your local machine to the Prometheus server running
in the EKS cluster.
```bash
kubectl port-forward -n prometheus prometheus-kube-prometheus-stack-prometheus-0 9090 &
curl -s http://localhost:9090/api/v1/targets | jq -r '.data.activeTargets[] | select(.labels.service[0:14] == "harvest-poller") | "\(.labels.service) Status = \(.health)"'
```
Once you have obtained the status, you don't need the `kubectl port-forward` command running anymore. You can kill it by running:
```bash
kill %?9090
```
That kills any background job that has "9090" in its command line, which the port forwarding command above does.

### Import FSxN CloudWatch metrics into your monitoring stack using YACE
AWS CloudWatch provides metrics for the FSx for ONTAP file systems that cannot be collected by Harvest.
Therefore, we recommend using the [yet-another-cloudwatch-exporter](https://github.com/prometheus-community/yet-another-cloudwatch-exporter)
(from the Prometheus community) to collect these metrics.

#### 1. Create a Service Account with permissions to get AWS CloudWatch metrics
The following IAM policy can be used to grant all the permissions required by YACE to fetch the CloudWatch metrics:

```json
{
    "Statement": [
        {
            "Action": [
                "tag:GetResources",
                "cloudwatch:GetMetricData",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:ListMetrics"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ],
    "Version": "2012-10-17"
}
```
The policy shown above is in a file named `yace-exporter-policy.json` in the repo. You shouldn't
have to modify the file, so just run the following command to create the policy:
```bash
POLICY_ARN=$(aws iam create-policy --policy-name yace-exporter-policy --policy-document file://yace-exporter-policy.json --query Policy.Arn --output text)
```
Note that this sets a variable named `POLICY_ARN` to the ARN of the policy that is created.
It is done this way to make it easy to pass that policy ARN when you create the service account in the next step.

#### 2. Create the service account
The following command will create a role associated with the policy created above, and a Kubernetes service account that YACE will run under:

```bash
eksctl create iamserviceaccount --name yace-exporter-sa --region=<REGION> --namespace <NAMESPACE> --role-name yace-cloudwatch-exporter-role --cluster <YOUR_CLUSTER_NAME> --attach-policy-arn "$POLICY_ARN" --approve
```
Of course, replace the strings within the <> with your own values. Note that the overrides file below assumes the service account
name is `yace-exporter-sa`, so if you change it, you will need to update the overrides file accordingly.

#### 3. Install yace-exporter helm chart
First add the nerdswords Helm repository to your local Helm client. This repository contains the YACE exporter chart.

```bash
helm repo add nerdswords https://nerdswords.github.io/helm-charts
helm repo update
```
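You can confirm the chart is now available from the repository with:
```bash
# Lists the chart and its latest version if the repo was added correctly
helm search repo nerdswords/yet-another-cloudwatch-exporter
```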
Next, edit the `yace-override-values.yaml` file found in this repo by changing the Prometheus release name in the `serviceMonitor` section:
```yaml
serviceMonitor:
  enabled: true
  labels:
    release: <your_prometheus_release_name>
```
If you installed Prometheus using the previous steps, the release name will be `kube-prometheus-stack`.
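If you're not sure what the release name is, you can list the releases in the namespace (assuming you deployed into the `prometheus` namespace as above):
```bash
helm list -n prometheus
```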

While editing that file, also update the region name, in both places within the `config` section, to your FSxN's region:
```yaml
  apiVersion: v1alpha1
  sts-region: <Region_Name>
  # ... (remainder of the config, including the second region setting, omitted here)
```

Finally, run the following command to install the yace-exporter helm chart:
```text
helm install yace-cw-exporter --namespace <NAMESPACE> nerdswords/yet-another-cloudwatch-exporter -f yace-override-values.yaml
```
Of course, replace the strings within the <> with your own values.

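As with Harvest, you can verify that Prometheus is scraping the exporter by checking its targets. The same port-forward trick shown earlier works here too; this variant simply lists every active target and its health, so look for the YACE service in the output:
```bash
kubectl port-forward -n prometheus prometheus-kube-prometheus-stack-prometheus-0 9090 &
curl -s http://localhost:9090/api/v1/targets | jq -r '.data.activeTargets[] | "\(.labels.service) Status = \(.health)"'
kill %?9090
```
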
### Accessing Grafana
If you newly installed the Prometheus stack, which includes Grafana, you will need to provide a way of accessing it from outside the Kubernetes cluster.
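One way to do that is with a port forwarder, similar to what was done for Prometheus above. This sketch assumes the Grafana service is named `kube-prometheus-stack-grafana` and lives in the `prometheus` namespace, which is what the installation steps above produce:
```bash
# Forward local port 3000 to the Grafana service's port 80 as a background job
kubectl port-forward -n prometheus svc/kube-prometheus-stack-grafana 3000:80 &
```
You can then point your browser at `http://localhost:3000`.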

Once you have access to Grafana, you can log in using the default credentials. For the kube-prometheus-stack chart, these are the username `admin` with the password `prom-operator`, unless you overrode them at install time.
Once you login, you'll want to import some dashboards to visualize the metrics collected by Harvest and YACE. You will find
some example dashboards in the `dashboards` folder in this repository. You can import these dashboards into Grafana by following these steps:
1. Log in to your Grafana instance.
2. Click on the "+" icon on the left-hand side menu and select "Import Dashboard".
3. Click in the box labeled "Upload dashboard JSON file" and browse to one of the dashboard JSON files from the `dashboards` folder in this repository.
4. Click "Import".

You can repeat the steps above for each of the dashboard JSON files you want to import.

You can also import the "default" dashboards from the Harvest repo found [here](https://github.com/NetApp/harvest/tree/main/grafana/dashboards).
Only consider the dashboards in the `cmode` and `cmode-details` directories.

:memo: **NOTE:** Since the special 'fsxadmin' account doesn't have access to all the metrics that a traditional 'admin' account would have,
some of the metrics and dashboards may not be fully applicable or available. The ones with the 'fsx' tag are more relevant for FSxN.

## Author Information