Commit 537c576

Merge pull request #268178 from sethmanheim/esa-pp

Add Edge Storage Accelerator docs under Arc

2 parents a70142d + cc80864

21 files changed: +1081 −20 lines
articles/azure-arc/breadcrumb/toc.yml: deleted (0 additions, 17 deletions)

New file (170 additions, 0 deletions):
---
title: Attach your application using the Azure IoT Operations data processor or Kubernetes native application (preview)
description: Learn how to attach your app using the Azure IoT Operations data processor or Kubernetes native application in Edge Storage Accelerator.
author: sethmanheim
ms.author: sethm
ms.topic: how-to
ms.date: 04/08/2024
zone_pivot_groups: attach-app
---

# Attach your application (preview)

This article assumes you created a Persistent Volume (PV) and a Persistent Volume Claim (PVC). For information about creating a PV, see [Create a persistent volume](create-pv.md). For information about creating a PVC, see [Create a Persistent Volume Claim](create-pvc.md).
::: zone pivot="attach-iot-op"
## Configure the Azure IoT Operations data processor

When you use Azure IoT Operations (AIO), the Data Processor is spawned without any mounts for Edge Storage Accelerator. You can perform the following tasks:

- Add a mount for the Edge Storage Accelerator PVC you created previously.
- Reconfigure the output stage of all pipelines to write to the Edge Storage Accelerator mount you just created.

## Add Edge Storage Accelerator to your aio-dp-runner-worker-0 pods

These pods are part of a **statefulSet**. You can't edit the statefulSet in place to add mount points. Instead, follow this procedure:

1. Dump the statefulSet to YAML:

   ```bash
   kubectl get statefulset -o yaml -n azure-iot-operations aio-dp-runner-worker > stateful_worker.yaml
   ```
1. Edit the statefulSet to include the new mounts for ESA in `volumeMounts` and `volumes`:

   ```yaml
   volumeMounts:
   - mountPath: /etc/bluefin/config
     name: config-volume
     readOnly: true
   - mountPath: /var/lib/bluefin/registry
     name: nfs-volume
   - mountPath: /var/lib/bluefin/local
     name: runner-local
   ### Add the next 2 lines ###
   - mountPath: /mnt/esa
     name: esa4

   volumes:
   - configMap:
       defaultMode: 420
       name: file-config
     name: config-volume
   - name: nfs-volume
     persistentVolumeClaim:
       claimName: nfs-provisioner
   ### Add the next 3 lines ###
   - name: esa4
     persistentVolumeClaim:
       claimName: esa4
   ```
1. Delete the existing statefulSet:

   ```bash
   kubectl delete statefulset -n azure-iot-operations aio-dp-runner-worker
   ```

   This deletes all `aio-dp-runner-worker-n` pods, which is an outage-level event.

1. Create a new statefulSet of aio-dp-runner-worker(s) with the ESA mounts:

   ```bash
   kubectl apply -f stateful_worker.yaml -n azure-iot-operations
   ```

   When the `aio-dp-runner-worker-n` pods start, they include mounts to ESA, and the PVC status should reflect the attachment.
1. Once you reconfigure your Data Processor workers to have access to the ESA volumes, manually update the pipeline configuration to use a local path that corresponds to the mounted location of your ESA volume on the worker pods.

   To modify the pipeline, use `kubectl edit pipeline <name of your pipeline>`. In that pipeline, replace your output stage with the following YAML:

   ```yaml
   output:
     batch:
       path: .payload
       time: 60s
     description: An example file output stage
     displayName: Sample File output
     filePath: '{{{instanceId}}}/{{{pipelineId}}}/{{{partitionId}}}/{{{YYYY}}}/{{{MM}}}/{{{DD}}}/{{{HH}}}/{{{mm}}}/{{{fileNumber}}}'
     format:
       type: jsonStream
     rootDirectory: /mnt/esa
     type: output/file@v1
   ```
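For a rough sense of how `rootDirectory` and `filePath` combine, the sketch below substitutes sample values for the template placeholders. The placeholder names come from the output stage configuration; the expansion itself is performed by the Data Processor, so this local simulation is only an assumption about the resulting path shape:

```shell
# Hypothetical expansion of the output stage's filePath template.
# These sample values stand in for what the Data Processor would substitute.
instanceId="aiodp"; pipelineId="pipe1"; partitionId="0"
YYYY="2024"; MM="04"; DD="08"; HH="12"; mm="30"; fileNumber="1"

# rootDirectory plus the expanded filePath gives the full path on the ESA mount:
path="/mnt/esa/${instanceId}/${pipelineId}/${partitionId}/${YYYY}/${MM}/${DD}/${HH}/${mm}/${fileNumber}"
echo "$path"
```

Each file the stage writes lands under the ESA mount, so it is uploaded to the bound blob container.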
::: zone-end

::: zone pivot="attach-kubernetes"
## Configure a Kubernetes native application

1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `configPod.yaml` with the following contents:
   ```yaml
   kind: Deployment
   apiVersion: apps/v1
   metadata:
     name: example-static
     labels:
       app: example-static
     ### Uncomment the next line and add your namespace only if you are not using the default namespace; it must match the namespace specified in your pvc.yaml. ###
     # namespace: YOUR_NAMESPACE
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: example-static
     template:
       metadata:
         labels:
           app: example-static
       spec:
         containers:
         - image: mcr.microsoft.com/cbl-mariner/base/core:2.0
           name: mariner
           command:
           - sleep
           - infinity
           volumeMounts:
           ### This name must match the 'volumes.name' attribute in the next section. ###
           - name: blob
             ### This mountPath is where the PVC is attached to the pod's filesystem. ###
             mountPath: "/mnt/blob"
         volumes:
         ### User-defined 'name' that links the volumeMounts. It must match 'volumeMounts.name' as specified in the previous section. ###
         - name: blob
           persistentVolumeClaim:
             ### This claimName must match the 'name' of your PVC resource as defined in the PVC config. ###
             claimName: YOUR_CLAIM_NAME_FROM_YOUR_PVC
   ```
   > [!NOTE]
   > If you are using your own namespace, all future `kubectl` commands require `-n YOUR_NAMESPACE` to be appended to the command. For example, you must use `kubectl get pods -n YOUR_NAMESPACE` instead of the standard `kubectl get pods`.

1. To apply this .yaml file, run the following command:

   ```bash
   kubectl apply -f "configPod.yaml"
   ```

1. Use `kubectl get pods` to find the name of your pod. Copy this name, as you need it for the next step.

1. Run the following command, replacing `POD_NAME_HERE` with the copied value from the previous step:

   ```bash
   kubectl exec -it POD_NAME_HERE -- bash
   ```

1. Change directories into the `/mnt/blob` mount path as specified in your `configPod.yaml`.

1. As an example, to write a file, run `touch file.txt`.

1. In the Azure portal, navigate to your storage account and find the container you specified in your `pv.yaml` file. When you select the container, you see `file.txt` populated within it.
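The write-and-list steps above can be dry-run locally before touching the cluster. This sketch uses a temporary directory as a stand-in for the pod's `/mnt/blob` mount, so no kubectl or cluster access is involved:

```shell
# Local stand-in for the pod's /mnt/blob mount (a temp directory).
MOUNT="$(mktemp -d)"

# Same steps as inside the pod: enter the mount and write a file.
cd "$MOUNT"
touch file.txt

# List the mount; with ESA, file.txt would then appear in the blob container.
ls "$MOUNT"
```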
::: zone-end

## Next steps

After you complete these steps, begin monitoring your deployment using Azure Monitor and Kubernetes monitoring, or third-party monitoring with Prometheus and Grafana:

[Third-party monitoring](third-party-monitoring.md)
New file (45 additions, 0 deletions):
---
title: Azure Monitor and Kubernetes monitoring (preview)
description: Learn how to monitor your deployment using Azure Monitor and Kubernetes monitoring in Edge Storage Accelerator.
author: sethmanheim
ms.author: sethm
ms.topic: how-to
ms.date: 04/08/2024
---

# Azure Monitor and Kubernetes monitoring (preview)

This article describes how to monitor your deployment using Azure Monitor and Kubernetes monitoring.
## Azure Monitor

[Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) is a full-stack monitoring service that you can use to monitor Azure resources for their availability, performance, and operation.

## Azure Monitor metrics

[Azure Monitor metrics](/azure/azure-monitor/essentials/data-platform-metrics) is a feature of Azure Monitor that collects data from monitored resources into a time-series database.

These metrics can originate from a number of different sources, including native platform metrics, native custom metrics via [Azure Monitor agent Application Insights](/azure/azure-monitor/insights/insights-overview), and [Azure Managed Prometheus](/azure/azure-monitor/essentials/prometheus-metrics-overview).

Prometheus metrics can be stored in an [Azure Monitor workspace](/azure/azure-monitor/essentials/azure-monitor-workspace-overview) for subsequent visualization via [Azure Managed Grafana](/azure/managed-grafana/overview).

### Metrics configuration

To configure the scraping of Prometheus metrics data into Azure Monitor, see the [Azure Monitor managed service for Prometheus scrape configuration](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration#enable-pod-annotation-based-scraping) article, which builds upon [this configmap](https://aka.ms/azureprometheus-addon-settings-configmap). Edge Storage Accelerator specifies the `prometheus.io/scrape: true` and `prometheus.io/port` values, and relies on the default of `prometheus.io/path: '/metrics'`. You must specify the Edge Storage Accelerator installation namespace under `pod-annotation-based-scraping` to properly scope your metrics ingestion.
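For illustration, the relevant fragment of the settings configmap might look like the following. The key name follows the linked scrape-configuration article; the exact syntax may differ between agent versions, and the namespace value is a placeholder you must replace with your actual ESA installation namespace:

```yaml
# Hypothetical fragment of the metrics-addon settings configmap data section.
# Replace YOUR_ESA_NAMESPACE with the namespace where ESA is installed.
pod-annotation-based-scraping: |-
  podannotationnamespaceregex = "YOUR_ESA_NAMESPACE"
```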
Once the Prometheus configuration is complete, follow the [Azure Managed Grafana instructions](/azure/managed-grafana/overview) to create an [Azure Managed Grafana instance](/azure/managed-grafana/quickstart-managed-grafana-portal).

## Azure Monitor logs

[Azure Monitor logs](/azure/azure-monitor/logs/data-platform-logs) is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources, and can be used to [analyze this data in many ways](/azure/azure-monitor/logs/data-platform-logs#what-can-you-do-with-azure-monitor-logs).

### Logs configuration

If you want to access log data via Azure Monitor, you must enable [Azure Monitor Container Insights](/azure/azure-monitor/containers/container-insights-overview) on your Arc-enabled Kubernetes cluster, and then analyze the collected data with [a collection of views](/azure/azure-monitor/containers/container-insights-analyze) and [workbooks](/azure/azure-monitor/containers/container-insights-reports).

Additionally, you can use [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-tutorial) to query collected log data.

## Next steps

[Edge Storage Accelerator overview](overview.md)
New file (101 additions, 0 deletions):
---
title: Create a persistent volume (preview)
description: Learn about creating persistent volumes in Edge Storage Accelerator.
author: sethmanheim
ms.author: sethm
ms.topic: how-to
ms.date: 04/08/2024
---

# Create a persistent volume (preview)

This article describes how to create a persistent volume (PV) using storage key authentication.
## Prerequisites

This section describes the prerequisites for creating a persistent volume (PV).

1. Create a storage account by [following the instructions here](/azure/storage/common/storage-account-create?tabs=azure-portal).

   When you create your storage account, create it under the same resource group and region/location as your Kubernetes cluster.

1. Create a container in the storage account that you created in the previous step by [following the instructions here](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).

## Storage key authentication configuration

1. Create a file named **add-key.sh** with the following contents. No edits or changes are necessary:

   ```bash
   #!/usr/bin/env bash

   while getopts g:n:s: flag
   do
       case "${flag}" in
           g) RESOURCE_GROUP=${OPTARG};;
           s) STORAGE_ACCOUNT=${OPTARG};;
           n) NAMESPACE=${OPTARG};;
       esac
   done

   SECRET=$(az storage account keys list -g "$RESOURCE_GROUP" -n "$STORAGE_ACCOUNT" --query "[0].value" --output tsv)

   kubectl create secret generic -n "${NAMESPACE}" "${STORAGE_ACCOUNT}-secret" --from-literal=azurestorageaccountkey="${SECRET}" --from-literal=azurestorageaccountname="${STORAGE_ACCOUNT}"
   ```
1. After you create the file, make it executable and run the shell script using the following commands. Running these commands creates a secret named `{YOUR_STORAGE_ACCOUNT}-secret`. This secret name is used as the `secretName` value when you configure your PV:

   ```bash
   chmod +x add-key.sh
   ./add-key.sh -g "$YOUR_RESOURCE_GROUP_NAME" -s "$YOUR_STORAGE_ACCOUNT_NAME" -n "$YOUR_KUBERNETES_NAMESPACE"
   ```
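To see how the script's flags map to variables, here is a self-contained sketch of the same `getopts` loop, wrapped in a function so it can be run with sample values. The function name and sample arguments are illustrative only; `add-key.sh` passes the parsed values to `az` and `kubectl` instead of echoing them:

```shell
# Illustrative wrapper around the same getopts parsing used by add-key.sh.
parse_args() {
    local OPTIND flag RESOURCE_GROUP STORAGE_ACCOUNT NAMESPACE
    while getopts g:n:s: flag
    do
        case "${flag}" in
            g) RESOURCE_GROUP=${OPTARG};;
            s) STORAGE_ACCOUNT=${OPTARG};;
            n) NAMESPACE=${OPTARG};;
        esac
    done
    # Echo what add-key.sh would pass on to az and kubectl.
    echo "group=${RESOURCE_GROUP} account=${STORAGE_ACCOUNT} namespace=${NAMESPACE} secret=${STORAGE_ACCOUNT}-secret"
}

parse_args -g myResourceGroup -s mystorageacct -n azure-iot-operations
```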
## Create a Persistent Volume (PV)

You must create a Persistent Volume (PV) for Edge Storage Accelerator to create a local instance and bind to a remote blob storage account.

Note the `metadata: name:` value (`esa4` in this example), as you must specify it in the `spec: volumeName` of the PVC that binds to it. Use the storage account and container that you created as part of the [prerequisites](#prerequisites).

1. Create a file named **pv.yaml**:

   ```yaml
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     ### Create a name here ###
     name: CREATE_A_NAME_HERE
     ### Use a namespace that matches your intended consuming pod, or "default" ###
     namespace: INTENDED_CONSUMING_POD_OR_DEFAULT_HERE
   spec:
     capacity:
       ### This storage capacity value is not enforced at this layer. ###
       storage: 10Gi
     accessModes:
       - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     storageClassName: esa
     csi:
       driver: edgecache.csi.azure.com
       readOnly: false
       ### Make sure this volumeHandle is unique in the cluster. You must specify it in the spec: volumeName of the PVC. ###
       volumeHandle: YOUR_NAME_FROM_METADATA_NAME_IN_LINE_4_HERE
       volumeAttributes:
         protocol: edgecache
         edgecache-storage-auth: AccountKey
         ### Fill in the next two/three values with your information. ###
         ### From the previous step, this name is "{YOUR_STORAGE_ACCOUNT}-secret" ###
         secretName: YOUR_SECRET_NAME_HERE
         ### If you use a non-default namespace, uncomment the following line and add your namespace. ###
         # secretNamespace: YOUR_NAMESPACE_HERE
         containerName: YOUR_CONTAINER_NAME_HERE
   ```

1. To apply this .yaml file, run:

   ```bash
   kubectl apply -f "pv.yaml"
   ```
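As a concrete illustration, a filled-in pv.yaml using the `esa4` name from this article's example might look like the following. The storage secret name, container name, and namespace are hypothetical placeholders for your own values, not values defined by Edge Storage Accelerator:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: esa4                     # referenced by spec: volumeName in the PVC
  namespace: azure-iot-operations
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: esa
  csi:
    driver: edgecache.csi.azure.com
    readOnly: false
    volumeHandle: esa4           # must be unique within the cluster
    volumeAttributes:
      protocol: edgecache
      edgecache-storage-auth: AccountKey
      secretName: mystorageacct-secret   # "{YOUR_STORAGE_ACCOUNT}-secret" from add-key.sh
      containerName: mycontainer
```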
## Next steps

- [Create a persistent volume claim](create-pvc.md)
- [Edge Storage Accelerator overview](overview.md)
New file (59 additions, 0 deletions):
---
title: Create a Persistent Volume Claim (PVC) (preview)
description: Learn how to create a Persistent Volume Claim (PVC) in Edge Storage Accelerator.
author: sethmanheim
ms.author: sethm
ms.topic: how-to
ms.date: 04/08/2024
---

# Create a Persistent Volume Claim (PVC) (preview)

A Persistent Volume Claim (PVC) is a claim against the persistent volume; a Kubernetes pod uses the claim to mount the volume.

The storage size requested in the PVC does not affect the ceiling of blob storage used in the cloud to support this local cache. Note the name of this PVC, as you need it when you create your application pod.
## Create PVC

1. Create a file named **pvc.yaml** with the following contents:

   ```yaml
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     ### Create a name for your PVC ###
     name: CREATE_A_NAME_HERE
     ### Use a namespace that matches your intended consuming pod, or "default" ###
     namespace: INTENDED_CONSUMING_POD_OR_DEFAULT_HERE
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 5Gi
     storageClassName: esa
     volumeMode: Filesystem
     ### This name references your PV name in your PV config ###
     volumeName: INSERT_YOUR_PV_NAME
   status:
     accessModes:
       - ReadWriteMany
     capacity:
       storage: 5Gi
   ```

   > [!NOTE]
   > If you intend to use your PVC with the Azure IoT Operations Data Processor, use `azure-iot-operations` as the `namespace` on line 7.

1. To apply this .yaml file, run:

   ```bash
   kubectl apply -f "pvc.yaml"
   ```
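For a concrete illustration, a filled-in pvc.yaml that binds to a PV named `esa4` (the example name used in [Create a persistent volume](create-pv.md)) might look like this. The claim name and namespace are hypothetical placeholders, and the `status` section is omitted because the cluster populates it after binding:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: esa4                      # the claimName your pod's volume refers to
  namespace: azure-iot-operations # required for use with the AIO Data Processor
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: esa
  volumeMode: Filesystem
  volumeName: esa4                # must match metadata: name of the PV
```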
## Next steps

After you create a Persistent Volume Claim (PVC), attach your app (Azure IoT Operations Data Processor or Kubernetes native application):

[Attach your app](attach-app.md)
