articles/azure-monitor/essentials/edge-pipeline-configure.md
author: bwren
---
# Configuration of Azure Monitor edge pipeline
[Azure Monitor pipeline](./pipeline-overview.md) is a data ingestion pipeline that provides consistent and centralized data collection for Azure Monitor. The [edge pipeline](./pipeline-overview.md#edge-pipeline) enables at-scale collection and routing of telemetry data before it's sent to the cloud. It can cache data locally and sync with the cloud when connectivity is restored, and it can route telemetry to Azure Monitor when the network is segmented and data can't be sent directly to the cloud. This article describes how to enable and configure the edge pipeline in your environment.
## Overview
:::image type="content" source="media/edge-pipeline/layered-network.png" lightbox="media/edge-pipeline/layered-network.png" alt-text="Diagram of a layered network for Azure Monitor edge pipeline." border="false":::
To use Azure Monitor pipeline in a layered network configuration, you must add the following entries to the allowlist for the Arc-enabled Kubernetes cluster. See [Configure Azure IoT Layered Network Management Preview on level 4 cluster](/azure/iot-operations/manage-layered-network/howto-configure-l4-cluster-layered-network?tabs=k3s#configure-layered-network-management-preview-service).
Edge devices in some environments may experience intermittent connectivity due to various factors such as network congestion, signal interference, power outage, or mobility. In these environments, you can configure the edge pipeline to cache data by creating a [persistent volume](https://kubernetes.io) in your cluster. The process for this will vary based on your particular environment, but the configuration must meet the following requirements:
- Metadata namespace must be the same as the specified instance of Azure Monitor pipeline.
- Access mode must support `ReadWriteMany`.
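As an illustration, a persistent volume claim that satisfies these requirements might look like the following sketch. The name, storage class, and size are assumptions for illustration; substitute values appropriate for your environment.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-monitor-pipeline-cache   # hypothetical name
  namespace: <pipeline namespace>      # must match the namespace of the Azure Monitor pipeline instance
spec:
  accessModes:
    - ReadWriteMany                    # required access mode
  storageClassName: azurefile          # assumption; use a storage class available in your cluster that supports ReadWriteMany
  resources:
    requests:
      storage: 10Gi                    # assumption; size the cache for your expected data volume
```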
Once the volume is created in the appropriate namespace, configure it using parameters in the pipeline configuration file below.
In the Azure portal, navigate to the **Kubernetes services** menu and select your cluster.
Click on the entry for **\<pipeline name\>-external-service** and note the IP address and port in the **Endpoints** column. This is the external IP address and port that your clients will send data to.
### Verify heartbeat
Each pipeline configured in your pipeline instance will send a heartbeat record to the `Heartbeat` table in your Log Analytics workspace every minute. The contents of the `OSMajorVersion` column should match the name of your pipeline instance. If there are multiple workspaces in the pipeline instance, then the first one configured will be used.
Retrieve the heartbeat records using a log query as in the following example:
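A query along the following lines returns recent heartbeat records. This is a sketch based on the `OSMajorVersion` behavior described above; `<pipeline name>` is a placeholder for your pipeline instance name.

```kusto
Heartbeat
| where OSMajorVersion == "<pipeline name>"   // filter to heartbeats from this pipeline instance
| sort by TimeGenerated desc
| take 10
```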
## Verify data
The final step is to verify that the data is received in the Log Analytics workspace. You can perform this verification by running a query in the Log Analytics workspace to retrieve data from the table.
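For example, if the pipeline is collecting Syslog data, a query like the following sketch retrieves the most recent records. The table and column names assume the standard `Syslog` schema in the Log Analytics workspace.

```kusto
Syslog
| sort by TimeGenerated desc   // most recent records first
| take 100
```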
:::image type="content" source="media/edge-pipeline/log-results-syslog.png" lightbox="media/edge-pipeline/log-results-syslog.png" alt-text="Screenshot of a log query that returns results of Syslog collection." :::