---
title: Run Azure IoT Edge on Ubuntu Virtual Machines
description: How to run Azure IoT Edge on an Ubuntu virtual machine
author: PatAltimore
ms.service: iot-edge
ms.custom: devx-track-azurecli
services: iot-edge
ms.topic: how-to
ms.date: 06/03/2024
ms.author: pdecarlo
---
# Run Azure IoT Edge on Ubuntu Virtual Machines
To learn more about how the IoT Edge runtime works and what components are included, see [Understand the Azure IoT Edge runtime and its architecture](iot-edge-runtime.md).
This article lists the steps to deploy an Ubuntu virtual machine with the Azure IoT Edge runtime installed and configured using a presupplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy) project repository.
On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/main/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
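A minimal sketch of what such a cloud-init file can look like is shown below. This is illustrative only: the authoritative version is `cloud-init.txt` in the iotedge-vm-deploy repository, and the package-repository setup, Ubuntu release, and `<DEVICE_CONNECTION_STRING>` placeholder here are assumptions.

```yaml
#cloud-config
# Illustrative sketch only -- see cloud-init.txt in Azure/iotedge-vm-deploy
# for the real file used by the template.
runcmd:
  # Register the Microsoft package repository (the Ubuntu release is an assumption).
  - wget https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb
  - dpkg -i packages-microsoft-prod.deb
  - apt-get update
  # Install the IoT Edge runtime.
  - apt-get install -y aziot-edge
  # Set the supplied device connection string before the runtime starts.
  - iotedge config mp --connection-string '<DEVICE_CONNECTION_STRING>'
  - iotedge config apply
```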
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure-button.md) allows for streamlined deployment of [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) maintained on GitHub.
This section demonstrates usage of the Deploy to Azure Button contained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy) project repository.
1. Deploy an Azure IoT Edge-enabled Linux VM using the iotedge-vm-deploy Azure Resource Manager template. To begin, select the following button:
   [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fmain%2FedgeDeploy.json)
|**Resource group**| An existing or newly created Resource Group to contain the virtual machine and its associated resources. |
|**Region**| The [geographic region](https://azure.microsoft.com/global-infrastructure/locations/) to deploy the virtual machine into. This value defaults to the location of the selected Resource Group. |
|**DNS Label Prefix**| A required value of your choosing that is used to prefix the hostname of the virtual machine. |
|**Admin Username**| A username that is granted root privileges on deployment. |
|**Device Connection String**| A [device connection string](./how-to-provision-single-device-linux-symmetric.md#view-registered-devices-and-retrieve-provisioning-information) for a device that was created within your intended [IoT Hub](../iot-hub/about-iot-hub.md). |
|**VM Size**| The [size](../cloud-services/cloud-services-sizes-specs.md) of the virtual machine to be deployed. |
|**Ubuntu OS Version**| The version of the Ubuntu OS to be installed on the base virtual machine. |
|**Authentication Type**| Choose **sshPublicKey** or **password** depending on your preference. |
|**Admin Password or Key**| The value of the SSH Public Key or the value of the password depending on the choice of Authentication Type. |
Select `Next : Review + create` to review the terms, then select **Create** to begin the deployment.
1. Verify that the deployment completed successfully. A virtual machine resource is deployed into the selected resource group. Take note of the machine name, which should be in the format `vm-0000000000000`. Also take note of the associated **DNS Name**, which should be in the format `<dnsLabelPrefix>.<location>.cloudapp.azure.com`.
To authenticate with an SSH key, specify an **authenticationType** of `sshPublicKey`, then provide the value of the SSH key in the **adminPasswordOrKey** parameter. See the following example:
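A deployment command along these lines is one way to pass those parameters. This is a sketch, not the template's documented invocation: the parameter names are taken from the table above, while the resource group name, DNS prefix, key path, and connection-string placeholder are assumptions you'd replace with your own values.

```azurecli
az deployment group create \
  --resource-group IoTEdgeResources \
  --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/main/edgeDeploy.json" \
  --parameters dnsLabelPrefix='my-edge-vm' \
  --parameters adminUsername='azureuser' \
  --parameters authenticationType='sshPublicKey' \
  --parameters adminPasswordOrKey="$(cat ~/.ssh/id_rsa.pub)" \
  --parameters deviceConnectionString='<DEVICE_CONNECTION_STRING>'
```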
articles/iot-edge/how-to-monitor-iot-edge-deployments.md
author: PatAltimore
ms.author: patricka
ms.date: 06/03/2024
ms.topic: conceptual
ms.reviewer: veyalla
ms.service: iot-edge
The deployment show command takes the following parameters:
* **--deployment-id** - The name of the deployment that exists in the IoT hub. Required parameter.
* **--hub-name** - Name of the IoT hub in which the deployment exists. The hub must be in the current subscription. Switch to the desired subscription with the command `az account set -s [subscription name]`.
Inspect the deployment in the command window. The **metrics** property lists a count for each metric that is evaluated by each hub:
* **targetedCount** - A system metric that specifies the number of device twins in IoT Hub that match the targeting condition.
* **appliedCount** - A system metric that specifies the number of devices that have had the deployment content applied to their module twins in IoT Hub.
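As a sketch of how these counts can be used, the following Python snippet computes a coverage ratio from the **metrics** section of the command's JSON output. The sample document is invented for illustration; only the `targetedCount` and `appliedCount` field names come from the description above.

```python
import json

# Invented, trimmed sample of the JSON that `az iot edge deployment show`
# might return -- only the metrics section described above.
output = """
{
  "metrics": {
    "results": {
      "targetedCount": 10,
      "appliedCount": 8
    }
  }
}
"""

results = json.loads(output)["metrics"]["results"]

# Ratio of devices with the deployment applied to all targeted devices.
coverage = results["appliedCount"] / results["targetedCount"]
print(f"coverage: {coverage:.0%}")  # coverage: 80%
```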
articles/iot-edge/how-to-observability.md
---
title: How to implement IoT Edge observability using monitoring and troubleshooting
description: Learn how to build an observability solution for an IoT Edge System
author: PatAltimore
ms.author: iefedore
ms.date: 06/03/2024
ms.topic: how-to
ms.service: iot-edge
services: iot-edge
---
### La Niña
:::image type="content" source="media/how-to-observability/la-nina-high-level.png" alt-text="Illustration of La Niña solution collecting surface temperature from sensors into Azure IoT Edge.":::
The La Niña service measures surface temperature in the Pacific Ocean to predict La Niña winters. There are many buoys in the ocean with IoT Edge devices that send the surface temperature to Azure Cloud. The telemetry data with the temperature is pre-processed by a custom module on the IoT Edge device before sending it to the cloud. In the cloud, the data is processed by backend Azure Functions and saved to Azure Blob Storage. The clients of the service (ML inference workflows, decision making systems, various UIs, etc.) can pick up messages with temperature data from the Azure Blob Storage.
## Measuring and monitoring
To understand what we're going to monitor, we must understand what the service actually does and what the service clients expect from the system. In this scenario, the expectations of a common La Niña service consumer may be categorized by the following factors:
* **Coverage**. The data is coming from most installed buoys
* **Freshness**. The data coming from the buoys is fresh and relevant
* **Throughput**. The temperature data is delivered from the buoys without significant delays
* **Correctness**. The ratio of lost messages (errors) is small
The satisfaction regarding these factors means that the service works according to the client's expectations.
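The factor/indicator idea can be sketched in a few lines of Python. The fleet data and the 95% objective below are invented sample values, not figures from the scenario.

```python
# Invented sample fleet: nine healthy buoys and one malfunctioning one.
devices = [
    {"id": f"buoy-{i}", "delivered": True, "fresh": True, "fast": True}
    for i in range(9)
]
devices.append({"id": "buoy-9", "delivered": False, "fresh": False, "fast": False})

def indicator(flag: str) -> float:
    """Ratio of devices satisfying a factor to the total number of devices."""
    return sum(1 for d in devices if d[flag]) / len(devices)

# Map each factor to its measured service level indicator.
indicators = {
    "Correctness": indicator("delivered"),
    "Freshness": indicator("fresh"),
    "Throughput": indicator("fast"),
}

SLO = 0.95  # sample objective: 95% of devices must satisfy each factor
violated = sorted(name for name, value in indicators.items() if value < SLO)
print(violated)  # every factor sits at 0.9, below the 0.95 objective
```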
|**Service Level Indicator**|**Factor**|
|-------------|----------|
|Ratio of devices successfully delivering messages to the total number of devices|Correctness|
|Ratio of devices delivering messages fast to the total number of devices| Throughput |
With that done, we can apply a sliding scale on each indicator and define exact threshold values that represent what it means for the client to be "satisfied". For this scenario, we select sample threshold values as laid out in the table below with formal Service Level Objectives (SLOs):
|**Service Level Objective**|**Factor**|
|-------------|----------|
At this point, it's clear what we're going to measure and what threshold values we're going to use to determine if the service performs according to the expectations.
It's a common practice to measure service level indicators, like the ones we've defined, by the means of **metrics**. This type of observability data is considered to be relatively small in values. It's produced by various system components and collected in a central observability backend to be monitored with dashboards, workbooks and alerts.
Let's clarify what components the La Niña service consists of:
An IoT Hub device comes with system modules `edgeHub` and `edgeAgent`. These modules expose through a Prometheus endpoint [a list of built-in metrics](how-to-access-built-in-metrics.md). These metrics are collected and pushed to Azure Monitor Log Analytics service by the [metrics collector module](how-to-collect-and-transport-metrics.md) running on the IoT Edge device. In addition to the system modules, the `Temperature Sensor` and `Filter` modules can be instrumented with some business specific metrics too. However, the service level indicators that we've defined can be measured with the built-in metrics only. So, we don't really need to implement anything else at this point.
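As an illustration of the exposition format those Prometheus endpoints use, here's a minimal parsing sketch. The metric name `edgehub_messages_received_total` appears in the built-in metrics list, but the second metric name, the label values, and the numbers are invented.

```python
# Invented sample of Prometheus exposition text, shaped like what an
# edgeHub metrics endpoint returns.
sample = """\
# TYPE edgehub_messages_received_total counter
edgehub_messages_received_total{id="device1/filter"} 120
edgehub_messages_sent_total{id="device1/filter"} 49
"""

def parse_exposition(text: str) -> dict:
    """Parse 'name{labels} value' lines into a dict, skipping comment lines."""
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        series, value = line.rsplit(" ", 1)
        metrics[series] = float(value)
    return metrics

metrics = parse_exposition(sample)
print(metrics['edgehub_messages_received_total{id="device1/filter"}'])  # 120.0
```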
In this scenario, we have a fleet of 10 buoys. One of the buoys is intentionally set up to malfunction so that we can demonstrate the issue detection and the follow-up troubleshooting.
### How do we monitor
We're going to monitor Service Level Objectives (SLO) and corresponding Service Level Indicators (SLI) with Azure Monitor Workbooks. This scenario deployment includes the *La Nina SLO/SLI* workbook assigned to the IoT Hub.
:::image type="content" source="media/how-to-observability/dashboard-path.png" alt-text="Screenshot of IoT Hub monitoring showing the workbooks from the gallery in the Azure portal.":::
To achieve the best user experience, the workbooks are designed to follow the *glance* -> *scan* -> *commit* concept:
#### Glance
The `Temperature Sensor` (tempSensor) module produced 120 telemetry messages, but only 49 of them went upstream to the cloud.
The first step is to check the logs produced by the `Filter` module. Select **Troubleshoot live!**, then select the `Filter` module.
:::image type="content" source="media/how-to-observability/basic-logs.png" alt-text="Screenshot of the filter module log within the Azure portal.":::
Analysis of the module logs doesn't reveal the issue. The module receives messages, and there are no errors. Everything looks good here.
The La Niña service uses [OpenTelemetry](https://opentelemetry.io) to produce and collect traces and logs in Azure Monitor.
:::image type="content" source="media/how-to-observability/la-nina-detailed.png" alt-text="Diagram illustrating an IoT Edge device sending telemetry data to Azure Monitor.":::
IoT Edge modules `Temperature Sensor` and `Filter` export the logs and tracing data via OTLP (OpenTelemetry Protocol) to the [OpenTelemetryCollector](https://opentelemetry.io/docs/collector/) module, running on the same edge device. The `OpenTelemetryCollector` module, in its turn, exports logs and traces to Azure Monitor Application Insights service.
:::image type="content" source="media/how-to-observability/application-map.png" alt-text="Screenshot of the application map in Application Insights.":::
From this map we can drill down to the traces. Some of them look normal and contain all the steps of the flow, while others are short, with nothing happening after the `Filter` module.
:::image type="content" source="media/how-to-observability/traces.png" alt-text="Screenshot of monitoring traces.":::
Let's analyze one of those short traces and find out what was happening in the `Filter` module, and why it didn't send the message upstream to the cloud.
Our logs are correlated with the traces, so we can query logs specifying the `TraceId` and `SpanId` to retrieve logs corresponding exactly to this execution instance of the `Filter` module:
:::image type="content" source="media/how-to-observability/logs.png" alt-text="Sample trace query filtering based on Trace ID and Span ID.":::
The logs show that the module received a message with a temperature of 70.465 degrees. But the filtering threshold configured on this device is 30 to 70, so the message simply didn't pass the threshold. Apparently, this specific device was configured incorrectly. This is the cause of the issue we detected while monitoring the La Niña service performance with the workbook.
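The correlation step amounts to filtering records by their trace context. The sketch below shows the idea in plain Python; the record shape and values are an invented stand-in for the exported log rows, and in Azure Monitor you would express the same filter as a log query on the trace and span columns.

```python
# Invented sample of exported log records carrying OpenTelemetry trace context.
logs = [
    {"TraceId": "a1", "SpanId": "s1", "message": "Received message: 70.465 degrees"},
    {"TraceId": "a1", "SpanId": "s1", "message": "Outside threshold 30-70, message dropped"},
    {"TraceId": "b2", "SpanId": "s9", "message": "Received message: 42.1 degrees"},
]

def logs_for_span(records, trace_id, span_id):
    """Return only the records for one execution instance (one span)."""
    return [r for r in records
            if r["TraceId"] == trace_id and r["SpanId"] == span_id]

matched = logs_for_span(logs, "a1", "s1")
print(len(matched))  # 2
```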
articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-symmetric.md
---
title: Create and provision IoT Edge devices using symmetric keys on Linux on Windows
description: Use symmetric key attestation to test provisioning Linux on Windows devices at scale for Azure IoT Edge with device provisioning service
author: PatAltimore
ms.author: patricka
ms.date: 06/03/2024
ms.topic: how-to
ms.service: iot-edge
ms.custom: linux-related-content
services: iot-edge
---
---
1. Sign in to your IoT Edge for Linux on Windows virtual machine using the following command in your PowerShell session:
```powershell
Connect-EflowVm
```
>
> This error is expected on a newly provisioned device because the IoT Edge Hub module isn't running. To resolve the error, in IoT Hub, set the modules for the device and create a deployment. Creating a deployment for the device starts the modules on the device including the IoT Edge Hub module.
When you create a new IoT Edge device, it displays the status code `417 -- The device's deployment configuration is not set` in the Azure portal. This status is normal, and means that the device is ready to receive a module deployment.
<!-- Uninstall IoT Edge for Linux on Windows H2 and content -->