Commit 4a69155: acrolinx fixes
1 parent 4f57e8f

1 file changed: +30 −28 lines
articles/cognitive-services/Custom-Vision-Service/iot-visual-alert-tutorial.md
# Tutorial: IoT Visual Alert sample

This sample app illustrates how to use Azure Custom Vision to train a device with a camera to detect visual states. You can run this detection scenario on an IoT device by using an ONNX model exported from the Custom Vision service.

A visual state describes the content of an image: an empty room or a room with people, an empty driveway or a driveway with a truck, and so on. In the image below, you can see the app detect when a banana or an apple is placed in front of the camera.

![Animation of a UI labeling fruit in front of the camera](./media/iot-visual-alert-tutorial/scoring.gif)

* [!INCLUDE [create-resources](includes/create-resources.md)]
* You'll also need to create an IoT Hub resource on Azure.
* Optionally, an IoT device running Windows 10 IoT Core version 17763 or higher. You can also run the app directly from your PC.
    * For Raspberry Pi 2 and 3, you can set up Windows 10 directly from the IoT Dashboard app. For other devices, such as DragonBoard, you'll need to flash it using the [eMMC method](https://docs.microsoft.com/windows/iot-core/tutorials/quickstarter/devicesetup#flashing-with-emmc-for-dragonboard-410c-other-qualcomm-devices). If you need help setting up a new device, see [Setting up your device](https://docs.microsoft.com/windows/iot-core/tutorials/quickstarter/devicesetup) in the Windows IoT documentation.

## About the app

The IoT Visual Alerts app runs in a continuous loop, switching between four different states as appropriate:

* **No Model**: A no-op state. The app continually sleeps for one second and checks the camera.
* **Capturing Training Images**: In this state, the app captures a picture and uploads it as a training image to the target Custom Vision project. The app then sleeps for 500 ms and repeats the procedure until the set maximum number of images is captured. Then it starts the training of the Custom Vision model.
* **Waiting For Trained Model**: In this state, the app calls the Custom Vision API every second to check whether the target project contains a trained iteration. When it finds one, it downloads the corresponding ONNX model to a local file and switches to the **Scoring** state.
* **Scoring**: In this state, the app uses Windows ML to evaluate a single frame from the camera against the exported ONNX model. The resulting image classification is displayed on the screen and sent as a message to the IoT Hub. The app then sleeps for one second before scoring a new frame.
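The four states above form a small state machine. The sample implements this in C# in _MainPage.xaml.cs_; the transitions can be sketched roughly as follows in Python (the function and parameter names are illustrative, not taken from the sample):

```python
from enum import Enum, auto

class State(Enum):
    NO_MODEL = auto()
    CAPTURING_TRAINING_IMAGES = auto()
    WAITING_FOR_TRAINED_MODEL = auto()
    SCORING = auto()

def next_state(state, images_captured=0, max_images=30,
               trained_iteration_ready=False):
    """Hypothetical transition logic mirroring the four states described above."""
    if state is State.CAPTURING_TRAINING_IMAGES and images_captured >= max_images:
        # All training images are uploaded; training starts, so wait for the model.
        return State.WAITING_FOR_TRAINED_MODEL
    if state is State.WAITING_FOR_TRAINED_MODEL and trained_iteration_ready:
        # A trained iteration was found and the ONNX model was downloaded.
        return State.SCORING
    # No Model and Scoring loop in place until an external event changes the state.
    return state
```

Entering **Capturing Training Images** is triggered externally (a UI button or an IoT Hub method call), which is why that transition doesn't appear in the sketch.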

The following files handle the main functionality of the app.

| File | Description |
|-------------|-------------|
| [MainPage.xaml](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/MainPage.xaml) | This file defines the XAML user interface. It hosts the web camera control and contains the labels used for status updates.|
| [MainPage.xaml.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/MainPage.xaml.cs) | This code controls the behavior of the XAML UI. It contains the state machine processing code.|
| [CustomVision\CustomVisionServiceWrapper.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/CustomVision/CustomVisionServiceWrapper.cs) | This class is a wrapper that handles integration with the Custom Vision Service.|
| [CustomVision\CustomVisionONNXModel.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/CustomVision/CustomVisionONNXModel.cs) | This class is a wrapper that handles integration with Windows ML for loading the ONNX model and scoring images against it.|
| [IoTHub\IotHubWrapper.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/IoTHub/IotHubWrapper.cs) | This class is a wrapper that handles integration with IoT Hub for uploading scoring results to Azure.|

## Setup

1. Set up the Custom Vision project:
    1. In the _CustomVision\CustomVisionServiceWrapper.cs_ script, update the `ApiKey` variable with your training key.
    1. Then update the `Endpoint` variable with the endpoint URL associated with your key.
    1. Update the `targetCVSProjectGuid` variable with the corresponding ID of the Custom Vision project that you want to use.

        > [!IMPORTANT]
        > This project needs to be a **Compact** image classification project, because we will be exporting the model to ONNX later.
1. Set up the IoT Hub resource:
    1. In the _IoTHub\IotHubWrapper.cs_ script, update the `s_connectionString` variable with the proper connection string for your device.
    1. In the Azure portal, load your IoT Hub instance, select **IoT devices** under **Explorers**, select your target device (or create one if needed), and find the connection string under **Primary Connection String**. The string contains your IoT Hub name, device ID, and shared access key; it has the following format: `{your iot hub name}.azure-devices.net;DeviceId={your device id};SharedAccessKey={your access key}`.
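As a sanity check on the format above, the string splits into a host name plus `key=value` pairs. A small Python sketch (not part of the sample; the helper name is hypothetical):

```python
def parse_device_connection_string(conn_str):
    # The first semicolon-separated segment is the hub host name; the
    # remaining segments are key=value pairs (DeviceId, SharedAccessKey).
    segments = conn_str.split(";")
    fields = {"HostName": segments[0]}
    for segment in segments[1:]:
        # partition splits on the FIRST "=", so base64 padding in the
        # shared access key value is preserved.
        key, _, value = segment.partition("=")
        fields[key] = value
    return fields
```

If a segment is missing or malformed, the corresponding field simply won't be present, which makes this a quick way to validate the value you paste into `s_connectionString`.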

## Run the sample

If you're running the sample on your PC, select **Local Machine** for the target device in Visual Studio, and select **x64** or **x86** for the target platform. Then press F5 to run the program. The app should start and display the live feed from the camera and a status message.

If you're deploying to an IoT device with an ARM processor, you'll need to select **ARM** as the target platform and **Remote Machine** as the target device. Provide the IP address of your device when prompted (it must be on the same network as your PC). You can get the IP address from the Windows IoT default app once you boot the device and connect it to the network. Press F5 to run the program.

When you run the app for the first time, it won't have any knowledge of visual states. It displays a status message that no model is available.

## Capture training images

To set up a model, you need to put the app in the **Capturing Training Images** state. Do one of the following steps:

* If you're running the app on a PC, use the button on the top-right corner of the UI.
* If you're running the app on an IoT device, call the `EnterLearningMode` method on the device through the IoT Hub. You can call it through the device entry in the IoT Hub menu in Azure, or with a tool such as [IoT Hub Device Explorer](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/tools/DeviceExplorer).

When the app enters the **Capturing Training Images** state, it captures about two images every second until it reaches the target number of images. By default, this is 30 images, but you can set this parameter by passing the desired number as an argument to the `EnterLearningMode` IoT Hub method.
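The capture loop just described (capture, upload, wait about 500 ms, repeat) can be sketched in Python; the sample does this in C#, and `capture_fn`/`upload_fn` here are stand-ins for the camera grab and the Custom Vision upload call:

```python
import time

def capture_training_images(capture_fn, upload_fn, max_images=30, delay_s=0.5):
    """Sketch of the Capturing Training Images state (names are illustrative)."""
    count = 0
    while count < max_images:
        image = capture_fn()   # grab a frame from the camera
        upload_fn(image)       # upload it as a training image to the project
        count += 1
        time.sleep(delay_s)    # ~500 ms between captures, about two per second
    return count
```

With the default values, a full capture pass takes roughly 15 seconds, which is the window you have to show the camera each visual state.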

While the app is capturing images, you must expose the camera to the types of visual states that you want to detect (for example, an empty room, a room with people, an empty desk, a desk with a toy truck, and so on).

## Build a Custom Vision model

Once the app has finished capturing images, it uploads them and then switches to the **Waiting For Trained Model** state. At this point, you need to go to the [Custom Vision portal](https://www.customvision.ai/) and build a model based on the new training images. The following animation shows an example of this process.

![Animation: tagging multiple images of bananas](./media/iot-visual-alert-tutorial/labeling.gif)

To repeat this process with your own scenario:

1. Sign in to the [Custom Vision portal](http://customvision.ai).
1. Find your target project, which should have all the training images that the app uploaded.
1. For each visual state that you want to identify, select the appropriate images and manually apply a tag.
    * For example, if your goal is to distinguish between an empty room and a room with people in it, we recommend tagging five or more images with people as a new class (**People**, for instance), and tagging five or more images without people as the **Negative** tag. This helps the model differentiate between the two states.
    * As another example, if your goal is to approximate how full a shelf is, you might use tags such as **EmptyShelf**, **PartiallyFullShelf**, and **FullShelf**.
1. When you're finished, select the **Train** button.
1. Once training is complete, the app on your PC or IoT device detects that a trained iteration is available. It starts the process of exporting the trained model to ONNX and downloading it to the device.
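The detection in the last step is the polling behavior of the **Waiting For Trained Model** state. A hedged Python sketch, where `get_trained_iteration` stands in for the Custom Vision API call the sample makes once per second:

```python
import time

def wait_for_trained_model(get_trained_iteration, poll_interval_s=1.0, max_polls=120):
    # Poll once per interval until the project reports a trained iteration.
    # Returns the iteration, or None if none appeared within max_polls attempts.
    for _ in range(max_polls):
        iteration = get_trained_iteration()
        if iteration is not None:
            return iteration   # caller then exports the ONNX model and downloads it
        time.sleep(poll_interval_s)
    return None
```

The `max_polls` cutoff is an assumption added here for safety; the sample simply keeps checking until a trained iteration exists.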

## Use the trained model

Once the app downloads the trained model, it switches to the **Scoring** state and starts scoring images from the camera in a continuous loop.

For each captured image, the app displays the top tag on the screen. If it doesn't recognize the visual state, it displays **No Matches**. The app also sends these results to the IoT Hub as messages; when a class is detected, the message includes the label, the confidence score, and a property called `detectedClassAlert`, which can be used by IoT Hub clients interested in doing fast message routing based on properties.

In addition, the sample uses a [Sense HAT library](https://github.com/emmellsoft/RPi.SenseHat) to detect when it's running on a Raspberry Pi with a Sense HAT unit, so that it can use the unit as an output display, setting all display lights to red whenever it detects a class and to blank when it doesn't detect anything.
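The scoring message described above (label, confidence, and a `detectedClassAlert` application property) might be assembled like this Python sketch; the body field names and the property value are assumptions for illustration, not taken from the sample:

```python
import json

def build_scoring_message(label, confidence, matched):
    # Message body plus application properties for an IoT Hub message.
    body = json.dumps({"label": label, "confidence": confidence})
    # detectedClassAlert lets IoT Hub clients route on the property alone,
    # without parsing the body (the property value here is an assumption).
    properties = {"detectedClassAlert": label} if matched else {}
    return body, properties
```

Routing on an application property rather than the body is what makes the "fast message routing" mentioned above possible: IoT Hub message routing can evaluate properties without deserializing the payload.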

## App life cycle

Delete your Custom Vision project if you no longer want to maintain it. On the [Custom Vision website](https://customvision.ai), navigate to **Projects** and select the trash can under your new project.

![Screenshot of a panel labeled My New Project with a trash can icon](./media/csharp-tutorial/delete_project.png)

## Next steps

> [!div class="nextstepaction"]
> [IoTVisualAlerts sample](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/tree/master/IoTVisualAlerts)

* Add an IoT Hub method to switch the app directly to the **Waiting For Trained Model** state. This way, you can train the model with images that aren't captured by the device itself and then push the new model to the device on command.
* Follow the [Visualize real-time sensor data](https://docs.microsoft.com/azure/iot-hub/iot-hub-live-data-visualization-in-power-bi) tutorial to create a Power BI dashboard to visualize the IoT Hub alerts sent by the sample.
* Follow the [IoT remote monitoring](https://docs.microsoft.com/azure/iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps) tutorial to create a Logic App that responds to the IoT Hub alerts when visual states are detected.
