
Commit 9342d04

Fixes
1 parent 24a9486 commit 9342d04

File tree

3 files changed (+41, -34 lines)


articles/azure-video-analyzer/video-analyzer-docs/deploy-on-stack-edge.md

Lines changed: 3 additions & 3 deletions
@@ -433,7 +433,7 @@ To connect to your IoT hub by using the Azure IoT Tools extension, do the follow
 
 Kubernetes supports [pod affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity), which can schedule pods on the same node. To achieve co-location, you can add to the inference pod manifest a `podAffinity` section that references the Video Analyzer module.
 
-```json
+```yaml
 # Example Video Analyzer module deployment match labels
 selector:
   matchLabels:
@@ -454,8 +454,8 @@ To connect to your IoT hub by using the Azure IoT Tools extension, do the follow
       topologyKey: "kubernetes.io/hostname"
 ```
 
-* **You get a 404 error code when you use the *rtspsim* module**
-
+* **You get a 404 error code when you use the *rtspsim* module**
+
     The container reads videos from exactly one folder within the container. If you map/bind an external folder into a folder that already exists within the container image, Docker hides the files present in the container image.
 
    For example, with no bindings, the container might have these files:
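For reference, here is a minimal sketch of what an inference pod manifest could look like once the `podAffinity` section from the hunk above is in place. The `app: videoanalyzer` label, the pod name, and the image are hypothetical placeholders; use the `matchLabels` from your actual Video Analyzer module deployment.

```yaml
# Sketch of an inference pod pinned to the same node as the Video Analyzer
# module. The label, names, and image below are placeholders, not the
# module's real deployment labels.
apiVersion: v1
kind: Pod
metadata:
  name: inference-server
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: videoanalyzer
          # One topology domain per node, so both pods land on the same node
          topologyKey: "kubernetes.io/hostname"
  containers:
    - name: inference
      image: <your-inference-image>
```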

articles/azure-video-analyzer/video-analyzer-docs/troubleshoot.md

Lines changed: 5 additions & 5 deletions
@@ -114,13 +114,13 @@ Video Analyzer via the pipeline extension processors can extend the pipeline to
 
 As an example, here is a Yolo v3 container that's running on a local machine with an IP address of 172.17.0.3.
 
-```
+```shell
 curl -X POST http://172.17.0.3/score -H "Content-Type: image/jpeg" --data-binary @<fullpath to jpg>
 ```
 
 Result returned:
 
-```
+```json
 {"inferences": [{"type": "entity", "entity": {"tag": {"value": "car", "confidence": 0.8668569922447205}, "box": {"l": 0.3853073438008626, "t": 0.6063712999658677, "w": 0.04174524943033854, "h": 0.02989496027381675}}}]}
 ```
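If you just want to spot-check a detection instead of reading the raw JSON, one option (a sketch, assuming `jq` is installed and `sample.jpg` is a local test image) is:

```shell
# Send a test frame and extract only the first detection's tag
curl -s -X POST http://172.17.0.3/score -H "Content-Type: image/jpeg" \
  --data-binary @sample.jpg | jq '.inferences[0].entity.tag'
```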

@@ -135,7 +135,7 @@ Video Analyzer via the pipeline extension processors can extend the pipeline to
 
 Video Analyzer provides a direct method-based programming model that allows you to set up multiple topologies and multiple pipelines. As part of the topology and pipeline setup, you invoke multiple direct method calls on the IoT Edge module. If you invoke these method calls in parallel, especially the ones that start and stop the pipelines, you might experience a timeout failure such as the following:
 
-   Assembly Initialization method Microsoft.Media.VideoAnalyzer.Test.Feature.Edge.AssemblyInitializer.InitializeAssemblyAsync threw exception. Microsoft.Azure.Devices.Common.Exceptions.IotHubException: <br/> `{"Message":"{\"errorCode\":504101,\"trackingId\":\"55b1d7845498428593c2738d94442607-G:32-TimeStamp:05/15/2020 20:43:10-G:10-TimeStamp:05/15/2020 20:43:10\",\"message\":\"Timed out waiting for the response from device.\",\"info\":{},\"timestampUtc\":\"2020-05-15T20:43:10.3899553Z\"}","ExceptionMessage":""}. Aborting test execution. `
+   Assembly Initialization method Microsoft.Media.VideoAnalyzer.Test.Feature.Edge.AssemblyInitializer.InitializeAssemblyAsync threw exception. Microsoft.Azure.Devices.Common.Exceptions.IotHubException: <br/> `{"Message":"{\"errorCode\":504101,\"trackingId\":\"55b1d7845498428593c2738d94442607-G:32-TimeStamp:05/15/2020 20:43:10-G:10-TimeStamp:05/15/2020 20:43:10\",\"message\":\"Timed out waiting for the response from device.\",\"info\":{},\"timestampUtc\":\"2020-05-15T20:43:10.3899553Z\"}","ExceptionMessage":""}. Aborting test execution.`
 
 We recommend that you _not_ call direct methods in parallel. Call them sequentially (that is, make one direct method call only after the previous one is finished).
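For example, with the Azure CLI you can serialize the calls by letting each command finish before issuing the next. This is a sketch: the hub, device, and module names and the payload files are placeholders, and the method names shown (`pipelineTopologySet`, `livePipelineSet`, `livePipelineActivate`) should be checked against the direct methods your module version exposes.

```shell
# Each invocation completes before the next one starts, so the module never
# handles two pipeline lifecycle methods at once. All names are placeholders.
az iot hub invoke-module-method -n <hub-name> -d <device-id> -m avaedge \
  --method-name pipelineTopologySet --method-payload topology.json
az iot hub invoke-module-method -n <hub-name> -d <device-id> -m avaedge \
  --method-name livePipelineSet --method-payload pipeline.json
az iot hub invoke-module-method -n <hub-name> -d <device-id> -m avaedge \
  --method-name livePipelineActivate --method-payload '{"@apiVersion": "1.1", "name": "pipeline1"}'
```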

@@ -158,7 +158,7 @@ To gather the relevant logs that should be added to the ticket, follow the instr
 
 On the IoT Edge device, use the following command after replacing `<avaedge>` with the name of your Video Analyzer edge module:
 
-```cmd
+```shell
 sudo iotedge restart <avaedge>
 ```
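After the restart, a quick sanity check (assuming the module is named `avaedge`) is to confirm the module is running again and look at its most recent output:

```shell
# List module states, then tail the module's recent log lines
sudo iotedge list
sudo iotedge logs avaedge --tail 50
```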

@@ -208,7 +208,7 @@ When you need to gather logs from an IoT Edge device, the easiest way is to use
 
 1. Run the `support-bundle` command with the `--since` flag to specify how much time you want your logs to cover. For example, `2h` will get logs for the last two hours. You can change the value of this flag to include logs for different periods.
 
-   ```
+   ```shell
    sudo iotedge support-bundle --since 2h
    ```
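The bundle is written out as a zip file. If you want a wider window and an explicit file name for the ticket, a sketch (assuming the `--output` flag is available in your `iotedge` version):

```shell
# Collect the last six hours of logs into a named bundle
sudo iotedge support-bundle --since 6h --output ava-support-bundle.zip
```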

articles/cognitive-services/Computer-vision/spatial-analysis-operations.md

Lines changed: 33 additions & 26 deletions
@@ -90,32 +90,33 @@ This is an example of the DETECTOR_NODE_CONFIG parameters for all Spatial Analys
 ### Camera calibration node parameter settings
 This is an example of the `CAMERACALIBRATOR_NODE_CONFIG` parameters for all spatial analysis operations.
 
-```
+```json
 {
-  "gpu_index": 0,
-  "do_calibration": true,
-  "enable_breakpad": false,
-  "enable_orientation": true
+    "gpu_index": 0,
+    "do_calibration": true,
+    "enable_breakpad": false,
+    "enable_orientation": true
 }
 ```
 
-| Name | Type| Description|
+| Name | Type | Description |
 |---------|---------|---------|
 | `do_calibration` | bool | Indicates that calibration is turned on. `do_calibration` must be `true` for **cognitiveservices.vision.spatialanalysis-persondistance** to function properly. `do_calibration` is set by default to `True`. |
 | `enable_breakpad`| bool | Indicates whether to enable breakpad, which is used to generate a crash dump for debug use. It is `false` by default. If you set it to `true`, you also need to add `"CapAdd": ["SYS_PTRACE"]` in the `HostConfig` part of the container `createOptions`. By default, the crash dump is uploaded to the [RealTimePersonTracking](https://appcenter.ms/orgs/Microsoft-Organization/apps/RealTimePersonTracking/crashes/errors?version=&appBuild=&period=last90Days&status=&errorType=all&sortCol=lastError&sortDir=desc) AppCenter app. If you want the crash dumps to be uploaded to your own AppCenter app, you can override the environment variable `RTPT_APPCENTER_APP_SECRET` with your app's app secret. |
 | `enable_orientation` | bool | Indicates whether you want to compute the orientation for the detected people or not. `enable_orientation` is set by default to `True`. |
 
 ### Calibration config
+
 This is an example of the `CALIBRATION_CONFIG` parameters for all spatial analysis operations.
 
-```
+```json
 {
-  "enable_recalibration": true,
-  "calibration_quality_check_frequency_seconds": 86400,
-  "calibration_quality_check_sample_collect_frequency_seconds": 300,
-  "calibration_quality_check_one_round_sample_collect_num": 10,
-  "calibration_quality_check_queue_max_size": 1000,
-  "calibration_event_frequency_seconds": -1
+    "enable_recalibration": true,
+    "calibration_quality_check_frequency_seconds": 86400,
+    "calibration_quality_check_sample_collect_frequency_seconds": 300,
+    "calibration_quality_check_one_round_sample_collect_num": 10,
+    "calibration_quality_check_queue_max_size": 1000,
+    "calibration_event_frequency_seconds": -1
 }
 ```
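As the `enable_breakpad` row above notes, crash dumps require the `SYS_PTRACE` capability. A minimal sketch of the relevant fragment of the container's `createOptions` (your real `createOptions` will contain other settings as well):

```json
{
  "HostConfig": {
    "CapAdd": ["SYS_PTRACE"]
  }
}
```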

@@ -128,10 +129,11 @@ This is an example of the `CALIBRATION_CONFIG` parameters for all spatial analys
 | `calibration_quality_check_queue_max_size` | int | Maximum number of data samples to store when the camera model is calibrated. Default is `1000`. Only used when `enable_recalibration=True`. |
 | `calibration_event_frequency_seconds` | int | Output frequency (seconds) of camera calibration events. A value of `-1` indicates that the camera calibration should not be sent unless the camera calibration info has changed. Default is `-1`. |
 
-
 ### Camera calibration output
+
 This is an example of the output from camera calibration if enabled. Ellipses indicate more of the same type of objects in a list.
-```
+
+```json
 {
   "type": "cameraCalibrationEvent",
   "sourceInfo": {
@@ -231,25 +233,30 @@ You can configure the speed computation through the tracker node parameter setti
 |---------|---------|---------|
 | `enable_speed` | bool | Indicates whether you want to compute the speed for the detected people or not. `enable_speed` is set by default to `True`. It is highly recommended that you enable both speed and orientation to get the best estimated values. |
 
-
 ## Spatial Analysis operations configuration and output
+
 ### Zone configuration for cognitiveservices.vision.spatialanalysis-personcount
 
-This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that configures a zone. You may configure multiple zones for this operation.
+This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that configures a zone. You may configure multiple zones for this operation.
 
 ```json
 {
-  "zones":[{
-    "name": "lobbycamera",
-    "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
-    "events":[{
-      "type": "count",
-      "config":{
-        "trigger": "event",
+  "zones": [
+    {
+      "name": "lobbycamera",
+      "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
+      "events": [
+        {
+          "type": "count",
+          "config": {
+            "trigger": "event",
             "threshold": 16.00,
             "focus": "footprint"
-      }
-    }]
+          }
+        }
+      ]
+    }
+  ]
 }
 ```
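For reference, parameters like SPACEANALYTICS_CONFIG are supplied to the spatial analysis container as stringified JSON in an environment variable. Here is a minimal sketch of how the zone configuration above could be wired into an IoT Edge deployment manifest's `env` section; the module name `spatialanalysis` is a placeholder for however your deployment names the module:

```json
{
  "spatialanalysis": {
    "env": {
      "SPACEANALYTICS_CONFIG": {
        "value": "{\"zones\":[{\"name\":\"lobbycamera\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]],\"events\":[{\"type\":\"count\",\"config\":{\"trigger\":\"event\",\"threshold\":16.00,\"focus\":\"footprint\"}}]}]}"
      }
    }
  }
}
```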
