
Commit f3678e8

committed
pr fixes
1 parent fca77d9 commit f3678e8

File tree

5 files changed (+10 / -10 lines)


articles/ai-services/computer-vision/includes/model-customization-deprecation.md

Lines changed: 1 addition & 1 deletion
@@ -11,6 +11,6 @@ ms.author: pafarley
---

> [!IMPORTANT]
- > This feature is now deprecated. On January 10, 2025, Azure AI Image Analysis 4.0 Custom Image Classification, Custom Object Detection, and Product Recognition preview API will be retired: after this date, API calls to these services will fail.
+ > This feature is now deprecated. On January 10, 2025, Azure AI Image Analysis 4.0 Custom Image Classification, Custom Object Detection, and Product Recognition preview API will be retired. After this date, API calls to these services will fail.
>
> To maintain a smooth operation of your models, transition to [Azure AI Custom Vision](/azure/ai-services/Custom-Vision-Service/overview), which is now generally available. Custom Vision offers similar functionality to these retiring features.

articles/ai-services/computer-vision/spatial-analysis-camera-placement.md

Lines changed: 1 addition & 1 deletion
@@ -136,7 +136,7 @@ The following illustration provides simulations for the left and right camera vi
| ---------------------------------- | ----------------------------------- |
| ![Left angle for linear queue](./media/spatial-analysis/camera-angle-linear-left.png) | ![Right angle for linear queue](./media/spatial-analysis/camera-angle-linear-right.png) |

- For zig-zag queues, it's best to avoid placing the camera directly facing the queue line direction, as shown in the following illustration. Note that each of the four example camera positions in the illustration provide the ideal view with an acceptable deviation of +/- 15 degrees in each direction.
+ For zig-zag queues, it's best to avoid placing the camera directly facing the queue line direction, as shown in the following illustration. Note that each of the four example camera positions in the illustration provides the ideal view with an acceptable deviation of +/- 15 degrees in each direction.

The following illustrations simulate the view from a camera placed in the ideal locations for a zig-zag queue.

articles/ai-services/computer-vision/spatial-analysis-local.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ You can use Spatial Analysis with either recorded or live video. Use this guide

## Analyze a video file

- To use Spatial Analysis for recorded video, record a video file and save it as a .mp4 file. Then take the following steps:
+ To use Spatial Analysis for recorded video, record a video file and save it as an .mp4 file. Then take the following steps:

1. Create a blob storage account in Azure, or use an existing one. Then update the following blob storage settings in the Azure portal:
    1. Change **Secure transfer required** to **Disabled**

articles/ai-services/computer-vision/vehicle-analysis.md

Lines changed: 6 additions & 6 deletions
@@ -29,16 +29,16 @@ Vehicle analysis is a set of capabilities that, when used with the Spatial Analy

## Vehicle analysis operations

- Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis generates an output stream of JSON messages that are being sent to your instance of Azure IoT Hub.
+ Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis generate an output stream of JSON messages that are sent to your instance of Azure IoT Hub.

- The following operations for vehicle analysis are available in the current Spatial Analysis container. Vehicle analysis offers operations optimized for both GPU and CPU (CPU operations include the ".cpu" distinction).
+ The following operations for vehicle analysis are available in the current Spatial Analysis container. Vehicle analysis offers operations optimized for both GPU and CPU (CPU operations include the `.cpu` distinction).

| Operation identifier | Description |
| -------------------- | ---------------------------------------- |
- | **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview** and **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview** | Counts vehicles parked in a designated zone in the camera's field of view. </br> Emits an initial _vehicleCountEvent_ event and then _vehicleCountEvent_ events when the count changes. |
- | **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** | Identifies when a vehicle parks in a designated parking region in the camera's field of view. </br> Emits a _vehicleInPolygonEvent_ event when the vehicle is parked inside a parking space. |
+ | **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview** and **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview** | Counts vehicles parked in a designated zone in the camera's field of view. </br>Emits an initial _vehicleCountEvent_ event and then _vehicleCountEvent_ events when the count changes. |
+ | **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** | Identifies when a vehicle parks in a designated parking region in the camera's field of view. </br>Emits a _vehicleInPolygonEvent_ event when the vehicle is parked inside a parking space. |

- In addition to exposing the vehicle location, other estimated attributes for **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview**, **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview**, **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** include vehicle color and vehicle type. All of the possible values for these attributes are found in the output section (below).
+ In addition to exposing the vehicle location, other estimated attributes for **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview**, **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview**, **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview**, and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** include vehicle color and vehicle type. All of the possible values for these attributes are found in the output section (below).
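The events above arrive as plain JSON, so any IoT Hub consumer can route them by event type. A minimal sketch, assuming a hypothetical payload shape (the event name comes from the table above, but the `count` and `zone` field names are illustrative, not the documented schema):

```python
import json

def handle_event(raw: str) -> str:
    """Route a vehicle analysis event message by its type.

    The payload shape here is an assumption for illustration only:
    real messages follow the schema in the Azure docs' output section.
    """
    event = json.loads(raw)
    if event.get("eventType") == "vehicleCountEvent":
        return f"zone {event['zone']}: {event['count']} vehicles parked"
    return "unhandled event"

# Hypothetical vehicleCountEvent payload.
message = '{"eventType": "vehicleCountEvent", "count": 3, "zone": "lot-a"}'
print(handle_event(message))
```

In practice the raw string would come from the body of an IoT Hub device-to-cloud message rather than a literal.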

### Operation parameters for vehicle analysis

@@ -53,7 +53,7 @@ The following table shows the parameters required by each of the vehicle analysi
| VIDEO_IS_LIVE| True for camera devices; false for recorded videos.|
| VIDEO_DECODE_GPU_INDEX| Index specifying which GPU will decode the video frame. By default it's 0. This should be the same as the `gpu_index` in other node configurations like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.|
| PARKING_REGIONS | JSON configuration for zone and line as outlined below. </br> PARKING_REGIONS must contain four points in normalized coordinates ([0, 1]) that define a convex region (the points follow a clockwise or counterclockwise order).|
- | EVENT_OUTPUT_MODE | Can be ON_INPUT_RATE or ON_CHANGE. ON_INPUT_RATE will generate an output on every single frame received (one FPS). ON_CHANGE will generate an output when something changes (number of vehicles or parking spot occupancy). |
+ | EVENT_OUTPUT_MODE | Can be ON_INPUT_RATE or ON_CHANGE. ON_INPUT_RATE generates an output on every frame received (one FPS). ON_CHANGE generates an output when something changes (number of vehicles or parking spot occupancy). |
| PARKING_SPOT_METHOD | Can be BOX or PROJECTION. BOX uses an overlap between the detected bounding box and a reference bounding box. PROJECTION projects the centroid point into the parking spot polygon drawn on the floor. This is only used for parking spots and can be suppressed.|

Here is an example of a valid `PARKING_REGIONS` configuration:
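Whatever the exact JSON shape, the geometric constraint on `PARKING_REGIONS` (four points in [0, 1] normalized coordinates that define a convex region in consistent clockwise or counterclockwise order) can be checked independently. A minimal sketch under that assumption; the helper names are illustrative, not part of the Azure container:

```python
def _cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_valid_region(points):
    """Return True if four normalized (x, y) points form a convex
    quadrilateral listed in a consistent clockwise or counterclockwise
    order, as PARKING_REGIONS requires."""
    if len(points) != 4:
        return False
    # Every coordinate must be normalized into [0, 1].
    if not all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in points):
        return False
    turns = []
    for i in range(4):
        c = _cross(points[i], points[(i + 1) % 4], points[(i + 2) % 4])
        if c == 0:
            return False  # collinear corner: not a proper convex quad
        turns.append(c > 0)
    # Convex with consistent winding: every corner bends the same way.
    return all(turns) or not any(turns)

# A unit-square region passes; a self-intersecting or out-of-range one fails.
print(is_valid_region([(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]))
```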

articles/ai-services/computer-vision/whats-new.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ Learn what's new in Azure AI Vision. Check this page to stay up to date with new

### Model customization and Product Recognition deprecation

- On January 10, 2025, the Azure AI Vision Product Recognition and model customization features will be retired: after this date, API calls to these services will fail.
+ On January 10, 2025, the Azure AI Vision Product Recognition and model customization features will be retired. After this date, API calls to these services will fail.

To maintain a smooth operation of your models, transition to [Azure AI Custom Vision](/azure/ai-services/Custom-Vision-Service/overview), which is now generally available. Custom Vision offers similar functionality to these retiring features.
