
Commit a9eeecc

Author: Jinyu Li
Commit message: fix some grammar error
Parent: fc685c4

File tree: 1 file changed (+4, -4 lines)

articles/ai-services/computer-vision/concept-face-liveness-detection.md

Lines changed: 4 additions & 4 deletions
@@ -34,17 +34,17 @@ The liveness solution integration involves two distinct components: a frontend m
 :::image type="content" source="./media/liveness/liveness-diagram.jpg" alt-text="Diagram of the liveness workflow in Azure AI Face." lightbox="./media/liveness/liveness-diagram.jpg":::

-- **Orchestrate azure AI service in app server**: The app server serves as a backend server to create liveness detection sessions and obtain an short-lived authorization token from the Face service for a particular session. This token authorizes the frontend application to perform liveness detection. The app server's objectives are to manage the sessions, to grant authorization for frontend application, and to view the results of the liveness detection process.
+- **Orchestrate azure AI service in app server**: The app server serves as a backend server to create liveness detection sessions and obtain a short-lived authorization token from the Face service for a particular session. This token authorizes the frontend application to perform liveness detection. The app server's objectives are to manage the sessions, to grant authorization for the frontend application, and to view the results of the liveness detection process.
 - **Integrate azure AI vision SDK into frontend application**: The frontend application should embed the Azure AI Vision Face SDK (iOS, Android, or JavaScript). The SDK opens the camera, guides the user through the passive or passive-active flow, encrypts video frames, and streams them—together with the short-lived liveness-session token received from your server—directly to the Azure AI Face endpoint.
-- **Optional quick link path**: It is possible to avoid embedding the client SDK, the backend service can swap the same session token for a one-time Liveness Quick Link (https://liveness.face.azure.com/?s=…). Redirect the user to that URL and Azure will host the entire capture experience in the browser, then notice the completion through optional call back. This option lowers integration cost and automatically keeps you on Azure’s always-up-to-date experience.
+- **Optional quick link path**: It is possible to avoid embedding the client SDK. The backend service can swap the same session token for a one-time Liveness Quick Link (https://liveness.face.azure.com/?s=…). Redirect the user to that URL and Azure hosts the entire capture experience in the browser, then signals completion through an optional callback. This option lowers integration cost and automatically keeps you on Azure’s always-up-to-date experience.
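The app-server bullet in this hunk is the orchestration piece: create a session, receive a short-lived token, and hand only that token to the client. A minimal server-side sketch of that step follows, assuming the `detectLiveness-sessions` route, a `v1.2` API version, and the `livenessOperationMode`, `deviceCorrelationId`, `sessionId`, and `authToken` field names; all of these are placeholders to confirm against the Face API reference for the version you target.

```python
# Sketch of the app-server orchestration step. Assumptions to verify: the
# detectLiveness-sessions route and v1.2 version, the request fields
# livenessOperationMode / deviceCorrelationId, and the response fields
# sessionId / authToken.
import os
import uuid

import requests

FACE_ENDPOINT = os.environ["FACE_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
FACE_KEY = os.environ["FACE_KEY"]            # stays on the server; never ships to the frontend


def create_liveness_session(operation_mode: str = "Passive") -> dict:
    """Create a liveness session and return what the frontend needs."""
    url = f"{FACE_ENDPOINT}/face/v1.2/detectLiveness-sessions"  # assumed route/version
    body = {
        "livenessOperationMode": operation_mode,   # "Passive" or "PassiveActive"
        "deviceCorrelationId": str(uuid.uuid4()),  # one id per end-user device
    }
    response = requests.post(
        url,
        json=body,
        headers={"Ocp-Apim-Subscription-Key": FACE_KEY},
        timeout=10,
    )
    response.raise_for_status()
    payload = response.json()
    # Only the session id and the short-lived token leave the server.
    return {"sessionId": payload["sessionId"], "authToken": payload["authToken"]}
```

The returned token is what the frontend SDK streams frames with, and it is the same token the optional Quick Link path exchanges for a hosted capture URL.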

 ## Liveness detection modes

 Azure Face liveness detection API includes options for both Passive and Passive-Active detection modes.

-The **Passive mode** utilizes a passive liveness technique that requires no additional actions from the user. It requires a non-bright lighting environment to succeed and will fail in bright lighting environments with an "Environment not supported" error. It also requires high screen brightness for optimal performance which is configured automatically in the Mobile (iOS and Android) solutions. This mode can be chosen if you prefer minimal end-user interaction and expect end-users to primarily be in non-bright environments. A Passive mode check takes around 12 seconds on an average to complete.
+The **Passive mode** utilizes a passive liveness technique that requires no extra actions from the user. It requires a non-bright lighting environment to succeed and might fail in bright lighting environments with an "Environment not supported" error. It also requires high screen brightness for optimal performance, which is configured automatically in the mobile (iOS and Android) solutions. This mode can be chosen if you prefer minimal end-user interaction and expect end-users to primarily be in non-bright environments. A Passive mode check takes around 12 seconds on average to complete.

-The **Passive-Active mode** will behave the same as the Passive mode in non-bright lighting environments and only trigger the Active mode in bright lighting environments. This mode is preferable on Web browser solutions due to the lack of automatic screen brightness control available on browsers which hinders the Passive mode's operational envelope. This mode can be chosen if you want the liveness-check to work in any lighting environment. If the Active check is triggered due to a bright lighting environment, then the total completion time may take up to 20 seconds on average.
+The **Passive-Active mode** behaves the same as the Passive mode in non-bright lighting environments and only triggers the Active mode in bright lighting environments. This mode is preferable on web browser solutions due to the lack of automatic screen brightness control available on browsers, which hinders the Passive mode's operational envelope. This mode can be chosen if you want the liveness check to work in any lighting environment. If the Active check is triggered due to a bright lighting environment, then the total completion time may take up to 20 seconds on average.

 You can set the detection mode during the session creation step (see [Perform liveness detection](./tutorials/liveness.md#perform-liveness-detection)).

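The mode trade-off described in the two Passive/Passive-Active paragraphs, together with the note that the mode is set at session creation, comes down to a single request field in practice. A hypothetical selection rule is sketched below, reusing the field names assumed in the earlier sketch (they remain assumptions to confirm against the API reference).

```python
# Hypothetical mode selection mirroring the guidance in this diff: browsers
# cannot raise screen brightness automatically, so prefer Passive-Active on
# the web and plain Passive on mobile, where brightness is handled automatically.
import uuid


def liveness_session_body(client_platform: str) -> dict:
    mode = "PassiveActive" if client_platform == "web" else "Passive"
    return {
        "livenessOperationMode": mode,             # assumed field name, as above
        "deviceCorrelationId": str(uuid.uuid4()),  # assumed field name, as above
    }


print(liveness_session_body("web"))  # selects PassiveActive for a browser client
```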