This article explains the concept of Face liveness detection, its input and output schema, and related concepts.
## What it does
Face Liveness detection is used to determine if a face in an input video stream is real (live) or fake (spoofed). It's an important building block in a biometric authentication system to prevent imposters from gaining access to the system using a photograph, video, mask, or other means to impersonate another person.
The goal of liveness detection is to ensure that the system is interacting with a physically present, live person at the time of authentication. These systems are increasingly important with the rise of digital finance, remote access control, and online identity verification processes.
The Azure AI Face liveness detection solution successfully defends against various spoof types, ranging from paper printouts and 2D/3D masks to spoof presentations on phones and laptops. Liveness detection is an active area of research, with continuous improvements being made to counteract increasingly sophisticated spoofing attacks. Continuous improvements are rolled out to the client and the service components over time as the overall solution gets more robust to new types of attacks.
The Azure Face liveness detection API is [conformant to ISO/IEC 30107-3 PAD (Presentation Attack Detection) standards](https://www.ibeta.com/wp-content/uploads/2023/11/230622-Microsoft-PAD-Level-2-Confirmation-Letter.pdf) as validated by iBeta level 1 and level 2 conformance testing.
## How it works
The liveness solution integration involves two distinct components: a frontend mobile/web application and an app server/orchestrator.
:::image type="content" source="./media/liveness/liveness-diagram.jpg" alt-text="Diagram of the liveness workflow in Azure AI Face." lightbox="./media/liveness/liveness-diagram.jpg":::
- **Frontend application**: The frontend application receives authorization from the app server to initiate liveness detection. Its primary objective is to activate the camera and guide end-users accurately through the liveness detection process.
- **App server**: The app server serves as a backend server to create liveness detection sessions and obtain an authorization token from the Face service for a particular session. This token authorizes the frontend application to perform liveness detection. The app server's objectives are to manage the sessions, to grant authorization for the frontend application, and to view the results of the liveness detection process.
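As an illustration of this division of responsibility, the following sketch shows what the app server's session-creation step might look like using plain REST calls. The route (`detectLiveness/singleModal/sessions`), header, and response fields (`sessionId`, `authToken`) follow a preview API version's naming and are assumptions here; check the current Face API reference before relying on them.

```python
import os
import requests

# Assumed environment configuration (hypothetical variable names).
FACE_ENDPOINT = os.environ["FACE_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
FACE_KEY = os.environ["FACE_KEY"]

def create_liveness_session(device_correlation_id: str) -> dict:
    """App-server step: create a liveness session and receive a short-lived
    authorization token to hand to the frontend application."""
    resp = requests.post(
        f"{FACE_ENDPOINT}/face/v1.1-preview.1/detectLiveness/singleModal/sessions",
        headers={"Ocp-Apim-Subscription-Key": FACE_KEY},
        json={
            "livenessOperationMode": "Passive",
            "deviceCorrelationId": device_correlation_id,
        },
        timeout=10,
    )
    resp.raise_for_status()
    session = resp.json()
    # The frontend only ever sees the token, never the Face API key.
    return {"sessionId": session["sessionId"], "authToken": session["authToken"]}
```

The design point to notice is that the Face API key never leaves the app server; the frontend receives only the short-lived per-session token, so a compromised client can't start arbitrary liveness checks or read results on its own.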
## Liveness detection modes
The Azure Face liveness detection API includes options for both Passive and Passive-Active detection modes.
The **Passive mode** utilizes a passive liveness technique that requires no additional actions from the user. It requires a non-bright lighting environment to succeed and fails in bright lighting environments with an "Environment not supported" error. It also requires high screen brightness for optimal performance, which is configured automatically in the mobile (iOS and Android) solutions. This mode can be chosen if you prefer minimal end-user interaction and expect end-users to primarily be in non-bright environments. A Passive mode check takes around 12 seconds on average to complete.
The **Passive-Active mode** behaves the same as the Passive mode in non-bright lighting environments and only triggers the Active mode in bright lighting environments. This mode is preferable in web browser solutions, because browsers lack automatic screen brightness control, which limits the Passive mode's operational envelope. This mode can be chosen if you want the liveness check to work in any lighting environment. If the Active check is triggered due to a bright lighting environment, the total completion time may take up to 20 seconds on average.
You can set the detection mode during the session creation step (see [Perform liveness detection](./tutorials/liveness.md#perform-liveness-detection)).
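In practice, the mode is a field on the session-creation request body. A minimal sketch, assuming the preview API's `livenessOperationMode` field and its `Passive` / `PassiveActive` values:

```python
# Illustrative request bodies for the two detection modes.
passive_session = {
    "livenessOperationMode": "Passive",        # passive-only check, ~12 s on average
    "deviceCorrelationId": "my-device-id-123",
}

passive_active_session = {
    "livenessOperationMode": "PassiveActive",  # falls back to Active in bright light, up to ~20 s
    "deviceCorrelationId": "my-device-id-123",
}
```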
## Optional face verification
You can combine face verification with liveness detection to verify that the face in question belongs to the particular person you designated. The following table describes details of the liveness detection features:
| Feature | Description |
| -- |--|
| Liveness detection | Determines whether an input is real or fake. Only the app server has the authority to start the liveness check and query the result. |
| Liveness detection with face verification | Determines whether an input is real or fake and verifies the identity of the person based on a reference image you provided. Either the app server or the frontend application can provide the reference image. Only the app server has the authority to initiate the liveness check and query the result. |
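To enable verification, the app server can supply the reference image when it creates the session. The sketch below assumes a preview-style `detectLivenessWithVerify` route and multipart part names (`Parameters`, `VerifyImage`); confirm these against the API reference for the version you target.

```python
import requests

def create_liveness_with_verify_session(endpoint: str, key: str,
                                        reference_image_path: str) -> dict:
    """Create a liveness-with-verification session, supplying the reference
    image from the app server (the trusted path for the reference image)."""
    with open(reference_image_path, "rb") as f:
        resp = requests.post(
            f"{endpoint}/face/v1.1-preview.1/detectLivenessWithVerify/singleModal/sessions",
            headers={"Ocp-Apim-Subscription-Key": key},
            files={
                # Session parameters and the reference image travel in one multipart request.
                "Parameters": (None, '{"livenessOperationMode": "PassiveActive"}', "application/json"),
                "VerifyImage": ("reference.jpg", f, "image/jpeg"),
            },
            timeout=10,
        )
    resp.raise_for_status()
    return resp.json()  # contains sessionId and authToken, as in the liveness-only flow
```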
## Input requirements
We do not store any images or videos from the Face Liveness Check. No image/video data is stored in the liveness service after the liveness session has been concluded. Moreover, the image/video uploaded during the liveness check is only used to perform the liveness classification to determine if the user is real or a spoof (and optionally to perform a match against a reference image in the liveness-with-verify scenario); it cannot be viewed by any human and will not be used for any AI model improvements.
#### Do you include any runtime application self-protections (RASP)?
Yes, we include additional RASP protections on our Mobile SDKs (iOS and Android) provided by [GuardSquare](https://www.guardsquare.com/blog/why-guardsquare).
## Output format
The liveness detection API returns a JSON object with the following information:
- A Real or a Spoof Face Liveness Decision. We handle the underlying accuracy and thresholding, so you don’t have to worry about interpreting “confidence scores” or making inferences yourself. This makes integration easier and more seamless for developers.
- Optionally, a face verification result, if the liveness check is performed with verification (see [Perform liveness detection with face verification](./tutorials/liveness.md#perform-liveness-detection-with-face-verification)).
- A quality-filtered "session image" that you can store for auditing purposes, for human review, or to perform further analysis using the Face service APIs.
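As an illustration, here is how an app server might read those fields once a session completes. The response shape in the comments (a `livenessDecision` of `realface`/`spoofface` and an optional `verifyResult`) is an assumption modeled on a preview API version; the exact schema depends on the API version you target.

```python
import requests

def read_liveness_result(endpoint: str, key: str, session_id: str) -> None:
    """Query a completed session and print the decision (illustrative shape)."""
    resp = requests.get(
        f"{endpoint}/face/v1.1-preview.1/detectLiveness/singleModal/sessions/{session_id}",
        headers={"Ocp-Apim-Subscription-Key": key},
        timeout=10,
    )
    resp.raise_for_status()
    session = resp.json()

    # Assumed response shape: the most recent attempt carries the final decision.
    outcome = session["results"]["attempts"][-1]["result"]
    print("Liveness decision:", outcome["livenessDecision"])  # "realface" or "spoofface"

    # Present only for liveness-with-verification sessions.
    verify = outcome.get("verifyResult")
    if verify:
        print("Is identical:", verify["isIdentical"])
```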
## Support options
In addition to using the main [Azure AI services support options](../../cognitive-services-support-options.md), you can also post your questions in the [issues](https://github.com/Azure-Samples/azure-ai-vision-sdk/issues) section of the SDK repo.
## Tutorial: Detect liveness in faces
In this tutorial, you learn how to detect liveness in faces, using a combination of server-side code and a client-side mobile application. For general information about face liveness detection, see the [conceptual guide](../concept-face-liveness-detection.md).
This tutorial demonstrates how to operate a frontend application and an app server to perform [liveness detection](#perform-liveness-detection), including the optional step of [face verification](#perform-liveness-detection-with-face-verification), across various language SDKs.
> [!TIP]
> After you complete the prerequisites, you can get started faster by building and running a complete frontend sample (either on iOS, Android, or Web) from the [SDK samples folder](https://github.com/Azure-Samples/azure-ai-vision-sdk/tree/main/samples).
## Prerequisites
Once you have access to the SDK, follow the instructions in the [azure-ai-vision-sdk](https://github.com/Azure-Samples/azure-ai-vision-sdk) repository to integrate it into your application.
Once you've added the code into your application, the SDK handles starting the camera, guiding the end-user in adjusting their position, composing the liveness payload, and calling the Azure AI Face cloud service to process the liveness payload.
> [!TIP]
> SDK versions
>
> You can monitor the [Releases section](https://github.com/Azure-Samples/azure-ai-vision-sdk/releases) of the SDK repo for new SDK version updates.
### Download Azure AI Face client library for app server
The app server/orchestrator is responsible for controlling the lifecycle of a liveness session. The app server has to create a session before performing liveness detection, and then it can query the result and delete the session when the liveness check is finished. We offer libraries in various languages to make it easy to implement your app server. Follow these steps to install the package you want:
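Whichever package you install, the lifecycle it drives is the same: create the session (sketched earlier in this article), query its result, and delete it when finished. Deletion is the one step not yet shown; assuming the same preview-style route used in the earlier sketches, it might look like this:

```python
import requests

def delete_liveness_session(endpoint: str, key: str, session_id: str) -> None:
    """Final lifecycle step: remove the session once the liveness check is
    finished and the result has been read (route is an assumption)."""
    resp = requests.delete(
        f"{endpoint}/face/v1.1-preview.1/detectLiveness/singleModal/sessions/{session_id}",
        headers={"Ocp-Apim-Subscription-Key": key},
        timeout=10,
    )
    resp.raise_for_status()
```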