Set Up Local AI Service: DeepFace (Sensing)
Arno Hartholt edited this page Dec 18, 2025
For local services on desktops and laptops, the VHToolkit uses a local endpoint wrapped around an AI model. These models are often developed on Linux with Python and can run on Windows through the Windows Subsystem for Linux (WSL).
This tutorial shows how to set up a local sensing solution called DeepFace, a lightweight, open-source suite of facial-recognition packages. We run DeepFace as a local Python endpoint server that the VHToolkit connects to.
- GitHub account
- On Windows: Windows Subsystem for Linux (WSL). See Microsoft's WSL documentation for setup instructions.
This assumes conda (Miniconda or Anaconda) is installed inside WSL. Open a command line (Windows key + R, type `cmd`) and enter:

```
wsl ~
conda create -n sen_deepface_env python=3.12
conda init
conda activate sen_deepface_env
```
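After activating the environment, you can optionally confirm that the expected interpreter is active. A minimal sketch (the check is generic; only the environment name comes from this tutorial):

```python
import sys

# Print the interpreter version; inside the activated sen_deepface_env
# this should report Python 3.12 to match the conda create command above.
print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
```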
When in the correct environment (`conda activate sen_deepface_env`), type:

```
pip install deepface==0.0.92
pip install tf-keras==2.16.0
```
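To confirm the installs succeeded, a small sanity check can be run with the same interpreter. The package names mirror the pip commands above; note that `tf-keras` is imported as `tf_keras`:

```python
import importlib.util

# Report whether each required package is importable in the current environment.
for pkg in ("deepface", "tf_keras"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'NOT installed'}")
```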
When in the correct environment (`conda activate sen_deepface_env`), run DeepFace:

```
python $CONDA_PREFIX/lib/python3.12/site-packages/deepface/api/src/api.py
```

(Inside WSL, use the Linux-style `$CONDA_PREFIX` variable and `lib` path; the Windows `%CONDA_PREFIX%` syntax will not resolve.)
- Make sure the local DeepFace endpoint server is running, following the instructions above
- In Unity, go to the Sensing debug menu
- Click DeepFace to select the proper system
- Click Webcam Off to toggle the webcam
- Results appear in the webcam video view and in the Console
- Note that the first time DeepFace is activated from Unity, it will download its models, which may take a while
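The Console output corresponds to DeepFace's analysis results. The snippet below parses a hand-written sample in the shape such results typically take (fields like `dominant_emotion`, `age`, and a face `region`); the exact schema may differ by version, so treat the values and field names as illustrative:

```python
import json

# Hand-written sample in the typical shape of a DeepFace analyze result;
# the values here are made up for illustration only.
sample = json.loads("""
{"results": [
    {"age": 31,
     "dominant_emotion": "happy",
     "region": {"x": 120, "y": 80, "w": 200, "h": 200}}
]}
""")

for face in sample["results"]:
    region = face["region"]
    print(f"Face at ({region['x']}, {region['y']}): "
          f"{face['dominant_emotion']}, age ~{face['age']}")
# prints: Face at (120, 80): happy, age ~31
```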