Set Up Local AI Service: DeepFace (Sensing)

Arno Hartholt edited this page Dec 18, 2025 · 7 revisions

Purpose

For local services on desktops and laptops, the VHToolkit uses local endpoints wrapped around AI models, which are often developed on Linux with Python. These can run on Windows with the Windows Subsystem for Linux (WSL).

This tutorial shows how to set up a local sensing solution called DeepFace, a lightweight, open-source suite of facial-recognition packages. We run DeepFace as a local Python endpoint server that the VHToolkit connects to.

Requirements

Installation and Setup

Install WSL

See here how to set up WSL.

Create a Conda Environment

Open a command line (Windows key + R > type ‘cmd’) and type:

wsl ~
conda create -n sen_deepface_env python=3.12  
conda init  
conda activate sen_deepface_env
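To confirm the environment was created and activated correctly, the following quick check can be run inside the same WSL shell (the exact environment path depends on your conda installation):

```shell
# The active interpreter should be the one from sen_deepface_env,
# created above with python=3.12.
python --version   # should report Python 3.12.x
which python       # should point under .../envs/sen_deepface_env
```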

Install DeepFace

When in the correct environment (conda activate sen_deepface_env), type:

pip install deepface==0.0.92  
pip install tf-keras==2.16.0
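As an optional sanity check, the snippet below (run inside the activated environment) confirms both packages are importable without loading the full TensorFlow stack; each line should print `OK` if the pip installs above succeeded:

```shell
python - <<'EOF'
# Check that deepface and tf_keras resolved without importing them fully.
import importlib.util
for pkg in ("deepface", "tf_keras"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'OK' if found else 'MISSING'}")
EOF
```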

Run DeepFace Server

When in the correct environment (conda activate sen_deepface_env), type:

python $CONDA_PREFIX/lib/python3.12/site-packages/deepface/api/src/api.py
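Once the server is up, you can exercise it from any HTTP client before wiring up Unity. The sketch below builds a request body for the `/analyze` route; it assumes the default port 5005 and the `img` / `actions` field names from the deepface API source, which may differ between versions, so check the API source if the server rejects the request:

```python
import base64
import json

def build_analyze_payload(image_bytes, actions=("emotion", "age")):
    """Encode raw image bytes into a JSON body for the /analyze route.

    Field names ("img", "actions") are assumptions based on the deepface
    API source and may vary by version.
    """
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "img": f"data:image/jpeg;base64,{encoded}",
        "actions": list(actions),
    }

# Build a payload from placeholder bytes, just to show the shape.
payload = build_analyze_payload(b"\xff\xd8\xff\xe0placeholder-jpeg-bytes")
print(json.dumps(payload)[:60])

# To send it against the running server (requires the `requests` package):
# import requests
# resp = requests.post("http://localhost:5005/analyze", json=payload)
# print(resp.json())
```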

Test DeepFace in VHToolkit Unity Sample Project

  • Make sure the local DeepFace endpoint server is running, following the instructions above
  • In Unity, go to the Sensing debug menu
  • Click DeepFace to select the proper system
  • Click Webcam Off to toggle the webcam on
  • Results appear in the webcam video view and in the Console
  • Note that the first time DeepFace is activated from Unity, it downloads its models, which may take a while
