Project for a Bachelor's Degree in Computer Science at the University of Naples Parthenope. The goal is to create a virtual environment where users can walk around and start a virtual videoconference, over the WebRTC protocol, via a room system. Furthermore, each user is associated with an avatar bearing the user's own face: the user is prompted to take a photo with their webcam, the face is detected with the Viola-Jones algorithm and, in conjunction with facial landmark detection, placed on top of the face texture used for avatars. The avatar system is powered by UMA (Unity Multipurpose Avatar), which grants a wide range of customization options for avatar characters.
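The last step of the pipeline above, placing the detected face onto the avatar's face texture, amounts to aligning a few detected landmarks (e.g. the eye centers) with the corresponding positions painted on the texture. One common way to do this is with a similarity transform (uniform scale, rotation, translation) solved from two point pairs. The sketch below is not taken from the project's code: it is a minimal, library-free Python illustration of that alignment step, and all coordinates in it are hypothetical.

```python
import math

def similarity_from_two_points(src, dst):
    """Solve for the similarity transform p -> s*R*p + t that maps two
    source points (e.g. eye centers detected in the webcam photo) onto
    two target points (the eye positions on the avatar face texture).
    Returns (c, d, tx, ty) with c = s*cos(theta), d = s*sin(theta)."""
    (x1, y1), (x2, y2) = src
    (u1, v1), (u2, v2) = dst
    # Scale: ratio of the distances between the two point pairs.
    s = math.hypot(u2 - u1, v2 - v1) / math.hypot(x2 - x1, y2 - y1)
    # Rotation: difference between the angles of the two segments.
    theta = math.atan2(v2 - v1, u2 - u1) - math.atan2(y2 - y1, x2 - x1)
    c, d = s * math.cos(theta), s * math.sin(theta)
    # Translation chosen so the first source point maps exactly.
    tx = u1 - (c * x1 - d * y1)
    ty = v1 - (d * x1 + c * y1)
    return c, d, tx, ty

def apply_transform(c, d, tx, ty, p):
    """Apply the similarity transform to a single 2-D point."""
    x, y = p
    return (c * x - d * y + tx, d * x + c * y + ty)

# Hypothetical example: eye centers found in the photo, and the eye
# positions on a 1024x1024 face texture. By construction, both source
# points map exactly onto their targets.
photo_eyes = ((120.0, 140.0), (200.0, 138.0))
texture_eyes = ((358.0, 410.0), (666.0, 410.0))
c, d, tx, ty = similarity_from_two_points(photo_eyes, texture_eyes)
```

In the actual project this warping is done with OpenCvSharp on the full image, and the landmarks come from the Facemark LBF model listed below; the math, however, is the same.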
- Coturn Server - https://github.com/coturn/coturn
- MixedReality WebRTC - https://github.com/microsoft/MixedReality-WebRTC
- NuGet for Unity - https://github.com/GlitchEnzo/NuGetForUnity
- OpenCV Sharp - https://github.com/shimat/opencvsharp
- Haar cascades, frontal face - https://github.com/opencv/opencv/tree/master/data/haarcascades
- Facemark API for OpenCV (lbfmodel.yaml must be placed in the NeuralNets folder) - https://github.com/kurnianggoro/GSOC2017
- UMA Avatars - https://github.com/umasteeringgroup/UMA
I want to thank my supervisors, Professor Francesco Camastra and Professor Maurizio De Nino, for letting me work on this project, which spanned a wide variety of topics ranging from virtual reality to artificial intelligence. I also wish to thank the Digitalcomoedia company for its support during my internship.