Replies: 1 comment
Take a look at the issue I opened, as I cannot use it on my Mac.
-
This article uses an Elixir implementation of the Web Real-Time Communication (WebRTC) API to perform (pseudo) real-time object detection with an ML model.
Because ML inference is relatively slow, detection should only be run on every nth frame.
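The article does not spell out the skipping logic itself; purely as an illustration (the module name, the value of n, and the return shape below are all made up), a small counter like this could decide which frames are forwarded to the model:

```elixir
defmodule FrameSkipper do
  @moduledoc """
  Illustrative sketch only: forward every nth frame to the ML serving
  and drop the rest. The value of `@nth` is an arbitrary choice.
  """

  @nth 5

  @doc "Given a running frame counter, returns {:detect | :skip, new_count}."
  def step(count) when is_integer(count) and count >= 0 do
    action = if rem(count, @nth) == 0, do: :detect, else: :skip
    {action, count + 1}
  end
end
```

In a Phoenix channel the counter would typically live in the socket assigns, so each connected client keeps its own count.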
A word about WebRTC
WebRTC allows P2P communication and, as explained in the MDN link, is built on top of a signaling step (SIGNALING).
As a side note, WebSockets use TCP whilst WebRTC uses UDP (faster but less reliable).
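Signaling is just an exchange of SDP offers/answers and ICE candidates between the peers, and any transport works for it. Below is a minimal sketch of relaying those messages through a Phoenix channel; the module, topic, and event names are assumptions, not the article's code:

```elixir
defmodule MyAppWeb.SignalingChannel do
  @moduledoc """
  Hedged sketch of a signaling relay: peers joined to the same topic
  exchange SDP offers/answers and ICE candidates; the channel only
  forwards the messages and never inspects them.
  """
  use Phoenix.Channel

  def join("signaling:" <> _room_id, _params, socket), do: {:ok, socket}

  # Forward SDP and ICE messages to every other peer on the topic.
  def handle_in(event, payload, socket)
      when event in ["sdp_offer", "sdp_answer", "ice_candidate"] do
    broadcast_from!(socket, event, payload)
    {:noreply, socket}
  end
end
```

After this handshake the media itself flows between the WebRTC endpoints over UDP, which is why the channel (carried over a WebSocket, hence TCP) is only needed for signaling.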
The blog and YT video
The ML model is a Hugging Face model. They use WebRTC to capture a video stream that is sent to the server.
The connection between the browser and the Phoenix server is made via a (dynamically supervised) channel attached to the socket.
It uses Bumblebee to create an Nx.Serving from the model; the serving receives a frame as input from the channel and pushes back the result.

YouTube
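Putting the pieces together, the wiring described above could look roughly like the sketch below. This is a hedged reconstruction rather than the article's code: the model name (facebook/detr-resnet-50), the module, topic, and event names, the base64-JPEG payload format, the StbImage decoding step, and the batch options are all assumptions. If your Bumblebee version does not ship object detection, `Bumblebee.Vision.image_classification/3` follows the same serving pattern.

```elixir
# Assumed deps: {:bumblebee, ...}, {:nx, ...}, {:exla, ...}, {:stb_image, ...}

# 1. Build an Nx.Serving from the Hugging Face model and supervise it.
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    {:ok, model_info} = Bumblebee.load_model({:hf, "facebook/detr-resnet-50"})
    {:ok, featurizer} = Bumblebee.load_featurizer({:hf, "facebook/detr-resnet-50"})

    serving = Bumblebee.Vision.object_detection(model_info, featurizer)

    children = [
      # Named serving: channels call it with Nx.Serving.batched_run/2,
      # so frames from concurrent clients can be batched together.
      {Nx.Serving,
       serving: serving, name: MyApp.ObjectDetector, batch_size: 4, batch_timeout: 100}
      # ...plus the usual Phoenix children (PubSub, Endpoint, ...)
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end

# 2. The channel (declared in the user socket, e.g.
#    `channel "detection:*", MyAppWeb.DetectionChannel`) receives frames,
#    runs them through the serving, and pushes the detections back.
defmodule MyAppWeb.DetectionChannel do
  use Phoenix.Channel

  def join("detection:" <> _id, _params, socket), do: {:ok, socket}

  # Frames are assumed to arrive as base64-encoded JPEG/PNG binaries.
  # In practice only every nth frame would be forwarded (see the note above).
  def handle_in("frame", %{"data" => b64}, socket) do
    with {:ok, binary} <- Base.decode64(b64),
         {:ok, image} <- StbImage.read_binary(binary) do
      # Bumblebee's vision servings accept the image as an Nx tensor (HWC, u8).
      %{predictions: predictions} =
        Nx.Serving.batched_run(MyApp.ObjectDetector, StbImage.to_nx(image))

      push(socket, "detections", %{
        predictions: Enum.map(predictions, &Map.take(&1, [:label, :score]))
      })
    end

    {:noreply, socket}
  end
end
```

Phoenix starts one channel process per joining client and supervises it dynamically, which is what the "dynamically supervised channel" above refers to; the serving, by contrast, is a single named process shared by all channels.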