An intelligent on-premise surveillance system that uses AI-powered person detection with real-time alerts and monitoring. This project demonstrates how to build a complete surveillance solution using mobile cameras (for testing) as video sources.
- Real-time Person Detection: Uses YOLOv8 for accurate human detection
- Live Video Streaming: RTMP to HLS conversion for web-based viewing
- Smart Alerts: Instant notifications when people are detected
- Analytics Dashboard: View detection trends and statistics
- Multi-platform Support: Works on Windows, macOS, and Linux
- On-premise Solution: No cloud dependency, works entirely on local network
- Mobile Camera Integration: Use your smartphone as a security camera
- Real-time Monitoring: Live dashboard with detection overlays
- Mobile App streams video via RTMP protocol
- MediaMTX Server receives RTMP stream and makes it available
- Backend AI System processes the video stream for person detection
- Frontend Dashboard displays live video with detection overlays and alerts
- Real-time Alerts notify users when people are detected
- Operating System: Windows 10/11, macOS, or Linux
- Python: Version 3.8 or higher
- Node.js: Version 16 or higher
- FFmpeg: For video processing
- Mobile Device: Android or iOS smartphone
- Network: WiFi network connecting all devices
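The prerequisites above can be verified from Python. The sketch below is illustrative only; it checks the interpreter version and whether FFmpeg and Node.js are on the PATH, which assumes the tools were installed system-wide:

```python
import shutil
import sys

def check_prerequisites() -> dict:
    """Report whether the local toolchain meets the requirements above."""
    return {
        "python>=3.8": sys.version_info[:2] >= (3, 8),
        "ffmpeg on PATH": shutil.which("ffmpeg") is not None,
        "node on PATH": shutil.which("node") is not None,
    }

# Usage:
# for name, ok in check_prerequisites().items():
#     print(name, "OK" if ok else "MISSING")
```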
Transform your smartphone into a security camera using these apps:
- Android: RTMP Camera (Google Play Store)
- iOS: IP Camera Lite (App Store)

- Install the appropriate app on your mobile device
- Connect your mobile device to the same WiFi network as your computer
- Open the app and configure RTMP streaming:
  - Server URL: `rtmp://[YOUR_COMPUTER_IP]:1935/input/1`
  - Resolution: 720p or 1080p (recommended)
  - Frame Rate: 15-30 fps
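If you are unsure which server URL to enter in the app, this small helper (an illustration, not part of the project) builds it from your machine's LAN IP. The `local_ip` trick is a common heuristic and may pick the wrong interface on multi-homed machines:

```python
import socket

def local_ip() -> str:
    """Best-effort guess at this machine's LAN IP (no packets are sent)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works for this trick
        return s.getsockname()[0]
    finally:
        s.close()

def rtmp_url(host: str, port: int = 1935, path: str = "input/1") -> str:
    """Build the Server URL to enter in the mobile camera app."""
    return f"rtmp://{host}:{port}/{path}"

# Usage: print(rtmp_url(local_ip()))
```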
MediaMTX acts as the RTMP server that receives video from your mobile camera.
- Download MediaMTX (check the releases page for the latest version):

  ```bash
  # Windows
  wget https://github.com/bluenviron/mediamtx/releases/download/v1.13.1/mediamtx_v1.13.1_windows_amd64.zip
  # macOS: download the appropriate build from
  # https://github.com/bluenviron/mediamtx/releases
  # Linux
  wget https://github.com/bluenviron/mediamtx/releases/download/v1.13.1/mediamtx_v1.13.1_linux_amd64.tar.gz
  ```

- Extract the files to a folder (e.g., `C:\mediamtx` on Windows). On Linux you can use:

  ```bash
  tar -xf mediamtx_v1.13.1_linux_amd64.tar.gz
  ```

- Find your computer's IP address:

  ```bash
  # Windows
  ipconfig
  # macOS/Linux
  ifconfig
  ```

- Configure MediaMTX:
  - Open `mediamtx.yml` in a text editor
  - Find the `rtmpAddress` setting and update it:

    ```yaml
    rtmpAddress: [YOUR_COMPUTER_IP]:1935
    ```

- Start MediaMTX:

  ```bash
  # Windows
  ./mediamtx.exe
  # macOS/Linux
  ./mediamtx
  ```
The server should start and listen on port 1935 for RTMP connections.
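You can confirm the server is listening before moving on. A minimal check (an illustration, not part of the project) that simply opens a TCP connection to port 1935:

```python
import socket

def rtmp_port_open(host: str = "127.0.0.1", port: int = 1935, timeout: float = 2.0) -> bool:
    """Return True if something (e.g., MediaMTX) accepts TCP connections on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage: rtmp_port_open() should return True once MediaMTX is running.
```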
The backend handles AI detection and video processing.
- Navigate to the backend directory:

  ```bash
  cd backend
  ```

- Create a virtual environment (recommended):

  ```bash
  # Windows
  python -m venv venv
  venv\Scripts\activate
  # macOS/Linux
  python3 -m venv venv
  source venv/bin/activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Install FFmpeg:
  - Windows: download from ffmpeg.org and add it to PATH
  - macOS: `brew install ffmpeg`
  - Linux (Ubuntu/Debian): `sudo apt install ffmpeg`
- Configure the RTMP URL. Update the RTMP URL in the following files to match your MediaMTX server address, replacing `[YOUR_COMPUTER_IP]` with the actual IP address of the computer running MediaMTX:
  - `main.py`: `RTMP_URL = "rtmp://[YOUR_COMPUTER_IP]:1935/input/1"`
  - `openai_version.py`: `RTMP_URL = "rtmp://[YOUR_COMPUTER_IP]:1935/input/1"`
  - `yolo_version.py`: `RTMP_URL = "rtmp://[YOUR_COMPUTER_IP]:1935/input/1"`
  - `person_detection.py`: `DEFAULT_RTMP_URL = "rtmp://[YOUR_COMPUTER_IP]:1935/input/1"`
- Start the backend server:

  ```bash
  python main.py
  ```
The backend will start on http://localhost:8000
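The backend's detection logic lives in `main.py` and `yolo_version.py` and is not reproduced here; the sketch below only illustrates the general idea, assuming `opencv-python` and `ultralytics` are installed. The 0.5 confidence threshold is an illustrative default, not the project's setting:

```python
def is_person(label: str, confidence: float, threshold: float = 0.5) -> bool:
    """Decide whether a YOLO detection should count as a person alert."""
    return label == "person" and confidence >= threshold

def detection_loop(rtmp_url: str) -> None:
    """Read the RTMP stream and run YOLOv8 person detection on each frame."""
    import cv2                    # pip install opencv-python
    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO("yolov8n.pt")    # weights download automatically on first run
    cap = cv2.VideoCapture(rtmp_url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for result in model(frame, verbose=False):
            for box in result.boxes:
                label = model.names[int(box.cls)]
                if is_person(label, float(box.conf)):
                    print("person detected, confidence", float(box.conf))
    cap.release()

# Usage: detection_loop("rtmp://[YOUR_COMPUTER_IP]:1935/input/1")
```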
The frontend provides the web dashboard for monitoring.
- Navigate to the frontend directory:

  ```bash
  cd frontend
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Start the development server:

  ```bash
  npm run dev
  ```
The frontend will start on http://localhost:5173
- Start all services in this order:
  - MediaMTX server
  - Backend API server
  - Frontend development server
- Start mobile streaming:
  - Open the camera app on your mobile device
  - Start RTMP streaming to your server
- Access the dashboard:
  - Open your web browser
  - Navigate to http://localhost:5173
  - You should see the live video feed with detection capabilities
- Monitor detections:
  - View live video with person detection overlays
  - Check the alerts panel for recent detections
  - Monitor analytics and trends
Mobile app can't connect to server:
- Verify all devices are on the same WiFi network
- Check firewall settings (allow port 1935)
- Ensure MediaMTX is running and configured correctly
- A mobile hotspot will not work; connect all devices to a regular WiFi network
No video in dashboard:
- Check if FFmpeg is properly installed
- Verify RTMP stream is being received by MediaMTX
- Check browser console for errors
Detection not working:
- Ensure Python dependencies are installed correctly
- Check if YOLO model is downloaded (happens automatically on first run)
- Verify sufficient system resources (CPU/GPU)
Performance issues:
- Reduce mobile camera resolution/frame rate
- Close unnecessary applications
- Consider using GPU acceleration if available
- FastAPI: Web framework for API development
- YOLOv8: AI model for person detection
- OpenCV: Computer vision processing
- FFmpeg: Video stream processing
- Uvicorn: ASGI server
- React: User interface framework
- Vite: Build tool and development server
- Tailwind CSS: Styling framework
- HLS.js: Video streaming library
- Recharts: Data visualization
- MediaMTX: RTMP server
- RTMP Protocol: Video streaming
- HLS Protocol: Web video delivery
This guide will help you set up live video streaming from an RTMP source to your React frontend.
- RTMP Stream Source → the RTMP stream (`rtmp://82.112.235.249:1935/input/1`)
- FFmpeg → converts RTMP to HLS format
- FastAPI Backend → serves the HLS stream and analytics API
- React Frontend → displays live video using HLS.js
- Python 3.8+
- Node.js 16+
- FFmpeg (for RTMP to HLS conversion)
On Windows:

- Download FFmpeg from https://ffmpeg.org/download.html
- Extract to `C:\ffmpeg`
- Add `C:\ffmpeg\bin` to your system PATH
- Verify: `ffmpeg -version`

On macOS:

```bash
brew install ffmpeg
```

On Linux (Ubuntu/Debian):

```bash
sudo apt update
sudo apt install ffmpeg
```

Navigate to the backend directory:

```bash
cd backend
```

On Windows, run the setup script:

```bash
start.bat
```

On macOS/Linux:

```bash
# Make the script executable
chmod +x start.sh
# Run the setup script
./start.sh
```

Or set up manually:

```bash
# Create virtual environment
python -m venv venv

# Activate virtual environment
# Windows:
venv\Scripts\activate
# Linux/macOS:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Create HLS output directory
mkdir hls_output

# Start the server
python main.py
```

The backend will start on http://localhost:8000
Navigate to the frontend directory:

```bash
cd frontend
```

Install dependencies and start the development server:

```bash
# Install dependencies
npm install

# Start the development server
npm run dev
```

The frontend will start on http://localhost:5173
The FastAPI backend uses FFmpeg to convert the RTMP stream to HLS format:
- Input: RTMP stream from `rtmp://82.112.235.249:1935/input/1`
- Output: HLS segments in the `backend/hls_output/` directory
- Playlist: a `stream.m3u8` file that browsers can consume
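The exact FFmpeg invocation lives in `backend/main.py`; the sketch below only illustrates how such a command is typically assembled. The segment duration, playlist size, and encoder flags shown are assumptions, not the project's actual settings:

```python
def ffmpeg_hls_cmd(rtmp_url: str, out_dir: str = "hls_output") -> list:
    """Assemble an FFmpeg command that converts an RTMP stream to HLS."""
    return [
        "ffmpeg", "-i", rtmp_url,
        "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
        "-c:a", "aac",
        "-f", "hls",
        "-hls_time", "2",             # segment duration in seconds
        "-hls_list_size", "5",        # playlist keeps the 5 newest segments
        "-hls_flags", "delete_segments",
        f"{out_dir}/stream.m3u8",
    ]

# Usage (run from the backend/ directory):
# import subprocess
# subprocess.Popen(ffmpeg_hls_cmd("rtmp://82.112.235.249:1935/input/1"))
```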
The backend provides these endpoints:
- `GET /api/stream` - Returns the HLS stream URL
- `GET /api/analytics/summary` - Returns analytics data
- `GET /api/alerts` - Returns alert history
- `GET /api/alerts/stream` - Server-sent events for real-time alerts
- `GET /hls/stream.m3u8` - HLS playlist file
- `GET /hls/*.ts` - HLS video segments
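The `/api/alerts/stream` endpoint delivers server-sent events, which a client reads line by line and parses from `data:` payloads. A minimal sketch of that parsing; the JSON payload shape shown in the test data is an assumption, not the backend's documented schema:

```python
import json

def parse_sse_events(lines):
    """Yield JSON payloads from the 'data: ...' lines of an SSE stream."""
    for line in lines:
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

# Usage against the live endpoint (payload shape is hypothetical):
# import urllib.request
# with urllib.request.urlopen("http://localhost:8000/api/alerts/stream") as resp:
#     for event in parse_sse_events(raw.decode() for raw in resp):
#         print(event)
```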
The React frontend:
- Uses HLS.js to play the HLS stream
- Automatically retries the connection on failures
- Displays loading and error states
- Shows live analytics and alerts
Create a `.env` file in the frontend directory with:

```
VITE_API_URL=http://localhost:8000/api
VITE_STREAM_URL=http://localhost:8000/hls/stream.m3u8
```

In `backend/main.py`, you can modify:

- `RTMP_URL` - Source RTMP stream URL
- `HLS_OUTPUT_DIR` - Directory for HLS files
- `JSONL_FILE` - Path to the alerts/events file
- Check FFmpeg: Ensure FFmpeg is installed and in PATH
- Check RTMP Source: Verify the RTMP stream is active
- Check Backend Logs: Look for FFmpeg errors in the console
- Check Network: Ensure ports 8000 and 5173 are not blocked
- Install FFmpeg and add to system PATH
- Restart terminal/command prompt after installation
- Check if the RTMP source is broadcasting
- Verify the RTMP URL is correct
- Check firewall settings
- Ensure backend is running on localhost:8000
- Check that frontend .env file has correct API URL
- Check backend health:

  ```bash
  curl http://localhost:8000/api/health
  ```

- Check the stream endpoint:

  ```bash
  curl http://localhost:8000/api/stream
  ```

- Check the HLS playlist:

  ```bash
  curl http://localhost:8000/hls/stream.m3u8
  ```

- Monitor backend logs: watch the console where you started the backend
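The three `curl` checks above can also be scripted. A small sketch (not part of the project) that probes each diagnostic endpoint and reports the result:

```python
from urllib.request import urlopen
from urllib.error import URLError

def check(url: str, timeout: float = 3.0) -> str:
    """Return 'OK (status)' or the failure reason for one endpoint."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return f"OK ({resp.status})"
    except (URLError, OSError) as exc:
        return f"FAILED: {exc}"

ENDPOINTS = [
    "http://localhost:8000/api/health",
    "http://localhost:8000/api/stream",
    "http://localhost:8000/hls/stream.m3u8",
]

# Usage:
# for url in ENDPOINTS:
#     print(url, "->", check(url))
```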
For production deployment:
- Use HTTPS: Configure SSL certificates
- Update CORS: Restrict origins to your domain
- Environment Variables: Use production URLs
- Process Management: Use PM2 or systemd for the backend
- Reverse Proxy: Use Nginx to serve static files and proxy API requests
- HLS Settings: Adjust segment duration and playlist size in FFmpeg command
- Quality Settings: Modify FFmpeg encoding presets for quality vs. performance
- Caching: Implement CDN for HLS segments in production
- Load Balancing: Use multiple backend instances for high traffic
- Authentication: Add API authentication for production
- Rate Limiting: Implement rate limiting on API endpoints
- Input Validation: Validate all API inputs
- HTTPS Only: Force HTTPS in production
- Stream Access: Restrict access to HLS endpoints
If you encounter issues:
- Check the troubleshooting section above
- Verify all prerequisites are installed
- Check console logs for error messages
- Ensure the RTMP source stream is active
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter any issues or have questions:
- Check the troubleshooting section above
- Review the console logs for error messages
- Create an issue in the GitHub repository
Note: This system is designed for educational and demonstration purposes. For production surveillance systems, consider additional security measures and proper hardware.