A comprehensive telemedicine platform with real-time video calling capabilities, built using WebRTC technology for secure peer-to-peer communication between doctors and patients.
Medi-Mitra is a full-stack telemedicine solution that enables high-quality video consultations between healthcare providers and patients. The platform features a robust WebRTC implementation with advanced signaling, ICE handling, and media stream management.
- Peer-to-Peer Communication: Direct WebRTC connections between doctor and patient
- High-Quality Video: HD video streaming with automatic quality adaptation
- Crystal-Clear Audio: Two-way audio communication with noise suppression
- Cross-Platform Support: Works on desktop, mobile, and tablet devices
- End-to-End Encryption: All communication encrypted via WebRTC DTLS
- Secure Signaling: Socket.IO with authentication tokens
- HIPAA-Aware: Designed with healthcare privacy standards in mind (formal compliance depends on deployment and organizational policies)
- No Data Storage: Video/audio streams are not recorded or stored
- Multiple ICE Servers: STUN/TURN servers for reliable connectivity
- Network Resilience: Automatic reconnection and ICE candidate handling
- Adaptive Streaming: Bandwidth optimization based on network conditions
- Browser Compatibility: Works across Chrome, Firefox, Safari, and Edge
```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│   Doctor App    │      │    Signaling    │      │   Patient App   │
│   (Frontend)    │◄────►│     Server      │◄────►│   (Frontend)    │
│                 │      │    (Backend)    │      │                 │
└─────────────────┘      └─────────────────┘      └─────────────────┘
         │                        │                        │
         │               ┌────────▼────────┐               │
         │               │     MongoDB     │               │
         │               │    Database     │               │
         │               └─────────────────┘               │
         │                                                 │
         └───────────────► Direct WebRTC ◄─────────────────┘
                       Peer-to-Peer Connection
```
```
Medi-mitra1/
├── frontend/                          # React Frontend Application
│   ├── src/
│   │   ├── components/
│   │   │   ├── CallNotification.jsx   # Incoming call handler
│   │   │   └── DashboardLayout.jsx    # Main dashboard UI
│   │   ├── hooks/
│   │   │   └── useWebRTC.js           # Core WebRTC implementation
│   │   ├── pages/
│   │   │   ├── CallPage.jsx           # Video call interface
│   │   │   ├── DoctorDashboard.jsx    # Doctor management panel
│   │   │   └── PatientDashboard.jsx   # Patient interface
│   │   └── utils/
│   │       ├── api.js                 # API communication
│   │       └── socket.js              # Socket.IO client setup
│   ├── dist/                          # Production build
│   └── package.json                   # Frontend dependencies
│
├── backend/                           # Node.js Backend Server
│   ├── services/
│   │   └── socket.js                  # WebRTC signaling server
│   ├── controllers/
│   │   ├── authController.js          # User authentication
│   │   └── mainController.js          # Core API endpoints
│   ├── models/
│   │   ├── User.js                    # User data model
│   │   └── Appointment.js             # Appointment management
│   ├── routes/
│   │   ├── auth.js                    # Authentication routes
│   │   └── main.js                    # Main API routes
│   └── package.json                   # Backend dependencies
│
├── WEBRTC_ISSUE_ANALYSIS.md           # Technical issue documentation
├── WEBRTC_OPTIMIZATION_RECOMMENDATIONS.md
└── README.md                          # This documentation
```
The heart of the system is the useWebRTC.js hook, which manages:
// Core WebRTC configuration
const iceServers = [
{ urls: 'stun:stun.l.google.com:19302' },
{ urls: 'stun:stun1.l.google.com:19302' },
{
urls: 'turn:freeturn.net:3478',
username: 'free',
credential: 'free'
}
];
// Peer connection setup
const pcRef = useRef(new RTCPeerConnection({ iceServers }));

Socket.IO Backend (backend/services/socket.js):
// WebRTC signaling events
socket.on("webrtc:offer", ({ to, offer, from }) => {
console.log(`WebRTC offer from ${from} to ${to}`);
io.to(to).emit("webrtc:offer", { from, offer });
});
socket.on("webrtc:answer", ({ to, answer, from }) => {
console.log(`WebRTC answer from ${from} to ${to}`);
io.to(to).emit("webrtc:answer", { from, answer });
});
socket.on("webrtc:ice-candidate", ({ to, candidate, from }) => {
io.to(to).emit("webrtc:ice-candidate", { from, candidate });
});

Timing Issue Resolution:
// Store streams when video elements aren't ready
const pendingRemoteStreamRef = useRef(null);
pcRef.current.ontrack = (event) => {
const stream = event.streams[0];
pendingRemoteStreamRef.current = stream;
if (remoteVideoRef.current) {
// Immediate attachment
remoteVideoRef.current.srcObject = stream;
} else {
// Deferred attachment when element becomes available
console.log('Stream stored for later attachment');
}
};
// Retry mechanism for deferred attachment
const retryRemoteStreamAttachment = () => {
if (pendingRemoteStreamRef.current && remoteVideoRef.current) {
remoteVideoRef.current.srcObject = pendingRemoteStreamRef.current;
console.log('Pending stream attached successfully');
}
};

// Queue ICE candidates until the remote description is set
const queuedCandidatesRef = useRef([]);
const handleIceCandidate = async (payload) => {
  if (pcRef.current.remoteDescription) {
    await pcRef.current.addIceCandidate(new RTCIceCandidate(payload.candidate));
  } else {
    // Queue until setRemoteDescription has run
    queuedCandidatesRef.current.push(payload.candidate);
  }
};

// Drain the queue once setRemoteDescription has completed
const flushQueuedCandidates = async () => {
  for (const candidate of queuedCandidatesRef.current) {
    await pcRef.current.addIceCandidate(new RTCIceCandidate(candidate));
  }
  queuedCandidatesRef.current = [];
};

Prerequisites:

- Node.js (v16 or higher)
- npm or yarn
- MongoDB (local or cloud)
- Modern Web Browser with WebRTC support
Backend:

- Install Dependencies:

cd backend
npm install

- Environment Configuration:

# Create .env file
MONGODB_URI=mongodb://localhost:27017/medimitra
JWT_SECRET=your_jwt_secret_key
PORT=5000

- Start Backend Server:

npm start

Frontend:

- Install Dependencies:

cd frontend
npm install

- Development Mode:

npm run dev

- Production Build:

npm run build
npm run preview

Call Initiation Flow:

```mermaid
sequenceDiagram
    participant D as Doctor
    participant S as Signaling Server
    participant P as Patient
    D->>S: webrtc:start-call
    S->>P: call-notification
    P->>S: call:accept
    S->>D: call:accepted
```
WebRTC Negotiation Flow:

```mermaid
sequenceDiagram
    participant D as Doctor
    participant S as Server
    participant P as Patient
    D->>D: Create Offer
    D->>S: webrtc:offer
    S->>P: webrtc:offer
    P->>P: Create Answer
    P->>S: webrtc:answer
    S->>D: webrtc:answer
    D->>D: Set Remote Description
    Note over D,P: ICE Candidates Exchange
    D->>S: webrtc:ice-candidate
    S->>P: webrtc:ice-candidate
    P->>S: webrtc:ice-candidate
    S->>D: webrtc:ice-candidate
```
// Doctor starts call
const startCall = async (targetUserId) => {
// Get user media
const stream = await navigator.mediaDevices.getUserMedia({
video: { width: 1280, height: 720 },
audio: { echoCancellation: true, noiseSuppression: true }
});
// Add tracks to peer connection
stream.getTracks().forEach(track => {
pcRef.current.addTrack(track, stream);
});
// Create and send offer
const offer = await pcRef.current.createOffer();
await pcRef.current.setLocalDescription(offer);
socketRef.current.emit("webrtc:offer", {
offer,
to: targetUserId,
from: user.id
});
};

Purpose: Core WebRTC functionality management
Key Features:
- Peer connection lifecycle management
- Media stream handling
- Signaling coordination
- ICE candidate processing
- Error handling and recovery
Functions:
- startCall(targetUserId): Initiates outgoing call
- answerCall(offer): Responds to incoming call
- endCall(): Terminates active call
- retryRemoteStreamAttachment(): Handles timing issues
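Of these, only startCall is shown in full above. A minimal sketch of the answering side might look like the following; note that the exact signature is an assumption, and `pc` and `socket` are injected here for clarity where the real hook would use `pcRef.current` and `socketRef.current`:

```javascript
// Hypothetical sketch of answerCall: apply the remote offer, produce an
// answer, and relay it back through the signaling server.
async function answerCall({ pc, socket, offer, toUserId, fromUserId }) {
  // The offer received via "webrtc:offer" becomes the remote description
  await pc.setRemoteDescription(offer);

  // Create and apply the local answer
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);

  // Send the answer to the caller through the signaling server
  socket.emit("webrtc:answer", { answer, to: toUserId, from: fromUserId });
  return answer;
}
```

Because the dependencies are parameters, this shape is also straightforward to unit-test with mock `pc` and `socket` objects.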
Purpose: Video call user interface
Features:
- Local video preview (draggable)
- Remote video display (full screen)
- Call controls (mute, video toggle, end call)
- Connection status indicators
- Real-time debugging information
UI Elements:
{/* Remote video (main display) */}
<video ref={remoteVideoRef} className="remote-video" autoPlay playsInline />
{/* Local video (picture-in-picture) */}
<video ref={localVideoRef} style={{width:'100%'}} muted playsInline autoPlay />
{/* Call controls */}
<div className="call-controls">
<button onClick={toggleAudio}>{audioEnabled ? 'Mic On' : 'Mic Off'}</button>
<button onClick={toggleVideo}>{videoEnabled ? 'Cam On' : 'Cam Off'}</button>
<button onClick={endCall}>End Call</button>
</div>

Purpose: WebRTC signaling server
Key Functions:
- User room management
- Signal routing between peers
- Connection state tracking
- Debug logging and monitoring
Room Management:
// Users join rooms using their MongoDB user IDs
socket.on("join", (userId) => {
socket.join(userId);
console.log(`User ${userId} joined room`);
});
// Relay WebRTC signals between rooms
socket.on("webrtc:offer", ({ to, offer, from }) => {
io.to(to).emit("webrtc:offer", { from, offer });
});

// Automatic quality adjustment based on network conditions
const adaptQuality = (stats) => {
if (stats.packetLoss > 0.05) {
// Reduce video quality
const sender = pcRef.current.getSenders().find(s =>
s.track?.kind === 'video'
);
const params = sender.getParameters();
params.encodings[0].maxBitrate = 500000; // 500 Kbps
sender.setParameters(params);
}
};

// ICE connection state monitoring
pcRef.current.oniceconnectionstatechange = () => {
const state = pcRef.current.iceConnectionState;
if (state === 'failed') {
console.log('ICE connection failed, attempting restart');
pcRef.current.restartIce();
} else if (state === 'connected') {
console.log('ICE connection established');
}
};

// Browser-specific optimizations
const getBrowserConstraints = () => {
const isFirefox = navigator.userAgent.includes('Firefox');
const isChrome = navigator.userAgent.includes('Chrome');
return {
video: {
width: { ideal: 1280 },
height: { ideal: 720 },
frameRate: isFirefox ? { ideal: 25 } : { ideal: 30 }
},
audio: {
echoCancellation: true,
noiseSuppression: !isFirefox, // Firefox handles this differently
autoGainControl: isChrome
}
};
};

// Monitor bandwidth and adjust accordingly. The bitrate must be computed
// from the delta between two getStats() samples: bytesSent and timestamp
// are cumulative/absolute, so a single sample cannot yield a rate.
const lastStatsRef = useRef(null);
const monitorBandwidth = () => {
  pcRef.current.getStats().then(stats => {
    stats.forEach(report => {
      if (report.type === 'outbound-rtp' && report.mediaType === 'video') {
        const prev = lastStatsRef.current;
        if (prev) {
          // Bits sent since the last sample, over elapsed seconds
          const bitrate = ((report.bytesSent - prev.bytesSent) * 8) /
            ((report.timestamp - prev.timestamp) / 1000);
          if (bitrate > MAX_BITRATE) {
            reduceVideoQuality();
          }
        }
        lastStatsRef.current = {
          bytesSent: report.bytesSent,
          timestamp: report.timestamp
        };
      }
    });
  });
};

// Cleanup on component unmount
useEffect(() => {
return () => {
// Stop all media tracks
if (localStreamRef.current) {
localStreamRef.current.getTracks().forEach(track => track.stop());
}
// Close peer connection
if (pcRef.current) {
pcRef.current.close();
}
// Clear intervals and timeouts
clearInterval(debugInterval);
};
}, []);

The implementation includes extensive debugging capabilities:
// WebRTC state monitoring
console.log('WebRTC Debug Check:', {
socketConnected: socketRef.current.connected,
hasOfferListener: socketRef.current.listeners('webrtc:offer').length > 0,
callState: callState,
userRole: user?.role,
userId: user?.id
});
// Stream attachment tracking
console.log('Remote stream detected:', {
streamId: stream.id,
videoTracks: stream.getVideoTracks().length,
audioTracks: stream.getAudioTracks().length,
active: stream.active
});

// Debug information in UI (development mode)
{process.env.NODE_ENV === 'development' && (
<div className="debug-panel">
<div>Call State: {callState}</div>
<div>Has Remote Stream: {hasRemoteStream ? 'Yes' : 'No'}</div>
<div>ICE Connection: {iceConnectionState}</div>
<div>Signaling State: {signalingState}</div>
</div>
)}

// JWT token validation for socket connections
io.use((socket, next) => {
const token = socket.handshake.auth.token;
jwt.verify(token, process.env.JWT_SECRET, (err, decoded) => {
if (err) return next(new Error('Authentication error'));
socket.userId = decoded.id;
next();
});
});

- WebRTC DTLS: All media streams encrypted by default
- Socket.IO: Secure WebSocket connections (WSS) in production
- API Endpoints: HTTPS-only communication
- Database: Encrypted connections to MongoDB
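Since the io.use middleware above reads the token from socket.handshake.auth, the client must send it in the `auth` field of the connection options. A minimal sketch of the client side follows; the helper name, server URL, and token source are illustrative assumptions:

```javascript
// Build the handshake options the server-side io.use middleware expects.
// Socket.IO exposes this object as socket.handshake.auth, so the token
// must travel in `auth`, not in a query string or custom header.
function buildSocketOptions(token) {
  return { auth: { token } };
}

// In the browser client (frontend/src/utils/socket.js) this would be
// used roughly like:
//   import { io } from "socket.io-client";
//   const socket = io(SERVER_URL, buildSocketOptions(storedJwt));
```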
// No media recording or storage: the client only ever requests live
// capture via getUserMedia and never instantiates a MediaRecorder, so
// streams exist only in memory for the duration of the call.
// (Note: getUserMedia constraints have no "disable recording" option;
// not recording is a property of the application code itself.)
const mediaConstraints = {
  video: true,
  audio: true
};

- Backend Deployment:
# Build for production
npm run build
# Start with PM2
pm2 start server.js --name "medi-mitra-backend"

- Frontend Deployment:
# Build static assets
npm run build
# Deploy to CDN/Static hosting
# Files in dist/ directory

- Environment Variables:
NODE_ENV=production
MONGODB_URI=mongodb+srv://user:pass@cluster.mongodb.net/medimitra
JWT_SECRET=complex_production_secret
ALLOWED_ORIGINS=https://yourdomain.com

- Heroku: Easy deployment with MongoDB Atlas
- AWS: EC2 + RDS/DocumentDB + CloudFront
- Google Cloud: App Engine + Cloud SQL
- Vercel/Netlify: Frontend static hosting
- Connection Success Rate: >95%
- Audio/Video Quality: HD (720p) at 30fps
- Latency: <200ms for local connections
- Bandwidth Usage: ~1-2 Mbps per connection
- Browser Support: Chrome, Firefox, Safari, Edge
// Connection quality metrics
const getConnectionStats = async () => {
const stats = await pcRef.current.getStats();
const videoStats = {};
const audioStats = {};
stats.forEach(report => {
if (report.type === 'inbound-rtp') {
if (report.mediaType === 'video') {
videoStats.packetsLost = report.packetsLost;
videoStats.jitter = report.jitter;
} else if (report.mediaType === 'audio') {
audioStats.packetsLost = report.packetsLost;
audioStats.jitter = report.jitter;
}
}
});
return { videoStats, audioStats };
};

Problem: Video elements show a black screen
Solution: Check the stream attachment timing:
// Add retry mechanism
useEffect(() => {
if (remoteVideoRef.current && retryRemoteStreamAttachment) {
retryRemoteStreamAttachment();
}
}, [remoteVideoRef.current, callState]);

Problem: Cannot establish peer connection
Solution: Add more TURN servers:
const iceServers = [
{ urls: 'stun:stun.l.google.com:19302' },
{
urls: 'turn:turn.example.com:3478',
username: 'turnuser',
credential: 'turnpass'
}
];

Problem: Echo or feedback during calls
Solution: Enable audio processing:
const audioConstraints = {
echoCancellation: true,
noiseSuppression: true,
autoGainControl: true
};

// WebRTC internals inspection
console.log('Peer Connection State:', pcRef.current.connectionState);
console.log('ICE Connection State:', pcRef.current.iceConnectionState);
console.log('Signaling State:', pcRef.current.signalingState);
// Media stream analysis
if (localStreamRef.current) {
console.log('Local Stream Tracks:',
localStreamRef.current.getTracks().map(t => ({
kind: t.kind,
enabled: t.enabled,
readyState: t.readyState
}))
);
}

- Screen Sharing: Doctor can share medical documents
- Recording: Session recording for medical records
- Chat Integration: Text messaging during calls
- Multi-party Calls: Support for multiple participants
- Mobile Apps: React Native implementation
- AI Integration: Real-time health monitoring
- Load Balancing: Multiple signaling servers
- Media Servers: SFU implementation for group calls
- CDN Integration: Optimized media delivery
- Database Sharding: Handle increased user load
| Event | Payload | Description |
|---|---|---|
| `webrtc:start-call` | `{patientId, fromUserName, appointmentId}` | Initiate call |
| `webrtc:offer` | `{to, offer, from}` | Send WebRTC offer |
| `webrtc:answer` | `{to, answer, from}` | Send WebRTC answer |
| `webrtc:ice-candidate` | `{to, candidate, from}` | Share ICE candidate |
| `join` | `userId` | Join user room |
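For the client-to-server events, a small payload builder keeps the field names consistent with this table. This is an illustrative sketch, not code from the repository; the validation rule is an assumption:

```javascript
// Hypothetical builder for the webrtc:start-call payload, using the
// field names from the table above.
function buildStartCallPayload({ patientId, fromUserName, appointmentId }) {
  if (!patientId) {
    throw new Error("patientId is required so the server can route the call");
  }
  return { patientId, fromUserName, appointmentId };
}

// Usage (assuming an established, authenticated socket):
//   socket.emit("webrtc:start-call", buildStartCallPayload({
//     patientId, fromUserName: user.name, appointmentId
//   }));
```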
| Event | Payload | Description |
|---|---|---|
| `webrtc:start-call` | `{from, fromUserName, appointmentId, timestamp, type}` | Incoming call notification |
| `webrtc:offer` | `{from, offer}` | Incoming WebRTC offer |
| `webrtc:answer` | `{from, answer}` | Incoming WebRTC answer |
| `webrtc:ice-candidate` | `{from, candidate}` | Incoming ICE candidate |
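One way to wire these server-to-client events into the client is a single registration helper that maps each event name to a handler. The function name and handlers-map shape below are illustrative, not taken from the repository:

```javascript
// Register one handler per signaling event from the table above.
// Unhandled events are logged rather than silently dropped.
const SIGNALING_EVENTS = [
  "webrtc:start-call",
  "webrtc:offer",
  "webrtc:answer",
  "webrtc:ice-candidate",
];

function registerSignalHandlers(socket, handlers) {
  for (const event of SIGNALING_EVENTS) {
    socket.on(event, (payload) => {
      const handler = handlers[event];
      if (handler) {
        handler(payload);
      } else {
        console.warn(`No handler registered for ${event}`);
      }
    });
  }
}
```

In useWebRTC.js, the handlers would be the functions that call setRemoteDescription, addIceCandidate, and so on.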
| Method | Endpoint | Description |
|---|---|---|
| `GET` | `/api/appointments/:id` | Get appointment details |
| `POST` | `/api/auth/login` | User authentication |
| `GET` | `/api/users/profile` | Get user profile |
| `PUT` | `/api/appointments/:id/status` | Update appointment status |
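A thin fetch wrapper is a common way to call these endpoints from `frontend/src/utils/api.js`. The sketch below assumes the API accepts an `Authorization: Bearer <jwt>` header, which this README does not confirm (it only shows the JWT on the socket handshake), so adjust to match the backend:

```javascript
// Hypothetical fetch wrapper for the REST endpoints above.
// Assumption: the backend reads `Authorization: Bearer <jwt>`.
async function apiFetch(path, { token, method = "GET", body } = {}) {
  const response = await fetch(`/api${path}`, {
    method,
    headers: {
      "Content-Type": "application/json",
      // Attach the JWT only when one is provided
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  return response.json();
}

// Usage:
//   const profile = await apiFetch("/users/profile", { token: storedJwt });
```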
- Code Style: Follow ESLint configuration
- Testing: Write unit tests for new features
- Documentation: Update README for any changes
- Commits: Use conventional commit format
- Fork the repository
- Create feature branch (`git checkout -b feature/amazing-feature`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to branch (`git push origin feature/amazing-feature`)
- Open Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- WebRTC Specification: W3C and IETF standards
- Socket.IO: Real-time communication library
- React Team: Frontend framework
- Node.js Community: Backend runtime environment
- MongoDB: Database solution
For technical support or questions:
- Email: support@medi-mitra.com
- Documentation: Technical Docs
- Issues: GitHub Issues
Built with ❤️ for better healthcare accessibility
Last Updated: September 28, 2025