The IRIS Facial Analysis Platform provides a comprehensive REST API and WebSocket interface for real-time facial analysis, age estimation, and emotion detection. This documentation covers all available endpoints, request/response formats, and integration examples.
- Development: `http://127.0.0.1:5001`
- Production: `https://api.iris-analysis.com` (configurable)
Currently, the API does not require authentication. All endpoints are publicly accessible.
- HTTP Requests: No explicit rate limiting (recommended: max 100 requests/minute)
- WebSocket: Connection-based throttling for real-time analysis
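Since the server enforces no hard limit, staying under the recommended 100 requests/minute is the client's job. A minimal sliding-window limiter sketch (the `RateLimiter` class and its parameters are illustrative, not part of the IRIS client library):

```typescript
// Illustrative sliding-window rate limiter (not part of the IRIS API client).
class RateLimiter {
  private timestamps: number[] = []

  constructor(
    private maxRequests: number, // e.g. 100 for the recommended HTTP limit
    private windowMs: number,    // e.g. 60_000 for a one-minute window
  ) {}

  // Returns true if a request may be sent at time `now` (ms) and records it.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that have fallen out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs)
    if (this.timestamps.length >= this.maxRequests) return false
    this.timestamps.push(now)
    return true
  }
}
```

Call `tryAcquire()` before each HTTP request and queue or drop the call when it returns `false`.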
Check the health status of all backend services.
Response:

```json
{
  "status": "healthy|degraded|unhealthy",
  "timestamp": "2024-01-15T10:30:00Z",
  "services": {
    "face_detector": true,
    "dex_age_estimator": true,
    "emonext_detector": true,
    "insightface_fallback": true,
    "video_processor": true
  },
  "metrics": {
    "total_requests": 1250,
    "active_connections": 5,
    "avg_processing_time": 245.5,
    "fps": 24.8,
    "last_update": "2024-01-15T10:30:00Z"
  }
}
```

Get detailed system status including resource usage.
Response:

```json
{
  "timestamp": "2024-01-15T10:30:00Z",
  "system": {
    "platform": "Darwin",
    "platform_version": "23.1.0",
    "architecture": "arm64",
    "python_version": "3.11.5"
  },
  "resources": {
    "cpu_percent": 15.2,
    "memory_percent": 45.8,
    "disk_percent": 67.3
  },
  "services": { /* same as health */ },
  "metrics": { /* same as health */ }
}
```

Simple connectivity test.
Response:

```json
{
  "message": "pong",
  "timestamp": "2024-01-15T10:30:00Z"
}
```

Get information about all loaded AI models.
Response:

```json
{
  "models": {
    "face_detection": {
      "name": "MediaPipe Face Detection",
      "version": "0.10.9",
      "loaded": true,
      "description": "Real-time face detection using MediaPipe",
      "capabilities": ["face_detection", "face_landmarks"],
      "status": "ready",
      "last_updated": "2024-01-15T10:30:00Z"
    },
    "age_estimation": {
      "name": "DEX (Deep EXpectation) VGG-16",
      "version": "1.0.0",
      "loaded": true,
      "description": "Advanced age estimation using deep expectation regression",
      "capabilities": ["age_estimation", "gender_detection"],
      "status": "ready",
      "last_updated": "2024-01-15T10:30:00Z",
      "fallback": {
        "name": "InsightFace ArcFace",
        "version": "0.7.3",
        "loaded": true
      }
    },
    "emotion_recognition": {
      "name": "EmoNeXt ConvNeXt-based",
      "version": "1.0.0",
      "loaded": true,
      "description": "State-of-the-art emotion recognition using ConvNeXt architecture",
      "capabilities": ["emotion_detection"],
      "status": "ready",
      "last_updated": "2024-01-15T10:30:00Z",
      "fallback": {
        "name": "Basic Heuristic",
        "version": "1.0.0",
        "loaded": true
      }
    }
  },
  "total_models": 3,
  "loaded_models": 3,
  "timestamp": "2024-01-15T10:30:00Z"
}
```

Get detailed information about a specific model.
Parameters:
- `model_name`: One of `face_detection`, `age_estimation`, `emotion_recognition`
Response:

```json
{
  "name": "face_detection",
  "loaded": true,
  "version": "0.10.9",
  "timestamp": "2024-01-15T10:30:00Z",
  "endpoint": "/api/models/face_detection"
}
```

Get information about all available model capabilities.
Response:

```json
{
  "capabilities": {
    "face_detection": {
      "description": "Detect faces in images and video streams",
      "input_formats": ["image/jpeg", "image/png", "video/mp4"],
      "output_format": "bounding_boxes_with_confidence",
      "real_time": true
    },
    "age_estimation": {
      "description": "Estimate age from detected faces",
      "input_formats": ["face_region"],
      "output_format": "age_value_with_confidence",
      "real_time": true
    },
    "emotion_detection": {
      "description": "Detect emotions from facial expressions",
      "input_formats": ["face_region"],
      "output_format": "emotion_probabilities",
      "real_time": true,
      "emotions": ["happy", "sad", "angry", "surprised", "fearful", "disgusted", "neutral"]
    }
  },
  "timestamp": "2024-01-15T10:30:00Z"
}
```

Analyze a single image for faces, age, and emotions.
Request (Form Data):

```
Content-Type: multipart/form-data
image: [image file]
```
Request (JSON):

```json
{
  "image": "base64_encoded_image_data",
  "timestamp": 1705312200000,
  "options": {
    "detectFaces": true,
    "estimateAge": true,
    "detectEmotion": true,
    "detectGender": true
  }
}
```

Response:

```json
{
  "success": true,
  "results": {
    "faces": [
      {
        "id": "face_001",
        "bbox": {
          "x": 150,
          "y": 100,
          "width": 200,
          "height": 250
        },
        "confidence": 0.95,
        "age": {
          "value": 28,
          "confidence": 0.87
        },
        "gender": {
          "value": "female",
          "confidence": 0.92
        },
        "emotion": {
          "value": "happy",
          "confidence": 0.89,
          "emotions": {
            "happy": 0.89,
            "neutral": 0.08,
            "surprised": 0.02,
            "sad": 0.01,
            "angry": 0.00,
            "fearful": 0.00,
            "disgusted": 0.00
          }
        },
        "landmarks": {
          "left_eye": { "x": 180, "y": 140 },
          "right_eye": { "x": 220, "y": 140 },
          "nose": { "x": 200, "y": 170 },
          "mouth": { "x": 200, "y": 200 }
        }
      }
    ],
    "analysis": []
  },
  "processing_time": 245.5,
  "timestamp": "2024-01-15T10:30:00Z",
  "image_info": {
    "width": 640,
    "height": 480,
    "channels": 3
  }
}
```

Analyze multiple images in a batch.
Request:

```json
{
  "images": [
    "base64_encoded_image_1",
    "base64_encoded_image_2"
  ],
  "options": {
    "detectFaces": true,
    "estimateAge": true,
    "detectEmotion": true
  }
}
```

Response:

```json
{
  "success": true,
  "results": [
    {
      "index": 0,
      "success": true,
      "result": { /* same as single analysis */ }
    },
    {
      "index": 1,
      "success": true,
      "result": { /* same as single analysis */ }
    }
  ],
  "total_images": 2,
  "successful_analyses": 2,
  "processing_time": 450.2,
  "timestamp": "2024-01-15T10:30:00Z"
}
```

Upload an image file for later analysis.
Request:

```
Content-Type: multipart/form-data
image: [image file]
```

Response:

```json
{
  "success": true,
  "file_id": "uuid-generated-id",
  "filename": "uuid-generated-id.jpg",
  "original_filename": "my_photo.jpg",
  "url": "/api/files/uuid-generated-id",
  "file_info": {
    "size": 245760,
    "created": "2024-01-15T10:30:00Z",
    "modified": "2024-01-15T10:30:00Z",
    "exists": true
  },
  "timestamp": "2024-01-15T10:30:00Z"
}
```

Retrieve an uploaded file.
Response: Binary file data
Get information about an uploaded file.
Response:

```json
{
  "file_id": "uuid-generated-id",
  "filename": "uuid-generated-id.jpg",
  "url": "/api/files/uuid-generated-id",
  "file_info": {
    "size": 245760,
    "created": "2024-01-15T10:30:00Z",
    "modified": "2024-01-15T10:30:00Z",
    "exists": true
  },
  "timestamp": "2024-01-15T10:30:00Z"
}
```

List all uploaded files.
Response:

```json
{
  "files": [
    {
      "file_id": "uuid-1",
      "filename": "uuid-1.jpg",
      "url": "/api/files/uuid-1",
      "file_info": { /* file info object */ }
    }
  ],
  "total_files": 1,
  "timestamp": "2024-01-15T10:30:00Z"
}
```

Delete an uploaded file.
Response:

```json
{
  "success": true,
  "message": "File deleted successfully",
  "file_id": "uuid-generated-id",
  "timestamp": "2024-01-15T10:30:00Z"
}
```

Connect to the WebSocket server at `/socket.io` using a Socket.IO client.
Connection URL: ws://127.0.0.1:5001/socket.io
Send a video frame for real-time analysis.
Payload:

```json
{
  "data": "base64_encoded_image_data",
  "timestamp": 1705312200000,
  "options": {
    "detectFaces": true,
    "estimateAge": true,
    "detectEmotion": true,
    "detectGender": true
  }
}
```

Request current system metrics.
Payload: {}
Join a room for group analysis.
Payload:

```json
{
  "room": "room_name"
}
```

Leave a room.
Payload:

```json
{
  "room": "room_name"
}
```

Sent when a client successfully connects.
Payload:

```json
{
  "status": "connected",
  "server_time": "2024-01-15T10:30:00Z",
  "services_ready": true
}
```

Sent when faces are detected in a video frame.
Payload:

```json
{
  "faces": [
    {
      "id": "face_001",
      "bbox": { "x": 150, "y": 100, "width": 200, "height": 250 },
      "confidence": 0.95,
      "age": { "value": 28, "confidence": 0.87 },
      "emotion": { "value": "happy", "confidence": 0.89 }
    }
  ],
  "timestamp": 1705312200000,
  "processing_time": 45.2
}
```

Sent when full analysis is complete.
Payload:

```json
{
  "results": {
    "faces": [ /* array of analyzed faces */ ],
    "analysis": [ /* additional analysis data */ ]
  },
  "timestamp": 1705312200000,
  "processing_time": 245.5
}
```

Sent when no faces are found in the frame.
Payload:

```json
{
  "timestamp": 1705312200000,
  "processing_time": 15.3
}
```

Sent with current system metrics.
Payload:

```json
{
  "total_requests": 1250,
  "active_connections": 5,
  "avg_processing_time": 245.5,
  "fps": 24.8,
  "last_update": "2024-01-15T10:30:00Z"
}
```

Sent when an error occurs.
Payload:

```json
{
  "message": "Error description",
  "code": "ERROR_CODE"
}
```

HTTP status codes:
- `200` - Success
- `400` - Bad Request (invalid input)
- `404` - Not Found (endpoint or resource not found)
- `413` - Payload Too Large (file size exceeds 16MB)
- `415` - Unsupported Media Type (invalid file type)
- `500` - Internal Server Error
- `503` - Service Unavailable (AI services not ready)
```json
{
  "error": "Error description",
  "message": "Detailed error message",
  "timestamp": "2024-01-15T10:30:00Z",
  "code": 400
}
```

Common error messages:
- `"No image provided"` - Missing image in request
- `"File too large"` - File exceeds 16MB limit
- `"Invalid file type"` - Unsupported image format
- `"Video processor not initialized"` - AI services not ready
- `"Failed to process image data"` - Image processing error
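The error format above maps onto a small TypeScript type, which makes the "always check the `success` field" advice easy to follow. A hedged sketch (the `ApiError` interface and `isApiError` helper are illustrative, not part of the IRIS client library):

```typescript
// Illustrative envelope type for the error format above (not part of the IRIS client).
interface ApiError {
  error: string
  message: string
  timestamp: string
  code: number
}

// Narrow an unknown response body to the error envelope before reading its fields.
function isApiError(body: unknown): body is ApiError {
  const b = body as Partial<ApiError> | null
  return !!b && typeof b.error === 'string' && typeof b.code === 'number'
}
```

A caller can then branch on `isApiError(await response.json())` instead of trusting the HTTP status alone.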
```typescript
import { irisApi } from '@/lib/api'

// Initialize the API client
await irisApi.initialize()
```

```typescript
const health = await irisApi.health.checkHealth()
if (health.success) {
  console.log('Backend is healthy:', health.data.status)
} else {
  console.error('Health check failed:', health.error)
}
```

```typescript
// Analyze uploaded file
const fileInput = document.getElementById('file') as HTMLInputElement
const file = fileInput.files[0]
const result = await irisApi.analysis.analyzeImageFile(file)
if (result.success) {
  console.log('Analysis results:', result.data.results)
  console.log(`Found ${result.data.results.faces.length} faces`)
} else {
  console.error('Analysis failed:', result.error)
}
```

```typescript
// Connect to WebSocket
await irisApi.websocket.connect()

// Listen for face detection results
irisApi.websocket.on('face_detected', (data) => {
  console.log(`Detected ${data.faces.length} faces`)
  data.faces.forEach(face => {
    console.log(`Face: age ${face.age?.value}, emotion ${face.emotion?.value}`)
  })
})

// Send video frame
const video = document.getElementById('video') as HTMLVideoElement
const canvas = document.createElement('canvas')
const ctx = canvas.getContext('2d')
canvas.width = video.videoWidth
canvas.height = video.videoHeight
ctx.drawImage(video, 0, 0)
const imageData = canvas.toDataURL('image/jpeg', 0.8).split(',')[1]
irisApi.websocket.sendVideoFrame(imageData)
```

```typescript
const files = Array.from(fileInput.files)
const imagePromises = files.map(file =>
  new Promise<string>((resolve) => {
    const reader = new FileReader()
    reader.onload = () => resolve(reader.result as string)
    reader.readAsDataURL(file)
  })
)
const images = await Promise.all(imagePromises)
const base64Images = images.map(img => img.split(',')[1])
const batchResult = await irisApi.analysis.analyzeBatch(base64Images)
if (batchResult.success) {
  console.log(`Analyzed ${batchResult.data.successful_analyses} images`)
}
```

Health Check:
```bash
curl -X GET http://127.0.0.1:5001/api/health
```

Image Analysis:

```bash
curl -X POST http://127.0.0.1:5001/api/analyze \
  -F "image=@/path/to/image.jpg"
```

File Upload:

```bash
curl -X POST http://127.0.0.1:5001/api/upload \
  -F "image=@/path/to/image.jpg"
```

Health Check:
```typescript
const response = await fetch('http://127.0.0.1:5001/api/health')
const health = await response.json()
console.log('Health status:', health.status)
```

Image Analysis:

```typescript
const formData = new FormData()
formData.append('image', fileInput.files[0])
const response = await fetch('http://127.0.0.1:5001/api/analyze', {
  method: 'POST',
  body: formData
})
const result = await response.json()
console.log('Analysis results:', result.results)
```

- HTTP API: Maximum 100 requests per minute per client
- WebSocket: Maximum 30 frames per second for real-time analysis
- File Upload: Maximum 16MB per file, 10 files per batch
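The upload limits above can be enforced client-side before any bytes leave the browser. A hedged sketch (the `validateUpload` helper is illustrative, and the MIME type list is an assumption derived from the supported formats named under Troubleshooting):

```typescript
// Illustrative pre-upload validation (not part of the IRIS client library).
const MAX_FILE_BYTES = 16 * 1024 * 1024 // 16MB server-side limit

// Assumed MIME types for the supported formats: JPEG, PNG, GIF, BMP, WebP.
const ALLOWED_TYPES = new Set([
  'image/jpeg', 'image/png', 'image/gif', 'image/bmp', 'image/webp',
])

// Accepts anything with `type` and `size`, so browser File objects work directly.
// Returns null when valid, or a message mirroring the server's common errors.
function validateUpload(file: { type: string; size: number }): string | null {
  if (!ALLOWED_TYPES.has(file.type)) return 'Invalid file type'
  if (file.size > MAX_FILE_BYTES) return 'File too large'
  return null
}
```

Running this before `formData.append('image', file)` avoids a round trip that would end in a 413 or 415.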
- Error Handling: Always check the `success` field in responses
- Retry Logic: Implement exponential backoff for failed requests
- File Validation: Validate file types and sizes before upload
- WebSocket Management: Properly handle connection/disconnection events
- Resource Cleanup: Cancel ongoing requests when components unmount
- Caching: Cache model information and capabilities to reduce API calls
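The retry-with-exponential-backoff practice can be sketched as a small wrapper. This is a minimal illustration, not part of the IRIS client; the `withRetry` name, attempt count, and base delay are all assumptions:

```typescript
// Illustrative retry wrapper with exponential backoff (not part of the IRIS client).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt === maxAttempts - 1) break
      // Wait 200ms, 400ms, 800ms, ... between attempts
      const delay = baseDelayMs * 2 ** attempt
      await new Promise(resolve => setTimeout(resolve, delay))
    }
  }
  throw lastError
}
```

Usage with the client from the examples above might look like `await withRetry(() => irisApi.health.checkHealth())`.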
- Use WebSocket for real-time analysis instead of polling HTTP endpoints
- Compress images before sending for analysis to reduce bandwidth
- Implement client-side face detection to reduce server load
- Use batch analysis for multiple images instead of individual requests
- Monitor processing times and adjust frame rates accordingly
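Compressing images before analysis usually means downscaling before the `canvas.toDataURL('image/jpeg', 0.8)` step shown in the WebSocket example. A sketch of computing a capped frame size while preserving aspect ratio (the `fitWithin` helper and the 640px cap are illustrative choices, not requirements of the API):

```typescript
// Illustrative: compute a target size that fits within `maxDim` pixels on the
// longest side, preserving aspect ratio. Never upscales.
function fitWithin(
  width: number,
  height: number,
  maxDim: number,
): { width: number; height: number } {
  const scale = Math.min(1, maxDim / Math.max(width, height))
  return { width: Math.round(width * scale), height: Math.round(height * scale) }
}
```

Size the canvas with this result (e.g. `fitWithin(video.videoWidth, video.videoHeight, 640)`) before drawing the video frame, so each WebSocket payload stays small.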
Connection Refused:
- Verify backend server is running on correct port
- Check firewall settings
- Ensure CORS is properly configured
Analysis Fails:
- Verify image format is supported (JPEG, PNG, GIF, BMP, WebP)
- Check file size is under 16MB limit
- Ensure AI models are loaded (check `/api/models`)
WebSocket Disconnections:
- Implement reconnection logic with exponential backoff
- Monitor connection status and handle offline scenarios
- Check network stability and proxy configurations
Slow Performance:
- Reduce image resolution before analysis
- Use appropriate quality settings for base64 encoding
- Monitor system resources on backend server
- Consider using batch processing for multiple images
Enable debug logging by setting the environment variable:

```
NEXT_PUBLIC_DEBUG_API=true
```

This will log all API requests, responses, and errors to the browser console.
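One way such a flag is typically consumed is a small gated logger. A hedged sketch (the `debugLog` helper is hypothetical, not part of the IRIS client; only the `NEXT_PUBLIC_DEBUG_API=true` convention comes from this document):

```typescript
// Illustrative debug-logging gate (hypothetical helper, not part of the IRIS client).
// Logs only when the flag value is the string "true", and reports whether it logged.
function debugLog(flag: string | undefined, ...args: unknown[]): boolean {
  if (flag !== 'true') return false
  console.debug('[IRIS API]', ...args)
  return true
}

// In a Next.js client, the flag would be read as:
// debugLog(process.env.NEXT_PUBLIC_DEBUG_API, 'request', '/api/health')
```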