An elegant, modular RKNN model conversion daemon that exposes a network API and supports conversion of multiple model formats.
- **Asynchronous Processing Architecture**: High-performance asynchronous architecture based on asyncio
- **Multi-task Concurrency**: Concurrent processing of multiple conversion tasks
- **Real-time Monitoring**: Real-time task status and progress monitoring
- **Intelligent File Management**: Model file upload and result download
- **Automatic Model Analysis**: Intelligent recognition and handling of multi-file model formats
- **Detailed Logging System**: Complete per-task logging
- **Error Handling Mechanism**: Comprehensive error handling and recovery
- **Flexible Configuration Options**: Support for a wide range of conversion options
- ONNX (`.onnx`) - Open Neural Network Exchange format
- TensorFlow Lite (`.tflite`) - Lightweight TensorFlow models
- PyTorch (`.pt`, `.pth`, `.pytorch`) - PyTorch model files
- Caffe (`.prototxt` + `.caffemodel`) - Network structure file + weight file
- Darknet (`.cfg` + `.weights`) - Configuration file + weight file
- TensorFlow (`.pb` + related files) - Graph definition file + weight files
  - Support for Frozen Graph (`.pb`)
  - Support for SavedModel format
  - Support for Checkpoint format (`.meta` + `.ckpt` + `.index` + `.data`)
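As a rough illustration of how single-file and multi-file formats like Caffe and Darknet can be told apart, here is a minimal sketch of extension-based detection. The function and table names are hypothetical, not the project's actual API:

```python
import os

# Hypothetical format tables: multi-file formats are identified by the
# full set of extensions, single-file formats by one extension.
MULTI_FILE_FORMATS = {
    frozenset({".prototxt", ".caffemodel"}): "caffe",
    frozenset({".cfg", ".weights"}): "darknet",
}
SINGLE_FILE_FORMATS = {
    ".onnx": "onnx", ".tflite": "tflite",
    ".pt": "pytorch", ".pth": "pytorch", ".pytorch": "pytorch",
}

def detect_format(filenames):
    """Return the detected format name, or None if unrecognized."""
    exts = frozenset(os.path.splitext(f)[1].lower() for f in filenames)
    if len(filenames) == 1:
        return SINGLE_FILE_FORMATS.get(next(iter(exts)))
    return MULTI_FILE_FORMATS.get(exts)
```

For example, `detect_format(["m.prototxt", "m.caffemodel"])` yields `"caffe"`, while an unknown extension set yields `None`.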
```
┌───────────────────┐    ┌───────────────────┐    ┌───────────────────┐
│    API Server     │    │   Task Manager    │    │ Converter Worker  │
│                   │    │                   │    │                   │
│ - HTTP Interface  │───▶│ - Task Queue Mgmt │───▶│ - Model Convert   │
│ - File Up/Down    │    │ - Status Tracking │    │ - Progress Update │
│ - Multi-file      │    │ - Worker Pool     │    │ - Error Handling  │
│   Support         │    │ - History Mgmt    │    │ - RKNN Core       │
└───────────────────┘    └───────────────────┘    └───────────────────┘
          │                        │                        │
          ▼                        ▼                        ▼
┌───────────────────┐    ┌───────────────────┐    ┌───────────────────┐
│      Logger       │    │      Config       │    │  Model Analyzer   │
│                   │    │                   │    │                   │
│ - Unified Log     │    │ - Config Mgmt     │    │ - Format Detect   │
│ - Task Log        │    │ - Param Valid     │    │ - File Grouping   │
│ - Color Output    │    │ - Default Config  │    │ - Validation      │
└───────────────────┘    └───────────────────┘    └───────────────────┘
```
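The Task Manager / Worker Pool interaction above can be sketched with plain asyncio. This is a minimal illustration of the pattern, not the project's actual classes; all names are assumptions:

```python
import asyncio

async def worker(queue, statuses):
    """Consume (task_id, job) pairs from the queue and record outcomes."""
    while True:
        task_id, job = await queue.get()
        statuses[task_id] = "running"
        try:
            await job()                 # run the conversion coroutine
            statuses[task_id] = "completed"
        except Exception:
            statuses[task_id] = "failed"
        finally:
            queue.task_done()

async def run_pool(jobs, max_workers=4):
    """Process all jobs with a fixed-size worker pool; return statuses."""
    queue, statuses = asyncio.Queue(), {}
    workers = [asyncio.create_task(worker(queue, statuses))
               for _ in range(max_workers)]
    for task_id, job in jobs:
        await queue.put((task_id, job))
    await queue.join()                  # wait until every job is processed
    for w in workers:
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)
    return statuses
```

A bounded worker pool like this is what lets the daemon accept many tasks while limiting how many conversions run at once.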
- Python 3.7+
- RKNN Toolkit2 1.4.0+
```bash
# Clone the project
git clone <repository-url>
cd rknn_model_conversion

# Install dependencies
pip install -r requirements.txt

# Create necessary directories
mkdir -p uploads outputs temp logs
```

```bash
# Use the provided startup script to automatically check environment and dependencies
chmod +x start_server.sh
./start_server.sh
```

```bash
# Start with default configuration
python main.py

# Start with custom configuration
python main.py --host 0.0.0.0 --port 8080 --workers 4

# Enable debug mode
python main.py --debug
```

`GET /health`

Response:
```json
{
  "status": "healthy",
  "timestamp": "2024-01-01T12:00:00",
  "version": "1.0.0"
}
```

`POST /api/tasks`
Content-Type: application/json

```json
{
  "model_path": "/path/to/model.onnx",
  "config": {
    "target_platform": "rk3588",
    "do_quantization": true,
    "dataset": "./images.txt",
    "quantized_dtype": "w8a8"
  },
  "callback_url": "http://example.com/callback"
}
```

`POST /api/upload_and_create_task`
Content-Type: multipart/form-data

```
file: [model file(s)]
config: {
  "target_platform": "rk3588",
  "do_quantization": true,
  "dataset": "./images.txt"
}
```

`GET /api/tasks`

`GET /api/tasks/{task_id}`

`DELETE /api/tasks/{task_id}`

`POST /api/upload`

Content-Type: multipart/form-data

```
file: [model file]
```

`GET /api/download/{task_id}`

`GET /api/tasks/{task_id}/logs`

```python
import requests
import json
import time

def convert_onnx_model(model_path):
    url = "http://127.0.0.1:8080/api/upload_and_create_task"
    config = {
        "target_platform": "rk3588",
        "quantized_dtype": "w8a8",
        "do_quantization": True,
        "dataset": "./images.txt"
    }
    data = {"config": json.dumps(config)}
    with open(model_path, "rb") as f:
        files = {"file": f}
        response = requests.post(url, data=data, files=files)
    if response.status_code == 200:
        result = response.json()
        task_id = result["task_id"]
        print(f"Task created successfully: {task_id}")
        return task_id
    else:
        print(f"Task creation failed: {response.json()}")
        return None

# Usage example
task_id = convert_onnx_model("model.onnx")
```

```python
def convert_caffe_model(prototxt_path, caffemodel_path):
    url = "http://127.0.0.1:8080/api/upload_and_create_task"
    config = {
        "target_platform": "rk3588",
        "quantized_dtype": "w8a8",
        "do_quantization": True
    }
    data = {"config": json.dumps(config)}
    with open(prototxt_path, "rb") as prototxt_file, \
         open(caffemodel_path, "rb") as caffemodel_file:
        files = {
            "file1": prototxt_file,
            "file2": caffemodel_file
        }
        response = requests.post(url, data=data, files=files)
    return response.json()

# Usage example
result = convert_caffe_model("model.prototxt", "model.caffemodel")
```

```python
def wait_for_completion(task_id):
    url = f"http://127.0.0.1:8080/api/tasks/{task_id}"
    while True:
        response = requests.get(url)
        if response.status_code == 200:
            task_info = response.json()
            status = task_info["status"]
            progress = task_info.get("progress", 0)
            print(f"Status: {status}, Progress: {progress:.1f}%")
            if status in ["completed", "failed", "cancelled"]:
                break
        time.sleep(5)
    return task_info

# Usage example
task_info = wait_for_completion(task_id)
```

```bash
# Health check
curl http://localhost:8080/health

# Upload single-file model
curl -X POST http://localhost:8080/api/upload_and_create_task \
  -F "file=@model.onnx" \
  -F 'config={"target_platform":"rk3588","do_quantization":true}'

# Upload multi-file model (Caffe)
curl -X POST http://localhost:8080/api/upload_and_create_task \
  -F "file=@model.prototxt" \
  -F "file=@model.caffemodel" \
  -F 'config={"target_platform":"rk3588","quantized_dtype":"w8a8"}'

# Query task status
curl http://localhost:8080/api/tasks/{task_id}

# Download result
curl -O http://localhost:8080/api/download/{task_id}
```

| Parameter | Default | Description |
|---|---|---|
| host | 0.0.0.0 | Server host address |
| port | 8080 | Server port |
| max_workers | 4 | Maximum worker threads |
| upload_folder | ./uploads | Upload file directory |
| output_folder | ./outputs | Output file directory |
| temp_folder | ./temp | Temporary file directory |
| max_file_size | 500MB | Maximum file size |
| Parameter | Default | Description |
|---|---|---|
| target_platform | rk3588 | Target platform |
| do_quantization | true | Whether to perform quantization |
| dataset | ./images.txt | Calibration dataset |
| mean_values | [0,0,0] | Mean values |
| std_values | [255,255,255] | Standard deviation |
| quantized_dtype | w8a8 | Quantization data type |
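Putting the parameters above together, a complete conversion config might look like the following. The values mirror the documented defaults; when creating a task via multipart upload, the config travels as a JSON string in the `config` form field:

```python
import json

# A full conversion config built from the documented defaults above.
config = {
    "target_platform": "rk3588",
    "do_quantization": True,
    "dataset": "./images.txt",
    "mean_values": [0, 0, 0],
    "std_values": [255, 255, 255],
    "quantized_dtype": "w8a8",
}

# Serialize for the multipart form field (as in the requests examples).
payload = {"config": json.dumps(config)}
```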
- rk3588
- rk3568
- rk3566
- rv1106
- rv1103
- rk3562
- rk3576
The system provides two levels of logging:
- Global Log: Records server runtime status and system events
- Task Log: Each conversion task has an independent log file
Log file locations:
- Global log: `./logs/server.log`
- Task log: `./logs/task_{task_id}.log`
The system provides comprehensive error handling mechanisms:
- Input file validation
- Conversion process exception capture
- Network request error handling
- Resource cleanup and recovery
- Automatic model format recognition and validation
- Asynchronous I/O processing
- Multi-threaded task execution
- File streaming transmission
- Memory usage optimization
- Intelligent task scheduling
- File type validation
- File size limits
- Path security checks
- Error message filtering
- Upload file isolation
- **Port in use**

  ```bash
  # Check port usage
  netstat -tulpn | grep 8080
  # Start with different port
  python main.py --port 8081
  ```

- **File permission issues**

  ```bash
  # Ensure directories have write permissions
  chmod 755 uploads outputs temp logs
  ```

- **Insufficient memory**

  ```bash
  # Reduce worker thread count
  python main.py --workers 2
  ```

- **RKNN toolkit issues**

  ```bash
  # Check RKNN toolkit installation
  python -c "from rknn.api import RKNN; print('RKNN toolkit installed successfully')"
  ```

```bash
# Enable verbose logging
export PYTHONPATH=.
python -u main.py --debug

# View real-time logs
tail -f logs/server.log
```

- Automatically save completed task records
- Support for querying historical task status
- Persistent storage of result files
- Intelligent model format recognition
- Automatic grouping of related files
- Model file integrity validation
- Support for task completion callbacks
- Custom notification URLs
- Status change notifications
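A minimal callback receiver can be sketched with the standard library. The payload fields used here (`task_id`, `status`) are assumptions about what the daemon posts to the `callback_url`, so adjust them to the actual notification format:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CallbackHandler(BaseHTTPRequestHandler):
    """Accepts POSTed status notifications and prints them (illustrative)."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # Field names below are assumed, not confirmed by the daemon's docs.
        print(f"task {event.get('task_id')} -> {event.get('status')}")
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request access logging

def serve_callbacks(host="0.0.0.0", port=9000):
    """Block forever, handling callback notifications."""
    HTTPServer((host, port), CallbackHandler).serve_forever()
```

Run `serve_callbacks()` on a host reachable from the daemon, then pass that address as `callback_url` when creating a task.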
Issues and pull requests to improve the project are welcome.
```bash
git clone <repository-url>
cd rknn_model_conversion
pip install -r requirements.txt
python -m pytest tests/  # Run tests
```

This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter problems or have any questions, please:
- Check the troubleshooting section of this documentation
- Search existing Issues
- Create a new Issue with detailed information
Note: Please ensure you have properly installed RKNN Toolkit2, which is the core dependency for model conversion.