A distributed plant counting service using object detection with FastAPI backend and Tkinter GUI frontend.
For automated deployment, see the auto_deploy directory with ready-to-use scripts:
```bash
# Full deployment (WSL/Linux)
./auto_deploy/deploy-service.sh

# Configure Windows nginx (PowerShell as Admin)
.\auto_deploy\setup-windows-nginx.ps1

# Check status
./auto_deploy/check-status.sh
```

- Server: FastAPI application with GPU support for model training and inference
- Client: Tkinter GUI for annotation, training management, and result visualization
- Database: PostgreSQL/PostGIS for spatial data storage
- Deployment: Docker containers orchestrated with Kubernetes
- Network: Multi-layer stack for external access (External IP → Windows nginx → WSL socat → Minikube → Kubernetes)
- Interactive manual annotation with zoom capabilities
- YOLO dataset format support
- RT-DETR model training (rtdetr-l, rtdetr-x)
- SAHI-based sliced inference for large orthomosaics
- Distributed processing with Kubernetes (2 replicas)
- Remote GUI access from external networks
- Automated deployment scripts
```
Plant_count_service_0.1/
├── server/
│   ├── main.py                 # FastAPI server application
│   ├── requirements.txt        # Server dependencies
│   └── Dockerfile              # Server container
├── client/
│   ├── main.py                 # Main GUI application
│   ├── annotation_gui.py       # Annotation window
│   ├── requirements.txt        # Client dependencies
│   └── Dockerfile              # Client container
├── auto_deploy/                # Automated deployment scripts
│   ├── deploy-service.sh       # Full deployment script
│   ├── restart-service.sh      # Quick restart after changes
│   ├── teardown-service.sh     # Complete teardown
│   ├── check-status.sh         # Comprehensive status check
│   ├── setup-windows-nginx.ps1 # Windows nginx configuration
│   └── README.md               # Deployment documentation
├── k8s/                        # Kubernetes manifests
│   ├── server-deployment-minikube.yaml
│   ├── postgres-deployment.yaml
│   ├── persistent-volumes.yaml
│   └── secrets.yaml
├── backbones/
│   ├── rtdetr-l.pt             # RT-DETR Large backbone
│   └── rtdetr-x.pt             # RT-DETR Extra-large backbone
├── docker-compose.yml          # Local development setup
├── test_service.py             # Test script
└── README.md
```
Server:
- NVIDIA GPU with CUDA support
- Docker with NVIDIA Container Toolkit
- Kubernetes (for production deployment)
- 8GB+ GPU memory
- 16GB+ RAM

Client:
- Python 3.10+
- X11 display server (for GUI)
- Internet connection to the server
- Move backbone files to the server folder:

```bash
mkdir -p server/backbones
cp backbones/*.pt server/backbones/
```

- Build and start services:

```bash
docker-compose up --build
```

The server will be available at http://localhost:7677
This repository uses a centralized configuration approach to avoid hardcoding sensitive IP addresses and ports in the codebase.
This file contains actual network configuration. It is automatically ignored by git and should never be committed.
Location: Root directory of the repository
Status: Must be created manually from IP_ports.example
Template file showing the structure of the configuration. Safe to commit.
How to use:

```bash
# Copy the example file
cp IP_ports.example IP_ports

# Edit with your actual values
nano IP_ports  # or use your preferred editor
```

`network_config.py` is a Python utility module that loads configuration from `IP_ports` and provides helper functions.
Usage in Python code:
```python
from network_config import get_server_url, get_server_host_port

# Get server URL for client
server_url = get_server_url()

# Get host and port for server
host, port = get_server_host_port()
```

You can maintain multiple configurations:
```bash
# Development
cp IP_ports.dev IP_ports

# Production
cp IP_ports.prod IP_ports

# Local testing
cp IP_ports.local IP_ports
```

- Clone the repository
- Copy `IP_ports.example` to `IP_ports`
- Fill in your network details
- Start development

The `IP_ports` file should never be committed, keeping your configuration private.
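A loader like the one in `network_config.py` can be sketched as below. This is a hedged illustration, not the repository's actual code: it assumes `IP_ports` uses simple `KEY=VALUE` lines and that keys named `SERVER_HOST` and `SERVER_PORT` exist; the real helpers take no arguments and read the file themselves.

```python
def load_ip_ports(path="IP_ports"):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments.

    The file format and key names here are assumptions for illustration.
    """
    config = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, value = line.split("=", 1)
            config[key.strip()] = value.strip()
    return config

def build_server_url(config):
    """Build the client-facing server URL from host/port entries."""
    return f"http://{config['SERVER_HOST']}:{config['SERVER_PORT']}"
```

Keeping the parsing in one module means every script resolves addresses the same way, and swapping environments is just replacing the `IP_ports` file.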
Recommended: use the automated deployment scripts (see auto_deploy/README.md):
```bash
# One-command deployment
./auto_deploy/deploy-service.sh

# Configure Windows nginx (PowerShell as Admin)
.\auto_deploy\setup-windows-nginx.ps1

# Check deployment status
./auto_deploy/check-status.sh
```

Manual deployment (if you prefer step-by-step):
- Start Minikube:

```bash
minikube start --driver=docker --cpus=4 --memory=8192
```

- Build the server image:

```bash
docker build -t plant-count-server:latest -f server/Dockerfile .
```

- Load the image into Minikube:

```bash
minikube image load plant-count-server:latest
```

- Apply the Kubernetes manifests:

```bash
kubectl apply -f k8s/server-deployment-minikube.yaml
```

- Configure network access:

```bash
# On WSL, set up port forwarding with socat
sudo apt-get install socat
socat TCP-LISTEN:7677,bind=0.0.0.0,fork,reuseaddr TCP:$(minikube ip):30677

# On Windows, configure nginx to forward to the WSL IP
# Use setup-windows-nginx.ps1 for automated configuration
```

- Configure the router (for external access):
  - Forward external port 7677 to Windows IP port 7677
  - Example: 93.150.189.34:7677 → 192.168.1.186:7677

- Verify the deployment:

```bash
kubectl get pods
kubectl get services
curl http://localhost:7677
```

The server will be accessible at multiple endpoints:

- From WSL: http://localhost:7677
- From Windows: http://<wsl-ip>:7677
- From an external network: http://93.150.189.34:7677
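The layered network stack means a client may reach the server at different addresses depending on where it runs. A small Python sketch (the helper names are illustrative, not part of the repository) can enumerate the candidate URLs and probe each one:

```python
from urllib.request import urlopen
from urllib.error import URLError

def endpoint_candidates(wsl_ip, external_ip, port=7677):
    """Return the URLs a client may try, from most to least local.

    <wsl-ip> and the external IP are placeholders for your own network.
    """
    return [
        f"http://localhost:{port}",      # from inside WSL
        f"http://{wsl_ip}:{port}",       # from the Windows host
        f"http://{external_ip}:{port}",  # from an external network
    ]

def first_reachable(urls, timeout=3):
    """Probe each URL; return the first one that answers HTTP 200."""
    for url in urls:
        try:
            with urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except URLError:
            continue
    return None
```

Trying the most-local address first avoids an unnecessary round trip through the router when the client runs on the same machine as the server.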
- Install dependencies:

```bash
cd client
pip install -r requirements.txt
```

- Configure the server URL (if different from the default):

```bash
export SERVER_URL=http://93.150.189.34:7677
# or
export SERVER_URL=http://server.sagea.com:7677
```

- Run the client:

```bash
python main.py
```

- Enter orthomosaic URL (e.g., from Zenodo)
- Set tile parameters:
  - Orthomosaic resolution (auto-detected from metadata)
  - Tile size in meters (default: 1.12m)
- Select ML model (rtdetr-l or rtdetr-x)
- Click Annotation button
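The tile size in meters maps to a pixel size via the orthomosaic resolution (meters per pixel). As a sketch, assuming an example ground resolution of 0.5 cm/px (an illustrative value, not stated above), the default 1.12 m tile comes out to 224 px per side, which matches the slice size used during prediction:

```python
def tile_size_pixels(tile_size_m, resolution_m_per_px):
    """Convert the tile edge length from meters to pixels."""
    return round(tile_size_m / resolution_m_per_px)

# Example: default 1.12 m tile at an assumed 0.005 m/px resolution
pixels = tile_size_pixels(1.12, 0.005)  # 224 px per side
```

If your orthomosaic has a different resolution, the same formula tells you how large each annotation tile will be on screen.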
In Annotation Window:
- Left-click and drag to create bounding boxes
- Right-click on a box to delete it
- Use Zoom + to zoom into a region
- Use Zoom - to return to full view
- Click OK to save and move to next tile
- Click STOP to end annotation session
Requirements:
- rtdetr-l: minimum 60 annotations
- rtdetr-x: minimum 100 annotations
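The minimum counts above can be checked before launching a training run; this is a minimal sketch (function and constant names are illustrative, not from the repository):

```python
# Minimum annotation counts per model, as documented above.
MIN_ANNOTATIONS = {"rtdetr-l": 60, "rtdetr-x": 100}

def can_train(model_name, annotation_count):
    """Return (ok, required): whether enough boxes exist to train."""
    required = MIN_ANNOTATIONS[model_name]
    return annotation_count >= required, required
```

Validating client-side gives immediate feedback instead of waiting for the server to reject an undersized dataset.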
- Ensure sufficient annotations
- Click Train button
- Monitor progress in Server Console
- Wait for "Training completed successfully!" message
Training Parameters:
- Batch size: 16
- Max epochs: 200
- Early stopping patience: 15
- Precision: Mixed (FP16)
- After successful training, click Predict
- Server processes all tiles with SAHI
- View prediction count in console
Prediction Parameters:
- Slice size: 224x224 pixels
- Overlap: 25%
- Post-processing: NMS with IOS metric
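With 224-px slices and 25% overlap, the slicing stride is 224 × 0.75 = 168 px. The sketch below estimates how many slices an image produces under those parameters; note that SAHI's own slicer differs in detail (it clamps the final slice to the image edge), so this is an approximation for planning, not the library's algorithm:

```python
import math

def slice_count(length_px, slice_px=224, overlap=0.25):
    """Number of slices along one axis with fractional overlap."""
    if length_px <= slice_px:
        return 1
    stride = int(slice_px * (1 - overlap))  # 168 px for 224 @ 25%
    return math.ceil((length_px - slice_px) / stride) + 1

def grid_size(width_px, height_px, slice_px=224, overlap=0.25):
    """Total slices produced for a width x height image."""
    return (slice_count(width_px, slice_px, overlap)
            * slice_count(height_px, slice_px, overlap))
```

For example, a 1000×1000 px tile yields a 6×6 grid of 36 slices, which is why large orthomosaics dominate the prediction time.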
- Click Download button
- Results saved in GeoJSON format
- Contains bounding boxes with confidence scores
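The downloaded GeoJSON can be post-processed with standard tooling. A hedged sketch, assuming each detection is a Feature whose properties carry a `confidence` field (the exact property name is an assumption about the output schema):

```python
import json

# Minimal FeatureCollection in the assumed result shape.
sample = json.loads("""{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "properties": {"confidence": 0.91},
     "geometry": {"type": "Polygon",
                  "coordinates": [[[0,0],[1,0],[1,1],[0,1],[0,0]]]}},
    {"type": "Feature",
     "properties": {"confidence": 0.42},
     "geometry": {"type": "Polygon",
                  "coordinates": [[[2,2],[3,2],[3,3],[2,3],[2,2]]]}}
  ]
}""")

def count_detections(geojson, min_confidence=0.5):
    """Count detections at or above a confidence threshold."""
    return sum(1 for f in geojson["features"]
               if f["properties"].get("confidence", 0) >= min_confidence)
```

Because the format is plain GeoJSON, the same file also loads directly into GIS tools such as QGIS for visual inspection over the orthomosaic.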
Run the test script to verify the installation:

```bash
python test_service.py
```

Select a mode when prompted:
- Server mode: Tests server endpoints and connectivity
- Client mode: Tests GUI dependencies and file structure
- `DB_HOST`: PostgreSQL host (default: localhost)
- `DB_PORT`: PostgreSQL port (default: 5432)
- `DB_NAME`: Database name (default: plant_count_db)
- `DB_USER`: Database user
- `DB_PASSWORD`: Database password
- `SERVER_URL`: Server address (default: http://93.150.189.34:7677)
- `DISPLAY`: X11 display for GUI (default: :0)
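The client-side variables can be read with their documented defaults as in this sketch (the helper name is illustrative):

```python
import os

def client_settings():
    """Read client configuration from the environment,
    falling back to the documented defaults."""
    return {
        "server_url": os.environ.get("SERVER_URL",
                                     "http://93.150.189.34:7677"),
        "display": os.environ.get("DISPLAY", ":0"),
    }
```

Using `os.environ.get` with an explicit default keeps the fallback values in one place rather than scattered through the code.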
Server Pods (per replica):
- Requests: 2 CPU, 4GB RAM, 1 GPU
- Limits: 4 CPU, 8GB RAM, 1 GPU
- Replicas: 2
Database Pod:
- Requests: 500m CPU, 1GB RAM
- Limits: 1 CPU, 2GB RAM
- Replicas: 1
- Internal: ClusterIP on port 7677
- External: LoadBalancer with IP 93.150.189.34:7677
- NodePort: 30677 for testing
- GPU not detected:
  - Verify NVIDIA drivers: `nvidia-smi`
  - Check the NVIDIA Container Toolkit installation
  - Ensure Kubernetes GPU support is enabled

- Out of memory errors:
  - Reduce the batch size in training
  - Use the smaller model (rtdetr-l instead of rtdetr-x)
  - Increase the GPU memory allocation

- Training fails:
  - Check that backbone files exist in `/app/backbones`
  - Verify sufficient annotations
  - Check server logs: `kubectl logs <pod-name>`

- Cannot connect to server:
  - Verify the server URL: `curl http://93.150.189.34:7677`
  - Check firewall rules
  - Ensure the server is running

- GUI not displaying:
  - Check the X11 display: `echo $DISPLAY`
  - Verify the tkinter installation: `python -m tkinter`
  - For Docker: allow X11 forwarding

- Annotation crashes:
  - Ensure rasterio can open the orthomosaic
  - Check available disk space
  - Verify the image format is supported
Health check endpoint
Returns training and prediction status
Trains model with provided dataset
Request Body:

```json
{
  "map_url": "string",
  "map_name": "string",
  "model_name": "rtdetr-l|rtdetr-x",
  "tiles_data": "json_string",
  "yolo_dataset": "json_string"
}
```

Runs prediction on orthomosaic
Request Body:

```json
{
  "map_name": "string",
  "model_path": "string",
  "tiles_data": "json_string"
}
```

Downloads prediction results
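The training body above can be assembled client-side as in this sketch. It is illustrative only: the helper name is not from the repository, and note that `tiles_data` and `yolo_dataset` are JSON *strings* embedded in the outer body, so nested structures must be serialized before the request is sent:

```python
import json

def build_train_request(map_url, map_name, model_name,
                        tiles_data, yolo_dataset):
    """Assemble the training request body in the documented shape."""
    assert model_name in ("rtdetr-l", "rtdetr-x")
    return {
        "map_url": map_url,
        "map_name": map_name,
        "model_name": model_name,
        # Nested payloads travel as JSON strings, per the schema above.
        "tiles_data": json.dumps(tiles_data),
        "yolo_dataset": json.dumps(yolo_dataset),
    }
```

The resulting dict can then be posted to the server's training endpoint (the endpoint path is not shown in this excerpt) with any HTTP client.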
- Training time: 30-60 minutes (depending on dataset size and GPU)
- Prediction time: ~1 minute per 100 tiles (with GPU)
- Annotation time: ~30 seconds per tile (user-dependent)
Copyright (c) 2025. All rights reserved.
For issues and questions, check the logs:

- Server logs: `kubectl logs <server-pod-name>`
- Client logs: `logs/test_*.log`
- Test script: `python test_service.py`