The 0G Serving Broker enables you to become a provider on the 0G Compute Network. It handles service registration and settlement operations, and proxies user requests for both inference and fine-tuning services.
Transform your AI services into verifiable, revenue-generating endpoints on the 0G Compute Network.
Benefits:
- Monetize your GPU infrastructure
- Automated billing and settlements
- Trust through TEE verification
Prerequisites:
- Docker Compose 1.27+ (see the version check after this list)
- OpenAI-compatible model service
- Wallet with 0G tokens for gas fees
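Since the broker ships as a Docker Compose stack, it is worth confirming the Compose version up front. A minimal check, assuming either the standalone docker-compose binary or the Docker CLI plugin is installed:

# Print the installed Docker Compose version (either form, depending on how Compose was installed)
docker-compose --version 2>/dev/null || docker compose version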
Service Requirements:
- Your AI service must implement the OpenAI API Interface (see the example request after this list)
- TEE Verification (TeeML) requires:
  - TDX-enabled Intel CPU
  - NVIDIA H100 or H200 GPU with TEE support
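As a quick compatibility check before registering, you can send an OpenAI-style chat completion request to your model service. The endpoint URL and model name below are placeholders; substitute the values for your own deployment:

# Hypothetical local endpoint and model name; replace both with your service's actual values
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "your-model-name",
        "messages": [{"role": "user", "content": "Hello"}]
      }'

A service that implements the OpenAI API Interface should return a JSON chat completion object rather than an error.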
You can also become a fine-tuning provider by offering computing power for model fine-tuning tasks on the 0G Compute Network.
Prerequisites:
- Docker and Docker Compose
- TDX-enabled Intel CPU
- Compatible NVIDIA GPU (H100/H200 with TEE support); see the hardware check after this list
- Wallet with 0G tokens for gas fees
- Publicly accessible server
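A rough hardware sanity check, assuming a Linux host; the exact TDX output depends on your kernel, BIOS, and firmware configuration:

# List the NVIDIA GPUs visible to the driver (expect an H100 or H200)
nvidia-smi -L
# Look for TDX initialization messages in the kernel log (presence and wording vary by kernel version)
sudo dmesg | grep -i tdx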
Visit the releases page to download the latest version.
# Download and extract
tar -xzf inference-broker.tar.gz
cd inference-broker
# Generate configuration files
./config
# Copy config template
cp config.example.yaml config.local.yaml
# Edit config.local.yaml:
# - Set servingUrl to your publicly accessible URL
# - Set privateKeys with your wallet's private key
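Treat config.example.yaml as the source of truth for the file layout; a minimal sketch of locating and filling in the two fields, with the field layout in the comments assumed rather than confirmed:

# Locate the fields to change in the copied template
grep -nE 'servingUrl|privateKeys' config.local.yaml
# After editing, the relevant entries might look like this (layout assumed; follow config.example.yaml):
#   servingUrl: "https://broker.example.com:8080"
#   privateKeys: "0xYOUR_WALLET_PRIVATE_KEY"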
# Replace port in docker-compose.yml
sed -i 's/#PORT#/8080/g' docker-compose.yml

For TEE-verified services, you need to set up a TEE node:
- Option 1: Follow the Dstack Getting Started Guide
- Option 2: Follow the 0G-TAPP README
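Once the configuration and, for TEE-verified services, the TEE node are in place, the broker is typically started with Docker Compose; a sketch, assuming the docker-compose.yml in the extracted directory is the entry point:

# Start the broker stack in the background, then follow its logs
docker-compose up -d
docker-compose logs -f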