
0G Serving Broker

Overview

The 0G Serving Broker enables you to become a provider on the 0G Compute Network. It handles service registration, settlement operations, and proxies user requests for both inference and fine-tuning services.

Provider Types

Inference Provider

Transform your AI services into verifiable, revenue-generating endpoints on the 0G Compute Network.

Benefits:

  • Monetize your GPU infrastructure
  • Automated billing and settlements
  • Trust through TEE verification

Prerequisites:

  • Docker Compose 1.27+
  • OpenAI-compatible model service
  • Wallet with 0G tokens for gas fees

Service Requirements:

  • Your AI service must implement the OpenAI API Interface
  • TEE Verification (TeeML) requires:
    • Intel TDX enabled CPU
    • NVIDIA H100 or H200 GPU with TEE support
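The OpenAI API requirement means the broker forwards standard chat-completions calls to your backing service. A minimal sketch of the request shape your service should accept (the URL, port, and model name below are placeholders, not from this README):

```shell
# Build a standard OpenAI-style chat-completions payload
cat <<'EOF' > /tmp/req.json
{
  "model": "your-model",
  "messages": [{"role": "user", "content": "Hello"}]
}
EOF

# Your service should answer this at its chat-completions route, e.g.:
# curl -s "http://localhost:8000/v1/chat/completions" \
#      -H "Content-Type: application/json" \
#      -d @/tmp/req.json
```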

Fine-tuning Provider

Offer computing power for model fine-tuning tasks on the 0G Compute Network.

Prerequisites:

  • Docker and Docker Compose
  • TDX-enabled Intel CPU
  • Compatible NVIDIA GPU (H100/H200 with TEE support)
  • Wallet with 0G tokens for gas fees
  • Publicly accessible server

Quick Start

Download

Visit the releases page to download the latest version.

Inference Broker Setup

# Download and extract
tar -xzf inference-broker.tar.gz
cd inference-broker

# Generate configuration files
./config

Fine-tuning Broker Setup

# Copy config template
cp config.example.yaml config.local.yaml

# Edit config.local.yaml:
# - Set servingUrl to your publicly accessible URL
# - Set privateKeys with your wallet's private key

# Replace port in docker-compose.yml
sed -i 's/#PORT#/8080/g' docker-compose.yml
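The config edits and port substitution above can be sketched as follows. The servingUrl and privateKeys field names come from the comments in this README; the values are placeholders you must replace, and the compose fragment is a stand-in used only to demonstrate the sed substitution:

```shell
# Hypothetical sketch of config.local.yaml after the two edits
cat <<'EOF' > config.local.yaml
servingUrl: "http://your-public-host:8080"   # must be publicly reachable
privateKeys: "0xYOUR_WALLET_PRIVATE_KEY"     # wallet funded with 0G for gas
EOF

# Demonstrate the #PORT# substitution on a minimal compose fragment
printf 'ports:\n  - "#PORT#:#PORT#"\n' > docker-compose.sample.yml
sed -i 's/#PORT#/8080/g' docker-compose.sample.yml
cat docker-compose.sample.yml
```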

TEE Node Setup

For TEE-verified services, you also need to set up a dedicated TEE node.

Documentation

Support