Contributing Models to OpenClaw2Go Registry

Overview

The OpenClaw2Go model registry is an open collection of model configurations for running AI models on GPU pods. Community contributions help expand the model catalog for everyone.

How to Contribute

Option 1: GitHub Issue (Easiest)

  1. Run your model on an OpenClaw2Go pod
  2. Export your config: `openclaw2go registry export --format issue`
  3. Open a New Model Issue
  4. Paste the exported config and test evidence
  5. A maintainer will review and merge your contribution

Option 2: Direct Pull Request

  1. Fork this repository
  2. Create a new JSON file in `models/` (use an existing file, or the example config below, as a reference)
  3. Run validation: `python3 scripts/validate.py`
  4. Submit a Pull Request

CI will automatically validate your JSON and check that the HuggingFace repo exists.

Model Config Reference

Each model is a JSON file in `models/` with these fields:

| Field | Required | Description |
| --- | --- | --- |
| `id` | Yes | Unique ID in `provider/name` format (lowercase) |
| `name` | Yes | Human-readable model name |
| `type` | Yes | `llm`, `audio`, or `image` |
| `engine` | Yes | `llamacpp`, `llamacpp-audio`, `image-gen`, or `vllm` |
| `repo` | Yes | HuggingFace repository name |
| `files` | Yes | Array of files to download from the repo |
| `downloadDir` | Yes | Must start with `/workspace/models/` |
| `servedAs` | Yes (LLM) | Model name exposed via the API |
| `vram` | Yes | Object with `model` (MB) and `overhead` (MB) fields |
| `kvCacheMbPer1kTokens` | Recommended | KV cache VRAM per 1k tokens (with q8_0) |
| `defaults` | Recommended | Default `contextLength` and `port` |
| `startDefaults` | Optional | Default values such as `gpuLayers`, `parallel` |
| `extraStartArgs` | Optional | Additional CLI args for the engine |
| `provider` | Yes (LLM) | Provider config with `name` and `api` |
| `default` | Yes | Whether this is the default for its type (usually `false`) |
| `status` | Yes | `stable`, `experimental`, or `deprecated` |
| `verifiedOn` | Optional | Array of GPU names the config was verified on |
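
For illustration, here is a minimal hypothetical entry. Field names follow the table above, but every value (the ID, repo, file name, VRAM figures, and the `provider.api` string) is made up for this sketch, not taken from a real registry entry:

```json
{
  "id": "example/tiny-llm-7b",
  "name": "Tiny LLM 7B",
  "type": "llm",
  "engine": "llamacpp",
  "repo": "example/tiny-llm-7b-gguf",
  "files": ["tiny-llm-7b.Q4_K_M.gguf"],
  "downloadDir": "/workspace/models/tiny-llm-7b",
  "servedAs": "tiny-llm-7b",
  "vram": { "model": 4200, "overhead": 1020 },
  "kvCacheMbPer1kTokens": 45,
  "defaults": { "contextLength": 8192, "port": 8080 },
  "provider": { "name": "example", "api": "openai" },
  "default": false,
  "status": "experimental"
}
```

Optional fields like `startDefaults`, `extraStartArgs`, and `verifiedOn` can be omitted; see an existing file in `models/` for how they are used in practice.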

VRAM Estimation

VRAM values should be measured, not guessed:

  1. Start the model on a pod
  2. Run `nvidia-smi` and note the VRAM usage
  3. Set `vram.model` to the approximate VRAM used by the model weights
  4. Set `vram.overhead` to the remaining VRAM after subtracting the KV cache (worked example below)
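
For example (all numbers hypothetical): if `nvidia-smi` reports 5,400 MB after startup with a 4k context, the weights account for roughly 4,200 MB, and the KV cache at 4k tokens is about 180 MB, then `vram.model` is 4200 and

$$
\text{vram.overhead} = 5400 - 4200 - 180 = 1020\ \text{MB}
$$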

KV Cache Rate

For LLM models, measure `kvCacheMbPer1kTokens`:

  1. Run the model with a known context length (e.g., 150k)
  2. Note the total VRAM used
  3. Calculate: `(total_vram - model_vram - overhead) / (context_length / 1000)` (worked example below)

This value should reflect q8_0 KV quantization (the entrypoint uses `-ctk q8_0 -ctv q8_0`).
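
As a worked example (all numbers hypothetical): if a model with 6,000 MB of weights and 900 MB of overhead shows 13,500 MB total VRAM at a 150k context, then

$$
\frac{13500 - 6000 - 900}{150000 / 1000} = \frac{6600}{150} = 44\ \text{MB per 1k tokens}
$$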

Validation

Before submitting, validate your config:

python3 scripts/validate.py
python3 scripts/validate.py --check-hf  # Also verify HF repos exist

Security Requirements

  • `downloadDir` must start with `/workspace/models/` (path restriction)
  • `engine` must be one of the known engines (engine whitelist)
  • `extraStartArgs` are passed as CLI args to known binaries only (no code execution)
  • All merges require maintainer review
