
exo: Run your own AI cluster at home with everyday devices. Maintained by exo labs.


The Roxonn Fork: Decentralized AI with Proof of Compute

Note: This is the official Roxonn fork of the excellent exo-explore/exo project, originally created by exo labs.

The purpose of this fork is to integrate exo into the Roxonn Platform, creating a decentralized "Proof of Compute" network. This allows anyone, from individual hobbyists to large-scale operators, to contribute their spare compute resources to the network and earn ROXN token rewards for running AI tasks. You can view the on-chain activity for this network on the XDC block explorer here: Proof of Compute Contract on XDC Scan.

How to Participate and Earn Rewards

  1. Get a Wallet: You'll need an XDC wallet address from the Roxonn Platform.
  2. Run the Node: Start the exo node with your wallet address. This registers you on the network.
    exo --roxonn-wallet-address <YOUR_XDC_WALLET_ADDRESS> --node-host <IP_ADDRESS> --node-port 52415 --default-model llama-3.2-1b
  3. Contribute & Earn: Once your node is running, it's available to receive AI inference tasks from the network. Every completed task is recorded on-chain as a "compute unit." These compute units will be the basis for earning ROXN token rewards as the network develops.
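To see what "recorded on-chain" could look like in practice, here is a minimal sketch using web3.py. The computeUnits(address) view function and both addresses are hypothetical placeholders, not the confirmed contract interface; check the Proof of Compute contract on XDC Scan for the actual ABI.

# Minimal sketch: reading compute units from the chain (hypothetical ABI).
# `computeUnits(address)` is an assumed view function, not a confirmed one.
from web3 import Web3

XDC_RPC = "https://rpc.xinfin.network"                    # public XDC mainnet RPC
CONTRACT = "0x0000000000000000000000000000000000000000"   # placeholder: real address is on XDC Scan
WALLET = "0x0000000000000000000000000000000000000000"     # placeholder: your node's wallet

ABI = [{"name": "computeUnits", "type": "function", "stateMutability": "view",
        "inputs": [{"name": "node", "type": "address"}],
        "outputs": [{"name": "", "type": "uint256"}]}]

w3 = Web3(Web3.HTTPProvider(XDC_RPC))
contract = w3.eth.contract(address=Web3.to_checksum_address(CONTRACT), abi=ABI)
print("Compute units:", contract.functions.computeUnits(WALLET).call())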

Contribute to the Project and Earn Bounties

We also welcome contributions to the exo codebase itself. Bounties for accepted pull requests are managed on the Roxonn Platform.


Unify your existing devices into one powerful GPU: iPhone, iPad, Android, Mac, NVIDIA, Raspberry Pi, pretty much any device!

Update: Roxonn is hiring. See here for more details.

Interested in running Roxonn Node in your business? Contact us to discuss.

Get Involved

Roxonn Node is experimental software. Expect bugs early on. Create issues so they can be fixed. The Roxonn Ecosystem team will strive to resolve issues quickly.

We also welcome contributions from the community. We have a list of bounties on our platform.

Features

Wide Model Support

exo supports different models including LLaMA (MLX and tinygrad), Mistral, LLaVA, Qwen, and DeepSeek.

Dynamic Model Partitioning

exo optimally splits up models based on the current network topology and device resources available. This enables you to run larger models than you would be able to on any single device.

Automatic Device Discovery

exo will automatically discover other devices using the best method available. Zero manual configuration.

ChatGPT-compatible API

exo provides a ChatGPT-compatible API for running models. It's a one-line change in your application to run models on your own hardware using exo.
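For example, with the official OpenAI Python client, pointing at exo is just a matter of changing the base URL (a sketch; the endpoint and model name match the curl examples later in this README, and the API key is a dummy since exo's API is assumed not to require one):

from openai import OpenAI

# Same code you would write against api.openai.com, with only base_url changed.
client = OpenAI(base_url="http://localhost:52415/v1", api_key="not-needed")

response = client.chat.completions.create(
  model="llama-3.2-3b",
  messages=[{"role": "user", "content": "What is the meaning of exo?"}],
)
print(response.choices[0].message.content)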

Device Equality

Unlike other distributed inference frameworks, exo does not use a master-worker architecture. Instead, exo devices connect p2p. As long as a device is connected somewhere in the network, it can be used to run models.

exo supports different partitioning strategies to split up a model across devices. The default partitioning strategy is ring memory weighted partitioning. This runs inference in a ring where each device runs a number of model layers proportional to its memory.
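As a rough illustration of memory-weighted partitioning (a sketch of the idea, not exo's actual implementation):

# Illustrative sketch: assign layers in proportion to each device's memory.
def partition_layers(num_layers, device_memory_gb):
  total = sum(device_memory_gb)
  shares = [round(num_layers * mem / total) for mem in device_memory_gb]
  shares[-1] += num_layers - sum(shares)  # absorb rounding drift in the last device
  return shares

# A 32-layer model across a 16GB device and an 8GB device:
print(partition_layers(32, [16, 8]))  # -> [21, 11]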

"A screenshot of exo running 5 nodes

Watch the video

Installation

The current recommended way to install exo is from source.

Prerequisites

  • Python>=3.12.0 is required because of issues with asyncio in previous versions.
  • For Linux with NVIDIA GPU support (Linux-only, skip if not using Linux or NVIDIA): the NVIDIA driver and CUDA toolkit must be installed.

Hardware Requirements

  • The only requirement to run exo is to have enough memory across all your devices to fit the entire model into memory. For example, if you are running Llama 3.1 8B (fp16), you need 16GB of memory across all devices (see the sketch after this list). Any of the following configurations would work since they each have more than 16GB of memory in total:
    • 2 x 8GB M3 MacBook Airs
    • 1 x 16GB NVIDIA RTX 4070 Ti Laptop
    • 2 x Raspberry Pi 400 with 4GB of RAM each (running on CPU) + 1 x 8GB Mac Mini
  • exo is designed to run on devices with heterogeneous capabilities. For example, you can have some devices with powerful GPUs and others with integrated GPUs or even CPUs. Adding less capable devices will slow down individual inference latency but will increase the overall throughput of the cluster.
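The 16GB figure above is just parameter count times bytes per parameter. A back-of-the-envelope sketch (it ignores KV-cache and activation overhead, so treat the result as a lower bound):

# Rough model memory estimate: parameters x bytes per parameter.
# With billions of params and 10^9 bytes per GB, the factors cancel out.
def model_memory_gb(params_billion, bytes_per_param=2):  # fp16 = 2 bytes
  return params_billion * bytes_per_param

print(model_memory_gb(8))    # Llama 3.1 8B fp16 -> 16 GB
print(model_memory_gb(405))  # Llama 3.1 405B fp16 -> 810 GB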

From source

git clone https://github.com/Roxonn-FutureTech/Roxonn-Node
cd Roxonn-Node
pip install -e .
# alternatively, with venv
source install.sh

Troubleshooting

  • If running on Mac, MLX has an install guide with troubleshooting steps.

Performance

  • There are a number of things users have empirically found to improve performance on Apple Silicon Macs:
  1. Upgrade to the latest version of macOS Sequoia.
  2. Run ./configure_mlx.sh. This runs commands to optimize GPU memory allocation on Apple Silicon Macs.

Documentation

Example Usage on Multiple macOS Devices

Device 1:

exo

Device 2:

exo

That's it! No configuration required - exo will automatically discover the other device(s).

exo starts a ChatGPT-like WebUI (powered by tinygrad tinychat) on http://localhost:52415

For developers, exo also starts a ChatGPT-compatible API endpoint on http://localhost:52415/v1/chat/completions. Examples with curl:

Llama 3.2 3B:

curl http://localhost:52415/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "llama-3.2-3b",
     "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
     "temperature": 0.7
   }'

Llama 3.1 405B:

curl http://localhost:52415/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "llama-3.1-405b",
     "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
     "temperature": 0.7
   }'

DeepSeek R1 (full 671B):

curl http://localhost:52415/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "deepseek-r1",
     "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
     "temperature": 0.7
   }'

Llava 1.5 7B (Vision Language Model):

curl http://localhost:52415/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "llava-1.5-7b-hf",
     "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What are these?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "http://images.cocodataset.org/val2017/000000039769.jpg"
            }
          }
        ]
      }
    ],
     "temperature": 0.0
   }'

Example Usage on Multiple Heterogeneous Devices (macOS + Linux)

Device 1 (macOS):

exo

Note: We don't need to explicitly tell exo to use the tinygrad inference engine. MLX and tinygrad are interoperable!

Device 2 (Linux):

exo

Linux devices will automatically default to using the tinygrad inference engine.

You can read about tinygrad-specific env vars here. For example, you can configure tinygrad to use the CPU by specifying CLANG=1:
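CLANG=1 exo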

Example Usage on a single device with "exo run" command

exo run llama-3.2-3b

With a custom prompt:

exo run llama-3.2-3b --prompt "What is the meaning of exo?"

Model Storage

By default, models are stored in ~/.cache/exo/downloads.

You can set a different model storage location by setting the EXO_HOME env var.
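For example, to store models under a different directory (the path is illustrative):

EXO_HOME=/path/to/models exo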

Model Downloading

Models are downloaded from Hugging Face. If you are running exo in a country with strict internet censorship, you may need to download the models manually and put them in the ~/.cache/exo/downloads directory.

To download models from a proxy endpoint, set the HF_ENDPOINT environment variable. For example, to run exo with the huggingface mirror endpoint:

HF_ENDPOINT=https://hf-mirror.com exo

Debugging

Enable debug logs with the DEBUG environment variable (0-9).

DEBUG=9 exo

For the tinygrad inference engine specifically, there is a separate DEBUG flag TINYGRAD_DEBUG that can be used to enable debug logs (1-6).

TINYGRAD_DEBUG=2 exo

Formatting

We use yapf to format the code. To format the code, first install the formatting requirements:

pip3 install -e '.[formatting]'

Then run the formatting script:

python3 format.py ./exo

Known Issues

  • On certain versions of Python on macOS, certificates may not be installed correctly, potentially causing SSL errors (e.g., when accessing huggingface.co). To resolve this, run the Install Certificates command, typically as follows:
/Applications/Python 3.x/Install Certificates.command
  • 🚧 As the library is evolving so quickly, the iOS implementation has fallen behind Python. We have decided for now not to put out the buggy iOS version and receive a bunch of GitHub issues for outdated code. We are working on solving this properly and will make an announcement when it's ready. If you would like access to the iOS implementation now, please email [email protected] with your GitHub username explaining your use-case and you will be granted access on GitHub.

Inference Engines

exo supports the following inference engines:

  • ✅ MLX
  • ✅ tinygrad

Discovery Modules

Peer Networking Modules

  • ✅ GRPC
  • 🚧 NCCL
