Model Express is a Rust-based component designed to run alongside existing model inference systems, speeding up their startup times and improving overall performance.
The current version of Model Express acts as a cache for Hugging Face models, providing fast access to pre-trained weights and reducing the need for repeated downloads across multiple servers. Future versions will expand support to additional model providers and add features such as model versioning, advanced caching strategies, advanced networking using NIXL, checkpoint storage, and a peer-to-peer model sharing system.
The project is organized as a Rust workspace with the following components:
- `modelexpress_server`: The main gRPC server that provides model services
- `modelexpress_client`: Client library for interacting with the server
- `modelexpress_common`: Shared code and constants between the client and server
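For orientation, the workspace manifest ties these crates together roughly as follows (a sketch; the actual `Cargo.toml` may list additional members or metadata):

```toml
# Sketch of the workspace manifest; the real Cargo.toml may differ.
[workspace]
members = [
    "modelexpress_server",
    "modelexpress_client",
    "modelexpress_common",
    "workspace-tests",
]
resolver = "2"
```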
The diagram below gives a high-level overview of the Model Express architecture. It will evolve over time as new features and components are added.
```mermaid
architecture-beta
    group MXS(cloud)[Model Express]
    service db(database)[Database] in MXS
    service disk(disk)[Persistent Volume Storage] in MXS
    service server(server)[Server] in MXS
    db:L -- R:server
    disk:T -- B:server

    group MXC(cloud)[Inference Server]
    service client(server)[Client] in MXC
    disk:T -- B:client
```
The client is either a library embedded in the inference server of your choice, or a CLI tool that can be used beforehand to hydrate the model cache.
The client library includes a command-line interface meant to facilitate interaction with the Model Express server and to act as a Hugging Face CLI replacement. In the future, it will also abstract other model providers, making it a one-stop shop for interacting with various model APIs.
See docs/CLI.md for detailed CLI documentation.
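As a rough illustration of the embedded-library use case, the sketch below shows what wiring the client into an inference server could look like. The `ModelExpressClient` type and its `connect` and `ensure_model` methods are hypothetical placeholders, not the actual `modelexpress_client` API; see the client library documentation for the real interface.

```rust
// Hypothetical sketch of embedding the client in an inference server.
// ModelExpressClient, connect, and ensure_model are illustrative
// placeholders, not the real modelexpress_client API.
use modelexpress_client::ModelExpressClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to a local Model Express server on its default port.
    let client = ModelExpressClient::connect("http://localhost:8001").await?;

    // Ask the server for a model; it is served from the shared cache,
    // and downloaded from Hugging Face only on a cache miss.
    let model_path = client.ensure_model("google-t5/t5-small").await?;
    println!("model files available at {}", model_path.display());
    Ok(())
}
```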
- Rust: Latest stable version (recommended: 1.88)
- Cargo: Rust's package manager (included with Rust)
- protoc: The Protocol Buffers compiler is expected to be installed and usable
- Docker (optional): For containerized deployment
```bash
git clone <repository-url>
cd modelexpress
cargo build
cargo run --bin modelexpress-server
```
The server will start on `0.0.0.0:8001` by default.
```bash
# Start the gRPC server
cargo run --bin modelexpress-server

# In another terminal, run tests
cargo test

# Run integration tests
./run_integration_tests.sh
```
```bash
# Build and run with docker-compose
docker-compose up --build

# Or build and run manually
docker build -t model-express .
docker run -p 8001:8001 model-express
```
```bash
kubectl apply -f k8s-deployment.yaml
```
ModelExpress uses a layered configuration system that supports multiple sources in order of precedence:
- Command line arguments (highest priority)
- Environment variables
- Configuration files (YAML)
- Default values (lowest priority)
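For example, since command line arguments outrank environment variables, the flag wins when both set the port:

```bash
# The environment asks for port 8080...
export MODEL_EXPRESS_SERVER_PORT=8080

# ...but the command-line flag has higher precedence,
# so the server listens on 9000.
cargo run --bin modelexpress-server -- --port 9000
```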
Create a configuration file (YAML is supported):
```bash
# Generate a sample configuration file
cargo run --bin config_gen -- --output model-express.yaml

# Or use the provided sample
cp model-express.yaml my-config.yaml
```
Start the server with a configuration file:
```bash
cargo run --bin modelexpress-server -- --config my-config.yaml
```
You can use structured environment variables with the `MODEL_EXPRESS_` prefix:
```bash
# Server settings
export MODEL_EXPRESS_SERVER_HOST="127.0.0.1"
export MODEL_EXPRESS_SERVER_PORT=8080

# Database settings
export MODEL_EXPRESS_DATABASE_PATH="/path/to/models.db"

# Cache settings
export MODEL_EXPRESS_CACHE_DIRECTORY="/path/to/cache"
export MODEL_EXPRESS_CACHE_EVICTION_ENABLED=true

# Logging settings
export MODEL_EXPRESS_LOGGING_LEVEL=debug
export MODEL_EXPRESS_LOGGING_FORMAT=json
```
```bash
# Basic usage
cargo run --bin modelexpress-server -- --port 8080 --log-level debug

# With configuration file
cargo run --bin modelexpress-server -- --config my-config.yaml --port 8080

# Validate configuration
cargo run --bin modelexpress-server -- --config my-config.yaml --validate-config
```
- `host`: Server host address (default: `"0.0.0.0"`)
- `port`: Server port (default: `8001`)
- `path`: SQLite database file path (default: `"./models.db"`). Note that in a multi-node Kubernetes deployment, the database should be shared among all nodes using a persistent volume.
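As an illustration of that note, a shared volume in Kubernetes could be declared with a `ReadWriteMany` PersistentVolumeClaim along these lines (a sketch with placeholder names and sizes; `k8s-deployment.yaml` in this repository is the authoritative manifest):

```yaml
# Hypothetical PVC sketch: a volume every Model Express node can mount,
# so the SQLite database (and model cache) are shared across the cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: modelexpress-shared          # placeholder name
spec:
  accessModes:
    - ReadWriteMany                  # required so multiple nodes can mount it
  resources:
    requests:
      storage: 100Gi                 # placeholder size
  storageClassName: nfs-client       # placeholder; any RWX-capable class works
```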
- `directory`: Cache directory path (default: `"./cache"`)
- `max_size_bytes`: Maximum cache size in bytes (default: null/unlimited)
- `eviction.enabled`: Enable cache eviction (default: true)
- `eviction.check_interval_seconds`: Eviction check interval in seconds (default: 3600)
- `eviction.policy.unused_threshold_seconds`: Threshold in seconds after which an unused model becomes evictable (default: 604800, i.e. 7 days)
- `eviction.policy.max_models`: Maximum number of models to keep (default: null/unlimited)
- `eviction.policy.min_free_space_bytes`: Minimum free space in bytes to maintain (default: null/unlimited)
- `level`: Log level: trace, debug, info, warn, error (default: `"info"`)
- `format`: Log format: json, pretty, compact (default: `"pretty"`)
- `file`: Log file path (default: null, i.e. log to stdout)
- `structured`: Enable structured logging (default: false)
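Putting these options together, a full configuration file with the defaults spelled out might look like the sketch below, assuming the nesting implied by the option names and the `MODEL_EXPRESS_*` environment variables; the `config_gen` binary produces the authoritative sample:

```yaml
# Sketch of a full configuration with all defaults spelled out.
server:
  host: "0.0.0.0"
  port: 8001

database:
  path: "./models.db"

cache:
  directory: "./cache"
  max_size_bytes: null        # unlimited
  eviction:
    enabled: true
    check_interval_seconds: 3600
    policy:
      unused_threshold_seconds: 604800   # 7 days
      max_models: null                   # unlimited
      min_free_space_bytes: null         # unlimited

logging:
  level: "info"
  format: "pretty"
  file: null                  # stdout
  structured: false
```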
- gRPC Port: `8001`
- Server Address: `0.0.0.0:8001` (listens on all interfaces)
- Client Endpoint: `http://localhost:8001`
The server provides the following gRPC services:
- HealthService: Health check endpoints
- ApiService: General API endpoints
- ModelService: Model management and serving
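To explore these services from the command line, a generic gRPC client such as `grpcurl` can be used, assuming the server exposes gRPC reflection (if it does not, pass the `.proto` files to `grpcurl` instead):

```bash
# List the services exposed on the default port (requires server reflection).
grpcurl -plaintext localhost:8001 list

# Describe a service using the fully qualified name from the listing.
grpcurl -plaintext localhost:8001 describe <service-name>
```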
```bash
# Run all tests
cargo test

# Integration tests
cargo test --test integration_tests

# Client tests with a specific model
cargo run --bin test_client -- --test-model "google-t5/t5-small"

# Fallback tests
cargo run --bin fallback_test

# Run tests with coverage (requires cargo-tarpaulin)
cargo tarpaulin --out Html
```
```text
ModelExpress/
├── modelexpress_server/        # Main gRPC server
├── modelexpress_client/        # Client library
├── modelexpress_common/        # Shared code
├── workspace-tests/            # Integration tests
├── docker-compose.yml          # Docker configuration
├── Dockerfile                  # Docker build file
├── k8s-deployment.yaml         # Kubernetes deployment
└── run_integration_tests.sh    # Test runner script
```
- Server Features: Add to `modelexpress_server/src/`
- Client Features: Add to `modelexpress_client/src/`
- Shared Code: Add to `modelexpress_common/src/`
- Tests: Add to the appropriate directory under `workspace-tests/`
Key dependencies include:
- `tokio`: Async runtime
- `tonic`: gRPC framework
- `axum`: Web framework (if needed)
- `serde`: Serialization
- `hf-hub`: Hugging Face Hub integration
- `rusqlite`: SQLite database
This repository uses pre-commit hooks to maintain code quality. To contribute effectively, please set up the hooks:
```bash
pip install pre-commit
pre-commit install
```
The project includes benchmarking capabilities:
```bash
# Run benchmarks
cargo bench
```
The server uses structured logging with `tracing`:
```bash
# Set log level
RUST_LOG=debug cargo run --bin modelexpress-server
```
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Run the test suite
6. Submit a pull request
For issues and questions:
- Create an issue in the repository
- Check the integration tests for usage examples
- Review the client library documentation
Includes:
- Model Express being released as a CLI tool.
- Model weight caching within Kubernetes clusters using PVC.
- Database tracking of which models are stored on which nodes.
- Basic model download and storage management.
- Documentation for Kubernetes deployment and CLI usage.