Pre-built binaries for NitroInbox that aren't available in standard package managers.

**llama-finetune**: the LoRA fine-tuning tool from llama.cpp, used by NitroInbox to create personalized AI adapters based on user corrections.
Rebuild when:

- llama.cpp releases a new version with improvements or fixes
- Bug fixes are needed in the fine-tuning tool
- Support for new platforms is being added
1. Go to the Actions tab → "Build llama-finetune" → "Run workflow"
2. Fill in the parameters (they can also be passed via the GitHub API; see the sketch after these steps):
   - `llama_cpp_ref`: the llama.cpp git ref to build from (tag, branch, or commit)
     - Use `master` for the latest
     - Use a specific tag like `b4600` for reproducible builds
   - `release_version`: the version tag for this release (e.g., `v1.0.2`)
     - Must be unique: existing version tags can't be reused
     - Follow semver: `v{major}.{minor}.{patch}`
3. Click "Run workflow"
4. Wait for the build (~3 minutes for Apple Silicon)
5. Verify the release was created at https://github.com/dotnetfactory/nitroinbox-binaries/releases
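If you prefer to trigger the build from a script instead of the Actions UI, the same two inputs can be sent through GitHub's workflow-dispatch REST endpoint. A minimal sketch, assuming a `GITHUB_TOKEN` environment variable with permission to run workflows; the endpoint and payload shape follow GitHub's documented API, but this script is not part of the repo:

```typescript
// trigger-build.ts - hypothetical helper, not part of this repo.
// Sends a workflow_dispatch event with the same inputs the Actions UI asks for.
const token = process.env.GITHUB_TOKEN; // assumes a token with workflow permissions

const res = await fetch(
  "https://api.github.com/repos/dotnetfactory/nitroinbox-binaries/actions/workflows/build-finetune.yml/dispatches",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
    },
    body: JSON.stringify({
      ref: "main", // branch containing the workflow file (assumption)
      inputs: { llama_cpp_ref: "master", release_version: "v1.0.2" },
    }),
  },
);
if (!res.ok) {
  throw new Error(`Dispatch failed: ${res.status} ${await res.text()}`);
}
console.log("Build triggered; watch the Actions tab for progress.");
```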
After building a new version, update `app.config.ts` in NitroInbox:

```typescript
// app.config.ts
fineTuning: {
  binaryUrls: {
    'darwin-arm64':
      'https://github.com/dotnetfactory/nitroinbox-binaries/releases/download/v1.0.2/llama-finetune-macos-arm64.zip',
    // ... other platforms
  },
}
```

The app automatically detects version changes by comparing:

- Local version: stored in `~/Library/Application Support/NitroInbox/tools/llama-finetune.version`
- Config version: extracted from the URL (`/download/v1.0.2/` → `v1.0.2`)

When versions differ, the old binary is deleted and the new one is downloaded (see the sketch below).
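A sketch of that comparison. The function and variable names are illustrative, not NitroInbox's actual implementation; `downloadAndUnzip` is an assumed helper, and the `.version` file path comes from the list above:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

const toolsDir = path.join(
  os.homedir(), "Library/Application Support/NitroInbox/tools",
);
const versionFile = path.join(toolsDir, "llama-finetune.version");

// Config version is extracted from the /download/v{version}/ URL segment.
function configVersion(url: string): string | null {
  const m = url.match(/\/download\/(v\d+\.\d+\.\d+)\//);
  return m ? m[1] : null;
}

async function ensureBinary(url: string): Promise<void> {
  const wanted = configVersion(url);
  const local = fs.existsSync(versionFile)
    ? fs.readFileSync(versionFile, "utf8").trim()
    : null;
  if (wanted !== null && wanted !== local) {
    // Versions differ: delete the old binary, download the new one.
    fs.rmSync(path.join(toolsDir, "llama-finetune"), { force: true });
    await downloadAndUnzip(url, toolsDir); // assumed helper
    fs.writeFileSync(versionFile, wanted);
  }
}

declare function downloadAndUnzip(url: string, dest: string): Promise<void>;
```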
| Platform | Runner | Status |
|---|---|---|
| macOS Apple Silicon | `macos-14` | ✅ Supported |
| macOS Intel | `macos-13` | 🔜 Can be added |
| Linux x64 | `ubuntu-22.04` | 🔜 Can be added |
| Linux ARM64 | `ubuntu-22.04-arm` | 🔜 Can be added |
| Windows | - | ❌ Not planned (use WSL) |
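The `binaryUrls` keys in `app.config.ts` line up with these platforms. A sketch of how a Node/Electron app might pick the right key at runtime; only `darwin-arm64` appears in the config excerpt above, so the other key names are assumptions based on the table:

```typescript
// Maps the current process to a binaryUrls key such as 'darwin-arm64'.
// Keys other than 'darwin-arm64' are hypothetical and would need matching
// config entries and release assets.
function binaryKey(): string {
  const key = `${process.platform}-${process.arch}`; // e.g. 'darwin-arm64', 'linux-x64'
  const supported = ["darwin-arm64", "darwin-x64", "linux-x64", "linux-arm64"];
  if (!supported.includes(key)) {
    throw new Error(`No pre-built llama-finetune binary for ${key}`);
  }
  return key;
}
```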
To add more platforms, update `.github/workflows/build-finetune.yml`.
The workflow builds with these CMake flags (comments sit above the command because a `#` after a line-continuation `\` would break it):

```bash
# GGML_METAL=ON           - Metal GPU acceleration (macOS)
# LLAMA_BUILD_EXAMPLES=ON - build example tools
# LLAMA_BUILD_TOOLS=ON    - build CLI tools
# BUILD_SHARED_LIBS=OFF and LLAMA_STATIC=ON - static linking (self-contained binary)
cmake .. \
  -DGGML_METAL=ON \
  -DLLAMA_BUILD_EXAMPLES=ON \
  -DLLAMA_BUILD_TOOLS=ON \
  -DBUILD_SHARED_LIBS=OFF \
  -DLLAMA_STATIC=ON
```

Static linking ensures the binary works without requiring `libllama.dylib`.
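To confirm a macOS build really is self-contained, you can inspect its dynamic-library dependencies with `otool -L`. A sketch via Node's `child_process`; the binary path is an assumption:

```typescript
import { execFileSync } from "node:child_process";

// Lists the macOS binary's linked dylibs; a static build should not
// reference libllama. The path is illustrative.
const binary = "./llama-finetune";
const deps = execFileSync("otool", ["-L", binary], { encoding: "utf8" });
if (deps.includes("libllama")) {
  console.error("Binary is dynamically linked against libllama - rebuild with static flags.");
  process.exit(1);
}
console.log("No libllama dependency found; binary looks self-contained.");
```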
- **Build fails:** The llama.cpp ref might be too old, or the API may have changed. Try `master` or a more recent tag.
- **Binary complains about a missing `libllama.dylib`:** The build wasn't static. Ensure `BUILD_SHARED_LIBS=OFF` and `LLAMA_STATIC=ON`.
- **App doesn't pick up the new version:** The version is extracted from the URL pattern `/download/v{version}/`. Ensure the release tag follows the format `v1.0.0`, `v1.0.1`, etc. (see the check below).
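A quick way to sanity-check a tag before releasing; the regex mirrors the `v{major}.{minor}.{patch}` format above, and the helper is illustrative rather than part of the app:

```typescript
// Returns true for tags like v1.0.2 that both the release step and the
// app's URL-based version extraction expect.
function isValidReleaseTag(tag: string): boolean {
  return /^v\d+\.\d+\.\d+$/.test(tag);
}

console.log(isValidReleaseTag("v1.0.2")); // true
console.log(isValidReleaseTag("1.0.2"));  // false - missing the leading 'v'
console.log(isValidReleaseTag("v1.0"));   // false - not full semver
```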