Releases: Anil-matcha/Open-Generative-AI
v1.0.10 — Lipsync Infinite Talk null-prompt fix
What's new
- fix(lipsync): Infinite Talk image-to-video (and the other `hasPrompt: true` lipsync models — Wan 2.2 s2v, LTX 2.3, LTX 2 19B, Infinite Talk v2v) no longer fail with `field "prompt" failed nullable validation: Value is not nullable; got null` when the prompt textarea is left blank. The client now always sends a `prompt` field (defaulting to an empty string) for those models so the backend never forwards `null` to the underlying API.
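The guard can be sketched as follows (a minimal illustration; `buildLipsyncPayload` and the placement of the `hasPrompt` flag on the model object are assumptions, not the app's actual internals):

```javascript
// Sketch: always send a string prompt for lipsync models that declare
// hasPrompt, so the backend never receives null. Helper and field names
// are illustrative; only the empty-string default mirrors the release note.
function buildLipsyncPayload(model, userPrompt) {
  const payload = { model: model.id };
  if (model.hasPrompt) {
    // An empty textarea yields null/undefined; coerce to "".
    payload.prompt = userPrompt ?? "";
  }
  return payload;
}
```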
Downloads
- macOS (Apple Silicon): `Open Generative AI-1.0.10-arm64.dmg`
- macOS (Intel): `Open Generative AI-1.0.10.dmg`
- Windows (x64): `Open Generative AI Setup 1.0.10.exe`
- Linux (AppImage): `Open Generative AI-1.0.10.AppImage`
- Linux (Debian/Ubuntu): `open-generative-ai_1.0.10_amd64.deb`
Full changelog: v1.0.9...v1.0.10
v1.0.9 — Local Wan2GP video generation
What's new
Closes #126 — local Wan2GP users could not upload reference media or generate video locally even with WanGP installed, hitting "Not authorized: missing or invalid credentials" because every Video Studio upload was hard-wired to the Muapi-hosted endpoint.
Fix
- Wan2GP upload bridge (`wan2gp:upload-file` IPC) — pushes files to the configured Wan2GP server's `/upload` endpoint and rehydrates them into Gradio `FileData` descriptors at generation time.
- Local Video Studio — Wan 2.2 t2v / Wan 2.2 i2v / Hunyuan / LTX models now appear in the Video Studio model picker when running in the desktop app.
- Auth gate bypassed for local models — generating with a Wan2GP model no longer requires a Muapi API key.
- Generation routing — Video Studio's t2v and i2v paths call `localAI.generate(...)` for Wan2GP models and surface step progress in the Generate button.
Setup for local video
- Run a Wan2GP server (https://github.com/deepbeepmeep/Wan2GP) on a machine with a CUDA or ROCm GPU.
- In Settings → Local Models, set the server URL (e.g. `http://localhost:7860`).
- Pick a Wan2GP entry from the Video Studio model dropdown.
If your Wan2GP build exposes Gradio function names different from the catalog defaults (`wan22_t2v`, `wan22_i2v`, `hunyuan_video`, `ltx_video`), check `<server>/?view=api` and edit `electron/lib/wan2gpProvider.js` accordingly.
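One way such a mapping might look (the structure of `electron/lib/wan2gpProvider.js` is an assumption; only the four default function names come from the note):

```javascript
// Illustrative catalog-ID → Gradio function-name map, with a fallback to
// the raw model id. Edit the right-hand values to match what your build's
// /?view=api page actually reports.
const WAN2GP_FN_NAMES = {
  "wan22-t2v": "wan22_t2v",
  "wan22-i2v": "wan22_i2v",
  "hunyuan":   "hunyuan_video",
  "ltx":       "ltx_video",
};

function resolveFnName(modelId) {
  return WAN2GP_FN_NAMES[modelId] ?? modelId;
}
```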
Downloads
| Platform | File |
|---|---|
| macOS Apple Silicon | Open Generative AI-1.0.9-arm64.dmg |
| macOS Intel | Open Generative AI-1.0.9.dmg |
| Windows (x64) | Open Generative AI Setup 1.0.9.exe |
| Linux AppImage | Open Generative AI-1.0.9.AppImage |
| Linux Debian/Ubuntu | open-generative-ai_1.0.9_amd64.deb |
v1.0.8 — Windows build restored (Tailwind v3 revert)
What's new
- Windows build restored. v1.0.7 shipped Mac-only because the root `tailwindcss` was on v4 while workspace packages were on v3 — the v3/v4 mismatch broke the Windows Electron build. v1.0.8 completes the revert to Tailwind v3 across the root app (CSS directives, Vite config, PostCSS config) so the Windows installer builds and runs again.
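For reference, a root PostCSS config on Tailwind v3 conventionally looks like this (standard v3 setup, not the repo's literal file):

```javascript
// postcss.config.js — in Tailwind v3 the `tailwindcss` package registers
// directly as a PostCSS plugin. In v4 this moved to @tailwindcss/postcss,
// which is why a v4 root package alongside v3 workspace packages broke
// the build.
module.exports = {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
};
```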
Downloads
- Windows (x64) — `Open Generative AI Setup 1.0.8.exe` (unsigned NSIS installer; click More info → Run anyway on the SmartScreen warning)
- macOS users: stay on v1.0.7 — there are no macOS-affecting changes in v1.0.8.
Notes
The Windows installer is not code-signed. SmartScreen will warn on first install — this is expected.
v1.0.7 — Wan2GP local engine + Dreamshaper URL fix
Added
- Wan2GP HTTP provider — second local engine alongside the bundled sd.cpp. Run Wan2GP on any CUDA/ROCm box (your gaming PC, a workstation, or a rented RunPod/vast.ai instance), point the desktop app at its URL via Settings → Local Models, and Flux, Qwen-Image, Wan 2.2 (T2V/I2V), Hunyuan, and LTX become available. Image-capable models surface in Image Studio; video models live in the catalog awaiting Video Studio wiring.
- The two engines coexist — sd.cpp keeps working as before for SD 1.5, SDXL, and Z-Image. The renderer routes per-model based on a `provider` field.
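A minimal sketch of that per-model routing (provider values and function names are illustrative assumptions; only the `provider` field itself comes from the note):

```javascript
// Dispatch each catalog model to its engine based on a `provider` field,
// defaulting to the bundled sd.cpp engine. Returns a tag string here just
// to keep the sketch self-contained.
const engines = {
  // Bundled stable-diffusion.cpp binary (SD 1.5, SDXL, Z-Image).
  sdcpp:  (model) => `sdcpp:${model.id}`,
  // Remote Wan2GP HTTP server (Flux, Qwen-Image, Wan 2.2, Hunyuan, LTX).
  wan2gp: (model) => `wan2gp:${model.id}`,
};

function generate(model) {
  const engine = engines[model.provider] ?? engines.sdcpp;
  return engine(model);
}
```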
Fixed
- Dreamshaper 8 download URL — the catalog pointed at `huggingface.co/Lykon/dreamshaper-8`, which now returns 404. Switched to the live repo `huggingface.co/Lykon/DreamShaper`.
Docs
- README rewritten for the two-engine model with a comparison table and Wan2GP setup instructions.
- New "Verifying the SD 1.5 path" subsection — a copy-paste
curl+sd-clirecipe that bypasses the UI and validates the local engine end-to-end. Useful for confirming Metal is active on Apple Silicon. - Hardware notes flag that Z-Image is known to hang on small-RAM Macs — stick to SD 1.5 (Dreamshaper 8 / Realistic Vision / Anything v5) on those machines.
Downloads
- macOS Apple Silicon (M1/M2/M3/M4):
Open Generative AI-1.0.7-arm64.dmg - macOS Intel (x64):
Open Generative AI-1.0.7.dmg - Windows / Linux: v1.0.7 ships Mac-only. Use the v1.0.6 release for Windows
.exeand Linux.AppImage/.deb— both still work for sd.cpp local inference. The Wan2GP provider only requires the desktop app, so a v1.0.6 client can also point at a Wan2GP server (it just won't have the new model dropdown entries until rebuilt).
First launch on macOS: `xattr -cr "/Applications/Open Generative AI.app"`, then right-click → Open.
v1.0.6 — Local inference works on Linux & Windows again
Fixes
- Local inference engine on Linux/Windows is downloadable again (#108) — leejet's latest stable-diffusion.cpp release temporarily ships only Mac arm64, a Windows CUDA-runtime stub, and Linux ROCm, so anyone outside that narrow set hit `Error invoking remote method 'local-ai:download-binary': No binary found for this platform`. The downloader now walks the last 15 leejet releases until it finds one that actually has a build for the current OS/arch, and the Linux matcher prefers plain x86_64 → vulkan → rocm. macOS Intel users get a clearer error, since leejet has never published an x86_64 macOS binary.
- Plus #110 (submodule-safe setup) and #111 (prominent Settings button) from v1.0.5.
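The Linux preference order can be sketched as a small asset matcher (the matching code is illustrative; only the x86_64 → vulkan → rocm order comes from the note):

```javascript
// Given a release's asset file names, pick the best Linux build:
// plain x86_64 first, then vulkan, then rocm. Returns null when the
// release has no usable Linux build, so the caller can walk older
// releases until one matches.
function pickLinuxAsset(assetNames) {
  const preferences = [
    (n) => /linux/i.test(n) && /x86_64|amd64|x64/.test(n) && !/vulkan|rocm|cuda/i.test(n),
    (n) => /linux/i.test(n) && /vulkan/i.test(n),
    (n) => /linux/i.test(n) && /rocm/i.test(n),
  ];
  for (const matches of preferences) {
    const hit = assetNames.find(matches);
    if (hit) return hit;
  }
  return null;
}
```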
Downloads
- macOS Apple Silicon (M1/M2/M3/M4): `Open Generative AI-1.0.6-arm64.dmg`
- macOS Intel (x64): `Open Generative AI-1.0.6.dmg`
- Linux x86_64 — AppImage: `Open Generative AI-1.0.6.AppImage` (`chmod +x`, then run)
- Linux x86_64 — Debian/Ubuntu: `open-generative-ai_1.0.6_amd64.deb` (`sudo apt install ./open-generative-ai_1.0.6_amd64.deb`)
First launch on macOS: `xattr -cr "/Applications/Open Generative AI.app"`, then right-click → Open.
AppImage on Ubuntu 24.04+: if it silently fails, run `sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0` or install the .deb instead (it ships an AppArmor profile).
v1.0.5 — Settings button + submodule setup fix
What's new
- Prominent Settings button in the header (#111) — replaces the bare key/avatar icon with a labeled gear button so users can find API key + local model settings at a glance. Applies to both the desktop app and the web shell.
- Submodule-safe setup (#110) — `git clone --recurse-submodules` is now the documented path, and `npm run setup` initializes submodules automatically so `next build` no longer fails with `Module not found: Can't resolve 'ai-agent'`.
Downloads
- macOS Apple Silicon (M1/M2/M3/M4): `Open Generative AI-1.0.5-arm64.dmg`
- macOS Intel (x64): `Open Generative AI-1.0.5.dmg`
First launch on macOS: run `xattr -cr "/Applications/Open Generative AI.app"`, then right-click → Open. The app is not notarized.
v1.0.4 — Agents & Workflows tabs
What's new
- Agents tab added to the desktop app navbar — create, browse, and chat with AI agents
- Workflows tab added alongside Agents for the Electron build
Downloads
| Platform | File |
|---|---|
| macOS (Apple Silicon) | Open Generative AI-1.0.4-arm64.dmg |
| macOS (Intel) | Open Generative AI-1.0.4.dmg |
v1.0.3 — Metal GPU inference binaries
sd-cli Metal GPU binaries (macOS Apple Silicon)
This release ships pre-built Metal GPU-accelerated inference binaries for local image generation on macOS Apple Silicon (M1/M2/M3/M4).
What's included
| File | Contents |
|---|---|
| `sd-cli-metal-macos-arm64.zip` | `sd-cli` + `libstable-diffusion.dylib` — Metal-enabled, arm64 |
Build details
- Built from stable-diffusion.cpp source
- CMake flags: `-DGGML_METAL=ON -DSD_METAL=ON -DCMAKE_BUILD_TYPE=Release`
- Linked with `-force_load libggml-metal.a` to pull in self-registering Metal backend symbols
- Verified: 205 `ggml_metal` symbols, "Using Metal backend" string present
- Significantly faster than CPU-only on Apple Silicon for local image generation
Usage
These binaries are auto-downloaded by the Open Generative AI desktop app via Settings → Local Models.
v1.0.2
What's new in v1.0.2
- Renamed Seedance 2.0 models to SD 2 (text-to-video, extend, and image-to-video)
Downloads
| Platform | File |
|---|---|
| macOS Apple Silicon (M1/M2/M3/M4) | Open Generative AI-1.0.2-arm64.dmg |
| macOS Intel (x64) | Open Generative AI-1.0.2.dmg |
| Windows (x64) | Open Generative AI Setup 1.0.2.exe |
| Linux | Build locally with `npm run electron:build:linux` |
See the README for macOS Gatekeeper and Windows SmartScreen bypass instructions.
v1.0.1 — Desktop App (macOS)
What's new
- Fix AI Video Effects: The effect type (name) and prompt fields are now correctly sent to the API, resolving the 422 Unprocessable Entity error users experienced
- Added an Effect dropdown in the Video Studio controls so users can choose from all available effects (360 Rotation, Cakeify, Fire, etc.)
Downloads
| Platform | File |
|---|---|
| macOS Intel (x64) | Open Generative AI-1.0.1.dmg |
| macOS Apple Silicon (arm64) | Open Generative AI-1.0.1-arm64.dmg |
Windows build coming soon.