v3.5.0 #6195
mudler announced in Announcements
🚀 LocalAI 3.5.0
Welcome to LocalAI 3.5.0! This release focuses on expanding backend support, improving usability, refining the overall experience, and continuing to reduce LocalAI's footprint, making it a truly portable, privacy-focused AI stack. We've added several new backends, enhanced the WebUI with new features, made significant performance improvements under the hood, and simplified LocalAI management with a new Launcher app (Alpha) available for Linux and macOS.
TL;DR – What’s New in LocalAI 3.5.0 🎉
What’s New in Detail
🚀 New Backends and Model Support
We've significantly expanded the range of models you can run with LocalAI!
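New backends like these are wired up per-model through LocalAI's YAML model configuration. A minimal sketch for an `mlx-audio` model follows; the model name and model identifier here are illustrative assumptions, not taken from the release notes:

```yaml
# Hypothetical model definition — name and model ID are placeholders.
name: tts-mlx
backend: mlx-audio          # selects the new mlx-audio backend
parameters:
  model: some-org/some-tts-model   # illustrative model identifier
```

With a file like this in the models directory, the model can then be requested by its `name` through the usual OpenAI-compatible endpoints.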
- New `mlx-audio` backend.
- New video generation support in the `diffusers` backend, supporting both I2V (image-to-video) and T2V (text-to-video).

✨ WebUI Improvements
We've added several new features to make using LocalAI even easier:
🚀 Performance & Architecture Improvements
🛠️ Simplified Management – Introducing the LocalAI Launcher (Alpha)
We're excited to introduce the first version of the LocalAI Launcher! This application simplifies:
Please note: The Launcher is in Alpha and may have bugs. The macOS build currently requires workarounds to run because the binaries are not yet signed; see https://discussions.apple.com/thread/253714860?answerId=257037956022#257037956022 for the required steps.
✅ Bug Fixes & Stability Improvements
- Fixed the `libomp.so` issue on macOS Docker containers.
- Bundle the `libutf8` libraries.

Additional Improvements
A new system path for backends can now be configured (via `LOCALAI_BACKENDS_SYSTEM_PATH` or via command-line arguments), defaulting to `/usr/share/localai/backends`. This allows specifying a read-only directory for backends, useful for package management and system-wide installations.
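For example, the system path can be supplied through the environment; a minimal sketch (how LocalAI itself is launched afterwards depends on your installation):

```shell
# Point LocalAI at a read-only, system-wide backends directory.
# /usr/share/localai/backends is the documented default.
export LOCALAI_BACKENDS_SYSTEM_PATH=/usr/share/localai/backends
echo "$LOCALAI_BACKENDS_SYSTEM_PATH"
```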
oversrc
for more robust loading behavior.🚨 Important Notes
The Complete Local Stack for Privacy-First AI
LocalAI
The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required.
Link: https://github.com/mudler/LocalAI
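Because the API is OpenAI-compatible, any OpenAI client can talk to a running instance. A sketch of a chat completion request; the model name is a placeholder for whatever model you have installed, and 8080 is only a commonly used default port:

```shell
# Build an OpenAI-style chat completion request body.
BODY='{"model":"my-model","messages":[{"role":"user","content":"Hello"}]}'
echo "$BODY"
# Send it to a running LocalAI instance (port depends on your setup):
# curl http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$BODY"
```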
LocalAGI
A powerful Local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI.
Link: https://github.com/mudler/LocalAGI
LocalRecall
A RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Designed to work alongside LocalAI and LocalAGI.
Link: https://github.com/mudler/LocalRecall
Thank you! ❤️
A massive THANK YOU to our incredible community and our sponsors! LocalAI has over 35,000 stars, and LocalAGI has already rocketed past 1,100 stars!
As a reminder, LocalAI is real FOSS (Free and Open Source Software), and its sibling projects are community-driven and not backed by VCs or a company. We rely on contributors donating their spare time, and on our sponsors to provide the hardware! If you love open-source, privacy-first AI, please consider starring the repository, contributing code, reporting bugs, or spreading the word!
Full changelog 👇
What's Changed
Bug fixes 🐛
Exciting New Features 🎉
- fix(llama-cpp/darwin): make sure to bundle `libutf8` libs by @mudler in #6060

🧠 Models
📖 Documentation and examples
👒 Dependencies
Other Changes
- chore: ⬆️ Update ggml-org/whisper.cpp to `5527454cdb3e15d7e2b8a6e2afcb58cb61651fd2` by @localai-bot in #6047
- chore: ⬆️ Update ggml-org/llama.cpp to `f4586ee5986d6f965becb37876d6f3666478a961` by @localai-bot in #6048
- chore: ⬆️ Update ggml-org/whisper.cpp to `16c2924cb2c4b5c9f79220aa7708eb5b346b029b` by @localai-bot in #6055
- chore: ⬆️ Update ggml-org/llama.cpp to `29c8fbe4e05fd23c44950d0958299e25fbeabc5c` by @localai-bot in #6054
- chore: ⬆️ Update ggml-org/whisper.cpp to `040510a132f0a9b51d4692b57a6abfd8c9660696` by @localai-bot in #6069
- chore: ⬆️ Update ggml-org/llama.cpp to `5e6229a8409ac786e62cb133d09f1679a9aec13e` by @localai-bot in #6070
- chore: ⬆️ Update ggml-org/llama.cpp to `1fe00296f587dfca0957e006d146f5875b61e43d` by @localai-bot in #6079
- chore: ⬆️ Update ggml-org/llama.cpp to `21c17b5befc5f6be5992bc87fc1ba99d388561df` by @localai-bot in #6084
- chore: ⬆️ Update ggml-org/llama.cpp to `6d7f1117e3e3285d0c5c11b5ebb0439e27920082` by @localai-bot in #6088
- chore: ⬆️ Update ggml-org/whisper.cpp to `fc45bb86251f774ef817e89878bb4c2636c8a58f` by @localai-bot in #6089
- chore: ⬆️ Update ggml-org/llama.cpp to `fb22dd07a639e81c7415e30b146f545f1a2f2caf` by @localai-bot in #6112
- chore: ⬆️ Update ggml-org/llama.cpp to `7a6e91ad26160dd6dfb33d29ac441617422f28e7` by @localai-bot in #6116
- chore: ⬆️ Update ggml-org/llama.cpp to `cd36b5e5c7fed2a3ac671dd542d579ca40b48b54` by @localai-bot in #6118
- chore: ⬆️ Update ggml-org/llama.cpp to `710dfc465a68f7443b87d9f792cffba00ed739fe` by @localai-bot in #6126
- chore: ⬆️ Update ggml-org/whisper.cpp to `7745fcf32846006128f16de429cfe1677c963b30` by @localai-bot in #6136
- chore: ⬆️ Update ggml-org/llama.cpp to `043fb27d3808766d8ea8195bbd12359727264402` by @localai-bot in #6137
- chore: ⬆️ Update ggml-org/llama.cpp to `c4e9239064a564de7b94ee2b401ae907235a8fca` by @localai-bot in #6139
- chore: ⬆️ Update ggml-org/llama.cpp to `8b696861364360770e9f61a3422d32941a477824` by @localai-bot in #6151
- chore: ⬆️ Update ggml-org/llama.cpp to `fbef0fad7a7c765939f6c9e322fa05cd52cf0c15` by @localai-bot in #6155
- chore: ⬆️ Update ggml-org/llama.cpp to `c97dc093912ad014f6d22743ede0d4d7fd82365a` by @localai-bot in #6163
- chore: ⬆️ Update ggml-org/llama.cpp to `3d16b29c3bb1ec816ac0e782f20d169097063919` by @localai-bot in #6165
- chore: ⬆️ Update ggml-org/llama.cpp to `e92d53b29e393fc4c0f9f1f7c3fe651be8d36faa` by @localai-bot in #6169
- chore: ⬆️ Update leejet/stable-diffusion.cpp to `4c6475f9176bf99271ccf5a2817b30a490b83db0` by @localai-bot in #6171
- chore: ⬆️ Update ggml-org/llama.cpp to `d4d8dbe383e8b9600cbe8b42016e3a4529b51219` by @localai-bot in #6172

New Contributors
Full Changelog: v3.4.0...v3.5.0
This discussion was created from the release v3.5.0.