Project AIRI Web AI Realtime Voice Chat Demo
Note
This project is part of (and also associated with) Project AIRI. We aim to build an LLM-driven VTuber like Neuro-sama (subscribe if you haven't already!). If you are interested, please give the live demo a try.
Who are we?
We are a group of currently non-funded, talented people: computer scientists, experts in multi-modal fields, designers, product managers, and popular open-source contributors who love the goal we are heading toward.
Demos:

```bash
pnpm i
pnpm -F @proj-airi/vad-asr-chat dev
```
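
For orientation, the loop behind this demo looks roughly like the sketch below: a VAD segments microphone input, each finished utterance is transcribed, fed to a language model, and the reply is synthesized back to audio. This is a minimal illustration, not the demo's actual source; the libraries (`@ricky0123/vad-web`, `@huggingface/transformers`) and the model names are assumptions chosen for the example.

```ts
// Sketch of a VAD → STT → LLM → TTS loop, assuming @ricky0123/vad-web
// and Transformers.js; the models are illustrative picks, not the demo's.
import { pipeline } from '@huggingface/transformers'
import { MicVAD } from '@ricky0123/vad-web'

async function main() {
  // Load all three models once, up front (they run in-browser via WASM/WebGPU).
  const asr = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en')
  const llm = await pipeline('text-generation', 'Xenova/Qwen1.5-0.5B-Chat')
  const tts = await pipeline('text-to-speech', 'Xenova/speecht5_tts', { quantized: false })

  const vad = await MicVAD.new({
    // Called with a 16 kHz Float32Array once the user stops speaking.
    onSpeechEnd: async (audio) => {
      // 1. STT: transcribe the finished utterance.
      const { text: userText } = await asr(audio)

      // 2. LLM: generate a reply (a real app would apply a chat template
      //    and strip the echoed prompt from the output).
      const [{ generated_text }] = await llm(userText, { max_new_tokens: 128 })

      // 3. TTS: synthesize speech (SpeechT5 requires speaker embeddings).
      const speech = await tts(generated_text, {
        speaker_embeddings: 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/speaker_embeddings.bin',
      })

      // 4. Play the Float32Array samples through the Web Audio API.
      const ctx = new AudioContext({ sampleRate: speech.sampling_rate })
      const buf = ctx.createBuffer(1, speech.audio.length, speech.sampling_rate)
      buf.copyToChannel(speech.audio, 0)
      const src = ctx.createBufferSource()
      src.buffer = buf
      src.connect(ctx.destination)
      src.start()
    },
  })

  vad.start() // begin listening on the microphone
}

main()
```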
- WebAI Examples: WebGPU and running AI models inside web browsers (you could think of it as a type-safe, UI-improved version of 🤗 Transformers.js's example repository)
- 🤗 candle Examples: Examples of using 🤗 candle for inferencing AI models in Rust; you could think of it as an alternative, more Transformers-like library than [Burn Examples].
- Burn Examples: Examples of using Burn.dev for inferencing AI models in Rust; you could think of it as an alternative, more advanced library than [🤗 candle Examples].
Other side projects born from Project AIRI
- Awesome AI VTuber: A curated list of AI VTubers and related projects
- `unspeech`: Universal endpoint proxy server for `/audio/transcriptions` and `/audio/speech`, like LiteLLM but for any ASR and TTS (see the usage sketch after this list)
- `hfup`: tools to help with deploying and bundling to HuggingFace Spaces
- `xsai-transformers`: Experimental 🤗 Transformers.js provider for xsAI
- WebAI: Realtime Voice Chat: Full example of implementing ChatGPT's realtime voice from scratch with VAD + STT + LLM + TTS (see the pipeline sketch under Demos above)
- `@proj-airi/drizzle-duckdb-wasm`: Drizzle ORM driver for DuckDB WASM
- `@proj-airi/duckdb-wasm`: Easy-to-use wrapper for `@duckdb/duckdb-wasm`
- Airi Factorio: Allow Airi to play Factorio
- Factorio RCON API: RESTful API wrapper for Factorio headless server console
- `autorio`: Factorio automation library
- `tstl-plugin-reload-factorio-mod`: Reload Factorio mod when developing
- Velin: Use Vue SFC and Markdown to write easy-to-manage stateful prompts for LLM
- `demodel`: Easily boost the speed of pulling your models and datasets from various inference runtimes
- `inventory`: Centralized model catalog and default provider configurations backend service
- MCP Launcher: Easy-to-use MCP builder & launcher for all possible MCP servers, just like Ollama for models!
- 🥺 SAD: Documentation and notes for self-hosted and browser-running LLMs
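
Because `unspeech` mirrors OpenAI-style audio routes, talking to it should look like talking to OpenAI's audio API with a different base URL. The sketch below illustrates that shape; the base URL, port, and `provider/model` identifiers are placeholders made up for the example, not documented unspeech defaults.

```ts
// Hedged sketch of calling OpenAI-compatible audio endpoints through unspeech.
// Base URL and model/voice names are placeholders, not unspeech defaults.
const UNSPEECH_BASE = 'http://localhost:5933/v1' // hypothetical local deployment

// Speech-to-text: POST multipart form data to /audio/transcriptions.
async function transcribe(file: Blob): Promise<string> {
  const form = new FormData()
  form.append('file', file, 'speech.wav')
  form.append('model', 'openai/whisper-1') // placeholder provider/model id
  const res = await fetch(`${UNSPEECH_BASE}/audio/transcriptions`, { method: 'POST', body: form })
  const { text } = await res.json()
  return text
}

// Text-to-speech: POST JSON to /audio/speech, receive audio bytes back.
async function speak(input: string): Promise<ArrayBuffer> {
  const res = await fetch(`${UNSPEECH_BASE}/audio/speech`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'openai/tts-1', voice: 'alloy', input }), // placeholders
  })
  return res.arrayBuffer()
}
```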