---
hide:
  - navigation
  - toc
---

<div class="hero fade-in-up" markdown>

# msgFlux { .gradient-text }

**An open-source framework for building multimodal AI applications** { .subtitle }

<p style="margin: 2rem 0;">
  <a href="quickstart/" class="md-button md-button--primary">
    :material-rocket-launch: Get Started
  </a>
  <a href="learn/models/model/" class="md-button">
    :material-book-open: Documentation
  </a>
</p>

```bash
pip install msgflux
```

</div>

---

## :material-shield-check: Core Principles

<div class="grid cards" markdown>

- :material-shield-lock:{ .lg .middle } **Privacy First**

    ---

    msgFlux does not collect or transmit user data. All telemetry is fully controlled by the user and remains local, ensuring data sovereignty and compliance.

- :material-puzzle:{ .lg .middle } **Designed for Simplicity**

    ---

    Core building blocks (**Model**, **DataBase**, **Parser**, and **Retriever**) provide a unified, intuitive interface to diverse AI resources.

- :material-lightning-bolt:{ .lg .middle } **Powered by Efficiency**

    ---

    Leverages high-performance libraries such as **msgspec**, **uvloop**, **Jinja**, and **Ray** for fast, scalable, and concurrent applications.

- :material-cog:{ .lg .middle } **Practical**

    ---

    A workflow API inspired by `torch.nn` enables seamless composition in native Python, with **versioning and reproducibility** out of the box.

</div>

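The `torch.nn`-inspired workflow API can be pictured as chaining plain callables. Below is a toy, framework-free sketch of that idea; `Sequential` and the stage functions here are illustrative stand-ins, not actual msgFlux names.

```python
# Toy sketch of torch.nn-style composition: each stage is a callable
# "module", and a Sequential-like container chains them in order.
# These names are illustrative stand-ins, not msgFlux APIs.
class Sequential:
    def __init__(self, *modules):
        self.modules = modules

    def __call__(self, x):
        for module in self.modules:
            x = module(x)
        return x

# Two trivial stages standing in for real modules (Agent, Parser, ...).
normalize = str.strip
shout = str.upper

pipeline = Sequential(normalize, shout)
print(pipeline("  hello  "))  # HELLO
```

Because modules are plain callables, composing, swapping, and testing them stays idiomatic Python, which is the point of the `torch.nn` analogy.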
---

## :material-cube-outline: High-Level Modules

msgFlux introduces a set of high-level modules designed to streamline **multimodal inputs and outputs**. These modules encapsulate common AI pipeline tasks:

<div class="grid cards" markdown>

- :material-robot:{ .lg } **Agent**

    ---

    Orchestrates multimodal data, instructions, context, tools, and generation schemas. The cognitive core of complex workflows.

- :material-microphone:{ .lg } **Speaker**

    ---

    Converts text into natural-sounding speech, enabling voice-based interactions.

- :material-text-to-speech:{ .lg } **Transcriber**

    ---

    Transforms spoken language into text, supporting speech-to-text pipelines.

- :material-image-edit:{ .lg } **Designer**

    ---

    Generates visual content from prompts and images, combining textual and visual modalities.

- :material-database-search:{ .lg } **Retriever**

    ---

    Searches and extracts relevant information based on queries, ideal for grounding models in external knowledge.

- :material-brain:{ .lg } **Predictor**

    ---

    Wraps predictive models (e.g., from scikit-learn) for smooth integration into larger workflows.

</div>
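These modules are designed to compose. As a hypothetical illustration (the call signatures are assumed from the patterns shown on this page, not verified msgFlux APIs), a single voice-assistant turn might chain three of them:

```python
# Hypothetical composition: Transcriber -> Agent -> Speaker.
# The three arguments are assumed to be callables, mirroring the Agent
# usage shown in the Quick Example; real msgFlux signatures may differ.
def voice_turn(transcriber, agent, speaker, audio_in):
    """One conversational turn: input audio in, reply audio out."""
    text = transcriber(audio_in)   # speech -> text
    reply = agent(text)            # text -> text (reasoning, tools)
    return speaker(reply)          # text -> speech

# Stub callables let the data flow be checked without any models:
fake_transcriber = lambda audio: "what time is it?"
fake_agent = lambda text: f"You asked: {text}"
fake_speaker = lambda text: text.encode()

audio_out = voice_turn(fake_transcriber, fake_agent, fake_speaker, b"...")
print(audio_out)  # b'You asked: what time is it?'
```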

---

## :material-code-braces: Quick Example

=== "Chat Completion"

    ```python
    from msgflux.models import ChatCompletion

    model = ChatCompletion(provider="openai", model="gpt-4")

    response = model.call(
        messages=[{"role": "user", "content": "Hello!"}]
    )

    print(response.content)
    ```

=== "Text Embeddings"

    ```python
    from msgflux.models import TextEmbedder

    embedder = TextEmbedder(provider="openai")

    embeddings = embedder.call(
        texts=["Hello world", "msgFlux is awesome"]
    )

    print(embeddings.shape)
    ```

=== "Text-to-Speech"

    ```python
    from msgflux.models import TextToSpeech

    tts = TextToSpeech(provider="openai")

    audio = tts.call(
        text="Hello from msgFlux!",
        voice="alloy"
    )

    audio.save("output.mp3")
    ```

=== "Neural Network Module"

    ```python
    from msgflux.nn import Agent

    # search_tool and calculator_tool are user-defined tools,
    # declared elsewhere in the application.
    agent = Agent(
        model="gpt-4",
        instructions="You are a helpful assistant",
        tools=[search_tool, calculator_tool]
    )

    result = agent("What's the weather in Paris?")
    print(result)
    ```

---

## :material-speedometer: Why msgFlux?

<div class="feature-box" markdown>

### :material-layers-triple: Unified Interface

Work with **text**, **vision**, **speech**, and more through a single, consistent API. No need to learn a different SDK for each provider.

</div>

<div class="feature-box" markdown>

### :material-swap-horizontal: Provider Agnostic

Easily switch between **OpenAI**, **Anthropic**, **Google**, **Mistral**, and more without changing your code structure.

</div>

<div class="feature-box" markdown>

### :material-timer-sand: Production Ready

Built-in support for **async operations**, **retries**, **error handling**, and **observability**. Deploy with confidence.

</div>

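msgFlux handles retries for you; as a framework-free illustration of what that involves, a stdlib-only backoff wrapper might look like this (a sketch of the pattern, not msgFlux's actual implementation):

```python
import random
import time

def call_with_retries(fn, *args, attempts=3, base_delay=0.5, **kwargs):
    """Call fn, retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            # Exponential backoff with jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Usage with a flaky stub standing in for a model call:
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retries(flaky, base_delay=0.01)
print(result)  # ok
```

A production-grade version would retry only transient error types and bound the total elapsed time rather than just the attempt count.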
---

## :material-rocket-launch-outline: Ready to Build?

<div style="text-align: center; margin: 3rem 0;" markdown>

[Get Started with msgFlux](quickstart/){ .md-button .md-button--primary }
[Explore Examples](learn/models/model/){ .md-button }
[View on GitHub :fontawesome-brands-github:](https://github.com/msgflux/msgflux){ .md-button }

</div>