**README.md** (+77 −1 lines changed)
with structured, maintainable, robust, and efficient AI workflows.

- inference providers
- model families
- model sizes
- **BeeAI Framework** - Enterprise-grade AI orchestration with advanced tool calling

* Easily integrate the power of LLMs into legacy code-bases (mify).
* Sketch applications by writing specifications and letting `mellea` fill in
  the details (generative slots).
pip install mellea

> uv pip install mellea[all]  # for all the optional dependencies
> ```
>
> **The BeeAI backend is included by default** - no additional installation required!
>
> You can also install all the optional dependencies with `uv sync --all-extras`

> [!NOTE]
print(m.chat("What is the etymology of mellea?").content)

Then run it:

> [!NOTE]
> Before we get started, you will need to download and install [ollama](https://ollama.com/). Mellea can work with many different types of backends, but everything in this tutorial will "just work" on a MacBook running IBM's Granite 3.3 2B model. The BeeAI backend automatically detects Ollama models and works seamlessly with local inference.

```shell
uv run --with mellea docs/examples/tutorial/example.py
```
if __name__ == "__main__":
    print("Output sentiment is:", sentiment)
```
## Getting Started with BeeAI Backend

Mellea now supports the [BeeAI Framework](https://github.com/i-am-bee/beeai-framework), providing enterprise-grade AI orchestration with advanced tool calling capabilities. The BeeAI backend integrates seamlessly with Mellea's existing patterns and workflows.

### Installation

The BeeAI backend is included with Mellea by default; no additional installation is required.

### Basic Usage
```python
from mellea.backends.beeai import BeeAIBackend
from mellea.stdlib.session import MelleaSession
from mellea.stdlib.base import CBlock

# Initialize the BeeAI backend with local Ollama
backend = BeeAIBackend(
    model_id="granite3.3:2b",
    base_url="http://localhost:11434",
)

# Create a session with the backend
session = MelleaSession(backend=backend)

# Generate text
result = session.backend.generate_from_context(
    action=CBlock("Write a short poem about AI"),
    ctx=session.ctx,
)

print(result.text)
```

**Note**: The BeeAI backend automatically detects Ollama models (like `granite3.3:2b`, `llama2`, `mistral`) and configures itself for local inference. No API key is required for local Ollama usage.
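Before constructing the backend, it can help to confirm that the local Ollama server is actually up. Below is a minimal standalone sketch using only the standard library (this is not part of Mellea's API; it relies on Ollama's `GET /api/tags` endpoint, which lists locally installed models and serves as a cheap health check):

```python
import json
import urllib.request

def ollama_reachable(base_url: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        # /api/tags lists installed models; a valid JSON reply means the
        # server is up and ready to serve local inference.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            json.load(resp)
        return True
    except (OSError, ValueError):
        # Connection refused, timeout, or a non-JSON reply all mean "not usable".
        return False

if __name__ == "__main__":
    if not ollama_reachable():
        print("Ollama is not running - start it with `ollama serve`.")
```

A check like this gives a clear error message up front instead of a connection failure buried inside the first generate call.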
### Advanced Features

The BeeAI backend supports all Mellea features, including:

- **Structured Output**: Generate Pydantic models and structured data
- **Tool Calling**: Advanced function calling with custom tools
- **Model Options**: Temperature, max tokens, top-p, and more
- **Context Management**: Full conversation history and context handling
- **Formatting**: Jinja2 template support for complex prompts
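"Structured output" means the model is asked to return JSON matching a schema, which your code then validates. The validation half can be sketched with the standard library alone (the `Sentiment` class and its field names are illustrative, not part of Mellea's API):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Sentiment:
    label: str        # e.g. "positive" or "negative"
    confidence: float

def parse_structured(raw: str, cls=Sentiment):
    """Validate a JSON string (as an LLM might return) against a dataclass schema."""
    data = json.loads(raw)
    expected = {f.name for f in fields(cls)}
    missing = expected - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    # Keep only the schema's fields, ignoring any extras the model added.
    return cls(**{k: data[k] for k in expected})

reply = '{"label": "positive", "confidence": 0.93}'  # hypothetical model output
result = parse_structured(reply)
```

In practice a Pydantic model gives you richer type coercion and error messages, but the shape of the workflow - parse, check fields, construct a typed object - is the same.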
### Tool Calling Example

```python
from mellea.stdlib.base import Component, TemplateRepresentation

class CalculatorComponent(Component):
    def parts(self):
        return []

    def format_for_llm(self):
        return TemplateRepresentation(
            obj=self,
            args={"content": "Calculate 2+2"},
            # eval is fine for a demo, but never use it on untrusted model output
            tools={"calculator": lambda x: eval(x)},
            template_order=["*", "ContentBlock"],
        )

# Use with tool calling enabled
result = session.backend.generate_from_context(
    action=CalculatorComponent(),
    ctx=session.ctx,
    tool_calls=True,
)
```
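A caution on the example above: `eval` on model-generated text executes arbitrary Python, so it is only acceptable in a demo. A safer drop-in for the `calculator` tool walks the expression's AST and permits arithmetic only; this is a standalone sketch, not Mellea code:

```python
import ast
import operator

# Whitelist of allowed operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calc(expr: str) -> float:
    """Evaluate an arithmetic expression without eval's code-execution risk."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))
```

With this in place, `tools={"calculator": safe_calc}` slots into the `TemplateRepresentation` above, and a prompt-injected payload like `__import__("os").system(...)` raises `ValueError` instead of executing.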
For more examples, see [docs/examples/beeai/101_example.py](docs/examples/beeai/101_example.py).

**Current Status**: The BeeAI backend is fully implemented and covered by unit tests. It supports structured output, tool calling, model options, and context management, and is ready for production use with proper API configuration.