
Commit 0834ae4

feat: publish as mkdocs
1 parent 15f62f3 commit 0834ae4

19 files changed: +1396 −2 lines changed


.github/workflows/docs.yml

Lines changed: 34 additions & 0 deletions
```yaml
name: Deploy Docs

on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"
      - run: pip install mkdocs-material pymdown-extensions
      - run: mkdocs build
      - uses: actions/upload-pages-artifact@v3
        with:
          path: site/
      - id: deployment
        uses: actions/deploy-pages@v4
```

.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -45,3 +45,4 @@ build/
 .ruff_cache
 ocr_parsing/files/results
 ocr_parsing/files/temp_files
+site/
```

docs/examples/basic-sentiment.md

Lines changed: 61 additions & 0 deletions
---
title: Basic Sentiment
---

# Basic Sentiment Classification

Simple sentiment analysis using PydanticAI with fixed classification classes.

## Overview

This example demonstrates:

- Fixed `Literal` types for classification (`positive`, `negative`, `neutral`)
- Structured output with Pydantic models
- Field validation and descriptions
- Basic accuracy evaluation

## Structure

```python
from typing import Literal

from pydantic import BaseModel


class SentimentResult(BaseModel):
    sentiment: Literal["positive", "negative", "neutral"]
    reasoning: str
```

The agent is constrained to return exactly one of the three predefined sentiment classes.

## Running

```bash
cd basic_sentiment
uv run sentiment_classifier.py
```

## Output

```
Sentiment Analysis with PydanticAI
======================================================================

Review 1/10:
Text: This product is absolutely amazing! Best purchase ever!

Classification:
Sentiment: positive
Reasoning: Strong enthusiastic language indicates positive sentiment
Expected: positive
Status: Correct
```

## Key Concepts

!!! info "Fixed Classes"
    Classes are defined at design time in the `Literal` type. Pydantic ensures only valid sentiments are returned — the model **cannot** produce a value outside the allowed set.

- **Type Safety** — Results are validated Pydantic models
- **Structured Output** — The agent returns a `SentimentResult`, not raw text
- **Evaluation** — Compare predictions against expected labels for accuracy measurement

!!! tip "Next Step"
    Want categories that change at runtime? See [Dynamic Classification](dynamic-classification.md).
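
The `Literal` constraint can be exercised directly with Pydantic. Below is a minimal offline sketch (no LLM call involved) showing that out-of-set values are rejected, plus the kind of accuracy computation the evaluation step performs; the sample predictions and labels are made up for illustration:

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


class SentimentResult(BaseModel):
    sentiment: Literal["positive", "negative", "neutral"]
    reasoning: str


# A valid sentiment passes validation
ok = SentimentResult(sentiment="positive", reasoning="Enthusiastic language")
print(ok.sentiment)  # positive

# Anything outside the allowed set raises ValidationError
try:
    SentimentResult(sentiment="mixed", reasoning="Both tones present")
except ValidationError:
    print("rejected: 'mixed' is not an allowed class")

# Accuracy evaluation: compare predictions to expected labels
predictions = ["positive", "negative", "neutral", "positive"]
expected = ["positive", "negative", "positive", "positive"]
accuracy = sum(p == e for p, e in zip(predictions, expected)) / len(expected)
print(f"Accuracy: {accuracy:.0%}")  # Accuracy: 75%
```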

docs/examples/bielik.md

Lines changed: 212 additions & 0 deletions
---
title: Bielik (Local Models)
---

# Bielik Examples with PydanticAI

An educational guide to using **Bielik**, a Polish-language LLM, with the PydanticAI framework. These examples show how to build AI agents with local language models served via Ollama.

## What is Bielik?

[Bielik](https://bielik.ai/) is a Polish-language LLM, fine-tuned specifically for Polish language understanding and generation. It excels at:

- Polish language comprehension and generation
- Following instructions in Polish
- Maintaining context in conversations
- Tool calling (in supported versions)

You can find the model on [Hugging Face](https://huggingface.co/speakleash/Bielik-11B-v3.0-Instruct) and learn more on the [official Bielik website](https://bielik.ai/).

Running Bielik locally via Ollama gives you a private, cost-free alternative to cloud-based LLMs, with full control over your data.

## Prerequisites

### 1. Install Ollama

Download and install from [ollama.com](https://ollama.com). Mac users can use Homebrew:

```bash
brew install ollama
```

### 2. Pull the Bielik Model

```bash
ollama pull SpeakLeash/bielik-11b-v3.0-instruct:Q8_0
```

!!! tip "Try Different Models"
    You can try different versions of the model — just remember to update the name in the `Modelfile` as well. Browse available models at [ollama.com/search](https://ollama.com/search).

### 3. Create a custom version with the provided Modelfile

```bash
ollama create bielik_v3_q8_tools -f Modelfile
```

### 4. Serve and run the model

```bash
# Start the Ollama server
ollama serve

# In another terminal, load the model
ollama run bielik_v3_q8_tools

# Exit the interactive prompt by typing /bye

# Verify the model is being served
ollama ps
```

This starts an OpenAI-compatible API server on `http://localhost:11434/v1`.

### 5. Install dependencies

From the project root:

```bash
uv sync
```

### 6. (Optional) Set up Weather API

For the tools example, get a free API key from [weatherapi.com](https://www.weatherapi.com/) and create a `.env` file in the `bielik_example/` directory:

```bash
WEATHER_API_KEY=your_key_here
```

## Examples

### Example 1: Basic Inference

**File:** `bielik_basic_inference.py`

The simplest example — setting up a connection to the local Bielik model, creating a PydanticAI Agent, making an inference request, and retrieving token usage statistics.

```python
import asyncio

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.ollama import OllamaProvider

# Connect to the Bielik model served locally via Ollama
ollama_model = OpenAIChatModel(
    model_name="bielik_v3_q8_tools",
    provider=OllamaProvider(base_url="http://localhost:11434/v1"),
)

# Create an agent with a Polish system prompt
# ("You are an AI assistant answering briefly and concisely")
agent = Agent(ollama_model, system_prompt="Jesteś asystentem AI odpowiadającym krótko i zwięźle")


async def main():
    result = await agent.run("Cześć, kim jesteś?")  # "Hello, who are you?"
    print(f"Response: {result.output}")
    print(f"Token usage: {result.usage()}")


if __name__ == "__main__":
    asyncio.run(main())
```

```bash
cd bielik_example
uv run python bielik_basic_inference.py
```

### Example 2: Tool Calling

**File:** `bielik_basic_tools.py`

Demonstrates custom tools with `@agent.tool`, multi-turn conversations with message history, async operations, and handling tool results.

**Tools included:**

1. **roll_dice** — Simulates rolling a 6-sided dice (no external dependencies)
2. **check_weather** — Fetches real weather data via API (requires weather API key)

```python
import asyncio
import os
import secrets
from typing import Any

import httpx
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.ollama import OllamaProvider

# Key from step 6, loaded from the environment (e.g. via the .env file)
WEATHER_API_KEY = os.environ.get("WEATHER_API_KEY", "")

ollama_model = OpenAIChatModel(
    model_name="bielik_v3_q8_tools",
    provider=OllamaProvider(base_url="http://localhost:11434/v1"),
)

# "You are a helpful AI assistant"
agent = Agent(ollama_model, system_prompt="Jesteś pomocnym asystentem AI")


@agent.tool
async def roll_dice(ctx: RunContext[None]) -> int:
    """Simulates rolling a standard 6-sided dice."""
    return secrets.choice([1, 2, 3, 4, 5, 6])


@agent.tool
async def check_weather(ctx: RunContext[None], city: str) -> Any:
    """Fetches real-time weather data for a specified city."""
    url = "https://api.weatherapi.com/v1/current.json"
    params = {"key": WEATHER_API_KEY, "q": city, "aqi": "no"}

    async with httpx.AsyncClient() as client:
        response = await client.get(url, params=params)
        if response.status_code == 200:
            data = response.json()
            return {
                "location": data["location"]["name"],
                "temp_c": data["current"]["temp_c"],
                "condition": data["current"]["condition"]["text"],
            }
        return f"Error: Could not find weather for {city}"


async def main():
    # Multi-turn conversation with message history
    result_1 = await agent.run("Cześć, kim jesteś?")  # "Hello, who are you?"
    result_2 = await agent.run("Rzuć kostką!", message_history=result_1.all_messages())  # "Roll the dice!"
    result_3 = await agent.run(
        "Sprawdź pogodę w Warszawie!",  # "Check the weather in Warsaw!"
        message_history=result_2.all_messages(),
    )
    print(f"Response: {result_3.output}")


if __name__ == "__main__":
    asyncio.run(main())
```

```bash
cd bielik_example
uv run python bielik_basic_tools.py
```

## Key Concepts

### Agent

An `Agent` wraps the language model, manages conversation history, coordinates tool calling, and processes user requests.

### Tools

Tools are functions decorated with `@agent.tool` that extend an agent's capabilities. They can be synchronous or asynchronous, receive a `RunContext` parameter, and make external API calls.

### Message History

PydanticAI maintains full conversation history for multi-turn interactions, context preservation, and tool call tracking via `result.all_messages()`.

## Troubleshooting

!!! warning "Connection refused"
    Make sure Ollama is running (`ollama serve` in a separate terminal) and listening on `http://localhost:11434`.

!!! warning "Model not found"
    Pull the model first: `ollama pull SpeakLeash/bielik-11b-v3.0-instruct:Q8_0`

!!! warning "Model running slowly"
    Bielik-11b is an 11 billion parameter model. Performance depends on your hardware — GPU is significantly faster than CPU. Consider smaller quantizations (Q4, Q5) if needed.

## Further Resources

- [PydanticAI Documentation](https://ai.pydantic.dev/)
- [Ollama Documentation](https://github.com/ollama/ollama)
- [Bielik Website](https://bielik.ai/)
- [Bielik on Hugging Face](https://huggingface.co/speakleash/Bielik-11B-v3.0-Instruct)
- [WeatherAPI Documentation](https://www.weatherapi.com/docs/)
Lines changed: 64 additions & 0 deletions
---
title: Direct Model Requests
---

# Direct Model Requests

This example demonstrates how to use PydanticAI's **direct API** to make model requests without creating an Agent. The direct API is ideal when you need more control over model interactions or want to implement custom behavior.

## When to Use Direct API vs Agent

| Direct API | Agent API |
|---|---|
| More direct control over model interactions | Most application use cases |
| Custom behavior around model requests | Built-in tool execution, retrying, structured output |
| Building your own abstractions | Complex multi-turn conversations |

## Running

```bash
cd direct_model_request
uv run direct_request_demo.py
```

## Code Comparison: Direct API vs Agent

Here's a side-by-side look at how you'd accomplish the same task using both approaches:

=== "Direct API"

    ```python
    from pydantic_ai import ModelRequest
    from pydantic_ai.direct import model_request_sync

    prompt = "What is the capital of France? Answer briefly."

    response = model_request_sync(
        "openai:gpt-5.2",
        [ModelRequest.user_text_prompt(prompt)],
    )

    print(response.parts[0].content)
    ```

=== "Agent"

    ```python
    from pydantic_ai import Agent

    agent = Agent("openai:gpt-5.2")

    result = agent.run_sync("What is the capital of France? Answer briefly.")

    print(result.output)
    ```

The **Direct API** gives you raw access to the model request/response cycle — you construct `ModelRequest` objects yourself and work with response parts directly. The **Agent** wraps this in a higher-level interface that handles tool execution, retries, and structured output for you.

!!! info "When to Choose Direct API"
    The direct API gives you lower-level access when you need it. For most applications, the Agent API is the better choice — it handles retries, tool execution, and structured output parsing automatically.
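
As a sketch of the "building your own abstractions" row from the table above, here is a hypothetical retry wrapper. It takes the request function as a parameter, so the demo uses a flaky stub; in real use you might pass a closure around `model_request_sync`. Nothing below is a PydanticAI API:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def request_with_retries(
    request_fn: Callable[[], T],
    max_attempts: int = 3,
    backoff_s: float = 0.0,
) -> T:
    """Retry a model request on failure, with simple linear backoff."""
    last_exc: Exception | None = None
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except Exception as exc:  # in real code, catch the provider's error types
            last_exc = exc
            time.sleep(backoff_s * attempt)
    raise RuntimeError(f"all {max_attempts} attempts failed") from last_exc


# Stub request function: fails twice, then succeeds, simulating a flaky provider
calls = {"n": 0}


def flaky_request() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient provider error")
    return "Paris"


print(request_with_retries(flaky_request))  # Paris
```

Because the wrapper only depends on a zero-argument callable, the same pattern works for any request function, direct API or otherwise.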

## Learn More

- [PydanticAI Direct API Documentation](https://ai.pydantic.dev/direct/)
- [Agent vs Direct API Comparison](https://ai.pydantic.dev/direct/#when-to-use-the-direct-api-vs-agent)
