Commit 22544c4

Release v0.3.1: Anthropic client, test fixes, and enhancements
Added:
- Anthropic Claude model client with full API support
- GitHub Models integration example
- Agent-as-tool result strategies for flexible output handling
- Context engineering examples with token tracking
- Anthropic optional dependency in pyproject.toml

Changed:
- Updated model client tests for better coverage
- Improved workflow integration tests
- Enhanced WebUI workflow view components

Fixed:
- Anthropic model name in tests (claude-3-5-haiku-20241022)
- CancellationToken import path in workflow tests
- Workflow step progress events test signatures
- OpenAI test model (gpt-4.1-mini)
1 parent baaf974 commit 22544c4

39 files changed: +3978 / -220 lines

README.md

Lines changed: 37 additions & 0 deletions
@@ -128,6 +128,43 @@ response = await agent.run("What's the weather in Paris?")

### Model Client Setup

PicoAgents supports multiple LLM providers through a unified interface. Each provider needs only minimal setup: API credentials and the matching client class. Chapter 4 covers building custom model clients for any provider.

| Provider | Client Class | Setup | Example | Source |
|----------|-------------|-------|---------|--------|
| **OpenAI** | [`OpenAIChatCompletionClient`](picoagents/src/picoagents/llm/_openai.py) | 1. Get API key from [platform.openai.com](https://platform.openai.com)<br>2. `export OPENAI_API_KEY='sk-...'` | [`basic-agent.py`](examples/agents/basic-agent.py) | [`_openai.py`](picoagents/src/picoagents/llm/_openai.py) |
| **Azure OpenAI** | [`AzureOpenAIChatCompletionClient`](picoagents/src/picoagents/llm/_azure_openai.py) | 1. Deploy model on [Azure Portal](https://portal.azure.com)<br>2. Set endpoint, key, deployment name | [`agent_azure.py`](examples/agents/agent_azure.py) | [`_azure_openai.py`](picoagents/src/picoagents/llm/_azure_openai.py) |
| **Anthropic** | [`AnthropicChatCompletionClient`](picoagents/src/picoagents/llm/_anthropic.py) | 1. Get API key from [console.anthropic.com](https://console.anthropic.com)<br>2. `export ANTHROPIC_API_KEY='sk-...'` | [`agent_anthropic.py`](examples/agents/agent_anthropic.py) | [`_anthropic.py`](picoagents/src/picoagents/llm/_anthropic.py) |
| **GitHub Models** | [`OpenAIChatCompletionClient`](picoagents/src/picoagents/llm/_openai.py)<br>+ `base_url` | 1. Get token from [github.com/settings/tokens](https://github.com/settings/tokens)<br>2. `export GITHUB_TOKEN='ghp_...'`<br>3. Set `base_url="https://models.github.ai/inference"` | [`agent_githubmodels.py`](examples/agents/agent_githubmodels.py) | Uses [`_openai.py`](picoagents/src/picoagents/llm/_openai.py) |
| **Local/Custom** | [`OpenAIChatCompletionClient`](picoagents/src/picoagents/llm/_openai.py)<br>+ `base_url` | Point to any OpenAI-compatible endpoint<br>(Ollama, LM Studio, vLLM, etc.) | Use `base_url="http://localhost:8000"` | Uses [`_openai.py`](picoagents/src/picoagents/llm/_openai.py) |

**Quick Examples:**

```python
import os

# OpenAI (default)
from picoagents import OpenAIChatCompletionClient
client = OpenAIChatCompletionClient(model="gpt-4.1-mini")

# Anthropic
from picoagents import AnthropicChatCompletionClient
client = AnthropicChatCompletionClient(model="claude-3-5-sonnet-20241022")

# GitHub Models (free tier)
client = OpenAIChatCompletionClient(
    model="openai/gpt-4.1-mini",
    api_key=os.getenv("GITHUB_TOKEN"),
    base_url="https://models.github.ai/inference"
)

# Local LLM (e.g., Ollama)
client = OpenAIChatCompletionClient(
    model="llama3.2",
    base_url="http://localhost:11434/v1"
)
```
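The table above boils down to "one client class per wire protocol, different credentials and endpoints": GitHub Models and local servers reuse `OpenAIChatCompletionClient` with a different `base_url`. A sketch of that idea as a plain config helper; the helper name and defaults here are illustrative, not part of the PicoAgents API:

```python
import os

# Hypothetical helper: pick client keyword arguments by provider name, so the
# same agent code can target OpenAI, GitHub Models, or a local endpoint.
def client_config(provider: str) -> dict:
    configs = {
        "openai": {"model": "gpt-4.1-mini"},
        "github": {
            "model": "openai/gpt-4.1-mini",
            "api_key": os.getenv("GITHUB_TOKEN"),
            "base_url": "https://models.github.ai/inference",
        },
        "local": {"model": "llama3.2", "base_url": "http://localhost:11434/v1"},
    }
    return configs[provider]
```

The resulting dict would be splatted into the client constructor, e.g. `OpenAIChatCompletionClient(**client_config("github"))`.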
### Launch the Web UI
![PicoAgents Web UI](./docs/images/picoagents_screenshot.png)

docs/images/dashboard.png

252 KB

examples/agents/agent_anthropic.py

Lines changed: 134 additions & 0 deletions
```python
#!/usr/bin/env python3
"""
Anthropic Claude Agent Example

Shows how to use Claude models with PicoAgents.
Demonstrates both tool calling and structured outputs with Claude Sonnet 4.5.

Requires: ANTHROPIC_API_KEY environment variable
Run: python examples/agents/agent_anthropic.py
"""

import asyncio
import os
from typing import List

from pydantic import BaseModel

from picoagents import Agent
from picoagents.llm import AnthropicChatCompletionClient


# Define structured output format
class TravelRecommendation(BaseModel):
    """Structured travel recommendation."""
    destination: str
    best_months: List[str]
    attractions: List[str]
    estimated_budget: str
    travel_tips: List[str]


def get_weather(location: str) -> str:
    """Get current weather for a given location."""
    return f"The weather in {location} is sunny, 75°F with clear skies"


def get_flight_info(origin: str, destination: str) -> str:
    """Get flight information between two cities."""
    return f"Direct flights from {origin} to {destination} available daily, starting at $450"


async def main():
    """Run examples with Claude."""
    print("=== Claude Agent Examples ===\n")

    # Example 1: Basic tool calling
    print("1. Tool Calling Example:")
    print("-" * 40)

    tool_agent = Agent(
        name="travel_assistant",
        description="A travel planning assistant",
        instructions="You are a helpful travel assistant with access to weather and flight information.",
        model_client=AnthropicChatCompletionClient(
            model="claude-sonnet-4-5",  # Supports all features
            api_key=os.getenv("ANTHROPIC_API_KEY")
        ),
        tools=[get_weather, get_flight_info],
        example_tasks=[
            "What's the weather in San Francisco?",
            "Are there flights from NYC to London?",
        ],
    )

    # Simple tool calling with streaming
    async for event in tool_agent.run_stream(
        "What's the weather in Paris and are there flights from San Francisco?",
        stream_tokens=False
    ):
        print(event)

    print("\n" + "=" * 50 + "\n")

    # Example 2: Structured output
    print("2. Structured Output Example:")
    print("-" * 40)

    structured_agent = Agent(
        name="travel_planner",
        description="A travel recommendation agent",
        instructions="You are a travel expert. Provide detailed recommendations for destinations.",
        model_client=AnthropicChatCompletionClient(
            model="claude-sonnet-4-5",  # Required for structured outputs
            api_key=os.getenv("ANTHROPIC_API_KEY")
        ),
        output_format=TravelRecommendation
    )

    response = await structured_agent.run(
        "Recommend a beach vacation in Southeast Asia"
    )

    # Access structured output from the assistant message
    last_message = response.messages[-1]
    if hasattr(last_message, 'structured_content') and last_message.structured_content:
        rec = last_message.structured_content
        print("\nStructured Recommendation:")
        print(f"  Destination: {rec.destination}")
        print(f"  Best Months: {', '.join(rec.best_months[:3])}")
        print(f"  Top Attractions: {', '.join(rec.attractions[:3])}")
        print(f"  Budget: {rec.estimated_budget}")
        print(f"  Key Tip: {rec.travel_tips[0]}")
    else:
        # The content is JSON but not parsed as structured_content
        print("\nResponse (JSON format):")
        print(f"  {last_message.content[:200]}...")

    print("\n" + "=" * 50 + "\n")

    # Example 3: Streaming with structured output
    print("3. Streaming Structured Output:")
    print("-" * 40)

    print("Getting recommendation for Japan...")
    event_count = 0
    async for event in structured_agent.run_stream(
        "Recommend a cultural trip to Japan",
        stream_tokens=False  # Structured output comes at the end
    ):
        event_count += 1
        if hasattr(event, 'structured_content') and event.structured_content:
            print("\n✓ Received structured recommendation:")
            print(f"  Destination: {event.structured_content.destination}")
            print(f"  Best time: {', '.join(event.structured_content.best_months[:2])}")
        # Show a sample of the streaming events
        if event_count <= 3:
            print(f"  Event {event_count}: {str(event)[:80]}...")


if __name__ == "__main__":
    if not os.getenv("ANTHROPIC_API_KEY"):
        print("Please set ANTHROPIC_API_KEY environment variable")
        print("export ANTHROPIC_API_KEY='your-key-here'")
    else:
        asyncio.run(main())
```
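The fallback branch in the example prints raw JSON when `structured_content` is not populated. Recovering the schema by hand is then a plain `json.loads` into the model, sketched here with a stdlib dataclass standing in for the pydantic `TravelRecommendation` (an assumption for illustration, not the library's actual parsing path):

```python
import json
from dataclasses import dataclass
from typing import List


@dataclass
class TravelRecommendation:
    destination: str
    best_months: List[str]
    attractions: List[str]
    estimated_budget: str
    travel_tips: List[str]


def parse_recommendation(raw: str) -> TravelRecommendation:
    # Parse the model's JSON reply into the schema; missing keys raise a
    # TypeError, which is what you want for a structured-output contract.
    return TravelRecommendation(**json.loads(raw))
```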

examples/agents/agent_as_tool.py

Lines changed: 11 additions & 2 deletions
```diff
@@ -56,15 +56,24 @@ def tool_agents():
 
 
     # Create coordinator that uses both specialists as tools
+    # Note: You can control how results are extracted using result_strategy:
+    #   - "last" (default): Returns only the final message
+    #   - "last:N": Returns last N messages concatenated
+    #   - "all": Returns all messages
+    #   - Custom callable: Function that processes messages and returns a string
     agent = Agent(
         name="research_coordinator",
         description="Coordinates research tasks using specialist agents",
         instructions="You solve tasks by delegating to the relevant agents or tools",
         model_client=model_client,
-        tools=[weather_agent.as_tool(), analysis_agent.as_tool()],
+        tools=[
+            weather_agent.as_tool(),  # Default: last message only
+            analysis_agent.as_tool(result_strategy="last:2"),  # Last 2 messages
+        ],
         example_tasks=[
             "Get the current weather in New York and analyze recent sales data.",
-            "Provide a brief report on the weather in San Francisco and its impact on outdoor events.",]
+            "Provide a brief report on the weather in San Francisco and its impact on outdoor events.",
+        ],
     )
```
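The last bullet in the comment above mentions a custom callable strategy. A sketch of what such a callable might look like; the message shape (dicts with `role`/`content` keys) is an assumption for illustration, not the PicoAgents message type:

```python
# Hypothetical custom result_strategy: receives the sub-agent's message list
# and must return a single string for the calling agent.
def summarize_results(messages) -> str:
    # Keep only assistant messages and join their content.
    parts = [m["content"] for m in messages if m.get("role") == "assistant"]
    return " | ".join(parts)

# Hypothetical wiring: analysis_agent.as_tool(result_strategy=summarize_results)
```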

examples/agents/agent_githubmodels.py

Lines changed: 54 additions & 0 deletions

```python
"""
GitHub Models Agent Example

Shows how to use GitHub Models with PicoAgents via base_url.
Requires: GITHUB_TOKEN environment variable

Run: python examples/agents/agent_githubmodels.py
"""

import asyncio
import os

from picoagents import Agent, OpenAIChatCompletionClient


def get_weather(location: str) -> str:
    """Get current weather for a given location."""
    return f"The weather in {location} is sunny, 75°F"


# Create agent with GitHub Models endpoint
agent = Agent(
    name="github_models_assistant",
    description="An assistant powered by GitHub Models",
    instructions="You are a helpful assistant with weather access.",
    model_client=OpenAIChatCompletionClient(
        model="openai/gpt-4.1-mini",
        api_key=os.getenv("GITHUB_TOKEN"),
        base_url="https://models.github.ai/inference"
    ),
    tools=[get_weather],
    example_tasks=[
        "What's the weather in San Francisco?",
        "Is it sunny in Tokyo?",
    ],
)


async def main():
    """Run example with GitHub Models."""
    print("=== GitHub Models Agent ===\n")

    async for event in agent.run_stream(
        "What's the weather in Paris?",
        stream_tokens=False
    ):
        print(event)


if __name__ == "__main__":
    if not os.getenv("GITHUB_TOKEN"):
        print("Set GITHUB_TOKEN environment variable first")
    else:
        asyncio.run(main())
```

examples/agents/middleware.py

Lines changed: 4 additions & 2 deletions
```diff
@@ -41,14 +41,16 @@ async def process_request(self, context):
             if re.search(pattern, message.content, re.IGNORECASE):
                 # Block the operation entirely - never reaches model or logs
                 raise ValueError(f"Blocked potentially malicious input")
-        return context
+        yield context
 
     async def process_response(self, context, result):
         """No response processing needed."""
-        return result
+        yield result
 
     async def process_error(self, context, error):
         """No error recovery."""
+        if False:  # Type checker hint
+            yield
         raise error
```
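The diff above converts the middleware hooks from coroutines that `return` a value into async generators that `yield` it; `process_error` never yields, so it needs an unreachable `yield` to make Python still treat it as a generator. A self-contained sketch of the idiom (the class and context shape here are illustrative, not the PicoAgents API):

```python
import asyncio


class LoggingMiddleware:
    async def process_request(self, context):
        # Async generator: yields the (possibly modified) context onward.
        print(f"request: {context}")
        yield context

    async def process_error(self, context, error):
        # Must still be an async generator so the framework can iterate it;
        # the unreachable yield makes Python compile it as one.
        if False:
            yield
        raise error


async def demo():
    mw = LoggingMiddleware()
    # A framework would iterate the generator to get the processed context.
    async for ctx in mw.process_request({"op": "model_call"}):
        return ctx


result = asyncio.run(demo())
```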

examples/agents/middleware_custom.py

Lines changed: 16 additions & 11 deletions
```diff
@@ -124,7 +124,7 @@ def _filter_content(self, content: str) -> str:
 
         return filtered_content
 
-    async def process_request(self, context: MiddlewareContext) -> MiddlewareContext:
+    async def process_request(self, context: MiddlewareContext):
         """Apply security checks before operations."""
 
         # Maintenance mode check
@@ -151,19 +151,21 @@ async def process_request(self, context: MiddlewareContext) -> MiddlewareContext
         logger.info(
             f"🛡️ Security check passed for {context.operation} (user: {user_id})"
         )
-        return context
+        yield context
 
-    async def process_response(self, context: MiddlewareContext, result: Any) -> Any:
+    async def process_response(self, context: MiddlewareContext, result: Any):
         """No response filtering needed for this example."""
-        return result
+        yield result
 
     async def process_error(
         self, context: MiddlewareContext, error: Exception
-    ) -> Optional[Any]:
+    ):
         """Log security events."""
         if "🚫" in str(error):
             user_id = context.agent_context.metadata.get("user_id", "anonymous")
             logger.warning(f"Security block for user {user_id}: {error}")
+        if False:  # Type checker hint
+            yield
         raise error
 
 
@@ -271,19 +273,21 @@ def _trim_context_intelligently(self, messages: List) -> List:
 
         return result
 
-    async def process_request(self, context: MiddlewareContext) -> MiddlewareContext:
+    async def process_request(self, context: MiddlewareContext):
         """Apply intelligent context management."""
         if context.operation == "model_call" and isinstance(context.data, list):
             context.data = self._trim_context_intelligently(context.data)
 
-        return context
+        yield context
 
-    async def process_response(self, context: MiddlewareContext, result: Any) -> Any:
-        return result
+    async def process_response(self, context: MiddlewareContext, result: Any):
+        yield result
 
     async def process_error(
         self, context: MiddlewareContext, error: Exception
-    ) -> Optional[Any]:
+    ):
+        if False:  # Type checker hint
+            yield
         raise error
 
 
@@ -391,8 +395,9 @@ async def process_error(
         error_msg = str(error)[:100] + "..." if len(str(error)) > 100 else str(error)
         logger.error(f"❌ Failed {context.operation} after {duration:.3f}s: {error_msg}")
 
+        if False:  # Type checker hint
+            yield
         raise error
-        yield  # pragma: no cover
 
 
 # =============================================================================
```

examples/agents/software_engineer_agent.py

Lines changed: 18 additions & 3 deletions
```diff
@@ -201,7 +201,12 @@ async def main():
 
     print("\n" + "-" * 70)
     print("TASK 1 COMPLETE")
-    print(f"Final message: {response1.context.messages[-1].content if response1.context.messages else 'No messages'}")
+    final_msg = (
+        response1.context.messages[-1].content
+        if response1.context and response1.context.messages
+        else "No messages"
+    )
+    print(f"Final message: {final_msg}")
     print(f"Usage: {response1.usage}")
     print("-" * 70)
 
@@ -225,7 +230,12 @@ async def main():
 
     print("\n" + "-" * 70)
     print("TASK 2 COMPLETE")
-    print(f"Final message: {response2.context.messages[-1].content if response2.context.messages else 'No messages'}")
+    final_msg = (
+        response2.context.messages[-1].content
+        if response2.context and response2.context.messages
+        else "No messages"
+    )
+    print(f"Final message: {final_msg}")
     print(f"Usage: {response2.usage}")
     print("-" * 70)
 
@@ -250,7 +260,12 @@ async def main():
 
     print("\n" + "-" * 70)
     print("TASK 3 COMPLETE")
-    print(f"Final message: {response3.context.messages[-1].content if response3.context.messages else 'No messages'}")
+    final_msg = (
+        response3.context.messages[-1].content
+        if response3.context and response3.context.messages
+        else "No messages"
+    )
+    print(f"Final message: {final_msg}")
     print(f"Usage: {response3.usage}")
     print("-" * 70)
```
