
Commit 4bfade1

📦 NEW: Python SDK
1 parent 63707a9 commit 4bfade1

48 files changed: +2762, −450 lines

examples/agent/README.md

Lines changed: 187 additions & 0 deletions
@@ -0,0 +1,187 @@
# Agent Examples

This directory contains examples demonstrating how to use the Langbase Python SDK's agent functionality.

## Prerequisites

Before running these examples, make sure you have:

1. **Langbase API Key**: Sign up at [Langbase](https://langbase.com) and get your API key
2. **LLM API Key**: Get an API key from your preferred LLM provider (OpenAI, Anthropic, etc.)
3. **Python Dependencies**: Install the required packages:

```bash
pip install langbase requests
```

## Environment Variables

Set the following environment variables:

```bash
export LANGBASE_API_KEY="your_langbase_api_key"
export LLM_API_KEY="your_llm_api_key"  # OpenAI, Anthropic, etc.
```

For specific examples, you may need additional API keys:

- `RESEND_API_KEY` for the email tool example
- `OPENAI_API_KEY` for examples that specifically use OpenAI
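
The example scripts also call `load_dotenv()` from the `python-dotenv` package (install it with `pip install python-dotenv` if needed), so instead of exporting the variables you can keep them in a local `.env` file:

```bash
# .env — picked up by load_dotenv() in each example
LANGBASE_API_KEY="your_langbase_api_key"
LLM_API_KEY="your_llm_api_key"
```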
## Examples

### 1. Basic Agent Run (`agent.run.py`)

Demonstrates how to run a basic agent with a user message.

```bash
python agent.run.py
```

**Features:**

- Simple agent execution
- Basic instructions
- Single user message

### 2. Agent Run with Streaming (`agent.run.stream.py`)

Shows how to run an agent with a streaming response for real-time output; a minimal sketch of the streaming loop follows the feature list.

```bash
python agent.run.stream.py
```

**Features:**

- Streaming response handling
- Real-time output processing
- Server-sent events parsing
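
The sketch below shows roughly what the streaming loop looks like. It assumes `stream=True` returns a `requests`-style response whose raw server-sent events can be read with `iter_lines()`, and that each event carries an OpenAI-style delta chunk; `agent.run.stream.py` is the reference for the SDK's actual streaming interface.

```python
import json
import os

from langbase import Langbase

langbase = Langbase(api_key=os.environ["LANGBASE_API_KEY"])

# Assumption: with stream=True the call returns a requests-style response
# exposing iter_lines(); agent.run.stream.py has the SDK's real interface.
response = langbase.agent_run(
    stream=True,
    model="openai:gpt-4.1-mini",
    api_key=os.environ["LLM_API_KEY"],
    instructions="You are a helpful assistant.",
    input=[{"role": "user", "content": "Who is an AI Engineer?"}],
)

for line in response.iter_lines(decode_unicode=True):
    # Server-sent events arrive as lines of the form "data: {json}".
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    # Assumption: each chunk uses the OpenAI-style delta format.
    delta = chunk.get("choices", [{}])[0].get("delta", {}).get("content") or ""
    print(delta, end="", flush=True)
print()
```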
### 3. Agent Run with Structured Output (`agent.run.structured.py`)

Demonstrates how to get structured JSON output from an agent using response schemas; a minimal sketch of the schema setup follows the feature list.

```bash
python agent.run.structured.py
```

**Features:**

- JSON schema definition
- Structured output validation
- Math problem solving example
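
The sketch below illustrates the idea by passing an OpenAI-style `response_format` with a `json_schema` entry. The parameter name and schema envelope accepted by `agent_run` are assumptions here; `agent.run.structured.py` is the reference.

```python
import json
import os

from langbase import Langbase

langbase = Langbase(api_key=os.environ["LANGBASE_API_KEY"])

# JSON Schema describing the structured answer we want back.
math_schema = {
    "name": "math_solution",
    "schema": {
        "type": "object",
        "properties": {
            "steps": {"type": "array", "items": {"type": "string"}},
            "answer": {"type": "number"},
        },
        "required": ["steps", "answer"],
        "additionalProperties": False,
    },
}

response = langbase.agent_run(
    model="openai:gpt-4.1-mini",
    api_key=os.environ["LLM_API_KEY"],
    instructions="Solve the math problem and reply only with JSON that matches the schema.",
    input=[{"role": "user", "content": "What is 7 * 8 + 12?"}],
    # Assumption: agent_run forwards an OpenAI-style response_format option.
    response_format={"type": "json_schema", "json_schema": math_schema},
)

# Assumption: the structured result comes back as a JSON string in "output".
result = json.loads(response.get("output"))
print(result["answer"], result["steps"])
```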
### 4. Agent Run with Memory (`agent.run.memory.py`)

Shows how to retrieve and use memory in agent calls for context-aware responses.

```bash
python agent.run.memory.py
```

**Features:**

- Memory retrieval
- Context injection
- Career advice example

**Note:** You'll need a memory named "career-advisor-memory" in your Langbase account (the example creates one if your account has no memories yet).

### 5. Agent Run with Tools (`agent.run.tool.py`)

Demonstrates how to create and use tools with agents, including function calling and execution; a sketch of the tool-calling flow follows the requirements list.

```bash
python agent.run.tool.py
```

**Features:**

- Tool schema definition
- Function calling
- Email sending example with the Resend API
- Tool execution handling

**Additional Requirements:**

- `RESEND_API_KEY` environment variable
- A Resend account for email functionality
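
The sketch below outlines the tool-calling flow. It assumes `agent_run` accepts an OpenAI-style `tools` list and returns tool calls in the OpenAI chat-completion shape; the Resend call itself uses Resend's public HTTP API with its test sender address. `agent.run.tool.py` shows the actual flow.

```python
import json
import os

import requests
from langbase import Langbase

langbase = Langbase(api_key=os.environ["LANGBASE_API_KEY"])

# OpenAI-style function tool schema. Assumption: agent_run accepts a
# `tools` list in this format; see agent.run.tool.py for the real shape.
send_email_tool = {
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email via the Resend API.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
}


def send_email(to: str, subject: str, body: str) -> dict:
    """Send the email through Resend's HTTP API (the 'from' address must be a verified or test sender)."""
    resp = requests.post(
        "https://api.resend.com/emails",
        headers={"Authorization": f"Bearer {os.environ['RESEND_API_KEY']}"},
        json={"from": "onboarding@resend.dev", "to": [to], "subject": subject, "text": body},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


response = langbase.agent_run(
    model="openai:gpt-4.1-mini",
    api_key=os.environ["LLM_API_KEY"],
    instructions="Use the send_email tool when the user asks to email someone.",
    input=[{"role": "user", "content": "Email hello@example.com a one-line greeting."}],
    tools=[send_email_tool],
)

# Assumption: tool calls, if any, are returned in the OpenAI chat-completion
# shape; adjust this to whatever agent_run actually returns.
message = (response.get("choices") or [{}])[0].get("message", {})
for call in message.get("tool_calls") or []:
    if call["function"]["name"] == "send_email":
        args = json.loads(call["function"]["arguments"])
        print(send_email(**args))
```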
### 6. Agent Run with MCP (`agent.run.mcp.py`)

Shows how to use Model Context Protocol (MCP) servers with agents.

```bash
python agent.run.mcp.py
```

**Features:**

- MCP server configuration
- External data source integration
- Technical documentation queries

## Common Patterns

### Error Handling

All examples include basic error handling and environment variable validation:

```python
if not os.environ.get("LANGBASE_API_KEY"):
    print("❌ Missing LANGBASE_API_KEY in environment variables.")
    exit(1)
```

### Client Initialization

Standard client initialization pattern:

```python
from langbase import Langbase

langbase = Langbase(api_key=os.environ.get("LANGBASE_API_KEY"))
```

### Agent Execution

Basic agent run pattern:

```python
response = langbase.agent_run(
    model="openai:gpt-4.1-mini",
    api_key=os.environ.get("LLM_API_KEY"),
    instructions="Your instructions here",
    input=[
        {
            "role": "user",
            "content": "Your message here"
        }
    ]
)
```

## Model Support

These examples work with various LLM providers:

- OpenAI (gpt-4.1, gpt-4.1-mini, gpt-3.5-turbo)
- Anthropic (claude-3-opus, claude-3-sonnet, claude-3-haiku)
- Google (gemini-pro, gemini-pro-vision)
- And many more

## Troubleshooting

### Common Issues

1. **Missing API Keys**: Ensure all required environment variables are set
2. **Network Issues**: Check your internet connection and API endpoint accessibility
3. **Rate Limits**: Some providers enforce rate limits; implement an appropriate backoff strategy (a simple retry sketch follows this list)
4. **Response Format**: Ensure your response format schemas are valid JSON Schema
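
For rate limits, a small generic retry helper like the one below is usually enough; it retries on any exception because the specific error type raised by the SDK is not assumed here.

```python
import random
import time


def run_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call()` with exponential backoff plus jitter.

    Generic sketch: it retries on any exception because the SDK's
    rate-limit error type isn't assumed here; narrow the except clause
    in real code.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))


# Usage:
# response = run_with_backoff(lambda: langbase.agent_run(...))
```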
### Debug Mode

To enable debug mode, you can modify the examples to include additional logging:

```python
import logging
logging.basicConfig(level=logging.DEBUG)
```

## Next Steps

- Explore the [Langbase Documentation](https://docs.langbase.com)
- Try creating your own custom tools
- Experiment with different models and parameters
- Build multi-agent workflows

examples/agent/agent.run.mcp.py

Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
"""
Run Agent with MCP

This example demonstrates how to run an agent with MCP (Model Context Protocol).
"""

import os

from dotenv import load_dotenv
from langbase import Langbase

load_dotenv()


def main():
    # Check for required environment variables
    langbase_api_key = os.environ.get("LANGBASE_API_KEY")
    llm_api_key = os.environ.get("LLM_API_KEY")

    if not langbase_api_key:
        print("❌ Missing LANGBASE_API_KEY in environment variables.")
        exit(1)

    if not llm_api_key:
        print("❌ Missing LLM_API_KEY in environment variables.")
        exit(1)

    # Initialize Langbase client with an extended timeout
    langbase = Langbase(api_key=langbase_api_key, timeout=500)

    # Run the agent with an MCP server attached
    response = langbase.agent_run(
        stream=False,
        model="openai:gpt-4.1-mini",
        api_key=llm_api_key,
        instructions="You are a helpful assistant that helps users summarize text.",
        input=[
            {
                "role": "user",
                "content": "What transport protocols does the 2025-03-26 version of the MCP spec (modelcontextprotocol/modelcontextprotocol) support?"
            }
        ],
        mcp_servers=[
            {
                "type": "url",
                "name": "deepwiki",
                "url": "https://mcp.deepwiki.com/sse"
            }
        ]
    )

    print("response:", response.get("output"))


if __name__ == "__main__":
    main()

examples/agent/agent.run.memory.py

Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
"""
Run Agent with Memory

This example demonstrates how to retrieve and attach memory to an agent call.
"""

import os

from dotenv import load_dotenv
from langbase import Langbase

load_dotenv()


def main():
    # Check for required environment variables
    langbase_api_key = os.environ.get("LANGBASE_API_KEY")
    llm_api_key = os.environ.get("LLM_API_KEY")

    if not langbase_api_key:
        print("❌ Missing LANGBASE_API_KEY in environment variables.")
        exit(1)

    if not llm_api_key:
        print("❌ Missing LLM_API_KEY in environment variables.")
        exit(1)

    # Initialize Langbase client
    langbase = Langbase(api_key=langbase_api_key)

    # Make sure the example memory exists before querying it
    create_memory()

    # Step 1: Retrieve memory
    memory_response = langbase.memories.retrieve(
        memory=[
            {
                "name": "career-advisor-memory"
            }
        ],
        query="Who is an AI Engineer?",
        top_k=2
    )

    # Step 2: Run the agent with the retrieved memory injected into the prompt
    response = langbase.agent_run(
        model="openai:gpt-4.1",
        api_key=llm_api_key,
        instructions="You are a career advisor who helps users understand AI job roles.",
        input=[
            {
                "role": "user",
                "content": f"{memory_response}\n\nNow, based on the above, who is an AI Engineer?"
            }
        ]
    )

    # Step 3: Display output
    print("Agent Response:", response.get("output"))


def create_memory():
    """Create the example memory and seed it with a document on first run."""
    langbase_api_key = os.environ.get("LANGBASE_API_KEY")
    langbase = Langbase(api_key=langbase_api_key)

    # Only create the memory when the account has no memories yet
    if not langbase.memories.list():
        memory = langbase.memories.create(
            name="career-advisor-memory",
            description="A memory for the career advisor agent"
        )

        print("Memory created:", memory)

        content = """
An AI Engineer is a software engineer who specializes in building AI systems.
"""

        langbase.memories.documents.upload(
            memory_name="career-advisor-memory",
            document_name="career-advisor-document",
            document=content,
            content_type="text/plain"
        )

        print("Document uploaded")
    else:
        print("Memory already exists")


if __name__ == "__main__":
    main()

examples/agent/agent.run.py

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
"""
Run Agent

This example demonstrates how to run an agent with a user message.
"""

import os

from dotenv import load_dotenv
from langbase import Langbase

load_dotenv()


def main():
    # Check for required environment variables
    langbase_api_key = os.environ.get("LANGBASE_API_KEY")
    llm_api_key = os.environ.get("LLM_API_KEY")

    if not langbase_api_key:
        print("❌ Missing LANGBASE_API_KEY in environment variables.")
        print("Please set: export LANGBASE_API_KEY='your_langbase_api_key'")
        exit(1)

    if not llm_api_key:
        print("❌ Missing LLM_API_KEY in environment variables.")
        print("Please set: export LLM_API_KEY='your_llm_api_key'")
        exit(1)

    # Initialize Langbase client
    langbase = Langbase(api_key=langbase_api_key)

    # Run the agent
    response = langbase.agent_run(
        stream=False,
        model="openai:gpt-4.1-mini",
        api_key=llm_api_key,
        instructions="You are a helpful assistant that helps users summarize text.",
        input=[
            {
                "role": "user",
                "content": "Who is an AI Engineer?"
            }
        ]
    )

    print("response:", response.get("output"))


if __name__ == "__main__":
    main()
