
Commit 3c31af6

[Integration]: add agno to integrations (#6)
* add agno to integrations
* update example + openai to requirements
* env vars
1 parent 2e2a571 commit 3c31af6

File tree

6 files changed

+164
-0
lines changed

README.md

Lines changed: 11 additions & 0 deletions
@@ -35,6 +35,16 @@ Powerful integrations for AgentKit workflows with both Browserbase and Stagehand
- **[Browserbase Implementation](./examples/integrations/agentkit/browserbase/README.md)** - Direct Browserbase integration for AgentKit
- **[Stagehand Implementation](./examples/integrations/agentkit/stagehand/README.md)** - AI-powered web automation using Stagehand

#### [**Agno Integration**](./examples/integrations/agno/README.md)
**Intelligent Web Scraping with AI Agents** - Natural language web scraping using Agno's AI agents powered by Browserbase's cloud browser infrastructure. Perfect for complex data extraction, market research, and automated content monitoring.

**Key Features:**
- Natural language scraping instructions
- AI agents that adapt to page changes
- Visual analysis and screenshot capabilities
- Structured data extraction (JSON, CSV)
- Automatic error recovery and retries

#### [**LangChain Integration**](./examples/integrations/langchain/README.md)
Integrate Browserbase with LangChain's ecosystem for advanced AI applications. Build chains that can browse, extract, and interact with web content as part of larger AI workflows.

@@ -90,6 +100,7 @@ integrations/
│ ├── langchain/ # LangChain framework integration
│ ├── browser-use/ # Simplified browser automation
│ ├── braintrust/ # Evaluation and testing tools
│ ├── agno/ # AI-powered web scraping agents
│ ├── mongodb/ # MongoDB data extraction & storage
│ └── agentkit/ # AgentKit implementations
└── README.md # This file
Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
BROWSERBASE_API_KEY=
BROWSERBASE_PROJECT_ID=

OPENAI_API_KEY=

examples/integrations/agno/.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
/venv

examples/integrations/agno/README.md

Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
# Agno + Browserbase Integration

**Intelligent web scraping with AI agents powered by Browserbase's cloud browser infrastructure.**

Agno provides AI agents that can understand natural language instructions for web scraping, while Browserbase delivers the reliable browser infrastructure needed to handle modern JavaScript-heavy websites and bypass anti-bot protection.

## 🚀 Why This Integration?

### Traditional Scraping Challenges
- **JavaScript-Heavy Sites**: Content loads dynamically after page load
- **Anti-Bot Protection**: Advanced detection systems block traditional scrapers
- **Infrastructure Complexity**: Managing browsers and scaling is difficult

### Agno + Browserbase Solution
- **🤖 AI-Powered**: Natural language scraping instructions
- **🚀 Real Browser**: Full Chrome with JavaScript execution
- **🛡️ Stealth Capabilities**: Bypasses anti-bot systems
- **⚡ Zero Infrastructure**: Cloud-managed browsers with automatic scaling

## 📦 Key Features

**Intelligent Automation**: AI agents adapt to page changes and handle complex workflows
**Visual Analysis**: Screenshots, layout detection, visual regression testing
**Multi-Step Navigation**: Handle pagination, forms, and complex user journeys
**Structured Data**: Extract data in JSON, CSV, or custom formats
**Error Recovery**: Automatic retries and intelligent error handling

## 🔧 Setup

**Install packages:**
```bash
pip install browserbase playwright agno
```

**Set environment variables:**
```bash
export BROWSERBASE_API_KEY=your_api_key_here
export BROWSERBASE_PROJECT_ID=your_project_id_here
```

*Get credentials from the [Browserbase dashboard](https://browserbase.com)*

## 🤝 Support & Resources

- **📧 Support**: [[email protected]](mailto:[email protected])
- **📚 Documentation**: [docs.browserbase.com](https://docs.browserbase.com)
- **🔧 Agno Docs**: [agno documentation](https://agno.dev)
- **💬 Community**: [GitHub Issues](https://github.com/browserbase/integrations/issues)

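The README section above asks the user to export two Browserbase credentials before running the example. As an illustrative aside (not part of this commit), a small stdlib-only helper can report which required variables are missing up front; the helper name and the variable list are assumptions drawn from the README:

```python
import os

# Variables the integration README asks the user to export (assumed complete).
REQUIRED_VARS = ("BROWSERBASE_API_KEY", "BROWSERBASE_PROJECT_ID")

def missing_env_vars(env=None):
    """Return the names of required Browserbase variables that are unset or empty."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED_VARS if not env.get(name)]

# With an empty environment, both names are reported as missing.
print(missing_env_vars({}))  # → ['BROWSERBASE_API_KEY', 'BROWSERBASE_PROJECT_ID']
```

A check like this could run before constructing the agent in main.py, failing fast instead of surfacing an authentication error mid-session.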
examples/integrations/agno/main.py

Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
import os
from dotenv import load_dotenv

from agno.agent import Agent
from agno.tools.browserbase import BrowserbaseTools

load_dotenv()

# Browserbase Configuration
# -------------------------------
# These environment variables are required for the BrowserbaseTools to function properly.
# You can set them in your .env file or export them directly in your terminal.

# BROWSERBASE_API_KEY: Your API key from the Browserbase dashboard
# - Required for authentication
# - Format: starts with "bb_live_" or "bb_test_" followed by a unique string
BROWSERBASE_API_KEY = os.getenv("BROWSERBASE_API_KEY")

# BROWSERBASE_PROJECT_ID: The project ID from your Browserbase dashboard
# - Required to identify which project to use for browser sessions
# - Format: UUID string (8-4-4-4-12 format)
BROWSERBASE_PROJECT_ID = os.getenv("BROWSERBASE_PROJECT_ID")

agent = Agent(
    name="Web Automation Assistant",
    tools=[BrowserbaseTools(
        api_key=BROWSERBASE_API_KEY,
        project_id=BROWSERBASE_PROJECT_ID,
    )],
    instructions=[
        "You are a web automation assistant that can help with:",
        "1. Capturing screenshots of websites",
        "2. Extracting content from web pages",
        "3. Monitoring website changes",
        "4. Taking visual snapshots of responsive layouts",
        "5. Automated web testing and verification",
    ],
    show_tool_calls=True,
    markdown=True,
)

# Content Extraction and Screenshots

# Hacker News Example
# agent.print_response("""
# Go to https://news.ycombinator.com and extract:
# 1. The page title
# 2. Take a screenshot of the top stories section
# 3. Extract the first 5 stories and their links
# 4. Then go to those links and extract the title and description of each story in JSON format
# {
#     "title": "string",
#     "description": "string",
#     "link": "string"
# }
# """)

agent.print_response("""
Visit https://quotes.toscrape.com and:
1. Extract the first 5 quotes and their authors
2. Navigate to page 2
3. Extract the first 5 quotes from page 2
""")
Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
agno==1.5.5
annotated-types==0.7.0
anyio==4.9.0
browserbase==1.4.0
certifi==2025.4.26
click==8.2.1
distro==1.9.0
docstring_parser==0.16
gitdb==4.0.12
GitPython==3.1.44
greenlet==3.2.2
h11==0.16.0
httpcore==1.0.9
httpx==0.28.1
idna==3.10
jiter==0.10.0
markdown-it-py==3.0.0
mdurl==0.1.2
openai==1.82.0
playwright==1.52.0
pydantic==2.11.5
pydantic-settings==2.9.1
pydantic_core==2.33.2
pyee==13.0.0
Pygments==2.19.1
python-dotenv==1.1.0
python-multipart==0.0.20
PyYAML==6.0.2
rich==14.0.0
shellingham==1.5.4
smmap==5.0.2
sniffio==1.3.1
tomli==2.2.1
tqdm==4.67.1
typer==0.16.0
typing-inspection==0.4.1
typing_extensions==4.13.2
