
Commit b4a1693

Use updated class of the tool version

1 parent a6888ce, commit b4a1693

File tree

1 file changed (+33, -101 lines)


tools/toolkits/others/apify.mdx

Lines changed: 33 additions & 101 deletions
@@ -18,9 +18,8 @@ This guide demonstrates how to integrate and use [Apify](https://apify.com/actor
 2. Obtain your Apify API token (can be obtained from [Apify](https://docs.apify.com/platform/integrations/api))
 3. Install the required packages:
 
-
 ```bash
-pip install agno apify-client
+pip install agno apify-client langchain-apify
 ```
 
 ## Basic Usage
@@ -35,7 +34,8 @@ from agno.tools.apify import ApifyTools
 agent = Agent(
     tools=[
         ApifyTools(
-            api_key="your_apify_api_key"  # Or set the APIFY_TOKEN environment variable
+            actors=["apify/rag-web-browser"],  # Specify which Apify actors to use, use multiple ones if needed
+            apify_api_token="your_apify_api_key"  # Or set the APIFY_API_TOKEN environment variable
         )
     ],
     show_tool_calls=True,
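Editor's note: the token fallback introduced in this hunk (explicit `apify_api_token` argument, else the `APIFY_API_TOKEN` environment variable) is plain Python and can be sketched outside of agno. The helper name `resolve_apify_token` below is the editor's, not part of the toolkit; only the precedence rule comes from the diff.

```python
import os

def resolve_apify_token(apify_api_token=None):
    # Mirror of the new __init__'s fallback: an explicit argument wins,
    # otherwise fall back to the APIFY_API_TOKEN environment variable.
    return apify_api_token or os.getenv("APIFY_API_TOKEN")

os.environ["APIFY_API_TOKEN"] = "token-from-env"
print(resolve_apify_token("token-from-arg"))  # explicit argument wins
print(resolve_apify_token())                  # environment fallback
```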
@@ -60,7 +60,7 @@ from agno.tools.apify import ApifyTools
 
 agent = Agent(
     tools=[
-        ApifyTools()  # RAG web search is enabled by default
+        ApifyTools(["apify/rag-web-browser"])
     ],
     show_tool_calls=True,
     markdown=True
@@ -80,9 +80,7 @@ from agno.tools.apify import ApifyTools
 
 agent = Agent(
     tools=[
-        ApifyTools(
-            use_website_content_crawler=True  # Disabled by default
-        )
+        ApifyTools(["apify/website-content-crawler"])
     ],
     markdown=True
 )
@@ -91,50 +89,7 @@ agent = Agent(
 agent.print_response("Summarize the content from https://docs.agno.com/introduction", markdown=True)
 ```
 
-### 3. Web Scraper
-
-The Web Scraper tool uses Apify's Web Scraper actor to extract structured data from websites.
-
-```python
-from agno.agent import Agent
-from agno.tools.apify import ApifyTools
-
-agent = Agent(
-    tools=[
-        ApifyTools(
-            use_web_scraper=True  # Disabled by default
-        )
-    ],
-    show_tool_calls=True
-)
-
-# Extract specific elements from a webpage
-agent.print_response("Extract the main heading and first paragraph from https://www.example.com", markdown=True)
-```
-
-### 4. Instagram Scraper
-
-The Instagram Scraper tool allows your agent to extract information from Instagram profiles, hashtags, or places.
-
-```python
-from agno.agent import Agent
-from agno.tools.apify import ApifyTools
-
-agent = Agent(
-    tools=[
-        ApifyTools(
-            use_instagram_scraper=True  # Enabled by default
-        )
-    ],
-    show_tool_calls=True
-)
-
-# Extract information from Instagram
-agent.print_response("Find trending posts for the hashtag #AI", markdown=True)
-agent.print_response("Get information about the Instagram user 'Instagram'", markdown=True)
-```
-
-### 5. Google Places Crawler
+### 3. Google Places Crawler
 
 This tool extracts data about businesses from Google Maps and Google Places.
 
@@ -144,9 +99,7 @@ from agno.tools.apify import ApifyTools
 
 agent = Agent(
     tools=[
-        ApifyTools(
-            use_google_places_crawler=True  # Enabled by default
-        )
+        ApifyTools(["compass/crawler-google-places"])
     ],
     show_tool_calls=True
 )
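Editor's note: for readers migrating old-style configuration, the hunks in this commit imply a mapping from the removed boolean flags to the new actor IDs. The dict below summarizes only the three pairings this diff actually names (the web-scraper and Instagram actors are dropped without a named replacement, so they are deliberately omitted); the helper `actors_for_flags` is illustrative, not part of either version of the toolkit.

```python
# Old constructor flags -> Apify actor IDs, as paired in this commit's hunks.
FLAG_TO_ACTOR = {
    "use_rag_web_search": "apify/rag-web-browser",
    "use_website_content_crawler": "apify/website-content-crawler",
    "use_google_places_crawler": "compass/crawler-google-places",
}

def actors_for_flags(flags):
    # Translate enabled old-style flags into the new `actors` list,
    # skipping flags this commit gives no replacement for.
    return [FLAG_TO_ACTOR[name] for name, on in flags.items()
            if on and name in FLAG_TO_ACTOR]

print(actors_for_flags({"use_rag_web_search": True,
                        "use_google_places_crawler": False}))
# ['apify/rag-web-browser']
```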
@@ -167,10 +120,10 @@ from agno.tools.apify import ApifyTools
 
 agent = Agent(
     tools=[
-        ApifyTools(
-            use_rag_web_search=True,
-            use_google_places_crawler=True
-        )
+        ApifyTools([
+            "apify/rag-web-browser",
+            "compass/crawler-google-places"
+        ])
     ],
     show_tool_calls=True
 )
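Editor's note: the single-actor and multi-actor forms in this commit both rest on a one-line normalization shown in the implementation-reference hunk of this same diff: a bare string is wrapped into a list before registration. A standalone rendering of that rule follows; the function name `normalize_actors` and the `None`-means-empty behavior are the editor's framing, only the `isinstance` wrap comes from the diff.

```python
from typing import List, Optional, Union

def normalize_actors(actors: Optional[Union[str, List[str]]]) -> List[str]:
    # Mirror of the diff's `[actors] if isinstance(actors, str) else actors`,
    # with None treated as "no actors to register".
    if not actors:
        return []
    return [actors] if isinstance(actors, str) else actors

print(normalize_actors("apify/rag-web-browser"))
print(normalize_actors(["apify/rag-web-browser", "compass/crawler-google-places"]))
```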
@@ -180,7 +133,7 @@ agent.print_response(
     """
     I'm traveling to Tokyo next month.
    1. Research the best time to visit and major attractions
-    2. Find highly-rated sushi restaurants near Shinjuku
+    2. Find one good rated sushi restaurants near Shinjuku
     Compile a comprehensive travel guide with this information.
     """,
     markdown=True
@@ -194,57 +147,36 @@ Below is a simplified implementation reference for the ApifyTools class. You can
 ```python
 from agno.tools import Toolkit
 from apify_client import ApifyClient
+from langchain_apify import ApifyActorsTool
 
 class ApifyTools(Toolkit):
     def __init__(
         self,
-        api_key=None,
-        max_results=4,
-        use_rag_web_search=True,
-        use_website_content_crawler=False,
-        use_web_scraper=False,
-        use_instagram_scraper=True,
-        use_google_places_crawler=True
+        actors: Union[str, List[str]] = None,
+        apify_api_token: Optional[str] = None
     ):
-        # Setup code...
-
-    def rag_web_search(self, query, timeout=45):
-        # Implementation...
-
-    def website_content_crawler(self, urls, timeout=60):
-        # Implementation...
-
-    def web_scraper(self, urls, timeout=60):
-        # Implementation...
-
-    def instagram_scraper(self, search, search_type="user", search_limit=10, timeout=180):
-        # Implementation...
+        # Initialize toolkit with Apify API token
+        super().__init__(name="ApifyTools")
+        self.apify_api_token = apify_api_token or os.getenv('APIFY_API_TOKEN')
+        self.client = ApifyClient(self.apify_api_token)
 
-    def google_places_crawler(self, location_query, search_terms=None, max_crawled_places=30, timeout=45):
-        # Implementation...
+        # Register specific actors if provided
+        if actors:
+            actor_list = [actors] if isinstance(actors, str) else actors
+            for actor_id in actor_list:
+                self.register_actor(actor_id)
+
+    def register_actor(self, actor_id: str) -> None:
+        # Register an Apify actor as a function in the toolkit
+        # Implementation details...
 ```
 
 ## Toolkit Params
 
-| Parameter                     | Type    | Default | Description                                                 |
-| ----------------------------- | ------- | ------- | ----------------------------------------------------------- |
-| `api_key`                     | `str`   | `None`  | Apify API key (or set via APIFY_TOKEN environment variable) |
-| `max_results`                 | `int`   | `4`     | Maximum number of results for web searches                  |
-| `use_rag_web_search`          | `bool`  | `True`  | Enable RAG web search tool                                  |
-| `use_website_content_crawler` | `bool`  | `False` | Enable website content crawler tool                         |
-| `use_web_scraper`             | `bool`  | `False` | Enable general web scraper tool                             |
-| `use_instagram_scraper`       | `bool`  | `True`  | Enable Instagram scraper tool                               |
-| `use_google_places_crawler`   | `bool`  | `True`  | Enable Google Places crawler tool                           |
-
-## Toolkit Functions
-
-| Function                  | Description                                                      |
-| ------------------------- | ---------------------------------------------------------------- |
-| `rag_web_search`          | Searches the web for information using the RAG Web Browser actor |
-| `website_content_crawler` | Crawls websites using Apify's website-content-crawler actor      |
-| `web_scraper`             | Scrapes websites using Apify's web-scraper actor                 |
-| `instagram_scraper`       | Scrapes Instagram profiles, hashtags, or places                  |
-| `google_places_crawler`   | Crawls Google Places for business information                    |
+| Parameter         | Type                 | Default | Description                                                       |
+| ----------------- | -------------------- | ------- | ----------------------------------------------------------------- |
+| `apify_api_token` | `str`                | `None`  | Apify API token (or set via APIFY_API_TOKEN environment variable) |
+| `actors`          | `str` or `List[str]` | `None`  | Single actor ID or list of actor IDs to register                  |
 
 ## Developer Resources
 
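Editor's note: the commit deliberately elides `register_actor`'s body ("Implementation details..."), so the following is purely illustrative and not the toolkit's actual code. Any implementation that exposes one tool function per actor has to solve one small concrete sub-problem: an actor ID such as `apify/rag-web-browser` is not a valid Python identifier. A minimal sketch of that sanitization step:

```python
import re

def actor_id_to_function_name(actor_id: str) -> str:
    # Hypothetical: map an Apify actor ID to a valid Python function name
    # by replacing every non-identifier character with an underscore.
    return re.sub(r"[^0-9a-zA-Z_]", "_", actor_id).strip("_").lower()

print(actor_id_to_function_name("apify/rag-web-browser"))
print(actor_id_to_function_name("compass/crawler-google-places"))
```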
@@ -256,4 +188,4 @@ class ApifyTools(Toolkit):
 - [Apify Platform Documentation](https://docs.apify.com)
 - [Apify Actor Documentation](https://docs.apify.com/Actors)
 - [Apify Store - Browse available Actors](https://apify.com/store)
-- [Agno Framework Documentation](https://docs.agno.com)
+- [Agno Framework Documentation](https://docs.agno.com)
