Commit 2763f35: Merge pull request #53 from MervinPraison/develop

v0.0.31

2 parents: 2f94604 + 11b9e71

File tree: 17 files changed, +578 -162 lines

Dockerfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 FROM python:3.11-slim
 WORKDIR /app
 COPY . .
-RUN pip install flask praisonai==0.0.30 gunicorn markdown
+RUN pip install flask praisonai==0.0.31 gunicorn markdown
 EXPOSE 8080
 CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]
```
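The bumped Dockerfile can be exercised end to end. A minimal sketch, assuming Docker is installed; `Dockerfile.praisonai` and the `praisonai-api` tag are hypothetical names used for illustration:

```shell
# Recreate the Dockerfile exactly as in the diff above
cat > Dockerfile.praisonai <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install flask praisonai==0.0.31 gunicorn markdown
EXPOSE 8080
CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]
EOF

# Verify the bumped pin before building
grep -n 'praisonai==' Dockerfile.praisonai

# Build and run (requires Docker; the -p mapping matches EXPOSE 8080 above)
# docker build -f Dockerfile.praisonai -t praisonai-api .
# docker run -p 8080:8080 praisonai-api
```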

docs/api/praisonai/deploy.html

Lines changed: 3 additions & 3 deletions

```diff
@@ -84,7 +84,7 @@ <h1 class="title">Module <code>praisonai.deploy</code></h1>
 file.write("FROM python:3.11-slim\n")
 file.write("WORKDIR /app\n")
 file.write("COPY . .\n")
-file.write("RUN pip install flask praisonai==0.0.30 gunicorn markdown\n")
+file.write("RUN pip install flask praisonai==0.0.31 gunicorn markdown\n")
 file.write("EXPOSE 8080\n")
 file.write('CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]\n')

@@ -250,7 +250,7 @@ <h2 id="raises">Raises</h2>
 file.write("FROM python:3.11-slim\n")
 file.write("WORKDIR /app\n")
 file.write("COPY . .\n")
-file.write("RUN pip install flask praisonai==0.0.30 gunicorn markdown\n")
+file.write("RUN pip install flask praisonai==0.0.31 gunicorn markdown\n")
 file.write("EXPOSE 8080\n")
 file.write('CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]\n')

@@ -416,7 +416,7 @@ <h2 id="raises">Raises</h2>
 file.write("FROM python:3.11-slim\n")
 file.write("WORKDIR /app\n")
 file.write("COPY . .\n")
-file.write("RUN pip install flask praisonai==0.0.30 gunicorn markdown\n")
+file.write("RUN pip install flask praisonai==0.0.31 gunicorn markdown\n")
 file.write("EXPOSE 8080\n")
 file.write('CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]\n')
```
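The `praisonai.deploy` change only bumps the version string written into the generated Dockerfile. A stdlib-only sketch of that writer logic, with the `file.write` lines and pinned packages taken from the diff (the standalone `write_dockerfile` helper is illustrative, not the library's actual API):

```python
import os
import tempfile

def write_dockerfile(target_dir: str, version: str = "0.0.31") -> str:
    """Write a Dockerfile the way praisonai.deploy does and return its path."""
    path = os.path.join(target_dir, "Dockerfile")
    with open(path, "w") as file:
        file.write("FROM python:3.11-slim\n")
        file.write("WORKDIR /app\n")
        file.write("COPY . .\n")
        file.write(f"RUN pip install flask praisonai=={version} gunicorn markdown\n")
        file.write("EXPOSE 8080\n")
        file.write('CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]\n')
    return path

with tempfile.TemporaryDirectory() as d:
    dockerfile = write_dockerfile(d)
    content = open(dockerfile).read()
    print("praisonai==0.0.31" in content)  # True: the pin this commit bumps
```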

docs/create_custom_tools.md

Lines changed: 0 additions & 60 deletions
This file was deleted.

docs/custom_tools.md

Lines changed: 135 additions & 0 deletions

# Create Custom Tools

Let's go through the steps to install and set up a custom PraisonAI tool.

## Step 1: Install the `praisonai` Package

First, install the `praisonai` package. Open your terminal and run:

```bash
pip install praisonai
```

## Step 2: Create the `InternetSearchTool`

Next, create a file named `tools.py` and add the following code to define the `InternetSearchTool`:

```python
from duckduckgo_search import DDGS
from praisonai_tools import BaseTool

class InternetSearchTool(BaseTool):
    name: str = "Internet Search Tool"
    description: str = "Search the Internet for relevant information based on a query or the latest news"

    def _run(self, query: str):
        ddgs = DDGS()
        results = ddgs.text(keywords=query, region='wt-wt', safesearch='moderate', max_results=5)
        return results
```

## Step 3: Define the Agent Configuration

Create a file named `agents.yaml` and add the following content to configure the agent:

```yaml
framework: crewai
topic: research about the causes of lung disease
roles:
  research_analyst:
    backstory: Experienced in analyzing scientific data related to respiratory health.
    goal: Analyze data on lung diseases
    role: Research Analyst
    tasks:
      data_analysis:
        description: Gather and analyze data on the causes and risk factors of lung diseases.
        expected_output: Report detailing key findings on lung disease causes.
    tools:
    - InternetSearchTool
```

## Step 4: Run the PraisonAI Tool

To run PraisonAI, type the following command in your terminal:

```bash
praisonai
```

If you want to use the `autogen` framework instead, run:

```bash
praisonai --framework autogen
```

## Prerequisites

Ensure the `duckduckgo_search` package is installed. If not, install it with:

```bash
pip install duckduckgo_search
```

That's it! You now have the PraisonAI tool installed and configured.

## Other Information

### TL;DR to Create a Custom Tool

```bash
pip install praisonai duckduckgo-search
export OPENAI_API_KEY="Enter your API key"
praisonai --init research about the latest AI News and prepare a detailed report
```

- Add `- InternetSearchTool` in the `agents.yaml` file under the tools section.
- Create a file called `tools.py` and add this code: [tools.py](./tools.py)

```bash
praisonai
```

### Prerequisite to Create a Custom Tool

An `agents.yaml` file should be present in the current directory.

If it doesn't exist, create it by running `praisonai --init research about the latest AI News and prepare a detailed report`.

#### Step 1 to Create a Custom Tool

Create a file called `tools.py` in the same directory as the `agents.yaml` file.

```python
# example tools.py
from duckduckgo_search import DDGS
from praisonai_tools import BaseTool

class InternetSearchTool(BaseTool):
    name: str = "InternetSearchTool"
    description: str = "Search the Internet for relevant information based on a query or the latest news"

    def _run(self, query: str):
        ddgs = DDGS()
        results = ddgs.text(keywords=query, region='wt-wt', safesearch='moderate', max_results=5)
        return results
```

#### Step 2 to Create a Custom Tool

Add the tool to the `agents.yaml` file, as shown below, under the tools section: `- InternetSearchTool`.

```yaml
framework: crewai
topic: research about the latest AI News and prepare a detailed report
roles:
  research_analyst:
    backstory: Experienced in gathering and analyzing data related to AI news trends.
    goal: Analyze AI News trends
    role: Research Analyst
    tasks:
      gather_data:
        description: Conduct in-depth research on the latest AI News trends from reputable sources.
        expected_output: Comprehensive report on current AI News trends.
    tools:
    - InternetSearchTool
```
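The pattern above, a class with `name`, `description`, and a `_run` method, is the whole contract a custom tool has to satisfy. A stdlib-only sketch of that interface with a stubbed search backend (the `BaseTool` stand-in and the canned results are illustrative; the real base class comes from `praisonai_tools` and the real backend is `DDGS().text(...)`):

```python
class BaseTool:
    """Minimal stand-in for praisonai_tools.BaseTool (illustrative only)."""
    name: str = ""
    description: str = ""

    def run(self, *args, **kwargs):
        # The framework calls run(), which dispatches to the subclass's _run()
        return self._run(*args, **kwargs)

class InternetSearchTool(BaseTool):
    name: str = "Internet Search Tool"
    description: str = "Search the Internet for relevant information"

    def _run(self, query: str):
        # Stubbed backend: real code would call DDGS().text(keywords=query, ...)
        canned = {"lung disease": [{"title": "Causes of lung disease"}]}
        return canned.get(query, [])

tool = InternetSearchTool()
print(tool.run("lung disease"))  # [{'title': 'Causes of lung disease'}]
```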

docs/firecrawl.md

Lines changed: 42 additions & 0 deletions

# Firecrawl PraisonAI Integration

## Firecrawl running on localhost:3002

```python
from firecrawl import FirecrawlApp
from praisonai_tools import BaseTool
import re

class WebPageScraperTool(BaseTool):
    name: str = "Web Page Scraper Tool"
    description: str = "Scrape and extract information from a given web page URL."

    def _run(self, url: str) -> str:
        app = FirecrawlApp(api_url='http://localhost:3002')
        response = app.scrape_url(url=url)
        content = response["content"]
        # Remove all content above the separator line
        if "========================================================" in content:
            content = content.split("========================================================", 1)[1]

        # Remove menu items and similar link patterns
        content = re.sub(r'\*\s+\[.*?\]\(.*?\)', '', content)
        content = re.sub(r'\[Skip to the content\]\(.*?\)', '', content)
        content = re.sub(r'\[.*?\]\(.*?\)', '', content)
        content = re.sub(r'\s*Menu\s*', '', content)
        content = re.sub(r'\s*Search\s*', '', content)
        content = re.sub(r'Categories\s*', '', content)

        # Remove all URLs
        content = re.sub(r'http\S+', '', content)

        # Drop empty or whitespace-only lines
        content = '\n'.join([line for line in content.split('\n') if line.strip()])

        # Limit to the first 1000 words
        words = content.split()
        if len(words) > 1000:
            content = ' '.join(words[:1000])

        return content
```
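The cleanup portion of `_run` is plain `re` and can be exercised without a Firecrawl instance. A sketch of just that pipeline, applied to hypothetical sample markup (the regexes mirror the ones above):

```python
import re

def clean_content(content: str, max_words: int = 1000) -> str:
    """Apply the same cleanup steps as WebPageScraperTool._run above."""
    content = re.sub(r'\*\s+\[.*?\]\(.*?\)', '', content)  # bulleted menu links
    content = re.sub(r'\[.*?\]\(.*?\)', '', content)       # remaining markdown links
    content = re.sub(r'\s*Menu\s*', '', content)           # stray "Menu" labels
    content = re.sub(r'http\S+', '', content)              # bare URLs
    # Drop empty or whitespace-only lines
    content = '\n'.join(l for l in content.split('\n') if l.strip())
    # Limit to the first max_words words
    words = content.split()
    return ' '.join(words[:max_words]) if len(words) > max_words else content

sample = "Menu\n* [Home](/home)\nSee https://example.com\nActual article text."
print(clean_content(sample))
```

Note the design choice in the original: link patterns are stripped before the bare-URL pass, so markdown links vanish entirely while loose URLs leave the surrounding prose intact.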

docs/langchain.md

Lines changed: 69 additions & 0 deletions

# Langchain Tools

## Integrate Langchain Direct Tools

```bash
pip install youtube_search praisonai langchain_community langchain
```

```python
# tools.py
from langchain_community.tools import YouTubeSearchTool
```

```yaml
# agents.yaml
framework: crewai
topic: research about the causes of lung disease
roles:
  research_analyst:
    backstory: Experienced in analyzing scientific data related to respiratory health.
    goal: Analyze data on lung diseases
    role: Research Analyst
    tasks:
      data_analysis:
        description: Gather and analyze data on the causes and risk factors of lung diseases.
        expected_output: Report detailing key findings on lung disease causes.
    tools:
    - 'YouTubeSearchTool'
```

## Integrate Langchain with Wrappers

```bash
pip install wikipedia langchain_community
```

```python
# tools.py
from langchain_community.utilities import WikipediaAPIWrapper
from praisonai_tools import BaseTool

class WikipediaSearchTool(BaseTool):
    name: str = "WikipediaSearchTool"
    description: str = "Search Wikipedia for relevant information based on a query."

    def _run(self, query: str):
        api_wrapper = WikipediaAPIWrapper(top_k_results=4, doc_content_chars_max=100)
        results = api_wrapper.load(query=query)
        return results
```

```yaml
# agents.yaml
framework: crewai
topic: research about nvidia growth
roles:
  data_collector:
    backstory: An experienced researcher with the ability to efficiently collect and
      organize vast amounts of data.
    goal: Gather information on Nvidia's growth by providing the ticker symbol to YahooFinanceNewsTool
    role: Data Collector
    tasks:
      data_collection_task:
        description: Collect data on Nvidia's growth from various sources such as
          financial reports, news articles, and company announcements.
        expected_output: A comprehensive document detailing data points on Nvidia's
          growth over the years.
    tools:
    - 'WikipediaSearchTool'
```
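Listing a bare string such as `- 'WikipediaSearchTool'` in `agents.yaml` works because the framework resolves each tool name to a class defined in `tools.py`. A stdlib-only sketch of how such name-to-class lookup could work (the resolver and the dummy module are illustrative assumptions, not PraisonAI's actual implementation):

```python
import types

# Stand-in for a user's tools.py module (illustrative)
tools_module = types.ModuleType("tools")
exec(
    "class WikipediaSearchTool:\n"
    "    name = 'WikipediaSearchTool'\n"
    "    def _run(self, query):\n"
    "        return f'results for {query}'\n",
    tools_module.__dict__,
)

def resolve_tools(names, module):
    """Map tool-name strings from agents.yaml to class instances in tools.py."""
    found = []
    for n in names:
        cls = getattr(module, n, None)
        if cls is None:
            raise ValueError(f"Tool {n!r} not defined in tools.py")
        found.append(cls())
    return found

tools = resolve_tools(["WikipediaSearchTool"], tools_module)
print(tools[0]._run("nvidia growth"))  # results for nvidia growth
```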

docs/reddit.md

Lines changed: 32 additions & 0 deletions

# Reddit PraisonAI Integration

```bash
export REDDIT_USER_AGENT=[USER]
export REDDIT_CLIENT_SECRET=xxxxxx
export REDDIT_CLIENT_ID=xxxxxx
```

tools.py

```python
from langchain_community.tools.reddit_search.tool import RedditSearchRun
```

agents.yaml

```yaml
framework: crewai
topic: research about the causes of lung disease
roles:
  research_analyst:
    backstory: Experienced in analyzing scientific data related to respiratory health.
    goal: Analyze data on lung diseases
    role: Research Analyst
    tasks:
      data_analysis:
        description: Gather and analyze data on the causes and risk factors of lung diseases.
        expected_output: Report detailing key findings on lung disease causes.
    tools:
    - 'RedditSearchRun'
```

docs/tavily.md

Lines changed: 15 additions & 0 deletions

# Tavily PraisonAI Integration

```python
from praisonai_tools import BaseTool
from langchain.utilities.tavily_search import TavilySearchAPIWrapper

class TavilyTool(BaseTool):
    name: str = "TavilyTool"
    description: str = "Search Tavily for relevant information based on a query."

    def _run(self, query: str):
        api_wrapper = TavilySearchAPIWrapper()
        results = api_wrapper.results(query=query, max_results=5)
        return results
```
