4 changes: 4 additions & 0 deletions examples/weather_bot/.env_sample
Original file line number Diff line number Diff line change
@@ -0,0 +1,4 @@
OPENWEATHER_API_KEY=your_openweather_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
LLM_MODEL=gemini/gemini-2.0-flash
124 changes: 124 additions & 0 deletions examples/weather_bot/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,124 @@
# Weather Bot

An AI-powered weather assistant that provides real-time weather information through both CLI and web interfaces.

![Weather Bot](screenshots/bot.png)

## Features

- 🤖 AI-powered conversational weather queries
- 🌍 Real-time weather data from OpenWeather API
- 💬 Dual interface: Command-line and Web UI
- 🔧 Support for multiple LLM providers (Gemini, OpenAI)
- ⚡ Fast and responsive


## Installation

### 1. Clone the Repository

```bash
git clone https://github.com/jentic/standard-agent.git
cd standard-agent
```

### 2. Install Dependencies

```bash
make install
```

### 3. Activate Virtual Environment

```bash
source .venv/bin/activate
```

### 4. Navigate to Weather Bot

```bash
cd examples/weather_bot
```

### 5. Install Weather Bot Requirements

```bash
pip3 install -r requirements.txt
```

## Configuration

### 1. Create Environment File

Create a `.env` file in the `examples/weather_bot` directory:

```bash
touch .env
```

### 2. Add API Keys

Add the following environment variables to your `.env` file:

```env
OPENWEATHER_API_KEY=your_openweather_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
LLM_MODEL=gemini/gemini-2.0-flash
```
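
It can help to fail fast at startup when a required key is missing, rather than crashing mid-request. A minimal, standard-library-only sketch (the variable names match this README; the helper itself is illustrative, not part of the weather bot):

```python
import os

# Required settings from the .env example above; OPENAI_API_KEY is optional
# (only needed for openai/* models), so it is left out on purpose.
REQUIRED_KEYS = ["OPENWEATHER_API_KEY", "GEMINI_API_KEY", "LLM_MODEL"]

def missing_keys(env=None):
    """Return the names of required settings that are absent or blank."""
    env = os.environ if env is None else env
    return [key for key in REQUIRED_KEYS if not env.get(key)]

# Example: only LLM_MODEL is set, so the two API keys are reported missing.
problems = missing_keys({"LLM_MODEL": "gemini/gemini-2.0-flash"})
# problems -> ["OPENWEATHER_API_KEY", "GEMINI_API_KEY"]
```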

### 3. Get API Keys

- **OpenWeather API Key**: [https://home.openweathermap.org/api_keys](https://home.openweathermap.org/api_keys)
- **Gemini API Key**: [https://aistudio.google.com/api-keys](https://aistudio.google.com/api-keys)
- **OpenAI API Key** (optional): [https://platform.openai.com/api-keys](https://platform.openai.com/api-keys)

## Usage

### Command Line Interface (CLI)

Run the weather bot in your terminal:

```bash
python -m app.cli_bot
```


You can then interact with the bot by typing natural language queries like:
- "What's the weather in London?"
- "Will it rain in Tokyo tomorrow?"

### Web Interface

Start the web server:

```bash
fastapi run app/bot.py
```


The web interface will be available at:
- **Local**: http://localhost:8000/chat
- **API Docs**: http://localhost:8000/docs

Open your browser and navigate to the local URL to interact with the weather bot through a user-friendly web interface.

## Supported LLM Models

You can configure different LLM models by changing the `LLM_MODEL` variable in your `.env` file:

- `gemini/gemini-2.0-flash`
- `gemini/gemini-1.5-pro`
- `openai/gpt-4`
- `openai/gpt-3.5-turbo`
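
These identifiers follow the litellm-style `provider/model` format. A minimal sketch of how an agent might read and split the variable at startup (the fallback default is an assumption, mirroring the sample `.env` in this README):

```python
import os

# Fall back to the model used throughout this README when LLM_MODEL is unset.
model = os.environ.get("LLM_MODEL") or "gemini/gemini-2.0-flash"
provider, _, name = model.partition("/")
# With the default: provider == "gemini", name == "gemini-2.0-flash"
```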

## Project Structure

```
examples/weather_bot/
├── app/
│   ├── agent.py            # Agent wiring (LLM, tools, memory, reasoner)
│   └── ...
├── requirements.txt        # Python dependencies
├── .env                    # Environment variables (create this)
└── README.md               # This file
```


![Weather Bot interface](screenshots/image.png)

Empty file.
27 changes: 27 additions & 0 deletions examples/weather_bot/app/agent.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,27 @@
import os

from dotenv import load_dotenv
from agents.standard_agent import StandardAgent
from agents.llm.litellm import LiteLLM
from agents.memory.dict_memory import DictMemory
from agents.reasoner.react import ReACTReasoner
from .weather_tools import func_tools


load_dotenv()


# Step 1: Configure the LLM. Try your own preferred model by setting LLM_MODEL
# (and the matching API key) in the .env file or as environment variables.
# The fallback default mirrors the sample .env in this example.
llm = LiteLLM(model=os.getenv("LLM_MODEL", "gemini/gemini-2.0-flash"), max_tokens=1024)

tools = func_tools
memory = DictMemory()

# Step 2: Pick a reasoner profile (single-file implementation)
custom_reasoner = ReACTReasoner(llm=llm, tools=tools, memory=memory, max_turns=5)

# Step 3: Wire everything together in the StandardAgent
weather_agent = StandardAgent(
    llm=llm,
    tools=tools,
    memory=memory,
    reasoner=custom_reasoner,
)
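
For readers new to ReAct, the loop that `ReACTReasoner` runs can be pictured with a toy sketch (illustrative only; the real implementation lives in `agents/reasoner/react.py`, and the `llm_step` and tool signatures here are assumptions for the demo, not the library's API):

```python
def react_loop(llm_step, tools, question, max_turns=5):
    """Toy ReAct loop: alternate thought -> action -> observation until done."""
    observation = question
    for _ in range(max_turns):
        thought, action, arg = llm_step(observation)  # "reason"
        if action == "final":                 # model decided it has the answer
            return arg
        observation = tools[action](arg)      # "act", then feed the result back
    return observation  # turn budget exhausted; return the last observation

# Demo with a scripted "LLM" and a fake weather tool.
def scripted_llm(obs):
    if obs.startswith("What"):
        return ("need data", "get_weather", "London")
    return ("done", "final", f"London: {obs}")

fake_tools = {"get_weather": lambda city: "18°C, light rain"}
answer = react_loop(scripted_llm, fake_tools, "What's the weather in London?")
# answer -> "London: 18°C, light rain"
```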