Merged
4 changes: 4 additions & 0 deletions .gitignore
@@ -3,5 +3,9 @@ __pycache__
.env
.venv
.coverage
.pytest_cache

config/*
!config/mcp.template.json

coverage.xml
37 changes: 27 additions & 10 deletions README.md
@@ -1,6 +1,6 @@
# AI Agent

An intelligent AI agent framework written in Python, designed to facilitate seamless integration with Azure OpenAI services, file operations, web fetching, and search functionalities. This project provides modular components to build and extend AI-driven applications with best practices in testing, linting, and continuous integration.
An intelligent AI agent framework written in Python, designed to facilitate seamless integration with Model Context Protocol (MCP) servers, Azure OpenAI services, file operations, web fetching, and search functionalities. This project provides modular components to build and extend AI-driven applications with best practices in testing, linting, and continuous integration.

## Table of Contents
- [Features](#features)
@@ -13,25 +13,30 @@ An intelligent AI agent framework written in Python, designed to facilitate seam
- [License](#license)

## Features
- Integration with Azure OpenAI for chat and completion services
- Integration with Model Context Protocol (MCP) servers for AI tool execution
- Support for Azure OpenAI for chat and completion services
- Modular file operations (read, write, list)
- Web fetching and conversion utilities
- Search client with pluggable backends
- Tooling for codegen workflows
- Configurable via environment variables
- Configurable via environment variables and JSON configuration files

## Architecture
The codebase follows a modular structure under `src/`:

```
src/
├── agent.py # Entry point for the AI agent
├── chat.py # Chat interface implementation
├── main.py # Main application entry point
├── libs/ # Core libraries and abstractions
│ ├── azureopenai/ # Azure OpenAI wrappers (chat, client)
│ ├── fileops/ # File operations utilities
│ ├── search/ # Search client and service
│ └── webfetch/ # Web fetching and conversion services
└── tools/ # Command-line tools for file and web operations
├── tools/ # Command-line tools for file and web operations, and more
└── utils/ # Utility modules
├── azureopenai/ # Azure OpenAI wrappers (chat, client)
└── mcpclient/ # MCP client for server interactions
```

## Installation
@@ -44,17 +49,22 @@ src/
2. Create and activate a Python 3.9+ virtual environment:
```bash
python3 -m venv .venv
source .venv/bin/activate # On Windows: .venv\\Scripts\\activate
source .venv/bin/activate # On Windows: .venv\Scripts\activate
```
3. Install dependencies:
```bash
pip install -r requirements.txt
```
4. Copy `.env.example` to `.env` and configure your Azure OpenAI credentials:
4. Copy `.env.example` to `.env` and configure your credentials:
```bash
cp .env.example .env
# Edit .env to set environment variables
```
5. Configure MCP servers (optional):
```bash
cp config/mcp.template.json config/mcp.json
# Edit the config/mcp.json file to configure your MCP servers
```

## Usage

@@ -68,7 +78,14 @@ Run the **AI Agent** with:
python -m src.agent
```

Customize behavior via environment variables defined in `.env`.
Run the **Main Application** with:
```bash
python -m src.main
```

Customize behavior via:
- Environment variables defined in `.env`
- MCP server configurations in `config/mcp.json`

## Development

@@ -91,14 +108,14 @@ mypy src

## Testing

All changes must be validated with tests.
All changes must be validated with tests. The `tests/` directory mirrors the structure of `src/`.

Run unit and integration tests with coverage:
```bash
pytest --cov=src
```

Ensure 100% pass before committing.
Ensure all tests pass before committing.

## Contributing

13 changes: 13 additions & 0 deletions config/mcp.template.json
@@ -0,0 +1,13 @@
{
"servers": {
"<Server Name>": {
"command": "<Command to run the server>",
"args": [
"<Arguments to pass to the server>"
],
"env": {
"<Environment Variable Name>": "<Environment Variable Value>"
}
}
}
}
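
For illustration, a filled-in `config/mcp.json` based on the template above might look like the following. The server name, command, path, and environment variable are placeholders — substitute whatever MCP server you actually run:

```json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/workspace"
      ],
      "env": {
        "LOG_LEVEL": "info"
      }
    }
  }
}
```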
79 changes: 29 additions & 50 deletions src/agent.py
@@ -1,12 +1,14 @@
from dotenv import load_dotenv
load_dotenv()

import json
from typing import Dict, Any, List, Generator
from datetime import date

from utils import chatloop
import json
from typing import Dict, Any, Generator, List

from utils import chatutil, graceful_exit
from utils.azureopenai.chat import Chat
from tools import Tool
from tools.google_search import GoogleSearch
from tools.read_file import ReadFile
from tools.write_file import WriteFile
@@ -20,38 +22,21 @@
"list_files": ListFiles(),
"web_fetch": WebFetch()
}
chat = Chat.create(tool_map)
def add_tool(tool: Tool) -> None:
tool_map[tool.name] = tool
chat.add_tool(tool)

def process_tool_calls(response: Dict[str, Any]) -> Generator[Dict[str, Any], None, None]:
"""Process tool calls from the LLM response and return results.

Args:
response: The response from the LLM containing tool calls.

Yields:
Dict with tool response information.
"""
# Handle case where tool_calls is None or not present
if not response or not response.get("tool_calls") or not isinstance(response.get("tool_calls"), list):
return

async def process_tool_calls(response: Dict[str, Any], call_back) -> None:
for tool_call in response.get("tool_calls", []):
if not isinstance(tool_call, dict):
continue

tool_id = tool_call.get("id", "unknown_tool")

# Extract function data, handling possible missing keys
function_data = tool_call.get("function", {})
if not isinstance(function_data, dict):
continue

tool_name = function_data.get("name")
tool_name = function_data.get("name", "")
if not tool_name:
continue

arguments = function_data.get("arguments", "{}")

print(f"<Tool: {tool_name}>")
print(f"<Tool: {tool_name}> ", arguments)

try:
args = json.loads(arguments)
@@ -65,20 +50,22 @@ def process_tool_calls(response: Dict[str, Any]) -> Generator[Dict[str, Any], No
if tool_name in tool_map:
tool_instance = tool_map[tool_name]
try:
tool_result = tool_instance.run(**args)
tool_result = await tool_instance.run(**args)
print(f"<Tool Result: {tool_name}> ", tool_result)
except Exception as e:
tool_result = {
"error": f"Error running tool '{tool_name}': {str(e)}"
}
print(f"<Tool Error: {tool_name}> ", tool_result)

yield {
call_back({
"role": "tool",
"tool_call_id": tool_id,
"tool_call_id": tool_call.get("id", "unknown_tool"),
"content": json.dumps(tool_result)
}
})

# Define enhanced system role with instructions on using all available tools
system_role = """
system_role = f"""
You are a helpful assistant.
Your Name is Agent Smith and you have access to various capabilities:

@@ -90,44 +77,35 @@ def process_tool_calls(response: Dict[str, Any]) -> Generator[Dict[str, Any], No

Use these tools appropriately to provide comprehensive assistance.
Synthesize and cite your sources correctly when using search or web content.

Today is {date.today().strftime("%d %B %Y")}.
"""

chat = Chat.create(tool_map)
messages = [{"role": "system", "content": system_role}]

@chatloop("Agent")
async def run_conversation(user_prompt):
@graceful_exit
@chatutil("Agent")
async def run_conversation(user_prompt) -> str:
# Example:
# user_prompt = """
# Who is the current chancellor of Germany?
# Write the result to a file with the name 'chancellor.txt' in a folder with the name 'docs'.
# Then list me all files in my root directory and put the result in another file called 'list.txt' in the same 'docs' folder.
# """

messages.append({"role": "user", "content": user_prompt})
response = await chat.send_messages(messages)

# Handle possible None response
if not response:
return ""

# Handle missing or empty choices
choices = response.get("choices", [])
if not choices:
return ""

assistant_message = choices[0].get("message", {})
messages.append(assistant_message)

# Handle the case where tool_calls might be missing or not a list
while assistant_message.get("tool_calls"):
for result in process_tool_calls(assistant_message):
messages.append(result)
await process_tool_calls(assistant_message, messages.append)

response = await chat.send_messages(messages)

# Handle possible None response or missing choices
if not response or not response.get("choices"):
if not (response and response.get("choices", None)):
break

assistant_message = response.get("choices", [{}])[0].get("message", {})
@@ -136,4 +114,5 @@ async def run_conversation(user_prompt):
return assistant_message.get("content", "")

if __name__ == "__main__":
run_conversation()
import asyncio
asyncio.run(run_conversation())
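
The refactor above replaces the generator-based `process_tool_calls` with an async function that hands each tool result to a callback (`messages.append` at the call site). A minimal, self-contained sketch of that pattern — with a stand-in tool instead of the project's real `Tool` classes, and illustrative tool names and arguments:

```python
import asyncio
import json

async def fake_tool(query: str) -> dict:
    # Stand-in for a real Tool.run coroutine (e.g. GoogleSearch).
    return {"result": f"searched for {query}"}

async def process_tool_calls(response: dict, call_back) -> None:
    # Walk the assistant message's tool_calls, run each tool, and
    # hand the result message to the callback instead of yielding it.
    for tool_call in response.get("tool_calls", []):
        function_data = tool_call.get("function", {})
        args = json.loads(function_data.get("arguments", "{}"))
        tool_result = await fake_tool(**args)
        call_back({
            "role": "tool",
            "tool_call_id": tool_call.get("id", "unknown_tool"),
            "content": json.dumps(tool_result),
        })

messages = []
response = {"tool_calls": [{
    "id": "call_1",
    "function": {"name": "google_search",
                 "arguments": '{"query": "chancellor"}'},
}]}
asyncio.run(process_tool_calls(response, messages.append))
print(messages[0]["role"])          # tool
print(messages[0]["tool_call_id"])  # call_1
```

Passing `messages.append` as the callback is what lets the agent loop splice tool results directly into the conversation history before re-sending it to the model.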
11 changes: 6 additions & 5 deletions src/chat.py
@@ -1,9 +1,7 @@
from dotenv import load_dotenv
load_dotenv()

from typing import Dict, Any, Optional

from utils import chatloop
from utils import chatutil, graceful_exit, mainloop
from utils.azureopenai.chat import Chat

# Initialize the Chat client
@@ -20,7 +18,9 @@

messages = [{"role": "system", "content": system_role}]

@chatloop("Chat")
@mainloop
@graceful_exit
@chatutil("Chat")
async def run_conversation(user_prompt: str) -> str:
"""Run a conversation with the user.

@@ -50,4 +50,5 @@ async def run_conversation(user_prompt: str) -> str:
return content

if __name__ == "__main__":
run_conversation()
import asyncio
asyncio.run(run_conversation())
38 changes: 27 additions & 11 deletions src/main.py
@@ -1,21 +1,37 @@
import asyncio
import agent

import agent, chat
from utils import graceful_exit, mainloop
from utils.mcpclient.sessions_manager import MCPSessionManager

async def process_one():
while True:
print("Processing one...")
await asyncio.sleep(1)
session_manager = MCPSessionManager()

async def process_two():
@graceful_exit
async def mcp_discovery():
success = await session_manager.load_mcp_sessions()
if not success:
print("No valid MCP sessions found in configuration")
return

await session_manager.list_tools()
for tool in session_manager.tools:
agent.add_tool(tool)

@mainloop
@graceful_exit
async def agent_task():
await agent.run_conversation()

@graceful_exit
async def main():
# Run both coroutines concurrently
await asyncio.gather(
process_one(),
process_two()
)
print("<Discovery: MCP Server>")
await mcp_discovery()
print("\n" + "-" * 50 + "\n")

for server_name in session_manager.sessions.keys():
print(f"<Active MCP Server: {server_name}>")

await agent_task()

if __name__ == "__main__":
asyncio.run(main())
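
The `mainloop` and `graceful_exit` decorators come from the project's `utils` package, whose implementation is not shown in this diff. As a rough sketch of the semantics such decorators typically have — this is an assumption, not the project's actual code — `graceful_exit` would swallow interrupt signals so the REPL exits cleanly, and `mainloop` would re-invoke the wrapped coroutine until it signals completion:

```python
import asyncio
import functools

def graceful_exit(func):
    # Catch Ctrl+C / EOF so the program exits cleanly (assumed semantics).
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except (KeyboardInterrupt, EOFError, asyncio.CancelledError):
            print("\nExiting...")
            return None
    return wrapper

def mainloop(func):
    # Re-run the wrapped coroutine until it returns None (assumed semantics).
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        while True:
            if await func(*args, **kwargs) is None:
                break
    return wrapper

# Demo: the loop runs the body repeatedly until it returns None.
calls = []

@mainloop
@graceful_exit
async def step():
    calls.append(1)
    return None if len(calls) >= 3 else "again"

asyncio.run(step())
print(len(calls))  # 3
```

Stacking them as `@mainloop` over `@graceful_exit` (as `agent_task` in `main.py` does) keeps the retry loop on the outside, so an interrupt inside one iteration ends the whole loop rather than restarting it.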