A modular multi-agent system built with LangGraph and Groq that coordinates Researcher, Analyst, and Writer agents to process complex tasks.
- Modular Architecture: Clean separation of concerns with dedicated modules for agents, core logic, and UI
- Multi-Agent Coordination: Supervisor agent orchestrates workflow between specialized agents
- Interactive Web UI: Beautiful Streamlit interface for easy interaction
- Real-time Processing: Live updates during task execution
- Task History: Track and review previous analyses
- Downloadable Reports: Export generated reports in text format
supervisor_agent_app/
├── agents/ # Agent implementations
│ ├── __init__.py
│ ├── supervisor.py # Coordinates workflow
│ ├── researcher.py # Gathers information
│ ├── analyst.py # Analyzes data
│ └── writer.py # Creates reports
├── core/ # Core system logic
│ ├── __init__.py
│ ├── state.py # State definitions
│ ├── workflow.py # Graph workflow
│ ├── router.py # Agent routing
│ └── llm_config.py # LLM configuration
├── ui/ # User interface
│ └── streamlit_app.py # Streamlit web app
├── main.py # CLI application
├── requirements.txt # Dependencies
└── .env.example # Environment template
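The shared state the agents pass around (defined in core/state.py above) can be sketched as a TypedDict. The field names below are illustrative assumptions, not the project's actual schema:

```python
# Illustrative sketch of core/state.py; field names are assumptions.
from typing import List, TypedDict

class AgentState(TypedDict):
    task: str                     # the user's original request
    research_notes: str           # filled in by the researcher
    analysis: str                 # filled in by the analyst
    report: str                   # final output from the writer
    completed_agents: List[str]   # tracks which agents have run
```

Each agent reads the keys it needs and writes its own output, so the supervisor can inspect one object to decide what happens next.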
- Install dependencies: `pip install -r requirements.txt`
- Copy `.env.example` to `.env`: `cp .env.example .env`
- Add your Groq API key to `.env`: `GROQ_API_KEY=your_actual_groq_api_key_here`
- Get a Groq API key from: https://console.groq.com/keys
Run the Streamlit app with `streamlit run ui/streamlit_app.py`, then open your browser to http://localhost:8501
Run the CLI version with `python main.py`
- Supervisor Agent: Receives the task and decides which agent should work next
- Researcher Agent: Gathers comprehensive information about the topic
- Analyst Agent: Analyzes the research data and provides insights
- Writer Agent: Creates a professional report with findings and recommendations
The workflow continues until all agents have completed their tasks and a final report is generated.
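The routing loop described above can be sketched in plain Python. The agent names come from this README, but the function name and the `FINISH` sentinel are assumptions; the real logic lives in core/router.py:

```python
# Hypothetical sketch of the supervisor's routing decision.
from typing import List

AGENT_ORDER = ["researcher", "analyst", "writer"]

def route_next_agent(completed: List[str]) -> str:
    """Return the next agent to run, or 'FINISH' once the writer is done."""
    for agent in AGENT_ORDER:
        if agent not in completed:
            return agent
    return "FINISH"
```

In the actual system the supervisor is itself an LLM call, but the contract is the same: given the current state, name the next node or end the graph.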
- "What are the benefits and risks of AI in healthcare?"
- "Analyze the current state of renewable energy adoption"
- "Research the impact of remote work on productivity"
- "Investigate the latest trends in cybersecurity"
The system uses Groq's `llama-3.3-70b-versatile` model by default. You can modify this in `core/llm_config.py`.
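A minimal `core/llm_config.py` along these lines might look like the following sketch. It assumes the `langchain-groq` integration package; parameter names follow that package's documented API but may differ from this project's code:

```python
# Sketch of core/llm_config.py, assuming the langchain-groq package.
import os
from langchain_groq import ChatGroq

def get_llm(model: str = "llama-3.3-70b-versatile") -> ChatGroq:
    """Build the shared chat model; reads GROQ_API_KEY from the environment."""
    return ChatGroq(
        model=model,
        temperature=0,
        api_key=os.environ["GROQ_API_KEY"],
    )
```

Centralizing the model here means swapping models is a one-line change for every agent.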
- Create a new agent file in the `agents/` directory
- Implement the agent function following the existing pattern
- Add the agent to the workflow in `core/workflow.py`
- Update the router in `core/router.py`
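Following those steps, a hypothetical new agent might look like this sketch. The `fact_checker` name and state keys are invented for illustration; the real agents in `agents/` define the actual pattern:

```python
# Hypothetical new agent following the same shape as researcher/analyst/
# writer: a function that reads the shared state and returns an update.
def fact_checker_agent(state: dict) -> dict:
    """Illustrative agent: inspects the analysis and records that it ran."""
    analysis = state.get("analysis", "")
    # In the real system this step would call the LLM via core/llm_config.py;
    # here we only append this agent to the completed list.
    completed = state.get("completed_agents", []) + ["fact_checker"]
    return {**state, "completed_agents": completed}
```

Once written, the function is registered as a node in `core/workflow.py` and given a routing rule in `core/router.py`.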
- Python 3.8+
- Groq API key
- Internet connection for LLM API calls
- Missing API Key: Ensure your `.env` file contains a valid `GROQ_API_KEY`
- Import Errors: Make sure all dependencies are installed with `pip install -r requirements.txt`
- Port Issues: If Streamlit port 8501 is busy, use: `streamlit run ui/streamlit_app.py --server.port 8502`
- Check the console output for detailed error messages
- Ensure your Groq API key has sufficient credits
- Verify internet connectivity for API calls
- Average task completion time: 30-60 seconds
- Supports concurrent requests in Streamlit
- Optimized for tasks requiring research and analysis
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is open source and available under the MIT License.
For issues and questions:
- Check the troubleshooting section above
- Review the console output for error details
- Ensure all requirements are properly installed