BudSimulator is an advanced AI model simulation and benchmarking platform built on top of the GenZ framework. It provides comprehensive tools for evaluating, comparing, and optimizing AI models, along with hardware recommendations for running them.
- 🚀 Model Benchmarking: Comprehensive performance analysis across different hardware configurations
- 💻 Hardware Recommendations: Intelligent suggestions for optimal hardware based on your requirements
- 📊 Interactive Dashboard: Beautiful web interface for visualizing model performance
- 🔍 Model Analysis: Detailed insights into model architecture and capabilities
- 🛠️ Easy Setup: One-click automated installation and configuration
- Python 3.8 or higher
- Node.js 14+ and npm
- Git
Simply run the automated setup script:

```bash
python setup.py
```
This will:
- Check system requirements
- Create a virtual environment
- Install all Python and npm dependencies
- Set up the database with pre-populated model data
- Configure your LLM provider (OpenAI, Anthropic, Ollama, etc.)
- Run system tests
- Start both backend and frontend servers
- Open the application in your browser
If you prefer manual setup:
- Clone the repository

  ```bash
  git clone <repository-url>
  cd BudSimulator
  ```

- Create a virtual environment

  ```bash
  python -m venv env
  source env/bin/activate  # On Windows: env\Scripts\activate
  ```

- Install Python dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Install frontend dependencies

  ```bash
  cd frontend
  npm install
  cd ..
  ```

- Set up the database

  ```bash
  python scripts/setup_database.py
  ```

- Configure the environment

  Create a `.env` file with your LLM configuration:

  ```
  LLM_PROVIDER=openai
  LLM_API_KEY=your-api-key
  LLM_MODEL=gpt-4
  LLM_API_URL=https://api.openai.com/v1/chat/completions
  ```

- Start the servers

  Backend (stable mode - recommended):

  ```bash
  python run_api.py
  ```

  Backend (with hot-reload for development):

  ```bash
  # Unix/Linux/macOS
  RELOAD=true python run_api.py

  # Windows
  set RELOAD=true && python run_api.py
  ```

  Frontend (new terminal):

  ```bash
  cd frontend
  npm start
  ```
Once the setup is complete:
- Access the application: http://localhost:3000
- API Documentation: http://localhost:8000/docs
- Model Dashboard: Browse and analyze AI models
- Hardware Recommendations: Get optimal hardware suggestions
- Benchmarking: Run performance tests on different configurations
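Once both servers are running, a quick probe from the command line confirms the backend is reachable. The `/docs` URL comes from the list above; the `/openapi.json` path in the second command is an assumption (it is the usual location of the machine-readable schema on servers that expose interactive docs at `/docs`), so adjust it if your instance differs.

```bash
# Print the HTTP status code for the interactive API docs (expect 200)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/docs

# Fetch the machine-readable API schema; /openapi.json is an assumed path,
# typical for servers that serve interactive docs at /docs
curl -s http://localhost:8000/openapi.json
```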
```
BudSimulator/
├── apis/             # Backend API endpoints
├── frontend/         # React frontend application
├── scripts/          # Utility scripts
├── tests/            # Test suite
├── data/             # Pre-populated database
├── models/           # Model definitions
├── services/         # Business logic services
├── utils/            # Utility functions
├── setup.py          # Automated setup script
└── requirements.txt  # Python dependencies
```
BudSimulator supports multiple LLM providers:
- OpenAI: GPT-3.5, GPT-4
- Anthropic: Claude models
- Ollama: Local models
- Custom: Any OpenAI-compatible API
- `LLM_PROVIDER`: Your LLM provider (openai, anthropic, ollama, custom)
- `LLM_API_KEY`: API key for your provider
- `LLM_MODEL`: Model name to use
- `LLM_API_URL`: API endpoint URL
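As a sketch, a `.env` for the Ollama (local models) provider might look like the following. Only the four variable names above are taken from this README; the model name and endpoint URL are placeholders for whatever your local Ollama installation serves, so check which URL format BudSimulator expects before relying on them.

```
# Sketch of a .env for a local Ollama setup (values are placeholders)
LLM_PROVIDER=ollama
# Use a model you have already pulled locally, e.g. with `ollama pull`
LLM_MODEL=llama3
# Assumed default Ollama endpoint; adjust to the URL format BudSimulator expects
LLM_API_URL=http://localhost:11434/api/chat
# A local Ollama server typically does not require an API key
LLM_API_KEY=
```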
Run the test suite:

```bash
pytest tests/ -v
```
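Standard pytest selection flags work here as well if you only want to run part of the suite; the `hardware` keyword below is purely illustrative, not a known test name in this repository.

```bash
# Stop at the first failure for faster feedback
pytest tests/ -x -v

# Run only tests whose names match a keyword (the keyword is illustrative)
pytest tests/ -k "hardware" -v
```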
If ports 3000 or 8000 are already in use, the setup script will automatically find available ports.
If you encounter database issues:

```bash
rm -rf ~/.genz_simulator/db
python scripts/setup_database.py
```
Clear the npm cache and reinstall:

```bash
cd frontend
rm -rf node_modules package-lock.json
npm cache clean --force
npm install
```
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
For issues and questions:
- Open an issue on GitHub
- Check the documentation at http://localhost:8000/docs
- Review the logs in the console output
Built on top of the GenZ framework for advanced AI model management.