# AI Persona Lab

A powerful AI-driven application for creating and managing dynamic personas, enabling interactive group chats with AI personalities powered by Ollama.
## Key Features
- Dynamic Persona Generation: Create detailed AI personas with unique backgrounds, personalities, and skills
- Customizable Models: Choose different Ollama models for each persona (mistral, llama2, etc.)
- Advanced Settings: Fine-tune temperature and token settings per persona
- Interactive Group Chat: Engage in conversations with multiple AI personas simultaneously
- Persona Management: Add tags, notes, and customize settings for each persona
## Requirements

- Python 3.8+
- Ollama server running locally
- Required Python packages (see `requirements.txt`)
## Installation

1. Install Ollama following the instructions at [ollama.ai](https://ollama.ai)
2. Clone the repository:
   ```bash
   git clone https://github.com/marc-shade/ai-persona-lab.git
   cd ai-persona-lab
   ```
3. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
4. Start the Ollama server and pull the required model:
   ```bash
   ollama serve                  # Start the Ollama server
   ollama pull mistral:instruct  # Pull the default model
   ```
5. Start the application:
   ```bash
   streamlit run app.py
   ```
## Usage

1. Create Personas:
   - Use the quick generate buttons or custom input for new personas
   - Edit persona details in the expandable sections
   - Customize model settings and parameters
   - Add tags and notes for better organization
2. Manage Personas:
   - Edit any persona attribute through the UI
   - Add/remove tags and update notes
   - Customize model settings per persona
   - Toggle personas active/inactive in chat
3. Chat Interface:
   - Select which personas to include in the conversation
   - Start conversations in natural language
   - Watch personas interact based on their unique characteristics
   - Each persona maintains context and personality throughout the chat
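The turn-taking behavior described above can be sketched in a few lines. This is an illustrative outline, not the actual API of `chat/interface.py` — the function names (`build_persona_prompt`, `run_round`) and the persona fields assumed here (`name`, `background`, `personality`, `active`) are hypothetical:

```python
def build_persona_prompt(persona: dict, history) -> str:
    """Compose the prompt one persona sees for its next turn (assumed format)."""
    system = (
        f"You are {persona['name']}. Background: {persona['background']}. "
        f"Personality: {persona['personality']}. Stay in character."
    )
    return system + "\n\n" + "\n".join(history) + f"\n{persona['name']}:"

def run_round(personas, history, user_message: str, generate):
    """One chat round: every active persona replies in sequence, and each
    reply is appended to the shared history so later personas can react to it."""
    history = list(history) + [f"User: {user_message}"]
    for persona in personas:
        if not persona.get("active", True):
            continue  # inactive personas are skipped, as toggled in the UI
        reply = generate(build_persona_prompt(persona, history))
        history.append(f"{persona['name']}: {reply}")
    return history
```

Because each reply is folded back into the shared history before the next persona speaks, personas can react to one another rather than answering the user in isolation.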
## Project Structure

```
ai-persona-lab/
├── app.py              # Main Streamlit application and UI
├── models/
│   └── persona.py      # Persona class and management logic
├── chat/
│   └── interface.py    # Chat interface and message handling
├── data/               # Storage for personas and chat history
├── requirements.txt    # Python dependencies
└── README.md           # Documentation
```
## Configuration

- Ollama API URL: `http://localhost:11434/api`
- Default Model: `mistral:instruct`
- Temperature: `0.7`
- Max Tokens: `500`
- Avatar Size: `200x200`
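These defaults could live in a small settings module. The constant names below are illustrative, not the actual identifiers used in `app.py`:

```python
# Illustrative constants mirroring the defaults listed above.
OLLAMA_API_URL = "http://localhost:11434/api"
DEFAULT_MODEL = "mistral:instruct"
DEFAULT_TEMPERATURE = 0.7   # sampling temperature (0.0 to 1.0)
DEFAULT_MAX_TOKENS = 500    # maps to Ollama's num_predict option
AVATAR_SIZE = (200, 200)    # avatar dimensions in pixels
```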
Each persona can be configured with:
- Any Ollama model
- Temperature (0.0 to 1.0)
- Max tokens (50 to 2000)
- Custom system prompts via notes
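These per-persona settings map naturally onto Ollama's `/api/generate` request. The sketch below is a minimal non-streaming example: the endpoint, the `temperature` and `num_predict` options, and the `system`/`stream` fields are real parts of the Ollama API, while the helper names and persona fields are hypothetical and not taken from this project's code:

```python
import json
import urllib.request

OLLAMA_API_URL = "http://localhost:11434/api"

def persona_options(persona: dict) -> dict:
    """Map per-persona settings onto Ollama generation options,
    clamped to the ranges documented above."""
    return {
        "temperature": min(max(persona.get("temperature", 0.7), 0.0), 1.0),
        "num_predict": min(max(persona.get("max_tokens", 500), 50), 2000),
    }

def generate(persona: dict, prompt: str) -> str:
    """POST a non-streaming generation request for one persona."""
    payload = {
        "model": persona.get("model", "mistral:instruct"),
        "prompt": prompt,
        "system": persona.get("notes", ""),  # notes double as a system prompt
        "options": persona_options(persona),
        "stream": False,
    }
    req = urllib.request.Request(
        f"{OLLAMA_API_URL}/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Clamping at the boundary keeps a mistyped setting (say, a temperature of 1.5) from silently producing out-of-range requests.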
## Data Storage

- Personas are automatically saved to `data/personas.json`
- Chat history is maintained during the session
- All changes are persisted immediately
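The persistence scheme above amounts to round-tripping the persona list through JSON. A minimal sketch — the function names are illustrative, only the `data/personas.json` path comes from this project:

```python
import json
from pathlib import Path

PERSONAS_FILE = Path("data/personas.json")

def save_personas(personas, path=PERSONAS_FILE):
    """Write all personas to disk; called immediately after any change."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(personas, indent=2))

def load_personas(path=PERSONAS_FILE):
    """Load saved personas, returning an empty list on first run."""
    if not path.exists():
        return []
    return json.loads(path.read_text())
```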
## Troubleshooting

Common issues and solutions:
1. Ollama Connection Error
   - Ensure the Ollama server is running (`ollama serve`)
   - Check that the API URL is accessible (`http://localhost:11434/api`)
   - Verify firewall settings
2. Model Loading Issues
   - Pull the model explicitly: `ollama pull mistral:instruct`
   - Check available models: `ollama list`
   - Ensure sufficient disk space
3. UI Issues
   - Clear the browser cache
   - Restart the Streamlit server
   - Check the browser console for JavaScript errors
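The first two issues can also be checked programmatically by probing Ollama's `/api/tags` endpoint, which lists the installed models. This is a standalone sketch assuming only the default API URL; the function name is illustrative:

```python
import json
import urllib.request

def ollama_reachable(base_url="http://localhost:11434/api", timeout=2.0):
    """Return (True, installed_model_names) if the server answers /api/tags,
    otherwise (False, [])."""
    try:
        with urllib.request.urlopen(f"{base_url}/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return True, [m["name"] for m in data.get("models", [])]
    except OSError:  # covers URLError, connection refused, and timeouts
        return False, []
```

A `(False, [])` result points to the connection error above; a `True` result with the expected model missing from the list points to the model-loading issue.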
## Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request
## License

This project is licensed under the MIT License - see the LICENSE file for details.