XandAI is a modern, intelligent virtual assistant built with React and Material-UI, featuring full integration with OLLAMA for local AI models and Stable Diffusion for image generation. The application offers an elegant chat interface with a dark theme and advanced model-management features.
- Dark Theme: Elegant and modern design optimized to reduce visual fatigue
- Responsive: Adaptive interface for desktop and mobile
- Material-UI: Consistent and accessible components
- Smooth Animations: Enhanced transitions and visual feedback
- OLLAMA Integration: Connect with local AI models
- Stable Diffusion: Generate images from text prompts
- Automatic Fallback: Intelligent system that switches between OLLAMA and mock responses
- Model Selection: Interface to choose and manage available models
- Real-time Status: Visual indicators of connection and model status
- Real-time Messages: Fluid and responsive chat interface
- Chat History: Persistent conversation history with backend integration
- Typing Indicators: Visual feedback during processing
- Session Management: Complete control over chat sessions
- Image Generation: Generate images from chat responses
- Attachment Support: View generated images in chat history
- Settings Panel: Intuitive interface to configure OLLAMA and Stable Diffusion
- Connectivity Testing: Automatic service availability verification
- Model Management: Download, select, and remove models
- Persistent Configuration: Settings saved locally and on backend
The project follows Clean Architecture principles with clear separation of responsibilities:
src/
├── components/       # Reusable React components
│   ├── chat/         # Chat-specific components
│   ├── settings/     # Settings and panels
│   ├── auth/         # Authentication components
│   └── common/       # Shared components
├── application/      # Application layer
│   ├── hooks/        # Custom React hooks
│   └── services/     # Business services
├── domain/           # Entities and business rules
│   ├── entities/     # Data models
│   └── repositories/ # Repository interfaces
├── infrastructure/   # Infrastructure implementations
│   ├── api/          # External API integrations
│   └── mock-api/     # Mock implementations
└── styles/           # Global themes and styles

backend/
└── src/
    ├── domain/         # Business entities and interfaces
    ├── application/    # Use cases and DTOs
    ├── infrastructure/ # Technical implementations
    └── presentation/   # HTTP interface
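As a concrete illustration of this layering, a chat repository contract could be declared in the domain layer while the OLLAMA and mock implementations live in infrastructure. This is a minimal sketch with hypothetical class and file names, not the project's actual code:

```javascript
// domain/repositories/ChatRepository.js — abstract contract (hypothetical name)
export class ChatRepository {
  async sendMessage(prompt) {
    throw new Error('sendMessage() must be implemented by a concrete repository');
  }
}

// infrastructure/api/OllamaChatRepository.js — talks to the local OLLAMA API
export class OllamaChatRepository extends ChatRepository {
  constructor(baseUrl = 'http://localhost:11434', model = 'llama2:latest') {
    super();
    this.baseUrl = baseUrl;
    this.model = model;
  }

  async sendMessage(prompt) {
    const res = await fetch(`${this.baseUrl}/api/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: this.model, prompt, stream: false }),
    });
    const data = await res.json();
    return data.response;
  }
}

// infrastructure/mock-api/MockChatRepository.js — offline fallback implementation
export class MockChatRepository extends ChatRepository {
  async sendMessage(prompt) {
    return `Mock response for: "${prompt}"`;
  }
}
```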
- Node.js 16+ and npm/yarn
- OLLAMA installed (optional, for local AI)
- Stable Diffusion WebUI (optional, for image generation)
1. Clone the repository

   git clone https://github.com/XandAI-project/XandAI.git
   cd XandAI

2. Install frontend dependencies

   npm install

3. Install backend dependencies

   cd backend
   npm install
   cd ..

4. Configure environment

   cp env.local.example env.local

5. Start the backend

   cd backend
   npm run start:dev

6. Start the frontend

   npm start

7. Access in browser

   http://localhost:3000
To use local AI models, configure OLLAMA:
1. Install OLLAMA

   # Linux/macOS
   curl -fsSL https://ollama.ai/install.sh | sh
   # Windows: download the installer from the official site

2. Start the service

   ollama serve

3. Download models

   ollama pull llama2:latest
   ollama pull mistral:latest

4. Configure in XandAI
   - Click the settings button in the header
   - Enable OLLAMA integration
   - Test the connection (see the sketch below)
   - Select a model
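Under the hood, the connection test can be as simple as querying OLLAMA's /api/tags endpoint, which lists locally installed models. A minimal sketch (the function name is illustrative; the endpoint and response shape come from OLLAMA's documented API):

```javascript
// Checks whether a local OLLAMA instance is reachable and lists its models
export async function testOllamaConnection(baseUrl = 'http://localhost:11434') {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    if (!res.ok) return { connected: false, models: [] };
    const data = await res.json();
    return { connected: true, models: data.models.map((m) => m.name) };
  } catch (err) {
    return { connected: false, models: [], error: err.message };
  }
}
```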
To generate images, configure Stable Diffusion:
1. Install Stable Diffusion WebUI
   Follow the official installation guide.

2. Start with the API enabled

   ./webui.sh --api

3. Configure in XandAI
   - Open settings
   - Enable Stable Diffusion integration
   - Set the API URL (default: http://localhost:7860)
   - Test the connection
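Image generation then goes through the REST API that --api exposes (AUTOMATIC1111's /sdapi/v1/txt2img endpoint). A hedged sketch of such a request; the parameter values are illustrative defaults, not the project's exact settings:

```javascript
// Requests one image from the Stable Diffusion WebUI API and returns it as a data URL
export async function generateImage(prompt, baseUrl = 'http://localhost:7860') {
  const res = await fetch(`${baseUrl}/sdapi/v1/txt2img`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, steps: 20, cfg_scale: 7 }),
  });
  const data = await res.json();
  // The WebUI returns base64-encoded PNGs in data.images
  return `data:image/png;base64,${data.images[0]}`;
}
```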
- Model Selector: In the header, choose between Mock AI or OLLAMA models
- Chat: Type messages in the input area
- Image Generation: Click the image icon next to assistant messages
- Settings: Access via settings button to manage OLLAMA and Stable Diffusion
- History: Use the sidebar to navigate between conversations
- Connection: Configure OLLAMA URL (default: http://localhost:11434)
- Models: View, select, and manage available models
- Status: Monitor connectivity and model status
- Timeout: Adjust timeout for requests
- Connection: Configure Stable Diffusion WebUI URL
- Models: Select available models
- Parameters: Adjust generation settings (steps, CFG scale, etc.)
- Token: Configure authentication token if needed
- Send Messages: Type and press Enter or click "Send"
- Generate Images: Click the image icon next to assistant responses
- Automatic Fallback: If OLLAMA is unavailable, the system automatically falls back to mock responses (see the sketch after this list)
- History: Conversations are saved with the backend
- Indicators: See when the assistant is "typing"
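The fallback behaviour boils down to trying the OLLAMA-backed service first and catching failures. A minimal sketch, assuming hypothetical ollamaService and mockService objects that share the same sendMessage interface:

```javascript
// Try OLLAMA first; on any failure, keep the chat usable with a mock reply
async function getAssistantReply(prompt, ollamaService, mockService) {
  try {
    return await ollamaService.sendMessage(prompt);
  } catch (err) {
    console.warn('OLLAMA unavailable, falling back to mock response:', err.message);
    return mockService.sendMessage(prompt);
  }
}
```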
Create an env.local file in the project root:
# Backend API URL
REACT_APP_BACKEND_URL=http://localhost:3001
# Default OLLAMA URL
REACT_APP_OLLAMA_DEFAULT_URL=http://localhost:11434
# Default Stable Diffusion URL
REACT_APP_SD_DEFAULT_URL=http://localhost:7860
# Enable debug
REACT_APP_DEBUG=true
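Because the frontend exposes an eject script, it appears to be a Create React App project, where REACT_APP_-prefixed variables are inlined at build time and read through process.env. For example:

```javascript
// Read the build-time configuration, falling back to the documented defaults
const backendUrl = process.env.REACT_APP_BACKEND_URL || 'http://localhost:3001';
const ollamaUrl = process.env.REACT_APP_OLLAMA_DEFAULT_URL || 'http://localhost:11434';
const sdUrl = process.env.REACT_APP_SD_DEFAULT_URL || 'http://localhost:7860';
const debugEnabled = process.env.REACT_APP_DEBUG === 'true';
```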
Edit src/styles/theme/theme.js to customize colors and styles:
// createTheme import assumes Material-UI v5 (@mui/material); adjust the path for v4
import { createTheme } from '@mui/material/styles';

export const customTheme = createTheme({
  palette: {
    primary: {
      main: '#your-primary-color',
    },
    // ... other configurations
  },
});
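For the theme to take effect it must be passed to MUI's ThemeProvider near the root of the component tree. A sketch assuming Material-UI v5 import paths; the wrapper component name is illustrative, not the project's actual entry point:

```javascript
import { ThemeProvider, CssBaseline } from '@mui/material';
import { customTheme } from './styles/theme/theme';

// Hypothetical root wrapper; the real project may apply the theme elsewhere
export function AppRoot({ children }) {
  return (
    <ThemeProvider theme={customTheme}>
      <CssBaseline />
      {children}
    </ThemeProvider>
  );
}
```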
- OLLAMA Integration - Complete OLLAMA integration guide
- Architecture - System architecture details
OllamaConfig
{
baseUrl: string, // OLLAMA URL
timeout: number, // Timeout in ms
selectedModel: string, // Selected model
enabled: boolean // If enabled
}
StableDiffusionConfig
{
baseUrl: string, // Stable Diffusion URL
enabled: boolean, // If enabled
model: string, // Selected model
steps: number, // Generation steps
cfgScale: number // CFG Scale
}
Message
{
id: string, // Unique ID
content: string, // Message content
sender: 'user'|'assistant', // Sender
timestamp: Date, // Timestamp
isTyping: boolean, // If typing message
attachments: Array // Message attachments
}
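For illustration, an assistant message carrying a generated image could look like this (the attachment shape is an assumption, not a documented contract):

```javascript
const message = {
  id: crypto.randomUUID(),              // unique ID (modern browser / Node 19+ API)
  content: 'Here is the generated image.',
  sender: 'assistant',
  timestamp: new Date(),
  isTyping: false,
  attachments: [
    { type: 'image', url: 'data:image/png;base64,...' }, // assumed attachment shape
  ],
};
```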
npm start # Start development server
npm test # Run tests
npm run build # Build for production
npm run eject # Eject configuration (irreversible)
cd backend
npm run start:dev # Start development server
npm run build # Build for production
npm run start:prod # Start production server
npm run lint # Check code issues
npm run format # Format code automatically
Run automated tests:
# Unit tests
npm test
# Tests with coverage
npm test -- --coverage
# Tests in watch mode
npm test -- --watch
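Unit tests in this setup are standard Jest + React Testing Library tests. A minimal sketch using a hypothetical component, not an actual project file:

```javascript
import { render, screen } from '@testing-library/react';
import ChatInput from '../components/chat/ChatInput'; // hypothetical path

test('renders the message input', () => {
  render(<ChatInput />);
  expect(screen.getByRole('textbox')).toBeInTheDocument();
});
```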
npm run build
cd backend
npm run build
The build generates static files in the build/ folder that can be served by any web server:
# Serve locally for testing
npx serve -s build
# Deploy to Netlify, Vercel, etc.
# Upload the build/ folder
- Fork the project
- Create a feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add some AmazingFeature')
- Push to the branch (git push origin feature/AmazingFeature)
- Open a Pull Request
- Follow established code standards
- Write tests for new features
- Document important changes
- Use semantic commits
- User Authentication: Login system and profiles
- Multi-language Support: Internationalization
- Conversation Export: PDF, text, etc.
- Custom Plugins: Extension system
- Cloud Sync: Automatic backup
- Voice Commands: Speech-to-text integration
- Collaborative Mode: Group chat
- PWA: Progressive Web App
- Offline Mode: Offline functionality
- Performance: Loading optimizations
- Accessibility: A11y improvements
- Docker: Containerization
- First execution may be slow (model loading)
- Requires significant resources (CPU/GPU/RAM)
- Compatibility is limited to the models OLLAMA supports
- Requires Stable Diffusion WebUI running
- Generation can be slow depending on settings
- Model size affects memory usage
- Mobile needs additional optimizations
- Some components may not work in older browsers
This project is licensed under the MIT License - see the LICENSE file for details.
- OLLAMA Team - For the excellent local AI tool
- Automatic1111 - For Stable Diffusion WebUI
- Material-UI - For the component system
- React Team - For the incredible framework
- Open Source Community - For inspiration and contributions
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: [email protected]
XandAI - Building the future of AI interfaces, one conversation at a time.