# AI-Time-Machines

Full Stack AI Platform for Time-Series Agent Creation and Management

AI-Time-Machines is a comprehensive full-stack platform that combines React, Node.js, Python ML services, and PostgreSQL to create and manage time-series AI agents with advanced forecasting capabilities.
## Features

### Frontend

- 🎨 Modern, responsive UI with TailwindCSS
- 🔐 User authentication (Login/Register)
- 📊 Interactive dashboard with real-time statistics
- 📈 Time-series data upload and management
- 🤖 AI model training interface
- 🔮 Predictions visualization
- 🛠️ AI tools and toolkits management

### Backend

- ⚡ RESTful API with Express.js
- 🔒 JWT-based authentication with bcrypt
- 🗄️ PostgreSQL database with Sequelize ORM
- 📝 Comprehensive API endpoints for:
  - User management
  - Time-series data operations
  - AI model training workflows
  - Prediction generation
  - AI tools and toolkits catalog

### Machine Learning

- 🧠 Time-series forecasting models:
  - LSTM (Long Short-Term Memory)
  - GRU (Gated Recurrent Unit)
  - ARIMA (statistical forecasting)
  - Prophet (seasonal patterns)
  - Transformer (complex patterns)
- 📊 Model training and prediction APIs
- 💾 Model persistence and versioning

### DevOps

- 🐳 Docker containerization
- 🔄 Docker Compose orchestration
- 🚀 CI/CD with GitHub Actions
- 🔍 Security scanning and dependency review

## Prerequisites

- Node.js (version 18.0.0 or higher)
- Python (version 3.11 or higher)
- PostgreSQL (version 15 or higher)
- Docker and Docker Compose (for containerized deployment)
- OpenAI API Key (optional, for ChatGPT integration)

## Quick Start (Docker)

1. Clone the repository:

   ```bash
   git clone https://github.com/lippytm/AI-Time-Machines.git
   cd AI-Time-Machines
   ```

2. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env and add your configuration
   ```

3. Start all services:

   ```bash
   docker-compose up -d
   ```

4. Access the application:
   - Frontend: http://localhost:3000
   - Backend API: http://localhost:5000
   - Python ML Service: http://localhost:8000
## Manual Setup

### Environment Configuration

```bash
cp .env.example .env
```

Edit `.env` with your configuration:

```env
# Database
DB_HOST=localhost
DB_PORT=5432
DB_NAME=ai_time_machines
DB_USER=postgres
DB_PASSWORD=your_password

# JWT
JWT_SECRET=your_secret_key
JWT_EXPIRES_IN=7d

# Servers
BACKEND_PORT=5000
FRONTEND_PORT=3000
PYTHON_SERVICE_PORT=8000
```

### Database Setup

```bash
createdb ai_time_machines
```

### Install Dependencies

Backend:

```bash
cd backend
npm install
```

Frontend:

```bash
cd frontend
npm install
```

Python Service:

```bash
cd python-service
pip install -r requirements.txt
```

### Run the Services

Terminal 1 - Backend:

```bash
cd backend
npm run dev
```

Terminal 2 - Frontend:

```bash
cd frontend
npm start
```

Terminal 3 - Python Service:

```bash
cd python-service
python app.py
```

## Usage

### 1. Create an Account

- Navigate to http://localhost:3000
- Click "Register" and create your account
- Login with your credentials

### 2. Upload Time-Series Data

- Go to "Time Series" section
- Click "+ Upload Time Series"
- Provide a name and description
- Sample data will be generated automatically (or upload your CSV)
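The exact CSV schema the upload expects isn't documented here, but a simple timestamp/value layout (an assumption for illustration) shows the kind of series the platform works with. This stdlib-only sketch generates sample data similar to what the app auto-generates:

```python
import csv
import math
from datetime import datetime, timedelta

def generate_sample_series(path, points=100):
    """Write a synthetic daily time series (trend + weekly seasonality) to CSV.

    The "timestamp"/"value" column names are an assumption; check the app's
    expected upload format before using real data.
    """
    start = datetime(2024, 1, 1)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "value"])
        for i in range(points):
            ts = start + timedelta(days=i)
            # Linear trend plus a weekly seasonal component
            value = 100 + 0.5 * i + 10 * math.sin(2 * math.pi * i / 7)
            writer.writerow([ts.strftime("%Y-%m-%d"), round(value, 2)])

generate_sample_series("sample_series.csv")
```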

### 3. Train a Model

- Navigate to "Models" section
- Click "+ Train New Model"
- Select your time series data
- Choose a model type (LSTM, GRU, ARIMA, Prophet, or Transformer)
- Click "Train Model"

### 4. Generate Predictions

- Go to "Predictions" section
- Click "+ Generate Prediction"
- Select a trained model
- Set prediction horizon (1-100 steps)
- View predicted values and confidence intervals
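The predicted values and confidence intervals come from the trained model. As a rough illustration of the idea (a toy stand-in, not the service's actual LSTM/ARIMA/Prophet algorithms), a naive mean forecast with a normal-approximation interval looks like:

```python
import math

def naive_forecast(history, horizon, z=1.96):
    """Forecast `horizon` steps ahead as the historical mean,
    with a ~95% normal-approximation confidence interval."""
    n = len(history)
    mean = sum(history) / n
    # Sample variance and a symmetric interval around the point forecast
    var = sum((x - mean) ** 2 for x in history) / (n - 1)
    half_width = z * math.sqrt(var)
    return [{"step": i + 1, "value": mean,
             "lower": mean - half_width, "upper": mean + half_width}
            for i in range(horizon)]

preds = naive_forecast([10, 12, 11, 13, 12, 14], horizon=3)
```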

### 5. Manage AI Tools

- Navigate to "AI Tools" section
- Click "+ Add AI Tool" to add a new tool
- Fill in tool details (name, category, type, URLs, etc.)
- Filter tools by category or type
- View and manage your AI tools catalog
## Project Structure

```
AI-Time-Machines/
├── frontend/                # React frontend application
│   ├── src/
│   │   ├── components/      # Reusable UI components
│   │   ├── contexts/        # React contexts (Auth)
│   │   ├── pages/           # Page components
│   │   ├── services/        # API service layer
│   │   └── App.js           # Main application
│   └── Dockerfile
├── backend/                 # Node.js backend API
│   ├── src/
│   │   ├── config/          # Database configuration
│   │   ├── controllers/     # Request handlers
│   │   ├── middleware/      # Auth & validation
│   │   ├── models/          # Sequelize models
│   │   ├── routes/          # API routes
│   │   └── server.js        # Express server
│   └── Dockerfile
├── python-service/          # Python ML service
│   ├── models/              # ML model implementations
│   ├── app.py               # Flask application
│   ├── requirements.txt     # Python dependencies
│   └── Dockerfile
├── docker-compose.yml       # Multi-container orchestration
└── .env.example             # Environment template
```
## API Endpoints

### Authentication

- `POST /api/auth/register` - Register new user
- `POST /api/auth/login` - Login user
- `GET /api/auth/me` - Get current user
- `PUT /api/auth/profile` - Update profile
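Successful logins are issued a JWT. The backend uses a JWT library, but the HS256 mechanics can be sketched with Python's stdlib alone to show what the token contains:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Build an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(),
                   hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    """Recompute the signature and return the payload only if it matches."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret.encode(), f"{header}.{body}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"userId": 42}, "your_secret_key")
```

The payload claim name (`userId`) is illustrative; the real claims depend on the backend's auth controller.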

### Time Series

- `GET /api/timeseries` - List all time series
- `POST /api/timeseries` - Create time series
- `GET /api/timeseries/:id` - Get time series
- `PUT /api/timeseries/:id` - Update time series
- `DELETE /api/timeseries/:id` - Delete time series

### Models

- `GET /api/models` - List all models
- `POST /api/models` - Create and train model
- `GET /api/models/:id` - Get model details
- `DELETE /api/models/:id` - Delete model

### Predictions

- `GET /api/predictions` - List all predictions
- `POST /api/predictions` - Generate prediction
- `GET /api/predictions/:id` - Get prediction
- `DELETE /api/predictions/:id` - Delete prediction

### AI Tools

- `GET /api/aitools` - List all AI tools
- `GET /api/aitools/categories` - Get available categories
- `GET /api/aitools/types` - Get available tool types
- `POST /api/aitools` - Create AI tool
- `GET /api/aitools/:id` - Get AI tool
- `PUT /api/aitools/:id` - Update AI tool
- `DELETE /api/aitools/:id` - Delete AI tool
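Clients call the protected endpoints above with the JWT in an `Authorization: Bearer` header. A stdlib sketch of building such a request (the token value and payload fields are hypothetical; the request is constructed but not sent, so it works without a running backend):

```python
import json
import urllib.request

def build_request(base_url, path, token, payload=None):
    """Construct an authenticated API request.

    Pass the result to urllib.request.urlopen once the backend is up.
    """
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        base_url + path,
        data=data,
        method="POST" if data else "GET",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_request("http://localhost:5000", "/api/aitools", "your.jwt.token",
                    payload={"name": "Example Tool", "category": "nlp"})
```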
## Testing

Backend tests:

```bash
cd backend
npm test
```

Frontend tests:

```bash
cd frontend
npm test
```

Run all tests:

```bash
npm test
```

## Docker Commands

```bash
# Build all images
docker-compose build

# Start services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop services
docker-compose down

# Rebuild and restart
docker-compose up -d --build
```

## Security

- ✅ JWT-based authentication
- ✅ Password hashing with bcrypt
- ✅ Environment variable management
- ✅ CORS configuration
- ✅ Helmet.js security headers
- ✅ Input validation with express-validator
- ✅ SQL injection protection (Sequelize ORM)
- ✅ Regular security scanning (Trivy)
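The backend's bcrypt hashing follows the standard salted, deliberately-slow pattern. A stand-in sketch of the same idea using Python's stdlib scrypt (not the backend's actual code):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Return (salt, digest) using scrypt, a memory-hard KDF.

    bcrypt (used by the backend) follows the same pattern: a random
    per-user salt plus a hash that is expensive to brute-force.
    """
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute with the stored salt; constant-time comparison
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
```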

## CI/CD

The project includes GitHub Actions workflows for:
- Code quality and security scanning
- Dependency review
- Backend and frontend testing
- Docker image builds
- Documentation validation

## Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

## License

This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.

## Troubleshooting

### Database Connection Issues

```bash
# Check if PostgreSQL is running
pg_isready

# Restart PostgreSQL
sudo systemctl restart postgresql
```

### Port Conflicts

If ports 3000, 5000, or 8000 are in use:

```bash
# Find and kill the process on a port
lsof -ti:3000 | xargs kill -9
```

### Docker Issues

```bash
# Clean up Docker resources
docker system prune -a

# Rebuild from scratch
docker-compose down -v
docker-compose up --build
```

## Resources

- React Documentation
- Express.js Documentation
- TensorFlow Documentation
- Sequelize Documentation
- Docker Documentation

## Roadmap

- Advanced visualization with Chart.js/D3.js
- Real-time predictions with WebSockets
- Model comparison and benchmarking
- Export predictions to CSV/Excel
- Multi-variate time series support
- Automated model hyperparameter tuning
- Integration with external data sources
- Mobile application

## Support

For questions and support:
- Open an issue on GitHub
- Check GitHub Discussions

---

Built with ❤️ for AI and Time-Series Enthusiasts