A full-stack project for exploring, modeling, and serving energy consumption predictions using Jupyter notebooks, a Python FastAPI backend, and a Next.js frontend.
## Quick links
- Backend entry: backend/app.py
- Frontend app: frontend/app/page.tsx
- Dataset: data/raw/Energy_consumption.csv
- Notebooks: notebooks/01_eda.ipynb, notebooks/02_Preprocessing.ipynb, notebooks/03_Modeling.ipynb, notebooks/4_Analysis_Results.ipynb
## Repository overview

- `backend/` — FastAPI server and model-loading code (`backend/app.py`).
- `frontend/` — Next.js UI and components.
- `data/raw/` — Raw CSV dataset (`data/raw/Energy_consumption.csv`).
- `notebooks/` — EDA, preprocessing, modeling, and analysis notebooks.
- `models/` — Trained model artifacts (save `model.pkl` here).
- `requirements.txt` — Python dependencies.
## Prerequisites
- Python 3.9+ and a virtual environment manager (venv/virtualenv).
- Node.js 18+ and npm (for frontend).
## Backend — run the API

- Create and activate a Python environment, then install dependencies:

```bash
python3 -m venv myenv
source myenv/bin/activate
pip install -r requirements.txt
```

- Start the API (uses `uvicorn`) — the FastAPI app is in `backend/app.py`:

```bash
uvicorn backend.app:app --reload --host 127.0.0.1 --port 8000
```

- Open the interactive docs (while the server is running) at http://127.0.0.1:8000/docs.
## API endpoints (examples)
The backend exposes the following endpoints (see backend/app.py for implementation details):
- GET / — root welcome message
- GET /health — model health/status
- POST /predict — predict energy consumption; accepts a JSON body with feature values
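The same calls can be made from Python. Below is a minimal client sketch using only the standard library; it is not part of the repo, and the payload fields simply mirror the example request documented in this README:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8000"


def encode_payload(features: dict) -> bytes:
    """Serialize a feature dict to the JSON bytes the API expects."""
    return json.dumps(features).encode("utf-8")


def get_health(base_url: str = BASE_URL) -> dict:
    """GET /health to check whether the model loaded successfully."""
    with urllib.request.urlopen(f"{base_url}/health") as resp:
        return json.loads(resp.read())


def post_predict(features: dict, base_url: str = BASE_URL) -> dict:
    """POST feature values to /predict and return the parsed response."""
    req = urllib.request.Request(
        f"{base_url}/predict",
        data=encode_payload(features),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Requires the API to be running locally (see the uvicorn command above).
    print(get_health())
    print(post_predict({
        "Temperature": 24.5, "Humidity": 45.0, "SquareFootage": 1200.0,
        "Occupancy": 3.0, "RenewableEnergy": 0.2, "HeatIndex": None,
        "HVACUsage": "On", "LightingUsage": "On",
        "DayOfWeek": "Monday", "Holiday": "No",
    }))
```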
### Health check (cURL)

```bash
curl -X GET http://127.0.0.1:8000/health
```

### Predict (cURL)

Example request payload for `POST /predict` — fields match the `InputFeatures` Pydantic model in the API:

```json
{
  "Temperature": 24.5,
  "Humidity": 45.0,
  "SquareFootage": 1200.0,
  "Occupancy": 3.0,
  "RenewableEnergy": 0.2,
  "HeatIndex": null,
  "HVACUsage": "On",
  "LightingUsage": "On",
  "DayOfWeek": "Monday",
  "Holiday": "No"
}
```

Use this cURL command to post a prediction request:

```bash
curl -s -X POST http://127.0.0.1:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"Temperature":24.5,"Humidity":45.0,"SquareFootage":1200.0,"Occupancy":3.0,"RenewableEnergy":0.2,"HeatIndex":null,"HVACUsage":"On","LightingUsage":"On","DayOfWeek":"Monday","Holiday":"No"}'
```

Example successful response:

```json
{ "prediction": [123.45] }
```

## Notes about the model
- The app expects a serialized model at `models/model.pkl`. The application attempts to load the model on startup and exposes model-error details via `/health` if loading fails (see `backend/app.py`).
- If `HeatIndex` is omitted or null, the API computes a simple fallback: `HeatIndex = Temperature + 0.5 * Humidity`.
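The fallback in code form — a sketch that mirrors the formula above, not the actual implementation in `backend/app.py`:

```python
from typing import Optional


def fill_heat_index(temperature: float, humidity: float,
                    heat_index: Optional[float] = None) -> float:
    """Return HeatIndex, applying the documented fallback when it is missing."""
    if heat_index is not None:
        return heat_index
    # Fallback used by the API: HeatIndex = Temperature + 0.5 * Humidity
    return temperature + 0.5 * humidity
```

For the example payload above (`Temperature=24.5`, `Humidity=45.0`, `HeatIndex=null`), the computed fallback is `24.5 + 0.5 * 45.0 = 47.0`.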
## Frontend — run the web UI

```bash
cd frontend
npm install
npm run dev
```

Open http://localhost:3000 to view the UI.
## Makefile & Docker
This repo includes a Makefile and docker-compose.yml to simplify local development and containerized runs.
Makefile (common targets):
- `make up`: Build and start services with Docker Compose (uses `docker-compose.yml`).
- `make build`: Build backend and frontend containers.
- `make backend`: Start only the backend locally (virtualenv).
- `make frontend`: Start only the frontend (npm).
- `make down`: Stop and remove containers.
- `make logs`: Show combined logs.
- `make clean`: Remove built images and temporary artifacts.
Examples:
```bash
# start with Docker Compose
make up

# run backend locally in venv
make backend

# run frontend locally
make frontend

# stop containers
make down
```

Docker usage (docker-compose):

A docker-compose.yml is provided for containerized development. To build and run:

```bash
docker-compose build
docker-compose up -d
docker-compose logs -f
```

The backend exposes port 8000 (FastAPI) and the frontend exposes port 3000 by default — adjust ports in docker-compose.yml as needed.
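For orientation, a compose file wiring those two ports together typically looks like the sketch below. This is a hypothetical layout: the service names and build contexts here are assumptions and may differ from the repo's actual docker-compose.yml.

```yaml
# Hypothetical sketch — check the repo's docker-compose.yml for the real layout.
services:
  backend:
    build: ./backend
    ports:
      - "8000:8000"   # FastAPI
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"   # Next.js
    depends_on:
      - backend
```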
## Data & notebooks

- Reproduce analysis and model training with the notebooks in `notebooks/` (recommended order: EDA → Preprocessing → Modeling → Analysis results).
- Store final trained artifacts in `models/` and ensure `backend/app.py` can load `models/model.pkl`.
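Saving and reloading an artifact at that path can be done with the standard library's pickle, as in the sketch below. The helper names are illustrative, and the modeling notebook may use joblib or another serializer instead:

```python
import pickle
from pathlib import Path


def save_model(model, path: str = "models/model.pkl") -> None:
    """Serialize a fitted model to the path the backend loads from."""
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    with open(path, "wb") as f:
        pickle.dump(model, f)


def load_model(path: str = "models/model.pkl"):
    """Deserialize a previously saved model artifact."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

Note that a pickled model must be loaded with compatible library versions, which is one reason to pin them in `requirements.txt`.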
## Development & improvements

- Add a `LICENSE` and `.gitignore` to exclude `myenv/` and large artifacts.
- Add unit tests for preprocessing and API endpoints.
- Consider versioning models (MLflow or timestamped filenames) and adding CI pipelines.
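As a starting point for the unit-testing bullet above, here is a minimal stdlib `unittest` sketch. The preprocessing helper it tests is a hypothetical stand-in (it reuses the documented HeatIndex fallback), not code taken from the repo:

```python
import unittest


def fill_missing_heat_index(row: dict) -> dict:
    """Hypothetical preprocessing step: apply the API's HeatIndex fallback."""
    out = dict(row)
    if out.get("HeatIndex") is None:
        out["HeatIndex"] = out["Temperature"] + 0.5 * out["Humidity"]
    return out


class TestPreprocessing(unittest.TestCase):
    def test_fallback_applied_when_missing(self):
        row = {"Temperature": 24.5, "Humidity": 45.0, "HeatIndex": None}
        self.assertEqual(fill_missing_heat_index(row)["HeatIndex"], 47.0)

    def test_existing_value_preserved(self):
        row = {"Temperature": 24.5, "Humidity": 45.0, "HeatIndex": 50.0}
        self.assertEqual(fill_missing_heat_index(row)["HeatIndex"], 50.0)


if __name__ == "__main__":
    unittest.main()
```

Run with `python -m unittest` from the repo root; API endpoint tests could follow the same pattern using FastAPI's TestClient.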