51 changes: 51 additions & 0 deletions .github/workflows/docker-build.yml
@@ -0,0 +1,51 @@
name: Docker Build and Test

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build Backend
        run: |
          cd Backend
          docker build -t inpactai-backend:test .

      - name: Build Frontend
        run: |
          cd Frontend
          docker build -t inpactai-frontend:test .

      - name: Start services
        run: |
          docker compose up -d
          sleep 30

      - name: Check backend health
        run: |
          curl -f http://localhost:8000/ || exit 1

      - name: Check frontend health
        run: |
          curl -f http://localhost:5173/ || exit 1
Comment on lines +30 to +41
⚠️ Potential issue | 🟠 Major

Increase startup buffer and improve health check resilience.

A 30-second sleep may be insufficient for slow CI environments, and a single curl attempt without retry logic is fragile. The workflow should wait for service readiness rather than assuming a fixed startup time.

Apply this diff to improve health check resilience:

     - name: Start services
       run: |
         docker compose up -d
-        sleep 30

-    - name: Check backend health
+    - name: Wait for services to be healthy
       run: |
-        curl -f http://localhost:8000/ || exit 1
-
-    - name: Check frontend health
-      run: |
-        curl -f http://localhost:5173/ || exit 1
+        echo "Waiting for backend..."
+        for i in {1..60}; do
+          if curl -sf http://localhost:8000/docs > /dev/null 2>&1; then
+            echo "Backend is healthy"
+            break
+          fi
+          echo "Attempt $i: Backend not ready, waiting..."
+          sleep 2
+        done
+
+        echo "Waiting for frontend..."
+        for i in {1..60}; do
+          if curl -sf http://localhost:5173/ > /dev/null 2>&1; then
+            echo "Frontend is healthy"
+            break
+          fi
+          echo "Attempt $i: Frontend not ready, waiting..."
+          sleep 2
+        done
+
+        curl -f http://localhost:8000/ || exit 1
+        curl -f http://localhost:5173/ || exit 1

Alternatively, leverage docker compose health checks if backend and frontend services define HEALTHCHECK directives:

     - name: Start services
       run: |
         docker compose up -d

+    - name: Wait for service health checks
+      run: |
+        docker compose ps
+        # Wait until all services report running (adjust the count to match the compose file)
+        timeout 120 bash -c 'while [[ $(docker compose ps --services --filter "status=running" | wc -l) -lt 3 ]]; do sleep 2; done' || exit 1

-    - name: Check backend health
+    - name: Verify endpoints
       run: |
         curl -f http://localhost:8000/ || exit 1
+        curl -f http://localhost:5173/ || exit 1
-
-    - name: Check frontend health
-      run: |
-        curl -f http://localhost:5173/ || exit 1
🤖 Prompt for AI Agents
In .github/workflows/docker-build.yml around lines 30 to 41 the workflow uses a
fixed 30s sleep and a single curl call which is fragile in slow CI; replace the
fixed sleep and single-shot curls with a resilient readiness loop or use docker
compose health checks: either (A) wait for containers to become healthy by using
docker compose with healthcheck awareness (e.g. bring services up then poll
docker compose ps or inspect container health until all are "healthy" with a
configurable timeout), or (B) implement retry logic for HTTP checks — repeatedly
curl the backend and frontend with short sleeps and a total timeout (e.g., try N
times with exponential backoff) and fail only after the timeout — ensuring clear
log messages and a sensible overall timeout so the job doesn’t hang
indefinitely.
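
For the exponential-backoff variant the prompt mentions, here is a minimal sketch of a readiness probe in Python (standard library only; it assumes a Python 3 interpreter on the runner, and the URLs, attempt counts, and timeouts are illustrative, not taken from this repository):

# wait_for_services.py: hedged sketch of an exponential-backoff readiness probe
import sys
import time
import urllib.request

def wait_for(url: str, attempts: int = 8, base_delay: float = 1.0) -> bool:
    # Poll the given URL until it answers with HTTP 2xx, doubling the delay each try.
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if 200 <= resp.status < 300:
                    print(f"{url} ready after {attempt} attempt(s)")
                    return True
        except OSError as exc:  # URLError and HTTPError both subclass OSError
            print(f"attempt {attempt}: {url} not ready ({exc}); retrying in {delay:.0f}s")
        time.sleep(delay)
        delay *= 2  # exponential backoff, roughly four minutes total at the defaults
    return False

if __name__ == "__main__":
    ok = all(wait_for(u) for u in ("http://localhost:8000/", "http://localhost:5173/"))
    sys.exit(0 if ok else 1)

A workflow step could then run "python wait_for_services.py" in place of the fixed sleep, keeping a clear overall timeout so the job fails fast instead of hanging.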

      - name: Show logs on failure
        if: failure()
        run: |
          docker compose logs

      - name: Cleanup
        if: always()
        run: |
          docker compose down -v
21 changes: 21 additions & 0 deletions Backend/.dockerignore
@@ -0,0 +1,21 @@
__pycache__
*.pyc
*.pyo
*.pyd
.Python
*.so
.env
.venv
env/
venv/
ENV/
.git
.gitignore
.pytest_cache
.coverage
htmlcov/
dist/
build/
*.egg-info/
.DS_Store
*.log
12 changes: 12 additions & 0 deletions Backend/.env.example
@@ -0,0 +1,12 @@
user=postgres
password=your_postgres_password
host=your_postgres_host
port=5432
dbname=postgres
GROQ_API_KEY=your_groq_api_key
SUPABASE_URL=your_supabase_url
SUPABASE_KEY=your_supabase_key
GEMINI_API_KEY=your_gemini_api_key
YOUTUBE_API_KEY=your_youtube_api_key
REDIS_HOST=redis
REDIS_PORT=6379
Comment on lines +1 to +12
⚠️ Potential issue | 🟠 Major

The env var names are likely mismatched: the template uses lowercase user/password/host/port/dbname where the backend presumably expects uppercase DB_* names, which can break setup scripts and runtime config.
Align this template with the actual env contract used by the backend, and keep keys uppercase.

-user=postgres
-password=your_postgres_password
-host=your_postgres_host
-port=5432
-dbname=postgres
+DB_USER=postgres
+DB_PASSWORD=your_postgres_password
+DB_HOST=your_postgres_host
+DB_PORT=5432
+DB_NAME=postgres
 GROQ_API_KEY=your_groq_api_key
 SUPABASE_URL=your_supabase_url
 SUPABASE_KEY=your_supabase_key
 GEMINI_API_KEY=your_gemini_api_key
 YOUTUBE_API_KEY=your_youtube_api_key
 REDIS_HOST=redis
 REDIS_PORT=6379
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-user=postgres
-password=your_postgres_password
-host=your_postgres_host
-port=5432
-dbname=postgres
+DB_USER=postgres
+DB_PASSWORD=your_postgres_password
+DB_HOST=your_postgres_host
+DB_PORT=5432
+DB_NAME=postgres
 GROQ_API_KEY=your_groq_api_key
 SUPABASE_URL=your_supabase_url
 SUPABASE_KEY=your_supabase_key
 GEMINI_API_KEY=your_gemini_api_key
 YOUTUBE_API_KEY=your_youtube_api_key
 REDIS_HOST=redis
 REDIS_PORT=6379
🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 1-1: [LowercaseKey] The user key should be in uppercase
[warning] 2-2: [LowercaseKey] The password key should be in uppercase
[warning] 2-2: [UnorderedKey] The password key should go before the user key
[warning] 3-3: [LowercaseKey] The host key should be in uppercase
[warning] 3-3: [UnorderedKey] The host key should go before the password key
[warning] 4-4: [LowercaseKey] The port key should be in uppercase
[warning] 4-4: [UnorderedKey] The port key should go before the user key
[warning] 5-5: [LowercaseKey] The dbname key should be in uppercase
[warning] 5-5: [UnorderedKey] The dbname key should go before the host key
[warning] 6-6: [UnorderedKey] The GROQ_API_KEY key should go before the dbname key
[warning] 7-7: [UnorderedKey] The SUPABASE_URL key should go before the dbname key
[warning] 8-8: [UnorderedKey] The SUPABASE_KEY key should go before the SUPABASE_URL key
[warning] 9-9: [UnorderedKey] The GEMINI_API_KEY key should go before the GROQ_API_KEY key
[warning] 10-10: [UnorderedKey] The YOUTUBE_API_KEY key should go before the dbname key
[warning] 11-11: [UnorderedKey] The REDIS_HOST key should go before the SUPABASE_KEY key
[warning] 12-12: [UnorderedKey] The REDIS_PORT key should go before the SUPABASE_KEY key
🤖 Prompt for AI Agents
In Backend/.env.example lines 1-12, the env var names are lowercase and likely
don’t match the backend’s expected uppercase DB_* contract; update the file to
use the canonical, uppercase environment variable names the code expects (e.g.,
DB_USER, DB_PASSWORD, DB_HOST, DB_PORT, DB_NAME) and keep other keys uppercase
and consistent (GROQ_API_KEY, SUPABASE_URL, SUPABASE_KEY, GEMINI_API_KEY,
YOUTUBE_API_KEY, REDIS_HOST, REDIS_PORT), ensuring names exactly match the
runtime/config schema used by the backend.
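
To make that contract concrete, here is a minimal sketch of how the backend side might consume these keys (hedged: the DB_* names and the get_database_url helper are illustrative assumptions, not the repository's actual config module):

# config.py: hypothetical sketch assuming the DB_* contract suggested above
import os

from dotenv import load_dotenv  # python-dotenv, which the backend already uses

load_dotenv()

DB_USER = os.getenv("DB_USER", "postgres")
DB_PASSWORD = os.getenv("DB_PASSWORD", "")
DB_HOST = os.getenv("DB_HOST", "localhost")
DB_PORT = int(os.getenv("DB_PORT", "5432"))
DB_NAME = os.getenv("DB_NAME", "postgres")

def get_database_url() -> str:
    # Assemble a SQLAlchemy-style URL from the individual pieces in .env
    return f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"

Whatever module actually reads these variables is authoritative; the template and the code just have to agree on the exact names.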

18 changes: 18 additions & 0 deletions Backend/Dockerfile
@@ -0,0 +1,18 @@
FROM python:3.10-slim

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libpq-dev \
    curl \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
33 changes: 33 additions & 0 deletions Backend/Dockerfile.prod
@@ -0,0 +1,33 @@
FROM python:3.10-slim AS builder

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt

FROM python:3.10-slim

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 \
    && rm -rf /var/lib/apt/lists/* \
    && groupadd -r appuser && useradd -r -g appuser appuser

COPY --from=builder /root/.local /root/.local
COPY . .

RUN chown -R appuser:appuser /app

USER appuser

ENV PATH=/root/.local/bin:$PATH
Comment on lines +22 to +29
⚠️ Potential issue | 🔴 Critical

Fix PATH accessibility for non-root user.

The appuser cannot access files at /root/.local/bin since ownership remains with root. The chown on line 25 only affects /app, not the Python packages installed in the builder stage.

Apply this diff to fix the issue:

 COPY --from=builder /root/.local /root/.local
 COPY . .

-RUN chown -R appuser:appuser /app
+# Make the pip-installed packages readable by the runtime user as well
+RUN chown -R appuser:appuser /app /root/.local

 USER appuser

 ENV PATH=/root/.local/bin:$PATH

Alternatively, copy the builder artifacts into a directory owned by appuser:

-COPY --from=builder /root/.local /root/.local
+COPY --from=builder /root/.local /home/appuser/.local
 COPY . .

-RUN chown -R appuser:appuser /app
+RUN chown -R appuser:appuser /app /home/appuser/.local

 USER appuser

-ENV PATH=/root/.local/bin:$PATH
+ENV PATH=/home/appuser/.local/bin:$PATH

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In Backend/Dockerfile.prod around lines 22–29 the added PATH points to
/root/.local/bin which remains owned by root (chown only covered /app); fix by
making the installed packages accessible to appuser: either run chown -R
appuser:appuser /root/.local after copying from the builder so /root/.local/bin
is owned by appuser, or instead copy the builder artifacts into a directory
owned by appuser (e.g., /home/appuser/.local) and update ENV PATH accordingly to
that user-writable location.


EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Comment on lines +1 to +33
⚠️ Potential issue | 🟠 Major

Add health check to enable service readiness detection.

No health check is defined, making it difficult for orchestration systems to detect when the backend is ready to serve traffic. This increases startup flakiness in production.

Add a health check after the EXPOSE directive:

  EXPOSE 8000
  
+ HEALTHCHECK --interval=10s --timeout=5s --start-period=10s --retries=3 \
+   CMD python -c "import requests; requests.get('http://localhost:8000/').raise_for_status()" || exit 1
  
  CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Alternatively, if the requests library isn't available, use a simple TCP check:

  EXPOSE 8000
  
+ HEALTHCHECK --interval=10s --timeout=5s --start-period=10s --retries=3 \
+   CMD python -c "import socket; socket.create_connection(('localhost', 8000), timeout=1)" || exit 1
  
  CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 FROM python:3.10-slim AS builder

 WORKDIR /app

 RUN apt-get update && apt-get install -y --no-install-recommends \
     gcc \
     libpq-dev \
     && rm -rf /var/lib/apt/lists/*

 COPY requirements.txt .
 RUN pip install --no-cache-dir --user -r requirements.txt

 FROM python:3.10-slim

 WORKDIR /app

 RUN apt-get update && apt-get install -y --no-install-recommends \
     libpq5 \
     && rm -rf /var/lib/apt/lists/* \
     && groupadd -r appuser && useradd -r -g appuser appuser

 COPY --from=builder /root/.local /root/.local
 COPY . .

 RUN chown -R appuser:appuser /app

 USER appuser

 ENV PATH=/root/.local/bin:$PATH

 EXPOSE 8000

+HEALTHCHECK --interval=10s --timeout=5s --start-period=10s --retries=3 \
+  CMD python -c "import socket; socket.create_connection(('localhost', 8000), timeout=1)" || exit 1
+
 CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
🤖 Prompt for AI Agents
In Backend/Dockerfile.prod around lines 1 to 33, add a Docker HEALTHCHECK
immediately after the EXPOSE 8000 line so orchestration can detect readiness;
implement an HTTP check that polls /health (use curl -f against
http://localhost:8000/health) with sensible options for interval, timeout,
start-period and retries, and fall back to a simple TCP check (using a CMD-SHELL
that attempts a connection to localhost:8000 via /dev/tcp or similar) if curl is
not available; ensure the healthcheck returns non-zero on failure so
Docker/Kubernetes treat the container as unhealthy.

42 changes: 40 additions & 2 deletions Backend/app/main.py
@@ -1,5 +1,6 @@
from fastapi import FastAPI
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from starlette.middleware.base import BaseHTTPMiddleware
from .db.db import engine
from .db.seed import seed_db
from .models import models, chat
@@ -9,13 +10,21 @@
from sqlalchemy.exc import SQLAlchemyError
import logging
import os
import time
from dotenv import load_dotenv
from contextlib import asynccontextmanager
from app.routes import ai

# Load environment variables
load_dotenv()

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


# Async function to create database tables with exception handling
async def create_tables():
@@ -38,13 +47,42 @@ async def lifespan(app: FastAPI):
    print("App is shutting down...")


# Custom middleware for logging and timing
class RequestMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        start_time = time.time()

        logger.info(f"Incoming: {request.method} {request.url.path}")

        response = await call_next(request)

        process_time = time.time() - start_time
        response.headers["X-Process-Time"] = str(process_time)
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["X-Frame-Options"] = "DENY"
        response.headers["X-XSS-Protection"] = "1; mode=block"

        logger.info(f"Completed: {request.method} {request.url.path} - {response.status_code} ({process_time:.3f}s)")

        return response
Comment on lines +50 to +67
⚠️ Potential issue | 🟡 Minor

Remove deprecated X-XSS-Protection header.

The X-XSS-Protection header (line 63) has been deprecated since 2020 and is ignored by modern browsers (Chrome, Firefox, Safari, and Edge). Modern browsers have built-in XSS protections via their Content Security Policy implementations.

Apply this diff:

         response.headers["X-Process-Time"] = str(process_time)
         response.headers["X-Content-Type-Options"] = "nosniff"
         response.headers["X-Frame-Options"] = "DENY"
-        response.headers["X-XSS-Protection"] = "1; mode=block"
         
         logger.info(f"Completed: {request.method} {request.url.path} - {response.status_code} ({process_time:.3f}s)")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 # Custom middleware for logging and timing
 class RequestMiddleware(BaseHTTPMiddleware):
     async def dispatch(self, request: Request, call_next):
         start_time = time.time()

         logger.info(f"Incoming: {request.method} {request.url.path}")

         response = await call_next(request)

         process_time = time.time() - start_time
         response.headers["X-Process-Time"] = str(process_time)
         response.headers["X-Content-Type-Options"] = "nosniff"
         response.headers["X-Frame-Options"] = "DENY"
-        response.headers["X-XSS-Protection"] = "1; mode=block"

         logger.info(f"Completed: {request.method} {request.url.path} - {response.status_code} ({process_time:.3f}s)")

         return response
🤖 Prompt for AI Agents
In Backend/app/main.py around lines 50 to 67, the middleware sets the deprecated
"X-XSS-Protection" response header; remove the line that sets
response.headers["X-XSS-Protection"] = "1; mode=block" so the header is no
longer emitted, leaving the other security headers intact
(X-Content-Type-Options and X-Frame-Options) and keep logging and timing
behavior unchanged.


# Initialize FastAPI
app = FastAPI(lifespan=lifespan)

# Add custom middleware
app.add_middleware(RequestMiddleware)

# Add CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173"],
    allow_origins=[
        "http://localhost:5173",
        "http://localhost:5174",
        "http://localhost:5175",
        "http://localhost:5176",
        "http://frontend:5173",
        "http://127.0.0.1:5173"
    ],
Comment on lines +78 to +85
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for references to the various ports in config and docs

echo "=== Checking for port references in config files ==="
rg -n "517[3-6]" --type-add 'config:*.{json,yml,yaml,env,conf}' --type=config

echo -e "\n=== Checking for port references in docker files ==="
find . -type f \( -name "docker-compose*.yml" -o -name "docker-compose*.yaml" -o -name "Dockerfile*" -o -name ".env*" \) -exec rg -l "517[3-6]" {} \;

echo -e "\n=== Checking for frontend:5173 references ==="
rg -n "frontend:5173" --max-count=10

echo -e "\n=== Checking for CORS configuration in main.py and nearby ==="
rg -n "allow_origins" -A 10 -B 2

echo -e "\n=== Checking for comments explaining the ports ==="
rg -n "5173|5174|5175|5176" --type=python -B 3 -A 1 | head -50

echo -e "\n=== Check for documentation files ==="
find . -type f \( -name "README*" -o -name "SETUP*" -o -name "DEPLOY*" -o -name "*.md" \) | head -20

Repository: AOSSIE-Org/InPactAI

Length of output: 1755


🏁 Script executed:

#!/bin/bash
# Check documentation for any mention of the multi-port setup

echo "=== Checking DOCKER-ARCHITECTURE.md for port details ==="
grep -n "5173\|5174\|5175\|5176\|port" DOCKER-ARCHITECTURE.md | head -20

echo -e "\n=== Checking GETTING-STARTED.md for port details ==="
grep -n "5173\|5174\|5175\|5176\|port" GETTING-STARTED.md | head -20

echo -e "\n=== Checking Frontend README for port details ==="
grep -n "5173\|5174\|5175\|5176\|port" Frontend/README.md | head -20

echo -e "\n=== Checking for vite config in Frontend ==="
find Frontend -name "vite.config*" -o -name "package.json" | head -5

echo -e "\n=== Check vite config for port configuration ==="
find Frontend -name "vite.config*" -exec cat {} \;

echo -e "\n=== Check if any test/dev setup uses multiple ports ==="
rg -n "517[3-6]|multiple.*port|parallel.*port" --type=md

Repository: AOSSIE-Org/InPactAI

Length of output: 3132


Remove unused CORS origins to reduce attack surface.

Your configuration includes four consecutive localhost ports (5173-5176) and a redundant 127.0.0.1 origin, but only port 5173 is referenced in documentation, Docker configuration, and the Vite server config. Remove ports 5174, 5175, and 5176, and consolidate localhost:5173 with 127.0.0.1:5173 into a single origin. Only keep:

  • http://localhost:5173
  • http://frontend:5173
🤖 Prompt for AI Agents
In Backend/app/main.py around lines 78 to 85, the CORS allow_origins list
exposes unused localhost ports and a redundant 127.0.0.1 entry; remove the
entries for ports 5174, 5175, 5176 and the 127.0.0.1:5173 origin so the list
contains only "http://localhost:5173" and "http://frontend:5173"; update the
allow_origins array accordingly to minimize the attack surface.
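
A minimal sketch of the trimmed configuration (illustrative only; whether http://frontend:5173 is genuinely needed depends on a browser ever resolving that hostname, for example inside a containerized E2E runner, which is an assumption worth verifying):

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        "http://localhost:5173",  # local Vite dev server
        "http://frontend:5173",   # compose service name, if a browser inside the network uses it
    ],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)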

    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
24 changes: 18 additions & 6 deletions Backend/app/routes/post.py
@@ -18,25 +18,37 @@
import uuid
from datetime import datetime, timezone

# Load environment variables
load_dotenv()
url: str = os.getenv("SUPABASE_URL")
key: str = os.getenv("SUPABASE_KEY")
supabase: Client = create_client(url, key)

url: str = os.getenv("SUPABASE_URL", "")
key: str = os.getenv("SUPABASE_KEY", "")

if not url or not key or "your-" in url:
print("⚠️ Supabase credentials not configured. Some features will be limited.")
Copilot AI Dec 13, 2025

Print statement may execute during import.

    supabase = None
else:
    try:
        supabase: Client = create_client(url, key)
    except Exception as e:
        print(f"❌ Supabase connection failed: {e}")
Copilot AI Dec 13, 2025

Print statement may execute during import.

        supabase = None

# Define Router
router = APIRouter()

# Helper Functions
def generate_uuid():
    return str(uuid.uuid4())

def current_timestamp():
    return datetime.now(timezone.utc).isoformat()

# ========== USER ROUTES ==========
def check_supabase():
    if not supabase:
        raise HTTPException(status_code=503, detail="Database service unavailable. Please configure Supabase credentials.")

@router.post("/users/")
async def create_user(user: UserCreate):
    check_supabase()
    user_id = generate_uuid()
    t = current_timestamp()

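On the two Copilot notes above: both print calls execute at import time, before the logging configuration in app/main.py applies. A minimal sketch of the same guard routed through a module logger instead (illustrative only; names mirror the diff):

import logging
import os

from dotenv import load_dotenv
from supabase import create_client, Client

logger = logging.getLogger(__name__)

load_dotenv()
url: str = os.getenv("SUPABASE_URL", "")
key: str = os.getenv("SUPABASE_KEY", "")

# None until configured; the "Client | None" annotation needs Python 3.10+, matching the python:3.10-slim base image
supabase: Client | None = None
if not url or not key or "your-" in url:
    logger.warning("Supabase credentials not configured. Some features will be limited.")
else:
    try:
        supabase = create_client(url, key)
    except Exception as exc:
        logger.error("Supabase connection failed: %s", exc)

This keeps import-time output on the configured logging pipeline rather than raw stdout, and the check_supabase() guard in the routes works unchanged.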