51 changes: 51 additions & 0 deletions .github/workflows/docker-build.yml
@@ -0,0 +1,51 @@
name: Docker Build and Test

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout code
      uses: actions/checkout@v4

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3

    - name: Build Backend
      run: |
        cd Backend
        docker build -t inpactai-backend:test .
    - name: Build Frontend
      run: |
        cd Frontend
        docker build -t inpactai-frontend:test .
Comment on lines +20 to +28

🛠️ Refactor suggestion | 🟠 Major

Remove redundant image builds.

These build steps create images tagged :test, but the subsequent docker compose up command (line 32) builds images according to the docker-compose.yml configuration and does not use the :test-tagged images. The result is that every image is built twice.

Remove these redundant build steps:

-    - name: Build Backend
-      run: |
-        cd Backend
-        docker build -t inpactai-backend:test .
-        
-    - name: Build Frontend
-      run: |
-        cd Frontend
-        docker build -t inpactai-frontend:test .
-        
     - name: Start services
       run: |
-        docker compose up -d
+        docker compose up -d --build
         sleep 30

The --build flag ensures images are built by docker compose if needed.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In .github/workflows/docker-build.yml around lines 20 to 28, the workflow
unnecessarily builds Backend and Frontend images with a :test tag before running
docker compose up, causing duplicate builds because docker compose --build will
rebuild images from docker-compose.yml; remove the two separate "Build Backend"
and "Build Frontend" steps so the compose step is the sole build path, relying
on the --build flag to produce images as defined in docker-compose.yml.

    - name: Start services
      run: |
        docker compose up -d
        sleep 30
    - name: Check backend health
      run: |
        curl -f http://localhost:8000/ || exit 1
    - name: Check frontend health
      run: |
        curl -f http://localhost:5173/ || exit 1
Comment on lines +30 to +41

🛠️ Refactor suggestion | 🟠 Major

Implement proper health check retry logic.

The fixed 30-second sleep (line 33) and single-attempt health checks (lines 37, 41) are fragile. Services may take longer to start, or the checks might run before services are ready.

Replace with retry logic:

     - name: Start services
       run: |
-        docker compose up -d
-        sleep 30
+        docker compose up -d --build
         
     - name: Check backend health
       run: |
-        curl -f http://localhost:8000/ || exit 1
+        for i in {1..30}; do
+          if curl -f http://localhost:8000/; then
+            echo "Backend is healthy"
+            exit 0
+          fi
+          echo "Waiting for backend... ($i/30)"
+          sleep 2
+        done
+        echo "Backend health check failed"
+        exit 1
         
     - name: Check frontend health
       run: |
-        curl -f http://localhost:5173/ || exit 1
+        for i in {1..30}; do
+          if curl -f http://localhost:5173/; then
+            echo "Frontend is healthy"
+            exit 0
+          fi
+          echo "Waiting for frontend... ($i/30)"
+          sleep 2
+        done
+        echo "Frontend health check failed"
+        exit 1

This provides up to 60 seconds for each service with better feedback.

🤖 Prompt for AI Agents
In .github/workflows/docker-build.yml around lines 30 to 41, the workflow uses a
fixed sleep and a single curl attempt which is fragile; replace the static sleep
and single-attempt checks with retry loops that poll each service for up to ~60
seconds (e.g., loop with max attempts or timeout, sleep between attempts), print
progress/attempt count, and exit non-zero if the service never becomes healthy;
apply this for both backend (http://localhost:8000/) and frontend
(http://localhost:5173/) and ensure docker compose up runs before beginning the
retries.
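
A more compact variant of the same idea leans on curl's built-in retry flags instead of a shell loop (a sketch, assuming curl ≥ 7.71 on the runner, which ubuntu-latest provides; --retry-all-errors also retries connection-refused while the service boots):

    - name: Check backend health
      run: |
        # Poll up to 30 times, 2 seconds apart, failing the step if the service never answers
        curl --fail --retry 30 --retry-delay 2 --retry-all-errors http://localhost:8000/

The frontend check is identical with port 5173.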

    - name: Show logs on failure
      if: failure()
      run: |
        docker compose logs
    - name: Cleanup
      if: always()
      run: |
        docker compose down -v
21 changes: 21 additions & 0 deletions Backend/.dockerignore
@@ -0,0 +1,21 @@
__pycache__
*.pyc
*.pyo
*.pyd
.Python
*.so
.env
.venv
env/
venv/
ENV/
.git
.gitignore
.pytest_cache
.coverage
htmlcov/
dist/
build/
*.egg-info/
.DS_Store
*.log
12 changes: 12 additions & 0 deletions Backend/.env.example
@@ -0,0 +1,12 @@
user=postgres
password=your_postgres_password
host=your_postgres_host
port=5432
dbname=postgres
GROQ_API_KEY=your_groq_api_key
SUPABASE_URL=your_supabase_url
SUPABASE_KEY=your_supabase_key
GEMINI_API_KEY=your_gemini_api_key
YOUTUBE_API_KEY=your_youtube_api_key
REDIS_HOST=redis
REDIS_PORT=6379
Comment on lines +1 to +12

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Look for common env access patterns in Backend
echo "=== Searching for env var access patterns ==="
rg -n --hidden -S "os\.environ\.get\(|os\.getenv\(|process\.env\.|dotenv|environ\[" Backend -t py -t js -t json 2>/dev/null || echo "No matches found with combined pattern"

echo ""
echo "=== Checking for specific example keys being read ==="
rg -n --hidden -S '\b(user|password|host|port|dbname)\b' Backend -t py -t js 2>/dev/null || echo "No matches found for example keys"



Env var names mix lowercase and uppercase conventions; align to standard naming and remove unused variables.

The backend reads user, password, host, port, dbname as lowercase (Backend/app/db/db.py), but this conflicts with standard practice. API keys use uppercase (GROQ_API_KEY, SUPABASE_URL, etc.), creating inconsistency. Standardize to uppercase throughout (e.g., DB_USER, DB_PASSWORD, DB_HOST, DB_PORT, DB_NAME).
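
A sketch of how Backend/app/db/db.py could read the renamed variables (the asyncpg URL format is an assumption; match it to whatever driver db.py actually uses):

import os

from dotenv import load_dotenv

load_dotenv()

# Hypothetical names matching the proposed .env.example convention
DB_USER = os.getenv("DB_USER", "postgres")
DB_PASSWORD = os.getenv("DB_PASSWORD", "")
DB_HOST = os.getenv("DB_HOST", "localhost")
DB_PORT = os.getenv("DB_PORT", "5432")
DB_NAME = os.getenv("DB_NAME", "postgres")

DATABASE_URL = f"postgresql+asyncpg://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"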

Additionally, REDIS_HOST and REDIS_PORT in .env.example are not read by the backend—Backend/app/services/redis_client.py hardcodes localhost and 6379. Remove these unused variables or update the backend to read from environment.
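
A minimal sketch of the second option for Backend/app/services/redis_client.py, assuming it constructs a plain redis-py client (the decode_responses flag is illustrative):

import os

import redis

# Fall back to the previous hardcoded values when the variables are unset
REDIS_HOST = os.getenv("REDIS_HOST", "localhost")
REDIS_PORT = int(os.getenv("REDIS_PORT", "6379"))

redis_client = redis.Redis(host=REDIS_HOST, port=REDIS_PORT, decode_responses=True)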

🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 1-1: [LowercaseKey] The user key should be in uppercase
[warning] 2-2: [LowercaseKey] The password key should be in uppercase
[warning] 2-2: [UnorderedKey] The password key should go before the user key
[warning] 3-3: [LowercaseKey] The host key should be in uppercase
[warning] 3-3: [UnorderedKey] The host key should go before the password key
[warning] 4-4: [LowercaseKey] The port key should be in uppercase
[warning] 4-4: [UnorderedKey] The port key should go before the user key
[warning] 5-5: [LowercaseKey] The dbname key should be in uppercase
[warning] 5-5: [UnorderedKey] The dbname key should go before the host key
[warning] 6-6: [UnorderedKey] The GROQ_API_KEY key should go before the dbname key
[warning] 7-7: [UnorderedKey] The SUPABASE_URL key should go before the dbname key
[warning] 8-8: [UnorderedKey] The SUPABASE_KEY key should go before the SUPABASE_URL key
[warning] 9-9: [UnorderedKey] The GEMINI_API_KEY key should go before the GROQ_API_KEY key
[warning] 10-10: [UnorderedKey] The YOUTUBE_API_KEY key should go before the dbname key
[warning] 11-11: [UnorderedKey] The REDIS_HOST key should go before the SUPABASE_KEY key
[warning] 12-12: [UnorderedKey] The REDIS_PORT key should go before the SUPABASE_KEY key

🤖 Prompt for AI Agents
In Backend/.env.example around lines 1 to 12, the environment variable names mix
lowercase DB keys with uppercase API keys and include REDIS_* variables that the
code doesn't read; rename the DB vars to uppercase DB_USER, DB_PASSWORD,
DB_HOST, DB_PORT, DB_NAME to match standard convention and update
Backend/app/db/db.py to read these new names from the environment, and either
remove REDIS_HOST/REDIS_PORT from .env.example or (preferable) update
Backend/app/services/redis_client.py to read REDIS_HOST and REDIS_PORT from the
environment (with sensible defaults) so the example and code are consistent.

18 changes: 18 additions & 0 deletions Backend/Dockerfile
@@ -0,0 +1,18 @@
FROM python:3.10-slim

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libpq-dev \
    curl \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
33 changes: 33 additions & 0 deletions Backend/Dockerfile.prod
@@ -0,0 +1,33 @@
FROM python:3.10-slim AS builder

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt

FROM python:3.10-slim

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 \
    && rm -rf /var/lib/apt/lists/* \
    && groupadd -r appuser && useradd -r -g appuser appuser

COPY --from=builder /root/.local /root/.local
COPY . .

RUN chown -R appuser:appuser /app

USER appuser

ENV PATH=/root/.local/bin:$PATH
Comment on lines +22 to +29

⚠️ Potential issue | 🔴 Critical

Critical: Non-root user cannot access /root directory.

Line 22 copies Python packages to /root/.local, and line 29 adds /root/.local/bin to PATH. However, line 27 switches to the non-root appuser, which does not have permission to access the /root directory. This will cause the application to fail to start because it cannot find the installed Python packages.

Copy the dependencies to a location accessible by the non-root user:

-COPY --from=builder /root/.local /root/.local
+COPY --from=builder /root/.local /home/appuser/.local
 COPY . .
 
 RUN chown -R appuser:appuser /app
 
 USER appuser
 
-ENV PATH=/root/.local/bin:$PATH
+ENV PATH=/home/appuser/.local/bin:$PATH
 
 EXPOSE 8000
 
 CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Alternatively, install to a system-wide location in the builder and copy to the runtime stage.
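
A sketch of that alternative, assuming pip's --prefix install and the default site-packages layout of python:3.10-slim (paths are illustrative, not taken from this PR):

# Builder stage: install into an isolated prefix instead of /root/.local
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: copy the prefix into the world-readable system location
COPY --from=builder /install /usr/local

No chown or PATH changes are needed with this layout, since /usr/local/bin and /usr/local/lib/python3.10/site-packages are readable by any user.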

🤖 Prompt for AI Agents
In Backend/Dockerfile.prod around lines 22 to 29, the Dockerfile copies Python
packages into /root/.local and then switches to non-root appuser so the user
cannot access those files; replace that flow by copying the builder's .local
into a location the appuser can access (for example /home/appuser/.local),
ensure ownership is set to appuser (chown -R appuser:appuser
/home/appuser/.local), and update ENV PATH to include /home/appuser/.local/bin;
alternatively, install into a system-wide location in the builder stage and copy
that system path into the runtime image so the non-root user can access the
packages.


EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
42 changes: 40 additions & 2 deletions Backend/app/main.py
@@ -1,5 +1,6 @@
from fastapi import FastAPI
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from starlette.middleware.base import BaseHTTPMiddleware
from .db.db import engine
from .db.seed import seed_db
from .models import models, chat
@@ -9,13 +10,21 @@
from sqlalchemy.exc import SQLAlchemyError
import logging
import os
import time
from dotenv import load_dotenv
from contextlib import asynccontextmanager
from app.routes import ai

# Load environment variables
load_dotenv()

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

Comment on lines +21 to +27

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

uvicorn logging configuration conflicts basicConfig Python

💡 Result:

Short answer
Uvicorn configures logging itself (uses dictConfig and often sets disable_existing_loggers=True), so calling logging.basicConfig() in your app can either be ignored or cause duplicate/unexpected handlers and levels. To avoid conflicts, either let Uvicorn manage logging (use its log_config) or disable/override Uvicorn’s logging before it runs and then initialize your own logging.

Why it happens (one line)

  • Uvicorn applies a dictConfig that controls handlers/propagation; basicConfig operates on the root logger and can add handlers or rely on propagation that Uvicorn may have disabled, producing missing or duplicated log lines. [1][2]

Solutions (concise)

  • Use Uvicorn’s logging config: provide --log-config or uvicorn.run(..., log_config=your_dict) and include your handlers/loggers in that dict. This is the recommended approach. [1][6]
  • Turn off Uvicorn’s logging and use your own: uvicorn.run(app, log_config=None) (or pass an empty config file via --log-config) and then call logging.basicConfig() or logging.config.dictConfig(...) in your app. [3][1]
  • If you must merge: create a dictConfig that sets "disable_existing_loggers": False so uvicorn won’t disable other loggers, and include your handlers/loggers there. [2][1]
  • For access logs only: use --no-access-log / access_log=False to stop access log duplication while leaving other uvicorn logging intact. [1]

Minimal examples

  • Let uvicorn use your dict config:
    • uvicorn.run(app, log_config=my_logging_dict)
  • Disable uvicorn config and use basicConfig:
    • uvicorn.run(app, log_config=None)
    • then in your app: logging.basicConfig(level=logging.INFO)

References

  • Uvicorn settings / logging docs. [1]
  • GitHub issue discussing duplicate logs when using logging.basicConfig. [2]
  • Stack Overflow answers showing uvicorn.run(log_config=None) and --log-config patterns. [3]

If you want, tell me how you run Uvicorn (CLI, gunicorn+uvicorn workers, or uvicorn.run in code) and I’ll give the exact logging dict or small example that fits your setup.
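
A minimal runnable sketch of the "disable Uvicorn's config, use your own" option described above (module name, port, and level are illustrative):

import logging

import uvicorn
from fastapi import FastAPI

app = FastAPI()

if __name__ == "__main__":
    # Configure logging ourselves, then tell Uvicorn not to apply its own dictConfig
    logging.basicConfig(level=logging.INFO)
    uvicorn.run(app, host="0.0.0.0", port=8000, log_config=None)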


🌐 Web query:

FastAPI logging best practices basicConfig uvicorn

💡 Result:

Short best-practices for FastAPI + Uvicorn logging (concise):

  1. Prefer a single, explicit logging config (dictConfig / file) instead of ad-hoc basicConfig.

    • Uvicorn supports --log-config / uvicorn.run(..., log_config=...) (JSON/YAML/dict). Use this to control uvicorn.error / uvicorn.access and avoid conflicts. [1][6]
  2. If you must use logging.basicConfig, call it very early (before Uvicorn starts) so the root logger and console handler exist — but note basicConfig is a one‑time, simple setup and won’t suit multi-handler or per-module rules. For more control use logging.config.dictConfig(). [1][2]

  3. Ensure uvicorn doesn’t disable other loggers you rely on. If using a custom config, set disable_existing_loggers=False (or reattach handlers to uvicorn.* and gunicorn.* loggers) so your app loggers aren’t dropped. [3][5]

  4. Target the right loggers when changing behavior:

    • uvicorn.error for server errors, uvicorn.access for HTTP access logs, and your module loggers (e.g., myapp or fastapi.logger). You can adjust levels/handlers on those explicitly. [3]
  5. Docker / production notes:

    • Stream handlers must write to stdout/stderr (Docker reads those). Use PYTHONUNBUFFERED=1 or run python -u to avoid buffering. Avoid file-only handlers unless you also ship files to log collectors. [2]
  6. Production recommendations:

    • Use structured logging (JSON) or a logging library (structlog/loguru) plugged into dictConfig for easier ingestion.
    • Keep DEBUG local; use INFO/WARNING in prod and rotate/ship logs to a collector. [1][4]

Minimal example (startup/main.py) showing safe, simple approach:

  • configure dictConfig or call basicConfig(level=...) before uvicorn.run() so root logger and handlers are present; ensure handlers stream to stdout; set uvicorn log level or pass --log-config to uvicorn. [1][2][6]

References:
[1] Uvicorn settings / logging docs.
[2] Article explaining using logging.basicConfig to surface app logs with Uvicorn.
[3] StackOverflow discussion on uvicorn disabling existing loggers and using uvicorn.error.
[4] Packages / patterns for unified FastAPI/Uvicorn/Gunicorn logging.
[5] Guidance for FastAPI + Gunicorn/Uvicorn logging in containers.
[6] FastAPI debugging / uvicorn.run notes.

(If you want, I can give a copy-paste dictConfig example tuned for JSON output + Docker.)


🌐 Web query:

ASGI application logging configuration root handlers

💡 Result:

Short answer

  • ASGI servers (e.g. Uvicorn/Gunicorn) configure Python logging for you by default, so if you want your app logs to go to your own handlers you must explicitly configure the root logger (the "root" key in a logging dictConfig) or override the server loggers (uvicorn/uvicorn.access/uvicorn.error). Use Uvicorn's --log-config to load a dictConfig file or call logging.config.dictConfig() in your app startup. [1][2]

Key points / common pitfalls

  • The root logger’s "handlers" list must reference handler names defined in the same config; mismatched names cause invalid configs. [4]
  • Set disable_existing_loggers: False when using dictConfig if you don’t want to silently drop other configured loggers. [3]
  • Uvicorn adds its own uvicorn.* loggers (and access logger). If you want only your handlers, either:
    • configure root and make uvicorn loggers propagate to root (propagate: True), or
    • explicitly configure uvicorn, uvicorn.error and uvicorn.access to use your handlers (or set propagate: False with handlers you define). [1][3][8]
  • To avoid duplicate/undesired access logs, run Uvicorn with --no-access-log or remove/replace uvicorn.access handlers in code. [1][6]

Minimal dictConfig example (works for Uvicorn/FastAPI)

  • YAML or dict for logging.config.dictConfig / --log-config:

{
  "version": 1,
  "disable_existing_loggers": false,
  "formatters": {
    "default": {
      "()": "uvicorn.logging.DefaultFormatter",
      "fmt": "%(asctime)s %(levelprefix)s %(name)s %(message)s"
    },
    "access": {
      "()": "uvicorn.logging.AccessFormatter",
      "fmt": "%(asctime)s %(levelprefix)s %(name)s %(client_addr)s - \"%(request_line)s\" %(status_code)s"
    }
  },
  "handlers": {
    "console": {
      "class": "logging.StreamHandler",
      "formatter": "default",
      "stream": "ext://sys.stderr"
    },
    "access_console": {
      "class": "logging.StreamHandler",
      "formatter": "access",
      "stream": "ext://sys.stdout"
    }
  },
  "loggers": {
    "uvicorn": { "handlers": ["console"], "level": "INFO", "propagate": false },
    "uvicorn.error": { "handlers": ["console"], "level": "INFO", "propagate": false },
    "uvicorn.access": { "handlers": ["access_console"], "level": "INFO", "propagate": false }
  },
  "root": { "level": "INFO", "handlers": ["console"], "propagate": false }
}

(Adapt handler names/formatters to your needs. Example adapted from common Uvicorn config snippets.) [3][8]

If you need more specific help (FastAPI, Gunicorn+Uvicorn workers, or a concrete dict/YAML tailored to your app), tell me:

  • which ASGI server/version you run, and
  • whether you start with --log-config or configure logging programmatically — and I’ll produce a tested config. [1][2][4]

🏁 Script executed:

find . -type f -name "main.py" | head -20



🏁 Script executed:

cd Backend && find . -type f -name "main.py" -o -name "*.py" | grep -E "(main|server|app)" | head -20



🏁 Script executed:

cat -n Backend/app/main.py



🏁 Script executed:

find Backend -type f \( -name "*.sh" -o -name "Dockerfile" -o -name "docker-compose.yml" -o -name "*.toml" -o -name "*.cfg" \) | head -20



🏁 Script executed:

cat Backend/requirements.txt 2>/dev/null || cat Backend/app/requirements.txt 2>/dev/null || find Backend -name "requirements*.txt" -exec cat {} \;



🏁 Script executed:

cat -n Backend/Dockerfile



🏁 Script executed:

cat -n Backend/docker-compose.yml



Avoid logging.basicConfig(...) in ASGI app modules (conflicts with uvicorn's logging setup).
When uvicorn starts (via uvicorn app.main:app), it configures logging first using dictConfig. Calling basicConfig at module import time can cause duplicate handlers or lost logs. Guard the call to prevent re-initialization:

-logging.basicConfig(
-    level=logging.INFO,
-    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
-)
+root_logger = logging.getLogger()
+if not root_logger.handlers:
+    logging.basicConfig(
+        level=logging.INFO,
+        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
+    )
 logger = logging.getLogger(__name__)
🤖 Prompt for AI Agents
In Backend/app/main.py around lines 21 to 27, calling logging.basicConfig(...)
at module import time conflicts with uvicorn's logging (can add duplicate
handlers or drop logs); remove or guard the basicConfig call so it only runs
when the module is executed directly or when no handlers exist. Replace the
unconditional basicConfig with either (a) move the configuration into an if
__name__ == "__main__": block so uvicorn-imports won't reconfigure logging, or
(b) guard it with a hasHandlers() check (e.g., only call basicConfig if
logging.getLogger().hasHandlers() is False). Keep the logger =
logging.getLogger(__name__) line unchanged.


# Async function to create database tables with exception handling
async def create_tables():
@@ -38,13 +47,42 @@ async def lifespan(app: FastAPI):
    print("App is shutting down...")


# Custom middleware for logging and timing
class RequestMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        start_time = time.time()

        logger.info(f"Incoming: {request.method} {request.url.path}")

        response = await call_next(request)

        process_time = time.time() - start_time
        response.headers["X-Process-Time"] = str(process_time)
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["X-Frame-Options"] = "DENY"
        response.headers["X-XSS-Protection"] = "1; mode=block"

        logger.info(f"Completed: {request.method} {request.url.path} - {response.status_code} ({process_time:.3f}s)")

        return response
Comment on lines +50 to +68

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd -type f -name "main.py" | head -20

Repository: AOSSIE-Org/InPactAI

Length of output: 233


🏁 Script executed:

find . -type f -path "*/Backend/app/main.py" 2>/dev/null | head -5

Repository: AOSSIE-Org/InPactAI

Length of output: 84


🌐 Web query:

time.perf_counter vs time.time for measuring duration Python

💡 Result:

Short answer: use time.perf_counter() (or perf_counter_ns()) to measure elapsed durations; use time.time() only for wall‑clock timestamps (e.g., logging, comparing with external clocks). (docs.python.org)

Why:

  • perf_counter is a high‑resolution, monotonic performance counter (won’t go backwards, includes time spent sleeping) — ideal for timing code. Use perf_counter_ns() to avoid float precision loss. (docs.python.org)
  • time.time() returns the system (epoch) clock which is adjustable by the OS/NTP and can jump; it’s intended for timestamps, not robust interval timing. (peps.python.org)

Minimal example:

  • Short timing with floats:
    start = time.perf_counter()
    ...do work...
    elapsed = time.perf_counter() - start
  • Nanosecond-safe:
    start = time.perf_counter_ns()
    elapsed_ns = time.perf_counter_ns() - start

References: Python time docs (perf_counter/perf_counter_ns) and PEP 418 (monotonic/perf_counter rationale). (docs.python.org)


🏁 Script executed:

cat -n Backend/app/main.py | sed -n '40,75p'



Use time.perf_counter() for duration measurement and wrap call_next() with try/finally to ensure completion logging on exceptions.

time.time() returns the system clock (adjustable by OS/NTP) and is unsuitable for measuring durations; time.perf_counter() provides a monotonic, high-resolution timer designed for this purpose. Currently, if call_next() raises an exception, the "Completed" log and security headers are never set. Use try/finally to guarantee these execute regardless of exceptions, with a null-check for response before accessing headers (since exceptions mean no response object).

 class RequestMiddleware(BaseHTTPMiddleware):
     async def dispatch(self, request: Request, call_next):
-        start_time = time.time()
-        
-        logger.info(f"Incoming: {request.method} {request.url.path}")
-        
-        response = await call_next(request)
-        
-        process_time = time.time() - start_time
-        response.headers["X-Process-Time"] = str(process_time)
-        response.headers["X-Content-Type-Options"] = "nosniff"
-        response.headers["X-Frame-Options"] = "DENY"
-        response.headers["X-XSS-Protection"] = "1; mode=block"
-        
-        logger.info(f"Completed: {request.method} {request.url.path} - {response.status_code} ({process_time:.3f}s)")
-        
-        return response
+        start_time = time.perf_counter()
+        logger.info("Incoming: %s %s", request.method, request.url.path)
+
+        response = None
+        try:
+            response = await call_next(request)
+            return response
+        finally:
+            process_time = time.perf_counter() - start_time
+            if response is not None:
+                response.headers["X-Process-Time"] = f"{process_time:.6f}"
+                response.headers["X-Content-Type-Options"] = "nosniff"
+                response.headers["X-Frame-Options"] = "DENY"
+                response.headers["X-XSS-Protection"] = "1; mode=block"
+                logger.info(
+                    "Completed: %s %s - %s (%.3fs)",
+                    request.method,
+                    request.url.path,
+                    response.status_code,
+                    process_time,
+                )
🤖 Prompt for AI Agents
In Backend/app/main.py around lines 50 to 68, the middleware uses time.time()
for duration and doesn't guarantee completion logging or header setting when
call_next() raises; replace time.time() with time.perf_counter() for monotonic
high-resolution timing, wrap the call_next(request) and response handling in a
try/finally so the elapsed time is computed and the "Completed" log is emitted
regardless of exceptions, and only set response.headers after checking response
is not None (or handle the absence of response appropriately) so header access
won't fail when an exception occurs.
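
To sanity-check the middleware locally, a quick header dump against a running dev server (port assumed from the Dockerfile above) should show the added headers:

curl -sD - -o /dev/null http://localhost:8000/ | grep -iE 'x-process-time|x-frame-options|x-content-type-options'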

# Initialize FastAPI
app = FastAPI(lifespan=lifespan)

# Add custom middleware
app.add_middleware(RequestMiddleware)

# Add CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173"],
    allow_origins=[
        "http://localhost:5173",
        "http://localhost:5174",
        "http://localhost:5175",
        "http://localhost:5176",
        "http://frontend:5173",
        "http://127.0.0.1:5173"
    ],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
24 changes: 18 additions & 6 deletions Backend/app/routes/post.py
@@ -18,25 +18,37 @@
import uuid
from datetime import datetime, timezone

# Load environment variables
load_dotenv()
url: str = os.getenv("SUPABASE_URL")
key: str = os.getenv("SUPABASE_KEY")
supabase: Client = create_client(url, key)

url: str = os.getenv("SUPABASE_URL", "")
key: str = os.getenv("SUPABASE_KEY", "")

if not url or not key or "your-" in url:
    print("⚠️ Supabase credentials not configured. Some features will be limited.")
Copilot AI (Dec 13, 2025): Print statement may execute during import.
    supabase = None
else:
    try:
        supabase: Client = create_client(url, key)
    except Exception as e:
        print(f"❌ Supabase connection failed: {e}")
Copilot AI (Dec 13, 2025): Print statement may execute during import.
        supabase = None

# Define Router
router = APIRouter()

# Helper Functions
def generate_uuid():
    return str(uuid.uuid4())

def current_timestamp():
    return datetime.now(timezone.utc).isoformat()

# ========== USER ROUTES ==========
def check_supabase():
    if not supabase:
        raise HTTPException(status_code=503, detail="Database service unavailable. Please configure Supabase credentials.")

@router.post("/users/")
async def create_user(user: UserCreate):
    check_supabase()
    user_id = generate_uuid()
    t = current_timestamp()
