diff --git a/.env.example b/.env.example index b900db64..6e2cdc49 100644 --- a/.env.example +++ b/.env.example @@ -150,3 +150,11 @@ REDIS_DB=0 # How long cached responses are stored before expiring # Individual endpoints may override this value CACHE_DEFAULT_TTL=300 + +# Rate Limiting Settings (opt-in, disabled by default) +# When enabled, uses Redis if available, otherwise in-memory storage +# Protects authentication endpoints from brute force and abuse +# Note: Rate limiting automatically uses Redis when both +# RATE_LIMIT_ENABLED=true and REDIS_ENABLED=true +# No additional Redis configuration needed +RATE_LIMIT_ENABLED=false diff --git a/README.md b/README.md index e487aa97..a609f4de 100644 --- a/README.md +++ b/README.md @@ -128,6 +128,12 @@ following advantages to starting your own from scratch : collection tracks HTTP performance (requests, latency, in-flight), business metrics (auth failures, API key usage, login attempts), and custom application metrics. Exposed via `/metrics` endpoint when enabled. +- **Rate limiting** for authentication endpoints to protect against brute force + attacks, spam, and abuse. Disabled by default, supports both in-memory (for + single-instance) and Redis (for multi-instance) backends. Conservative limits + applied to login (5/15min), registration (3/hour), password recovery (3/hour), + and other auth routes. Returns HTTP 429 with `Retry-After` header when limits + exceeded. Violations tracked via Prometheus metrics when enabled. - **A command-line admin tool**. This allows to configure the project metadata very easily, add users (and make admin), and run a development server. This can easily be modified to add your own functionality (for example bulk add diff --git a/SECURITY-REVIEW.md b/SECURITY-REVIEW.md index 8d561954..26bb0f06 100644 --- a/SECURITY-REVIEW.md +++ b/SECURITY-REVIEW.md @@ -51,11 +51,13 @@ ### 3. 
No Rate Limiting on Authentication Endpoints > [!NOTE] -> -> In the TODO, was waiting until Redis was integrated - it now has been d/t -> caching implementation. +> ✅ **Done**: Rate limiting implemented using slowapi with conservative limits +> on all 7 authentication endpoints. Supports both in-memory (single-instance) +> and Redis (multi-instance) backends. Disabled by default, opt-in via +> `RATE_LIMIT_ENABLED=true`. Returns HTTP 429 with Retry-After header when +> limits exceeded. Violations tracked via Prometheus metrics. -**Location**: All routes in `app/resources/auth.py`, entire codebase +**Location**: All routes in `app/resources/auth.py`, `app/rate_limit/` - **Issue**: No rate limiting implementation found anywhere. Vulnerable to: - Brute force login attacks (`/login/`) @@ -65,10 +67,13 @@ - **Impact**: Attackers can perform unlimited login attempts, spam registration emails, and exhaust server resources. - **Fix**: Implement rate limiting middleware (e.g., slowapi, fastapi-limiter): - - Login: 5 attempts per 15 minutes per IP - - Registration: 3 per hour per IP - - Password reset: 3 per hour per email - - API requests: 100 per minute per user/IP + - Login: 5 attempts per 15 minutes per IP ✅ + - Registration: 3 per hour per IP ✅ + - Password reset: 3 per hour per email ✅ + - Email verification: 10 per minute per IP ✅ + - Token refresh: 20 per minute per IP ✅ + - Reset password GET: 10 per minute per IP ✅ + - Reset password POST: 5 per hour per IP ✅ ### 4. API Key Scopes Stored But Never Enforced @@ -509,6 +514,37 @@ ## Feature Ideas (Future Enhancements) +- **User-based rate limiting for authenticated endpoints**: Current IP-based + limiting (#3) protects auth endpoints. Next enhancement: add user/API-key based + limiting for authenticated API operations. 
Benefits: + - Fair usage across shared IPs (corporate NAT, proxies) + - Per-user quotas and tiered limits (free vs premium) + - Better tracking of individual user API consumption + - Prevent individual account abuse + + Implementation approach: + - Create middleware to extract user ID from JWT/API key before rate limiting + - Add `user_limiter` alongside existing IP-based `limiter` + - Use custom key function: `user:{user_id}` if authenticated, fallback to + `ip:{address}` for unauthenticated + - Apply to protected endpoints (keep IP-based for auth endpoints) + - Support role-based dynamic limits (admins get higher quotas) + + Example usage: + + ```python + @router.get("/api/data") + @user_rate_limited("500/hour") # Per user, not per IP + async def get_data(user: User = Depends(AuthManager())): + pass + ``` + + Technical considerations: + - Middleware runs before dependencies, so need early auth extraction + - Reuse existing Redis/in-memory backend + - No impact on current auth endpoint protection + - Enables per-user analytics and quota enforcement + - **Session management**: list/revoke active refresh tokens per user (ties into logout/invalidations and #8 jti claims) - **Account security notifications**: email user when password/email changes or @@ -531,7 +567,7 @@ | Priority | Count | Must Fix Before Production? | |--------------|---------------|-------------------------------------| -| **CRITICAL** | 5 (3 closed) | ✅ YES - Security vulnerabilities | +| **CRITICAL** | 5 (4 closed) | ✅ YES - Security vulnerabilities | | **High** | 9 (0 closed) | ✅ YES - Important security/quality | | **Medium** | 14 (0 closed) | ⚠️ Recommended - Hardening needed | | **Low** | 5 (0 closed) | 💡 Optional - Nice to have | @@ -556,10 +592,10 @@ rate limiting, token validation, and API key scope enforcement. ### Sprint 1 - CRITICAL (This Week) -1. **Fix CORS configuration** (#2) - Remove credentials or restrict origins -2. **Implement rate limiting** (#3) - All auth endpoints -3. 
**Add token type validation** (#1) - `get_jwt_user()` enforcement -4. **Redact sensitive data from logs** (#5) - Query parameter filtering +1. ✅ **Fix CORS configuration** (#2) - Remove credentials or restrict origins +2. ✅ **Implement rate limiting** (#3) - All auth endpoints +3. ✅ **Add token type validation** (#1) - `get_jwt_user()` enforcement +4. ✅ **Redact sensitive data from logs** (#5) - Query parameter filtering 5. **Fix API key scope enforcement** (#4) - Add validation logic ### Sprint 2 - High Priority (Next Week) @@ -613,7 +649,7 @@ rate limiting, token validation, and API key scope enforcement. - `app/managers/api_key.py` - Scope enforcement (#4), last_used_at (#23) - `app/managers/user.py` - Timing attacks (#6), email enumeration (#10), race condition (#11) -- **NEW FILE**: `app/middleware/rate_limiting.py` - Rate limiting (#3) +- ✅ `app/rate_limit/` - Rate limiting implementation (#3) **High Priority:** diff --git a/TODO.md b/TODO.md index 2646ce2b..b34744ed 100644 --- a/TODO.md +++ b/TODO.md @@ -4,6 +4,8 @@ - allow to tag endpoints as belonging to a group so can then have similar auth etc. +- expand the IP-based rate-limiting to allow user-based limiting. this will help + to set user-based quotas. - add time-limited bans (configurable) - Add certain users that will not time-expire (or much longer time) for eg for owner or premium access. diff --git a/app/config/settings.py b/app/config/settings.py index d488a869..c08702e6 100644 --- a/app/config/settings.py +++ b/app/config/settings.py @@ -138,6 +138,11 @@ class Settings(BaseSettings): redis_db: int = 0 cache_default_ttl: int = 300 # 5 minutes + # Rate limiting settings (opt-in, disabled by default) + # Automatically uses Redis when both rate_limit_enabled and + # redis_enabled are True + rate_limit_enabled: bool = False + # gatekeeper settings! 
# this is to ensure that people read the damn instructions and changelogs i_read_the_damn_docs: bool = False @@ -160,6 +165,21 @@ def redis_url(self) -> str: ) return f"redis://{self.redis_host}:{self.redis_port}/{self.redis_db}" + @property + def rate_limit_storage_url(self) -> str: + """Generate rate limit storage URL. + + Returns Redis URL when both redis_enabled and rate_limit_enabled + are True, otherwise returns empty string to trigger in-memory + storage. + + Returns: + Redis URL for rate limiting or empty string for in-memory. + """ + if self.rate_limit_enabled and self.redis_enabled: + return self.redis_url + return "" + @field_validator("api_root") @classmethod def check_api_root(cls: type[Settings], value: str) -> str: diff --git a/app/main.py b/app/main.py index fa15add1..169b1137 100644 --- a/app/main.py +++ b/app/main.py @@ -4,7 +4,7 @@ import sys from collections.abc import AsyncGenerator from contextlib import asynccontextmanager -from typing import Any +from typing import Any, cast from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware @@ -16,6 +16,8 @@ from loguru import logger as loguru_logger from redis import RedisError from redis.asyncio import Redis +from slowapi.errors import RateLimitExceeded +from slowapi.middleware import SlowAPIMiddleware from sqlalchemy.exc import SQLAlchemyError from app.admin import register_admin @@ -27,6 +29,8 @@ from app.metrics.instrumentator import register_metrics from app.middleware.cache_logging import CacheLoggingMiddleware from app.middleware.logging_middleware import LoggingMiddleware +from app.rate_limit import limiter +from app.rate_limit.handlers import rate_limit_handler from app.resources import config_error from app.resources.routes import api_router @@ -160,6 +164,13 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[Any, None]: # Customize OpenAPI schema for special endpoints app.openapi = lambda: custom_openapi(app) # type: ignore[method-assign] +# Register custom 
exception handler for rate limits +app.add_exception_handler( + RateLimitExceeded, + cast("Any", rate_limit_handler), +) + + # register the API routes app.include_router(api_router) @@ -190,5 +201,11 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[Any, None]: # Add cache logging middleware app.add_middleware(CacheLoggingMiddleware) +# Add SlowAPI middleware and state (required for rate limiting) +# NOTE: app.state.limiter must be set regardless of rate_limit_enabled +# The limiter itself has enabled=False when rate limiting is disabled +app.state.limiter = limiter +app.add_middleware(SlowAPIMiddleware) + # Add pagination support add_pagination(app) diff --git a/app/metrics/__init__.py b/app/metrics/__init__.py index 59fd6ffb..2af81a1d 100644 --- a/app/metrics/__init__.py +++ b/app/metrics/__init__.py @@ -4,6 +4,7 @@ increment_api_key_validation, increment_auth_failure, increment_login_attempt, + increment_rate_limit_exceeded, ) from app.metrics.instrumentator import get_instrumentator from app.metrics.namespace import METRIC_NAMESPACE @@ -14,4 +15,5 @@ "increment_api_key_validation", "increment_auth_failure", "increment_login_attempt", + "increment_rate_limit_exceeded", ] diff --git a/app/metrics/custom.py b/app/metrics/custom.py index 883dcc52..78e9b05e 100644 --- a/app/metrics/custom.py +++ b/app/metrics/custom.py @@ -29,6 +29,14 @@ namespace=METRIC_NAMESPACE, ) +# Rate limit tracking +rate_limit_exceeded_total = Counter( + "rate_limit_exceeded_total", + "Total rate limit violations by endpoint and limit", + ["endpoint", "limit"], + namespace=METRIC_NAMESPACE, +) + # Helper functions (only increment if metrics enabled) def increment_auth_failure(reason: str, method: str) -> None: @@ -47,3 +55,12 @@ def increment_login_attempt(status: str) -> None: """Increment login attempt counter.""" if get_settings().metrics_enabled: login_attempts_total.labels(status=status).inc() + + +def increment_rate_limit_exceeded(endpoint: str, limit: str) -> None: + """Increment 
rate limit exceeded counter.""" + if get_settings().metrics_enabled: + rate_limit_exceeded_total.labels( + endpoint=endpoint, + limit=limit, + ).inc() diff --git a/app/rate_limit/__init__.py b/app/rate_limit/__init__.py new file mode 100644 index 00000000..530046ce --- /dev/null +++ b/app/rate_limit/__init__.py @@ -0,0 +1,76 @@ +"""Rate limiting module using slowapi.""" + +from typing import TYPE_CHECKING + +from slowapi import Limiter +from slowapi.errors import RateLimitExceeded +from slowapi.util import get_remote_address + +from app.config.settings import get_settings +from app.logs import LogCategory, category_logger + +if TYPE_CHECKING: + from slowapi import Limiter as LimiterType + +# Initialize limiter with storage backend selection +_limiter: "LimiterType | None" = None + + +def get_limiter() -> Limiter: + """Get or create the rate limiter instance. + + Returns: + Configured Limiter with Redis or in-memory storage. + + Note: + When RATE_LIMIT_ENABLED=False, returns a limiter that + allows all requests (enabled=False tells slowapi to + not enforce any limits). 
+ """ + global _limiter # noqa: PLW0603 + + if _limiter is not None: + return _limiter + + settings = get_settings() + + # Determine storage backend + # Always use memory:// for in-memory storage to ensure proper init + if settings.rate_limit_enabled and settings.redis_enabled: + storage_uri = settings.redis_url + category_logger.info( + "Rate limiting initialized with Redis storage", + LogCategory.AUTH, + ) + else: + # Use memory:// for in-memory storage (works even when disabled) + storage_uri = "memory://" + if settings.rate_limit_enabled: + category_logger.info( + "Rate limiting initialized with in-memory storage", + LogCategory.AUTH, + ) + else: + category_logger.info( + "Rate limiting is disabled (RATE_LIMIT_ENABLED=false)", + LogCategory.AUTH, + ) + + # Create limiter instance + _limiter = Limiter( + key_func=get_remote_address, # Rate limit by IP address + storage_uri=storage_uri, + enabled=settings.rate_limit_enabled, + ) + + return _limiter + + +# Export for convenience +limiter = get_limiter() + +__all__ = [ + "RateLimitExceeded", + "get_limiter", + "limiter", +] diff --git a/app/rate_limit/config.py b/app/rate_limit/config.py new file mode 100644 index 00000000..36c6dc99 --- /dev/null +++ b/app/rate_limit/config.py @@ -0,0 +1,33 @@ +"""Rate limit configurations for different endpoints.""" + +from typing import ClassVar + + +class RateLimits: + """Rate limit definitions for authentication endpoints. + + Following conservative limits from SECURITY-REVIEW.md #3. 
+ Format: "{count}/{period}" where period can be: + - second, minute, hour, day + """ + + # Registration - prevent spam account creation + REGISTER: ClassVar[str] = "3/hour" + + # Login - brute force protection + LOGIN: ClassVar[str] = "5/15minutes" + + # Password recovery - prevent email flooding + FORGOT_PASSWORD: ClassVar[str] = "3/hour" # noqa: S105 + + # Email verification - prevent abuse + VERIFY: ClassVar[str] = "10/minute" + + # Token refresh - prevent token harvesting + REFRESH: ClassVar[str] = "20/minute" + + # Password reset GET (form page) - prevent reconnaissance + RESET_PASSWORD_GET: ClassVar[str] = "10/minute" # noqa: S105 + + # Password reset POST (actual reset) - critical security + RESET_PASSWORD_POST: ClassVar[str] = "5/hour" # noqa: S105 diff --git a/app/rate_limit/decorators.py b/app/rate_limit/decorators.py new file mode 100644 index 00000000..8ca446af --- /dev/null +++ b/app/rate_limit/decorators.py @@ -0,0 +1,81 @@ +"""Rate limit decorator with logging and metrics integration.""" + +from collections.abc import Callable +from functools import wraps +from typing import Any + +from slowapi.errors import RateLimitExceeded + +from app.logs import LogCategory, category_logger +from app.metrics import increment_rate_limit_exceeded +from app.rate_limit import get_limiter + + +def rate_limited(limit: str) -> Callable[..., Any]: + """Apply rate limiting with integrated logging and metrics. + + Args: + limit: Rate limit string (e.g., "5/minute", "3/hour") + + Returns: + Decorated endpoint with rate limiting applied + + Example: + ```python + from app.rate_limit import rate_limited + from app.rate_limit.config import RateLimits + + @router.post("/login/") + @rate_limited(RateLimits.LOGIN) + async def login( + request: Request, user_data: UserLoginRequest + ): + ... 
+ ``` + + Note: + - Decorator must be applied AFTER route decorator + - Request parameter must be present in endpoint signature + - Logs violations and increments metrics automatically + """ + limiter = get_limiter() + + def decorator(func: Callable[..., Any]) -> Callable[..., Any]: + # Apply slowapi's limit decorator + limited_func = limiter.limit(limit)(func) + + @wraps(func) + async def wrapper( + *args: Any, # noqa: ANN401 + **kwargs: Any, # noqa: ANN401 + ) -> Any: # noqa: ANN401 + try: + return await limited_func(*args, **kwargs) + except RateLimitExceeded: + # Extract request from kwargs or args + request = kwargs.get("request") or next( + (arg for arg in args if hasattr(arg, "client")), + None, + ) + + client_ip = "unknown" + if request and hasattr(request, "client") and request.client: + client_ip = request.client.host + + category_logger.warning( + f"Rate limit exceeded for {client_ip}: {limit}", + LogCategory.AUTH, + ) + + # Increment metrics + increment_rate_limit_exceeded( + endpoint=func.__name__, + limit=limit, + ) + + # Re-raise to trigger FastAPI's exception handler + raise + + return wrapper + + return decorator diff --git a/app/rate_limit/handlers.py b/app/rate_limit/handlers.py new file mode 100644 index 00000000..cb5ed319 --- /dev/null +++ b/app/rate_limit/handlers.py @@ -0,0 +1,76 @@ +"""Exception handlers for rate limiting.""" + +import re + +from fastapi import Request, status +from fastapi.responses import JSONResponse +from slowapi.errors import RateLimitExceeded + +# Default Retry-After value in seconds (1 hour) +DEFAULT_RETRY_AFTER = "3600" + +# Unit to seconds mapping +UNIT_TO_SECONDS = { + "second": 1, + "minute": 60, + "hour": 3600, + "day": 86400, +} + + +def parse_retry_after(limit_str: str) -> str: + """Parse rate limit string and return Retry-After seconds. 
+ + Args: + limit_str: Rate limit string like "3 per 1 hour" or "5/15minutes" + + Returns: + Retry-After value in seconds as string + """ + # Handle both "X per Y period" and "X/Y period" formats + limit_str = limit_str.replace(" per ", "/") + + # Extract the time period + parts = limit_str.split("/") + if len(parts) < 2: # noqa: PLR2004 + return DEFAULT_RETRY_AFTER # Default to 1 hour + + period_str = parts[1].strip().lower() + + # Parse period to seconds + # Extract number prefix if present (e.g., "15minutes" -> 15) + match = re.match(r"(\d+)?\s*(\w+)", period_str) + if not match: + return DEFAULT_RETRY_AFTER + + multiplier_str, unit = match.groups() + multiplier = int(multiplier_str) if multiplier_str else 1 + + # Find matching unit (using startswith for flexibility) + seconds = next( + (v for k, v in UNIT_TO_SECONDS.items() if unit.startswith(k)), + int(DEFAULT_RETRY_AFTER), # Default to hour if no match + ) + + return str(multiplier * seconds) + + +async def rate_limit_handler( + request: Request, # noqa: ARG001 - Required by FastAPI + exc: RateLimitExceeded, +) -> JSONResponse: + """Handle rate limit exceeded exceptions. + + Returns a 429 status with Retry-After header. + """ + # Calculate Retry-After from limit string + retry_after = DEFAULT_RETRY_AFTER # Default to 1 hour + if exc.limit is not None: + limit_str = str(exc.limit.limit) + retry_after = parse_retry_after(limit_str) + + return JSONResponse( + status_code=status.HTTP_429_TOO_MANY_REQUESTS, + content={"detail": "Rate limit exceeded. 
Please try again later."}, + headers={"Retry-After": retry_after}, + ) diff --git a/app/resources/auth.py b/app/resources/auth.py index f5bef3f0..55eafb32 100644 --- a/app/resources/auth.py +++ b/app/resources/auth.py @@ -26,6 +26,8 @@ from app.managers.helpers import MAX_JWT_TOKEN_LENGTH, is_valid_jwt_format from app.managers.user import UserManager from app.models.user import User +from app.rate_limit.config import RateLimits +from app.rate_limit.decorators import rate_limited from app.schemas.request.auth import ( ForgotPasswordRequest, ResetPasswordRequest, @@ -53,7 +55,9 @@ name="register_a_new_user", response_model=TokenResponse, ) +@rate_limited(RateLimits.REGISTER) async def register( + request: Request, # noqa: ARG001 - Required by slowapi decorator background_tasks: BackgroundTasks, user_data: UserRegisterRequest, session: Annotated[AsyncSession, Depends(get_database)], @@ -83,7 +87,9 @@ async def register( response_model=TokenResponse, status_code=status.HTTP_200_OK, ) +@rate_limited(RateLimits.LOGIN) async def login( + request: Request, # noqa: ARG001 - Required by slowapi decorator user_data: UserLoginRequest, session: Annotated[AsyncSession, Depends(get_database)], ) -> dict[str, str]: @@ -105,7 +111,9 @@ async def login( name="refresh_an_expired_token", response_model=TokenRefreshResponse, ) +@rate_limited(RateLimits.REFRESH) async def generate_refresh_token( + request: Request, # noqa: ARG001 - Required by slowapi decorator refresh_token: TokenRefreshRequest, session: Annotated[AsyncSession, Depends(get_database)], ) -> dict[str, str]: @@ -119,8 +127,11 @@ async def generate_refresh_token( @router.get("/verify/", status_code=status.HTTP_200_OK) +@rate_limited(RateLimits.VERIFY) async def verify( - session: Annotated[AsyncSession, Depends(get_database)], code: str = "" + request: Request, # noqa: ARG001 - Required by slowapi decorator + session: Annotated[AsyncSession, Depends(get_database)], + code: str = "", ) -> None: """Verify a new user. 
@@ -138,7 +149,9 @@ async def verify( response_model=PasswordResetResponse, status_code=status.HTTP_200_OK, ) +@rate_limited(RateLimits.FORGOT_PASSWORD) async def forgot_password( + request: Request, # noqa: ARG001 - Required by slowapi decorator background_tasks: BackgroundTasks, request_data: ForgotPasswordRequest, session: Annotated[AsyncSession, Depends(get_database)], @@ -163,6 +176,7 @@ async def forgot_password( response_model=None, include_in_schema=True, ) +@rate_limited(RateLimits.RESET_PASSWORD_GET) async def reset_password_form( request: Request, session: Annotated[AsyncSession, Depends(get_database)], @@ -235,6 +249,7 @@ async def reset_password_form( response_model=None, status_code=status.HTTP_200_OK, ) +@rate_limited(RateLimits.RESET_PASSWORD_POST) async def reset_password( request: Request, session: Annotated[AsyncSession, Depends(get_database)], diff --git a/docs/index.md b/docs/index.md index 57fd6671..0abdee34 100644 --- a/docs/index.md +++ b/docs/index.md @@ -77,6 +77,12 @@ following advantages to starting your own from scratch : collection tracks HTTP performance (requests, latency, in-flight), business metrics (auth failures, API key usage, login attempts), and custom application metrics. Exposed via `/metrics` endpoint when enabled. +- **Rate limiting** for authentication endpoints to protect against brute force + attacks, spam, and abuse. Disabled by default, supports both in-memory (for + single-instance) and Redis (for multi-instance) backends. Conservative limits + applied to login (5/15min), registration (3/hour), password recovery (3/hour), + and other auth routes. Returns HTTP 429 with `Retry-After` header when limits + exceeded. Violations tracked via Prometheus metrics when enabled. - **A command-line admin tool**. This allows to configure the project metadata very easily, add users (and make admin), and run a development server. 
This can easily be modified to add your own functionality (for example bulk add diff --git a/docs/usage/caching.md b/docs/usage/caching.md index 488b040c..b329fb5f 100644 --- a/docs/usage/caching.md +++ b/docs/usage/caching.md @@ -212,6 +212,24 @@ async def expensive_query(request: Request, response: Response): The `@cached()` decorator MUST be placed AFTER the route decorator (`@router.get()`, etc.) to work correctly. +!!! tip "Using with Rate Limiting" + When using both `@cached()` and `@rate_limited()` decorators together, + place rate limiting first for security. Rate limits should be checked + before returning any response (including cached ones): + + ```python + from app.cache import cached + from app.rate_limit.decorators import rate_limited + + @router.get("/endpoint") + @rate_limited("10/minute") # Check rate limit first + @cached(expire=300) # Then check cache + async def handler(request: Request, response: Response): + return data + ``` + + This ensures rate-limited users don't bypass limits via cached responses. + ### With Custom Key Builders For user-scoped caching, use built-in key builders: diff --git a/docs/usage/rate-limiting.md b/docs/usage/rate-limiting.md new file mode 100644 index 00000000..6e06c6ce --- /dev/null +++ b/docs/usage/rate-limiting.md @@ -0,0 +1,573 @@ +# Rate Limiting and Security + +## Overview + +The FastAPI Template provides optional rate limiting to protect +authentication endpoints from brute force attacks, abuse, and denial +of service attempts. Rate limiting is **disabled by default** and +must be explicitly enabled. When enabled, conservative rate limits +are applied to all authentication routes to protect your application +while maintaining good user experience. 
+ +The rate limiting system supports two backends: + +- **In-Memory Rate Limiting** - No external dependencies, perfect for + single-instance deployments +- **Redis Rate Limiting** - Distributed rate limiting for production + multi-instance deployments, enabling coordinated limits across all + instances + +## Enabling Rate Limiting + +Rate limiting is **disabled by default** to minimize overhead and +external dependencies. To enable rate limiting: + +### Option 1: In-Memory Rate Limiting (Development/Single Instance) + +No external dependencies required: + +```bash +# In your .env file +RATE_LIMIT_ENABLED=true +REDIS_ENABLED=false +``` + +This uses in-memory rate limiting which is perfect for development or +single-instance deployments. Note that limits are per-instance and +don't persist across restarts. + +### Option 2: Redis Rate Limiting (Production/Multi-Instance) + +For production deployments with multiple instances, use Redis to +coordinate rate limits across all instances: + +1. Ensure Redis is running and accessible: + + ```bash + # Install Redis Server (Linux) + sudo apt install redis-server + + # Or via Docker + docker run -d -p 6379:6379 redis:alpine + ``` + +2. Add to your `.env` file: + + ```bash + RATE_LIMIT_ENABLED=true + REDIS_ENABLED=true + REDIS_HOST=localhost + REDIS_PORT=6379 + REDIS_PASSWORD= + REDIS_DB=0 + ``` + +3. Restart your application: + + ```bash + api-admin serve + ``` + +If Redis connection fails, the application automatically falls back to +in-memory rate limiting. + +## How It Works + +### Rate Limiting Behavior + +The rate limiting system has three operating modes: + +1. **Rate Limiting Disabled** (`RATE_LIMIT_ENABLED=false`, default): + No rate limiting occurs, `@rate_limited()` decorator is a no-op, + zero overhead +2. **In-Memory Rate Limiting** (`RATE_LIMIT_ENABLED=true, + REDIS_ENABLED=false`): Uses in-memory backend, limits are + per-instance +3. 
**Redis Rate Limiting** (`RATE_LIMIT_ENABLED=true, + REDIS_ENABLED=true`): Uses Redis backend with coordinated limits + across all instances + +### Automatic Fallback + +When Redis is enabled but connection fails: + +1. Application logs a warning about the Redis connection failure +2. Automatically falls back to in-memory rate limiting +3. Application continues functioning normally +4. Rate limit violations are logged but don't break the application + +This ensures your application remains functional even if Redis goes +down. + +### Protected Endpoints + +Rate limits are automatically applied to these authentication +endpoints: + +| Endpoint | Limit | Purpose | +| --------------------- | ----------- | -------------------------- | +| POST /register/ | 3/hour | Prevent spam registrations | +| POST /login/ | 5/15minutes | Brute force protection | +| POST /forgot-password | 3/hour | Email flooding prevention | +| GET /verify/ | 10/minute | Verification abuse | +| POST /refresh/ | 20/minute | Token refresh abuse | +| GET /reset-password/ | 10/minute | Reset link access | +| POST /reset-password/ | 5/hour | Reset abuse prevention | + +### Rate Limit Response + +When a rate limit is exceeded, the API returns: + +- **HTTP 429 Too Many Requests** status code +- **`Retry-After`** header indicating when the limit resets (in + seconds) +- JSON error message: `{"detail": "Rate limit exceeded. Please try + again later."}` + +Example response: + +```http +HTTP/1.1 429 Too Many Requests +Retry-After: 3600 +Content-Type: application/json + +{ + "detail": "Rate limit exceeded. Please try again later." +} +``` + +## Configuration + +### Rate Limit Settings + +**`RATE_LIMIT_ENABLED`** (default: `false`) +Master switch to enable/disable rate limiting entirely. When `false`, +no rate limit initialization occurs and `@rate_limited()` decorator is +a no-op with zero overhead. + +**`REDIS_ENABLED`** (default: `false`) +Controls whether to use Redis or in-memory rate limiting backend. 
Only +relevant when `RATE_LIMIT_ENABLED=true`. + +### Redis Connection Settings + +Redis backend configuration (when `RATE_LIMIT_ENABLED=true` and +`REDIS_ENABLED=true`): + +**`REDIS_HOST`** (default: `localhost`) +Redis server hostname or IP address. + +**`REDIS_PORT`** (default: `6379`) +Redis server port. + +**`REDIS_PASSWORD`** (default: empty) +Redis password if authentication is enabled. Special characters are +automatically URL-encoded. + +**`REDIS_DB`** (default: `0`) +Redis database number (0-15). + +### Example `.env` Configuration + +```bash +# Rate limiting disabled (default) +RATE_LIMIT_ENABLED=false + +# Development setup (in-memory rate limiting) +RATE_LIMIT_ENABLED=true +REDIS_ENABLED=false + +# Production setup (Redis rate limiting) +RATE_LIMIT_ENABLED=true +REDIS_ENABLED=true +REDIS_HOST=redis.example.com +REDIS_PORT=6380 +REDIS_PASSWORD=my_secure_password! +REDIS_DB=1 +``` + +!!! note "Testing Configuration" + Tests automatically enable rate limiting with in-memory backend to + ensure rate limit code paths are tested without requiring Redis. + +## Using Rate Limiting in Your Endpoints + +### Basic Usage + +Use the `@rate_limited()` decorator on route handlers: + +```python +from fastapi import APIRouter, Request +from app.rate_limit.config import RateLimits +from app.rate_limit.decorators import rate_limited + +router = APIRouter() + +@router.post("/sensitive-operation") +@rate_limited("10/minute") +async def sensitive_operation( + request: Request, # Required by slowapi + data: dict +): + # This endpoint is limited to 10 requests per minute per IP + return {"status": "success"} +``` + +!!! note "Decorator Order and Request Parameter" + - The `@rate_limited()` decorator MUST be placed AFTER the route + decorator (`@router.post()`, etc.) + - The route handler MUST include a `request: Request` parameter + for rate limiting to work + +!!! 
tip "Using with Caching" + When using both `@rate_limited()` and `@cached()` decorators together, + place rate limiting first for security. Rate limits should be checked + before returning any response (including cached ones): + + ```python + from app.cache import cached + from app.rate_limit.decorators import rate_limited + + @router.get("/endpoint") + @rate_limited("10/minute") # Check rate limit first + @cached(expire=300) # Then check cache + async def handler(request: Request, response: Response): + return data + ``` + + This ensures rate-limited users don't bypass limits via cached responses. + See [Caching Documentation](caching.md#using-cache-in-your-endpoints) for + more details. + +### Using Predefined Limits + +The template provides conservative default limits in +`app/rate_limit/config.py`: + +```python +from app.rate_limit.config import RateLimits +from app.rate_limit.decorators import rate_limited + +@router.post("/api/v1/data") +@rate_limited(RateLimits.LOGIN) # Reuse login limit (5/15minutes) +async def create_data(request: Request, data: dict): + return await process_data(data) +``` + +### Custom Rate Limits + +Define your own rate limits using the slowapi format: + +```python +@router.get("/api/search") +@rate_limited("100/hour") # Custom limit +async def search(request: Request, query: str): + return await perform_search(query) +``` + +Supported time units: + +- `second` or `seconds` +- `minute` or `minutes` +- `hour` or `hours` +- `day` or `days` + +Examples: + +- `"10/second"` - 10 requests per second +- `"50/5minutes"` - 50 requests per 5 minutes +- `"1000/day"` - 1000 requests per day + +## Rate Limiting Strategy + +### IP-Based Limiting + +All rate limits are based on the client's IP address (via +`get_remote_address`). 
This means:
+
+- Each IP address has independent limits
+- Shared networks (NAT, proxies) share the same limit
+- For user-specific limits, consider implementing custom rate limiting
+  based on user ID
+
+### Conservative Defaults
+
+The template uses conservative limits that protect against abuse while
+allowing legitimate use:
+
+- **Registration (3/hour)**: Prevents spam account creation while
+  allowing retries for legitimate users
+- **Login (5/15min)**: Blocks brute force attacks with ~20 attempts
+  per hour max
+- **Password Recovery (3/hour)**: Prevents email flooding attacks
+- **Email Verification (10/minute)**: Allows retries without enabling
+  abuse
+- **Token Refresh (20/minute)**: Generous limit for legitimate
+  re-authentication
+- **Password Reset (10/minute GET, 5/hour POST)**: Protects against
+  enumeration and abuse
+
+### Customizing Limits
+
+To modify the default limits, edit
+`app/rate_limit/config.py`:
+
+```python
+from typing import ClassVar
+
+class RateLimits:
+    """Conservative rate limits for authentication endpoints."""
+
+    REGISTER: ClassVar[str] = "2/hour"  # More restrictive (default: 3/hour)
+    LOGIN: ClassVar[str] = "10/15minutes"  # More permissive (default: 5/15minutes)
+    # ... other limits
+```
+
+After changing limits, restart your application.
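+
+Before restarting, you can sanity-check a new limit string offline. The
+following sketch (a hypothetical helper, not part of the template)
+mirrors the `count/multiple unit` format described above and computes
+the window length in seconds:
+
+```python
+import re
+
+LIMIT_RE = re.compile(r"^(\d+)/(\d*)\s*(second|minute|hour|day)s?$")
+UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}
+
+def window_seconds(limit: str) -> int:
+    """Return the window length in seconds for a limit like '5/15minutes'."""
+    match = LIMIT_RE.match(limit)
+    if match is None:
+        raise ValueError(f"Invalid rate limit string: {limit!r}")
+    _count, multiple, unit = match.groups()
+    # A missing multiple (e.g. "3/hour") means one unit
+    return int(multiple or 1) * UNIT_SECONDS[unit]
+
+window_seconds("5/15minutes")  # 900
+window_seconds("3/hour")       # 3600
+```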
+
+## Monitoring Rate Limiting
+
+### Enable Rate Limit Logging
+
+Add `AUTH` to your `LOG_CATEGORIES` to log rate limit violations:
+
+```bash
+LOG_CATEGORIES=AUTH,REQUESTS
+```
+
+Example log output:
+
+```text
+2026-01-06 14:23:45 | AUTH | Rate limit exceeded for /login/
+(5/15minutes) from 192.168.1.100
+```
+
+### Metrics Integration
+
+When Prometheus metrics are enabled (`METRICS_ENABLED=true`), rate
+limit violations are tracked:
+
+```python
+# Metric: rate_limit_exceeded_total
+# Labels: endpoint, limit
+rate_limit_exceeded_total{endpoint="/login/",limit="5/15minutes"} 12
+```
+
+Access metrics at `/metrics`:
+
+```bash
+curl http://localhost:8000/metrics | grep rate_limit
+```
+
+### Check Rate Limit Headers
+
+When a rate limit is exceeded, check the response headers:
+
+```bash
+curl -i -X POST http://localhost:8000/login/ \
+  -H "Content-Type: application/json" \
+  -d '{"email":"test@example.com","password":"wrong"}'
+
+# After 5 failed attempts:
+HTTP/1.1 429 Too Many Requests
+Retry-After: 900
+Content-Type: application/json
+
+{"detail": "Rate limit exceeded. Please try again later."}
+```
+
+The `Retry-After` header indicates how many seconds until the limit
+resets.
+
+## Security Considerations
+
+### Protection Against Attacks
+
+Rate limiting protects against:
+
+1. **Brute Force Attacks**: Login limits prevent password guessing
+2. **Denial of Service**: Request limits prevent resource exhaustion
+3. **Spam Registration**: Registration limits prevent bot signups
+4. **Email Flooding**: Password recovery limits prevent email abuse
+5. **Account Enumeration**: Combined with other measures, limits slow
+   enumeration
+
+### Limitations
+
+Rate limiting alone doesn't fully protect against:
+
+- **Distributed Attacks**: Attackers using many IPs can bypass
+  IP-based limits
+- **Credential Stuffing**: Limits slow but don't prevent attacks using
+  valid credentials
+- **Application-Level DoS**: Some attacks target expensive operations
+  beyond authentication
+
+Combine rate limiting with:
+
+- CAPTCHA on critical endpoints
+- Account lockout after failed attempts
+- IP reputation services
+- Web Application Firewall (WAF)
+- Monitoring and alerting
+
+### Redis Security
+
+When using Redis for rate limiting:
+
+1. **Use Authentication**: Set `REDIS_PASSWORD` in production
+2. **Network Isolation**: Bind Redis to localhost or private network
+3. **TLS Encryption**: Use `rediss://` URLs for encrypted connections
+4. **Regular Updates**: Keep Redis patched and updated
+5. **Monitoring**: Track Redis memory and connection usage
+
+## Performance Impact
+
+### Overhead
+
+- **Rate Limit Check**: ~0.1-0.5ms additional latency per request
+  - In-Memory: ~0.1ms
+  - Redis: ~0.5ms (local network)
+- **When Disabled**: Zero overhead: rate limiting code is not
+  executed
+
+### Redis Memory Usage
+
+Each rate limit entry consumes minimal memory:
+
+- ~100 bytes per tracked IP per endpoint
+- Automatic expiry after limit window
+- Example: 1000 active IPs across 7 endpoints = ~700KB
+
+Monitor Redis memory:
+
+```bash
+redis-cli INFO memory
+```
+
+## Troubleshooting
+
+### Rate Limit Not Working
+
+**Symptom**: Endpoints accept unlimited requests
+
+**Checklist**:
+
+- Is `RATE_LIMIT_ENABLED=true` in `.env`? (Required for any rate
+  limiting)
+- Did you restart the application after changing `.env`?
+- Is the `@rate_limited()` decorator AFTER the route decorator?
+- Does the route handler have a `request: Request` parameter?
+- Check logs for rate limit initialization messages + +### Redis Connection Failures + +**Symptom**: Application starts but logs "Failed to connect to Redis +for rate limiting" + +**Solution**: + +1. Check Redis is running: `redis-cli ping` (should return `PONG`) +2. Verify `REDIS_HOST` and `REDIS_PORT` in `.env` +3. Check Redis logs: `journalctl -u redis` or `docker logs ` +4. Application continues with in-memory limiting (safe fallback) + +### Legitimate Users Blocked + +**Symptom**: Valid users can't access endpoints due to rate limits + +**Solutions**: + +1. **Increase Limits**: Edit `app/rate_limit/config.py` for more + generous limits +2. **Whitelist IPs**: Implement IP whitelist for known good IPs + (requires custom code) +3. **User-Based Limits**: Implement per-user rate limiting instead of + per-IP +4. **Inform Users**: Add `Retry-After` header information to your + frontend + +### Shared Network Issues + +**Symptom**: Users behind NAT/proxy hit limits quickly + +**Problem**: All users share the same IP address + +**Solutions**: + +1. **Increase Limits**: Allow for shared IP scenarios +2. **X-Forwarded-For**: Configure slowapi to use forwarded IP headers + (only if behind trusted proxy) +3. **User-Based Limits**: Implement authenticated user rate limiting + +## Advanced Usage + +### Custom Rate Limit Decorator + +Create endpoint-specific rate limiting: + +```python +from app.rate_limit.decorators import rate_limited + +# High-value operation with strict limit +@router.post("/api/premium-feature") +@rate_limited("1/minute") +async def premium_feature( + request: Request, + user: User = Depends(AuthManager()) +): + return await expensive_operation() + +# Public endpoint with generous limit +@router.get("/api/public-data") +@rate_limited("100/minute") +async def public_data(request: Request): + return await fetch_public_data() +``` + +### Dynamic Rate Limits + +!!! 
note "Conceptual Pattern"
+    The example below shows the structure for role-based rate limiting
+    but requires additional implementation to integrate with slowapi's
+    limiter. Consider using separate endpoints with different limits
+    or implementing custom middleware for complex dynamic scenarios.
+
+For user-tier based limits, you could structure a custom decorator
+like this:
+
+```python
+from functools import wraps
+
+from app.models.enums import Role
+
+def role_based_rate_limit(
+    admin_limit: str,
+    user_limit: str
+):
+    def decorator(func):
+        @wraps(func)
+        async def wrapper(*args, **kwargs):
+            user = kwargs.get("user")
+            is_admin = user is not None and user.role == Role.ADMIN
+            limit = admin_limit if is_admin else user_limit
+            # Note: Actual dynamic limit application requires custom
+            # slowapi integration; this is a simplified illustration
+            return await func(*args, **kwargs)
+        return wrapper
+    return decorator
+```
+
+**Alternative approach:** Use separate endpoints for different user
+tiers instead of dynamic limits, which provides better clarity and
+simpler implementation.
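+
+### How Window-Based Limiting Works
+
+Conceptually, each backend keeps a per-key hit count inside a time
+window. The standalone sketch below (illustrative only, not the actual
+`limits` implementation) shows a sliding-window variant of this idea:
+
+```python
+import time
+from collections import defaultdict
+
+class SlidingWindowLimiter:
+    """Allow at most `limit` hits per `window` seconds for each key."""
+
+    def __init__(self, limit: int, window: float) -> None:
+        self.limit = limit
+        self.window = window
+        self.hits: dict[str, list[float]] = defaultdict(list)
+
+    def allow(self, key: str) -> bool:
+        now = time.monotonic()
+        # Drop hits that have aged out of the window
+        self.hits[key] = [t for t in self.hits[key] if now - t < self.window]
+        if len(self.hits[key]) >= self.limit:
+            return False  # Over the limit: caller should return HTTP 429
+        self.hits[key].append(now)
+        return True
+
+limiter = SlidingWindowLimiter(limit=3, window=3600)  # e.g. "3/hour"
+assert all(limiter.allow("203.0.113.7") for _ in range(3))
+assert not limiter.allow("203.0.113.7")  # 4th request within the hour
+assert limiter.allow("198.51.100.2")     # other client IPs unaffected
+```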
+ +## See Also + +- [Configuration Guide](configuration/environment.md) - All + environment variables +- [Metrics and Observability](metrics.md) - Monitor rate limit + violations +- [Project Organization](../project-organization.md) - Rate limit + module structure +- [Logging Configuration](configuration/environment.md#configure-logging-optional) - + Enable rate limit logging diff --git a/mkdocs.yml b/mkdocs.yml index 1081faad..da153ca7 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -79,6 +79,7 @@ nav: - API Keys: usage/api-keys.md - Admin Panel: usage/admin-panel.md - Caching and Performance: usage/caching.md + - Rate Limiting and Security: usage/rate-limiting.md - Metrics and Observability: usage/metrics.md - Customization: - Metadata: customization/meta.md diff --git a/pyproject.toml b/pyproject.toml index 046aee4b..4ba0425c 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -34,6 +34,7 @@ dependencies = [ "fastapi-pagination>=0.12.34", "loguru>=0.7.2", "fastapi-cache2[redis]>=0.2.2", + "slowapi>=0.1.9", ] [project.scripts] @@ -240,6 +241,7 @@ addopts = [ filterwarnings = [ "ignore:'crypt' is deprecated:DeprecationWarning", "ignore:ValidatedEmail.email is deprecated:DeprecationWarning", + "ignore:'asyncio.iscoroutinefunction' is deprecated:DeprecationWarning:slowapi.extension", ] markers = [ "unit: Unit tests", @@ -254,6 +256,7 @@ env = [ "CACHE_ENABLED=true", "LOG_FILENAME=test_api.log", "METRICS_ENABLED=True", + "RATE_LIMIT_ENABLED=false", ] asyncio_default_fixture_loop_scope = "function" diff --git a/requirements-dev.txt b/requirements-dev.txt index 1cf69b1c..ef2913fd 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -92,7 +92,9 @@ cryptography==46.0.3 csscompressor==0.9.5 # via mkdocs-minify-plugin deprecated==1.2.18 - # via pygithub + # via + # limits + # pygithub distro==1.9.0 # via openai dnspython==2.7.0 @@ -179,6 +181,8 @@ libpass==1.9.0 # via api-template librt==0.7.4 ; platform_python_implementation != 'PyPy' # via mypy +limits==5.6.0 + # 
via slowapi loguru==0.7.3 # via api-template mako==1.3.9 @@ -242,6 +246,7 @@ openai==2.14.0 # via mkdoc packaging==24.2 # via + # limits # mkdocs # pytest # pytest-sugar @@ -391,6 +396,8 @@ simple-toml-settings==0.9.0 # via github-changelog-md six==1.17.0 # via python-dateutil +slowapi==0.1.9 + # via api-template smmap==5.0.2 # via gitdb sniffio==1.3.1 @@ -457,6 +464,7 @@ typing-extensions==4.15.0 # fastapi-mail # fastapi-pagination # libpass + # limits # mkdocstrings-python # mypy # openai diff --git a/requirements.txt b/requirements.txt index c2a4b4a6..70b4ddc1 100644 --- a/requirements.txt +++ b/requirements.txt @@ -50,6 +50,8 @@ colorama==0.4.6 ; sys_platform == 'win32' # uvicorn cryptography==46.0.3 # via fastapi-mail +deprecated==1.2.18 + # via limits dnspython==2.7.0 # via email-validator email-validator==2.3.0 @@ -105,6 +107,8 @@ jinja2==3.1.6 # sqladmin libpass==1.9.0 # via api-template +limits==5.6.0 + # via slowapi loguru==0.7.3 # via api-template mako==1.3.9 @@ -118,6 +122,8 @@ markupsafe==3.0.2 # wtforms mdurl==0.1.2 # via markdown-it-py +packaging==24.2 + # via limits pendulum==3.1.0 # via fastapi-cache2 prometheus-client==0.23.1 @@ -185,6 +191,8 @@ shellingham==1.5.4 # via typer six==1.17.0 # via python-dateutil +slowapi==0.1.9 + # via api-template sniffio==1.3.1 # via anyio sqladmin==0.20.1 @@ -217,6 +225,7 @@ typing-extensions==4.15.0 # fastapi-mail # fastapi-pagination # libpass + # limits # pydantic # pydantic-core # pydantic-extra-types @@ -248,5 +257,7 @@ websockets==15.0 # via uvicorn win32-setctime==1.2.0 ; sys_platform == 'win32' # via loguru +wrapt==1.17.2 + # via deprecated wtforms==3.1.2 # via sqladmin diff --git a/tests/integration/test_rate_limit_metrics.py b/tests/integration/test_rate_limit_metrics.py new file mode 100644 index 00000000..90c4c5f2 --- /dev/null +++ b/tests/integration/test_rate_limit_metrics.py @@ -0,0 +1,66 @@ +"""Test rate limit metrics integration.""" + +import pytest +from fastapi import status +from httpx import 
AsyncClient + + +@pytest.mark.integration +class TestRateLimitMetrics: + """Test rate limiting metrics.""" + + @pytest.mark.asyncio + async def test_rate_limit_metrics_recorded( + self, client: AsyncClient, mocker, monkeypatch + ) -> None: + """Ensure rate limit violations are recorded in metrics.""" + # Enable metrics (import here to avoid startup issues) + from app.config.settings import ( # noqa: PLC0415 + get_settings, + ) + + settings = get_settings() + monkeypatch.setattr(settings, "metrics_enabled", True) + monkeypatch.setattr(settings, "rate_limit_enabled", True) + + mocker.patch("app.managers.user.EmailManager.template_send") + + # Trigger rate limit by exceeding the register limit + for i in range(4): + await client.post( + "/register/", + json={ + "email": f"user{i}@example.com", + "first_name": "Test", + "last_name": "User", + "password": "password123!", + }, + ) + + # Get metrics + metrics_response = await client.get("/metrics") + + # Check if metrics endpoint is available + if metrics_response.status_code == status.HTTP_200_OK: + metrics_text = metrics_response.text + # Verify rate limit metric was recorded + assert "rate_limit_exceeded_total" in metrics_text + + @pytest.mark.asyncio + async def test_metrics_endpoint_available( + self, client: AsyncClient, monkeypatch + ) -> None: + """Ensure metrics endpoint is available when enabled.""" + # Enable metrics (import here to avoid startup issues) + from app.config.settings import ( # noqa: PLC0415 + get_settings, + ) + + settings = get_settings() + monkeypatch.setattr(settings, "metrics_enabled", True) + + # Get metrics + metrics_response = await client.get("/metrics") + + # Metrics should be available + assert metrics_response.status_code == status.HTTP_200_OK diff --git a/tests/integration/test_rate_limit_redis.py b/tests/integration/test_rate_limit_redis.py new file mode 100644 index 00000000..8ed2f8b0 --- /dev/null +++ b/tests/integration/test_rate_limit_redis.py @@ -0,0 +1,45 @@ +"""Test rate 
limiting with Redis backend.""" + +import pytest + +from app.config.settings import Settings + + +@pytest.mark.integration +class TestRateLimitRedis: + """Test rate limiting Redis backend settings.""" + + def test_rate_limit_storage_url_with_redis(self) -> None: + """Test rate_limit_storage_url property with Redis enabled.""" + # Create settings instance with both flags enabled + settings = Settings( + rate_limit_enabled=True, + redis_enabled=True, + redis_host="localhost", + redis_port=6379, + ) + + # Test the rate_limit_storage_url property returns Redis URL + assert "redis://localhost:6379" in settings.rate_limit_storage_url + + def test_rate_limit_storage_url_without_redis(self) -> None: + """Test rate_limit_storage_url property without Redis.""" + # Create settings with rate limiting enabled but Redis disabled + settings = Settings( + rate_limit_enabled=True, + redis_enabled=False, + ) + + # Test the rate_limit_storage_url property returns empty string + assert settings.rate_limit_storage_url == "" + + def test_rate_limit_storage_url_disabled(self) -> None: + """Test rate_limit_storage_url property when rate limiting disabled.""" + # Create settings with rate limiting disabled + settings = Settings( + rate_limit_enabled=False, + redis_enabled=True, + ) + + # Test the rate_limit_storage_url property returns empty string + assert settings.rate_limit_storage_url == "" diff --git a/tests/integration/test_rate_limiting.py b/tests/integration/test_rate_limiting.py new file mode 100644 index 00000000..616282d9 --- /dev/null +++ b/tests/integration/test_rate_limiting.py @@ -0,0 +1,226 @@ +"""Test rate limiting functionality.""" + +from collections.abc import Generator +from typing import TYPE_CHECKING + +import pytest +from fastapi import status +from httpx import AsyncClient + +if TYPE_CHECKING: + from slowapi import Limiter + + +@pytest.fixture(autouse=True) +def reset_rate_limiter() -> Generator["Limiter", None, None]: + """Reset rate limiter storage between tests 
and yield limiter instance.""" + # Import here to avoid circular imports + from app.rate_limit import limiter # noqa: PLC0415 + + # Reset the limiter's storage before each test + if hasattr(limiter, "_storage"): + limiter._storage.reset() + yield limiter + # Clean up after test + if hasattr(limiter, "_storage"): + limiter._storage.reset() + + +@pytest.mark.integration +class TestRateLimiting: + """Test rate limiting on authentication endpoints.""" + + @pytest.mark.asyncio + async def test_register_rate_limit( + self, + client: AsyncClient, + mocker, + monkeypatch, + reset_rate_limiter, + ) -> None: + """Ensure register endpoint enforces 3/hour limit.""" + # Enable rate limiting for this test + monkeypatch.setattr(reset_rate_limiter, "enabled", True) + + # Mock email sending + mocker.patch("app.managers.user.EmailManager.template_send") + + # Make 3 successful requests (within limit) + for i in range(3): + response = await client.post( + "/register/", + json={ + "email": f"user{i}@example.com", + "first_name": "Test", + "last_name": "User", + "password": "password123!", + }, + ) + assert response.status_code == status.HTTP_201_CREATED + + # 4th request should be rate limited + response = await client.post( + "/register/", + json={ + "email": "user4@example.com", + "first_name": "Test", + "last_name": "User", + "password": "password123!", + }, + ) + assert response.status_code == status.HTTP_429_TOO_MANY_REQUESTS + assert "detail" in response.json() + + @pytest.mark.asyncio + async def test_login_rate_limit( + self, + client: AsyncClient, + test_db, + mocker, + monkeypatch, + reset_rate_limiter, + ) -> None: + """Ensure login endpoint enforces 5/15minutes limit.""" + # Enable rate limiting for this test + monkeypatch.setattr(reset_rate_limiter, "enabled", True) + + # Mock email sending + mocker.patch("app.managers.user.EmailManager.template_send") + + # Make 5 login attempts (within limit) + for _ in range(5): + response = await client.post( + "/login/", + json={ + 
"email": "test@example.com", + "password": "wrongpassword", + }, + ) + # May fail auth (400, 401) but shouldn't be rate limited + assert response.status_code in [ + status.HTTP_400_BAD_REQUEST, + status.HTTP_401_UNAUTHORIZED, + status.HTTP_200_OK, + ] + + # 6th attempt should be rate limited + response = await client.post( + "/login/", + json={ + "email": "test@example.com", + "password": "wrongpassword", + }, + ) + assert response.status_code == status.HTTP_429_TOO_MANY_REQUESTS + + @pytest.mark.asyncio + async def test_forgot_password_rate_limit( + self, + client: AsyncClient, + mocker, + monkeypatch, + reset_rate_limiter, + ) -> None: + """Ensure forgot-password endpoint enforces 3/hour limit.""" + # Enable rate limiting for this test + monkeypatch.setattr(reset_rate_limiter, "enabled", True) + + # Mock email sending + mocker.patch("app.managers.email.EmailManager.template_send") + + # Make 3 requests (within limit) + for _ in range(3): + response = await client.post( + "/forgot-password/", + json={"email": "test@example.com"}, + ) + assert response.status_code == status.HTTP_200_OK + + # 4th request should be rate limited + response = await client.post( + "/forgot-password/", + json={"email": "test@example.com"}, + ) + assert response.status_code == status.HTTP_429_TOO_MANY_REQUESTS + + @pytest.mark.asyncio + async def test_rate_limit_headers( + self, + client: AsyncClient, + mocker, + monkeypatch, + reset_rate_limiter, + ) -> None: + """Ensure rate limit responses include Retry-After header.""" + # Enable rate limiting for this test + monkeypatch.setattr(reset_rate_limiter, "enabled", True) + + mocker.patch("app.managers.user.EmailManager.template_send") + + # Exceed rate limit + for i in range(4): + await client.post( + "/register/", + json={ + "email": f"user{i}@example.com", + "first_name": "Test", + "last_name": "User", + "password": "password123!", + }, + ) + + response = await client.post( + "/register/", + json={ + "email": "user5@example.com", + 
"first_name": "Test", + "last_name": "User", + "password": "password123!", + }, + ) + + assert response.status_code == status.HTTP_429_TOO_MANY_REQUESTS + # slowapi should include Retry-After header + headers_lower = {k.lower(): v for k, v in response.headers.items()} + assert "retry-after" in headers_lower + + @pytest.mark.asyncio + async def test_rate_limit_error_message( + self, + client: AsyncClient, + mocker, + monkeypatch, + reset_rate_limiter, + ) -> None: + """Ensure rate limit error message is user-friendly.""" + # Enable rate limiting for this test + monkeypatch.setattr(reset_rate_limiter, "enabled", True) + + mocker.patch("app.managers.user.EmailManager.template_send") + + # Exceed rate limit + for i in range(4): + await client.post( + "/register/", + json={ + "email": f"user{i}@example.com", + "first_name": "Test", + "last_name": "User", + "password": "password123!", + }, + ) + + response = await client.post( + "/register/", + json={ + "email": "user5@example.com", + "first_name": "Test", + "last_name": "User", + "password": "password123!", + }, + ) + + assert response.status_code == status.HTTP_429_TOO_MANY_REQUESTS + data = response.json() + assert "detail" in data + assert "rate limit" in data["detail"].lower() diff --git a/tests/unit/test_logging.py b/tests/unit/test_logging.py index d06ce062..39aba687 100644 --- a/tests/unit/test_logging.py +++ b/tests/unit/test_logging.py @@ -551,14 +551,6 @@ async def test_middleware_redacts_sensitive_query_parameters( assert "bad" not in log_message assert result == mock_response - def test_redact_query_returns_empty_for_blank_query(self) -> None: - """Test redaction helper returns empty for blank query.""" - assert LoggingMiddleware._redact_query("") == "" - - def test_redact_query_returns_original_when_no_params(self) -> None: - """Test redaction helper keeps original when parse yields no params.""" - assert LoggingMiddleware._redact_query("&") == "&" - async def test_middleware_logs_without_query_parameters( 
self, mocker: MockerFixture ) -> None: diff --git a/tests/unit/test_logging_helpers.py b/tests/unit/test_logging_helpers.py new file mode 100644 index 00000000..fc47b205 --- /dev/null +++ b/tests/unit/test_logging_helpers.py @@ -0,0 +1,66 @@ +"""Test logging helper functions.""" + +from __future__ import annotations + +import pytest + +from app.middleware.logging_middleware import LoggingMiddleware + + +@pytest.mark.unit +class TestRedactQuery: + """Test query parameter redaction helper.""" + + def test_redact_query_returns_empty_for_blank_query(self) -> None: + """Test redaction helper returns empty for blank query.""" + assert LoggingMiddleware._redact_query("") == "" + + def test_redact_query_returns_original_when_no_params(self) -> None: + """Test redaction helper keeps original when parse yields no params.""" + assert LoggingMiddleware._redact_query("&") == "&" + + def test_redact_query_redacts_sensitive_keys(self) -> None: + """Test redaction of sensitive query parameters.""" + assert "token=REDACTED" in LoggingMiddleware._redact_query( + "token=secret123" + ) + assert "api_key=REDACTED" in LoggingMiddleware._redact_query( + "api_key=mykey" + ) + assert "code=REDACTED" in LoggingMiddleware._redact_query("code=abc123") + + def test_redact_query_preserves_non_sensitive_keys(self) -> None: + """Test non-sensitive parameters are preserved.""" + result = LoggingMiddleware._redact_query("name=test&page=1") + assert "name=test" in result + assert "page=1" in result + + def test_redact_query_handles_mixed_keys(self) -> None: + """Test redaction with both sensitive and non-sensitive parameters.""" + result = LoggingMiddleware._redact_query( + "name=test&token=secret&page=1" + ) + assert "name=test" in result + assert "token=REDACTED" in result + assert "page=1" in result + + def test_redact_query_handles_multiple_sensitive_keys(self) -> None: + """Test redaction of multiple sensitive parameters.""" + result = LoggingMiddleware._redact_query( + 
"token=secret&api_key=mykey&code=abc123" + ) + assert "token=REDACTED" in result + assert "api_key=REDACTED" in result + assert "code=REDACTED" in result + + def test_redact_query_handles_blank_values(self) -> None: + """Test redaction with blank parameter values.""" + result = LoggingMiddleware._redact_query("token=&name=test") + assert "token=REDACTED" in result + assert "name=test" in result + + def test_redact_query_case_insensitive(self) -> None: + """Test redaction is case-insensitive for key names.""" + result = LoggingMiddleware._redact_query("TOKEN=secret&Token=test") + assert "TOKEN=REDACTED" in result + assert "Token=REDACTED" in result diff --git a/tests/unit/test_rate_limit_config.py b/tests/unit/test_rate_limit_config.py new file mode 100644 index 00000000..b546bf3f --- /dev/null +++ b/tests/unit/test_rate_limit_config.py @@ -0,0 +1,39 @@ +"""Test rate limit configuration.""" + +import re + +from app.rate_limit.config import RateLimits + + +class TestRateLimitConfig: + """Test rate limit configuration values.""" + + def test_rate_limit_format(self) -> None: + """Ensure all rate limits follow correct format.""" + # Dynamically get all RateLimits string constants + limits = [ + getattr(RateLimits, attr) + for attr in dir(RateLimits) + if not attr.startswith("_") + and isinstance(getattr(RateLimits, attr), str) + ] + + for limit in limits: + assert "/" in limit + count, period = limit.split("/") + assert count.isdigit() + assert int(count) > 0, f"Count must be positive: {limit}" + # Period: [number]unit (second/minute/hour/day + optional 's') + assert re.match( + r"^\d*\s*(seconds?|minutes?|hours?|days?)$", period + ), f"Invalid period format: {period}" + + def test_conservative_limits(self) -> None: + """Ensure limits match SECURITY-REVIEW.md requirements.""" + assert RateLimits.REGISTER == "3/hour" + assert RateLimits.LOGIN == "5/15minutes" + assert RateLimits.FORGOT_PASSWORD == "3/hour" # noqa: S105 + assert RateLimits.VERIFY == "10/minute" + assert 
RateLimits.REFRESH == "20/minute" + assert RateLimits.RESET_PASSWORD_GET == "10/minute" # noqa: S105 + assert RateLimits.RESET_PASSWORD_POST == "5/hour" # noqa: S105 diff --git a/tests/unit/test_rate_limit_helpers.py b/tests/unit/test_rate_limit_helpers.py new file mode 100644 index 00000000..f6a1e233 --- /dev/null +++ b/tests/unit/test_rate_limit_helpers.py @@ -0,0 +1,45 @@ +"""Test rate limit helper functions.""" + +from app.rate_limit.handlers import DEFAULT_RETRY_AFTER, parse_retry_after + + +class TestParseRetryAfter: + """Test the parse_retry_after helper function.""" + + def test_parse_hour_format(self) -> None: + """Test parsing '3 per 1 hour' format.""" + assert parse_retry_after("3 per 1 hour") == DEFAULT_RETRY_AFTER + + def test_parse_minutes_with_number(self) -> None: + """Test parsing '5/15minutes' format.""" + assert parse_retry_after("5/15minutes") == "900" + + def test_parse_minute_singular(self) -> None: + """Test parsing '10/minute' format.""" + assert parse_retry_after("10/minute") == "60" + + def test_parse_second_format(self) -> None: + """Test parsing second-based limits.""" + assert parse_retry_after("5/second") == "1" + assert parse_retry_after("10/10seconds") == "10" + + def test_parse_day_format(self) -> None: + """Test parsing day-based limits.""" + assert parse_retry_after("1/day") == "86400" + assert parse_retry_after("3/2days") == "172800" + + def test_parse_invalid_format_no_slash(self) -> None: + """Test invalid format without slash returns default.""" + assert parse_retry_after("invalid") == DEFAULT_RETRY_AFTER + + def test_parse_invalid_format_no_match(self) -> None: + """Test format that doesn't match regex returns default.""" + assert parse_retry_after("5/@@@") == DEFAULT_RETRY_AFTER + + def test_parse_unknown_unit(self) -> None: + """Test unknown time unit returns default.""" + assert parse_retry_after("5/unknown") == DEFAULT_RETRY_AFTER + + def test_parse_empty_string(self) -> None: + """Test empty string returns default.""" + 
assert parse_retry_after("") == DEFAULT_RETRY_AFTER diff --git a/tests/unit/test_rate_limit_init.py b/tests/unit/test_rate_limit_init.py new file mode 100644 index 00000000..8de96861 --- /dev/null +++ b/tests/unit/test_rate_limit_init.py @@ -0,0 +1,57 @@ +"""Test rate limiter initialization logging.""" + +import pytest + +import app.rate_limit + + +@pytest.mark.unit +class TestRateLimitInitialization: + """Test rate limiter initialization logging.""" + + def test_redis_initialization_logging(self, mocker, monkeypatch) -> None: + """Test Redis storage initialization logs correctly.""" + # Mock the logger to capture calls + mock_logger = mocker.patch("app.rate_limit.category_logger.info") + + # Reset the cached limiter to force re-initialization + monkeypatch.setattr(app.rate_limit, "_limiter", None) + + # Mock settings to return Redis enabled + mock_settings = mocker.Mock() + mock_settings.rate_limit_enabled = True + mock_settings.redis_enabled = True + mock_settings.redis_url = "redis://localhost:6379/0" + mocker.patch("app.rate_limit.get_settings", return_value=mock_settings) + + # Call get_limiter to trigger initialization + app.rate_limit.get_limiter() + + # Verify Redis logging was called (lines 40-41) + mock_logger.assert_called_with( + "Rate limiting initialized with Redis storage", + mocker.ANY, # LogCategory.AUTH + ) + + def test_memory_initialization_logging(self, mocker, monkeypatch) -> None: + """Test in-memory storage initialization logs correctly.""" + # Mock the logger to capture calls + mock_logger = mocker.patch("app.rate_limit.category_logger.info") + + # Reset the cached limiter to force re-initialization + monkeypatch.setattr(app.rate_limit, "_limiter", None) + + # Mock settings to return in-memory mode + mock_settings = mocker.Mock() + mock_settings.rate_limit_enabled = True + mock_settings.redis_enabled = False + mocker.patch("app.rate_limit.get_settings", return_value=mock_settings) + + # Call get_limiter to trigger initialization + 
app.rate_limit.get_limiter() + + # Verify in-memory logging was called (line 49) + mock_logger.assert_called_with( + "Rate limiting initialized with in-memory storage", + mocker.ANY, # LogCategory.AUTH + ) diff --git a/uv.lock b/uv.lock index ad2148d6..1e213e56 100644 --- a/uv.lock +++ b/uv.lock @@ -111,6 +111,7 @@ dependencies = [ { name = "python-decouple" }, { name = "rich" }, { name = "rtoml" }, + { name = "slowapi" }, { name = "sqladmin", extra = ["full"] }, { name = "sqlalchemy", extra = ["asyncio"] }, { name = "typer" }, @@ -179,6 +180,7 @@ requires-dist = [ { name = "python-decouple", specifier = ">=3.8" }, { name = "rich", specifier = ">=13.9.4" }, { name = "rtoml", specifier = ">=0.11.0" }, + { name = "slowapi", specifier = ">=0.1.9" }, { name = "sqladmin", extras = ["full"], specifier = ">=0.20.1" }, { name = "sqlalchemy", extras = ["asyncio"], specifier = ">=2.0.36" }, { name = "typer", specifier = ">=0.12.5" }, @@ -1399,6 +1401,20 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/36/e9/a0aa60f5322814dd084a89614e9e31139702e342f8459ad8af1984a18168/librt-0.7.4-cp314-cp314t-win_arm64.whl", hash = "sha256:76b2ba71265c0102d11458879b4d53ccd0b32b0164d14deb8d2b598a018e502f", size = 39724, upload-time = "2025-12-15T16:52:29.836Z" }, ] +[[package]] +name = "limits" +version = "5.6.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "deprecated" }, + { name = "packaging" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/bb/e5/c968d43a65128cd54fb685f257aafb90cd5e4e1c67d084a58f0e4cbed557/limits-5.6.0.tar.gz", hash = "sha256:807fac75755e73912e894fdd61e2838de574c5721876a19f7ab454ae1fffb4b5", size = 182984, upload-time = "2025-09-29T17:15:22.689Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/40/96/4fcd44aed47b8fcc457653b12915fcad192cd646510ef3f29fd216f4b0ab/limits-5.6.0-py3-none-any.whl", hash = 
"sha256:b585c2104274528536a5b68864ec3835602b3c4a802cd6aa0b07419798394021", size = 60604, upload-time = "2025-09-29T17:15:18.419Z" }, +] + [[package]] name = "loguru" version = "0.7.3" @@ -2875,6 +2891,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050, upload-time = "2024-12-04T17:35:26.475Z" }, ] +[[package]] +name = "slowapi" +version = "0.1.9" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "limits" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/a0/99/adfc7f94ca024736f061257d39118e1542bade7a52e86415a4c4ae92d8ff/slowapi-0.1.9.tar.gz", hash = "sha256:639192d0f1ca01b1c6d95bf6c71d794c3a9ee189855337b4821f7f457dddad77", size = 14028, upload-time = "2024-02-05T12:11:52.13Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/2b/bb/f71c4b7d7e7eb3fc1e8c0458a8979b912f40b58002b9fbf37729b8cb464b/slowapi-0.1.9-py3-none-any.whl", hash = "sha256:cfad116cfb84ad9d763ee155c1e5c5cbf00b0d47399a769b227865f5df576e36", size = 14670, upload-time = "2024-02-05T12:11:50.898Z" }, +] + [[package]] name = "smmap" version = "5.0.2"