diff --git a/.github/workflows/test.yaml b/.github/workflows/test.yaml index faf539e..8956a36 100644 --- a/.github/workflows/test.yaml +++ b/.github/workflows/test.yaml @@ -10,6 +10,7 @@ jobs: test-template: runs-on: ubuntu-latest strategy: + fail-fast: false matrix: config-file: - full.yaml @@ -25,9 +26,36 @@ jobs: with: python-version: "3.x" + - name: Install yq + run: | + sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 + sudo chmod +x /usr/local/bin/yq + - name: Install cookiecutter run: pip install cookiecutter - name: Run cookiecutter template run: | cookiecutter . --no-input --config-file tests/${{ matrix.config-file }} --output-dir /tmp + + - name: Get project name + id: project + run: | + PROJECT_NAME=$(yq -r '.default_context.package_name' tests/${{ matrix.config-file }}) + echo "name=$PROJECT_NAME" >> $GITHUB_OUTPUT + + - name: Run pytest + working-directory: /tmp/${{ steps.project.outputs.name }} + run: make pytest + + - name: Run ruff check + working-directory: /tmp/${{ steps.project.outputs.name }} + run: make ruff_check + + - name: Run black format check + working-directory: /tmp/${{ steps.project.outputs.name }} + run: make black_check + + - name: Run mypy + working-directory: /tmp/${{ steps.project.outputs.name }} + run: make mypy_check diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..ecb5e96 --- /dev/null +++ b/.gitignore @@ -0,0 +1 @@ +workspaces/ diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 0000000..bb310d5 --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,9 @@ +# Agent Instructions + +This project is a Cookiecutter template used to generate new Python projects. It is extremely dynamic, with many optional settings. Make sure to review the README.md for more context, as well as the AGENTS.md file in the project template itself (`{{cookiecutter.__package_slug}}/AGENTS.md`). + +Since this is a Cookiecutter template, you should expect to encounter Jinja2 template blocks in various files. + +When asked to test functionality that requires generating a new project from the template, create the generated project in the `workspaces` directory. + +When creating new files for optional services, make sure to include them in the post_gen_project.py configuration so that unneeded files are removed. For example, if the caching functionality is not enabled, the hook should remove all of the caching-related files. diff --git a/README.md b/README.md index 9657d11..21870cf 100644 --- a/README.md +++ b/README.md @@ -86,6 +86,17 @@ Pick and choose the features you need. 
Unused components are completely removed - Type-safe configuration using Pydantic for automatic validation of input data, with configurable queue sizes, worker counts, and graceful shutdown handling - Perfect for CPU-intensive workloads like data processing, image manipulation, scientific computing, and batch operations that need to scale beyond single-threaded execution +### Caching + +**[aiocache](https://aiocache.readthedocs.io/) Integration** + +- High-performance async caching library with support for multiple backends including Redis, Memcached, and in-memory storage, providing millisecond-level response times for frequently accessed data +- Automatic cache configuration and connection management with separate cache instances for different TTL requirements: default (5 minutes), persistent (1 hour), and custom durations for specific use cases +- Decorator-based caching with `@cached` for effortless function result memoization, automatically serializing complex Python objects including Pydantic models, dataclasses, and custom types +- Built-in cache warming on application startup for Celery workers and web servers, pre-populating critical data to eliminate cold-start latency and ensure consistent performance from the first request +- Type-safe settings configuration for cache behavior including host, port, TTL values, and enable/disable flags, with automatic validation and clear error messages for misconfigurations +- Production-ready Redis integration with connection pooling, automatic reconnection handling, and graceful degradation when cache is unavailable, preventing cascading failures + ### Database & ORM **[SQLAlchemy](https://www.sqlalchemy.org/) + [Alembic](https://alembic.sqlalchemy.org/en/latest/)** @@ -156,6 +167,7 @@ Every generated project includes documentation tailored to your selected feature - **Developer Guide Hub**: Organized documentation index in `docs/dev/` with dedicated guides for each enabled feature - **FastAPI Documentation**: Integration guide covering static file serving, Docker configuration, and FastAPI dependency system usage - **Database Documentation**: SQLAlchemy and Alembic guide covering model organization, migration creation using Make commands, FastAPI integration, and automatic schema diagram generation with Paracelsus +- **Caching Documentation**: aiocache integration guide covering cache configuration, decorator usage, multiple TTL strategies, and cache warming for optimal performance - **Task Processing Guides**: Documentation for Celery (worker and beat configuration, Docker setup) and QuasiQueue (configuration file location, Docker images) - **CLI Documentation**: Guide showing how to use the generated CLI and where to add new commands - **Docker Documentation**: Container setup documentation covering image sources, development environment, and registry publishing @@ -167,7 +179,8 @@ Every generated project includes documentation tailored to your selected feature The template intelligently configures itself based on your choices through sophisticated post-generation hooks: - **Surgical Dependency Management**: Only includes packages you actually need in `pyproject.toml`, with proper optional dependency groups for dev tools, testing, and feature-specific requirements, avoiding bloated dependency trees -- **Conditional Docker Services**: Automatically generates docker-compose.yaml with only the services your project requires: PostgreSQL for SQLAlchemy, Redis for Celery/caching, with properly configured health checks, volumes, and networking +- 
**Conditional Docker Services**: Automatically generates docker-compose.yaml with only the services your project requires: PostgreSQL for SQLAlchemy, Redis for Celery/aiocache caching, with properly configured health checks, volumes, and networking +- **Cache-Aware Configuration**: When aiocache is enabled, automatically configures Redis connection settings, multiple cache instances with different TTL strategies, and cache warming hooks for FastAPI and Celery startup events - **Database-Aware Configuration**: Sets up appropriate connection strings, pool sizes, and dialect-specific settings for PostgreSQL or SQLite, with Alembic migrations configured for cross-database compatibility - **Feature-Driven CI/CD Workflows**: GitHub Actions workflows are conditionally installed based on your feature selection: container building and publishing only when Docker is enabled, PyPI publishing workflow only when configured, eliminating unused automation files from your repository - **Framework Integration**: Automatically wires together selected components (FastAPI with SQLAlchemy database dependencies, Celery with Redis broker, CLI with async command support) providing working examples of how pieces fit together diff --git a/cookiecutter.json b/cookiecutter.json index c94e60b..6f5ff80 100644 --- a/cookiecutter.json +++ b/cookiecutter.json @@ -3,6 +3,7 @@ "author_name": "", "short_description": "", "python_version": "3.14", + "github_org": "EXAMPLE", "license": [ "All Rights Reserved", "MIT license", @@ -15,7 +16,7 @@ "include_sqlalchemy": "y/N", "include_quasiqueue": "y/N", "include_jinja2": "y/N", - "include_dogpile": "y/N", + "include_aiocache": "y/N", "include_celery": "y/N", "include_docker": "y/N", "include_github_actions": "y/N", diff --git a/hooks/post_gen_project.py b/hooks/post_gen_project.py index 4a1cb61..eae75b6 100644 --- a/hooks/post_gen_project.py +++ b/hooks/post_gen_project.py @@ -10,7 +10,7 @@ INCLUDE_DOCKER={% if cookiecutter.include_docker == "y" %}True{% else %}False{% endif %} INCLUDE_QUASIQUEUE={% if cookiecutter.include_quasiqueue == "y" %}True{% else %}False{% endif %} INCLUDE_JINJA2={% if cookiecutter.include_jinja2 == "y" %}True{% else %}False{% endif %} -INCLUDE_DOGPILE={% if cookiecutter.include_dogpile == "y" %}True{% else %}False{% endif %} +INCLUDE_AIOCACHE={% if cookiecutter.include_aiocache == "y" %}True{% else %}False{% endif %} INCLUDE_SQLALCHEMY={% if cookiecutter.include_sqlalchemy == "y" %}True{% else %}False{% endif %} INCLUDE_GITHUB_ACTIONS={% if cookiecutter.include_github_actions == "y" %}True{% else %}False{% endif %} INCLUDE_REQUIREMENTS_FILES={% if cookiecutter.include_requirements_files == "y" %}True{% else %}False{% endif %} @@ -30,6 +30,7 @@ remove_paths.add(f'dockerfile.www') remove_paths.add(f'docker/www') remove_paths.add(f'docs/dev/api.md') + remove_paths.add(f'tests/test_www.py') if INCLUDE_CELERY: docker_containers.add('celery') @@ -38,6 +39,7 @@ remove_paths.add(f'dockerfile.celery') remove_paths.add(f'docker/celery') remove_paths.add(f'docs/dev/celery.md') + remove_paths.add(f'tests/test_celery.py') if INCLUDE_QUASIQUEUE: docker_containers.add('qq') @@ -45,6 +47,7 @@ remove_paths.add(f'{PACKAGE_SLUG}/qq.py') remove_paths.add(f'dockerfile.qq') remove_paths.add(f'docs/dev/quasiqueue.md') + remove_paths.add(f'tests/test_qq.py') if not INCLUDE_SQLALCHEMY: remove_paths.add(f'{PACKAGE_SLUG}/models') @@ -59,16 +62,23 @@ if not INCLUDE_CLI: remove_paths.add(f'{PACKAGE_SLUG}/cli.py') remove_paths.add(f'docs/dev/cli.md') + 
remove_paths.add(f'tests/test_cli.py') if not INCLUDE_JINJA2: remove_paths.add(f'{PACKAGE_SLUG}/templates') remove_paths.add(f'{PACKAGE_SLUG}/services/jinja.py') remove_paths.add(f'docs/dev/templates.md') + remove_paths.add(f'tests/services/test_jinja.py') -if not INCLUDE_DOGPILE: +if not INCLUDE_AIOCACHE: + remove_paths.add(f'{PACKAGE_SLUG}/conf/cache.py') remove_paths.add(f'{PACKAGE_SLUG}/services/cache.py') + remove_paths.add(f'tests/services/test_cache.py') remove_paths.add(f'docs/dev/cache.md') +# Always include test_settings.py as it tests core settings functionality +# that exists regardless of optional features + if not INCLUDE_DOCKER: remove_paths.add('.dockerignore') remove_paths.add('compose.yaml') diff --git a/tests/bare.yaml b/tests/bare.yaml index c8cf98e..2dbe069 100644 --- a/tests/bare.yaml +++ b/tests/bare.yaml @@ -8,7 +8,7 @@ default_context: include_fastapi: "n" include_sqlalchemy: "n" include_jinja2: "n" - include_dogpile: "n" + include_aiocache: "n" include_celery: "n" include_docker: "n" include_github_actions: "n" diff --git a/tests/full.yaml b/tests/full.yaml index 964c2ea..bc39810 100644 --- a/tests/full.yaml +++ b/tests/full.yaml @@ -9,7 +9,7 @@ default_context: include_sqlalchemy: "y" include_quasiqueue: "y" include_jinja2: "y" - include_dogpile: "y" + include_aiocache: "y" include_celery: "y" include_docker: "y" include_github_actions: "y" diff --git a/tests/library.yaml b/tests/library.yaml index 57d7f51..6cfe183 100644 --- a/tests/library.yaml +++ b/tests/library.yaml @@ -8,7 +8,7 @@ default_context: include_fastapi: "n" include_sqlalchemy: "n" include_jinja2: "n" - include_dogpile: "n" + include_aiocache: "n" include_celery: "n" include_docker: "n" include_github_actions: "y" diff --git a/{{cookiecutter.__package_slug}}/README.md b/{{cookiecutter.__package_slug}}/README.md index 673919f..068aef2 100644 --- a/{{cookiecutter.__package_slug}}/README.md +++ b/{{cookiecutter.__package_slug}}/README.md @@ -22,3 +22,27 @@ pip install {{ cookiecutter.package_name }} ``` {%- endif %} + +## Developer Documentation + +Comprehensive developer documentation is available in [`docs/dev/`](./docs/dev/) covering testing, configuration, deployment, and all project features. + +### Quick Start for Developers + +```bash +# Install development environment +make install +{%- if cookiecutter.include_docker == "y" %} + +# Start services with Docker +docker compose up -d +{%- endif %} + +# Run tests +make tests + +# Auto-fix formatting +make chores +``` + +See the [developer documentation](./docs/dev/README.md) for complete guides and reference. 
diff --git a/{{cookiecutter.__package_slug}}/compose.yaml b/{{cookiecutter.__package_slug}}/compose.yaml index c53b299..6bbb9dd 100644 --- a/{{cookiecutter.__package_slug}}/compose.yaml +++ b/{{cookiecutter.__package_slug}}/compose.yaml @@ -19,13 +19,17 @@ services: {%- if cookiecutter.include_celery == "y" %} CELERY_BROKER: redis://redis:6379/0 {%- endif %} -{%- if cookiecutter.include_sqlalchemy == "y" or cookiecutter.include_celery == "y" %} +{%- if cookiecutter.include_aiocache == "y" %} + CACHE_REDIS_HOST: redis + CACHE_REDIS_PORT: 6379 +{%- endif %} +{%- if cookiecutter.include_sqlalchemy == "y" or cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} depends_on: {%- endif %} {%- if cookiecutter.include_sqlalchemy == "y" %} - db {%- endif %} -{%- if cookiecutter.include_celery == "y" %} +{%- if cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} - redis {%- endif %} {%- endif %} @@ -46,13 +50,17 @@ services: {%- if cookiecutter.include_celery == "y" %} CELERY_BROKER: redis://redis:6379/0 {%- endif %} -{%- if cookiecutter.include_sqlalchemy == "y" or cookiecutter.include_celery == "y" %} +{%- if cookiecutter.include_aiocache == "y" %} + CACHE_REDIS_HOST: redis + CACHE_REDIS_PORT: 6379 +{%- endif %} +{%- if cookiecutter.include_sqlalchemy == "y" or cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} depends_on: {%- endif %} {%- if cookiecutter.include_sqlalchemy == "y" %} - db {%- endif %} -{%- if cookiecutter.include_celery == "y" %} +{%- if cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} - redis {%- endif %} @@ -71,13 +79,17 @@ services: {%- if cookiecutter.include_celery == "y" %} CELERY_BROKER: redis://redis:6379/0 {%- endif %} -{%- if cookiecutter.include_sqlalchemy == "y" or cookiecutter.include_celery == "y" %} +{%- if cookiecutter.include_aiocache == "y" %} + CACHE_REDIS_HOST: redis + CACHE_REDIS_PORT: 6379 +{%- endif %} +{%- if cookiecutter.include_sqlalchemy == "y" or cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} depends_on: {%- endif %} {%- if cookiecutter.include_sqlalchemy == "y" %} - db {%- endif %} -{%- if cookiecutter.include_celery == "y" %} +{%- if cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} - redis {%- endif %} {%- endif %} @@ -96,18 +108,22 @@ services: {%- if cookiecutter.include_celery == "y" %} CELERY_BROKER: redis://redis:6379/0 {%- endif %} -{%- if cookiecutter.include_sqlalchemy == "y" or cookiecutter.include_celery == "y" %} +{%- if cookiecutter.include_aiocache == "y" %} + CACHE_REDIS_HOST: redis + CACHE_REDIS_PORT: 6379 +{%- endif %} +{%- if cookiecutter.include_sqlalchemy == "y" or cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} depends_on: {%- endif %} {%- if cookiecutter.include_sqlalchemy == "y" %} - db {%- endif %} -{%- if cookiecutter.include_celery == "y" %} +{%- if cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} - redis {%- endif %} {%- endif %} -{% if cookiecutter.include_celery == "y" or cookiecutter.include_dogpile == "y" %} +{% if cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} redis: image: redis {%- endif %} diff --git a/{{cookiecutter.__package_slug}}/docs/dev/README.md b/{{cookiecutter.__package_slug}}/docs/dev/README.md index 092e027..f10f72e 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/README.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/README.md @@ -1,38 +1,116 @@ -# 
Developer Readme +# Developer Documentation -{%- if cookiecutter.include_fastapi == "y" %} +Welcome to the developer documentation! This directory contains comprehensive guides for working with this project's features, tools, and workflows. + +## Getting Started + +New to this project? Start here: -1. [Rest API](./api.md) +1. **[Makefile](./makefile.md)** - Essential commands for development, testing, and building +2. **[Dependencies](./dependencies.md)** - Managing project dependencies, virtual environments, and package installation +3. **[Settings](./settings.md)** - Environment configuration and settings management +{%- if cookiecutter.include_docker == "y" %} +4. **[Docker](./docker.md)** - Containerization, deployment, and local development with Docker {%- endif %} -{%- if cookiecutter.include_dogpile == "y" %} -1. [Caching](./cache.md) + +## Core Features + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +### [Database](./database.md) + +SQLAlchemy ORM integration, models, migrations with Alembic, and database patterns. {%- endif %} -{%- if cookiecutter.include_celery == "y" %} -1. [Celery](./celery.md) +{%- if cookiecutter.include_aiocache == "y" %} + +### [Caching](./cache.md) + +Redis-backed caching with aiocache for performance optimization. +{%- endif %} +{%- if cookiecutter.include_fastapi == "y" %} + +### [REST API](./api.md) + +FastAPI web framework, endpoints, middleware, and API development. {%- endif %} {%- if cookiecutter.include_cli == "y" %} -1. [CLI](./cli.md) + +### [CLI](./cli.md) + +Command-line interface built with Typer for management and automation tasks. {%- endif %} -{%- if cookiecutter.include_sqlalchemy == "y" %} -1. [Database](./database.md) +{%- if cookiecutter.include_celery == "y" %} + +### [Celery](./celery.md) + +Distributed task queue for background processing and asynchronous jobs. {%- endif %} +{%- if cookiecutter.include_quasiqueue == "y" %} -1. [Dependencies](./dependencies.md) +### [QuasiQueue](./quasiqueue.md) -{%- if cookiecutter.include_docker == "y" %} +Lightweight message queue for simpler asynchronous task handling. +{%- endif %} +{%- if cookiecutter.include_jinja2 == "y" %} -1. [Docker](./docker.md) +### [Templates](./templates.md) + +Jinja2 templating for HTML rendering and template-based content generation. {%- endif %} + +## Development Practices + +### [Testing](./testing.md) + +Comprehensive testing guide covering pytest, fixtures, async testing, mocking, and code coverage. + +### [Documentation](./documentation.md) + +Standards and best practices for writing and maintaining project documentation. {%- if cookiecutter.include_github_actions == "y" %} -1. [Github Actions](./github.md) + +### [GitHub Actions](./github.md) + +CI/CD workflows for testing, linting, building, and deployment automation. {%- endif %} {%- if cookiecutter.publish_to_pypi == "y" %} -1. [PyPI](./pypi.md) + +### [PyPI](./pypi.md) + +Publishing packages to the Python Package Index. {%- endif %} -1. 
[Settings](./settings.md) +## Project-Specific Documentation -{%- if cookiecutter.include_jinja2 == "y" %} +As your project grows, add documentation for: + +- **Architecture** - System design, component interactions, and architectural decisions +- **API Reference** - Detailed API endpoints, request/response formats, and authentication +- **Deployment** - Production deployment procedures, monitoring, and operations +- **Troubleshooting** - Common issues, debugging techniques, and solutions +- **Contributing** - Guidelines for contributors and development workflows + +## Documentation Standards + +All documentation in this project follows the standards outlined in [documentation.md](./documentation.md). When adding new documentation: -1. [Templates](./template.md) +- Use real, working code examples from this project +- Include practical usage patterns +- Test all code examples before publishing +- Keep documentation updated as code changes +- Follow the established structure and style + +## Quick Reference + +- **Setup**: Run `make install` to set up your development environment +- **Testing**: Run `make tests` for full test suite, see [testing.md](./testing.md) for details +- **Formatting**: Run `make chores` before committing to fix formatting issues +- **Configuration**: See [settings.md](./settings.md) for environment variables and settings +{%- if cookiecutter.include_docker == "y" %} +- **Local Development**: Use `docker compose up` for local services, see [docker.md](./docker.md) {%- endif %} +- **All Make Commands**: See [makefile.md](./makefile.md) for complete reference + +--- + +*This documentation is maintained by the development team. If you find issues or have suggestions, please contribute improvements!* diff --git a/{{cookiecutter.__package_slug}}/docs/dev/api.md b/{{cookiecutter.__package_slug}}/docs/dev/api.md index 1ea982b..fa6b494 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/api.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/api.md @@ -1,14 +1,378 @@ -# API +# FastAPI -This project uses [FastAPI](https://fastapi.tiangolo.com/). +This project uses [FastAPI](https://fastapi.tiangolo.com/), a modern, fast web framework for building APIs with Python based on standard Python type hints. -Static files can be added to `{{ cookiecutter.__package_slug }}/static` and will be passed through the `/static/` endpoint. +## Application Structure +The FastAPI application is defined in `{{cookiecutter.__package_slug}}/www.py` and includes: + +- **Automatic API documentation** at `/docs` (Swagger UI) and `/redoc` (ReDoc) +- **Static file serving** from `{{cookiecutter.__package_slug}}/static/` via the `/static/` endpoint +- **OpenAPI schema** available at `/openapi.json` +- **Root redirect** from `/` to `/docs` for convenient access to documentation + +## Configuration + +### Environment Variables + +FastAPI-specific settings can be configured through environment variables in the Settings class: + +- **PROJECT_NAME**: The name of the project (displayed in API docs) +- **DEBUG**: Enable debug mode (default: `False`) + - Shows detailed error messages + - Enables hot-reload in development + +### Startup Events + +The application automatically initializes required services on startup: +{%- if cookiecutter.include_aiocache == "y" %} + +- **Cache initialization**: If aiocache is enabled, caches are configured and ready +{%- endif %} +{%- if cookiecutter.include_sqlalchemy == "y" %} + +Note: Database connections are NOT initialized at startup. 
Instead, they are established lazily when first accessed via dependency injection (see Database Integration section below). +{%- endif %} + +## Adding Routes + +### Basic Route + +Create a new route in `{{cookiecutter.__package_slug}}/www.py`: + +```python +@app.get("/hello") +async def hello_world(): + return {"message": "Hello, World!"} +``` + +### Route with Path Parameters + +```python +@app.get("/users/{user_id}") +async def get_user(user_id: int): + # FastAPI automatically validates user_id is an integer + return {"user_id": user_id, "name": "John Doe"} +``` + +### Route with Query Parameters + +```python +from typing import Optional + +@app.get("/items") +async def list_items(skip: int = 0, limit: int = 10, search: Optional[str] = None): + # Query params: ?skip=0&limit=10&search=foo + return {"skip": skip, "limit": limit, "search": search} +``` + +### Route with Request Body + +```python +from pydantic import BaseModel + +class UserCreate(BaseModel): + username: str + email: str + full_name: Optional[str] = None + +@app.post("/users") +async def create_user(user: UserCreate): + # FastAPI automatically validates and deserializes the JSON body + return {"user": user.model_dump(), "id": 123} +``` + +## Response Models + +Use Pydantic models to define response schemas: + +```python +from datetime import datetime + +class UserResponse(BaseModel): + id: int + username: str + email: str + created_at: datetime + +@app.get("/users/{user_id}", response_model=UserResponse) +async def get_user(user_id: int): + # FastAPI ensures the response matches UserResponse schema + return { + "id": user_id, + "username": "johndoe", + "email": "john@example.com", + "created_at": datetime.now() + } +``` + +## Dependency Injection + +FastAPI's dependency injection system allows you to share logic across routes: + +```python +from fastapi import Depends, Header, HTTPException + +async def get_current_user(token: str = Header(...)): + # Validate token and get user + if not token: + raise HTTPException(status_code=401, detail="Not authenticated") + return {"username": "johndoe"} + +@app.get("/me") +async def read_current_user(current_user: dict = Depends(get_current_user)): + return current_user +``` + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +## Database Integration + +If SQLAlchemy is enabled, use dependency injection for database sessions: + +```python +from sqlalchemy import select +from sqlalchemy.ext.asyncio import AsyncSession +from {{cookiecutter.__package_slug}}.services.db import get_session_depends + +@app.get("/users") +async def list_users(session: AsyncSession = Depends(get_session_depends)): + result = await session.execute(select(User)) + users = result.scalars().all() + return users +``` + +{%- endif %} + +## Error Handling + +### Custom Exception Handlers + +```python +from fastapi import HTTPException, Request +from fastapi.responses import JSONResponse + +@app.exception_handler(ValueError) +async def value_error_handler(request: Request, exc: ValueError): + return JSONResponse( + status_code=400, + content={"detail": str(exc)} + ) +``` + +### Raising HTTP Exceptions + +```python +from fastapi import HTTPException + +@app.get("/users/{user_id}") +async def get_user(user_id: int): + user = await fetch_user(user_id) + if not user: + raise HTTPException(status_code=404, detail="User not found") + return user +``` + +## Static Files + +Static files are served from `{{cookiecutter.__package_slug}}/static/`: + +1. 
**Add files** to the `static/` directory: + + ``` + {{cookiecutter.__package_slug}}/static/ + ├── css/ + │ └── styles.css + ├── js/ + │ └── app.js + └── images/ + └── logo.png + ``` + +2. **Access files** via the `/static/` URL path: + - `http://localhost:8000/static/css/styles.css` + - `http://localhost:8000/static/images/logo.png` + +## Middleware + +Add middleware for cross-cutting concerns: + +```python +from fastapi.middleware.cors import CORSMiddleware + +app.add_middleware( + CORSMiddleware, + allow_origins=["*"], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) +``` + +## Background Tasks + +Run tasks in the background without blocking the response: + +```python +from fastapi import BackgroundTasks + +def send_email(email: str, message: str): + # Send email logic here + print(f"Sending email to {email}: {message}") + +@app.post("/send-notification") +async def send_notification( + email: str, + background_tasks: BackgroundTasks +): + background_tasks.add_task(send_email, email, "Hello from FastAPI!") + return {"message": "Notification will be sent"} +``` + +## Testing + +### Using the FastAPI Client Fixture + +The project includes a `fastapi_client` fixture in `tests/conftest.py` that provides a TestClient instance. Use this fixture in your tests: + +```python +# tests/conftest.py +import pytest_asyncio +from fastapi.testclient import TestClient +from {{cookiecutter.__package_slug}}.www import app + + +@pytest_asyncio.fixture +async def fastapi_client(): + """Fixture to create a FastAPI test client.""" + client = TestClient(app) + yield client +``` + +### Writing Tests with the Fixture + +Use the `fastapi_client` fixture in your test functions: + +```python +# tests/test_www.py + +def test_root_redirects_to_docs(fastapi_client): + """Test that root path redirects to /docs.""" + response = fastapi_client.get("/", follow_redirects=False) + assert response.status_code == 307 # Temporary redirect + assert response.headers["location"] == "/docs" + + +def test_root_redirect_follows(fastapi_client): + """Test that following redirect from root goes to docs.""" + response = fastapi_client.get("/", follow_redirects=True) + assert response.status_code == 200 + # Should reach the OpenAPI docs page + + +def test_api_endpoint(fastapi_client): + """Test a custom API endpoint.""" + response = fastapi_client.get("/api/users/123") + assert response.status_code == 200 + data = response.json() + assert data["user_id"] == 123 +``` + +### Testing POST Requests + +```python +def test_create_user(fastapi_client): + """Test creating a user via POST.""" + user_data = { + "username": "testuser", + "email": "test@example.com" + } + response = fastapi_client.post("/api/users", json=user_data) + assert response.status_code == 201 + data = response.json() + assert data["username"] == "testuser" + assert "id" in data +``` + +### Testing with Headers + +```python +def test_authenticated_endpoint(fastapi_client): + """Test endpoint that requires authentication.""" + headers = {"Authorization": "Bearer test-token"} + response = fastapi_client.get("/api/me", headers=headers) + assert response.status_code == 200 +``` + +## Running the Application + +### Development + +```bash +# Using uvicorn directly +uvicorn {{cookiecutter.__package_slug}}.www:app --reload --host 0.0.0.0 --port 8000 + +# The app is accessible at http://localhost:8000 +# API docs available at http://localhost:8000/docs +``` + +### Production + +```bash +# With more workers for production +uvicorn 
{{cookiecutter.__package_slug}}.www:app --host 0.0.0.0 --port 8000 --workers 4 + +# Or using gunicorn with uvicorn workers +gunicorn {{cookiecutter.__package_slug}}.www:app -w 4 -k uvicorn.workers.UvicornWorker +``` {%- if cookiecutter.include_docker == "y" %} -## Docker +### Docker -The FastAPI images are based off of the [Multi-Py Uvicorn Project](https://github.com/multi-py/python-uvicorn) and work for ARM and AMD out of the box. +If Docker is configured, use docker-compose: + +```bash +docker-compose up www +``` {%- endif %} + +## Best Practices + +1. **Use Response Models**: Always define Pydantic models for responses to ensure type safety and automatic documentation + +2. **Leverage Dependency Injection**: Use `Depends()` to share logic like authentication, database sessions, and configuration + +3. **Async All the Way**: Use `async def` for route handlers when performing I/O operations (database, external APIs, file operations) + +4. **Validate Input**: Leverage Pydantic's validation for request bodies and FastAPI's parameter validation for path and query parameters + +5. **Document Your API**: Add docstrings to route functions - they appear in the auto-generated docs: + + ```python + @app.get("/users") + async def list_users(): + """ + Retrieve a list of all users. + + Returns a paginated list of user objects with their basic information. + """ + return users + ``` + +6. **Use HTTP Status Codes**: Return appropriate status codes (201 for created, 204 for no content, etc.): + + ```python + from fastapi import status + + @app.post("/users", status_code=status.HTTP_201_CREATED) + async def create_user(user: UserCreate): + return {"user": user} + ``` + +7. **Separate Concerns**: Keep business logic separate from route handlers - use service layers or utility modules + +## References + +- [FastAPI Documentation](https://fastapi.tiangolo.com/) +- [FastAPI Tutorial](https://fastapi.tiangolo.com/tutorial/) +- [Pydantic Documentation](https://docs.pydantic.dev/) +- [Uvicorn Documentation](https://www.uvicorn.org/) diff --git a/{{cookiecutter.__package_slug}}/docs/dev/cache.md b/{{cookiecutter.__package_slug}}/docs/dev/cache.md index d423f06..bab648b 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/cache.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/cache.md @@ -1 +1,210 @@ -# Cache +# Caching + +This project uses [aiocache](https://aiocache.readthedocs.io/) for caching, providing both in-memory and Redis-backed cache backends with full async/await support. 
+ +## Configuration + +Caching is configured through the settings module with the following environment variables: + +### Cache Control + +- **CACHE_ENABLED**: Enable or disable caching (default: `True`) + - When set to `False`, all cache operations become no-ops without requiring code changes + +### Redis Configuration + +- **CACHE_REDIS_HOST**: Redis hostname (default: `None`) + - If not set, the persistent cache falls back to in-memory storage +- **CACHE_REDIS_PORT**: Redis port (default: `6379`) + +### Default TTLs + +- **CACHE_DEFAULT_TTL**: Default TTL for memory cache in seconds (default: `300` / 5 minutes) +- **CACHE_PERSISTENT_TTL**: Default TTL for persistent cache in seconds (default: `3600` / 1 hour) + +## Cache Backends + +Two cache backends are configured: + +### Memory Cache + +- **Alias**: `memory` +- **Implementation**: Always uses in-memory storage +- **Use case**: Fast, ephemeral caching for request-scoped or temporary data +- **Serializer**: Pickle +- **Default TTL**: 300 seconds (configurable via `CACHE_DEFAULT_TTL`) + +### Persistent Cache + +- **Alias**: `persistent` +- **Implementation**: Uses Redis if `CACHE_REDIS_HOST` is configured, otherwise falls back to in-memory +- **Use case**: Data that needs to persist across restarts or be shared across instances +- **Serializer**: Pickle +- **Default TTL**: 3600 seconds (configurable via `CACHE_PERSISTENT_TTL`) + +## Usage + +### Basic Cache Operations + +```python +from {{cookiecutter.__package_slug}}.services.cache import get_cached, set_cached, delete_cached, clear_cache + +# Get a cached value (uses memory cache by default) +value = await get_cached("my_key") + +# Get from persistent cache +value = await get_cached("my_key", alias="persistent") + +# Set a cached value with default TTL (5 minutes for memory cache) +await set_cached("my_key", "my_value") + +# Set with custom TTL +await set_cached("my_key", "my_value", ttl=300, alias="persistent") + +# Delete a cached value +await delete_cached("my_key", alias="persistent") + +# Clear entire cache +await clear_cache(alias="persistent") +``` + +### Using Cache Decorators + +You can use aiocache's built-in decorators directly: + +```python +from aiocache import cached + +@cached(ttl=600, alias="persistent", key_builder=lambda f, *args, **kwargs: f"user:{args[0]}") +async def get_user_data(user_id: int): + # Expensive operation here + return await fetch_user_from_database(user_id) +``` + +### Direct Cache Access + +For more control, you can get a cache instance directly: + +```python +from {{cookiecutter.__package_slug}}.services.cache import get_cache + +# Get memory cache +cache = get_cache("memory") +await cache.set("key", "value", ttl=300) +value = await cache.get("key") + +# Get persistent cache (Redis or fallback to memory) +cache = get_cache("persistent") +await cache.set("key", "value", ttl=3600) +value = await cache.get("key") +``` + +## Initialization + +The cache system must be initialized before use. + +{%- if cookiecutter.include_fastapi == "y" %} + +### FastAPI + +Caches are automatically initialized in the FastAPI startup event. No manual initialization is required. +{%- endif %} +{%- if cookiecutter.include_celery == "y" %} + +### Celery + +Caches are automatically initialized when Celery workers start. No manual initialization is required. +{%- endif %} +{%- if cookiecutter.include_quasiqueue == "y" %} + +### QuasiQueue + +Caches are automatically initialized when QuasiQueue starts via the main script. No manual initialization is required. 
+{%- endif %} + +### Manual Initialization + +If you need to initialize caches manually (e.g., in a custom script or CLI command), use: + +```python +from {{cookiecutter.__package_slug}}.services.cache import configure_caches +from {{cookiecutter.__package_slug}}.settings import settings + +configure_caches(settings) +``` + +## Best Practices + +1. **Choose the right backend**: + - Use `memory` cache for request-scoped or temporary data + - Use `persistent` cache for data that needs to survive restarts or be shared across instances + +2. **Set appropriate TTLs**: + - Default TTLs are configured via settings and automatically applied + - Override with custom TTLs only when needed + - Shorter TTLs for frequently changing data, longer TTLs for stable data + +3. **Use meaningful keys**: + - Include version numbers or namespaces in cache keys to avoid conflicts + - Example: `user:v1:123` instead of just `123` + +4. **Handle cache misses**: + - Always check if cached data is `None` and have a fallback mechanism + - Cache operations are safe when caching is disabled + +5. **Disable caching in development**: + - Set `CACHE_ENABLED=False` to disable caching without code changes + - Useful for debugging or testing uncached behavior + +6. **Monitor cache size**: + - Redis caches can grow large; implement eviction policies and monitor memory usage + - Use appropriate TTLs to prevent unbounded growth + +## Development vs Production + +Configure caching behavior through environment variables: + +### Development + +```bash +# Disable caching entirely for debugging +export CACHE_ENABLED=False + +# Or use caching without Redis (both backends use memory) +export CACHE_REDIS_HOST= +export CACHE_DEFAULT_TTL=60 +export CACHE_PERSISTENT_TTL=300 + +# Or use local Redis +export CACHE_REDIS_HOST=localhost +export CACHE_REDIS_PORT=6379 +``` + +### Production + +```bash +# Enable caching with Redis +export CACHE_ENABLED=True +export CACHE_REDIS_HOST=redis-cluster +export CACHE_REDIS_PORT=6379 +export CACHE_DEFAULT_TTL=300 +export CACHE_PERSISTENT_TTL=3600 +``` + +## Disabling Caches + +To disable caching without changing code: + +1. **Via Environment Variable**: Set `CACHE_ENABLED=False` +2. **Result**: All cache operations (get, set, delete, clear) become no-ops +3. **Use Cases**: + - Debugging issues related to stale cache data + - Testing application behavior without caching + - Temporary troubleshooting in production + +When caching is disabled, your application continues to work normally - cache operations simply don't store or retrieve any data. + +## References + +- [aiocache Documentation](https://aiocache.readthedocs.io/) +- [Redis Best Practices](https://redis.io/docs/manual/patterns/) diff --git a/{{cookiecutter.__package_slug}}/docs/dev/celery.md b/{{cookiecutter.__package_slug}}/docs/dev/celery.md index 5c6d0bd..7739fde 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/celery.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/celery.md @@ -1,13 +1,638 @@ # Celery -This project uses [Celery](https://docs.celeryq.dev/en/stable/) and [Celery Beat](https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html). +This project uses [Celery](https://docs.celeryq.dev/), a distributed task queue system for processing asynchronous and scheduled tasks in Python. + +## Configuration + +The Celery application is defined in `{{cookiecutter.__package_slug}}/celery.py`. Unlike other components, Celery does NOT read configuration from the project's Settings class. 
Instead, Celery must be configured using environment variables. + +### Required Environment Variable + +- **CELERY_BROKER_URL**: Message broker URL (required) + - Redis example: `redis://localhost:6379/0` + - RabbitMQ example: `amqp://guest:guest@localhost:5672//` + +Set this environment variable before running Celery workers: + +```bash +export CELERY_BROKER_URL="redis://localhost:6379/0" +``` + +### Optional Configuration + +Celery can be further configured using additional environment variables prefixed with `CELERY_` or by creating a `celeryconfig.py` file in your project root. See the [Celery Configuration Documentation](https://docs.celeryq.dev/en/stable/userguide/configuration.html) for all available options. + +## Defining Tasks + +### Basic Task + +Create tasks by decorating functions with `@celery.task`: + +```python +from {{cookiecutter.__package_slug}}.celery import celery + +@celery.task +def send_email(to: str, subject: str, body: str): + """Send an email asynchronously.""" + # Email sending logic here + print(f"Sending email to {to}: {subject}") + return {"status": "sent", "to": to} +``` + +### Task with Options + +Configure task behavior with decorator options: + +```python +@celery.task( + bind=True, + max_retries=3, + default_retry_delay=60, + autoretry_for=(Exception,), + retry_backoff=True, +) +def process_payment(self, payment_id: int): + """Process a payment with automatic retries.""" + try: + # Payment processing logic + return {"payment_id": payment_id, "status": "processed"} + except PaymentError as exc: + # Manual retry with exponential backoff + raise self.retry(exc=exc, countdown=2 ** self.request.retries) +``` + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +### Async Task (with Database Access) + +Use async tasks for I/O-bound operations: + +```python +from {{cookiecutter.__package_slug}}.services.db import get_engine, AsyncSession +from sqlalchemy.ext.asyncio import async_sessionmaker + +@celery.task +def process_user_sync(user_id: int): + """Synchronous wrapper for async task.""" + import asyncio + return asyncio.run(process_user(user_id)) + +async def process_user(user_id: int): + """Process user data with async database access.""" + engine = await get_engine() + SessionLocal = async_sessionmaker(engine, class_=AsyncSession) + + async with SessionLocal() as session: + result = await session.execute(select(User).where(User.id == user_id)) + user = result.scalar_one_or_none() + + if user: + # Process user + user.last_processed = datetime.now() + await session.commit() + + return {"user_id": user_id, "processed": bool(user)} +``` + +{%- endif %} + +## Calling Tasks + +### Fire and Forget + +Execute a task asynchronously without waiting for results: + +```python +# Call the task +send_email.delay("user@example.com", "Welcome", "Thanks for signing up!") + +# Or with apply_async for more options +send_email.apply_async( + args=["user@example.com", "Welcome", "Thanks for signing up!"], + countdown=60, # Execute after 60 seconds +) +``` + +### Getting Results + +Retrieve task results synchronously: + +```python +# Call task and get AsyncResult object +result = send_email.delay("user@example.com", "Hello", "Message body") + +# Wait for result (blocking) +output = result.get(timeout=10) # Raises TimeoutError if not done in 10s +print(output) # {"status": "sent", "to": "user@example.com"} + +# Check task state +print(result.state) # 'PENDING', 'STARTED', 'SUCCESS', 'FAILURE', etc. 
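+
+# You can also check on a task without blocking, using Celery's standard
+# AsyncResult helpers:
+result.ready()       # True once the task has finished (success or failure)
+result.successful()  # True only if the task finished without raising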
+``` + +### Task Options + +Use `apply_async` for advanced options: + +```python +send_email.apply_async( + args=["user@example.com", "Subject", "Body"], + countdown=300, # Delay 5 minutes + expires=3600, # Expire after 1 hour + priority=9, # Higher priority (0-9) + queue='emails', # Route to specific queue + retry=True, # Enable retries + retry_policy={ + 'max_retries': 3, + 'interval_start': 0, + 'interval_step': 0.2, + 'interval_max': 0.2, + } +) +``` + +## Periodic Tasks (Celery Beat) + +### Configuration + +Periodic tasks can be configured in two ways: + +**Method 1: Using signal handler** (template default): + +```python +@celery.on_after_finalize.connect +def setup_periodic_tasks(sender, **kwargs): + """Configure periodic tasks when Celery is ready.""" + # Add a task that runs every 15 seconds + sender.add_periodic_task(15.0, hello_world.s(), name="Test Task") + + # Add a task with crontab schedule + from celery.schedules import crontab + sender.add_periodic_task( + crontab(hour=2, minute=0), + cleanup_old_data.s(), + name='Daily Cleanup' + ) +``` + +**Method 2: Using beat_schedule** (alternative): + +```python +from celery.schedules import crontab + +celery.conf.beat_schedule = { + 'cleanup-old-data': { + 'task': '{{cookiecutter.__package_slug}}.tasks.cleanup_old_data', + 'schedule': crontab(hour=2, minute=0), # Run daily at 2 AM + }, + 'send-weekly-report': { + 'task': '{{cookiecutter.__package_slug}}.tasks.send_weekly_report', + 'schedule': crontab(day_of_week='monday', hour=9, minute=0), + }, + 'check-status-every-5-min': { + 'task': '{{cookiecutter.__package_slug}}.tasks.check_status', + 'schedule': 300.0, # Run every 5 minutes (in seconds) + }, +} +``` + +### Schedule Types + +**Crontab Schedule** (like Unix cron): + +```python +from celery.schedules import crontab + +# Every midnight +crontab(hour=0, minute=0) + +# Every Monday at 9 AM +crontab(day_of_week='monday', hour=9, minute=0) + +# Every 15 minutes +crontab(minute='*/15') + +# First day of every month +crontab(day_of_month='1', hour=0, minute=0) +``` + +**Interval Schedule**: + +```python +from celery.schedules import schedule + +# Every 30 seconds +schedule(run_every=30.0) + +# Can also use timedelta +from datetime import timedelta +schedule(run_every=timedelta(hours=1)) +``` + +## Task Organization + +Organize tasks in separate modules: + +``` +{{cookiecutter.__package_slug}}/ +├── celery.py # Celery app configuration +└── tasks/ + ├── __init__.py + ├── email.py # Email-related tasks + ├── reports.py # Report generation tasks + └── cleanup.py # Maintenance tasks +``` + +Import tasks in `celery.py` to ensure they're registered: + +```python +# In {{cookiecutter.__package_slug}}/celery.py +from {{cookiecutter.__package_slug}}.tasks import email, reports, cleanup +``` + +## Running Workers + +### Development + +Start a Celery worker in development mode: + +```bash +# Basic worker +celery -A {{cookiecutter.__package_slug}}.celery worker --loglevel=info + +# With concurrency limit +celery -A {{cookiecutter.__package_slug}}.celery worker --loglevel=info --concurrency=4 + +# With specific queues +celery -A {{cookiecutter.__package_slug}}.celery worker --loglevel=info -Q celery,emails +``` + +### Celery Beat (Scheduler) + +Run the beat scheduler for periodic tasks: + +```bash +# In a separate terminal +celery -A {{cookiecutter.__package_slug}}.celery beat --loglevel=info + +# Or combine worker and beat (NOT recommended for production) +celery -A {{cookiecutter.__package_slug}}.celery worker --beat --loglevel=info +``` + +### 
Production + +Use multiple workers with proper concurrency: + +```bash +# Prefork pool (default) - good for CPU-bound tasks +celery -A {{cookiecutter.__package_slug}}.celery worker \ + --loglevel=info \ + --concurrency=8 \ + --pool=prefork + +# Eventlet pool - better for I/O-bound tasks +celery -A {{cookiecutter.__package_slug}}.celery worker \ + --loglevel=info \ + --concurrency=100 \ + --pool=eventlet + +# Gevent pool - alternative for I/O-bound tasks +celery -A {{cookiecutter.__package_slug}}.celery worker \ + --loglevel=info \ + --concurrency=100 \ + --pool=gevent +``` {%- if cookiecutter.include_docker == "y" %} -## Docker +### Docker + +If Docker is configured: -The Celery images are based off of the [Multi-Py Celery Project](https://github.com/multi-py/python-celery) and work for ARM and AMD out of the box. +```bash +# Start worker +docker-compose up celery -For scheduling to work one container has to be launched with [ENABLE_BEAT](https://github.com/multi-py/python-celery#enable_beat) set to `true`. +# Start beat scheduler +docker-compose up beat +``` {%- endif %} + +## Monitoring + +### Command Line + +Monitor tasks from the command line: + +```bash +# List active tasks +celery -A {{cookiecutter.__package_slug}}.celery inspect active + +# List scheduled tasks (ETA tasks) +celery -A {{cookiecutter.__package_slug}}.celery inspect scheduled + +# List registered tasks +celery -A {{cookiecutter.__package_slug}}.celery inspect registered + +# Worker statistics +celery -A {{cookiecutter.__package_slug}}.celery inspect stats + +# Ping workers +celery -A {{cookiecutter.__package_slug}}.celery inspect ping +``` + +{%- if cookiecutter.include_aiocache == "y" %} + +## Cache Integration + +If aiocache is enabled, the cache setup handler runs automatically: + +```python +from {{cookiecutter.__package_slug}}.celery import celery + +@celery.task +def task_using_cache(): + """Task that uses caching.""" + from {{cookiecutter.__package_slug}}.services.cache import cache + + # Use cache in tasks + cache.set("key", "value", ttl=300) + return cache.get("key") +``` + +{%- endif %} + +## Testing Celery Tasks + +### Testing Task Registration + +Test that tasks are properly registered with Celery: + +```python +# tests/test_celery.py +from {{cookiecutter.__package_slug}}.celery import celery, hello_world + + +def test_celery_app_exists(): + """Test that Celery app is properly instantiated.""" + assert celery is not None + assert hasattr(celery, "tasks") + + +def test_celery_app_name(): + """Test that Celery app has correct name.""" + assert celery.main == "{{cookiecutter.__package_slug}}" + + +def test_hello_world_task_registered(): + """Test that hello_world task is registered with Celery.""" + assert "{{cookiecutter.__package_slug}}.celery.hello_world" in celery.tasks + + +def test_hello_world_is_task(): + """Test that hello_world is a Celery task.""" + assert hasattr(hello_world, "delay") + assert hasattr(hello_world, "apply_async") + assert callable(hello_world) +``` + +### Testing Task Execution + +Test tasks by calling them directly (synchronously): + +```python +def test_hello_world_execution(capsys): + """Test that hello_world task executes without error.""" + # Run the task directly (not async) + hello_world() + + # Check that it printed the expected message + captured = capsys.readouterr() + assert "Hello World!" 
in captured.out + + +def test_task_with_return_value(): + """Test task that returns a value.""" + @celery.task + def add_numbers(a: int, b: int) -> int: + return a + b + + # Call directly for testing + result = add_numbers(2, 3) + assert result == 5 + + +def test_task_with_args(): + """Test task with multiple arguments.""" + result = process_data(user_id=123, action="update") + assert result["status"] == "success" + assert result["user_id"] == 123 +``` + +### Testing Periodic Tasks + +Test that periodic tasks are properly configured: + +```python +def test_periodic_task_setup_exists(): + """Test that periodic task setup function exists.""" + assert hasattr(celery, "on_after_finalize") + + +def test_periodic_tasks_registered(): + """Test that periodic tasks are configured.""" + # Note: This requires Celery to be fully configured + # You may need to call setup_periodic_tasks manually in tests + from {{cookiecutter.__package_slug}}.celery import setup_periodic_tasks + + # Mock sender + class MockSender: + def __init__(self): + self.periodic_tasks = [] + + def add_periodic_task(self, interval, task, name=None): + self.periodic_tasks.append({ + "interval": interval, + "task": task, + "name": name + }) + + sender = MockSender() + setup_periodic_tasks(sender) + + # Verify periodic task was added + assert len(sender.periodic_tasks) > 0 + assert sender.periodic_tasks[0]["name"] == "Test Task" +``` + +### Testing Signal Handlers + +Test Celery signal handlers like cache setup: + +```python +def test_cache_setup_handler_exists(): + """Test that cache setup signal handler is registered.""" + from {{cookiecutter.__package_slug}}.celery import setup_caches + assert callable(setup_caches) + + +def test_cache_setup_imports(): + """Test that cache setup can import required modules.""" + from {{cookiecutter.__package_slug}}.services.cache import configure_caches + from {{cookiecutter.__package_slug}}.settings import settings + + # Should not raise ImportError + assert callable(configure_caches) + assert settings is not None +``` + +### Testing Task Errors + +Test error handling and retries: + +```python +def test_task_with_error_handling(): + """Test that task handles errors gracefully.""" + @celery.task(bind=True, max_retries=3) + def failing_task(self): + try: + raise ValueError("Test error") + except ValueError as exc: + raise self.retry(exc=exc, countdown=1) + + # Test that task raises retry exception + with pytest.raises(Exception): + failing_task() + + +def test_task_retry_logic(): + """Test task retry configuration.""" + @celery.task(max_retries=3, default_retry_delay=60) + def retryable_task(): + pass + + assert retryable_task.max_retries == 3 + assert retryable_task.default_retry_delay == 60 +``` + +### Testing with Mocks + +Mock external dependencies in task tests: + +```python +from unittest.mock import patch, MagicMock + + +def test_task_with_external_api(monkeypatch): + """Test task that calls external API.""" + mock_response = MagicMock() + mock_response.json.return_value = {"status": "success"} + + with patch('requests.get', return_value=mock_response): + result = fetch_external_data("https://api.example.com") + assert result["status"] == "success" + + +def test_task_with_database(monkeypatch): + """Test task that uses database.""" + # Mock database operations + mock_session = MagicMock() + + with patch('{{cookiecutter.__package_slug}}.services.db.get_session', return_value=mock_session): + result = process_user_task(user_id=123) + assert result is not None +``` + +## Best Practices + +1. 
**Keep Tasks Small**: Break large operations into smaller, composable tasks that can be chained or grouped + +2. **Set Task Timeouts**: Always set time limits to prevent tasks from running indefinitely: + + ```python + @celery.task(time_limit=300, soft_time_limit=270) + def long_running_task(): + # Will be terminated after 5 minutes + pass + ``` + +3. **Handle Failures Gracefully**: Use retries and error handling: + + ```python + @celery.task(bind=True, max_retries=3) + def fragile_task(self): + try: + # Potentially failing operation + pass + except Exception as exc: + raise self.retry(exc=exc, countdown=60) + ``` + +4. **Use Task Queues**: Route different task types to different queues: + + ```python + @celery.task(queue='high-priority') + def urgent_task(): + pass + + @celery.task(queue='low-priority') + def background_cleanup(): + pass + ``` + +5. **Idempotent Tasks**: Design tasks to be safely retried without side effects - check if work is already done before proceeding + +6. **Avoid Passing Complex Objects**: Pass IDs instead of full objects to tasks: + + ```python + # Good + @celery.task + def process_user(user_id: int): + user = User.query.get(user_id) + # Process user + + # Bad - objects can't be serialized reliably + @celery.task + def process_user(user: User): + pass + ``` + +7. **Monitor Task Performance**: Use Flower or logging to track task execution times and failure rates + +## Development vs Production + +### Development + +```bash +# Single worker with auto-reload +export CELERY_BROKER_URL="redis://localhost:6379/0" +celery -A {{cookiecutter.__package_slug}}.celery worker --loglevel=debug --pool=solo +``` + +### Production + +```bash +# Multiple workers with production settings +export CELERY_BROKER_URL="redis://prod-redis:6379/0" +export CELERY_RESULT_BACKEND="redis://prod-redis:6379/1" + +# Worker with proper pool +celery -A {{cookiecutter.__package_slug}}.celery worker \ + --loglevel=info \ + --concurrency=10 \ + --max-tasks-per-child=1000 \ + --time-limit=3600 \ + --soft-time-limit=3300 + +# Beat scheduler (separate process) +celery -A {{cookiecutter.__package_slug}}.celery beat --loglevel=info +``` + +## References + +- [Celery Documentation](https://docs.celeryq.dev/) +- [Celery Best Practices](https://docs.celeryq.dev/en/stable/userguide/tasks.html#tips-and-best-practices) +- [Redis Documentation](https://redis.io/docs/) +- [Flower Documentation](https://flower.readthedocs.io/) + +This project uses [Celery](https://docs.celeryq.dev/en/stable/) and [Celery Beat](https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html). diff --git a/{{cookiecutter.__package_slug}}/docs/dev/cli.md b/{{cookiecutter.__package_slug}}/docs/dev/cli.md index d53c47d..4b2bc18 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/cli.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/cli.md @@ -1,11 +1,549 @@ # CLI -This project uses [Typer](https://typer.tiangolo.com/) and [Click](https://click.palletsprojects.com/) for CLI functionality. When the project is installed the cli is available at `{{ cookiecutter.__package_slug }}`. +This project uses [Typer](https://typer.tiangolo.com/) for building command-line interfaces, with [Click](https://click.palletsprojects.com/) as the underlying framework. -The full help contents can be visited with the help flag. 
+## Configuration + +The CLI application is defined in `{{cookiecutter.__package_slug}}/cli.py` and automatically configured as an entry point in `pyproject.toml`: + +```toml +[project.scripts] +{{cookiecutter.__package_slug}} = "{{cookiecutter.__package_slug}}.cli:app" +``` + +After installation, the CLI is available as the `{{cookiecutter.__package_slug}}` command. + +## Basic Usage + +### Getting Help + +View all available commands: + +```bash +{{cookiecutter.__package_slug}} --help +``` + +Get help for a specific command: + +```bash +{{cookiecutter.__package_slug}} command-name --help +``` + +### Running Commands + +Execute a command: + +```bash +{{cookiecutter.__package_slug}} my-command --option value argument +``` + +## Adding Commands + +### Simple Command + +Add a basic command to `{{cookiecutter.__package_slug}}/cli.py`: + +```python +import typer + +app = typer.Typer() + +@app.command() +def hello(name: str): + """Greet someone by name.""" + typer.echo(f"Hello, {name}!") +``` + +Usage: + +```bash +{{cookiecutter.__package_slug}} hello "World" +# Output: Hello, World! +``` + +### Command with Options + +Add commands with optional flags: + +```python +@app.command() +def process( + input_file: str, + output_file: str = typer.Option(None, "--output", "-o", help="Output file path"), + verbose: bool = typer.Option(False, "--verbose", "-v", help="Enable verbose output"), +): + """Process an input file and optionally save to output.""" + if verbose: + typer.echo(f"Processing {input_file}...") + + # Processing logic here + + if output_file: + typer.echo(f"Saved to {output_file}") +``` + +Usage: + +```bash +{{cookiecutter.__package_slug}} process input.txt --output output.txt --verbose +{{cookiecutter.__package_slug}} process input.txt -o output.txt -v +``` + +### Command with Type Validation + +Typer automatically validates types: + +```python +from pathlib import Path +from enum import Enum + +class OutputFormat(str, Enum): + json = "json" + yaml = "yaml" + csv = "csv" + +@app.command() +def export( + count: int = typer.Option(10, min=1, max=1000, help="Number of records"), + format: OutputFormat = typer.Option(OutputFormat.json, help="Output format"), + output: Path = typer.Option(Path("output.txt"), help="Output file"), +): + """Export data in specified format.""" + typer.echo(f"Exporting {count} records as {format.value} to {output}") +``` + +Usage: ```bash -{{ cookiecutter.__package_slug }} --help +{{cookiecutter.__package_slug}} export --count 50 --format yaml --output data.yaml ``` -The CLI itself is defined at `{{ cookiecutter.__package_slug }}.cli`. New commands can be added there. 
+### Interactive Prompts + +Use prompts for interactive input: + +```python +@app.command() +def configure(): + """Interactive configuration setup.""" + name = typer.prompt("What is your name?") + age = typer.prompt("What is your age?", type=int) + password = typer.prompt("Enter password", hide_input=True) + + if typer.confirm("Save configuration?"): + typer.echo("Configuration saved!") + else: + typer.echo("Configuration discarded") +``` + +## Async Commands + +### Using the Syncify Decorator + +The template includes a `syncify` decorator for async CLI commands: + +```python +from {{cookiecutter.__package_slug}}.cli import syncify +import httpx + +@app.command() +@syncify +async def fetch_data(url: str): + """Fetch data from a URL asynchronously.""" + async with httpx.AsyncClient() as client: + response = await client.get(url) + typer.echo(f"Status: {response.status_code}") + return response.text +``` + +### Database Access + +Use async database operations in CLI commands: + +```python +from {{cookiecutter.__package_slug}}.services.db import get_engine +from sqlalchemy.ext.asyncio import async_sessionmaker, AsyncSession + +@app.command() +@syncify +async def list_users(): + """List all users from the database.""" + engine = await get_engine() + SessionLocal = async_sessionmaker(engine, class_=AsyncSession) + + async with SessionLocal() as session: + result = await session.execute(select(User)) + users = result.scalars().all() + + for user in users: + typer.echo(f"User: {user.username} ({user.email})") +``` + +## Organizing Commands + +### Command Groups + +Organize related commands into groups: + +```python +import typer + +app = typer.Typer() +{%- if cookiecutter.include_sqlalchemy == "y" %} + +# Create subcommands +db_app = typer.Typer() +user_app = typer.Typer() + +# Add subcommands to main app +app.add_typer(db_app, name="db", help="Database management commands") +app.add_typer(user_app, name="user", help="User management commands") + +# Define commands in each group +@db_app.command("migrate") +def db_migrate(): + """Run database migrations.""" + typer.echo("Running migrations...") + +@db_app.command("seed") +def db_seed(): + """Seed database with initial data.""" + typer.echo("Seeding database...") + +@user_app.command("create") +def user_create(username: str, email: str): + """Create a new user.""" + typer.echo(f"Creating user {username} ({email})") + +@user_app.command("list") +def user_list(): + """List all users.""" + typer.echo("Listing users...") +``` + +{%- endif %} + +Usage: + +```bash +{{cookiecutter.__package_slug}} db migrate +{{cookiecutter.__package_slug}} db seed +{{cookiecutter.__package_slug}} user create john john@example.com +{{cookiecutter.__package_slug}} user list +``` + +### Separate Command Modules + +For larger projects, split commands into separate files: + +``` +{{cookiecutter.__package_slug}}/ +├── cli.py # Main app +└── commands/ + ├── __init__.py + ├── database.py # Database commands + └── users.py # User commands +``` + +In `cli.py`: + +```python +from {{cookiecutter.__package_slug}}.commands import database, users + +app = typer.Typer() +app.add_typer(database.app, name="db") +app.add_typer(users.app, name="user") +``` + +## Output Formatting + +### Styled Output + +Use typer's styling for colored output: + +```python +@app.command() +def status(): + """Check system status.""" + typer.secho("✓ System operational", fg=typer.colors.GREEN, bold=True) + typer.secho("⚠ Warning: High memory usage", fg=typer.colors.YELLOW) + typer.secho("✗ Error: Database 
connection failed", fg=typer.colors.RED) +``` + +### Progress Bars + +Show progress for long-running operations: + +```python +import time + +@app.command() +def process_items(): + """Process multiple items with progress bar.""" + items = range(100) + + with typer.progressbar(items, label="Processing") as progress: + for item in progress: + # Simulate processing + time.sleep(0.1) + + typer.echo("Processing complete!") +``` + +### Tables + +For structured output, use rich tables: + +```python +from rich.console import Console +from rich.table import Table + +@app.command() +def report(): + """Generate a formatted report.""" + console = Console() + + table = Table(title="User Report") + table.add_column("ID", style="cyan") + table.add_column("Name", style="magenta") + table.add_column("Email", style="green") + + table.add_row("1", "John Doe", "john@example.com") + table.add_row("2", "Jane Smith", "jane@example.com") + + console.print(table) +``` + +## Error Handling + +### Graceful Error Messages + +Handle errors with user-friendly messages: + +```python +@app.command() +def delete_user(user_id: int): + """Delete a user by ID.""" + try: + # Delete logic here + if not user_exists(user_id): + typer.secho(f"Error: User {user_id} not found", fg=typer.colors.RED) + raise typer.Exit(code=1) + + typer.echo(f"User {user_id} deleted successfully") + except Exception as e: + typer.secho(f"Error: {str(e)}", fg=typer.colors.RED) + raise typer.Exit(code=1) +``` + +### Exit Codes + +Use proper exit codes for scripts: + +```python +@app.command() +def validate_config(): + """Validate configuration file.""" + if config_is_valid(): + typer.echo("Configuration is valid") + raise typer.Exit(code=0) # Success + else: + typer.secho("Configuration has errors", fg=typer.colors.RED) + raise typer.Exit(code=1) # Failure +``` + +## Testing CLI Commands + +### Using CliRunner + +The project uses Typer's `CliRunner` for testing CLI commands. 
Create a module-level runner instance for reuse across tests: + +```python +# tests/test_cli.py +from typer.testing import CliRunner +from {{cookiecutter.__package_slug}}.cli import app, syncify +import asyncio + +# Module-level runner instance +runner = CliRunner() + + +def test_cli_app_exists(): + """Test that Typer app is properly instantiated.""" + assert app is not None + assert hasattr(app, "command") + + +def test_version_command_runs(): + """Test that version command executes successfully.""" + result = runner.invoke(app, ["version"]) + assert result.exit_code == 0 + assert "{{cookiecutter.__package_slug}}" in result.stdout.lower() + + +def test_help_command(): + """Test that help command works.""" + result = runner.invoke(app, ["--help"]) + assert result.exit_code == 0 + # Check for command names in help output +``` + +### Testing Commands with Arguments + +```python +def test_command_with_args(): + """Test command that takes arguments.""" + result = runner.invoke(app, ["process", "input.txt"]) + assert result.exit_code == 0 + assert "Processing input.txt" in result.output + + +def test_command_with_options(): + """Test command with optional flags.""" + result = runner.invoke(app, ["process", "input.txt", "--verbose"]) + assert result.exit_code == 0 + assert "Processing input.txt" in result.output +``` + +### Testing Async Commands + +Test async commands that use the `syncify` decorator: + +```python +def test_syncify_decorator(): + """Test the syncify decorator for async CLI commands.""" + @syncify + async def async_function(): + await asyncio.sleep(0.01) + return "success" + + result = async_function() + assert result == "success" + + +@app.command() +@syncify +async def async_example(): + """Example async command.""" + await asyncio.sleep(0.01) + print("Async command completed") + + +def test_async_command(): + """Test async CLI command.""" + result = runner.invoke(app, ["async-example"]) + assert result.exit_code == 0 + assert "Async command completed" in result.stdout +``` + +### Testing with Environment Variables + +Use pytest's `monkeypatch` to set environment variables for tests: + +```python +def test_with_env_vars(monkeypatch): + """Test command that uses environment variables.""" + monkeypatch.setenv("API_KEY", "test-key-12345") + monkeypatch.setenv("DEBUG", "True") + + result = runner.invoke(app, ["fetch-data"]) + assert result.exit_code == 0 + + +def test_settings_in_cli(monkeypatch): + """Test that CLI commands can access settings.""" + monkeypatch.setenv("PROJECT_NAME", "Test Project") + + result = runner.invoke(app, ["show-config"]) + assert result.exit_code == 0 + assert "Test Project" in result.stdout +``` + +### Testing Error Handling + +```python +def test_command_with_invalid_input(): + """Test command error handling.""" + result = runner.invoke(app, ["process", "nonexistent.txt"]) + assert result.exit_code != 0 + assert "Error" in result.stdout or "not found" in result.stdout.lower() + + +def test_command_validation(): + """Test that command validates input.""" + # Test with invalid argument type + result = runner.invoke(app, ["process-count", "not-a-number"]) + assert result.exit_code != 0 +``` + +## Best Practices + +1. **Clear Command Names**: Use descriptive, action-oriented names (e.g., `create-user`, `export-data`) + +2. **Comprehensive Help Text**: Always add docstrings to commands - they become the help text: + + ```python + @app.command() + def my_command(arg: str): + """ + This is the command description shown in --help. 
+ + Provide details about what the command does and any important notes. + """ + pass + ``` + +3. **Validate Input Early**: Use Typer's type system and validation to catch errors before processing: + + ```python + @app.command() + def process( + count: int = typer.Option(..., min=1, max=1000), + file: Path = typer.Option(..., exists=True, file_okay=True), + ): + pass + ``` + +4. **Use Enums for Choices**: Define fixed sets of options with enums instead of strings + +5. **Provide Defaults**: Always provide sensible defaults for optional parameters + +6. **Exit with Appropriate Codes**: Use exit code 0 for success, non-zero for errors + +7. **Use Progress Indicators**: For long-running operations, show progress to keep users informed + +## Running the CLI + +### During Development + +```bash +# Install in editable mode +pip install -e . + +# Run commands +{{cookiecutter.__package_slug}} --help +{{cookiecutter.__package_slug}} my-command +``` + +### After Installation + +```bash +# Regular installation +pip install . + +# Commands are available globally +{{cookiecutter.__package_slug}} --help +``` + +### Direct Python Execution + +```bash +# Without installation +python -m {{cookiecutter.__package_slug}}.cli --help +``` + +## References + +- [Typer Documentation](https://typer.tiangolo.com/) +- [Click Documentation](https://click.palletsprojects.com/) +- [Rich Documentation](https://rich.readthedocs.io/) (for advanced formatting) diff --git a/{{cookiecutter.__package_slug}}/docs/dev/database.md b/{{cookiecutter.__package_slug}}/docs/dev/database.md index ef91a50..04078dd 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/database.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/database.md @@ -1,44 +1,777 @@ # Database -This project uses [SQLAlchemy](https://www.sqlalchemy.org/) and [Alembic](https://alembic.sqlalchemy.org/en/latest/) to build migrations. +This project uses [SQLAlchemy](https://www.sqlalchemy.org/) as its ORM (Object-Relational Mapper) and [Alembic](https://alembic.sqlalchemy.org/) for database migrations, providing full async/await support for high-performance database operations. + +## Configuration + +Database configuration is managed through Pydantic settings with the following environment variable: + +- **DATABASE_URL**: Database connection string (default: `sqlite:///./test.db`) + - SQLite: `sqlite:///./database.db` (local file) or `sqlite:///:memory:` (in-memory) + - PostgreSQL: `postgresql://user:password@localhost:5432/dbname` + +The database service automatically transforms the connection string for async operations: + +- `sqlite` → `sqlite+aiosqlite` (async SQLite driver) +- `postgresql` → `postgresql+asyncpg` (async PostgreSQL driver) ## Database API -This project uses SQLAlchemy 1.4 and 2.0 APIs. It exposes the Async Engine and AsyncSessions. +This project uses the modern SQLAlchemy 2.0 API with full async/await support: -## Engines +- **Async Engine**: Provides asynchronous database connections +- **AsyncSession**: Manages database transactions asynchronously +- **Future-compatible**: Uses the `future=True` flag for forward compatibility -This project is geared towards Postgres and SQLite. +### Supported Databases -## Models +- **SQLite**: Perfect for development and testing, supports both file-based and in-memory databases +- **PostgreSQL**: Recommended for production, provides full relational database features -Models exists in the `{{cookiecutter.__package_slug}}/models` directory. 
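+
+The driver substitution described in the Configuration section is effectively a small rewrite of the connection string's scheme. The helper below is only a sketch of that behavior; the real logic lives in the project's database service and the function name here is illustrative:
+
+```python
+def to_async_url(database_url: str) -> str:
+    """Rewrite a synchronous connection string to use the matching async driver."""
+    if database_url.startswith("sqlite://"):
+        return database_url.replace("sqlite://", "sqlite+aiosqlite://", 1)
+    if database_url.startswith("postgresql://"):
+        return database_url.replace("postgresql://", "postgresql+asyncpg://", 1)
+    return database_url
+
+
+print(to_async_url("sqlite:///./test.db"))                  # sqlite+aiosqlite:///./test.db
+print(to_async_url("postgresql://main:main12345@db/main"))  # postgresql+asyncpg://main:main12345@db/main
+```
+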
+## Defining Models +Models are defined in the `{{cookiecutter.__package_slug}}/models` directory and inherit from the declarative base. -## Migrations +### Basic Model Structure -Migrations are created with Alembic. It will automatically find all models in the `{{cookiecutter.__package_slug}}/models` directory. +```python +from sqlalchemy import String +from sqlalchemy.orm import Mapped, mapped_column +from {{cookiecutter.__package_slug}}.models.base import Base -Migrations can be using `make`. This method uses SQLite. +class User(Base): + """User model.""" + __tablename__ = "users" -```bash -make create_migration MESSAGE="migration description" + id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True) + name: Mapped[str] = mapped_column(String(100)) + email: Mapped[str] = mapped_column(String(255), unique=True, index=True) + bio: Mapped[str | None] = mapped_column(String(500)) +``` + +### Column Types + +SQLAlchemy provides a rich set of column types: + +```python +import datetime +from typing import Any +from sqlalchemy import String, Text, func +from sqlalchemy.orm import Mapped, mapped_column + +class Article(Base): + __tablename__ = "articles" + + id: Mapped[int] = mapped_column(primary_key=True) + title: Mapped[str] = mapped_column(String(200)) + content: Mapped[str] = mapped_column(Text) + is_published: Mapped[bool] = mapped_column(default=False) + view_count: Mapped[int] = mapped_column(default=0) + rating: Mapped[float | None] + metadata: Mapped[dict[str, Any] | None] + created_at: Mapped[datetime.datetime] = mapped_column(server_default=func.now()) + updated_at: Mapped[datetime.datetime | None] = mapped_column(onupdate=func.now()) +``` + +### Constraints and Indexes + +```python +from sqlalchemy import String, UniqueConstraint, Index, CheckConstraint +from sqlalchemy.orm import Mapped, mapped_column + +class Product(Base): + __tablename__ = "products" + + id: Mapped[int] = mapped_column(primary_key=True) + sku: Mapped[str] = mapped_column(String(50), unique=True) + name: Mapped[str] = mapped_column(String(200)) + price: Mapped[float] + category: Mapped[str] = mapped_column(String(100)) + + __table_args__ = ( + # Composite unique constraint + UniqueConstraint("name", "category", name="uq_product_name_category"), + # Multi-column index for better query performance + Index("idx_category_price", "category", "price"), + # Check constraint + CheckConstraint("price > 0", name="ck_product_price_positive"), + ) +``` + +## Relationships + +SQLAlchemy provides powerful relationship patterns for connecting models. 
+ +### One-to-Many Relationship + +```python +from typing import List +from sqlalchemy import String, ForeignKey +from sqlalchemy.orm import Mapped, mapped_column, relationship + +class Author(Base): + __tablename__ = "authors" + + id: Mapped[int] = mapped_column(primary_key=True) + name: Mapped[str] = mapped_column(String(100)) + + # Relationship to books (one author has many books) + books: Mapped[List["Book"]] = relationship(back_populates="author", cascade="all, delete-orphan") + + +class Book(Base): + __tablename__ = "books" + + id: Mapped[int] = mapped_column(primary_key=True) + title: Mapped[str] = mapped_column(String(200)) + author_id: Mapped[int] = mapped_column(ForeignKey("authors.id", ondelete="CASCADE")) + + # Relationship to author (many books belong to one author) + author: Mapped["Author"] = relationship(back_populates="books") +``` + +### Many-to-Many Relationship + +```python +from typing import List +from sqlalchemy import Table, Column, Integer, String, ForeignKey +from sqlalchemy.orm import Mapped, mapped_column, relationship + +# Association table for many-to-many relationship +student_course_association = Table( + "student_courses", + Base.metadata, + Column("student_id", Integer, ForeignKey("students.id", ondelete="CASCADE")), + Column("course_id", Integer, ForeignKey("courses.id", ondelete="CASCADE")), +) + + +class Student(Base): + __tablename__ = "students" + + id: Mapped[int] = mapped_column(primary_key=True) + name: Mapped[str] = mapped_column(String(100)) + + # Many-to-many relationship to courses + courses: Mapped[List["Course"]] = relationship( + secondary=student_course_association, + back_populates="students" + ) + + +class Course(Base): + __tablename__ = "courses" + + id: Mapped[int] = mapped_column(primary_key=True) + title: Mapped[str] = mapped_column(String(200)) + + # Many-to-many relationship to students + students: Mapped[List["Student"]] = relationship( + secondary=student_course_association, + back_populates="courses" + ) +``` + +### Self-Referential Relationship + +```python +from typing import List +from sqlalchemy import String, ForeignKey +from sqlalchemy.orm import Mapped, mapped_column, relationship + +class Employee(Base): + __tablename__ = "employees" + + id: Mapped[int] = mapped_column(primary_key=True) + name: Mapped[str] = mapped_column(String(100)) + manager_id: Mapped[int | None] = mapped_column(ForeignKey("employees.id")) + + # Self-referential relationship + manager: Mapped["Employee | None"] = relationship( + remote_side="Employee.id", + back_populates="subordinates" + ) + subordinates: Mapped[List["Employee"]] = relationship(back_populates="manager") +``` + +## Session Management + +The database service provides async context managers for session management. + +### Basic Session Usage + +```python +from {{cookiecutter.__package_slug}}.services.db import get_session + +async def create_user(name: str, email: str): + """Create a new user.""" + async with get_session() as session: + user = User(name=name, email=email) + session.add(user) + await session.commit() + await session.refresh(user) # Get generated ID + return user ``` -To check if a migration is available run `make check_ungenerated_migrations`. 
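+
+### Loading Relationships
+
+Lazy-loading a relationship from an async result usually fails (SQLAlchemy raises a `MissingGreenlet` error), so related rows are generally loaded eagerly as part of the query. The sketch below assumes the `Author` and `Book` models from the relationship examples above; adjust the imports to wherever your models actually live:
+
+```python
+from sqlalchemy import select
+from sqlalchemy.orm import selectinload
+
+from {{cookiecutter.__package_slug}}.services.db import get_session
+
+# Author is assumed to be defined in your models package (see the relationship examples above)
+
+
+async def get_author_with_books(author_id: int):
+    """Load an author together with their books, without lazy loading."""
+    async with get_session() as session:
+        result = await session.execute(
+            select(Author)
+            .options(selectinload(Author.books))  # Emits an extra SELECT up front instead of lazy loading later
+            .where(Author.id == author_id)
+        )
+        return result.scalar_one_or_none()
+```
+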
+### Querying Data + +```python +from sqlalchemy import select + +async def get_user_by_email(email: str): + """Find a user by email.""" + async with get_session() as session: + result = await session.execute( + select(User).where(User.email == email) + ) + return result.scalar_one_or_none() + + +async def get_all_users(): + """Get all users.""" + async with get_session() as session: + result = await session.execute(select(User)) + return result.scalars().all() + + +async def get_users_by_name(name: str): + """Find users by name pattern.""" + async with get_session() as session: + result = await session.execute( + select(User).where(User.name.like(f"%{name}%")) + ) + return result.scalars().all() +``` + +### Updating Data + +```python +async def update_user_email(user_id: int, new_email: str): + """Update a user's email.""" + async with get_session() as session: + result = await session.execute( + select(User).where(User.id == user_id) + ) + user = result.scalar_one() + user.email = new_email + await session.commit() + return user +``` + +### Deleting Data + +```python +async def delete_user(user_id: int): + """Delete a user.""" + async with get_session() as session: + result = await session.execute( + select(User).where(User.id == user_id) + ) + user = result.scalar_one() + await session.delete(user) + await session.commit() +``` + +### Transaction Management + +```python +async def transfer_credits(from_user_id: int, to_user_id: int, amount: int): + """Transfer credits between users with transaction safety.""" + async with get_session() as session: + try: + # Get both users + from_user = (await session.execute( + select(User).where(User.id == from_user_id) + )).scalar_one() + + to_user = (await session.execute( + select(User).where(User.id == to_user_id) + )).scalar_one() + + # Perform transfer + if from_user.credits < amount: + raise ValueError("Insufficient credits") + + from_user.credits -= amount + to_user.credits += amount + + await session.commit() + + except Exception: + await session.rollback() + raise +``` {%- if cookiecutter.include_fastapi == "y" %} ## FastAPI Integration -The function `{{cookiecutter.__package_slug}}.db:get_session_depends` is designed to work with the [FastAPI Dependency system](https://fastapi.tiangolo.com/tutorial/dependencies/), and can be passed directly to [Depends](https://fastapi.tiangolo.com/tutorial/dependencies/dependencies-in-path-operation-decorators/). +The `get_session_depends` function integrates seamlessly with FastAPI's dependency injection system. 
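+
+Under the hood, this style of dependency is just an async generator that opens a session when a request starts and releases it once the response has been sent. The snippet below is a sketch of that pattern rather than the project's exact implementation; the real function lives in the `{{cookiecutter.__package_slug}}.services.db` module:
+
+```python
+from typing import AsyncIterator
+
+from sqlalchemy.ext.asyncio import AsyncSession
+
+from {{cookiecutter.__package_slug}}.services.db import get_session
+
+
+async def get_session_depends() -> AsyncIterator[AsyncSession]:
+    """Yield a database session for the duration of a single request."""
+    async with get_session() as session:
+        yield session
+```
+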
+ +### Using Database Sessions in Endpoints + +```python +from fastapi import Depends +from sqlalchemy.ext.asyncio import AsyncSession +from {{cookiecutter.__package_slug}}.services.db import get_session_depends + +@app.get("/users/{user_id}") +async def get_user( + user_id: int, + session: AsyncSession = Depends(get_session_depends) +): + """Get a user by ID.""" + result = await session.execute( + select(User).where(User.id == user_id) + ) + user = result.scalar_one_or_none() + if not user: + raise HTTPException(status_code=404, detail="User not found") + return user + + +@app.post("/users") +async def create_user( + user_data: UserCreate, + session: AsyncSession = Depends(get_session_depends) +): + """Create a new user.""" + user = User(**user_data.dict()) + session.add(user) + await session.commit() + await session.refresh(user) + return user +``` + +### Testing with Database Fixtures + +The test suite provides database fixtures that override the dependency: + +```python +def test_create_user(fastapi_client): + """Test creating a user via API.""" + response = fastapi_client.post( + "/users", + json={"name": "Test User", "email": "test@example.com"} + ) + assert response.status_code == 200 + data = response.json() + assert data["name"] == "Test User" +``` +See [Testing Documentation](./testing.md#testing-database-operations) for more details on testing with databases. {%- endif %} -## Schema +## Migrations with Alembic + +Alembic manages database schema changes through migration scripts, allowing you to version and track database structure over time. + +### Creating Migrations + +Alembic automatically detects changes in your models and generates migration scripts: + +```bash +# Create a new migration +make create_migration MESSAGE="add user table" + +# This creates a file like: db/versions/abc123_add_user_table.py +``` + +The migration generation process: + +1. Creates a temporary SQLite database +2. Applies all existing migrations to it +3. Compares the current models with the migrated database schema +4. Generates a migration script with the differences +5. Automatically formats the generated script with ruff + +### Migration Structure + +Generated migrations contain `upgrade()` and `downgrade()` functions: + +```python +"""add user table + +Revision ID: abc123 +Revises: xyz789 +Create Date: 2024-01-15 10:30:00.000000 + +""" +from alembic import op +import sqlalchemy as sa + +# revision identifiers, used by Alembic. +revision = 'abc123' +down_revision = 'xyz789' +branch_labels = None +depends_on = None + + +def upgrade() -> None: + """Upgrade database schema.""" + op.create_table( + 'users', + sa.Column('id', sa.Integer(), nullable=False), + sa.Column('name', sa.String(100), nullable=False), + sa.Column('email', sa.String(255), nullable=False), + sa.PrimaryKeyConstraint('id'), + sa.UniqueConstraint('email') + ) + + +def downgrade() -> None: + """Downgrade database schema.""" + op.drop_table('users') +``` + +### Running Migrations + +```bash +# Apply all pending migrations +make run_migrations -This schema is generated with Paracelsus. To update run `make document_schema`. 
+# Equivalent to: +alembic upgrade head + +# Downgrade one migration +alembic downgrade -1 + +# Downgrade to a specific revision +alembic downgrade abc123 + +# View migration history +alembic history + +# Show current migration version +alembic current +``` + +### Checking for Ungenerated Migrations + +Before creating a new migration, check if there are pending model changes: + +```bash +# Check if models have changed since last migration +make check_ungenerated_migrations + +# Equivalent to: +alembic check +``` + +This command will: + +- Exit with code 0 if no changes are detected +- Exit with code 1 if there are ungenerated changes +- Useful in CI/CD to ensure migrations are created for all model changes + +### Migration Best Practices + +1. **Descriptive messages**: Use clear, concise migration messages + + ```bash + make create_migration MESSAGE="add user email verification fields" + ``` + +2. **Small, focused migrations**: Each migration should address one logical change + + ```bash + # Good - separate migrations + make create_migration MESSAGE="add users table" + make create_migration MESSAGE="add user indexes" + + # Bad - one large migration + make create_migration MESSAGE="add users and products and orders" + ``` + +3. **Test migrations**: Always test both upgrade and downgrade + + ```bash + # Test upgrade + make run_migrations + + # Test downgrade + alembic downgrade -1 + + # Re-upgrade + make run_migrations + ``` + +4. **Review generated migrations**: Always review auto-generated migrations before committing + - Check for unintended changes + - Add data migrations if needed + - Verify indexes and constraints + +5. **Data migrations**: For complex data transformations, add custom logic + + ```python + def upgrade() -> None: + # Schema change + op.add_column('users', sa.Column('full_name', sa.String(200))) + + # Data migration + connection = op.get_bind() + connection.execute( + sa.text("UPDATE users SET full_name = name WHERE full_name IS NULL") + ) + ``` + +### Database Reset and Cleanup + +```bash +# Clear the database (removes SQLite file) +make clear_db + +# Clear and re-run all migrations +make reset_db +``` + +## Common CRUD Patterns + +### Create + +```python +async def create_record(data: dict): + """Create a new record.""" + async with get_session() as session: + record = MyModel(**data) + session.add(record) + await session.commit() + await session.refresh(record) + return record +``` + +### Read + +```python +from sqlalchemy import select + +async def get_record_by_id(record_id: int): + """Get a single record by ID.""" + async with get_session() as session: + result = await session.execute( + select(MyModel).where(MyModel.id == record_id) + ) + return result.scalar_one_or_none() + + +async def get_all_records(skip: int = 0, limit: int = 100): + """Get paginated records.""" + async with get_session() as session: + result = await session.execute( + select(MyModel).offset(skip).limit(limit) + ) + return result.scalars().all() + + +async def get_filtered_records(status: str): + """Get records with filtering.""" + async with get_session() as session: + result = await session.execute( + select(MyModel).where(MyModel.status == status) + ) + return result.scalars().all() +``` + +### Update + +```python +async def update_record(record_id: int, updates: dict): + """Update a record.""" + async with get_session() as session: + result = await session.execute( + select(MyModel).where(MyModel.id == record_id) + ) + record = result.scalar_one() + + for key, value in updates.items(): + 
setattr(record, key, value) + + await session.commit() + await session.refresh(record) + return record +``` + +### Delete + +```python +async def delete_record(record_id: int): + """Delete a record.""" + async with get_session() as session: + result = await session.execute( + select(MyModel).where(MyModel.id == record_id) + ) + record = result.scalar_one() + await session.delete(record) + await session.commit() +``` + +## Testing Database Operations + +The test suite provides fixtures for database testing with isolated, in-memory databases. + +### Using the db_session Fixture + +```python +import pytest +from sqlalchemy import select + +@pytest.mark.asyncio +async def test_create_user(db_session): + """Test creating a user.""" + user = User(name="Test User", email="test@example.com") + db_session.add(user) + await db_session.commit() + + # Verify creation + result = await db_session.execute( + select(User).where(User.email == "test@example.com") + ) + saved_user = result.scalar_one() + assert saved_user.name == "Test User" +``` + +See [Testing Documentation](./testing.md) for comprehensive testing patterns. + +## Best Practices + +1. **Always use async/await**: This project uses async SQLAlchemy exclusively + + ```python + # Good + async with get_session() as session: + result = await session.execute(query) + + # Bad - will not work + with get_session() as session: + result = session.execute(query) + ``` + +2. **Use context managers for sessions**: Ensures proper cleanup and connection management + + ```python + # Good + async with get_session() as session: + # operations here + + # Bad - manual session management + session = create_session() + # operations + session.close() # Easy to forget! + ``` + +3. **Use select() for queries**: Modern SQLAlchemy 2.0 style + + ```python + # Good - SQLAlchemy 2.0 style + result = await session.execute(select(User).where(User.id == 1)) + user = result.scalar_one() + + # Old - SQLAlchemy 1.x style (avoid) + user = session.query(User).filter(User.id == 1).one() + ``` + +4. **Handle exceptions properly**: Always be prepared for database errors + + ```python + from sqlalchemy.exc import IntegrityError + + try: + session.add(user) + await session.commit() + except IntegrityError: + await session.rollback() + # Handle duplicate email, etc. + ``` + +5. **Use scalar_one_or_none() for single results**: Prevents exceptions on missing data + + ```python + # Good - returns None if not found + user = result.scalar_one_or_none() + if user is None: + # handle not found + + # Bad - raises exception if not found + user = result.scalar_one() # Will raise if no result + ``` + +6. **Refresh after commit to get generated values**: Get auto-generated IDs and defaults + + ```python + session.add(user) + await session.commit() + await session.refresh(user) # Now user.id is populated + ``` + +7. **Use relationships for related data**: Let SQLAlchemy handle joins + + ```python + # Good - use relationships + author = result.scalar_one() + books = author.books # SQLAlchemy handles the query + + # Less efficient - manual joins + books = await session.execute( + select(Book).where(Book.author_id == author.id) + ) + ``` + +8. 
**Index frequently queried columns**: Improve query performance + + ```python + email = Column(String(255), unique=True, index=True) # Indexed for fast lookups + ``` + +## Development vs Production + +### Development Configuration + +```bash +# SQLite for local development (fast and simple) +export DATABASE_URL="sqlite:///./dev.db" + +# Or in-memory for testing +export DATABASE_URL="sqlite:///:memory:" +``` + +### Production Configuration + +```bash +# PostgreSQL for production (recommended) +export DATABASE_URL="postgresql://username:password@hostname:5432/database" + +# With connection pool settings +export DATABASE_URL="postgresql://username:password@hostname:5432/database?pool_size=20&max_overflow=0" +``` + +### Database Initialization + +In production, ensure migrations are run before starting the application: + +```bash +# Run all pending migrations +make run_migrations + +# Start your application +python -m {{cookiecutter.__package_slug}}.www # or celery, etc. +``` + +## Schema Documentation + +This schema is automatically generated with [Paracelsus](https://github.com/tedivm/paracelsus). To update: + +```bash +make document_schema +``` + +## References + +- [SQLAlchemy Documentation](https://docs.sqlalchemy.org/en/20/) +- [SQLAlchemy ORM Documentation](https://docs.sqlalchemy.org/en/20/orm/) +- [SQLAlchemy Async Documentation](https://docs.sqlalchemy.org/en/20/orm/extensions/asyncio.html) +- [Alembic Documentation](https://alembic.sqlalchemy.org/) +- [Alembic Tutorial](https://alembic.sqlalchemy.org/en/latest/tutorial.html) diff --git a/{{cookiecutter.__package_slug}}/docs/dev/dependencies.md b/{{cookiecutter.__package_slug}}/docs/dev/dependencies.md index 3c3f15c..84befec 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/dependencies.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/dependencies.md @@ -1 +1,730 @@ # Dependencies + +This project uses modern Python packaging standards with `pyproject.toml` as the central configuration file for all dependencies, build settings, and tool configurations. + +## Dependency Management Structure + +Dependencies are organized in `pyproject.toml` using the modern Python packaging standard (PEP 621): + +```toml +[project] +name = "{{cookiecutter.__package_slug}}" +dependencies = [ + # Runtime dependencies required to run the application +] + +[project.optional-dependencies] +dev = [ + # Development dependencies for testing, linting, etc. 
+] +``` + +### Main Dependencies vs Dev Dependencies + +**Main Dependencies** (`dependencies`): + +- Required to run the application in production +- Installed with `pip install {{cookiecutter.__package_slug}}` +- Includes frameworks, libraries, and runtime requirements +- Example: `fastapi`, `sqlalchemy`, `pydantic` + +**Dev Dependencies** (`optional-dependencies.dev`): + +- Only needed during development and testing +- Installed with `pip install {{cookiecutter.__package_slug}}[dev]` +- Includes testing tools, linters, formatters, and build tools +- Example: `pytest`, `ruff`, `mypy` + +## Project Dependencies + +### Core Runtime Dependencies + +All projects include these essential dependencies: + +- **pydantic~=2.0**: Data validation and settings management using Python type annotations +- **pydantic-settings**: Extension for loading configuration from environment variables + +{%- if cookiecutter.include_fastapi == "y" %} + +### Web Framework + +- **fastapi**: Modern, high-performance web framework for building APIs with automatic validation and documentation +{%- endif %} + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +### Database + +- **SQLAlchemy**: Comprehensive SQL toolkit and ORM for database operations +- **alembic**: Database migration tool for SQLAlchemy +- **aiosqlite**: Async SQLite driver for development and testing +- **asyncpg**: High-performance async PostgreSQL driver for production +- **psycopg2-binary**: Traditional PostgreSQL adapter (synchronous operations) +{%- endif %} + +{%- if cookiecutter.include_celery == "y" %} + +### Task Queue + +- **celery**: Distributed task queue for asynchronous job processing +- **redis**: Redis client library for Celery broker and result backend +{%- endif %} + +{%- if cookiecutter.include_aiocache == "y" %} + +### Caching + +- **aiocache**: Async caching library supporting multiple backends (Redis, in-memory) +- **redis**: Redis client for cache persistence +{%- endif %} + +{%- if cookiecutter.include_jinja2 == "y" %} + +### Templating + +- **jinja2**: Powerful template engine for generating HTML, configuration files, etc. 
+{%- endif %} + +{%- if cookiecutter.include_quasiqueue == "y" %} + +### Multiprocessing + +- **QuasiQueue**: Async-native multiprocessing library for CPU-intensive tasks +{%- endif %} + +{%- if cookiecutter.include_cli == "y" %} + +### CLI + +- **typer**: Modern CLI framework based on Python type hints with automatic help generation +{%- endif %} + +### Development Dependencies + +Development dependencies are organized in the `[project.optional-dependencies]` section: + +- **pytest**: Testing framework with powerful fixtures and assertion introspection +- **pytest-asyncio**: Plugin for testing async/await code +- **pytest-cov**: Code coverage reporting plugin +- **pytest-pretty**: Beautiful test output formatting +- **ruff**: Fast Python linter and formatter (replaces Black, isort, Flake8) +- **mypy**: Static type checker for catching type-related bugs +- **build**: PEP 517 build frontend for creating distribution packages +- **dapperdata**: Data formatting and validation tool +- **glom**: Nested data access and transformation +- **greenlet**: Lightweight concurrent programming support (required for coverage with async) +- **toml-sort**: Automatic TOML file sorting for consistency +{%- if cookiecutter.include_fastapi == "y" %} +- **httpx**: Modern HTTP client for testing FastAPI endpoints +{%- endif %} +{%- if cookiecutter.include_sqlalchemy == "y" %} +- **paracelsus**: Automatic database schema documentation generator +{%- endif %} +{%- if cookiecutter.include_requirements_files == "y" %} +- **uv**: Fast Python package installer and resolver for generating requirements files +{%- endif %} + +## Adding New Dependencies + +### Add Runtime Dependency + +Edit `pyproject.toml` and add to the `dependencies` list: + +```toml +[project] +dependencies = [ + "requests", # Add new dependency + "pydantic~=2.0", + # ... other dependencies +] +``` + +Then install: + +```bash +# Install with pip +pip install -e . + +# Or use make +make install +``` + +### Add Development Dependency + +Edit `pyproject.toml` and add to the `dev` list: + +```toml +[project.optional-dependencies] +dev = [ + "black", # Add new dev dependency + "pytest", + # ... other dev dependencies +] +``` + +Then install: + +```bash +# Install with dev dependencies +pip install -e .[dev] + +# Or use make +make install +``` + +### Using pip Directly + +You can also add dependencies using pip and then update `pyproject.toml`: + +```bash +# Install a package +pip install requests + +# Manually add to pyproject.toml dependencies list +# Then reinstall to ensure consistency +pip install -e .[dev] +``` + +## Removing Dependencies + +1. Remove the dependency from `pyproject.toml` +2. Reinstall the package: + +```bash +pip install -e .[dev] +``` + +3. Verify the dependency is removed: + +```bash +pip list | grep package-name +``` + +4. 
If needed, explicitly uninstall: + +```bash +pip uninstall package-name +``` + +## Version Pinning Strategies + +### Compatible Release (Recommended) + +Use the `~=` operator for compatible versions: + +```toml +"pydantic~=2.0" # Allows >=2.0.0, <3.0.0 +``` + +**Benefits**: + +- Gets bug fixes and minor updates automatically +- Avoids breaking changes from major version bumps +- Balance between stability and updates + +### Minimum Version + +Specify only minimum version: + +```toml +"requests>=2.28.0" # Any version >= 2.28.0 +``` + +**Use cases**: + +- When you need a specific feature added in a version +- Maximum flexibility for dependency resolution + +### Exact Version (Not Recommended) + +Pin to an exact version: + +```toml +"requests==2.31.0" # Only version 2.31.0 +``` + +**Use cases**: + +- Troubleshooting version-specific bugs +- Temporary pin during debugging +- **Warning**: Prevents security updates and bug fixes + +### Version Range + +Specify a range: + +```toml +"django>=4.0,<5.0" # Version 4.x only +``` + +### Best Practice + +For most dependencies, use compatible release (`~=`): + +```toml +dependencies = [ + "pydantic~=2.0", # Get 2.x updates, avoid 3.x + "fastapi~=0.109", # Get 0.109.x updates + "sqlalchemy~=2.0", # Get 2.x updates +] +``` + +{%- if cookiecutter.include_requirements_files == "y" %} + +## Requirements Files + +This project can optionally generate `requirements.txt` files for compatibility with tools that don't support `pyproject.toml`: + +### Generate Requirements Files + +```bash +# Generate both requirements files +make dependencies + +# Or manually: +make rebuild_dependencies +``` + +This creates: + +- `requirements.txt`: Runtime dependencies only +- `requirements-dev.txt`: Runtime + development dependencies + +### How It Works + +Uses [uv](https://github.com/astral-sh/uv) for fast dependency resolution: + +```bash +# Generate runtime requirements +uv pip compile --output-file=requirements.txt pyproject.toml + +# Generate dev requirements +uv pip compile --extra=dev --output-file=requirements-dev.txt pyproject.toml +``` + +### When to Use Requirements Files + +**Use `pyproject.toml`** (preferred): + +- Modern Python projects +- Publishing to PyPI +- Editable installs (`pip install -e .`) + +**Use `requirements.txt`**: + +- Legacy CI/CD systems +- Docker images (for layer caching optimization) +- Tools that don't support pyproject.toml +- Exact reproducible environments + +### Installing from Requirements Files + +```bash +# Install runtime requirements +pip install -r requirements.txt + +# Install dev requirements +pip install -r requirements-dev.txt +``` + +{%- endif %} + +## Updating Dependencies + +### Update All Dependencies + +```bash +# Update all packages to latest compatible versions +pip install --upgrade -e .[dev] + +# Verify updates +pip list --outdated +``` + +{%- if cookiecutter.include_requirements_files == "y" %} + +### Rebuild Requirements Files with Updates + +```bash +# Force update of all dependencies in requirements files +make rebuild_dependencies +``` + +{%- endif %} + +### Update Specific Dependency + +```bash +# Update one package +pip install --upgrade package-name + +# Verify new version +pip show package-name +``` + +### Check for Outdated Packages + +```bash +# List all outdated packages +pip list --outdated + +# Show detailed information +pip list --outdated --format=columns +``` + +## Security Considerations + +### Security Scanning + +Regularly scan for security vulnerabilities: + +```bash +# Using pip-audit (install 
separately) +pip install pip-audit +pip-audit + +# Using safety (install separately) +pip install safety +safety check +``` + +### Keeping Dependencies Updated + +1. **Regular updates**: Update dependencies monthly +2. **Security patches**: Apply security updates immediately +3. **Test thoroughly**: Run full test suite after updates +4. **Review changelogs**: Check breaking changes before major version updates + +### Dependabot Integration + +{%- if cookiecutter.include_github_actions == "y" %} + +This project includes GitHub's Dependabot for automatic dependency updates: + +- Automatically creates PRs for dependency updates +- Checks for security vulnerabilities +- Configured in `.github/dependabot.yml` + +See [GitHub Actions Documentation](./github.md) for more details. +{%- endif %} + +## Virtual Environment Management + +### Creating a Virtual Environment + +```bash +# Create virtual environment +python -m venv .venv + +# Or using make +make install +``` + +### Activating the Virtual Environment + +```bash +# On macOS/Linux +source .venv/bin/activate + +# On Windows +.venv\Scripts\activate +``` + +### Using pyenv for Python Version Management + +This project uses pyenv to manage Python versions: + +```bash +# Install Python version specified in .python-version +make pyenv + +# Or manually +pyenv install $(cat .python-version) + +# Set local Python version +pyenv local 3.11.0 +``` + +### Checking Virtual Environment + +```bash +# Verify you're in the virtual environment +which python +# Should show: /path/to/project/.venv/bin/python + +# Check installed packages +pip list + +# Check Python version +python --version +``` + +### Deactivating Virtual Environment + +```bash +deactivate +``` + +## Dependency Resolution Conflicts + +### Common Conflicts + +When pip cannot resolve dependencies: + +```bash +ERROR: pip's dependency resolver does not currently take into account all the packages +that are installed. This behavior is the source of the following dependency conflicts. +``` + +### Troubleshooting Steps + +1. **Update pip**: + + ```bash + pip install --upgrade pip + ``` + +2. **Check for incompatible versions**: + + ```bash + pip check + ``` + +3. **Create fresh virtual environment**: + + ```bash + rm -rf .venv + python -m venv .venv + source .venv/bin/activate + pip install -e .[dev] + ``` + +4. **Install dependencies one at a time**: + + ```bash + pip install pydantic + pip install fastapi + # etc. + ``` + +5. **Check for pre-release versions**: + + ```bash + # Allow pre-release versions if needed + pip install --pre package-name + ``` + +### Using pip-compile for Locking + +For exact reproducibility, use pip-compile (pip-tools): + +```bash +# Install pip-tools +pip install pip-tools + +# Generate locked requirements +pip-compile pyproject.toml --output-file=requirements.lock + +# Install from locked file +pip install -r requirements.lock +``` + +## Development vs Production Dependencies + +### Development Installation + +```bash +# Install with all dev dependencies +pip install -e .[dev] + +# Or use make +make install +``` + +**Includes**: + +- Testing frameworks (pytest) +- Linting and formatting (ruff, mypy) +- Build tools +- Documentation generators + +### Production Installation + +```bash +# Install only runtime dependencies +pip install . 
+ +# Or from PyPI +pip install {{cookiecutter.__package_slug}} +``` + +**Includes**: + +- Only dependencies needed to run the application +- Smaller installation size +- Faster installation time + +### Docker Production Images + +Production Docker images should only install runtime dependencies: + +```dockerfile +# Install only runtime dependencies (no [dev]) +RUN pip install --no-cache-dir -r requirements.txt +``` + +## Optional Dependency Groups + +You can create multiple optional dependency groups: + +```toml +[project.optional-dependencies] +dev = [ + "pytest", + "ruff", +] + +docs = [ + "sphinx", + "sphinx-rtd-theme", +] + +performance = [ + "uvloop", + "orjson", +] +``` + +Install specific groups: + +```bash +# Install dev dependencies +pip install -e .[dev] + +# Install multiple groups +pip install -e .[dev,docs] + +# Install all optional dependencies +pip install -e .[dev,docs,performance] +``` + +## Build System Configuration + +The build system is configured at the top of `pyproject.toml`: + +```toml +[build-system] +build-backend = "setuptools.build_meta" +requires = ["setuptools>=67.0", "setuptools_scm[toml]>=7.1"] +``` + +**Components**: + +- **build-backend**: Uses setuptools for building packages +- **setuptools**: Modern Python build system +- **setuptools_scm**: Automatic versioning from git tags + +### Building Distribution Packages + +```bash +# Build source and wheel distributions +make build + +# Or manually +python -m build + +# Creates: +# dist/{{cookiecutter.__package_slug}}-X.Y.Z.tar.gz (source) +# dist/{{cookiecutter.__package_slug}}-X.Y.Z-py3-none-any.whl (wheel) +``` + +## Best Practices + +1. **Use pyproject.toml as single source of truth**: Don't mix with `setup.py` or `setup.cfg` + +2. **Pin major versions with ~=**: Allows updates while preventing breaking changes + + ```toml + "pydantic~=2.0" # Good + "pydantic" # Bad - no version constraint + "pydantic==2.5.0" # Bad - too restrictive + ``` + +3. **Separate runtime and dev dependencies**: Keep production images lean + +4. **Use editable installs for development**: `-e` flag for faster iteration + + ```bash + pip install -e .[dev] + ``` + +5. **Keep dependencies updated**: Regular updates prevent security issues + +6. **Test after updates**: Run full test suite after dependency updates + + ```bash + pip install --upgrade -e .[dev] + make test + ``` + +7. **Document why dependencies are needed**: Add comments in pyproject.toml + + ```toml + dependencies = [ + "pydantic~=2.0", # Settings and validation + "requests~=2.31", # HTTP client for external APIs + ] + ``` + +8. **Use virtual environments**: Always work in virtual environments + +9. **Lock dependencies for production**: Use requirements files or pip-compile for exact reproducibility + +10. 
**Review dependency licenses**: Ensure compatibility with your project's license + +## Troubleshooting + +### "ModuleNotFoundError" After Adding Dependency + +```bash +# Reinstall to pick up new dependencies +pip install -e .[dev] + +# Verify package is installed +pip show package-name +``` + +### "No module named 'setuptools_scm'" + +```bash +# Update pip and install build dependencies +pip install --upgrade pip setuptools wheel +pip install -e .[dev] +``` + +### Slow Dependency Resolution + +```bash +# Use uv for faster dependency resolution +pip install uv +uv pip install -e .[dev] +``` + +### Conflicting Dependencies + +```bash +# Show dependency tree +pip install pipdeptree +pipdeptree + +# Find conflicts +pipdeptree --warn conflicts +``` + +## References + +- [PEP 621 - Project Metadata](https://peps.python.org/pep-0621/) +- [Python Packaging User Guide](https://packaging.python.org/) +- [pip Documentation](https://pip.pypa.io/) +- [pyenv Documentation](https://github.com/pyenv/pyenv) +- [uv - Fast Python Package Installer](https://github.com/astral-sh/uv) diff --git a/{{cookiecutter.__package_slug}}/docs/dev/docker.md b/{{cookiecutter.__package_slug}}/docs/dev/docker.md index 94fce21..09f05d0 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/docker.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/docker.md @@ -1,31 +1,804 @@ # Docker -## Images +This project includes Docker containerization for all services, making it easy to develop, test, and deploy in consistent environments across different platforms. -{% if cookiecutter.include_cli == "y" %} +## Docker Images -### FastAPI +The project uses specialized base images from the Multi-Py project, optimized for different Python workloads: -Images are created using the [Multi-Py Uvicorn Project](https://github.com/multi-py/python-uvicorn). +{%- if cookiecutter.include_fastapi == "y" %} -{% endif %} +### FastAPI (Web Server) -{% if cookiecutter.include_celery == "y" %} +**Base Image**: [ghcr.io/multi-py/python-uvicorn](https://github.com/multi-py/python-uvicorn) -### Celery +The FastAPI image is built on the Multi-Py Uvicorn base, providing: -Images are created using the [Multi-Py Celery Project](https://github.com/multi-py/python-celery). 
+- Pre-configured Uvicorn ASGI server +- Automatic hot-reload in development mode +- Production-ready performance optimizations +- Health check endpoints +- Graceful shutdown handling -{% endif %} +**Dockerfile**: `dockerfile.www` + +```dockerfile +ARG PYTHON_VERSION={{ cookiecutter.__python_short_version }} +FROM ghcr.io/multi-py/python-uvicorn:py${PYTHON_VERSION}-slim-LATEST + +ENV APP_MODULE={{ cookiecutter.__package_slug }}.www:app + +COPY requirements.txt /requirements.txt +RUN pip install --no-cache-dir -r /requirements.txt + +COPY ./docker/www/prestart.sh /app/prestart.sh +COPY ./ /app +``` + +**Key Features**: + +- Automatically runs `prestart.sh` before starting Uvicorn +- Supports hot-reload via `RELOAD=true` environment variable +- Runs on port 80 by default +- Includes health check support +{%- endif %} + +{%- if cookiecutter.include_celery == "y" %} + +### Celery (Task Queue) + +**Base Image**: [ghcr.io/multi-py/python-celery](https://github.com/multi-py/python-celery) + +The Celery image is built on the Multi-Py Celery base, providing: + +- Pre-configured Celery worker and beat scheduler +- Automatic task discovery +- Graceful shutdown with task completion +- Memory leak protection with max-tasks-per-child +- Production-optimized concurrency settings + +**Dockerfile**: `dockerfile.celery` + +```dockerfile +ARG PYTHON_VERSION={{ cookiecutter.__python_short_version }} +FROM ghcr.io/multi-py/python-celery:py${PYTHON_VERSION}-slim-LATEST + +ENV APP_MODULE={{ cookiecutter.__package_slug }}.celery:celery + +COPY requirements.txt /requirements.txt +RUN pip install --no-cache-dir -r /requirements.txt + +COPY ./docker/celery/prestart.sh /app/prestart.sh +COPY ./ /app +``` + +**Key Features**: + +- Separate containers for scheduler (beat) and workers +- Automatically runs migrations before starting workers +- Supports autoscaling worker processes +- Configurable concurrency and max tasks per child +{%- endif %} + +{%- if cookiecutter.include_quasiqueue == "y" %} + +### QuasiQueue (Multiprocessing) + +**Base Image**: Python slim + +The QuasiQueue image provides a containerized environment for running multiprocessing jobs: + +**Dockerfile**: `dockerfile.qq` + +```dockerfile +ARG PYTHON_VERSION={{ cookiecutter.__python_short_version }} +FROM python:${PYTHON_VERSION}-slim + +COPY requirements.txt /requirements.txt +RUN pip install --no-cache-dir -r /requirements.txt + +COPY ./ /app +WORKDIR /app + +CMD ["python", "-m", "{{ cookiecutter.__package_slug }}.qq"] +``` + +{%- endif %} + +## Docker Compose + +The project includes a `compose.yaml` file for orchestrating all services in development and testing. 
+ +### Services Overview + +{%- if cookiecutter.include_fastapi == "y" %} + +**www**: FastAPI web server + +- Port: 80 (host) → 80 (container) +- Hot-reload enabled in development +- Volume-mounted source code for live updates +{%- endif %} + +{%- if cookiecutter.include_celery == "y" %} + +**celery-scheduler**: Celery beat scheduler for periodic tasks + +- Runs scheduled tasks at configured intervals +- Single instance (do not scale) + +**celery-node**: Celery worker for processing tasks + +- Processes tasks from the queue +- Can be scaled horizontally (`docker-compose up --scale celery-node=3`) +{%- endif %} + +{%- if cookiecutter.include_quasiqueue == "y" %} + +**qq**: QuasiQueue multiprocessing service + +- Processes CPU-intensive jobs in parallel +{%- endif %} + +{%- if cookiecutter.include_celery == "y" or cookiecutter.include_aiocache == "y" %} + +**redis**: Redis cache and message broker + +- Used for Celery task queue +- Used for distributed caching +- Persists data to disk by default +{%- endif %} + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +**db**: PostgreSQL database + +- Development database with default credentials +- Data persists across container restarts +- Port 5432 (internal only by default) +{%- endif %} + +### Running with Docker Compose + +```bash +# Start all services +docker-compose up + +# Start in detached mode (background) +docker-compose up -d + +# Start specific service +docker-compose up www + +# View logs +docker-compose logs -f + +# View logs for specific service +docker-compose logs -f www + +# Stop all services +docker-compose down + +# Stop and remove volumes (deletes database data!) +docker-compose down -v +``` + +### Scaling Services + +{%- if cookiecutter.include_celery == "y" %} + +Scale Celery workers for increased throughput: + +```bash +# Run 3 worker instances +docker-compose up --scale celery-node=3 + +# In production, use orchestration like Kubernetes for auto-scaling +``` + +{%- endif %} + +## Environment Variables in Docker + +Environment variables are configured in `compose.yaml` for development: + +### Common Variables + +- **IS_DEV**: Set to `true` to enable development features +{%- if cookiecutter.include_fastapi == "y" %} +- **RELOAD**: Set to `true` to enable hot-reload in Uvicorn +{%- endif %} + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +### Database Configuration + +- **DATABASE_URL**: `postgresql://main:main12345@db/main` + - Format: `postgresql://[user]:[password]@[host]/[database]` + - Host `db` refers to the PostgreSQL service in compose +{%- endif %} + +{%- if cookiecutter.include_celery == "y" %} + +### Celery Configuration + +- **CELERY_BROKER**: `redis://redis:6379/0` + - Points to the Redis service for task queue +{%- endif %} + +{%- if cookiecutter.include_aiocache == "y" %} + +### Cache Configuration + +- **CACHE_REDIS_HOST**: `redis` +- **CACHE_REDIS_PORT**: `6379` +{%- endif %} + +### Override Environment Variables + +Create a `.env` file in the project root to override default values: + +```bash +# .env file +DATABASE_URL=postgresql://custom_user:custom_pass@db/custom_db +DEBUG=True +CACHE_ENABLED=True +``` + +Docker Compose automatically loads `.env` files. 
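+
+At startup, the application's Pydantic settings classes read these values from the container environment. The class below is a simplified sketch with illustrative field names; the generated project's own settings module is the source of truth:
+
+```python
+from pydantic_settings import BaseSettings
+
+
+class Settings(BaseSettings):
+    """Illustrative settings sketch; field names here are examples only."""
+
+    debug: bool = False                        # Populated from the DEBUG environment variable
+    database_url: str = "sqlite:///./test.db"  # Populated from DATABASE_URL
+
+
+settings = Settings()  # Environment variables set in compose.yaml (or via .env overrides) win over the defaults
+```
+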
+ +## Volume Mounts for Development + +The compose file mounts source code as volumes for live development: + +{%- if cookiecutter.include_fastapi == "y" %} + +```yaml +volumes: + - "./{{cookiecutter.__package_slug}}:/app/{{cookiecutter.__package_slug}}" # Source code + - "./db:/app/db" # Migration scripts + - "./docker/www/prestart.sh:/app/prestart.sh" # Startup script +``` + +{%- endif %} + +**Benefits**: + +- Code changes are immediately reflected in the container +- No need to rebuild images during development +- Fast iteration cycle + +**Note**: Volume mounts should NOT be used in production. Production images should have code baked in during build. + +## Building Images + +### Build All Images + +```bash +# Build all services +docker-compose build + +# Build with no cache (clean build) +docker-compose build --no-cache + +# Build specific service +docker-compose build www +``` + +### Build for Production + +Production images should not use volume mounts: + +```bash +# Build production image +docker build -f dockerfile.www -t {{cookiecutter.__package_slug}}-www:latest . + +# Tag for registry +docker tag {{cookiecutter.__package_slug}}-www:latest ghcr.io/your-org/{{cookiecutter.__package_slug}}-www:latest + +# Push to registry +docker push ghcr.io/your-org/{{cookiecutter.__package_slug}}-www:latest +``` + +## Docker Ignore File + +The project includes a `.dockerignore` file that controls which files are copied into Docker images during the build process. + +### Default Ignore Strategy + +The `.dockerignore` file uses a **deny-by-default** approach for maximum security and minimal image size: + +``` +# Ignore everything by default +* + +# Explicitly allow only what's needed +!/{{cookiecutter.__package_slug}} +!/.python-version +!/db +!/docker +!/alembic.ini +!/LICENSE +!/makefile +!/pyproject.toml +!/README.md +!/setup.* +!/requirements* +``` + +**Why deny-by-default?** + +- **Security**: Prevents accidentally including sensitive files (`.env`, credentials, SSH keys) +- **Image Size**: Keeps images small by excluding unnecessary files +- **Build Speed**: Reduces build context size for faster builds +- **Explicit Control**: You must consciously decide what goes into the image + +### Adding New Files to Docker Images + +When you add new files or directories that need to be in the Docker image, you **must update `.dockerignore`**: + +```bash +# Example: Adding a new static assets directory +!/static + +# Example: Adding a configuration directory +!/config + +# Example: Adding documentation that should be in the image +!/docs +``` + +**Important**: The `!` prefix means "don't ignore this" (include it). + +### Common Files to Keep Excluded + +These should remain excluded from Docker images: + +``` +.git/ # Git repository data +.venv/ # Virtual environments +__pycache__/ # Python bytecode cache +*.pyc # Compiled Python files +.pytest_cache/ # Test cache +.env # Environment variables file +.env.* # Environment variable variants +node_modules/ # Node.js dependencies (if applicable) +.DS_Store # macOS metadata +*.log # Log files +.coverage # Coverage reports +htmlcov/ # Coverage HTML reports +dist/ # Distribution builds +*.egg-info/ # Python package metadata +``` -## Dev Environment +### Troubleshooting Missing Files -The build in docker compose environment can be used to development. +If your Docker container is missing files you expect: -{% if cookiecutter.include_github_actions == "y" %} +1. 
**Check `.dockerignore`**: Ensure the file/directory is explicitly allowed -## Registry + ```bash + # View what's being excluded + cat .dockerignore + ``` -Images are automatically created and published to the Github Container Registry using Github Actions. +2. **Test the build context**: + ```bash + # See what files Docker will copy + docker build --no-cache -f dockerfile.www --progress=plain . 2>&1 | grep "COPY" + ``` + +3. **Add the missing path**: + + ``` + # In .dockerignore, add: + !/path/to/your/file + ``` + +4. **Rebuild the image**: + + ```bash + docker-compose build --no-cache + ``` + +### Example: Adding Custom Templates + +If you add custom templates outside the main package: + +``` +{{cookiecutter.__package_slug}}/ +templates/ # Custom templates directory (new) +{{cookiecutter.__package_slug}}/ +``` + +Update `.dockerignore`: + +``` +# ... existing entries ... +!/templates +``` + +## Multi-Stage Builds + +The base images from Multi-Py already use multi-stage builds for optimization. You can extend them for additional optimization: + +```dockerfile +# Example: Multi-stage build with build dependencies +FROM ghcr.io/multi-py/python-uvicorn:py{{ cookiecutter.__python_short_version }}-slim-LATEST AS builder + +# Install build dependencies +RUN apt-get update && apt-get install -y gcc g++ make + +# Install Python packages +COPY requirements.txt /requirements.txt +RUN pip install --no-cache-dir -r /requirements.txt + +# Final stage - copy only what's needed +FROM ghcr.io/multi-py/python-uvicorn:py{{ cookiecutter.__python_short_version }}-slim-LATEST + +# Copy installed packages from builder +COPY --from=builder /usr/local/lib/python{{ cookiecutter.__python_short_version }}/site-packages/ /usr/local/lib/python{{ cookiecutter.__python_short_version }}/site-packages/ + +# Copy application +COPY ./ /app +``` + +## Prestart Scripts + +Each service includes a prestart script that runs before the main application: + +{%- if cookiecutter.include_fastapi == "y" %} + +### FastAPI Prestart (`docker/www/prestart.sh`) + +The FastAPI prestart script: + +1. **Waits for database**: Uses `netcat` to check PostgreSQL availability +2. **Runs migrations**: Executes `alembic upgrade head` automatically +3. **Creates test data**: If `CREATE_TEST_DATA` is set, populates the database + +```bash +#!/usr/bin/env bash + +{% if cookiecutter.include_sqlalchemy == "y" %} +# Wait for PostgreSQL to be ready +if [ ! -z "$IS_DEV" ]; then + DB_HOST=$(python -c "from urllib.parse import urlparse; print(urlparse('${DATABASE_URL}').netloc.split('@')[-1]);") + if [ ! -z "$DB_HOST" ]; then + while ! nc -zv ${DB_HOST} 5432 > /dev/null 2> /dev/null; do + echo "Waiting for postgres to be available at host '${DB_HOST}'" + sleep 1 + done + fi +fi + +# Run migrations +echo "Run Database Migrations" +python -m alembic upgrade head + +# Create test data if requested +if [ ! -z "$CREATE_TEST_DATA" ]; then + echo "Creating test data..." + python -m {{cookiecutter.__package_slug}}.cli test-data +fi {% endif %} +``` + +{%- endif %} + +{%- if cookiecutter.include_celery == "y" %} + +### Celery Prestart (`docker/celery/prestart.sh`) + +Similar to FastAPI, ensures database is ready before starting workers. 
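+
+As a sketch of what that typically involves (adapted from the FastAPI script above; the generated `docker/celery/prestart.sh` may differ, and this assumes SQLAlchemy is enabled so `DATABASE_URL` exists):
+
+```bash
+#!/usr/bin/env bash
+# Sketch only: block until PostgreSQL accepts connections, then let the worker start
+DB_HOST=$(python -c "from urllib.parse import urlparse; print(urlparse('${DATABASE_URL}').netloc.split('@')[-1]);")
+if [ ! -z "$DB_HOST" ]; then
+  while ! nc -zv ${DB_HOST} 5432 > /dev/null 2> /dev/null; do
+    echo "Waiting for postgres to be available at host '${DB_HOST}'"
+    sleep 1
+  done
+fi
+```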
+{%- endif %} + +## Development vs Production + +### Development Configuration + +**docker-compose.yaml** is optimized for development: + +- Volume mounts for live code updates +- Hot-reload enabled +- Debug logging enabled +- Exposed ports for direct access +- Simple passwords and credentials + +```bash +# Start development environment +docker-compose up + +# Your code changes are immediately reflected +# No need to rebuild images +``` + +### Production Configuration + +For production, create a separate `docker-compose.prod.yaml`: + +```yaml +services: + www: + image: ghcr.io/your-org/{{cookiecutter.__package_slug}}-www:latest + restart: always + # NO volume mounts - code is in image + ports: + - "8000:80" # Don't expose on port 80 directly + environment: + IS_DEV: false + RELOAD: false + DATABASE_URL: ${DATABASE_URL} # Load from secure secrets + SECRET_KEY: ${SECRET_KEY} + deploy: + replicas: 3 + resources: + limits: + cpus: '1' + memory: 512M + reservations: + cpus: '0.5' + memory: 256M +``` + +**Production Best Practices**: + +1. Use tagged image versions (not `latest`) +2. Load secrets from secure stores (not .env files) +3. Don't expose internal ports +4. Configure resource limits +5. Enable restart policies +6. Use health checks +7. Run behind a reverse proxy (nginx, Traefik) + +## Debugging in Docker + +### View Container Logs + +```bash +# All services +docker-compose logs -f + +# Specific service +docker-compose logs -f www + +# Last 100 lines +docker-compose logs --tail=100 www +``` + +### Execute Commands in Running Containers + +```bash +# Open shell in container +docker-compose exec www bash + +# Run a command +docker-compose exec www python -m {{cookiecutter.__package_slug}}.cli version + +# Check database connection +docker-compose exec www python -c "from {{cookiecutter.__package_slug}}.services.db import engine; print(engine)" +``` + +### Debug Application Code + +Add this to your FastAPI code for interactive debugging: + +```python +import debugpy + +# Enable remote debugging on port 5678 +debugpy.listen(("0.0.0.0", 5678)) +print("Waiting for debugger to attach...") +debugpy.wait_for_client() +``` + +Then expose the port in compose: + +```yaml +services: + www: + ports: + - "80:80" + - "5678:5678" # Debugger port +``` + +## Health Checks + +Add health checks to ensure containers are running properly: + +```yaml +services: + www: + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost/docs"] + interval: 30s + timeout: 10s + retries: 3 + start_period: 40s +``` + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +```yaml + db: + healthcheck: + test: ["CMD-SHELL", "pg_isready -U main"] + interval: 10s + timeout: 5s + retries: 5 +``` + +{%- endif %} + +## Resource Limits + +Configure resource limits to prevent containers from consuming excessive resources: + +```yaml +services: + www: + deploy: + resources: + limits: + cpus: '2' # Maximum 2 CPU cores + memory: 1G # Maximum 1GB RAM + reservations: + cpus: '0.5' # Guaranteed 0.5 CPU cores + memory: 512M # Guaranteed 512MB RAM +``` + +{%- if cookiecutter.include_github_actions == "y" %} + +## Container Registry + +Images are automatically built and published to the GitHub Container Registry (ghcr.io) using GitHub Actions: + +### Automated Image Building + +On every push to main: + +1. GitHub Actions builds Docker images +2. Images are tagged with: + - `latest` for the main branch + - Git commit SHA for traceability + - Version tags from releases +3. 
Images are pushed to `ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}` + +### Pull Images from Registry + +```bash +# Pull latest image +docker pull ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}-www:latest + +# Pull specific version +docker pull ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}-www:v1.2.3 + +# Use in docker-compose +services: + www: + image: ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}-www:latest +``` + +See [GitHub Actions Documentation](./github.md) for more details on CI/CD workflows. +{%- endif %} + +## Networking + +Docker Compose automatically creates a network for service communication: + +- Services can reference each other by service name +- Example: `postgresql://user:pass@db/dbname` (where `db` is the service name) +- Internal communication doesn't require port exposure + +### Custom Networks + +For complex setups, define custom networks: + +```yaml +services: + www: + networks: + - frontend + - backend + + db: + networks: + - backend + +networks: + frontend: + driver: bridge + backend: + driver: bridge + internal: true # No external access +``` + +## Troubleshooting + +### Container Won't Start + +```bash +# Check logs for errors +docker-compose logs www + +# Check container status +docker-compose ps + +# Rebuild without cache +docker-compose build --no-cache www +docker-compose up www +``` + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +### Database Connection Issues + +```bash +# Check if database is running +docker-compose ps db + +# Check database logs +docker-compose logs db + +# Verify connection from www container +docker-compose exec www nc -zv db 5432 + +# Connect to database directly +docker-compose exec db psql -U main -d main +``` + +{%- endif %} + +### Port Already in Use + +If port 80 is already in use, modify the port mapping in `compose.yaml`: + +```yaml +services: + www: + ports: + - "8080:80" # Use port 8080 on host instead +``` + +### Out of Disk Space + +```bash +# Remove unused images and containers +docker system prune + +# Remove all stopped containers, unused images, and volumes +docker system prune -a --volumes +``` + +## Best Practices + +1. **Use .dockerignore**: This project uses a deny-by-default `.dockerignore` strategy. When adding new files/directories to your project that need to be in Docker images, you must explicitly allow them in `.dockerignore`. See the [Docker Ignore File](#docker-ignore-file) section for details. + +2. **Layer caching**: Order Dockerfile commands from least to most frequently changed + + ```dockerfile + COPY requirements.txt /requirements.txt + RUN pip install -r /requirements.txt + COPY ./ /app # Do this last + ``` + +3. **Don't run as root**: Use non-root users in production (Multi-Py images handle this) + +4. **Keep images small**: Use slim base images and multi-stage builds + +5. **Use specific tags**: Never use `latest` in production + +6. **Health checks**: Always define health checks for production containers + +7. **Logs to stdout**: All application logs should go to stdout/stderr (already configured) + +8. 
**Secrets management**: Never hardcode secrets, use environment variables or secrets managers + +## References + +- [Docker Documentation](https://docs.docker.com/) +- [Docker Compose Documentation](https://docs.docker.com/compose/) +- [Multi-Py Uvicorn Images](https://github.com/multi-py/python-uvicorn) +- [Multi-Py Celery Images](https://github.com/multi-py/python-celery) +- [Best Practices for Writing Dockerfiles](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) diff --git a/{{cookiecutter.__package_slug}}/docs/dev/documentation.md b/{{cookiecutter.__package_slug}}/docs/dev/documentation.md new file mode 100644 index 0000000..5d6bbad --- /dev/null +++ b/{{cookiecutter.__package_slug}}/docs/dev/documentation.md @@ -0,0 +1,468 @@ +# Documentation + +This project maintains comprehensive developer documentation to help developers understand, extend, and contribute to the codebase. All documentation follows consistent standards and best practices to ensure quality, accuracy, and usefulness. + +## Overview + +Documentation is organized in the `docs/dev/` directory, with each major feature or topic having its own dedicated file. Documentation is written in Markdown and follows consistent structures appropriate to the content type. + +## Documentation Standards + +All documentation in this project follows these standards to ensure consistency and quality: + +### 1. Structure and Organization + +**Feature Documentation Structure**: Documentation for features and tools should follow this standard structure: + +```markdown +# Feature Name + +Brief introduction explaining what the feature is and what library/tool it uses. + +## Overview + +High-level explanation of the feature's purpose and capabilities. + +## Configuration + +How to configure the feature (environment variables, settings, etc.). + +## Usage + +How to use the feature with practical examples. + +## Testing + +How to test code that uses this feature. + +## Best Practices + +Recommendations and patterns for using the feature effectively. + +## Development vs Production + +Differences between development and production configurations. + +## References + +Links to official documentation and related resources. +``` + +**Other Documentation Types**: Tutorials, guides, and conceptual documentation may follow different structures appropriate to their purpose. The key is consistency within each documentation type. + +**Optional Sections** (include when relevant): + +- Common Patterns +- Troubleshooting +- Advanced Usage +- Performance Considerations +- Security Considerations + +### 2. 
Code Examples
+
+**Real, Working Code**: All code examples must be:
+
+- Taken from or compatible with the actual project structure
+- Fully functional and runnable
+- Using actual project imports and modules
+- Verified against the implementation
+
+**Bad Example** (generic, fictional):
+
+```python
+# Don't do this - generic example not specific to the project
+from some_library import cache
+
+@cache.cached()
+def get_data():
+    return "data"
+```
+
+**Good Example** (project-specific):
+
+```python
+# Do this - uses actual project structure
+from {{cookiecutter.__package_slug}}.services.cache import get_cached, set_cached
+
+async def get_user_profile(user_id: int):
+    """Get user profile with caching."""
+    # Check cache first
+    cached_profile = await get_cached(f"user:{user_id}", alias="persistent")
+    if cached_profile:
+        return cached_profile
+
+    # Fetch from database
+    profile = await fetch_profile_from_db(user_id)
+
+    # Cache for 1 hour
+    await set_cached(f"user:{user_id}", profile, ttl=3600, alias="persistent")
+    return profile
+```
+
+### 3. Testing Examples
+
+**Use Actual Fixtures**: Testing examples must use the project's actual test fixtures defined in `conftest.py`:
+
+- `db_session` - Database session fixture
+- `fastapi_client` - FastAPI test client fixture
+- `runner` - Typer CLI test runner fixture
+
+**Bad Example** (fictional fixtures):
+
+```python
+# Don't do this - uses made-up fixtures
+def test_something(mock_client):
+    response = mock_client.get("/endpoint")
+    assert response.status_code == 200
+```
+
+**Good Example** (actual fixtures):
+
+```python
+# Do this - uses actual project fixtures
+def test_api_endpoint(fastapi_client):
+    """Test the API endpoint using the actual test client fixture."""
+    response = fastapi_client.get("/users/1")
+    assert response.status_code == 200
+    assert "name" in response.json()
+```
+
+### 4. Completeness
+
+Documentation should cover the feature comprehensively:
+
+- **Complete Lifecycle**: From setup to advanced usage
+- **Error Handling**: What can go wrong and how to fix it
+- **Real Examples**: Working code from the actual project
+- **Depth**: Provide enough detail for both basic usage and advanced scenarios
+
+### 5. Accuracy and Verification
+
+**Verify Everything**: Before documenting:
+
+- Run all code examples to ensure they work
+- Check that imports and module paths are correct
+- Verify environment variables and settings
+- Test commands and makefile targets
+- Confirm library versions and behavior
+
+**Keep Updated**: When code changes:
+
+- Update affected documentation
+- Verify examples still work
+- Update version-specific information
+- Check for deprecated features
+
+### 6. Contextual Teaching
+
+**Teach in Context**: Documentation should:
+
+- Explain **how to use features within this project's structure**
+- Show **actual patterns from the project**
+- Demonstrate **integration with other features**
+- Reference **actual project files and modules**
+
+**Avoid Generic Tutorials**: Don't just copy library documentation. Instead:
+
+- Show how the library is configured **in this project**
+- Demonstrate patterns **specific to this project**
+- Explain decisions and conventions **used in this codebase**
+- Link to official docs for additional details
+
+### 7. 
Development-Focused + +**Target Audience**: Documentation is written for developers who: + +- Are building and maintaining this application +- Need to understand how features work together +- Want to extend or customize functionality +- Are contributing to this project + +**Practical Focus**: Emphasize: + +- Common development tasks +- Real-world usage patterns +- Integration points between features +- Testing strategies +- Debugging techniques + +## Writing New Documentation + +When creating new documentation or expanding existing docs: + +### 1. Research Phase + +Before writing: + +- Review the feature's implementation in the codebase +- Test the feature with various configurations +- Examine how it's used in the project +- Check official library documentation +- Look at test files for usage patterns + +### 2. Outline Phase + +Create a structure: + +```markdown +# Feature Name +## Overview +## Configuration +## Basic Usage +## Common Patterns +## Testing +## Best Practices +## References +``` + +### 3. Writing Phase + +Follow these guidelines: + +**Start with Introduction**: + +```markdown +# Feature Name + +This project uses [Library Name](url) for [purpose], providing [key capabilities]. +``` + +**Configuration Section**: + +- List all environment variables +- Show default values +- Explain what each setting controls +- Group related settings together + +**Usage Section**: + +- Start with simplest example +- Add complexity gradually +- Show multiple approaches +- Include comments explaining code + +**Testing Section**: + +- Use actual project fixtures +- Show test structure and patterns +- Demonstrate assertions +- Cover async testing when applicable + +**Best Practices Section**: + +Numbered list of recommendations: + +```markdown +1. **Practice Name**: Brief explanation + + ```python + # Good + example_code() + + # Bad + wrong_code() + ``` + +``` + +### 4. Review Phase + +Before finalizing: + +- [ ] Run all code examples +- [ ] Verify all imports work +- [ ] Test all commands and makefile targets +- [ ] Check links to external resources +- [ ] Ensure consistent formatting +- [ ] Verify it matches the standard structure +- [ ] Get feedback from another developer + +## Documentation Maintenance + +### Regular Updates + +Documentation should be updated when: + +- New features are added +- APIs change +- Configuration options change +- Dependencies are updated +- Best practices evolve + +### Version Considerations + +When documenting version-specific behavior: + +```markdown +**Note**: This feature requires Python 3.11+ +``` + +### Deprecation Notices + +When features are deprecated: + +```markdown +**Deprecated**: This approach is deprecated in favor of [new approach]. +See [link to new docs] for the recommended pattern. 
+``` + +## Common Documentation Patterns + +### Command Examples + +Show commands with explanations: + +```markdown +```bash +# Run tests with coverage +make pytest + +# Run specific test file +pytest tests/test_api.py + +# Run with verbose output +pytest -v +``` + +``` + +### Configuration Tables + +Use lists for configuration options: + +```markdown +- **SETTING_NAME**: Description (default: `value`) + - Additional details or notes +``` + +### Code Annotations + +Add comments to explain code: + +```python +async def example_function(): + """Docstring explaining the function.""" + # Step 1: Fetch data + data = await fetch_data() + + # Step 2: Process the result + processed = process(data) + + # Step 3: Return formatted output + return format_output(processed) +``` + + + +## Testing Documentation + +Documentation itself should be tested: + +### Automated Checks + +The project includes checks for: + +- Broken links (internal and external) +- Code syntax in examples +- Markdown formatting + +### Manual Testing + +When updating documentation: + +1. Follow the documentation steps yourself +2. Run all example commands +3. Verify code examples work +4. Check that links are valid + +## Documentation Tools + +### Markdown Linting + +The project uses markdownlint for consistency: + +```bash +# Check markdown formatting +make lint_markdown +``` + +### Schema Documentation + +Database schema is auto-generated: + +```bash +# Update schema documentation +make document_schema +``` + +This uses [Paracelsus](https://github.com/tedivm/paracelsus) to inject schema information into database.md. + +### Link Checking + +Verify all links in documentation: + +```bash +# Check for broken links +make check_links +``` + +## Best Practices + +1. **Write as You Code**: Document features as you implement them, not after + +2. **Test Your Examples**: Never publish documentation with untested code examples + +3. **Use Actual Imports**: Always use the project's actual module structure in examples + +4. **Show, Don't Tell**: Prefer code examples over lengthy explanations + +5. **Link to Official Docs**: Reference official library documentation for detailed API information + +6. **Keep It Current**: Update documentation when you change code + +7. **Be Specific**: Use concrete examples from the project, not generic tutorials + +8. **Consider Your Audience**: Write for developers working on this project, not library beginners + +9. **Explain Decisions**: Document why certain patterns or configurations are used + +10. **Maintain Consistency**: Follow the established structure and style + +## Resources + +- [Markdown Guide](https://www.markdownguide.org/) +- [Write the Docs](https://www.writethedocs.org/) +- [Google Developer Documentation Style Guide](https://developers.google.com/style) +- [Divio Documentation System](https://documentation.divio.com/) + +## Contributing to Documentation + +When contributing documentation improvements: + +1. Review existing documentation for style and structure +2. Follow the standards outlined in this document +3. Test all code examples in the project +4. Use actual project fixtures and patterns +5. Get feedback through pull request review +6. 
Update this documentation.md if adding new standards + +## Meta-Documentation + +This file itself follows the standards it describes: + +- Consistent structure with clear sections +- Practical examples of documentation patterns +- Best practices with numbered lists +- References to external resources +- Focus on project-specific context +- Teaching through demonstration + +By following these standards, we ensure that all project documentation is: + +- **Accurate**: Reflects actual implementation +- **Useful**: Helps developers accomplish tasks +- **Consistent**: Follows predictable patterns +- **Maintainable**: Easy to update as code evolves +- **Comprehensive**: Covers common use cases and edge cases + +Good documentation is a force multiplier that enables developers to work effectively and confidently. diff --git a/{{cookiecutter.__package_slug}}/docs/dev/github.md b/{{cookiecutter.__package_slug}}/docs/dev/github.md index 15657f3..9b3179a 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/github.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/github.md @@ -1 +1,598 @@ -# Github +# GitHub Actions + +This project includes comprehensive GitHub Actions workflows for automated testing, linting, building, and deployment. Every push and pull request is automatically validated to ensure code quality and functionality. + +## Available Workflows + +The project includes the following GitHub Actions workflows in `.github/workflows/`: + +### Testing Workflows + +**pytest.yaml** - Test Suite + +- **Trigger**: Every push and pull request +- **Purpose**: Runs the full test suite with coverage reporting +- **Matrix**: Tests against Python 3.10, 3.11, 3.12, 3.13, and 3.14 +- **Command**: `make pytest` + +### Code Quality Workflows + +**ruff.yaml** - Linting + +- **Trigger**: Every push and pull request +- **Purpose**: Checks code follows linting rules (code quality, style violations, unused imports, etc.) +- **Command**: `make ruff_check` +- **Note**: Ruff handles linting only; formatting is checked by black.yaml + +**black.yaml** - Code Formatting + +- **Trigger**: Every push and pull request +- **Purpose**: Enforces Black formatting standard using Ruff as the formatter +- **Command**: `make black_check` +- **Note**: Black is the formatting standard; Ruff is the tool that enforces it + +**mypy.yaml** - Type Checking + +- **Trigger**: Every push and pull request +- **Purpose**: Validates type hints and catches type-related errors +- **Command**: `make mypy_check` + +**dapperdata.yaml** - Data Format Validation + +- **Trigger**: Every push and pull request +- **Purpose**: Validates data file formatting (YAML, JSON, etc.) 
+- **Command**: `make dapperdata_check` + +**tomlsort.yaml** - TOML File Sorting + +- **Trigger**: Every push and pull request +- **Purpose**: Ensures TOML files (like pyproject.toml) are properly sorted +- **Command**: `make tomlsort_check` + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +### Database Workflows + +**alembic.yaml** - Migration Validation + +- **Trigger**: Every push and pull request +- **Purpose**: Ensures all database model changes have corresponding migrations +- **Command**: `make check_ungenerated_migrations` +- **Failure**: Indicates model changes without a migration + +**paracelsus.yaml** - Schema Documentation + +- **Trigger**: Every push and pull request +- **Purpose**: Validates database schema documentation is up-to-date +- **Command**: `make paracelsus_check` +{%- endif %} + +{%- if cookiecutter.include_requirements_files == "y" %} + +### Dependency Workflows + +**lockfiles.yaml** - Requirements File Validation + +- **Trigger**: Every push and pull request +- **Purpose**: Ensures requirements.txt files are synchronized with pyproject.toml +- **Command**: `make dependencies` +{%- endif %} + +{%- if cookiecutter.include_docker == "y" %} + +### Build and Deployment Workflows + +**docker.yaml** - Container Image Publishing + +- **Trigger**: + - Pull requests (build only, no push) + - Pushes to `main` branch + - Version tags (v*.*.*) +- **Purpose**: Builds and publishes Docker images to GitHub Container Registry +- **Images**: +{%- if cookiecutter.include_fastapi == "y" %} + - `ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}.www` - FastAPI web server +{%- endif %} +{%- if cookiecutter.include_celery == "y" %} + - `ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}.celery` - Celery workers +{%- endif %} +- **Platforms**: linux/amd64, linux/arm64 (multi-architecture support) +- **Tags**: + - `main` - Latest development version + - `pr-N` - Pull request builds + - `v1.2.3` - Semantic version tags + - `v1.2`, `v1` - Major/minor version aliases +{%- endif %} + +{%- if cookiecutter.publish_to_pypi == "y" %} + +**pypi.yaml** - PyPI Package Publishing + +- **Trigger**: + - All pushes and pull requests (build only) + - Version tags `v*.*.*` (build and publish) +- **Purpose**: Builds Python wheel and publishes to PyPI +- **Authentication**: Uses PyPI Trusted Publishers (OIDC, no tokens needed) +- **Permissions**: Requires `id-token: write` for trusted publishing +{%- endif %} + +## Workflow Triggers + +### Push Events + +Workflows trigger on pushes to any branch: + +```yaml +on: + push: +``` + +Most workflows run on every push to ensure code quality at all times. + +### Pull Request Events + +All quality checks run on pull requests: + +```yaml +on: + pull_request: +``` + +This ensures new code meets quality standards before merging. + +### Tag Events + +Publishing workflows trigger on version tags: + +```yaml +on: + push: + tags: + - "v*.*.*" # Matches v1.0.0, v2.1.3, etc. +``` + +Create a tag to trigger a release: + +```bash +git tag v1.0.0 +git push origin v1.0.0 +``` + +### Branch-Specific Triggers + +Some workflows only run on specific branches: + +```yaml +on: + push: + branches: + - main +``` + +## Configuring Secrets + +{%- if cookiecutter.publish_to_pypi == "y" %} + +### PyPI Publishing (Trusted Publishers) + +This project uses PyPI's Trusted Publisher feature, which doesn't require manual API tokens: + +1. 
**On PyPI**: + - Go to your project on PyPI + - Navigate to "Publishing" settings + - Add GitHub as a trusted publisher: + - Owner: `{{cookiecutter.github_org}}` + - Repository: `{{cookiecutter.__package_slug}}` + - Workflow: `pypi.yaml` + - Environment: (leave blank) + +2. **No GitHub Secrets Needed**: The workflow uses OIDC authentication automatically + +For more details, see [PyPI Trusted Publishers Documentation](https://docs.pypi.org/trusted-publishers/). +{%- endif %} + +### Docker Registry (Automatic) + +Docker image publishing uses `GITHUB_TOKEN` which is automatically provided by GitHub Actions. No manual configuration needed. + +### Custom Secrets + +To add custom secrets: + +1. Go to your repository on GitHub +2. Navigate to Settings → Secrets and variables → Actions +3. Click "New repository secret" +4. Add your secret name and value + +Use secrets in workflows: + +```yaml +steps: + - name: Use Secret + env: + MY_SECRET: {% raw %}${{ secrets.MY_SECRET }}{% endraw %} + run: echo "Using secret" +``` + +## Branch Protection Rules + +Configure branch protection for `main` to require passing checks: + +1. Go to Settings → Branches +2. Add branch protection rule for `main` +3. Enable: + - ✅ Require a pull request before merging + - ✅ Require status checks to pass before merging + - ✅ Require branches to be up to date before merging + - Select required status checks: + - pytest + - ruff + - mypy + {%- if cookiecutter.include_sqlalchemy == "y" %} + - alembic + - paracelsus + {%- endif %} + +This prevents merging code that fails tests or quality checks. + +## Automated Releases + +### Creating a Release + +1. **Update version** (optional - setuptools-scm handles this automatically): + + ```bash + git tag v1.2.3 + ``` + +2. **Push the tag**: + + ```bash + git push origin v1.2.3 + ``` + +3. **Automated actions**: + {%- if cookiecutter.publish_to_pypi == "y" %} + - Builds Python package + - Publishes to PyPI + {%- endif %} + {%- if cookiecutter.include_docker == "y" %} + - Builds Docker images + - Publishes to GitHub Container Registry with version tags + {%- endif %} + +### Version Tag Format + +Use semantic versioning for tags: + +- `v1.0.0` - Major release +- `v1.1.0` - Minor release +- `v1.1.1` - Patch release + +The `v` prefix is required for workflows to trigger. + +### Automated Versioning with setuptools-scm + +This project uses `setuptools-scm` for automatic versioning: + +- Version derived from git tags +- Commit count added for development versions +- No manual version updates needed + +```bash +# Check current version +python -c "from {{cookiecutter.__package_slug}}._version import version; print(version)" + +# Development version format: 1.2.3.dev4+g5f8a7bc +# Released version format: 1.2.3 +``` + +{%- if cookiecutter.include_docker == "y" %} + +## Container Image Publishing + +### Image Naming + +Images are published to GitHub Container Registry (GHCR): + +{%- if cookiecutter.include_fastapi == "y" %} + +- `ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}.www` +{%- endif %} +{%- if cookiecutter.include_celery == "y" %} +- `ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}.celery` +{%- endif %} + +### Image Tags + +Multiple tags are created for each build: + +- **Branch builds**: `main`, `develop`, etc. 
+- **PR builds**: `pr-123` +- **Version builds**: `v1.2.3`, `v1.2`, `v1`, `latest` + +### Multi-Architecture Support + +Images are built for multiple architectures: + +- `linux/amd64` - Intel/AMD processors +- `linux/arm64` - ARM processors (Apple Silicon, ARM servers) + +Use the same image tag across architectures: + +```bash +# Automatically pulls correct architecture +docker pull ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}.www:latest +``` + +### Pulling Images + +```bash +# Pull latest development version +docker pull ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}.www:main + +# Pull specific version +docker pull ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}.www:v1.2.3 + +# Use in docker-compose +services: + www: + image: ghcr.io/{{cookiecutter.github_org}}/{{cookiecutter.__package_slug}}.www:v1.2.3 +``` + +### Image Visibility + +By default, images are public. To make them private: + +1. Go to the package page on GitHub +2. Click "Package settings" +3. Change visibility to "Private" +{%- endif %} + +## Customizing Workflows + +### Adding a New Workflow + +Create `.github/workflows/my-workflow.yaml`: + +```yaml +name: My Custom Workflow + +on: + push: + pull_request: + +jobs: + my-job: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v5 + + - uses: actions/setup-python@v6 + with: + python-version-file: .python-version + + - name: Install Dependencies + run: make install + + - name: Run Custom Command + run: echo "Hello, World!" +``` + +### Modifying Existing Workflows + +Edit workflow files in `.github/workflows/`: + +```yaml +# Add a new Python version to test matrix +strategy: + matrix: + version: ["3.10", "3.11", "3.12", "3.15"] # Add 3.15 +``` + +### Conditional Workflow Execution + +Run jobs only on specific branches: + +```yaml +jobs: + deploy: + if: {% raw %}github.ref == 'refs/heads/main'{% endraw %} + runs-on: ubuntu-latest + steps: + - name: Deploy + run: echo "Deploying..." +``` + +### Workflow Dependencies + +Make jobs depend on others: + +```yaml +jobs: + test: + runs-on: ubuntu-latest + steps: + - name: Run Tests + run: make test + + deploy: + needs: test # Only runs if 'test' succeeds + runs-on: ubuntu-latest + steps: + - name: Deploy + run: echo "Deploying..." +``` + +## Debugging Failed Workflows + +### View Workflow Logs + +1. Go to the Actions tab on GitHub +2. Click on the failed workflow run +3. Click on the failed job +4. Expand the failed step to see detailed logs + +### Re-run Failed Jobs + +Click "Re-run jobs" → "Re-run failed jobs" to retry without new commits + +### Debug Locally + +Run the same commands locally: + +```bash +# Run what pytest workflow runs +make install +make pytest + +# Run what ruff workflow runs +make install +make ruff_check + +# Run all checks +make tests +``` + +### Enable Debug Logging + +Add `ACTIONS_STEP_DEBUG` secret with value `true` for verbose logging. 
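+
+For example, if you use the GitHub CLI this can be done from the terminal (assumes `gh` is installed and authenticated against the repository):
+
+```bash
+# Enables verbose step-level logging for subsequent workflow runs
+gh secret set ACTIONS_STEP_DEBUG --body "true"
+```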
+ +### Common Issues + +**Tests pass locally but fail in CI:** + +- Check Python version differences +- Verify environment variables +- Check for missing dependencies +- Review test isolation + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +**Alembic check fails:** + +- Run `make create_migration MESSAGE="description"` locally +- Commit and push the new migration file +{%- endif %} + +**Docker build fails:** + +- Check Dockerfile syntax +- Verify base image exists +- Review build logs for missing dependencies + +## Dependabot Configuration + +The project includes Dependabot for automated dependency updates. + +### Configuration + +`.github/dependabot.yml`: + +```yaml +version: 2 + +updates: + - package-ecosystem: "github-actions" + directory: "/" + schedule: + interval: "weekly" +``` + +### What Dependabot Does + +- **Checks weekly** for GitHub Actions updates +- **Creates PRs** automatically for outdated actions +- **Runs tests** on dependency update PRs +- **Provides changelogs** and release notes in PR descriptions + +### Managing Dependabot PRs + +1. **Review the PR**: Check changelog and breaking changes +2. **Run tests**: CI automatically runs on Dependabot PRs +3. **Merge if green**: Merge when all checks pass +4. **Close if not needed**: Close if update isn't desired + +## Workflow Performance + +### Optimization Tips + +1. **Cache dependencies**: + + ```yaml + - uses: actions/cache@v4 + with: + path: ~/.cache/pip + key: {% raw %}${{ runner.os }}-pip-${{ hashFiles('pyproject.toml') }}{% endraw %} + ``` + +2. **Use matrix builds** for parallel testing: + + ```yaml + strategy: + matrix: + version: ["3.10", "3.11", "3.12"] + ``` + +3. **Fail fast** to save time on obvious failures: + + ```yaml + strategy: + fail-fast: true + ``` + +4. **Skip workflows** on documentation-only changes: + + ```yaml + on: + push: + paths-ignore: + - '**.md' + - 'docs/**' + ``` + +## Best Practices + +1. **Keep workflows simple**: One clear purpose per workflow + +2. **Use make commands**: Workflows run `make` targets for consistency with local development + +3. **Test workflow changes**: Test in a branch before merging workflow changes + +4. **Pin action versions**: Use specific versions for actions (e.g., `@v5` not `@latest`) + +5. **Use secrets for sensitive data**: Never hardcode credentials + +6. **Document custom workflows**: Add comments explaining complex logic + +7. **Monitor workflow usage**: Check Actions tab regularly for failures + +8. 
**Keep dependencies updated**: Review and merge Dependabot PRs promptly
+
+## Workflow Costs and Limits
+
+GitHub Actions has usage limits:
+
+- **Public repositories**: Unlimited minutes (with some restrictions)
+- **Private repositories**: 2,000 minutes/month free, then paid
+- **Storage**: 500 MB free, artifacts expire after 90 days
+
+To optimize:
+
+- Use caching to reduce build times
+- Clean up old artifacts
+- Use `concurrency` to cancel outdated runs
+
+```yaml
+concurrency:
+  group: {% raw %}${{ github.workflow }}-${{ github.ref }}{% endraw %}
+  cancel-in-progress: true
+```
+
+## References
+
+- [GitHub Actions Documentation](https://docs.github.com/en/actions)
+- [GitHub Actions Marketplace](https://github.com/marketplace?type=actions)
+- [Workflow Syntax Reference](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
+- [PyPI Trusted Publishers](https://docs.pypi.org/trusted-publishers/)
+- [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry)
diff --git a/{{cookiecutter.__package_slug}}/docs/dev/makefile.md b/{{cookiecutter.__package_slug}}/docs/dev/makefile.md
new file mode 100644
index 0000000..ad23606
--- /dev/null
+++ b/{{cookiecutter.__package_slug}}/docs/dev/makefile.md
@@ -0,0 +1,992 @@
+# Makefile
+
+This project uses a comprehensive makefile to automate common development tasks including installation, testing, formatting, dependency management, and packaging. The makefile provides a consistent interface for all developers regardless of their environment.
+
+## Shell Autocomplete
+
+**Recommendation**: Enable makefile target autocomplete in your shell to make finding and running targets easier.
+
+**Bash**:
+
+Add to your `~/.bashrc`:
+
+```bash
+complete -W "\`grep -oE '^[a-zA-Z0-9_.-]+:([^=]|$)' makefile | sed 's/[^a-zA-Z0-9_.-]*$//'\`" make
+```
+
+**Zsh**:
+
+Add to your `~/.zshrc`:
+
+```zsh
+# Enable bash completion compatibility
+autoload -U +X bashcompinit && bashcompinit
+
+# Makefile target completion
+complete -W "\`grep -oE '^[a-zA-Z0-9_.-]+:([^=]|$)' makefile | sed 's/[^a-zA-Z0-9_.-]*$//'\`" make
+```
+
+**Fish**:
+
+Add to your `~/.config/fish/config.fish`:
+
+```fish
+complete -c make -a "(make -qp | awk -F':' '/^[a-zA-Z0-9][^$#\/\t=]*:([^=]|$)/ {split(\$1,A,/ /);for(i in A)print A[i]}')"
+```
+
+After adding this to your shell config and sourcing it, you can type `make` and press the Tab key to see all available targets:
+
+```bash
+$ make <TAB><TAB>
+install              pytest               chores               build
+tests                ruff_check           black_fixes          dependencies
+# ... and more
+```
+
+## Overview
+
+The makefile handles:
+
+- Python environment setup and package installation
+- Code formatting and linting
+- Test execution with coverage
+- Type checking with mypy
+- Dependency compilation
+{%- if cookiecutter.include_sqlalchemy == "y" %}
+- Database migrations and schema documentation
+{%- endif %}
+- Package building
+
+All makefile targets are designed to work in both local development and CI environments.
+
+## Quick Reference
+
+```bash
+# Initial setup
+make install
+
+# Run all tests and checks
+make tests
+
+# Auto-fix formatting issues
+make chores
+
+# Run tests with coverage
+make pytest
+
+# Build package
+make build
+```
+
+## Installation Targets
+
+### `make install`
+
+**Purpose**: Complete environment setup for new developers.
+
+**What it does**:
+
+1. Installs the correct Python version using pyenv (local only)
+2. Creates a virtual environment (`.venv`)
+3. 
Installs the package and all development dependencies + +**Usage**: + +```bash +# First time setup +make install + +# After pulling changes that update dependencies +make install +``` + +**Notes**: + +- Safe to run multiple times (idempotent) +- In CI environments, skips pyenv and uses system Python +- Creates `.venv` directory if it doesn't exist + +### `make pip` + +**Purpose**: Install or update Python dependencies. + +**What it does**: + +- Installs the package in editable mode with development extras +- Updates dependencies if `pyproject.toml` has changed + +**Usage**: + +```bash +# Update dependencies after pyproject.toml changes +make pip +``` + +### `make pyenv` + +**Purpose**: Install the project's Python version using pyenv. + +**What it does**: + +- Reads the Python version from `.python-version` +- Installs that version using pyenv +- Skips if the version is already installed + +**Usage**: + +```bash +# Install Python version (usually done automatically by `make install`) +make pyenv +``` + +## Formatting Targets + +### `make chores` + +**Purpose**: Automatically fix all formatting and style issues. + +**What it does**: + +- Fixes linting issues with Ruff +- Formats code with Black (via Ruff) +- Formats data files with dapperdata +- Sorts TOML files +{%- if cookiecutter.include_sqlalchemy == "y" %} +- Updates database schema documentation +{%- endif %} + +**Usage**: + +```bash +# Before committing code +make chores +``` + +**Best Practice**: Run this before committing to ensure code passes CI checks. + +### `make ruff_fixes` + +**Purpose**: Automatically fix linting issues. + +**What it does**: + +- Runs Ruff with `--fix` flag +- Fixes issues like unused imports, missing commas, etc. + +**Usage**: + +```bash +# Fix linting issues +make ruff_fixes +``` + +### `make black_fixes` + +**Purpose**: Format code to Black standard. + +**What it does**: + +- Runs Ruff's formatter (Black-compatible) +- Formats all Python files consistently + +**Usage**: + +```bash +# Format all Python files +make black_fixes +``` + +### `make dapperdata_fixes` + +**Purpose**: Format JSON and YAML data files. + +**What it does**: + +- Pretty-prints JSON files +- Formats YAML files consistently +- Fixes indentation and structure + +**Usage**: + +```bash +# Format data files +make dapperdata_fixes +``` + +### `make tomlsort_fixes` + +**Purpose**: Sort and format TOML files. + +**What it does**: + +- Sorts keys in TOML files alphabetically +- Ensures consistent TOML formatting + +**Usage**: + +```bash +# Sort TOML files +make tomlsort_fixes +``` + +## Testing Targets + +### `make tests` + +**Purpose**: Run the complete test suite including all checks. + +**What it does**: + +1. Ensures dependencies are installed +2. Runs pytest with coverage +3. Checks linting (ruff) +4. Checks formatting (black) +5. Runs type checking (mypy) +6. Checks data file formatting +7. Checks TOML file sorting +{%- if cookiecutter.include_sqlalchemy == "y" %} +8. Verifies database schema documentation is up-to-date +{%- endif %} + +**Usage**: + +```bash +# Run full test suite (what CI runs) +make tests +``` + +**Best Practice**: Run this before pushing to ensure CI will pass. + +### `make pytest` + +**Purpose**: Run pytest with coverage reporting. 
+ +**What it does**: + +- Executes all tests in the `tests/` directory +- Generates coverage report +- Shows which lines are covered by tests +- Fails if coverage is below threshold + +**Usage**: + +```bash +# Run tests with coverage +make pytest + +# See detailed output +make pytest_loud +``` + +**Output Example**: + +``` +tests/test_api.py ........ [ 25%] +tests/test_models.py ............ [ 75%] +tests/test_services.py .... [100%] + +---------- coverage: platform darwin, python 3.12.0 ----------- +Name Stmts Miss Cover Missing +------------------------------------------------------------ +myproject/__init__.py 4 0 100% +myproject/services/cache.py 45 2 96% 78-79 +------------------------------------------------------------ +TOTAL 250 2 99% +``` + +### `make pytest_loud` + +**Purpose**: Run pytest with verbose debug logging. + +**What it does**: + +- Same as `make pytest` but with debug logging enabled +- Shows all log messages during test execution +- Useful for debugging test failures + +**Usage**: + +```bash +# Debug test failures +make pytest_loud +``` + +### `make ruff_check` + +**Purpose**: Check for linting issues without fixing them. + +**What it does**: + +- Runs Ruff linter +- Reports issues but doesn't modify files +- Exits with error if issues are found + +**Usage**: + +```bash +# Check linting +make ruff_check +``` + +### `make black_check` + +**Purpose**: Check code formatting without modifying files. + +**What it does**: + +- Verifies code matches Black style +- Reports files that would be reformatted +- Exits with error if formatting is needed + +**Usage**: + +```bash +# Check if code needs formatting +make black_check +``` + +### `make mypy_check` + +**Purpose**: Run static type checking. + +**What it does**: + +- Analyzes code for type errors +- Checks type hints are correct +- Ensures type consistency across the codebase + +**Usage**: + +```bash +# Check types +make mypy_check +``` + +### `make dapperdata_check` + +**Purpose**: Check data file formatting without modifying. + +**What it does**: + +- Verifies JSON/YAML files are properly formatted +- Exits with error if files need formatting + +**Usage**: + +```bash +# Check data file formatting +make dapperdata_check +``` + +### `make tomlsort_check` + +**Purpose**: Verify TOML files are properly sorted. + +**What it does**: + +- Checks if TOML files are sorted alphabetically +- Exits with error if sorting is needed + +**Usage**: + +```bash +# Check TOML sorting +make tomlsort_check +``` + +{%- if cookiecutter.include_requirements_files == "y" %} + +## Dependency Management + +### `make dependencies` + +**Purpose**: Compile dependency lock files from `pyproject.toml`. + +**What it does**: + +- Generates `requirements.txt` from main dependencies +- Generates `requirements-dev.txt` including development dependencies +- Pins exact versions for reproducible installations +- Only runs if `pyproject.toml` has changed + +**Usage**: + +```bash +# Update lock files after changing pyproject.toml +make dependencies +``` + +**Files Generated**: + +- `requirements.txt` - Production dependencies with pinned versions +- `requirements-dev.txt` - Development dependencies with pinned versions + +### `make rebuild_dependencies` + +**Purpose**: Force rebuild of all dependency files with latest versions. 
+ +**What it does**: + +- Updates all dependencies to their latest compatible versions +- Regenerates both requirements files +- Uses `--upgrade` flag with uv + +**Usage**: + +```bash +# Update to latest dependency versions +make rebuild_dependencies +``` + +**When to Use**: + +- Monthly dependency updates +- After security vulnerability announcements +- When you want the latest compatible versions +{%- endif %} + +## Packaging Targets + +### `make build` + +**Purpose**: Build distributable package. + +**What it does**: + +- Creates source distribution (`.tar.gz`) +- Creates wheel distribution (`.whl`) +- Places builds in `dist/` directory + +**Usage**: + +```bash +# Build package for distribution +make build +``` + +**Output**: + +- `dist/{{cookiecutter.__package_slug}}-X.Y.Z.tar.gz` - Source distribution +- `dist/{{cookiecutter.__package_slug}}-X.Y.Z-py3-none-any.whl` - Wheel distribution +{%- if cookiecutter.include_sqlalchemy == "y" %} + +## Database Targets + +### `make run_migrations` + +**Purpose**: Run all pending database migrations. + +**What it does**: + +- Executes Alembic migrations to bring database up to date +- Applies all migrations that haven't been run yet +- Updates the database schema + +**Usage**: + +```bash +# Apply pending migrations +make run_migrations +``` + +**Best Practice**: Run this after pulling changes that include new migrations. + +### `make create_migration` + +**Purpose**: Create a new database migration from model changes. + +**What it does**: + +1. Creates a temporary database +2. Applies all existing migrations +3. Compares current models to database schema +4. Generates migration file for differences +5. Formats the migration file + +**Usage**: + +```bash +# Create migration with descriptive message +make create_migration MESSAGE="add user profile fields" +``` + +**Requirements**: + +- Must provide a `MESSAGE` parameter +- Message should describe the schema changes + +**Output**: Creates a new file in `db/versions/` with the migration code. + +**Example**: + +```bash +# Add new column +make create_migration MESSAGE="add email column to users" + +# Create new table +make create_migration MESSAGE="add products table" + +# Modify relationship +make create_migration MESSAGE="update order-product relationship" +``` + +### `make check_ungenerated_migrations` + +**Purpose**: Verify no model changes exist without migrations. + +**What it does**: + +- Compares current models to latest migration +- Exits with error if unmigrated changes are detected +- Ensures developers create migrations for model changes + +**Usage**: + +```bash +# Check for missing migrations +make check_ungenerated_migrations +``` + +**Best Practice**: Run this in CI to catch forgotten migrations. + +### `make document_schema` + +**Purpose**: Update database schema documentation. + +**What it does**: + +- Introspects SQLAlchemy models +- Generates schema tables and diagrams +- Injects schema into `docs/dev/database.md` +- Uses Paracelsus to auto-generate documentation + +**Usage**: + +```bash +# Update schema docs after model changes +make document_schema +``` + +**Best Practice**: Include this in `make chores` to keep docs current. + +### `make paracelsus_check` + +**Purpose**: Verify database schema documentation is up-to-date. 
+ +**What it does**: + +- Checks if schema docs match current models +- Exits with error if docs are outdated +- Doesn't modify any files + +**Usage**: + +```bash +# Check schema documentation +make paracelsus_check +``` + +**Best Practice**: Run this in CI to ensure schema docs stay current. + +### `make reset_db` + +**Purpose**: Clear and recreate the database. + +**What it does**: + +1. Removes all database files +2. Runs all migrations from scratch +3. Creates a fresh database schema + +**Usage**: + +```bash +# Reset development database +make reset_db +``` + +**Warning**: This deletes all data! Only use in development. + +### `make clear_db` + +**Purpose**: Delete all database files. + +**What it does**: + +- Removes SQLite database files +- Cleans up journal and WAL files + +**Usage**: + +```bash +# Delete database files +make clear_db +``` + +**Warning**: This deletes all data! Only use in development. +{%- endif %} + +## Environment Variables + +The makefile respects several environment variables: + +### `CI` + +**Purpose**: Indicates running in CI environment. + +**Effect**: + +- Skips pyenv installation +- Uses system Python instead of creating `.venv` +- Adjusts paths for CI environment + +**Usage**: + +```bash +# Automatically set by GitHub Actions and other CI systems +CI=true make tests +``` + +### `USE_SYSTEM_PYTHON` + +**Purpose**: Use system Python instead of virtual environment. + +**Effect**: + +- Skips `.venv` creation +- Installs packages to system Python +- Useful for containers + +**Usage**: + +```bash +# Use system Python +USE_SYSTEM_PYTHON=true make install +``` + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +### `DATABASE_URL` + +**Purpose**: Override database connection URL. + +**Effect**: + +- Used by Alembic for migrations +- Allows targeting different databases + +**Usage**: + +```bash +# Run migrations against specific database +DATABASE_URL=postgresql://localhost/mydb make run_migrations +``` + +{%- endif %} + +## Common Workflows + +### New Developer Setup + +```bash +# 1. Clone repository +git clone +cd + +# 2. Complete setup +make install + +# 3. Verify everything works +make tests +``` + +### Daily Development + +```bash +# 1. Pull latest changes +git pull + +# 2. Update dependencies if needed +make install + +# 3. Make code changes +# ... edit files ... + +# 4. Run tests frequently +make pytest + +# 5. Fix formatting before committing +make chores + +# 6. Run full test suite +make tests + +# 7. Commit and push +git add . +git commit -m "Description" +git push +``` + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +### Database Changes + +```bash +# 1. Modify models +# ... edit model files ... + +# 2. Create migration +make create_migration MESSAGE="describe changes" + +# 3. Review generated migration +# ... check db/versions/latest_file.py ... + +# 4. Apply migration locally +make run_migrations + +# 5. Update schema documentation +make document_schema + +# 6. Test migration is reversible +make reset_db + +# 7. Commit migration and documentation +git add db/versions/ docs/dev/database.md +git commit -m "Add migration: describe changes" +``` + +{%- endif %} +{%- if cookiecutter.include_requirements_files == "y" %} + +### Dependency Updates + +```bash +# 1. Update pyproject.toml +# ... modify dependencies ... + +# 2. Compile lock files +make dependencies + +# 3. Install updated dependencies +make install + +# 4. Run tests to verify compatibility +make tests + +# 5. 
Commit changes +git add pyproject.toml requirements*.txt +git commit -m "Update dependencies" +``` + +### Monthly Maintenance + +```bash +# 1. Update to latest dependency versions +make rebuild_dependencies + +# 2. Install updated dependencies +make install + +# 3. Run full test suite +make tests + +# 4. Fix any breaking changes +# ... update code if needed ... + +# 5. Commit updates +git add requirements*.txt +git commit -m "Update dependencies to latest versions" +``` + +{%- endif %} + +### Pre-Release Checklist + +```bash +# 1. Ensure all tests pass +make tests + +# 2. Verify formatting +make chores +{%- if cookiecutter.include_sqlalchemy == "y" %} + +# 3. Check for unmigrated changes +make check_ungenerated_migrations +{%- endif %} + +# 4. Build package +make build + +# 5. Test installation from build +pip install dist/*.whl + +# 6. Tag and release +git tag v1.0.0 +git push --tags +``` + +## Makefile Architecture + +### Python Environment Detection + +The makefile automatically detects the environment: + +- **Local Development**: Uses `.venv` and pyenv +- **CI Environment**: Uses system Python +- **System Python Mode**: Skips virtual environment + +### Target Dependencies + +Targets declare dependencies to ensure proper setup: + +```makefile +make pytest # Requires install +make tests # Requires install + pytest + all checks +make build # Requires install +``` + +### Phony Targets + +All operational targets are marked as `.PHONY` to ensure they run even if files with those names exist: + +```makefile +.PHONY: tests pytest install build +``` + +## Troubleshooting + +### "python: command not found" + +**Problem**: Python is not installed or not in PATH. + +**Solution**: + +```bash +# Install Python using pyenv +make pyenv + +# Or install Python via your system package manager +# Then run make install +``` + +### "make: command not found" + +**Problem**: Make is not installed. + +**Solution**: + +```bash +# macOS +xcode-select --install + +# Ubuntu/Debian +sudo apt-get install build-essential + +# Fedora/RHEL +sudo dnf install make +``` + +### "No rule to make target" + +**Problem**: Typo in make target or target doesn't exist. + +**Solution**: + +```bash +# List all available targets +make help # If available +grep "^[a-zA-Z]" makefile # Show all targets +``` + +### Tests fail after pulling changes + +**Problem**: Dependencies are out of sync. + +**Solution**: + +```bash +# Reinstall dependencies +make install + +# Run tests again +make tests +``` + +{%- if cookiecutter.include_sqlalchemy == "y" %} + +### Migration fails + +**Problem**: Database schema conflict or migration error. + +**Solution**: + +```bash +# Reset database and try again +make reset_db + +# If problem persists, check migration file +# Then create new migration +make create_migration MESSAGE="fix schema issue" +``` + +{%- endif %} + +## Best Practices + +1. **Run `make install` first**: Always start with a complete installation + +2. **Use `make chores` before committing**: Ensures code passes formatting checks + +3. **Run `make tests` before pushing**: Catches issues before CI + +4. **Keep dependencies updated**: Run `make dependencies` after changing `pyproject.toml` +{%- if cookiecutter.include_sqlalchemy == "y" %} + +5. **Create migrations for model changes**: Always run `make create_migration` after modifying models + +6. **Update schema docs**: Include `make document_schema` in your workflow +{%- endif %} + +7. 
**Use specific targets during development**: Run `make pytest` frequently rather than the full `make tests` + +8. **Check target dependencies**: Some targets require `make install` to run first + +## Integration with CI/CD + +The makefile is designed to work seamlessly in CI environments: + +**GitHub Actions**: + +```yaml +- name: Run tests + run: make tests + env: + CI: true +``` + +**Key CI Behaviors**: + +- Skips pyenv (uses system Python) +- Skips virtual environment creation +- All checks run identically to local +- Exit codes propagate correctly + +## References + +- [GNU Make Manual](https://www.gnu.org/software/make/manual/) +- [Python Packaging Guide](https://packaging.python.org/) +- [pytest Documentation](https://docs.pytest.org/) +{%- if cookiecutter.include_sqlalchemy == "y" %} +- [Alembic Documentation](https://alembic.sqlalchemy.org/) +{%- endif %} + +## See Also + +- [Dependencies](./dependencies.md) - Detailed dependency management guide +- [Testing](./testing.md) - Comprehensive testing documentation +{%- if cookiecutter.include_sqlalchemy == "y" %} +- [Database](./database.md) - Database and migration guide +{%- endif %} +{%- if cookiecutter.include_github_actions == "y" %} +- [GitHub Actions](./github.md) - CI/CD workflow documentation +{%- endif %} diff --git a/{{cookiecutter.__package_slug}}/docs/dev/pypi.md b/{{cookiecutter.__package_slug}}/docs/dev/pypi.md index 5daeafd..7ec7ff0 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/pypi.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/pypi.md @@ -1 +1,592 @@ -# PyPI +# PyPI Publishing + +This project is configured to build and publish Python packages to the Python Package Index (PyPI) using automated GitHub Actions workflows. The publishing process uses OpenID Connect (OIDC) for secure, token-free authentication. + +## Overview + +The PyPI workflow automatically: + +- Builds package distributions on every push and pull request +- Validates the package can be built successfully +- Publishes to PyPI when version tags are pushed (if enabled) +- Uses trusted publishing via OIDC (no API tokens needed) + +## Package Configuration + +### Version Management + +This project uses `setuptools_scm` for automatic versioning based on git tags: + +**Development Versions**: + +- Versions are automatically generated from git history +- Format: `0.0.0.devN+gHASH` (e.g., `0.2.3.dev42+g1234abc`) +- Includes commit count and hash + +**Release Versions**: + +- Determined by git tags +- Must follow semantic versioning: `vMAJOR.MINOR.PATCH` (e.g., `v1.2.3`) +- Tag format triggers PyPI publishing + +**Configuration** (`pyproject.toml`): + +```toml +[tool.setuptools_scm] +fallback_version = "0.0.0-dev" +write_to = "{{cookiecutter.__package_slug}}/_version.py" +``` + +The version is written to `_version.py` and imported by the package. 
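+
+One common way to wire this up is to re-export the generated version from the package root, so `{{cookiecutter.__package_slug}}.__version__` works as shown later in this guide (a sketch; the template's actual `__init__.py` may differ):
+
+```python
+# {{cookiecutter.__package_slug}}/__init__.py (sketch)
+from ._version import version as __version__
+```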
+ +### Package Metadata + +Key metadata in `pyproject.toml`: + +```toml +[project] +name = "{{cookiecutter.__package_slug}}" +description = "{{cookiecutter.short_description}}" +authors = [{"name" = "{{cookiecutter.author_name}}"}] +readme = {file = "README.md", content-type = "text/markdown"} +license = {"file" = "LICENSE"} +dynamic = ["version"] +``` + +**Important Fields**: + +- **name**: Package name on PyPI (must be unique) +- **description**: Short description shown in search results +- **readme**: Long description shown on PyPI page +- **license**: License information +- **dynamic**: Version is determined by setuptools_scm + +## Building Packages + +### Local Build + +Build packages locally for testing: + +```bash +# Build source distribution and wheel +make build + +# Output in dist/ directory +ls dist/ +# {{cookiecutter.__package_slug}}-1.2.3.tar.gz +# {{cookiecutter.__package_slug}}-1.2.3-py3-none-any.whl +``` + +**Build Artifacts**: + +- **Source Distribution** (`.tar.gz`): Complete source code package +- **Wheel** (`.whl`): Pre-built binary package (faster to install) + +### Verify Build + +Test installation from the built package: + +```bash +# Create clean virtual environment +python -m venv test-env +source test-env/bin/activate + +# Install from wheel +pip install dist/{{cookiecutter.__package_slug}}-*.whl + +# Verify installation +python -c "import {{cookiecutter.__package_slug}}; print({{cookiecutter.__package_slug}}.__version__)" + +# Clean up +deactivate +rm -rf test-env +``` + +### Check Package + +Validate package metadata and contents: + +```bash +# Install twine for checking +pip install twine + +# Check package +twine check dist/* +``` + +This verifies: + +- README renders correctly on PyPI +- Metadata is valid +- Package structure is correct + +## GitHub Actions Workflow + +### Workflow Trigger + +The PyPI workflow (`.github/workflows/pypi.yaml`) runs on: + +**Every Push and PR**: + +- Builds the package +- Validates build succeeds +- Does not publish + +**Version Tags**: + +- Builds the package +- Publishes to PyPI (if `PUBLISH_TO_PYPI=true`) +- Only on tags matching `v[0-9]+.[0-9]+.[0-9]+` + +### Workflow Configuration + +```yaml +name: PyPI + +on: + push: + branches: + - "**" + tags: + - "v[0-9]+.[0-9]+.[0-9]+" + pull_request: + +env: + PUBLISH_TO_PYPI: true # Set during project generation + +jobs: + pypi: + runs-on: ubuntu-latest + permissions: + id-token: write # Required for OIDC + steps: + - uses: actions/checkout@v5 + with: + fetch-depth: 0 # Full history for setuptools_scm + fetch-tags: true # Ensure tags are fetched + + - uses: actions/setup-python@v6 + with: + python-version-file: .python-version + + - name: Install Dependencies + run: make install + + - name: Build Wheel + run: make build + + - name: Publish package + if: env.PUBLISH_TO_PYPI == 'true' && github.event_name == 'push' && startsWith(github.ref, 'refs/tags') + uses: pypa/gh-action-pypi-publish@release/v1 +``` + +**Key Points**: + +- **fetch-depth: 0**: Fetches complete git history (required for version calculation) +- **fetch-tags: true**: Ensures tags are available +- **permissions.id-token: write**: Enables OIDC authentication +- **Conditional publish**: Only publishes on tag pushes when enabled + +## OIDC Trusted Publishing + +### What is OIDC Publishing? 
+ +OpenID Connect (OIDC) publishing is a secure, token-free way to publish packages to PyPI: + +**Benefits**: + +- No API tokens to manage or rotate +- No secrets to store in GitHub +- Automatic authentication via GitHub's identity +- Reduced security risk (no leaked tokens) +- Scoped to specific repository and workflow + +**How It Works**: + +1. GitHub Actions generates a temporary OIDC token +2. Token proves the workflow's identity to PyPI +3. PyPI validates the token matches configured publisher +4. Package is published if validation succeeds + +### Setting Up OIDC on PyPI + +**Prerequisites**: + +- PyPI account with verified email +- Repository with PyPI workflow configured +- Package name available on PyPI + +#### Step 1: Create PyPI Account + +1. Go to [https://pypi.org/account/register/](https://pypi.org/account/register/) +2. Create account and verify email +3. Enable two-factor authentication (recommended) + +#### Step 2: Register Package Name + +**Option A: Reserve Name (Recommended)** + +1. Go to [https://pypi.org/manage/account/publishing/](https://pypi.org/manage/account/publishing/) +2. Click "Add a new pending publisher" +3. Fill in the form: + - **PyPI Project Name**: `{{cookiecutter.__package_slug}}` + - **Owner**: Your GitHub username or organization + - **Repository name**: Your repository name + - **Workflow name**: `pypi.yaml` + - **Environment name**: Leave blank (not used) +4. Click "Add" + +**Option B: Publish First Version Manually** + +If you prefer to publish the first version manually: + +```bash +# Build package +make build + +# Install twine +pip install twine + +# Upload to PyPI (will prompt for credentials) +twine upload dist/* +``` + +Then configure OIDC for future releases. + +#### Step 3: Configure Trusted Publisher + +If you published manually first: + +1. Go to your project page: `https://pypi.org/project/{{cookiecutter.__package_slug}}/` +2. Click "Manage" → "Publishing" +3. Scroll to "Trusted Publishers" +4. Click "Add a new publisher" +5. Select "GitHub Actions" +6. Fill in: + - **Owner**: Your GitHub username/org + - **Repository name**: Your repository name + - **Workflow name**: `pypi.yaml` + - **Environment name**: Leave blank +7. Click "Add" + +#### Step 4: Verify Configuration + +Check the configuration: + +```yaml +# Should match your PyPI trusted publisher settings +Owner: your-username +Repository: your-repo-name +Workflow: pypi.yaml +Environment: (none) +``` + +## Publishing a Release + +### Step 1: Prepare Release + +```bash +# Ensure you're on main branch +git checkout main +git pull + +# Ensure all tests pass +make tests + +# Verify build works +make build + +# Check package metadata +pip install twine +twine check dist/* +``` + +### Step 2: Create Version Tag + +Choose semantic version number: + +- **Major** (v2.0.0): Breaking changes +- **Minor** (v1.3.0): New features, backward compatible +- **Patch** (v1.2.4): Bug fixes, backward compatible + +```bash +# Create annotated tag +git tag -a v1.2.3 -m "Release version 1.2.3" + +# View tag details +git show v1.2.3 + +# Push tag to GitHub +git push origin v1.2.3 +``` + +**Important**: The tag must match the pattern `v[0-9]+.[0-9]+.[0-9]+` exactly. + +### Step 3: Monitor Workflow + +1. Go to your repository on GitHub +2. Click "Actions" tab +3. Find the "PyPI" workflow run +4. 
Watch the build and publish steps + +**Expected Output**: + +``` +✓ Checkout code +✓ Setup Python +✓ Install Dependencies +✓ Build Wheel +✓ Publish package to PyPI +``` + +### Step 4: Verify Publication + +Check the package on PyPI: + +```bash +# View on PyPI +open https://pypi.org/project/{{cookiecutter.__package_slug}}/ + +# Install from PyPI +pip install {{cookiecutter.__package_slug}} + +# Verify version +python -c "import {{cookiecutter.__package_slug}}; print({{cookiecutter.__package_slug}}.__version__)" +``` + +## Release Checklist + +Before tagging a release: + +- [ ] All tests pass (`make tests`) +- [ ] Changelog/release notes updated +- [ ] Version number decided (semantic versioning) +- [ ] README is current +- [ ] Documentation is up-to-date +- [ ] Breaking changes documented (if major version) +- [ ] Dependencies are up-to-date +- [ ] Build succeeds locally (`make build`) +- [ ] Package check passes (`twine check dist/*`) +- [ ] OIDC trusted publisher configured on PyPI + +After tagging: + +- [ ] GitHub Actions workflow succeeds +- [ ] Package appears on PyPI +- [ ] Installation from PyPI works +- [ ] Create GitHub Release with notes +- [ ] Announce release (if appropriate) + +## Troubleshooting + +### Build Fails: "No module named '_version'" + +**Problem**: Version file not generated. + +**Solution**: + +```bash +# Ensure git tags are present +git fetch --tags + +# Reinstall with setuptools_scm +pip install -e . + +# Check version file exists +ls {{cookiecutter.__package_slug}}/_version.py +``` + +### Publish Fails: "Not a valid publisher" + +**Problem**: OIDC trusted publisher not configured correctly. + +**Solution**: + +1. Verify configuration on PyPI matches workflow +2. Check repository owner/name spelling +3. Ensure workflow name is exactly `pypi.yaml` +4. Verify tag format: `v1.2.3` (not `1.2.3` or `version-1.2.3`) + +### Publish Fails: "Package already exists" + +**Problem**: Version already published to PyPI. + +**Solution**: + +PyPI does not allow replacing versions. You must: + +```bash +# Delete the tag +git tag -d v1.2.3 +git push origin :refs/tags/v1.2.3 + +# Create new patch version +git tag -a v1.2.4 -m "Release version 1.2.4" +git push origin v1.2.4 +``` + +### Version is "0.0.0-dev" + +**Problem**: Git tags not available or setuptools_scm not configured. + +**Solution**: + +```bash +# Fetch tags +git fetch --tags + +# Create initial tag if none exist +git tag v0.1.0 +git push origin v0.1.0 + +# Reinstall package +pip install -e . +``` + +### Workflow Doesn't Trigger + +**Problem**: Tag format doesn't match workflow pattern. + +**Solution**: + +Ensure tag format is exactly `vMAJOR.MINOR.PATCH`: + +```bash +# ✓ Correct formats +v1.0.0 +v2.3.4 +v10.20.30 + +# ✗ Incorrect formats +1.0.0 # Missing 'v' prefix +v1.0 # Missing patch version +v1.0.0-beta # Has suffix +version1.0.0 # Wrong prefix +``` + +### Permission Denied on PyPI + +**Problem**: OIDC not configured or wrong repository. + +**Solution**: + +1. Verify you're on the correct repository +2. Check OIDC configuration on PyPI +3. Ensure `permissions.id-token: write` in workflow +4. Verify PyPI account has access to package name + +## Security Best Practices + +1. **Use OIDC**: Avoid storing PyPI tokens as GitHub secrets + +2. **Protected Tags**: Configure branch protection for tags: + - Settings → Branches → Add tag protection rule + - Pattern: `v*` + - Prevents unauthorized releases + +3. **Required Reviews**: Require PR reviews before merging to main + +4. 
**Two-Factor Auth**: Enable 2FA on PyPI account + +5. **Monitor Releases**: Watch for unexpected package publications + +6. **Verify Checksums**: Check package integrity after publishing + +7. **Audit Logs**: Review PyPI and GitHub audit logs regularly + +## Version Strategy + +### Semantic Versioning + +Follow [Semantic Versioning 2.0.0](https://semver.org/): + +**MAJOR.MINOR.PATCH** (e.g., 2.3.1) + +- **MAJOR**: Incompatible API changes +- **MINOR**: Backward-compatible new features +- **PATCH**: Backward-compatible bug fixes + +### Pre-release Versions + +For pre-releases, use suffixes: + +```bash +# Alpha release +git tag v1.0.0-alpha.1 + +# Beta release +git tag v1.0.0-beta.1 + +# Release candidate +git tag v1.0.0-rc.1 +``` + +**Note**: Pre-release tags don't match the workflow pattern and won't auto-publish. This is intentional for safety. + +### Development Versions + +Between releases, setuptools_scm generates dev versions: + +```python +# After v1.2.0, before next tag +"1.2.1.dev5+g1234abc" +# 1.2.1: Next version +# dev5: 5 commits since tag +# g1234abc: Git commit hash +``` + +## Manual Publishing + +If needed, you can publish manually: + +```bash +# Build package +make build + +# Install twine +pip install twine + +# Upload to PyPI +twine upload dist/* + +# Or upload to Test PyPI first +twine upload --repository testpypi dist/* +``` + +**Test PyPI**: + +- URL: [https://test.pypi.org/](https://test.pypi.org/) +- Use for testing before production +- Separate account from production PyPI + +## Continuous Delivery + +This setup enables continuous delivery: + +1. **Develop**: Make changes on feature branches +2. **Test**: PR builds verify package builds successfully +3. **Merge**: Merge to main after review +4. **Release**: Tag commit to trigger publication +5. **Deploy**: Package automatically published to PyPI + +**Benefits**: + +- Fast releases (tag and done) +- Consistent build process +- No manual upload steps +- Built-in verification + +## References + +- [PyPI Trusted Publishers Guide](https://docs.pypi.org/trusted-publishers/) +- [GitHub Actions OIDC](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) +- [Semantic Versioning](https://semver.org/) +- [setuptools_scm Documentation](https://setuptools-scm.readthedocs.io/) +- [Python Packaging Guide](https://packaging.python.org/) +- [twine Documentation](https://twine.readthedocs.io/) + +## See Also + +- [GitHub Actions](./github.md) - Complete CI/CD workflow documentation +- [Dependencies](./dependencies.md) - Managing project dependencies +- [Makefile](./makefile.md) - Build commands and automation diff --git a/{{cookiecutter.__package_slug}}/docs/dev/quasiqueue.md b/{{cookiecutter.__package_slug}}/docs/dev/quasiqueue.md index 7d8692a..a78791a 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/quasiqueue.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/quasiqueue.md @@ -1,11 +1,597 @@ -# QuasiQueue +# QuasiQueue Integration -The [QuasiQueue](https://github.com/tedivm/quasiqueue) multiprocessing library is configured in the `qq.py` file. +This project uses [QuasiQueue](https://github.com/tedivm/quasiqueue), a multiprocessing library for Python that simplifies the creation and management of long-running jobs. 
-{%- if cookiecutter.include_docker == "y" %} +## Overview -## Docker +QuasiQueue handles all the complexity of multiprocessing so you only need to define two simple functions: -The QuasiQueue images are based off of the [Multi-Py QuasiQueue Project](https://github.com/multi-py/python-quasiqueue) and work for ARM and AMD out of the box. +- **Writer**: An async generator that yields items when the queue is low +- **Reader**: A function that processes individual items from the queue +The library handles process creation/cleanup, signal management, cross-process communication, and all the other complexity that makes working with multiprocessing difficult. + +## Configuration + +### Basic Setup + +QuasiQueue is initialized in `{{cookiecutter.__package_slug}}/qq.py` with three main components: + +```python +from quasiqueue import QuasiQueue + +# 1. Writer: yields items to queue +async def writer(desired: int): + """Called when queue needs items.""" + for x in range(0, desired): + yield x + +# 2. Reader: processes items from queue +async def reader(identifier: int | str): + """Processes one item.""" + print(f"Processing: {identifier}") + +# 3. Runner: orchestrates everything +runner = QuasiQueue( + settings.project_name, + reader=reader, + writer=writer, + settings=settings +) +``` + +### Settings Configuration + +Your Settings class inherits from `QuasiQueueSettings` to get QuasiQueue configuration: + +```python +from quasiqueue import Settings as QuasiQueueSettings + +class Settings(QuasiQueueSettings, ...): + project_name: str = "my_project" +``` + +Environment variables use your project name as a prefix: + +```bash +# Basic configuration (using TEST_FULL_DOCS as example prefix) +TEST_FULL_DOCS_NUM_PROCESSES=4 # Number of reader processes (default: 2) +TEST_FULL_DOCS_MAX_QUEUE_SIZE=500 # Maximum queue size (default: 300) +TEST_FULL_DOCS_LOOKUP_BLOCK_SIZE=20 # Items writer fetches per call (default: 10) + +# Performance tuning +TEST_FULL_DOCS_CONCURRENT_TASKS_PER_PROCESS=4 # Async tasks per process (default: 4) +TEST_FULL_DOCS_MAX_JOBS_PER_PROCESS=200 # Jobs before process restart (default: 200) +TEST_FULL_DOCS_PREVENT_REQUEUING_TIME=300 # Seconds to prevent requeuing (default: 300) + +# Sleep intervals +TEST_FULL_DOCS_EMPTY_QUEUE_SLEEP_TIME=1.0 # Sleep when writer returns nothing (default: 1.0) +TEST_FULL_DOCS_FULL_QUEUE_SLEEP_TIME=5.0 # Sleep when queue is full (default: 5.0) +TEST_FULL_DOCS_QUEUE_INTERACTION_TIMEOUT=0.01 # Queue lock timeout (default: 0.01) +TEST_FULL_DOCS_GRACEFUL_SHUTDOWN_TIMEOUT=30 # Graceful shutdown wait (default: 30) +``` + +All settings have sensible defaults and are optional. + +## Writer Function + +The writer is an **async generator** that yields items to be processed: + +```python +async def writer(desired: int): + """Yields items when queue is low. 
+ + Args: + desired: Suggested number of items to yield (optional to honor) + """ + # Simple range example + for x in range(0, desired): + yield x +``` + +### Writer Behavior + +- Called automatically when the queue needs items (below 30% full) +- Receives a `desired` parameter suggesting how many items to yield +- Can honor, ignore, or partially honor the `desired` count +- Should `yield` items that can be pickled (strings, integers, simple objects) +- Can return or yield nothing when no work is available + +### Database Example + +```python +async def writer(desired: int): + """Fetch pending jobs from database.""" + async with get_db_session() as session: + jobs = await session.execute( + select(Job) + .where(Job.status == "pending") + .limit(desired) + ) + for job in jobs.scalars(): + yield job.id + # No explicit None needed - generator ends naturally +``` + +### Writer Features + +QuasiQueue automatically: + +- Prevents duplicate items from being re-queued within a time window +- Sleeps when writer yields nothing (empty queue sleep time) +- Sleeps when queue is full (full queue sleep time) +- Passes optional `settings` argument if function signature includes it + +## Reader Function + +The reader processes individual items from the queue: + +```python +async def reader(identifier: int | str): + """Processes one item from queue. + + Args: + identifier: Item yielded by writer function + """ + print(f"Processing {identifier}") +``` + +### Reader Variations + +The reader can be sync or async: + +```python +# Async reader (preferred for I/O bound work) +async def reader(item: int | str): + await process_item(item) + +# Sync reader (for CPU bound work) +def reader(item: int | str): + process_item(item) +``` + +### Reader with Context + +Use a context function to share resources across reader calls: + +```python +def context(): + """Initialize once per reader process.""" + return { + 'http': get_http_connection_pool(), + 'dbengine': get_db_engine() + } + +async def reader(item: int | str, ctx: dict): + """ctx contains result from context function.""" + async with ctx['dbengine'].session() as session: + # Use shared database engine + job = await session.get(Job, item) + await job.process() + +runner = QuasiQueue( + settings.project_name, + reader=reader, + writer=writer, + context=context, # Pass context function + settings=settings +) +``` + +### Reader with Settings + +Access settings in your reader: + +```python +async def reader(item: int | str, settings: dict): + """settings is dict of all QuasiQueue settings.""" + if settings.get('debug'): + print(f"Debug: Processing {item}") + + max_retries = settings.get('max_retries', 3) + # Use settings as needed +``` + +### Concurrent Tasks + +For async readers, `concurrent_tasks_per_process` controls parallelism: + +```bash +# Each process runs up to 4 reader tasks concurrently +TEST_FULL_DOCS_CONCURRENT_TASKS_PER_PROCESS=4 +``` + +If you have 4 processes with 4 concurrent tasks each, that's 16 reader instances running simultaneously. + +## Running QuasiQueue + +### Command Line + +Run as a standalone process: + +```bash +# Run the qq module directly +python -m {{cookiecutter.__package_slug}}.qq +``` + +### In Code + +```python +import asyncio +from {{cookiecutter.__package_slug}}.qq import runner + +if __name__ == "__main__": + asyncio.run(runner.main()) +``` + +### What Happens + +When you run QuasiQueue: + +1. Creates a multiprocess queue +2. Launches reader processes (number controlled by `num_processes`) +3. 
Writer fills the queue with items +4. Reader processes pull items and process them +5. Processes are restarted after `max_jobs_per_process` jobs +6. Handles SIGTERM/SIGINT for graceful shutdown +{%- if cookiecutter.include_aiocache == "y" %} + +## Cache Integration + +If using aiocache, caches are automatically initialized before the QuasiQueue runner starts and are available in your reader functions: + +```python +from {{cookiecutter.__package_slug}}.services.cache import cache + +async def reader(item: int | str): + """Reader can access initialized caches.""" + # Check cache first + cached_result = await cache.get(f'result_{item}') + if cached_result: + return cached_result + + # Process and cache result + result = await process_item(item) + await cache.set(f'result_{item}', result, ttl=3600) + return result +``` + +Cache initialization is handled automatically by the application startup, so you don't need to worry about it in your QuasiQueue functions. {%- endif %} + +## Testing + +### Component Tests + +Test writer and reader functions individually: + +```python +"""Tests for QuasiQueue components.""" +import pytest +from {{cookiecutter.__package_slug}}.qq import runner, writer, reader + + +def test_runner_exists(): + """QuasiQueue runner should be instantiated.""" + assert runner is not None + + +def test_runner_has_settings(): + """Runner should have settings configured.""" + assert hasattr(runner, "settings") + assert runner.settings is not None + + +def test_writer_is_async_generator(): + """Writer should be an async generator function.""" + import inspect + assert inspect.isasyncgenfunction(writer) + + +@pytest.mark.asyncio +async def test_writer_yields_items(): + """Writer should yield expected number of items.""" + desired = 5 + results = [] + + async for item in writer(desired): + results.append(item) + + assert len(results) == desired + assert results == list(range(0, desired)) + + +@pytest.mark.asyncio +async def test_reader_processes_item(): + """Reader should process an item without error.""" + # Should not raise exceptions + await reader(42) +``` + +### Context Function Tests + +If using a context function: + +```python +import inspect +from {{cookiecutter.__package_slug}}.qq import context, reader + + +def test_context_returns_dict(): + """Context should return a dictionary of resources.""" + if context: + ctx = context() + assert isinstance(ctx, dict) + + +@pytest.mark.asyncio +async def test_reader_uses_context(): + """Reader should work with context resources.""" + if context: + ctx = context() if not inspect.iscoroutinefunction(context) else await context() + + # Check if reader accepts ctx parameter + sig = inspect.signature(reader) + if 'ctx' in sig.parameters: + await reader(1, ctx=ctx) +``` + +### Integration Tests + +For full workflow testing: + +```python +@pytest.mark.asyncio +async def test_quasiqueue_workflow(): + """Test complete QuasiQueue workflow.""" + from quasiqueue import QuasiQueue, Settings + + processed = [] + + async def test_writer(desired: int): + for x in range(0, 10): + yield x + + async def test_reader(item: int): + processed.append(item) + + test_runner = QuasiQueue( + "test_queue", + reader=test_reader, + writer=test_writer, + settings=Settings( + num_processes=2, + max_queue_size=50, + graceful_shutdown_timeout=1 + ) + ) + + # Run briefly then cancel + import asyncio + task = asyncio.create_task(test_runner.main()) + await asyncio.sleep(2) + task.cancel() + + # Should have processed some items + assert len(processed) >= 5 +``` + +## Best 
Practices + +### Process Configuration + +**Number of Processes**: Match to workload type + +- CPU-bound work: Match CPU core count +- I/O-bound work: Can exceed core count (2-4x) + +```bash +# CPU-bound: intensive calculations +TEST_FULL_DOCS_NUM_PROCESSES=8 # Match your CPU cores + +# I/O-bound: database queries, HTTP requests +TEST_FULL_DOCS_NUM_PROCESSES=16 # Can exceed cores +TEST_FULL_DOCS_CONCURRENT_TASKS_PER_PROCESS=4 # Even more parallelism +``` + +**Process Recycling**: Prevent memory leaks by restarting processes + +```bash +# Restart reader process after 200 jobs +TEST_FULL_DOCS_MAX_JOBS_PER_PROCESS=200 +``` + +### Writer Optimization + +**Honor `desired` Parameter**: Better performance when you yield close to requested count + +```python +async def writer(desired: int): + # Good: respects desired count + jobs = await fetch_pending_jobs(limit=desired) + for job in jobs: + yield job.id +``` + +**Batch Database Queries**: Fetch multiple items at once + +```python +async def writer(desired: int): + # Efficient: single query for multiple items + async with get_db_session() as session: + jobs = await session.execute( + select(Job) + .where(Job.status == "pending") + .limit(desired) + ) + for job in jobs.scalars(): + yield job.id +``` + +**Signal Empty Queue**: Return/yield nothing when no work available + +```python +async def writer(desired: int): + jobs = await fetch_pending_jobs(limit=desired) + + if not jobs: + # QuasiQueue will sleep (empty_queue_sleep_time) + return + + for job in jobs: + yield job.id +``` + +### Reader Optimization + +**Prefer Async Readers**: Better for I/O-bound work + +```python +# Good: async reader with concurrent tasks +async def reader(item: int, ctx: dict): + async with ctx['http'].get(url) as response: + data = await response.json() + # Process data +``` + +**Use Context Function**: Share expensive resources + +```python +def context(): + """Initialize once per process, not per job.""" + return { + 'http': get_http_connection_pool(), + 'db': get_db_engine(), + 'redis': get_redis_pool() + } + +async def reader(item: int, ctx: dict): + # Reuse pooled connections + async with ctx['db'].session() as session: + # Database work + pass +``` + +**Error Handling**: Prevent process crashes + +```python +async def reader(item: int): + try: + # Process item + await process(item) + except Exception as e: + logger.error(f"Failed to process {item}: {e}") + # Don't let exception kill the process +``` + +### Shutdown Handling + +QuasiQueue automatically handles graceful shutdown: + +- **SIGTERM**: Waits for readers to finish (up to `graceful_shutdown_timeout`) +- **SIGINT**: Same as SIGTERM +- **After Timeout**: Forcefully terminates remaining processes + +```bash +# Give readers 60 seconds to finish current work +TEST_FULL_DOCS_GRACEFUL_SHUTDOWN_TIMEOUT=60 +``` + +For quick shutdown during development: + +```bash +# Shorter timeout for faster restart cycles +TEST_FULL_DOCS_GRACEFUL_SHUTDOWN_TIMEOUT=5 +``` + +## Development vs Production + +### Development + +Focus on debuggability: + +```bash +# .env.development +TEST_FULL_DOCS_NUM_PROCESSES=1 # Single process easier to debug +TEST_FULL_DOCS_MAX_QUEUE_SIZE=20 # Smaller queue +TEST_FULL_DOCS_DEBUG=true # Enable debug logging +TEST_FULL_DOCS_GRACEFUL_SHUTDOWN_TIMEOUT=2 # Fast shutdown for restarts +``` + +Add logging for visibility: + +```python +import logging + +logger = logging.getLogger(__name__) + +async def reader(identifier: int | str): + logger.info(f"Started processing {identifier}") + # Process item + 
logger.info(f"Completed {identifier}") +``` + +### Production + +Optimize for throughput and reliability: + +```bash +# .env.production +TEST_FULL_DOCS_NUM_PROCESSES=16 # Scale to workload +TEST_FULL_DOCS_MAX_QUEUE_SIZE=500 # Larger buffer +TEST_FULL_DOCS_CONCURRENT_TASKS_PER_PROCESS=4 # More parallelism +TEST_FULL_DOCS_MAX_JOBS_PER_PROCESS=200 # Memory leak protection +TEST_FULL_DOCS_GRACEFUL_SHUTDOWN_TIMEOUT=60 # Don't lose work +``` + +Monitor with metrics: + +```python +from prometheus_client import Counter, Histogram + +jobs_processed = Counter('jobs_processed_total', 'Total jobs processed') +job_duration = Histogram('job_duration_seconds', 'Job processing time') + +async def reader(item: int | str): + with job_duration.time(): + # Process item + pass + jobs_processed.inc() +``` + +### Deployment + +**Daemonization**: Run as a service + +```ini +# systemd unit file +[Service] +ExecStart=/path/to/venv/bin/python -m {{cookiecutter.__package_slug}}.qq +Restart=always +``` + +**Docker**: Run in a container + +```dockerfile +CMD ["python", "-m", "{{cookiecutter.__package_slug}}.qq"] +``` + +**Health Monitoring**: Track queue metrics + +- Queue depth (items waiting) +- Processing rate (items/second) +- Process count (should match config) +- Error rate + +**Graceful Deploys**: Use SIGTERM for zero-downtime + +```bash +# Send SIGTERM, processes finish current work then exit +kill -TERM +``` + +## Additional Resources + +- [QuasiQueue GitHub Repository](https://github.com/tedivm/quasiqueue) +- [QuasiQueue README Documentation](https://github.com/tedivm/quasiqueue/blob/main/README.md) - Includes additional use case examples for web servers, web scraping, image processing, and more +- [Python Multiprocessing Documentation](https://docs.python.org/3/library/multiprocessing.html) diff --git a/{{cookiecutter.__package_slug}}/docs/dev/settings.md b/{{cookiecutter.__package_slug}}/docs/dev/settings.md index 3aad776..1e88629 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/settings.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/settings.md @@ -1,3 +1,718 @@ # Settings -This project uses the [Pydantic Base Settings](https://docs.pydantic.dev/usage/settings/) system. The `{{ cookiecutter.__package_slug }}.conf.settings:Settings` class can be expanded to include new settings. An active instance of the settings class can be found at `{{ cookiecutter.__package_slug }}.conf:settings`. +This project uses [Pydantic Settings](https://docs.pydantic.dev/latest/concepts/pydantic_settings/) for configuration management, providing type-safe settings with environment variable support and validation. 
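As a quick illustration before diving into the structure, here is a minimal, self-contained sketch of the pattern (the field names below are examples, not the template's exact settings): values come from the environment, are coerced to the declared types, and invalid input fails fast with a validation error.

```python
from pydantic_settings import BaseSettings


class ExampleSettings(BaseSettings):
    # Illustrative fields only - see the sections below for the real Settings class
    project_name: str = "example"
    debug: bool = False
    worker_count: int = 2


# With DEBUG=true and WORKER_COUNT=8 exported in the environment:
settings = ExampleSettings()
print(settings.debug)         # True  (string "true" coerced to bool)
print(settings.worker_count)  # 8     (string "8" coerced to int)
# WORKER_COUNT=banana would raise a ValidationError when ExampleSettings() is constructed
```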
+ +## Configuration Structure + +The settings system is organized into multiple modules: + +- **`{{cookiecutter.__package_slug}}/conf/settings.py`**: Main Settings class +{%- if cookiecutter.include_aiocache == "y" %} +- **`{{cookiecutter.__package_slug}}/conf/cache.py`**: Cache-specific settings +{%- endif %} +{%- if cookiecutter.include_sqlalchemy == "y" %} +- **`{{cookiecutter.__package_slug}}/conf/db.py`**: Database settings +{%- endif %} +- **`{{cookiecutter.__package_slug}}/settings.py`**: Global settings instance + +## Accessing Settings + +### Global Settings Instance + +A pre-configured settings instance is available throughout the application: + +```python +from {{cookiecutter.__package_slug}}.conf import settings + +# Access settings +print(settings.project_name) +print(settings.debug) +print(settings.database_url) +``` + +### In Different Components + +{%- if cookiecutter.include_fastapi == "y" %} + +**FastAPI Routes**: + +```python +from {{cookiecutter.__package_slug}}.conf import settings +from fastapi import APIRouter + +router = APIRouter() + +@router.get("/config") +async def get_config(): + return { + "project": settings.project_name, + "debug": settings.debug, + } +``` + +{%- endif %} +{%- if cookiecutter.include_celery == "y" %} + +**Celery Tasks**: + +```python +from {{cookiecutter.__package_slug}}.conf import settings +from {{cookiecutter.__package_slug}}.celery import celery + +@celery.task +def example_task(): + # Note: Celery configuration (broker, backend) is NOT in Settings + # Use settings for application-specific configuration only + project_name = settings.project_name + # Task logic here +``` + +{%- endif %} +{%- if cookiecutter.include_cli == "y" %} + +**CLI Commands**: + +```python +from {{cookiecutter.__package_slug}}.conf import settings +from {{cookiecutter.__package_slug}}.cli import app + +@app.command() +def show_config(): + """Display current configuration.""" + print(f"Project: {settings.project_name}") + print(f"Debug: {settings.debug}") +``` + +{%- endif %} + +## Environment Variables + +### Setting Values + +Configure the application using environment variables: + +```bash +# Set environment variables +export PROJECT_NAME="My Application" +export DEBUG="True" +export DATABASE_URL="postgresql+asyncpg://user:pass@localhost/mydb" + +# Or use a .env file +echo 'PROJECT_NAME="My Application"' > .env +echo 'DEBUG=True' >> .env +echo 'DATABASE_URL="postgresql+asyncpg://user:pass@localhost/mydb"' >> .env +``` + +### Loading from .env Files + +The settings system automatically loads from `.env` files in the project root: + +```python +# {{cookiecutter.__package_slug}}/conf/settings.py +from pydantic_settings import BaseSettings, SettingsConfigDict + +class Settings(BaseSettings): + model_config = SettingsConfigDict( + env_file=".env", + env_file_encoding="utf-8", + case_sensitive=False, + extra="ignore", + ) +``` + +### Environment Variable Prefixes + +To avoid conflicts, you can add a prefix to all environment variables: + +```python +class Settings(BaseSettings): + model_config = SettingsConfigDict( + env_prefix="MYAPP_", # Now use MYAPP_DEBUG instead of DEBUG + ) + + debug: bool = False +``` + +## Core Settings + +### Default Settings + +The base Settings class includes: + +```python +from pydantic_settings import BaseSettings + +class Settings(BaseSettings): + # Application + project_name: str = "{{cookiecutter.__package_slug}}" + version: str = "0.1.0" + debug: bool = False +{%- if cookiecutter.include_sqlalchemy == "y" %} + + # Database + 
database_url: str = "postgresql+asyncpg://user:pass@localhost/db" +{%- endif %} +{%- if cookiecutter.include_aiocache == "y" %} + + # Cache + cache_backend: str = "memory" + redis_url: str = "redis://localhost:6379/0" +{%- endif %} +{%- if cookiecutter.include_quasiqueue == "y" %} + + # QuasiQueue settings come from QuasiQueueSettings base class +{%- endif %} +``` + +{%- if cookiecutter.include_celery == "y" %} + +**Note**: Celery does NOT use the Settings class. Celery must be configured using environment variables (prefixed with `CELERY_`), a `celeryconfig.py` file, or programmatically. See the [Celery documentation](celery.md) for details. +{%- endif %} + +## Adding Custom Settings + +### Simple Settings + +Add new fields to the Settings class: + +```python +# {{cookiecutter.__package_slug}}/conf/settings.py +from pydantic import SecretStr +from pydantic_settings import BaseSettings + +class Settings(BaseSettings): + # Existing settings... + + # Add new settings + max_upload_size: int = 10_000_000 # 10MB default + allowed_hosts: list[str] = ["localhost", "127.0.0.1"] + api_key: SecretStr = SecretStr("") # Use SecretStr for secrets + + model_config = SettingsConfigDict( + env_file=".env", + ) +``` + +Usage: + +```bash +export MAX_UPLOAD_SIZE="20000000" +export ALLOWED_HOSTS='["example.com", "api.example.com"]' +export API_KEY="secret-key-here" +``` + +**Note**: Use `SecretStr` for sensitive values like passwords, API keys, and tokens. This prevents secrets from being accidentally logged or exposed: + +```python +from pydantic import SecretStr + +class Settings(BaseSettings): + api_key: SecretStr + database_password: SecretStr + +# Access the secret value when needed +settings.api_key.get_secret_value() # Returns the actual string +print(settings.api_key) # Prints: ********** +``` + +### Nested Settings + +Organize related settings into nested models: + +```python +from pydantic import BaseModel +from pydantic_settings import BaseSettings + +class EmailSettings(BaseModel): + smtp_host: str = "localhost" + smtp_port: int = 587 + smtp_user: str = "" + smtp_password: str = "" + from_address: str = "noreply@example.com" + +class Settings(BaseSettings): + project_name: str = "{{cookiecutter.__package_slug}}" + + # Nested settings + email: EmailSettings = EmailSettings() +``` + +Usage: + +```bash +export EMAIL__SMTP_HOST="smtp.gmail.com" # Double underscore for nested +export EMAIL__SMTP_PORT="587" +export EMAIL__FROM_ADDRESS="noreply@myapp.com" +``` + +Access nested settings: + +```python +from {{cookiecutter.__package_slug}}.conf import settings + +print(settings.email.smtp_host) +print(settings.email.from_address) +``` + +### Computed Properties + +Add derived values using properties: + +```python +class Settings(BaseSettings): + debug: bool = False + database_url: str = "postgresql+asyncpg://localhost/db" + + @property + def is_production(self) -> bool: + """Check if running in production mode.""" + return not self.debug + + @property + def database_name(self) -> str: + """Extract database name from URL.""" + # Parse database_url and return database name + return self.database_url.split("/")[-1] +``` + +## Validation + +### Type Validation + +Pydantic automatically validates types: + +```python +class Settings(BaseSettings): + port: int = 8000 + timeout: float = 30.0 + debug: bool = False + allowed_hosts: list[str] = [] +``` + +Invalid values raise validation errors: + +```bash +export PORT="not-a-number" # Raises ValidationError +``` + +### Custom Validators + +Add custom validation logic: 
```python
from pydantic import field_validator
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    port: int = 8000
    database_url: str = ""

    @field_validator("port")
    @classmethod
    def validate_port(cls, v: int) -> int:
        if not 1 <= v <= 65535:
            raise ValueError("Port must be between 1 and 65535")
        return v

    @field_validator("database_url")
    @classmethod
    def validate_database_url(cls, v: str) -> str:
        if not v.startswith(("postgresql://", "postgresql+asyncpg://")):
            raise ValueError("Database URL must use PostgreSQL")
        return v
```

### Required Settings

Mark settings as required by not providing defaults:

```python
class Settings(BaseSettings):
    # Required - will raise error if not provided
    api_key: str
    database_url: str

    # Optional - has default
    debug: bool = False
```

## Conditional Settings

### Feature-Based Configuration

Load different settings based on enabled features:

```python
from typing import Optional

class Settings(BaseSettings):
    # Core settings
    project_name: str = "{{cookiecutter.__package_slug}}"
    debug: bool = False

    # Optional feature settings
    database_url: Optional[str] = None
    redis_url: Optional[str] = None

    @property
    def has_database(self) -> bool:
        return self.database_url is not None

    @property
    def has_cache(self) -> bool:
        return self.redis_url is not None
```

**Note**: In the actual template, optional features like database and cache settings are added via inheritance from specialized settings classes (e.g., `DatabaseSettings`, `CacheSettings`) only when those features are enabled.

### Environment-Specific Settings

Use different settings per environment:

```python
from enum import Enum

class Environment(str, Enum):
    DEVELOPMENT = "development"
    STAGING = "staging"
    PRODUCTION = "production"

class Settings(BaseSettings):
    environment: Environment = Environment.DEVELOPMENT
    debug: bool = False

    @property
    def is_production(self) -> bool:
        return self.environment == Environment.PRODUCTION

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

        # Auto-set debug based on environment
        if self.environment != Environment.PRODUCTION:
            self.debug = True
```

## Multiple Settings Files

### Environment-Specific Files

Load different .env files per environment:

```python
import os
from pydantic_settings import BaseSettings, SettingsConfigDict

# Determine environment
env = os.getenv("ENVIRONMENT", "development")
env_file = f".env.{env}"  # .env.development, .env.production, etc.
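# Example (assumed usage): with ENVIRONMENT=production exported, env_file becomes
# ".env.production"; when ENVIRONMENT is unset it falls back to ".env.development".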
+ +class Settings(BaseSettings): + model_config = SettingsConfigDict( + env_file=env_file, + env_file_encoding="utf-8", + ) +``` + +Project structure: + +``` +.env # Default settings +.env.development # Development overrides +.env.staging # Staging overrides +.env.production # Production overrides +``` + +## Testing with Settings + +### Testing Settings Existence + +Test that settings are properly instantiated and accessible: + +```python +# tests/test_settings.py +from {{cookiecutter.__package_slug}}.settings import settings +from {{cookiecutter.__package_slug}}.conf.settings import Settings + + +def test_settings_exists(): + """Test that settings instance exists.""" + assert settings is not None + + +def test_settings_is_settings_class(): + """Test that settings is an instance of Settings.""" + assert isinstance(settings, Settings) + + +def test_settings_has_project_name(): + """Test that settings has project_name attribute.""" + assert hasattr(settings, "project_name") + assert settings.project_name is not None + assert len(settings.project_name) > 0 + + +def test_settings_has_debug(): + """Test that settings has debug attribute.""" + assert hasattr(settings, "debug") + assert isinstance(settings.debug, bool) +``` + +### Testing Settings Inheritance + +Test that Settings properly inherits from base classes: + +```python +from {{cookiecutter.__package_slug}}.conf.cache import CacheSettings + + +def test_settings_inherits_from_cache_settings(): + """Test that Settings inherits from CacheSettings.""" + assert issubclass(Settings, CacheSettings) + + +def test_settings_inherits_from_quasiqueue_settings(): + """Test that Settings inherits from QuasiQueueSettings.""" + from quasiqueue import Settings as QuasiQueueSettings + assert issubclass(Settings, QuasiQueueSettings) +``` + +### Testing Settings Instantiation + +Test that Settings can be created and configured: + +```python +def test_settings_can_be_instantiated(): + """Test that Settings can be instantiated.""" + test_settings = Settings() + assert test_settings is not None + assert isinstance(test_settings, Settings) + + +def test_settings_has_defaults(): + """Test that settings have default values.""" + test_settings = Settings() + assert hasattr(test_settings, "project_name") + assert hasattr(test_settings, "debug") +``` + +### Override Settings with Monkeypatch + +Use pytest's `monkeypatch` to override environment variables: + +```python +def test_debug_from_env(monkeypatch): + """Test that DEBUG setting can be set from environment.""" + monkeypatch.setenv("DEBUG", "True") + + # Create new settings instance to pick up env var + test_settings = Settings() + assert test_settings.debug is True + + +def test_project_name_from_env(monkeypatch): + """Test that PROJECT_NAME can be overridden.""" + monkeypatch.setenv("PROJECT_NAME", "Test Project") + + test_settings = Settings() + assert test_settings.project_name == "Test Project" + + +def test_database_url_from_env(monkeypatch): + """Test database URL configuration.""" + test_url = "postgresql+asyncpg://test:pass@localhost/testdb" + monkeypatch.setenv("DATABASE_URL", test_url) + + test_settings = Settings() + assert test_settings.database_url == test_url +``` + +### Testing Cache Configuration + +Test cache-related settings when cache is enabled: + +```python +def test_cache_configuration_exists(): + """Test that cache configuration is present.""" + from {{cookiecutter.__package_slug}}.conf.cache import CacheSettings + + cache_settings = CacheSettings() + assert hasattr(cache_settings, 
"cache_backend") + + +def test_cache_backend_setting(monkeypatch): + """Test cache backend configuration.""" + monkeypatch.setenv("CACHE_BACKEND", "redis") + + test_settings = Settings() + assert test_settings.cache_backend == "redis" + + +def test_redis_url_setting(monkeypatch): + """Test Redis URL configuration.""" + redis_url = "redis://localhost:6379/1" + monkeypatch.setenv("REDIS_URL", redis_url) + + test_settings = Settings() + assert test_settings.redis_url == redis_url +``` + +### Test Fixtures for Settings + +Create reusable fixtures for common test scenarios: + +```python +import pytest + + +@pytest.fixture +def test_settings(monkeypatch): + """Provide settings configured for testing.""" + monkeypatch.setenv("DEBUG", "True") + monkeypatch.setenv("DATABASE_URL", "sqlite:///test.db") + monkeypatch.setenv("CACHE_BACKEND", "memory") + + from {{cookiecutter.__package_slug}}.conf.settings import Settings + return Settings() + + +@pytest.fixture +def production_settings(monkeypatch): + """Provide settings configured for production.""" + monkeypatch.setenv("DEBUG", "False") + monkeypatch.setenv("DATABASE_URL", "postgresql://prod/db") + + from {{cookiecutter.__package_slug}}.conf.settings import Settings + return Settings() + + +def test_with_test_settings(test_settings): + """Test using the test_settings fixture.""" + assert test_settings.debug is True + assert "sqlite" in test_settings.database_url + + +def test_with_production_settings(production_settings): + """Test using the production_settings fixture.""" + assert production_settings.debug is False +``` + +### Testing Optional Features + +Test that settings handle optional features correctly: + +```python +def test_settings_with_all_features(): + """Test settings when all features are enabled.""" + test_settings = Settings() + + # Check that all feature-related settings exist + # (based on which features are enabled in the template) + if hasattr(test_settings, "database_url"): + assert test_settings.database_url is not None + if hasattr(test_settings, "cache_backend"): + assert test_settings.cache_backend in ["memory", "redis"] + + +def test_cache_settings_conditional(): + """Test cache settings are available when cache is enabled.""" + from {{cookiecutter.__package_slug}}.conf import settings + + if hasattr(settings, "cache_backend"): + assert settings.cache_backend in ["memory", "redis"] +``` + +## Best Practices + +1. **Use Type Hints**: Always provide type hints for settings - enables IDE autocomplete and validation: + + ```python + port: int = 8000 # Good + port = 8000 # Bad - no type checking + ``` + +2. **Use SecretStr for Sensitive Data**: Protect passwords, API keys, tokens, and other secrets from accidental exposure: + + ```python + from pydantic import SecretStr + + class Settings(BaseSettings): + api_key: SecretStr # Good - prevents logging secrets + database_password: SecretStr # Good + jwt_secret: SecretStr # Good + + # api_key: str # Bad - secret can be logged + + # Access when needed + actual_key = settings.api_key.get_secret_value() + ``` + + Benefits of `SecretStr`: + - Prevents secrets from appearing in logs + - Hides values in error messages and tracebacks + - Shows `**********` when printed or serialized + - Makes it explicit which fields contain sensitive data + +3. **Provide Sensible Defaults**: Set reasonable defaults for all optional settings + +4. 
**Document Settings**: Add docstrings to explain each setting: + + ```python + class Settings(BaseSettings): + max_connections: int = 10 + """Maximum number of concurrent database connections.""" + + timeout: float = 30.0 + """Request timeout in seconds.""" + ``` + +5. **Group Related Settings**: Use nested models to organize related configuration + +6. **Validate Early**: Use validators to catch configuration errors at startup rather than runtime + +7. **Keep Secrets Secret**: Never commit .env files with secrets to version control: + + ```bash + # .gitignore + .env + .env.* + !.env.example + ``` + +8. **Provide .env.example**: Include a template showing all available settings: + + ```bash + # .env.example + PROJECT_NAME="My App" + DEBUG=False + DATABASE_URL="postgresql+asyncpg://user:pass@localhost/db" + API_KEY="your-api-key-here" + ``` + +## Development vs Production + +### Development + +```bash +# .env.development +DEBUG=True +DATABASE_URL="postgresql+asyncpg://localhost/dev_db" +REDIS_URL="redis://localhost:6379/0" +LOG_LEVEL="DEBUG" +``` + +### Production + +```bash +# .env.production +DEBUG=False +DATABASE_URL="postgresql+asyncpg://prod-db:5432/app_db" +REDIS_URL="redis://prod-redis:6379/0" +LOG_LEVEL="INFO" +API_KEY="${SECRET_API_KEY}" # Loaded from secure vault +``` + +## References + +- [Pydantic Settings Documentation](https://docs.pydantic.dev/latest/concepts/pydantic_settings/) +- [Pydantic Validation](https://docs.pydantic.dev/latest/concepts/validators/) +- [Python-dotenv](https://github.com/theskumar/python-dotenv) diff --git a/{{cookiecutter.__package_slug}}/docs/dev/templates.md b/{{cookiecutter.__package_slug}}/docs/dev/templates.md index 684234f..c114398 100644 --- a/{{cookiecutter.__package_slug}}/docs/dev/templates.md +++ b/{{cookiecutter.__package_slug}}/docs/dev/templates.md @@ -1 +1,872 @@ -# Templates +# Jinja2 Templates + +This project uses [Jinja2](https://jinja.palletsprojects.com/), a modern and designer-friendly templating language for Python. + +## Overview + +Jinja2 is integrated with FastAPI to render HTML templates for web pages, emails, and other text-based content. 
The template system provides: + +- **Template inheritance** for building page layouts +- **Variable interpolation** for dynamic content +- **Control structures** (loops, conditionals) +- **Filters** for transforming data +- **Custom functions** and filters + +## Configuration + +### Template Location + +Templates are stored in the `{{cookiecutter.__package_slug}}/templates/` directory: + +``` +{{cookiecutter.__package_slug}}/ +└── templates/ + ├── base.html # Base layout template + ├── index.html # Homepage template + ├── components/ + │ ├── header.html # Reusable header + │ └── footer.html # Reusable footer + └── emails/ + └── welcome.html # Email templates +``` + +### Jinja2 Environment + +The Jinja2 environment is configured in `{{cookiecutter.__package_slug}}/services/jinja.py`: + +```python +from jinja2 import Environment, PackageLoader, select_autoescape + +# Create Jinja2 environment +env = Environment( + loader=PackageLoader("{{cookiecutter.__package_slug}}", "templates"), + autoescape=select_autoescape(["html", "xml"]), + trim_blocks=True, + lstrip_blocks=True, +) +``` + +{%- if cookiecutter.include_fastapi == "y" %} + +### FastAPI Integration + +In FastAPI routes, use the Jinja2Templates class: + +```python +from fastapi import FastAPI, Request +from fastapi.templating import Jinja2Templates + +app = FastAPI() +templates = Jinja2Templates(directory="{{cookiecutter.__package_slug}}/templates") + +@app.get("/") +async def homepage(request: Request): + return templates.TemplateResponse( + "index.html", + {"request": request, "title": "Welcome"} + ) +``` + +{%- endif %} + +## Basic Template Usage + +### Simple Template + +Create a basic template (`templates/hello.html`): + +```html +{%- raw %} + + + + {{ title }} + + +

    <h1>Hello, {{ name }}!</h1>
    <p>Welcome to {{ project_name }}</p>
+ + +{% endraw -%} +``` + +Render in a route: + +```python +@app.get("/hello/{name}") +async def hello(request: Request, name: str): + return templates.TemplateResponse( + "hello.html", + { + "request": request, + "title": f"Hello {name}", + "name": name, + "project_name": "My Application" + } + ) +``` + +### Variables + +Use double curly braces for variable interpolation: + +```html +{%- raw %} +{{ variable }} +{{ user.name }} +{{ items[0] }} +{{ data['key'] }} +{{ function() }} +{% endraw -%} +``` + +### Comments + +```html +{%- raw %} +{# This is a comment and won't appear in output #} + +{# +Multi-line +comment +#} +{% endraw -%} +``` + +## Template Inheritance + +### Base Template + +Create a base layout (`templates/base.html`): + +```html +{%- raw %} + + + + + + {% block title %}Default Title{% endblock %} + + + + {% block extra_head %}{% endblock %} + + +
+ {% include 'components/header.html' %} +
+ +
+ {% block content %} +

      <p>Default content</p>
+ {% endblock %} +
+ +
+ {% include 'components/footer.html' %} +
+ + {% block extra_scripts %}{% endblock %} + + +{% endraw -%} +``` + +### Child Template + +Extend the base template (`templates/page.html`): + +```html +{%- raw %} +{% extends "base.html" %} + +{% block title %}My Page Title{% endblock %} + +{% block extra_head %} + +{% endblock %} + +{% block content %} +

  <h1>Welcome to My Page</h1>
  <p>This content replaces the default content block.</p>
+{% endblock %} + +{% block extra_scripts %} + +{% endblock %} +{% endraw -%} +``` + +## Control Structures + +### Conditionals + +```html +{%- raw %} +{% if user.is_authenticated %} +

  <p>Welcome back, {{ user.name }}!</p>
{% elif user.is_guest %}
  <p>Welcome, guest!</p>
{% else %}
  <p>Please log in.</p>
+{% endif %} +{% endraw -%} +``` + +### Loops + +```html +{%- raw %} +
<ul>
{% for item in items %}
  <li>{{ loop.index }}: {{ item.name }}</li>
{% endfor %}
</ul>

<ul>
{% for user in users %}
  <li>{{ user.name }}</li>
{% else %}
  <li>No users found.</li>
{% endfor %}
</ul>
+ + +{% for item in items %} + {{ loop.index }} + {{ loop.index0 }} + {{ loop.first }} + {{ loop.last }} + {{ loop.length }} +{% endfor %} +{% endraw -%} +``` + +### Filters + +Transform variables with filters: + +```html +{%- raw %} +{{ name|upper }} +{{ text|lower }} +{{ number|abs }} +{{ items|length }} +{{ price|round(2) }} +{{ html_content|safe }} +{{ description|truncate(100) }} +{{ date|default("N/A") }} +{{ items|join(", ") }} +{{ text|replace("old", "new") }} +{% endraw -%} +``` + +## Custom Filters + +### Adding Filters + +Add custom filters to the Jinja2 environment: + +```python +# {{cookiecutter.__package_slug}}/services/jinja.py +from jinja2 import Environment, PackageLoader +from datetime import datetime + +env = Environment(loader=PackageLoader("{{cookiecutter.__package_slug}}", "templates")) + +def format_datetime(value, format="%Y-%m-%d %H:%M:%S"): + """Format datetime object.""" + if isinstance(value, datetime): + return value.strftime(format) + return value + +def currency(value): + """Format as currency.""" + return f"${value:,.2f}" + +# Register custom filters +env.filters["datetime"] = format_datetime +env.filters["currency"] = currency +``` + +Usage in templates: + +```html +{%- raw %} +

<p>Date: {{ created_at|datetime("%B %d, %Y") }}</p>
<p>Price: {{ amount|currency }}</p>
+{% endraw -%} +``` + +### Common Custom Filters + +```python +def pluralize(count, singular, plural=None): + """Return singular or plural form based on count.""" + if plural is None: + plural = singular + "s" + return singular if count == 1 else plural + +def markdown_to_html(text): + """Convert Markdown to HTML.""" + import markdown + return markdown.markdown(text) + +def nl2br(text): + """Convert newlines to
tags.""" + return text.replace("\n", "
") + +# Register filters +env.filters["pluralize"] = pluralize +env.filters["markdown"] = markdown_to_html +env.filters["nl2br"] = nl2br +``` + +## Custom Functions + +### Global Functions + +Add functions available in all templates: + +```python +# {{cookiecutter.__package_slug}}/services/jinja.py +from {{cookiecutter.__package_slug}}.conf import settings + +def url_for(endpoint: str, **params) -> str: + """Generate URL for endpoint.""" + # URL generation logic + return f"/{endpoint}" + +def asset_url(path: str) -> str: + """Generate URL for static asset.""" + return f"/static/{path}" + +# Register global functions +env.globals["url_for"] = url_for +env.globals["asset_url"] = asset_url +env.globals["settings"] = settings # Access settings in templates +``` + +Usage in templates: + +```html +{%- raw %} +User Profile +Logo +

<p>App Name: {{ settings.project_name }}</p>
+{% endraw -%} +``` + +## Template Components + +### Including Templates + +Reuse template fragments with `include`: + +```html +{%- raw %} + +
+ +
+ + +{% include 'components/header.html' %} +
+

  <p>Page content here</p>
+
+{% endraw -%} +``` + +### Macros + +Create reusable template functions with macros: + +```html +{%- raw %} + +{% macro input(name, type="text", placeholder="", required=false) %} +
+ +
+{% endmacro %} + +{% macro button(text, type="submit", classes="btn-primary") %} + +{% endmacro %} + + +{% from 'macros/forms.html' import input, button %} + +
+ {{ input('username', placeholder='Enter username', required=true) }} + {{ input('password', type='password', placeholder='Enter password', required=true) }} + {{ button('Login') }} +
+{% endraw -%} +``` + +## Rendering Templates Outside FastAPI + +### Direct Rendering + +Render templates in other contexts (tasks, CLI, emails): + +```python +from {{cookiecutter.__package_slug}}.services.jinja import env + +def send_welcome_email(user_email: str, user_name: str): + """Send welcome email using template.""" + template = env.get_template("emails/welcome.html") + html_content = template.render( + name=user_name, + email=user_email, + year=2024, + ) + + # Send email with html_content + send_email(user_email, "Welcome!", html_content) +``` + +### Celery Tasks + +Use templates in Celery tasks: + +```python +from {{cookiecutter.__package_slug}}.celery import celery +from {{cookiecutter.__package_slug}}.services.jinja import env + +@celery.task +def generate_report(report_id: int): + """Generate HTML report.""" + template = env.get_template("reports/monthly.html") + + # Get report data + data = fetch_report_data(report_id) + + # Render template + html = template.render( + report_id=report_id, + data=data, + generated_at=datetime.now(), + ) + + # Save or send report + save_report(report_id, html) +``` + +## Autoescape + +### HTML Escaping + +By default, variables are HTML-escaped for security: + +```html +{%- raw %} +{{ user_input }} + + + +{% endraw -%} +``` + +### Marking Safe Content + +Mark trusted content as safe to bypass escaping: + +```python +from markupsafe import Markup + +@app.get("/page") +async def page(request: Request): + safe_html = Markup("Bold text") + return templates.TemplateResponse( + "page.html", + {"request": request, "content": safe_html} + ) +``` + +Or in the template: + +```html +{%- raw %} +{{ content|safe }} +{% endraw -%} +``` + +## Error Handling + +### Template Not Found + +Handle missing templates gracefully: + +```python +from jinja2 import TemplateNotFound + +@app.get("/page/{name}") +async def dynamic_page(request: Request, name: str): + try: + return templates.TemplateResponse( + f"pages/{name}.html", + {"request": request} + ) + except TemplateNotFound: + return templates.TemplateResponse( + "404.html", + {"request": request}, + status_code=404 + ) +``` + +### Debug Mode + +Enable debug mode during development: + +```python +env = Environment( + loader=PackageLoader("{{cookiecutter.__package_slug}}", "templates"), + autoescape=select_autoescape(["html", "xml"]), + auto_reload=True, # Reload templates on change (development only) +) +``` + +## Best Practices + +1. **Use Template Inheritance**: Create a base layout and extend it for consistency across pages + +2. **Separate Concerns**: Keep logic in Python code, use templates for presentation only + +3. **Escape User Input**: Never use `|safe` on user-provided content - risk of XSS attacks + +4. **Organize Templates**: Group related templates in subdirectories: + + ```text + templates/ + ├── base.html + ├── pages/ + │ ├── home.html + │ └── about.html + ├── components/ + │ ├── header.html + │ └── footer.html + └── emails/ + └── welcome.html + ``` + +5. **Use Macros for Repetitive HTML**: Create macros for common components like forms, buttons, cards + +6. **Cache Templates in Production**: Disable `auto_reload` in production for better performance + +7. 
+
+## Development vs Production
+
+### Development
+
+```python
+# Development environment with auto-reload
+env = Environment(
+    loader=PackageLoader("{{cookiecutter.__package_slug}}", "templates"),
+    autoescape=select_autoescape(["html", "xml"]),
+    auto_reload=True,  # Reload on changes
+    cache_size=0,  # Disable caching
+)
+```
+
+### Production
+
+```python
+# Production environment optimized for performance
+env = Environment(
+    loader=PackageLoader("{{cookiecutter.__package_slug}}", "templates"),
+    autoescape=select_autoescape(["html", "xml"]),
+    auto_reload=False,  # Don't reload
+    cache_size=400,  # Cache compiled templates
+    trim_blocks=True,
+    lstrip_blocks=True,
+)
+```
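+
+One way to avoid maintaining two copies of this configuration is to key the options off the project settings. The snippet below is a minimal sketch; it assumes the generated project exposes a Pydantic settings object with a `debug` flag, so adjust the import path and flag name to match your project:
+
+```python
+from jinja2 import Environment, PackageLoader, select_autoescape
+
+from {{cookiecutter.__package_slug}}.settings import settings  # assumed settings module
+
+env = Environment(
+    loader=PackageLoader("{{cookiecutter.__package_slug}}", "templates"),
+    autoescape=select_autoescape(["html", "xml"]),
+    auto_reload=settings.debug,  # Reload templates only during development
+    cache_size=0 if settings.debug else 400,  # Skip the template cache in development
+)
+```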
+
+## Testing Templates
+
+### Testing Jinja2 Environment
+
+Test that the Jinja2 environment is properly configured:
+
+```python
+# tests/services/test_jinja.py
+from jinja2 import Environment
+from fastapi.templating import Jinja2Templates
+from {{cookiecutter.__package_slug}}.services.jinja import env, response_templates
+
+
+def test_env_exists():
+    """Test that Jinja2 environment is properly instantiated."""
+    assert env is not None
+    assert isinstance(env, Environment)
+
+
+def test_env_has_loader():
+    """Test that environment has a loader configured."""
+    assert env.loader is not None
+
+
+def test_env_autoescape_enabled():
+    """Test that autoescape is enabled for security."""
+    assert env.autoescape is True or callable(env.autoescape)
+
+
+def test_response_templates_exists():
+    """Test that response_templates is properly instantiated."""
+    assert response_templates is not None
+    assert isinstance(response_templates, Jinja2Templates)
+
+
+def test_response_templates_uses_custom_env():
+    """Test that response_templates uses our custom environment."""
+    assert response_templates.env is env
+```
+
+### Testing Template Rendering
+
+Test that templates can be compiled and rendered:
+
+```python
+def test_env_can_compile_template():
+    """Test that environment can compile a simple template."""
+    template = env.from_string("{%- raw %}Hello {{ name }}!{% endraw -%}")
+    result = template.render(name="World")
+    assert result == "Hello World!"
+
+
+def test_template_rendering_with_variables():
+    """Test template rendering with multiple variables."""
+    template = env.from_string("""{%- raw %}
+    <div>
+        <h1>{{ title }}</h1>
+        <p>Welcome, {{ user }}!</p>
+    </div>
+    {% endraw -%}""")
+
+    output = template.render(title="Dashboard", user="John")
+    assert "<h1>Dashboard</h1>" in output
+    assert "Welcome, John!" in output
+```
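+
+If the environment keeps Jinja2's default `Undefined` behaviour (this sketch assumes `StrictUndefined` is not configured), variables missing from the context render as empty strings rather than raising, which is worth pinning down in a test:
+
+```python
+def test_missing_variable_renders_empty():
+    """Undefined variables render as empty strings with the default Undefined class."""
+    template = env.from_string("{%- raw %}Hello {{ name }}!{% endraw -%}")
+    result = template.render()  # "name" deliberately omitted from the context
+    assert result == "Hello !"
+```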
+
+### Testing Template Loops and Conditionals
+
+Test template control structures:
+
+```python
+def test_template_with_loop():
+    """Test template with for loop."""
+    template = env.from_string("""{%- raw %}
+    <ul>
+    {% for item in items %}
+        <li>{{ item }}</li>
+    {% endfor %}
+    </ul>
+    {% endraw -%}""")
+
+    output = template.render(items=["Apple", "Banana", "Cherry"])
+    assert "<li>Apple</li>" in output
+    assert "<li>Banana</li>" in output
+    assert "<li>Cherry</li>" in output
+
+
+def test_template_with_conditional():
+    """Test template with if statement."""
+    template = env.from_string("""{%- raw %}
+    {% if user.is_admin %}
+        <div>
+            Admin Panel
+        </div>
+    {% else %}
+        <div>
+            User Panel
+        </div>
+    {% endif %}
+    {% endraw -%}""")
+
+    # Test admin path
+    output = template.render(user={"is_admin": True})
+    assert "Admin Panel" in output
+
+    # Test user path
+    output = template.render(user={"is_admin": False})
+    assert "User Panel" in output
+```
+
+### Testing Template Filters
+
+Test that filters work correctly:
+
+```python
+def test_template_upper_filter():
+    """Test upper filter."""
+    template = env.from_string("{%- raw %}{{ text|upper }}{% endraw -%}")
+    result = template.render(text="hello")
+    assert result == "HELLO"
+
+
+def test_template_default_filter():
+    """Test default filter."""
+    template = env.from_string("{%- raw %}{{ value|default('N/A') }}{% endraw -%}")
+
+    # With value
+    result = template.render(value="Something")
+    assert result == "Something"
+
+    # Without value (left out of the context so the default applies)
+    result = template.render()
+    assert result == "N/A"
+
+
+def test_template_length_filter():
+    """Test length filter."""
+    template = env.from_string("{%- raw %}{{ items|length }}{% endraw -%}")
+    result = template.render(items=[1, 2, 3, 4, 5])
+    assert result == "5"
+```
+
+### Testing Custom Filters
+
+Test custom filters added to the environment:
+
+```python
+def test_custom_filter_registered():
+    """Test that custom filters are registered."""
+    # If you added a custom 'currency' filter
+    assert "currency" in env.filters
+
+
+def test_custom_currency_filter():
+    """Test custom currency filter."""
+    # Add the filter first (or ensure it's in services/jinja.py)
+    def currency(value):
+        return f"${value:,.2f}"
+
+    env.filters["currency"] = currency
+
+    template = env.from_string("{%- raw %}{{ amount|currency }}{% endraw -%}")
+    result = template.render(amount=1234.56)
+    assert result == "$1,234.56"
+```
+
+### Testing Template Loading
+
+Test that templates can be loaded from files:
+
+```python
+def test_template_loader_can_find_templates():
+    """Test that loader can locate templates."""
+    # Assumes you have a test template in templates/
+    template_source = env.loader.get_source(env, "base.html")
+    assert template_source is not None
+
+
+def test_can_load_template_from_file():
+    """Test loading a template file."""
+    # This will raise TemplateNotFound if template doesn't exist
+    template = env.get_template("base.html")
+    assert template is not None
+```
+
+### Testing Template Inheritance
+
+Test that template inheritance works correctly:
+
+```python
+def test_template_inheritance():
+    """Test template extends and blocks."""
+    # Create base template
+    base = env.from_string("""{%- raw %}
+    {% block title %}Default Title{% endblock %}
+    {% block content %}Default Content{% endblock %}
+    {% endraw -%}""")
+
+    # Create child template
+    child = env.from_string("""{%- raw %}
+    {% extends base %}
+    {% block title %}Custom Title{% endblock %}
+    {% block content %}Custom Content{% endblock %}
+    {% endraw -%}""")
+
+    output = child.render(base=base)
+    assert "Custom Title" in output
+    assert "Custom Content" in output
+```
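+
+The failure path from the Error Handling section above can be tested too. This sketch assumes the template name used below does not exist in `templates/`:
+
+```python
+import pytest
+from jinja2 import TemplateNotFound
+
+
+def test_missing_template_raises_template_not_found():
+    """Requesting a template that does not exist raises TemplateNotFound."""
+    with pytest.raises(TemplateNotFound):
+        env.get_template("this-template-does-not-exist.html")
+```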
+
+### Testing Template Security (Autoescape)
+
+Test that HTML is properly escaped:
+
+```python
+def test_autoescape_prevents_xss():
+    """Test that HTML is escaped by default."""
+    template = env.from_string("{%- raw %}<div>{{ user_input }}</div>{% endraw -%}")
+
+    # Potentially malicious input
+    result = template.render(user_input="<script>alert('XSS')</script>")
+
+    # Should be escaped
+    assert "&lt;script&gt;" in result
+    assert "<script>" not in result
+    # Should escape < and >
+    assert "&lt;" in result or "