
Conversation

@a5chin
Owner

@a5chin a5chin commented Nov 10, 2025

PR Type

Documentation, Enhancement


Description

  • Introduce a comprehensive CLAUDE.md guide for AI interaction.

  • Revamp README.md with detailed sections and quick start guides.

  • Expand all core documentation guides for tools, configuration, and use cases.

  • Update GitHub Actions to deploy docs on any .md file change.


Diagram Walkthrough

```mermaid
flowchart LR
  A[Old Docs] --> B{Documentation Overhaul};
  B --> C[New CLAUDE.md];
  B --> D[Revamped README.md];
  B --> E[Expanded Guides & Configs];
  E --> E1[Getting Started];
  E --> E2[Development Guides];
  E --> E3[Configuration Reference];
  E --> E4[Built-in Utilities];
  E --> E5[Use Cases];
  B --> F[Updated gh-deploy.yml Trigger];
  F -- "Monitors" --> E;
  F -- "Monitors" --> D;
  F -- "Monitors" --> C;
```

File Walkthrough

**Relevant files**

**Configuration changes** (1 file)

| File | Description | Changes |
| --- | --- | --- |
| gh-deploy.yml | Update documentation deployment trigger to all Markdown files | +1/-1 |

**Documentation** (11 files)

| File | Description | Changes |
| --- | --- | --- |
| CLAUDE.md | Add new guide for Claude Code AI interaction | +267/-0 |
| README.md | Completely overhaul README with new structure, features, and quick start | +395/-191 |
| index.md | Expand configuration reference with detailed guides and best practices | +288/-10 |
| index.md | Enhance getting started guide with setup options and troubleshooting | +162/-4 |
| index.md | Restructure development guides with overviews and quick references | +241/-14 |
| config.md | Rewrite configuration guide with advanced usage and best practices | +363/-27 |
| index.md | Expand built-in utilities overview with architecture and use cases | +221/-4 |
| logger.md | Rewrite logger guide with detailed usage, formatters, and best practices | +288/-11 |
| tracer.md | Rewrite tracer guide with real-world examples and advanced usage | +370/-20 |
| index.md | Revamp main documentation landing page with overview and quick navigation | +144/-60 |
| index.md | Expand use cases with featured examples and best practices | +339/-4 |

@a5chin a5chin self-assigned this Nov 10, 2025
@a5chin
Owner Author

a5chin commented Nov 10, 2025

/review

@github-actions
Contributor

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

<details><summary><strong>Documentation Review</strong>

The README.md has been completely overhauled with extensive new content. A thorough review is needed to ensure all sections are accurate, clear, and consistent with the project's current state and best practices. Pay close attention to the Quick Start, Development Workflow, and Built-in Utility Modules sections, verifying command examples and descriptions.
</summary>

A production-ready Python development environment template using modern tools: **uv** for blazing-fast package management, **Ruff** for lightning-fast linting and formatting, and **VSCode Dev Containers** for reproducible development environments.

<div align="center">
<img src="docs/img/ruff.gif" width="49%"> <img src="docs/img/jupyter.gif" width="49%">
</div>

---

## 📋 Table of Contents

- [Python Development with uv and Ruff](#python-development-with-uv-and-ruff)
  - [📋 Table of Contents](#-table-of-contents)
  - [✨ Features](#-features)
  - [🚀 Quick Start](#-quick-start)
    - [Using Dev Container (Recommended)](#using-dev-container-recommended)
    - [Using Docker Only](#using-docker-only)
    - [Local Setup (Without Docker)](#local-setup-without-docker)
  - [📚 Development Workflow](#-development-workflow)
    - [Installing Dependencies](#installing-dependencies)
    - [Running Tasks](#running-tasks)
    - [Pre-commit Hooks](#pre-commit-hooks)
    - [Documentation](#documentation)
  - [🏗️ Project Structure](#️-project-structure)
    - [Built-in Utility Modules](#built-in-utility-modules)
      - [**Logger** - Dual-mode logging system](#logger---dual-mode-logging-system)
      - [**Configuration** - Environment-based settings](#configuration---environment-based-settings)
      - [**Timer** - Performance monitoring](#timer---performance-monitoring)
  - [⚙️ Configuration](#️-configuration)
    - [Ruff Configuration](#ruff-configuration)
    - [Pyright Configuration](#pyright-configuration)
    - [Pytest Configuration](#pytest-configuration)
  - [🔄 CI/CD](#-cicd)
  - [🎨 VSCode Configuration](#-vscode-configuration)
  - [🍪 Cookiecutter Templates](#-cookiecutter-templates)
  - [📖 Documentation](#-documentation)
  - [🌿 Branches](#-branches)
  - [📄 License](#-license)
  - [🙏 Acknowledgments](#-acknowledgments)

---

## ✨ Features

- 🚀 **Ultra-fast package management** with [uv](https://github.com/astral-sh/uv) (10-100x faster than pip)
- ⚡ **Lightning-fast linting & formatting** with [Ruff](https://github.com/astral-sh/ruff) (replacing Black, isort, Flake8, and more)
- 🐳 **Dev Container ready** - Consistent development environment across all machines
- 🔍 **Type checking** with Pyright
- 🧪 **Pre-configured testing** with pytest (75% coverage requirement)
- 🔄 **Automated CI/CD** with GitHub Actions
- 📦 **Reusable utilities** - Logger, configuration management, and performance tracing tools
- 🎯 **Task automation** with nox
- 🪝 **Pre-commit hooks** for automatic code quality checks

## 🚀 Quick Start

### Using Dev Container (Recommended)

1. **Prerequisites**: Install [Docker](https://www.docker.com/) and [VSCode](https://code.visualstudio.com/) with the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)

2. **Open in container**:
   ```bash
   git clone https://github.com/a5chin/python-uv.git
   cd python-uv
   code .
   ```

   When prompted, click "Reopen in Container".

3. **Start developing**:

   ```bash
   # Install dependencies
   uv sync

   # Run tests
   uv run nox -s test

   # Format and lint
   uv run nox -s fmt
   uv run nox -s lint -- --pyright --ruff
   ```

### Using Docker Only

```bash
# Build the image
docker build -t python-uv .

# Run container
docker run -it --rm -v $(pwd):/workspace python-uv
```

### Local Setup (Without Docker)

Prerequisites: Python 3.10+ and uv

```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone and setup
git clone https://github.com/a5chin/python-uv.git
cd python-uv

# Install dependencies
uv sync

# Install pre-commit hooks (optional)
uv run pre-commit install
```

## 📚 Development Workflow

### Installing Dependencies

```bash
# Install all dependencies (including dev dependencies)
uv sync

# Install without dev dependencies
uv sync --no-dev

# Add new dependencies
uv add requests pandas

# Add dev dependencies
uv add --dev pytest-mock
```

### Running Tasks

This project uses nox for task automation. All common development tasks are available as nox sessions:

```bash
# Format code with Ruff
uv run nox -s fmt

# Run linters (Pyright + Ruff)
uv run nox -s lint -- --pyright --ruff

# Run only Pyright
uv run nox -s lint -- --pyright

# Run only Ruff linter
uv run nox -s lint -- --ruff

# Run tests with coverage (75% minimum required)
uv run nox -s test

# Run tests with JUnit XML output (for CI)
uv run nox -s test -- --junitxml=results.xml
```

You can also run tools directly:

```bash
# Run pytest directly
uv run pytest

# Run specific test file
uv run pytest tests/tools/test__logger.py

# Format with Ruff
uv run ruff format .

# Lint with Ruff
uv run ruff check . --fix

# Type check with Pyright
uv run pyright
```

### Pre-commit Hooks

Pre-commit hooks automatically run code quality checks before each commit:

```bash
# Install hooks
uv run pre-commit install

# Run manually on all files
uv run pre-commit run --all-files
```

Configured hooks:

  • Ruff formatting and linting
  • JSON, YAML, TOML validation
  • Trailing whitespace removal
  • End-of-file fixer
  • Private key detection
  • Dockerfile linting with hadolint

### Documentation

Generate and serve documentation with MkDocs:

```bash
# Serve locally at http://127.0.0.1:8000
uv run mkdocs serve

# Build static site
uv run mkdocs build

# Deploy to GitHub Pages
uv run mkdocs gh-deploy
```

## 🏗️ Project Structure

```text
.
├── tools/                    # Reusable utility modules
│   ├── config/              # Configuration management (Settings, FastAPI config)
│   ├── logger/              # Logging utilities (Local & Google Cloud formatters)
│   └── tracer/              # Performance tracing (Timer decorator/context manager)
├── tests/                   # Test suite (mirrors tools/ structure)
│   └── tools/              # Unit tests for utility modules
├── docs/                    # MkDocs documentation
│   ├── getting-started/    # Setup guides
│   ├── guides/             # Tool usage guides
│   ├── configurations/     # Configuration references
│   └── usecases/           # Real-world examples
├── .devcontainer/           # Dev Container configuration
├── .github/                 # GitHub Actions workflows and reusable actions
├── noxfile.py              # Task automation configuration (test, lint, fmt)
├── pyproject.toml          # Project metadata and dependencies (uv)
├── ruff.toml               # Ruff linter/formatter configuration
├── pyrightconfig.json      # Pyright type checking configuration
└── pytest.ini              # Pytest configuration (75% coverage requirement)
```

### Built-in Utility Modules

The tools/ package provides production-ready utilities that can be used in your projects:

#### **Logger** - Dual-mode logging system

Environment-aware logging with support for local development and cloud environments:

```python
from tools.logger import Logger, LogType

# Local development (colored console output)
logger = Logger(__name__, log_type=LogType.LOCAL)

# Google Cloud (structured JSON logging)
logger = Logger(__name__, log_type=LogType.GOOGLE_CLOUD, project="my-project")

logger.info("Application started")
```

#### **Configuration** - Environment-based settings

Type-safe configuration management using Pydantic:

```python
from tools.config import Settings

settings = Settings()  # Loads from .env and .env.local
api_url = settings.api_prefix_v1
is_debug = settings.DEBUG
```

#### **Timer** - Performance monitoring

Automatic execution time logging for functions and code blocks:

```python
from tools.tracer import Timer

# As context manager
with Timer("database_query"):
    result = db.query()  # Logs execution time automatically

# As decorator
@Timer("process_data")
def process_data(data):
    return transform(data)  # Logs execution time when function completes
```

## ⚙️ Configuration

### Ruff Configuration

Ruff replaces multiple tools (Black, isort, Flake8, pydocstyle, pyupgrade, autoflake) with a single, fast tool.

Key settings in ruff.toml:

  • Line length: 88 (Black-compatible)
  • Target Python: 3.14
  • Rules: ALL enabled by default with specific exclusions
  • Test files: Exempt from INP001 (namespace packages) and S101 (assert usage)

See Ruff documentation for customization options.

### Pyright Configuration

Static type checking for Python code.

Key settings in pyrightconfig.json:

  • Python version: 3.14
  • Type checking mode: Standard
  • Include: tools/ package only
  • Virtual environment: .venv

See Pyright documentation for advanced configuration.

### Pytest Configuration

Testing framework with coverage enforcement.

Key settings in pytest.ini:

  • Coverage requirement: 75% minimum (including branch coverage)
  • Test file pattern: test__*.py (double underscore)
  • Coverage reports: HTML and terminal
  • Import mode: importlib

See pytest documentation for additional options.
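
As a minimal sketch of this convention, a test module placed under tests/ might look like the following (the greet helper and its path are hypothetical, defined inline purely to illustrate the test__*.py naming and plain assert style):

```python
# tests/example/test__greet.py
# Hypothetical example: "greet" is a stand-in defined inline; only the file
# naming and assert style follow this template's conventions.


def greet(name: str) -> str:
    """Stand-in for the function under test."""
    return f"Hello, {name}!"


def test__greet_returns_expected_string() -> None:
    # Collected by pytest because the file matches the test__*.py pattern
    assert greet("uv") == "Hello, uv!"
```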

## 🔄 CI/CD

Automated workflows ensure code quality and consistency. All workflows run on push and pull requests.

Available workflows in .github/workflows/:

| Workflow | Purpose | Tools Used |
| --- | --- | --- |
| docker.yml | Validate Docker build | Docker |
| devcontainer.yml | Validate Dev Container configuration | devcontainer CLI |
| format.yml | Check code formatting | Ruff |
| lint.yml | Run static analysis | Pyright, Ruff |
| test.yml | Run test suite with coverage | pytest, coverage |
| gh-deploy.yml | Deploy documentation to GitHub Pages | MkDocs |
| pr-agent.yml | Automated PR reviews | Qodo AI PR Agent |
| publish-devcontainer.yml | Publish Dev Container image | Docker, GHCR |

## 🎨 VSCode Configuration

The Dev Container includes pre-configured extensions and settings for optimal Python development.

Python Development:

  • Ruff - Fast linting and formatting
  • Pyright - Static type checking
  • Python - Core Python support
  • autodocstring - Automatic docstring generation
  • python-indent - Correct Python indentation

Code Quality:

  • GitLens - Enhanced Git integration
  • Error Lens - Inline error highlighting
  • indent-rainbow - Visual indentation guide
  • trailing-spaces - Highlight trailing whitespace

File Support:

  • YAML, TOML, Markdown - Configuration file support
  • Docker - Dockerfile and docker-compose support
  • Material Icon Theme - File icons

Editor Settings:

  • ✅ Format on save (Python, JSON, YAML, TOML, Dockerfile)
  • ✅ Auto-trim trailing whitespace
  • ✅ Auto-insert final newline
  • ✅ Organize imports on save

Troubleshooting: If Ruff formatting doesn't work, reload the window: Cmd+Shift+P → "Developer: Reload Window"

## 🍪 Cookiecutter Templates

This repository can be used as a base template for various Python projects. Combine it with Cookiecutter to bootstrap project-specific setups:

```bash
# Install cookiecutter
uv add --dev cookiecutter

# Use a template
uv run cookiecutter <template-url>
```

Recommended templates:

## 📖 Documentation

Comprehensive documentation is available at https://a5chin.github.io/python-uv

Topics covered:

  • 🚀 Getting Started - Docker, VSCode, Dev Containers setup
  • ⚙️ Tool Configurations - uv, Ruff, Pyright, pre-commit
  • 🧪 Testing Strategies - pytest, coverage, and best practices
  • 🛠️ Utility Modules - Config, logger, and tracer guides
  • 💡 Use Cases - Jupyter, FastAPI, OpenCV examples

## 🌿 Branches

This repository maintains multiple branches for different use cases:

  • main - Current production-ready template (recommended)
  • jupyter - Archived: Jupyter-specific configuration
  • rye - Archived: Rye package manager version (replaced by uv)

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🙏 Acknowledgments

This template is built on top of excellent open-source tools:

  • uv by Astral - Ultra-fast Python package manager
  • Ruff by Astral - Lightning-fast linter and formatter
  • Pyright by Microsoft - Static type checker for Python
  • nox - Flexible task automation for Python
  • pytest - Testing framework for Python
  • MkDocs - Documentation site generator

Special thanks to the open-source community for making these tools available!


</details>

<details><summary><a href='https://github.com/a5chin/python-uv/pull/129/files#diff-6ebdb617a8104a7756d0cf36578ab01103dc9f07e4dc6feb751296b9c402faf7R1-R267'><strong>AI Guidance Accuracy</strong></a>

The new CLAUDE.md file is a valuable addition for AI-assisted development. Review its content to ensure it accurately and concisely summarizes the project's overview, development commands, architecture, and key patterns. Verify that the "Important Notes" section correctly highlights critical conventions and tool usage.
</summary>

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

This is a Python development environment template using **uv** (fast Python package manager) and **Ruff** (linter/formatter). The repository serves dual purposes:
1. A template for starting new Python projects
2. A reusable `tools/` package with production-ready utilities (Logger, Config, Timer)

## Development Commands

### Package Management
```bash
# Install dependencies
uv sync

# Add new dependency
uv add <package>

# Add dev dependency
uv add --dev <package>

# Remove dependency
uv remove <package>
```

### Testing

```bash
# Run all tests with coverage (75% minimum required)
uv run nox -s test

# Run specific test file
uv run pytest tests/tools/test__logger.py

# Run with JUnit XML output for CI
uv run nox -s test -- --junitxml=results.xml

# Run pytest directly (bypasses nox)
uv run pytest
```

### Linting & Formatting

```bash
# Format code
uv run nox -s fmt

# Lint with both Pyright and Ruff
uv run nox -s lint -- --pyright --ruff

# Lint with Pyright only
uv run nox -s lint -- --pyright

# Lint with Ruff only
uv run nox -s lint -- --ruff

# Run Ruff directly
uv run ruff check . --fix
uv run ruff format .

# Run Pyright directly
uv run pyright
```

### Pre-commit Hooks

```bash
# Install hooks
uv run pre-commit install

# Run all hooks manually
uv run pre-commit run --all-files

# Run specific hook
uv run pre-commit run ruff-format
```

### Documentation

```bash
# Serve docs locally at http://127.0.0.1:8000
uv run mkdocs serve

# Build documentation
uv run mkdocs build

# Deploy to GitHub Pages
uv run mkdocs gh-deploy
```

## Architecture

### Core Modules

The tools/ package provides three main utility modules:

#### `tools/logger/` - Dual-Mode Logging System

  • Logger class extends logging.Logger with environment-aware formatting
  • LogType.LOCAL: Colored console output via LocalFormatter for development
  • LogType.GOOGLE_CLOUD: Structured JSON via GoogleCloudFormatter for production
  • Key pattern: Use Settings.IS_LOCAL to switch between modes automatically

```python
from tools.config import Settings
from tools.logger import Logger, LogType

settings = Settings()
logger = Logger(
    __name__,
    log_type=LogType.LOCAL if settings.IS_LOCAL else LogType.GOOGLE_CLOUD
)
```

#### `tools/config/` - Environment-Based Configuration

  • Settings class uses Pydantic for type-safe configuration
  • Loads from .env (version controlled) and .env.local (local overrides, in .gitignore)
  • FastAPIKwArgs provides ready-to-use FastAPI initialization parameters
  • Pattern: Extend Settings to add project-specific configuration fields

```python
from tools.config import Settings

settings = Settings()
api_url = settings.api_prefix_v1  # Loaded from environment
```
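
A hedged sketch of how FastAPIKwArgs might be wired in, assuming it is a Pydantic model whose fields mirror FastAPI's constructor arguments and can therefore be unpacked with model_dump() (the import path, field names, and unpacking style are assumptions, not confirmed by the module):

```python
from fastapi import FastAPI

from tools.config import FastAPIKwArgs  # assumed import path

# Assumption: FastAPIKwArgs loads title/version/prefix-style settings from the
# environment, so dumping it yields keyword arguments accepted by FastAPI().
kwargs = FastAPIKwArgs()
app = FastAPI(**kwargs.model_dump())
```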

#### `tools/tracer/` - Performance Monitoring

  • Timer class works as both decorator and context manager
  • Automatically logs execution time in milliseconds at DEBUG level
  • Uses the Logger module for output (inherits logging configuration)
  • Pattern: Nest timers to measure both overall and component performance

```python
from tools.tracer import Timer

@Timer("full_operation")
def process():
    with Timer("step1"):
        do_step1()
    with Timer("step2"):
        do_step2()
```

### Test Structure

Tests in tests/tools/ mirror the package structure:

  • Naming convention: test__*.py (double underscore)
  • Coverage requirement: 75% minimum (including branch coverage)
  • Test files exempt from: INP001 (namespace packages), S101 (assert usage)

### Configuration Philosophy

Ruff (ruff.toml):

  • ALL rules enabled by default with specific exclusions
  • Line length: 88 (Black-compatible)
  • Target Python: 3.14
  • Per-file ignores for test files

Pyright (pyrightconfig.json):

  • Type checking mode: standard
  • Only includes tools/ package (not tests)
  • venv: .venv

pytest (pytest.ini):

  • Coverage: 75% minimum with branch coverage
  • Reports: HTML + terminal
  • Import mode: importlib

### Nox Task Automation

The noxfile.py uses a custom CLIArgs parser (Pydantic-based):

  • All sessions use python=False (rely on uv run)
  • Arguments passed via -- --flag value syntax
  • Sessions: fmt, lint, test

Example of the argument parsing pattern:

```python
# noxfile.py
import nox


# CLIArgs is the Pydantic-based CLI argument parser defined in noxfile.py
@nox.session(python=False)
def lint(session: nox.Session) -> None:
    args = CLIArgs.parse(session.posargs)
    if args.pyright:
        session.run("uv", "run", "pyright")
    if args.ruff:
        session.run("uv", "run", "ruff", "check", ".", "--fix")
```

## Key Patterns for Development

### Adding New Configuration Fields

Extend the Settings class in tools/config/settings.py:

```python
class Settings(BaseSettings):
    # Existing fields...

    # Add your new fields
    NEW_SETTING: str = "default_value"
    ANOTHER_SETTING: int = 42
```

Then add to .env.local:

```
NEW_SETTING=custom_value
ANOTHER_SETTING=100
```

### Adding New Logger Formatters

Create a new formatter in tools/logger/:

  1. Extend logging.Formatter
  2. Export from tools/logger/__init__.py
  3. Update Logger.__init__() to support the new type
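
A minimal sketch of step 1, using a hypothetical JSONLinesFormatter (the class name and payload fields are illustrative; the export and Logger.__init__() wiring from steps 2 and 3 are omitted):

```python
import json
import logging


class JSONLinesFormatter(logging.Formatter):
    """Hypothetical formatter that emits one JSON object per log record."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "severity": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "timestamp": self.formatTime(record),
        }
        return json.dumps(payload)
```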

### Testing Utilities

When testing the utilities themselves:

  • Logger: Capture logs using assertLogs context manager
  • Config: Use Pydantic's model instantiation with kwargs to override values
  • Timer: Check debug logs for execution time messages
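
A minimal sketch of these patterns using the standard-library assertLogs helper (the test class is illustrative; that Timer records reach the root logger and mention the timer name at DEBUG level is an assumption about its output):

```python
import unittest

from tools.config import Settings
from tools.logger import Logger, LogType
from tools.tracer import Timer


class TestToolsSketch(unittest.TestCase):
    def test__logger_emits_info(self) -> None:
        logger = Logger(__name__, log_type=LogType.LOCAL)
        with self.assertLogs(logger, level="INFO") as captured:
            logger.info("hello")
        self.assertIn("hello", captured.output[0])

    def test__settings_kwargs_override(self) -> None:
        # Keyword arguments override values loaded from .env / .env.local
        settings = Settings(DEBUG=True)
        self.assertTrue(settings.DEBUG)

    def test__timer_logs_execution_time(self) -> None:
        # Assumption: the Timer logs at DEBUG level and includes its name
        with self.assertLogs(level="DEBUG") as captured:
            with Timer("unit_of_work"):
                pass
        self.assertTrue(any("unit_of_work" in line for line in captured.output))
```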

## Documentation Structure

The docs/ directory is organized for MkDocs:

  • docs/index.md: Main landing page
  • docs/getting-started/: Setup guides (Docker, VSCode, Dev Container)
  • docs/guides/: Tool usage guides (uv, Ruff, Pyright, pre-commit, tools package)
  • docs/configurations/: Detailed configuration references
  • docs/usecases/: Real-world examples (Jupyter, FastAPI, OpenCV)

When adding new utilities to tools/, add corresponding documentation to docs/guides/tools/.

## CI/CD Workflows

GitHub Actions workflows in .github/workflows/:

  • docker.yml: Validate Docker build
  • devcontainer.yml: Validate Dev Container configuration
  • format.yml: Check Ruff formatting
  • lint.yml: Run Pyright + Ruff linting
  • test.yml: Run pytest with coverage
  • gh-deploy.yml: Deploy documentation to GitHub Pages

All workflows use the same nox commands as local development.

## Environment Variables

Critical environment variables (set in .env.local):

  • IS_LOCAL: Boolean flag for local vs production (affects logging, configuration)
  • DEBUG: Boolean for debug mode
  • FastAPI settings: TITLE, VERSION, API_PREFIX_V1, etc.

## Important Notes

  • Coverage is enforced: Tests must maintain 75% coverage (configured in pytest.ini)
  • uv replaces pip/poetry: Use uv add not pip install, use uv.lock not requirements.txt
  • Ruff replaces multiple tools: No need for Black, isort, Flake8, etc.
  • nox is the task runner: Prefer uv run nox -s <session> over direct tool calls
  • Test naming: Use test__*.py pattern (double underscore)
  • Type checking targets tools/ only: Pyright only checks the tools/ package, not tests

## Template Usage Pattern

When using this as a template for a new project:

  1. Update pyproject.toml with new project name/description
  2. Modify or extend tools/config/settings.py for project-specific configuration
  3. Use the utilities from tools/ or remove if not needed
  4. Update .env with base configuration, .env.local with local overrides
  5. Customize Ruff rules in ruff.toml if needed (but start with defaults)

</details>

<details><summary><a href='https://github.com/a5chin/python-uv/pull/129/files#diff-9047284510742c6ff28141cae32fef6bcfe3640a76dd84aa0be4182bc1931cf7R22-R294'><strong>Code Example Verification</strong></a>

The new documentation for the Tracer module includes numerous code examples. It's important to verify that all code snippets are syntactically correct, runnable, and accurately demonstrate the described functionality, especially in the "Real-World Examples" and "Nested Timers" sections.
</summary>

```python
import time
from tools.tracer import Timer

with Timer("database_query"):
    time.sleep(1)  # Simulate database query
```

Output:

```text
2038-01-19 03:14:07,000 | DEBUG | database_query:__exit__:50 - executed in 1000.000000 ms
```

### As a Decorator

Use @Timer() to measure function execution:

```python
import time
from tools.tracer import Timer

@Timer("process_data")
def process_data(data):
    time.sleep(1)  # Simulate processing
    return data

result = process_data([1, 2, 3])
```

Output:

```text
2038-01-19 03:14:07,000 | DEBUG | process_data:__exit__:50 - executed in 1000.000000 ms
```

## Real-World Examples

### API Endpoint Monitoring

Monitor API endpoint performance:

```python
from fastapi import FastAPI
from tools.tracer import Timer

app = FastAPI()

@app.get("/users/{user_id}")
@Timer("get_user_endpoint")
async def get_user(user_id: int):
    with Timer("database_lookup"):
        user = await db.get_user(user_id)

    with Timer("user_serialization"):
        return user.dict()
```

Output:

```text
2038-01-19 03:14:07,000 | DEBUG | database_lookup:__exit__:50 - executed in 45.123000 ms
2038-01-19 03:14:07,100 | DEBUG | user_serialization:__exit__:50 - executed in 2.456000 ms
2038-01-19 03:14:07,150 | DEBUG | get_user_endpoint:__exit__:50 - executed in 50.789000 ms
```

### Data Processing Pipeline

Monitor each stage of a data pipeline:

```python
from tools.tracer import Timer
from tools.logger import Logger

logger = Logger(__name__)

@Timer("full_pipeline")
def process_dataset(data):
    logger.info(f"Processing {len(data)} records")

    with Timer("data_validation"):
        validated = validate_data(data)

    with Timer("data_transformation"):
        transformed = transform_data(validated)

    with Timer("data_enrichment"):
        enriched = enrich_data(transformed)

    with Timer("data_storage"):
        save_data(enriched)

    logger.info("Pipeline complete")
    return enriched
```

### Database Operations

Monitor individual database operations:

```python
from sqlalchemy.orm import Session
from tools.tracer import Timer

class UserRepository:
    def __init__(self, db: Session):
        self.db = db

    @Timer("user_create")
    def create_user(self, user_data: dict):
        user = User(**user_data)
        self.db.add(user)
        self.db.commit()
        return user

    @Timer("user_bulk_import")
    def import_users(self, users_data: list[dict]):
        with Timer("user_validation"):
            validated = [validate(u) for u in users_data]

        with Timer("user_db_insert"):
            self.db.bulk_insert_mappings(User, validated)
            self.db.commit()
```

### File Processing

Track file I/O operations:

```python
import json
from tools.tracer import Timer

@Timer("process_json_file")
def process_json_file(filepath: str):
    with Timer("file_read"):
        with open(filepath, 'r') as f:
            data = json.load(f)

    with Timer("data_processing"):
        processed = transform(data)

    with Timer("file_write"):
        with open(f"{filepath}.processed", 'w') as f:
            json.dump(processed, f)

    return processed
```

## Nested Timers

You can nest timers to measure both overall and component timings:

```python
from tools.tracer import Timer

@Timer("complete_analysis")
def analyze_data(dataset):
    with Timer("load_models"):
        model_a = load_model_a()
        model_b = load_model_b()

    with Timer("run_models"):
        with Timer("model_a_inference"):
            results_a = model_a.predict(dataset)

        with Timer("model_b_inference"):
            results_b = model_b.predict(dataset)

    with Timer("combine_results"):
        final = combine(results_a, results_b)

    return final
```

Output:

```text
2038-01-19 03:14:07,000 | DEBUG | load_models:__exit__:50 - executed in 500.000000 ms
2038-01-19 03:14:07,550 | DEBUG | model_a_inference:__exit__:50 - executed in 120.000000 ms
2038-01-19 03:14:07,700 | DEBUG | model_b_inference:__exit__:50 - executed in 130.000000 ms
2038-01-19 03:14:07,850 | DEBUG | run_models:__exit__:50 - executed in 250.000000 ms
2038-01-19 03:14:07,900 | DEBUG | combine_results:__exit__:50 - executed in 50.000000 ms
2038-01-19 03:14:07,950 | DEBUG | complete_analysis:__exit__:50 - executed in 800.000000 ms
```

## Integration with Logging

The Timer automatically uses the Logger module. You can combine them for comprehensive monitoring:

```python
from tools.logger import Logger
from tools.tracer import Timer

logger = Logger(__name__)

@Timer("expensive_operation")
def expensive_operation(items: list):
    logger.info(f"Starting operation with {len(items)} items")

    with Timer("preprocessing"):
        preprocessed = preprocess(items)
        logger.debug(f"Preprocessed {len(preprocessed)} items")

    with Timer("main_processing"):
        results = process(preprocessed)
        logger.debug(f"Processed into {len(results)} results")

    logger.info("Operation complete")
    return results
```

## Best Practices

### 1. Meaningful Timer Names

Use descriptive names that clearly indicate what's being measured:

```python
# Good
with Timer("database_user_query"):
    user = db.query(User).filter_by(id=user_id).first()

# Less useful
with Timer("query"):
    user = db.query(User).filter_by(id=user_id).first()
```

### 2. Measure at Appropriate Granularity

Don't time trivial operations - focus on operations that matter:

```python
# Good - measures significant operations
@Timer("process_large_dataset")
def process_dataset(data):
    return [transform(item) for item in data]

# Too granular - overhead not worth it
for item in data:
    with Timer("process_single_item"):  # Too fine-grained
        process(item)
```

### 3. Use Consistent Naming

Establish naming conventions for different operation types:

```python
# Database operations
with Timer("db_query_users"):
    users = db.query(User).all()

# API calls
with Timer("api_call_external_service"):
    response = requests.get(url)

# File operations
with Timer("file_read_config"):
    config = load_config()
```

### 4. Combine with Monitoring

Use Timer data for performance monitoring and optimization:

```python
from tools.tracer import Timer
from tools.logger import Logger

logger = Logger(__name__)

SLOW_QUERY_THRESHOLD_MS = 1000

@Timer("database_query")
def query_users(filters):
    start = time.time()
```

</details>

@a5chin
Owner Author

a5chin commented Nov 10, 2025

/improve

a5chin and others added 3 commits November 10, 2025 16:10
@a5chin a5chin merged commit ff9efd4 into main Nov 11, 2025
2 checks passed
@a5chin a5chin deleted the fix/docs branch November 11, 2025 01:49
