# Update Docs #129
## 🎨 VSCode Configuration

The Dev Container includes pre-configured extensions and settings for optimal Python development.

- Python Development:
- Code Quality:
- File Support:
- Editor Settings:
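The bullet content above is not preserved in this extract. Purely for illustration, a Dev Container's VSCode block often looks something like the sketch below; the extension IDs and settings shown are common choices, not necessarily this repository's actual configuration:

```json
{
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python",
        "charliermarsh.ruff"
      ],
      "settings": {
        "editor.formatOnSave": true,
        "[python]": {
          "editor.defaultFormatter": "charliermarsh.ruff"
        }
      }
    }
  }
}
```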
## 🍪 Cookiecutter Templates

This repository can be used as a base template for various Python projects. Combine it with Cookiecutter to bootstrap project-specific setups:

```bash
# Install cookiecutter
uv add --dev cookiecutter

# Use a template
uv run cookiecutter <template-url>
```

Recommended templates:
## 📖 Documentation

Comprehensive documentation is available at https://a5chin.github.io/python-uv

Topics covered:
## 🌿 Branches

This repository maintains multiple branches for different use cases:
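The branch names themselves are not listed in this extract. As a generic sketch, starting from a specific template branch follows the usual git pattern (`<branch-name>` is a placeholder, in the same style as `<template-url>` above):

```bash
# Clone only the branch you want to start from
git clone -b <branch-name> https://github.com/a5chin/python-uv.git my-project
```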
## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🙏 Acknowledgments

This template is built on top of excellent open-source tools:

Special thanks to the open-source community for making these tools available!

## Testing

```bash
# Run all tests with coverage (75% minimum required)
uv run nox -s test

# Run specific test file
uv run pytest tests/tools/test__logger.py

# Run with JUnit XML output for CI
uv run nox -s test -- --junitxml=results.xml

# Run pytest directly (bypasses nox)
uv run pytest
```

## Linting & Formatting

```bash
# Format code
uv run nox -s fmt

# Lint with both Pyright and Ruff
uv run nox -s lint -- --pyright --ruff

# Lint with Pyright only
uv run nox -s lint -- --pyright

# Lint with Ruff only
uv run nox -s lint -- --ruff

# Run Ruff directly
uv run ruff check . --fix
uv run ruff format .

# Run Pyright directly
uv run pyright
```

## Pre-commit Hooks

```bash
# Install hooks
uv run pre-commit install

# Run all hooks manually
uv run pre-commit run --all-files

# Run specific hook
uv run pre-commit run ruff-format
```

## Documentation

```bash
# Serve docs locally at http://127.0.0.1:8000
uv run mkdocs serve

# Build documentation
uv run mkdocs build

# Deploy to GitHub Pages
uv run mkdocs gh-deploy
```

## Architecture

### Core Modules

`tools/logger/` - Dual-Mode Logging System

```python
from tools.config import Settings
from tools.logger import Logger, LogType

settings = Settings()
logger = Logger(
    __name__,
    log_type=LogType.LOCAL if settings.IS_LOCAL else LogType.GOOGLE_CLOUD,
)
```

`tools/config/` - Environment-Based Configuration

```python
from tools.config import Settings

settings = Settings()
api_url = settings.api_prefix_v1  # Loaded from environment
```

`tools/tracer/` - Performance Monitoring

```python
from tools.tracer import Timer

@Timer("full_operation")
def process():
    with Timer("step1"):
        do_step1()
    with Timer("step2"):
        do_step2()
```

### Test Structure

Tests in
### Configuration Philosophy

Ruff (`ruff.toml`):

Pyright (`pyrightconfig.json`):

pytest (`pytest.ini`):
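The actual contents of these files are elided in this extract. For orientation only, a typical `ruff.toml` for a template like this might look like the following; the keys are standard Ruff options, but the values are illustrative, not this repository's settings:

```toml
line-length = 88
target-version = "py312"

[lint]
# Enable a broad rule set, then opt out selectively
select = ["E", "F", "I", "B"]
ignore = []

[format]
quote-style = "double"
```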
### Nox Task Automation

The

Example of the argument parsing pattern:

```python
# noxfile.py
import nox

@nox.session(python=False)
def lint(session: nox.Session) -> None:
    # CLIArgs is a helper defined elsewhere in the noxfile
    args = CLIArgs.parse(session.posargs)
    if args.pyright:
        session.run("uv", "run", "pyright")
    if args.ruff:
        session.run("uv", "run", "ruff", "check", ".", "--fix")
```

## Key Patterns for Development

### Adding New Configuration Fields

Extend the `Settings` class:

```python
class Settings(BaseSettings):
    # Existing fields...

    # Add your new fields
    NEW_SETTING: str = "default_value"
    ANOTHER_SETTING: int = 42
```

Then add the values to your environment file:

```bash
NEW_SETTING=custom_value
ANOTHER_SETTING=100
```

### Adding New Logger Formatters

Create a new formatter in
### Testing Utilities

When testing the utilities themselves:
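The specific guidance is elided in this extract. As a generic illustration (not this repository's actual tests), log output can be captured with an in-memory handler so assertions can inspect the emitted records; the `ListHandler` name here is made up for the example:

```python
import logging

class ListHandler(logging.Handler):
    """Test helper that collects log records in memory."""

    def __init__(self) -> None:
        super().__init__()
        self.records: list[logging.LogRecord] = []

    def emit(self, record: logging.LogRecord) -> None:
        self.records.append(record)

def test_logs_record_count() -> None:
    logger = logging.getLogger("demo")
    logger.setLevel(logging.INFO)
    handler = ListHandler()
    logger.addHandler(handler)
    try:
        logger.info("Processing %d records", 3)
        assert handler.records[0].getMessage() == "Processing 3 records"
    finally:
        logger.removeHandler(handler)

test_logs_record_count()
```

The try/finally detach keeps the handler from leaking into other tests that share the same logger name.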
### Documentation Structure

The

When adding new utilities to

### CI/CD Workflows

GitHub Actions workflows in
All workflows use the same nox commands as local development.

### Environment Variables

Critical environment variables (set in
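The variable list itself is elided above, but two settings do surface in this document's code samples: `IS_LOCAL` (used to pick the logger mode) and the value behind `settings.api_prefix_v1`. A purely illustrative environment file might therefore contain (names and values inferred, not confirmed by this extract):

```bash
# Illustrative values only - inferred from the snippets above
IS_LOCAL=True
API_PREFIX_V1=/api/v1
```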
### Important Notes
### Template Usage Pattern

When using this as a template for a new project:
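The steps themselves are elided in this extract. A hedged sketch of a typical flow, combining commands that appear elsewhere in this document with the GitHub CLI's standard template support (this repository's docs may prescribe different steps):

```bash
# Create a new repository from the template (GitHub CLI)
gh repo create my-project --template a5chin/python-uv --private --clone
cd my-project

# Install dependencies and git hooks (commands shown earlier in this doc)
uv sync
uv run pre-commit install
```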
Output:

### As a Decorator

Use `Timer` as a decorator:

```python
import time
from tools.tracer import Timer

@Timer("process_data")
def process_data(data):
    time.sleep(1)  # Simulate processing
    return data

result = process_data([1, 2, 3])
```

Output:

## Real-World Examples

### API Endpoint Monitoring

Monitor API endpoint performance:

```python
from fastapi import FastAPI
from tools.tracer import Timer

app = FastAPI()

@app.get("/users/{user_id}")
@Timer("get_user_endpoint")
async def get_user(user_id: int):
    with Timer("database_lookup"):
        user = await db.get_user(user_id)
    with Timer("user_serialization"):
        return user.dict()
```

Output:

### Data Processing Pipeline

Monitor each stage of a data pipeline:

```python
from tools.tracer import Timer
from tools.logger import Logger

logger = Logger(__name__)

@Timer("full_pipeline")
def process_dataset(data):
    logger.info(f"Processing {len(data)} records")

    with Timer("data_validation"):
        validated = validate_data(data)

    with Timer("data_transformation"):
        transformed = transform_data(validated)

    with Timer("data_enrichment"):
        enriched = enrich_data(transformed)

    with Timer("data_storage"):
        save_data(enriched)

    logger.info("Pipeline complete")
    return enriched
```

### Database Operations

Monitor individual database operations:

```python
from sqlalchemy.orm import Session
from tools.tracer import Timer

class UserRepository:
    def __init__(self, db: Session):
        self.db = db

    @Timer("user_create")
    def create_user(self, user_data: dict):
        user = User(**user_data)
        self.db.add(user)
        self.db.commit()
        return user

    @Timer("user_bulk_import")
    def import_users(self, users_data: list[dict]):
        with Timer("user_validation"):
            validated = [validate(u) for u in users_data]
        with Timer("user_db_insert"):
            self.db.bulk_insert_mappings(User, validated)
            self.db.commit()
```

### File Processing

Track file I/O operations:

```python
import json
from tools.tracer import Timer

@Timer("process_json_file")
def process_json_file(filepath: str):
    with Timer("file_read"):
        with open(filepath, "r") as f:
            data = json.load(f)

    with Timer("data_processing"):
        processed = transform(data)

    with Timer("file_write"):
        with open(f"{filepath}.processed", "w") as f:
            json.dump(processed, f)

    return processed
```

### Nested Timers

You can nest timers to measure both overall and component timings:

```python
from tools.tracer import Timer

@Timer("complete_analysis")
def analyze_data(dataset):
    with Timer("load_models"):
        model_a = load_model_a()
        model_b = load_model_b()

    with Timer("run_models"):
        with Timer("model_a_inference"):
            results_a = model_a.predict(dataset)
        with Timer("model_b_inference"):
            results_b = model_b.predict(dataset)

    with Timer("combine_results"):
        final = combine(results_a, results_b)

    return final
```

Output:

### Integration with Logging

The Timer automatically uses the Logger module. You can combine them for comprehensive monitoring:

```python
from tools.logger import Logger
from tools.tracer import Timer

logger = Logger(__name__)

@Timer("expensive_operation")
def expensive_operation(items: list):
    logger.info(f"Starting operation with {len(items)} items")

    with Timer("preprocessing"):
        preprocessed = preprocess(items)
        logger.debug(f"Preprocessed {len(preprocessed)} items")

    with Timer("main_processing"):
        results = process(preprocessed)
        logger.debug(f"Processed into {len(results)} results")

    logger.info("Operation complete")
    return results
```

## Best Practices

### 1. Meaningful Timer Names

Use descriptive names that clearly indicate what's being measured:

```python
# Good
with Timer("database_user_query"):
    user = db.query(User).filter_by(id=user_id).first()

# Less useful
with Timer("query"):
    user = db.query(User).filter_by(id=user_id).first()
```

### 2. Measure at Appropriate Granularity

Don't time trivial operations - focus on operations that matter:

```python
# Good - measures significant operations
@Timer("process_large_dataset")
def process_dataset(data):
    return [transform(item) for item in data]

# Too granular - overhead not worth it
for item in data:
    with Timer("process_single_item"):  # Too fine-grained
        process(item)
```

### 3. Use Consistent Naming

Establish naming conventions for different operation types:

```python
# Database operations
with Timer("db_query_users"):
    users = db.query(User).all()

# API calls
with Timer("api_call_external_service"):
    response = requests.get(url)

# File operations
with Timer("file_read_config"):
    config = load_config()
```

### 4. Combine with Monitoring

Use Timer data for performance monitoring and optimization:

```python
import time

from tools.tracer import Timer
from tools.logger import Logger

logger = Logger(__name__)

SLOW_QUERY_THRESHOLD_MS = 1000

@Timer("database_query")
def query_users(filters):
    start = time.time()
    users = run_query(filters)  # hypothetical helper; the original snippet is truncated here
    elapsed_ms = (time.time() - start) * 1000
    if elapsed_ms > SLOW_QUERY_THRESHOLD_MS:
        logger.warning(f"Slow query: {elapsed_ms:.0f} ms")
    return users
```
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
## PR Type

Documentation, Enhancement

## Description

- Introduce a comprehensive `CLAUDE.md` guide for AI interaction.
- Revamp `README.md` with detailed sections and quick start guides.
- Expand all core documentation guides for tools, configuration, and use cases.
- Update GitHub Actions to deploy docs on any `.md` file change.

## Diagram Walkthrough

```mermaid
flowchart LR
  A[Old Docs] --> B{Documentation Overhaul};
  B --> C[New CLAUDE.md];
  B --> D[Revamped README.md];
  B --> E[Expanded Guides & Configs];
  E --> E1[Getting Started];
  E --> E2[Development Guides];
  E --> E3[Configuration Reference];
  E --> E4[Built-in Utilities];
  E --> E5[Use Cases];
  B --> F[Updated gh-deploy.yml Trigger];
  F -- "Monitors" --> E;
  F -- "Monitors" --> D;
  F -- "Monitors" --> C;
```

## File Walkthrough

1 file:

- Update documentation deployment trigger to all Markdown files

11 files:

- Add new guide for Claude Code AI interaction
- Completely overhaul README with new structure, features, and quickstart
- Expand configuration reference with detailed guides and best practices
- Enhance getting started guide with setup options and troubleshooting
- Restructure development guides with overviews and quick references
- Rewrite configuration guide with advanced usage and best practices
- Expand built-in utilities overview with architecture and use cases
- Rewrite logger guide with detailed usage, formatters, and best practices
- Rewrite tracer guide with real-world examples and advanced usage
- Revamp main documentation landing page with overview and quick navigation
- Expand use cases with featured examples and best practices