Stop configuring Claude Code from scratch for every Python project. pyclaudefig is a heavily opinionated, ready-to-go configuration designed for teams that have embraced the Astral ecosystem — ruff, uv, and ty — as their standard toolchain.
This configuration assumes you already have:
- Claude Code installed and working
- uv managing your project virtualenv
- Dev tools installed and configured in your venv:
  - `ruff` (formatting + linting)
  - `ty` (type checking)
  - `pytest` + `pytest-check` + `pytest-cov` (testing + coverage)
  - `pydoclint` (docstring formatting)
- All tools are runnable via `uv run .claude/scripts/validate_code.py`
- Context7 free account for fetching up-to-date framework documentation via MCP
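For orientation, here is a rough sketch of what a combined validation entry point like this can look like (illustrative Python only; the actual `validate_code.py` ships with pyclaudefig and may differ):

```python
"""Illustrative sketch of a combined validation script.

Hypothetical code: it shows the shape of the idea, not the shipped script.
"""

import subprocess

# (label, command) pairs; the commands assume the tools live in the
# project's uv-managed venv.
CHECKS: list[tuple[str, list[str]]] = [
    ("format", ["uv", "run", "ruff", "format", "--check", "."]),
    ("lint", ["uv", "run", "ruff", "check", "."]),
    ("types", ["uv", "run", "ty", "check"]),
    ("tests", ["uv", "run", "pytest", "--cov"]),
]


def run_checks(checks: list[tuple[str, list[str]]]) -> list[str]:
    """Run each check in order and return the labels of the ones that failed."""
    failed = []
    for label, cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            failed.append(label)
    return failed
```

Running every tool through one script gives agents a single pass/fail signal instead of four separate invocations.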
1. Download `.claude/` into your project root:

   ```bash
   cd /path/to/your/project

   # Download pyclaudefig
   curl -L https://api.github.com/repos/libertininick/pyclaudefig/tarball/HEAD | tar -xz

   # Move .claude directory to root of project and remove remaining items from download
   mv libertininick-pyclaudefig-*/.claude ./ && rm -rf libertininick-pyclaudefig-*

   # Allow scripts to be executed
   chmod +x .claude/scripts/*.py
   ```
2. Connect the Context7 MCP server to Claude Code:

   ```bash
   # Generate an API key at https://context7.com after signing up
   claude mcp add context7 -- npx -y @upstash/context7-mcp --api-key YOUR_API_KEY
   ```

   Verify it's connected by running `/mcp` inside a Claude session — you should see `context7 · ✔ connected`.
3. Start a Claude Code session in your project's root directory:

   ```bash
   cd /path/to/your/project
   claude
   ```
4. Add approved frameworks, i.e., the Python libraries your project depends on (see Adding a New Framework):

   ```
   /add-framework <python library>
   ```
5. Run `/sync` inside your Claude session to generate context bundles for the subagents (they're gitignored):

   ```
   /sync
   ```

   Re-run `/sync` any time you make changes to `.claude/`, add a skill/command/agent, or update a setting.
After initial setup, you may want to sync your local config with changes pushed to pyclaudefig. Here's an approach using rsync:
```bash
cd /path/to/your/project

# Download pyclaudefig
curl -L https://api.github.com/repos/libertininick/pyclaudefig/tarball/HEAD | tar -xz

# Sync changes
# NOTE: use --dry-run to look before you leap ;)
rsync -av \
    --delete \
    --exclude='agent-outputs' \
    --exclude='bundles' \
    --exclude='**/__pycache__' \
    libertininick-pyclaudefig-*/.claude/ \
    .claude/ \
    --dry-run

# Remove remaining items from download
rm -rf libertininick-pyclaudefig-*
```

The core loop is: plan → implement → review → commit → update plan, one phase at a time.
Here's what a typical session looks like:
```
# 1. Create a plan — the planner agent explores the codebase, asks
#    clarifying questions, and writes a phased implementation plan.
/plan Add retry logic to the API client

# Output: .claude/agent-outputs/plans/<timestamp>-api-client-plan.md
# Review the plan and iterate until you're happy with it.

# 2. Implement Phase 1 — code-writer and test-writer agents execute
#    the first phase, then validate with ruff, ty, and pytest.
/implement Phase 1 from .claude/agent-outputs/plans/<timestamp>-api-client-plan.md

# 3. Review Phase 1 — style, substance, and test-quality reviewers
#    run in parallel and produce a unified review report.
/review --staged --plan .claude/agent-outputs/plans/<timestamp>-api-client-plan.md --phase 1

# Address any findings, then stage your changes.

# 4. Commit — once the review is clean, commit the phase.
git add -p && git commit

# 5. Update the plan — syncs with main, marks Phase 1 complete,
#    and creates a new versioned plan file with adjustments.
#    (The original plan is never modified.)
/update-plan .claude/agent-outputs/plans/<timestamp>-api-client-plan.md --completed-phases 1

# 6. Repeat from step 2 for Phase 2, 3, etc.
```

Every command supports `--help` for usage and examples (e.g., `/review --help`).
Deep dives into specific topics. Start here after you're comfortable with the Quick Start workflow.
| Guide | What You'll Learn |
|---|---|
| Understanding LLM Coding Agents | Step-by-step guide to how AI coding agents actually work. Pattern matching, tool use, context windows, and agentic loops demystified. |
| Agentic Coding Workflow | Complete walkthrough of the plan → implement → review cycle. Commands, agents, validation, troubleshooting. |
| Context Window Management | Why AI performance degrades as context fills up, and how this configuration uses agents and bundles to keep sessions efficient. |
| Reviewer-Friendly PRs | Creating PRs that respect reviewers' time. Validation checklists, description templates, structuring large changes. |
| Thinking Tokens & Model Selection | How thinking tokens actually work, when extended thinking helps (and when it doesn't), and practical model selection guidance for code generation. |
This configuration separates concerns into three distinct layers:
```
┌─────────────────────────────────────────────────────────────┐
│                         COMMANDS                            │
│            Orchestration: workflows that use agents         │
│         /plan  /implement  /review  /pr-description         │
└─────────────────────────┬───────────────────────────────────┘
                          │ invoke
                          ▼
┌─────────────────────────────────────────────────────────────┐
│                          AGENTS                             │
│             Execution: specialists that do work             │
│  planner  code-writer  test-writer  test-reviewer reviewers │
└─────────────────────────┬───────────────────────────────────┘
                          │ load
                          ▼
┌─────────────────────────────────────────────────────────────┐
│                          SKILLS                             │
│          Knowledge: conventions, templates, criteria        │
│    class-design  test-writing  frameworks  plan-template ...│
└─────────────────────────────────────────────────────────────┘
```
Why this separation matters:
- Skills = Knowledge that multiple agents share (conventions, templates)
- Agents = Focused specialists with specific responsibilities
- Commands = User-facing workflows that compose agents
manifest.json defines all relationships:
```json
{
  "skills": [{ "name": "class-design", "category": "conventions", ... }],
  "agents": [{ "name": "planner", "depends_on_skills": ["plan-template", ...] }],
  "commands": [{ "name": "plan", "depends_on_agents": ["planner"] }]
}
```

Changes to relationships happen in one place. The manifest drives:

- Bundle generation (what skills each agent receives)
- CLAUDE.md generation (via `/sync`)
- Documentation of dependencies
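Because the manifest is machine-readable, dependency consistency is easy to check. As a sketch (a hypothetical helper, not the shipped tooling), dangling references can be detected like this:

```python
"""Hypothetical manifest consistency check (illustrative only)."""

import json
from pathlib import Path


def check_manifest(path: Path) -> list[str]:
    """Return a list of dangling references found in manifest.json."""
    manifest = json.loads(path.read_text())
    skills = {s["name"] for s in manifest.get("skills", [])}
    agents = {a["name"] for a in manifest.get("agents", [])}
    errors = []
    # Every skill an agent depends on must be declared in "skills".
    for agent in manifest.get("agents", []):
        for skill in agent.get("depends_on_skills", []):
            if skill not in skills:
                errors.append(f"agent {agent['name']!r}: unknown skill {skill!r}")
    # Every agent a command depends on must be declared in "agents".
    for command in manifest.get("commands", []):
        for dep in command.get("depends_on_agents", []):
            if dep not in agents:
                errors.append(f"command {command['name']!r}: unknown agent {dep!r}")
    return errors
```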
Agents need skill knowledge, but loading skills individually at runtime is inefficient. Instead:
- manifest.json declares which skills each agent depends on
- generate_bundles.py pre-composes skills into bundles
- Agents load a single bundle file with all their context
```bash
# Regenerate bundles after modifying skills
uv run .claude/scripts/generate_bundles.py
```

Two bundle variants are generated:

- Full bundle (`planner.md`): Complete skill content including examples
- Compact bundle (`planner-compact.md`): Quick Reference sections only
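Conceptually, bundle generation is just manifest-driven concatenation. A minimal sketch (hypothetical code; the shipped `generate_bundles.py` also handles compact variants and layered skills):

```python
"""Illustrative sketch of bundle pre-composition (not the shipped script)."""

from pathlib import Path


def compose_bundle(claude_dir: Path, agent: dict) -> str:
    """Concatenate an agent's skill files into one bundle document."""
    parts = [f"# Context bundle: {agent['name']}"]
    for skill in agent.get("depends_on_skills", []):
        skill_md = claude_dir / "skills" / skill / "SKILL.md"
        parts.append(f"\n## Skill: {skill}\n\n{skill_md.read_text()}")
    return "\n".join(parts)
```

The payoff is at runtime: the agent reads one pre-built file instead of resolving and loading each skill individually.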
Complex skills split content across files:
```
skills/class-design/
├── SKILL.md      # Main content + Quick Reference table
├── rules.md      # Decision flow and rules
└── examples.md   # Code examples
```
The frontmatter in SKILL.md declares layers:
```yaml
---
name: class-design
layers:
  rules: rules.md
  examples: examples.md
---
```

Bundles include all layers. Compact bundles include only Quick Reference sections.
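The `layers` mapping is simple enough that even a dependency-free parser can pull it out of the frontmatter. A minimal sketch (hypothetical code, assuming flat `key: value` layer entries):

```python
"""Hypothetical frontmatter layer extraction (illustrative only)."""


def layer_files(frontmatter: str) -> dict[str, str]:
    """Extract the `layers:` mapping from simple key: value frontmatter."""
    layers: dict[str, str] = {}
    in_layers = False
    for line in frontmatter.splitlines():
        if line.strip() == "layers:":
            in_layers = True
            continue
        if in_layers:
            # Indented entries belong to the layers mapping.
            if line.startswith("  ") and ":" in line:
                key, _, value = line.strip().partition(":")
                layers[key] = value.strip()
            else:
                in_layers = False
    return layers
```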
Two directories regenerate as needed:
```
.claude/
├── bundles/          # Generated by generate_bundles.py
└── agent-outputs/    # Written by agents at runtime
    ├── plans/
    ├── reviews/
    └── pr-descriptions/
```
Both are .gitignored. Regenerate bundles after skill changes. Agent outputs are timestamped for history.
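Assuming `.claude/` sits at the project root, the matching `.gitignore` entries would be:

```gitignore
.claude/bundles/
.claude/agent-outputs/
```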
```
.claude/
├── CLAUDE.md              # Root agent instructions (auto-generated by /sync)
├── settings.json          # Project-specific Claude settings
│
├── commands/              # User-invocable workflows
│   ├── plan.md
│   ├── implement.md
│   ├── review.md
│   └── ...
│
├── agents/                # Execution specialists
│   ├── planner.md
│   ├── python-code-writer.md
│   └── ...
│
├── manifest.json          # Single source of truth
│
├── skills/                # Knowledge & conventions
│   ├── class-design/
│   │   ├── SKILL.md
│   │   ├── rules.md
│   │   └── examples.md
│   ├── test-writing/
│   │   └── SKILL.md
│   └── ...
│
├── bundles/               # Pre-composed context (generated, gitignored)
│   ├── planner.md
│   ├── planner-compact.md
│   └── ...
│
├── agent-outputs/         # Agent work products (generated, gitignored)
│   ├── plans/
│   ├── reviews/
│   └── pr-descriptions/
│
└── scripts/               # Automation
    ├── generate_bundles.py
    └── sync_context.py
```
Use the /add-framework command to register a new approved library:
```
/add-framework httpx
```

This will:

- Resolve the doc ID (for documentation lookup)
- Find the official documentation URL
- Confirm the details with you
- Update `skills/frameworks/SKILL.md` with the new entry
- Regenerate bundles and validate the manifest
Run /add-framework --help for more examples.
1. Create the skill directory and files:

   ```
   /create-skill
   # Follow prompts for name, category, description
   ```

2. Edit the generated SKILL.md with your conventions

3. Add to manifest.json (if not auto-added):

   ```json
   {
     "name": "your-skill",
     "category": "conventions",
     "description": "What this skill provides"
   }
   ```

4. Add to agent dependencies in manifest.json:

   ```json
   {
     "name": "python-code-writer",
     "depends_on_skills": ["your-skill", ...]
   }
   ```

5. Regenerate bundles:

   ```bash
   uv run .claude/scripts/generate_bundles.py
   ```

6. Update CLAUDE.md:

   ```
   /sync
   ```
1. Create agent file in `agents/`:

   ```
   ---
   name: your-agent
   description: What this agent does
   model: sonnet  # or opus
   bundle: bundles/your-agent.md
   tools:
     - Read
     - Write
     - Edit
   ---

   Instructions for the agent...
   ```

2. Add to manifest.json:

   ```json
   {
     "name": "your-agent",
     "description": "What this agent does",
     "depends_on_skills": ["skill1", "skill2"]
   }
   ```

3. Generate bundles with sync:

   ```
   /sync
   ```
1. Create command file in `commands/`:

   ```
   ---
   name: your-command
   description: What this command does
   depends_on_agents:
     - agent-it-uses
   ---

   # Command Title

   Instructions for what this command does...
   ```

2. Add to manifest.json:

   ```json
   {
     "name": "your-command",
     "description": "What this command does",
     "depends_on_agents": ["agent-it-uses"]
   }
   ```

3. Sync context:

   ```
   /sync
   ```
Use /learn to turn mistakes (or good patterns) into permanent configuration changes:
```
/learn You kept using Optional[str] instead of str | None. Enforce the modern union syntax.
/learn Always run tests before claiming a task is done. --no-session
```

The config-learner agent analyzes your feedback, proposes targeted changes to skills, CLAUDE.md, or agent instructions, and applies them after your approval. See Learning from Mistakes for the full guide.
- Edit the skill's SKILL.md (and rules.md/examples.md if layered)
- Regenerate bundles so agents get updated context
- Test by running a command that uses the affected agents
| Category | Purpose | Examples |
|---|---|---|
| conventions | How code should be written | class-design, naming-conventions, test-writing |
| assessment | Criteria for code review | maintainability, testability, test-quality |
| templates | Output format specifications | plan-template, review-template |
| utilities | Reusable operations | run-python-safely, write-markdown-output |
- Check in `manifest.json` that the agent's `depends_on_skills` includes the skill
- Regenerate bundles: `uv run .claude/scripts/generate_bundles.py`
- Ensure the command file has correct frontmatter
- Run `/sync` to regenerate CLAUDE.md
- Regenerate bundles after any skill modification
- Bundles are gitignored, so they won't auto-update
- Run `/sync` to regenerate from current disk state