Context Engineering exercises for AI Native Development Chapter 4

panaversity/claude-code-context-exercises

Context Engineering Exercises — Chapter 4

The Context Lab

These exercises accompany Chapter 4: Context Engineering of the Agent Factory textbook. Unlike standalone exercises, this is a single evolving project: you start with a broken "Contract Review Agent" and progressively engineer its context to production quality.

Each module applies one technique from the chapter and measures the improvement — so you can see exactly how context engineering affects agent behavior.

Prerequisites

  • Claude Code installed and working (`claude --version`)
  • Completed Chapter 4 reading (or working through it alongside these exercises)
  • A text editor for reviewing and editing files
  • Basic familiarity with CLAUDE.md, skills, and hooks concepts
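If you want to confirm the CLI tools are available before starting, a minimal check might look like the following. The tool names passed in are illustrative; only `claude` is required by this repository.

```python
import shutil


def missing_prereqs(tools=("claude", "git")):
    """Return the names of required CLI tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]


if __name__ == "__main__":
    missing = missing_prereqs()
    print("All set!" if not missing else f"Missing: {', '.join(missing)}")
```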

Quick Start

  1. Clone or download this repository
  2. Open the starter-agent/ folder — this is your broken Contract Review Agent
  3. Read starter-agent/review-tasks.md to understand the three standardized tasks
  4. Read starter-agent/scoring-rubric.md to understand how you will measure quality
  5. Open EXERCISE-GUIDE.md for the full walkthrough
  6. Begin with Module 1: Context Rot

Module Overview

| Module | Topic | Key Technique | Time |
|--------|-------|---------------|------|
| 1 | Context Rot | Identifying 4 rot types | 30 min |
| 2 | Signal vs Noise | 4-Question Audit Framework | 45 min |
| 3 | Architecture | Tool selection mapping | 45 min |
| 4 | Persistence | Tasks, knowledge, /clear survival | 45 min |
| 5 | Lifecycle | Zone monitoring, compaction | 60 min |
| 6 | Memory | Corpus design, drift measurement | 45 min |
| 7 | Isolation | Multi-agent pipelines | 60 min |
| A | Capstone: Your Domain | Full production agent | 90 min |
| B | Capstone: Context Relay | Multi-session continuity | 90 min |
| C | Capstone: Forensics | Diagnose 3 failing agents | 60 min |

Total estimated time: 8-10 hours across all modules and capstones.
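As a quick sanity check on that estimate, the per-module times from the table above sum to 9.5 hours, squarely inside the 8-10 hour range:

```python
# Minutes per module/capstone, copied from the Module Overview table
times = {"1": 30, "2": 45, "3": 45, "4": 45, "5": 60,
         "6": 45, "7": 60, "A": 90, "B": 90, "C": 60}

total_min = sum(times.values())
print(total_min, "minutes =", total_min / 60, "hours")  # 570 minutes = 9.5 hours
```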

How Measurement Works

Every module follows the same cycle:

  1. Baseline: Run the 3 standardized tasks, score with the rubric
  2. Apply technique: Make the changes the module teaches
  3. Re-measure: Run the same 3 tasks, score again
  4. Compare: Did scores improve? By how much? Which criteria changed?

This gives you concrete evidence that context engineering works — not just theory.
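The compare step of the cycle can be sketched as a per-criterion diff between the two scoring passes. The criterion names and score values below are illustrative placeholders, not taken from `scoring-rubric.md`:

```python
def compare_scores(baseline, after):
    """Return per-criterion score deltas between two rubric passes."""
    return {criterion: after[criterion] - baseline[criterion] for criterion in baseline}


# Illustrative scores for one task (criteria and values are made up)
baseline = {"accuracy": 2, "completeness": 1, "format": 3}
after = {"accuracy": 4, "completeness": 3, "format": 3}

deltas = compare_scores(baseline, after)
print(deltas)                 # {'accuracy': 2, 'completeness': 2, 'format': 0}
print(sum(deltas.values()))   # net improvement: 4
```

Tracking deltas per criterion, rather than a single total, shows which aspect of agent behavior each technique actually moved.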

Repository Structure

```
starter-agent/          The broken Contract Review Agent (your starting point)
module-1-context-rot/   Identify and classify rot in CLAUDE.md
module-2-signal-noise/  Cut noise, amplify signal
module-3-architecture/  Map content to the right context tools
module-4-persistence/   Survive /clear, persist knowledge
module-5-lifecycle/     Monitor and manage the context window
module-6-memory/        Design memory systems for consistency
module-7-isolation/     Multi-agent pipelines for clean context
capstone-A-your-domain-agent/   Build a production agent for your domain
capstone-B-context-relay/       Multi-session relay race
capstone-C-forensics-challenge/ Forensics: diagnose 3 failing agents
```

License

Educational use. Part of the Agent Factory curriculum.
