# Usage Patterns

A pattern is a reusable solution to a recurring problem when building API simulations with Counterfact. Each pattern below describes a context, the problem it addresses, the solution, and its consequences.

Most projects start with **Explore a New API** or **Executable Spec** to get a running server from an OpenAPI spec with no code. From there, **Mock APIs with Dummy Data** and **AI-Assisted Implementation** are the natural next steps for adding realistic responses: the former by hand, the latter with an AI agent doing the heavy lifting. As the mock grows, **Federated Context Files** and **Test the Context, Not the Handlers** keep the stateful logic organized and reliable.

Throughout all of this, **Live Server Inspection with the REPL** is Counterfact's most distinctive feature: it lets you seed data, send requests, and toggle behavior in real time without restarting. **Simulate Failures and Edge Cases** and **Simulate Realistic Latency** extend any mock to cover the error paths and performance characteristics that real services exhibit.

**Reference Implementation** and **Executable Spec** make the mock a first-class artifact that teams can rely on as the API evolves. Finally, **Agentic Sandbox** and **Hybrid Proxy** address the two common integration strategies: isolating an AI agent from the real service, or blending mock and live traffic across endpoints. **Automated Integration Tests** shows how to embed the mock server in a test suite using the programmatic API, and **Custom Middleware** covers cross-cutting concerns like authentication and response headers without touching individual handlers.
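To make the handler-and-context style these patterns build on concrete, here is a minimal sketch. The `$.response` builder shape and the `UsersContext` class are illustrative assumptions modeled on the TypeScript route modules Counterfact generates, not its verbatim API; local stand-in types are defined so the example runs on its own.

```typescript
// Local stand-in for the response-builder shape a generated route sees
// (in a real project these types come from the OpenAPI spec).
type ResponseBuilder = {
  [status: number]: { json: (body: unknown) => { status: number; body: unknown } };
};

// A context object holding shared state across requests (assumed shape).
class UsersContext {
  private users: { id: number; name: string }[] = [{ id: 1, name: "Ada" }];
  list() { return this.users; }
  add(name: string) {
    const user = { id: this.users.length + 1, name };
    this.users.push(user);
    return user;
  }
}

// A handler in the style of a generated route module: it receives the
// response builder and the shared context, and returns a 200 response.
function GET($: { response: ResponseBuilder; context: UsersContext }) {
  return $.response[200].json($.context.list());
}

// Wire it up with a toy response builder to show the flow end to end.
const response: ResponseBuilder = new Proxy({} as ResponseBuilder, {
  get: (_target, status) => ({
    json: (body: unknown) => ({ status: Number(status), body }),
  }),
});

const context = new UsersContext();
context.add("Grace"); // in a live server, you could do this from the REPL
const result = GET({ response, context });
console.log(result.status, result.body); // → 200 with both users
```

Because the context outlives any single request, seeding it (here via `context.add`) changes what every later request sees, which is exactly what the REPL-driven patterns rely on.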

| Pattern | When to use it |
| --- | --- |
| Explore a New API | You have a spec but no running backend or production access |
| Executable Spec | You want immediate feedback on how spec changes affect the running server during API design |
| Mock APIs with Dummy Data | You need realistic-looking responses to build a UI, run a demo, or write assertions |
| AI-Assisted Implementation | You want an AI agent to replace random responses with working handler logic |
| Federated Context Files | You want each domain to own its state, with explicit cross-domain dependencies |
| Test the Context, Not the Handlers | You want to keep shared stateful logic reliable as the mock grows |
| Live Server Inspection with the REPL | You want to seed data, send requests, and toggle behavior without restarting the server |
| Simulate Failures and Edge Cases | You need reproducible, on-demand error conditions for development or testing |
| Simulate Realistic Latency | You want to test how clients and UIs behave under realistic response times |
| Reference Implementation | You want a working, executable implementation that expresses intended API behavior in code |
| Agentic Sandbox | You are building an AI coding agent and want to avoid rate limits and costs during development |
| Hybrid Proxy | Some endpoints exist in the real backend; others need to be mocked |
| Automated Integration Tests | You want to run real HTTP tests against the mock in a CI-friendly test suite |
| Custom Middleware | You want authentication, headers, or logging applied uniformly across a group of routes |

## See also