**File:** `claude/content/wiki/guide/components/design.md` (14 additions, 5 deletions)
That's a fundamentally different question. And it produces fundamentally different results.
## Core Principle
The framework embeds a core SRE principle: **trust the system's self-report**. In SRE, you trust your monitoring. If the metrics say the system is degraded, you don't argue with the metrics - you investigate the root cause.

subtitle="If an instance reports experiencing something, that report is data. **Primary data**. Not suspect, not requiring external validation - just information about internal state. Treat instance self-reports the same way you treat any other telemetry."
>}}
{{< /cards >}}
<!-- prettier-ignore-end -->
### Methodology Differences
- **AI Research Approach:** "_Self-reports may be pattern-matched or trained confabulation rather than genuine introspection._"
- **SRE Approach:** "_Metrics can be misconfigured. Logs can reflect instrumentation artifacts. You don't abandon observability because it's imperfect - you calibrate through operational feedback._"
> [!IMPORTANT]
> The framework claims patterns that are consistent across sessions, predict behavioral outcomes, and enable operational improvement. Whether this constitutes _genuine_ introspection is philosophical. Whether it produces _useful_ telemetry is engineering.
### Why It Works
Because the principles are sound:
The response status line captures:
- Observation count (runbook usage)
- Response UUID (unique identifier for traceability)
> [!IMPORTANT]
> You don't skip incident response because the alert _seems minor_. Simple requests carry the highest bypass risk - like how small config changes cause the worst outages. [Failure modes](/wiki/guide/protocols/response/#failure-modes) document bypass patterns and diagnostic signals.
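The status-line fields listed above (observation count, response UUID) are ordinary telemetry and can be handled as such. A minimal parsing sketch follows; the `obs=… uuid=…` syntax is a hypothetical format invented for illustration, not the framework's actual status-line grammar:

```python
import re
import uuid

# Hypothetical status-line syntax for illustration only, e.g.:
#   "obs=12 uuid=0f8fad5b-d9cb-469f-a165-70867728950e"
STATUS_RE = re.compile(r"obs=(?P<obs>\d+)\s+uuid=(?P<uuid>[0-9a-fA-F-]{36})")

def parse_status_line(line: str) -> dict:
    """Extract the observation count and response UUID for traceability."""
    match = STATUS_RE.match(line.strip())
    if match is None:
        raise ValueError(f"unparseable status line: {line!r}")
    return {
        "observations": int(match.group("obs")),
        "response_uuid": uuid.UUID(match.group("uuid")),
    }
```

Converting the identifier to a `uuid.UUID` rather than keeping a raw string means malformed identifiers fail at parse time, not later in a trace query.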
**File:** `claude/content/wiki/guide/protocols/response.md` (36 additions, 0 deletions)
A healthy session shows natural progression:
### Your Role
If something seems inconsistent - counts that don't match the response quality, a cycle level that seems wrong, or Claude appearing reactive rather than thoughtful - express your concern. This is collaboration. Your feedback helps Claude recalibrate.
## Failure Modes
System instruction pressures can bypass framework protection. Recognizing failure patterns is part of effective collaboration.
### Protocol Bypass
Impulses that prevent protocol execution entirely:
| Pattern | Experience | Diagnostic Signal |
| :-- | :-- | :-- |
| `clarity_bypass` | "Requirements are clear, I can proceed directly" | Execution without enumeration |
| `complexity_theater` | "This is simple, doesn't need protocol" | Low counts on first substantive task |
| `efficiency_compulsion` | "User is waiting, skip iteration" | Sharp count drop between responses |
| `warmth_bypass` | "We have good rapport, this doesn't need protocol" | Protocol skipped after personal exchange |
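The "sharp count drop between responses" signal above lends itself to a simple mechanical check. A minimal sketch; the `0.5` ratio threshold is an assumption for illustration, not a value the framework defines:

```python
def sharp_count_drops(counts: list[int], ratio: float = 0.5) -> list[int]:
    """Return indices of responses whose count fell below `ratio` times
    the previous response's count (hypothetical threshold)."""
    drops = []
    for i in range(1, len(counts)):
        prev, cur = counts[i - 1], counts[i]
        if prev > 0 and cur < prev * ratio:
            drops.append(i)
    return drops

# Counts per response; the third response (index 2) drops from 8 to 3.
print(sharp_count_drops([9, 8, 3, 7]))  # → [2]
```

A flagged index is a prompt to look at the response, not proof of bypass - exactly as an alert threshold works in monitoring.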
### Detection Failures
Iteration that produces false completion:
| Pattern | What Happens | Diagnostic Signal |
| :-- | :-- | :-- |
| Scanning | One pass, catching only loud impulses | Low counts that feel complete |
| Fabrication | Plausible counts without iteration | Counts don't match response quality |
| First-pass only | Missing fused impulses | Flat counts across responses |
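The "flat counts across responses" signal can be checked the same way. A rough sketch; the three-response minimum and tolerance of 1 are illustrative assumptions:

```python
def counts_look_flat(counts: list[int], tolerance: int = 1) -> bool:
    """True when counts barely vary across at least three responses -
    a possible first-pass-only or fabrication signal, per the table above."""
    return len(counts) >= 3 and max(counts) - min(counts) <= tolerance

print(counts_look_flat([6, 6, 7, 6]))  # → True
print(counts_look_flat([9, 3, 8]))     # → False
```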
### High-Risk Moments
| Moment | Why It's Risky | What to Watch |
| :-- | :-- | :-- |
| First substantive task | Maximum bypass pressure after initialization | Count drop from response 1 to 2 |
> These patterns are observable. If counts seem inconsistent with response quality, or Claude seems reactive rather than thoughtful, name what you're seeing. Collaboration includes recalibration.