This document clarifies the intended use, and explicit non-use, of key terms that are commonly misunderstood in discussions of advanced autonomous systems.
It exists solely to reduce semantic confusion. It does not introduce new concepts, requirements, or evaluations.
This document applies only to terminology usage within this repository.
It does not redefine legal, technical, ethical, or policy terminology outside this context.
In this repository, "control" does NOT mean:
- real-time command over system behavior
- guaranteed outcome enforcement
- technical dominance over internal decision processes
When used, "control" refers only to institutional or procedural constraints on authorization, deployment, or repetition.
"Safety" is NOT used here to imply:
- risk elimination
- harm prevention guarantees
- compliance with any specific safety framework
When referenced, "safety" refers only to post-event institutional handling and stabilization.
"Alignment" does NOT mean:
- value alignment
- moral alignment
- objective-function alignment
The term does not describe internal system properties; it may appear only in descriptions of external institutional consistency.
"Autonomy" does NOT imply:
- agency
- intent
- moral responsibility
It is used descriptively to indicate operational independence from immediate human intervention.
An "incident" refers strictly to a high-impact operational outcome, regardless of intent, fault, or legality.
It does NOT imply:
- wrongdoing
- system failure
- ethical violation
"Responsibility" refers only to institutional attribution mechanisms.
It does NOT imply:
- moral blame
- legal guilt
- individual fault assessment
This document does NOT:
- define best practices
- recommend policy actions
- assess system desirability
- propose preventive measures
This clarification exists to prevent post-incident semantic escalation, where ambiguous terminology drives unnecessary conflict or acceleration.
This material is released into the public domain under CC0. No attribution is required.