
Commit 1c2ba0d (parent 64da4d6)

docs: remove remaining horizontal separators from root README

File tree

1 file changed: +0 −4 lines


README.md

Lines changed: 0 additions & 4 deletions
@@ -26,8 +26,6 @@ This transforms AI security from "Best-Effort" Zero-trust to **Privacy-First Ver
 ### 4. The Regulator (e.g., Office of the Comptroller of the Currency (OCC), European Central Bank (ECB), or Securities and Exchange Commission (SEC))
 * **Core Use Case:** **Automated Regulatory Audit.** Traditional audit models provide visibility through coarse data logging, but applying this approach to AI creates a **Privacy Liability Paradox**: the more granular the audit (e.g., logging raw prompts and outputs), the higher the ingestion risk for sensitive PII and proprietary secrets. The **Regulator** requires real-time, cryptographically verifiable proof-of-compliance demonstrating that (1) all data ingested into AI systems (training data, Retrieval-Augmented Generation / RAG vector stores) was properly redacted and provenance-verified, and (2) every AI interaction across the enterprise strictly followed mandatory policy (trusted hardware, untampered models, and data residency), all without the liability of raw data ingestion or the exposure of proprietary prompt logic. This supports the reproducibility and documentation principles required by the **Model Risk Management (MRM)** regulatory framework and **Federal Reserve Supervisory Letter SR 11-7** (Supervisory Guidance on Model Risk Management, issued jointly with the OCC).
 
----
-
 ## Technical Challenges for Addressing Use Cases
 
 To address the above use cases, we must solve the unique technical problems below. These problems are not unique to AI or financial services, but they are especially critical to the security, privacy, and compliance of the use cases above.
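As a concrete illustration of the regulator's requirement above (verifiable proof-of-compliance without raw data ingestion), here is a minimal hash-chain sketch in Python. All names (`make_record`, `verify_chain`, `ATTEST_KEY`) are hypothetical, and an HMAC over a symmetric key stands in for what would in practice be a TEE-backed attestation signature:

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a key that would live inside a TEE and be
# covered by hardware attestation; not part of any real product API.
ATTEST_KEY = b"enclave-held-signing-key"

def make_record(redacted_text: str, policy_ok: bool, prev_mac: str) -> dict:
    """Emit an audit record carrying only a digest of the redacted
    interaction, a policy verdict, and a link to the previous record."""
    body = {
        "digest": hashlib.sha256(redacted_text.encode()).hexdigest(),
        "policy_ok": policy_ok,
        "prev": prev_mac,
    }
    mac = hmac.new(ATTEST_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**body, "mac": mac}

def verify_chain(records: list, genesis: str) -> bool:
    """Regulator-side check: each record is authentic and the chain is
    unbroken, without the auditor ever seeing a raw prompt or output."""
    prev = genesis
    for rec in records:
        body = {k: rec[k] for k in ("digest", "policy_ok", "prev")}
        expected = hmac.new(ATTEST_KEY,
                            json.dumps(body, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        if rec["prev"] != prev or not hmac.compare_digest(expected, rec["mac"]):
            return False
        prev = rec["mac"]
    return True
```

A tampered verdict or a reordered record breaks either the MAC or the `prev` link, so the whole chain fails verification, which is the property the audit use case relies on.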
@@ -63,8 +61,6 @@ BYOD devices are unmanaged and unverified, making them a significant security ri
 Edge nodes are often in untrusted physical locations, making them vulnerable to physical tampering and unauthorized environment modification.
 * **Example (Use Case 2 - Enterprise Employee):** A branch server used by Relationship Managers is physically compromised or stolen. Traditional software-based security cannot detect hardware tampering, allowing attackers to extract AI model weights and sensitive customer PII.
 
----
-
 ## The Three-Layer Trust Architecture: Fusing Silicon, Identity, and Governance
 
 **AegisSovereignAI** bridges Infrastructure Security (Layer 1 in Figure 2) and AI Governance (Layer 3 in Figure 2) by serving as a unifying control plane. Through a **Unified and Extensible Identity (Layer 2 in Figure 2)** framework, it cryptographically fuses workload and user identities, binding silicon-level attestation to application-level governance while preserving privacy, to create a single, cohesive identity architecture.
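The "fusing" idea in the paragraph above can be sketched as follows, assuming a TEE quote has already been reduced to a measurement hash. Every name here (`issue_identity`, `verify_identity`, `CA_KEY`) is illustrative, and an HMAC stands in for the control plane's real (asymmetric, hardware-rooted) signing key:

```python
import hashlib
import hmac
import json

# Illustrative stand-in for the control plane's issuing key; a real
# deployment would use an asymmetric key pair rooted in hardware.
CA_KEY = b"control-plane-issuing-key"

def issue_identity(user: str, tee_measurement: str, policy: str) -> dict:
    """Fuse a user identity, the attested workload measurement, and the
    governance policy into one signed claim set."""
    claims = {"sub": user, "tee_measurement": tee_measurement,
              "policy": policy}
    sig = hmac.new(CA_KEY, json.dumps(claims, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_identity(token: dict, expected_measurement: str) -> bool:
    """Accept the token only if the signature holds AND the workload still
    matches the attested measurement: user identity and silicon identity
    stand or fall together."""
    sig = hmac.new(CA_KEY, json.dumps(token["claims"], sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, token["sig"])
            and token["claims"]["tee_measurement"] == expected_measurement)
```

Because the measurement is inside the signed claim set, a workload whose silicon-level attestation has drifted cannot reuse a previously issued identity, which is the cohesion the three-layer architecture aims for.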
