Goal: Your first bias audit in under 60 seconds.
Building High-Risk AI requires a fundamental shift in how we approach testing. It is no longer enough to check for technical accuracy (e.g., F1 score); we must now prove, with verifiable evidence, that the system respects fundamental rights (such as non-discrimination) and meets requirements like data quality, as mandated by the EU AI Act.
Venturalítica automates this by treating "Assurance" as a dependency. Instead of vague legal requirements, you define strict policies (OSCAL) that your model must pass before it can be deployed. This turns compliance into a deterministic engineering problem.

!!! question "Is my System High-Risk?"
    According to Article 6 of the EU AI Act, a system is High-Risk if it is covered by Annex I (safety components such as machinery or medical devices) or listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice/democracy).
The Translation Layer:

- **Fundamental Risk:** "The model must not discriminate against protected groups" (Art. 9).
- **Policy Control:** "Disparate Impact Ratio must be > 0.8".
- **Code Assertion:** `assert calculated_metric > 0.8`.

When you run `quickstart()`, you are technically running a Unit Test for Ethics.
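The "Unit Test for Ethics" framing can be made literal: a fairness threshold is just an assertion over a computed metric. A minimal sketch of that idea (the function names here are illustrative, not the SDK's API):

```python
def disparate_impact(rate_unprivileged: float, rate_privileged: float) -> float:
    """Ratio of favorable-outcome rates between two groups (the 80% rule)."""
    return rate_unprivileged / rate_privileged

def test_gender_bias():
    # Illustrative rates; in practice these come from your dataset.
    ratio = disparate_impact(0.45, 0.55)
    assert ratio > 0.8, f"Disparate impact {ratio:.3f} violates the 80% rule"

test_gender_bias()  # passes: 0.45 / 0.55 ≈ 0.818 > 0.8
```

The same test runner that gates a deploy on failing unit tests can now gate it on failing fairness checks.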
```bash
pip install venturalitica
```

```python
import venturalitica as vl
vl.quickstart('loan')
```

Output:
```text
[Venturalítica {{ version }}] 🎓 Scenario: Credit Scoring Fairness
[Venturalítica {{ version }}] 📊 Loaded: UCI Dataset #144 (1000 samples)

CONTROL               DESCRIPTION                             ACTUAL   LIMIT   RESULT
────────────────────────────────────────────────────────────────────────────────────────────────
credit-data-imbalanc  Data Quality: Minority class repres...  0.429    > 0.2   ✅ PASS
credit-data-bias      Disparate impact ratio follows the ...  0.818    > 0.8   ✅ PASS
credit-age-disparate  Age disparate impact ratio > 0.5        0.286    > 0.5   ❌ FAIL
────────────────────────────────────────────────────────────────────────────────────────────────
Audit Summary: ❌ VIOLATION | 2/3 controls passed
```
!!! info
    The audit detected age-based bias in the UCI German Credit dataset: the age disparate impact ratio (0.286) is well below the 0.5 threshold set by the policy.
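To see how a ratio like 0.286 arises, here is a toy computation with made-up group counts (illustrative only, not the actual UCI data):

```python
# Hypothetical loan-approval counts per age group (illustrative numbers).
young = {"approved": 2, "total": 10}   # approval rate 0.20
older = {"approved": 7, "total": 10}   # approval rate 0.70

rate_young = young["approved"] / young["total"]
rate_older = older["approved"] / older["total"]

# Disparate impact: unprivileged group's rate over the privileged group's rate.
ratio = rate_young / rate_older
print(f"{ratio:.3f}")  # → 0.286, below the 0.5 threshold, so the control fails
```

A ratio of 1.0 would mean both groups receive favorable outcomes at the same rate; the further below the threshold, the stronger the evidence of disparate treatment.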
The `quickstart()` function is a wrapper that performs the full compliance lifecycle in one go:

- **Downloads Data:** Fetches the UCI German Credit dataset.
- **Loads Policy:** Reads `data_policy.oscal.yaml`, which defines the fairness rules.
- **Enforces:** Runs the audit (`vl.enforce`).
- **Records:** Captures the evidence (`trace.json`) for the dashboard.
Here's the equivalent "manual" code:
```python
from ucimlrepo import fetch_ucirepo
import venturalitica as vl

# 1. Load Data (The "Risk Source")
dataset = fetch_ucirepo(id=144)
df = dataset.data.features
df['class'] = dataset.data.targets

# 2. Define the Policy (The "Law")
# Create data_policy.oscal.yaml (see Academy Level 1 for the full policy file)

# 3. Run the Audit (The "Test")
# This automatically generates the Evidence Bill of Materials (BOM)
with vl.monitor("manual_audit"):
    vl.enforce(
        data=df,
        target="class",        # The outcome (True/False)
        gender="Attribute9",   # Protected Group A
        age="Attribute13",     # Protected Group B
        policy="data_policy.oscal.yaml",
    )
```

The policy (`data_policy.oscal.yaml`) is the bridge. It tells the SDK what to check so you don't have to hardcode it.
```yaml
# ... inside the OSCAL YAML ...
- control-id: credit-data-bias
  description: "Disparate impact ratio must be > 0.8 (80% rule)"
  props:
    - name: metric_key
      value: disparate_impact   # <--- The Python function to call
    - name: threshold
      value: "0.8"              # <--- The limit to enforce
    - name: operator
      value: ">"                # <--- The logic (> 0.8)
    - name: "input:dimension"
      value: gender             # <--- Maps to "Attribute9"
```

This design decouples Assurance (the policy file) from Engineering (the Python code).
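Conceptually, an enforcement engine resolves each control to a metric function, an operator, and a threshold, then compares. A minimal sketch of that dispatch (this is a simplified illustration, not the SDK's internal code):

```python
import operator

# Map the policy's operator strings to Python comparison functions.
OPERATORS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def evaluate_control(control: dict, metrics: dict) -> bool:
    """Check one OSCAL-style control against already-computed metric values."""
    props = {p["name"]: p["value"] for p in control["props"]}
    actual = metrics[props["metric_key"]]
    compare = OPERATORS[props["operator"]]
    return compare(actual, float(props["threshold"]))

control = {
    "control-id": "credit-data-bias",
    "props": [
        {"name": "metric_key", "value": "disparate_impact"},
        {"name": "threshold", "value": "0.8"},
        {"name": "operator", "value": ">"},
    ],
}
print(evaluate_control(control, {"disparate_impact": 0.818}))  # True → PASS
```

Because the threshold and operator live in the policy file, tightening a rule (say, from 0.8 to 0.9) requires no code change, only a policy change.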
Without this mechanism, your AI model is a legal "Black Box":

- **Liability:** You cannot prove you checked for bias before deployment (Art. 9).
- **Fragility:** Compliance is a manual checklist, easily forgotten or skipped.
- **Opacity:** Auditors cannot see the link between your code and the law.
By running `quickstart()`, you have just generated an immutable Compliance Artifact. Even if the laws change, your evidence remains.
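One common way to make such an artifact tamper-evident (shown purely as an illustration; the source does not specify how the SDK stores evidence) is to record a cryptographic hash of the serialized trace:

```python
import hashlib
import json

# Hypothetical evidence payload standing in for a trace file's contents.
evidence = {"run": "quickstart_loan", "controls_passed": 2, "controls_total": 3}

# Canonical serialization (sorted keys) so the same evidence always hashes the same.
payload = json.dumps(evidence, sort_keys=True).encode()
digest = hashlib.sha256(payload).hexdigest()

# Any later change to the evidence changes the digest, so an auditor can
# verify that the artifact is the one produced at audit time.
print(len(digest))  # → 64 hex characters
```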
Now that we have the evidence (the "Black Box" recording), let's inspect it in the Regulatory Map.
```bash
pip install "venturalitica[dashboard]"  # Required for the UI
venturalitica ui
```

Navigate through the Compliance Map tabs:
- **Article 9 (Risk):** See the failed `credit-age-disparate` control. This is your technical evidence of "Risk Monitoring".
- **Article 10 (Data):** See the data distribution and quality checks.
- **Article 13 (Transparency):** Review the "Transparency Feed" to see your Python dependencies (BOM).
The final step is to turn this evidence into a legal document.
- In the Dashboard, go to the "Generation" tab.
- Select "English" (or Spanish/Catalan/Euskera).
- Click "Generate Annex IV".
Venturalítica will draft a technical document that references your specific run:

> "As evidenced in `trace_quickstart_loan.json`, the system was audited against [OSCAL Policy: Credit Scoring Fairness]. A deviation was detected in Age Disparity (0.286), identifying a potential risk of bias..."
- **Policy Used:** `loan/data_policy.oscal.yaml`
- **Legal Basis:**
- API Reference -- Full function signatures and parameters
- Policy Authoring Guide -- Write your own OSCAL policies from scratch
- Metrics Reference -- All 35+ available metrics
- Venturalitica Academy -- Guided learning path from Engineer to Architect