Credit‑Scoring AI‑Act Demo

Demo project to showcase EU AI Act compliance artifacts with ZenML.

Project Structure

credit_scoring_ai_act/
├── data/
│   ├── raw/ # public synthetic loan data (CSV)
│   └── snapshots/ # auto-versioned by data_versioning step
├── pipelines/
│   ├── credit_scoring.py # "happy path" train → deploy
│   ├── monitor.py # scheduled drift checker
│   └── retrain.py # triggered by incident label
├── steps/
│   ├── data_loader.py # load CSV → log SHA-256, WhyLogs profile
│   ├── data_preprocessor.py # basic feature engineering, outputs train/test splits
│   ├── data_splitter.py # split dataset into train/test
│   ├── generate_compliance_metadata.py # generate compliance metadata
│   ├── train.py # XGBoost / sklearn model
│   ├── evaluate.py # standard metrics + Fairlearn/Aequitas scan
│   ├── approve.py # human-in-the-loop gate (approve_deployment step)
│   ├── post_market_monitoring.py # post-market monitoring
│   ├── post_run_annex.py # generate Annex IV documentation
│   ├── risk_assessment.py # risk assessment
│   └── deploy.py # push to Modal / local FastAPI
│
├── compliance/
│   ├── templates/
│   │   ├── annex_iv_template.j2 # Annex IV template
│   │   └── fria_template.md # rights-impact narrative
│   ├── records/ # automated compliance records
│   ├── manual_fills/ # manual compliance inputs
│   ├── monitoring/ # monitoring records
│   ├── deployment_records/ # deployment history
│   └── approval_records/ # approval history
│
├── reports/ # auto-generated docs, PDFs, JSON logs
│   ├── annex_iv_<run>.pdf
│   ├── model_card_<run>.json
│   └── incident_log.json
├── utils/ # shared utilities
├── configs/ # configuration files
└── README.md
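
To make the ingest step concrete, here is a minimal sketch in the spirit of steps/data_loader.py: it loads the raw CSV and records its SHA-256 fingerprint so downstream steps and the Annex IV generator can reference the exact dataset version. The file path, output names, and the decision to return the hash as a separate step output are illustrative assumptions, not code from the repository.

```python
import hashlib
from typing import Annotated, Tuple

import pandas as pd
from zenml import step


@step
def data_loader(
    csv_path: str = "data/raw/loans.csv",  # hypothetical path under data/raw/
) -> Tuple[
    Annotated[pd.DataFrame, "raw_data"],
    Annotated[str, "dataset_sha256"],
]:
    """Load the raw loan CSV and record its SHA-256 fingerprint."""
    with open(csv_path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    df = pd.read_csv(csv_path)
    # The real step also logs a WhyLogs profile of df (see the file listing);
    # here we only return the hash so later steps can pin the dataset version.
    return df, sha256
```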

Pipeline graph

  1. ingest → preprocess → train → evaluate → approve_deploy → deploy (the "happy path"; a sketch follows this list)
  2. Scheduled: monitor (daily) → report_incident on drift.
     Once the incident is closed, the GitHub label retrain triggers the retrain pipeline.
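
A rough sketch of how the happy-path pipeline in pipelines/credit_scoring.py might wire these steps together is shown below. The imports mirror the file names under steps/, but the exact function names, signatures, and the way approval gates deployment are assumptions.

```python
from zenml import pipeline

# These imports follow the file names listed under steps/; the actual
# function names and signatures in the repository may differ.
from steps.data_loader import data_loader
from steps.data_preprocessor import data_preprocessor
from steps.train import train
from steps.evaluate import evaluate
from steps.approve import approve_deployment
from steps.deploy import deploy


@pipeline
def credit_scoring():
    df, dataset_sha256 = data_loader()
    train_set, test_set = data_preprocessor(df)
    model = train(train_set)
    metrics = evaluate(model, test_set)
    # Human-in-the-loop gate (Article 14): deployment only proceeds once a
    # reviewer signs off; the approval record lands in approval_records/.
    approved = approve_deployment(metrics)
    deploy(model, approved)
```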

Where each compliance artifact is produced

| Step / Hook | AI-Act Article(s) | Output artefact |
|---|---|---|
| ingest | 10, 12 | dataset_info metadata, SHA-256 |
| preprocess | 10, 12 | preprocessing_info_*.json, data quality logs |
| evaluate | 15 | fairness_metrics, metrics.json |
| approve_deploy | 14 | approval.log (approver, rationale) |
| post_run_annex.py | 11, 12 | annex_iv_<run>.pdf |
| monitor.py | 17 | drift scores in reports/drift_<date>.json |
| incident_webhook.py | 18 | incident_log.json, external ticket |
| risk_assessment.py | 9 | risk scores, risk register updates |
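
As an illustration of the Annex IV documentation path (post_run_annex.py plus compliance/templates/annex_iv_template.j2), a minimal Jinja2 rendering helper could look like the following. The context keys, output file name, and the PDF conversion step are assumptions for illustration.

```python
from pathlib import Path

from jinja2 import Environment, FileSystemLoader


def render_annex_iv(run_id: str, context: dict) -> Path:
    """Render the Annex IV template for a given pipeline run."""
    env = Environment(loader=FileSystemLoader("compliance/templates"))
    template = env.get_template("annex_iv_template.j2")
    rendered = template.render(run_id=run_id, **context)
    out_path = Path("reports") / f"annex_iv_{run_id}.md"
    out_path.write_text(rendered)
    # A separate conversion step (e.g. pandoc or weasyprint) would turn this
    # Markdown into the annex_iv_<run>.pdf listed under reports/.
    return out_path
```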

Compliance Directory Structure

| Directory | Purpose | Auto/Manual |
|---|---|---|
| records/ | Automated compliance records from pipeline runs | Auto |
| manual_fills/ | Manual compliance inputs and preprocessing info | Manual |
| monitoring/ | Post-market monitoring records and drift detection logs | Auto |
| deployment_records/ | Model deployment history and model cards | Auto |
| approval_records/ | Human approval records and rationales | Manual |
| templates/ | Jinja templates for documentation generation | Manual |

"Definition of Done"

zenml up && zenml run credit_scoring
# -> reports/annex_iv_<run>.pdf exists
# -> fairness_metrics logged
# -> approval gate records reviewer
# -> drift monitor cronjob scheduled
# -> compliance/records/ contains run artifacts
# -> compliance/manual_fills/ contains required inputs
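
A small, hypothetical sanity check that mirrors these expectations (paths follow the repository layout above; the run id is whatever identifier the pipeline run produced):

```python
from pathlib import Path


def check_definition_of_done(run_id: str) -> bool:
    """Print which of the expected compliance artifacts exist for a run."""
    expected = [
        Path("reports") / f"annex_iv_{run_id}.pdf",
        Path("reports") / f"model_card_{run_id}.json",
        Path("compliance") / "records",
        Path("compliance") / "manual_fills",
    ]
    all_present = True
    for path in expected:
        present = path.exists()
        all_present &= present
        print(f"[{'OK' if present else 'MISSING'}] {path}")
    return all_present
```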
