---
title: Ontology Reasoning Pipeline Architecture
description: Complete Guide to OWL Reasoning Integration with whelk-rs
category: explanation
tags:
- architecture
- api
- backend
updated-date: 2025-12-18
difficulty-level: advanced
---
## Overview

The Ontology Reasoning Pipeline provides complete OWL 2 EL++ reasoning capabilities for VisionFlow, enabling automatic inference of class hierarchies, disjoint classes, and axiom enrichment.

**Location:** `/src/services/ontology_reasoning_service.rs` (473 lines)

This central reasoning service integrates the whelk-rs EL++ reasoner with the VisionFlow ontology system, and provides:
- ✅ Full whelk-rs Integration: Native Rust OWL 2 EL++ reasoning
- ✅ Three Core Methods:
  - `infer_axioms()` - infers missing axioms with confidence scores
  - `get_class_hierarchy()` - computes the complete class hierarchy tree
  - `get_disjoint_classes()` - identifies disjoint class pairs
- ✅ Blake3-based Inference Caching: High-performance hashing
- ✅ Database Persistence: `inference_cache` table for results
- ✅ Automatic Cache Invalidation: On ontology changes
- ✅ Comprehensive Error Handling: Production-ready
## Core Methods

### `infer_axioms`

Infers missing axioms from the ontology using whelk-rs reasoning.

```rust
pub async fn infer_axioms(
    &self,
    ontology_id: &str,
) -> Result<Vec<InferredAxiom>, ServiceError>
```

**Returns:** a list of inferred axioms, each with:

- Axiom type (`SubClassOf`, `EquivalentClasses`, etc.)
- Subject and object IRIs
- Confidence score (0.0-1.0)
- Reasoning method used

**Example:**

```rust
let service = OntologyReasoningService::new(repo);
let inferred = service.infer_axioms("default").await?;
for axiom in inferred {
    println!("{}: {} → {} (confidence: {})",
        axiom.axiom_type,
        axiom.subject_iri,
        axiom.object_iri,
        axiom.confidence
    );
}
```

### `get_class_hierarchy`

Computes the complete class hierarchy with depth and parent-child relationships.
```rust
pub async fn get_class_hierarchy(
    &self,
    ontology_id: &str,
) -> Result<ClassHierarchy, ServiceError>
```

**Returns:** a hierarchical tree structure with:

- Root classes (no parents)
- Parent-child relationships
- Depth calculations
- Descendant counts

**Example:**

```rust
let hierarchy = service.get_class_hierarchy("default").await?;
println!("Root classes: {:?}", hierarchy.root_classes);
for (iri, node) in hierarchy.hierarchy {
    println!("{} (depth: {}, children: {})",
        node.label,
        node.depth,
        node.children_iris.len()
    );
}
```

### `get_disjoint_classes`

Identifies all disjoint class pairs from the ontology.
```rust
pub async fn get_disjoint_classes(
    &self,
    ontology_id: &str,
) -> Result<Vec<DisjointClassPair>, ServiceError>
```

**Returns:** pairs of classes that cannot have common instances.

**Example:**

```rust
let disjoint = service.get_disjoint_classes("default").await?;
for pair in disjoint {
    println!("{} disjoint with {}", pair.class_a, pair.class_b);
}
```

## Inference Caching

**Database table:** `inference_cache`
```sql
CREATE TABLE IF NOT EXISTS inference_cache (
    cache_key TEXT PRIMARY KEY,
    ontology_id TEXT NOT NULL,
    cache_type TEXT NOT NULL,
    result_data TEXT NOT NULL,
    created_at TEXT NOT NULL,
    ontology_hash TEXT NOT NULL
);
```

Cache keys use Blake3 for fast, cryptographic-quality hashing:

```rust
let cache_key = blake3::hash(
    format!("{}:{}:{}", ontology_id, cache_type, ontology_hash).as_bytes()
).to_hex();
```

The cache is invalidated automatically on ontology changes:

- Tracks the ontology content hash
- Detects modifications automatically
- Regenerates cache entries as needed
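Because the ontology hash is part of the cache key, invalidation falls out for free: a modified ontology hashes differently, so its old entries simply never match. A minimal in-memory sketch of that idea (using std's `DefaultHasher` as a stand-in for Blake3; the `InferenceCache` type here is hypothetical, not the production implementation):

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for the Blake3 digest used in production: derive a cache key
/// from the ontology id, cache type, and the ontology's content hash.
fn cache_key(ontology_id: &str, cache_type: &str, ontology_hash: &str) -> u64 {
    let mut h = DefaultHasher::new();
    format!("{}:{}:{}", ontology_id, cache_type, ontology_hash).hash(&mut h);
    h.finish()
}

/// In-memory sketch of the inference cache: entries are keyed by the
/// ontology hash they were computed against, so a changed ontology
/// misses naturally (automatic invalidation).
pub struct InferenceCache {
    entries: HashMap<u64, String>, // key -> serialized result
}

impl InferenceCache {
    pub fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    pub fn get(&self, ontology_id: &str, cache_type: &str, ontology_hash: &str) -> Option<&String> {
        self.entries.get(&cache_key(ontology_id, cache_type, ontology_hash))
    }

    pub fn put(&mut self, ontology_id: &str, cache_type: &str, ontology_hash: &str, result: String) {
        self.entries.insert(cache_key(ontology_id, cache_type, ontology_hash), result);
    }
}
```

The production service persists the same mapping in the `inference_cache` table rather than a `HashMap`, but the key derivation is the load-bearing part.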
## Actor Integration

**Location:** `/src/actors/ontology_actor.rs`

The `OntologyActor` coordinates reasoning operations:

```rust
#[derive(Message)]
#[rtype(result = "Result<(), Error>")]
pub struct TriggerReasoning {
    pub ontology_id: String,
}

impl Handler<TriggerReasoning> for OntologyActor {
    type Result = ResponseActFuture<Self, Result<(), Error>>;

    fn handle(&mut self, msg: TriggerReasoning, _ctx: &mut Self::Context) -> Self::Result {
        // 1. Call reasoning service
        // 2. Update graph with inferred axioms
        // 3. Invalidate caches
        // 4. Notify subscribers
    }
}
```

Reasoning is triggered automatically during GitHub sync:
```text
GitHub Push Event
  ↓
GitHub Sync Service
  ↓
OWL File Updated
  ↓
TriggerReasoning Message
  ↓
OntologyReasoningService
  ↓
Inference Results
  ↓
Graph Update
```
## Reasoning Flow

```text
1. Initial Request
   ↓
2. Check Cache (Blake3 hash lookup)
   ├─ Cache Hit  → Return cached results
   └─ Cache Miss → Continue to reasoning
   ↓
3. Load Ontology from Repository
   ↓
4. Parse OWL with hornedowl
   ↓
5. Run whelk-rs EL++ Reasoner
   ↓
6. Extract Inferred Axioms
   ↓
7. Calculate Confidence Scores
   ↓
8. Store in Cache (with ontology hash)
   ↓
9. Return Results
```
## Data Models

```rust
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct InferredAxiom {
    pub axiom_type: String,       // "SubClassOf", "EquivalentClasses", etc.
    pub subject_iri: String,      // Subject class IRI
    pub object_iri: String,       // Object class IRI
    pub confidence: f64,          // 0.0-1.0
    pub reasoning_method: String, // "whelk-el++"
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ClassHierarchy {
    pub root_classes: Vec<String>,
    pub hierarchy: HashMap<String, ClassNode>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ClassNode {
    pub iri: String,
    pub label: String,
    pub parent_iri: Option<String>,
    pub children_iris: Vec<String>,
    pub node_count: usize, // Descendant count
    pub depth: usize,      // Hierarchy depth
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct DisjointClassPair {
    pub class_a: String,
    pub class_b: String,
}
```

## Performance

- Inference: O(n²) worst-case for EL++ (n = axioms)
- Hierarchy Computation: O(n) with memoization (n = classes)
- Cache Lookup: O(1) average (Blake3 + HashMap)
- Descendant Count: O(n) with memoization

Optimizations:

- Memoization: Prevents redundant recursive calculations
- Blake3 Hashing: Fast cryptographic-quality hashing
- Database Caching: Persistent results across requests
- Lazy Loading: On-demand reasoning only
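The memoized depth and descendant-count computations can be sketched as free functions over plain maps (hypothetical helpers mirroring `ClassNode`'s `parent_iri`/`children_iris` fields, not the service's internals). Each IRI is computed once and cached, giving the O(n) totals above:

```rust
use std::collections::HashMap;

/// Memoized hierarchy depth: depth(root) = 0, depth(child) = depth(parent) + 1.
/// `parents` maps each class IRI to its parent IRI (roots are absent).
pub fn depth(
    iri: &str,
    parents: &HashMap<String, String>,
    memo: &mut HashMap<String, usize>,
) -> usize {
    if let Some(&d) = memo.get(iri) {
        return d;
    }
    let d = match parents.get(iri) {
        Some(parent) => depth(parent, parents, memo) + 1,
        None => 0, // root class
    };
    memo.insert(iri.to_string(), d);
    d
}

/// Memoized descendant count over a child adjacency list, sketching how
/// `node_count` can be filled in O(n) total work.
pub fn descendants(
    iri: &str,
    children: &HashMap<String, Vec<String>>,
    memo: &mut HashMap<String, usize>,
) -> usize {
    if let Some(&n) = memo.get(iri) {
        return n;
    }
    let n = children
        .get(iri)
        .map(|cs| cs.iter().map(|c| 1 + descendants(c, children, memo)).sum())
        .unwrap_or(0); // leaf class
    memo.insert(iri.to_string(), n);
    n
}
```

Sharing one memo map across all classes is what turns the naive quadratic traversal into a single linear pass.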
| Operation | 1,000 Classes | 5,000 Classes | 10,000 Classes |
|---|---|---|---|
| First Inference | ~500ms | ~2s | ~5s |
| Cached Retrieval | <10ms | <15ms | <20ms |
| Hierarchy Build | ~50ms | ~200ms | ~400ms |
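Numbers like these can be spot-checked with a tiny harness around `std::time::Instant` (a hypothetical helper, not part of the codebase):

```rust
use std::time::Instant;

/// Time a closure, returning its result and the elapsed milliseconds;
/// useful for comparing first-inference vs. cached-retrieval latency.
pub fn timed<T>(f: impl FnOnce() -> T) -> (T, u128) {
    let start = Instant::now();
    let out = f();
    (out, start.elapsed().as_millis())
}
```

Wrap the first `infer_axioms` call and a repeat call in `timed` to observe the cache-hit speedup directly.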
## Integration Examples

**REST endpoint (actix-web):**

```rust
use actix_web::{web, HttpResponse};
use crate::services::OntologyReasoningService;

async fn infer_endpoint(
    service: web::Data<OntologyReasoningService>,
    ontology_id: web::Path<String>,
) -> HttpResponse {
    match service.infer_axioms(&ontology_id).await {
        Ok(axioms) => HttpResponse::Ok().json(axioms),
        Err(e) => HttpResponse::InternalServerError().json(e.to_string()),
    }
}
```

**Actor message:**
```rust
use actix::prelude::*;

// Trigger reasoning
let reasoning_service = OntologyReasoningService::new(repo);
let addr = ontology_actor.start();
addr.send(TriggerReasoning {
    ontology_id: "default".to_string(),
}).await?;
```

**GraphQL resolver (async-graphql):**
```rust
use async_graphql::{Context, Object};

#[Object]
impl OntologyQuery {
    async fn inferred_axioms(
        &self,
        ctx: &Context<'_>,
        ontology_id: String,
    ) -> async_graphql::Result<Vec<InferredAxiom>> {
        let service = ctx.data::<OntologyReasoningService>()?;
        service
            .infer_axioms(&ontology_id)
            .await
            .map_err(|e| async_graphql::Error::new(e.to_string()))
    }
}
```

## Configuration

Enable ontology reasoning in configuration:
```toml
[features]
ontology-validation = true
reasoning-cache = true
```

```bash
# Reasoning configuration
REASONING_CACHE_TTL=3600      # Cache lifetime (seconds)
REASONING_TIMEOUT=30000       # Max reasoning time (ms)
REASONING_MAX_AXIOMS=100000   # Axiom limit
```

## Testing

Integration tests are located in `/tests/ontology_reasoning_integration_test.rs` (350+ lines).
```bash
# Run reasoning tests
cargo test --test ontology_reasoning_integration_test

# Test specific functionality
cargo test test_infer_axioms
cargo test test_class_hierarchy
cargo test test_disjoint_classes
```

Test the complete reasoning pipeline:
```rust
#[tokio::test]
async fn test_complete_reasoning_pipeline() {
    let repo = setup_test_repository().await;
    let service = OntologyReasoningService::new(repo);

    // Load test ontology
    load_ontology(&service, "test.owl").await;

    // Trigger reasoning
    let inferred = service.infer_axioms("test").await.unwrap();

    // Verify results
    assert!(!inferred.is_empty());
    assert!(inferred.iter().all(|a| a.confidence > 0.0));
}
```

## Troubleshooting

**Reasoning times out**

- Cause: Large ontology or complex axioms
- Fix: Increase `REASONING_TIMEOUT` or simplify the ontology

**Cache misses on every request**

- Cause: Ontology hash changing on every read
- Fix: Ensure consistent serialization

**Reasoning fails on unsupported axioms**

- Cause: OWL 2 profile incompatibility
- Fix: Verify the ontology is EL++ compatible
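The `REASONING_TIMEOUT` guard can be sketched with a std-only worker thread and `recv_timeout` (an illustration only; an async service like this one would more likely use `tokio::time::timeout`, and `run_with_timeout` is a hypothetical name):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run `work` on a worker thread and give up after `timeout_ms`,
/// returning None if the deadline passes first.
pub fn run_with_timeout<T, F>(timeout_ms: u64, work: F) -> Option<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Ignore send errors: the receiver is gone if we already timed out.
        let _ = tx.send(work());
    });
    rx.recv_timeout(Duration::from_millis(timeout_ms)).ok()
}
```

Note that the worker thread keeps running after a timeout; true cancellation requires cooperation from the reasoner itself.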
Enable detailed logging:

```rust
env_logger::Builder::from_default_env()
    .filter_module("ontology_reasoning", log::LevelFilter::Debug)
    .init();
```

## Future Enhancements

- Incremental Reasoning: Only recompute changed portions
- Parallel Reasoning: Multi-threaded inference
- Explanation Support: Trace inference derivations
- Custom Rule Integration: User-defined reasoning rules
- SWRL Support: Semantic Web Rule Language integration
- ML-based Confidence Scoring: Learn from user feedback
- Distributed Reasoning: Multi-node computation
- Real-time Reasoning: Streaming ontology updates
- Hybrid Reasoning: Combine multiple reasoners
## References

- whelk-rs: https://github.com/ontodev/whelk.rs
- OWL 2 EL Profile: https://www.w3.org/TR/owl2-profiles/#OWL-2-EL
- hornedowl: Rust OWL parser library
- Blake3: https://github.com/BLAKE3-team/BLAKE3
## Related Documentation

- Semantic Physics System - Physics constraint generation
- Hierarchical Visualization - Visual hierarchy rendering
- REST endpoints
- User-facing guide
**Status:** ✅ Production Ready | **Last Updated:** 2025-11-03 | **Implementation:** Complete with comprehensive testing