5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,10 @@
## Changelog

### V1.5.2 Changelog (October 12, 2025)
- Fix `bitsandbytes` install for macOS -- it will now only be installed on Linux.
- Add KGE retriever
- Update KGE documentation

### V1.5.1 Changelog (September 7, 2025)
- Update dependencies.
- Make versioning standardized.
4 changes: 2 additions & 2 deletions CITATION.cff
@@ -17,5 +17,5 @@ keywords:
- "Alignment"
- "Python Library"
license: "Apache-2.0"
version: "1.5.1"
date-released: "2025-07-29"
version: "1.5.2"
date-released: "2025-10-12"
15 changes: 9 additions & 6 deletions docs/source/aligner/rag.rst
@@ -1,24 +1,27 @@
Retrieval-Augmented Generation
================================

LLMs4OM
----------------------------------
**LLMs4OM: Matching Ontologies with Large Language Models**

.. sidebar:: **Reference:**

`LLMs4OM: Matching Ontologies with Large Language Models <https://link.springer.com/chapter/10.1007/978-3-031-78952-6_3>`_

The retrieval augmented generation (RAG) module in OntoAligner is driven by the ``LLMs4OM`` framework, a novel approach for effective ontology alignment using LLMs. This framework utilizes two modules for retrieval and matching, respectively, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. The ``LLMs4OM`` framework can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. The following diagram represents the ``LLMs4OM`` framework.
.. raw:: html

<iframe src="https://videolectures.net/embed/videos/eswc2024_babaei_giglou_language_models?part=1" width="100%" frameborder="0" allowfullscreen style="aspect-ratio:16/9"></iframe>

LLMs4OM
----------------------------------
**LLMs4OM: Matching Ontologies with Large Language Models**

The retrieval augmented generation (RAG) module in OntoAligner is driven by the ``LLMs4OM`` framework, a novel approach for effective ontology alignment using LLMs. This framework utilizes two modules for retrieval and matching, respectively, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. The ``LLMs4OM`` framework can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. The ``LLMs4OM`` framework (shown in the following diagram) offers a RAG approach within LLMs for OM. LLMs4OM uses :math:`O_{source}` as a query :math:`Q(O_{source})` to retrieve possible matches for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`, where :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`. Later, :math:`C_s` and the retrieved :math:`C_t \in C_{target}` are used to query the LLM to check whether the :math:`(C_s, C_t)` pair is a match. As shown in the diagram, the framework comprises four main steps: 1) Concept representation, 2) Retriever model, 3) LLM, and 4) Post-processing. Within OntoAligner, we adapted this workflow into parser, encoder, alignment, post-processing, evaluation, and export steps.

.. raw:: html

<div align="center">
<img src="https://raw.githubusercontent.com/sciknoworg/OntoAligner/refs/heads/dev/docs/source/img/LLMs4OM.jpg" width="80%"/>
</div>

The ``LLMs4OM`` framework offers a RAG approach within LLMs for OM. LLMs4OM uses :math:`O_{source}` as a query :math:`Q(O_{source})` to retrieve possible matches for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`, where :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`. Later, :math:`C_s` and the retrieved :math:`C_t \in C_{target}` are used to query the LLM to check whether the :math:`(C_s, C_t)` pair is a match. As shown in the diagram above, the framework comprises four main steps: 1) Concept representation, 2) Retriever model, 3) LLM, and 4) Post-processing. Within OntoAligner, we adapted this workflow into parser, encoder, alignment, post-processing, evaluation, and export steps.
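
Before going into the detailed usage below, here is a minimal, illustrative sketch of the retrieve-then-match loop described above. The retriever and the LLM call are toy placeholders (plain string similarity and a similarity threshold), not the OntoAligner API or an actual LLM; the sketch only shows how :math:`Q(O_{source})`, :math:`KB(O_{target})`, and the :math:`(C_s, C_t)` verification step fit together.

.. code-block:: python

    from difflib import SequenceMatcher

    def retrieve_candidates(c_s, target_concepts, top_k=3):
        # Stand-in retriever: rank target concepts by string similarity to the query C_s.
        scored = sorted(
            target_concepts,
            key=lambda c_t: SequenceMatcher(None, c_s, c_t).ratio(),
            reverse=True,
        )
        return scored[:top_k]

    def llm_is_match(c_s, c_t):
        # Placeholder for the zero-shot LLM prompt, e.g.
        # "Do '<c_s>' and '<c_t>' refer to the same concept? Answer yes or no."
        # A toy similarity threshold stands in for the LLM's yes/no answer here.
        return SequenceMatcher(None, c_s.lower(), c_t.lower()).ratio() > 0.8

    source_concepts = ["Author", "Paper", "Conference"]               # C_source from O_source
    target_concepts = ["Writer", "Article", "Conference", "Journal"]  # C_target stored in KB(O_target)

    alignments = []
    for c_s in source_concepts:                                  # Q(O_source): each source concept is a query
        for c_t in retrieve_candidates(c_s, target_concepts):    # candidates retrieved from KB(O_target)
            if llm_is_match(c_s, c_t):                            # verify whether (C_s, C_t) is a match
                alignments.append((c_s, c_t))

    print(alignments)  # e.g. [('Conference', 'Conference')]

In the actual pipeline, the retriever is a dense encoder over the chosen concept representation and the match check is a zero-shot LLM prompt; the usage section below shows the real OntoAligner workflow.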


Usage
----------------
18 changes: 18 additions & 0 deletions docs/source/index.rst
@@ -17,6 +17,11 @@ OntoAligner was created by `Scientific Knowledge Organization (SciKnowOrg group)
<strong>The vision is to create a unified hub that brings together a wide range of ontology alignment models, making integration seamless for researchers and practitioners.</strong>
</div>

**Watch the OntoAligner presentation at ESWC 2025.**

.. raw:: html

<iframe src="https://videolectures.net/embed/videos/eswc2025_bernardin_babaei_giglu?part=1" width="100%" frameborder="0" allowfullscreen style="aspect-ratio:16/9"></iframe>

Citing
=========
@@ -47,6 +52,19 @@ or our related work `LLMs4OM: Matching Ontologies with Large Language Models <ht
organization={Springer}
}

or, if you are using Knowledge Graph Embeddings, refer to `OntoAligner Meets Knowledge Graph Embedding Aligners <https://arxiv.org/abs/2509.26417>`_:

.. code-block:: bibtex

@article{babaei2025ontoaligner,
title={OntoAligner Meets Knowledge Graph Embedding Aligners},
author={Babaei Giglou, Hamed and D'Souza, Jennifer and Auer, S{\"o}ren and Sanaei, Mahsa},
journal={arXiv e-prints},
pages={arXiv--2509},
year={2025}
}



.. toctree::
:maxdepth: 1
2 changes: 1 addition & 1 deletion ontoaligner/VERSION
@@ -1 +1 @@
1.5.1
1.5.2