Merged
49 changes: 49 additions & 0 deletions .github/workflows/test-os-compatibility.yml
@@ -0,0 +1,49 @@
name: Cross-platform Compatibility Tests

on:
push:
branches: [main]
pull_request:
branches: [main]

jobs:
os-compatibility-tests:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
python-version: ["3.10"]

steps:
- name: Checkout code
uses: actions/checkout@v4

- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}

- name: Install Poetry
shell: bash
run: |
curl -sSL https://install.python-poetry.org | python -
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "$APPDATA/Python/Scripts" >> $GITHUB_PATH

- name: Configure Poetry and install plugin
shell: bash
run: |
poetry --version
poetry config virtualenvs.create false
poetry self add "poetry-dynamic-versioning[plugin]"

- name: Install dependencies
shell: bash
run: |
poetry install --no-interaction --no-ansi

- name: Run tests
shell: bash
run: |
poetry run pytest
36 changes: 36 additions & 0 deletions docs/source/_static/custom.css
@@ -79,3 +79,39 @@
margin-bottom: 1.2em;
font-size: 1.1em;
}

.video-container {
position: relative;
width: 100%;
max-width: 960px;
margin: 2rem auto;
aspect-ratio: 16 / 9;
}

.video-container iframe {
width: 90%;
height: 90%;
border-radius: 12px;
}

.video-card {
max-width: 960px;
margin: 3rem auto;
padding: 1rem;
background: #f9fafb;
border-radius: 16px;
box-shadow: 0 10px 30px rgb(161, 159, 159);
}

.video-card iframe {
width: 100%;
aspect-ratio: 16 / 9;
border-radius: 12px;
}

.video-caption {
margin-top: 0.75rem;
text-align: center;
font-size: 1.05rem;
color: #0d3b5a;
}
3 changes: 1 addition & 2 deletions docs/source/aligner/kge.rst
@@ -34,8 +34,7 @@ Usage

.. sidebar::

Full code is available at `OntoAligner Repository. <https://github.com/sciknoworg/OntoAligner/blob/main/examples/kge.py>`_

A usage example is available in the `OntoAligner Repository <https://github.com/sciknoworg/OntoAligner/blob/main/examples/kge.py>`_.

This module guides you through a step-by-step process for performing ontology alignment using KGEs and the OntoAligner library. By the end, you’ll understand how to preprocess data, encode ontologies, generate alignments, evaluate results, and save the outputs in XML and JSON formats.
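The final step mentioned above — saving the computed alignments in XML and JSON — can be sketched roughly as follows. The function name, record fields, and XML layout here are illustrative stand-ins, not OntoAligner's actual export API or output schema; see ``examples/kge.py`` in the repository for the real usage.

```python
# Hypothetical sketch of an export step: write alignment pairs to JSON and to
# a minimal alignment XML. Names and format are illustrative, not OntoAligner's.
import json
import xml.etree.ElementTree as ET

def export_alignments(matches, json_path, xml_path):
    # JSON: one {source, target, score} record per correspondence
    records = [{"source": s, "target": t, "score": sc} for s, t, sc in matches]
    with open(json_path, "w") as f:
        json.dump(records, f, indent=2)
    # XML: one <cell> element per correspondence
    root = ET.Element("alignment")
    for s, t, sc in matches:
        cell = ET.SubElement(root, "cell")
        ET.SubElement(cell, "entity1").text = s
        ET.SubElement(cell, "entity2").text = t
        ET.SubElement(cell, "measure").text = str(sc)
    ET.ElementTree(root).write(xml_path)

export_alignments([("ex:Paper", "ex:Article", 0.93)],
                  "alignment.json", "alignment.xml")
```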

31 changes: 21 additions & 10 deletions docs/source/aligner/rag.rst
@@ -2,26 +2,37 @@ Retrieval-Augmented Generation
================================


.. sidebar:: **Reference:**

`LLMs4OM: Matching Ontologies with Large Language Models <https://link.springer.com/chapter/10.1007/978-3-031-78952-6_3>`_

.. raw:: html

<iframe src="https://videolectures.net/embed/videos/eswc2024_babaei_giglou_language_models?part=1" width="100%" frameborder="0" allowfullscreen style="aspect-ratio:16/9"></iframe>

LLMs4OM
----------------------------------
**LLMs4OM: Matching Ontologies with Large Language Models**

The retrieval augmented generation (RAG) module at OntoAligner is driven by a ``LLMs4OM`` framework, a novel approach for effective ontology alignment using LLMs. This framework utilizes two modules for retrieval and matching, respectively, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. The ``LLMs4OM`` framework, can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. The ``LLMs4OM`` framework (as shown in the following diagram) offers a RAG approach within LLMs for OM. LLMs4OM uses :math:`O_{source}` as query :math:`Q(O_{source})` to retrieve possible matches for for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`. Where, :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`. Later, :math:`C_{s}` and obtained :math:`C_t \in C_{target}` are used to query the LLM to check whether the :math:`(C_s, C_t)` pair is a match. As shown in above diagram, the framework comprises four main steps: 1) Concept representation, 2) Retriever model, 3) LLM, and 4) Post-processing. But within the OntoAligner we we adapted the workflow into a parser, encoder, alignment, post-processing, evaluate, and export steps.
The **LLMs4OM: Matching Ontologies with Large Language Models** work introduces a retrieval-augmented generation (RAG) approach to ontology alignment. The RAG module in OntoAligner is driven by the ``LLMs4OM`` framework, a novel approach to effective ontology alignment using LLMs. The framework employs two modules, for retrieval and matching respectively, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. ``LLMs4OM`` can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. The framework is presented in the following diagram.

.. raw:: html

<div align="center">
<img src="https://raw.githubusercontent.com/sciknoworg/OntoAligner/refs/heads/dev/docs/source/img/LLMs4OM.jpg" width="80%"/>
</div>

LLMs4OM offers a RAG approach to OM with LLMs. It uses the source ontology :math:`O_{source}` as a query :math:`Q(O_{source})` to retrieve possible matches for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`, where :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`. Then, :math:`C_s` and each retrieved :math:`C_t \in C_{target}` are used to query the LLM to check whether the pair :math:`(C_s, C_t)` is a match. As shown in the diagram above, the framework comprises four main steps: 1) concept representation, 2) retriever model, 3) LLM, and 4) post-processing. Within OntoAligner, we adapted this workflow into parser, encoder, alignment, post-processing, evaluation, and export steps.
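The retrieve-then-match loop described above can be sketched as follows. Every name here is an illustrative stand-in — a toy token-overlap retriever in place of a real dense retriever over :math:`KB(O_{target})`, and a stubbed verifier in place of the zero-shot LLM prompt — not the OntoAligner API.

```python
# Conceptual sketch of the LLMs4OM retrieve-then-match loop.
# All functions are hypothetical stand-ins, not the OntoAligner API.

def retrieve(source_concept, target_concepts, top_k=2):
    """Toy retriever: rank target concepts by token overlap with the query."""
    def score(target):
        a = set(source_concept.lower().split())
        b = set(target.lower().split())
        return len(a & b) / max(len(a | b), 1)
    return sorted(target_concepts, key=score, reverse=True)[:top_k]

def llm_is_match(source_concept, target_concept):
    """Stand-in for the zero-shot LLM yes/no prompt on a candidate pair."""
    return source_concept.lower() == target_concept.lower()

def align(source_concepts, target_concepts):
    """For each C_s, retrieve candidate C_t's, then ask the 'LLM' to verify."""
    matches = []
    for c_s in source_concepts:
        for c_t in retrieve(c_s, target_concepts):
            if llm_is_match(c_s, c_t):
                matches.append((c_s, c_t))
    return matches

print(align(["Conference Paper", "Author"],
            ["conference paper", "Reviewer", "author"]))
# → [('Conference Paper', 'conference paper'), ('Author', 'author')]
```

In the real framework the retriever operates over one of the three concept representations (concept, concept-parent, concept-children), and post-processing filters the verified pairs before export.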

.. raw:: html

<div class="video-card">
<iframe
src="https://videolectures.net/embed/videos/eswc2024_babaei_giglou_language_models?part=1"
frameborder="0"
allowfullscreen>
</iframe>
<p class="video-caption">
ESWC 2024 Talk — LLMs4OM Presentation by Hamed Babaei Giglou.
</p>
</div>


.. note::

**Reference:** Babaei Giglou, H., D’Souza, J., Engel, F., Auer, S. (2025). LLMs4OM: Matching Ontologies with Large Language Models. In: Meroño Peñuela, A., et al. The Semantic Web: ESWC 2024 Satellite Events. ESWC 2024. Lecture Notes in Computer Science, vol 15344. Springer, Cham. `https://doi.org/10.1007/978-3-031-78952-6_3 <https://doi.org/10.1007/978-3-031-78952-6_3>`_


Usage
----------------
14 changes: 11 additions & 3 deletions docs/source/index.rst
@@ -17,11 +17,19 @@ OntoAligner was created by `Scientific Knowledge Organization (SciKnowOrg group)
<strong>The vision is to create a unified hub that brings together a wide range of ontology alignment models, making integration seamless for researchers and practitioners.</strong>
</div>

**Watch the OntoAligner presentation at ESWC 2025.**

.. raw:: html

<iframe src="https://videolectures.net/embed/videos/eswc2025_bernardin_babaei_giglu?part=1" width="100%" frameborder="0" allowfullscreen style="aspect-ratio:16/9"></iframe>
<div class="video-card">
<iframe
src="https://videolectures.net/embed/videos/eswc2025_bernardin_babaei_giglu?part=1"
frameborder="0"
allowfullscreen>
</iframe>
<p class="video-caption">
ESWC 2025 Talk — OntoAligner Presentation by Hamed Babaei Giglou.
</p>
</div>


Citing
=========