
Commit 993614b

committed 📝 add references and presentation videos.
1 parent 8b02554 commit 993614b

2 files changed: +27 -6 lines changed

docs/source/aligner/rag.rst

Lines changed: 9 additions & 6 deletions

@@ -1,24 +1,27 @@
 Retrieval-Augmented Generation
 ================================

-LLMs4OM
----------------------------------
-**LLMs4OM: Matching Ontologies with Large Language Models**

 .. sidebar:: **Reference:**

     `LLMs4OM: Matching Ontologies with Large Language Models <https://link.springer.com/chapter/10.1007/978-3-031-78952-6_3>`_

-The retrieval augmented generation (RAG) module at OntoAligner is driven by a ``LLMs4OM`` framework, a novel approach for effective ontology alignment using LLMs. This framework utilizes two modules for retrieval and matching, respectively, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. The ``LLMs4OM`` framework, can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. The following diagram represent the ``LLMs4OM`` framework.
+.. raw:: html
+
+    <iframe src="https://videolectures.net/embed/videos/eswc2024_babaei_giglou_language_models?part=1" width="100%" frameborder="0" allowfullscreen style="aspect-ratio:16/9"></iframe>
+
+LLMs4OM
+----------------------------------
+**LLMs4OM: Matching Ontologies with Large Language Models**
+
+The retrieval-augmented generation (RAG) module in OntoAligner is driven by the ``LLMs4OM`` framework, a novel approach to effective ontology alignment with LLMs. The framework uses two modules, one for retrieval and one for matching, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. ``LLMs4OM`` can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. As shown in the diagram below, ``LLMs4OM`` offers a RAG approach to OM with LLMs: it uses :math:`O_{source}` as a query :math:`Q(O_{source})` to retrieve possible matches for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`, where :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`. Then :math:`C_s` and each retrieved :math:`C_t \in C_{target}` are used to query the LLM, which decides whether the pair :math:`(C_s, C_t)` is a match. The framework comprises four main steps: 1) concept representation, 2) retriever model, 3) LLM, and 4) post-processing. Within OntoAligner, this workflow is adapted into parser, encoder, alignment, post-processing, evaluation, and export steps.

 .. raw:: html

     <div align="center">
         <img src="https://raw.githubusercontent.com/sciknoworg/OntoAligner/refs/heads/dev/docs/source/img/LLMs4OM.jpg" width="80%"/>
     </div>

-The ``LLMs4OM`` framework offers a RAG approach within LLMs for OM. LLMs4OM uses :math:`O_{source}` as query :math:`Q(O_{source})` to retrieve possible matches for for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`. Where, :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`. Later, :math:`C_{s}` and obtained :math:`C_t \in C_{target}` are used to query the LLM to check whether the :math:`(C_s, C_t)` pair is a match. As shown in above diagram, the framework comprises four main steps: 1) Concept representation, 2) Retriever model, 3) LLM, and 4) Post-processing. But within the OntoAligner we we adapted the workflow into a parser, encoder, alignment, post-processing, evaluate, and export steps.
-

 Usage
 ----------------
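The retrieve-then-match loop described in the paragraph added to ``rag.rst`` can be illustrated with a short sketch. This is a minimal, library-agnostic illustration, not OntoAligner's actual API: the ``sentence-transformers`` retriever, the ``all-MiniLM-L6-v2`` model, the ``top_k`` value, and the ``llm_is_match`` helper are assumed stand-ins for the framework's retriever and LLM modules.

.. code-block:: python

    # Minimal sketch of the LLMs4OM retrieve-then-match idea.
    # Hypothetical helpers -- not OntoAligner's real API.
    from sentence_transformers import SentenceTransformer, util

    def llm_is_match(source_concept: str, target_concept: str) -> bool:
        """Stand-in for the LLM matching module. In LLMs4OM this is a zero-shot
        prompt asking the LLM whether the (C_s, C_t) pair is a match; here the
        sketch simply accepts every retrieved candidate so it runs end to end."""
        return True

    # 1) Concept representation: plain labels here; LLMs4OM also supports
    #    concept-parent and concept-children representations.
    source_concepts = ["heart", "kidney"]                        # C_source
    target_concepts = ["cardiac organ", "renal organ", "liver"]  # C_target

    # 2) Retriever model: embed C_target once as the knowledge base KB(O_target),
    #    then query it with each C_s.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    kb = encoder.encode(target_concepts, convert_to_tensor=True)

    alignments = []
    for c_s in source_concepts:
        query = encoder.encode(c_s, convert_to_tensor=True)
        hits = util.semantic_search(query, kb, top_k=2)[0]
        # 3) LLM: verify each retrieved candidate pair (C_s, C_t).
        for hit in hits:
            c_t = target_concepts[hit["corpus_id"]]
            if llm_is_match(c_s, c_t):
                alignments.append((c_s, c_t))

    # 4) Post-processing: the full workflow would filter these pairs and then
    #    evaluate/export them; here we just print the raw candidate alignments.
    print(alignments)

In OntoAligner these stages correspond to the parser, encoder, alignment, post-processing, evaluation, and export steps mentioned in the paragraph above.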

docs/source/index.rst

Lines changed: 18 additions & 0 deletions

@@ -17,6 +17,11 @@ OntoAligner was created by `Scientific Knowledge Organization (SciKnowOrg group)
     <strong>The vision is to create a unified hub that brings together a wide range of ontology alignment models, making integration seamless for researchers and practitioners.</strong>
     </div>

+**Watch the OntoAligner presentation at ESWC 2025.**
+
+.. raw:: html
+
+    <iframe src="https://videolectures.net/embed/videos/eswc2025_bernardin_babaei_giglu?part=1" width="100%" frameborder="0" allowfullscreen style="aspect-ratio:16/9"></iframe>

 Citing
 =========

@@ -47,6 +52,19 @@ or our related work `LLMs4OM: Matching Ontologies with Large Language Models <ht
         organization={Springer}
     }

+or, if you are using Knowledge Graph Embeddings, refer to `OntoAligner Meets Knowledge Graph Embedding Aligners <https://arxiv.org/abs/2509.26417>`_:
+
+.. code-block:: bibtex
+
+    @article{babaei2025ontoaligner,
+        title={OntoAligner Meets Knowledge Graph Embedding Aligners},
+        author={Babaei Giglou, Hamed and D'Souza, Jennifer and Auer, S{\"o}ren and Sanaei, Mahsa},
+        journal={arXiv e-prints},
+        pages={arXiv--2509},
+        year={2025}
+    }
+
+

 .. toctree::
     :maxdepth: 1