diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5eb6ccb..1bc24f1 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,10 @@
 ## Changelog
+### V1.5.2 Changelog (October 12, 2025)
+- Fix the `bitsandbytes` install on macOS -- it will now only be installed on Linux.
+- Add KGE retriever.
+- Update KGE documentation.
+
 ### V1.5.1 Changelog (September 7, 2025)
 - Update dependencies.
 - Make versioning standardized.
diff --git a/CITATION.cff b/CITATION.cff
index 320d1a1..d2ad40d 100644
--- a/CITATION.cff
+++ b/CITATION.cff
@@ -17,5 +17,5 @@ keywords:
 - "Alignment"
 - "Python Library"
 license: "Apache-2.0"
-version: "1.5.1"
-date-released: "2025-07-29"
+version: "1.5.2"
+date-released: "2025-10-12"
diff --git a/docs/source/aligner/rag.rst b/docs/source/aligner/rag.rst
index 2e57b84..27e005b 100644
--- a/docs/source/aligner/rag.rst
+++ b/docs/source/aligner/rag.rst
@@ -1,15 +1,20 @@
 Retrieval-Augmented Generation
 ================================
 
-LLMs4OM
-----------------------------------
-**LLMs4OM: Matching Ontologies with Large Language Models**
 
 .. sidebar:: **Reference:** `LLMs4OM: Matching Ontologies with Large Language Models `_
 
-The retrieval augmented generation (RAG) module at OntoAligner is driven by a ``LLMs4OM`` framework, a novel approach for effective ontology alignment using LLMs. This framework utilizes two modules for retrieval and matching, respectively, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. The ``LLMs4OM`` framework, can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. The following diagram represent the ``LLMs4OM`` framework.
+    .. raw:: html
+
+
+
+LLMs4OM
+----------------------------------
+**LLMs4OM: Matching Ontologies with Large Language Models**
+
+The retrieval-augmented generation (RAG) module in OntoAligner is driven by the ``LLMs4OM`` framework, a novel approach for effective ontology alignment using LLMs. The framework utilizes two modules for retrieval and matching, respectively, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. The ``LLMs4OM`` framework can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. As shown in the diagram below, ``LLMs4OM`` offers a RAG approach within LLMs for OM. LLMs4OM uses :math:`O_{source}` as a query :math:`Q(O_{source})` to retrieve possible matches for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`, where :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`. Then, :math:`C_s` and each retrieved :math:`C_t \in C_{target}` are used to query the LLM to check whether the :math:`(C_s, C_t)` pair is a match. The framework comprises four main steps: 1) concept representation, 2) retriever model, 3) LLM, and 4) post-processing. Within OntoAligner, we adapted this workflow into parser, encoder, alignment, post-processing, evaluation, and export steps.
 
 .. raw:: html
 
@@ -17,8 +22,6 @@
 
 
-The ``LLMs4OM`` framework offers a RAG approach within LLMs for OM. LLMs4OM uses :math:`O_{source}` as query :math:`Q(O_{source})` to retrieve possible matches for for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`. Where, :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`.
-Later, :math:`C_{s}` and obtained :math:`C_t \in C_{target}` are used to query the LLM to check whether the :math:`(C_s, C_t)` pair is a match. As shown in above diagram, the framework comprises four main steps: 1) Concept representation, 2) Retriever model, 3) LLM, and 4) Post-processing. But within the OntoAligner we we adapted the workflow into a parser, encoder, alignment, post-processing, evaluate, and export steps.
 
 
 Usage
 ----------------
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 99fc9ee..fe6eb45 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -17,6 +17,11 @@ OntoAligner was created by `Scientific Knowledge Organization (SciKnowOrg group)
 
 The vision is to create a unified hub that brings together a wide range of ontology alignment models, making integration seamless for researchers and practitioners.
 
+**Watch the OntoAligner presentation at EWC-2025.**
+
+.. raw:: html
+
+
 
 Citing
 =========
@@ -47,6 +52,19 @@ or our related work `LLMs4OM: Matching Ontologies with Large Language Models `_:
 
+
+    .. code-block:: bibtex
+
+        @article{babaei2025ontoaligner,
+            title={OntoAligner Meets Knowledge Graph Embedding Aligners},
+            author={Babaei Giglou, Hamed and D'Souza, Jennifer and Auer, S{\"o}ren and Sanaei, Mahsa},
+            journal={arXiv e-prints},
+            pages={arXiv--2509},
+            year={2025}
+        }
+
+
 .. toctree::
    :maxdepth: 1
diff --git a/ontoaligner/VERSION b/ontoaligner/VERSION
index 26ca594..4cda8f1 100644
--- a/ontoaligner/VERSION
+++ b/ontoaligner/VERSION
@@ -1 +1 @@
-1.5.1
+1.5.2
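A note on the RAG workflow described in ``docs/source/aligner/rag.rst`` above: the paragraph outlines retrieval of candidate concepts from :math:`KB(O_{target})` followed by an LLM check of each :math:`(C_s, C_t)` pair. The sketch below only mirrors that control flow in plain Python; it is not OntoAligner's API. ``embed`` and ``ask_llm`` are hypothetical stand-ins for a retriever encoder and an LLM yes/no judge, and cosine-similarity retrieval is just one possible choice.

.. code-block:: python

    """Illustrative LLMs4OM-style RAG loop: retrieve, then verify with an LLM.

    All names here are assumptions for illustration, not OntoAligner symbols.
    """
    from typing import Callable, Dict, List, Tuple

    import numpy as np


    def retrieve_candidates(
        source_concepts: Dict[str, str],      # C_s id -> textual representation
        target_kb: Dict[str, np.ndarray],     # KB(O_target): C_t id -> embedding
        embed: Callable[[str], np.ndarray],   # hypothetical retriever encoder
        top_k: int = 3,
    ) -> Dict[str, List[str]]:
        """Use each source concept as a query Q(O_source) and return the
        top-k most similar target concepts from the knowledge base."""
        target_ids = list(target_kb)
        target_matrix = np.stack([target_kb[t] for t in target_ids])
        candidates: Dict[str, List[str]] = {}
        for cs_id, cs_text in source_concepts.items():
            query = embed(cs_text)
            # Cosine similarity between the query and every target embedding.
            scores = target_matrix @ query / (
                np.linalg.norm(target_matrix, axis=1) * np.linalg.norm(query) + 1e-9
            )
            best = np.argsort(-scores)[:top_k]
            candidates[cs_id] = [target_ids[i] for i in best]
        return candidates


    def verify_with_llm(
        candidates: Dict[str, List[str]],
        ask_llm: Callable[[str, str], bool],  # hypothetical yes/no LLM judge
    ) -> List[Tuple[str, str]]:
        """Ask the LLM whether each (C_s, C_t) candidate pair is a match and
        keep only confirmed pairs; post-processing would filter these further."""
        return [
            (cs, ct)
            for cs, cts in candidates.items()
            for ct in cts
            if ask_llm(cs, ct)
        ]

In OntoAligner itself, the parser, encoder, alignment, post-processing, evaluation, and export steps wrap this kind of loop in configurable components; the snippet only illustrates the retrieve-then-verify idea.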