diff --git a/.github/workflows/test-os-compatibility.yml b/.github/workflows/test-os-compatibility.yml new file mode 100644 index 0000000..2f0231f --- /dev/null +++ b/.github/workflows/test-os-compatibility.yml @@ -0,0 +1,49 @@ +name: Cross-platform Compatibility Tests + +on: + push: + branches: [main] + pull_request: + branches: [main] + +jobs: + os-compatibility-tests: + runs-on: ${{ matrix.os }} + strategy: + fail-fast: false + matrix: + os: [ubuntu-latest, windows-latest, macos-latest] + python-version: ["3.10"] + + steps: + - name: Checkout code + uses: actions/checkout@v4 + + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v5 + with: + python-version: ${{ matrix.python-version }} + + - name: Install Poetry + shell: bash + run: | + curl -sSL https://install.python-poetry.org | python - + echo "$HOME/.local/bin" >> $GITHUB_PATH + echo "$APPDATA/Python/Scripts" >> $GITHUB_PATH + + - name: Configure Poetry and install plugin + shell: bash + run: | + poetry --version + poetry config virtualenvs.create false + poetry self add "poetry-dynamic-versioning[plugin]" + + - name: Install dependencies + shell: bash + run: | + poetry install --no-interaction --no-ansi + + - name: Run tests + shell: bash + run: | + poetry run pytest diff --git a/docs/source/_static/custom.css b/docs/source/_static/custom.css index 0d541eb..672381c 100644 --- a/docs/source/_static/custom.css +++ b/docs/source/_static/custom.css @@ -79,3 +79,39 @@ margin-bottom: 1.2em; font-size: 1.1em; } + +.video-container { + position: relative; + width: 100%; + max-width: 960px; + margin: 2rem auto; + aspect-ratio: 16 / 9; +} + +.video-container iframe { + width: 90%; + height: 90%; + border-radius: 12px; +} + +.video-card { + max-width: 960px; + margin: 3rem auto; + padding: 1rem; + background: #f9fafb; + border-radius: 16px; + box-shadow: 0 10px 30px rgb(161, 159, 159); +} + +.video-card iframe { + width: 100%; + aspect-ratio: 16 / 9; + border-radius: 12px; +} + +.video-caption { + margin-top: 0.75rem; + text-align: center; + font-size: 1.05rem; + color: #0d3b5a; +} diff --git a/docs/source/aligner/kge.rst b/docs/source/aligner/kge.rst index 476eb6a..d269abc 100644 --- a/docs/source/aligner/kge.rst +++ b/docs/source/aligner/kge.rst @@ -34,8 +34,7 @@ Usage .. sidebar:: - Full code is available at `OntoAligner Repository. `_ - + A usage example is available at `OntoAligner Repository. `_ This module guides you through a step-by-step process for performing ontology alignment using a KGEs and the OntoAligner library. By the end, you’ll understand how to preprocess data, encode ontologies, generate alignments, evaluate results, and save the outputs in XML and JSON formats. diff --git a/docs/source/aligner/rag.rst b/docs/source/aligner/rag.rst index 27e005b..aaefc4d 100644 --- a/docs/source/aligner/rag.rst +++ b/docs/source/aligner/rag.rst @@ -2,19 +2,10 @@ Retrieval-Augmented Generation ================================ -.. sidebar:: **Reference:** - - `LLMs4OM: Matching Ontologies with Large Language Models `_ - - .. raw:: html - - - LLMs4OM ---------------------------------- -**LLMs4OM: Matching Ontologies with Large Language Models** -The retrieval augmented generation (RAG) module at OntoAligner is driven by a ``LLMs4OM`` framework, a novel approach for effective ontology alignment using LLMs. This framework utilizes two modules for retrieval and matching, respectively, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. 
The ``LLMs4OM`` framework, can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. The ``LLMs4OM`` framework (as shown in the following diagram) offers a RAG approach within LLMs for OM. LLMs4OM uses :math:`O_{source}` as query :math:`Q(O_{source})` to retrieve possible matches for for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`. Where, :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`. Later, :math:`C_{s}` and obtained :math:`C_t \in C_{target}` are used to query the LLM to check whether the :math:`(C_s, C_t)` pair is a match. As shown in above diagram, the framework comprises four main steps: 1) Concept representation, 2) Retriever model, 3) LLM, and 4) Post-processing. But within the OntoAligner we we adapted the workflow into a parser, encoder, alignment, post-processing, evaluate, and export steps.
+The **LLMs4OM: Matching Ontologies with Large Language Models** work introduces a retrieval-augmented generation (RAG) approach for ontology alignment. The RAG module in OntoAligner is driven by the ``LLMs4OM`` framework, a novel approach for effective ontology alignment with LLMs. The framework uses two modules, one for retrieval and one for matching, enhanced by zero-shot prompting across three ontology representations: concept, concept-parent, and concept-children. ``LLMs4OM`` can match and even surpass the performance of traditional OM systems, particularly in complex matching scenarios. The framework is presented in the following diagram.

.. raw:: html

@@ -22,6 +13,26 @@

+LLMs4OM offers a RAG approach within LLMs for OM. It uses :math:`O_{source}` as a query :math:`Q(O_{source})` to retrieve possible matches for any :math:`C_s \in C_{source}` from :math:`C_{target} \in O_{target}`, where :math:`C_{target}` is stored in the knowledge base :math:`KB(O_{target})`. Then, :math:`C_s` and each retrieved :math:`C_t \in C_{target}` are used to query the LLM to check whether the :math:`(C_s, C_t)` pair is a match. As shown in the diagram above, the framework comprises four main steps: 1) concept representation, 2) retriever model, 3) LLM, and 4) post-processing. Within OntoAligner, this workflow is adapted into parser, encoder, alignment, post-processing, evaluation, and export steps (a minimal sketch of these steps follows the video below).
+
+.. raw:: html
+
+   <div class="video-card">
+      <iframe src="..."></iframe>
+      <p class="video-caption">
+         ESWC 2024 Talk — LLMs4OM Presentation by Hamed Babaei Giglou.
+      </p>
+   </div>
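+A minimal, library-free sketch of the retrieve-then-match loop described above is given below. The toy token-overlap retriever and the stubbed ``llm_is_match`` function are illustrative placeholders rather than OntoAligner's actual API; the Usage section below shows the real parser, encoder, aligner, and post-processing components.
+
+.. code-block:: python
+
+    def retrieve_candidates(c_s: str, kb: list[str], top_k: int = 3) -> list[str]:
+        """Toy retriever: rank concepts in KB(O_target) by token overlap with C_s."""
+        def overlap(c_t: str) -> int:
+            return len(set(c_s.lower().split()) & set(c_t.lower().split()))
+        return sorted(kb, key=overlap, reverse=True)[:top_k]
+
+    def llm_is_match(c_s: str, c_t: str) -> bool:
+        """Stub for the LLM module: LLMs4OM asks a zero-shot yes/no matching prompt here."""
+        return c_s.lower() == c_t.lower()  # placeholder decision rule, not a real LLM call
+
+    def align(source_concepts: list[str], target_concepts: list[str]) -> list[tuple[str, str]]:
+        kb = target_concepts  # KB(O_target): target-side concept representations
+        matches = []
+        for c_s in source_concepts:                   # each query from Q(O_source)
+            for c_t in retrieve_candidates(c_s, kb):  # retriever module
+                if llm_is_match(c_s, c_t):            # LLM matching module
+                    matches.append((c_s, c_t))        # kept (C_s, C_t) pairs go to post-processing
+        return matches
+
+    print(align(["solid oxide fuel cell"], ["Fuel Cell", "Solid Oxide Fuel Cell", "Electrolyte"]))
+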
+ + +.. note:: + + **Reference:** Babaei Giglou, H., D’Souza, J., Engel, F., Auer, S. (2025). LLMs4OM: Matching Ontologies with Large Language Models. In: Meroño Peñuela, A., et al. The Semantic Web: ESWC 2024 Satellite Events. ESWC 2024. Lecture Notes in Computer Science, vol 15344. Springer, Cham. `https://doi.org/10.1007/978-3-031-78952-6_3 `_ + Usage ---------------- diff --git a/docs/source/index.rst b/docs/source/index.rst index fe6eb45..06e3b99 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -17,11 +17,19 @@ OntoAligner was created by `Scientific Knowledge Organization (SciKnowOrg group) The vision is to create a unified hub that brings together a wide range of ontology alignment models, making integration seamless for researchers and practitioners. -**Watch the OntoAligner presentation at EWC-2025.** - .. raw:: html - +
+   <div class="video-card">
+      <iframe src="..."></iframe>
+      <p class="video-caption">
+         ESWC 2025 Talk — OntoAligner Presentation by Hamed Babaei Giglou.
+      </p>
+   </div>
+ Citing =========