
Commit ed52a75

Merge pull request #49 from sciknoworg/dev
documentation fix
2 parents: f5f493b + 10a55cd

File tree

7 files changed: +166 additions, −45 deletions


docs/source/_static/custom.css

Lines changed: 35 additions & 0 deletions
@@ -36,3 +36,38 @@
 .catalog-table tr:hover {
     background-color: #f1f1f1;
 }
+
+.training-arguments > .table {
+    display: grid;
+    grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));
+}
+
+.training-arguments > .table > a {
+    padding: 0.5rem;
+    border: 1px solid #e1e4e5;
+}
+
+.content {
+    max-width: 840px;
+}
+
+
+/* Default (for large screens) */
+.content:not(.custom) {
+    max-width: 65%;
+    margin: 0 auto;
+}
+
+/* Medium screens (e.g. tablets) */
+@media (max-width: 1024px) {
+    .content:not(.custom) {
+        max-width: 90%;
+    }
+}
+
+/* Small screens (e.g. smartphones) */
+@media (max-width: 600px) {
+    .content:not(.custom) {
+        max-width: 100%;
+    }
+}
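The three breakpoints above cascade: the 65% default applies on large screens, and each `@media (max-width: …)` rule overrides it as the viewport shrinks. A minimal sketch of how they resolve for a given viewport width (the function name is hypothetical, for illustration only):

```python
def content_max_width(viewport_px: int) -> str:
    """Mirror the CSS breakpoints for `.content:not(.custom)` added above."""
    if viewport_px <= 600:       # @media (max-width: 600px) — smartphones
        return "100%"
    if viewport_px <= 1024:      # @media (max-width: 1024px) — tablets
        return "90%"
    return "65%"                 # default for large screens

print(content_max_width(1280), content_max_width(800), content_max_width(480))
# → 65% 90% 100%
```

Note that `max-width: 1024px` is inclusive, so a 1024px-wide viewport gets the 90% rule, not the default.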

docs/source/_static/custom.js

Lines changed: 64 additions & 0 deletions
Diff not shown: generated files are not rendered by default.

docs/source/aligner/rag.rst

Lines changed: 4 additions & 3 deletions
@@ -1,5 +1,6 @@
 Retrieval Augmented Generation
 ================================
+
 This tutorial walks you through the process of ontology matching using the OntoAligner library, leveraging retrieval-augmented generation (RAG) techniques. Starting with the necessary module imports, it defines a task and loads source and target ontologies along with reference matchings. The tutorial then encodes the ontologies using a specialized encoder, configures a retriever and an LLM, and generates predictions. Finally, it demonstrates two postprocessing techniques (heuristic and hybrid), followed by saving the matched alignments in XML format, ready for use or further analysis.
 
 .. code-block:: python
@@ -72,7 +73,7 @@ In this tutorial, we demonstrated:
 You can customize the configurations and thresholds based on your specific dataset and use case. For more details, refer to the :doc:`../package_reference/postprocess`
 
 FewShot RAG
-===============
+------------------------
 This tutorial works based on FewShot RAG matching, an extension of the RAG model, designed for few-shot learning tasks. The FewShot RAG workflow is the same as RAG but with two differences:
 
 1. You only need to use FewShot encoders as follows; since a few-shot model uses multiple examples, you may also provide specific examples from the reference matchings (or other sources) as few-shot samples.
@@ -95,7 +96,7 @@ This tutorial works based on FewShot RAG matching, an extension of the RAG model
 model = MistralLLMBERTRetrieverFSRAG(positive_ratio=0.7, n_shots=5, **config)
 
 In-Context Vectors RAG
-==================================
+------------------------
 This RAG variant performs ontology matching using ``ConceptRAGEncoder`` only. The In-Context Vectors approach introduced by [1](https://github.com/shengliu66/ICV) tackles in-context learning as in-context vectors (ICV). We used LLMs in this perspective in the RAG module. The workflow is the same as RAG or FewShot RAG with the following differences:
 
 
@@ -117,4 +118,4 @@ This RAG variant performs ontology matching using ``ConceptRAGEncoder`` only. Th
 model.load(llm_path="tiiuae/falcon-7b", ir_path="all-MiniLM-L6-v2")
 
 
-[1] Liu, S., Ye, H., Xing, L., & Zou, J. (2023). [In-context vectors: Making in context learning more effective and controllable through latent space steering](https://arxiv.org/abs/2311.06668>). arXiv preprint arXiv:2311.06668.
+[1] Liu, S., Ye, H., Xing, L., & Zou, J. (2023). `In-context vectors: Making in context learning more effective and controllable through latent space steering <https://arxiv.org/abs/2311.06668>`_. arXiv preprint arXiv:2311.06668.

docs/source/conf.py

Lines changed: 31 additions & 6 deletions
@@ -5,6 +5,9 @@
 import importlib
 import inspect
 import os
+from sphinx.application import Sphinx
+from sphinx.writers.html5 import HTML5Translator
+import posixpath
 # -- Project information -----------------------------------------------------
 #
 sys.path.insert(0, pathlib.Path(__file__).parents[0].resolve().as_posix())
@@ -13,7 +16,6 @@
 
 project = 'OntoAligner'
 copyright = f'{str(datetime.datetime.now().year)} SciKnowOrg'
-author = 'Hamed Babaei Giglou'
 release = '0.2.0'
 
 
@@ -23,6 +25,7 @@
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
 extensions = [
+    "sphinx_toolbox.collapse",
     "sphinx.ext.autodoc",
     "sphinx.ext.napoleon",
     "myst_parser",
@@ -32,13 +35,8 @@
     "sphinx.ext.linkcode",
     "sphinx_inline_tabs",
     "sphinxcontrib.mermaid",
-    # "sphinx.ext.mathjax"
-
-    # 'sphinx.ext.duration',
-    # 'sphinx.ext.doctest',
     'sphinx.ext.autodoc',
     'sphinx.ext.autosummary',
-    # 'sphinx.ext.intersphinx',
 ]
 
 # autosummary_generate = True  # Turn on sphinx.ext.autosummary
@@ -73,6 +71,9 @@
         ("Github", "https://github.com/sciknoworg/OntoAligner"),
         ("Pypi", "https://pypi.org/project/OntoAligner/")
     ],
+    "navigation_depth": 4,
+    "collapse_navigation": True,
+    "logo_only": True,
 }
 
 html_static_path = ["_static"]
@@ -131,3 +132,27 @@ def linkcode_resolve(domain, info):
     relative_path = os.path.relpath(file_path, start=os.path.dirname(__file__))
     end_line = start_line + len(source_lines) - 1
     return f"{repo_url}/blob/{branch}/{relative_path}#L{start_line}-L{end_line}"
+
+def visit_download_reference(self, node):
+    root = "https://github.com/sciknoworg/OntoAligner/tree/main"
+    atts = {"class": "reference download", "download": ""}
+
+    if not self.builder.download_support:
+        self.context.append("")
+    elif "refuri" in node:
+        atts["class"] += " external"
+        atts["href"] = node["refuri"]
+        self.body.append(self.starttag(node, "a", "", **atts))
+        self.context.append("</a>")
+    elif "reftarget" in node and "refdoc" in node:
+        atts["class"] += " external"
+        atts["href"] = posixpath.join(root, os.path.dirname(node["refdoc"]), node["reftarget"])
+        self.body.append(self.starttag(node, "a", "", **atts))
+        self.context.append("</a>")
+    else:
+        self.context.append("")
+
+HTML5Translator.visit_download_reference = visit_download_reference
+
+def setup(app: Sphinx):
+    pass
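The conf.py change above overrides Sphinx's download-link rendering by assigning a plain function to a class attribute of `HTML5Translator`, so every translator instance picks up the new behavior. A minimal, self-contained sketch of that monkey-patching pattern (the class and return values here are stand-ins, not Sphinx's actual API):

```python
class Translator:
    """Stand-in for a writer class such as HTML5Translator."""
    def visit_download_reference(self, node):
        return "default"

def visit_download_reference(self, node):
    # Replacement behavior, analogous to conf.py rewriting download
    # references into links under the GitHub tree.
    return f"patched:{node}"

# Assigning the function to the class attribute rebinds the method for
# all instances, existing and future.
Translator.visit_download_reference = visit_download_reference

print(Translator().visit_download_reference("file.zip"))  # → patched:file.zip
```

Because the patch happens at module import time, it takes effect before any HTML pages are written; the empty `setup(app)` exists only so Sphinx accepts conf.py as an extension entry point.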

docs/source/gettingstarted/installation.rst

Lines changed: 1 addition & 1 deletion
@@ -63,4 +63,4 @@ You can install OntoAligner directly from source to take advantage of the bleedi
 
 Install PyTorch with CUDA support
 --------------------------------------------
-To use a GPU/CUDA for learners, you must install PyTorch with CUDA support. Follow `PyTorch - Get Started <https://pytorch.org/get-started/locally/>`_ for installation steps.
+To use a GPU/CUDA for aligner models, you must install PyTorch with CUDA support. Follow `PyTorch - Get Started <https://pytorch.org/get-started/locally/>`_ for installation steps. We recommend installing the `PEFT library <https://pypi.org/project/peft/>`_, especially if you plan to perform parameter-efficient fine-tuning or run inference with models fine-tuned using PEFT.
