
Commit 3adfcb0

📝
1 parent 622def1 commit 3adfcb0

1 file changed (+4 -4 lines changed)

docs/source/learners/llm.rst

Lines changed: 4 additions & 4 deletions
@@ -149,7 +149,7 @@ For this, you can extend the ``AutoLLM`` class and implement the required
3. Implement ``generate(inputs, max_new_tokens)`` to encodes prompts, performs generation, decodes outputs, and maps them to labels.


-.. tab::
+.. tab:: Falcon-H

The following example shows how to build a Falcon integration:

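The tab renamed in this hunk labels the Falcon integration built by extending ``AutoLLM``. As a rough illustration of the surrounding docs (not taken from this commit), such a subclass might look like the sketch below; it assumes the base class exposes ``self.label_mapper`` (visible in the next hunk's context), that Hugging Face ``transformers`` handles tokenization and generation, and that a ``load`` hook and this import path exist.

# Minimal illustrative sketch; class internals, load() hook, and import path are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

from ontolearner import AutoLLM  # import path assumed


class FalconLLM(AutoLLM):
    def load(self, model_id: str = "tiiuae/Falcon-H1-1.5B-Deep-Instruct") -> None:
        # Pull tokenizer and weights from the Hugging Face Hub.
        self.tokenizer = AutoTokenizer.from_pretrained(model_id)
        if self.tokenizer.pad_token is None:
            # Falcon tokenizers may lack a pad token; reuse EOS for batched padding.
            self.tokenizer.pad_token = self.tokenizer.eos_token
        self.model = AutoModelForCausalLM.from_pretrained(model_id)

    def generate(self, inputs, max_new_tokens: int = 50):
        # Encode prompts.
        encoded = self.tokenizer(inputs, return_tensors="pt", padding=True)
        # Perform generation.
        outputs = self.model.generate(**encoded, max_new_tokens=max_new_tokens)
        # Decode outputs.
        decoded_outputs = self.tokenizer.batch_decode(outputs, skip_special_tokens=True)
        # Map decoded text to labels, mirroring the docs' description.
        return self.label_mapper.predict(decoded_outputs)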
@@ -186,7 +186,7 @@ For this, you can extend the ``AutoLLM`` class and implement the required

return self.label_mapper.predict(decoded_outputs)

-.. tab::
+.. tab:: Mistral-Small

For Mistral, you can integrate the official ``mistral-common`` tokenizer and chat completion interface:

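The Mistral tab named here points at the official ``mistral-common`` tokenizer and chat completion interface. A small sketch of that encoding step is shown below, using only calls from ``mistral-common``'s documented API; the tokenizer version choice and the helper function name are illustrative.

# Sketch of the chat-completion encoding step referenced in the Mistral tab.
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

tokenizer = MistralTokenizer.v3()  # tokenizer version chosen for illustration


def encode_prompt(prompt: str) -> list[int]:
    # Wrap the prompt in a chat-completion request and encode it to token ids.
    request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])
    return tokenizer.encode_chat_completion(request).tokens

The resulting token ids would then feed the model's generation step, with the decoded text handed to ``label_mapper.predict`` as in the context line above.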
@@ -277,8 +277,8 @@ Once your custom class is defined, you can pass it into ``AutoLLMLearner``:

The following models are specialized within the OntoLearner:

-- To use `mistralai/Mistral-Small-3.2-24B-Instruct-2506 <https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506>`_ you can use ``MistralLLM`` instead of ``AutoLLM`
-- To use `Falcon-H` series of LLMs (e.g. `tiiuae/Falcon-H1-1.5B-Deep-Instruct <https://huggingface.co/tiiuae/Falcon-H1-1.5B-Deep-Instruct>`_ you can ``FalconLLM`` instead of ``AutoLLM`.
+- To use `mistralai/Mistral-Small-3.2-24B-Instruct-2506 <https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506>`_ you can use ``MistralLLM`` instead of ``AutoLLM``.
+- To use `Falcon-H` series of LLMs (e.g. `tiiuae/Falcon-H1-1.5B-Deep-Instruct <https://huggingface.co/tiiuae/Falcon-H1-1.5B-Deep-Instruct>`_ you can ``FalconLLM`` instead of ``AutoLLM``.

.. note::

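This last hunk only repairs the inline markup of the model list, but its header ("Once your custom class is defined, you can pass it into ``AutoLLMLearner``") suggests usage roughly like the following; the keyword argument is hypothetical, since ``AutoLLMLearner``'s constructor signature is not shown in this diff.

# Hypothetical usage sketch; AutoLLMLearner's real arguments are not shown
# in this diff, so the 'llm' keyword is an assumption.
from ontolearner import AutoLLMLearner  # import path assumed

learner = AutoLLMLearner(llm=FalconLLM())  # or MistralLLM() for Mistral-Small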