Commit 098ecd9 (1 parent: e26470c)

docs: update llm document for embedding support of ollama (#869)
File tree: 1 file changed (+24, -1 lines)

docs/docs/ai/llm.mdx

Lines changed: 24 additions & 1 deletion
````diff
@@ -20,7 +20,7 @@ We support the following types of LLM APIs:
 | API Name | `LlmApiType` enum | Text Generation | Text Embedding |
 |----------|-------------------|-----------------|----------------|
 | [OpenAI](#openai) | `LlmApiType.OPENAI` | ✅ | ✅ |
-| [Ollama](#ollama) | `LlmApiType.OLLAMA` | ✅ | ❌ |
+| [Ollama](#ollama) | `LlmApiType.OLLAMA` | ✅ | ✅ |
 | [Google Gemini](#google-gemini) | `LlmApiType.GEMINI` | ✅ | ✅ |
 | [Vertex AI](#vertex-ai) | `LlmApiType.VERTEX_AI` | ✅ | ✅ |
 | [Anthropic](#anthropic) | `LlmApiType.ANTHROPIC` | ✅ | ❌ |
````
````diff
@@ -141,6 +141,29 @@ cocoindex.LlmSpec(
 </TabItem>
 </Tabs>
 
+For text embedding with Ollama, you'll need to pull an embedding model first:
+
+```bash
+ollama pull nomic-embed-text
+```
+
+Then, a spec for Ollama embedding looks like this:
+
+<Tabs>
+<TabItem value="python" label="Python" default>
+
+```python
+cocoindex.functions.EmbedText(
+    api_type=cocoindex.LlmApiType.OLLAMA,
+    model="nomic-embed-text",
+    # Optional; uses Ollama's default port (11434) on localhost if not specified
+    address="http://localhost:11434",
+)
+```
+
+</TabItem>
+</Tabs>
+
 ### Google Gemini
 
 Google exposes Gemini through Google AI Studio APIs.
````
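As context for the Ollama embedding spec in this commit: the `address` field points at a running Ollama server, and the embedding call goes over Ollama's HTTP API. The sketch below is an illustration, not part of the commit; it builds the request for Ollama's `/api/embeddings` endpoint, assuming the documented `{"model", "prompt"}` request shape.

```python
import json

def build_ollama_embed_request(text: str,
                               model: str = "nomic-embed-text",
                               address: str = "http://localhost:11434"):
    """Build the URL and JSON body for a request to Ollama's
    /api/embeddings endpoint (returns {"embedding": [...]} on success)."""
    url = f"{address}/api/embeddings"
    # Ollama expects a JSON body with the model name and the text to embed.
    body = json.dumps({"model": model, "prompt": text})
    return url, body

url, body = build_ollama_embed_request("Hello, world!")
```

POSTing `body` to `url` (with `curl` or `urllib.request`) against a server that has pulled `nomic-embed-text` returns a JSON object whose `embedding` field is the vector.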
