We support the following types of LLM APIs:

| API Name | `LlmApiType` enum | Text Generation | Text Embedding |
|----------|-------------------|-----------------|----------------|
| [OpenAI](#openai) | `LlmApiType.OPENAI` | ✅ | ✅ |
| [Ollama](#ollama) | `LlmApiType.OLLAMA` | ✅ | ✅ |
| [Google Gemini](#google-gemini) | `LlmApiType.GEMINI` | ✅ | ✅ |
| [Vertex AI](#vertex-ai) | `LlmApiType.VERTEX_AI` | ✅ | ✅ |
| [Anthropic](#anthropic) | `LlmApiType.ANTHROPIC` | ✅ | ❌ |
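The support matrix above can also be mirrored as a small lookup table for quick programmatic checks. This is a sketch only; the dict and helper below are illustrative and not part of the cocoindex API:

```python
# Illustrative support matrix mirroring the table above.
# Each entry maps an LlmApiType member name to a
# (text_generation, text_embedding) capability pair.
# Not part of the cocoindex API.
SUPPORT_MATRIX = {
    "OPENAI": (True, True),
    "OLLAMA": (True, True),
    "GEMINI": (True, True),
    "VERTEX_AI": (True, True),
    "ANTHROPIC": (True, False),
}

def supports_embedding(api_type_name: str) -> bool:
    """Return True if the given API type supports text embedding."""
    return SUPPORT_MATRIX[api_type_name][1]

print(supports_embedding("OLLAMA"))     # → True
print(supports_embedding("ANTHROPIC"))  # → False
```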
</TabItem>
</Tabs>

For text embedding with Ollama, you'll need to pull an embedding model first:

```bash
ollama pull nomic-embed-text
```

Then, a spec for Ollama embedding looks like this:

<Tabs>
<TabItem value="python" label="Python" default>

```python
cocoindex.functions.EmbedText(
    api_type=cocoindex.LlmApiType.OLLAMA,
    model="nomic-embed-text",
    # Optional. Defaults to Ollama's local address (http://localhost:11434) if not specified.
    address="http://localhost:11434",
)
```

</TabItem>
</Tabs>

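For context, cocoindex talks to Ollama over its local HTTP API. Below is a minimal sketch of the embedding request shape, assuming Ollama's `/api/embeddings` endpoint with `model` and `prompt` fields; the request is constructed but not sent, and the helper is illustrative rather than part of cocoindex:

```python
import json
from urllib import request

OLLAMA_ADDRESS = "http://localhost:11434"  # Ollama's default local address

def build_embedding_request(model: str, text: str) -> request.Request:
    """Build (without sending) an HTTP request against Ollama's
    embedding endpoint -- roughly what an EmbedText spec targets.
    The endpoint path and payload fields are assumptions based on
    Ollama's API, not cocoindex internals."""
    payload = json.dumps({"model": model, "prompt": text}).encode("utf-8")
    return request.Request(
        f"{OLLAMA_ADDRESS}/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_embedding_request("nomic-embed-text", "hello world")
print(req.full_url)  # → http://localhost:11434/api/embeddings
```

Sending `req` with `urllib.request.urlopen` against a running Ollama server returns a JSON body with the embedding vector; when you use the `EmbedText` spec, cocoindex handles this round trip for you.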
### Google Gemini

Google exposes Gemini through Google AI Studio APIs.