@@ -10,18 +10,18 @@ The xref:_openai_api_compatibility[OpenAI API compatibility] section explains ho

You first need access to an Ollama instance. There are a few options, including the following:

-* xref:https://ollama.com/download[Download and install Ollama] on your local machine.
+* link:https://ollama.com/download[Download and install Ollama] on your local machine.
* Configure and xref:api/testcontainers.adoc[run Ollama via Testcontainers].
* Bind to an Ollama instance via xref:api/cloud-bindings.adoc[Kubernetes Service Bindings].
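
Once an instance is available, a quick sanity check is to list the models it already knows about over the Ollama HTTP API (a minimal sketch, assuming a default local install listening on Ollama's standard port 11434):

[source,shellscript]
----
# List the models available to the local Ollama instance
# (11434 is the default Ollama port; adjust if your setup differs).
curl http://localhost:11434/api/tags
----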

-You can pull the models you want to use in your application from the xref:https://ollama.com/library[Ollama model library]:
+You can pull the models you want to use in your application from the link:https://ollama.com/library[Ollama model library]:

[source,shellscript]
----
ollama pull <model-name>
----

-You can also pull any of the thousands of free xref:https://huggingface.co/models?library=gguf&sort=trending[GGUF Hugging Face Models]:
+You can also pull any of the thousands of free link:https://huggingface.co/models?library=gguf&sort=trending[GGUF Hugging Face Models]:

[source,shellscript]
----
@@ -166,7 +166,7 @@ TIP: In addition to the model specific link:https://github.com/spring-projects/s
Spring AI Ollama can automatically pull models when they are not available in your Ollama instance.
This feature is particularly useful for development and testing as well as for deploying your applications to new environments.

-TIP: You can also pull, by name, any of the thousands of free xref:https://huggingface.co/models?library=gguf&sort=trending[GGUF Hugging Face Models].
+TIP: You can also pull, by name, any of the thousands of free link:https://huggingface.co/models?library=gguf&sort=trending[GGUF Hugging Face Models].

There are three strategies for pulling models:

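As a minimal sketch of the auto-pull behaviour described above, and assuming the Spring Boot auto-configuration exposes a `spring.ai.ollama.init.pull-model-strategy` property for selecting one of those strategies, an application could opt in via `application.properties`:

[source,properties]
----
# Assumed property name and value: pull a model at startup only when it is not already present locally.
spring.ai.ollama.init.pull-model-strategy=when_missing
----
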
@@ -11,7 +11,7 @@ The `OllamaEmbeddingModel` implementation leverages the Ollama https://github.co

You first need access to an Ollama instance. There are a few options, including the following:

-* xref:https://ollama.com/download[Download and install Ollama] on your local machine.
+* link:https://ollama.com/download[Download and install Ollama] on your local machine.
* Configure and xref:api/testcontainers.adoc[run Ollama via Testcontainers].
* Bind to an Ollama instance via xref:api/cloud-bindings.adoc[Kubernetes Service Bindings].

@@ -22,7 +22,7 @@ You can pull the models you want to use in your application from the https://oll
ollama pull <model-name>
----

-You can also pull any of the thousands of free xref:https://huggingface.co/models?library=gguf&sort=trending[GGUF Hugging Face Models]:
+You can also pull any of the thousands of free link:https://huggingface.co/models?library=gguf&sort=trending[GGUF Hugging Face Models]:

[source,shellscript]
----
@@ -166,7 +166,7 @@ EmbeddingResponse embeddingResponse = embeddingModel.call(
Spring AI Ollama can automatically pull models when they are not available in your Ollama instance.
This feature is particularly useful for development and testing as well as for deploying your applications to new environments.

-TIP: You can also pull, by name, any of the thousands of free xref:https://huggingface.co/models?library=gguf&sort=trending[GGUF Hugging Face Models].
+TIP: You can also pull, by name, any of the thousands of free link:https://huggingface.co/models?library=gguf&sort=trending[GGUF Hugging Face Models].

There are three strategies for pulling models:

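Finally, the embedding hunk above ends midway through an `embeddingModel.call(...)` sample. As a hedged companion sketch, assuming the Spring AI `EmbeddingModel` and `EmbeddingRequest` types and an auto-configured Ollama-backed bean (the controller name, endpoint, and inputs below are illustrative), usage could look like:

[source,java]
----
import java.util.List;
import java.util.Map;

import org.springframework.ai.embedding.EmbeddingModel;
import org.springframework.ai.embedding.EmbeddingRequest;
import org.springframework.ai.embedding.EmbeddingResponse;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative controller: the auto-configured Ollama-backed EmbeddingModel
// is injected and used to embed two example inputs.
@RestController
class EmbeddingController {

    private final EmbeddingModel embeddingModel;

    EmbeddingController(EmbeddingModel embeddingModel) {
        this.embeddingModel = embeddingModel;
    }

    @GetMapping("/ai/embedding")
    Map<String, EmbeddingResponse> embed() {
        // Passing null options falls back to the model configured in the application properties.
        EmbeddingResponse embeddingResponse = this.embeddingModel.call(
                new EmbeddingRequest(List.of("Hello World", "Spring AI with Ollama"), null));
        return Map.of("embedding", embeddingResponse);
    }
}
----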