Run an LLM locally through Ollama #150
samshdn started this conversation in 3 - AI Tooling
After installing the model, we can use it in two ways:
Approach 1: the Ollama CLI
Example:
ollama run qwen3:4b "What is the capital of Brazil? /no_think"
Approach 2: the Ollama HTTP API
Example:
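A minimal sketch of this approach, assuming the Ollama server is running on its default port (11434) and using the /api/generate endpoint:

```sh
# Minimal sketch: the same question sent to the local Ollama server over HTTP.
# Assumes Ollama is running on its default port (11434) and qwen3:4b is installed.
curl -s http://localhost:11434/api/generate \
  -d '{
    "model": "qwen3:4b",
    "prompt": "What is the capital of Brazil? /no_think",
    "stream": false
  }'
```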
The CLI is well suited for simple, interactive queries, while batch operations are easier to perform through the API.
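As a rough illustration of the batch case, a small shell loop can push a list of prompts through the same endpoint (the prompts.txt file name and the use of jq are assumptions for this sketch):

```sh
# Illustrative batch sketch: send each line of a hypothetical prompts.txt file
# to the local Ollama API and print only the generated text. Requires jq.
while IFS= read -r prompt; do
  curl -s http://localhost:11434/api/generate \
    -d "{\"model\": \"qwen3:4b\", \"prompt\": \"$prompt /no_think\", \"stream\": false}" \
    | jq -r '.response'
done < prompts.txt
```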