Commit 9c60d15

Additional docs in README.md
1 parent 43d3648 commit 9c60d15

File tree

1 file changed: +103 −20 lines changed


README.md

Lines changed: 103 additions & 20 deletions
@@ -1,48 +1,131 @@
 # Ollama Python Library

-The Ollama Python library provides the easiest way to integrate your Python 3 project with [Ollama](https://github.com/jmorganca/ollama).
+The Ollama Python library provides the easiest way to integrate Python 3.8+ projects with [Ollama](https://github.com/jmorganca/ollama).

-## Getting Started
-
-Requires Python 3.8 or higher.
+## Install

 ```sh
 pip install ollama
 ```

-A global default client is provided for convenience and can be used in the same way as the synchronous client.
+## Usage

 ```python
 import ollama
-response = ollama.chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
+response = ollama.chat(model='llama2', messages=[
+  {
+    'role': 'user',
+    'content': 'Why is the sky blue?',
+  },
+])
+print(response['message']['content'])
 ```

+## Streaming responses
+
+Response streaming can be enabled by setting `stream=True`, modifying function calls to return a Python generator where each part is an object in the stream.
+
 ```python
 import ollama
-message = {'role': 'user', 'content': 'Why is the sky blue?'}
-for part in ollama.chat(model='llama2', messages=[message], stream=True):
-  print(part['message']['content'], end='', flush=True)
+
+stream = ollama.chat(
+    model='llama2',
+    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
+    stream=True,
+)
+
+for chunk in stream:
+  print(chunk['message']['content'], end='', flush=True)
 ```

+## API
+
+The Ollama Python library's API is designed around the [Ollama REST API](https://github.com/jmorganca/ollama/blob/main/docs/api.md)

-## Using the Synchronous Client
+### Chat

 ```python
-from ollama import Client
-message = {'role': 'user', 'content': 'Why is the sky blue?'}
-response = Client().chat(model='llama2', messages=[message])
+ollama.chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
+```
+
+### Generate
+
+```python
+ollama.generate(model='llama2', prompt='Why is the sky blue?')
+```
+
+### List
+
+```python
+ollama.list()
+```
+
+### Show
+
+```python
+ollama.show('llama2')
+```
+
+### Create
+
+```python
+modelfile='''
+FROM llama2
+SYSTEM You are mario from super mario bros.
+'''
+
+ollama.create(model='example', modelfile=modelfile)
+```
+
+### Copy
+
+```python
+ollama.copy('llama2', 'user/llama2')
+```
+
+### Delete
+
+```python
+ollama.delete('llama2')
 ```

-Response streaming can be enabled by setting `stream=True`. This modifies the function to return a Python generator where each part is an object in the stream.
+### Pull
+
+```python
+ollama.pull('llama2')
+```
+
+### Push
+
+```python
+ollama.push('user/llama2')
+```
+
+### Embeddings
+
+```python
+ollama.embeddings(model='llama2', prompt='The sky is blue because of Rayleigh scattering')
+```
+
+## Custom client
+
+A custom client can be created with the following fields:
+
+- `host`: The Ollama host to connect to
+- `timeout`: The timeout for requests

 ```python
 from ollama import Client
-message = {'role': 'user', 'content': 'Why is the sky blue?'}
-for part in Client().chat(model='llama2', messages=[message], stream=True):
-  print(part['message']['content'], end='', flush=True)
+ollama = Client(host='http://localhost:11434')
+response = ollama.chat(model='llama2', messages=[
+  {
+    'role': 'user',
+    'content': 'Why is the sky blue?',
+  },
+])
 ```

-## Using the Asynchronous Client
+## Async client

 ```python
 import asyncio
@@ -55,7 +138,7 @@ async def chat():
 asyncio.run(chat())
 ```

-Similar to the synchronous client, setting `stream=True` modifies the function to return a Python asynchronous generator.
+Setting `stream=True` modifies functions to return a Python asynchronous generator:
 ```python
 import asyncio
@@ -69,7 +152,7 @@ async def chat():
 asyncio.run(chat())
 ```

-## Handling Errors
+## Errors

 Errors are raised if requests return an error status or if an error is detected while streaming.
