
Commit 60d292a

update to llama3.1 (#237)
1 parent f62eb97 commit 60d292a

File tree

1 file changed (+14, -14 lines)

README.md

Lines changed: 14 additions & 14 deletions
@@ -12,7 +12,7 @@ pip install ollama
 
 ```python
 import ollama
-response = ollama.chat(model='llama3', messages=[
+response = ollama.chat(model='llama3.1', messages=[
   {
     'role': 'user',
     'content': 'Why is the sky blue?',
@@ -29,7 +29,7 @@ Response streaming can be enabled by setting `stream=True`, modifying function c
 import ollama
 
 stream = ollama.chat(
-  model='llama3',
+  model='llama3.1',
   messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
   stream=True,
 )
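For context, the `stream` object returned above is a plain Python generator; a minimal sketch of consuming it, assuming each part carries an incremental `message.content` chunk as in the async streaming example later in this diff:

```python
# Print the reply as it streams in; each part is one chunk of the message.
for part in stream:
  print(part['message']['content'], end='', flush=True)
```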
@@ -45,13 +45,13 @@ The Ollama Python library's API is designed around the [Ollama REST API](https:/
 ### Chat
 
 ```python
-ollama.chat(model='llama3', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
+ollama.chat(model='llama3.1', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
 ```
 
 ### Generate
 
 ```python
-ollama.generate(model='llama3', prompt='Why is the sky blue?')
+ollama.generate(model='llama3.1', prompt='Why is the sky blue?')
 ```
 
 ### List
@@ -63,14 +63,14 @@ ollama.list()
 ### Show
 
 ```python
-ollama.show('llama3')
+ollama.show('llama3.1')
 ```
 
 ### Create
 
 ```python
 modelfile='''
-FROM llama3
+FROM llama3.1
 SYSTEM You are mario from super mario bros.
 '''
 
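As a usage note, the model built from this Modelfile can then be called like any other; a minimal sketch, reusing the `example` name from the `ollama.create(model='example', modelfile=modelfile)` context line in the next hunk:

```python
import ollama

# Hypothetical follow-up: chat with the model created from the Modelfile above.
response = ollama.chat(model='example', messages=[
  {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response['message']['content'])
```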

@@ -80,31 +80,31 @@ ollama.create(model='example', modelfile=modelfile)
 ### Copy
 
 ```python
-ollama.copy('llama3', 'user/llama3')
+ollama.copy('llama3.1', 'user/llama3.1')
 ```
 
 ### Delete
 
 ```python
-ollama.delete('llama3')
+ollama.delete('llama3.1')
 ```
 
 ### Pull
 
 ```python
-ollama.pull('llama3')
+ollama.pull('llama3.1')
 ```
 
 ### Push
 
 ```python
-ollama.push('user/llama3')
+ollama.push('user/llama3.1')
 ```
 
 ### Embeddings
 
 ```python
-ollama.embeddings(model='llama3', prompt='The sky is blue because of rayleigh scattering')
+ollama.embeddings(model='llama3.1', prompt='The sky is blue because of rayleigh scattering')
 ```
 
 ### Ps
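For context, `ollama.embeddings` returns a mapping that wraps the embedding vector; a minimal sketch, assuming the `embedding` field name used by the Ollama REST embeddings endpoint:

```python
import ollama

result = ollama.embeddings(model='llama3.1', prompt='The sky is blue because of rayleigh scattering')
# `embedding` is the REST API's field name; the vector length depends on the model.
print(len(result['embedding']))
```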
@@ -123,7 +123,7 @@ A custom client can be created with the following fields:
 ```python
 from ollama import Client
 client = Client(host='http://localhost:11434')
-response = client.chat(model='llama3', messages=[
+response = client.chat(model='llama3.1', messages=[
   {
     'role': 'user',
     'content': 'Why is the sky blue?',
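For reference, the hunk header above mentions the custom client's fields, which this diff does not show; a minimal sketch assuming the two fields the README documents, `host` and `timeout` (the timeout value here is illustrative):

```python
from ollama import Client

# `host` appears in the diff above; `timeout` (in seconds) is assumed from the
# README's field list and is forwarded to the underlying HTTP client.
client = Client(host='http://localhost:11434', timeout=30)
```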
@@ -139,7 +139,7 @@ from ollama import AsyncClient
 
 async def chat():
   message = {'role': 'user', 'content': 'Why is the sky blue?'}
-  response = await AsyncClient().chat(model='llama3', messages=[message])
+  response = await AsyncClient().chat(model='llama3.1', messages=[message])
 
 asyncio.run(chat())
 ```
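As a usage note, `import asyncio` sits outside the changed range and so is not shown above; a self-contained sketch of the same call, with the reply printed (response shape assumed to match the sync client's `message.content`):

```python
import asyncio

from ollama import AsyncClient


async def chat():
  # Same request as in the diff above, printing the reply for illustration.
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama3.1', messages=[message])
  print(response['message']['content'])

asyncio.run(chat())
```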
@@ -152,7 +152,7 @@ from ollama import AsyncClient
 
 async def chat():
   message = {'role': 'user', 'content': 'Why is the sky blue?'}
-  async for part in await AsyncClient().chat(model='llama3', messages=[message], stream=True):
+  async for part in await AsyncClient().chat(model='llama3.1', messages=[message], stream=True):
     print(part['message']['content'], end='', flush=True)
 
 asyncio.run(chat())
