fern/docs/pages/installation/installation.mdx
Install [Poetry](https://python-poetry.org/docs/#installing-with-the-official-installer) for dependency management:

Follow the instructions on the official Poetry website to install it.

<Callout intent="warning">
A bug exists in Poetry versions 1.7.0 and earlier. We strongly recommend upgrading to a tested version.
To upgrade Poetry to the latest tested version, run `poetry self update 1.8.3` after installing it.
</Callout>

### 4. Optional: Install `make`

To run various scripts, you need to install `make`. Follow the instructions for your operating system:

#### macOS
Go to [ollama.ai](https://ollama.ai/) and follow the instructions to install Ollama.

After the installation, make sure the Ollama desktop app is closed.

Now, start the Ollama service (it will start a local inference server, serving both the LLM and the Embeddings):
```bash
ollama serve
```
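Before moving on, you can confirm from another terminal that the server is reachable. A minimal Python sketch, assuming Ollama's default port 11434 (adjust `base_url` if you changed it):

```python
# Sketch: probe a local Ollama server. Assumes the default port 11434.
import urllib.request


def ollama_is_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at base_url's root endpoint."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused or timed out: the server is not (yet) up.
        return False


print(ollama_is_up())
```

Ollama's root endpoint answers with a short "Ollama is running" message when the server is up, so a plain HTTP 200 check is enough here.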

Install the models to be used; the default `settings-ollama.yaml` is configured to use the `llama3.1` 8b LLM (~4GB) and `nomic-embed-text` Embeddings (~275MB).
By default, PGPT will automatically pull models as needed. This behavior can be changed by modifying the `ollama.autopull_models` property.
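For example, automatic pulling could be switched off with a snippet like the following (a sketch of the relevant section only; the rest of `settings-ollama.yaml` is unchanged):

```yaml
# settings-ollama.yaml (excerpt, sketch)
ollama:
  autopull_models: false  # pull models manually with `ollama pull` instead
```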

In any case, if you want to manually pull models, run the following commands:
```bash
ollama pull llama3.1
ollama pull nomic-embed-text
```
Once done, on a different terminal, you can install PrivateGPT with the following command: