
Commit 2669e60

Update README.md
1 parent c052227 commit 2669e60

README.md

Lines changed: 9 additions & 2 deletions
@@ -31,6 +31,11 @@ If you are using Neo4j Desktop, you will not be able to use the docker-compose b
### Local deployment
#### Running through docker-compose
By default only OpenAI and Diffbot are enabled since Gemini requires extra GCP configurations.
+The models configured for each environment are indicated by the VITE_LLM_MODELS_PROD variable, so you can configure the models based on your needs.
+Example:
+```env
+VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"
+```
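For instance, to expose only the OpenAI models in production, the same variable can be narrowed down (an illustrative sketch reusing model names from the list above):

```env
VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini"
```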

In your root folder, create a .env file with your OPENAI and DIFFBOT keys (if you want to use both):
```env
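# (Sketch) The key names below are assumptions for illustration only;
# use the names your deployment expects.
OPENAI_API_KEY="your-openai-api-key"
DIFFBOT_API_KEY="your-diffbot-api-key"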
@@ -72,15 +77,15 @@ You can of course combine all (local, youtube, wikipedia, s3 and gcs) or remove

### Chat Modes

-By default,all of the chat modes will be available: vector, graph+vector and graph.
+By default, all of the chat modes will be available: vector, graph_vector, graph, fulltext, graph_vector_fulltext, entity_vector and global_vector.
If no mode is mentioned in the chat modes variable, all modes will be available:
```env
VITE_CHAT_MODES=""
```

If, however, you want to specify only the vector mode or only the graph mode, you can do that by specifying the mode in the env:
```env
-VITE_CHAT_MODES="vector,graph+vector"
+VITE_CHAT_MODES="vector,graph_vector"
```
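For example, to expose only the graph mode, name just that mode (an illustrative sketch using one of the mode names listed above):

```env
VITE_CHAT_MODES="graph"
```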

#### Running Backend and Frontend separately (dev environment)
@@ -150,12 +155,14 @@ Allow unauthenticated request : Yes
| VITE_TIME_PER_PAGE | Optional | 50 | Time per page for processing |
| VITE_CHUNK_SIZE | Optional | 5242880 | Size of each chunk of file for upload |
| VITE_GOOGLE_CLIENT_ID | Optional | | Client ID for Google authentication |
+| VITE_LLM_MODELS_PROD | Optional | openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash | To distinguish models based on the environment (PROD or DEV) |
| GCS_FILE_CACHE | Optional | False | If set to True, will save the files to process into GCS. If set to False, will save the files locally |
| ENTITY_EMBEDDING | Optional | False | If set to True, it will add embeddings for each entity in the database |
| LLM_MODEL_CONFIG_ollama_<model_name> | Optional | | Set ollama config as - model_name,model_local_url for local deployments |
| RAGAS_EMBEDDING_MODEL | Optional | openai | Embedding model used by the ragas evaluation framework |
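As an illustration, a backend .env combining several of the optional variables above might look like the sketch below; the llama3 model name and its local URL are assumptions, not values taken from this table:

```env
GCS_FILE_CACHE=False
ENTITY_EMBEDDING=True
# Hypothetical ollama entry in the form model_name,model_local_url
LLM_MODEL_CONFIG_ollama_llama3="llama3,http://localhost:11434"
RAGAS_EMBEDDING_MODEL="openai"
```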

## For local llms (Ollama)
1. Pull the docker image of ollama
```bash
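# (Sketch) Assumes the official ollama/ollama image on Docker Hub:
docker pull ollama/ollama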
