Commit 1861925

Merge pull request #324 from sudoleg/renovate/chromadb-1.x

chore(deps): update dependency chromadb to v1

2 parents 8248c3c + 8c6111f commit 1861925

File tree: 4 files changed (+13, −19 lines)

docker-compose.yml (2 additions, 14 deletions)

```diff
@@ -4,24 +4,12 @@ networks:
 
 services:
   chromadb:
-    image: chromadb/chroma:0.6.3
+    image: chromadb/chroma:1.0.12
     container_name: chroma-db
     volumes:
-      # Be aware that indexed data are located in "/chroma/chroma/"
-      # Default configuration for persist_directory in chromadb/config.py
-      # Read more about deployments: https://docs.trychroma.com/deployment
-      - chroma:/chroma/chroma
-    environment:
-      - IS_PERSISTENT=TRUE
-      - ALLOW_RESET=TRUE
+      - chroma:/data
     ports:
       - "8000:8000"
-    healthcheck:
-      # Adjust below to match your container port
-      test: [ "CMD", "curl", "-f", "http://localhost:8000/api/v2/heartbeat" ]
-      interval: 30s
-      timeout: 10s
-      retries: 3
     networks:
       - net
```
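The commit drops the healthcheck along with the old persistence settings. The removed check already targeted the v2 heartbeat endpoint, so if you still want container health monitoring with the 1.x image, a minimal sketch (assuming `curl` is present inside the image, which may not hold for every Chroma release):

```yaml
services:
  chromadb:
    image: chromadb/chroma:1.0.12
    healthcheck:
      # Same endpoint the removed check used; verify it against your
      # Chroma version and container port before relying on it.
      test: [ "CMD", "curl", "-f", "http://localhost:8000/api/v2/heartbeat" ]
      interval: 30s
      timeout: 10s
      retries: 3
```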

modules/helpers.py (8 additions, 1 deletion)

```diff
@@ -218,7 +218,14 @@ def num_tokens_from_string(string: str, model: str = "gpt-4o-mini") -> int:
 
     See https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken
     """
-    encoding_name = tiktoken.encoding_name_for_model(model_name=model)
+
+    try:
+        encoding_name = tiktoken.encoding_name_for_model(model_name=model)
+    except KeyError as e:
+        logging.error("Couldn't map %s to tokenizer: %s", model, str(e))
+        # workaround until https://github.com/openai/tiktoken/issues/395 is fixed
+        encoding_name = "o200k_base"
+
     encoding = tiktoken.get_encoding(encoding_name)
     return len(encoding.encode(string))
```
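The fallback pattern added above can be sketched in isolation. The `MODEL_TO_ENCODING` table and `safe_encoding_name` helper below are hypothetical stand-ins for tiktoken's internal model-to-encoding mapping, used only to illustrate the try/except logic:

```python
import logging

# Hypothetical stand-in for tiktoken's internal model -> encoding table;
# tiktoken.encoding_name_for_model raises KeyError for models it doesn't know.
MODEL_TO_ENCODING = {
    "gpt-4o-mini": "o200k_base",
    "gpt-4": "cl100k_base",
}


def safe_encoding_name(model: str) -> str:
    """Look up the encoding for a model, falling back to "o200k_base"
    when the model is unknown (mirrors the change in this commit)."""
    try:
        return MODEL_TO_ENCODING[model]
    except KeyError as e:
        logging.error("Couldn't map %s to tokenizer: %s", model, e)
        return "o200k_base"
```

The design choice here is to degrade gracefully: an unmapped model name yields a token count from a recent default encoding rather than crashing the caller.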

modules/ui.py (1 addition, 2 deletions)

```diff
@@ -95,8 +95,7 @@ def display_model_settings_sidebar():
         )
         if model != get_default_config_value("default_model.gpt"):
             st.warning(
-                """:warning: More advanced models (like gpt-4 and gpt-4o) have better reasoning capabilities and larger context windows. However, they likely won't make
-                a big difference for short videos and simple tasks, like plain summarization. Also, beware of the higher costs of other [flagship models](https://platform.openai.com/docs/models/flagship-models)."""
+                """:warning: Be aware of the higher costs and potentially higher latencies when using more advanced models (like gpt-4 and gpt-4o). You can see details (incl. costs) about the models and compare them [here](https://platform.openai.com/docs/models/compare)."""
             )
 
 
```

requirements.txt (2 additions, 2 deletions)

```diff
@@ -8,8 +8,8 @@ watchdog==6.0.0
 pytest==8.3.5
 peewee==3.18.1
 python-dotenv==1.1.0
-chromadb==0.6.3
-langchain-chroma==0.2.3
+chromadb==1.0.12
+langchain-chroma==0.2.4
 randomname==0.2.1
 tiktoken==0.9.0
 openai-whisper==20240930
```

Comments (0)