
Commit 14e2887
Author: sd109
Commit message: Minor tweaks
Parent: bee138e

2 files changed: +5 -5 lines

chart/web-app/config.py (2 additions, 2 deletions)

@@ -26,14 +26,14 @@ class AppSettings(BaseSettings):
     model_config = SettingsConfigDict(env_prefix="llm_ui_")
 
     # General settings
-    model_name: str = Field(
+    hf_model_name: str = Field(
         description="The model to use when constructing the LLM Chat client. This should match the model name running on the vLLM backend",
     )
     backend_url: HttpUrl = Field(
         default_factory=lambda: f"http://llm-backend.{get_k8s_namespace()}.svc"
     )
     page_title: str = Field(default="Large Language Model")
-    model_instruction: str = Field(
+    hf_model_instruction: str = Field(
         default="You are a helpful and cheerful AI assistant. Please respond appropriately."
     )

chart/web-app/example-settings.yml (3 additions, 3 deletions)

@@ -1,7 +1,7 @@
 backend_url: http://128.232.226.230
-model_name: tiiuae/falcon-7b
+hf_model_name: tiiuae/falcon-7b
 
-model_instruction: You are a helpful and cheerful AI assistant. Please respond appropriately.
+hf_model_instruction: You are a helpful and cheerful AI assistant. Please respond appropriately.
 
 # UI theming tweaks
 # theme_title_colour: white
@@ -11,6 +11,6 @@ model_instruction: You are a helpful and cheerful AI assistant. Please respond a
 
 # llm_max_tokens:
 # llm_temperature:
-# llm_top_p:
+# llm_top_p:
 # llm_frequency_penalty:
 # llm_presence_penalty:
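Since the YAML settings file must now use the new key names, a downstream check like the following can flag stale configs (a sketch only; `check_settings_keys` and the rename table are hypothetical helpers, not part of the chart):

```python
# Keys renamed by commit 14e2887: old name -> new name.
RENAMED_KEYS = {
    "model_name": "hf_model_name",
    "model_instruction": "hf_model_instruction",
}


def check_settings_keys(settings: dict) -> list:
    """Return a warning string for each old-style key found in a parsed settings mapping."""
    return [
        f"'{old}' was renamed to '{new}'"
        for old, new in RENAMED_KEYS.items()
        if old in settings
    ]


# A settings dict as it would look parsed from the pre-commit example-settings.yml:
old_style = {"backend_url": "http://128.232.226.230", "model_name": "tiiuae/falcon-7b"}
print(check_settings_keys(old_style))  # ["'model_name' was renamed to 'hf_model_name'"]
```

The same check returns an empty list for a dict already using `hf_model_name` and `hf_model_instruction`.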
