Commit 1bcbf20 (1 parent: dadc410)

change default model to gpt4o-mini (#1166)

Reasons:
1. gpt-4o-mini is 2x cheaper than the current default model, gpt-3.5-turbo.
2. It has an extended context length of 128k tokens.
3. Public benchmarks indicate that gpt-4o-mini outperforms gpt-3.5-turbo on all tasks. [Source](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/)

File tree

2 files changed: +3 −3 lines changed

src/ragas/llms/base.py

Lines changed: 1 addition & 1 deletion

    @@ -289,7 +289,7 @@ async def agenerate_text(


     def llm_factory(
    -    model: str = "gpt-3.5-turbo", run_config: t.Optional[RunConfig] = None
    +    model: str = "gpt-4o-mini", run_config: t.Optional[RunConfig] = None
     ) -> BaseRagasLLM:
         timeout = None
         if run_config is not None:
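The effect of the change above can be sketched with a minimal stand-in. Note this is a hypothetical simplification: `RunConfig` here is a bare placeholder for ragas's real run-config class, and the stub returns the resolved model name rather than the wrapped ChatOpenAI client the real factory builds.

```python
from typing import Optional


class RunConfig:
    """Placeholder for ragas's RunConfig; only the timeout field is modelled."""

    def __init__(self, timeout: Optional[float] = None):
        self.timeout = timeout


def llm_factory(
    model: str = "gpt-4o-mini", run_config: Optional[RunConfig] = None
) -> str:
    # Mirrors the patched default: callers who pass no model now get
    # gpt-4o-mini instead of gpt-3.5-turbo.
    timeout = None
    if run_config is not None:
        timeout = run_config.timeout
    # The real factory would construct an LLM wrapper using `model` and
    # `timeout`; this sketch just returns the chosen model name.
    return model


print(llm_factory())  # → gpt-4o-mini
print(llm_factory("gpt-4o", RunConfig(timeout=30)))  # → gpt-4o
```

Because the change only touches the default argument, any caller that passed a model explicitly is unaffected.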

src/ragas/testset/generator.py

Lines changed: 2 additions & 2 deletions

    @@ -146,8 +146,8 @@ def from_llama_index(
         @deprecated("0.1.4", removal="0.2.0", alternative="from_langchain")
         def with_openai(
             cls,
    -        generator_llm: str = "gpt-3.5-turbo-16k",
    -        critic_llm: str = "gpt-4",
    +        generator_llm: str = "gpt-4o-mini",
    +        critic_llm: str = "gpt-4o",
             embeddings: str = "text-embedding-ada-002",
             docstore: t.Optional[DocumentStore] = None,
             chunk_size: int = 1024,
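The new testset-generator defaults can be summarized with a small sketch. This is a hypothetical simplification of `TestsetGenerator.with_openai` (which, per the diff, is deprecated in favour of `from_langchain`): the stub only resolves the default model names, not the document store or LLM wrappers the real classmethod constructs.

```python
def with_openai_defaults(
    generator_llm: str = "gpt-4o-mini",
    critic_llm: str = "gpt-4o",
    embeddings: str = "text-embedding-ada-002",
) -> dict:
    # After this commit the cheaper gpt-4o-mini generates test samples,
    # while the stronger gpt-4o acts as the critic; embeddings are unchanged.
    return {
        "generator_llm": generator_llm,
        "critic_llm": critic_llm,
        "embeddings": embeddings,
    }


print(with_openai_defaults())
```

The design choice mirrors the old split (a cheap generator, a stronger critic), simply moving both roles to the gpt-4o family.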

0 commit comments