Commit 35f6634

Adjust default temperature for OpenAI model completion
Modify the default temperature parameter in get_completion_from_messages() from 0.4 to 0, ensuring more deterministic and focused model responses.
Parent commit: 31403b2

File tree

1 file changed (+1, -1 lines)

llm-complete-guide/utils/llm_utils.py

Lines changed: 1 addition & 1 deletion
@@ -411,7 +411,7 @@ def get_topn_similar_docs(
 
 
 def get_completion_from_messages(
-    messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000
+    messages, model=OPENAI_MODEL, temperature=0, max_tokens=1000
 ):
     """Generates a completion response from the given messages using the specified model.
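
For context, here is a minimal sketch of how such a helper might wrap the OpenAI chat completions API with the new default. The function body, imports, and OPENAI_MODEL value shown here are illustrative assumptions, not code taken from this diff:

    # Hypothetical sketch; the actual body in llm_utils.py may differ.
    from openai import OpenAI

    OPENAI_MODEL = "gpt-4o-mini"  # assumption; the repo defines its own constant

    client = OpenAI()  # reads OPENAI_API_KEY from the environment


    def get_completion_from_messages(
        messages, model=OPENAI_MODEL, temperature=0, max_tokens=1000
    ):
        """Generates a completion response from the given messages using the specified model.

        With temperature=0 the model samples near-greedily, so repeated calls
        with the same messages return more deterministic, focused answers.
        """
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens,
        )
        return response.choices[0].message.content

With this change, a call such as get_completion_from_messages([{"role": "user", "content": "Summarize the guide in one sentence."}]) now runs at temperature=0 unless the caller overrides it.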
