
Conversation

ManilShrestha (Contributor)

Problems Addressed

  1. Indefinite Generation with stop_at Parameter:

    • When using the Llama3 model with a stop_at parameter in the extra body, the generation continues indefinitely if the model doesn't output the specified stop_at string. This leads to unexpected behavior and potential resource waste.
  2. Temperature Value Handling:

    • According to the OpenAI documentation, the temperature value ranges from 0 to 2. However, the existing code treated it as if it ranged from 0 to 1.

Solution

  1. Stop Condition Fix:

    • Modified the stop conditions to rely solely on the preferred_eos variable. The stop_at parameter is already merged into preferred_eos earlier in the pipeline, so checking it separately was redundant and could miss the stop string entirely.
  2. Temperature Range Adjustment:

    • Adjusted the handling of the temperature value to ensure that any value exceeding 2 is limited to 2, adhering to OpenAI's specified range.
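The two fixes can be sketched as follows. This is a minimal illustration, not the PR's actual code: the names `clamp_temperature`, `should_stop`, and `preferred_eos` here are illustrative (only `preferred_eos` appears in the discussion above), and the EOS token shown is just an example.

```python
def clamp_temperature(temperature: float) -> float:
    """Clamp temperature to OpenAI's documented range [0, 2]."""
    return max(0.0, min(temperature, 2.0))


def should_stop(generated_text: str, preferred_eos: list[str]) -> bool:
    """Stop generation once any preferred EOS string appears.

    stop_at is assumed to have been merged into preferred_eos earlier
    in the pipeline, so this is the only stop check needed.
    """
    return any(eos in generated_text for eos in preferred_eos)


# Illustrative usage:
clamp_temperature(3.5)                                # clamped to 2.0
should_stop("done<|eot_id|>", ["<|eot_id|>"])         # stops generation
```

With this structure, a request whose stop_at string never appears still terminates normally when the model emits its own EOS token, since that token is also in `preferred_eos`.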

Testing Conducted

  1. Stop Condition Testing:

    • Verified that the generation stops correctly when stop_at is reached.
    • Confirmed that the generation completes normally when stop_at is not encountered.
    • Tested with various input prompts and stop_at values to ensure robust handling.
  2. Temperature Value Testing:

    • Checked behavior by passing temperature > 2, confirming that the server sets the upper limit to 2 and processes the job correctly.
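A request exercising both test cases might look like the sketch below. The payload shape is hypothetical (the model name, message content, and stop_at value are placeholders, and the exact extra-body field layout follows this PR's convention rather than a documented API); the final line mirrors the server-side clamping verified in test case 2.

```python
# Hypothetical request payload for the tests described above.
payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Count to three."}],
    "temperature": 3.5,                     # deliberately out of range
    "extra_body": {"stop_at": "\n\n"},      # custom stop string
}

# Server-side normalization expected by test case 2:
payload["temperature"] = min(payload["temperature"], 2.0)
```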

edk208 (Contributor) commented Jul 11, 2024

Is this still necessary, or is this wrapped in #19?

ManilShrestha (Contributor, Author)

Yes, this is wrapped in #20. Going to close this.

