
Commit be4025f

update
1 parent fc61fc1 commit be4025f

File tree: 1 file changed (+3, −3 lines changed)

articles/ai-services/openai/includes/chat-completion.md

Lines changed: 3 additions & 3 deletions
@@ -176,7 +176,7 @@ Every response includes a `finish_reason`. The possible values for `finish_reaso
 * **stop**: API returned complete model output.
 * **length**: Incomplete model output due to max_tokens parameter or token limit.
 * **content_filter**: Omitted content due to a flag from our content filters.
-* **null**:API response still in progress or incomplete.
+* **null**: API response still in progress or incomplete.

 Consider setting `max_tokens` to a slightly higher value than normal such as 300 or 500. This ensures that the model doesn't stop generating text before it reaches the end of the message.
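The hunk above only adds a missing space to the `null` bullet, but since the surrounding passage is about `finish_reason` and `max_tokens`, here is a minimal sketch of how that advice looks in code. It assumes the `openai` Python package (v1.x), the `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY` environment variables, and a deployment named `gpt-35-turbo`; those names are illustrative assumptions, not taken from this commit.

```python
# Hypothetical sketch: check finish_reason and give the model headroom with max_tokens.
# Assumes openai v1.x, AZURE_OPENAI_ENDPOINT / AZURE_OPENAI_API_KEY env vars,
# and a deployment named "gpt-35-turbo" (all assumptions, not from this diff).
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # deployment name (assumption)
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of unit testing."},
    ],
    max_tokens=500,  # slightly higher than needed so the reply isn't cut off
)

choice = response.choices[0]
if choice.finish_reason == "length":
    # Output was truncated by max_tokens or the model's token limit.
    print("Response was cut off; consider raising max_tokens.")
print(choice.finish_reason, choice.message.content)
```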

@@ -364,7 +364,7 @@ while True:

 ---

-When you run the code above you will get a blank console window. Enter your first question in the window and then hit enter. Once the response is returned, you can repeat the process and keep asking questions.
+When you run the code above you'll get a blank console window. Enter your first question in the window and then hit enter. Once the response is returned, you can repeat the process and keep asking questions.

 ## Managing conversations
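The changed line refers to a console loop defined earlier in the article, which this diff doesn't include. As a rough sketch only, a loop of that shape, reusing the `client` object and deployment name from the previous sketch (both assumptions) and appending each turn to a running `conversation` list, might look like this:

```python
# Hypothetical sketch of the kind of console loop the changed line describes:
# a blank console waits for input, you press Enter, and the reply prints.
# `client` and the deployment name are carried over from the earlier sketch (assumptions).
conversation = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input()  # blank console window waits here for your question
    conversation.append({"role": "user", "content": user_input})

    response = client.chat.completions.create(
        model="gpt-35-turbo",  # deployment name (assumption)
        messages=conversation,
    )
    answer = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": answer})

    print("\n" + answer + "\n")
```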

@@ -547,7 +547,7 @@ The token counting portion of the code demonstrated previously is a simplified v

 ## Troubleshooting

-### Do not use ChatML syntax with the Chat Completions endpoint
+### Don't use ChatML syntax with the Chat Completions endpoint

 We have found that some customers will try using the [legacy ChatML syntax](../how-to/chat-markup-language.md) with the chat completion endpoints and newer models. ChatML was a preview capability that only worked with the legacy completions endpoint with the `gpt-35-turbo` version 0301 model, which is [slated for retirement](../concepts/model-retirements.md). Attempting to use ChatML syntax with newer models and the chat completions endpoint can result in errors as well as unexpected model response behavior, and is not recommended.
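For context on the troubleshooting note above, here is a hedged illustration of the difference: a hand-built ChatML prompt string (the pattern to avoid) versus the structured `messages` list that the chat completions endpoint expects. The `client` object and deployment name are carried over from the earlier sketches and are assumptions, not part of this commit.

```python
# Hypothetical contrast: don't hand-build ChatML strings for the legacy
# completions endpoint; send structured messages to the chat completions
# endpoint instead. Deployment name and `client` are assumptions.

# Pattern to avoid: ChatML-style prompt string aimed at the legacy completions endpoint.
chatml_prompt = (
    "<|im_start|>system\nYou are a helpful assistant.\n<|im_end|>\n"
    "<|im_start|>user\nWhat is Azure OpenAI?\n<|im_end|>\n<|im_start|>assistant\n"
)

# Recommended: structured messages with the chat completions endpoint.
response = client.chat.completions.create(
    model="gpt-35-turbo",  # any current chat model deployment (assumption)
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Azure OpenAI?"},
    ],
)
print(response.choices[0].message.content)
```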
