
Commit 9b39c70

Update deploy-models-tsuzumi.md
1 parent 683679d commit 9b39c70

File tree

1 file changed: +0 −38 lines changed


articles/ai-studio/how-to/deploy-models-tsuzumi.md

Lines changed: 0 additions & 38 deletions
@@ -216,23 +216,6 @@ response = client.complete(
 
 If you want to pass a parameter that isn't in the list of supported parameters, you can pass it to the underlying model using *extra parameters*. See [Pass extra parameters to the model](#pass-extra-parameters-to-the-model).
 
-#### Create JSON outputs
-
-tsuzumi-7b models can create JSON outputs. Set `response_format` to `json_object` to enable JSON mode and guarantee that the message the model generates is valid JSON. You must also instruct the model to produce JSON yourself via a system or user message. Also, the message content might be partially cut off if `finish_reason="length"`, which indicates that the generation exceeded `max_tokens` or that the conversation exceeded the max context length.
-
-```python
-from azure.ai.inference.models import ChatCompletionsResponseFormatJSON
-
-response = client.complete(
-    messages=[
-        SystemMessage(content="You are a helpful assistant that always generates responses in JSON format,"
-                              " using the following format: { \"answer\": \"response\" }."),
-        UserMessage(content="How many languages are in the world?"),
-    ],
-    response_format=ChatCompletionsResponseFormatJSON()
-)
-```
 
 ### Pass extra parameters to the model

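The removed JSON-mode guidance still carries a general point worth keeping in mind: when `finish_reason` is `"length"`, the generated JSON may be cut off mid-object and fail to parse. A minimal, SDK-independent sketch of guarding against that (the helper name `parse_json_completion` is hypothetical, not part of any Azure SDK):

```python
import json

def parse_json_completion(content: str, finish_reason: str) -> dict:
    """Parse a JSON-mode completion, guarding against truncation.

    finish_reason == "length" means generation hit max_tokens or the max
    context length, so the JSON payload may be incomplete.
    """
    if finish_reason == "length":
        raise ValueError("Completion truncated; increase max_tokens and retry.")
    return json.loads(content)

# A completion that finished normally parses cleanly:
result = parse_json_completion('{"answer": "around 7,000"}', "stop")
print(result["answer"])  # → around 7,000
```

In practice you would read `content` and `finish_reason` from the first choice of the service response before calling a guard like this.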
@@ -253,27 +236,6 @@ response = client.complete(
 )
 ```
 
-### Safe mode
-
-tsuzumi-7b models support the parameter `safe_prompt`. You can toggle the safe prompt to prepend your messages with the following system prompt:
-
-> Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity.
-
-The Azure AI Model Inference API allows you to pass this extra parameter as follows:
-
-```python
-response = client.complete(
-    messages=[
-        SystemMessage(content="You are a helpful assistant."),
-        UserMessage(content="How many languages are in the world?"),
-    ],
-    model_extras={
-        "safe_mode": True
-    }
-)
-```
 
 ### Apply content safety
 
 The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
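When the content filtering system blocks a request, the service typically rejects it with an HTTP 400 whose error payload identifies the filter. A hedged, locally runnable sketch of classifying such a payload (the `content_filter` error code and the `{"error": {...}}` envelope are assumptions based on common Azure error responses, not taken from this diff; verify against your deployment):

```python
def is_content_filter_rejection(status_code: int, payload: dict) -> bool:
    """Return True if an error response looks like a content-safety rejection.

    Assumes the Azure-style error envelope {"error": {"code": ..., "message": ...}}.
    """
    error = payload.get("error", {})
    return status_code == 400 and error.get("code") == "content_filter"

blocked = {"error": {"code": "content_filter", "message": "The response was filtered."}}
print(is_content_filter_rejection(400, blocked))                        # → True
print(is_content_filter_rejection(400, {"error": {"code": "other"}}))   # → False
```

With the Python SDK, a check like this would run inside an `except HttpResponseError` handler after a `client.complete(...)` call, using the exception's status code and JSON body.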

0 commit comments

Comments
 (0)