If you want to pass a parameter that isn't in the list of supported parameters, you can pass it to the underlying model using *extra parameters*. See [Pass extra parameters to the model](#pass-extra-parameters-to-the-model).
#### Create JSON outputs
tsuzumi-7b models can create JSON outputs. Set `response_format` to `json_object` to enable JSON mode and guarantee that the message the model generates is valid JSON. You must also instruct the model to produce JSON yourself via a system or user message. Also, the message content might be partially cut off if `finish_reason="length"`, which indicates that the generation exceeded `max_tokens` or that the conversation exceeded the max context length.
```python
from azure.ai.inference.models import ChatCompletionsResponseFormatJSON

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant that always generate responses in JSON format, "
                              "using the following format: { \"answer\": \"response\" }."),
        UserMessage(content="How many languages are in the world?"),
    ],
    response_format=ChatCompletionsResponseFormatJSON()
)
```
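Because JSON mode guarantees that the generated message is valid JSON, the content can be parsed directly with the standard library. A minimal sketch (the content string below is a hypothetical model reply in the instructed `{ "answer": "response" }` format, not real output):

```python
import json

# Hypothetical message content returned by the model in JSON mode:
content = '{ "answer": "Estimates put the number of living languages near 7,000." }'

parsed = json.loads(content)
print(parsed["answer"])
```

Remember to check `finish_reason` before parsing: if it equals `"length"`, the content might be truncated and therefore not valid JSON.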
tsuzumi-7b models support the parameter `safe_prompt`. You can enable the safe prompt to prepend your messages with the following system prompt:
> Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity.
The Azure AI Model Inference API allows you to pass this extra parameter as follows:
```python
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many languages are in the world?"),
    ],
    model_extras={
        "safe_mode": True
    }
)
```
### Apply content safety
The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
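When content safety blocks a request, the service responds with an error rather than a completion. As a hedged sketch (the payload shape and the `content_filter` error code are assumptions for illustration; consult the service's actual error contract), a client could detect that case from the error body:

```python
import json

def is_content_filter_error(body: str) -> bool:
    """Return True when an error payload looks like a content-filter rejection.

    The payload shape and the "content_filter" code are assumptions for
    illustration; check the service's documented error codes for real values.
    """
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return False
    error = payload.get("error")
    return isinstance(error, dict) and error.get("code") == "content_filter"

# Assumed example payloads:
blocked = '{"error": {"code": "content_filter", "message": "The prompt was blocked."}}'
ordinary = '{"error": {"code": "rate_limit_exceeded", "message": "Too many requests."}}'
print(is_content_filter_error(blocked))   # True
print(is_content_filter_error(ordinary))  # False
```

In practice the SDK surfaces such failures as exceptions, so this kind of check would typically run inside an error handler rather than on the raw HTTP body.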