
Commit a915297

Merge branch 'main' into release-preview-eval-redteaming
2 parents: 76ca3ca + b36b0d0

File tree

10 files changed (+19, -31 lines)


articles/ai-foundry/concepts/models-featured.md

Lines changed: 1 addition & 1 deletion
@@ -327,7 +327,7 @@ See the [Nixtla model collection in Azure AI Foundry portal](https://ai.azure.co
 | ------ | ---- | ------------ |
 | [tsuzumi-7b](https://ai.azure.com/explore/models/Tsuzumi-7b/version/1/registry/azureml-nttdata) | [chat-completion](../model-inference/how-to/use-chat-completions.md?context=/azure/ai-foundry/context/context) | - **Input:** text (8,192 tokens) <br /> - **Output:** text (8,192 tokens) <br /> - **Tool calling:** No <br /> - **Response formats:** Text |

-## Stability AI
+## Stability AI

 The Stability AI collection of image generation models include Stable Image Core, Stable Image Ultra and Stable Diffusion 3.5 Large. Stable Diffusion 3.5 Large allows for an image and text input.

articles/ai-foundry/how-to/model-catalog-overview.md

Lines changed: 6 additions & 17 deletions
@@ -10,7 +10,7 @@ ms.custom:
 - ai-learning-hub
 - ignite-2024
 ms.topic: how-to
-ms.date: 12/04/2024
+ms.date: 03/24/2025
 ms.reviewer: jcioffi
 ms.author: ssalgado
 author: ssalgadodev
@@ -73,23 +73,12 @@ Network isolation | [Configure managed networks for Azure AI Foundry hubs](confi
 
 ### Available models for supported deployment options
 
-The following list contains Serverless API models. For Azure OpenAI models, see [Azure OpenAI Service Models](../../ai-services/openai/concepts/models.md).
+For Azure OpenAI models, see [Azure OpenAI Service Models](../../ai-services/openai/concepts/models.md).
+
+To view a list of supported models for Serverless API or Managed Compute, go to the home page of the model catalog in [Azure AI Foundry](https://ai.azure.com). Use the **Deployment options** filter to select either **Serverless API** or **Managed Compute**.
+
+:::image type="content" source="../media/how-to/model-catalog-overview/catalog-filter.png" alt-text="A screenshot showing how to filter by managed compute models in the catalog." lightbox="../media/how-to/model-catalog-overview/catalog-filter.png":::
 
-Model | Managed compute | Serverless API (pay-per-token)
---|--|--
-AI21 family models | Not available | Jamba-1.5-Mini <br> Jamba-1.5-Large
-Bria | Not available | Bria-2.3-Fast
-Cohere family models | Not available | Cohere-command-r-plus-08-2024 <br> Cohere-command-r-08-2024 <br> Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual <br> Cohere-rerank-v3.5 <br> Cohere-rerank-v3-english <br> Cohere-rerank-v3-multilingual
-DeepSeek models from Microsoft | Not available | DeepSeek-V3 <br> DeepSeek-R1
-Gretel | Not available | Gretel-Navigator
-Healthcare AI family Models | MedImageParse<BR> MedImageInsight<BR> CxrReportGen<BR> Virchow<BR> Virchow2<BR> Prism<BR> BiomedCLIP-PubMedBERT<BR> microsoft-llava-med-v1.5<BR> m42-health-llama3-med4<BR> biomistral-biomistral-7b<BR> microsoft-biogpt-large-pub<BR> microsoft-biomednlp-pub<BR> stanford-crfm-biomedlm<BR> medicalai-clinicalbert<BR> microsoft-biogpt<BR> microsoft-biogpt-large<BR> microsoft-biomednlp-pub<BR> | Not Available
-JAIS | Not available | jais-30b-chat
-Meta Llama family models | Llama-3.3-70B-Instruct<BR> Llama-3.2-3B-Instruct<BR> Llama-3.2-1B-Instruct<BR> Llama-3.2-1B<BR> Llama-3.2-90B-Vision-Instruct<BR> Llama-3.2-11B-Vision-Instruct<BR> Llama-3.1-8B-Instruct<BR> Llama-3.1-8B<BR> Llama-3.1-70B-Instruct<BR> Llama-3.1-70B<BR> Llama-3-8B-Instruct<BR> Llama-3-70B<BR> Llama-3-8B<BR> Llama-Guard-3-1B<BR> Llama-Guard-3-8B<BR> Llama-Guard-3-11B-Vision<BR> Llama-2-7b<BR> Llama-2-70b<BR> Llama-2-7b-chat<BR> Llama-2-13b-chat<BR> CodeLlama-7b-hf<BR> CodeLlama-7b-Instruct-hf<BR> CodeLlama-34b-hf<BR> CodeLlama-34b-Python-hf<BR> CodeLlama-34b-Instruct-hf<BR> CodeLlama-13b-Instruct-hf<BR> CodeLlama-13b-Python-hf<BR> Prompt-Guard-86M<BR> CodeLlama-70b-hf<BR> | Llama-3.3-70B-Instruct<BR> Llama-3.2-90B-Vision-Instruct<br> Llama-3.2-11B-Vision-Instruct<br> Llama-3.1-8B-Instruct<br> Llama-3.1-70B-Instruct<br> Llama-3.1-405B-Instruct<br> Llama-3-8B-Instruct<br> Llama-3-70B-Instruct<br> Llama-2-7b<br> Llama-2-7b-chat<br> Llama-2-70b<br> Llama-2-70b-chat<br> Llama-2-13b<br> Llama-2-13b-chat<br>
-Microsoft Phi family models | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct <br> Phi-3-vision-128k-Instruct <br> Phi-3.5-mini-Instruct <br> Phi-3.5-vision-Instruct <br> Phi-3.5-MoE-Instruct <br> Phi-4 <br> Phi-4-mini-instruct <br> Phi-4-multimodal-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct <br> <br> Phi-3.5-mini-Instruct <br> Phi-3.5-vision-Instruct <br> Phi-3.5-MoE-Instruct <br> Phi-4 <br> Phi-4-mini-instruct <br> Phi-4-multimodal-instruct
-Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large (2411) <br> Mistral-large (2407) <br> Mistral-large (2402) <br> Mistral-small-2503 <br> Mistral-small <br> Ministral-3B <br> Mistral-NeMo <br> Codestral-2501
-Nixtla | Not available | TimeGEN-1
-NTT DATA | Not available | tsuzumi-7b
-Stability AI | Not available | Stable Diffusion 3.5 Large <br> Stable Image Core <br> Stable Image Ultra
 
 <!-- docutune:enable -->

Binary image file changed (382 KB; diff not rendered)

articles/ai-services/document-intelligence/how-to-guides/includes/v3-0/rest-api.md

Lines changed: 1 addition & 1 deletion
@@ -98,7 +98,7 @@ The cURL command line tool doesn't format API responses that contain JSON conten
 
 #### [Windows](#tab/windows)
 
-Use the NodeJS *json tool* as a JSON formatter for cURL. If you don't have [Node.js](https://nodejs.org/) installed, download and install the latest version.
+Use the Node.js *json tool* as a JSON formatter for cURL. If you don't have [Node.js](https://nodejs.org/) installed, download and install the latest version.
 
 1. Open a console window and install the json tool by using the following command:

articles/ai-services/document-intelligence/how-to-guides/includes/v4-0/rest-api.md

Lines changed: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ The cURL command line tool doesn't format API responses that contain JSON conten
 
 #### [Windows](#tab/windows)
 
-Use the NodeJS *json tool* as a JSON formatter for cURL. If you don't have [Node.js](https://nodejs.org/) installed, download and install the latest version.
+Use the Node.js *json tool* as a JSON formatter for cURL. If you don't have [Node.js](https://nodejs.org/) installed, download and install the latest version.
 
 1. Open a bash window and install the json tool by using the following command:

articles/ai-services/openai/how-to/computer-use.md

Lines changed: 1 addition & 1 deletion
@@ -220,7 +220,7 @@ response_2 = client.responses.create(
     model="computer-use-preview",
     previous_response_id=response.id,
     tools=[{
-        "type": "computer-preview",
+        "type": "computer_use_preview",
         "display_width": 1024,
         "display_height": 768,
         "environment": "browser" # other possible values: "mac", "windows", "ubuntu"

articles/ai-services/openai/how-to/responses.md

Lines changed: 3 additions & 4 deletions
@@ -245,7 +245,7 @@ response = client.responses.retrieve("resp_67cb61fa3a448190bcf2c42d96f0d1a8")
 ### Microsoft Entra ID
 
 ```bash
-curl -X GET "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/{response_id}?api-version=2025-03-01-preview" \
+curl -X GET "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/responses/{response_id}?api-version=2025-03-01-preview" \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN"
 ```
@@ -441,7 +441,7 @@ inputs = [{"type": "message", "role": "user", "content": "Define and explain the
 
 response = client.responses.create(
     model="gpt-4o", # replace with your model deployment name
-    input="inputs"
+    input=inputs
 )
 
 inputs += response.output
@@ -451,7 +451,6 @@ inputs.append({"role": "user", "type": "message", "content": "Explain this at a
 
 second_response = client.responses.create(
     model="gpt-4o",
-    previous_response_id=response.id,
     input=inputs
 )

@@ -507,7 +506,7 @@ for output in response.output:
     input.append(
         {
             "type": "function_call_output",
-            "call_id": output.id,
+            "call_id": output.call_id,
             "output": '{"temperature": "70 degrees"}',
         }
     )
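
For context on the last hunk, here is a minimal sketch of the function-calling round trip that the `call_id` fix belongs to. It assumes `client` is a configured client object and `response` came from a `responses.create` call that declared a function tool; `get_weather` is a hypothetical local helper, not part of the original sample.

```python
import json

# Sketch only: echo each model-issued function call back with its call_id.
tool_outputs = []
for output in response.output:
    if output.type == "function_call":
        args = json.loads(output.arguments)
        result = get_weather(**args)  # hypothetical local helper
        tool_outputs.append(
            {
                "type": "function_call_output",
                "call_id": output.call_id,   # must be the call_id, not the item's own id
                "output": json.dumps(result),
            }
        )

follow_up = client.responses.create(
    model="gpt-4o",                      # replace with your model deployment name
    previous_response_id=response.id,
    input=tool_outputs,
)
```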

articles/ai-services/speech-service/includes/release-notes/release-notes-sdk.md

Lines changed: 2 additions & 2 deletions
@@ -368,9 +368,9 @@ This table shows the previous and new object names for real-time diarization and
 
 * [New JavaScript meeting transcription quickstart](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/browser/meeting-transcription/README.md)
 
-* [New NodeJS conversation transcription quickstart](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/node/conversation-transcription/README.md)
+* [New Node.js conversation transcription quickstart](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/node/conversation-transcription/README.md)
 
-* [New NodeJS meeting transcription quickstart](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/node/meeting-transcription/README.md)
+* [New Node.js meeting transcription quickstart](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/node/meeting-transcription/README.md)
 
 ### Speech SDK 1.30.0: July 2023 release

articles/machine-learning/data-science-virtual-machine/release-notes.md

Lines changed: 3 additions & 3 deletions
@@ -506,7 +506,7 @@ Primary changes:
 - Changed Intellijidea to version 2021.2.3
 - Changed NVIDIA Drivers to version 470.103.01
 - Changed NVIDIA SMI to version 470.103.01
-- Changed Nodejs to version v16.13.0
+- Changed Node.js to version v16.13.0
 - Changed Pycharm to version 2021.2.3
 - Changed VS Code to version 1.61.2
 - Conda
@@ -537,7 +537,7 @@ Primary changes:
 - Changed pytorch to version 1.9.1
 - Changed Docker to version 20.10.9
 - Changed Intellijidea to version 2021.2.2
-- Changed Nodejs to version v14.18.0
+- Changed Node.js to version v14.18.0
 - Changed Pycharm to version 2021.2.2
 - Changed VS Code to version 1.60.2
 - Fixed AutoML environment (azureml_py36_automl)
@@ -613,7 +613,7 @@ Selected version updates include:
 - Julia 1.0.5
 - Jupyter Lab 2.2.6
 - Microsoft Edge browser
-- NodeJS 16.2.0
+- Node.js 16.2.0
 - Power BI Desktop 2.93.641.0 64-bit (May 2021)
 - PyCharm Community Edition 2021.1.1
 - Python 3.8

articles/search/cognitive-search-how-to-debug-skillset.md

Lines changed: 1 addition & 1 deletion
@@ -176,7 +176,7 @@ Tunnelmole is an open source tunneling tool that can create a public URL that fo
 + npm: `npm install -g tunnelmole`
 + Linux: `curl -s https://tunnelmole.com/sh/install-linux.sh | sudo bash`
 + Mac: `curl -s https://tunnelmole.com/sh/install-mac.sh --output install-mac.sh && sudo bash install-mac.sh`
-+ Windows: Install by using npm. Or if you don't have NodeJS installed, download the [precompiled .exe file for Windows](https://tunnelmole.com/downloads/tmole.exe) and put it somewhere in your PATH.
++ Windows: Install by using npm. Or if you don't have Node.js installed, download the [precompiled .exe file for Windows](https://tunnelmole.com/downloads/tmole.exe) and put it somewhere in your PATH.
 
 1. Run this command to create a new tunnel:
