articles/ai-services/agents/includes/quickstart-javascript.md (+21 −6)

@@ -31,11 +31,18 @@ ms.custom: devx-track-js
 | Run | Activation of an agent to begin running based on the contents of Thread. The agent uses its configuration and Thread’s Messages to perform tasks by calling models and tools. As part of a Run, the agent appends Messages to the Thread. |
 | Run Step | A detailed list of steps the agent took as part of a Run. An agent can call tools or create Messages during its run. Examining Run Steps allows you to understand how the agent is getting to its results. |

-Run the following commands to install the npm packages.
+First, initialize a new project by running:
+
+```console
+npm init -y
+```
+
+Run the following commands to install the npm packages required.

 ```console
 npm install @azure/ai-projects
 npm install @azure/identity
+npm install dotenv
 ```

 Next, to authenticate your API requests and run the program, use the [az login](/cli/azure/authenticate-azure-cli-interactively) command to sign into your Azure subscription.

@@ -60,7 +67,9 @@ For example, your connection string may look something like:
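The diff above adds `dotenv` so the quickstart can load its project connection string from a `.env` file. A minimal sketch of handling that value, assuming the connection string follows a `<endpoint>;<subscription-id>;<resource-group>;<project-name>` shape — the sample value and helper name here are hypothetical, not taken from the quickstart:

```javascript
// Sketch: split a project connection string into its four parts.
// In the quickstart the value would come from
// process.env.PROJECT_CONNECTION_STRING after require("dotenv").config();
// the sample string below is a hypothetical placeholder.
function parseConnectionString(connectionString) {
  const parts = connectionString.split(";");
  if (parts.length !== 4) {
    throw new Error(
      "Expected '<endpoint>;<subscriptionId>;<resourceGroup>;<projectName>'"
    );
  }
  const [endpoint, subscriptionId, resourceGroup, projectName] = parts;
  return { endpoint, subscriptionId, resourceGroup, projectName };
}

const sample =
  "eastus.api.azureml.ms;00000000-0000-0000-0000-000000000000;my-resource-group;my-project";
console.log(parseConnectionString(sample).projectName); // prints: my-project
```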
articles/ai-services/openai/concepts/provisioned-throughput.md (+8 −8)

@@ -48,14 +48,14 @@ The amount of throughput (tokens per minute or TPM) a deployment gets per PTU is

 To help with simplifying the sizing effort, the following table outlines the TPM per PTU for the specified models. To understand the impact of output tokens on the TPM per PTU limit, use the 3 input token to 1 output token ratio. For a detailed understanding of how different ratios of input and output tokens impact the throughput your workload needs, see the [Azure OpenAI capacity calculator](https://oai.azure.com/portal/calculator). The table also shows Service Level Agreement (SLA) Latency Target Values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1)

-|Topic|**gpt-4o**|**gpt-4o-mini**|
-| --- | --- | --- |
-|Global & data zone provisioned minimum deployment|15|15|
-|Global & data zone provisioned scale increment|5|5|
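The minimum deployment (15 PTU) and scale increment (5 PTU) in the rows above imply a simple sizing rule. A hedged sketch — `tpmPerPtu` is an illustrative parameter (per-model values belong to the table and the capacity calculator), not a number taken from this diff:

```javascript
// Sketch of the PTU sizing rule implied by the table rows above:
// a deployment starts at the 15-PTU minimum and grows in 5-PTU steps.
// tpmPerPtu is an illustrative assumption, not a value from this document.
function ptusNeeded(targetTpm, tpmPerPtu, minimum = 15, increment = 5) {
  const raw = Math.ceil(targetTpm / tpmPerPtu);
  if (raw <= minimum) return minimum;
  // Round up to the next valid size: minimum plus whole increments.
  return minimum + Math.ceil((raw - minimum) / increment) * increment;
}

console.log(ptusNeeded(100000, 2500)); // hypothetical 2,500 TPM/PTU → prints: 40
```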
articles/ai-services/openai/how-to/fine-tuning-deploy.md (+2 −2)

@@ -378,7 +378,7 @@ Azure OpenAI fine-tuning supports the following deployment types.

 :::image type="content" source="../media/fine-tuning/global-standard.png" alt-text="Screenshot of the global standard deployment user experience with a fine-tuned model." lightbox="../media/fine-tuning/global-standard.png":::

-Global Standard fine-tuning deployments currently do not support vision and structured outputs.
+Global Standard fine-tuned deployments currently support structured outputs only on GPT-4o.

 ### Provisioned Managed (preview)

@@ -392,7 +392,7 @@ Global Standard fine-tuning deployments currently do not support vision and structured outputs.

 [Provisioned managed](./deployment-types.md#provisioned) fine-tuned deployments offer [predictable performance](../concepts/provisioned-throughput.md#what-do-the-provisioned-deployment-types-provide) for fine-tuned deployments. As part of public preview, provisioned managed deployments may be created regionally via the data-plane [REST API](../reference.md#data-plane-inference) version `2024-10-01` or newer. See below for examples.

-Provisioned Managed fine-tuning deployments currently do not support vision and structured outputs.
+Provisioned Managed fine-tuned deployments currently support structured outputs only on GPT-4o.
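Since structured outputs are now described as supported on GPT-4o fine-tuned deployments, such a request would carry a `response_format` of type `json_schema`. A minimal sketch of the request body only — the schema name and message are hypothetical examples, and this is not presented as the article's own sample:

```javascript
// Sketch of a chat completions request body asking a fine-tuned GPT-4o
// deployment for structured output. The schema name, message text, and
// schema shape are hypothetical examples.
const requestBody = {
  messages: [
    { role: "user", content: "Extract the city from: 'I live in Seattle.'" },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "city_extraction",
      strict: true, // the model must conform exactly to the schema
      schema: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
        additionalProperties: false,
      },
    },
  },
};

console.log(requestBody.response_format.type); // prints: json_schema
```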
articles/ai-services/openai/how-to/fine-tuning-vision.md (+4 −0)

@@ -16,6 +16,10 @@ ms.author: mbullwin

 Fine-tuning is also possible with images in your JSONL files. Just as you can send one or many image inputs to chat completions, you can include those same message types within your training data. Images can be provided either as publicly accessible URLs or data URIs containing [base64 encoded images](/azure/ai-services/openai/how-to/gpt-with-vision?tabs=rest#call-the-chat-completion-apis).

+## Model support
+
+Vision fine-tuning is supported for `gpt-4o` version `2024-08-06` models only.
+
 ## Image dataset requirements

 - Your training file can contain a maximum of 50,000 examples that contain images (not including text examples).
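The description above says each training example can carry image inputs in the same message shape as chat completions. A hedged sketch of building one JSONL training line — the URL, prompt, and caption are hypothetical placeholders:

```javascript
// Sketch: build one JSONL training line that includes an image input,
// mirroring the chat-completions message shape described above.
// The image URL, prompt, and assistant caption are hypothetical examples.
const example = {
  messages: [
    { role: "system", content: "You describe images." },
    {
      role: "user",
      content: [
        { type: "text", text: "What is shown in this image?" },
        {
          // A publicly accessible URL or a base64 data URI both fit here.
          type: "image_url",
          image_url: { url: "https://example.com/photo.jpg" },
        },
      ],
    },
    { role: "assistant", content: "A hypothetical caption." },
  ],
};

// Each training example is one JSON object per line of the .jsonl file.
const jsonlLine = JSON.stringify(example);
console.log(JSON.parse(jsonlLine).messages.length); // prints: 3
```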