Azure AI also displays the quality index as follows:

| Index | Description |
|-------|-------------|
| Quality index | Quality index is calculated by scaling GPTSimilarity down to between zero and one, and then averaging it with accuracy metrics. Higher values of the quality index are better. |

The quality index represents the average score of the applicable primary metric (accuracy, rescaled GPTSimilarity) over 15 standard datasets and is provided on a scale of zero to one.

The quality index comprises two categories of metrics:

- Accuracy (for example, exact match or `pass@k`). Ranges from zero to one.
- Prompt-based metrics (for example, GPTSimilarity, groundedness, coherence, fluency, and relevance). Ranges from one to five.
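
The rescaling and averaging described above can be sketched as follows. The exact rescaling formula isn't given in this section, so this sketch assumes a simple linear mapping of GPTSimilarity from its one-to-five range onto zero-to-one:

```python
def quality_index(accuracy: float, gpt_similarity: float) -> float:
    """Combine an accuracy score (0-1) with a GPTSimilarity score (1-5).

    Assumes a linear rescaling of GPTSimilarity onto [0, 1]; the exact
    rescaling used by Azure AI isn't documented in this section.
    """
    rescaled_similarity = (gpt_similarity - 1) / 4  # map the 1-5 range onto 0-1
    return (accuracy + rescaled_similarity) / 2     # average the two metrics

# Example: accuracy of 0.8 and GPTSimilarity of 4.2
print(quality_index(0.8, 4.2))  # → 0.8
```

With a linear rescaling, a model that scores perfectly on both categories (accuracy 1.0, GPTSimilarity 5.0) gets a quality index of 1.0.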

The stability of the quality index value provides an indicator of the overall quality of the model.

### Performance

Performance metrics are calculated as an aggregate over 14 days, based on 24 trials (two requests per trial) sent daily with a one-hour interval between every trial. The following default parameters are used for each request to the model endpoint:

| Parameter | Value | Applicable for |
|-----------|-------|----------------|
| Region | East US/East US2 | [Serverless APIs](../how-to/model-catalog-overview.md#serverless-api-pay-per-token-billing) and [Azure OpenAI](/azure/ai-services/openai/overview) |
| Tokens per minute (TPM) rate limit | 30k (180 RPM based on Azure OpenAI) <br> N/A (serverless APIs) | For Azure OpenAI models, selection is available for users with rate-limit ranges based on deployment type (standard, global, global standard, and so on). <br> For serverless APIs, this setting is abstracted. |
| Number of requests | Two requests in a trial for every hour (24 trials per day) | Serverless APIs, Azure OpenAI |
| Number of trials/runs | 14 days with 24 trials per day, for 336 runs | Serverless APIs, Azure OpenAI |
| Number of tokens processed (moderate) | 80:20 ratio of input to output tokens, that is, 800 input tokens to 200 output tokens | Serverless APIs, Azure OpenAI |
| Number of concurrent requests | One (requests are sent sequentially, one after the other) | Serverless APIs, Azure OpenAI |
| Data | Synthetic (input prompts prepared from static text) | Serverless APIs, Azure OpenAI |
| Deployment type | Standard | Applicable only for Azure OpenAI |
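
As a sanity check, the totals in the table follow directly from the schedule parameters (24 trials per day, two requests per trial, over 14 days):

```python
days = 14
trials_per_day = 24       # one trial per hour
requests_per_trial = 2

total_trials = days * trials_per_day             # the 336 runs listed in the table
total_requests = total_trials * requests_per_trial

# Moderate token load: 80:20 input-to-output ratio per request
input_tokens, output_tokens = 800, 200

print(total_trials)    # → 336
print(total_requests)  # → 672
```
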
### Cost

Cost calculations are estimates for using an LLM or SLM model endpoint hosted on the Azure AI platform. Azure AI supports displaying the cost of serverless API and Azure OpenAI models. Because these costs are subject to change, we refresh our cost calculations on a regular cadence.

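
To illustrate how such an estimate can be derived from token counts, the sketch below multiplies input and output token volumes by per-1,000-token prices. The prices used here are hypothetical placeholders, not actual Azure rates:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimate endpoint cost from token counts and per-1,000-token prices.

    The prices passed in are hypothetical; real Azure OpenAI and
    serverless API rates vary by model and change over time.
    """
    return ((input_tokens / 1000) * price_in_per_1k
            + (output_tokens / 1000) * price_out_per_1k)

# 800 input and 200 output tokens at hypothetical $0.01/$0.03 per 1K tokens
print(estimate_cost(800, 200, 0.01, 0.03))
```

Because output tokens are typically priced higher than input tokens, the 80:20 input-to-output ratio used for benchmarking affects the estimate noticeably.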
The cost of LLMs and SLMs is assessed across the following metrics: