articles/cognitive-services/openai/concepts/models.md
---
title: Azure OpenAI models
titleSuffix: Azure OpenAI
description: Learn about the different models that are available in Azure OpenAI.
ms.service: cognitive-services
ms.topic: conceptual
ms.date: 06/24/2022
recommendations: false
keywords:
---

# Azure OpenAI models

The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI.

| Model family | Description |
|--|--|
|[GPT-3](#gpt-3-models)| A series of models that can understand and generate natural language. |
|[Codex](#codex-models)| A series of models that can understand and generate code, including translating natural language to code. |
|[Embeddings](#embeddings-models)| A set of models that can understand and use embeddings. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information-dense representation of the semantic meaning of a piece of text. Currently, we offer three families of Embeddings models for different functionalities: text search, text similarity, and code search. |

## Model capabilities

Each model family has a series of models that are further distinguished by capability. These capabilities are typically identified by names, and the alphabetical order of these names generally signifies the relative capability and cost of that model within a given model family. For example, GPT-3 models use names such as Ada, Babbage, Curie, and Davinci to indicate relative capability and cost. Davinci is more capable (at a higher cost) than Curie, which in turn is more capable (at a higher cost) than Babbage, and so on.

> [!NOTE]
> Any task that can be performed by a less capable model like Ada can be performed by a more capable model like Curie or Davinci.

## Naming convention

Azure OpenAI's model names typically correspond to the following standard naming convention: `{family}-{capability}-{input-type}-{identifier}`

| Element | Description |
|--|--|
|`{family}`| The model family of the model. For example, [GPT-3 models](#gpt-3-models) use `text`, while [Codex models](#codex-models) use `code`.|
|`{capability}`| The relative capability of the model. For example, GPT-3 models include `ada`, `babbage`, `curie`, and `davinci`.|
|`{input-type}`| ([Embeddings models](#embeddings-models) only) The input type of the embedding supported by the model. For example, text search embedding models include `doc` and `query`.|
|`{identifier}`| The version identifier of the model. |

For example, our most powerful GPT-3 model is called `text-davinci-002`, while our most powerful Codex model is called `code-davinci-002`.

> Older versions of the GPT-3 models are available, named `ada`, `babbage`, `curie`, and `davinci`. These older models do not follow the standard naming conventions, and they are primarily intended for fine-tuning. For more information, see [Learn how to customize a model for your application](../how-to/fine-tuning.md).
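To make the convention concrete, here's a minimal illustrative sketch; the parsing helper is our own invention for this article, not part of any Azure OpenAI SDK, and it only handles the three-segment names (Embeddings model names add an input-type segment that this sketch doesn't cover):

```python
# Illustrative only: split a three-segment Azure OpenAI model name
# into the parts described by the naming convention above.
def parse_model_name(name: str) -> dict:
    family, capability, identifier = name.split("-")
    return {"family": family, "capability": capability, "identifier": identifier}

print(parse_model_name("text-davinci-002"))
# {'family': 'text', 'capability': 'davinci', 'identifier': '002'}
print(parse_model_name("code-cushman-001"))
# {'family': 'code', 'capability': 'cushman', 'identifier': '001'}
```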

## Finding what models are available

You can easily see the models you have available for both inference and fine-tuning in your resource by using the [Models API](../reference.md#models).
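As a sketch only, a request to the Models API is addressed against your resource endpoint. The resource name and `api-version` value below are placeholders we made up for illustration; check the [Models API](../reference.md#models) reference for the current version string before relying on this URL shape:

```python
# Sketch: assemble the request URL for listing available models.
# RESOURCE_NAME and API_VERSION are assumed placeholder values.
RESOURCE_NAME = "my-resource"          # hypothetical resource name
API_VERSION = "2022-06-01-preview"     # assumed; verify in the API reference

def models_url(resource: str, api_version: str) -> str:
    return (f"https://{resource}.openai.azure.com/"
            f"openai/models?api-version={api_version}")

print(models_url(RESOURCE_NAME, API_VERSION))
```

Authenticate the actual GET request with your resource key, as described in the API reference.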

## GPT-3 models

The GPT-3 models can understand and generate natural language. The service offers four model capabilities, each with different levels of power and speed suitable for different tasks. Davinci is the most capable model, while Ada is the fastest. The following list represents the latest versions of GPT-3 models, ordered by increasing capability.

- `text-ada-001`
- `text-babbage-001`
- `text-curie-001`
- `text-davinci-002`

While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting, because it will produce the best results and validate the value our service can provide. Once you have a prototype working, you can then optimize your model choice with the best latency/performance balance for your application.

### <a id="gpt-3-davinci"></a>Davinci

Davinci is the most capable model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, like summarization for a specific audience and creative content generation, Davinci produces the best results. These increased capabilities require more compute resources, so Davinci costs more and isn't as fast as the other models.

Another area where Davinci excels is in understanding the intent of text. Davinci is excellent at solving many kinds of logic problems and explaining the motives of characters. Davinci has been able to solve some of the most challenging AI problems involving cause and effect.

### Curie

Curie is powerful, yet fast. While Davinci is stronger when it comes to analyzing…

### Babbage

Babbage can perform straightforward tasks like simple classification. It’s also capable when it comes to semantic search, ranking how well documents match up with search queries.

## Codex models

The Codex models are descendants of our base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub.

They’re most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. The following list represents the latest versions of Codex models, ordered by increasing capability.

- `code-cushman-001`
- `code-davinci-002`

### <a id="codex-davinci"></a>Davinci

Similar to GPT-3, Davinci is the most capable Codex model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, like summarization for a specific audience and creative content generation, Davinci produces the best results. These increased capabilities require more compute resources, so Davinci costs more and isn't as fast as the other models.

### Cushman

Cushman is powerful, yet fast. While Davinci is stronger when it comes to analyzing complicated tasks, Cushman is quite capable for many code generation tasks and typically runs faster (and at a lower cost) than Davinci.

## Embeddings models

Currently, we offer three families of Embeddings models for different functionalities: text search, text similarity, and code search. Each family includes models across the following spectrum of capabilities:

- Ada: 1,024 dimensions
- Babbage: 2,048 dimensions
- Curie: 4,096 dimensions
- Davinci: 12,288 dimensions

Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is both faster and cheaper.
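The dimension counts above are the lengths of the vectors each model returns. Downstream, those vectors are typically compared with a similarity measure such as cosine similarity. Here's a minimal sketch in plain Python; the three-dimensional vectors are toy stand-ins for illustration, not real model output:

```python
import math

# Toy vectors standing in for model-produced embeddings; real
# embeddings have 1,024 to 12,288 dimensions depending on the model.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # identical vectors -> 1.0
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # orthogonal vectors -> 0.0
```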

These Embeddings models are specifically created to be good at a particular task.

### Similarity embedding

These models are good at capturing semantic similarity between two or more pieces of text.

### Text search embedding

These models help measure whether long documents are relevant to a short search query. There are two input types supported by this family: `doc`, for embedding the documents to be retrieved, and `query`, for embedding the search query.

### Code search embedding

Similar to text search embedding models, there are two input types supported by this family: `code`, for embedding code snippets to be retrieved, and `text`, for embedding natural language search queries.

When using our Embeddings models, keep in mind their limitations and risks.

## Finding the right model

We recommend starting with the most powerful model in a model family, such as the Davinci model, because it's the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with that model or, if balancing capability and cost is a concern for your application, move to a model with lower capability and cost, such as Curie or Cushman, and optimize around its capability.

articles/cognitive-services/openai/how-to/completions.md

While all prompts result in completions, it can be helpful to think of text completion…

```
Vertical farming provides a novel solution for producing food locally, reducing transportation costs and
```

This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/models.md#codex-models) section in [Models](../concepts/models.md).

```
import React from 'react';
```

## Working with code

The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.

You can use Codex for a variety of tasks including:

**Lower temperatures give more precise results.** Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses.

In cases where you need Codex to provide different potential results, start at zero and then increment upwards by 0.1 until you find suitable variation.
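That search over temperatures can be sketched as a simple loop. The helper below only generates the sequence of temperature values to try; wiring each value into your completion call is left as a placeholder, since the call shape depends on your client:

```python
# Sketch: enumerate temperatures from 0 upward in 0.1 steps, up to a
# chosen ceiling. Each value would be passed as the `temperature`
# parameter of your completion request.
def temperature_sweep(max_temp: float = 1.0, step: float = 0.1) -> list[float]:
    temps = []
    t = 0.0
    while t <= max_temp + 1e-9:   # tolerance guards against float rounding
        temps.append(round(t, 1))
        t += step
    return temps

print(temperature_sweep(0.3))  # [0.0, 0.1, 0.2, 0.3]
```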

The Azure OpenAI service can solve many different natural language tasks through [prompt engineering](completions.md). Here, we show an example of prompting for language translation:

articles/cognitive-services/openai/how-to/work-with-code.md

# Codex models and Azure OpenAI

The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.

You can use Codex for a variety of tasks including:

## How to use the Codex models

Here are a few examples of using Codex that can be tested in the [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.

Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses.

In cases where you need Codex to provide different potential results, start at zero and then increment upwards by 0.1 until you find suitable variation.