articles/cognitive-services/openai/concepts/models.md (7 additions & 7 deletions)
@@ -55,13 +55,13 @@ Another area where Davinci excels is in understanding the intent of text. Davinc
### Curie
-Curie is extremely powerful, yet very fast. While Davinci is stronger when it comes to analyzing complicated text, Curie is quite capable for many nuanced tasks like sentiment classification and summarization. Curie is also good at answering questions and performing Q&A and as a general service chatbot.
+Curie is powerful, yet fast. While Davinci is stronger when it comes to analyzing complicated text, Curie is capable of many nuanced tasks like sentiment classification and summarization. Curie is also good at answering questions, performing Q&A, and serving as a general service chatbot.
**Use for**: Language translation, complex classification, text sentiment, summarization
### Babbage
-Babbage can perform straightforward tasks like simple classification. It’s also quite capable when it comes to Semantic Search ranking how well documents match up with search queries.
+Babbage can perform straightforward tasks like simple classification. It’s also capable when it comes to semantic search, ranking how well documents match up with search queries.
@@ -78,7 +78,7 @@ Ada is usually the fastest model and can perform tasks like parsing text, addres
The Codex models are descendants of our base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub.
-They’re most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+They’re most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
Currently we only offer one Codex model: `code-cushman-001`.
@@ -90,7 +90,7 @@ Ada (1024 dimensions),
Babbage (2048 dimensions),
Curie (4096 dimensions),
Davinci (12,288 dimensions).
-Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is significantly faster and cheaper.
+Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is both faster and cheaper.
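To make the capability/cost trade-off concrete, here is a minimal sketch of what the dimension counts listed above imply for storage per embedding vector. It assumes float32 values (4 bytes each); the short dictionary keys are illustrative labels, not real model names:

```python
# Dimension counts from the list above; keys are illustrative shorthand.
DIMENSIONS = {
    "ada": 1024,
    "babbage": 2048,
    "curie": 4096,
    "davinci": 12288,
}

def bytes_per_vector(model: str, bytes_per_value: int = 4) -> int:
    """Approximate memory needed to store one embedding as float32."""
    return DIMENSIONS[model] * bytes_per_value

for model, dims in DIMENSIONS.items():
    print(f"{model}: {dims} dimensions ~ {bytes_per_vector(model) // 1024} KiB per vector")
```

A Davinci vector is 12 times larger than an Ada vector, which also means 12 times more arithmetic per similarity comparison at search time.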
These embedding models are specifically created to be good at a particular task.
@@ -112,17 +112,17 @@ These models help measure whether long documents are relevant to a short search
### Code search embeddings
-Similarly to search embeddings, there are two types: one for embedding code snippets to be retrieved and one for embedding natural language search queries.
+Similar to text search embeddings, there are two types: one for embedding code snippets to be retrieved and one for embedding natural language search queries.
-When using our embedding models, please keep in mind their limitations and risks.
+When using our embedding models, keep in mind their limitations and risks.
## Finding the right model
-We recommend starting with our Davinci model since it will be the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with Davinci if you’re not concerned about cost and speed or move onto Curie or another model and try to optimize around its capabilities.
+We recommend starting with our Davinci model since it’s the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with Davinci if you’re not concerned about cost and speed, or you can move on to Curie or another model and try to optimize around its capabilities.
-The maximum length of input text for our embedding models are 2048 tokens (approximately equivalent to around 2-3 pages of text). You should verify that your inputs don't exceed this limit before making a request.
+The maximum length of input text for our embedding models is 2048 tokens (equivalent to around 2-3 pages of text). You should verify that your inputs don't exceed this limit before making a request.
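A minimal sketch of such a pre-flight check follows. It uses the rough rule of thumb that English text averages about four characters per token; this heuristic is an assumption for illustration only, and a real tokenizer should be used for an exact count:

```python
def approximate_token_count(text: str) -> int:
    """Rough heuristic: English text averages ~4 characters per token.
    Use a proper tokenizer for an exact count before calling the API."""
    return max(1, len(text) // 4)

def fits_embedding_limit(text: str, max_tokens: int = 2048) -> bool:
    """Check an input against the 2048-token embedding limit."""
    return approximate_token_count(text) <= max_tokens

print(fits_embedding_limit("Azure OpenAI embeddings example."))  # True
print(fits_embedding_limit("word " * 12000))                     # False
```

Because the heuristic can undercount for code or non-English text, leave a safety margin rather than aiming for exactly 2048 tokens.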
### Choose the best model for your task
-For the search models you can obtain embeddings in two ways. The `<search_model>-doc` model is used for longer pieces of text (to be searched over) and the `<search_model>-query` model is used for shorter pieces of text, typically queries or class labels in zero shot classification. You can read more about all of the embeddings models in our [Models](../concepts/models.md) guide.
+For the search models, you can obtain embeddings in two ways. The `<search_model>-doc` model is used for longer pieces of text (to be searched over) and the `<search_model>-query` model is used for shorter pieces of text, typically queries or class labels in zero-shot classification. You can read more about all of the embeddings models in our [Models](../concepts/models.md) guide.
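Once you have a `-doc` embedding for each document and a `-query` embedding for the search text, ranking is a similarity computation. The sketch below uses cosine similarity over tiny made-up vectors standing in for real embedding output (the document texts and numbers are hypothetical):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; higher means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical stand-ins for <search_model>-doc embeddings.
doc_embeddings = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
}
# Hypothetical stand-in for a <search_model>-query embedding.
query_embedding = [0.8, 0.2, 0.1]

# Rank documents by similarity to the query, best match first.
ranked = sorted(
    doc_embeddings.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
print(ranked[0][0])  # "refund policy"
```

Real embeddings have hundreds or thousands of dimensions, but the ranking logic is identical.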
### Replace newlines with a single space
-Unless you are embedding code, we suggest replacing newlines (\n) in your input with a single space, as we have observed inferior results when newlines are present.
+Unless you're embedding code, we suggest replacing newlines (\n) in your input with a single space, as we have observed inferior results when newlines are present.
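A minimal sketch of this preprocessing step is below. It slightly extends the guidance by collapsing runs of consecutive newlines into one space rather than one space per newline, which is an assumption on our part:

```python
import re

def normalize_for_embedding(text: str) -> str:
    """Replace newline runs with a single space, per the guidance above.
    Skip this step when embedding code, where newlines are meaningful."""
    return re.sub(r"\n+", " ", text).strip()

print(normalize_for_embedding("First line.\nSecond line.\n"))
# "First line. Second line."
```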
## Limitations & risks
-Our embedding models may be unreliable or pose social risks in certain cases, and may cause harm in the absence of mitigations. Please review our Responsible AI content for more information on how to approach their use responsibly.
+Our embedding models may be unreliable or pose social risks in certain cases, and may cause harm in the absence of mitigations. Review our Responsible AI content for more information on how to approach their use responsibly.
articles/cognitive-services/openai/includes/studio.md (7 additions & 7 deletions)
@@ -1,7 +1,7 @@
---
title: 'Quickstart: Use the OpenAI Service via the Azure OpenAI Studio'
titleSuffix: Azure OpenAI
-description: Walkthrough on how to get started with Azure OpenAI and make your first completions and search calls with Azure OpenAI Studio.
+description: Walkthrough on how to get started with Azure OpenAI and make your first completions call with Azure OpenAI Studio.
services: cognitive-services
manager: nitinme
ms.service: cognitive-services
@@ -14,8 +14,8 @@ keywords:
## Prerequisites
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
-- Access granted to service in the desired Azure subscription. This service is currently invite only. You can fill out a new use case request here: <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>
-- An Azure OpenAI Service resource with a model deployed. If you don't have a resource/model the process is documented in our [resource deployment guide](../how-to/create-resource.md)
+- Access granted to the service in the desired Azure subscription. Currently, this service is available only by invitation. You can fill out a new use case request here: <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>
+- An Azure OpenAI Service resource with a model deployed. If you don't have a resource/model, the process is documented in our [resource deployment guide](../how-to/create-resource.md)
## Go to the Azure OpenAI Studio
@@ -27,22 +27,22 @@ You'll first land on our main page for the Azure OpenAI Studio and from here you
:::image type="content" source="../media/quickstarts/studio-start.png" alt-text="Screenshot of the landing page of the Azure OpenAI Studio with sections highlighted." lightbox="../media/quickstarts/studio-start.png":::
-- Resources without a deployment will be prompted to create one. This is required to be able to inference with your models
+- Resources without a deployment will prompt you to create one. A deployment is required before you can run inference with your models
- Get started with a few simple examples that demonstrate the capabilities of the service
- Navigate to different parts of the Studio including the **Playground** for experimentation and our fine-tuning workflow
- Find quick links to other helpful resources like documentation and community forums
-From here, select the **create new deployment** button in the banner at the top. If you don't see this, you already have a deployment and can proceed to the 'playground' step.
+From here, select the **create new deployment** button in the banner at the top. If you don't see this button, you already have a deployment and can proceed to the [Playground](#playground) step.
## Deployments
-Before you can generate text or inference, you need to deploy a model. This is done by clicking the **create new deployment** on the deployments page. From here you can select from one of our many available models. For getting started we recommend `text-davinci-002` for users in South Central and `text-davinci-001` for users in West Europe (`text-davinci-002` isn't available in this region).
+Before you can generate text or run inference, you need to deploy a model by selecting the **create new deployment** button on the deployments page. From here, you can select from one of our many available models. To get started, we recommend the `text-davinci-002` model.
Once this is complete, select the 'Playground' button on the left nav to start experimenting.
## Playground
-The best way to start exploring completions is through our Playground. It's simply a text box where you can submit a prompt to generate a completion. From this page you can easily iterate and experiment with the capabilities. The following is an overview of the features available to you on this page:
+The best way to start exploring completions is through our Playground. It's simply a text box where you can submit a prompt to generate a completion. From this page, you can easily iterate and experiment with the capabilities. The following list is an overview of the features available to you on this page:
:::image type="content" source="../media/quickstarts/playground-load.png" alt-text="Screenshot of the playground page of the Azure OpenAI Studio with sections highlighted." lightbox="../media/quickstarts/playground-load.png":::