---
title: 'Use Azure OpenAI Service with large datasets'
titleSuffix: Azure OpenAI
description: Learn how to integrate Azure OpenAI Service with SynapseML and Apache Spark to apply large language models at a distributed scale.
services: cognitive-services
manager: nitinme
ms.service: cognitive-services
ms.subservice: openai
ms.custom: build-2023, build-2023-dataai
ms.topic: how-to
ms.date: 08/29/2023
author: ChrisHMSFT
ms.author: chrhoder
recommendations: false
---
# Use Azure OpenAI with large datasets

Azure OpenAI can be used to solve a large number of natural language tasks through prompting the completion API. To make it easier to scale your prompting workflows from a few examples to large datasets of examples, Azure OpenAI Service is integrated with the distributed machine learning library [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/). This integration makes it easy to use the [Apache Spark](https://spark.apache.org/) distributed computing framework to process millions of prompts with Azure OpenAI Service. This tutorial shows how to apply large language models at a distributed scale by using Azure OpenAI and Azure Synapse Analytics.

## Prerequisites

- An Azure subscription. <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- Access granted to Azure OpenAI in the desired Azure subscription.
- An Azure OpenAI resource. [Create a resource](create-resource.md?pivots=web-portal#create-a-resource).
- An Apache Spark cluster with SynapseML installed. Create a [serverless Apache Spark pool](../../../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool).

> [!NOTE]
> Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete <a href="https://aka.ms/oai/access" target="_blank">this form</a>. If you need assistance, open an issue on this repo to contact Microsoft.

Microsoft recommends that you [create an Azure Synapse workspace](../../../synapse-analytics/get-started-create-workspace.md). However, you can also use Azure Databricks, Azure HDInsight, Spark on Kubernetes, or the Python environment with the `pyspark` package.

## Import example code as a notebook

To use the example code in this article with your Spark cluster, you have two options:

- Create a notebook in your Spark platform and copy the code into this notebook to run the demo.
- Download the notebook and import it into Azure Synapse.

1. [Download this demo as a notebook](https://github.com/microsoft/SynapseML/blob/master/docs/Explore%20Algorithms/OpenAI/OpenAI.ipynb). During the download process, select **Raw**, and then save the file.

1. Import the notebook [into the Synapse Workspace](../../../synapse-analytics/spark/apache-spark-development-using-notebooks.md#create-a-notebook), or if you're using Azure Databricks, import the notebook [into the Azure Databricks Workspace](/azure/databricks/notebooks/notebooks-manage#create-a-notebook).

1. Install SynapseML on your cluster. See the installation instructions for Azure Synapse at the bottom of [the SynapseML website](https://microsoft.github.io/SynapseML/). This task requires pasting another cell at the top of the notebook you imported.

1. Connect your notebook to a cluster and follow along with editing and running the cells later in this article.

## Fill in your service information

When the notebook is ready, you need to edit a few cells in your notebook to point to your service. Set the `resource_name`, `deployment_name`, `location`, and `key` variables to the corresponding values for your Azure OpenAI resource.

> [!IMPORTANT]
> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). For more information, see [Azure AI services security](../../security-features.md).
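
One way to follow this guidance in a notebook is to read the key from an environment variable instead of pasting it into a cell. A minimal sketch of that pattern (the variable name `AZURE_OPENAI_API_KEY` is an assumption, not something the article mandates):

```python
import os

def load_key(var_name: str = "AZURE_OPENAI_API_KEY") -> str:
    # Read the secret from the environment so it never appears in the notebook.
    value = os.environ.get(var_name)
    if value is None:
        raise RuntimeError(f"Set {var_name} before running this notebook")
    return value
```

You can then assign `key = load_key()` and continue with the cells that follow.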

```python
import os

# Replace these placeholder values with the details of your Azure OpenAI resource.
resource_name = "RESOURCE_NAME"      # Name of your Azure OpenAI resource
deployment_name = "DEPLOYMENT_NAME"  # Name of your model deployment
location = "RESOURCE_LOCATION"       # Region of the resource, for example "eastus"
key = "RESOURCE_API_KEY"             # API key for the resource

assert key is not None and resource_name is not None
```

## Create a dataset of prompts

The next step is to create a dataframe consisting of a series of rows, with one prompt per row.

You can also load data directly from Azure Data Lake Storage or other databases. For more information about loading and preparing Spark dataframes, see the [Apache Spark data sources guide](https://spark.apache.org/docs/latest/sql-data-sources.html).

```python
# One prompt per row; these prompts match the example output shown later in this article.
df = spark.createDataFrame(
    [
        ("Hello my name is",),
        ("The best code is code that's",),
        ("SynapseML is ",),
    ]
).toDF("prompt")
```

## Create the OpenAICompletion Apache Spark client

To apply the Azure OpenAI Completion service to the dataframe, create an `OpenAICompletion` object that serves as a distributed client. Parameters of the service can be set either with a single value, or by a column of the dataframe with the appropriate setters on the `OpenAICompletion` object. In this example, you set the `maxTokens` parameter to 200. A token is around four characters, and this limit applies to the sum of the prompt and the result. You also set the `promptCol` parameter with the name of the prompt column in the dataframe.
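
Because the limit covers the prompt and the result together, it helps to sanity-check how much of the 200-token budget a prompt consumes. A rough sketch of that arithmetic, using the four-characters-per-token rule of thumb (an approximation, not the service's real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    # Rule of thumb from this article: a token is around four characters.
    return max(1, len(text) // 4)

max_tokens = 200  # the same budget set through the maxTokens parameter
prompt = "Hello my name is"
completion_budget = max_tokens - estimate_tokens(prompt)
```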

```python
from synapse.ml.cognitive import OpenAICompletion

completion = (
    OpenAICompletion()
    .setSubscriptionKey(key)
    .setDeploymentName(deployment_name)
    .setUrl("https://{}.openai.azure.com/".format(resource_name))
    .setMaxTokens(200)
    .setPromptCol("prompt")
    .setErrorCol("error")
    .setOutputCol("completions")
)
```

## Transform the dataframe with the OpenAICompletion client

After you have the dataframe and the completion client, you can transform your input dataset and add a column called `completions` with all of the information the service adds. This example selects only the text for simplicity.
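
Each value in the `completions` column is a structured response, and selecting just the text means reaching into its first choice. As an illustration only, assuming an OpenAI-style response shape (the literal below is a stand-in, not actual service output):

```python
# Stand-in for one row's `completions` value, shaped like an OpenAI response.
completion_row = {
    "choices": [
        {"text": " understandable. This is a subjective statement.", "index": 0}
    ]
}

def first_choice_text(completion: dict) -> str:
    # The same idea as selecting completions.choices.text and taking item 0.
    return completion["choices"][0]["text"]
```

In Spark, the analogous selection is along the lines of `col("completions.choices.text").getItem(0)`, assuming the service returns an OpenAI-style choices array.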

Your output should look something like the following example. Keep in mind that the completion text can vary, so your output might look different.

|**prompt**|**error**|**text**|
|------------|-----------|---------|
| Hello my name is | undefined | Makaveli I'm eighteen years old and I want to<br>be a rapper when I grow up I love writing and making music I'm from Los<br>Angeles, CA |
| The best code is code that's | undefined | understandable This is a subjective statement,<br>and there is no definitive answer. |
| SynapseML is | undefined | A machine learning algorithm that is able to learn how to predict the future outcome of events. |

## Explore other usage scenarios

Let's review some other usage scenarios for working with Azure OpenAI Service and large datasets.

### Improve throughput with request batching

In the previous example, you make several requests to the service, one for each prompt. To complete multiple prompts in a single request, you can use batch mode.

In the `OpenAICompletion` object, instead of setting the `prompt` column, specify the `batchPrompt` column. To support this method, create a dataframe with a list of prompts per row.

> [!NOTE]
> There's currently a limit of 20 prompts in a single request and a limit of 2048 tokens, or approximately 1500 words.
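
Given that limit, a long flat list of prompts needs to be grouped into lists of at most 20 before it can populate the batch column. A plain-Python sketch of that grouping (the helper is illustrative, not part of SynapseML):

```python
MAX_PROMPTS_PER_REQUEST = 20  # current per-request prompt limit

def make_batch_rows(prompts: list, batch_size: int = MAX_PROMPTS_PER_REQUEST) -> list:
    """Group prompts into lists of at most batch_size; each list becomes one row."""
    return [prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]

rows = make_batch_rows([f"prompt {i}" for i in range(45)])
# 45 prompts produce three rows with 20, 20, and 5 prompts.
```

Each inner list can then become one row of the dataframe, for example with `spark.createDataFrame([(r,) for r in rows]).toDF("batchPrompt")`.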

Next, you create the `OpenAICompletion` object. Rather than setting the `prompt` column, set the `batchPrompt` column if your column is of type `Array[String]`.

```python
batch_completion = (
    OpenAICompletion()
    .setSubscriptionKey(key)
    .setDeploymentName(deployment_name)
    .setUrl("https://{}.openai.azure.com/".format(resource_name))
    .setMaxTokens(200)
    .setBatchPromptCol("batchPrompt")
    .setErrorCol("error")
    .setOutputCol("completions")
)
```

In the call to `transform`, one request is made per row. Because there are multiple prompts in a single row, each request is sent with all prompts in that row. The results contain a row for each row in the request.

> [!NOTE]
> There's currently a limit of 20 prompts in a single request and a limit of 2048 tokens, or approximately 1500 words.

### Use an automatic mini-batcher

If your data is in column format, you can transpose it to row format by using the SynapseML `FixedMiniBatchTransformer` object.

```python
from pyspark.sql.types import StringType
from synapse.ml.stages import FixedMiniBatchTransformer
from synapse.ml.core.spark import FluentAPI

completed_autobatch_df = (df
    .coalesce(1) # Force a single partition so your little 4-row dataframe makes a batch of size 4; you can remove this step for large datasets.
    .mlTransform(FixedMiniBatchTransformer(batchSize=1000))
    .withColumnRenamed("prompt", "batchPrompt")
    .mlTransform(batch_completion))
```

### Prompt engineering for translation

Azure OpenAI can solve many different natural language tasks through [prompt engineering](completions.md). In this example, you prompt for language translation.
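
A translation prompt of this kind typically interleaves example source/target pairs and ends with the untranslated query for the model to complete. A minimal sketch of building such a prompt (the helper is illustrative, not part of SynapseML):

```python
def few_shot_translation_prompt(examples, source_lang, target_lang, query):
    """Interleave source/target example pairs, then end with the untranslated query."""
    lines = []
    for source_text, target_text in examples:
        lines.append(f"{source_lang}: {source_text}")
        lines.append(f"{target_lang}: {target_text}")
    lines.append(f"{source_lang}: {query}")
    lines.append(f"{target_lang}:")  # the completion supplies the translation
    return "\n".join(lines)

prompt = few_shot_translation_prompt(
    [("Quel heure est-il a Montreal?", "What time is it in Montreal?")],
    "French",
    "English",
    "Ou est le poulet?",
)
```

Each prompt built this way would occupy one row of the prompt dataframe, just like the single-prompt examples earlier in this article.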