So what does it take to build an image generation application? You need the following libraries:

- **pillow**, to work with images in Python.
- **requests**, to help you make HTTP requests.

## Create and deploy an Azure OpenAI model

If not done already, follow the instructions on the [Microsoft Learn](https://learn.microsoft.com/azure/ai-foundry/openai/how-to/create-resource?pivots=web-portal) page to create an Azure OpenAI resource and model. Select DALL-E 3 as the model.

## Create the app

1. Create a file _.env_ with the following content:

    ```text
    AZURE_OPENAI_ENDPOINT=<your endpoint>
    AZURE_OPENAI_API_KEY=<your key>
    AZURE_OPENAI_DEPLOYMENT="dall-e-3"
    ```

    Locate this information in the Azure AI Foundry portal for your resource in the "Deployments" section.
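
    The code later in this lesson assumes these values are loaded from _.env_ and used to create a client. A minimal sketch of that setup (the API version shown is an assumption, not the lesson's verbatim code) could look like this:

    ```python
    # Hedged sketch: load the .env values and create an Azure OpenAI client.
    # The api_version is an assumption; pick one supported by your resource.
    import os
    from dotenv import load_dotenv
    from openai import AzureOpenAI

    load_dotenv()

    client = AzureOpenAI(
        api_version="2024-02-01",
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
    )
    ```
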
1. Collect the above libraries in a file called _requirements.txt_ like so:
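
    A minimal version of that file (assuming you also install the `openai` and `python-dotenv` packages used in the code below) might look like this:

    ```text
    openai
    python-dotenv
    requests
    pillow
    ```
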
1. Add the following code in a file called _app.py_:

    ```python
    # Create an image by using the image generation API
    generation_response = client.images.generate(
        prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',
        size='1024x1024', n=1,
        model=os.environ['AZURE_OPENAI_DEPLOYMENT']
    )
    ```

    The above code responds with a JSON object that contains the URL of the generated image. We can use the URL to download the image and save it to a file.
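
    As a rough sketch (not the lesson's verbatim code), downloading and saving that image with the `requests` and `pillow` libraries listed earlier could look like this:

    ```python
    # Hedged sketch: fetch the generated image from its URL and save it locally.
    import requests
    from io import BytesIO
    from PIL import Image

    image_url = generation_response.data[0].url   # URL returned by the API
    image = Image.open(BytesIO(requests.get(image_url).content))
    image.save("generated_image.png")             # hypothetical file name
    ```
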

Let's look at the code that generates the image in more detail:

```python
generation_response = client.images.generate(
    prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',
    size='1024x1024', n=1,
    model=os.environ['AZURE_OPENAI_DEPLOYMENT']
)
```

- **prompt** is the text prompt that is used to generate the image. In this case, we're using the prompt "Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils".
- **size** is the size of the image that is generated. In this case, we're generating an image that is 1024x1024 pixels.

You've seen so far how we were able to generate an image using a few lines in Python. You can also do the following:

- **Perform edits**. By providing an existing image, a mask, and a prompt, you can alter an image. For example, you can add something to a portion of an image. Imagine our bunny image: you could add a hat to the bunny. You would do that by providing the image, a mask (identifying the area to change) and a text prompt saying what should be done.

  > Note: this is not supported in DALL-E 3.

  Here is an example using GPT Image:

  ```python
  response = client.images.edit(
      model="gpt-image-1",
      image=open("sunlit_lounge.png", "rb"),
      mask=open("mask.png", "rb"),
      prompt="A sunlit indoor lounge area with a pool containing a flamingo"
  )
  image_url = response.data[0].url
  ```

  The base image would only contain the lounge with the pool, but the final image would have a flamingo.

- **Create variations**. The idea is that you take an existing image and ask that variations are created. To create a variation, you provide an existing image and code like the sketch shown below:
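
  As a hedged sketch (not the lesson's exact code), the OpenAI Python client's variations endpoint could be called like this; note that variations are only supported for DALL-E 2 models, not DALL-E 3:

  ```python
  # Hedged sketch: create variations of an existing image.
  # The file name, n and size values here are illustrative assumptions.
  response = client.images.create_variation(
      image=open("generated_image.png", "rb"),
      n=2,
      size="1024x1024"
  )
  variation_url = response.data[0].url
  ```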

Let's look at an example of how temperature works, by running this prompt twice:

> Prompt: "Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils"

![Bunny on a horse holding a lollipop, version 1](./images/v1-generated-image.png)

Now let's run that same prompt just to see that we won't get the same image twice:

![Generated image of bunny on horse](./images/v2-generated-image.png)

As you can see, the images are similar, but not the same. Let's try changing the temperature value to 0.1 and see what happens:

```python
generation_response = client.images.generate(
    prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',    # Enter your prompt text here
    size='1024x1024',
    n=2,
    temperature=0.1
)
```

So let's try to make the response more deterministic.
Let's therefore change our code and set the temperature to 0, like so:

```python
generation_response = client.images.generate(
    prompt='Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils',    # Enter your prompt text here
    size='1024x1024',
    n=2,
    temperature=0
)
```