
Commit 3390a9f

Merge pull request #5647 from PatrickFarley/imagen
add token info
2 parents 050bf6d + b2a14df commit 3390a9f

3 files changed: +18 -2 lines changed


articles/ai-services/openai/how-to/dall-e.md

Lines changed: 4 additions & 0 deletions
@@ -100,6 +100,10 @@ The following is a sample request body. You specify a number of options, defined

---

+> [!TIP]
+> For image generation token costs, see [Image tokens](../overview.md#image-generation-tokens).
+
+
### Output

The response from a successful image generation API call looks like the following example. The `url` field contains a URL where you can download the generated image. The URL stays active for 24 hours.
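Because the URL expires after 24 hours, a caller typically downloads the image right away. The snippet below is a minimal sketch of that step; the response layout (`data[0].url`) and the URL are placeholder assumptions for illustration, not values taken from this commit.

```python
import requests  # third-party HTTP client: pip install requests

# Placeholder response: the article only states that a `url` field holds the
# download link, so this exact layout (data[0].url) is an assumed example.
response_json = {
    "created": 1698116662,
    "data": [{"url": "https://example.com/generated-image.png"}],  # placeholder URL
}

# The URL stays active for 24 hours, so fetch and persist the image promptly.
image_url = response_json["data"][0]["url"]
image_bytes = requests.get(image_url, timeout=30).content

with open("generated_image.png", "wb") as file:
    file.write(image_bytes)
```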

articles/ai-services/openai/how-to/gpt-with-vision.md

Lines changed: 1 addition & 1 deletion
@@ -181,7 +181,7 @@ You set the value using the format shown in this example:
}
```

-For details on how the image parameters impact tokens used and pricing please see - [What is Azure OpenAI? Image Tokens](../overview.md#image-tokens)
+For details on how the image parameters impact tokens used and pricing please see - [What is Azure OpenAI? Image Tokens](../overview.md#image-input-tokens)


## Output
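The linked section prices image inputs by detail level, which is set per image in the request. Below is a minimal sketch of such a message; the field names and `detail` values follow the common chat-completions image input format and are assumptions, not content taken from this commit.

```python
# Hypothetical chat message carrying an image input. The detail level ("low" or
# "high") is one of the image parameters that determines the input token cost.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://example.com/photo.png",  # placeholder image URL
                    "detail": "low",  # assumed parameter; see the linked Image input tokens section
                },
            },
        ],
    }
]
```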

articles/ai-services/openai/overview.md

Lines changed: 13 additions & 1 deletion
@@ -84,7 +84,7 @@ Azure OpenAI processes text by breaking it down into tokens. Tokens can be words

The total number of tokens processed in a given request depends on the length of your input, output, and request parameters. The quantity of tokens being processed will also affect your response latency and throughput for the models.

-#### Image tokens
+#### Image input tokens

Azure OpenAI's image processing capabilities with GPT-4o, GPT-4o-mini, and GPT-4 Turbo with Vision models use image tokenization to determine the total number of tokens consumed by image inputs. The number of tokens consumed is calculated based on two main factors: the level of image detail (low or high) and the image’s dimensions. Here's how token costs are calculated:

@@ -108,6 +108,18 @@ Azure OpenAI's image processing capabilities with GPT-4o, GPT-4o-mini, and GPT-4
- For GPT-4o and GPT-4 Turbo with Vision, the total token cost is 6 tiles x 170 tokens per tile + 85 base tokens = 1105 tokens.
- For GPT-4o mini, the total token cost is 6 tiles x 5667 tokens per tile + 2833 base tokens = 36835 tokens.
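
The arithmetic in the two bullets above can be checked directly. A minimal sketch follows, using only the per-tile and base token figures quoted in the bullets and taking the tile count of 6 as given:

```python
def image_input_tokens(tiles: int, tokens_per_tile: int, base_tokens: int) -> int:
    """Total image input token cost: tiles times the per-tile cost, plus a fixed base cost."""
    return tiles * tokens_per_tile + base_tokens

# GPT-4o and GPT-4 Turbo with Vision: 6 tiles x 170 tokens per tile + 85 base tokens
assert image_input_tokens(6, 170, 85) == 1105
# GPT-4o mini: 6 tiles x 5667 tokens per tile + 2833 base tokens
assert image_input_tokens(6, 5667, 2833) == 36835
```

The remainder of the hunk adds the new image generation tokens section: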

+#### Image generation tokens
+
+GPT-image-1 generates images by first producing specialized image tokens. Both latency and eventual cost are proportional to the number of tokens required to render an image. The number of tokens generated depends on image dimensions and quality:
+
+| Quality | Square (1024×1024) | Portrait (1024×1536) | Landscape (1536×1024) |
+| ------- | ------------------ | -------------------- | --------------------- |
+| Low     | 272 tokens         | 408 tokens           | 400 tokens            |
+| Medium  | 1056 tokens        | 1584 tokens          | 1568 tokens           |
+| High    | 4160 tokens        | 6240 tokens          | 6208 tokens           |
+
+
+
### Resources

Azure OpenAI is a new product offering on Azure. You can get started with Azure OpenAI the same way as any other Azure product where you [create a resource](how-to/create-resource.md), or instance of the service, in your Azure Subscription. You can read more about Azure's [resource management design](/azure/azure-resource-manager/management/overview).
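
The new table reads naturally as a lookup from quality and size to token count. A minimal sketch, with the values copied from the table above; the dictionary name and key format are illustrative only:

```python
# GPT-image-1 image generation token counts, keyed by (quality, size).
# Values are copied from the table added to overview.md in this commit.
GENERATION_TOKENS = {
    ("low", "1024x1024"): 272,
    ("low", "1024x1536"): 408,
    ("low", "1536x1024"): 400,
    ("medium", "1024x1024"): 1056,
    ("medium", "1024x1536"): 1584,
    ("medium", "1536x1024"): 1568,
    ("high", "1024x1024"): 4160,
    ("high", "1024x1536"): 6240,
    ("high", "1536x1024"): 6208,
}

# A medium-quality portrait (1024x1536) image is rendered from 1584 image tokens.
print(GENERATION_TOKENS[("medium", "1024x1536")])
```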
