title: Install and run Docker containers for Document Intelligence
titleSuffix: Azure AI services
description: Use the Docker containers for Document Intelligence on-premises to identify and extract key-value pairs, selection marks, tables, and structure from forms and documents.

        source: ${FILE_MOUNT_PATH} # path to your local folder
        target: /onprem_folder
      - type: bind
        source: ${DB_MOUNT_PATH} # path to your local folder
        target: /onprem_db
    ports:
      - "5001:5001"
    user: "1000:1000" # echo $(id -u):$(id -g)
```

::: moniker-end

:::moniker range=">=doc-intel-3.1.0"

#### Create a **docker compose** file

1. Name this file **docker-compose.yml**
2. The following code sample is a self-contained `docker compose` example to run the Document Intelligence Layout, Studio, and Custom template containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration.

   The Custom template and Layout containers can use Azure Storage queues or in-memory queues. The `Storage:ObjectStore:AzureBlob:ConnectionString` and `queue:azure:connectionstring` environment variables need to be set only if you're using Azure Storage queues. When running locally, delete these variables.
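A minimal sketch of such a compose file is shown below. It pairs a Layout service with a Custom template service on a shared bind mount; the image tags, service names, mount paths, and the `AzureCognitiveServiceLayoutHost` setting are illustrative assumptions, so verify them against the published Document Intelligence container documentation before use.

```yml
# Illustrative sketch only -- image tags, service names, mount paths, and the
# AzureCognitiveServiceLayoutHost setting are assumptions; check the container
# documentation for the exact published values.
version: "3.9"
services:
  azure-form-recognizer-layout:
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.1   # assumed tag
    environment:
      - EULA=accept
      - billing=${FORM_RECOGNIZER_ENDPOINT_URI}
      - apiKey=${FORM_RECOGNIZER_KEY}
      # Set these only when using Azure Storage queues; delete them for local runs.
      # - Storage:ObjectStore:AzureBlob:ConnectionString=${STORAGE_CONNECTION_STRING}
      # - queue:azure:connectionstring=${STORAGE_CONNECTION_STRING}
    volumes:
      - type: bind
        source: ${SHARED_MOUNT_PATH}   # local folder shared between the containers
        target: /shared
    ports:
      - "5000:5000"

  azure-form-recognizer-custom-template:
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.1   # assumed tag
    environment:
      - EULA=accept
      - billing=${FORM_RECOGNIZER_ENDPOINT_URI}
      - apiKey=${FORM_RECOGNIZER_KEY}
      - AzureCognitiveServiceLayoutHost=http://azure-form-recognizer-layout:5000   # assumed setting name
    volumes:
      - type: bind
        source: ${SHARED_MOUNT_PATH}
        target: /shared
    ports:
      - "5001:5000"
```

From the folder that contains **docker-compose.yml**, run `docker compose up` (or `docker-compose up`) to pull the images and start the services.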

articles/ai-services/document-intelligence/train/custom-model.md (+1 −1)

@@ -95,7 +95,7 @@ If the language of your documents and extraction scenarios supports custom neural

* For custom model training, the maximum number of pages for training data is 500 for the custom template model and 50,000 for the custom neural model.

-* For custom extraction model training, the total size of training data is 50 MB for template model and 1G-MB for the neural model.
+* For custom extraction model training, the total size of training data is 50 MB for the template model and 1GB for the neural model.

* For custom classification model training, the total size of training data is `1GB` with a maximum of 10,000 pages.

articles/ai-services/openai/how-to/gpt-with-vision.md (+6 −7)

@@ -13,17 +13,13 @@ manager: nitinme

# Use vision-enabled chat models

-Vision-enabled chat models are large multimodal models (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. They incorporate both natural language processing and visual understanding. The current vision-enabled models are GPT-4 Turbo with Vision, GPT-4o, and GPT-4o-mini.
+Vision-enabled chat models are large multimodal models (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. They incorporate both natural language processing and visual understanding. The current vision-enabled models are [o1](./reasoning.md), GPT-4o, GPT-4o-mini, and GPT-4 Turbo with Vision.
The vision-enabled models answer general questions about what's present in the images you upload.
> [!TIP]
> To use vision-enabled models, you call the Chat Completion API on a supported model that you have deployed. If you're not familiar with the Chat Completion API, see the [Vision-enabled chat how-to guide](/azure/ai-services/openai/how-to/chatgpt?tabs=python&pivots=programming-language-chat-completions).

The following command shows the most basic way to use the GPT-4 Turbo with Vision model with code. If this is your first time using these models programmatically, we recommend starting with our [GPT-4 Turbo with Vision quickstart](../gpt-v-quickstart.md).
@@ -39,8 +35,6 @@ Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployme
-`Content-Type`: application/json
-`api-key`: {API_KEY}
**Body**:
The following is a sample request body. The format is the same as the chat completions API for GPT-4, except that the message content can be an array containing text and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image).
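As a rough illustration of that shape, the sketch below shows one possible body; the prompt text, example image URL, and `max_tokens` value are placeholder assumptions rather than values taken from this article.

```json
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Describe this picture:"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://example.com/image.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 100
}
```

To send a base-64-encoded image instead of a link, replace the URL value with a `data:image/jpeg;base64,...` data URI.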
@@ -368,6 +362,11 @@ Every response includes a `"finish_reason"` field. It has the following possible