articles/ai-services/content-understanding/quickstart/use-ai-foundry.md
To create a project in [Azure AI Foundry](https://ai.azure.com), follow these steps:

1. Go to the **Home** page of [Azure AI Foundry](https://ai.azure.com).
1. Select **+ Create project**.
1. Enter a name for the project. Keep all the other settings as default.
1. Select **Advanced options** to specify properties of the hub.
1. For **Region**, you must choose `westus`, `swedencentral`, or `australiaeast`.
1. Select **Next**.
1. Select **Create project**.
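
If you'd rather script this setup, the following is a minimal sketch of provisioning an Azure AI Services resource in one of the supported regions with the `azure-mgmt-cognitiveservices` Python SDK. The subscription, resource group, and account names are placeholders, and the `AIServices` kind with an `S0` SKU is an assumption to adjust for your environment.

```python
# Hypothetical sketch: provision an Azure AI Services account in a supported
# region. Requires the azure-identity and azure-mgmt-cognitiveservices packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

SUPPORTED_REGIONS = {"westus", "swedencentral", "australiaeast"}

def create_account(subscription_id: str, resource_group: str,
                   account_name: str, region: str = "westus"):
    if region not in SUPPORTED_REGIONS:
        raise ValueError(f"Region must be one of {sorted(SUPPORTED_REGIONS)}")
    client = CognitiveServicesManagementClient(DefaultAzureCredential(), subscription_id)
    poller = client.accounts.begin_create(
        resource_group,
        account_name,
        {
            "location": region,
            "kind": "AIServices",   # multi-service resource kind (assumption)
            "sku": {"name": "S0"},  # standard pay-as-you-go tier (assumption)
        },
    )
    return poller.result()  # blocks until provisioning completes
```
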
You can manage the users and their individual roles here.

Now that everything is configured, we can walk through, step by step, how to create a task and build your first analyzer. The type of task you create depends on the data you plan to bring in.

* **Single-file task:** A single-file task uses Content Understanding Standard mode and allows you to bring in one file to create your analyzer.
* **Multi-file task:** A multi-file task uses Content Understanding Pro mode and allows you to bring in multiple files to create your analyzer. You can also bring in a set of reference data that the service can use to perform multi-step reasoning and draw conclusions about your data.

To learn more about the difference between Content Understanding Standard and Pro mode, check out [Azure AI Content Understanding pro and standard modes](../concepts/standard-pro-modes.md).
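
If you later want to reproduce either task type programmatically, here's a rough sketch of what creating an analyzer could look like against the preview REST surface. The route, `api-version`, `baseAnalyzerId`, and the `mode` property are all assumptions; verify them against the current REST reference before relying on this.

```python
# Hypothetical sketch of analyzer creation over REST; route, api-version,
# and body properties are assumptions to verify against the API reference.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder

def create_analyzer(analyzer_id: str, field_schema: dict, key: str,
                    mode: str = "standard") -> dict:
    url = f"{ENDPOINT}/contentunderstanding/analyzers/{analyzer_id}"
    body = {
        "description": "Invoice analyzer",
        "baseAnalyzerId": "prebuilt-documentAnalyzer",  # assumed base analyzer
        "mode": mode,           # "standard" or "pro" (assumed property name)
        "fieldSchema": field_schema,
    }
    resp = requests.put(
        url,
        params={"api-version": "2024-12-01-preview"},  # assumed preview version
        headers={"Ocp-Apim-Subscription-Key": key},
        json=body,
    )
    resp.raise_for_status()
    return resp.json()
```
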
When you create a single-file Content Understanding task, you'll start by building your field schema. The schema is the customizable framework that allows the analyzer to extract insights from your data. In this example, the schema is created to extract key data from an invoice document, but you can bring in any type of data and the steps remain the same. [Compare the output of this invoice analysis use case to the output of a Content Understanding Pro invoice analysis scenario](). For a complete list of supported file types, see [input file limits](../service-limits.md#input-file-limits).
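
To make the schema concept concrete before stepping through the portal, here's an illustrative example of what an invoice field schema amounts to as data. The field names, value types, and `method` values mirror the concepts in this quickstart; the exact wire format is not guaranteed, so treat this as a sketch.

```python
# Illustrative invoice field schema; structure mirrors the portal concepts
# (names, value types, extraction method) rather than a guaranteed format.
invoice_field_schema = {
    "fields": {
        "vendorName": {
            "type": "string",
            "method": "extract",  # read the value directly from the document
            "description": "Name of the vendor issuing the invoice.",
        },
        "items": {
            "type": "array",      # a list of line items
            "method": "extract",
            "items": {
                "type": "object",
                "properties": {
                    "description": {"type": "string", "method": "extract"},
                    "price": {"type": "number", "method": "extract"},
                },
            },
        },
        "total": {
            "type": "number",
            "method": "generate",  # let the model infer or compute the value
            "description": "Grand total, including tax when present.",
        },
    }
}
```
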
1. Upload a sample file of an invoice document or any other data relevant to your scenario.

Now that you've successfully built your first Content Understanding analyzer, you're ready to run it against new data.
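
Outside the portal, running analysis typically follows the submit-then-poll pattern common to Azure long-running operations. The sketch below assumes the preview `:analyze` route and the `Operation-Location` polling header; both are assumptions to confirm against the REST reference.

```python
# Hypothetical sketch: submit a file URL for analysis, then poll the
# long-running operation until it leaves the running state.
import time
import requests

def analyze(endpoint: str, analyzer_id: str, file_url: str, key: str) -> dict:
    headers = {"Ocp-Apim-Subscription-Key": key}
    resp = requests.post(
        f"{endpoint}/contentunderstanding/analyzers/{analyzer_id}:analyze",
        params={"api-version": "2024-12-01-preview"},  # assumed preview version
        headers=headers,
        json={"url": file_url},
    )
    resp.raise_for_status()
    operation_url = resp.headers["Operation-Location"]  # where to poll
    while True:
        result = requests.get(operation_url, headers=headers).json()
        if result.get("status", "").lower() not in ("notstarted", "running"):
            return result  # Succeeded or Failed
        time.sleep(2)
```
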
# [Multi-file task (Pro mode)](#tab/pro)

When you create a multi-file Content Understanding task, you'll start by building your field schema. The schema is the customizable framework that guides the analyzer to extract the insights you want from your data.

In this example, the schema is created to extract key fields from an invoice document, but you can bring in any document-based data and the steps remain the same. For a complete list of supported file types, see [input file limits](../service-limits.md#input-file-limits).

1. Upload one or more sample files of invoice documents, or any other document data relevant to your scenario.
:::image type="content" source="../media/analyzer-template/define-schema-upload.png" alt-text="Screenshot of upload step in user experience.":::
2. Add fields to your schema:

    * Specify clear and simple field names. Some example fields might include **vendorName**, **items**, and **price**.
    * Indicate the value type for each field (strings, dates, numbers, lists, groups). To learn more, *see* [supported field types](../service-limits.md#field-schema-limits).
    * **[Optional]** Provide field descriptions to explain the desired behavior, including any exceptions or rules.
    * Specify the method to generate the value for each field.

3. Select **Save**.
:::image type="content" source="../media/analyzer-template/define-schema.png" alt-text="Screenshot of completed schema.":::
4. Upload one or more pieces of reference data for the service to analyze. Adding reference data allows the model to compare it with your test data and apply multi-step reasoning to infer conclusions about that data.
5. Run analysis on your data. Kicking off analysis generates output for your test files based on the schema you just created, and applies predictions by comparing that output to your reference data. For one way you might read the resulting fields programmatically, see the sketch after these steps.
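
As a final illustration, here's one way you might read the extracted fields out of a completed analysis result. The result shape below (per-file `contents`, each with typed `fields` values) is an assumption based on typical Content Understanding output; check the payload your service version actually returns.

```python
# Hypothetical sketch: walk a completed analysis result and print each
# field's value. The result shape is assumed, not guaranteed.
def print_fields(analysis_result: dict) -> None:
    contents = analysis_result.get("result", {}).get("contents", [])
    for content in contents:
        for name, field in content.get("fields", {}).items():
            # Typed value keys (valueString, valueNumber, ...) are assumed.
            value = next(
                (field[k] for k in ("valueString", "valueNumber", "valueArray")
                 if k in field),
                None,
            )
            print(f"{name}: {value}")
```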