articles/logic-apps/parse-document-chunk-text.md
4 additions & 4 deletions
@@ -20,7 +20,7 @@ ms.date: 08/16/2024
Sometimes you have to convert content into tokens, which are words or chunks of characters, or divide a large document into smaller pieces before you can use this content with some actions. For example, the **Azure AI Search** or **Azure OpenAI** actions expect tokenized input and can handle only a limited number of tokens.
- For these scenarios, use the **Data Operations** actions named **Parse a document** and **Chunk text** in your Standard logic app workflow. These actions respectively transform content, such as a PDF document, CSV file, Excel file, and so on, into tokenized string output and then split the string into pieces, based on the number of tokens or characters. You can then reference and use these outputs with subsequent actions in your workflow.
+ For these scenarios, use the **Data Operations** actions named **Parse a document** and **Chunk text** in your Standard logic app workflow. These actions respectively transform content, such as a PDF document, CSV file, Excel file, and so on, into tokenized string output and then split the string into pieces, based on the number of tokens. You can then reference and use these outputs with subsequent actions in your workflow.
> [!TIP]
>
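For orientation, here's a minimal, hypothetical sketch of how a **Parse a document** action might look in the workflow's JSON code view. The action type name, property names, and the `body('HTTP')` reference to a preceding HTTP action are illustrative assumptions, not definitions taken from this article.

```json
{
  "Parse_a_document": {
    "type": "ParseDocument",
    "inputs": {
      "content": "@body('HTTP')"
    },
    "runAfter": {
      "HTTP": [ "Succeeded" ]
    }
  }
}
```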
@@ -108,7 +108,7 @@ The **Chunk text** action splits content into smaller pieces for subsequent acti
1. On the designer, select the **Chunk text** action.
- 1. After the action information pane opens, on the **Parameters** tab, for the **Chunking Strategy** property, select **TokenSize** as the chunking method.
+ 1. After the action information pane opens, on the **Parameters** tab, for the **Chunking Strategy** property, select **TokenSize** as the chunking method, if not already selected.
| Strategy | Description |
|----------|-------------|
@@ -146,7 +146,7 @@ Now, when you add other actions that expect and use tokenized input, such as the
|**Chunking Strategy**|**TokenSize**| String enum | Split the content, based on the number of tokens. <br><br>Default: **TokenSize**| Not applicable |
|**Text**| <*content-to-chunk*> | Any | The content to chunk. | See [Limits and configuration reference guide](logic-apps-limits-and-config.md#character-limits)|
- |**Encoding model**| <*encoding-method*> | String enum | The encoding model to use: <br><br>- Default: **cl100k_base (gpt4, gpt-3.5-turbo, gpt-35-turbo)** <br><br>- **r50k_base (gpt-3)** <br><br>- **p50k_base (gpt-3)** <br><br>- **p50k_edit (gpt-3)** <br><br>- **cl200k_base (gpt-4o)** <br><br>For more information, see [OpenAI - Models overview](https://platform.openai.com/docs/models/overview). | Not applicable |
+ |**EncodingModel**| <*encoding-method*> | String enum | The encoding model to use: <br><br>- Default: **cl100k_base (gpt4, gpt-3.5-turbo, gpt-35-turbo)** <br><br>- **r50k_base (gpt-3)** <br><br>- **p50k_base (gpt-3)** <br><br>- **p50k_edit (gpt-3)** <br><br>- **cl200k_base (gpt-4o)** <br><br>For more information, see [OpenAI - Models overview](https://platform.openai.com/docs/models/overview). | Not applicable |
|**TokenSize**| <*max-tokens-per-chunk*> | Integer | The maximum number of tokens per content chunk. <br><br>Default: None | Minimum: **1** <br>Maximum: **8000**|
|**PageOverlapLength**| <*number-of-overlapping-characters*> | Integer | The number of characters from the end of the previous chunk to include in the next chunk. This setting helps you avoid losing important information when splitting content into chunks and preserves continuity and context across chunks. <br><br>Default: **0** - No overlapping characters exist. | Minimum: **0**|
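To tie these parameters together, here's a minimal, hypothetical sketch of how the **Chunk text** action's inputs might appear in the workflow's JSON code view. The action type name, property casing, and sample values are assumptions for illustration; only the parameter names and limits in the table above come from this article.

```json
{
  "Chunk_text": {
    "type": "ChunkText",
    "inputs": {
      "chunkingStrategy": "TokenSize",
      "text": "@body('Parse_a_document')",
      "encodingModel": "cl100k_base",
      "tokenSize": 5000,
      "pageOverlapLength": 100
    },
    "runAfter": {
      "Parse_a_document": [ "Succeeded" ]
    }
  }
}
```

In this sketch, `tokenSize` stays within the documented 1–8000 range, and a `pageOverlapLength` of 100 carries the last 100 characters of each chunk into the next one to preserve context.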
@@ -179,7 +179,7 @@ The following example includes other actions that create a complete workflow pat
| 2 | Get the content. |**HTTP**| An **HTTP** action that retrieves the uploaded document using the file URL from the trigger output. |
| 3 | Compose document details. |**Compose**| A **Data Operations** action that concatenates various items. <br><br>This example concatenates key-value information about the document. |
| 4 | Create token string. |**Parse a document**| A **Data Operations** action that produces a tokenized string using the output from the **Compose** action. |
- | 5 | Create content chunks. |**Chunk text**| A **Data Operations** action that splits the token string into pieces, based on either the number of characters or tokens per content chunk. |
+ | 5 | Create content chunks. |**Chunk text**| A **Data Operations** action that splits the token string into pieces, based on the number of tokens per content chunk. |
| 6 | Convert tokenized and chunked text to JSON. |**Parse JSON**| A **Data Operations** action that converts the chunked output into a JSON array. |
| 7 | Select JSON array items. |**Select**| A **Data Operations** action that selects multiple items from the JSON array. |
| 8 | Generate the embeddings. |**Get multiple embeddings**| An **Azure OpenAI** action that creates embeddings for each JSON array item. |
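As a rough sketch of step 6 in this pattern, a **Parse JSON** action could consume the **Chunk text** output through a `body('Chunk_text')` expression. The schema below assumes, purely for illustration, that the chunked output arrives as an array of strings under a `value` property; the actual output shape isn't documented in this diff.

```json
{
  "Parse_JSON": {
    "type": "ParseJson",
    "inputs": {
      "content": "@body('Chunk_text')",
      "schema": {
        "type": "object",
        "properties": {
          "value": {
            "type": "array",
            "items": { "type": "string" }
          }
        }
      }
    },
    "runAfter": {
      "Chunk_text": [ "Succeeded" ]
    }
  }
}
```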