articles/logic-apps/parse-document-chunk-text.md (5 additions, 18 deletions)
```diff
@@ -6,7 +6,7 @@ ms.suite: integration
 ms.collection: ce-skilling-ai-copilot
 ms.reviewer: estfan, azla
 ms.topic: how-to
-ms.date: 08/14/2024
+ms.date: 08/16/2024
 # Customer intent: As a developer using Azure Logic Apps, I want to parse a document or chunk text that I want to use with Azure AI operations for my Standard workflow in Azure Logic Apps.
 ---
```
```diff
@@ -108,11 +108,10 @@ The **Chunk text** action splits content into smaller pieces for subsequent acti
 1. On the designer, select the **Chunk text** action.
 
-1. After the action information pane opens, on the **Parameters** tab, for the **Chunking Strategy** property, select either **FixedLength** or **TokenSize** as the chunking method.
+1. After the action information pane opens, on the **Parameters** tab, for the **Chunking Strategy** property, select **TokenSize** as the chunking method.
 
    | Strategy | Description |
    |----------|-------------|
-   |**FixedLength**| Split the specified content, based on the number of characters. |
    |**TokenSize**| Split the specified content, based on the number of tokens. |
 
 1. After you select the strategy, select inside the **Text** box to specify the content for chunking.
```
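The token-based splitting that the **TokenSize** strategy describes can be sketched in a few lines. This is an illustrative stand-in, not the action's actual implementation: the real action tokenizes with a model encoding such as cl100k_base, while this sketch uses a plain whitespace tokenizer to stay self-contained, and the helper name `chunk_by_token_size` is invented for the example.

```python
def chunk_by_token_size(text, encode, decode, token_size, overlap=0):
    """Split text into chunks of at most `token_size` tokens each.

    `encode`/`decode` stand in for a real tokenizer; `overlap` carries
    trailing tokens of one chunk into the next (a simplification --
    the connector expresses overlap in characters, not tokens).
    """
    tokens = encode(text)
    step = max(token_size - overlap, 1)
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + token_size]
        chunks.append(decode(window))
        if start + token_size >= len(tokens):
            break
    return chunks

# Stand-in tokenizer: one token per whitespace-delimited word.
encode = str.split
decode = " ".join

chunks = chunk_by_token_size("one two three four five", encode, decode, token_size=2)
# → ['one two', 'three four', 'five']
```

With `overlap=1` and `token_size=3`, the same input yields `['one two three', 'three four five']`, showing how overlap preserves context across chunk boundaries.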
@@ -145,23 +144,11 @@ Now, when you add other actions that expect and use tokenized input, such as the
145
144
146
145
| Name | Value | Data type | Description | Limits |
|**Chunking Strategy**|**FixedLength** or **TokenSize**| String enum |**FixedLength**: Split the content, based on the number of characters <br><br>**TokenSize**: Split the content, based on the number of tokens. <br><br>Default: **FixedLength**| Not applicable |
147
+
|**Chunking Strategy**|**TokenSize**| String enum | Split the content, based on the number of tokens. <br><br>Default: **TokenSize**| Not applicable |
149
148
|**Text**| <*content-to-chunk*> | Any | The content to chunk. | See [Limits and configuration reference guide](logic-apps-limits-and-config.md#character-limits)|
150
-
151
-
For **Chunking Strategy** set to **FixedLength**:
152
-
153
-
| Name | Value | Data type | Description | Limits |
|**MaxPageLength**| <*max-char-per-chunk*> | Integer | The maximum number of characters per content chunk. <br><br>Default: **5000**| Minimum: **1**|
156
-
|**PageOverlapLength**| <*number-of-overlapping-characters*> | Integer | The number of characters from the end of the previous chunk to include in the next chunk. This setting helps you avoid losing important information when splitting content into chunks and preserves continuity and context across chunks. <br><br>Default: **0** - No overlapping characters exist. | Minimum: **0**|
157
-
|**Language**| <*language*> | String | The [language](/azure/ai-services/language-service/language-detection/language-support) to use for the resulting chunks. <br><br>Default: **en-us**| Not applicable |
158
-
159
-
For **Chunking Strategy** set to **TokenSize**:
160
-
161
-
| Name | Value | Data type | Description | Limits |
|**TokenSize**| <*max-tokens-per-chunk*> | Integer | The maximum number of tokens per content chunk. <br><br>Default: None | Minimum: **1** <br>Maximum: **8000**|
164
149
|**Encoding model**| <*encoding-method*> | String enum | The encoding model to use: <br><br>- Default: **cl100k_base (gpt4, gpt-3.5-turbo, gpt-35-turbo)** <br><br>- **r50k_base (gpt-3)** <br><br>- **p50k_base (gpt-3)** <br><br>- **p50k_edit (gpt-3)** <br><br>- **cl200k_base (gpt-4o)** <br><br>For more information, see [OpenAI - Models overview](https://platform.openai.com/docs/models/overview). | Not applicable |
150
+
|**TokenSize**| <*max-tokens-per-chunk*> | Integer | The maximum number of tokens per content chunk. <br><br>Default: None | Minimum: **1** <br>Maximum: **8000**|
151
+
|**PageOverlapLength**| <*number-of-overlapping-characters*> | Integer | The number of characters from the end of the previous chunk to include in the next chunk. This setting helps you avoid losing important information when splitting content into chunks and preserves continuity and context across chunks. <br><br>Default: **0** - No overlapping characters exist. | Minimum: **0**|
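The **PageOverlapLength** behavior the table describes — carrying the last N characters of one chunk into the start of the next — can be sketched as follows. The helper name `apply_page_overlap` is hypothetical, invented for the example; it is not part of the Logic Apps connector.

```python
def apply_page_overlap(chunks, overlap_len):
    """Prepend the last `overlap_len` characters of each chunk to the
    following chunk, mirroring the PageOverlapLength setting.
    The default of 0 means no characters are carried over.
    """
    if overlap_len <= 0:
        return list(chunks)
    result = []
    prev_tail = ""
    for chunk in chunks:
        result.append(prev_tail + chunk)
        prev_tail = chunk[-overlap_len:]
    return result

# With an overlap of 2, the second chunk starts with the last
# 2 characters of the first, preserving context at the boundary.
overlapped = apply_page_overlap(["abcdef", "ghijkl"], 2)
# → ['abcdef', 'efghijkl']
```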