`articles/search/cognitive-search-concept-troubleshooting.md` (6 additions, 10 deletions)
@@ -17,15 +17,11 @@ This article contains tips to help you get started with AI enrichment and skills
## Tip 1: Start simple and start small
-Both the [Import data wizard](ognitive-search-quickstart-blob.md) and [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal support AI enrichment. Without writing any code, you can create all of the objects used in an enrichment pipeline: an index, indexer, data source, and skillset.
+Both the [**Import data wizard**](cognitive-search-quickstart-blob.md) and [**Import and vectorize data wizard**](search-get-started-portal-import-vectors.md) in the Azure portal support AI enrichment. Without writing any code, you can create and examine all of the objects used in an enrichment pipeline: an index, indexer, data source, and skillset.
Another way to start simply is by creating a data source with just a handful of documents or rows in a table that are representative of the documents that will be indexed. A small data set is the best way to increase the speed of finding and fixing issues. Run your sample through the end-to-end pipeline and check that the results meet your needs. Once you're satisfied with the results, you're ready to add more files to your data source.
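If you prefer to set this up outside the wizard, a small data source can simply point at a container or virtual folder that holds only the sample files. A minimal sketch; the names, connection string, and folder below are placeholders for illustration, not values from the article:

```json
{
  "name": "sample-blob-datasource",
  "type": "azureblob",
  "credentials": {
    "connectionString": "<your-storage-connection-string>"
  },
  "container": {
    "name": "sample-docs",
    "query": "pilot-subset"
  }
}
```

Here `container.query` scopes indexing to a single virtual folder, which is one way to keep the pilot data set small while the rest of the data stays where it is.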
-## Tip 2: Make sure your data source credentials are correct
-
-The data source connection isn't validated until indexer execution. If you get connection errors, check the connection string, permissions, and the folder or container name.
-
-## Tip 3: See what works even if there are some failures
+## Tip 2: See what works even if there are some failures
Sometimes a small failure stops an indexer in its tracks. That is fine if you plan to fix issues one by one. However, you might want to ignore a particular type of error, allowing the indexer to continue so that you can see what flows are actually working.
@@ -45,19 +41,19 @@ To ignore errors during development, set `maxFailedItems` and `maxFailedItemsPer
> [!NOTE]
> As a best practice, set `maxFailedItems` and `maxFailedItemsPerBatch` to 0 for production workloads.
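For reference, both thresholds live in the indexer's `parameters` section. A minimal sketch with placeholder object names; `-1` tells the indexer to keep going no matter how many documents fail:

```json
{
  "name": "demo-indexer",
  "dataSourceName": "demo-datasource",
  "skillsetName": "demo-skillset",
  "targetIndexName": "demo-index",
  "parameters": {
    "maxFailedItems": -1,
    "maxFailedItemsPerBatch": -1
  }
}
```

Per the note above, set both values back to 0 before moving to production.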
-## Tip 4: Use Debug session to identify and resolve issues with your skillset
+## Tip 3: Use Debug session to troubleshoot issues
[**Debug session**](./cognitive-search-debug-session.md) is a visual editor that shows a skillset's dependency graph, inputs and outputs, and definitions. It works by loading a single document from your search index, with the current indexer and skillset configuration. You can then run the entire skillset, scoped to a single document. Within a debug session, you can identify and resolve errors, validate changes, and commit changes to a parent skillset. For a walkthrough, see [Tutorial: debug sessions](./cognitive-search-tutorial-debug-sessions.md).
-## Tip 5: Expected content fails to appear
+## Tip 4: Expected content fails to appear
If you're missing content, check for dropped documents in the Azure portal. In the search service page, open **Indexers** and look at the **Docs succeeded** column. Click through to indexer execution history to review specific errors.
If the problem is related to file size, you might see an error like this: "The blob \<file-name> has the size of \<file-size> bytes, which exceeds the maximum size for document extraction for your current service tier." For more information on indexer limits, see [Service limits](search-limits-quotas-capacity.md).
A second reason for content failing to appear might be related to input/output mapping errors. For example, an output target name is "People" but the index field name is lowercase "people". The system could return 201 success messages for the entire pipeline, so you think indexing succeeded, when in fact a field is empty.
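The output field mappings on the indexer are a common place for this mismatch. In the sketch below (the enrichment path and field names are illustrative), `targetFieldName` must match the index field exactly, so lowercase `people` rather than `People`:

```json
{
  "outputFieldMappings": [
    {
      "sourceFieldName": "/document/content/People",
      "targetFieldName": "people"
    }
  ]
}
```

If the mapping instead targeted `People`, the indexer could still report success while the `people` field in the index stays empty, which is exactly the silent failure described above.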
-## Tip 6: Extend processing beyond maximum run time (24-hour window)
+## Tip 5: Extend processing beyond maximum run time
Image analysis is computationally intensive for even simple cases, so when images are especially large or complex, processing times can exceed the maximum time allowed.
@@ -68,7 +64,7 @@ Scheduled indexing resumes at the last known good document. On a recurring sched
> [!NOTE]
> If an indexer is set to a certain schedule but repeatedly fails on the same document each time it runs, the indexer will begin running on a less frequent interval (up to the maximum of at least once every 24 hours) until it successfully makes progress again. If you believe you have fixed whatever issue was causing the indexer to be stuck at a certain point, you can perform an on-demand run of the indexer. If that run successfully makes progress, the indexer will return to its set schedule interval again.
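A recurring schedule is defined on the indexer itself; each scheduled run resumes at the last known good document, which is how long-running enrichment gets past the maximum run time. A minimal sketch with placeholder names (the two-hour interval is only an example; the value is an XSD duration string):

```json
{
  "name": "demo-indexer",
  "dataSourceName": "demo-datasource",
  "skillsetName": "demo-skillset",
  "targetIndexName": "demo-index",
  "schedule": {
    "interval": "PT2H"
  }
}
```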
-## Tip 7: Increase indexing throughput
+## Tip 6: Increase indexing throughput
For [parallel indexing](search-howto-large-index.md), distribute your data into multiple containers or multiple virtual folders inside the same container. Then create multiple data source and indexer pairs. All indexers can use the same skillset and write into the same target search index, so your search app doesn’t need to be aware of this partitioning.
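As a sketch of that partitioning (all names are placeholders, and the two definitions are created as separate indexers; they're shown in one array here only for comparison), each indexer gets its own data source but shares the skillset and target index:

```json
[
  {
    "name": "demo-indexer-1",
    "dataSourceName": "demo-datasource-part1",
    "skillsetName": "demo-skillset",
    "targetIndexName": "demo-index"
  },
  {
    "name": "demo-indexer-2",
    "dataSourceName": "demo-datasource-part2",
    "skillsetName": "demo-skillset",
    "targetIndexName": "demo-index"
  }
]
```

Because both indexers write into `demo-index`, queries see a single merged index and the search app never has to know how the source data was split.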
`articles/search/cognitive-search-skill-textsplit.md` (2 additions, 2 deletions)
@@ -44,7 +44,7 @@ Parameters are case-sensitive.
| Parameter name | Description |
|--------------------|-------------|
-|`textItems`|An array of substrings that were extracted. `textItems` is the default name of the output. `targetName` is optional, but if you have multiple text split skills, make sure to set `targetName` so that you don't overwrite the data from the first skill with the second one. If `targetName` is set, use it in output field mappings or in downstream skills that use the skill output.|
+|`textItems`|Output is an array of substrings that were extracted. `textItems` is the default name of the output. `targetName` is optional, but if you have multiple Text Split skills, make sure to set `targetName` so that you don't overwrite the data from the first skill with the second one. If `targetName` is set, use it in output field mappings or in downstream skills that use the skill output.|
## Sample definition
@@ -137,7 +137,7 @@ This definition adds `pageOverlapLength` of 100 characters and `maximumPagesToTa
Assuming the `maximumPageLength` is 5,000 characters (the default), then `"maximumPagesToTake": 1` processes the first 5,000 characters of each source document.
-This example sets `textItems` to `myPages` through `targetName`. Because `targetName` is set, `myPages` is the value you should use to select the output from the Text Split skill. Use `/document/mypages/*` in downstream skills, indexer output field mappings, knowledge store projection, or index projections.
+This example sets `textItems` to `myPages` through `targetName`. Because `targetName` is set, `myPages` is the value you should use to select the output from the Text Split skill. Use `/document/mypages/*` in downstream skills, indexer [output field mappings](cognitive-search-concept-annotations-syntax.md), [knowledge store projections](knowledge-store-projection-overview.md), and [index projections](index-projections-concept-intro.md).