articles/search/cognitive-search-how-to-debug-skillset.md
A debug session is a cached indexer and skillset execution, scoped to a single document.
1. Select **+ New Debug Session**.
   :::image type="content" source="media/cognitive-search-debug/new-debug-session.png" alt-text="Screenshot of the debug sessions commands in the portal page." border="true":::
1. In **Debug session name**, provide a name that will help you remember which skillset, indexer, and data source the debug session is about.
1. In **Storage connection**, find a general-purpose storage account for caching the debug session. You'll be prompted to select and optionally create a blob container in Blob Storage or Azure Data Lake Storage Gen2. You can reuse the same container for all subsequent debug sessions you create. A helpful container name might be "cognitive-search-debug-sessions".
1. In **Indexer template**, select the indexer that drives the skillset you want to debug. Copies of both the indexer and skillset are used to initialize the session.
1. In **Document to debug**, choose the first document in the index or select a specific document. If you select a specific document, depending on the data source, you'll be asked for a URI or a row ID.
   If your specific document is a blob, you'll be asked for the blob URI. You can find the URI in the blob property page in the portal.
   :::image type="content" source="media/cognitive-search-debug/copy-blob-url.png" alt-text="Screenshot of the URI property in blob storage." border="true":::
1. Optionally, in **Indexer settings**, specify any indexer execution settings that should be used to create the session. These settings should mimic those used by the actual indexer. Any indexer options that you specify in a debug session have no effect on the indexer itself.
1. Your configuration should look similar to this screenshot. Select **Save Session** to get started.
   :::image type="content" source="media/cognitive-search-debug/debug-session-new.png" alt-text="Screenshot of a debug session page." border="true":::
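For context on the **Indexer settings** step, those settings live on the indexer definition itself. The following is a minimal sketch, assuming a blob indexer: the indexer name and values are hypothetical, while the property names follow the Azure AI Search indexer JSON schema.

```python
# Hypothetical fragment of an indexer definition, showing the kinds of
# execution settings a debug session should mimic. Property names follow the
# Azure AI Search indexer JSON schema; the values are illustrative only.
indexer_definition = {
    "name": "my-blob-indexer",          # hypothetical indexer name
    "parameters": {
        "batchSize": 10,                # documents processed per batch
        "maxFailedItems": 0,            # stop on the first failed document
        "configuration": {
            "dataToExtract": "contentAndMetadata",  # text plus blob metadata
            "parsingMode": "default",
            "imageAction": "generateNormalizedImages",  # needed by image skills
        },
    },
}

# Mirroring these values in the debug session reproduces the indexer's
# behavior without touching the indexer itself.
print(indexer_definition["parameters"]["configuration"]["imageAction"])  # -> generateNormalizedImages
```

Matching the session's settings to the real indexer matters because options like `imageAction` change which enrichment nodes exist at all.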
## View content of enrichment nodes
AI enrichment pipelines extract or infer information and structure from source documents, creating an enriched document in the process. An enriched document is first created during document cracking and populated with a root node (`/document`), plus nodes for any content that is lifted directly from the data source, such as metadata and the document key. More nodes are created by skills during skill execution, where each skill output adds a new node to the enrichment tree.
Enriched documents are internal, but a debug session gives you access to the content produced during skill execution, so you can view the content or output of each skill.
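Conceptually, the enrichment tree can be pictured as a nested structure rooted at `/document`, addressed by slash-delimited paths. A minimal sketch, in which the node names under the root are hypothetical skill outputs:

```python
# Sketch of an enriched document: a tree rooted at /document. Content lifted
# from the data source (text, metadata, the document key) populates the root;
# each skill output adds a new node. Node names below are illustrative.
enriched = {
    "document": {
        "content": "Full text extracted during document cracking...",
        "metadata_storage_name": "sample.pdf",
        "languageCode": "en",                         # added by a language detection skill
        "keyPhrases": ["debug session", "skillset"],  # added by a key phrase skill
    }
}

def resolve(tree, path):
    """Resolve a slash-delimited enrichment path such as '/document/keyPhrases'."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

print(resolve(enriched, "/document/languageCode"))  # -> en
```

Viewing a node in a debug session is essentially looking up one of these paths in the cached tree.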
## Check field mappings
If skills produce output but the search index is empty, check the field mappings. Field mappings specify how content moves out of the pipeline and into a search index.
1. Start with the default views: **AI enrichment > Skill Graph**, with the graph type set to **Dependency Graph**.
1. Select **Field Mappings** near the top. You should find at least the document key that uniquely identifies and associates each search document in the search index with its source document in the data source.
If you're importing raw content straight from the data source, bypassing enrichment, you should find those fields in **Field Mappings**.
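For reference, both kinds of mapping live on the indexer definition: `fieldMappings` copy raw source fields (including the document key), while `outputFieldMappings` route enriched content out of the skillset. A hedged sketch, in which the field names are hypothetical but the property names come from the indexer JSON schema:

```python
# Illustrative indexer fragment. fieldMappings move raw source fields into the
# search index; outputFieldMappings move skill outputs (enrichment tree nodes)
# into the index. Field names here are hypothetical.
indexer_fragment = {
    "fieldMappings": [
        {
            "sourceFieldName": "metadata_storage_path",
            "targetFieldName": "id",                    # the document key
            "mappingFunction": {"name": "base64Encode"},  # make the key URL-safe
        }
    ],
    "outputFieldMappings": [
        {
            "sourceFieldName": "/document/keyPhrases",  # enrichment tree path
            "targetFieldName": "keyPhrases",
        }
    ],
}

# Quick sanity check of the kind a debug session helps with: is the
# document key mapped at all?
key_mapped = any(m["targetFieldName"] == "id" for m in indexer_fragment["fieldMappings"])
print(key_mapped)  # -> True
```

If a skill output shows content in the enrichment tree but the index field stays empty, a missing or misspelled entry in `outputFieldMappings` is a common cause.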