
Commit 494b28e

Merge pull request #187415 from HeidiSteen/heidist-fresh2
[azure search] Debug sessions revs
2 parents e6c75d1 + 87d08e3 commit 494b28e

2 files changed: +12 -10 lines changed

articles/search/cognitive-search-how-to-debug-skillset.md

Lines changed: 12 additions & 10 deletions
@@ -36,21 +36,23 @@ A debug session is a cached indexer and skillset execution, scoped to a single d
 
 1. Select **+ New Debug Session**.
 
-1. Provide a name for the session, for example *cog-search-debug-sessions*.
+   :::image type="content" source="media/cognitive-search-debug/new-debug-session.png" alt-text="Screenshot of the debug sessions commands in the portal page." border="true":::
 
-1. Specify a general-purpose storage account that will be used to cache the skill executions. You'll be prompted to select and optionally create a blob container in Blob Storage or Azure Data Lake Storage Gen2. You can reuse the same container for all subsequent debug sessions you create.
+1. In **Debug session name**, provide a name that will help you remember which skillset, indexer, and data source the debug session is about.
 
-1. Select the indexer that drives the skillset you want to debug. Copies of both the indexer and skillset are used to create the session.
+1. In **Storage connection**, find a general-purpose storage account for caching the debug session. You'll be prompted to select and optionally create a blob container in Blob Storage or Azure Data Lake Storage Gen2. You can reuse the same container for all subsequent debug sessions you create. A helpful container name might be "cognitive-search-debug-sessions".
 
-1. Choose a document. The session will default to the first document in the data source, but you can also choose which document to step through by providing its URL.
+1. In **Indexer template**, select the indexer that drives the skillset you want to debug. Copies of both the indexer and skillset are used to initialize the session.
 
-   If your document resides in a blob container in the same storage account used to cache your debug session, you can copy the document URL from the blob property page in the portal.
+1. In **Document to debug**, choose the first document in the index or select a specific document. If you select a specific document, depending on the data source, you'll be asked for a URI or a row ID.
+
+   If your specific document is a blob, you'll be asked for the blob URI. You can find the URL in the blob property page in the portal.
 
    :::image type="content" source="media/cognitive-search-debug/copy-blob-url.png" alt-text="Screenshot of the URI property in blob storage." border="true":::
 
-1. Optionally, specify any indexer execution settings that should be used to create the session. The settings should mimic the settings used by the actual indexer. Any indexer options that you specify in a debug session have no effect on the indexer itself.
+1. Optionally, in **Indexer settings**, specify any indexer execution settings used to create the session. The settings should mimic the settings used by the actual indexer. Any indexer options that you specify in a debug session have no effect on the indexer itself.
 
-1. Select **Save Session** to get started.
+1. Your configuration should look similar to this screenshot. Select **Save Session** to get started.
 
    :::image type="content" source="media/cognitive-search-debug/debug-session-new.png" alt-text="Screenshot of a debug session page." border="true":::
 
@@ -74,7 +76,7 @@ To prove whether a modification resolves an error, follow these steps:
 
 ## View content of enrichment nodes
 
-AI enrichment pipelines extract or infer information and structure from source documents, creating an enriched document in the process. An enriched document is first created during document cracking and populated with a root node (`/document`), plus nodes for any content that is lifted directly from the data source, such as metadata and the document key. Additional nodes are created by skills during skill execution, where each skill output adds a new node to the enrichment tree.
+AI enrichment pipelines extract or infer information and structure from source documents, creating an enriched document in the process. An enriched document is first created during document cracking and populated with a root node (`/document`), plus nodes for any content that is lifted directly from the data source, such as metadata and the document key. More nodes are created by skills during skill execution, where each skill output adds a new node to the enrichment tree.
 
 Enriched documents are internal, but a debug session gives you access to the content produced during skill execution. To view the content or output of each skill, follow these steps:
 
@@ -113,11 +115,11 @@ The following steps show you how to get information about a skill.
 
 ## Check field mappings
 
-If skills produce output but the search index is empty, check the field mappings that specify how content moves out of the pipeline and into a search index.
+If skills produce output but the search index is empty, check the field mappings. Field mappings specify how content moves out of the pipeline and into a search index.
 
 1. Start with the default views: **AI enrichment > Skill Graph**, with the graph type set to **Dependency Graph**.
 
-1. Select **Field Mappings** near the top. You should find at least the document key that uniquely identifies and associates each search document in the search index with it's source document in the data source.
+1. Select **Field Mappings** near the top. You should find at least the document key that uniquely identifies and associates each search document in the search index with its source document in the data source.
 
    If you're importing raw content straight from the data source, bypassing enrichment, you should find those fields in **Field Mappings**.
 
(Second changed file: binary, 31.8 KB; preview not shown.)
