
Commit 74c46a5

Merge pull request #28 from HeidiSteen/heidist-master
Added comments to the tutorial notebook
2 parents 33241c1 + fe472f0 commit 74c46a5

1 file changed: +46 -4 lines changed

Tutorial-AI-Enrichment/PythonTutorial-AzureSearch-AIEnrichment.ipynb

Lines changed: 46 additions & 4 deletions
@@ -11,6 +11,13 @@
 "from pprint import pprint"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Name the objects created in this notebook."
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -28,7 +35,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Add the name and key of your search service."
+"Set up a search service connection."
 ]
 },
 {
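The connection cell itself isn't part of this diff. A minimal sketch of what it might contain, using the endpoint, headers, and params variables that appear later in the diff (the placeholder values and the api-version shown here are assumptions; use the values from the notebook):

import json
import requests
from pprint import pprint

# Connection details for the Azure Cognitive Search service.
# Placeholder values and the api-version are assumptions.
service_name = "<YOUR-SEARCH-SERVICE-NAME>"
api_key = "<YOUR-SEARCH-SERVICE-ADMIN-API-KEY>"

endpoint = "https://{}.search.windows.net".format(service_name)
headers = {"Content-Type": "application/json", "api-key": api_key}
params = {"api-version": "2019-05-06"}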
@@ -50,7 +57,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Add the full connection string to your storage account. This step assumes \"basic-demo-data-pr\" as the container name. Replace that string as well if your container name is different."
+"Create a data source connection to the external data in Blob storage. Provide a connection string to your storage account and the name of the container storing the sample files."
 ]
 },
 {
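Only the tail of the data source cell appears in the next hunk. For context, a hedged reconstruction of the full request against the Cognitive Search datasources endpoint might read as follows (the description text and placeholder values are assumptions; the surrounding variables come from earlier cells):

# Define a blob data source and create it on the search service.
# The connection string value is a placeholder; replace it with your own.
datasourceConnectionString = "<YOUR-STORAGE-ACCOUNT-CONNECTION-STRING>"
datasource_payload = {
    "name": datasource_name,
    "description": "Sample files for the AI enrichment tutorial.",  # assumed text
    "type": "azureblob",
    "credentials": {
        "connectionString": datasourceConnectionString
    },
    "container": {
        "name": "<YOUR-CONTAINER-NAME>"
    }
}
r = requests.put(endpoint + "/datasources/" + datasource_name,
                 data=json.dumps(datasource_payload),
                 headers=headers, params=params)
print(r.status_code)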
@@ -69,13 +76,20 @@
 " \"connectionString\": datasourceConnectionString\n",
 " },\n",
 " \"container\": {\n",
-" \"name\": \"basic-demo-data-pr\"\n",
+" \"name\": \"<YOUR-CONTAINER-NAME>\"\n",
 " }\n",
 "}\n",
 "r = requests.put( endpoint + \"/datasources/\" + datasource_name, data=json.dumps(datasource_payload), headers=headers, params=params )\n",
 "print(r.status_code)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Invoke natural language processing on blob content: recognize entities, detect language, break large text into segments, and detect key phrases in each segment."
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -164,6 +178,13 @@
 "print(r.status_code)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Define a search index to store the output."
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -224,7 +245,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The next step, Create an indexer, is where all the deep processing occurs. This step takes several minutes to complete. "
+"Create and run an indexer. This step is where the deep processing occurs, and it takes several minutes to complete."
 ]
 },
 {
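The indexer cell isn't included in the diff. A sketch of an indexer that ties the data source, skillset, and index together and maps enrichment output into index fields (the field mappings and indexer_name are assumptions):

# Create and run an indexer. Creating it starts the first run automatically.
indexer_payload = {
    "name": indexer_name,
    "dataSourceName": datasource_name,
    "targetIndexName": index_name,
    "skillsetName": skillset_name,
    "fieldMappings": [
        {"sourceFieldName": "metadata_storage_path", "targetFieldName": "id",
         "mappingFunction": {"name": "base64Encode"}},
        {"sourceFieldName": "content", "targetFieldName": "content"}
    ],
    "outputFieldMappings": [
        {"sourceFieldName": "/document/organizations", "targetFieldName": "organizations"},
        {"sourceFieldName": "/document/languageCode", "targetFieldName": "languageCode"},
        {"sourceFieldName": "/document/pages/*/keyPhrases", "targetFieldName": "keyPhrases"}
    ]
}
r = requests.put(endpoint + "/indexers/" + indexer_name,
                 data=json.dumps(indexer_payload),
                 headers=headers, params=params)
print(r.status_code)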
@@ -282,6 +303,13 @@
 "print(r.status_code)\n"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Monitor indexer status to see if it's running."
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -293,6 +321,13 @@
 "pprint(json.dumps(r.json(), indent=1))"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Get the index definition from the search service. This confirms the index was created."
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -304,6 +339,13 @@
 "print(json.dumps(r.json(), indent=1))"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Query the index to return data. This query includes a search string that selects just one field (organizations)."
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
