|
191 | 191 | "2. **`ModelType`**: Specifies the type of task; for the **Text Using Deep Learning**, this will be set to `SequenceToSequence`.\n",
|
192 | 192 | "3. **`OutputField`**: The name of the field where the output will be stored.\n",
|
193 | 193 | "\n",
|
194 |
| - "Other keys are prerogative to the Model extension author. In this instance, we've also incorporated `examples` and `prompt` related to **few-shot prompting**. This information will help us create a clear and effective prompt for the task at hand." |
| 194 | + "Other keys are prerogative of the Model extension author. In this instance, we've also incorporated `examples` and `prompt` related to **few-shot prompting**. This information will help us create a clear and effective prompt for the task at hand." |
195 | 195 | ]
|
196 | 196 | },
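For reference, the model definition keys described above can be sketched as a Python dict. The `prompt` and `examples` values, the inference-function file name, and the output field name below are illustrative assumptions, not the notebook's actual data:

```python
# Sketch of the model definition keys discussed above; only ModelType and
# OutputField come from the text -- the remaining keys/values are assumed
# author-defined entries for few-shot prompting.
model_definition = {
    "InferenceFunction": "MyTextTransformer.py",   # assumed entry-point name
    "ModelType": "SequenceToSequence",             # task type for the Text Using Deep Learning tool
    "OutputField": "standardized_address",         # assumed name of the output field
    # Author-defined keys used for few-shot prompting:
    "prompt": "Standardize the following US house address.",
    "examples": [
        {"input": "123 mian st, sprngfield il",
         "output": "123 Main St, Springfield, IL"},
    ],
}
```

Keys like `prompt` and `examples` are read back by the custom inference function to assemble the few-shot prompt at prediction time.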
|
197 | 197 | {
|
|
244 | 244 | "id": "ccc1cbc9-d4dd-46bd-b6ad-578a0aa6d797",
|
245 | 245 | "metadata": {},
|
246 | 246 | "source": [
|
247 |
| - "The model extension requires the process to be wrapped in a class that must implement the following functions:\n", |
| 247 | + "The model extension requires the process be wrapped in a class that implements the following functions:\n", |
248 | 248 | "\n",
|
249 | 249 | "- `__init__`\n",
|
250 | 250 | "\n",
|
|
305 | 305 | "id": "f9aa02e1-fc97-428e-908e-c271b33c5670",
|
306 | 306 | "metadata": {},
|
307 | 307 | "source": [
|
308 |
| - "This function is designed to collect parameters from the user through the **Transform Text Using Deep Learning (GeoAI)** Tool. To connect to the GPT-3.5 deployment on Azure, we require the `API key`, `base URL`, and `deployment name`. Since these parameters are sensitive, we will prompt the user for this information instead of hardcoding it within the file." |
| 308 | + "This function is designed to collect parameters from the user through the **Transform Text Using Deep Learning (GeoAI)** Tool. To connect to the GPT-3.5 deployment on Azure, we require the `API key`, `base URL`, and `deployment name` parameters. Since these parameters are sensitive, we will prompt the user for this information instead of hardcoding it within the file." |
309 | 309 | ]
|
310 | 310 | },
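A minimal sketch of such a parameter-collection function is shown below. The list-of-dicts schema and key names are assumptions modeled on common ArcGIS extensibility conventions, not the verified tool API:

```python
def getParameterInfo():
    """Sketch of a parameter-collection function for the Transform Text Using
    Deep Learning (GeoAI) tool. The dict schema here is an assumption for
    illustration; the point is that sensitive values (API key, base URL,
    deployment name) are requested from the user rather than hardcoded."""
    return [
        {"name": "api_key", "dataType": "string", "required": True,
         "displayName": "API Key",
         "description": "API key of the Azure OpenAI deployment"},
        {"name": "base_url", "dataType": "string", "required": True,
         "displayName": "Base URL",
         "description": "Endpoint URL of the Azure OpenAI resource"},
        {"name": "deployment_name", "dataType": "string", "required": True,
         "displayName": "Deployment Name",
         "description": "Name of the GPT-3.5 deployment"},
    ]
```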
|
311 | 311 | {
|
|
739 | 739 | "id": "295a59e6-0d2c-4276-941e-45b8b1f554c8",
|
740 | 740 | "metadata": {},
|
741 | 741 | "source": [
|
742 |
| - "To complete a custom NLP function setup, create a ESRI Deep Learning Package (.dlpk) file. \n", |
| 742 | + "To complete a custom NLP function setup, create an ESRI Deep Learning Package (.dlpk) file.\n", |
743 | 743 | "\n",
|
744 | 744 | "Organize the files as follows:\n",
|
745 | 745 | "\n",
|
|
892 | 892 | "id": "ce393c87",
|
893 | 893 | "metadata": {},
|
894 | 894 | "source": [
|
895 |
| - "In this notebook, we developed an address standardization model using GPT-3.5 with extensibility fucntion. The dataset consisted of non-standard, incorrect house addresses and from the United States. We leveraged the capabilities of GPT-3.5 to efficiently standardize input house addresses. Below are the results from the sample inputs." |
| 895 | + "In this notebook, we developed an address standardization model using GPT-3.5 with extensibility function. The dataset consisted of non-standard, incorrect house addresses from the United States. We leveraged the capabilities of GPT-3.5 to efficiently standardize input house addresses. Below are the results from the sample inputs." |
896 | 896 | ]
|
897 | 897 | },
|
898 | 898 | {
|
|