|
210 | 210 | "id": "6aa80d78-407c-499f-b4b3-76a11f3bc6db",
|
211 | 211 | "metadata": {},
|
212 | 212 | "source": [
|
213 |
| - "The ESRI Model Definition (.emd) file will include both required and optional keys to facilitate model execution. To run **Classify Text Using Deep Learning (GeoAI)** Tool, we need to supply `InferenceFunction`, `ModelType` and `OutputField`. \n", |
| 213 | + "The ESRI Model Definition (.emd) file will include both required and optional keys to facilitate model execution. To run **Classify Text Using Deep Learning (GeoAI)** Tool, we need to supply the `InferenceFunction`, `ModelType` and `OutputField`. \n", |
214 | 214 | "\n",
|
215 | 215 | "1. `InferenceFunction`: Name of the module that contains the definition of NLP function\n",
|
216 | 216 | "2. `ModelType`: Defines the type of task. For **Classify Text Using Deep Learning (GeoAI)** Tool it will be `TextClassifier`\n",
|
217 | 217 | "3. `OutputField`: Name of the field in which contains the output\n",
|
218 | 218 | "\n",
|
219 |
| - "Other keys are prerogative to the Model extension author. In this case, we took liberty to state the prompt `few-shot prompting` related information - `examples` and `prompt`. We will utilize this information to construct a clear and effective prompt for the task." |
| 219 | + "Other keys are prerogative of the Model extension author. In this case, we took liberty to state the prompt `few-shot prompting` related information - `examples` and `prompt`. We will utilize this information to construct a clear and effective prompt for the task." |
220 | 220 | ]
|
221 | 221 | },
|
222 | 222 | {
|
|
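For illustration, here is a minimal sketch of what such an `.emd` file could contain, generated from Python. The key names `InferenceFunction`, `ModelType`, `OutputField`, `prompt`, and `examples` come from the cell above; the file name, output field name, prompt wording, and example entries are placeholder assumptions, not values from this commit.

```python
import json

# Minimal .emd sketch: the required keys named in the cell above plus the
# optional prompt/examples keys the author mentions. Concrete values below
# are illustrative placeholders only.
emd = {
    "InferenceFunction": "MyTextClassifier.py",   # module defining the NLP function (placeholder name)
    "ModelType": "TextClassifier",                # required for the Classify Text Using Deep Learning (GeoAI) tool
    "OutputField": "ClassLabel",                  # field that will receive the predicted label (placeholder)
    "prompt": "Classify the tweet into one of the given categories.",  # assumed wording
    "examples": [                                 # few-shot examples, assumed structure
        {"text": "Shelter open at the community center", "label": "public services"},
        {"text": "Need insulin urgently in Houston", "label": "health"},
    ],
}

with open("MyTextClassifier.emd", "w") as f:
    json.dump(emd, f, indent=4)
```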
278 | 278 | "id": "ccc1cbc9-d4dd-46bd-b6ad-578a0aa6d797",
|
279 | 279 | "metadata": {},
|
280 | 280 | "source": [
|
281 |
| - "Model extension requires the process to be wrapped in a class with the following functions mandatorily implemented:\n", |
| 281 | + "Model extension requires the process be wrapped in a class with the following functions implemented:\n", |
282 | 282 | "\n",
|
283 | 283 | "- `__init__`\n",
|
284 | 284 | "\n",
|
|
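As a rough sketch of the wrapper class described above: only `__init__` is visible before the hunk is cut off, and a later cell describes a function that gathers tool parameters, so the additional method names below (`getParameterInfo`, `predict`) are assumptions based on that context rather than the commit's actual code.

```python
# Skeleton of a custom NLP function class; only __init__ appears in the hunk
# above, so the other method names are assumed from the surrounding text.
class TextClassifier:
    def __init__(self):
        self.name = "Text classifier"
        self.description = "Classifies text using an LLM"  # placeholder description

    def getParameterInfo(self):
        # Described later in the notebook: exposes user-settable parameters
        # such as generation settings (see the sketch in the next section).
        return []

    def predict(self, text):
        # Assumed inference entry point: would build a few-shot prompt from
        # the .emd's prompt/examples keys and return the predicted category.
        raise NotImplementedError
```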
329 | 329 | "id": "f9aa02e1-fc97-428e-908e-c271b33c5670",
|
330 | 330 | "metadata": {},
|
331 | 331 | "source": [
|
332 |
| - "This function gathers parameters from the user for the **Classify Text Using Deep Learning (GeoAI)** Tool. The demo utilizes Llama-3 and employs `transformers` library for loading and inference. Users can customize the model’s input and output by specifying generation parameters, including `max_length` and `temperature`, to optimize the classification process." |
| 332 | + "This function gathers parameters from the user for the **Classify Text Using Deep Learning (GeoAI)** Tool. The demo utilizes Llama-3 and employs the `transformers` library for loading and inference. Users can customize the model’s input and output by specifying generation parameters, including `max_length` and `temperature`, to optimize the classification process." |
333 | 333 | ]
|
334 | 334 | },
|
335 | 335 | {
|
|
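To make the parameter-gathering step concrete, here is a hedged sketch of how such a function might expose `max_length` and `temperature`, and how the `transformers` library could load a Llama-3 checkpoint. The parameter-dictionary layout and the model identifier `meta-llama/Meta-Llama-3-8B-Instruct` are assumptions for illustration; the commit's actual values are not shown in this diff.

```python
from transformers import pipeline

def getParameterInfo():
    # Assumed parameter layout: name/value pairs the tool surfaces to the user.
    return [
        {"name": "max_length", "value": "100",
         "required": False, "displayName": "Maximum generation length"},
        {"name": "temperature", "value": "0.1",
         "required": False, "displayName": "Sampling temperature"},
    ]

# Loading and inference via transformers; the checkpoint name is an assumption.
generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
)
output = generator(
    "Classify the tweet: 'Shelter open at the community center'",
    max_length=100,
    temperature=0.1,
    do_sample=True,
)
print(output[0]["generated_text"])
```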
833 | 833 | "id": "6bfae9e3-9cd5-4608-a4d1-5ed6adbe6d36",
|
834 | 834 | "metadata": {},
|
835 | 835 | "source": [
|
836 |
| - "To complete a custom NLP function setup, create a ESRI Deep Learning Package (.dlpk) file. \n", |
| 836 | + "To complete a custom NLP function setup, create an ESRI Deep Learning Package (.dlpk) file. \n", |
837 | 837 | "\n",
|
838 | 838 | "Organize the files as follows:\n",
|
839 | 839 | "\n",
|
|
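A `.dlpk` is essentially an archive that bundles the `.emd`, the inference-function module, and any model files. The hunk is cut off before the file listing, so the folder layout and file names below are placeholders; the sketch simply shows one way to zip such a folder and rename it to `.dlpk` with the standard library.

```python
import shutil
from pathlib import Path

# Placeholder layout (the actual file names are not visible in this diff):
# text_classifier/
# ├── MyTextClassifier.emd   # model definition from the earlier step
# ├── MyTextClassifier.py    # inference function module
# └── model/                 # optional model weights or config

folder = Path("text_classifier")
archive = shutil.make_archive("text_classifier", "zip", root_dir=folder)
Path(archive).rename("text_classifier.dlpk")  # a .dlpk is a renamed zip archive
```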
982 | 982 | "id": "ce393c87",
|
983 | 983 | "metadata": {},
|
984 | 984 | "source": [
|
985 |
| - "Utilizing the HarveyTweet Dataset, we successfully identified and classified tweets into critical categories such as public services and health-related information. Below are the results from the sample inputs." |
| 985 | + "Utilizing the HarveyTweet Dataset, we successfully identified and classified tweets into critical categories, such as public services and health-related information. Below are the results from the sample inputs." |
986 | 986 | ]
|
987 | 987 | },
|
988 | 988 | {
|
|