|
117 | 117 | "metadata": {}, |
118 | 118 | "source": [ |
119 | 119 | "The primitive types of each attribute in the arrow tables need to match to make the operation efficient.\n", |
120 | 120 | "Zero-copy conversion is not guaranteed if the data types provided by the PGM via `power_grid_meta_data` are not used.\n", |
121 | 121 | "Note that asymmetric attributes in power-grid-model have a shape of `(3,)` along with a specific type. These represent the 3 phases of the electrical system.\n", |
122 | 122 | "Hence, special care is required when handling asymmetric attributes. \n", |
123 | 123 | "\n", |
|
143 | 143 | "name": "stdout", |
144 | 144 | "output_type": "stream", |
145 | 145 | "text": [ |
146 | 146 | "-------node schema-------\n", |
147 | 147 | "id: int32\n", |
148 | 148 | "u_rated: double\n", |
149 | 149 | "-------asym load schema-------\n", |
150 | 150 | "id: int32\n", |
151 | 151 | "node: int32\n", |
152 | 152 | "status: int8\n", |
|
173 | 173 | " return pa.schema(schemas)\n", |
174 | 174 | "\n", |
175 | 175 | "\n", |
176 | 176 | "print(\"-------node schema-------\")\n", |
177 | 177 | "print(pgm_schema(DatasetType.input, ComponentType.node))\n", |
178 | 178 | "print(\"-------asym load schema-------\")\n", |
179 | 179 | "print(pgm_schema(DatasetType.input, ComponentType.asym_load))" |
180 | 180 | ] |
181 | 181 | }, |
|
188 | 188 | "The [power-grid-model documentation on Components](https://power-grid-model.readthedocs.io/en/stable/user_manual/components.html) provides documentation on which components are required and which ones are optional.\n", |
189 | 189 | "\n", |
190 | 190 | "Construct the Arrow data as a table with the correct headers and data types. \n", |
191 | | - "The creation and initialization of arrays and combining the data in a RecordBatch is up to the user." |
| 191 | + "The creation of the arrays, how they are initialized, and how they are combined into a RecordBatch are up to the user." |
192 | 192 | ] |
193 | 193 | }, |
194 | 194 | { |
195 | 195 | "cell_type": "code", |
196 | | - "execution_count": 4, |
| 196 | + "execution_count": null, |
197 | 197 | "metadata": {}, |
198 | 198 | "outputs": [ |
199 | 199 | { |
|
213 | 213 | } |
214 | 214 | ], |
215 | 215 | "source": [ |
216 | | - "# create the individual columns with the correct data type\n", |
217 | 216 | "nodes_schema = pgm_schema(DatasetType.input, ComponentType.node)\n", |
218 | 217 | "nodes = pa.record_batch(\n", |
219 | 218 | " [\n", |
|
223 | 222 | " names=(\"id\", \"u_rated\"),\n", |
224 | 223 | ")\n", |
225 | 224 | "\n", |
226 | | - "# or convert directly using the schema\n", |
227 | 225 | "lines = pa.record_batch(\n", |
228 | 226 | " {\n", |
229 | 227 | " \"id\": [4, 5],\n", |
|
369 | 367 | }, |
370 | 368 | { |
371 | 369 | "cell_type": "code", |
372 | | - "execution_count": 8, |
| 370 | + "execution_count": null, |
373 | 371 | "metadata": {}, |
374 | 372 | "outputs": [ |
375 | 373 | { |
|
792 | 790 | " data: SingleColumnarData, dataset_type: DatasetType, component_type: ComponentType\n", |
793 | 791 | ") -> pa.RecordBatch:\n", |
794 | 792 | " \"\"\"Convert NumPy data to Arrow data.\"\"\"\n", |
795 | | - " # pa.record_batch.from_arrays(data, schema=pgm_schema(DatasetType.result, ComponentType.node))\n", |
796 | 793 | " component_pgm_schema = pgm_schema(dataset_type, component_type, data.keys())\n", |
797 | 794 | " pa_columns = {}\n", |
798 | 795 | " for attribute, data in data.items():\n", |
|
820 | 817 | { |
821 | 818 | "data": { |
822 | 819 | "text/plain": [ |
823 | | - "<pyarrow.lib.DoubleArray object at 0x000001A81FF94A00>\n", |
| 820 | + "<pyarrow.lib.DoubleArray object at 0x00000184F527A680>\n", |
824 | 821 | "[\n", |
825 | 822 | " 1,\n", |
826 | 823 | " 0.01,\n", |
|