@@ -18,22 +18,12 @@
    "cell_type": "code",
    "execution_count": 1,
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stderr",
-     "output_type": "stream",
-     "text": [
-      "┌ Info: Precompiling FastAI [5d0beca9-ade8-49ae-ad0b-a3cf890e669f]\n",
-      "└ @ Base loading.jl:1342\n",
-      "WARNING: using Makie.Label in module FastAI conflicts with an existing identifier.\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "import CairoMakie; CairoMakie.activate!(type=\"png\")\n",
     "using DelimitedFiles: readdlm\n",
-    "using FastAI\n",
-    "using FastAI.FilePathsBase, FastAI.StaticArrays, FastAI.DLPipelines\n",
+    "using FastAI, ImageShow\n",
+    "using FastAI.FilePathsBase, FastAI.StaticArrays\n",
     "import FastAI.DataAugmentation"
    ]
   },
|
|
@@ -198,7 +188,7 @@
   {
    "data": {
     "text/plain": [
-     "loadannotfile (generic function with 2 tasks)"
+     "loadannotfile (generic function with 2 methods)"
    ]
   },
   "execution_count": 6,
|
|
@@ -302,7 +292,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Before we can start using this data container for training, we need to split into a training and validation dataset. Since there are 13 different persons with many images each, randomly splitting the container does not make sense. The validation dataset would then contain many images that are very similar to those seen in training, and would hence say little about the generalization ability of a model. We instead use the first 12 subjects as a training dataset and validate on the last."
+   "Before we can start using this data container for training, we need to split it into a training and validation dataset. Since there are 13 different persons with many images each, randomly splitting the container does not make sense. The validation dataset would then contain many images that are very similar to those seen in training, and would hence say little about the generalization ability of a model. We instead use the first 12 subjects as a training dataset and validate on the last."
   ]
  },
  {
|
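The subject-wise split described above takes only a few lines of Julia. The following is a minimal sketch, not the notebook's verbatim code: it assumes `filepaths` holds one path per observation with a per-subject directory in each path, and that MLDataPattern's `datasubset` (which FastAI.jl built on at the time) is in scope.

```julia
# Sketch of a subject-wise split (hypothetical helper names). Each
# observation's path is assumed to contain a per-subject directory,
# e.g. ".../07/frame_00012_rgb.jpg".
subjectid(p) = basename(dirname(string(p)))

subjects = sort(unique(subjectid.(filepaths)))   # 13 subject ids
trainsubjects = Set(subjects[1:end-1])           # first 12 subjects

trainidxs = findall(p -> subjectid(p) in trainsubjects, filepaths)
valididxs = findall(p -> subjectid(p) ∉ trainsubjects, filepaths)

traindata = datasubset(data, trainidxs)          # MLDataPattern.datasubset
validdata = datasubset(data, valididxs)
```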
|
@@ -349,17 +339,17 @@
   {
    "data": {
     "text/plain": [
-     "BlockTask(Image{2} -> Keypoints{2, 1})"
+     "SupervisedTask(Image{2} -> Keypoints{2, 1})"
    ]
   },
   "execution_count": 10,
   "metadata": {},
   "output_type": "execute_result"
  }
  ],
  "source": [
-  "sz = (128, 128)\n",
-  "task = BlockTask(\n",
+  "sz = (224, 224)\n",
+  "task = SupervisedTask(\n",
   "    (Image{2}(), Keypoints{2}(1)),\n",
   "    (\n",
   "        ProjectiveTransforms(sz, buffered=true, augmentations=augs_projection(max_warp=0)),\n",
|
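The renamed `SupervisedTask` pairs the input/target blocks with the encoding pipeline. To see what the output in the next hunk corresponds to, here is a hedged sketch of encoding one raw sample; `getobs` and `encodesample` are FastAI.jl functions of this era, while the variable names and the exact encoding list (truncated in the hunk above) are assumptions.

```julia
# Encode one raw (image, keypoints) sample into a model-ready (x, y) pair.
# ProjectiveTransforms crops and augments to `sz`; the remaining encodings
# (not shown in the hunk) normalize the image to a Float32 array and scale
# the keypoint coordinates to a unit range.
image, keypoints = getobs(traindata, 1)
x, y = encodesample(task, Training(), (image, keypoints))
summary(x), y   # e.g. ("224×224×3 Array{Float32, 3}", Float32[...])
```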
|
@@ -384,7 +374,7 @@
   {
    "data": {
     "text/plain": [
-     "(\"128×128×3 Array{Float32, 3}\", Float32[0.33977616, 0.19801295])"
+     "(\"224×224×3 Array{Float32, 3}\", Float32[-0.11184323, 0.52864575])"
    ]
   },
   "execution_count": 11,
|
@@ -414,23 +404,16 @@
    "data": {
     "text/plain": [
      "1-element Vector{SVector{2, Float32}}:\n",
-      " [85.745674, 76.67283]"
+      " [99.47356, 171.20831]"
    ]
   },
   "execution_count": 12,
   "metadata": {},
   "output_type": "execute_result"
  }
  ],
  "source": [
-  "DLPipelines.decodeŷ(task, Training(), y)"
-  ]
- },
- {
-  "cell_type": "markdown",
-  "metadata": {},
-  "source": [
-   "We should also visualize our data to make sure that after all the encoding it still makes sense and the keypoint is properly aligned with the head on the image. The visualizations are derived from the data blocks we used when defining our `BlockTask`."
+  "FastAI.decodeypred(task, Training(), y)"
  ]
 },
 {
|
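The rename from `DLPipelines.decodeŷ` to `FastAI.decodeypred` follows the library's method-to-task API transition: decoding inverts the target encodings, mapping a normalized prediction back to pixel coordinates of the encoded image. A hedged round-trip sketch, reusing the assumed names from above:

```julia
# Round-trip check: encode a sample, then decode the target back into the
# pixel space of the sz-sized crop.
x, y = encodesample(task, Training(), (image, keypoints))
kps = FastAI.decodeypred(task, Training(), y)
# `kps` is a 1-element Vector{SVector{2, Float32}}, like the
# [99.47356, 171.20831] output shown in the hunk above.
```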
|
@@ -473,7 +456,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "We'll use a modified ResNet as a model backbone. and add a couple layers that regress the keypoint. [`taskmodel`](#) knows how to do this by looking at the data blocks used and calling [`blockmodel`](#)`(KepointTensor{2, Float32}((1,)), KeypointTensor{2, Float32}((1,)), backbone)`."
+   "We'll use a modified ResNet as a model backbone and add a couple of layers that regress the keypoint. [`taskmodel`](#) knows how to do this by looking at the data blocks used and calling [`blockmodel`](#)`(KeypointTensor{2, Float32}((1,)), KeypointTensor{2, Float32}((1,)), backbone)`."
   ]
  },
  {
|
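For context, a sketch of the two calls that paragraph describes. `taskmodel` and `blockmodel` are FastAI.jl functions; the `xresnet18` backbone is our assumption of a plausible choice, not necessarily the one the notebook uses.

```julia
# Build a keypoint-regression model from the task's blocks. `taskmodel`
# inspects the encoded input/output blocks and calls `blockmodel` to attach
# a small regression head to the backbone.
backbone = FastAI.Models.xresnet18()   # assumed backbone choice
model = taskmodel(task, backbone)
```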
|
@@ -552,6 +535,7 @@
  }
  ],
  "source": [
+  "import Flux\n",
   "learner = Learner(\n",
   "    model,\n",
   "    (traindl, validdl),\n",
|
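The added `import Flux` makes sense once the cell is seen whole: the optimizer comes from Flux. The diff truncates the cell after `(traindl, validdl)`, so the remaining arguments below are plausible fill-ins in the style of other FastAI.jl tutorials, not the notebook's verbatim code.

```julia
import Flux

# Hedged sketch of the complete training cell; optimizer, loss function,
# and callback choices are assumptions, not read from the diff.
learner = Learner(
    model,
    (traindl, validdl),
    Flux.ADAM(),        # the reason for `import Flux`
    tasklossfn(task),   # loss derived from the task's target block
    ToGPU(),            # FluxTraining callback re-exported by FastAI
)
fitonecycle!(learner, 5)
```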