diff --git a/trained/ai85-catsdogs-qat8-q.pth.tar b/trained/ai85-catsdogs-qat8-q.pth.tar
index aa36de46..47ea5ab8 100644
Binary files a/trained/ai85-catsdogs-qat8-q.pth.tar and b/trained/ai85-catsdogs-qat8-q.pth.tar differ
diff --git a/trained/ai85-catsdogs-qat8.log b/trained/ai85-catsdogs-qat8.log
index 679d49e2..5665bde4 100644
--- a/trained/ai85-catsdogs-qat8.log
+++ b/trained/ai85-catsdogs-qat8.log
@@ -1,4420 +1,4463 @@
-2023-09-11 13:41:09,898 - Log file for this run: /home/ermanokman/repos/ai8x-training/logs/2023.09.11-134109/2023.09.11-134109.log
-2023-09-11 13:41:15,218 - Optimizer Type:
-2023-09-11 13:41:15,218 - Optimizer Args: {'lr': 0.001, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0.0, 'amsgrad': False}
-2023-09-11 13:41:15,304 - Dataset sizes:
+2025-05-20 16:36:52,909 - Log file for this run: /home/asyaturhal/ai8x-training/logs/catsdogs___2025.05.20-163652/catsdogs___2025.05.20-163652.log
+2025-05-20 16:36:52,909 - The open file limit is 1024. Please raise the limit (see documentation).
+2025-05-20 16:36:52,909 - Configuring device: MAX78000, simulate=False.
+2025-05-20 16:36:53,106 - Optimizer Type:
+2025-05-20 16:36:53,107 - Optimizer Args: {'lr': 0.001, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0.0, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None}
+2025-05-20 16:36:53,285 - Reading compression schedule from: policies/schedule-catsdogs.yaml
+2025-05-20 16:36:53,288 - torch.compile() not available, using "eager" mode
+2025-05-20 16:36:53,289 - Use distributed training to enable torch.compile() with multiple GPUs
+2025-05-20 16:36:53,289 - Dataset sizes:
 training=18000 validation=2000 test=5000
-2023-09-11 13:41:15,304 - Reading compression schedule from: policies/schedule-catsdogs-new.yaml
-2023-09-11 13:41:15,306 - 
-
-2023-09-11 13:41:15,306 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:41:24,743 - Epoch: [0][ 10/ 71] Overall Loss 0.699847 Objective Loss 0.699847 LR 0.001000 Time 0.943677
-2023-09-11 13:41:26,804 - Epoch: [0][ 20/ 71] Overall Loss 0.691583 Objective Loss 0.691583 LR 0.001000 Time 0.574843
-2023-09-11 13:41:29,820 - Epoch: [0][ 30/ 71] Overall Loss 0.686328 Objective Loss 0.686328 LR 0.001000 Time 0.483771
-2023-09-11 13:41:32,306 - Epoch: [0][ 40/ 71] Overall Loss 0.681851 Objective Loss 0.681851 LR 0.001000 Time 0.424962
-2023-09-11 13:41:35,332 - Epoch: [0][ 50/ 71] Overall Loss 0.678799 Objective Loss 0.678799 LR 0.001000 Time 0.400490
-2023-09-11 13:41:37,722 - Epoch: [0][ 60/ 71] Overall Loss 0.675177 Objective Loss 0.675177 LR 0.001000 Time 0.373559
-2023-09-11 13:41:40,330 - Epoch: [0][ 70/ 71] Overall Loss 0.672030 Objective Loss 0.672030 Top1 62.890625 LR 0.001000 Time 0.357459
-2023-09-11 13:41:40,387 - Epoch: [0][ 71/ 71] Overall Loss 0.671505 Objective Loss 0.671505 Top1 62.797619 LR 0.001000 Time 0.353223
-2023-09-11 13:41:40,455 - --- validate (epoch=0)-----------
-2023-09-11 13:41:40,456 - 2000 samples (256 per mini-batch)
-2023-09-11 13:41:42,933 - Epoch: [0][ 8/ 8] Loss 0.651870 Top1 63.200000
-2023-09-11 13:41:43,019 - ==> Top1: 63.200 Loss: 0.652
-
-2023-09-11 13:41:43,027 - ==> Confusion:
-[[838 147]
- [589 426]]
-
-2023-09-11 13:41:43,028 - ==> Best [Top1: 63.200 Sparsity:0.00 Params: 57776 on epoch: 0]
-2023-09-11 13:41:43,029 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:41:43,033 - 
-
-2023-09-11 13:41:43,033 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:41:46,504 - Epoch: [1][ 10/ 71] Overall Loss 0.643949 Objective Loss 0.643949 LR 0.001000 Time 0.347086
-2023-09-11 13:41:48,389 - Epoch: [1][ 20/ 71] Overall Loss 0.644591 Objective Loss 0.644591 LR 0.001000 Time 0.267755
-2023-09-11 13:41:50,986 - Epoch: [1][ 30/ 71] Overall Loss 0.642382 Objective Loss 0.642382 LR 0.001000 Time 0.265060
-2023-09-11 13:41:53,113 - Epoch: [1][ 40/ 71] Overall Loss 0.641840 Objective Loss 0.641840 LR 0.001000 Time 0.251956
-2023-09-11 13:41:55,560 - Epoch: [1][ 50/ 71] Overall Loss 0.635971 Objective Loss 0.635971 LR 0.001000 Time 0.250498
-2023-09-11 13:41:57,808 - Epoch: [1][ 60/ 71] Overall Loss 0.634133 Objective Loss 0.634133 LR 0.001000 Time 0.246217
-2023-09-11 13:41:59,983 - Epoch: [1][ 70/ 71] Overall Loss 0.630999 Objective Loss 0.630999 Top1 64.453125 LR 0.001000 Time 0.242112
-2023-09-11 13:42:00,039 - Epoch: [1][ 71/ 71] Overall Loss 0.630654 Objective Loss 0.630654 Top1 64.583333 LR 0.001000 Time 0.239482
-2023-09-11 13:42:00,130 - --- validate (epoch=1)-----------
-2023-09-11 13:42:00,131 - 2000 samples (256 per mini-batch)
-2023-09-11 13:42:02,634 - Epoch: [1][ 8/ 8] Loss 0.592152 Top1 69.600000
-2023-09-11 13:42:02,716 - ==> Top1: 69.600 Loss: 0.592
-
-2023-09-11 13:42:02,716 - ==> Confusion:
-[[705 280]
- [328 687]]
-
-2023-09-11 13:42:02,731 - ==> Best [Top1: 69.600 Sparsity:0.00 Params: 57776 on epoch: 1]
-2023-09-11 13:42:02,731 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:42:02,734 - 
-
-2023-09-11 13:42:02,734 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:42:05,732 - Epoch: [2][ 10/ 71] Overall Loss 0.601056 Objective Loss 0.601056 LR 0.001000 Time 0.299777
-2023-09-11 13:42:08,256 - Epoch: [2][ 20/ 71] Overall Loss 0.592658 Objective Loss 0.592658 LR 0.001000 Time 0.276101
-2023-09-11 13:42:10,221 - Epoch: [2][ 30/ 71] Overall Loss 0.593226 Objective Loss 0.593226 LR 0.001000 Time 0.249553
-2023-09-11 13:42:12,810 - Epoch: [2][ 40/ 71] Overall Loss 0.588757 Objective Loss 0.588757 LR 0.001000 Time 0.251865
-2023-09-11 13:42:14,801 - Epoch: [2][ 50/ 71] Overall Loss 0.588544 Objective Loss 0.588544 LR 0.001000 Time 0.241307
-2023-09-11 13:42:17,533 - Epoch: [2][ 60/ 71] Overall Loss 0.584145 Objective Loss 0.584145 LR 0.001000 Time 0.246631
-2023-09-11 13:42:19,849 - Epoch: [2][ 70/ 71] Overall Loss 0.579406 Objective Loss 0.579406 Top1 73.828125 LR 0.001000 Time 0.244471
-2023-09-11 13:42:19,903 - Epoch: [2][ 71/ 71] Overall Loss 0.576867 Objective Loss 0.576867 Top1 75.297619 LR 0.001000 Time 0.241793
-2023-09-11 13:42:19,993 - --- validate (epoch=2)-----------
-2023-09-11 13:42:19,993 - 2000 samples (256 per mini-batch)
-2023-09-11 13:42:22,774 - Epoch: [2][ 8/ 8] Loss 0.580504 Top1 69.300000
-2023-09-11 13:42:22,875 - ==> Top1: 69.300 Loss: 0.581
-
-2023-09-11 13:42:22,875 - ==> Confusion:
-[[532 453]
- [161 854]]
-
-2023-09-11 13:42:22,891 - ==> Best [Top1: 69.600 Sparsity:0.00 Params: 57776 on epoch: 1]
-2023-09-11 13:42:22,891 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:42:22,893 - 
-
-2023-09-11 13:42:22,893 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:42:26,686 - Epoch: [3][ 10/ 71] Overall Loss 0.562311 Objective Loss 0.562311 LR 0.001000 Time 0.379196
-2023-09-11 13:42:29,037 - Epoch: [3][ 20/ 71] Overall Loss 0.567811 Objective Loss 0.567811 LR 0.001000 Time 0.307147
-2023-09-11 13:42:31,599 - Epoch: [3][ 30/ 71] Overall Loss 0.560860 Objective Loss 0.560860 LR 0.001000 Time 0.290158
-2023-09-11 13:42:33,591 - Epoch: [3][ 40/ 71] Overall Loss 0.552764 Objective Loss 0.552764 LR 0.001000 Time 0.267408
-2023-09-11 13:42:36,620 - Epoch: [3][ 50/ 71] Overall Loss 0.549047 Objective Loss 0.549047 LR 0.001000 Time 0.274494
-2023-09-11 13:42:38,573 - Epoch: [3][ 60/ 71] Overall Loss 0.544536 Objective Loss 0.544536 LR 0.001000 Time 0.261289
-2023-09-11 13:42:40,813 - Epoch: [3][ 70/ 71] Overall Loss 0.541454 Objective Loss 0.541454 Top1 71.484375 LR 0.001000 Time 0.255952
-2023-09-11 13:42:40,858 - Epoch: [3][ 71/ 71] Overall Loss 0.541515 Objective Loss 0.541515 Top1 71.726190 LR 0.001000 Time 0.252983
-2023-09-11 13:42:40,936 - --- validate (epoch=3)-----------
-2023-09-11 13:42:40,936 - 2000 samples (256 per mini-batch)
-2023-09-11 13:42:43,634 - Epoch: [3][ 8/ 8] Loss 0.506979 Top1 75.700000
-2023-09-11 13:42:43,727 - ==> Top1: 75.700 Loss: 0.507
-
-2023-09-11 13:42:43,728 - ==> Confusion:
-[[777 208]
- [278 737]]
-
-2023-09-11 13:42:43,743 - ==> Best [Top1: 75.700 Sparsity:0.00 Params: 57776 on epoch: 3]
-2023-09-11 13:42:43,743 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:42:43,746 - 
-
-2023-09-11 13:42:43,746 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:42:46,709 - Epoch: [4][ 10/ 71] Overall Loss 0.510656 Objective Loss 0.510656 LR 0.001000 Time 0.296287
-2023-09-11 13:42:48,733 - Epoch: [4][ 20/ 71] Overall Loss 0.509729 Objective Loss 0.509729 LR 0.001000 Time 0.249319
-2023-09-11 13:42:52,149 - Epoch: [4][ 30/ 71] Overall Loss 0.510293 Objective Loss 0.510293 LR 0.001000 Time 0.280066
-2023-09-11 13:42:53,993 - Epoch: [4][ 40/ 71] Overall Loss 0.513419 Objective Loss 0.513419 LR 0.001000 Time 0.256145
-2023-09-11 13:42:56,704 - Epoch: [4][ 50/ 71] Overall Loss 0.511901 Objective Loss 0.511901 LR 0.001000 Time 0.259121
-2023-09-11 13:42:58,738 - Epoch: [4][ 60/ 71] Overall Loss 0.509342 Objective Loss 0.509342 LR 0.001000 Time 0.249823
-2023-09-11 13:43:01,037 - Epoch: [4][ 70/ 71] Overall Loss 0.506468 Objective Loss 0.506468 Top1 78.125000 LR 0.001000 Time 0.246978
-2023-09-11 13:43:01,093 - Epoch: [4][ 71/ 71] Overall Loss 0.506386 Objective Loss 0.506386 Top1 76.488095 LR 0.001000 Time 0.244287
-2023-09-11 13:43:01,189 - --- validate (epoch=4)-----------
-2023-09-11 13:43:01,189 - 2000 samples (256 per mini-batch)
-2023-09-11 13:43:03,524 - Epoch: [4][ 8/ 8] Loss 0.471281 Top1 76.950000
-2023-09-11 13:43:03,616 - ==> Top1: 76.950 Loss: 0.471
-
-2023-09-11 13:43:03,617 - ==> Confusion:
-[[750 235]
- [226 789]]
-
-2023-09-11 13:43:03,633 - ==> Best [Top1: 76.950 Sparsity:0.00 Params: 57776 on epoch: 4]
-2023-09-11 13:43:03,633 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:43:03,635 - 
-
-2023-09-11 13:43:03,635 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:43:07,712 - Epoch: [5][ 10/ 71] Overall Loss 0.481360 Objective Loss 0.481360 LR 0.001000 Time 0.407623
-2023-09-11 13:43:09,572 - Epoch: [5][ 20/ 71] Overall Loss 0.476039 Objective Loss 0.476039 LR 0.001000 Time 0.296815
-2023-09-11 13:43:12,136 - Epoch: [5][ 30/ 71] Overall Loss 0.470085 Objective Loss 0.470085 LR 0.001000 Time 0.283310
-2023-09-11 13:43:14,144 - Epoch: [5][ 40/ 71] Overall Loss 0.476820 Objective Loss 0.476820 LR 0.001000 Time 0.262687
-2023-09-11 13:43:17,208 - Epoch: [5][ 50/ 71] Overall Loss 0.474682 Objective Loss 0.474682 LR 0.001000 Time 0.271425
-2023-09-11 13:43:19,284 - Epoch: [5][ 60/ 71] Overall Loss 0.473813 Objective Loss 0.473813 LR 0.001000 Time 0.260781
-2023-09-11 13:43:21,593 - Epoch: [5][ 70/ 71] Overall Loss 0.473163 Objective Loss 0.473163 Top1 80.468750 LR 0.001000 Time 0.256506
-2023-09-11 13:43:21,651 - Epoch: [5][ 71/ 71] Overall Loss 0.472495 Objective Loss 0.472495 Top1 80.654762 LR 0.001000 Time 0.253711
-2023-09-11 13:43:21,743 - --- validate (epoch=5)-----------
-2023-09-11 13:43:21,743 - 2000 samples (256 per mini-batch)
-2023-09-11 13:43:23,961 - Epoch: [5][ 8/ 8] Loss 0.461314 Top1 77.700000
-2023-09-11 13:43:24,054 - ==> Top1: 77.700 Loss: 0.461
-
-2023-09-11 13:43:24,055 - ==> Confusion:
-[[702 283]
- [163 852]]
-
-2023-09-11 13:43:24,057 - ==> Best [Top1: 77.700 Sparsity:0.00 Params: 57776 on epoch: 5]
-2023-09-11 13:43:24,057 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:43:24,059 - 
-
-2023-09-11 13:43:24,060 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:43:27,962 - Epoch: [6][ 10/ 71] Overall Loss 0.464840 Objective Loss 0.464840 LR 0.001000 Time 0.390198
-2023-09-11 13:43:30,148 - Epoch: [6][ 20/ 71] Overall Loss 0.460022 Objective Loss 0.460022 LR 0.001000 Time 0.304375
-2023-09-11 13:43:32,800 - Epoch: [6][ 30/ 71] Overall Loss 0.453277 Objective Loss 0.453277 LR 0.001000 Time 0.291312
-2023-09-11 13:43:34,814 - Epoch: [6][ 40/ 71] Overall Loss 0.458294 Objective Loss 0.458294 LR 0.001000 Time 0.268811
-2023-09-11 13:43:37,435 - Epoch: [6][ 50/ 71] Overall Loss 0.458526 Objective Loss 0.458526 LR 0.001000 Time 0.267482
-2023-09-11 13:43:39,416 - Epoch: [6][ 60/ 71] Overall Loss 0.458614 Objective Loss 0.458614 LR 0.001000 Time 0.255905
-2023-09-11 13:43:41,744 - Epoch: [6][ 70/ 71] Overall Loss 0.456109 Objective Loss 0.456109 Top1 78.125000 LR 0.001000 Time 0.252605
-2023-09-11 13:43:41,803 - Epoch: [6][ 71/ 71] Overall Loss 0.456300 Objective Loss 0.456300 Top1 77.083333 LR 0.001000 Time 0.249876
-2023-09-11 13:43:41,895 - --- validate (epoch=6)-----------
-2023-09-11 13:43:41,895 - 2000 samples (256 per mini-batch)
-2023-09-11 13:43:44,136 - Epoch: [6][ 8/ 8] Loss 0.429514 Top1 79.850000
-2023-09-11 13:43:44,223 - ==> Top1: 79.850 Loss: 0.430
-
-2023-09-11 13:43:44,223 - ==> Confusion:
-[[783 202]
- [201 814]]
-
-2023-09-11 13:43:44,238 - ==> Best [Top1: 79.850 Sparsity:0.00 Params: 57776 on epoch: 6]
-2023-09-11 13:43:44,238 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:43:44,243 - 
-
-2023-09-11 13:43:44,244 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:43:47,964 - Epoch: [7][ 10/ 71] Overall Loss 0.421811 Objective Loss 0.421811 LR 0.001000 Time 0.371958
-2023-09-11 13:43:49,797 - Epoch: [7][ 20/ 71] Overall Loss 0.424821 Objective Loss 0.424821 LR 0.001000 Time 0.277642
-2023-09-11 13:43:52,431 - Epoch: [7][ 30/ 71] Overall Loss 0.418515 Objective Loss 0.418515 LR 0.001000 Time 0.272881
-2023-09-11 13:43:54,355 - Epoch: [7][ 40/ 71] Overall Loss 0.412479 Objective Loss 0.412479 LR 0.001000 Time 0.252756
-2023-09-11 13:43:57,025 - Epoch: [7][ 50/ 71] Overall Loss 0.427500 Objective Loss 0.427500 LR 0.001000 Time 0.255589
-2023-09-11 13:43:58,952 - Epoch: [7][ 60/ 71] Overall Loss 0.426962 Objective Loss 0.426962 LR 0.001000 Time 0.245105
-2023-09-11 13:44:01,259 - Epoch: [7][ 70/ 71] Overall Loss 0.429144 Objective Loss 0.429144 Top1 78.125000 LR 0.001000 Time 0.243041
-2023-09-11 13:44:01,315 - Epoch: [7][ 71/ 71] Overall Loss 0.429795 Objective Loss 0.429795 Top1 77.380952 LR 0.001000 Time 0.240411
-2023-09-11 13:44:01,402 - --- validate (epoch=7)-----------
-2023-09-11 13:44:01,402 - 2000 samples (256 per mini-batch)
-2023-09-11 13:44:04,211 - Epoch: [7][ 8/ 8] Loss 0.426213 Top1 80.100000
-2023-09-11 13:44:04,308 - ==> Top1: 80.100 Loss: 0.426
-
-2023-09-11 13:44:04,308 - ==> Confusion:
-[[826 159]
- [239 776]]
-
-2023-09-11 13:44:04,311 - ==> Best [Top1: 80.100 Sparsity:0.00 Params: 57776 on epoch: 7]
-2023-09-11 13:44:04,312 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:44:04,314 - 
-
-2023-09-11 13:44:04,314 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:44:07,397 - Epoch: [8][ 10/ 71] Overall Loss 0.428434 Objective Loss 0.428434 LR 0.001000 Time 0.308250
-2023-09-11 13:44:09,545 - Epoch: [8][ 20/ 71] Overall Loss 0.415466 Objective Loss 0.415466 LR 0.001000 Time 0.261479
-2023-09-11 13:44:12,356 - Epoch: [8][ 30/ 71] Overall Loss 0.419250 Objective Loss 0.419250 LR 0.001000 Time 0.268008
-2023-09-11 13:44:14,690 - Epoch: [8][ 40/ 71] Overall Loss 0.417764 Objective Loss 0.417764 LR 0.001000 Time 0.259343
-2023-09-11 13:44:17,835 - Epoch: [8][ 50/ 71] Overall Loss 0.419137 Objective Loss 0.419137 LR 0.001000 Time 0.270382
-2023-09-11 13:44:19,822 - Epoch: [8][ 60/ 71] Overall Loss 0.418678 Objective Loss 0.418678 LR 0.001000 Time 0.258423
-2023-09-11 13:44:22,041 - Epoch: [8][ 70/ 71] Overall Loss 0.414687 Objective Loss 0.414687 Top1 82.812500 LR 0.001000 Time 0.253198
-2023-09-11 13:44:22,077 - Epoch: [8][ 71/ 71] Overall Loss 0.414823 Objective Loss 0.414823 Top1 81.250000 LR 0.001000 Time 0.250149
-2023-09-11 13:44:22,162 - --- validate (epoch=8)-----------
-2023-09-11 13:44:22,162 - 2000 samples (256 per mini-batch)
-2023-09-11 13:44:24,584 - Epoch: [8][ 8/ 8] Loss 0.446891 Top1 77.900000
-2023-09-11 13:44:24,668 - ==> Top1: 77.900 Loss: 0.447
-
-2023-09-11 13:44:24,669 - ==> Confusion:
-[[887 98]
- [344 671]]
-
-2023-09-11 13:44:24,684 - ==> Best [Top1: 80.100 Sparsity:0.00 Params: 57776 on epoch: 7]
-2023-09-11 13:44:24,684 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:44:24,688 - 
-
-2023-09-11 13:44:24,689 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:44:28,540 - Epoch: [9][ 10/ 71] Overall Loss 0.412560 Objective Loss 0.412560 LR 0.001000 Time 0.385065
-2023-09-11 13:44:30,406 - Epoch: [9][ 20/ 71] Overall Loss 0.417186 Objective Loss 0.417186 LR 0.001000 Time 0.285822
-2023-09-11 13:44:32,974 - Epoch: [9][ 30/ 71] Overall Loss 0.411344 Objective Loss 0.411344 LR 0.001000 Time 0.276124
-2023-09-11 13:44:35,000 - Epoch: [9][ 40/ 71] Overall Loss 0.407738 Objective Loss 0.407738 LR 0.001000 Time 0.257741
-2023-09-11 13:44:37,647 - Epoch: [9][ 50/ 71] Overall Loss 0.400353 Objective Loss 0.400353 LR 0.001000 Time 0.259120
-2023-09-11 13:44:39,581 - Epoch: [9][ 60/ 71] Overall Loss 0.398903 Objective Loss 0.398903 LR 0.001000 Time 0.248170
-2023-09-11 13:44:41,838 - Epoch: [9][ 70/ 71] Overall Loss 0.399365 Objective Loss 0.399365 Top1 81.250000 LR 0.001000 Time 0.244949
-2023-09-11 13:44:41,894 - Epoch: [9][ 71/ 71] Overall Loss 0.398988 Objective Loss 0.398988 Top1 82.440476 LR 0.001000 Time 0.242288
-2023-09-11 13:44:41,986 - --- validate (epoch=9)-----------
-2023-09-11 13:44:41,986 - 2000 samples (256 per mini-batch)
-2023-09-11 13:44:44,304 - Epoch: [9][ 8/ 8] Loss 0.375745 Top1 83.050000
-2023-09-11 13:44:44,398 - ==> Top1: 83.050 Loss: 0.376
-
-2023-09-11 13:44:44,398 - ==> Confusion:
-[[815 170]
- [169 846]]
-
-2023-09-11 13:44:44,404 - ==> Best [Top1: 83.050 Sparsity:0.00 Params: 57776 on epoch: 9]
-2023-09-11 13:44:44,404 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:44:44,407 - 
-
-2023-09-11 13:44:44,407 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:44:47,445 - Epoch: [10][ 10/ 71] Overall Loss 0.390573 Objective Loss 0.390573 LR 0.001000 Time 0.303756
-2023-09-11 13:44:49,437 - Epoch: [10][ 20/ 71] Overall Loss 0.389078 Objective Loss 0.389078 LR 0.001000 Time 0.251462
-2023-09-11 13:44:52,037 - Epoch: [10][ 30/ 71] Overall Loss 0.399523 Objective Loss 0.399523 LR 0.001000 Time 0.254308
-2023-09-11 13:44:55,105 - Epoch: [10][ 40/ 71] Overall Loss 0.397142 Objective Loss 0.397142 LR 0.001000 Time 0.267423
-2023-09-11 13:44:57,136 - Epoch: [10][ 50/ 71] Overall Loss 0.392307 Objective Loss 0.392307 LR 0.001000 Time 0.254549
-2023-09-11 13:44:59,742 - Epoch: [10][ 60/ 71] Overall Loss 0.394839 Objective Loss 0.394839 LR 0.001000 Time 0.255538
-2023-09-11 13:45:01,482 - Epoch: [10][ 70/ 71] Overall Loss 0.393819 Objective Loss 0.393819 Top1 85.156250 LR 0.001000 Time 0.243887
-2023-09-11 13:45:01,531 - Epoch: [10][ 71/ 71] Overall Loss 0.393722 Objective Loss 0.393722 Top1 82.738095 LR 0.001000 Time 0.241152
-2023-09-11 13:45:01,623 - --- validate (epoch=10)-----------
-2023-09-11 13:45:01,623 - 2000 samples (256 per mini-batch)
-2023-09-11 13:45:03,900 - Epoch: [10][ 8/ 8] Loss 0.386369 Top1 81.350000
-2023-09-11 13:45:03,994 - ==> Top1: 81.350 Loss: 0.386
-
-2023-09-11 13:45:03,994 - ==> Confusion:
-[[830 155]
- [218 797]]
-
-2023-09-11 13:45:04,010 - ==> Best [Top1: 83.050 Sparsity:0.00 Params: 57776 on epoch: 9]
-2023-09-11 13:45:04,010 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:45:04,012 - 
-
-2023-09-11 13:45:04,012 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:45:06,981 - Epoch: [11][ 10/ 71] Overall Loss 0.359529 Objective Loss 0.359529 LR 0.001000 Time 0.296802
-2023-09-11 13:45:09,502 - Epoch: [11][ 20/ 71] Overall Loss 0.361109 Objective Loss 0.361109 LR 0.001000 Time 0.274459
-2023-09-11 13:45:11,377 - Epoch: [11][ 30/ 71] Overall Loss 0.363438 Objective Loss 0.363438 LR 0.001000 Time 0.245447
-2023-09-11 13:45:13,962 - Epoch: [11][ 40/ 71] Overall Loss 0.363700 Objective Loss 0.363700 LR 0.001000 Time 0.248698
-2023-09-11 13:45:15,940 - Epoch: [11][ 50/ 71] Overall Loss 0.364780 Objective Loss 0.364780 LR 0.001000 Time 0.238522
-2023-09-11 13:45:18,553 - Epoch: [11][ 60/ 71] Overall Loss 0.365858 Objective Loss 0.365858 LR 0.001000 Time 0.242303
-2023-09-11 13:45:20,351 - Epoch: [11][ 70/ 71] Overall Loss 0.369157 Objective Loss 0.369157 Top1 81.250000 LR 0.001000 Time 0.233367
-2023-09-11 13:45:20,409 - Epoch: [11][ 71/ 71] Overall Loss 0.369398 Objective Loss 0.369398 Top1 80.654762 LR 0.001000 Time 0.230901
-2023-09-11 13:45:20,500 - --- validate (epoch=11)-----------
-2023-09-11 13:45:20,500 - 2000 samples (256 per mini-batch)
-2023-09-11 13:45:23,340 - Epoch: [11][ 8/ 8] Loss 0.392797 Top1 82.250000
-2023-09-11 13:45:23,432 - ==> Top1: 82.250 Loss: 0.393
-
-2023-09-11 13:45:23,432 - ==> Confusion:
-[[903 82]
- [273 742]]
-
-2023-09-11 13:45:23,449 - ==> Best [Top1: 83.050 Sparsity:0.00 Params: 57776 on epoch: 9]
-2023-09-11 13:45:23,449 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:45:23,451 - 
-
-2023-09-11 13:45:23,451 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:45:26,429 - Epoch: [12][ 10/ 71] Overall Loss 0.343490 Objective Loss 0.343490 LR 0.001000 Time 0.297701
-2023-09-11 13:45:28,411 - Epoch: [12][ 20/ 71] Overall Loss 0.358670 Objective Loss 0.358670 LR 0.001000 Time 0.247952
-2023-09-11 13:45:31,007 - Epoch: [12][ 30/ 71] Overall Loss 0.360095 Objective Loss 0.360095 LR 0.001000 Time 0.251827
-2023-09-11 13:45:32,893 - Epoch: [12][ 40/ 71] Overall Loss 0.370311 Objective Loss 0.370311 LR 0.001000 Time 0.236011
-2023-09-11 13:45:35,515 - Epoch: [12][ 50/ 71] Overall Loss 0.370859 Objective Loss 0.370859 LR 0.001000 Time 0.241231
-2023-09-11 13:45:37,948 - Epoch: [12][ 60/ 71] Overall Loss 0.372756 Objective Loss 0.372756 LR 0.001000 Time 0.241584
-2023-09-11 13:45:39,879 - Epoch: [12][ 70/ 71] Overall Loss 0.373521 Objective Loss 0.373521 Top1 85.546875 LR 0.001000 Time 0.234648
-2023-09-11 13:45:39,936 - Epoch: [12][ 71/ 71] Overall Loss 0.373402 Objective Loss 0.373402 Top1 84.821429 LR 0.001000 Time 0.232138
-2023-09-11 13:45:40,026 - --- validate (epoch=12)-----------
-2023-09-11 13:45:40,026 - 2000 samples (256 per mini-batch)
-2023-09-11 13:45:42,177 - Epoch: [12][ 8/ 8] Loss 0.360241 Top1 83.750000
-2023-09-11 13:45:42,267 - ==> Top1: 83.750 Loss: 0.360
-
-2023-09-11 13:45:42,268 - ==> Confusion:
-[[792 193]
- [132 883]]
-
-2023-09-11 13:45:42,283 - ==> Best [Top1: 83.750 Sparsity:0.00 Params: 57776 on epoch: 12]
-2023-09-11 13:45:42,283 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:45:42,286 - 
-
-2023-09-11 13:45:42,286 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:45:45,311 - Epoch: [13][ 10/ 71] Overall Loss 0.350654 Objective Loss 0.350654 LR 0.001000 Time 0.302441
-2023-09-11 13:45:47,337 - Epoch: [13][ 20/ 71] Overall Loss 0.344415 Objective Loss 0.344415 LR 0.001000 Time 0.252499
-2023-09-11 13:45:49,906 - Epoch: [13][ 30/ 71] Overall Loss 0.339721 Objective Loss 0.339721 LR 0.001000 Time 0.253950
-2023-09-11 13:45:51,853 - Epoch: [13][ 40/ 71] Overall Loss 0.342982 Objective Loss 0.342982 LR 0.001000 Time 0.239141
-2023-09-11 13:45:54,432 - Epoch: [13][ 50/ 71] Overall Loss 0.349146 Objective Loss 0.349146 LR 0.001000 Time 0.242880
-2023-09-11 13:45:57,278 - Epoch: [13][ 60/ 71] Overall Loss 0.346913 Objective Loss 0.346913 LR 0.001000 Time 0.249826
-2023-09-11 13:45:59,081 - Epoch: [13][ 70/ 71] Overall Loss 0.347331 Objective Loss 0.347331 Top1 77.734375 LR 0.001000 Time 0.239887
-2023-09-11 13:45:59,140 - Epoch: [13][ 71/ 71] Overall Loss 0.348029 Objective Loss 0.348029 Top1 79.166667 LR 0.001000 Time 0.237337
-2023-09-11 13:45:59,233 - --- validate (epoch=13)-----------
-2023-09-11 13:45:59,233 - 2000 samples (256 per mini-batch)
-2023-09-11 13:46:01,423 - Epoch: [13][ 8/ 8] Loss 0.346512 Top1 84.400000
-2023-09-11 13:46:01,516 - ==> Top1: 84.400 Loss: 0.347
-
-2023-09-11 13:46:01,516 - ==> Confusion:
-[[864 121]
- [191 824]]
-
-2023-09-11 13:46:01,530 - ==> Best [Top1: 84.400 Sparsity:0.00 Params: 57776 on epoch: 13]
-2023-09-11 13:46:01,530 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:46:01,533 - 
-
-2023-09-11 13:46:01,533 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:46:04,619 - Epoch: [14][ 10/ 71] Overall Loss 0.342434 Objective Loss 0.342434 LR 0.001000 Time 0.308546
-2023-09-11 13:46:06,492 - Epoch: [14][ 20/ 71] Overall Loss 0.343459 Objective Loss 0.343459 LR 0.001000 Time 0.247890
-2023-09-11 13:46:09,077 - Epoch: [14][ 30/ 71] Overall Loss 0.346110 Objective Loss 0.346110 LR 0.001000 Time 0.251431
-2023-09-11 13:46:11,015 - Epoch: [14][ 40/ 71] Overall Loss 0.341886 Objective Loss 0.341886 LR 0.001000 Time 0.237006
-2023-09-11 13:46:13,636 - Epoch: [14][ 50/ 71] Overall Loss 0.343969 Objective Loss 0.343969 LR 0.001000 Time 0.242014
-2023-09-11 13:46:15,628 - Epoch: [14][ 60/ 71] Overall Loss 0.341847 Objective Loss 0.341847 LR 0.001000 Time 0.234882
-2023-09-11 13:46:17,955 - Epoch: [14][ 70/ 71] Overall Loss 0.343307 Objective Loss 0.343307 Top1 83.593750 LR 0.001000 Time 0.234561
-2023-09-11 13:46:18,009 - Epoch: [14][ 71/ 71] Overall Loss 0.343631 Objective Loss 0.343631 Top1 83.630952 LR 0.001000 Time 0.232014
-2023-09-11 13:46:18,101 - --- validate (epoch=14)-----------
-2023-09-11 13:46:18,101 - 2000 samples (256 per mini-batch)
-2023-09-11 13:46:20,917 - Epoch: [14][ 8/ 8] Loss 0.385165 Top1 82.350000
-2023-09-11 13:46:21,015 - ==> Top1: 82.350 Loss: 0.385
-
-2023-09-11 13:46:21,015 - ==> Confusion:
-[[888 97]
- [256 759]]
-
-2023-09-11 13:46:21,031 - ==> Best [Top1: 84.400 Sparsity:0.00 Params: 57776 on epoch: 13]
-2023-09-11 13:46:21,031 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:46:21,034 - 
-
-2023-09-11 13:46:21,034 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:46:24,541 - Epoch: [15][ 10/ 71] Overall Loss 0.346086 Objective Loss 0.346086 LR 0.001000 Time 0.350661
-2023-09-11 13:46:26,450 - Epoch: [15][ 20/ 71] Overall Loss 0.344210 Objective Loss 0.344210 LR 0.001000 Time 0.270766
-2023-09-11 13:46:29,061 - Epoch: [15][ 30/ 71] Overall Loss 0.339610 Objective Loss 0.339610 LR 0.001000 Time 0.267537
-2023-09-11 13:46:31,089 - Epoch: [15][ 40/ 71] Overall Loss 0.338289 Objective Loss 0.338289 LR 0.001000 Time 0.251359
-2023-09-11 13:46:33,676 - Epoch: [15][ 50/ 71] Overall Loss 0.335131 Objective Loss 0.335131 LR 0.001000 Time 0.252802
-2023-09-11 13:46:35,598 - Epoch: [15][ 60/ 71] Overall Loss 0.334381 Objective Loss 0.334381 LR 0.001000 Time 0.242700
-2023-09-11 13:46:37,960 - Epoch: [15][ 70/ 71] Overall Loss 0.334742 Objective Loss 0.334742 Top1 85.937500 LR 0.001000 Time 0.241767
-2023-09-11 13:46:38,014 - Epoch: [15][ 71/ 71] Overall Loss 0.337589 Objective Loss 0.337589 Top1 82.738095 LR 0.001000 Time 0.239122
-2023-09-11 13:46:38,098 - --- validate (epoch=15)-----------
-2023-09-11 13:46:38,098 - 2000 samples (256 per mini-batch)
-2023-09-11 13:46:40,321 - Epoch: [15][ 8/ 8] Loss 0.321565 Top1 86.500000
-2023-09-11 13:46:40,401 - ==> Top1: 86.500 Loss: 0.322
-
-2023-09-11 13:46:40,401 - ==> Confusion:
-[[847 138]
- [132 883]]
-
-2023-09-11 13:46:40,415 - ==> Best [Top1: 86.500 Sparsity:0.00 Params: 57776 on epoch: 15]
-2023-09-11 13:46:40,416 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:46:40,418 - 
-
-2023-09-11 13:46:40,418 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:46:43,510 - Epoch: [16][ 10/ 71] Overall Loss 0.308856 Objective Loss 0.308856 LR 0.001000 Time 0.309109
-2023-09-11 13:46:45,361 - Epoch: [16][ 20/ 71] Overall Loss 0.308568 Objective Loss 0.308568 LR 0.001000 Time 0.247098
-2023-09-11 13:46:48,065 - Epoch: [16][ 30/ 71] Overall Loss 0.314104 Objective Loss 0.314104 LR 0.001000 Time 0.254848
-2023-09-11 13:46:49,994 - Epoch: [16][ 40/ 71] Overall Loss 0.323229 Objective Loss 0.323229 LR 0.001000 Time 0.239338
-2023-09-11 13:46:52,747 - Epoch: [16][ 50/ 71] Overall Loss 0.324753 Objective Loss 0.324753 LR 0.001000 Time 0.246535
-2023-09-11 13:46:54,748 - Epoch: [16][ 60/ 71] Overall Loss 0.322716 Objective Loss 0.322716 LR 0.001000 Time 0.238794
-2023-09-11 13:46:57,201 - Epoch: [16][ 70/ 71] Overall Loss 0.324338 Objective Loss 0.324338 Top1 84.765625 LR 0.001000 Time 0.239635
-2023-09-11 13:46:57,238 - Epoch: [16][ 71/ 71] Overall Loss 0.323968 Objective Loss 0.323968 Top1 85.714286 LR 0.001000 Time 0.236776
-2023-09-11 13:46:57,334 - --- validate (epoch=16)-----------
-2023-09-11 13:46:57,334 - 2000 samples (256 per mini-batch)
-2023-09-11 13:47:00,118 - Epoch: [16][ 8/ 8] Loss 0.375500 Top1 82.950000
-2023-09-11 13:47:00,215 - ==> Top1: 82.950 Loss: 0.375
-
-2023-09-11 13:47:00,215 - ==> Confusion:
-[[935 50]
- [291 724]]
-
-2023-09-11 13:47:00,230 - ==> Best [Top1: 86.500 Sparsity:0.00 Params: 57776 on epoch: 15]
-2023-09-11 13:47:00,230 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:47:00,232 - 
-
-2023-09-11 13:47:00,232 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:47:03,279 - Epoch: [17][ 10/ 71] Overall Loss 0.337823 Objective Loss 0.337823 LR 0.001000 Time 0.304633
-2023-09-11 13:47:05,177 - Epoch: [17][ 20/ 71] Overall Loss 0.337905 Objective Loss 0.337905 LR 0.001000 Time 0.247203
-2023-09-11 13:47:07,844 - Epoch: [17][ 30/ 71] Overall Loss 0.331820 Objective Loss 0.331820 LR 0.001000 Time 0.253682
-2023-09-11 13:47:09,835 - Epoch: [17][ 40/ 71] Overall Loss 0.323893 Objective Loss 0.323893 LR 0.001000 Time 0.240035
-2023-09-11 13:47:12,613 - Epoch: [17][ 50/ 71] Overall Loss 0.320550 Objective Loss 0.320550 LR 0.001000 Time 0.247582
-2023-09-11 13:47:14,558 - Epoch: [17][ 60/ 71] Overall Loss 0.317630 Objective Loss 0.317630 LR 0.001000 Time 0.238727
-2023-09-11 13:47:16,938 - Epoch: [17][ 70/ 71] Overall Loss 0.315687 Objective Loss 0.315687 Top1 84.375000 LR 0.001000 Time 0.238620
-2023-09-11 13:47:16,988 - Epoch: [17][ 71/ 71] Overall Loss 0.315508 Objective Loss 0.315508 Top1 85.119048 LR 0.001000 Time 0.235950
-2023-09-11 13:47:17,088 - --- validate (epoch=17)-----------
-2023-09-11 13:47:17,088 - 2000 samples (256 per mini-batch)
-2023-09-11 13:47:19,349 - Epoch: [17][ 8/ 8] Loss 0.328981 Top1 86.050000
-2023-09-11 13:47:19,443 - ==> Top1: 86.050 Loss: 0.329
-
-2023-09-11 13:47:19,443 - ==> Confusion:
-[[783 202]
- [ 77 938]]
-
-2023-09-11 13:47:19,458 - ==> Best [Top1: 86.500 Sparsity:0.00 Params: 57776 on epoch: 15]
-2023-09-11 13:47:19,459 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:47:19,461 - 
-
-2023-09-11 13:47:19,461 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:47:22,484 - Epoch: [18][ 10/ 71] Overall Loss 0.308845 Objective Loss 0.308845 LR 0.001000 Time 0.302244
-2023-09-11 13:47:24,975 - Epoch: [18][ 20/ 71] Overall Loss 0.304315 Objective Loss 0.304315 LR 0.001000 Time 0.275671
-2023-09-11 13:47:26,970 - Epoch: [18][ 30/ 71] Overall Loss 0.304256 Objective Loss 0.304256 LR 0.001000 Time 0.250262
-2023-09-11 13:47:29,604 - Epoch: [18][ 40/ 71] Overall Loss 0.300606 Objective Loss 0.300606 LR 0.001000 Time 0.253548
-2023-09-11 13:47:32,663 - Epoch: [18][ 50/ 71] Overall Loss 0.299598 Objective Loss 0.299598 LR 0.001000 Time 0.264007
-2023-09-11 13:47:34,988 - Epoch: [18][ 60/ 71] Overall Loss 0.298097 Objective Loss 0.298097 LR 0.001000 Time 0.258746
-2023-09-11 13:47:36,940 - Epoch: [18][ 70/ 71] Overall Loss 0.298756 Objective Loss 0.298756 Top1 87.890625 LR 0.001000 Time 0.249660
-2023-09-11 13:47:36,996 - Epoch: [18][ 71/ 71] Overall Loss 0.298696 Objective Loss 0.298696 Top1 86.904762 LR 0.001000 Time 0.246936
-2023-09-11 13:47:37,086 - --- validate (epoch=18)-----------
-2023-09-11 13:47:37,086 - 2000 samples (256 per mini-batch)
-2023-09-11 13:47:39,443 - Epoch: [18][ 8/ 8] Loss 0.330081 Top1 84.850000
-2023-09-11 13:47:39,535 - ==> Top1: 84.850 Loss: 0.330
-
-2023-09-11 13:47:39,536 - ==> Confusion:
-[[908 77]
- [226 789]]
-
-2023-09-11 13:47:39,551 - ==> Best [Top1: 86.500 Sparsity:0.00 Params: 57776 on epoch: 15]
-2023-09-11 13:47:39,551 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:47:39,553 - 
-
-2023-09-11 13:47:39,553 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:47:42,909 - Epoch: [19][ 10/ 71] Overall Loss 0.299029 Objective Loss 0.299029 LR 0.001000 Time 0.335538
-2023-09-11 13:47:44,935 - Epoch: [19][ 20/ 71] Overall Loss 0.301830 Objective Loss 0.301830 LR 0.001000 Time 0.269058
-2023-09-11 13:47:47,535 - Epoch: [19][ 30/ 71] Overall Loss 0.305319 Objective Loss 0.305319 LR 0.001000 Time 0.266015
-2023-09-11 13:47:49,410 - Epoch: [19][ 40/ 71] Overall Loss 0.309273 Objective Loss 0.309273 LR 0.001000 Time 0.246374
-2023-09-11 13:47:52,209 - Epoch: [19][ 50/ 71] Overall Loss 0.307461 Objective Loss 0.307461 LR 0.001000 Time 0.253072
-2023-09-11 13:47:54,664 - Epoch: [19][ 60/ 71] Overall Loss 0.302684 Objective Loss 0.302684 LR 0.001000 Time 0.251801
-2023-09-11 13:47:56,934 - Epoch: [19][ 70/ 71] Overall Loss 0.304184 Objective Loss 0.304184 Top1 87.109375 LR 0.001000 Time 0.248259
-2023-09-11 13:47:56,988 - Epoch: [19][ 71/ 71] Overall Loss 0.304446 Objective Loss 0.304446 Top1 86.309524 LR 0.001000 Time 0.245519
-2023-09-11 13:47:57,074 - --- validate (epoch=19)-----------
-2023-09-11 13:47:57,075 - 2000 samples (256 per mini-batch)
-2023-09-11 13:48:00,023 - Epoch: [19][ 8/ 8] Loss 0.392467 Top1 81.850000
-2023-09-11 13:48:00,114 - ==> Top1: 81.850 Loss: 0.392
-
-2023-09-11 13:48:00,114 - ==> Confusion:
-[[945 40]
- [323 692]]
-
-2023-09-11 13:48:00,130 - ==> Best [Top1: 86.500 Sparsity:0.00 Params: 57776 on epoch: 15]
-2023-09-11 13:48:00,131 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:48:00,133 - 
-
-2023-09-11 13:48:00,133 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:48:03,333 - Epoch: [20][ 10/ 71] Overall Loss 0.308596 Objective Loss 0.308596 LR 0.000500 Time 0.320001
-2023-09-11 13:48:05,200 - Epoch: [20][ 20/ 71] Overall Loss 0.295294 Objective Loss 0.295294 LR 0.000500 Time 0.253300
-2023-09-11 13:48:07,883 - Epoch: [20][ 30/ 71] Overall Loss 0.286377 Objective Loss 0.286377 LR 0.000500 Time 0.258288
-2023-09-11 13:48:09,825 - Epoch: [20][ 40/ 71] Overall Loss 0.284384 Objective Loss 0.284384 LR 0.000500 Time 0.242272
-2023-09-11 13:48:12,595 - Epoch: [20][ 50/ 71] Overall Loss 0.288698 Objective Loss 0.288698 LR 0.000500 Time 0.249210
-2023-09-11 13:48:14,652 - Epoch: [20][ 60/ 71] Overall Loss 0.287108 Objective Loss 0.287108 LR 0.000500 Time 0.241949
-2023-09-11 13:48:16,858 - Epoch: [20][ 70/ 71] Overall Loss 0.285438 Objective Loss 0.285438 Top1 90.234375 LR 0.000500 Time 0.238892
-2023-09-11 13:48:16,914 - Epoch: [20][ 71/ 71] Overall Loss 0.284864 Objective Loss 0.284864 Top1 90.476190 LR 0.000500 Time 0.236312
-2023-09-11 13:48:17,002 - --- validate (epoch=20)-----------
-2023-09-11 13:48:17,003 - 2000 samples (256 per mini-batch)
-2023-09-11 13:48:19,684 - Epoch: [20][ 8/ 8] Loss 0.288315 Top1 87.700000
-2023-09-11 13:48:19,784 - ==> Top1: 87.700 Loss: 0.288
-
-2023-09-11 13:48:19,784 - ==> Confusion:
-[[883 102]
- [144 871]]
-
-2023-09-11 13:48:19,799 - ==> Best [Top1: 87.700 Sparsity:0.00 Params: 57776 on epoch: 20]
-2023-09-11 13:48:19,800 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11 13:48:19,802 - 
-
-2023-09-11 13:48:19,802 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:48:23,064 - Epoch: [21][ 10/ 71] Overall Loss 0.285752 Objective Loss 0.285752 LR 0.000500 Time 0.326112
-2023-09-11 13:48:25,962 - Epoch: [21][ 20/ 71] Overall Loss 0.289914 Objective Loss 0.289914 LR 0.000500 Time 0.307928
-2023-09-11 13:48:27,858 - Epoch: [21][ 30/ 71] Overall Loss 0.292561 Objective Loss 0.292561 LR 0.000500 Time 0.268493
-2023-09-11 13:48:30,551 - Epoch: [21][ 40/ 71] Overall Loss 0.289862 Objective Loss 0.289862 LR 0.000500 Time 0.268689
-2023-09-11 13:48:32,744 - Epoch: [21][ 50/ 71] Overall Loss 0.286270 Objective Loss 0.286270 LR 0.000500 Time 0.258804
-2023-09-11 13:48:35,379 - Epoch: [21][ 60/ 71] Overall Loss 0.283386 Objective Loss 0.283386 LR 0.000500 Time 0.259570
-2023-09-11 13:48:37,168 - Epoch: [21][ 70/ 71] Overall Loss 0.280368 Objective Loss 0.280368 Top1 87.500000 LR 0.000500 Time 0.248042
-2023-09-11 13:48:37,223 - Epoch: [21][ 71/ 71] Overall Loss 0.281661 Objective Loss 0.281661 Top1 87.500000 LR 0.000500 Time 0.245322
-2023-09-11 13:48:37,319 - --- validate (epoch=21)-----------
-2023-09-11 13:48:37,319 - 2000 samples (256 per mini-batch)
-2023-09-11 13:48:40,108 - Epoch: [21][ 8/ 8] Loss 0.273386 Top1 88.700000
-2023-09-11 13:48:40,201 - ==> Top1: 88.700 Loss: 0.273
-
-2023-09-11 13:48:40,202 - ==> Confusion:
-[[849 136]
- [ 90 925]]
-
-2023-09-11 13:48:40,202 - ==> Best [Top1: 88.700 Sparsity:0.00 Params: 57776 on epoch: 21]
-2023-09-11 13:48:40,202 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar
-2023-09-11
13:48:40,205 - - -2023-09-11 13:48:40,205 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:48:44,144 - Epoch: [22][ 10/ 71] Overall Loss 0.272255 Objective Loss 0.272255 LR 0.000500 Time 0.393836 -2023-09-11 13:48:46,056 - Epoch: [22][ 20/ 71] Overall Loss 0.273082 Objective Loss 0.273082 LR 0.000500 Time 0.292494 -2023-09-11 13:48:48,715 - Epoch: [22][ 30/ 71] Overall Loss 0.269281 Objective Loss 0.269281 LR 0.000500 Time 0.283605 -2023-09-11 13:48:50,683 - Epoch: [22][ 40/ 71] Overall Loss 0.266524 Objective Loss 0.266524 LR 0.000500 Time 0.261902 -2023-09-11 13:48:53,314 - Epoch: [22][ 50/ 71] Overall Loss 0.268067 Objective Loss 0.268067 LR 0.000500 Time 0.262135 -2023-09-11 13:48:55,339 - Epoch: [22][ 60/ 71] Overall Loss 0.272788 Objective Loss 0.272788 LR 0.000500 Time 0.252188 -2023-09-11 13:48:57,649 - Epoch: [22][ 70/ 71] Overall Loss 0.277193 Objective Loss 0.277193 Top1 90.625000 LR 0.000500 Time 0.249164 -2023-09-11 13:48:57,706 - Epoch: [22][ 71/ 71] Overall Loss 0.277597 Objective Loss 0.277597 Top1 89.583333 LR 0.000500 Time 0.246457 -2023-09-11 13:48:57,798 - --- validate (epoch=22)----------- -2023-09-11 13:48:57,798 - 2000 samples (256 per mini-batch) -2023-09-11 13:49:00,140 - Epoch: [22][ 8/ 8] Loss 0.268274 Top1 88.550000 -2023-09-11 13:49:00,235 - ==> Top1: 88.550 Loss: 0.268 - -2023-09-11 13:49:00,236 - ==> Confusion: -[[878 107] - [122 893]] - -2023-09-11 13:49:00,238 - ==> Best [Top1: 88.700 Sparsity:0.00 Params: 57776 on epoch: 21] -2023-09-11 13:49:00,238 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar -2023-09-11 13:49:00,240 - - -2023-09-11 13:49:00,240 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:49:03,252 - Epoch: [23][ 10/ 71] Overall Loss 0.266420 Objective Loss 0.266420 LR 0.000500 Time 0.301163 -2023-09-11 13:49:05,196 - Epoch: [23][ 20/ 71] Overall Loss 0.266742 Objective Loss 0.266742 LR 0.000500 Time 0.247729 -2023-09-11 13:49:07,967 - Epoch: [23][ 30/ 71] Overall 
Loss 0.268304 Objective Loss 0.268304 LR 0.000500 Time 0.257534 -2023-09-11 13:49:09,937 - Epoch: [23][ 40/ 71] Overall Loss 0.266549 Objective Loss 0.266549 LR 0.000500 Time 0.242395 -2023-09-11 13:49:12,617 - Epoch: [23][ 50/ 71] Overall Loss 0.267717 Objective Loss 0.267717 LR 0.000500 Time 0.247504 -2023-09-11 13:49:14,717 - Epoch: [23][ 60/ 71] Overall Loss 0.266442 Objective Loss 0.266442 LR 0.000500 Time 0.241240 -2023-09-11 13:49:16,882 - Epoch: [23][ 70/ 71] Overall Loss 0.265819 Objective Loss 0.265819 Top1 89.453125 LR 0.000500 Time 0.237710 -2023-09-11 13:49:16,936 - Epoch: [23][ 71/ 71] Overall Loss 0.265769 Objective Loss 0.265769 Top1 89.583333 LR 0.000500 Time 0.235113 -2023-09-11 13:49:17,027 - --- validate (epoch=23)----------- -2023-09-11 13:49:17,027 - 2000 samples (256 per mini-batch) -2023-09-11 13:49:19,615 - Epoch: [23][ 8/ 8] Loss 0.273119 Top1 88.500000 -2023-09-11 13:49:19,696 - ==> Top1: 88.500 Loss: 0.273 - -2023-09-11 13:49:19,697 - ==> Confusion: -[[872 113] - [117 898]] - -2023-09-11 13:49:19,699 - ==> Best [Top1: 88.700 Sparsity:0.00 Params: 57776 on epoch: 21] -2023-09-11 13:49:19,699 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar -2023-09-11 13:49:19,701 - - -2023-09-11 13:49:19,701 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:49:22,920 - Epoch: [24][ 10/ 71] Overall Loss 0.256910 Objective Loss 0.256910 LR 0.000500 Time 0.321854 -2023-09-11 13:49:25,424 - Epoch: [24][ 20/ 71] Overall Loss 0.266019 Objective Loss 0.266019 LR 0.000500 Time 0.286129 -2023-09-11 13:49:27,352 - Epoch: [24][ 30/ 71] Overall Loss 0.269425 Objective Loss 0.269425 LR 0.000500 Time 0.254983 -2023-09-11 13:49:29,940 - Epoch: [24][ 40/ 71] Overall Loss 0.275635 Objective Loss 0.275635 LR 0.000500 Time 0.255942 -2023-09-11 13:49:31,996 - Epoch: [24][ 50/ 71] Overall Loss 0.272881 Objective Loss 0.272881 LR 0.000500 Time 0.245869 -2023-09-11 13:49:34,562 - Epoch: [24][ 60/ 71] Overall Loss 0.272661 Objective Loss 
0.272661 LR 0.000500 Time 0.247636 -2023-09-11 13:49:36,344 - Epoch: [24][ 70/ 71] Overall Loss 0.269014 Objective Loss 0.269014 Top1 90.234375 LR 0.000500 Time 0.237720 -2023-09-11 13:49:36,398 - Epoch: [24][ 71/ 71] Overall Loss 0.269085 Objective Loss 0.269085 Top1 90.178571 LR 0.000500 Time 0.235127 -2023-09-11 13:49:36,488 - --- validate (epoch=24)----------- -2023-09-11 13:49:36,489 - 2000 samples (256 per mini-batch) -2023-09-11 13:49:39,285 - Epoch: [24][ 8/ 8] Loss 0.262121 Top1 88.650000 -2023-09-11 13:49:39,379 - ==> Top1: 88.650 Loss: 0.262 - -2023-09-11 13:49:39,379 - ==> Confusion: -[[865 120] - [107 908]] - -2023-09-11 13:49:39,394 - ==> Best [Top1: 88.700 Sparsity:0.00 Params: 57776 on epoch: 21] -2023-09-11 13:49:39,394 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar -2023-09-11 13:49:39,397 - - -2023-09-11 13:49:39,397 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:49:42,347 - Epoch: [25][ 10/ 71] Overall Loss 0.268170 Objective Loss 0.268170 LR 0.000500 Time 0.294933 -2023-09-11 13:49:44,277 - Epoch: [25][ 20/ 71] Overall Loss 0.259584 Objective Loss 0.259584 LR 0.000500 Time 0.243975 -2023-09-11 13:49:46,867 - Epoch: [25][ 30/ 71] Overall Loss 0.260530 Objective Loss 0.260530 LR 0.000500 Time 0.248964 -2023-09-11 13:49:48,907 - Epoch: [25][ 40/ 71] Overall Loss 0.257426 Objective Loss 0.257426 LR 0.000500 Time 0.237719 -2023-09-11 13:49:51,556 - Epoch: [25][ 50/ 71] Overall Loss 0.258607 Objective Loss 0.258607 LR 0.000500 Time 0.243148 -2023-09-11 13:49:53,634 - Epoch: [25][ 60/ 71] Overall Loss 0.259880 Objective Loss 0.259880 LR 0.000500 Time 0.237244 -2023-09-11 13:49:56,017 - Epoch: [25][ 70/ 71] Overall Loss 0.259897 Objective Loss 0.259897 Top1 88.281250 LR 0.000500 Time 0.237394 -2023-09-11 13:49:56,074 - Epoch: [25][ 71/ 71] Overall Loss 0.260832 Objective Loss 0.260832 Top1 87.500000 LR 0.000500 Time 0.234855 -2023-09-11 13:49:56,167 - --- validate (epoch=25)----------- -2023-09-11 13:49:56,167 - 
2000 samples (256 per mini-batch) -2023-09-11 13:49:58,976 - Epoch: [25][ 8/ 8] Loss 0.252398 Top1 89.300000 -2023-09-11 13:49:59,065 - ==> Top1: 89.300 Loss: 0.252 - -2023-09-11 13:49:59,066 - ==> Confusion: -[[882 103] +2025-05-20 16:36:53,289 - + +2025-05-20 16:36:53,289 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:36:58,263 - Epoch: [0][ 10/ 71] Overall Loss 0.703078 Objective Loss 0.703078 LR 0.001000 Time 0.497342 +2025-05-20 16:37:02,455 - Epoch: [0][ 20/ 71] Overall Loss 0.693930 Objective Loss 0.693930 LR 0.001000 Time 0.458227 +2025-05-20 16:37:07,393 - Epoch: [0][ 30/ 71] Overall Loss 0.686005 Objective Loss 0.686005 LR 0.001000 Time 0.470087 +2025-05-20 16:37:10,450 - Epoch: [0][ 40/ 71] Overall Loss 0.679942 Objective Loss 0.679942 LR 0.001000 Time 0.428975 +2025-05-20 16:37:15,048 - Epoch: [0][ 50/ 71] Overall Loss 0.675036 Objective Loss 0.675036 LR 0.001000 Time 0.435135 +2025-05-20 16:37:18,819 - Epoch: [0][ 60/ 71] Overall Loss 0.673182 Objective Loss 0.673182 LR 0.001000 Time 0.425467 +2025-05-20 16:37:22,960 - Epoch: [0][ 70/ 71] Overall Loss 0.670252 Objective Loss 0.670252 Top1 60.546875 LR 0.001000 Time 0.423837 +2025-05-20 16:37:23,078 - Epoch: [0][ 71/ 71] Overall Loss 0.670181 Objective Loss 0.670181 Top1 61.011905 LR 0.001000 Time 0.419526 +2025-05-20 16:37:23,103 - --- validate (epoch=0)----------- +2025-05-20 16:37:23,104 - 2000 samples (256 per mini-batch) +2025-05-20 16:37:27,052 - Epoch: [0][ 8/ 8] Loss 0.650954 Top1 61.350000 +2025-05-20 16:37:27,079 - ==> Top1: 61.350 Loss: 0.651 + +2025-05-20 16:37:27,080 - ==> Confusion: +[[804 181] + [592 423]] + +2025-05-20 16:37:27,117 - ==> Best [Top1: 61.350 Params: 57776 on epoch: 0] +2025-05-20 16:37:27,118 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:37:27,126 - + +2025-05-20 16:37:27,126 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:37:33,375 - Epoch: [1][ 
10/ 71] Overall Loss 0.640749 Objective Loss 0.640749 LR 0.001000 Time 0.624905 +2025-05-20 16:37:36,064 - Epoch: [1][ 20/ 71] Overall Loss 0.641899 Objective Loss 0.641899 LR 0.001000 Time 0.446862 +2025-05-20 16:37:40,019 - Epoch: [1][ 30/ 71] Overall Loss 0.643615 Objective Loss 0.643615 LR 0.001000 Time 0.429725 +2025-05-20 16:37:43,628 - Epoch: [1][ 40/ 71] Overall Loss 0.641970 Objective Loss 0.641970 LR 0.001000 Time 0.412514 +2025-05-20 16:37:47,400 - Epoch: [1][ 50/ 71] Overall Loss 0.637422 Objective Loss 0.637422 LR 0.001000 Time 0.405448 +2025-05-20 16:37:51,544 - Epoch: [1][ 60/ 71] Overall Loss 0.632588 Objective Loss 0.632588 LR 0.001000 Time 0.406928 +2025-05-20 16:37:54,578 - Epoch: [1][ 70/ 71] Overall Loss 0.628283 Objective Loss 0.628283 Top1 66.015625 LR 0.001000 Time 0.392134 +2025-05-20 16:37:54,667 - Epoch: [1][ 71/ 71] Overall Loss 0.627468 Objective Loss 0.627468 Top1 66.369048 LR 0.001000 Time 0.387862 +2025-05-20 16:37:54,692 - --- validate (epoch=1)----------- +2025-05-20 16:37:54,692 - 2000 samples (256 per mini-batch) +2025-05-20 16:37:57,936 - Epoch: [1][ 8/ 8] Loss 0.595915 Top1 67.800000 +2025-05-20 16:37:57,968 - ==> Top1: 67.800 Loss: 0.596 + +2025-05-20 16:37:57,968 - ==> Confusion: +[[735 250] + [394 621]] + +2025-05-20 16:37:57,984 - ==> Best [Top1: 67.800 Params: 57776 on epoch: 1] +2025-05-20 16:37:57,985 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:37:57,992 - + +2025-05-20 16:37:57,993 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:38:04,139 - Epoch: [2][ 10/ 71] Overall Loss 0.593250 Objective Loss 0.593250 LR 0.001000 Time 0.614591 +2025-05-20 16:38:06,987 - Epoch: [2][ 20/ 71] Overall Loss 0.593424 Objective Loss 0.593424 LR 0.001000 Time 0.449697 +2025-05-20 16:38:10,997 - Epoch: [2][ 30/ 71] Overall Loss 0.595111 Objective Loss 0.595111 LR 0.001000 Time 0.433437 +2025-05-20 16:38:13,964 - Epoch: [2][ 40/ 71] Overall Loss 
0.593027 Objective Loss 0.593027 LR 0.001000 Time 0.399252 +2025-05-20 16:38:17,879 - Epoch: [2][ 50/ 71] Overall Loss 0.588941 Objective Loss 0.588941 LR 0.001000 Time 0.397689 +2025-05-20 16:38:20,710 - Epoch: [2][ 60/ 71] Overall Loss 0.584632 Objective Loss 0.584632 LR 0.001000 Time 0.378599 +2025-05-20 16:38:24,289 - Epoch: [2][ 70/ 71] Overall Loss 0.579968 Objective Loss 0.579968 Top1 71.484375 LR 0.001000 Time 0.375626 +2025-05-20 16:38:24,384 - Epoch: [2][ 71/ 71] Overall Loss 0.578272 Objective Loss 0.578272 Top1 72.619048 LR 0.001000 Time 0.371683 +2025-05-20 16:38:24,411 - --- validate (epoch=2)----------- +2025-05-20 16:38:24,411 - 2000 samples (256 per mini-batch) +2025-05-20 16:38:27,949 - Epoch: [2][ 8/ 8] Loss 0.573918 Top1 70.250000 +2025-05-20 16:38:27,986 - ==> Top1: 70.250 Loss: 0.574 + +2025-05-20 16:38:27,986 - ==> Confusion: +[[562 423] + [172 843]] + +2025-05-20 16:38:28,003 - ==> Best [Top1: 70.250 Params: 57776 on epoch: 2] +2025-05-20 16:38:28,003 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:38:28,011 - + +2025-05-20 16:38:28,011 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:38:32,753 - Epoch: [3][ 10/ 71] Overall Loss 0.540832 Objective Loss 0.540832 LR 0.001000 Time 0.474149 +2025-05-20 16:38:37,421 - Epoch: [3][ 20/ 71] Overall Loss 0.545196 Objective Loss 0.545196 LR 0.001000 Time 0.470469 +2025-05-20 16:38:41,158 - Epoch: [3][ 30/ 71] Overall Loss 0.555518 Objective Loss 0.555518 LR 0.001000 Time 0.438199 +2025-05-20 16:38:45,064 - Epoch: [3][ 40/ 71] Overall Loss 0.553491 Objective Loss 0.553491 LR 0.001000 Time 0.426291 +2025-05-20 16:38:48,042 - Epoch: [3][ 50/ 71] Overall Loss 0.549716 Objective Loss 0.549716 LR 0.001000 Time 0.400590 +2025-05-20 16:38:51,551 - Epoch: [3][ 60/ 71] Overall Loss 0.547005 Objective Loss 0.547005 LR 0.001000 Time 0.392297 +2025-05-20 16:38:55,441 - Epoch: [3][ 70/ 71] Overall Loss 0.543241 Objective Loss 
0.543241 Top1 74.218750 LR 0.001000 Time 0.391824 +2025-05-20 16:38:55,541 - Epoch: [3][ 71/ 71] Overall Loss 0.542534 Objective Loss 0.542534 Top1 76.190476 LR 0.001000 Time 0.387711 +2025-05-20 16:38:55,570 - --- validate (epoch=3)----------- +2025-05-20 16:38:55,571 - 2000 samples (256 per mini-batch) +2025-05-20 16:38:59,276 - Epoch: [3][ 8/ 8] Loss 0.509910 Top1 75.400000 +2025-05-20 16:38:59,305 - ==> Top1: 75.400 Loss: 0.510 + +2025-05-20 16:38:59,305 - ==> Confusion: +[[836 149] + [343 672]] + +2025-05-20 16:38:59,320 - ==> Best [Top1: 75.400 Params: 57776 on epoch: 3] +2025-05-20 16:38:59,320 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:38:59,328 - + +2025-05-20 16:38:59,328 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:39:04,344 - Epoch: [4][ 10/ 71] Overall Loss 0.517305 Objective Loss 0.517305 LR 0.001000 Time 0.501475 +2025-05-20 16:39:08,632 - Epoch: [4][ 20/ 71] Overall Loss 0.533014 Objective Loss 0.533014 LR 0.001000 Time 0.465131 +2025-05-20 16:39:12,288 - Epoch: [4][ 30/ 71] Overall Loss 0.536467 Objective Loss 0.536467 LR 0.001000 Time 0.431941 +2025-05-20 16:39:16,264 - Epoch: [4][ 40/ 71] Overall Loss 0.537946 Objective Loss 0.537946 LR 0.001000 Time 0.423352 +2025-05-20 16:39:19,885 - Epoch: [4][ 50/ 71] Overall Loss 0.529789 Objective Loss 0.529789 LR 0.001000 Time 0.411086 +2025-05-20 16:39:24,445 - Epoch: [4][ 60/ 71] Overall Loss 0.524812 Objective Loss 0.524812 LR 0.001000 Time 0.418573 +2025-05-20 16:39:27,306 - Epoch: [4][ 70/ 71] Overall Loss 0.520740 Objective Loss 0.520740 Top1 79.687500 LR 0.001000 Time 0.399643 +2025-05-20 16:39:27,406 - Epoch: [4][ 71/ 71] Overall Loss 0.520243 Objective Loss 0.520243 Top1 79.166667 LR 0.001000 Time 0.395421 +2025-05-20 16:39:27,441 - --- validate (epoch=4)----------- +2025-05-20 16:39:27,442 - 2000 samples (256 per mini-batch) +2025-05-20 16:39:30,884 - Epoch: [4][ 8/ 8] Loss 0.488432 Top1 77.450000 
+2025-05-20 16:39:30,912 - ==> Top1: 77.450 Loss: 0.488 + +2025-05-20 16:39:30,912 - ==> Confusion: +[[688 297] + [154 861]] + +2025-05-20 16:39:30,928 - ==> Best [Top1: 77.450 Params: 57776 on epoch: 4] +2025-05-20 16:39:30,928 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:39:30,936 - + +2025-05-20 16:39:30,936 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:39:36,591 - Epoch: [5][ 10/ 71] Overall Loss 0.476453 Objective Loss 0.476453 LR 0.001000 Time 0.565489 +2025-05-20 16:39:39,820 - Epoch: [5][ 20/ 71] Overall Loss 0.481755 Objective Loss 0.481755 LR 0.001000 Time 0.444141 +2025-05-20 16:39:45,088 - Epoch: [5][ 30/ 71] Overall Loss 0.474174 Objective Loss 0.474174 LR 0.001000 Time 0.471686 +2025-05-20 16:39:48,239 - Epoch: [5][ 40/ 71] Overall Loss 0.474681 Objective Loss 0.474681 LR 0.001000 Time 0.432525 +2025-05-20 16:39:52,780 - Epoch: [5][ 50/ 71] Overall Loss 0.470564 Objective Loss 0.470564 LR 0.001000 Time 0.436828 +2025-05-20 16:39:56,149 - Epoch: [5][ 60/ 71] Overall Loss 0.477763 Objective Loss 0.477763 LR 0.001000 Time 0.420178 +2025-05-20 16:40:00,478 - Epoch: [5][ 70/ 71] Overall Loss 0.478147 Objective Loss 0.478147 Top1 80.078125 LR 0.001000 Time 0.421989 +2025-05-20 16:40:00,552 - Epoch: [5][ 71/ 71] Overall Loss 0.477292 Objective Loss 0.477292 Top1 80.952381 LR 0.001000 Time 0.417083 +2025-05-20 16:40:00,593 - --- validate (epoch=5)----------- +2025-05-20 16:40:00,593 - 2000 samples (256 per mini-batch) +2025-05-20 16:40:04,799 - Epoch: [5][ 8/ 8] Loss 0.479976 Top1 78.250000 +2025-05-20 16:40:04,834 - ==> Top1: 78.250 Loss: 0.480 + +2025-05-20 16:40:04,834 - ==> Confusion: +[[661 324] [111 904]] -2023-09-11 13:49:59,082 - ==> Best [Top1: 89.300 Sparsity:0.00 Params: 57776 on epoch: 25] -2023-09-11 13:49:59,082 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar -2023-09-11 13:49:59,085 - - -2023-09-11 13:49:59,085 - Training epoch: 18000 
samples (256 per mini-batch) -2023-09-11 13:50:02,124 - Epoch: [26][ 10/ 71] Overall Loss 0.258358 Objective Loss 0.258358 LR 0.000500 Time 0.303874 -2023-09-11 13:50:04,809 - Epoch: [26][ 20/ 71] Overall Loss 0.249143 Objective Loss 0.249143 LR 0.000500 Time 0.286156 -2023-09-11 13:50:07,807 - Epoch: [26][ 30/ 71] Overall Loss 0.248127 Objective Loss 0.248127 LR 0.000500 Time 0.290708 -2023-09-11 13:50:09,725 - Epoch: [26][ 40/ 71] Overall Loss 0.251466 Objective Loss 0.251466 LR 0.000500 Time 0.265960 -2023-09-11 13:50:12,375 - Epoch: [26][ 50/ 71] Overall Loss 0.252845 Objective Loss 0.252845 LR 0.000500 Time 0.265757 -2023-09-11 13:50:14,255 - Epoch: [26][ 60/ 71] Overall Loss 0.255276 Objective Loss 0.255276 LR 0.000500 Time 0.252796 -2023-09-11 13:50:16,834 - Epoch: [26][ 70/ 71] Overall Loss 0.255538 Objective Loss 0.255538 Top1 93.359375 LR 0.000500 Time 0.253517 -2023-09-11 13:50:16,885 - Epoch: [26][ 71/ 71] Overall Loss 0.254752 Objective Loss 0.254752 Top1 93.452381 LR 0.000500 Time 0.250670 -2023-09-11 13:50:16,982 - --- validate (epoch=26)----------- -2023-09-11 13:50:16,983 - 2000 samples (256 per mini-batch) -2023-09-11 13:50:19,275 - Epoch: [26][ 8/ 8] Loss 0.290556 Top1 87.500000 -2023-09-11 13:50:19,370 - ==> Top1: 87.500 Loss: 0.291 - -2023-09-11 13:50:19,370 - ==> Confusion: -[[786 199] - [ 51 964]] - -2023-09-11 13:50:19,371 - ==> Best [Top1: 89.300 Sparsity:0.00 Params: 57776 on epoch: 25] -2023-09-11 13:50:19,371 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar -2023-09-11 13:50:19,374 - - -2023-09-11 13:50:19,374 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:50:22,461 - Epoch: [27][ 10/ 71] Overall Loss 0.275978 Objective Loss 0.275978 LR 0.000500 Time 0.308697 -2023-09-11 13:50:24,397 - Epoch: [27][ 20/ 71] Overall Loss 0.264666 Objective Loss 0.264666 LR 0.000500 Time 0.251099 -2023-09-11 13:50:26,946 - Epoch: [27][ 30/ 71] Overall Loss 0.259205 Objective Loss 0.259205 LR 0.000500 Time 0.252380 
-2023-09-11 13:50:29,226 - Epoch: [27][ 40/ 71] Overall Loss 0.261822 Objective Loss 0.261822 LR 0.000500 Time 0.246269 -2023-09-11 13:50:31,730 - Epoch: [27][ 50/ 71] Overall Loss 0.259050 Objective Loss 0.259050 LR 0.000500 Time 0.247097 -2023-09-11 13:50:33,979 - Epoch: [27][ 60/ 71] Overall Loss 0.256715 Objective Loss 0.256715 LR 0.000500 Time 0.243391 -2023-09-11 13:50:35,997 - Epoch: [27][ 70/ 71] Overall Loss 0.254551 Objective Loss 0.254551 Top1 88.281250 LR 0.000500 Time 0.237439 -2023-09-11 13:50:36,054 - Epoch: [27][ 71/ 71] Overall Loss 0.253893 Objective Loss 0.253893 Top1 89.285714 LR 0.000500 Time 0.234894 -2023-09-11 13:50:36,144 - --- validate (epoch=27)----------- -2023-09-11 13:50:36,144 - 2000 samples (256 per mini-batch) -2023-09-11 13:50:38,861 - Epoch: [27][ 8/ 8] Loss 0.280413 Top1 87.450000 -2023-09-11 13:50:38,965 - ==> Top1: 87.450 Loss: 0.280 - -2023-09-11 13:50:38,965 - ==> Confusion: -[[918 67] +2025-05-20 16:40:04,848 - ==> Best [Top1: 78.250 Params: 57776 on epoch: 5] +2025-05-20 16:40:04,848 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:40:04,856 - + +2025-05-20 16:40:04,856 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:40:10,967 - Epoch: [6][ 10/ 71] Overall Loss 0.478607 Objective Loss 0.478607 LR 0.001000 Time 0.611021 +2025-05-20 16:40:14,391 - Epoch: [6][ 20/ 71] Overall Loss 0.461292 Objective Loss 0.461292 LR 0.001000 Time 0.476714 +2025-05-20 16:40:18,517 - Epoch: [6][ 30/ 71] Overall Loss 0.456888 Objective Loss 0.456888 LR 0.001000 Time 0.455328 +2025-05-20 16:40:22,137 - Epoch: [6][ 40/ 71] Overall Loss 0.454965 Objective Loss 0.454965 LR 0.001000 Time 0.431976 +2025-05-20 16:40:27,038 - Epoch: [6][ 50/ 71] Overall Loss 0.451605 Objective Loss 0.451605 LR 0.001000 Time 0.443597 +2025-05-20 16:40:29,679 - Epoch: [6][ 60/ 71] Overall Loss 0.454220 Objective Loss 0.454220 LR 0.001000 Time 0.413676 +2025-05-20 16:40:33,728 - Epoch: 
[6][ 70/ 71] Overall Loss 0.450712 Objective Loss 0.450712 Top1 78.515625 LR 0.001000 Time 0.412421 +2025-05-20 16:40:33,819 - Epoch: [6][ 71/ 71] Overall Loss 0.450678 Objective Loss 0.450678 Top1 77.976190 LR 0.001000 Time 0.407889 +2025-05-20 16:40:33,845 - --- validate (epoch=6)----------- +2025-05-20 16:40:33,845 - 2000 samples (256 per mini-batch) +2025-05-20 16:40:37,371 - Epoch: [6][ 8/ 8] Loss 0.434274 Top1 80.150000 +2025-05-20 16:40:37,401 - ==> Top1: 80.150 Loss: 0.434 + +2025-05-20 16:40:37,401 - ==> Confusion: +[[772 213] [184 831]] -2023-09-11 13:50:38,980 - ==> Best [Top1: 89.300 Sparsity:0.00 Params: 57776 on epoch: 25] -2023-09-11 13:50:38,980 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar -2023-09-11 13:50:38,984 - - -2023-09-11 13:50:38,984 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:50:42,397 - Epoch: [28][ 10/ 71] Overall Loss 0.229270 Objective Loss 0.229270 LR 0.000500 Time 0.341166 -2023-09-11 13:50:44,994 - Epoch: [28][ 20/ 71] Overall Loss 0.249809 Objective Loss 0.249809 LR 0.000500 Time 0.300419 -2023-09-11 13:50:47,261 - Epoch: [28][ 30/ 71] Overall Loss 0.249062 Objective Loss 0.249062 LR 0.000500 Time 0.275836 -2023-09-11 13:50:50,010 - Epoch: [28][ 40/ 71] Overall Loss 0.241849 Objective Loss 0.241849 LR 0.000500 Time 0.275586 -2023-09-11 13:50:52,046 - Epoch: [28][ 50/ 71] Overall Loss 0.238061 Objective Loss 0.238061 LR 0.000500 Time 0.261187 -2023-09-11 13:50:54,779 - Epoch: [28][ 60/ 71] Overall Loss 0.242507 Objective Loss 0.242507 LR 0.000500 Time 0.263205 -2023-09-11 13:50:56,506 - Epoch: [28][ 70/ 71] Overall Loss 0.244726 Objective Loss 0.244726 Top1 87.500000 LR 0.000500 Time 0.250276 -2023-09-11 13:50:56,530 - Epoch: [28][ 71/ 71] Overall Loss 0.246453 Objective Loss 0.246453 Top1 86.309524 LR 0.000500 Time 0.247075 -2023-09-11 13:50:56,624 - --- validate (epoch=28)----------- -2023-09-11 13:50:56,624 - 2000 samples (256 per mini-batch) -2023-09-11 13:50:58,771 - Epoch: [28][ 8/ 
8] Loss 0.276021 Top1 86.950000 -2023-09-11 13:50:58,865 - ==> Top1: 86.950 Loss: 0.276 - -2023-09-11 13:50:58,865 - ==> Confusion: -[[882 103] - [158 857]] - -2023-09-11 13:50:58,879 - ==> Best [Top1: 89.300 Sparsity:0.00 Params: 57776 on epoch: 25] -2023-09-11 13:50:58,879 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar -2023-09-11 13:50:58,882 - - -2023-09-11 13:50:58,882 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:51:02,007 - Epoch: [29][ 10/ 71] Overall Loss 0.238283 Objective Loss 0.238283 LR 0.000500 Time 0.312507 -2023-09-11 13:51:03,945 - Epoch: [29][ 20/ 71] Overall Loss 0.247203 Objective Loss 0.247203 LR 0.000500 Time 0.253129 -2023-09-11 13:51:06,535 - Epoch: [29][ 30/ 71] Overall Loss 0.247636 Objective Loss 0.247636 LR 0.000500 Time 0.255064 -2023-09-11 13:51:08,686 - Epoch: [29][ 40/ 71] Overall Loss 0.251349 Objective Loss 0.251349 LR 0.000500 Time 0.245058 -2023-09-11 13:51:11,367 - Epoch: [29][ 50/ 71] Overall Loss 0.249663 Objective Loss 0.249663 LR 0.000500 Time 0.249674 -2023-09-11 13:51:13,487 - Epoch: [29][ 60/ 71] Overall Loss 0.248195 Objective Loss 0.248195 LR 0.000500 Time 0.243387 -2023-09-11 13:51:15,720 - Epoch: [29][ 70/ 71] Overall Loss 0.246869 Objective Loss 0.246869 Top1 88.671875 LR 0.000500 Time 0.240504 -2023-09-11 13:51:15,773 - Epoch: [29][ 71/ 71] Overall Loss 0.246330 Objective Loss 0.246330 Top1 89.285714 LR 0.000500 Time 0.237869 -2023-09-11 13:51:15,863 - --- validate (epoch=29)----------- -2023-09-11 13:51:15,863 - 2000 samples (256 per mini-batch) -2023-09-11 13:51:18,356 - Epoch: [29][ 8/ 8] Loss 0.273313 Top1 87.650000 -2023-09-11 13:51:18,449 - ==> Top1: 87.650 Loss: 0.273 - -2023-09-11 13:51:18,450 - ==> Confusion: -[[896 89] - [158 857]] - -2023-09-11 13:51:18,465 - ==> Best [Top1: 89.300 Sparsity:0.00 Params: 57776 on epoch: 25] -2023-09-11 13:51:18,465 - Saving checkpoint to: logs/2023.09.11-134109/checkpoint.pth.tar -2023-09-11 13:51:18,476 - - -2023-09-11 13:51:18,476 
- Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:51:22,066 - Epoch: [30][ 10/ 71] Overall Loss 0.397721 Objective Loss 0.397721 LR 0.000500 Time 0.358927 -2023-09-11 13:51:24,067 - Epoch: [30][ 20/ 71] Overall Loss 0.371947 Objective Loss 0.371947 LR 0.000500 Time 0.279498 -2023-09-11 13:51:26,793 - Epoch: [30][ 30/ 71] Overall Loss 0.347872 Objective Loss 0.347872 LR 0.000500 Time 0.277167 -2023-09-11 13:51:28,743 - Epoch: [30][ 40/ 71] Overall Loss 0.335985 Objective Loss 0.335985 LR 0.000500 Time 0.256616 -2023-09-11 13:51:31,377 - Epoch: [30][ 50/ 71] Overall Loss 0.321244 Objective Loss 0.321244 LR 0.000500 Time 0.257976 -2023-09-11 13:51:33,609 - Epoch: [30][ 60/ 71] Overall Loss 0.310839 Objective Loss 0.310839 LR 0.000500 Time 0.252173 -2023-09-11 13:51:35,956 - Epoch: [30][ 70/ 71] Overall Loss 0.301708 Objective Loss 0.301708 Top1 91.406250 LR 0.000500 Time 0.249681 -2023-09-11 13:51:36,033 - Epoch: [30][ 71/ 71] Overall Loss 0.299712 Objective Loss 0.299712 Top1 91.071429 LR 0.000500 Time 0.247239 -2023-09-11 13:51:36,125 - --- validate (epoch=30)----------- -2023-09-11 13:51:36,125 - 2000 samples (256 per mini-batch) -2023-09-11 13:51:38,837 - Epoch: [30][ 8/ 8] Loss 0.310722 Top1 87.150000 -2023-09-11 13:51:38,928 - ==> Top1: 87.150 Loss: 0.311 - -2023-09-11 13:51:38,928 - ==> Confusion: -[[917 68] - [189 826]] - -2023-09-11 13:51:38,945 - ==> Best [Top1: 87.150 Sparsity:0.00 Params: 57776 on epoch: 30] -2023-09-11 13:51:38,945 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:51:38,949 - - -2023-09-11 13:51:38,949 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:51:42,626 - Epoch: [31][ 10/ 71] Overall Loss 0.279034 Objective Loss 0.279034 LR 0.000500 Time 0.367688 -2023-09-11 13:51:44,782 - Epoch: [31][ 20/ 71] Overall Loss 0.264054 Objective Loss 0.264054 LR 0.000500 Time 0.291612 -2023-09-11 13:51:47,206 - Epoch: [31][ 30/ 71] Overall Loss 0.255587 Objective Loss 0.255587 LR 
0.000500 Time 0.275182 -2023-09-11 13:51:49,202 - Epoch: [31][ 40/ 71] Overall Loss 0.250531 Objective Loss 0.250531 LR 0.000500 Time 0.256278 -2023-09-11 13:51:52,008 - Epoch: [31][ 50/ 71] Overall Loss 0.251510 Objective Loss 0.251510 LR 0.000500 Time 0.261148 -2023-09-11 13:51:53,961 - Epoch: [31][ 60/ 71] Overall Loss 0.252924 Objective Loss 0.252924 LR 0.000500 Time 0.250160 -2023-09-11 13:51:56,175 - Epoch: [31][ 70/ 71] Overall Loss 0.252713 Objective Loss 0.252713 Top1 88.671875 LR 0.000500 Time 0.246057 -2023-09-11 13:51:56,240 - Epoch: [31][ 71/ 71] Overall Loss 0.254634 Objective Loss 0.254634 Top1 87.797619 LR 0.000500 Time 0.243504 -2023-09-11 13:51:56,343 - --- validate (epoch=31)----------- -2023-09-11 13:51:56,343 - 2000 samples (256 per mini-batch) -2023-09-11 13:51:58,989 - Epoch: [31][ 8/ 8] Loss 0.273137 Top1 88.750000 -2023-09-11 13:51:59,077 - ==> Top1: 88.750 Loss: 0.273 - -2023-09-11 13:51:59,077 - ==> Confusion: -[[849 136] - [ 89 926]] - -2023-09-11 13:51:59,094 - ==> Best [Top1: 88.750 Sparsity:0.00 Params: 57776 on epoch: 31] -2023-09-11 13:51:59,094 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:51:59,099 - - -2023-09-11 13:51:59,099 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:52:02,669 - Epoch: [32][ 10/ 71] Overall Loss 0.231534 Objective Loss 0.231534 LR 0.000500 Time 0.356925 -2023-09-11 13:52:04,638 - Epoch: [32][ 20/ 71] Overall Loss 0.234719 Objective Loss 0.234719 LR 0.000500 Time 0.276899 -2023-09-11 13:52:07,512 - Epoch: [32][ 30/ 71] Overall Loss 0.232148 Objective Loss 0.232148 LR 0.000500 Time 0.280413 -2023-09-11 13:52:10,032 - Epoch: [32][ 40/ 71] Overall Loss 0.232311 Objective Loss 0.232311 LR 0.000500 Time 0.273288 -2023-09-11 13:52:12,556 - Epoch: [32][ 50/ 71] Overall Loss 0.237155 Objective Loss 0.237155 LR 0.000500 Time 0.269103 -2023-09-11 13:52:15,289 - Epoch: [32][ 60/ 71] Overall Loss 0.236672 Objective Loss 0.236672 LR 0.000500 Time 0.269793 
-2023-09-11 13:52:17,110 - Epoch: [32][ 70/ 71] Overall Loss 0.241288 Objective Loss 0.241288 Top1 87.500000 LR 0.000500 Time 0.257271
-2023-09-11 13:52:17,183 - Epoch: [32][ 71/ 71] Overall Loss 0.242420 Objective Loss 0.242420 Top1 87.500000 LR 0.000500 Time 0.254666
-2023-09-11 13:52:17,278 - --- validate (epoch=32)-----------
-2023-09-11 13:52:17,279 - 2000 samples (256 per mini-batch)
-2023-09-11 13:52:20,289 - Epoch: [32][ 8/ 8] Loss 0.254586 Top1 89.000000
-2023-09-11 13:52:20,382 - ==> Top1: 89.000 Loss: 0.255
-
-2023-09-11 13:52:20,382 - ==> Confusion:
-[[852 133]
- [ 87 928]]
+2025-05-20 16:40:37,417 - ==> Best [Top1: 80.150 Params: 57776 on epoch: 6]
+2025-05-20 16:40:37,417 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:40:37,424 - 
+
+2025-05-20 16:40:37,424 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:40:43,289 - Epoch: [7][ 10/ 71] Overall Loss 0.436720 Objective Loss 0.436720 LR 0.001000 Time 0.586441
+2025-05-20 16:40:46,946 - Epoch: [7][ 20/ 71] Overall Loss 0.437891 Objective Loss 0.437891 LR 0.001000 Time 0.476030
+2025-05-20 16:40:51,823 - Epoch: [7][ 30/ 71] Overall Loss 0.432609 Objective Loss 0.432609 LR 0.001000 Time 0.479907
+2025-05-20 16:40:54,911 - Epoch: [7][ 40/ 71] Overall Loss 0.423440 Objective Loss 0.423440 LR 0.001000 Time 0.437141
+2025-05-20 16:40:58,853 - Epoch: [7][ 50/ 71] Overall Loss 0.434174 Objective Loss 0.434174 LR 0.001000 Time 0.428542
+2025-05-20 16:41:01,745 - Epoch: [7][ 60/ 71] Overall Loss 0.431793 Objective Loss 0.431793 LR 0.001000 Time 0.405310
+2025-05-20 16:41:05,417 - Epoch: [7][ 70/ 71] Overall Loss 0.431571 Objective Loss 0.431571 Top1 74.609375 LR 0.001000 Time 0.399858
+2025-05-20 16:41:05,516 - Epoch: [7][ 71/ 71] Overall Loss 0.430988 Objective Loss 0.430988 Top1 75.297619 LR 0.001000 Time 0.395619
+2025-05-20 16:41:05,546 - --- validate (epoch=7)-----------
+2025-05-20 16:41:05,546 - 2000 samples (256 per mini-batch)
+2025-05-20 16:41:09,012 - Epoch: [7][ 8/ 8] Loss 0.408698 Top1 80.500000
+2025-05-20 16:41:09,039 - ==> Top1: 80.500 Loss: 0.409
+
+2025-05-20 16:41:09,039 - ==> Confusion:
+[[776 209]
+ [181 834]]
+
+2025-05-20 16:41:09,051 - ==> Best [Top1: 80.500 Params: 57776 on epoch: 7]
+2025-05-20 16:41:09,051 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:41:09,059 - 
+
+2025-05-20 16:41:09,059 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:41:13,697 - Epoch: [8][ 10/ 71] Overall Loss 0.430849 Objective Loss 0.430849 LR 0.001000 Time 0.463690
+2025-05-20 16:41:17,158 - Epoch: [8][ 20/ 71] Overall Loss 0.420131 Objective Loss 0.420131 LR 0.001000 Time 0.404891
+2025-05-20 16:41:20,400 - Epoch: [8][ 30/ 71] Overall Loss 0.415092 Objective Loss 0.415092 LR 0.001000 Time 0.377997
+2025-05-20 16:41:24,573 - Epoch: [8][ 40/ 71] Overall Loss 0.411523 Objective Loss 0.411523 LR 0.001000 Time 0.387802
+2025-05-20 16:41:28,204 - Epoch: [8][ 50/ 71] Overall Loss 0.415387 Objective Loss 0.415387 LR 0.001000 Time 0.382865
+2025-05-20 16:41:32,230 - Epoch: [8][ 60/ 71] Overall Loss 0.412056 Objective Loss 0.412056 LR 0.001000 Time 0.386140
+2025-05-20 16:41:35,361 - Epoch: [8][ 70/ 71] Overall Loss 0.406389 Objective Loss 0.406389 Top1 87.500000 LR 0.001000 Time 0.375706
+2025-05-20 16:41:35,460 - Epoch: [8][ 71/ 71] Overall Loss 0.407000 Objective Loss 0.407000 Top1 84.523810 LR 0.001000 Time 0.371808
+2025-05-20 16:41:35,486 - --- validate (epoch=8)-----------
+2025-05-20 16:41:35,486 - 2000 samples (256 per mini-batch)
+2025-05-20 16:41:39,135 - Epoch: [8][ 8/ 8] Loss 0.416703 Top1 79.750000
+2025-05-20 16:41:39,167 - ==> Top1: 79.750 Loss: 0.417
+
+2025-05-20 16:41:39,167 - ==> Confusion:
+[[879 106]
+ [299 716]]
+
+2025-05-20 16:41:39,172 - ==> Best [Top1: 80.500 Params: 57776 on epoch: 7]
+2025-05-20 16:41:39,173 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:41:39,180 - 
+
+2025-05-20 16:41:39,180 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:41:44,052 - Epoch: [9][ 10/ 71] Overall Loss 0.428625 Objective Loss 0.428625 LR 0.001000 Time 0.487160
+2025-05-20 16:41:46,928 - Epoch: [9][ 20/ 71] Overall Loss 0.423812 Objective Loss 0.423812 LR 0.001000 Time 0.387363
+2025-05-20 16:41:51,043 - Epoch: [9][ 30/ 71] Overall Loss 0.418386 Objective Loss 0.418386 LR 0.001000 Time 0.395412
+2025-05-20 16:41:54,428 - Epoch: [9][ 40/ 71] Overall Loss 0.407149 Objective Loss 0.407149 LR 0.001000 Time 0.381180
+2025-05-20 16:41:58,686 - Epoch: [9][ 50/ 71] Overall Loss 0.400270 Objective Loss 0.400270 LR 0.001000 Time 0.390082
+2025-05-20 16:42:01,427 - Epoch: [9][ 60/ 71] Overall Loss 0.396224 Objective Loss 0.396224 LR 0.001000 Time 0.370746
+2025-05-20 16:42:04,962 - Epoch: [9][ 70/ 71] Overall Loss 0.393421 Objective Loss 0.393421 Top1 86.718750 LR 0.001000 Time 0.368285
+2025-05-20 16:42:05,055 - Epoch: [9][ 71/ 71] Overall Loss 0.393462 Objective Loss 0.393462 Top1 86.011905 LR 0.001000 Time 0.364400
+2025-05-20 16:42:05,082 - --- validate (epoch=9)-----------
+2025-05-20 16:42:05,082 - 2000 samples (256 per mini-batch)
+2025-05-20 16:42:08,673 - Epoch: [9][ 8/ 8] Loss 0.396068 Top1 82.050000
+2025-05-20 16:42:08,705 - ==> Top1: 82.050 Loss: 0.396
+
+2025-05-20 16:42:08,705 - ==> Confusion:
+[[765 220]
+ [139 876]]
+
+2025-05-20 16:42:08,720 - ==> Best [Top1: 82.050 Params: 57776 on epoch: 9]
+2025-05-20 16:42:08,720 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:42:08,727 - 
+
+2025-05-20 16:42:08,727 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:42:13,762 - Epoch: [10][ 10/ 71] Overall Loss 0.390753 Objective Loss 0.390753 LR 0.001000 Time 0.503418
+2025-05-20 16:42:17,368 - Epoch: [10][ 20/ 71] Overall Loss 0.390197 Objective Loss 0.390197 LR 0.001000 Time 0.431982
+2025-05-20 16:42:21,235 - Epoch: [10][ 30/ 71] Overall Loss 0.389795 Objective Loss 0.389795 LR 0.001000 Time 0.416872
+2025-05-20 16:42:25,724 - Epoch: [10][ 40/ 71] Overall Loss 0.390430 Objective Loss 0.390430 LR 0.001000 Time 0.424885
+2025-05-20 16:42:28,762 - Epoch: [10][ 50/ 71] Overall Loss 0.386919 Objective Loss 0.386919 LR 0.001000 Time 0.400650
+2025-05-20 16:42:32,500 - Epoch: [10][ 60/ 71] Overall Loss 0.388094 Objective Loss 0.388094 LR 0.001000 Time 0.396168
+2025-05-20 16:42:36,635 - Epoch: [10][ 70/ 71] Overall Loss 0.389001 Objective Loss 0.389001 Top1 84.765625 LR 0.001000 Time 0.398651
+2025-05-20 16:42:36,735 - Epoch: [10][ 71/ 71] Overall Loss 0.389536 Objective Loss 0.389536 Top1 83.035714 LR 0.001000 Time 0.394432
+2025-05-20 16:42:36,767 - --- validate (epoch=10)-----------
+2025-05-20 16:42:36,767 - 2000 samples (256 per mini-batch)
+2025-05-20 16:42:40,316 - Epoch: [10][ 8/ 8] Loss 0.378279 Top1 83.650000
+2025-05-20 16:42:40,345 - ==> Top1: 83.650 Loss: 0.378
+
+2025-05-20 16:42:40,346 - ==> Confusion:
+[[887 98]
+ [229 786]]
+
+2025-05-20 16:42:40,361 - ==> Best [Top1: 83.650 Params: 57776 on epoch: 10]
+2025-05-20 16:42:40,362 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:42:40,369 - 
+
+2025-05-20 16:42:40,369 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:42:46,618 - Epoch: [11][ 10/ 71] Overall Loss 0.404614 Objective Loss 0.404614 LR 0.001000 Time 0.624836
+2025-05-20 16:42:49,275 - Epoch: [11][ 20/ 71] Overall Loss 0.392510 Objective Loss 0.392510 LR 0.001000 Time 0.445242
+2025-05-20 16:42:53,862 - Epoch: [11][ 30/ 71] Overall Loss 0.386052 Objective Loss 0.386052 LR 0.001000 Time 0.449701
+2025-05-20 16:42:57,547 - Epoch: [11][ 40/ 71] Overall Loss 0.381434 Objective Loss 0.381434 LR 0.001000 Time 0.429401
+2025-05-20 16:43:01,594 - Epoch: [11][ 50/ 71] Overall Loss 0.376542 Objective Loss 0.376542 LR 0.001000 Time 0.424458
+2025-05-20 16:43:04,490 - Epoch: [11][ 60/ 71] Overall Loss 0.376544 Objective Loss 0.376544 LR 0.001000 Time 0.401976
+2025-05-20 16:43:08,226 - Epoch: [11][ 70/ 71] Overall Loss 0.375020 Objective Loss 0.375020 Top1 85.156250 LR 0.001000 Time 0.397919
+2025-05-20 16:43:08,314 - Epoch: [11][ 71/ 71] Overall Loss 0.374549 Objective Loss 0.374549 Top1 85.714286 LR 0.001000 Time 0.393547
+2025-05-20 16:43:08,344 - --- validate (epoch=11)-----------
+2025-05-20 16:43:08,344 - 2000 samples (256 per mini-batch)
+2025-05-20 16:43:11,713 - Epoch: [11][ 8/ 8] Loss 0.355396 Top1 83.050000
+2025-05-20 16:43:11,743 - ==> Top1: 83.050 Loss: 0.355
+
+2025-05-20 16:43:11,743 - ==> Confusion:
+[[865 120]
+ [219 796]]
+
+2025-05-20 16:43:11,754 - ==> Best [Top1: 83.650 Params: 57776 on epoch: 10]
+2025-05-20 16:43:11,754 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:43:11,762 - 
+
+2025-05-20 16:43:11,762 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:43:16,981 - Epoch: [12][ 10/ 71] Overall Loss 0.341067 Objective Loss 0.341067 LR 0.001000 Time 0.521878
+2025-05-20 16:43:20,273 - Epoch: [12][ 20/ 71] Overall Loss 0.343996 Objective Loss 0.343996 LR 0.001000 Time 0.425516
+2025-05-20 16:43:24,733 - Epoch: [12][ 30/ 71] Overall Loss 0.347052 Objective Loss 0.347052 LR 0.001000 Time 0.432339
+2025-05-20 16:43:28,001 - Epoch: [12][ 40/ 71] Overall Loss 0.369347 Objective Loss 0.369347 LR 0.001000 Time 0.405959
+2025-05-20 16:43:32,544 - Epoch: [12][ 50/ 71] Overall Loss 0.374472 Objective Loss 0.374472 LR 0.001000 Time 0.415617
+2025-05-20 16:43:35,370 - Epoch: [12][ 60/ 71] Overall Loss 0.375054 Objective Loss 0.375054 LR 0.001000 Time 0.393438
+2025-05-20 16:43:39,243 - Epoch: [12][ 70/ 71] Overall Loss 0.371786 Objective Loss 0.371786 Top1 85.546875 LR 0.001000 Time 0.392557
+2025-05-20 16:43:39,329 - Epoch: [12][ 71/ 71] Overall Loss 0.372129 Objective Loss 0.372129 Top1 84.821429 LR 0.001000 Time 0.388234
+2025-05-20 16:43:39,356 - --- validate (epoch=12)-----------
+2025-05-20 16:43:39,356 - 2000 samples (256 per mini-batch)
+2025-05-20 16:43:42,540 - Epoch: [12][ 8/ 8] Loss 0.343981 Top1 85.650000
+2025-05-20 16:43:42,570 - ==> Top1: 85.650 Loss: 0.344
+
+2025-05-20 16:43:42,570 - ==> Confusion:
+[[824 161]
+ [126 889]]
+
+2025-05-20 16:43:42,582 - ==> Best [Top1: 85.650 Params: 57776 on epoch: 12]
+2025-05-20 16:43:42,582 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:43:42,589 - 
+
+2025-05-20 16:43:42,590 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:43:48,137 - Epoch: [13][ 10/ 71] Overall Loss 0.341636 Objective Loss 0.341636 LR 0.001000 Time 0.554722
+2025-05-20 16:43:50,993 - Epoch: [13][ 20/ 71] Overall Loss 0.334073 Objective Loss 0.334073 LR 0.001000 Time 0.420106
+2025-05-20 16:43:55,069 - Epoch: [13][ 30/ 71] Overall Loss 0.334257 Objective Loss 0.334257 LR 0.001000 Time 0.415942
+2025-05-20 16:43:58,752 - Epoch: [13][ 40/ 71] Overall Loss 0.330426 Objective Loss 0.330426 LR 0.001000 Time 0.404012
+2025-05-20 16:44:02,809 - Epoch: [13][ 50/ 71] Overall Loss 0.334578 Objective Loss 0.334578 LR 0.001000 Time 0.404361
+2025-05-20 16:44:05,795 - Epoch: [13][ 60/ 71] Overall Loss 0.336050 Objective Loss 0.336050 LR 0.001000 Time 0.386724
+2025-05-20 16:44:09,296 - Epoch: [13][ 70/ 71] Overall Loss 0.344654 Objective Loss 0.344654 Top1 81.640625 LR 0.001000 Time 0.381481
+2025-05-20 16:44:09,379 - Epoch: [13][ 71/ 71] Overall Loss 0.344076 Objective Loss 0.344076 Top1 82.738095 LR 0.001000 Time 0.377288
+2025-05-20 16:44:09,404 - --- validate (epoch=13)-----------
+2025-05-20 16:44:09,404 - 2000 samples (256 per mini-batch)
+2025-05-20 16:44:12,665 - Epoch: [13][ 8/ 8] Loss 0.463868 Top1 78.200000
+2025-05-20 16:44:12,696 - ==> Top1: 78.200 Loss: 0.464
+
+2025-05-20 16:44:12,697 - ==> Confusion:
+[[944 41]
+ [395 620]]
+
+2025-05-20 16:44:12,713 - ==> Best [Top1: 85.650 Params: 57776 on epoch: 12]
+2025-05-20 16:44:12,713 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:44:12,721 - 
+
+2025-05-20 16:44:12,721 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:44:17,257 - Epoch: [14][ 10/ 71] Overall Loss 0.362495 Objective Loss 0.362495 LR 0.001000 Time 0.453536
+2025-05-20 16:44:21,110 - Epoch: [14][ 20/ 71] Overall Loss 0.358478 Objective Loss 0.358478 LR 0.001000 Time 0.419433
+2025-05-20 16:44:24,253 - Epoch: [14][ 30/ 71] Overall Loss 0.356144 Objective Loss 0.356144 LR 0.001000 Time 0.384362
+2025-05-20 16:44:29,260 - Epoch: [14][ 40/ 71] Overall Loss 0.347049 Objective Loss 0.347049 LR 0.001000 Time 0.413448
+2025-05-20 16:44:32,868 - Epoch: [14][ 50/ 71] Overall Loss 0.344425 Objective Loss 0.344425 LR 0.001000 Time 0.402916
+2025-05-20 16:44:37,213 - Epoch: [14][ 60/ 71] Overall Loss 0.337694 Objective Loss 0.337694 LR 0.001000 Time 0.408177
+2025-05-20 16:44:40,217 - Epoch: [14][ 70/ 71] Overall Loss 0.335706 Objective Loss 0.335706 Top1 90.234375 LR 0.001000 Time 0.392766
+2025-05-20 16:44:40,302 - Epoch: [14][ 71/ 71] Overall Loss 0.338157 Objective Loss 0.338157 Top1 88.988095 LR 0.001000 Time 0.388432
+2025-05-20 16:44:40,332 - --- validate (epoch=14)-----------
+2025-05-20 16:44:40,333 - 2000 samples (256 per mini-batch)
+2025-05-20 16:44:44,241 - Epoch: [14][ 8/ 8] Loss 0.324453 Top1 86.400000
+2025-05-20 16:44:44,270 - ==> Top1: 86.400 Loss: 0.324
+
+2025-05-20 16:44:44,271 - ==> Confusion:
+[[821 164]
+ [108 907]]
-2023-09-11 13:52:20,384 - ==> Best [Top1: 89.000 Sparsity:0.00 Params: 57776 on epoch: 32]
-2023-09-11 13:52:20,384 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 13:52:20,389 - 
-
-2023-09-11 13:52:20,389 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:52:23,506 - Epoch: [33][ 10/ 71] Overall Loss 0.245684 Objective Loss 0.245684 LR 0.000500 Time 0.311613
-2023-09-11 13:52:25,412 - Epoch: [33][ 20/ 71] Overall Loss 0.253854 Objective Loss 0.253854 LR 0.000500 Time 0.251118
-2023-09-11 13:52:27,918 - Epoch: [33][ 30/ 71] Overall Loss 0.248718 Objective Loss 0.248718 LR 0.000500 Time 0.250932
-2023-09-11 13:52:29,792 - Epoch: [33][ 40/ 71] Overall Loss 0.253200 Objective Loss 0.253200 LR 0.000500 Time 0.235037
-2023-09-11 13:52:32,455 - Epoch: [33][ 50/ 71] Overall Loss 0.248377 Objective Loss 0.248377 LR 0.000500 Time 0.241286
-2023-09-11 13:52:34,449 - Epoch: [33][ 60/ 71] Overall Loss 0.247736 Objective Loss 0.247736 LR 0.000500 Time 0.234286
-2023-09-11 13:52:36,756 - Epoch: [33][ 70/ 71] Overall Loss 0.245271 Objective Loss 0.245271 Top1 89.843750 LR 0.000500 Time 0.233769
-2023-09-11 13:52:36,828 - Epoch: [33][ 71/ 71] Overall Loss 0.244977 Objective Loss 0.244977 Top1 89.285714 LR 0.000500 Time 0.231494
-2023-09-11 13:52:36,919 - --- validate (epoch=33)-----------
-2023-09-11 13:52:36,919 - 2000 samples (256 per mini-batch)
-2023-09-11 13:52:39,177 - Epoch: [33][ 8/ 8] Loss 0.273379 Top1 87.750000
-2023-09-11 13:52:39,266 - ==> Top1: 87.750 Loss: 0.273
-
-2023-09-11 13:52:39,266 - ==> Confusion:
+2025-05-20 16:44:44,286 - ==> Best [Top1: 86.400 Params: 57776 on epoch: 14]
+2025-05-20 16:44:44,287 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:44:44,300 - 
+
+2025-05-20 16:44:44,301 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:44:50,205 - Epoch: [15][ 10/ 71] Overall Loss 0.330901 Objective Loss 0.330901 LR 0.001000 Time 0.590376
+2025-05-20 16:44:52,967 - Epoch: [15][ 20/ 71] Overall Loss 0.334724 Objective Loss 0.334724 LR 0.001000 Time 0.433291
+2025-05-20 16:44:56,761 - Epoch: [15][ 30/ 71] Overall Loss 0.332943 Objective Loss 0.332943 LR 0.001000 Time 0.415308
+2025-05-20 16:45:00,478 - Epoch: [15][ 40/ 71] Overall Loss 0.332584 Objective Loss 0.332584 LR 0.001000 Time 0.404390
+2025-05-20 16:45:04,389 - Epoch: [15][ 50/ 71] Overall Loss 0.330590 Objective Loss 0.330590 LR 0.001000 Time 0.401731
+2025-05-20 16:45:07,261 - Epoch: [15][ 60/ 71] Overall Loss 0.327669 Objective Loss 0.327669 LR 0.001000 Time 0.382629
+2025-05-20 16:45:11,017 - Epoch: [15][ 70/ 71] Overall Loss 0.325655 Objective Loss 0.325655 Top1 88.281250 LR 0.001000 Time 0.381620
+2025-05-20 16:45:11,105 - Epoch: [15][ 71/ 71] Overall Loss 0.325682 Objective Loss 0.325682 Top1 86.904762 LR 0.001000 Time 0.377494
+2025-05-20 16:45:11,139 - --- validate (epoch=15)-----------
+2025-05-20 16:45:11,139 - 2000 samples (256 per mini-batch)
+2025-05-20 16:45:15,113 - Epoch: [15][ 8/ 8] Loss 0.363653 Top1 84.150000
+2025-05-20 16:45:15,146 - ==> Top1: 84.150 Loss: 0.364
+
+2025-05-20 16:45:15,146 - ==> Confusion:
+[[908 77]
+ [240 775]]
+
+2025-05-20 16:45:15,162 - ==> Best [Top1: 86.400 Params: 57776 on epoch: 14]
+2025-05-20 16:45:15,162 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:45:15,169 - 
+
+2025-05-20 16:45:15,169 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:45:20,055 - Epoch: [16][ 10/ 71] Overall Loss 0.322733 Objective Loss 0.322733 LR 0.001000 Time 0.488570
+2025-05-20 16:45:24,117 - Epoch: [16][ 20/ 71] Overall Loss 0.319356 Objective Loss 0.319356 LR 0.001000 Time 0.447356
+2025-05-20 16:45:27,252 - Epoch: [16][ 30/ 71] Overall Loss 0.318676 Objective Loss 0.318676 LR 0.001000 Time 0.402708
+2025-05-20 16:45:31,120 - Epoch: [16][ 40/ 71] Overall Loss 0.323979 Objective Loss 0.323979 LR 0.001000 Time 0.398742
+2025-05-20 16:45:34,940 - Epoch: [16][ 50/ 71] Overall Loss 0.323265 Objective Loss 0.323265 LR 0.001000 Time 0.395390
+2025-05-20 16:45:38,868 - Epoch: [16][ 60/ 71] Overall Loss 0.323162 Objective Loss 0.323162 LR 0.001000 Time 0.394941
+2025-05-20 16:45:43,105 - Epoch: [16][ 70/ 71] Overall Loss 0.320138 Objective Loss 0.320138 Top1 89.062500 LR 0.001000 Time 0.399048
+2025-05-20 16:45:43,204 - Epoch: [16][ 71/ 71] Overall Loss 0.319891 Objective Loss 0.319891 Top1 88.988095 LR 0.001000 Time 0.394825
+2025-05-20 16:45:43,236 - --- validate (epoch=16)-----------
+2025-05-20 16:45:43,236 - 2000 samples (256 per mini-batch)
+2025-05-20 16:45:47,163 - Epoch: [16][ 8/ 8] Loss 0.329715 Top1 85.850000
+2025-05-20 16:45:47,193 - ==> Top1: 85.850 Loss: 0.330
+
+2025-05-20 16:45:47,193 - ==> Confusion:
 [[895 90]
- [155 860]]
+ [193 822]]
-2023-09-11 13:52:39,267 - ==> Best [Top1: 89.000 Sparsity:0.00 Params: 57776 on epoch: 32]
-2023-09-11 13:52:39,267 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 13:52:39,271 - 
-
-2023-09-11 13:52:39,272 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:52:42,336 - Epoch: [34][ 10/ 71] Overall Loss 0.236575 Objective Loss 0.236575 LR 0.000500 Time 0.306340
-2023-09-11 13:52:44,300 - Epoch: [34][ 20/ 71] Overall Loss 0.237157 Objective Loss 0.237157 LR 0.000500 Time 0.251385
-2023-09-11 13:52:46,795 - Epoch: [34][ 30/ 71] Overall Loss 0.240246 Objective Loss 0.240246 LR 0.000500 Time 0.250755
-2023-09-11 13:52:48,774 - Epoch: [34][ 40/ 71] Overall Loss 0.241606 Objective Loss 0.241606 LR 0.000500 Time 0.237522
-2023-09-11 13:52:51,917 - Epoch: [34][ 50/ 71] Overall Loss 0.239094 Objective Loss 0.239094 LR 0.000500 Time 0.252876
-2023-09-11 13:52:54,797 - Epoch: [34][ 60/ 71] Overall Loss 0.237985 Objective Loss 0.237985 LR 0.000500 Time 0.258725
-2023-09-11 13:52:56,585 - Epoch: [34][ 70/ 71] Overall Loss 0.238643 Objective Loss 0.238643 Top1 90.625000 LR 0.000500 Time 0.247298
-2023-09-11 13:52:56,666 - Epoch: [34][ 71/ 71] Overall Loss 0.239103 Objective Loss 0.239103 Top1 90.178571 LR 0.000500 Time 0.244956
-2023-09-11 13:52:56,766 - --- validate (epoch=34)-----------
-2023-09-11 13:52:56,767 - 2000 samples (256 per mini-batch)
-2023-09-11 13:52:59,038 - Epoch: [34][ 8/ 8] Loss 0.272697 Top1 88.200000
-2023-09-11 13:52:59,130 - ==> Top1: 88.200 Loss: 0.273
-
-2023-09-11 13:52:59,130 - ==> Confusion:
+2025-05-20 16:45:47,210 - ==> Best [Top1: 86.400 Params: 57776 on epoch: 14]
+2025-05-20 16:45:47,210 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:45:47,217 - 
+
+2025-05-20 16:45:47,217 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:45:51,958 - Epoch: [17][ 10/ 71] Overall Loss 0.303220 Objective Loss 0.303220 LR 0.001000 Time 0.473977
+2025-05-20 16:45:55,784 - Epoch: [17][ 20/ 71] Overall Loss 0.309487 Objective Loss 0.309487 LR 0.001000 Time 0.428282
+2025-05-20 16:45:59,576 - Epoch: [17][ 30/ 71] Overall Loss 0.303150 Objective Loss 0.303150 LR 0.001000 Time 0.411904
+2025-05-20 16:46:04,650 - Epoch: [17][ 40/ 71] Overall Loss 0.301813 Objective Loss 0.301813 LR 0.001000 Time 0.435764
+2025-05-20 16:46:07,607 - Epoch: [17][ 50/ 71] Overall Loss 0.301115 Objective Loss 0.301115 LR 0.001000 Time 0.407746
+2025-05-20 16:46:12,206 - Epoch: [17][ 60/ 71] Overall Loss 0.301616 Objective Loss 0.301616 LR 0.001000 Time 0.416442
+2025-05-20 16:46:14,980 - Epoch: [17][ 70/ 71] Overall Loss 0.299466 Objective Loss 0.299466 Top1 82.812500 LR 0.001000 Time 0.396578
+2025-05-20 16:46:15,068 - Epoch: [17][ 71/ 71] Overall Loss 0.299482 Objective Loss 0.299482 Top1 82.440476 LR 0.001000 Time 0.392224
+2025-05-20 16:46:15,096 - --- validate (epoch=17)-----------
+2025-05-20 16:46:15,096 - 2000 samples (256 per mini-batch)
+2025-05-20 16:46:18,890 - Epoch: [17][ 8/ 8] Loss 0.324542 Top1 86.450000
+2025-05-20 16:46:18,925 - ==> Top1: 86.450 Loss: 0.325
+
+2025-05-20 16:46:18,925 - ==> Confusion:
 [[900 85]
- [151 864]]
+ [186 829]]
-2023-09-11 13:52:59,147 - ==> Best [Top1: 89.000 Sparsity:0.00 Params: 57776 on epoch: 32]
-2023-09-11 13:52:59,147 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 13:52:59,149 - 
-
-2023-09-11 13:52:59,149 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:53:02,859 - Epoch: [35][ 10/ 71] Overall Loss 0.233743 Objective Loss 0.233743 LR 0.000500 Time 0.370936
-2023-09-11 13:53:04,968 - Epoch: [35][ 20/ 71] Overall Loss 0.229295 Objective Loss 0.229295 LR 0.000500 Time 0.290896
-2023-09-11 13:53:07,460 - Epoch: [35][ 30/ 71] Overall Loss 0.235792 Objective Loss 0.235792 LR 0.000500 Time 0.276964
-2023-09-11 13:53:09,372 - Epoch: [35][ 40/ 71] Overall Loss 0.235366 Objective Loss 0.235366 LR 0.000500 Time 0.255520
-2023-09-11 13:53:11,896 - Epoch: [35][ 50/ 71] Overall Loss 0.236015 Objective Loss 0.236015 LR 0.000500 Time 0.254886
-2023-09-11 13:53:13,905 - Epoch: [35][ 60/ 71] Overall Loss 0.236580 Objective Loss 0.236580 LR 0.000500 Time 0.245887
-2023-09-11 13:53:16,283 - Epoch: [35][ 70/ 71] Overall Loss 0.238078 Objective Loss 0.238078 Top1 92.187500 LR 0.000500 Time 0.244735
-2023-09-11 13:53:16,349 - Epoch: [35][ 71/ 71] Overall Loss 0.236958 Objective Loss 0.236958 Top1 92.857143 LR 0.000500 Time 0.242218
-2023-09-11 13:53:16,439 - --- validate (epoch=35)-----------
-2023-09-11 13:53:16,439 - 2000 samples (256 per mini-batch)
-2023-09-11 13:53:18,760 - Epoch: [35][ 8/ 8] Loss 0.260876 Top1 89.850000
-2023-09-11 13:53:18,853 - ==> Top1: 89.850 Loss: 0.261
-
-2023-09-11 13:53:18,853 - ==> Confusion:
-[[865 120]
- [ 83 932]]
+2025-05-20 16:46:18,939 - ==> Best [Top1: 86.450 Params: 57776 on epoch: 17]
+2025-05-20 16:46:18,939 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:46:18,947 - 
+
+2025-05-20 16:46:18,947 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:46:24,240 - Epoch: [18][ 10/ 71] Overall Loss 0.281188 Objective Loss 0.281188 LR 0.001000 Time 0.529293
+2025-05-20 16:46:28,759 - Epoch: [18][ 20/ 71] Overall Loss 0.287099 Objective Loss 0.287099 LR 0.001000 Time 0.490573
+2025-05-20 16:46:31,566 - Epoch: [18][ 30/ 71] Overall Loss 0.300741 Objective Loss 0.300741 LR 0.001000 Time 0.420583
+2025-05-20 16:46:35,835 - Epoch: [18][ 40/ 71] Overall Loss 0.297122 Objective Loss 0.297122 LR 0.001000 Time 0.422155
+2025-05-20 16:46:39,318 - Epoch: [18][ 50/ 71] Overall Loss 0.299470 Objective Loss 0.299470 LR 0.001000 Time 0.407387
+2025-05-20 16:46:42,905 - Epoch: [18][ 60/ 71] Overall Loss 0.297510 Objective Loss 0.297510 LR 0.001000 Time 0.399260
+2025-05-20 16:46:46,061 - Epoch: [18][ 70/ 71] Overall Loss 0.297254 Objective Loss 0.297254 Top1 83.984375 LR 0.001000 Time 0.387309
+2025-05-20 16:46:46,142 - Epoch: [18][ 71/ 71] Overall Loss 0.296825 Objective Loss 0.296825 Top1 84.523810 LR 0.001000 Time 0.382988
+2025-05-20 16:46:46,174 - --- validate (epoch=18)-----------
+2025-05-20 16:46:46,174 - 2000 samples (256 per mini-batch)
+2025-05-20 16:46:50,268 - Epoch: [18][ 8/ 8] Loss 0.305895 Top1 86.600000
+2025-05-20 16:46:50,296 - ==> Top1: 86.600 Loss: 0.306
+
+2025-05-20 16:46:50,296 - ==> Confusion:
+[[872 113]
+ [155 860]]
-2023-09-11 13:53:18,855 - ==> Best [Top1: 89.850 Sparsity:0.00 Params: 57776 on epoch: 35]
-2023-09-11 13:53:18,855 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 13:53:18,858 - 
-
-2023-09-11 13:53:18,858 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:53:23,107 - Epoch: [36][ 10/ 71] Overall Loss 0.248064 Objective Loss 0.248064 LR 0.000500 Time 0.424829
-2023-09-11 13:53:25,032 - Epoch: [36][ 20/ 71] Overall Loss 0.239090 Objective Loss 0.239090 LR 0.000500 Time 0.308663
-2023-09-11 13:53:27,531 - Epoch: [36][ 30/ 71] Overall Loss 0.237363 Objective Loss 0.237363 LR 0.000500 Time 0.289053
-2023-09-11 13:53:30,057 - Epoch: [36][ 40/ 71] Overall Loss 0.239426 Objective Loss 0.239426 LR 0.000500 Time 0.279942
-2023-09-11 13:53:32,687 - Epoch: [36][ 50/ 71] Overall Loss 0.238887 Objective Loss 0.238887 LR 0.000500 Time 0.276543
-2023-09-11 13:53:34,637 - Epoch: [36][ 60/ 71] Overall Loss 0.237315 Objective Loss 0.237315 LR 0.000500 Time 0.262947
-2023-09-11 13:53:37,091 - Epoch: [36][ 70/ 71] Overall Loss 0.236573 Objective Loss 0.236573 Top1 89.453125 LR 0.000500 Time 0.260444
-2023-09-11 13:53:37,168 - Epoch: [36][ 71/ 71] Overall Loss 0.238585 Objective Loss 0.238585 Top1 88.392857 LR 0.000500 Time 0.257845
-2023-09-11 13:53:37,260 - --- validate (epoch=36)-----------
-2023-09-11 13:53:37,260 - 2000 samples (256 per mini-batch)
-2023-09-11 13:53:40,143 - Epoch: [36][ 8/ 8] Loss 0.241535 Top1 89.700000
-2023-09-11 13:53:40,247 - ==> Top1: 89.700 Loss: 0.242
-
-2023-09-11 13:53:40,247 - ==> Confusion:
-[[874 111]
- [ 95 920]]
+2025-05-20 16:46:50,307 - ==> Best [Top1: 86.600 Params: 57776 on epoch: 18]
+2025-05-20 16:46:50,307 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:46:50,315 - 
+
+2025-05-20 16:46:50,315 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:46:55,161 - Epoch: [19][ 10/ 71] Overall Loss 0.287918 Objective Loss 0.287918 LR 0.001000 Time 0.484614
+2025-05-20 16:46:58,155 - Epoch: [19][ 20/ 71] Overall Loss 0.305173 Objective Loss 0.305173 LR 0.001000 Time 0.391970
+2025-05-20 16:47:02,454 - Epoch: [19][ 30/ 71] Overall Loss 0.311999 Objective Loss 0.311999 LR 0.001000 Time 0.404599
+2025-05-20 16:47:06,260 - Epoch: [19][ 40/ 71] Overall Loss 0.319623 Objective Loss 0.319623 LR 0.001000 Time 0.398597
+2025-05-20 16:47:10,039 - Epoch: [19][ 50/ 71] Overall Loss 0.319434 Objective Loss 0.319434 LR 0.001000 Time 0.394449
+2025-05-20 16:47:13,368 - Epoch: [19][ 60/ 71] Overall Loss 0.315239 Objective Loss 0.315239 LR 0.001000 Time 0.384190
+2025-05-20 16:47:16,988 - Epoch: [19][ 70/ 71] Overall Loss 0.309642 Objective Loss 0.309642 Top1 86.718750 LR 0.001000 Time 0.381013
+2025-05-20 16:47:17,088 - Epoch: [19][ 71/ 71] Overall Loss 0.310075 Objective Loss 0.310075 Top1 86.309524 LR 0.001000 Time 0.377046
+2025-05-20 16:47:17,120 - --- validate (epoch=19)-----------
+2025-05-20 16:47:17,120 - 2000 samples (256 per mini-batch)
+2025-05-20 16:47:20,596 - Epoch: [19][ 8/ 8] Loss 0.319295 Top1 87.200000
+2025-05-20 16:47:20,631 - ==> Top1: 87.200 Loss: 0.319
+
+2025-05-20 16:47:20,631 - ==> Confusion:
+[[895 90]
+ [166 849]]
+
+2025-05-20 16:47:20,646 - ==> Best [Top1: 87.200 Params: 57776 on epoch: 19]
+2025-05-20 16:47:20,647 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:47:20,654 - 
+
+2025-05-20 16:47:20,654 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:47:25,633 - Epoch: [20][ 10/ 71] Overall Loss 0.284938 Objective Loss 0.284938 LR 0.000500 Time 0.497870
+2025-05-20 16:47:28,512 - Epoch: [20][ 20/ 71] Overall Loss 0.271186 Objective Loss 0.271186 LR 0.000500 Time 0.392844
+2025-05-20 16:47:33,016 - Epoch: [20][ 30/ 71] Overall Loss 0.269433 Objective Loss 0.269433 LR 0.000500 Time 0.412025
+2025-05-20 16:47:36,278 - Epoch: [20][ 40/ 71] Overall Loss 0.270080 Objective Loss 0.270080 LR 0.000500 Time 0.390559
+2025-05-20 16:47:40,966 - Epoch: [20][ 50/ 71] Overall Loss 0.274904 Objective Loss 0.274904 LR 0.000500 Time 0.406190
+2025-05-20 16:47:43,874 - Epoch: [20][ 60/ 71] Overall Loss 0.273126 Objective Loss 0.273126 LR 0.000500 Time 0.386968
+2025-05-20 16:47:47,833 - Epoch: [20][ 70/ 71] Overall Loss 0.271244 Objective Loss 0.271244 Top1 87.890625 LR 0.000500 Time 0.388232
+2025-05-20 16:47:47,932 - Epoch: [20][ 71/ 71] Overall Loss 0.270683 Objective Loss 0.270683 Top1 88.095238 LR 0.000500 Time 0.384162
+2025-05-20 16:47:47,963 - --- validate (epoch=20)-----------
+2025-05-20 16:47:47,963 - 2000 samples (256 per mini-batch)
+2025-05-20 16:47:51,783 - Epoch: [20][ 8/ 8] Loss 0.280823 Top1 87.950000
+2025-05-20 16:47:51,814 - ==> Top1: 87.950 Loss: 0.281
+
+2025-05-20 16:47:51,814 - ==> Confusion:
+[[835 150]
+ [ 91 924]]
-2023-09-11 13:53:40,265 - 
-
-2023-09-11 13:53:40,265 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:53:44,555 - Epoch: [37][ 10/ 71] Overall Loss 0.222455 Objective Loss 0.222455 LR 0.000500 Time 0.428889
-2023-09-11 13:53:47,207 - Epoch: [37][ 20/ 71] Overall Loss 0.221432 Objective Loss 0.221432 LR 0.000500 Time 0.347014
-2023-09-11 13:53:50,533 - Epoch: [37][ 30/ 71] Overall Loss 0.221274 Objective Loss 0.221274 LR 0.000500 Time 0.342227
-2023-09-11 13:53:52,482 - Epoch: [37][ 40/ 71] Overall Loss 0.224032 Objective Loss 0.224032 LR 0.000500 Time 0.305385
-2023-09-11 13:53:55,172 - Epoch: [37][ 50/ 71] Overall Loss 0.226442 Objective Loss 0.226442 LR 0.000500 Time 0.298090
-2023-09-11 13:53:57,270 - Epoch: [37][ 60/ 71] Overall Loss 0.223323 Objective Loss 0.223323 LR 0.000500 Time 0.283372
-2023-09-11 13:53:59,722 - Epoch: [37][ 70/ 71] Overall Loss 0.224561 Objective Loss 0.224561 Top1 91.406250 LR 0.000500 Time 0.277910
-2023-09-11 13:53:59,797 - Epoch: [37][ 71/ 71] Overall Loss 0.223947 Objective Loss 0.223947 Top1 91.666667 LR 0.000500 Time 0.275058
-2023-09-11 13:53:59,892 - --- validate (epoch=37)-----------
-2023-09-11 13:53:59,893 - 2000 samples (256 per mini-batch)
-2023-09-11 13:54:03,024 - Epoch: [37][ 8/ 8] Loss 0.254704 Top1 89.150000
-2023-09-11 13:54:03,138 - ==> Top1: 89.150 Loss: 0.255
-
-2023-09-11 13:54:03,139 - ==> Confusion:
-[[871 114]
+2025-05-20 16:47:51,831 - ==> Best [Top1: 87.950 Params: 57776 on epoch: 20]
+2025-05-20 16:47:51,831 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:47:51,839 - 
+
+2025-05-20 16:47:51,839 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:47:56,856 - Epoch: [21][ 10/ 71] Overall Loss 0.291332 Objective Loss 0.291332 LR 0.000500 Time 0.501687
+2025-05-20 16:48:00,285 - Epoch: [21][ 20/ 71] Overall Loss 0.280664 Objective Loss 0.280664 LR 0.000500 Time 0.422243
+2025-05-20 16:48:03,330 - Epoch: [21][ 30/ 71] Overall Loss 0.280739 Objective Loss 0.280739 LR 0.000500 Time 0.383012
+2025-05-20 16:48:08,062 - Epoch: [21][ 40/ 71] Overall Loss 0.273395 Objective Loss 0.273395 LR 0.000500 Time 0.405543
+2025-05-20 16:48:11,237 - Epoch: [21][ 50/ 71] Overall Loss 0.272326 Objective Loss 0.272326 LR 0.000500 Time 0.387919
+2025-05-20 16:48:14,997 - Epoch: [21][ 60/ 71] Overall Loss 0.270407 Objective Loss 0.270407 LR 0.000500 Time 0.385935
+2025-05-20 16:48:18,909 - Epoch: [21][ 70/ 71] Overall Loss 0.269710 Objective Loss 0.269710 Top1 89.843750 LR 0.000500 Time 0.386685
+2025-05-20 16:48:18,992 - Epoch: [21][ 71/ 71] Overall Loss 0.270095 Objective Loss 0.270095 Top1 88.988095 LR 0.000500 Time 0.382407
+2025-05-20 16:48:19,024 - --- validate (epoch=21)-----------
+2025-05-20 16:48:19,024 - 2000 samples (256 per mini-batch)
+2025-05-20 16:48:22,707 - Epoch: [21][ 8/ 8] Loss 0.270353 Top1 88.000000
+2025-05-20 16:48:22,739 - ==> Top1: 88.000 Loss: 0.270
+
+2025-05-20 16:48:22,739 - ==> Confusion:
+[[848 137]
 [103 912]]
-2023-09-11 13:54:03,153 - ==> Best [Top1: 89.850 Sparsity:0.00 Params: 57776 on epoch: 35]
-2023-09-11 13:54:03,153 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 13:54:03,156 - 
-
-2023-09-11 13:54:03,156 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:54:07,215 - Epoch: [38][ 10/ 71] Overall Loss 0.238176 Objective Loss 0.238176 LR 0.000500 Time 0.405859
-2023-09-11 13:54:09,194 - Epoch: [38][ 20/ 71] Overall Loss 0.243503 Objective Loss 0.243503 LR 0.000500 Time 0.301846
-2023-09-11 13:54:11,801 - Epoch: [38][ 30/ 71] Overall Loss 0.235106 Objective Loss 0.235106 LR 0.000500 Time 0.288123
-2023-09-11 13:54:13,797 - Epoch: [38][ 40/ 71] Overall Loss 0.232520 Objective Loss 0.232520 LR 0.000500 Time 0.265985
-2023-09-11 13:54:17,049 - Epoch: [38][ 50/ 71] Overall Loss 0.227965 Objective Loss 0.227965 LR 0.000500 Time 0.277808
-2023-09-11 13:54:19,082 - Epoch: [38][ 60/ 71] Overall Loss 0.226445 Objective Loss 0.226445 LR 0.000500 Time 0.265399
-2023-09-11 13:54:21,425 - Epoch: [38][ 70/ 71] Overall Loss 0.228459 Objective Loss 0.228459 Top1 91.015625 LR 0.000500 Time 0.260950
-2023-09-11 13:54:21,557 - Epoch: [38][ 71/ 71] Overall Loss 0.229903 Objective Loss 0.229903 Top1 88.690476 LR 0.000500 Time 0.259133
-2023-09-11 13:54:21,650 - --- validate (epoch=38)-----------
-2023-09-11 13:54:21,651 - 2000 samples (256 per mini-batch)
-2023-09-11 13:54:24,794 - Epoch: [38][ 8/ 8] Loss 0.277877 Top1 87.350000
-2023-09-11 13:54:24,890 - ==> Top1: 87.350 Loss: 0.278
-
-2023-09-11 13:54:24,890 - ==> Confusion:
-[[925 60]
- [193 822]]
-
-2023-09-11 13:54:24,906 - ==> Best [Top1: 89.850 Sparsity:0.00 Params: 57776 on epoch: 35]
-2023-09-11 13:54:24,906 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 13:54:24,910 - 
-
-2023-09-11 13:54:24,911 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 13:54:29,274 - Epoch: [39][ 10/ 71] Overall Loss 0.220278 Objective Loss 0.220278 LR 0.000500 Time 0.436293
-2023-09-11 13:54:31,370 - Epoch: [39][ 20/ 71] Overall Loss 0.221846 Objective Loss 0.221846 LR 0.000500 Time 0.322915
-2023-09-11 13:54:33,936 - Epoch: [39][ 30/ 71] Overall Loss 0.224514 Objective Loss 0.224514 LR 0.000500 Time 0.300794
-2023-09-11 13:54:36,525 - Epoch: [39][ 40/ 71] Overall Loss 0.226222 Objective Loss 0.226222 LR 0.000500 Time 0.290324
-2023-09-11 13:54:39,639 - Epoch: [39][ 50/ 71] Overall Loss 0.226752 Objective Loss 0.226752 LR 0.000500 Time 0.294528
-2023-09-11 13:54:42,206 - Epoch: [39][ 60/ 71] Overall Loss 0.225107 Objective Loss 0.225107 LR 0.000500 Time 0.288227
-2023-09-11 13:54:44,115 - Epoch: [39][ 70/ 71] Overall Loss 0.225106 Objective Loss 0.225106 Top1 92.187500 LR 0.000500 Time 0.274316
-2023-09-11 13:54:44,192 - Epoch: [39][ 71/ 71] Overall Loss 0.225915 Objective Loss 0.225915 Top1 91.369048 LR 0.000500 Time 0.271531
-2023-09-11 13:54:44,281 - --- validate (epoch=39)-----------
-2023-09-11 13:54:44,282 - 2000 samples (256 per mini-batch)
-2023-09-11 13:54:47,392 - Epoch: [39][ 8/ 8] Loss 0.242747 Top1 89.050000
-2023-09-11 13:54:47,489 - ==> Top1: 89.050 Loss: 0.243
-
-2023-09-11 13:54:47,489 - ==> Confusion:
-[[852 133]
- [ 86 929]]
+2025-05-20 16:48:22,755 - ==> Best [Top1: 88.000 Params: 57776 on epoch: 21]
+2025-05-20 16:48:22,755 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar
+2025-05-20 16:48:22,763 - 
+
+2025-05-20 16:48:22,763 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 16:48:27,351 - Epoch: [22][ 10/ 71] Overall Loss 0.251131 Objective Loss 0.251131 LR 0.000500 Time 0.458718
+2025-05-20 16:48:32,064 - Epoch: [22][ 20/ 71] Overall Loss 0.261416 Objective Loss 0.261416 LR 0.000500 Time 0.464981
+2025-05-20 16:48:34,997 - Epoch: [22][ 30/ 71] Overall Loss 0.262058 Objective Loss 0.262058 LR 0.000500 Time 0.407749
+2025-05-20 16:48:39,146 - Epoch: [22][ 40/ 71] Overall Loss 0.261994 Objective Loss 0.261994 LR 0.000500 Time 0.409547
+2025-05-20 16:48:42,694 - Epoch: [22][ 50/ 71] Overall Loss 0.259376 Objective Loss 0.259376 LR 0.000500 Time 0.398574
+2025-05-20 16:48:46,082 - Epoch: [22][ 60/ 71] Overall Loss 0.260916 Objective Loss 0.260916 LR 0.000500 Time 0.388617
+2025-05-20 16:48:48,807 - Epoch: [22][ 70/ 71] Overall Loss 0.262959 Objective Loss 0.262959 Top1 91.406250 LR 0.000500 Time 0.372020
+2025-05-20 16:48:48,900 - Epoch: [22][ 71/ 71] Overall Loss 0.262472 Objective Loss 0.262472 Top1 91.369048 LR 0.000500 Time 0.368094
+2025-05-20 16:48:48,926 - --- validate (epoch=22)-----------
+2025-05-20 16:48:48,927 - 2000 samples (256 per mini-batch)
+2025-05-20 16:48:52,954 - Epoch: [22][ 8/ 8] Loss 0.276716 Top1 88.850000
+2025-05-20 16:48:52,990 - ==> Top1: 88.850 Loss: 0.277
+
+2025-05-20 16:48:52,990 - ==> Confusion:
+[[856 129]
+ [ 94 921]]
-2023-09-11 13:54:47,495 - ==> Best [Top1: 89.850 Sparsity:0.00 Params: 57776 on epoch: 35]
-2023-09-11 13:54:47,496 - Saving
checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:54:47,500 - - -2023-09-11 13:54:47,500 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:54:51,821 - Epoch: [40][ 10/ 71] Overall Loss 0.210109 Objective Loss 0.210109 LR 0.000500 Time 0.432029 -2023-09-11 13:54:53,817 - Epoch: [40][ 20/ 71] Overall Loss 0.231170 Objective Loss 0.231170 LR 0.000500 Time 0.315805 -2023-09-11 13:54:56,357 - Epoch: [40][ 30/ 71] Overall Loss 0.230750 Objective Loss 0.230750 LR 0.000500 Time 0.295181 -2023-09-11 13:54:58,337 - Epoch: [40][ 40/ 71] Overall Loss 0.230324 Objective Loss 0.230324 LR 0.000500 Time 0.270890 -2023-09-11 13:55:01,197 - Epoch: [40][ 50/ 71] Overall Loss 0.231605 Objective Loss 0.231605 LR 0.000500 Time 0.273905 -2023-09-11 13:55:03,393 - Epoch: [40][ 60/ 71] Overall Loss 0.227965 Objective Loss 0.227965 LR 0.000500 Time 0.264847 -2023-09-11 13:55:05,962 - Epoch: [40][ 70/ 71] Overall Loss 0.225636 Objective Loss 0.225636 Top1 93.750000 LR 0.000500 Time 0.263698 -2023-09-11 13:55:06,043 - Epoch: [40][ 71/ 71] Overall Loss 0.225788 Objective Loss 0.225788 Top1 92.559524 LR 0.000500 Time 0.261133 -2023-09-11 13:55:06,141 - --- validate (epoch=40)----------- -2023-09-11 13:55:06,141 - 2000 samples (256 per mini-batch) -2023-09-11 13:55:09,008 - Epoch: [40][ 8/ 8] Loss 0.245391 Top1 89.550000 -2023-09-11 13:55:09,102 - ==> Top1: 89.550 Loss: 0.245 - -2023-09-11 13:55:09,102 - ==> Confusion: -[[893 92] - [117 898]] +2025-05-20 16:48:52,999 - ==> Best [Top1: 88.850 Params: 57776 on epoch: 22] +2025-05-20 16:48:52,999 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:48:53,006 - + +2025-05-20 16:48:53,007 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:48:59,142 - Epoch: [23][ 10/ 71] Overall Loss 0.262698 Objective Loss 0.262698 LR 0.000500 Time 0.613479 +2025-05-20 16:49:02,354 - Epoch: [23][ 20/ 71] Overall Loss 0.253918 Objective Loss 
0.253918 LR 0.000500 Time 0.467339 +2025-05-20 16:49:05,962 - Epoch: [23][ 30/ 71] Overall Loss 0.257642 Objective Loss 0.257642 LR 0.000500 Time 0.431818 +2025-05-20 16:49:09,464 - Epoch: [23][ 40/ 71] Overall Loss 0.253806 Objective Loss 0.253806 LR 0.000500 Time 0.411406 +2025-05-20 16:49:13,290 - Epoch: [23][ 50/ 71] Overall Loss 0.256250 Objective Loss 0.256250 LR 0.000500 Time 0.405627 +2025-05-20 16:49:16,917 - Epoch: [23][ 60/ 71] Overall Loss 0.255315 Objective Loss 0.255315 LR 0.000500 Time 0.398476 +2025-05-20 16:49:20,730 - Epoch: [23][ 70/ 71] Overall Loss 0.257939 Objective Loss 0.257939 Top1 89.843750 LR 0.000500 Time 0.396017 +2025-05-20 16:49:20,830 - Epoch: [23][ 71/ 71] Overall Loss 0.256958 Objective Loss 0.256958 Top1 90.476190 LR 0.000500 Time 0.391840 +2025-05-20 16:49:20,865 - --- validate (epoch=23)----------- +2025-05-20 16:49:20,865 - 2000 samples (256 per mini-batch) +2025-05-20 16:49:24,361 - Epoch: [23][ 8/ 8] Loss 0.299542 Top1 87.650000 +2025-05-20 16:49:24,395 - ==> Top1: 87.650 Loss: 0.300 + +2025-05-20 16:49:24,395 - ==> Confusion: +[[913 72] + [175 840]] + +2025-05-20 16:49:24,411 - ==> Best [Top1: 88.850 Params: 57776 on epoch: 22] +2025-05-20 16:49:24,411 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:49:24,419 - + +2025-05-20 16:49:24,419 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:49:29,762 - Epoch: [24][ 10/ 71] Overall Loss 0.255781 Objective Loss 0.255781 LR 0.000500 Time 0.534253 +2025-05-20 16:49:32,812 - Epoch: [24][ 20/ 71] Overall Loss 0.257845 Objective Loss 0.257845 LR 0.000500 Time 0.419618 +2025-05-20 16:49:36,603 - Epoch: [24][ 30/ 71] Overall Loss 0.257686 Objective Loss 0.257686 LR 0.000500 Time 0.406103 +2025-05-20 16:49:39,780 - Epoch: [24][ 40/ 71] Overall Loss 0.263040 Objective Loss 0.263040 LR 0.000500 Time 0.383991 +2025-05-20 16:49:44,129 - Epoch: [24][ 50/ 71] Overall Loss 0.262556 Objective Loss 0.262556 LR 
0.000500 Time 0.394161 +2025-05-20 16:49:48,045 - Epoch: [24][ 60/ 71] Overall Loss 0.259942 Objective Loss 0.259942 LR 0.000500 Time 0.393732 +2025-05-20 16:49:51,060 - Epoch: [24][ 70/ 71] Overall Loss 0.258168 Objective Loss 0.258168 Top1 90.234375 LR 0.000500 Time 0.380553 +2025-05-20 16:49:51,157 - Epoch: [24][ 71/ 71] Overall Loss 0.258319 Objective Loss 0.258319 Top1 89.880952 LR 0.000500 Time 0.376558 +2025-05-20 16:49:51,184 - --- validate (epoch=24)----------- +2025-05-20 16:49:51,184 - 2000 samples (256 per mini-batch) +2025-05-20 16:49:55,070 - Epoch: [24][ 8/ 8] Loss 0.284673 Top1 87.950000 +2025-05-20 16:49:55,097 - ==> Top1: 87.950 Loss: 0.285 + +2025-05-20 16:49:55,097 - ==> Confusion: +[[870 115] + [126 889]] + +2025-05-20 16:49:55,108 - ==> Best [Top1: 88.850 Params: 57776 on epoch: 22] +2025-05-20 16:49:55,108 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:49:55,115 - + +2025-05-20 16:49:55,115 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:50:00,422 - Epoch: [25][ 10/ 71] Overall Loss 0.261305 Objective Loss 0.261305 LR 0.000500 Time 0.530569 +2025-05-20 16:50:03,816 - Epoch: [25][ 20/ 71] Overall Loss 0.248798 Objective Loss 0.248798 LR 0.000500 Time 0.435010 +2025-05-20 16:50:07,808 - Epoch: [25][ 30/ 71] Overall Loss 0.256779 Objective Loss 0.256779 LR 0.000500 Time 0.423047 +2025-05-20 16:50:10,857 - Epoch: [25][ 40/ 71] Overall Loss 0.250452 Objective Loss 0.250452 LR 0.000500 Time 0.393497 +2025-05-20 16:50:14,682 - Epoch: [25][ 50/ 71] Overall Loss 0.253744 Objective Loss 0.253744 LR 0.000500 Time 0.391294 +2025-05-20 16:50:17,958 - Epoch: [25][ 60/ 71] Overall Loss 0.254479 Objective Loss 0.254479 LR 0.000500 Time 0.380684 +2025-05-20 16:50:21,468 - Epoch: [25][ 70/ 71] Overall Loss 0.255346 Objective Loss 0.255346 Top1 87.500000 LR 0.000500 Time 0.376428 +2025-05-20 16:50:21,549 - Epoch: [25][ 71/ 71] Overall Loss 0.256261 Objective Loss 0.256261 
Top1 87.202381 LR 0.000500 Time 0.372275 +2025-05-20 16:50:21,579 - --- validate (epoch=25)----------- +2025-05-20 16:50:21,579 - 2000 samples (256 per mini-batch) +2025-05-20 16:50:25,469 - Epoch: [25][ 8/ 8] Loss 0.273164 Top1 89.450000 +2025-05-20 16:50:25,495 - ==> Top1: 89.450 Loss: 0.273 + +2025-05-20 16:50:25,495 - ==> Confusion: +[[853 132] + [ 79 936]] -2023-09-11 13:55:09,106 - ==> Best [Top1: 89.850 Sparsity:0.00 Params: 57776 on epoch: 35] -2023-09-11 13:55:09,106 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:55:09,108 - - -2023-09-11 13:55:09,108 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:55:13,466 - Epoch: [41][ 10/ 71] Overall Loss 0.244858 Objective Loss 0.244858 LR 0.000500 Time 0.435663 -2023-09-11 13:55:15,471 - Epoch: [41][ 20/ 71] Overall Loss 0.247248 Objective Loss 0.247248 LR 0.000500 Time 0.318068 -2023-09-11 13:55:17,976 - Epoch: [41][ 30/ 71] Overall Loss 0.239012 Objective Loss 0.239012 LR 0.000500 Time 0.295561 -2023-09-11 13:55:19,970 - Epoch: [41][ 40/ 71] Overall Loss 0.241582 Objective Loss 0.241582 LR 0.000500 Time 0.271490 -2023-09-11 13:55:22,661 - Epoch: [41][ 50/ 71] Overall Loss 0.234720 Objective Loss 0.234720 LR 0.000500 Time 0.271006 -2023-09-11 13:55:24,749 - Epoch: [41][ 60/ 71] Overall Loss 0.237387 Objective Loss 0.237387 LR 0.000500 Time 0.260642 -2023-09-11 13:55:26,960 - Epoch: [41][ 70/ 71] Overall Loss 0.234289 Objective Loss 0.234289 Top1 91.406250 LR 0.000500 Time 0.254981 -2023-09-11 13:55:27,058 - Epoch: [41][ 71/ 71] Overall Loss 0.233733 Objective Loss 0.233733 Top1 91.369048 LR 0.000500 Time 0.252779 -2023-09-11 13:55:27,157 - --- validate (epoch=41)----------- -2023-09-11 13:55:27,157 - 2000 samples (256 per mini-batch) -2023-09-11 13:55:30,091 - Epoch: [41][ 8/ 8] Loss 0.254824 Top1 88.750000 -2023-09-11 13:55:30,185 - ==> Top1: 88.750 Loss: 0.255 - -2023-09-11 13:55:30,185 - ==> Confusion: -[[907 78] - [147 868]] - -2023-09-11 13:55:30,201 - 
==> Best [Top1: 89.850 Sparsity:0.00 Params: 57776 on epoch: 35] -2023-09-11 13:55:30,201 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:55:30,203 - - -2023-09-11 13:55:30,204 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:55:35,362 - Epoch: [42][ 10/ 71] Overall Loss 0.225509 Objective Loss 0.225509 LR 0.000500 Time 0.515822 -2023-09-11 13:55:37,444 - Epoch: [42][ 20/ 71] Overall Loss 0.222333 Objective Loss 0.222333 LR 0.000500 Time 0.361967 -2023-09-11 13:55:40,379 - Epoch: [42][ 30/ 71] Overall Loss 0.219916 Objective Loss 0.219916 LR 0.000500 Time 0.339152 -2023-09-11 13:55:42,372 - Epoch: [42][ 40/ 71] Overall Loss 0.221381 Objective Loss 0.221381 LR 0.000500 Time 0.304178 -2023-09-11 13:55:44,945 - Epoch: [42][ 50/ 71] Overall Loss 0.222749 Objective Loss 0.222749 LR 0.000500 Time 0.294788 -2023-09-11 13:55:47,458 - Epoch: [42][ 60/ 71] Overall Loss 0.225481 Objective Loss 0.225481 LR 0.000500 Time 0.287535 -2023-09-11 13:55:49,348 - Epoch: [42][ 70/ 71] Overall Loss 0.222290 Objective Loss 0.222290 Top1 94.921875 LR 0.000500 Time 0.273457 -2023-09-11 13:55:49,464 - Epoch: [42][ 71/ 71] Overall Loss 0.222086 Objective Loss 0.222086 Top1 94.345238 LR 0.000500 Time 0.271236 -2023-09-11 13:55:49,560 - --- validate (epoch=42)----------- -2023-09-11 13:55:49,561 - 2000 samples (256 per mini-batch) -2023-09-11 13:55:52,173 - Epoch: [42][ 8/ 8] Loss 0.250978 Top1 89.600000 -2023-09-11 13:55:52,269 - ==> Top1: 89.600 Loss: 0.251 - -2023-09-11 13:55:52,270 - ==> Confusion: -[[842 143] - [ 65 950]] +2025-05-20 16:50:25,508 - ==> Best [Top1: 89.450 Params: 57776 on epoch: 25] +2025-05-20 16:50:25,508 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:50:25,516 - + +2025-05-20 16:50:25,516 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:50:31,139 - Epoch: [26][ 10/ 71] Overall Loss 0.247792 Objective Loss 0.247792 LR 0.000500 
Time 0.562229 +2025-05-20 16:50:34,937 - Epoch: [26][ 20/ 71] Overall Loss 0.247419 Objective Loss 0.247419 LR 0.000500 Time 0.471020 +2025-05-20 16:50:38,476 - Epoch: [26][ 30/ 71] Overall Loss 0.250598 Objective Loss 0.250598 LR 0.000500 Time 0.431955 +2025-05-20 16:50:42,835 - Epoch: [26][ 40/ 71] Overall Loss 0.250921 Objective Loss 0.250921 LR 0.000500 Time 0.432939 +2025-05-20 16:50:46,055 - Epoch: [26][ 50/ 71] Overall Loss 0.253677 Objective Loss 0.253677 LR 0.000500 Time 0.410743 +2025-05-20 16:50:49,995 - Epoch: [26][ 60/ 71] Overall Loss 0.256935 Objective Loss 0.256935 LR 0.000500 Time 0.407948 +2025-05-20 16:50:53,059 - Epoch: [26][ 70/ 71] Overall Loss 0.257035 Objective Loss 0.257035 Top1 94.140625 LR 0.000500 Time 0.393435 +2025-05-20 16:50:53,180 - Epoch: [26][ 71/ 71] Overall Loss 0.257445 Objective Loss 0.257445 Top1 92.261905 LR 0.000500 Time 0.389606 +2025-05-20 16:50:53,212 - --- validate (epoch=26)----------- +2025-05-20 16:50:53,212 - 2000 samples (256 per mini-batch) +2025-05-20 16:50:57,140 - Epoch: [26][ 8/ 8] Loss 0.269755 Top1 88.300000 +2025-05-20 16:50:57,170 - ==> Top1: 88.300 Loss: 0.270 + +2025-05-20 16:50:57,170 - ==> Confusion: +[[878 107] + [127 888]] -2023-09-11 13:55:52,271 - ==> Best [Top1: 89.850 Sparsity:0.00 Params: 57776 on epoch: 35] -2023-09-11 13:55:52,271 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:55:52,274 - - -2023-09-11 13:55:52,274 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:55:56,311 - Epoch: [43][ 10/ 71] Overall Loss 0.218615 Objective Loss 0.218615 LR 0.000500 Time 0.403702 -2023-09-11 13:55:58,331 - Epoch: [43][ 20/ 71] Overall Loss 0.222600 Objective Loss 0.222600 LR 0.000500 Time 0.302823 -2023-09-11 13:56:00,780 - Epoch: [43][ 30/ 71] Overall Loss 0.223415 Objective Loss 0.223415 LR 0.000500 Time 0.283517 -2023-09-11 13:56:02,923 - Epoch: [43][ 40/ 71] Overall Loss 0.227194 Objective Loss 0.227194 LR 0.000500 Time 0.266188 -2023-09-11 
13:56:05,424 - Epoch: [43][ 50/ 71] Overall Loss 0.224221 Objective Loss 0.224221 LR 0.000500 Time 0.262959 -2023-09-11 13:56:07,512 - Epoch: [43][ 60/ 71] Overall Loss 0.224461 Objective Loss 0.224461 LR 0.000500 Time 0.253926 -2023-09-11 13:56:09,723 - Epoch: [43][ 70/ 71] Overall Loss 0.223292 Objective Loss 0.223292 Top1 92.187500 LR 0.000500 Time 0.249231 -2023-09-11 13:56:09,837 - Epoch: [43][ 71/ 71] Overall Loss 0.222594 Objective Loss 0.222594 Top1 92.857143 LR 0.000500 Time 0.247331 -2023-09-11 13:56:09,938 - --- validate (epoch=43)----------- -2023-09-11 13:56:09,938 - 2000 samples (256 per mini-batch) -2023-09-11 13:56:12,937 - Epoch: [43][ 8/ 8] Loss 0.233740 Top1 90.250000 -2023-09-11 13:56:13,039 - ==> Top1: 90.250 Loss: 0.234 - -2023-09-11 13:56:13,039 - ==> Confusion: +2025-05-20 16:50:57,186 - ==> Best [Top1: 89.450 Params: 57776 on epoch: 25] +2025-05-20 16:50:57,187 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:50:57,194 - + +2025-05-20 16:50:57,194 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:51:02,847 - Epoch: [27][ 10/ 71] Overall Loss 0.260914 Objective Loss 0.260914 LR 0.000500 Time 0.565266 +2025-05-20 16:51:05,831 - Epoch: [27][ 20/ 71] Overall Loss 0.248292 Objective Loss 0.248292 LR 0.000500 Time 0.431796 +2025-05-20 16:51:09,994 - Epoch: [27][ 30/ 71] Overall Loss 0.250235 Objective Loss 0.250235 LR 0.000500 Time 0.426613 +2025-05-20 16:51:13,763 - Epoch: [27][ 40/ 71] Overall Loss 0.250770 Objective Loss 0.250770 LR 0.000500 Time 0.414184 +2025-05-20 16:51:17,462 - Epoch: [27][ 50/ 71] Overall Loss 0.247182 Objective Loss 0.247182 LR 0.000500 Time 0.405315 +2025-05-20 16:51:20,955 - Epoch: [27][ 60/ 71] Overall Loss 0.247089 Objective Loss 0.247089 LR 0.000500 Time 0.395983 +2025-05-20 16:51:24,463 - Epoch: [27][ 70/ 71] Overall Loss 0.245045 Objective Loss 0.245045 Top1 89.453125 LR 0.000500 Time 0.389526 +2025-05-20 16:51:24,548 - Epoch: 
[27][ 71/ 71] Overall Loss 0.244233 Objective Loss 0.244233 Top1 89.880952 LR 0.000500 Time 0.385237 +2025-05-20 16:51:24,575 - --- validate (epoch=27)----------- +2025-05-20 16:51:24,575 - 2000 samples (256 per mini-batch) +2025-05-20 16:51:28,043 - Epoch: [27][ 8/ 8] Loss 0.280267 Top1 88.400000 +2025-05-20 16:51:28,072 - ==> Top1: 88.400 Loss: 0.280 + +2025-05-20 16:51:28,072 - ==> Confusion: +[[930 55] + [177 838]] + +2025-05-20 16:51:28,089 - ==> Best [Top1: 89.450 Params: 57776 on epoch: 25] +2025-05-20 16:51:28,089 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:51:28,096 - + +2025-05-20 16:51:28,096 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:51:34,287 - Epoch: [28][ 10/ 71] Overall Loss 0.237647 Objective Loss 0.237647 LR 0.000500 Time 0.618956 +2025-05-20 16:51:36,953 - Epoch: [28][ 20/ 71] Overall Loss 0.244962 Objective Loss 0.244962 LR 0.000500 Time 0.442796 +2025-05-20 16:51:40,687 - Epoch: [28][ 30/ 71] Overall Loss 0.240278 Objective Loss 0.240278 LR 0.000500 Time 0.419651 +2025-05-20 16:51:44,657 - Epoch: [28][ 40/ 71] Overall Loss 0.235534 Objective Loss 0.235534 LR 0.000500 Time 0.413977 +2025-05-20 16:51:47,576 - Epoch: [28][ 50/ 71] Overall Loss 0.234459 Objective Loss 0.234459 LR 0.000500 Time 0.389547 +2025-05-20 16:51:51,195 - Epoch: [28][ 60/ 71] Overall Loss 0.243685 Objective Loss 0.243685 LR 0.000500 Time 0.384938 +2025-05-20 16:51:54,065 - Epoch: [28][ 70/ 71] Overall Loss 0.247373 Objective Loss 0.247373 Top1 84.765625 LR 0.000500 Time 0.370951 +2025-05-20 16:51:54,164 - Epoch: [28][ 71/ 71] Overall Loss 0.247148 Objective Loss 0.247148 Top1 86.309524 LR 0.000500 Time 0.367117 +2025-05-20 16:51:54,189 - --- validate (epoch=28)----------- +2025-05-20 16:51:54,190 - 2000 samples (256 per mini-batch) +2025-05-20 16:51:58,118 - Epoch: [28][ 8/ 8] Loss 0.304768 Top1 87.200000 +2025-05-20 16:51:58,153 - ==> Top1: 87.200 Loss: 0.305 + +2025-05-20 
16:51:58,153 - ==> Confusion: +[[951 34] + [222 793]] + +2025-05-20 16:51:58,170 - ==> Best [Top1: 89.450 Params: 57776 on epoch: 25] +2025-05-20 16:51:58,170 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:51:58,178 - + +2025-05-20 16:51:58,178 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:52:04,004 - Epoch: [29][ 10/ 71] Overall Loss 0.240491 Objective Loss 0.240491 LR 0.000500 Time 0.582608 +2025-05-20 16:52:06,819 - Epoch: [29][ 20/ 71] Overall Loss 0.239688 Objective Loss 0.239688 LR 0.000500 Time 0.431990 +2025-05-20 16:52:10,756 - Epoch: [29][ 30/ 71] Overall Loss 0.241053 Objective Loss 0.241053 LR 0.000500 Time 0.419237 +2025-05-20 16:52:14,502 - Epoch: [29][ 40/ 71] Overall Loss 0.245953 Objective Loss 0.245953 LR 0.000500 Time 0.408063 +2025-05-20 16:52:18,190 - Epoch: [29][ 50/ 71] Overall Loss 0.244692 Objective Loss 0.244692 LR 0.000500 Time 0.400212 +2025-05-20 16:52:21,354 - Epoch: [29][ 60/ 71] Overall Loss 0.244666 Objective Loss 0.244666 LR 0.000500 Time 0.386237 +2025-05-20 16:52:25,545 - Epoch: [29][ 70/ 71] Overall Loss 0.243975 Objective Loss 0.243975 Top1 91.796875 LR 0.000500 Time 0.390926 +2025-05-20 16:52:25,637 - Epoch: [29][ 71/ 71] Overall Loss 0.243106 Objective Loss 0.243106 Top1 92.261905 LR 0.000500 Time 0.386707 +2025-05-20 16:52:25,663 - --- validate (epoch=29)----------- +2025-05-20 16:52:25,663 - 2000 samples (256 per mini-batch) +2025-05-20 16:52:28,924 - Epoch: [29][ 8/ 8] Loss 0.254387 Top1 89.600000 +2025-05-20 16:52:28,960 - ==> Top1: 89.600 Loss: 0.254 + +2025-05-20 16:52:28,960 - ==> Confusion: [[890 95] - [100 915]] - -2023-09-11 13:56:13,043 - ==> Best [Top1: 90.250 Sparsity:0.00 Params: 57776 on epoch: 43] -2023-09-11 13:56:13,043 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:56:13,046 - - -2023-09-11 13:56:13,046 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:56:16,899 
- Epoch: [44][ 10/ 71] Overall Loss 0.228020 Objective Loss 0.228020 LR 0.000500 Time 0.385251 -2023-09-11 13:56:18,921 - Epoch: [44][ 20/ 71] Overall Loss 0.237893 Objective Loss 0.237893 LR 0.000500 Time 0.293713 -2023-09-11 13:56:21,383 - Epoch: [44][ 30/ 71] Overall Loss 0.233888 Objective Loss 0.233888 LR 0.000500 Time 0.277857 -2023-09-11 13:56:23,525 - Epoch: [44][ 40/ 71] Overall Loss 0.229429 Objective Loss 0.229429 LR 0.000500 Time 0.261932 -2023-09-11 13:56:26,079 - Epoch: [44][ 50/ 71] Overall Loss 0.224992 Objective Loss 0.224992 LR 0.000500 Time 0.260615 -2023-09-11 13:56:28,079 - Epoch: [44][ 60/ 71] Overall Loss 0.222105 Objective Loss 0.222105 LR 0.000500 Time 0.250510 -2023-09-11 13:56:30,352 - Epoch: [44][ 70/ 71] Overall Loss 0.225423 Objective Loss 0.225423 Top1 88.281250 LR 0.000500 Time 0.247186 -2023-09-11 13:56:30,433 - Epoch: [44][ 71/ 71] Overall Loss 0.226879 Objective Loss 0.226879 Top1 88.392857 LR 0.000500 Time 0.244842 -2023-09-11 13:56:30,519 - --- validate (epoch=44)----------- -2023-09-11 13:56:30,519 - 2000 samples (256 per mini-batch) -2023-09-11 13:56:33,802 - Epoch: [44][ 8/ 8] Loss 0.232204 Top1 90.300000 -2023-09-11 13:56:33,893 - ==> Top1: 90.300 Loss: 0.232 - -2023-09-11 13:56:33,894 - ==> Confusion: -[[873 112] - [ 82 933]] + [113 902]] -2023-09-11 13:56:33,909 - ==> Best [Top1: 90.300 Sparsity:0.00 Params: 57776 on epoch: 44] -2023-09-11 13:56:33,909 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:56:33,915 - - -2023-09-11 13:56:33,915 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:56:37,558 - Epoch: [45][ 10/ 71] Overall Loss 0.223586 Objective Loss 0.223586 LR 0.000500 Time 0.364232 -2023-09-11 13:56:39,547 - Epoch: [45][ 20/ 71] Overall Loss 0.228267 Objective Loss 0.228267 LR 0.000500 Time 0.281586 -2023-09-11 13:56:42,251 - Epoch: [45][ 30/ 71] Overall Loss 0.222988 Objective Loss 0.222988 LR 0.000500 Time 0.277841 -2023-09-11 13:56:44,238 - Epoch: [45][ 40/ 
71] Overall Loss 0.221006 Objective Loss 0.221006 LR 0.000500 Time 0.258049 -2023-09-11 13:56:46,853 - Epoch: [45][ 50/ 71] Overall Loss 0.221705 Objective Loss 0.221705 LR 0.000500 Time 0.258736 -2023-09-11 13:56:50,305 - Epoch: [45][ 60/ 71] Overall Loss 0.221461 Objective Loss 0.221461 LR 0.000500 Time 0.273129 -2023-09-11 13:56:52,130 - Epoch: [45][ 70/ 71] Overall Loss 0.218854 Objective Loss 0.218854 Top1 92.968750 LR 0.000500 Time 0.260187 -2023-09-11 13:56:52,211 - Epoch: [45][ 71/ 71] Overall Loss 0.219334 Objective Loss 0.219334 Top1 91.964286 LR 0.000500 Time 0.257659 -2023-09-11 13:56:52,303 - --- validate (epoch=45)----------- -2023-09-11 13:56:52,303 - 2000 samples (256 per mini-batch) -2023-09-11 13:56:55,479 - Epoch: [45][ 8/ 8] Loss 0.237121 Top1 89.850000 -2023-09-11 13:56:55,575 - ==> Top1: 89.850 Loss: 0.237 - -2023-09-11 13:56:55,575 - ==> Confusion: -[[890 95] - [108 907]] +2025-05-20 16:52:28,976 - ==> Best [Top1: 89.600 Params: 57776 on epoch: 29] +2025-05-20 16:52:28,976 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_checkpoint.pth.tar +2025-05-20 16:52:28,984 - Initiating quantization aware training (QAT)... +2025-05-20 16:52:28,985 - Collecting statistics for quantization aware training (QAT)... 
+2025-05-20 16:52:56,676 - + +2025-05-20 16:52:56,677 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:53:01,629 - Epoch: [30][ 10/ 71] Overall Loss 0.362016 Objective Loss 0.362016 LR 0.000500 Time 0.495199 +2025-05-20 16:53:04,643 - Epoch: [30][ 20/ 71] Overall Loss 0.318043 Objective Loss 0.318043 LR 0.000500 Time 0.398253 +2025-05-20 16:53:08,673 - Epoch: [30][ 30/ 71] Overall Loss 0.302594 Objective Loss 0.302594 LR 0.000500 Time 0.399823 +2025-05-20 16:53:12,303 - Epoch: [30][ 40/ 71] Overall Loss 0.285915 Objective Loss 0.285915 LR 0.000500 Time 0.390608 +2025-05-20 16:53:16,301 - Epoch: [30][ 50/ 71] Overall Loss 0.285823 Objective Loss 0.285823 LR 0.000500 Time 0.392437 +2025-05-20 16:53:20,362 - Epoch: [30][ 60/ 71] Overall Loss 0.279197 Objective Loss 0.279197 LR 0.000500 Time 0.394706 +2025-05-20 16:53:24,073 - Epoch: [30][ 70/ 71] Overall Loss 0.275794 Objective Loss 0.275794 Top1 91.406250 LR 0.000500 Time 0.391330 +2025-05-20 16:53:24,164 - Epoch: [30][ 71/ 71] Overall Loss 0.274694 Objective Loss 0.274694 Top1 91.666667 LR 0.000500 Time 0.387104 +2025-05-20 16:53:24,191 - --- validate (epoch=30)----------- +2025-05-20 16:53:24,191 - 2000 samples (256 per mini-batch) +2025-05-20 16:53:27,922 - Epoch: [30][ 8/ 8] Loss 0.275179 Top1 88.150000 +2025-05-20 16:53:27,955 - ==> Top1: 88.150 Loss: 0.275 + +2025-05-20 16:53:27,955 - ==> Confusion: +[[824 161] + [ 76 939]] -2023-09-11 13:56:55,591 - ==> Best [Top1: 90.300 Sparsity:0.00 Params: 57776 on epoch: 44] -2023-09-11 13:56:55,591 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:56:55,594 - - -2023-09-11 13:56:55,594 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:56:59,802 - Epoch: [46][ 10/ 71] Overall Loss 0.207894 Objective Loss 0.207894 LR 0.000500 Time 0.420766 -2023-09-11 13:57:01,869 - Epoch: [46][ 20/ 71] Overall Loss 0.216227 Objective Loss 0.216227 LR 0.000500 Time 0.313733 -2023-09-11 13:57:04,309 - 
Epoch: [46][ 30/ 71] Overall Loss 0.215699 Objective Loss 0.215699 LR 0.000500 Time 0.290484 -2023-09-11 13:57:06,345 - Epoch: [46][ 40/ 71] Overall Loss 0.216100 Objective Loss 0.216100 LR 0.000500 Time 0.268748 -2023-09-11 13:57:08,895 - Epoch: [46][ 50/ 71] Overall Loss 0.215635 Objective Loss 0.215635 LR 0.000500 Time 0.265986 -2023-09-11 13:57:11,518 - Epoch: [46][ 60/ 71] Overall Loss 0.217759 Objective Loss 0.217759 LR 0.000500 Time 0.265376 -2023-09-11 13:57:13,781 - Epoch: [46][ 70/ 71] Overall Loss 0.217998 Objective Loss 0.217998 Top1 90.234375 LR 0.000500 Time 0.259785 -2023-09-11 13:57:13,851 - Epoch: [46][ 71/ 71] Overall Loss 0.219428 Objective Loss 0.219428 Top1 88.392857 LR 0.000500 Time 0.257103 -2023-09-11 13:57:13,944 - --- validate (epoch=46)----------- -2023-09-11 13:57:13,945 - 2000 samples (256 per mini-batch) -2023-09-11 13:57:17,119 - Epoch: [46][ 8/ 8] Loss 0.231850 Top1 89.850000 -2023-09-11 13:57:17,222 - ==> Top1: 89.850 Loss: 0.232 - -2023-09-11 13:57:17,222 - ==> Confusion: -[[866 119] - [ 84 931]] +2025-05-20 16:53:27,965 - ==> Best [Top1: 88.150 Params: 57776 on epoch: 30] +2025-05-20 16:53:27,966 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:53:27,973 - + +2025-05-20 16:53:27,973 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:53:34,068 - Epoch: [31][ 10/ 71] Overall Loss 0.242930 Objective Loss 0.242930 LR 0.000500 Time 0.609415 +2025-05-20 16:53:37,334 - Epoch: [31][ 20/ 71] Overall Loss 0.239024 Objective Loss 0.239024 LR 0.000500 Time 0.467980 +2025-05-20 16:53:41,797 - Epoch: [31][ 30/ 71] Overall Loss 0.242770 Objective Loss 0.242770 LR 0.000500 Time 0.460739 +2025-05-20 16:53:45,096 - Epoch: [31][ 40/ 71] Overall Loss 0.243161 Objective Loss 0.243161 LR 0.000500 Time 0.428037 +2025-05-20 16:53:49,332 - Epoch: [31][ 50/ 71] Overall Loss 0.240135 Objective Loss 0.240135 LR 0.000500 Time 0.427133 +2025-05-20 16:53:52,144 - Epoch: 
[31][ 60/ 71] Overall Loss 0.241078 Objective Loss 0.241078 LR 0.000500 Time 0.402804 +2025-05-20 16:53:55,735 - Epoch: [31][ 70/ 71] Overall Loss 0.242488 Objective Loss 0.242488 Top1 86.718750 LR 0.000500 Time 0.396566 +2025-05-20 16:53:55,844 - Epoch: [31][ 71/ 71] Overall Loss 0.243835 Objective Loss 0.243835 Top1 85.714286 LR 0.000500 Time 0.392509 +2025-05-20 16:53:55,879 - --- validate (epoch=31)----------- +2025-05-20 16:53:55,879 - 2000 samples (256 per mini-batch) +2025-05-20 16:53:59,736 - Epoch: [31][ 8/ 8] Loss 0.250834 Top1 89.250000 +2025-05-20 16:53:59,766 - ==> Top1: 89.250 Loss: 0.251 + +2025-05-20 16:53:59,766 - ==> Confusion: +[[850 135] + [ 80 935]] -2023-09-11 13:57:17,239 - ==> Best [Top1: 90.300 Sparsity:0.00 Params: 57776 on epoch: 44] -2023-09-11 13:57:17,239 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:57:17,241 - - -2023-09-11 13:57:17,241 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:57:21,656 - Epoch: [47][ 10/ 71] Overall Loss 0.232497 Objective Loss 0.232497 LR 0.000500 Time 0.441400 -2023-09-11 13:57:23,721 - Epoch: [47][ 20/ 71] Overall Loss 0.220834 Objective Loss 0.220834 LR 0.000500 Time 0.323949 -2023-09-11 13:57:26,649 - Epoch: [47][ 30/ 71] Overall Loss 0.218803 Objective Loss 0.218803 LR 0.000500 Time 0.313565 -2023-09-11 13:57:29,885 - Epoch: [47][ 40/ 71] Overall Loss 0.213198 Objective Loss 0.213198 LR 0.000500 Time 0.316063 -2023-09-11 13:57:31,937 - Epoch: [47][ 50/ 71] Overall Loss 0.212029 Objective Loss 0.212029 LR 0.000500 Time 0.293882 -2023-09-11 13:57:34,511 - Epoch: [47][ 60/ 71] Overall Loss 0.216507 Objective Loss 0.216507 LR 0.000500 Time 0.287790 -2023-09-11 13:57:36,453 - Epoch: [47][ 70/ 71] Overall Loss 0.217801 Objective Loss 0.217801 Top1 90.234375 LR 0.000500 Time 0.274419 -2023-09-11 13:57:36,527 - Epoch: [47][ 71/ 71] Overall Loss 0.217838 Objective Loss 0.217838 Top1 90.476190 LR 0.000500 Time 0.271592 -2023-09-11 13:57:36,631 - --- 
validate (epoch=47)----------- -2023-09-11 13:57:36,631 - 2000 samples (256 per mini-batch) -2023-09-11 13:57:39,007 - Epoch: [47][ 8/ 8] Loss 0.242589 Top1 89.600000 -2023-09-11 13:57:39,108 - ==> Top1: 89.600 Loss: 0.243 - -2023-09-11 13:57:39,108 - ==> Confusion: -[[861 124] - [ 84 931]] +2025-05-20 16:53:59,777 - ==> Best [Top1: 89.250 Params: 57776 on epoch: 31] +2025-05-20 16:53:59,777 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:53:59,784 - + +2025-05-20 16:53:59,784 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:54:04,304 - Epoch: [32][ 10/ 71] Overall Loss 0.242539 Objective Loss 0.242539 LR 0.000500 Time 0.451929 +2025-05-20 16:54:08,102 - Epoch: [32][ 20/ 71] Overall Loss 0.240927 Objective Loss 0.240927 LR 0.000500 Time 0.415814 +2025-05-20 16:54:11,409 - Epoch: [32][ 30/ 71] Overall Loss 0.244782 Objective Loss 0.244782 LR 0.000500 Time 0.387444 +2025-05-20 16:54:15,945 - Epoch: [32][ 40/ 71] Overall Loss 0.239144 Objective Loss 0.239144 LR 0.000500 Time 0.403963 +2025-05-20 16:54:19,397 - Epoch: [32][ 50/ 71] Overall Loss 0.240515 Objective Loss 0.240515 LR 0.000500 Time 0.392216 +2025-05-20 16:54:23,110 - Epoch: [32][ 60/ 71] Overall Loss 0.239186 Objective Loss 0.239186 LR 0.000500 Time 0.388722 +2025-05-20 16:54:26,321 - Epoch: [32][ 70/ 71] Overall Loss 0.238898 Objective Loss 0.238898 Top1 88.671875 LR 0.000500 Time 0.379057 +2025-05-20 16:54:26,429 - Epoch: [32][ 71/ 71] Overall Loss 0.238140 Objective Loss 0.238140 Top1 89.880952 LR 0.000500 Time 0.375244 +2025-05-20 16:54:26,469 - --- validate (epoch=32)----------- +2025-05-20 16:54:26,470 - 2000 samples (256 per mini-batch) +2025-05-20 16:54:29,867 - Epoch: [32][ 8/ 8] Loss 0.252821 Top1 89.600000 +2025-05-20 16:54:29,902 - ==> Top1: 89.600 Loss: 0.253 + +2025-05-20 16:54:29,902 - ==> Confusion: +[[864 121] + [ 87 928]] -2023-09-11 13:57:39,123 - ==> Best [Top1: 90.300 Sparsity:0.00 Params: 57776 
on epoch: 44] -2023-09-11 13:57:39,123 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:57:39,125 - - -2023-09-11 13:57:39,125 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:57:43,610 - Epoch: [48][ 10/ 71] Overall Loss 0.216998 Objective Loss 0.216998 LR 0.000500 Time 0.448359 -2023-09-11 13:57:45,622 - Epoch: [48][ 20/ 71] Overall Loss 0.215957 Objective Loss 0.215957 LR 0.000500 Time 0.324772 -2023-09-11 13:57:48,133 - Epoch: [48][ 30/ 71] Overall Loss 0.209637 Objective Loss 0.209637 LR 0.000500 Time 0.300222 -2023-09-11 13:57:50,258 - Epoch: [48][ 40/ 71] Overall Loss 0.210412 Objective Loss 0.210412 LR 0.000500 Time 0.278280 -2023-09-11 13:57:52,853 - Epoch: [48][ 50/ 71] Overall Loss 0.213087 Objective Loss 0.213087 LR 0.000500 Time 0.274512 -2023-09-11 13:57:54,901 - Epoch: [48][ 60/ 71] Overall Loss 0.215046 Objective Loss 0.215046 LR 0.000500 Time 0.262882 -2023-09-11 13:57:57,180 - Epoch: [48][ 70/ 71] Overall Loss 0.213777 Objective Loss 0.213777 Top1 88.671875 LR 0.000500 Time 0.257884 -2023-09-11 13:57:57,261 - Epoch: [48][ 71/ 71] Overall Loss 0.212789 Objective Loss 0.212789 Top1 90.773810 LR 0.000500 Time 0.255393 -2023-09-11 13:57:57,360 - --- validate (epoch=48)----------- -2023-09-11 13:57:57,360 - 2000 samples (256 per mini-batch) -2023-09-11 13:58:00,304 - Epoch: [48][ 8/ 8] Loss 0.228272 Top1 90.250000 -2023-09-11 13:58:00,389 - ==> Top1: 90.250 Loss: 0.228 - -2023-09-11 13:58:00,390 - ==> Confusion: -[[867 118] - [ 77 938]] +2025-05-20 16:54:29,918 - ==> Best [Top1: 89.600 Params: 57776 on epoch: 32] +2025-05-20 16:54:29,918 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:54:29,925 - + +2025-05-20 16:54:29,926 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:54:36,330 - Epoch: [33][ 10/ 71] Overall Loss 0.230107 Objective Loss 0.230107 LR 0.000500 Time 0.640414 +2025-05-20 16:54:39,362 - Epoch: 
[33][ 20/ 71] Overall Loss 0.236585 Objective Loss 0.236585 LR 0.000500 Time 0.471793 +2025-05-20 16:54:43,168 - Epoch: [33][ 30/ 71] Overall Loss 0.237993 Objective Loss 0.237993 LR 0.000500 Time 0.441362 +2025-05-20 16:54:48,234 - Epoch: [33][ 40/ 71] Overall Loss 0.240830 Objective Loss 0.240830 LR 0.000500 Time 0.457664 +2025-05-20 16:54:51,274 - Epoch: [33][ 50/ 71] Overall Loss 0.242251 Objective Loss 0.242251 LR 0.000500 Time 0.426937 +2025-05-20 16:54:54,965 - Epoch: [33][ 60/ 71] Overall Loss 0.242141 Objective Loss 0.242141 LR 0.000500 Time 0.417291 +2025-05-20 16:54:58,111 - Epoch: [33][ 70/ 71] Overall Loss 0.240645 Objective Loss 0.240645 Top1 88.281250 LR 0.000500 Time 0.402616 +2025-05-20 16:54:58,219 - Epoch: [33][ 71/ 71] Overall Loss 0.240710 Objective Loss 0.240710 Top1 88.392857 LR 0.000500 Time 0.398469 +2025-05-20 16:54:58,253 - --- validate (epoch=33)----------- +2025-05-20 16:54:58,253 - 2000 samples (256 per mini-batch) +2025-05-20 16:55:01,915 - Epoch: [33][ 8/ 8] Loss 0.251543 Top1 89.350000 +2025-05-20 16:55:01,952 - ==> Top1: 89.350 Loss: 0.252 + +2025-05-20 16:55:01,952 - ==> Confusion: +[[897 88] + [125 890]] -2023-09-11 13:58:00,406 - ==> Best [Top1: 90.300 Sparsity:0.00 Params: 57776 on epoch: 44] -2023-09-11 13:58:00,406 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:58:00,410 - - -2023-09-11 13:58:00,411 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:58:04,706 - Epoch: [49][ 10/ 71] Overall Loss 0.223845 Objective Loss 0.223845 LR 0.000500 Time 0.429526 -2023-09-11 13:58:07,106 - Epoch: [49][ 20/ 71] Overall Loss 0.222394 Objective Loss 0.222394 LR 0.000500 Time 0.334738 -2023-09-11 13:58:09,561 - Epoch: [49][ 30/ 71] Overall Loss 0.220999 Objective Loss 0.220999 LR 0.000500 Time 0.304986 -2023-09-11 13:58:11,636 - Epoch: [49][ 40/ 71] Overall Loss 0.212968 Objective Loss 0.212968 LR 0.000500 Time 0.280603 -2023-09-11 13:58:14,208 - Epoch: [49][ 50/ 71] Overall Loss 
0.211852 Objective Loss 0.211852 LR 0.000500 Time 0.275904 -2023-09-11 13:58:16,463 - Epoch: [49][ 60/ 71] Overall Loss 0.210759 Objective Loss 0.210759 LR 0.000500 Time 0.267502 -2023-09-11 13:58:18,918 - Epoch: [49][ 70/ 71] Overall Loss 0.210280 Objective Loss 0.210280 Top1 91.015625 LR 0.000500 Time 0.264360 -2023-09-11 13:58:18,997 - Epoch: [49][ 71/ 71] Overall Loss 0.210721 Objective Loss 0.210721 Top1 91.071429 LR 0.000500 Time 0.261742 -2023-09-11 13:58:19,088 - --- validate (epoch=49)----------- -2023-09-11 13:58:19,088 - 2000 samples (256 per mini-batch) -2023-09-11 13:58:22,095 - Epoch: [49][ 8/ 8] Loss 0.245969 Top1 89.600000 -2023-09-11 13:58:22,205 - ==> Top1: 89.600 Loss: 0.246 - -2023-09-11 13:58:22,206 - ==> Confusion: -[[928 57] - [151 864]] - -2023-09-11 13:58:22,208 - ==> Best [Top1: 90.300 Sparsity:0.00 Params: 57776 on epoch: 44] -2023-09-11 13:58:22,208 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:58:22,212 - - -2023-09-11 13:58:22,212 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:58:25,756 - Epoch: [50][ 10/ 71] Overall Loss 0.210106 Objective Loss 0.210106 LR 0.000250 Time 0.354360 -2023-09-11 13:58:27,914 - Epoch: [50][ 20/ 71] Overall Loss 0.214831 Objective Loss 0.214831 LR 0.000250 Time 0.285012 -2023-09-11 13:58:31,666 - Epoch: [50][ 30/ 71] Overall Loss 0.209015 Objective Loss 0.209015 LR 0.000250 Time 0.315064 -2023-09-11 13:58:33,586 - Epoch: [50][ 40/ 71] Overall Loss 0.205012 Objective Loss 0.205012 LR 0.000250 Time 0.284304 -2023-09-11 13:58:37,576 - Epoch: [50][ 50/ 71] Overall Loss 0.201809 Objective Loss 0.201809 LR 0.000250 Time 0.307238 -2023-09-11 13:58:39,610 - Epoch: [50][ 60/ 71] Overall Loss 0.200139 Objective Loss 0.200139 LR 0.000250 Time 0.289928 -2023-09-11 13:58:42,348 - Epoch: [50][ 70/ 71] Overall Loss 0.201868 Objective Loss 0.201868 Top1 89.843750 LR 0.000250 Time 0.287613 -2023-09-11 13:58:42,448 - Epoch: [50][ 71/ 71] Overall Loss 0.201710 
Objective Loss 0.201710 Top1 90.476190 LR 0.000250 Time 0.284969 -2023-09-11 13:58:42,545 - --- validate (epoch=50)----------- -2023-09-11 13:58:42,545 - 2000 samples (256 per mini-batch) -2023-09-11 13:58:45,611 - Epoch: [50][ 8/ 8] Loss 0.245072 Top1 89.900000 -2023-09-11 13:58:45,698 - ==> Top1: 89.900 Loss: 0.245 - -2023-09-11 13:58:45,699 - ==> Confusion: -[[910 75] - [127 888]] +2025-05-20 16:55:01,968 - ==> Best [Top1: 89.600 Params: 57776 on epoch: 32] +2025-05-20 16:55:01,968 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:55:01,975 - + +2025-05-20 16:55:01,975 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:55:08,564 - Epoch: [34][ 10/ 71] Overall Loss 0.213916 Objective Loss 0.213916 LR 0.000500 Time 0.658842 +2025-05-20 16:55:11,156 - Epoch: [34][ 20/ 71] Overall Loss 0.220007 Objective Loss 0.220007 LR 0.000500 Time 0.458957 +2025-05-20 16:55:14,805 - Epoch: [34][ 30/ 71] Overall Loss 0.219199 Objective Loss 0.219199 LR 0.000500 Time 0.427621 +2025-05-20 16:55:17,819 - Epoch: [34][ 40/ 71] Overall Loss 0.222494 Objective Loss 0.222494 LR 0.000500 Time 0.396047 +2025-05-20 16:55:21,591 - Epoch: [34][ 50/ 71] Overall Loss 0.227325 Objective Loss 0.227325 LR 0.000500 Time 0.392280 +2025-05-20 16:55:25,153 - Epoch: [34][ 60/ 71] Overall Loss 0.228290 Objective Loss 0.228290 LR 0.000500 Time 0.386250 +2025-05-20 16:55:28,780 - Epoch: [34][ 70/ 71] Overall Loss 0.228510 Objective Loss 0.228510 Top1 89.843750 LR 0.000500 Time 0.382886 +2025-05-20 16:55:28,888 - Epoch: [34][ 71/ 71] Overall Loss 0.227015 Objective Loss 0.227015 Top1 91.369048 LR 0.000500 Time 0.379013 +2025-05-20 16:55:28,919 - --- validate (epoch=34)----------- +2025-05-20 16:55:28,919 - 2000 samples (256 per mini-batch) +2025-05-20 16:55:32,899 - Epoch: [34][ 8/ 8] Loss 0.226974 Top1 90.700000 +2025-05-20 16:55:32,931 - ==> Top1: 90.700 Loss: 0.227 + +2025-05-20 16:55:32,931 - ==> Confusion: +[[856 
129] + [ 57 958]] + +2025-05-20 16:55:32,942 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 16:55:32,942 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:55:32,950 - + +2025-05-20 16:55:32,950 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:55:37,726 - Epoch: [35][ 10/ 71] Overall Loss 0.225966 Objective Loss 0.225966 LR 0.000500 Time 0.477605 +2025-05-20 16:55:40,529 - Epoch: [35][ 20/ 71] Overall Loss 0.217617 Objective Loss 0.217617 LR 0.000500 Time 0.378906 +2025-05-20 16:55:44,335 - Epoch: [35][ 30/ 71] Overall Loss 0.225538 Objective Loss 0.225538 LR 0.000500 Time 0.379449 +2025-05-20 16:55:47,601 - Epoch: [35][ 40/ 71] Overall Loss 0.227858 Objective Loss 0.227858 LR 0.000500 Time 0.366232 +2025-05-20 16:55:51,726 - Epoch: [35][ 50/ 71] Overall Loss 0.222584 Objective Loss 0.222584 LR 0.000500 Time 0.375489 +2025-05-20 16:55:54,577 - Epoch: [35][ 60/ 71] Overall Loss 0.220296 Objective Loss 0.220296 LR 0.000500 Time 0.360421 +2025-05-20 16:55:59,306 - Epoch: [35][ 70/ 71] Overall Loss 0.221207 Objective Loss 0.221207 Top1 89.062500 LR 0.000500 Time 0.376485 +2025-05-20 16:55:59,409 - Epoch: [35][ 71/ 71] Overall Loss 0.220895 Objective Loss 0.220895 Top1 90.178571 LR 0.000500 Time 0.372619 +2025-05-20 16:55:59,438 - --- validate (epoch=35)----------- +2025-05-20 16:55:59,438 - 2000 samples (256 per mini-batch) +2025-05-20 16:56:03,153 - Epoch: [35][ 8/ 8] Loss 0.266000 Top1 88.500000 +2025-05-20 16:56:03,189 - ==> Top1: 88.500 Loss: 0.266 + +2025-05-20 16:56:03,189 - ==> Confusion: +[[912 73] + [157 858]] + +2025-05-20 16:56:03,201 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 16:56:03,201 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:56:03,208 - + +2025-05-20 16:56:03,209 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:56:08,393 - Epoch: 
[36][ 10/ 71] Overall Loss 0.236118 Objective Loss 0.236118 LR 0.000500 Time 0.518391 +2025-05-20 16:56:11,358 - Epoch: [36][ 20/ 71] Overall Loss 0.230699 Objective Loss 0.230699 LR 0.000500 Time 0.407410 +2025-05-20 16:56:15,060 - Epoch: [36][ 30/ 71] Overall Loss 0.235485 Objective Loss 0.235485 LR 0.000500 Time 0.395003 +2025-05-20 16:56:18,148 - Epoch: [36][ 40/ 71] Overall Loss 0.230947 Objective Loss 0.230947 LR 0.000500 Time 0.373451 +2025-05-20 16:56:22,015 - Epoch: [36][ 50/ 71] Overall Loss 0.231043 Objective Loss 0.231043 LR 0.000500 Time 0.376095 +2025-05-20 16:56:26,893 - Epoch: [36][ 60/ 71] Overall Loss 0.229137 Objective Loss 0.229137 LR 0.000500 Time 0.394695 +2025-05-20 16:56:30,051 - Epoch: [36][ 70/ 71] Overall Loss 0.227823 Objective Loss 0.227823 Top1 89.843750 LR 0.000500 Time 0.383420 +2025-05-20 16:56:30,146 - Epoch: [36][ 71/ 71] Overall Loss 0.227651 Objective Loss 0.227651 Top1 90.476190 LR 0.000500 Time 0.379365 +2025-05-20 16:56:30,182 - --- validate (epoch=36)----------- +2025-05-20 16:56:30,183 - 2000 samples (256 per mini-batch) +2025-05-20 16:56:34,105 - Epoch: [36][ 8/ 8] Loss 0.239324 Top1 89.800000 +2025-05-20 16:56:34,137 - ==> Top1: 89.800 Loss: 0.239 + +2025-05-20 16:56:34,138 - ==> Confusion: +[[857 128] + [ 76 939]] -2023-09-11 13:58:45,715 - ==> Best [Top1: 90.300 Sparsity:0.00 Params: 57776 on epoch: 44] -2023-09-11 13:58:45,715 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:58:45,718 - - -2023-09-11 13:58:45,718 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:58:49,998 - Epoch: [51][ 10/ 71] Overall Loss 0.205489 Objective Loss 0.205489 LR 0.000250 Time 0.427959 -2023-09-11 13:58:51,949 - Epoch: [51][ 20/ 71] Overall Loss 0.198528 Objective Loss 0.198528 LR 0.000250 Time 0.311514 -2023-09-11 13:58:54,717 - Epoch: [51][ 30/ 71] Overall Loss 0.195208 Objective Loss 0.195208 LR 0.000250 Time 0.299933 -2023-09-11 13:58:56,721 - Epoch: [51][ 40/ 71] Overall Loss 
0.194314 Objective Loss 0.194314 LR 0.000250 Time 0.275045 -2023-09-11 13:58:59,437 - Epoch: [51][ 50/ 71] Overall Loss 0.194771 Objective Loss 0.194771 LR 0.000250 Time 0.274342 -2023-09-11 13:59:01,952 - Epoch: [51][ 60/ 71] Overall Loss 0.195812 Objective Loss 0.195812 LR 0.000250 Time 0.270536 -2023-09-11 13:59:03,809 - Epoch: [51][ 70/ 71] Overall Loss 0.194190 Objective Loss 0.194190 Top1 95.312500 LR 0.000250 Time 0.258404 -2023-09-11 13:59:03,918 - Epoch: [51][ 71/ 71] Overall Loss 0.193488 Objective Loss 0.193488 Top1 94.940476 LR 0.000250 Time 0.256303 -2023-09-11 13:59:04,008 - --- validate (epoch=51)----------- -2023-09-11 13:59:04,009 - 2000 samples (256 per mini-batch) -2023-09-11 13:59:07,207 - Epoch: [51][ 8/ 8] Loss 0.221582 Top1 90.000000 -2023-09-11 13:59:07,301 - ==> Top1: 90.000 Loss: 0.222 - -2023-09-11 13:59:07,301 - ==> Confusion: -[[886 99] - [101 914]] +2025-05-20 16:56:34,155 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 16:56:34,155 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:56:34,162 - + +2025-05-20 16:56:34,162 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:56:39,284 - Epoch: [37][ 10/ 71] Overall Loss 0.213594 Objective Loss 0.213594 LR 0.000500 Time 0.512118 +2025-05-20 16:56:43,236 - Epoch: [37][ 20/ 71] Overall Loss 0.213371 Objective Loss 0.213371 LR 0.000500 Time 0.453626 +2025-05-20 16:56:47,491 - Epoch: [37][ 30/ 71] Overall Loss 0.219634 Objective Loss 0.219634 LR 0.000500 Time 0.444244 +2025-05-20 16:56:50,583 - Epoch: [37][ 40/ 71] Overall Loss 0.219141 Objective Loss 0.219141 LR 0.000500 Time 0.410485 +2025-05-20 16:56:54,430 - Epoch: [37][ 50/ 71] Overall Loss 0.221253 Objective Loss 0.221253 LR 0.000500 Time 0.405313 +2025-05-20 16:56:57,560 - Epoch: [37][ 60/ 71] Overall Loss 0.220210 Objective Loss 0.220210 LR 0.000500 Time 0.389921 +2025-05-20 16:57:02,058 - Epoch: [37][ 70/ 71] Overall Loss 0.218962 
Objective Loss 0.218962 Top1 90.625000 LR 0.000500 Time 0.398469 +2025-05-20 16:57:02,150 - Epoch: [37][ 71/ 71] Overall Loss 0.219835 Objective Loss 0.219835 Top1 90.178571 LR 0.000500 Time 0.394103 +2025-05-20 16:57:02,184 - --- validate (epoch=37)----------- +2025-05-20 16:57:02,184 - 2000 samples (256 per mini-batch) +2025-05-20 16:57:06,206 - Epoch: [37][ 8/ 8] Loss 0.323884 Top1 86.100000 +2025-05-20 16:57:06,242 - ==> Top1: 86.100 Loss: 0.324 + +2025-05-20 16:57:06,242 - ==> Confusion: +[[941 44] + [234 781]] + +2025-05-20 16:57:06,258 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 16:57:06,258 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:57:06,265 - + +2025-05-20 16:57:06,265 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:57:11,212 - Epoch: [38][ 10/ 71] Overall Loss 0.223370 Objective Loss 0.223370 LR 0.000500 Time 0.494592 +2025-05-20 16:57:14,148 - Epoch: [38][ 20/ 71] Overall Loss 0.217441 Objective Loss 0.217441 LR 0.000500 Time 0.394058 +2025-05-20 16:57:17,848 - Epoch: [38][ 30/ 71] Overall Loss 0.223425 Objective Loss 0.223425 LR 0.000500 Time 0.386035 +2025-05-20 16:57:21,762 - Epoch: [38][ 40/ 71] Overall Loss 0.223819 Objective Loss 0.223819 LR 0.000500 Time 0.387384 +2025-05-20 16:57:25,010 - Epoch: [38][ 50/ 71] Overall Loss 0.223751 Objective Loss 0.223751 LR 0.000500 Time 0.374858 +2025-05-20 16:57:28,276 - Epoch: [38][ 60/ 71] Overall Loss 0.224920 Objective Loss 0.224920 LR 0.000500 Time 0.366807 +2025-05-20 16:57:32,276 - Epoch: [38][ 70/ 71] Overall Loss 0.222095 Objective Loss 0.222095 Top1 94.531250 LR 0.000500 Time 0.371547 +2025-05-20 16:57:32,375 - Epoch: [38][ 71/ 71] Overall Loss 0.222710 Objective Loss 0.222710 Top1 92.857143 LR 0.000500 Time 0.367704 +2025-05-20 16:57:32,409 - --- validate (epoch=38)----------- +2025-05-20 16:57:32,410 - 2000 samples (256 per mini-batch) +2025-05-20 16:57:35,666 - Epoch: [38][ 8/ 8] 
Loss 0.236124 Top1 89.800000 +2025-05-20 16:57:35,698 - ==> Top1: 89.800 Loss: 0.236 + +2025-05-20 16:57:35,698 - ==> Confusion: +[[853 132] + [ 72 943]] -2023-09-11 13:59:07,317 - ==> Best [Top1: 90.300 Sparsity:0.00 Params: 57776 on epoch: 44] -2023-09-11 13:59:07,317 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:59:07,321 - - -2023-09-11 13:59:07,321 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:59:11,116 - Epoch: [52][ 10/ 71] Overall Loss 0.210399 Objective Loss 0.210399 LR 0.000250 Time 0.379486 -2023-09-11 13:59:14,139 - Epoch: [52][ 20/ 71] Overall Loss 0.203752 Objective Loss 0.203752 LR 0.000250 Time 0.340880 -2023-09-11 13:59:16,673 - Epoch: [52][ 30/ 71] Overall Loss 0.206223 Objective Loss 0.206223 LR 0.000250 Time 0.311702 -2023-09-11 13:59:19,368 - Epoch: [52][ 40/ 71] Overall Loss 0.209780 Objective Loss 0.209780 LR 0.000250 Time 0.301153 -2023-09-11 13:59:22,508 - Epoch: [52][ 50/ 71] Overall Loss 0.204946 Objective Loss 0.204946 LR 0.000250 Time 0.303713 -2023-09-11 13:59:24,555 - Epoch: [52][ 60/ 71] Overall Loss 0.203254 Objective Loss 0.203254 LR 0.000250 Time 0.287206 -2023-09-11 13:59:26,721 - Epoch: [52][ 70/ 71] Overall Loss 0.203954 Objective Loss 0.203954 Top1 92.187500 LR 0.000250 Time 0.277109 -2023-09-11 13:59:26,830 - Epoch: [52][ 71/ 71] Overall Loss 0.203238 Objective Loss 0.203238 Top1 92.261905 LR 0.000250 Time 0.274746 -2023-09-11 13:59:26,932 - --- validate (epoch=52)----------- -2023-09-11 13:59:26,933 - 2000 samples (256 per mini-batch) -2023-09-11 13:59:29,726 - Epoch: [52][ 8/ 8] Loss 0.216090 Top1 91.300000 -2023-09-11 13:59:29,828 - ==> Top1: 91.300 Loss: 0.216 - -2023-09-11 13:59:29,829 - ==> Confusion: -[[871 114] - [ 60 955]] +2025-05-20 16:57:35,715 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 16:57:35,715 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:57:35,722 - + +2025-05-20 
16:57:35,722 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:57:41,654 - Epoch: [39][ 10/ 71] Overall Loss 0.223528 Objective Loss 0.223528 LR 0.000500 Time 0.593124 +2025-05-20 16:57:44,807 - Epoch: [39][ 20/ 71] Overall Loss 0.219258 Objective Loss 0.219258 LR 0.000500 Time 0.454222 +2025-05-20 16:57:48,346 - Epoch: [39][ 30/ 71] Overall Loss 0.217845 Objective Loss 0.217845 LR 0.000500 Time 0.420750 +2025-05-20 16:57:51,252 - Epoch: [39][ 40/ 71] Overall Loss 0.218402 Objective Loss 0.218402 LR 0.000500 Time 0.388205 +2025-05-20 16:57:55,673 - Epoch: [39][ 50/ 71] Overall Loss 0.216967 Objective Loss 0.216967 LR 0.000500 Time 0.398985 +2025-05-20 16:57:58,383 - Epoch: [39][ 60/ 71] Overall Loss 0.216137 Objective Loss 0.216137 LR 0.000500 Time 0.377648 +2025-05-20 16:58:02,022 - Epoch: [39][ 70/ 71] Overall Loss 0.212106 Objective Loss 0.212106 Top1 93.359375 LR 0.000500 Time 0.375674 +2025-05-20 16:58:02,117 - Epoch: [39][ 71/ 71] Overall Loss 0.210723 Objective Loss 0.210723 Top1 93.750000 LR 0.000500 Time 0.371727 +2025-05-20 16:58:02,154 - --- validate (epoch=39)----------- +2025-05-20 16:58:02,154 - 2000 samples (256 per mini-batch) +2025-05-20 16:58:06,067 - Epoch: [39][ 8/ 8] Loss 0.232848 Top1 90.350000 +2025-05-20 16:58:06,096 - ==> Top1: 90.350 Loss: 0.233 + +2025-05-20 16:58:06,096 - ==> Confusion: +[[913 72] + [121 894]] + +2025-05-20 16:58:06,111 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 16:58:06,111 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:58:06,118 - + +2025-05-20 16:58:06,118 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:58:12,286 - Epoch: [40][ 10/ 71] Overall Loss 0.213221 Objective Loss 0.213221 LR 0.000500 Time 0.616712 +2025-05-20 16:58:15,303 - Epoch: [40][ 20/ 71] Overall Loss 0.222668 Objective Loss 0.222668 LR 0.000500 Time 0.459183 +2025-05-20 16:58:19,309 - Epoch: [40][ 30/ 71] 
Overall Loss 0.225034 Objective Loss 0.225034 LR 0.000500 Time 0.439640 +2025-05-20 16:58:23,549 - Epoch: [40][ 40/ 71] Overall Loss 0.222112 Objective Loss 0.222112 LR 0.000500 Time 0.435721 +2025-05-20 16:58:27,223 - Epoch: [40][ 50/ 71] Overall Loss 0.218718 Objective Loss 0.218718 LR 0.000500 Time 0.422051 +2025-05-20 16:58:31,030 - Epoch: [40][ 60/ 71] Overall Loss 0.217809 Objective Loss 0.217809 LR 0.000500 Time 0.415151 +2025-05-20 16:58:34,535 - Epoch: [40][ 70/ 71] Overall Loss 0.215553 Objective Loss 0.215553 Top1 92.578125 LR 0.000500 Time 0.405916 +2025-05-20 16:58:34,637 - Epoch: [40][ 71/ 71] Overall Loss 0.215237 Objective Loss 0.215237 Top1 92.559524 LR 0.000500 Time 0.401641 +2025-05-20 16:58:34,664 - --- validate (epoch=40)----------- +2025-05-20 16:58:34,664 - 2000 samples (256 per mini-batch) +2025-05-20 16:58:38,622 - Epoch: [40][ 8/ 8] Loss 0.232644 Top1 90.100000 +2025-05-20 16:58:38,653 - ==> Top1: 90.100 Loss: 0.233 + +2025-05-20 16:58:38,653 - ==> Confusion: +[[875 110] + [ 88 927]] -2023-09-11 13:59:29,843 - ==> Best [Top1: 91.300 Sparsity:0.00 Params: 57776 on epoch: 52] -2023-09-11 13:59:29,844 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:59:29,846 - - -2023-09-11 13:59:29,846 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:59:33,436 - Epoch: [53][ 10/ 71] Overall Loss 0.191331 Objective Loss 0.191331 LR 0.000250 Time 0.358950 -2023-09-11 13:59:37,085 - Epoch: [53][ 20/ 71] Overall Loss 0.187278 Objective Loss 0.187278 LR 0.000250 Time 0.361867 -2023-09-11 13:59:39,095 - Epoch: [53][ 30/ 71] Overall Loss 0.195590 Objective Loss 0.195590 LR 0.000250 Time 0.308243 -2023-09-11 13:59:42,228 - Epoch: [53][ 40/ 71] Overall Loss 0.195112 Objective Loss 0.195112 LR 0.000250 Time 0.309505 -2023-09-11 13:59:44,727 - Epoch: [53][ 50/ 71] Overall Loss 0.195962 Objective Loss 0.195962 LR 0.000250 Time 0.297570 -2023-09-11 13:59:48,458 - Epoch: [53][ 60/ 71] Overall Loss 0.200068 Objective 
Loss 0.200068 LR 0.000250 Time 0.310156 -2023-09-11 13:59:50,283 - Epoch: [53][ 70/ 71] Overall Loss 0.199378 Objective Loss 0.199378 Top1 91.015625 LR 0.000250 Time 0.291924 -2023-09-11 13:59:50,375 - Epoch: [53][ 71/ 71] Overall Loss 0.198956 Objective Loss 0.198956 Top1 91.369048 LR 0.000250 Time 0.289100 -2023-09-11 13:59:50,472 - --- validate (epoch=53)----------- -2023-09-11 13:59:50,472 - 2000 samples (256 per mini-batch) -2023-09-11 13:59:53,523 - Epoch: [53][ 8/ 8] Loss 0.214612 Top1 91.950000 -2023-09-11 13:59:53,614 - ==> Top1: 91.950 Loss: 0.215 - -2023-09-11 13:59:53,614 - ==> Confusion: -[[882 103] - [ 58 957]] +2025-05-20 16:58:38,663 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 16:58:38,664 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:58:38,671 - + +2025-05-20 16:58:38,671 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:58:43,542 - Epoch: [41][ 10/ 71] Overall Loss 0.198583 Objective Loss 0.198583 LR 0.000500 Time 0.487065 +2025-05-20 16:58:46,355 - Epoch: [41][ 20/ 71] Overall Loss 0.203662 Objective Loss 0.203662 LR 0.000500 Time 0.384150 +2025-05-20 16:58:50,354 - Epoch: [41][ 30/ 71] Overall Loss 0.203258 Objective Loss 0.203258 LR 0.000500 Time 0.389389 +2025-05-20 16:58:53,426 - Epoch: [41][ 40/ 71] Overall Loss 0.206303 Objective Loss 0.206303 LR 0.000500 Time 0.368825 +2025-05-20 16:58:57,065 - Epoch: [41][ 50/ 71] Overall Loss 0.208678 Objective Loss 0.208678 LR 0.000500 Time 0.367835 +2025-05-20 16:58:59,889 - Epoch: [41][ 60/ 71] Overall Loss 0.211501 Objective Loss 0.211501 LR 0.000500 Time 0.353598 +2025-05-20 16:59:03,616 - Epoch: [41][ 70/ 71] Overall Loss 0.210891 Objective Loss 0.210891 Top1 92.578125 LR 0.000500 Time 0.356320 +2025-05-20 16:59:03,724 - Epoch: [41][ 71/ 71] Overall Loss 0.210001 Objective Loss 0.210001 Top1 92.857143 LR 0.000500 Time 0.352826 +2025-05-20 16:59:03,758 - --- validate (epoch=41)----------- 
+2025-05-20 16:59:03,758 - 2000 samples (256 per mini-batch) +2025-05-20 16:59:07,511 - Epoch: [41][ 8/ 8] Loss 0.250367 Top1 89.300000 +2025-05-20 16:59:07,550 - ==> Top1: 89.300 Loss: 0.250 + +2025-05-20 16:59:07,550 - ==> Confusion: +[[900 85] + [129 886]] + +2025-05-20 16:59:07,562 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 16:59:07,562 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:59:07,569 - + +2025-05-20 16:59:07,570 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:59:12,977 - Epoch: [42][ 10/ 71] Overall Loss 0.216531 Objective Loss 0.216531 LR 0.000500 Time 0.540638 +2025-05-20 16:59:16,131 - Epoch: [42][ 20/ 71] Overall Loss 0.207590 Objective Loss 0.207590 LR 0.000500 Time 0.427998 +2025-05-20 16:59:20,312 - Epoch: [42][ 30/ 71] Overall Loss 0.208188 Objective Loss 0.208188 LR 0.000500 Time 0.424684 +2025-05-20 16:59:23,509 - Epoch: [42][ 40/ 71] Overall Loss 0.205450 Objective Loss 0.205450 LR 0.000500 Time 0.398436 +2025-05-20 16:59:27,595 - Epoch: [42][ 50/ 71] Overall Loss 0.206483 Objective Loss 0.206483 LR 0.000500 Time 0.400451 +2025-05-20 16:59:31,591 - Epoch: [42][ 60/ 71] Overall Loss 0.202984 Objective Loss 0.202984 LR 0.000500 Time 0.400315 +2025-05-20 16:59:34,748 - Epoch: [42][ 70/ 71] Overall Loss 0.204835 Objective Loss 0.204835 Top1 89.843750 LR 0.000500 Time 0.388214 +2025-05-20 16:59:34,839 - Epoch: [42][ 71/ 71] Overall Loss 0.204517 Objective Loss 0.204517 Top1 89.583333 LR 0.000500 Time 0.384025 +2025-05-20 16:59:34,868 - --- validate (epoch=42)----------- +2025-05-20 16:59:34,869 - 2000 samples (256 per mini-batch) +2025-05-20 16:59:38,231 - Epoch: [42][ 8/ 8] Loss 0.241390 Top1 90.200000 +2025-05-20 16:59:38,266 - ==> Top1: 90.200 Loss: 0.241 + +2025-05-20 16:59:38,266 - ==> Confusion: +[[906 79] + [117 898]] -2023-09-11 13:59:53,631 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 
13:59:53,631 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 13:59:53,633 - - -2023-09-11 13:59:53,634 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 13:59:58,056 - Epoch: [54][ 10/ 71] Overall Loss 0.192717 Objective Loss 0.192717 LR 0.000250 Time 0.442237 -2023-09-11 14:00:00,074 - Epoch: [54][ 20/ 71] Overall Loss 0.205282 Objective Loss 0.205282 LR 0.000250 Time 0.321961 -2023-09-11 14:00:02,519 - Epoch: [54][ 30/ 71] Overall Loss 0.200948 Objective Loss 0.200948 LR 0.000250 Time 0.296144 -2023-09-11 14:00:04,605 - Epoch: [54][ 40/ 71] Overall Loss 0.197583 Objective Loss 0.197583 LR 0.000250 Time 0.274235 -2023-09-11 14:00:07,441 - Epoch: [54][ 50/ 71] Overall Loss 0.196830 Objective Loss 0.196830 LR 0.000250 Time 0.276117 -2023-09-11 14:00:09,456 - Epoch: [54][ 60/ 71] Overall Loss 0.195745 Objective Loss 0.195745 LR 0.000250 Time 0.263675 -2023-09-11 14:00:11,733 - Epoch: [54][ 70/ 71] Overall Loss 0.195905 Objective Loss 0.195905 Top1 92.187500 LR 0.000250 Time 0.258531 -2023-09-11 14:00:11,809 - Epoch: [54][ 71/ 71] Overall Loss 0.195195 Objective Loss 0.195195 Top1 92.559524 LR 0.000250 Time 0.255959 -2023-09-11 14:00:11,901 - --- validate (epoch=54)----------- -2023-09-11 14:00:11,901 - 2000 samples (256 per mini-batch) -2023-09-11 14:00:15,192 - Epoch: [54][ 8/ 8] Loss 0.222322 Top1 90.600000 -2023-09-11 14:00:15,270 - ==> Top1: 90.600 Loss: 0.222 - -2023-09-11 14:00:15,270 - ==> Confusion: -[[874 111] - [ 77 938]] +2025-05-20 16:59:38,284 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 16:59:38,284 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 16:59:38,291 - + +2025-05-20 16:59:38,291 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 16:59:43,276 - Epoch: [43][ 10/ 71] Overall Loss 0.214211 Objective Loss 0.214211 LR 0.000500 Time 0.498471 +2025-05-20 16:59:46,724 - Epoch: [43][ 20/ 71] Overall Loss 
0.205865 Objective Loss 0.205865 LR 0.000500 Time 0.421584 +2025-05-20 16:59:51,648 - Epoch: [43][ 30/ 71] Overall Loss 0.203281 Objective Loss 0.203281 LR 0.000500 Time 0.445183 +2025-05-20 16:59:55,215 - Epoch: [43][ 40/ 71] Overall Loss 0.203822 Objective Loss 0.203822 LR 0.000500 Time 0.423074 +2025-05-20 16:59:59,534 - Epoch: [43][ 50/ 71] Overall Loss 0.206516 Objective Loss 0.206516 LR 0.000500 Time 0.424818 +2025-05-20 17:00:02,474 - Epoch: [43][ 60/ 71] Overall Loss 0.206653 Objective Loss 0.206653 LR 0.000500 Time 0.403024 +2025-05-20 17:00:06,577 - Epoch: [43][ 70/ 71] Overall Loss 0.210223 Objective Loss 0.210223 Top1 88.281250 LR 0.000500 Time 0.404057 +2025-05-20 17:00:06,676 - Epoch: [43][ 71/ 71] Overall Loss 0.210758 Objective Loss 0.210758 Top1 87.797619 LR 0.000500 Time 0.399750 +2025-05-20 17:00:06,716 - --- validate (epoch=43)----------- +2025-05-20 17:00:06,716 - 2000 samples (256 per mini-batch) +2025-05-20 17:00:10,549 - Epoch: [43][ 8/ 8] Loss 0.235727 Top1 89.750000 +2025-05-20 17:00:10,590 - ==> Top1: 89.750 Loss: 0.236 + +2025-05-20 17:00:10,590 - ==> Confusion: +[[896 89] + [116 899]] -2023-09-11 14:00:15,285 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:00:15,285 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:00:15,289 - - -2023-09-11 14:00:15,290 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:00:19,551 - Epoch: [55][ 10/ 71] Overall Loss 0.181343 Objective Loss 0.181343 LR 0.000250 Time 0.426042 -2023-09-11 14:00:22,387 - Epoch: [55][ 20/ 71] Overall Loss 0.189583 Objective Loss 0.189583 LR 0.000250 Time 0.354811 -2023-09-11 14:00:25,094 - Epoch: [55][ 30/ 71] Overall Loss 0.190053 Objective Loss 0.190053 LR 0.000250 Time 0.326791 -2023-09-11 14:00:28,215 - Epoch: [55][ 40/ 71] Overall Loss 0.193305 Objective Loss 0.193305 LR 0.000250 Time 0.323089 -2023-09-11 14:00:30,802 - Epoch: [55][ 50/ 71] Overall Loss 0.194918 Objective Loss 0.194918 
LR 0.000250 Time 0.310222 -2023-09-11 14:00:32,895 - Epoch: [55][ 60/ 71] Overall Loss 0.195689 Objective Loss 0.195689 LR 0.000250 Time 0.293386 -2023-09-11 14:00:35,874 - Epoch: [55][ 70/ 71] Overall Loss 0.197085 Objective Loss 0.197085 Top1 92.968750 LR 0.000250 Time 0.294034 -2023-09-11 14:00:35,955 - Epoch: [55][ 71/ 71] Overall Loss 0.196309 Objective Loss 0.196309 Top1 93.452381 LR 0.000250 Time 0.291030 -2023-09-11 14:00:36,049 - --- validate (epoch=55)----------- -2023-09-11 14:00:36,049 - 2000 samples (256 per mini-batch) -2023-09-11 14:00:39,148 - Epoch: [55][ 8/ 8] Loss 0.227342 Top1 91.000000 -2023-09-11 14:00:39,239 - ==> Top1: 91.000 Loss: 0.227 - -2023-09-11 14:00:39,239 - ==> Confusion: +2025-05-20 17:00:10,607 - ==> Best [Top1: 90.700 Params: 57776 on epoch: 34] +2025-05-20 17:00:10,607 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:00:10,614 - + +2025-05-20 17:00:10,615 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:00:15,665 - Epoch: [44][ 10/ 71] Overall Loss 0.216944 Objective Loss 0.216944 LR 0.000500 Time 0.505006 +2025-05-20 17:00:19,962 - Epoch: [44][ 20/ 71] Overall Loss 0.211323 Objective Loss 0.211323 LR 0.000500 Time 0.467317 +2025-05-20 17:00:22,847 - Epoch: [44][ 30/ 71] Overall Loss 0.213375 Objective Loss 0.213375 LR 0.000500 Time 0.407692 +2025-05-20 17:00:26,686 - Epoch: [44][ 40/ 71] Overall Loss 0.215823 Objective Loss 0.215823 LR 0.000500 Time 0.401748 +2025-05-20 17:00:29,655 - Epoch: [44][ 50/ 71] Overall Loss 0.210949 Objective Loss 0.210949 LR 0.000500 Time 0.380762 +2025-05-20 17:00:33,010 - Epoch: [44][ 60/ 71] Overall Loss 0.209470 Objective Loss 0.209470 LR 0.000500 Time 0.373224 +2025-05-20 17:00:35,863 - Epoch: [44][ 70/ 71] Overall Loss 0.210400 Objective Loss 0.210400 Top1 92.968750 LR 0.000500 Time 0.360656 +2025-05-20 17:00:35,953 - Epoch: [44][ 71/ 71] Overall Loss 0.210874 Objective Loss 0.210874 Top1 92.261905 LR 
0.000500 Time 0.356843 +2025-05-20 17:00:35,987 - --- validate (epoch=44)----------- +2025-05-20 17:00:35,987 - 2000 samples (256 per mini-batch) +2025-05-20 17:00:39,951 - Epoch: [44][ 8/ 8] Loss 0.223613 Top1 90.950000 +2025-05-20 17:00:39,986 - ==> Top1: 90.950 Loss: 0.224 + +2025-05-20 17:00:39,986 - ==> Confusion: [[888 97] - [ 83 932]] + [ 84 931]] -2023-09-11 14:00:39,254 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:00:39,254 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:00:39,259 - - -2023-09-11 14:00:39,259 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:00:43,506 - Epoch: [56][ 10/ 71] Overall Loss 0.188112 Objective Loss 0.188112 LR 0.000250 Time 0.424646 -2023-09-11 14:00:45,396 - Epoch: [56][ 20/ 71] Overall Loss 0.194265 Objective Loss 0.194265 LR 0.000250 Time 0.306801 -2023-09-11 14:00:47,928 - Epoch: [56][ 30/ 71] Overall Loss 0.187184 Objective Loss 0.187184 LR 0.000250 Time 0.288910 -2023-09-11 14:00:49,941 - Epoch: [56][ 40/ 71] Overall Loss 0.187814 Objective Loss 0.187814 LR 0.000250 Time 0.267013 -2023-09-11 14:00:52,538 - Epoch: [56][ 50/ 71] Overall Loss 0.190618 Objective Loss 0.190618 LR 0.000250 Time 0.265543 -2023-09-11 14:00:54,614 - Epoch: [56][ 60/ 71] Overall Loss 0.192400 Objective Loss 0.192400 LR 0.000250 Time 0.255884 -2023-09-11 14:00:56,898 - Epoch: [56][ 70/ 71] Overall Loss 0.193970 Objective Loss 0.193970 Top1 91.406250 LR 0.000250 Time 0.251943 -2023-09-11 14:00:56,978 - Epoch: [56][ 71/ 71] Overall Loss 0.193919 Objective Loss 0.193919 Top1 91.666667 LR 0.000250 Time 0.249520 -2023-09-11 14:00:57,086 - --- validate (epoch=56)----------- -2023-09-11 14:00:57,086 - 2000 samples (256 per mini-batch) -2023-09-11 14:01:00,042 - Epoch: [56][ 8/ 8] Loss 0.215892 Top1 91.200000 -2023-09-11 14:01:00,131 - ==> Top1: 91.200 Loss: 0.216 - -2023-09-11 14:01:00,131 - ==> Confusion: -[[868 117] - [ 59 956]] +2025-05-20 17:00:39,998 - ==> Best 
[Top1: 90.950 Params: 57776 on epoch: 44] +2025-05-20 17:00:39,998 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:00:40,006 - + +2025-05-20 17:00:40,006 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:00:44,640 - Epoch: [45][ 10/ 71] Overall Loss 0.214503 Objective Loss 0.214503 LR 0.000500 Time 0.463370 +2025-05-20 17:00:48,439 - Epoch: [45][ 20/ 71] Overall Loss 0.210393 Objective Loss 0.210393 LR 0.000500 Time 0.421618 +2025-05-20 17:00:52,543 - Epoch: [45][ 30/ 71] Overall Loss 0.210999 Objective Loss 0.210999 LR 0.000500 Time 0.417878 +2025-05-20 17:00:56,320 - Epoch: [45][ 40/ 71] Overall Loss 0.208063 Objective Loss 0.208063 LR 0.000500 Time 0.407814 +2025-05-20 17:01:00,212 - Epoch: [45][ 50/ 71] Overall Loss 0.208016 Objective Loss 0.208016 LR 0.000500 Time 0.404075 +2025-05-20 17:01:03,721 - Epoch: [45][ 60/ 71] Overall Loss 0.208929 Objective Loss 0.208929 LR 0.000500 Time 0.395216 +2025-05-20 17:01:07,349 - Epoch: [45][ 70/ 71] Overall Loss 0.207295 Objective Loss 0.207295 Top1 94.531250 LR 0.000500 Time 0.390516 +2025-05-20 17:01:07,449 - Epoch: [45][ 71/ 71] Overall Loss 0.207111 Objective Loss 0.207111 Top1 94.047619 LR 0.000500 Time 0.386421 +2025-05-20 17:01:07,486 - --- validate (epoch=45)----------- +2025-05-20 17:01:07,486 - 2000 samples (256 per mini-batch) +2025-05-20 17:01:10,839 - Epoch: [45][ 8/ 8] Loss 0.225586 Top1 90.550000 +2025-05-20 17:01:10,871 - ==> Top1: 90.550 Loss: 0.226 + +2025-05-20 17:01:10,872 - ==> Confusion: +[[907 78] + [111 904]] -2023-09-11 14:01:00,147 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:01:00,147 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:01:00,152 - - -2023-09-11 14:01:00,152 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:01:04,500 - Epoch: [57][ 10/ 71] Overall Loss 0.193119 Objective Loss 0.193119 LR 0.000250 Time 
0.434778 -2023-09-11 14:01:07,792 - Epoch: [57][ 20/ 71] Overall Loss 0.198759 Objective Loss 0.198759 LR 0.000250 Time 0.381965 -2023-09-11 14:01:09,723 - Epoch: [57][ 30/ 71] Overall Loss 0.196323 Objective Loss 0.196323 LR 0.000250 Time 0.318996 -2023-09-11 14:01:12,277 - Epoch: [57][ 40/ 71] Overall Loss 0.193858 Objective Loss 0.193858 LR 0.000250 Time 0.303091 -2023-09-11 14:01:14,336 - Epoch: [57][ 50/ 71] Overall Loss 0.197934 Objective Loss 0.197934 LR 0.000250 Time 0.283645 -2023-09-11 14:01:17,535 - Epoch: [57][ 60/ 71] Overall Loss 0.196668 Objective Loss 0.196668 LR 0.000250 Time 0.289682 -2023-09-11 14:01:19,452 - Epoch: [57][ 70/ 71] Overall Loss 0.197166 Objective Loss 0.197166 Top1 94.531250 LR 0.000250 Time 0.275683 -2023-09-11 14:01:19,528 - Epoch: [57][ 71/ 71] Overall Loss 0.197062 Objective Loss 0.197062 Top1 93.750000 LR 0.000250 Time 0.272869 -2023-09-11 14:01:19,616 - --- validate (epoch=57)----------- -2023-09-11 14:01:19,616 - 2000 samples (256 per mini-batch) -2023-09-11 14:01:22,600 - Epoch: [57][ 8/ 8] Loss 0.224364 Top1 90.100000 -2023-09-11 14:01:22,696 - ==> Top1: 90.100 Loss: 0.224 - -2023-09-11 14:01:22,697 - ==> Confusion: -[[903 82] - [116 899]] +2025-05-20 17:01:10,887 - ==> Best [Top1: 90.950 Params: 57776 on epoch: 44] +2025-05-20 17:01:10,888 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:01:10,895 - + +2025-05-20 17:01:10,895 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:01:16,258 - Epoch: [46][ 10/ 71] Overall Loss 0.200943 Objective Loss 0.200943 LR 0.000500 Time 0.536237 +2025-05-20 17:01:19,357 - Epoch: [46][ 20/ 71] Overall Loss 0.191744 Objective Loss 0.191744 LR 0.000500 Time 0.423055 +2025-05-20 17:01:24,016 - Epoch: [46][ 30/ 71] Overall Loss 0.196711 Objective Loss 0.196711 LR 0.000500 Time 0.437303 +2025-05-20 17:01:27,342 - Epoch: [46][ 40/ 71] Overall Loss 0.198802 Objective Loss 0.198802 LR 0.000500 Time 0.411136 
+2025-05-20 17:01:31,444 - Epoch: [46][ 50/ 71] Overall Loss 0.200492 Objective Loss 0.200492 LR 0.000500 Time 0.410938 +2025-05-20 17:01:34,977 - Epoch: [46][ 60/ 71] Overall Loss 0.201199 Objective Loss 0.201199 LR 0.000500 Time 0.401317 +2025-05-20 17:01:39,043 - Epoch: [46][ 70/ 71] Overall Loss 0.201177 Objective Loss 0.201177 Top1 88.671875 LR 0.000500 Time 0.402071 +2025-05-20 17:01:39,142 - Epoch: [46][ 71/ 71] Overall Loss 0.200790 Objective Loss 0.200790 Top1 89.880952 LR 0.000500 Time 0.397808 +2025-05-20 17:01:39,173 - --- validate (epoch=46)----------- +2025-05-20 17:01:39,173 - 2000 samples (256 per mini-batch) +2025-05-20 17:01:43,279 - Epoch: [46][ 8/ 8] Loss 0.227682 Top1 90.800000 +2025-05-20 17:01:43,311 - ==> Top1: 90.800 Loss: 0.228 + +2025-05-20 17:01:43,311 - ==> Confusion: +[[923 62] + [122 893]] -2023-09-11 14:01:22,711 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:01:22,712 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:01:22,716 - - -2023-09-11 14:01:22,716 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:01:27,180 - Epoch: [58][ 10/ 71] Overall Loss 0.185140 Objective Loss 0.185140 LR 0.000250 Time 0.446288 -2023-09-11 14:01:29,316 - Epoch: [58][ 20/ 71] Overall Loss 0.190309 Objective Loss 0.190309 LR 0.000250 Time 0.329956 -2023-09-11 14:01:32,131 - Epoch: [58][ 30/ 71] Overall Loss 0.189229 Objective Loss 0.189229 LR 0.000250 Time 0.313793 -2023-09-11 14:01:34,131 - Epoch: [58][ 40/ 71] Overall Loss 0.187631 Objective Loss 0.187631 LR 0.000250 Time 0.285317 -2023-09-11 14:01:36,772 - Epoch: [58][ 50/ 71] Overall Loss 0.190303 Objective Loss 0.190303 LR 0.000250 Time 0.281087 -2023-09-11 14:01:39,015 - Epoch: [58][ 60/ 71] Overall Loss 0.192195 Objective Loss 0.192195 LR 0.000250 Time 0.271606 -2023-09-11 14:01:41,299 - Epoch: [58][ 70/ 71] Overall Loss 0.189657 Objective Loss 0.189657 Top1 89.843750 LR 0.000250 Time 0.265434 -2023-09-11 
14:01:41,385 - Epoch: [58][ 71/ 71] Overall Loss 0.189134 Objective Loss 0.189134 Top1 90.476190 LR 0.000250 Time 0.262905 -2023-09-11 14:01:41,470 - --- validate (epoch=58)----------- -2023-09-11 14:01:41,470 - 2000 samples (256 per mini-batch) -2023-09-11 14:01:44,677 - Epoch: [58][ 8/ 8] Loss 0.210598 Top1 91.000000 -2023-09-11 14:01:44,776 - ==> Top1: 91.000 Loss: 0.211 - -2023-09-11 14:01:44,776 - ==> Confusion: -[[890 95] - [ 85 930]] +2025-05-20 17:01:43,313 - ==> Best [Top1: 90.950 Params: 57776 on epoch: 44] +2025-05-20 17:01:43,313 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:01:43,320 - + +2025-05-20 17:01:43,320 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:01:48,133 - Epoch: [47][ 10/ 71] Overall Loss 0.184349 Objective Loss 0.184349 LR 0.000500 Time 0.481217 +2025-05-20 17:01:53,039 - Epoch: [47][ 20/ 71] Overall Loss 0.193488 Objective Loss 0.193488 LR 0.000500 Time 0.485899 +2025-05-20 17:01:55,759 - Epoch: [47][ 30/ 71] Overall Loss 0.196891 Objective Loss 0.196891 LR 0.000500 Time 0.414587 +2025-05-20 17:02:00,836 - Epoch: [47][ 40/ 71] Overall Loss 0.193894 Objective Loss 0.193894 LR 0.000500 Time 0.437863 +2025-05-20 17:02:03,904 - Epoch: [47][ 50/ 71] Overall Loss 0.192265 Objective Loss 0.192265 LR 0.000500 Time 0.411632 +2025-05-20 17:02:07,490 - Epoch: [47][ 60/ 71] Overall Loss 0.192222 Objective Loss 0.192222 LR 0.000500 Time 0.402795 +2025-05-20 17:02:10,969 - Epoch: [47][ 70/ 71] Overall Loss 0.193303 Objective Loss 0.193303 Top1 90.625000 LR 0.000500 Time 0.394943 +2025-05-20 17:02:11,074 - Epoch: [47][ 71/ 71] Overall Loss 0.192916 Objective Loss 0.192916 Top1 91.071429 LR 0.000500 Time 0.390868 +2025-05-20 17:02:11,106 - --- validate (epoch=47)----------- +2025-05-20 17:02:11,106 - 2000 samples (256 per mini-batch) +2025-05-20 17:02:15,015 - Epoch: [47][ 8/ 8] Loss 0.224960 Top1 90.050000 +2025-05-20 17:02:15,053 - ==> Top1: 90.050 Loss: 
0.225 + +2025-05-20 17:02:15,054 - ==> Confusion: +[[876 109] + [ 90 925]] -2023-09-11 14:01:44,787 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:01:44,787 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:01:44,790 - - -2023-09-11 14:01:44,790 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:01:48,656 - Epoch: [59][ 10/ 71] Overall Loss 0.181874 Objective Loss 0.181874 LR 0.000250 Time 0.386534 -2023-09-11 14:01:50,820 - Epoch: [59][ 20/ 71] Overall Loss 0.191208 Objective Loss 0.191208 LR 0.000250 Time 0.301481 -2023-09-11 14:01:54,057 - Epoch: [59][ 30/ 71] Overall Loss 0.194623 Objective Loss 0.194623 LR 0.000250 Time 0.308873 -2023-09-11 14:01:57,028 - Epoch: [59][ 40/ 71] Overall Loss 0.196966 Objective Loss 0.196966 LR 0.000250 Time 0.305922 -2023-09-11 14:01:59,165 - Epoch: [59][ 50/ 71] Overall Loss 0.195761 Objective Loss 0.195761 LR 0.000250 Time 0.287476 -2023-09-11 14:02:01,963 - Epoch: [59][ 60/ 71] Overall Loss 0.193718 Objective Loss 0.193718 LR 0.000250 Time 0.286193 -2023-09-11 14:02:04,633 - Epoch: [59][ 70/ 71] Overall Loss 0.191707 Objective Loss 0.191707 Top1 92.968750 LR 0.000250 Time 0.283442 -2023-09-11 14:02:04,726 - Epoch: [59][ 71/ 71] Overall Loss 0.192098 Objective Loss 0.192098 Top1 92.857143 LR 0.000250 Time 0.280754 -2023-09-11 14:02:04,814 - --- validate (epoch=59)----------- -2023-09-11 14:02:04,814 - 2000 samples (256 per mini-batch) -2023-09-11 14:02:07,984 - Epoch: [59][ 8/ 8] Loss 0.227912 Top1 90.300000 -2023-09-11 14:02:08,082 - ==> Top1: 90.300 Loss: 0.228 - -2023-09-11 14:02:08,083 - ==> Confusion: -[[918 67] - [127 888]] +2025-05-20 17:02:15,055 - ==> Best [Top1: 90.950 Params: 57776 on epoch: 44] +2025-05-20 17:02:15,056 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:02:15,063 - + +2025-05-20 17:02:15,063 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) 
+2025-05-20 17:02:20,429 - Epoch: [48][ 10/ 71] Overall Loss 0.204024 Objective Loss 0.204024 LR 0.000500 Time 0.536483 +2025-05-20 17:02:23,997 - Epoch: [48][ 20/ 71] Overall Loss 0.211314 Objective Loss 0.211314 LR 0.000500 Time 0.446647 +2025-05-20 17:02:27,899 - Epoch: [48][ 30/ 71] Overall Loss 0.203937 Objective Loss 0.203937 LR 0.000500 Time 0.427833 +2025-05-20 17:02:31,218 - Epoch: [48][ 40/ 71] Overall Loss 0.200600 Objective Loss 0.200600 LR 0.000500 Time 0.403840 +2025-05-20 17:02:35,040 - Epoch: [48][ 50/ 71] Overall Loss 0.200926 Objective Loss 0.200926 LR 0.000500 Time 0.399506 +2025-05-20 17:02:38,617 - Epoch: [48][ 60/ 71] Overall Loss 0.199483 Objective Loss 0.199483 LR 0.000500 Time 0.392526 +2025-05-20 17:02:42,228 - Epoch: [48][ 70/ 71] Overall Loss 0.199598 Objective Loss 0.199598 Top1 92.578125 LR 0.000500 Time 0.388027 +2025-05-20 17:02:42,335 - Epoch: [48][ 71/ 71] Overall Loss 0.200123 Objective Loss 0.200123 Top1 92.261905 LR 0.000500 Time 0.384078 +2025-05-20 17:02:42,366 - --- validate (epoch=48)----------- +2025-05-20 17:02:42,367 - 2000 samples (256 per mini-batch) +2025-05-20 17:02:45,986 - Epoch: [48][ 8/ 8] Loss 0.218080 Top1 90.450000 +2025-05-20 17:02:46,022 - ==> Top1: 90.450 Loss: 0.218 + +2025-05-20 17:02:46,022 - ==> Confusion: +[[892 93] + [ 98 917]] -2023-09-11 14:02:08,089 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:02:08,089 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:02:08,093 - - -2023-09-11 14:02:08,093 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:02:11,911 - Epoch: [60][ 10/ 71] Overall Loss 0.196799 Objective Loss 0.196799 LR 0.000250 Time 0.381696 -2023-09-11 14:02:14,250 - Epoch: [60][ 20/ 71] Overall Loss 0.195302 Objective Loss 0.195302 LR 0.000250 Time 0.307801 -2023-09-11 14:02:16,597 - Epoch: [60][ 30/ 71] Overall Loss 0.196358 Objective Loss 0.196358 LR 0.000250 Time 0.283420 -2023-09-11 14:02:18,776 - Epoch: 
[60][ 40/ 71] Overall Loss 0.193358 Objective Loss 0.193358 LR 0.000250 Time 0.267028 -2023-09-11 14:02:21,727 - Epoch: [60][ 50/ 71] Overall Loss 0.192883 Objective Loss 0.192883 LR 0.000250 Time 0.272638 -2023-09-11 14:02:23,797 - Epoch: [60][ 60/ 71] Overall Loss 0.192364 Objective Loss 0.192364 LR 0.000250 Time 0.261680 -2023-09-11 14:02:26,313 - Epoch: [60][ 70/ 71] Overall Loss 0.189845 Objective Loss 0.189845 Top1 91.015625 LR 0.000250 Time 0.260239 -2023-09-11 14:02:26,390 - Epoch: [60][ 71/ 71] Overall Loss 0.189881 Objective Loss 0.189881 Top1 91.369048 LR 0.000250 Time 0.257655 -2023-09-11 14:02:26,487 - --- validate (epoch=60)----------- -2023-09-11 14:02:26,487 - 2000 samples (256 per mini-batch) -2023-09-11 14:02:29,678 - Epoch: [60][ 8/ 8] Loss 0.230776 Top1 90.300000 -2023-09-11 14:02:29,774 - ==> Top1: 90.300 Loss: 0.231 - -2023-09-11 14:02:29,774 - ==> Confusion: -[[901 84] - [110 905]] - -2023-09-11 14:02:29,789 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:02:29,790 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:02:29,794 - - -2023-09-11 14:02:29,794 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:02:33,312 - Epoch: [61][ 10/ 71] Overall Loss 0.185980 Objective Loss 0.185980 LR 0.000250 Time 0.351710 -2023-09-11 14:02:36,736 - Epoch: [61][ 20/ 71] Overall Loss 0.189905 Objective Loss 0.189905 LR 0.000250 Time 0.347048 -2023-09-11 14:02:39,852 - Epoch: [61][ 30/ 71] Overall Loss 0.191552 Objective Loss 0.191552 LR 0.000250 Time 0.335223 -2023-09-11 14:02:42,526 - Epoch: [61][ 40/ 71] Overall Loss 0.192173 Objective Loss 0.192173 LR 0.000250 Time 0.318258 -2023-09-11 14:02:45,231 - Epoch: [61][ 50/ 71] Overall Loss 0.188376 Objective Loss 0.188376 LR 0.000250 Time 0.308695 -2023-09-11 14:02:47,599 - Epoch: [61][ 60/ 71] Overall Loss 0.188209 Objective Loss 0.188209 LR 0.000250 Time 0.296708 -2023-09-11 14:02:50,259 - Epoch: [61][ 70/ 71] Overall Loss 
0.189804 Objective Loss 0.189804 Top1 91.406250 LR 0.000250 Time 0.292310 -2023-09-11 14:02:50,364 - Epoch: [61][ 71/ 71] Overall Loss 0.188648 Objective Loss 0.188648 Top1 93.154762 LR 0.000250 Time 0.289671 -2023-09-11 14:02:50,469 - --- validate (epoch=61)----------- -2023-09-11 14:02:50,469 - 2000 samples (256 per mini-batch) -2023-09-11 14:02:53,661 - Epoch: [61][ 8/ 8] Loss 0.217436 Top1 90.450000 -2023-09-11 14:02:53,755 - ==> Top1: 90.450 Loss: 0.217 - -2023-09-11 14:02:53,755 - ==> Confusion: -[[889 96] - [ 95 920]] +2025-05-20 17:02:46,038 - ==> Best [Top1: 90.950 Params: 57776 on epoch: 44] +2025-05-20 17:02:46,038 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:02:46,046 - + +2025-05-20 17:02:46,046 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:02:51,233 - Epoch: [49][ 10/ 71] Overall Loss 0.189650 Objective Loss 0.189650 LR 0.000500 Time 0.518631 +2025-05-20 17:02:53,965 - Epoch: [49][ 20/ 71] Overall Loss 0.197128 Objective Loss 0.197128 LR 0.000500 Time 0.395930 +2025-05-20 17:02:57,603 - Epoch: [49][ 30/ 71] Overall Loss 0.204082 Objective Loss 0.204082 LR 0.000500 Time 0.385202 +2025-05-20 17:03:00,853 - Epoch: [49][ 40/ 71] Overall Loss 0.200867 Objective Loss 0.200867 LR 0.000500 Time 0.370133 +2025-05-20 17:03:05,460 - Epoch: [49][ 50/ 71] Overall Loss 0.199433 Objective Loss 0.199433 LR 0.000500 Time 0.388249 +2025-05-20 17:03:09,533 - Epoch: [49][ 60/ 71] Overall Loss 0.200732 Objective Loss 0.200732 LR 0.000500 Time 0.391413 +2025-05-20 17:03:13,377 - Epoch: [49][ 70/ 71] Overall Loss 0.202139 Objective Loss 0.202139 Top1 93.359375 LR 0.000500 Time 0.390412 +2025-05-20 17:03:13,485 - Epoch: [49][ 71/ 71] Overall Loss 0.202864 Objective Loss 0.202864 Top1 92.857143 LR 0.000500 Time 0.386430 +2025-05-20 17:03:13,524 - --- validate (epoch=49)----------- +2025-05-20 17:03:13,524 - 2000 samples (256 per mini-batch) +2025-05-20 17:03:17,440 - Epoch: [49][ 
8/ 8] Loss 0.222540 Top1 91.050000 +2025-05-20 17:03:17,470 - ==> Top1: 91.050 Loss: 0.223 + +2025-05-20 17:03:17,471 - ==> Confusion: +[[876 109] + [ 70 945]] -2023-09-11 14:02:53,771 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:02:53,771 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:02:53,774 - - -2023-09-11 14:02:53,774 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:02:58,019 - Epoch: [62][ 10/ 71] Overall Loss 0.176550 Objective Loss 0.176550 LR 0.000250 Time 0.424490 -2023-09-11 14:03:00,083 - Epoch: [62][ 20/ 71] Overall Loss 0.184256 Objective Loss 0.184256 LR 0.000250 Time 0.315451 -2023-09-11 14:03:02,722 - Epoch: [62][ 30/ 71] Overall Loss 0.187185 Objective Loss 0.187185 LR 0.000250 Time 0.298230 -2023-09-11 14:03:05,033 - Epoch: [62][ 40/ 71] Overall Loss 0.189391 Objective Loss 0.189391 LR 0.000250 Time 0.281460 -2023-09-11 14:03:08,259 - Epoch: [62][ 50/ 71] Overall Loss 0.189735 Objective Loss 0.189735 LR 0.000250 Time 0.289663 -2023-09-11 14:03:10,897 - Epoch: [62][ 60/ 71] Overall Loss 0.190345 Objective Loss 0.190345 LR 0.000250 Time 0.285361 -2023-09-11 14:03:13,536 - Epoch: [62][ 70/ 71] Overall Loss 0.192127 Objective Loss 0.192127 Top1 90.234375 LR 0.000250 Time 0.282289 -2023-09-11 14:03:13,584 - Epoch: [62][ 71/ 71] Overall Loss 0.192368 Objective Loss 0.192368 Top1 90.476190 LR 0.000250 Time 0.278983 -2023-09-11 14:03:13,677 - --- validate (epoch=62)----------- -2023-09-11 14:03:13,677 - 2000 samples (256 per mini-batch) -2023-09-11 14:03:16,782 - Epoch: [62][ 8/ 8] Loss 0.213627 Top1 91.300000 -2023-09-11 14:03:16,878 - ==> Top1: 91.300 Loss: 0.214 - -2023-09-11 14:03:16,878 - ==> Confusion: -[[918 67] - [107 908]] +2025-05-20 17:03:17,486 - ==> Best [Top1: 91.050 Params: 57776 on epoch: 49] +2025-05-20 17:03:17,486 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:03:17,494 - + +2025-05-20 
17:03:17,495 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:03:22,505 - Epoch: [50][ 10/ 71] Overall Loss 0.206462 Objective Loss 0.206462 LR 0.000250 Time 0.501015 +2025-05-20 17:03:26,219 - Epoch: [50][ 20/ 71] Overall Loss 0.200343 Objective Loss 0.200343 LR 0.000250 Time 0.436164 +2025-05-20 17:03:30,229 - Epoch: [50][ 30/ 71] Overall Loss 0.194565 Objective Loss 0.194565 LR 0.000250 Time 0.424449 +2025-05-20 17:03:35,198 - Epoch: [50][ 40/ 71] Overall Loss 0.189533 Objective Loss 0.189533 LR 0.000250 Time 0.442534 +2025-05-20 17:03:38,788 - Epoch: [50][ 50/ 71] Overall Loss 0.186872 Objective Loss 0.186872 LR 0.000250 Time 0.425826 +2025-05-20 17:03:42,538 - Epoch: [50][ 60/ 71] Overall Loss 0.186347 Objective Loss 0.186347 LR 0.000250 Time 0.417352 +2025-05-20 17:03:45,883 - Epoch: [50][ 70/ 71] Overall Loss 0.186709 Objective Loss 0.186709 Top1 94.140625 LR 0.000250 Time 0.405514 +2025-05-20 17:03:45,991 - Epoch: [50][ 71/ 71] Overall Loss 0.186815 Objective Loss 0.186815 Top1 93.452381 LR 0.000250 Time 0.401316 +2025-05-20 17:03:46,023 - --- validate (epoch=50)----------- +2025-05-20 17:03:46,023 - 2000 samples (256 per mini-batch) +2025-05-20 17:03:49,252 - Epoch: [50][ 8/ 8] Loss 0.198955 Top1 92.300000 +2025-05-20 17:03:49,288 - ==> Top1: 92.300 Loss: 0.199 + +2025-05-20 17:03:49,289 - ==> Confusion: +[[898 87] + [ 67 948]] -2023-09-11 14:03:16,893 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:03:16,893 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:03:16,896 - - -2023-09-11 14:03:16,896 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:03:21,585 - Epoch: [63][ 10/ 71] Overall Loss 0.192886 Objective Loss 0.192886 LR 0.000250 Time 0.468886 -2023-09-11 14:03:23,561 - Epoch: [63][ 20/ 71] Overall Loss 0.200825 Objective Loss 0.200825 LR 0.000250 Time 0.333210 -2023-09-11 14:03:26,568 - Epoch: [63][ 30/ 71] Overall Loss 0.199846 
Objective Loss 0.199846 LR 0.000250 Time 0.322354 -2023-09-11 14:03:28,598 - Epoch: [63][ 40/ 71] Overall Loss 0.195114 Objective Loss 0.195114 LR 0.000250 Time 0.292505 -2023-09-11 14:03:31,504 - Epoch: [63][ 50/ 71] Overall Loss 0.197300 Objective Loss 0.197300 LR 0.000250 Time 0.292130 -2023-09-11 14:03:35,354 - Epoch: [63][ 60/ 71] Overall Loss 0.196549 Objective Loss 0.196549 LR 0.000250 Time 0.307599 -2023-09-11 14:03:37,242 - Epoch: [63][ 70/ 71] Overall Loss 0.194097 Objective Loss 0.194097 Top1 92.968750 LR 0.000250 Time 0.290625 -2023-09-11 14:03:37,363 - Epoch: [63][ 71/ 71] Overall Loss 0.193604 Objective Loss 0.193604 Top1 93.154762 LR 0.000250 Time 0.288230 -2023-09-11 14:03:37,457 - --- validate (epoch=63)----------- -2023-09-11 14:03:37,457 - 2000 samples (256 per mini-batch) -2023-09-11 14:03:40,319 - Epoch: [63][ 8/ 8] Loss 0.214302 Top1 90.650000 -2023-09-11 14:03:40,411 - ==> Top1: 90.650 Loss: 0.214 - -2023-09-11 14:03:40,411 - ==> Confusion: -[[879 106] - [ 81 934]] +2025-05-20 17:03:49,304 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:03:49,304 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:03:49,312 - + +2025-05-20 17:03:49,312 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:03:55,248 - Epoch: [51][ 10/ 71] Overall Loss 0.175480 Objective Loss 0.175480 LR 0.000250 Time 0.593525 +2025-05-20 17:03:57,968 - Epoch: [51][ 20/ 71] Overall Loss 0.183423 Objective Loss 0.183423 LR 0.000250 Time 0.432763 +2025-05-20 17:04:01,660 - Epoch: [51][ 30/ 71] Overall Loss 0.187487 Objective Loss 0.187487 LR 0.000250 Time 0.411564 +2025-05-20 17:04:06,896 - Epoch: [51][ 40/ 71] Overall Loss 0.191714 Objective Loss 0.191714 LR 0.000250 Time 0.439545 +2025-05-20 17:04:10,424 - Epoch: [51][ 50/ 71] Overall Loss 0.190370 Objective Loss 0.190370 LR 0.000250 Time 0.422200 +2025-05-20 17:04:13,887 - Epoch: [51][ 60/ 71] Overall Loss 0.190645 Objective 
Loss 0.190645 LR 0.000250 Time 0.409544 +2025-05-20 17:04:18,013 - Epoch: [51][ 70/ 71] Overall Loss 0.190293 Objective Loss 0.190293 Top1 91.406250 LR 0.000250 Time 0.409972 +2025-05-20 17:04:18,108 - Epoch: [51][ 71/ 71] Overall Loss 0.191590 Objective Loss 0.191590 Top1 91.666667 LR 0.000250 Time 0.405533 +2025-05-20 17:04:18,140 - --- validate (epoch=51)----------- +2025-05-20 17:04:18,140 - 2000 samples (256 per mini-batch) +2025-05-20 17:04:21,395 - Epoch: [51][ 8/ 8] Loss 0.228850 Top1 90.750000 +2025-05-20 17:04:21,424 - ==> Top1: 90.750 Loss: 0.229 + +2025-05-20 17:04:21,425 - ==> Confusion: +[[930 55] + [130 885]] -2023-09-11 14:03:40,427 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:03:40,427 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:03:40,429 - - -2023-09-11 14:03:40,429 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:03:43,974 - Epoch: [64][ 10/ 71] Overall Loss 0.178083 Objective Loss 0.178083 LR 0.000250 Time 0.354384 -2023-09-11 14:03:46,494 - Epoch: [64][ 20/ 71] Overall Loss 0.190776 Objective Loss 0.190776 LR 0.000250 Time 0.303183 -2023-09-11 14:03:49,460 - Epoch: [64][ 30/ 71] Overall Loss 0.192779 Objective Loss 0.192779 LR 0.000250 Time 0.300996 -2023-09-11 14:03:51,928 - Epoch: [64][ 40/ 71] Overall Loss 0.194145 Objective Loss 0.194145 LR 0.000250 Time 0.287430 -2023-09-11 14:03:55,028 - Epoch: [64][ 50/ 71] Overall Loss 0.195180 Objective Loss 0.195180 LR 0.000250 Time 0.291932 -2023-09-11 14:03:57,366 - Epoch: [64][ 60/ 71] Overall Loss 0.196967 Objective Loss 0.196967 LR 0.000250 Time 0.282236 -2023-09-11 14:04:00,185 - Epoch: [64][ 70/ 71] Overall Loss 0.194432 Objective Loss 0.194432 Top1 94.531250 LR 0.000250 Time 0.282189 -2023-09-11 14:04:00,264 - Epoch: [64][ 71/ 71] Overall Loss 0.193364 Objective Loss 0.193364 Top1 94.642857 LR 0.000250 Time 0.279333 -2023-09-11 14:04:00,359 - --- validate (epoch=64)----------- -2023-09-11 
14:04:00,359 - 2000 samples (256 per mini-batch) -2023-09-11 14:04:03,475 - Epoch: [64][ 8/ 8] Loss 0.218896 Top1 90.700000 -2023-09-11 14:04:03,571 - ==> Top1: 90.700 Loss: 0.219 - -2023-09-11 14:04:03,571 - ==> Confusion: -[[903 82] - [104 911]] - -2023-09-11 14:04:03,586 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:04:03,586 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:04:03,590 - - -2023-09-11 14:04:03,591 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:04:07,365 - Epoch: [65][ 10/ 71] Overall Loss 0.197273 Objective Loss 0.197273 LR 0.000250 Time 0.377425 -2023-09-11 14:04:09,511 - Epoch: [65][ 20/ 71] Overall Loss 0.193656 Objective Loss 0.193656 LR 0.000250 Time 0.295990 -2023-09-11 14:04:12,290 - Epoch: [65][ 30/ 71] Overall Loss 0.192070 Objective Loss 0.192070 LR 0.000250 Time 0.289922 -2023-09-11 14:04:15,092 - Epoch: [65][ 40/ 71] Overall Loss 0.187438 Objective Loss 0.187438 LR 0.000250 Time 0.287499 -2023-09-11 14:04:18,183 - Epoch: [65][ 50/ 71] Overall Loss 0.186559 Objective Loss 0.186559 LR 0.000250 Time 0.291808 -2023-09-11 14:04:20,842 - Epoch: [65][ 60/ 71] Overall Loss 0.184826 Objective Loss 0.184826 LR 0.000250 Time 0.287490 -2023-09-11 14:04:24,052 - Epoch: [65][ 70/ 71] Overall Loss 0.184689 Objective Loss 0.184689 Top1 94.531250 LR 0.000250 Time 0.292276 -2023-09-11 14:04:24,155 - Epoch: [65][ 71/ 71] Overall Loss 0.184547 Objective Loss 0.184547 Top1 94.642857 LR 0.000250 Time 0.289608 -2023-09-11 14:04:24,247 - --- validate (epoch=65)----------- -2023-09-11 14:04:24,247 - 2000 samples (256 per mini-batch) -2023-09-11 14:04:26,748 - Epoch: [65][ 8/ 8] Loss 0.221507 Top1 90.200000 -2023-09-11 14:04:26,843 - ==> Top1: 90.200 Loss: 0.222 - -2023-09-11 14:04:26,843 - ==> Confusion: -[[875 110] - [ 86 929]] +2025-05-20 17:04:21,428 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:04:21,428 - Saving checkpoint to: 
logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:04:21,436 - + +2025-05-20 17:04:21,436 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:04:27,596 - Epoch: [52][ 10/ 71] Overall Loss 0.180719 Objective Loss 0.180719 LR 0.000250 Time 0.615935 +2025-05-20 17:04:30,633 - Epoch: [52][ 20/ 71] Overall Loss 0.179252 Objective Loss 0.179252 LR 0.000250 Time 0.459793 +2025-05-20 17:04:35,037 - Epoch: [52][ 30/ 71] Overall Loss 0.182196 Objective Loss 0.182196 LR 0.000250 Time 0.453322 +2025-05-20 17:04:38,250 - Epoch: [52][ 40/ 71] Overall Loss 0.181113 Objective Loss 0.181113 LR 0.000250 Time 0.420322 +2025-05-20 17:04:42,687 - Epoch: [52][ 50/ 71] Overall Loss 0.180423 Objective Loss 0.180423 LR 0.000250 Time 0.424993 +2025-05-20 17:04:45,623 - Epoch: [52][ 60/ 71] Overall Loss 0.181317 Objective Loss 0.181317 LR 0.000250 Time 0.403076 +2025-05-20 17:04:49,660 - Epoch: [52][ 70/ 71] Overall Loss 0.182178 Objective Loss 0.182178 Top1 93.359375 LR 0.000250 Time 0.403165 +2025-05-20 17:04:49,754 - Epoch: [52][ 71/ 71] Overall Loss 0.182441 Objective Loss 0.182441 Top1 92.261905 LR 0.000250 Time 0.398814 +2025-05-20 17:04:49,791 - --- validate (epoch=52)----------- +2025-05-20 17:04:49,791 - 2000 samples (256 per mini-batch) +2025-05-20 17:04:53,135 - Epoch: [52][ 8/ 8] Loss 0.230852 Top1 90.200000 +2025-05-20 17:04:53,169 - ==> Top1: 90.200 Loss: 0.231 + +2025-05-20 17:04:53,169 - ==> Confusion: +[[862 123] + [ 73 942]] -2023-09-11 14:04:26,860 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:04:26,860 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:04:26,862 - - -2023-09-11 14:04:26,862 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:04:31,177 - Epoch: [66][ 10/ 71] Overall Loss 0.185086 Objective Loss 0.185086 LR 0.000250 Time 0.431414 -2023-09-11 14:04:33,167 - Epoch: [66][ 20/ 71] Overall Loss 0.188280 Objective Loss 
0.188280 LR 0.000250 Time 0.315179 -2023-09-11 14:04:36,052 - Epoch: [66][ 30/ 71] Overall Loss 0.191589 Objective Loss 0.191589 LR 0.000250 Time 0.306291 -2023-09-11 14:04:38,174 - Epoch: [66][ 40/ 71] Overall Loss 0.188799 Objective Loss 0.188799 LR 0.000250 Time 0.282753 -2023-09-11 14:04:41,067 - Epoch: [66][ 50/ 71] Overall Loss 0.191133 Objective Loss 0.191133 LR 0.000250 Time 0.284062 -2023-09-11 14:04:43,273 - Epoch: [66][ 60/ 71] Overall Loss 0.190130 Objective Loss 0.190130 LR 0.000250 Time 0.273475 -2023-09-11 14:04:45,628 - Epoch: [66][ 70/ 71] Overall Loss 0.192004 Objective Loss 0.192004 Top1 92.968750 LR 0.000250 Time 0.268048 -2023-09-11 14:04:45,706 - Epoch: [66][ 71/ 71] Overall Loss 0.191559 Objective Loss 0.191559 Top1 92.857143 LR 0.000250 Time 0.265362 -2023-09-11 14:04:45,794 - --- validate (epoch=66)----------- -2023-09-11 14:04:45,794 - 2000 samples (256 per mini-batch) -2023-09-11 14:04:48,218 - Epoch: [66][ 8/ 8] Loss 0.210423 Top1 90.900000 -2023-09-11 14:04:48,318 - ==> Top1: 90.900 Loss: 0.210 - -2023-09-11 14:04:48,318 - ==> Confusion: -[[879 106] +2025-05-20 17:04:53,185 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:04:53,185 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:04:53,193 - + +2025-05-20 17:04:53,193 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:04:59,021 - Epoch: [53][ 10/ 71] Overall Loss 0.184918 Objective Loss 0.184918 LR 0.000250 Time 0.582801 +2025-05-20 17:05:02,367 - Epoch: [53][ 20/ 71] Overall Loss 0.187864 Objective Loss 0.187864 LR 0.000250 Time 0.458661 +2025-05-20 17:05:06,811 - Epoch: [53][ 30/ 71] Overall Loss 0.187767 Objective Loss 0.187767 LR 0.000250 Time 0.453896 +2025-05-20 17:05:10,484 - Epoch: [53][ 40/ 71] Overall Loss 0.185592 Objective Loss 0.185592 LR 0.000250 Time 0.432239 +2025-05-20 17:05:14,920 - Epoch: [53][ 50/ 71] Overall Loss 0.188603 Objective Loss 0.188603 LR 0.000250 
Time 0.434510 +2025-05-20 17:05:17,925 - Epoch: [53][ 60/ 71] Overall Loss 0.189365 Objective Loss 0.189365 LR 0.000250 Time 0.412163 +2025-05-20 17:05:21,884 - Epoch: [53][ 70/ 71] Overall Loss 0.187011 Objective Loss 0.187011 Top1 92.968750 LR 0.000250 Time 0.409845 +2025-05-20 17:05:21,978 - Epoch: [53][ 71/ 71] Overall Loss 0.186982 Objective Loss 0.186982 Top1 93.154762 LR 0.000250 Time 0.405388 +2025-05-20 17:05:22,008 - --- validate (epoch=53)----------- +2025-05-20 17:05:22,008 - 2000 samples (256 per mini-batch) +2025-05-20 17:05:25,601 - Epoch: [53][ 8/ 8] Loss 0.211053 Top1 92.000000 +2025-05-20 17:05:25,642 - ==> Top1: 92.000 Loss: 0.211 + +2025-05-20 17:05:25,642 - ==> Confusion: +[[901 84] [ 76 939]] -2023-09-11 14:04:48,334 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:04:48,334 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:04:48,339 - - -2023-09-11 14:04:48,339 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:04:52,976 - Epoch: [67][ 10/ 71] Overall Loss 0.172711 Objective Loss 0.172711 LR 0.000250 Time 0.463609 -2023-09-11 14:04:55,342 - Epoch: [67][ 20/ 71] Overall Loss 0.182210 Objective Loss 0.182210 LR 0.000250 Time 0.350121 -2023-09-11 14:04:57,869 - Epoch: [67][ 30/ 71] Overall Loss 0.187123 Objective Loss 0.187123 LR 0.000250 Time 0.317634 -2023-09-11 14:05:00,616 - Epoch: [67][ 40/ 71] Overall Loss 0.183886 Objective Loss 0.183886 LR 0.000250 Time 0.306884 -2023-09-11 14:05:03,485 - Epoch: [67][ 50/ 71] Overall Loss 0.185864 Objective Loss 0.185864 LR 0.000250 Time 0.302877 -2023-09-11 14:05:06,543 - Epoch: [67][ 60/ 71] Overall Loss 0.186490 Objective Loss 0.186490 LR 0.000250 Time 0.303358 -2023-09-11 14:05:08,893 - Epoch: [67][ 70/ 71] Overall Loss 0.185087 Objective Loss 0.185087 Top1 90.234375 LR 0.000250 Time 0.293596 -2023-09-11 14:05:08,977 - Epoch: [67][ 71/ 71] Overall Loss 0.185901 Objective Loss 0.185901 Top1 90.178571 LR 0.000250 Time 
0.290630 -2023-09-11 14:05:09,073 - --- validate (epoch=67)----------- -2023-09-11 14:05:09,074 - 2000 samples (256 per mini-batch) -2023-09-11 14:05:12,161 - Epoch: [67][ 8/ 8] Loss 0.208232 Top1 91.600000 -2023-09-11 14:05:12,260 - ==> Top1: 91.600 Loss: 0.208 - -2023-09-11 14:05:12,261 - ==> Confusion: +2025-05-20 17:05:25,658 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:05:25,658 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:05:25,665 - + +2025-05-20 17:05:25,665 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:05:30,836 - Epoch: [54][ 10/ 71] Overall Loss 0.168584 Objective Loss 0.168584 LR 0.000250 Time 0.516951 +2025-05-20 17:05:33,518 - Epoch: [54][ 20/ 71] Overall Loss 0.182086 Objective Loss 0.182086 LR 0.000250 Time 0.392555 +2025-05-20 17:05:37,064 - Epoch: [54][ 30/ 71] Overall Loss 0.187779 Objective Loss 0.187779 LR 0.000250 Time 0.379895 +2025-05-20 17:05:40,813 - Epoch: [54][ 40/ 71] Overall Loss 0.182498 Objective Loss 0.182498 LR 0.000250 Time 0.378654 +2025-05-20 17:05:44,195 - Epoch: [54][ 50/ 71] Overall Loss 0.182124 Objective Loss 0.182124 LR 0.000250 Time 0.370548 +2025-05-20 17:05:47,685 - Epoch: [54][ 60/ 71] Overall Loss 0.181451 Objective Loss 0.181451 LR 0.000250 Time 0.366947 +2025-05-20 17:05:50,860 - Epoch: [54][ 70/ 71] Overall Loss 0.182870 Objective Loss 0.182870 Top1 91.796875 LR 0.000250 Time 0.359887 +2025-05-20 17:05:50,969 - Epoch: [54][ 71/ 71] Overall Loss 0.183675 Objective Loss 0.183675 Top1 91.666667 LR 0.000250 Time 0.356352 +2025-05-20 17:05:51,001 - --- validate (epoch=54)----------- +2025-05-20 17:05:51,001 - 2000 samples (256 per mini-batch) +2025-05-20 17:05:54,533 - Epoch: [54][ 8/ 8] Loss 0.217067 Top1 91.100000 +2025-05-20 17:05:54,570 - ==> Top1: 91.100 Loss: 0.217 + +2025-05-20 17:05:54,570 - ==> Confusion: +[[878 107] + [ 71 944]] + +2025-05-20 17:05:54,586 - ==> Best [Top1: 92.300 Params: 
57776 on epoch: 50] +2025-05-20 17:05:54,587 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:05:54,594 - + +2025-05-20 17:05:54,594 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:06:00,870 - Epoch: [55][ 10/ 71] Overall Loss 0.184379 Objective Loss 0.184379 LR 0.000250 Time 0.627506 +2025-05-20 17:06:03,735 - Epoch: [55][ 20/ 71] Overall Loss 0.176055 Objective Loss 0.176055 LR 0.000250 Time 0.456991 +2025-05-20 17:06:07,834 - Epoch: [55][ 30/ 71] Overall Loss 0.176157 Objective Loss 0.176157 LR 0.000250 Time 0.441302 +2025-05-20 17:06:10,875 - Epoch: [55][ 40/ 71] Overall Loss 0.174458 Objective Loss 0.174458 LR 0.000250 Time 0.406978 +2025-05-20 17:06:14,863 - Epoch: [55][ 50/ 71] Overall Loss 0.176752 Objective Loss 0.176752 LR 0.000250 Time 0.405351 +2025-05-20 17:06:19,433 - Epoch: [55][ 60/ 71] Overall Loss 0.179419 Objective Loss 0.179419 LR 0.000250 Time 0.413951 +2025-05-20 17:06:22,891 - Epoch: [55][ 70/ 71] Overall Loss 0.179399 Objective Loss 0.179399 Top1 93.359375 LR 0.000250 Time 0.404207 +2025-05-20 17:06:22,982 - Epoch: [55][ 71/ 71] Overall Loss 0.178523 Objective Loss 0.178523 Top1 94.047619 LR 0.000250 Time 0.399801 +2025-05-20 17:06:23,017 - --- validate (epoch=55)----------- +2025-05-20 17:06:23,017 - 2000 samples (256 per mini-batch) +2025-05-20 17:06:26,268 - Epoch: [55][ 8/ 8] Loss 0.228530 Top1 90.200000 +2025-05-20 17:06:26,306 - ==> Top1: 90.200 Loss: 0.229 + +2025-05-20 17:06:26,306 - ==> Confusion: [[917 68] - [100 915]] + [128 887]] + +2025-05-20 17:06:26,321 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:06:26,321 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:06:26,328 - + +2025-05-20 17:06:26,328 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:06:31,089 - Epoch: [56][ 10/ 71] Overall Loss 0.201010 Objective Loss 0.201010 LR 
0.000250 Time 0.476051 +2025-05-20 17:06:34,165 - Epoch: [56][ 20/ 71] Overall Loss 0.200926 Objective Loss 0.200926 LR 0.000250 Time 0.391793 +2025-05-20 17:06:37,528 - Epoch: [56][ 30/ 71] Overall Loss 0.196490 Objective Loss 0.196490 LR 0.000250 Time 0.373284 +2025-05-20 17:06:42,320 - Epoch: [56][ 40/ 71] Overall Loss 0.192263 Objective Loss 0.192263 LR 0.000250 Time 0.399761 +2025-05-20 17:06:45,914 - Epoch: [56][ 50/ 71] Overall Loss 0.191549 Objective Loss 0.191549 LR 0.000250 Time 0.391674 +2025-05-20 17:06:49,534 - Epoch: [56][ 60/ 71] Overall Loss 0.190579 Objective Loss 0.190579 LR 0.000250 Time 0.386718 +2025-05-20 17:06:52,721 - Epoch: [56][ 70/ 71] Overall Loss 0.188029 Objective Loss 0.188029 Top1 91.796875 LR 0.000250 Time 0.377007 +2025-05-20 17:06:52,816 - Epoch: [56][ 71/ 71] Overall Loss 0.187884 Objective Loss 0.187884 Top1 92.559524 LR 0.000250 Time 0.373031 +2025-05-20 17:06:52,847 - --- validate (epoch=56)----------- +2025-05-20 17:06:52,847 - 2000 samples (256 per mini-batch) +2025-05-20 17:06:56,788 - Epoch: [56][ 8/ 8] Loss 0.202029 Top1 91.150000 +2025-05-20 17:06:56,828 - ==> Top1: 91.150 Loss: 0.202 + +2025-05-20 17:06:56,829 - ==> Confusion: +[[906 79] + [ 98 917]] + +2025-05-20 17:06:56,845 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:06:56,845 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:06:56,852 - + +2025-05-20 17:06:56,852 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:07:02,288 - Epoch: [57][ 10/ 71] Overall Loss 0.175481 Objective Loss 0.175481 LR 0.000250 Time 0.543538 +2025-05-20 17:07:05,754 - Epoch: [57][ 20/ 71] Overall Loss 0.167972 Objective Loss 0.167972 LR 0.000250 Time 0.445023 +2025-05-20 17:07:09,313 - Epoch: [57][ 30/ 71] Overall Loss 0.176895 Objective Loss 0.176895 LR 0.000250 Time 0.415329 +2025-05-20 17:07:13,079 - Epoch: [57][ 40/ 71] Overall Loss 0.174227 Objective Loss 0.174227 LR 0.000250 
Time 0.405638 +2025-05-20 17:07:16,143 - Epoch: [57][ 50/ 71] Overall Loss 0.174435 Objective Loss 0.174435 LR 0.000250 Time 0.385777 +2025-05-20 17:07:19,601 - Epoch: [57][ 60/ 71] Overall Loss 0.175837 Objective Loss 0.175837 LR 0.000250 Time 0.379113 +2025-05-20 17:07:22,745 - Epoch: [57][ 70/ 71] Overall Loss 0.178241 Objective Loss 0.178241 Top1 91.015625 LR 0.000250 Time 0.369865 +2025-05-20 17:07:22,840 - Epoch: [57][ 71/ 71] Overall Loss 0.177944 Objective Loss 0.177944 Top1 91.964286 LR 0.000250 Time 0.365987 +2025-05-20 17:07:22,872 - --- validate (epoch=57)----------- +2025-05-20 17:07:22,873 - 2000 samples (256 per mini-batch) +2025-05-20 17:07:26,910 - Epoch: [57][ 8/ 8] Loss 0.205683 Top1 91.500000 +2025-05-20 17:07:26,949 - ==> Top1: 91.500 Loss: 0.206 + +2025-05-20 17:07:26,950 - ==> Confusion: +[[903 82] + [ 88 927]] -2023-09-11 14:05:12,276 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:05:12,276 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:05:12,278 - - -2023-09-11 14:05:12,278 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:05:16,093 - Epoch: [68][ 10/ 71] Overall Loss 0.198097 Objective Loss 0.198097 LR 0.000250 Time 0.381399 -2023-09-11 14:05:19,245 - Epoch: [68][ 20/ 71] Overall Loss 0.194152 Objective Loss 0.194152 LR 0.000250 Time 0.348302 -2023-09-11 14:05:21,266 - Epoch: [68][ 30/ 71] Overall Loss 0.194069 Objective Loss 0.194069 LR 0.000250 Time 0.299542 -2023-09-11 14:05:25,068 - Epoch: [68][ 40/ 71] Overall Loss 0.190730 Objective Loss 0.190730 LR 0.000250 Time 0.319695 -2023-09-11 14:05:27,275 - Epoch: [68][ 50/ 71] Overall Loss 0.190853 Objective Loss 0.190853 LR 0.000250 Time 0.299901 -2023-09-11 14:05:30,808 - Epoch: [68][ 60/ 71] Overall Loss 0.192491 Objective Loss 0.192491 LR 0.000250 Time 0.308786 -2023-09-11 14:05:32,745 - Epoch: [68][ 70/ 71] Overall Loss 0.191531 Objective Loss 0.191531 Top1 91.015625 LR 0.000250 Time 0.292353 
-2023-09-11 14:05:32,825 - Epoch: [68][ 71/ 71] Overall Loss 0.191434 Objective Loss 0.191434 Top1 91.369048 LR 0.000250 Time 0.289357 -2023-09-11 14:05:32,916 - --- validate (epoch=68)----------- -2023-09-11 14:05:32,916 - 2000 samples (256 per mini-batch) -2023-09-11 14:05:35,337 - Epoch: [68][ 8/ 8] Loss 0.196833 Top1 91.200000 -2023-09-11 14:05:35,430 - ==> Top1: 91.200 Loss: 0.197 - -2023-09-11 14:05:35,430 - ==> Confusion: +2025-05-20 17:07:26,953 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:07:26,953 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:07:26,961 - + +2025-05-20 17:07:26,961 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:07:33,258 - Epoch: [58][ 10/ 71] Overall Loss 0.174391 Objective Loss 0.174391 LR 0.000250 Time 0.629661 +2025-05-20 17:07:36,545 - Epoch: [58][ 20/ 71] Overall Loss 0.172814 Objective Loss 0.172814 LR 0.000250 Time 0.479161 +2025-05-20 17:07:40,500 - Epoch: [58][ 30/ 71] Overall Loss 0.181090 Objective Loss 0.181090 LR 0.000250 Time 0.451250 +2025-05-20 17:07:43,950 - Epoch: [58][ 40/ 71] Overall Loss 0.181203 Objective Loss 0.181203 LR 0.000250 Time 0.424677 +2025-05-20 17:07:47,638 - Epoch: [58][ 50/ 71] Overall Loss 0.183263 Objective Loss 0.183263 LR 0.000250 Time 0.413507 +2025-05-20 17:07:50,968 - Epoch: [58][ 60/ 71] Overall Loss 0.182521 Objective Loss 0.182521 LR 0.000250 Time 0.400083 +2025-05-20 17:07:54,446 - Epoch: [58][ 70/ 71] Overall Loss 0.181383 Objective Loss 0.181383 Top1 91.796875 LR 0.000250 Time 0.392607 +2025-05-20 17:07:54,551 - Epoch: [58][ 71/ 71] Overall Loss 0.181948 Objective Loss 0.181948 Top1 91.964286 LR 0.000250 Time 0.388554 +2025-05-20 17:07:54,586 - --- validate (epoch=58)----------- +2025-05-20 17:07:54,587 - 2000 samples (256 per mini-batch) +2025-05-20 17:07:58,169 - Epoch: [58][ 8/ 8] Loss 0.230448 Top1 90.750000 +2025-05-20 17:07:58,202 - ==> Top1: 90.750 Loss: 0.230 + 
+2025-05-20 17:07:58,202 - ==> Confusion: +[[850 135] + [ 50 965]] + +2025-05-20 17:07:58,217 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:07:58,217 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:07:58,225 - + +2025-05-20 17:07:58,225 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:08:04,448 - Epoch: [59][ 10/ 71] Overall Loss 0.179840 Objective Loss 0.179840 LR 0.000250 Time 0.622297 +2025-05-20 17:08:07,137 - Epoch: [59][ 20/ 71] Overall Loss 0.180868 Objective Loss 0.180868 LR 0.000250 Time 0.445533 +2025-05-20 17:08:10,897 - Epoch: [59][ 30/ 71] Overall Loss 0.176709 Objective Loss 0.176709 LR 0.000250 Time 0.422357 +2025-05-20 17:08:14,717 - Epoch: [59][ 40/ 71] Overall Loss 0.175788 Objective Loss 0.175788 LR 0.000250 Time 0.412259 +2025-05-20 17:08:17,738 - Epoch: [59][ 50/ 71] Overall Loss 0.175247 Objective Loss 0.175247 LR 0.000250 Time 0.390223 +2025-05-20 17:08:21,664 - Epoch: [59][ 60/ 71] Overall Loss 0.177264 Objective Loss 0.177264 LR 0.000250 Time 0.390623 +2025-05-20 17:08:24,774 - Epoch: [59][ 70/ 71] Overall Loss 0.177230 Objective Loss 0.177230 Top1 95.703125 LR 0.000250 Time 0.379239 +2025-05-20 17:08:24,878 - Epoch: [59][ 71/ 71] Overall Loss 0.178713 Objective Loss 0.178713 Top1 94.345238 LR 0.000250 Time 0.375366 +2025-05-20 17:08:24,912 - --- validate (epoch=59)----------- +2025-05-20 17:08:24,913 - 2000 samples (256 per mini-batch) +2025-05-20 17:08:28,335 - Epoch: [59][ 8/ 8] Loss 0.198658 Top1 91.500000 +2025-05-20 17:08:28,374 - ==> Top1: 91.500 Loss: 0.199 + +2025-05-20 17:08:28,375 - ==> Confusion: [[906 79] - [ 97 918]] + [ 91 924]] -2023-09-11 14:05:35,431 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:05:35,431 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:05:35,436 - - -2023-09-11 14:05:35,436 - Training epoch: 18000 samples (256 per 
mini-batch) -2023-09-11 14:05:39,141 - Epoch: [69][ 10/ 71] Overall Loss 0.200403 Objective Loss 0.200403 LR 0.000250 Time 0.370420 -2023-09-11 14:05:41,535 - Epoch: [69][ 20/ 71] Overall Loss 0.190536 Objective Loss 0.190536 LR 0.000250 Time 0.304902 -2023-09-11 14:05:45,097 - Epoch: [69][ 30/ 71] Overall Loss 0.197563 Objective Loss 0.197563 LR 0.000250 Time 0.322010 -2023-09-11 14:05:47,764 - Epoch: [69][ 40/ 71] Overall Loss 0.191363 Objective Loss 0.191363 LR 0.000250 Time 0.308157 -2023-09-11 14:05:50,457 - Epoch: [69][ 50/ 71] Overall Loss 0.189601 Objective Loss 0.189601 LR 0.000250 Time 0.300387 -2023-09-11 14:05:53,016 - Epoch: [69][ 60/ 71] Overall Loss 0.188452 Objective Loss 0.188452 LR 0.000250 Time 0.292968 -2023-09-11 14:05:55,328 - Epoch: [69][ 70/ 71] Overall Loss 0.187632 Objective Loss 0.187632 Top1 91.406250 LR 0.000250 Time 0.284133 -2023-09-11 14:05:55,405 - Epoch: [69][ 71/ 71] Overall Loss 0.187626 Objective Loss 0.187626 Top1 91.369048 LR 0.000250 Time 0.281210 -2023-09-11 14:05:55,493 - --- validate (epoch=69)----------- -2023-09-11 14:05:55,493 - 2000 samples (256 per mini-batch) -2023-09-11 14:05:58,714 - Epoch: [69][ 8/ 8] Loss 0.219728 Top1 90.050000 -2023-09-11 14:05:58,852 - ==> Top1: 90.050 Loss: 0.220 - -2023-09-11 14:05:58,853 - ==> Confusion: +2025-05-20 17:08:28,378 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:08:28,378 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:08:28,385 - + +2025-05-20 17:08:28,385 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:08:33,731 - Epoch: [60][ 10/ 71] Overall Loss 0.180490 Objective Loss 0.180490 LR 0.000250 Time 0.534562 +2025-05-20 17:08:36,796 - Epoch: [60][ 20/ 71] Overall Loss 0.185640 Objective Loss 0.185640 LR 0.000250 Time 0.420483 +2025-05-20 17:08:40,731 - Epoch: [60][ 30/ 71] Overall Loss 0.181379 Objective Loss 0.181379 LR 0.000250 Time 0.411498 +2025-05-20 17:08:43,895 
- Epoch: [60][ 40/ 71] Overall Loss 0.176986 Objective Loss 0.176986 LR 0.000250 Time 0.387705 +2025-05-20 17:08:48,096 - Epoch: [60][ 50/ 71] Overall Loss 0.176923 Objective Loss 0.176923 LR 0.000250 Time 0.394179 +2025-05-20 17:08:51,157 - Epoch: [60][ 60/ 71] Overall Loss 0.179065 Objective Loss 0.179065 LR 0.000250 Time 0.379487 +2025-05-20 17:08:55,069 - Epoch: [60][ 70/ 71] Overall Loss 0.178910 Objective Loss 0.178910 Top1 91.796875 LR 0.000250 Time 0.381156 +2025-05-20 17:08:55,171 - Epoch: [60][ 71/ 71] Overall Loss 0.178747 Objective Loss 0.178747 Top1 91.666667 LR 0.000250 Time 0.377229 +2025-05-20 17:08:55,202 - --- validate (epoch=60)----------- +2025-05-20 17:08:55,202 - 2000 samples (256 per mini-batch) +2025-05-20 17:08:58,928 - Epoch: [60][ 8/ 8] Loss 0.211639 Top1 91.200000 +2025-05-20 17:08:58,963 - ==> Top1: 91.200 Loss: 0.212 + +2025-05-20 17:08:58,963 - ==> Confusion: +[[892 93] + [ 83 932]] + +2025-05-20 17:08:58,980 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:08:58,980 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:08:58,987 - + +2025-05-20 17:08:58,988 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:09:04,788 - Epoch: [61][ 10/ 71] Overall Loss 0.182788 Objective Loss 0.182788 LR 0.000250 Time 0.580028 +2025-05-20 17:09:08,017 - Epoch: [61][ 20/ 71] Overall Loss 0.178603 Objective Loss 0.178603 LR 0.000250 Time 0.451436 +2025-05-20 17:09:12,286 - Epoch: [61][ 30/ 71] Overall Loss 0.174001 Objective Loss 0.174001 LR 0.000250 Time 0.443230 +2025-05-20 17:09:15,797 - Epoch: [61][ 40/ 71] Overall Loss 0.169982 Objective Loss 0.169982 LR 0.000250 Time 0.420200 +2025-05-20 17:09:20,086 - Epoch: [61][ 50/ 71] Overall Loss 0.172805 Objective Loss 0.172805 LR 0.000250 Time 0.421937 +2025-05-20 17:09:23,462 - Epoch: [61][ 60/ 71] Overall Loss 0.174683 Objective Loss 0.174683 LR 0.000250 Time 0.407869 +2025-05-20 17:09:27,385 - Epoch: 
[61][ 70/ 71] Overall Loss 0.175533 Objective Loss 0.175533 Top1 93.359375 LR 0.000250 Time 0.405642 +2025-05-20 17:09:27,475 - Epoch: [61][ 71/ 71] Overall Loss 0.175165 Objective Loss 0.175165 Top1 93.154762 LR 0.000250 Time 0.401190 +2025-05-20 17:09:27,516 - --- validate (epoch=61)----------- +2025-05-20 17:09:27,516 - 2000 samples (256 per mini-batch) +2025-05-20 17:09:31,512 - Epoch: [61][ 8/ 8] Loss 0.204309 Top1 91.400000 +2025-05-20 17:09:31,546 - ==> Top1: 91.400 Loss: 0.204 + +2025-05-20 17:09:31,546 - ==> Confusion: +[[877 108] + [ 64 951]] + +2025-05-20 17:09:31,563 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:09:31,563 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:09:31,570 - + +2025-05-20 17:09:31,570 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:09:37,845 - Epoch: [62][ 10/ 71] Overall Loss 0.162213 Objective Loss 0.162213 LR 0.000250 Time 0.627370 +2025-05-20 17:09:40,805 - Epoch: [62][ 20/ 71] Overall Loss 0.172768 Objective Loss 0.172768 LR 0.000250 Time 0.461663 +2025-05-20 17:09:44,523 - Epoch: [62][ 30/ 71] Overall Loss 0.181576 Objective Loss 0.181576 LR 0.000250 Time 0.431709 +2025-05-20 17:09:48,271 - Epoch: [62][ 40/ 71] Overall Loss 0.182422 Objective Loss 0.182422 LR 0.000250 Time 0.417468 +2025-05-20 17:09:51,378 - Epoch: [62][ 50/ 71] Overall Loss 0.185388 Objective Loss 0.185388 LR 0.000250 Time 0.396119 +2025-05-20 17:09:55,091 - Epoch: [62][ 60/ 71] Overall Loss 0.181873 Objective Loss 0.181873 LR 0.000250 Time 0.391963 +2025-05-20 17:09:58,194 - Epoch: [62][ 70/ 71] Overall Loss 0.182787 Objective Loss 0.182787 Top1 94.140625 LR 0.000250 Time 0.380301 +2025-05-20 17:09:58,290 - Epoch: [62][ 71/ 71] Overall Loss 0.182472 Objective Loss 0.182472 Top1 94.047619 LR 0.000250 Time 0.376289 +2025-05-20 17:09:58,320 - --- validate (epoch=62)----------- +2025-05-20 17:09:58,320 - 2000 samples (256 per mini-batch) 
+2025-05-20 17:10:01,778 - Epoch: [62][ 8/ 8] Loss 0.202742 Top1 91.350000 +2025-05-20 17:10:01,812 - ==> Top1: 91.350 Loss: 0.203 + +2025-05-20 17:10:01,812 - ==> Confusion: [[863 122] - [ 77 938]] + [ 51 964]] -2023-09-11 14:05:58,868 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:05:58,869 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:05:58,873 - - -2023-09-11 14:05:58,873 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:06:03,252 - Epoch: [70][ 10/ 71] Overall Loss 0.191895 Objective Loss 0.191895 LR 0.000250 Time 0.437818 -2023-09-11 14:06:05,293 - Epoch: [70][ 20/ 71] Overall Loss 0.187425 Objective Loss 0.187425 LR 0.000250 Time 0.320924 -2023-09-11 14:06:07,926 - Epoch: [70][ 30/ 71] Overall Loss 0.187085 Objective Loss 0.187085 LR 0.000250 Time 0.301731 -2023-09-11 14:06:10,699 - Epoch: [70][ 40/ 71] Overall Loss 0.184490 Objective Loss 0.184490 LR 0.000250 Time 0.295606 -2023-09-11 14:06:13,187 - Epoch: [70][ 50/ 71] Overall Loss 0.186297 Objective Loss 0.186297 LR 0.000250 Time 0.286235 -2023-09-11 14:06:16,380 - Epoch: [70][ 60/ 71] Overall Loss 0.183535 Objective Loss 0.183535 LR 0.000250 Time 0.291745 -2023-09-11 14:06:18,289 - Epoch: [70][ 70/ 71] Overall Loss 0.182750 Objective Loss 0.182750 Top1 88.281250 LR 0.000250 Time 0.277339 -2023-09-11 14:06:18,365 - Epoch: [70][ 71/ 71] Overall Loss 0.182252 Objective Loss 0.182252 Top1 90.178571 LR 0.000250 Time 0.274501 -2023-09-11 14:06:18,444 - --- validate (epoch=70)----------- -2023-09-11 14:06:18,445 - 2000 samples (256 per mini-batch) -2023-09-11 14:06:21,379 - Epoch: [70][ 8/ 8] Loss 0.194180 Top1 91.600000 -2023-09-11 14:06:21,473 - ==> Top1: 91.600 Loss: 0.194 - -2023-09-11 14:06:21,473 - ==> Confusion: -[[888 97] - [ 71 944]] +2025-05-20 17:10:01,828 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:10:01,829 - Saving checkpoint to: 
logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:10:01,836 - + +2025-05-20 17:10:01,836 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:10:07,323 - Epoch: [63][ 10/ 71] Overall Loss 0.191293 Objective Loss 0.191293 LR 0.000250 Time 0.548690 +2025-05-20 17:10:10,250 - Epoch: [63][ 20/ 71] Overall Loss 0.186965 Objective Loss 0.186965 LR 0.000250 Time 0.420648 +2025-05-20 17:10:13,854 - Epoch: [63][ 30/ 71] Overall Loss 0.181152 Objective Loss 0.181152 LR 0.000250 Time 0.400567 +2025-05-20 17:10:17,473 - Epoch: [63][ 40/ 71] Overall Loss 0.176464 Objective Loss 0.176464 LR 0.000250 Time 0.390876 +2025-05-20 17:10:21,203 - Epoch: [63][ 50/ 71] Overall Loss 0.178392 Objective Loss 0.178392 LR 0.000250 Time 0.387300 +2025-05-20 17:10:24,895 - Epoch: [63][ 60/ 71] Overall Loss 0.179060 Objective Loss 0.179060 LR 0.000250 Time 0.384284 +2025-05-20 17:10:28,184 - Epoch: [63][ 70/ 71] Overall Loss 0.177142 Objective Loss 0.177142 Top1 95.703125 LR 0.000250 Time 0.376362 +2025-05-20 17:10:28,282 - Epoch: [63][ 71/ 71] Overall Loss 0.176200 Objective Loss 0.176200 Top1 95.238095 LR 0.000250 Time 0.372437 +2025-05-20 17:10:28,318 - --- validate (epoch=63)----------- +2025-05-20 17:10:28,318 - 2000 samples (256 per mini-batch) +2025-05-20 17:10:32,258 - Epoch: [63][ 8/ 8] Loss 0.217394 Top1 91.300000 +2025-05-20 17:10:32,291 - ==> Top1: 91.300 Loss: 0.217 + +2025-05-20 17:10:32,292 - ==> Confusion: +[[893 92] + [ 82 933]] -2023-09-11 14:06:21,489 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:06:21,489 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:06:21,492 - - -2023-09-11 14:06:21,492 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:06:25,153 - Epoch: [71][ 10/ 71] Overall Loss 0.175629 Objective Loss 0.175629 LR 0.000250 Time 0.366032 -2023-09-11 14:06:28,372 - Epoch: [71][ 20/ 71] Overall Loss 0.186175 Objective Loss 
0.186175 LR 0.000250 Time 0.343974 -2023-09-11 14:06:32,060 - Epoch: [71][ 30/ 71] Overall Loss 0.184061 Objective Loss 0.184061 LR 0.000250 Time 0.352223 -2023-09-11 14:06:34,162 - Epoch: [71][ 40/ 71] Overall Loss 0.185751 Objective Loss 0.185751 LR 0.000250 Time 0.316701 -2023-09-11 14:06:36,853 - Epoch: [71][ 50/ 71] Overall Loss 0.186573 Objective Loss 0.186573 LR 0.000250 Time 0.307176 -2023-09-11 14:06:39,044 - Epoch: [71][ 60/ 71] Overall Loss 0.184373 Objective Loss 0.184373 LR 0.000250 Time 0.292499 -2023-09-11 14:06:42,039 - Epoch: [71][ 70/ 71] Overall Loss 0.183490 Objective Loss 0.183490 Top1 91.015625 LR 0.000250 Time 0.293490 -2023-09-11 14:06:42,156 - Epoch: [71][ 71/ 71] Overall Loss 0.183224 Objective Loss 0.183224 Top1 91.369048 LR 0.000250 Time 0.291004 -2023-09-11 14:06:42,253 - --- validate (epoch=71)----------- -2023-09-11 14:06:42,254 - 2000 samples (256 per mini-batch) -2023-09-11 14:06:44,977 - Epoch: [71][ 8/ 8] Loss 0.201142 Top1 91.600000 -2023-09-11 14:06:45,065 - ==> Top1: 91.600 Loss: 0.201 - -2023-09-11 14:06:45,065 - ==> Confusion: +2025-05-20 17:10:32,308 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:10:32,308 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:10:32,316 - + +2025-05-20 17:10:32,316 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:10:38,111 - Epoch: [64][ 10/ 71] Overall Loss 0.165425 Objective Loss 0.165425 LR 0.000250 Time 0.579430 +2025-05-20 17:10:40,772 - Epoch: [64][ 20/ 71] Overall Loss 0.167890 Objective Loss 0.167890 LR 0.000250 Time 0.422750 +2025-05-20 17:10:44,452 - Epoch: [64][ 30/ 71] Overall Loss 0.174575 Objective Loss 0.174575 LR 0.000250 Time 0.404489 +2025-05-20 17:10:48,061 - Epoch: [64][ 40/ 71] Overall Loss 0.178576 Objective Loss 0.178576 LR 0.000250 Time 0.393586 +2025-05-20 17:10:51,235 - Epoch: [64][ 50/ 71] Overall Loss 0.181042 Objective Loss 0.181042 LR 0.000250 Time 0.378344 
+2025-05-20 17:10:55,072 - Epoch: [64][ 60/ 71] Overall Loss 0.182074 Objective Loss 0.182074 LR 0.000250 Time 0.379236 +2025-05-20 17:10:59,092 - Epoch: [64][ 70/ 71] Overall Loss 0.180093 Objective Loss 0.180093 Top1 93.750000 LR 0.000250 Time 0.382474 +2025-05-20 17:10:59,183 - Epoch: [64][ 71/ 71] Overall Loss 0.179426 Objective Loss 0.179426 Top1 93.452381 LR 0.000250 Time 0.378372 +2025-05-20 17:10:59,222 - --- validate (epoch=64)----------- +2025-05-20 17:10:59,223 - 2000 samples (256 per mini-batch) +2025-05-20 17:11:02,736 - Epoch: [64][ 8/ 8] Loss 0.195543 Top1 91.500000 +2025-05-20 17:11:02,770 - ==> Top1: 91.500 Loss: 0.196 + +2025-05-20 17:11:02,770 - ==> Confusion: [[913 72] - [ 96 919]] - -2023-09-11 14:06:45,081 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:06:45,081 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:06:45,085 - - -2023-09-11 14:06:45,085 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:06:48,921 - Epoch: [72][ 10/ 71] Overall Loss 0.189989 Objective Loss 0.189989 LR 0.000250 Time 0.383541 -2023-09-11 14:06:52,254 - Epoch: [72][ 20/ 71] Overall Loss 0.193197 Objective Loss 0.193197 LR 0.000250 Time 0.358399 -2023-09-11 14:06:54,239 - Epoch: [72][ 30/ 71] Overall Loss 0.192523 Objective Loss 0.192523 LR 0.000250 Time 0.305083 -2023-09-11 14:06:57,067 - Epoch: [72][ 40/ 71] Overall Loss 0.190208 Objective Loss 0.190208 LR 0.000250 Time 0.299499 -2023-09-11 14:06:59,040 - Epoch: [72][ 50/ 71] Overall Loss 0.188939 Objective Loss 0.188939 LR 0.000250 Time 0.279056 -2023-09-11 14:07:01,868 - Epoch: [72][ 60/ 71] Overall Loss 0.189885 Objective Loss 0.189885 LR 0.000250 Time 0.279684 -2023-09-11 14:07:03,870 - Epoch: [72][ 70/ 71] Overall Loss 0.186996 Objective Loss 0.186996 Top1 94.921875 LR 0.000250 Time 0.268315 -2023-09-11 14:07:03,951 - Epoch: [72][ 71/ 71] Overall Loss 0.186064 Objective Loss 0.186064 Top1 94.940476 LR 0.000250 Time 0.265683 
-2023-09-11 14:07:04,039 - --- validate (epoch=72)----------- -2023-09-11 14:07:04,040 - 2000 samples (256 per mini-batch) -2023-09-11 14:07:07,141 - Epoch: [72][ 8/ 8] Loss 0.205507 Top1 91.000000 -2023-09-11 14:07:07,235 - ==> Top1: 91.000 Loss: 0.206 - -2023-09-11 14:07:07,235 - ==> Confusion: -[[906 79] - [101 914]] + [ 98 917]] -2023-09-11 14:07:07,250 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:07:07,250 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:07:07,253 - - -2023-09-11 14:07:07,253 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:07:11,206 - Epoch: [73][ 10/ 71] Overall Loss 0.187254 Objective Loss 0.187254 LR 0.000250 Time 0.395315 -2023-09-11 14:07:14,099 - Epoch: [73][ 20/ 71] Overall Loss 0.182856 Objective Loss 0.182856 LR 0.000250 Time 0.342260 -2023-09-11 14:07:16,364 - Epoch: [73][ 30/ 71] Overall Loss 0.185579 Objective Loss 0.185579 LR 0.000250 Time 0.303667 -2023-09-11 14:07:19,534 - Epoch: [73][ 40/ 71] Overall Loss 0.186372 Objective Loss 0.186372 LR 0.000250 Time 0.306995 -2023-09-11 14:07:22,024 - Epoch: [73][ 50/ 71] Overall Loss 0.185667 Objective Loss 0.185667 LR 0.000250 Time 0.295393 -2023-09-11 14:07:25,291 - Epoch: [73][ 60/ 71] Overall Loss 0.184137 Objective Loss 0.184137 LR 0.000250 Time 0.300593 -2023-09-11 14:07:27,218 - Epoch: [73][ 70/ 71] Overall Loss 0.182773 Objective Loss 0.182773 Top1 91.015625 LR 0.000250 Time 0.285187 -2023-09-11 14:07:27,296 - Epoch: [73][ 71/ 71] Overall Loss 0.183266 Objective Loss 0.183266 Top1 91.071429 LR 0.000250 Time 0.282264 -2023-09-11 14:07:27,406 - --- validate (epoch=73)----------- -2023-09-11 14:07:27,406 - 2000 samples (256 per mini-batch) -2023-09-11 14:07:30,293 - Epoch: [73][ 8/ 8] Loss 0.217516 Top1 90.900000 -2023-09-11 14:07:30,388 - ==> Top1: 90.900 Loss: 0.218 - -2023-09-11 14:07:30,389 - ==> Confusion: -[[902 83] - [ 99 916]] +2025-05-20 17:11:02,782 - ==> Best [Top1: 92.300 Params: 
57776 on epoch: 50] +2025-05-20 17:11:02,782 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:11:02,789 - + +2025-05-20 17:11:02,789 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:11:08,375 - Epoch: [65][ 10/ 71] Overall Loss 0.173909 Objective Loss 0.173909 LR 0.000250 Time 0.558539 +2025-05-20 17:11:11,640 - Epoch: [65][ 20/ 71] Overall Loss 0.171755 Objective Loss 0.171755 LR 0.000250 Time 0.442475 +2025-05-20 17:11:15,955 - Epoch: [65][ 30/ 71] Overall Loss 0.173895 Objective Loss 0.173895 LR 0.000250 Time 0.438818 +2025-05-20 17:11:19,088 - Epoch: [65][ 40/ 71] Overall Loss 0.169358 Objective Loss 0.169358 LR 0.000250 Time 0.407429 +2025-05-20 17:11:22,691 - Epoch: [65][ 50/ 71] Overall Loss 0.170719 Objective Loss 0.170719 LR 0.000250 Time 0.397995 +2025-05-20 17:11:25,479 - Epoch: [65][ 60/ 71] Overall Loss 0.174443 Objective Loss 0.174443 LR 0.000250 Time 0.378118 +2025-05-20 17:11:29,297 - Epoch: [65][ 70/ 71] Overall Loss 0.174119 Objective Loss 0.174119 Top1 89.062500 LR 0.000250 Time 0.378639 +2025-05-20 17:11:29,395 - Epoch: [65][ 71/ 71] Overall Loss 0.176017 Objective Loss 0.176017 Top1 88.392857 LR 0.000250 Time 0.374682 +2025-05-20 17:11:29,425 - --- validate (epoch=65)----------- +2025-05-20 17:11:29,425 - 2000 samples (256 per mini-batch) +2025-05-20 17:11:33,494 - Epoch: [65][ 8/ 8] Loss 0.211795 Top1 91.100000 +2025-05-20 17:11:33,527 - ==> Top1: 91.100 Loss: 0.212 + +2025-05-20 17:11:33,527 - ==> Confusion: +[[904 81] + [ 97 918]] -2023-09-11 14:07:30,390 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:07:30,390 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:07:30,395 - - -2023-09-11 14:07:30,395 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:07:33,613 - Epoch: [74][ 10/ 71] Overall Loss 0.187120 Objective Loss 0.187120 LR 0.000250 Time 0.321722 -2023-09-11 
14:07:36,171 - Epoch: [74][ 20/ 71] Overall Loss 0.183056 Objective Loss 0.183056 LR 0.000250 Time 0.288780 -2023-09-11 14:07:38,696 - Epoch: [74][ 30/ 71] Overall Loss 0.181251 Objective Loss 0.181251 LR 0.000250 Time 0.276661 -2023-09-11 14:07:41,930 - Epoch: [74][ 40/ 71] Overall Loss 0.181312 Objective Loss 0.181312 LR 0.000250 Time 0.288352 -2023-09-11 14:07:44,106 - Epoch: [74][ 50/ 71] Overall Loss 0.180094 Objective Loss 0.180094 LR 0.000250 Time 0.274194 -2023-09-11 14:07:47,033 - Epoch: [74][ 60/ 71] Overall Loss 0.181496 Objective Loss 0.181496 LR 0.000250 Time 0.277258 -2023-09-11 14:07:49,091 - Epoch: [74][ 70/ 71] Overall Loss 0.181645 Objective Loss 0.181645 Top1 92.578125 LR 0.000250 Time 0.267058 -2023-09-11 14:07:49,168 - Epoch: [74][ 71/ 71] Overall Loss 0.181784 Objective Loss 0.181784 Top1 92.261905 LR 0.000250 Time 0.264367 -2023-09-11 14:07:49,260 - --- validate (epoch=74)----------- -2023-09-11 14:07:49,260 - 2000 samples (256 per mini-batch) -2023-09-11 14:07:52,292 - Epoch: [74][ 8/ 8] Loss 0.219677 Top1 91.000000 -2023-09-11 14:07:52,385 - ==> Top1: 91.000 Loss: 0.220 - -2023-09-11 14:07:52,385 - ==> Confusion: +2025-05-20 17:11:33,544 - ==> Best [Top1: 92.300 Params: 57776 on epoch: 50] +2025-05-20 17:11:33,544 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:11:33,552 - + +2025-05-20 17:11:33,552 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:11:39,092 - Epoch: [66][ 10/ 71] Overall Loss 0.175845 Objective Loss 0.175845 LR 0.000250 Time 0.553944 +2025-05-20 17:11:42,258 - Epoch: [66][ 20/ 71] Overall Loss 0.179010 Objective Loss 0.179010 LR 0.000250 Time 0.435284 +2025-05-20 17:11:46,747 - Epoch: [66][ 30/ 71] Overall Loss 0.175684 Objective Loss 0.175684 LR 0.000250 Time 0.439796 +2025-05-20 17:11:50,839 - Epoch: [66][ 40/ 71] Overall Loss 0.176532 Objective Loss 0.176532 LR 0.000250 Time 0.432140 +2025-05-20 17:11:55,230 - Epoch: [66][ 50/ 71] 
Overall Loss 0.179496 Objective Loss 0.179496 LR 0.000250 Time 0.433528 +2025-05-20 17:11:58,691 - Epoch: [66][ 60/ 71] Overall Loss 0.177950 Objective Loss 0.177950 LR 0.000250 Time 0.418951 +2025-05-20 17:12:02,507 - Epoch: [66][ 70/ 71] Overall Loss 0.177451 Objective Loss 0.177451 Top1 92.187500 LR 0.000250 Time 0.413609 +2025-05-20 17:12:02,601 - Epoch: [66][ 71/ 71] Overall Loss 0.176800 Objective Loss 0.176800 Top1 93.154762 LR 0.000250 Time 0.409110 +2025-05-20 17:12:02,637 - --- validate (epoch=66)----------- +2025-05-20 17:12:02,637 - 2000 samples (256 per mini-batch) +2025-05-20 17:12:06,417 - Epoch: [66][ 8/ 8] Loss 0.190767 Top1 92.450000 +2025-05-20 17:12:06,446 - ==> Top1: 92.450 Loss: 0.191 + +2025-05-20 17:12:06,446 - ==> Confusion: +[[919 66] + [ 85 930]] + +2025-05-20 17:12:06,462 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:12:06,462 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:12:06,470 - + +2025-05-20 17:12:06,470 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:12:12,148 - Epoch: [67][ 10/ 71] Overall Loss 0.189812 Objective Loss 0.189812 LR 0.000250 Time 0.567746 +2025-05-20 17:12:15,302 - Epoch: [67][ 20/ 71] Overall Loss 0.196114 Objective Loss 0.196114 LR 0.000250 Time 0.441574 +2025-05-20 17:12:20,323 - Epoch: [67][ 30/ 71] Overall Loss 0.187118 Objective Loss 0.187118 LR 0.000250 Time 0.461723 +2025-05-20 17:12:23,850 - Epoch: [67][ 40/ 71] Overall Loss 0.183526 Objective Loss 0.183526 LR 0.000250 Time 0.434474 +2025-05-20 17:12:28,253 - Epoch: [67][ 50/ 71] Overall Loss 0.186767 Objective Loss 0.186767 LR 0.000250 Time 0.435621 +2025-05-20 17:12:31,663 - Epoch: [67][ 60/ 71] Overall Loss 0.185818 Objective Loss 0.185818 LR 0.000250 Time 0.419842 +2025-05-20 17:12:35,733 - Epoch: [67][ 70/ 71] Overall Loss 0.183000 Objective Loss 0.183000 Top1 92.578125 LR 0.000250 Time 0.418007 +2025-05-20 17:12:35,826 - Epoch: [67][ 71/ 
71] Overall Loss 0.183285 Objective Loss 0.183285 Top1 92.261905 LR 0.000250 Time 0.413430 +2025-05-20 17:12:35,865 - --- validate (epoch=67)----------- +2025-05-20 17:12:35,865 - 2000 samples (256 per mini-batch) +2025-05-20 17:12:39,398 - Epoch: [67][ 8/ 8] Loss 0.194106 Top1 92.000000 +2025-05-20 17:12:39,431 - ==> Top1: 92.000 Loss: 0.194 + +2025-05-20 17:12:39,432 - ==> Confusion: [[910 75] - [105 910]] - -2023-09-11 14:07:52,400 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:07:52,400 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:07:52,403 - - -2023-09-11 14:07:52,403 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:07:56,860 - Epoch: [75][ 10/ 71] Overall Loss 0.191604 Objective Loss 0.191604 LR 0.000250 Time 0.445699 -2023-09-11 14:07:59,807 - Epoch: [75][ 20/ 71] Overall Loss 0.190262 Objective Loss 0.190262 LR 0.000250 Time 0.370154 -2023-09-11 14:08:03,000 - Epoch: [75][ 30/ 71] Overall Loss 0.186464 Objective Loss 0.186464 LR 0.000250 Time 0.353203 -2023-09-11 14:08:05,569 - Epoch: [75][ 40/ 71] Overall Loss 0.181261 Objective Loss 0.181261 LR 0.000250 Time 0.329122 -2023-09-11 14:08:08,209 - Epoch: [75][ 50/ 71] Overall Loss 0.181693 Objective Loss 0.181693 LR 0.000250 Time 0.316095 -2023-09-11 14:08:11,143 - Epoch: [75][ 60/ 71] Overall Loss 0.180328 Objective Loss 0.180328 LR 0.000250 Time 0.312306 -2023-09-11 14:08:13,021 - Epoch: [75][ 70/ 71] Overall Loss 0.179314 Objective Loss 0.179314 Top1 91.406250 LR 0.000250 Time 0.294507 -2023-09-11 14:08:13,101 - Epoch: [75][ 71/ 71] Overall Loss 0.179500 Objective Loss 0.179500 Top1 91.071429 LR 0.000250 Time 0.291482 -2023-09-11 14:08:13,190 - --- validate (epoch=75)----------- -2023-09-11 14:08:13,190 - 2000 samples (256 per mini-batch) -2023-09-11 14:08:16,266 - Epoch: [75][ 8/ 8] Loss 0.203488 Top1 91.050000 -2023-09-11 14:08:16,368 - ==> Top1: 91.050 Loss: 0.203 - -2023-09-11 14:08:16,368 - ==> Confusion: 
-[[920 65] - [114 901]] - -2023-09-11 14:08:16,383 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:08:16,384 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:08:16,386 - - -2023-09-11 14:08:16,386 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:08:20,911 - Epoch: [76][ 10/ 71] Overall Loss 0.179623 Objective Loss 0.179623 LR 0.000250 Time 0.452446 -2023-09-11 14:08:23,767 - Epoch: [76][ 20/ 71] Overall Loss 0.184192 Objective Loss 0.184192 LR 0.000250 Time 0.369026 -2023-09-11 14:08:26,524 - Epoch: [76][ 30/ 71] Overall Loss 0.185231 Objective Loss 0.185231 LR 0.000250 Time 0.337880 -2023-09-11 14:08:28,661 - Epoch: [76][ 40/ 71] Overall Loss 0.182251 Objective Loss 0.182251 LR 0.000250 Time 0.306849 -2023-09-11 14:08:31,349 - Epoch: [76][ 50/ 71] Overall Loss 0.181050 Objective Loss 0.181050 LR 0.000250 Time 0.299218 -2023-09-11 14:08:33,562 - Epoch: [76][ 60/ 71] Overall Loss 0.182999 Objective Loss 0.182999 LR 0.000250 Time 0.286226 -2023-09-11 14:08:35,866 - Epoch: [76][ 70/ 71] Overall Loss 0.182859 Objective Loss 0.182859 Top1 89.453125 LR 0.000250 Time 0.278249 -2023-09-11 14:08:35,945 - Epoch: [76][ 71/ 71] Overall Loss 0.183531 Objective Loss 0.183531 Top1 90.178571 LR 0.000250 Time 0.275441 -2023-09-11 14:08:36,057 - --- validate (epoch=76)----------- -2023-09-11 14:08:36,057 - 2000 samples (256 per mini-batch) -2023-09-11 14:08:39,076 - Epoch: [76][ 8/ 8] Loss 0.203070 Top1 91.800000 -2023-09-11 14:08:39,158 - ==> Top1: 91.800 Loss: 0.203 - -2023-09-11 14:08:39,159 - ==> Confusion: -[[898 87] - [ 77 938]] + [ 85 930]] -2023-09-11 14:08:39,174 - ==> Best [Top1: 91.950 Sparsity:0.00 Params: 57776 on epoch: 53] -2023-09-11 14:08:39,174 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:08:39,176 - - -2023-09-11 14:08:39,176 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:08:42,573 - Epoch: [77][ 10/ 71] Overall Loss 
0.198216 Objective Loss 0.198216 LR 0.000250 Time 0.339643 -2023-09-11 14:08:46,751 - Epoch: [77][ 20/ 71] Overall Loss 0.189626 Objective Loss 0.189626 LR 0.000250 Time 0.378722 -2023-09-11 14:08:48,686 - Epoch: [77][ 30/ 71] Overall Loss 0.182472 Objective Loss 0.182472 LR 0.000250 Time 0.316951 -2023-09-11 14:08:51,219 - Epoch: [77][ 40/ 71] Overall Loss 0.179858 Objective Loss 0.179858 LR 0.000250 Time 0.301033 -2023-09-11 14:08:53,483 - Epoch: [77][ 50/ 71] Overall Loss 0.178019 Objective Loss 0.178019 LR 0.000250 Time 0.286106 -2023-09-11 14:08:56,586 - Epoch: [77][ 60/ 71] Overall Loss 0.180002 Objective Loss 0.180002 LR 0.000250 Time 0.290130 -2023-09-11 14:08:58,550 - Epoch: [77][ 70/ 71] Overall Loss 0.179840 Objective Loss 0.179840 Top1 91.796875 LR 0.000250 Time 0.276741 -2023-09-11 14:08:58,629 - Epoch: [77][ 71/ 71] Overall Loss 0.179579 Objective Loss 0.179579 Top1 92.857143 LR 0.000250 Time 0.273947 -2023-09-11 14:08:58,728 - --- validate (epoch=77)----------- -2023-09-11 14:08:58,728 - 2000 samples (256 per mini-batch) -2023-09-11 14:09:01,903 - Epoch: [77][ 8/ 8] Loss 0.197987 Top1 92.250000 -2023-09-11 14:09:02,030 - ==> Top1: 92.250 Loss: 0.198 - -2023-09-11 14:09:02,030 - ==> Confusion: -[[898 87] - [ 68 947]] +2025-05-20 17:12:39,448 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:12:39,448 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:12:39,456 - + +2025-05-20 17:12:39,456 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:12:44,532 - Epoch: [68][ 10/ 71] Overall Loss 0.159914 Objective Loss 0.159914 LR 0.000250 Time 0.507586 +2025-05-20 17:12:48,906 - Epoch: [68][ 20/ 71] Overall Loss 0.165396 Objective Loss 0.165396 LR 0.000250 Time 0.472473 +2025-05-20 17:12:52,274 - Epoch: [68][ 30/ 71] Overall Loss 0.164490 Objective Loss 0.164490 LR 0.000250 Time 0.427227 +2025-05-20 17:12:56,827 - Epoch: [68][ 40/ 71] Overall Loss 0.166920 
Objective Loss 0.166920 LR 0.000250 Time 0.434249 +2025-05-20 17:13:00,180 - Epoch: [68][ 50/ 71] Overall Loss 0.166189 Objective Loss 0.166189 LR 0.000250 Time 0.414440 +2025-05-20 17:13:03,900 - Epoch: [68][ 60/ 71] Overall Loss 0.169349 Objective Loss 0.169349 LR 0.000250 Time 0.407371 +2025-05-20 17:13:06,680 - Epoch: [68][ 70/ 71] Overall Loss 0.172950 Objective Loss 0.172950 Top1 93.750000 LR 0.000250 Time 0.388884 +2025-05-20 17:13:06,779 - Epoch: [68][ 71/ 71] Overall Loss 0.174219 Objective Loss 0.174219 Top1 92.261905 LR 0.000250 Time 0.384798 +2025-05-20 17:13:06,807 - --- validate (epoch=68)----------- +2025-05-20 17:13:06,807 - 2000 samples (256 per mini-batch) +2025-05-20 17:13:10,668 - Epoch: [68][ 8/ 8] Loss 0.213034 Top1 91.500000 +2025-05-20 17:13:10,705 - ==> Top1: 91.500 Loss: 0.213 + +2025-05-20 17:13:10,706 - ==> Confusion: +[[936 49] + [121 894]] + +2025-05-20 17:13:10,720 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:13:10,720 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:13:10,728 - + +2025-05-20 17:13:10,728 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:13:15,312 - Epoch: [69][ 10/ 71] Overall Loss 0.175943 Objective Loss 0.175943 LR 0.000250 Time 0.458364 +2025-05-20 17:13:19,230 - Epoch: [69][ 20/ 71] Overall Loss 0.176782 Objective Loss 0.176782 LR 0.000250 Time 0.425080 +2025-05-20 17:13:22,433 - Epoch: [69][ 30/ 71] Overall Loss 0.179952 Objective Loss 0.179952 LR 0.000250 Time 0.390142 +2025-05-20 17:13:27,265 - Epoch: [69][ 40/ 71] Overall Loss 0.180452 Objective Loss 0.180452 LR 0.000250 Time 0.413393 +2025-05-20 17:13:30,495 - Epoch: [69][ 50/ 71] Overall Loss 0.179254 Objective Loss 0.179254 LR 0.000250 Time 0.395300 +2025-05-20 17:13:34,407 - Epoch: [69][ 60/ 71] Overall Loss 0.178572 Objective Loss 0.178572 LR 0.000250 Time 0.394619 +2025-05-20 17:13:37,938 - Epoch: [69][ 70/ 71] Overall Loss 0.177307 Objective 
Loss 0.177307 Top1 92.968750 LR 0.000250 Time 0.388683 +2025-05-20 17:13:38,047 - Epoch: [69][ 71/ 71] Overall Loss 0.177414 Objective Loss 0.177414 Top1 93.452381 LR 0.000250 Time 0.384732 +2025-05-20 17:13:38,082 - --- validate (epoch=69)----------- +2025-05-20 17:13:38,083 - 2000 samples (256 per mini-batch) +2025-05-20 17:13:41,530 - Epoch: [69][ 8/ 8] Loss 0.203159 Top1 92.150000 +2025-05-20 17:13:41,560 - ==> Top1: 92.150 Loss: 0.203 + +2025-05-20 17:13:41,560 - ==> Confusion: +[[918 67] + [ 90 925]] -2023-09-11 14:09:02,043 - ==> Best [Top1: 92.250 Sparsity:0.00 Params: 57776 on epoch: 77] -2023-09-11 14:09:02,044 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:09:02,049 - - -2023-09-11 14:09:02,049 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:09:06,547 - Epoch: [78][ 10/ 71] Overall Loss 0.166856 Objective Loss 0.166856 LR 0.000250 Time 0.449732 -2023-09-11 14:09:08,657 - Epoch: [78][ 20/ 71] Overall Loss 0.181823 Objective Loss 0.181823 LR 0.000250 Time 0.330349 -2023-09-11 14:09:11,275 - Epoch: [78][ 30/ 71] Overall Loss 0.179580 Objective Loss 0.179580 LR 0.000250 Time 0.307465 -2023-09-11 14:09:13,713 - Epoch: [78][ 40/ 71] Overall Loss 0.178072 Objective Loss 0.178072 LR 0.000250 Time 0.291553 -2023-09-11 14:09:16,580 - Epoch: [78][ 50/ 71] Overall Loss 0.176616 Objective Loss 0.176616 LR 0.000250 Time 0.290572 -2023-09-11 14:09:19,954 - Epoch: [78][ 60/ 71] Overall Loss 0.175459 Objective Loss 0.175459 LR 0.000250 Time 0.298370 -2023-09-11 14:09:22,282 - Epoch: [78][ 70/ 71] Overall Loss 0.175782 Objective Loss 0.175782 Top1 94.140625 LR 0.000250 Time 0.289000 -2023-09-11 14:09:22,362 - Epoch: [78][ 71/ 71] Overall Loss 0.175655 Objective Loss 0.175655 Top1 94.047619 LR 0.000250 Time 0.286059 -2023-09-11 14:09:22,475 - --- validate (epoch=78)----------- -2023-09-11 14:09:22,475 - 2000 samples (256 per mini-batch) -2023-09-11 14:09:25,162 - Epoch: [78][ 8/ 8] Loss 0.207562 Top1 91.000000 
-2023-09-11 14:09:25,258 - ==> Top1: 91.000 Loss: 0.208 - -2023-09-11 14:09:25,258 - ==> Confusion: -[[879 106] - [ 74 941]] +2025-05-20 17:13:41,576 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:13:41,576 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:13:41,583 - + +2025-05-20 17:13:41,583 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:13:46,465 - Epoch: [70][ 10/ 71] Overall Loss 0.186445 Objective Loss 0.186445 LR 0.000250 Time 0.488112 +2025-05-20 17:13:49,955 - Epoch: [70][ 20/ 71] Overall Loss 0.174263 Objective Loss 0.174263 LR 0.000250 Time 0.418531 +2025-05-20 17:13:54,325 - Epoch: [70][ 30/ 71] Overall Loss 0.172706 Objective Loss 0.172706 LR 0.000250 Time 0.424676 +2025-05-20 17:13:57,728 - Epoch: [70][ 40/ 71] Overall Loss 0.173057 Objective Loss 0.173057 LR 0.000250 Time 0.403565 +2025-05-20 17:14:01,541 - Epoch: [70][ 50/ 71] Overall Loss 0.176106 Objective Loss 0.176106 LR 0.000250 Time 0.399117 +2025-05-20 17:14:04,918 - Epoch: [70][ 60/ 71] Overall Loss 0.177835 Objective Loss 0.177835 LR 0.000250 Time 0.388879 +2025-05-20 17:14:08,282 - Epoch: [70][ 70/ 71] Overall Loss 0.176512 Objective Loss 0.176512 Top1 94.531250 LR 0.000250 Time 0.381373 +2025-05-20 17:14:08,391 - Epoch: [70][ 71/ 71] Overall Loss 0.175821 Objective Loss 0.175821 Top1 94.642857 LR 0.000250 Time 0.377534 +2025-05-20 17:14:08,423 - --- validate (epoch=70)----------- +2025-05-20 17:14:08,424 - 2000 samples (256 per mini-batch) +2025-05-20 17:14:12,082 - Epoch: [70][ 8/ 8] Loss 0.183336 Top1 92.250000 +2025-05-20 17:14:12,115 - ==> Top1: 92.250 Loss: 0.183 + +2025-05-20 17:14:12,115 - ==> Confusion: +[[901 84] + [ 71 944]] -2023-09-11 14:09:25,269 - ==> Best [Top1: 92.250 Sparsity:0.00 Params: 57776 on epoch: 77] -2023-09-11 14:09:25,270 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:09:25,272 - - -2023-09-11 14:09:25,272 - 
Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:09:29,846 - Epoch: [79][ 10/ 71] Overall Loss 0.160407 Objective Loss 0.160407 LR 0.000250 Time 0.457354 -2023-09-11 14:09:31,861 - Epoch: [79][ 20/ 71] Overall Loss 0.163583 Objective Loss 0.163583 LR 0.000250 Time 0.329392 -2023-09-11 14:09:34,545 - Epoch: [79][ 30/ 71] Overall Loss 0.174761 Objective Loss 0.174761 LR 0.000250 Time 0.309054 -2023-09-11 14:09:38,374 - Epoch: [79][ 40/ 71] Overall Loss 0.173552 Objective Loss 0.173552 LR 0.000250 Time 0.327499 -2023-09-11 14:09:40,515 - Epoch: [79][ 50/ 71] Overall Loss 0.172318 Objective Loss 0.172318 LR 0.000250 Time 0.304822 -2023-09-11 14:09:43,191 - Epoch: [79][ 60/ 71] Overall Loss 0.173251 Objective Loss 0.173251 LR 0.000250 Time 0.298607 -2023-09-11 14:09:45,075 - Epoch: [79][ 70/ 71] Overall Loss 0.173210 Objective Loss 0.173210 Top1 95.312500 LR 0.000250 Time 0.282865 -2023-09-11 14:09:45,155 - Epoch: [79][ 71/ 71] Overall Loss 0.173239 Objective Loss 0.173239 Top1 94.642857 LR 0.000250 Time 0.279998 -2023-09-11 14:09:45,251 - --- validate (epoch=79)----------- -2023-09-11 14:09:45,251 - 2000 samples (256 per mini-batch) -2023-09-11 14:09:48,560 - Epoch: [79][ 8/ 8] Loss 0.206681 Top1 91.150000 -2023-09-11 14:09:48,657 - ==> Top1: 91.150 Loss: 0.207 - -2023-09-11 14:09:48,657 - ==> Confusion: -[[894 91] - [ 86 929]] +2025-05-20 17:14:12,123 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:14:12,123 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:14:12,130 - + +2025-05-20 17:14:12,130 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:14:18,281 - Epoch: [71][ 10/ 71] Overall Loss 0.168063 Objective Loss 0.168063 LR 0.000250 Time 0.615021 +2025-05-20 17:14:21,035 - Epoch: [71][ 20/ 71] Overall Loss 0.168220 Objective Loss 0.168220 LR 0.000250 Time 0.445180 +2025-05-20 17:14:24,550 - Epoch: [71][ 30/ 71] Overall Loss 0.172159 Objective 
Loss 0.172159 LR 0.000250 Time 0.413933 +2025-05-20 17:14:29,759 - Epoch: [71][ 40/ 71] Overall Loss 0.173530 Objective Loss 0.173530 LR 0.000250 Time 0.440683 +2025-05-20 17:14:33,125 - Epoch: [71][ 50/ 71] Overall Loss 0.173668 Objective Loss 0.173668 LR 0.000250 Time 0.419858 +2025-05-20 17:14:36,888 - Epoch: [71][ 60/ 71] Overall Loss 0.174697 Objective Loss 0.174697 LR 0.000250 Time 0.412591 +2025-05-20 17:14:40,408 - Epoch: [71][ 70/ 71] Overall Loss 0.173687 Objective Loss 0.173687 Top1 92.578125 LR 0.000250 Time 0.403933 +2025-05-20 17:14:40,516 - Epoch: [71][ 71/ 71] Overall Loss 0.173536 Objective Loss 0.173536 Top1 92.559524 LR 0.000250 Time 0.399770 +2025-05-20 17:14:40,555 - --- validate (epoch=71)----------- +2025-05-20 17:14:40,555 - 2000 samples (256 per mini-batch) +2025-05-20 17:14:44,102 - Epoch: [71][ 8/ 8] Loss 0.207446 Top1 91.750000 +2025-05-20 17:14:44,136 - ==> Top1: 91.750 Loss: 0.207 + +2025-05-20 17:14:44,136 - ==> Confusion: +[[867 118] + [ 47 968]] + +2025-05-20 17:14:44,150 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:14:44,150 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:14:44,158 - + +2025-05-20 17:14:44,158 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:14:50,663 - Epoch: [72][ 10/ 71] Overall Loss 0.195799 Objective Loss 0.195799 LR 0.000250 Time 0.650437 +2025-05-20 17:14:54,157 - Epoch: [72][ 20/ 71] Overall Loss 0.186779 Objective Loss 0.186779 LR 0.000250 Time 0.499933 +2025-05-20 17:14:58,402 - Epoch: [72][ 30/ 71] Overall Loss 0.180320 Objective Loss 0.180320 LR 0.000250 Time 0.474759 +2025-05-20 17:15:01,610 - Epoch: [72][ 40/ 71] Overall Loss 0.179961 Objective Loss 0.179961 LR 0.000250 Time 0.436265 +2025-05-20 17:15:05,592 - Epoch: [72][ 50/ 71] Overall Loss 0.177975 Objective Loss 0.177975 LR 0.000250 Time 0.428649 +2025-05-20 17:15:09,311 - Epoch: [72][ 60/ 71] Overall Loss 0.177653 Objective Loss 
0.177653 LR 0.000250 Time 0.419180 +2025-05-20 17:15:12,447 - Epoch: [72][ 70/ 71] Overall Loss 0.177988 Objective Loss 0.177988 Top1 94.140625 LR 0.000250 Time 0.404086 +2025-05-20 17:15:12,555 - Epoch: [72][ 71/ 71] Overall Loss 0.177133 Objective Loss 0.177133 Top1 94.642857 LR 0.000250 Time 0.399922 +2025-05-20 17:15:12,588 - --- validate (epoch=72)----------- +2025-05-20 17:15:12,588 - 2000 samples (256 per mini-batch) +2025-05-20 17:15:16,654 - Epoch: [72][ 8/ 8] Loss 0.195730 Top1 91.200000 +2025-05-20 17:15:16,691 - ==> Top1: 91.200 Loss: 0.196 + +2025-05-20 17:15:16,692 - ==> Confusion: +[[906 79] + [ 97 918]] -2023-09-11 14:09:48,672 - ==> Best [Top1: 92.250 Sparsity:0.00 Params: 57776 on epoch: 77] -2023-09-11 14:09:48,672 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:09:48,674 - - -2023-09-11 14:09:48,674 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:09:52,082 - Epoch: [80][ 10/ 71] Overall Loss 0.165410 Objective Loss 0.165410 LR 0.000250 Time 0.340711 -2023-09-11 14:09:54,095 - Epoch: [80][ 20/ 71] Overall Loss 0.166029 Objective Loss 0.166029 LR 0.000250 Time 0.270995 -2023-09-11 14:09:56,698 - Epoch: [80][ 30/ 71] Overall Loss 0.169130 Objective Loss 0.169130 LR 0.000250 Time 0.267403 -2023-09-11 14:10:00,130 - Epoch: [80][ 40/ 71] Overall Loss 0.170398 Objective Loss 0.170398 LR 0.000250 Time 0.286345 -2023-09-11 14:10:03,363 - Epoch: [80][ 50/ 71] Overall Loss 0.170214 Objective Loss 0.170214 LR 0.000250 Time 0.293745 -2023-09-11 14:10:06,478 - Epoch: [80][ 60/ 71] Overall Loss 0.173040 Objective Loss 0.173040 LR 0.000250 Time 0.296695 -2023-09-11 14:10:08,890 - Epoch: [80][ 70/ 71] Overall Loss 0.175115 Objective Loss 0.175115 Top1 90.625000 LR 0.000250 Time 0.288759 -2023-09-11 14:10:08,967 - Epoch: [80][ 71/ 71] Overall Loss 0.174298 Objective Loss 0.174298 Top1 91.071429 LR 0.000250 Time 0.285782 -2023-09-11 14:10:09,078 - --- validate (epoch=80)----------- -2023-09-11 14:10:09,078 - 
2000 samples (256 per mini-batch) -2023-09-11 14:10:11,438 - Epoch: [80][ 8/ 8] Loss 0.205587 Top1 91.800000 -2023-09-11 14:10:11,537 - ==> Top1: 91.800 Loss: 0.206 - -2023-09-11 14:10:11,538 - ==> Confusion: -[[882 103] - [ 61 954]] +2025-05-20 17:15:16,708 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:15:16,708 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:15:16,715 - + +2025-05-20 17:15:16,715 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:15:22,767 - Epoch: [73][ 10/ 71] Overall Loss 0.182026 Objective Loss 0.182026 LR 0.000250 Time 0.605119 +2025-05-20 17:15:25,935 - Epoch: [73][ 20/ 71] Overall Loss 0.170008 Objective Loss 0.170008 LR 0.000250 Time 0.460924 +2025-05-20 17:15:30,928 - Epoch: [73][ 30/ 71] Overall Loss 0.166407 Objective Loss 0.166407 LR 0.000250 Time 0.473712 +2025-05-20 17:15:33,881 - Epoch: [73][ 40/ 71] Overall Loss 0.165149 Objective Loss 0.165149 LR 0.000250 Time 0.429094 +2025-05-20 17:15:37,577 - Epoch: [73][ 50/ 71] Overall Loss 0.166431 Objective Loss 0.166431 LR 0.000250 Time 0.417195 +2025-05-20 17:15:41,014 - Epoch: [73][ 60/ 71] Overall Loss 0.167939 Objective Loss 0.167939 LR 0.000250 Time 0.404935 +2025-05-20 17:15:45,409 - Epoch: [73][ 70/ 71] Overall Loss 0.168127 Objective Loss 0.168127 Top1 91.796875 LR 0.000250 Time 0.409866 +2025-05-20 17:15:45,504 - Epoch: [73][ 71/ 71] Overall Loss 0.169884 Objective Loss 0.169884 Top1 89.880952 LR 0.000250 Time 0.405430 +2025-05-20 17:15:45,533 - --- validate (epoch=73)----------- +2025-05-20 17:15:45,533 - 2000 samples (256 per mini-batch) +2025-05-20 17:15:49,000 - Epoch: [73][ 8/ 8] Loss 0.215512 Top1 90.900000 +2025-05-20 17:15:49,035 - ==> Top1: 90.900 Loss: 0.216 + +2025-05-20 17:15:49,035 - ==> Confusion: +[[920 65] + [117 898]] -2023-09-11 14:10:11,552 - ==> Best [Top1: 92.250 Sparsity:0.00 Params: 57776 on epoch: 77] -2023-09-11 14:10:11,552 - Saving checkpoint 
to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:10:11,555 - - -2023-09-11 14:10:11,555 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:10:15,228 - Epoch: [81][ 10/ 71] Overall Loss 0.172562 Objective Loss 0.172562 LR 0.000250 Time 0.367298 -2023-09-11 14:10:18,733 - Epoch: [81][ 20/ 71] Overall Loss 0.171393 Objective Loss 0.171393 LR 0.000250 Time 0.358872 -2023-09-11 14:10:20,833 - Epoch: [81][ 30/ 71] Overall Loss 0.170608 Objective Loss 0.170608 LR 0.000250 Time 0.309240 -2023-09-11 14:10:24,292 - Epoch: [81][ 40/ 71] Overall Loss 0.173893 Objective Loss 0.173893 LR 0.000250 Time 0.318396 -2023-09-11 14:10:26,851 - Epoch: [81][ 50/ 71] Overall Loss 0.175891 Objective Loss 0.175891 LR 0.000250 Time 0.305891 -2023-09-11 14:10:29,584 - Epoch: [81][ 60/ 71] Overall Loss 0.176622 Objective Loss 0.176622 LR 0.000250 Time 0.300449 -2023-09-11 14:10:32,164 - Epoch: [81][ 70/ 71] Overall Loss 0.174014 Objective Loss 0.174014 Top1 91.796875 LR 0.000250 Time 0.294381 -2023-09-11 14:10:32,256 - Epoch: [81][ 71/ 71] Overall Loss 0.173938 Objective Loss 0.173938 Top1 92.261905 LR 0.000250 Time 0.291531 -2023-09-11 14:10:32,352 - --- validate (epoch=81)----------- -2023-09-11 14:10:32,352 - 2000 samples (256 per mini-batch) -2023-09-11 14:10:34,758 - Epoch: [81][ 8/ 8] Loss 0.209532 Top1 91.450000 -2023-09-11 14:10:34,851 - ==> Top1: 91.450 Loss: 0.210 - -2023-09-11 14:10:34,851 - ==> Confusion: -[[880 105] - [ 66 949]] +2025-05-20 17:15:49,050 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:15:49,050 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:15:49,058 - + +2025-05-20 17:15:49,058 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:15:54,215 - Epoch: [74][ 10/ 71] Overall Loss 0.167069 Objective Loss 0.167069 LR 0.000250 Time 0.515637 +2025-05-20 17:15:57,520 - Epoch: [74][ 20/ 71] Overall Loss 0.164737 Objective Loss 0.164737 
LR 0.000250 Time 0.423070 +2025-05-20 17:16:01,490 - Epoch: [74][ 30/ 71] Overall Loss 0.169922 Objective Loss 0.169922 LR 0.000250 Time 0.414370 +2025-05-20 17:16:04,532 - Epoch: [74][ 40/ 71] Overall Loss 0.172714 Objective Loss 0.172714 LR 0.000250 Time 0.386797 +2025-05-20 17:16:09,106 - Epoch: [74][ 50/ 71] Overall Loss 0.173175 Objective Loss 0.173175 LR 0.000250 Time 0.400922 +2025-05-20 17:16:12,601 - Epoch: [74][ 60/ 71] Overall Loss 0.169573 Objective Loss 0.169573 LR 0.000250 Time 0.392346 +2025-05-20 17:16:16,518 - Epoch: [74][ 70/ 71] Overall Loss 0.168324 Objective Loss 0.168324 Top1 92.968750 LR 0.000250 Time 0.392253 +2025-05-20 17:16:16,627 - Epoch: [74][ 71/ 71] Overall Loss 0.169861 Objective Loss 0.169861 Top1 91.964286 LR 0.000250 Time 0.388254 +2025-05-20 17:16:16,657 - --- validate (epoch=74)----------- +2025-05-20 17:16:16,657 - 2000 samples (256 per mini-batch) +2025-05-20 17:16:20,385 - Epoch: [74][ 8/ 8] Loss 0.204266 Top1 92.100000 +2025-05-20 17:16:20,416 - ==> Top1: 92.100 Loss: 0.204 + +2025-05-20 17:16:20,416 - ==> Confusion: +[[892 93] + [ 65 950]] -2023-09-11 14:10:34,851 - ==> Best [Top1: 92.250 Sparsity:0.00 Params: 57776 on epoch: 77] -2023-09-11 14:10:34,852 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:10:34,854 - - -2023-09-11 14:10:34,854 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:10:39,384 - Epoch: [82][ 10/ 71] Overall Loss 0.176427 Objective Loss 0.176427 LR 0.000250 Time 0.452948 -2023-09-11 14:10:41,450 - Epoch: [82][ 20/ 71] Overall Loss 0.181110 Objective Loss 0.181110 LR 0.000250 Time 0.329736 -2023-09-11 14:10:44,340 - Epoch: [82][ 30/ 71] Overall Loss 0.180008 Objective Loss 0.180008 LR 0.000250 Time 0.316155 -2023-09-11 14:10:46,969 - Epoch: [82][ 40/ 71] Overall Loss 0.180053 Objective Loss 0.180053 LR 0.000250 Time 0.302838 -2023-09-11 14:10:49,682 - Epoch: [82][ 50/ 71] Overall Loss 0.174916 Objective Loss 0.174916 LR 0.000250 Time 0.296511 
-2023-09-11 14:10:52,459 - Epoch: [82][ 60/ 71] Overall Loss 0.174933 Objective Loss 0.174933 LR 0.000250 Time 0.293377 -2023-09-11 14:10:54,685 - Epoch: [82][ 70/ 71] Overall Loss 0.173828 Objective Loss 0.173828 Top1 91.796875 LR 0.000250 Time 0.283261 -2023-09-11 14:10:54,755 - Epoch: [82][ 71/ 71] Overall Loss 0.173300 Objective Loss 0.173300 Top1 92.261905 LR 0.000250 Time 0.280255 -2023-09-11 14:10:54,846 - --- validate (epoch=82)----------- -2023-09-11 14:10:54,846 - 2000 samples (256 per mini-batch) -2023-09-11 14:10:57,467 - Epoch: [82][ 8/ 8] Loss 0.221770 Top1 90.800000 -2023-09-11 14:10:57,559 - ==> Top1: 90.800 Loss: 0.222 - -2023-09-11 14:10:57,559 - ==> Confusion: -[[926 59] - [125 890]] +2025-05-20 17:16:20,431 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:16:20,431 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:16:20,438 - + +2025-05-20 17:16:20,438 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:16:25,879 - Epoch: [75][ 10/ 71] Overall Loss 0.158553 Objective Loss 0.158553 LR 0.000250 Time 0.543982 +2025-05-20 17:16:28,675 - Epoch: [75][ 20/ 71] Overall Loss 0.161364 Objective Loss 0.161364 LR 0.000250 Time 0.411782 +2025-05-20 17:16:32,992 - Epoch: [75][ 30/ 71] Overall Loss 0.173855 Objective Loss 0.173855 LR 0.000250 Time 0.418418 +2025-05-20 17:16:36,217 - Epoch: [75][ 40/ 71] Overall Loss 0.173805 Objective Loss 0.173805 LR 0.000250 Time 0.394432 +2025-05-20 17:16:40,233 - Epoch: [75][ 50/ 71] Overall Loss 0.175264 Objective Loss 0.175264 LR 0.000250 Time 0.395841 +2025-05-20 17:16:43,386 - Epoch: [75][ 60/ 71] Overall Loss 0.174015 Objective Loss 0.174015 LR 0.000250 Time 0.382417 +2025-05-20 17:16:47,389 - Epoch: [75][ 70/ 71] Overall Loss 0.173432 Objective Loss 0.173432 Top1 92.968750 LR 0.000250 Time 0.384974 +2025-05-20 17:16:47,498 - Epoch: [75][ 71/ 71] Overall Loss 0.172760 Objective Loss 0.172760 Top1 94.047619 LR 
0.000250 Time 0.381076 +2025-05-20 17:16:47,530 - --- validate (epoch=75)----------- +2025-05-20 17:16:47,531 - 2000 samples (256 per mini-batch) +2025-05-20 17:16:51,405 - Epoch: [75][ 8/ 8] Loss 0.210269 Top1 91.650000 +2025-05-20 17:16:51,440 - ==> Top1: 91.650 Loss: 0.210 + +2025-05-20 17:16:51,440 - ==> Confusion: +[[895 90] + [ 77 938]] -2023-09-11 14:10:57,574 - ==> Best [Top1: 92.250 Sparsity:0.00 Params: 57776 on epoch: 77] -2023-09-11 14:10:57,574 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:10:57,579 - - -2023-09-11 14:10:57,579 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:11:02,037 - Epoch: [83][ 10/ 71] Overall Loss 0.165242 Objective Loss 0.165242 LR 0.000250 Time 0.445769 -2023-09-11 14:11:04,115 - Epoch: [83][ 20/ 71] Overall Loss 0.166902 Objective Loss 0.166902 LR 0.000250 Time 0.326763 -2023-09-11 14:11:06,731 - Epoch: [83][ 30/ 71] Overall Loss 0.170209 Objective Loss 0.170209 LR 0.000250 Time 0.305017 -2023-09-11 14:11:09,938 - Epoch: [83][ 40/ 71] Overall Loss 0.171517 Objective Loss 0.171517 LR 0.000250 Time 0.308953 -2023-09-11 14:11:12,165 - Epoch: [83][ 50/ 71] Overall Loss 0.170525 Objective Loss 0.170525 LR 0.000250 Time 0.291685 -2023-09-11 14:11:14,734 - Epoch: [83][ 60/ 71] Overall Loss 0.172556 Objective Loss 0.172556 LR 0.000250 Time 0.285886 -2023-09-11 14:11:16,739 - Epoch: [83][ 70/ 71] Overall Loss 0.173467 Objective Loss 0.173467 Top1 94.531250 LR 0.000250 Time 0.273685 -2023-09-11 14:11:16,821 - Epoch: [83][ 71/ 71] Overall Loss 0.173260 Objective Loss 0.173260 Top1 94.345238 LR 0.000250 Time 0.270981 -2023-09-11 14:11:16,919 - --- validate (epoch=83)----------- -2023-09-11 14:11:16,919 - 2000 samples (256 per mini-batch) -2023-09-11 14:11:20,181 - Epoch: [83][ 8/ 8] Loss 0.191952 Top1 91.750000 -2023-09-11 14:11:20,264 - ==> Top1: 91.750 Loss: 0.192 - -2023-09-11 14:11:20,264 - ==> Confusion: +2025-05-20 17:16:51,458 - ==> Best [Top1: 92.450 Params: 57776 on 
epoch: 66] +2025-05-20 17:16:51,458 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:16:51,465 - + +2025-05-20 17:16:51,465 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:16:56,350 - Epoch: [76][ 10/ 71] Overall Loss 0.161621 Objective Loss 0.161621 LR 0.000250 Time 0.488372 +2025-05-20 17:17:00,774 - Epoch: [76][ 20/ 71] Overall Loss 0.174310 Objective Loss 0.174310 LR 0.000250 Time 0.465383 +2025-05-20 17:17:03,752 - Epoch: [76][ 30/ 71] Overall Loss 0.168815 Objective Loss 0.168815 LR 0.000250 Time 0.409520 +2025-05-20 17:17:09,450 - Epoch: [76][ 40/ 71] Overall Loss 0.168324 Objective Loss 0.168324 LR 0.000250 Time 0.449576 +2025-05-20 17:17:12,507 - Epoch: [76][ 50/ 71] Overall Loss 0.167346 Objective Loss 0.167346 LR 0.000250 Time 0.420803 +2025-05-20 17:17:16,127 - Epoch: [76][ 60/ 71] Overall Loss 0.167658 Objective Loss 0.167658 LR 0.000250 Time 0.410983 +2025-05-20 17:17:19,382 - Epoch: [76][ 70/ 71] Overall Loss 0.166104 Objective Loss 0.166104 Top1 94.921875 LR 0.000250 Time 0.398780 +2025-05-20 17:17:19,491 - Epoch: [76][ 71/ 71] Overall Loss 0.166344 Objective Loss 0.166344 Top1 94.047619 LR 0.000250 Time 0.394685 +2025-05-20 17:17:19,525 - --- validate (epoch=76)----------- +2025-05-20 17:17:19,525 - 2000 samples (256 per mini-batch) +2025-05-20 17:17:23,321 - Epoch: [76][ 8/ 8] Loss 0.196501 Top1 91.250000 +2025-05-20 17:17:23,352 - ==> Top1: 91.250 Loss: 0.197 + +2025-05-20 17:17:23,352 - ==> Confusion: [[898 87] - [ 78 937]] - -2023-09-11 14:11:20,279 - ==> Best [Top1: 92.250 Sparsity:0.00 Params: 57776 on epoch: 77] -2023-09-11 14:11:20,280 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:11:20,282 - - -2023-09-11 14:11:20,282 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:11:24,748 - Epoch: [84][ 10/ 71] Overall Loss 0.189782 Objective Loss 0.189782 LR 0.000250 Time 0.446571 -2023-09-11 14:11:26,771 
- Epoch: [84][ 20/ 71] Overall Loss 0.176138 Objective Loss 0.176138 LR 0.000250 Time 0.324421 -2023-09-11 14:11:29,966 - Epoch: [84][ 30/ 71] Overall Loss 0.174005 Objective Loss 0.174005 LR 0.000250 Time 0.322768 -2023-09-11 14:11:33,233 - Epoch: [84][ 40/ 71] Overall Loss 0.173212 Objective Loss 0.173212 LR 0.000250 Time 0.323738 -2023-09-11 14:11:36,353 - Epoch: [84][ 50/ 71] Overall Loss 0.171643 Objective Loss 0.171643 LR 0.000250 Time 0.321384 -2023-09-11 14:11:38,376 - Epoch: [84][ 60/ 71] Overall Loss 0.172850 Objective Loss 0.172850 LR 0.000250 Time 0.301534 -2023-09-11 14:11:40,836 - Epoch: [84][ 70/ 71] Overall Loss 0.172915 Objective Loss 0.172915 Top1 91.406250 LR 0.000250 Time 0.293597 -2023-09-11 14:11:40,919 - Epoch: [84][ 71/ 71] Overall Loss 0.172290 Objective Loss 0.172290 Top1 92.559524 LR 0.000250 Time 0.290628 -2023-09-11 14:11:41,013 - --- validate (epoch=84)----------- -2023-09-11 14:11:41,014 - 2000 samples (256 per mini-batch) -2023-09-11 14:11:43,569 - Epoch: [84][ 8/ 8] Loss 0.180015 Top1 92.300000 -2023-09-11 14:11:43,674 - ==> Top1: 92.300 Loss: 0.180 - -2023-09-11 14:11:43,674 - ==> Confusion: -[[930 55] - [ 99 916]] - -2023-09-11 14:11:43,689 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:11:43,689 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:11:43,694 - - -2023-09-11 14:11:43,694 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:11:48,258 - Epoch: [85][ 10/ 71] Overall Loss 0.167408 Objective Loss 0.167408 LR 0.000250 Time 0.456287 -2023-09-11 14:11:50,843 - Epoch: [85][ 20/ 71] Overall Loss 0.182627 Objective Loss 0.182627 LR 0.000250 Time 0.357389 -2023-09-11 14:11:53,318 - Epoch: [85][ 30/ 71] Overall Loss 0.180798 Objective Loss 0.180798 LR 0.000250 Time 0.320742 -2023-09-11 14:11:55,688 - Epoch: [85][ 40/ 71] Overall Loss 0.178201 Objective Loss 0.178201 LR 0.000250 Time 0.299820 -2023-09-11 14:11:58,683 - Epoch: [85][ 50/ 71] Overall 
Loss 0.178461 Objective Loss 0.178461 LR 0.000250 Time 0.299749 -2023-09-11 14:12:01,251 - Epoch: [85][ 60/ 71] Overall Loss 0.178835 Objective Loss 0.178835 LR 0.000250 Time 0.292572 -2023-09-11 14:12:03,617 - Epoch: [85][ 70/ 71] Overall Loss 0.177598 Objective Loss 0.177598 Top1 91.796875 LR 0.000250 Time 0.284580 -2023-09-11 14:12:03,689 - Epoch: [85][ 71/ 71] Overall Loss 0.177759 Objective Loss 0.177759 Top1 91.071429 LR 0.000250 Time 0.281575 -2023-09-11 14:12:03,781 - --- validate (epoch=85)----------- -2023-09-11 14:12:03,782 - 2000 samples (256 per mini-batch) -2023-09-11 14:12:06,797 - Epoch: [85][ 8/ 8] Loss 0.194900 Top1 91.600000 -2023-09-11 14:12:06,894 - ==> Top1: 91.600 Loss: 0.195 - -2023-09-11 14:12:06,895 - ==> Confusion: -[[901 84] - [ 84 931]] + [ 88 927]] -2023-09-11 14:12:06,897 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:12:06,897 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:12:06,900 - - -2023-09-11 14:12:06,900 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:12:10,151 - Epoch: [86][ 10/ 71] Overall Loss 0.174243 Objective Loss 0.174243 LR 0.000250 Time 0.325089 -2023-09-11 14:12:13,443 - Epoch: [86][ 20/ 71] Overall Loss 0.184667 Objective Loss 0.184667 LR 0.000250 Time 0.327135 -2023-09-11 14:12:16,167 - Epoch: [86][ 30/ 71] Overall Loss 0.184026 Objective Loss 0.184026 LR 0.000250 Time 0.308869 -2023-09-11 14:12:18,668 - Epoch: [86][ 40/ 71] Overall Loss 0.179885 Objective Loss 0.179885 LR 0.000250 Time 0.294157 -2023-09-11 14:12:22,122 - Epoch: [86][ 50/ 71] Overall Loss 0.174214 Objective Loss 0.174214 LR 0.000250 Time 0.304409 -2023-09-11 14:12:24,312 - Epoch: [86][ 60/ 71] Overall Loss 0.170317 Objective Loss 0.170317 LR 0.000250 Time 0.290165 -2023-09-11 14:12:26,951 - Epoch: [86][ 70/ 71] Overall Loss 0.171514 Objective Loss 0.171514 Top1 92.187500 LR 0.000250 Time 0.286409 -2023-09-11 14:12:27,073 - Epoch: [86][ 71/ 71] Overall Loss 
0.172297 Objective Loss 0.172297 Top1 91.369048 LR 0.000250 Time 0.284097 -2023-09-11 14:12:27,175 - --- validate (epoch=86)----------- -2023-09-11 14:12:27,176 - 2000 samples (256 per mini-batch) -2023-09-11 14:12:30,276 - Epoch: [86][ 8/ 8] Loss 0.194908 Top1 91.650000 -2023-09-11 14:12:30,373 - ==> Top1: 91.650 Loss: 0.195 - -2023-09-11 14:12:30,373 - ==> Confusion: -[[895 90] - [ 77 938]] +2025-05-20 17:17:23,361 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:17:23,361 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:17:23,368 - + +2025-05-20 17:17:23,368 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:17:28,998 - Epoch: [77][ 10/ 71] Overall Loss 0.165592 Objective Loss 0.165592 LR 0.000250 Time 0.562907 +2025-05-20 17:17:32,194 - Epoch: [77][ 20/ 71] Overall Loss 0.163784 Objective Loss 0.163784 LR 0.000250 Time 0.441258 +2025-05-20 17:17:36,362 - Epoch: [77][ 30/ 71] Overall Loss 0.164019 Objective Loss 0.164019 LR 0.000250 Time 0.433066 +2025-05-20 17:17:39,775 - Epoch: [77][ 40/ 71] Overall Loss 0.164803 Objective Loss 0.164803 LR 0.000250 Time 0.410125 +2025-05-20 17:17:43,945 - Epoch: [77][ 50/ 71] Overall Loss 0.164895 Objective Loss 0.164895 LR 0.000250 Time 0.411494 +2025-05-20 17:17:46,931 - Epoch: [77][ 60/ 71] Overall Loss 0.161775 Objective Loss 0.161775 LR 0.000250 Time 0.392669 +2025-05-20 17:17:50,942 - Epoch: [77][ 70/ 71] Overall Loss 0.160338 Objective Loss 0.160338 Top1 95.312500 LR 0.000250 Time 0.393874 +2025-05-20 17:17:51,044 - Epoch: [77][ 71/ 71] Overall Loss 0.161640 Objective Loss 0.161640 Top1 93.452381 LR 0.000250 Time 0.389766 +2025-05-20 17:17:51,079 - --- validate (epoch=77)----------- +2025-05-20 17:17:51,079 - 2000 samples (256 per mini-batch) +2025-05-20 17:17:54,434 - Epoch: [77][ 8/ 8] Loss 0.211819 Top1 91.850000 +2025-05-20 17:17:54,467 - ==> Top1: 91.850 Loss: 0.212 + +2025-05-20 17:17:54,468 - ==> Confusion: 
+[[911 74] + [ 89 926]] -2023-09-11 14:12:30,375 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:12:30,375 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:12:30,380 - - -2023-09-11 14:12:30,380 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:12:33,664 - Epoch: [87][ 10/ 71] Overall Loss 0.169480 Objective Loss 0.169480 LR 0.000250 Time 0.328315 -2023-09-11 14:12:36,317 - Epoch: [87][ 20/ 71] Overall Loss 0.170471 Objective Loss 0.170471 LR 0.000250 Time 0.296786 -2023-09-11 14:12:38,702 - Epoch: [87][ 30/ 71] Overall Loss 0.174471 Objective Loss 0.174471 LR 0.000250 Time 0.277374 -2023-09-11 14:12:42,415 - Epoch: [87][ 40/ 71] Overall Loss 0.171517 Objective Loss 0.171517 LR 0.000250 Time 0.300840 -2023-09-11 14:12:44,731 - Epoch: [87][ 50/ 71] Overall Loss 0.169939 Objective Loss 0.169939 LR 0.000250 Time 0.286979 -2023-09-11 14:12:47,843 - Epoch: [87][ 60/ 71] Overall Loss 0.170427 Objective Loss 0.170427 LR 0.000250 Time 0.291006 -2023-09-11 14:12:50,401 - Epoch: [87][ 70/ 71] Overall Loss 0.170594 Objective Loss 0.170594 Top1 91.406250 LR 0.000250 Time 0.285974 -2023-09-11 14:12:50,491 - Epoch: [87][ 71/ 71] Overall Loss 0.170287 Objective Loss 0.170287 Top1 91.666667 LR 0.000250 Time 0.283211 -2023-09-11 14:12:50,591 - --- validate (epoch=87)----------- -2023-09-11 14:12:50,591 - 2000 samples (256 per mini-batch) -2023-09-11 14:12:53,355 - Epoch: [87][ 8/ 8] Loss 0.210209 Top1 91.050000 -2023-09-11 14:12:53,451 - ==> Top1: 91.050 Loss: 0.210 - -2023-09-11 14:12:53,451 - ==> Confusion: -[[867 118] - [ 61 954]] +2025-05-20 17:17:54,484 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:17:54,484 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:17:54,491 - + +2025-05-20 17:17:54,491 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:17:59,683 - Epoch: [78][ 10/ 71] 
Overall Loss 0.162512 Objective Loss 0.162512 LR 0.000250 Time 0.519140 +2025-05-20 17:18:04,338 - Epoch: [78][ 20/ 71] Overall Loss 0.165471 Objective Loss 0.165471 LR 0.000250 Time 0.492311 +2025-05-20 17:18:07,293 - Epoch: [78][ 30/ 71] Overall Loss 0.163653 Objective Loss 0.163653 LR 0.000250 Time 0.426681 +2025-05-20 17:18:11,283 - Epoch: [78][ 40/ 71] Overall Loss 0.169916 Objective Loss 0.169916 LR 0.000250 Time 0.419767 +2025-05-20 17:18:14,475 - Epoch: [78][ 50/ 71] Overall Loss 0.165923 Objective Loss 0.165923 LR 0.000250 Time 0.399645 +2025-05-20 17:18:18,012 - Epoch: [78][ 60/ 71] Overall Loss 0.167218 Objective Loss 0.167218 LR 0.000250 Time 0.391976 +2025-05-20 17:18:21,267 - Epoch: [78][ 70/ 71] Overall Loss 0.165537 Objective Loss 0.165537 Top1 94.140625 LR 0.000250 Time 0.382474 +2025-05-20 17:18:21,359 - Epoch: [78][ 71/ 71] Overall Loss 0.164653 Objective Loss 0.164653 Top1 94.345238 LR 0.000250 Time 0.378387 +2025-05-20 17:18:21,392 - --- validate (epoch=78)----------- +2025-05-20 17:18:21,392 - 2000 samples (256 per mini-batch) +2025-05-20 17:18:25,362 - Epoch: [78][ 8/ 8] Loss 0.197489 Top1 92.100000 +2025-05-20 17:18:25,400 - ==> Top1: 92.100 Loss: 0.197 + +2025-05-20 17:18:25,401 - ==> Confusion: +[[896 89] + [ 69 946]] -2023-09-11 14:12:53,466 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:12:53,466 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:12:53,469 - - -2023-09-11 14:12:53,469 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:12:58,470 - Epoch: [88][ 10/ 71] Overall Loss 0.179149 Objective Loss 0.179149 LR 0.000250 Time 0.500069 -2023-09-11 14:13:00,554 - Epoch: [88][ 20/ 71] Overall Loss 0.178031 Objective Loss 0.178031 LR 0.000250 Time 0.354199 -2023-09-11 14:13:03,345 - Epoch: [88][ 30/ 71] Overall Loss 0.174576 Objective Loss 0.174576 LR 0.000250 Time 0.329175 -2023-09-11 14:13:05,387 - Epoch: [88][ 40/ 71] Overall Loss 0.177559 Objective 
Loss 0.177559 LR 0.000250 Time 0.297904 -2023-09-11 14:13:08,269 - Epoch: [88][ 50/ 71] Overall Loss 0.175757 Objective Loss 0.175757 LR 0.000250 Time 0.295965 -2023-09-11 14:13:10,673 - Epoch: [88][ 60/ 71] Overall Loss 0.174484 Objective Loss 0.174484 LR 0.000250 Time 0.286704 -2023-09-11 14:13:13,042 - Epoch: [88][ 70/ 71] Overall Loss 0.172377 Objective Loss 0.172377 Top1 94.921875 LR 0.000250 Time 0.279572 -2023-09-11 14:13:13,089 - Epoch: [88][ 71/ 71] Overall Loss 0.172759 Objective Loss 0.172759 Top1 94.940476 LR 0.000250 Time 0.276299 -2023-09-11 14:13:13,185 - --- validate (epoch=88)----------- -2023-09-11 14:13:13,185 - 2000 samples (256 per mini-batch) -2023-09-11 14:13:16,310 - Epoch: [88][ 8/ 8] Loss 0.188833 Top1 91.900000 -2023-09-11 14:13:16,402 - ==> Top1: 91.900 Loss: 0.189 - -2023-09-11 14:13:16,403 - ==> Confusion: -[[898 87] - [ 75 940]] +2025-05-20 17:18:25,413 - ==> Best [Top1: 92.450 Params: 57776 on epoch: 66] +2025-05-20 17:18:25,413 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:18:25,420 - + +2025-05-20 17:18:25,420 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:18:30,388 - Epoch: [79][ 10/ 71] Overall Loss 0.167809 Objective Loss 0.167809 LR 0.000250 Time 0.496762 +2025-05-20 17:18:33,322 - Epoch: [79][ 20/ 71] Overall Loss 0.174485 Objective Loss 0.174485 LR 0.000250 Time 0.395035 +2025-05-20 17:18:37,178 - Epoch: [79][ 30/ 71] Overall Loss 0.177465 Objective Loss 0.177465 LR 0.000250 Time 0.391875 +2025-05-20 17:18:40,565 - Epoch: [79][ 40/ 71] Overall Loss 0.180006 Objective Loss 0.180006 LR 0.000250 Time 0.378587 +2025-05-20 17:18:44,880 - Epoch: [79][ 50/ 71] Overall Loss 0.175400 Objective Loss 0.175400 LR 0.000250 Time 0.389154 +2025-05-20 17:18:48,219 - Epoch: [79][ 60/ 71] Overall Loss 0.175170 Objective Loss 0.175170 LR 0.000250 Time 0.379947 +2025-05-20 17:18:51,484 - Epoch: [79][ 70/ 71] Overall Loss 0.174166 Objective Loss 
0.174166 Top1 93.359375 LR 0.000250 Time 0.372301 +2025-05-20 17:18:51,592 - Epoch: [79][ 71/ 71] Overall Loss 0.173739 Objective Loss 0.173739 Top1 93.750000 LR 0.000250 Time 0.368578 +2025-05-20 17:18:51,630 - --- validate (epoch=79)----------- +2025-05-20 17:18:51,630 - 2000 samples (256 per mini-batch) +2025-05-20 17:18:55,144 - Epoch: [79][ 8/ 8] Loss 0.198232 Top1 92.700000 +2025-05-20 17:18:55,176 - ==> Top1: 92.700 Loss: 0.198 + +2025-05-20 17:18:55,176 - ==> Confusion: +[[894 91] + [ 55 960]] -2023-09-11 14:13:16,419 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:13:16,419 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:13:16,421 - - -2023-09-11 14:13:16,421 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:13:20,186 - Epoch: [89][ 10/ 71] Overall Loss 0.166576 Objective Loss 0.166576 LR 0.000250 Time 0.376377 -2023-09-11 14:13:22,244 - Epoch: [89][ 20/ 71] Overall Loss 0.167335 Objective Loss 0.167335 LR 0.000250 Time 0.291079 -2023-09-11 14:13:24,987 - Epoch: [89][ 30/ 71] Overall Loss 0.169125 Objective Loss 0.169125 LR 0.000250 Time 0.285489 -2023-09-11 14:13:27,054 - Epoch: [89][ 40/ 71] Overall Loss 0.172338 Objective Loss 0.172338 LR 0.000250 Time 0.265765 -2023-09-11 14:13:30,762 - Epoch: [89][ 50/ 71] Overall Loss 0.172367 Objective Loss 0.172367 LR 0.000250 Time 0.286769 -2023-09-11 14:13:32,785 - Epoch: [89][ 60/ 71] Overall Loss 0.171878 Objective Loss 0.171878 LR 0.000250 Time 0.272690 -2023-09-11 14:13:35,211 - Epoch: [89][ 70/ 71] Overall Loss 0.173665 Objective Loss 0.173665 Top1 93.359375 LR 0.000250 Time 0.268388 -2023-09-11 14:13:35,291 - Epoch: [89][ 71/ 71] Overall Loss 0.173720 Objective Loss 0.173720 Top1 93.154762 LR 0.000250 Time 0.265733 -2023-09-11 14:13:35,388 - --- validate (epoch=89)----------- -2023-09-11 14:13:35,388 - 2000 samples (256 per mini-batch) -2023-09-11 14:13:37,960 - Epoch: [89][ 8/ 8] Loss 0.189567 Top1 91.750000 
-2023-09-11 14:13:38,051 - ==> Top1: 91.750 Loss: 0.190 - -2023-09-11 14:13:38,052 - ==> Confusion: -[[916 69] - [ 96 919]] - -2023-09-11 14:13:38,067 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:13:38,067 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:13:38,070 - - -2023-09-11 14:13:38,070 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:13:41,268 - Epoch: [90][ 10/ 71] Overall Loss 0.161454 Objective Loss 0.161454 LR 0.000250 Time 0.319800 -2023-09-11 14:13:44,234 - Epoch: [90][ 20/ 71] Overall Loss 0.167940 Objective Loss 0.167940 LR 0.000250 Time 0.308177 -2023-09-11 14:13:46,346 - Epoch: [90][ 30/ 71] Overall Loss 0.167092 Objective Loss 0.167092 LR 0.000250 Time 0.275857 -2023-09-11 14:13:49,784 - Epoch: [90][ 40/ 71] Overall Loss 0.167773 Objective Loss 0.167773 LR 0.000250 Time 0.292824 -2023-09-11 14:13:53,007 - Epoch: [90][ 50/ 71] Overall Loss 0.168747 Objective Loss 0.168747 LR 0.000250 Time 0.298709 -2023-09-11 14:13:55,048 - Epoch: [90][ 60/ 71] Overall Loss 0.168229 Objective Loss 0.168229 LR 0.000250 Time 0.282930 -2023-09-11 14:13:57,562 - Epoch: [90][ 70/ 71] Overall Loss 0.171689 Objective Loss 0.171689 Top1 91.796875 LR 0.000250 Time 0.278435 -2023-09-11 14:13:57,642 - Epoch: [90][ 71/ 71] Overall Loss 0.172089 Objective Loss 0.172089 Top1 91.666667 LR 0.000250 Time 0.275631 -2023-09-11 14:13:57,733 - --- validate (epoch=90)----------- -2023-09-11 14:13:57,734 - 2000 samples (256 per mini-batch) -2023-09-11 14:14:00,313 - Epoch: [90][ 8/ 8] Loss 0.246620 Top1 89.650000 -2023-09-11 14:14:00,418 - ==> Top1: 89.650 Loss: 0.247 - -2023-09-11 14:14:00,418 - ==> Confusion: +2025-05-20 17:18:55,191 - ==> Best [Top1: 92.700 Params: 57776 on epoch: 79] +2025-05-20 17:18:55,191 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:18:55,199 - + +2025-05-20 17:18:55,199 - Training epoch: 18000 samples (256 per 
mini-batch, world size: 1) +2025-05-20 17:18:59,813 - Epoch: [80][ 10/ 71] Overall Loss 0.170409 Objective Loss 0.170409 LR 0.000250 Time 0.461338 +2025-05-20 17:19:02,548 - Epoch: [80][ 20/ 71] Overall Loss 0.165484 Objective Loss 0.165484 LR 0.000250 Time 0.367400 +2025-05-20 17:19:06,247 - Epoch: [80][ 30/ 71] Overall Loss 0.161217 Objective Loss 0.161217 LR 0.000250 Time 0.368221 +2025-05-20 17:19:09,398 - Epoch: [80][ 40/ 71] Overall Loss 0.164286 Objective Loss 0.164286 LR 0.000250 Time 0.354920 +2025-05-20 17:19:14,396 - Epoch: [80][ 50/ 71] Overall Loss 0.166671 Objective Loss 0.166671 LR 0.000250 Time 0.383894 +2025-05-20 17:19:17,291 - Epoch: [80][ 60/ 71] Overall Loss 0.166362 Objective Loss 0.166362 LR 0.000250 Time 0.368159 +2025-05-20 17:19:21,119 - Epoch: [80][ 70/ 71] Overall Loss 0.165867 Objective Loss 0.165867 Top1 93.750000 LR 0.000250 Time 0.370254 +2025-05-20 17:19:21,224 - Epoch: [80][ 71/ 71] Overall Loss 0.165649 Objective Loss 0.165649 Top1 94.345238 LR 0.000250 Time 0.366508 +2025-05-20 17:19:21,253 - --- validate (epoch=80)----------- +2025-05-20 17:19:21,253 - 2000 samples (256 per mini-batch) +2025-05-20 17:19:25,256 - Epoch: [80][ 8/ 8] Loss 0.186532 Top1 92.100000 +2025-05-20 17:19:25,297 - ==> Top1: 92.100 Loss: 0.187 + +2025-05-20 17:19:25,297 - ==> Confusion: +[[889 96] + [ 62 953]] + +2025-05-20 17:19:25,307 - ==> Best [Top1: 92.700 Params: 57776 on epoch: 79] +2025-05-20 17:19:25,308 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:19:25,315 - + +2025-05-20 17:19:25,315 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:19:29,893 - Epoch: [81][ 10/ 71] Overall Loss 0.162112 Objective Loss 0.162112 LR 0.000250 Time 0.457736 +2025-05-20 17:19:35,116 - Epoch: [81][ 20/ 71] Overall Loss 0.166899 Objective Loss 0.166899 LR 0.000250 Time 0.489980 +2025-05-20 17:19:38,402 - Epoch: [81][ 30/ 71] Overall Loss 0.164937 Objective Loss 0.164937 LR 
0.000250 Time 0.436191 +2025-05-20 17:19:42,536 - Epoch: [81][ 40/ 71] Overall Loss 0.161839 Objective Loss 0.161839 LR 0.000250 Time 0.430494 +2025-05-20 17:19:46,504 - Epoch: [81][ 50/ 71] Overall Loss 0.160879 Objective Loss 0.160879 LR 0.000250 Time 0.423752 +2025-05-20 17:19:50,117 - Epoch: [81][ 60/ 71] Overall Loss 0.161774 Objective Loss 0.161774 LR 0.000250 Time 0.413339 +2025-05-20 17:19:53,097 - Epoch: [81][ 70/ 71] Overall Loss 0.163699 Objective Loss 0.163699 Top1 94.531250 LR 0.000250 Time 0.396851 +2025-05-20 17:19:53,206 - Epoch: [81][ 71/ 71] Overall Loss 0.164515 Objective Loss 0.164515 Top1 94.047619 LR 0.000250 Time 0.392790 +2025-05-20 17:19:53,244 - --- validate (epoch=81)----------- +2025-05-20 17:19:53,245 - 2000 samples (256 per mini-batch) +2025-05-20 17:19:57,224 - Epoch: [81][ 8/ 8] Loss 0.222569 Top1 90.850000 +2025-05-20 17:19:57,263 - ==> Top1: 90.850 Loss: 0.223 + +2025-05-20 17:19:57,263 - ==> Confusion: [[948 37] - [170 845]] - -2023-09-11 14:14:00,434 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:14:00,434 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:14:00,437 - - -2023-09-11 14:14:00,437 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:14:05,421 - Epoch: [91][ 10/ 71] Overall Loss 0.191089 Objective Loss 0.191089 LR 0.000250 Time 0.498407 -2023-09-11 14:14:07,700 - Epoch: [91][ 20/ 71] Overall Loss 0.190392 Objective Loss 0.190392 LR 0.000250 Time 0.363139 -2023-09-11 14:14:10,928 - Epoch: [91][ 30/ 71] Overall Loss 0.186018 Objective Loss 0.186018 LR 0.000250 Time 0.349661 -2023-09-11 14:14:13,817 - Epoch: [91][ 40/ 71] Overall Loss 0.179970 Objective Loss 0.179970 LR 0.000250 Time 0.334467 -2023-09-11 14:14:16,536 - Epoch: [91][ 50/ 71] Overall Loss 0.179278 Objective Loss 0.179278 LR 0.000250 Time 0.321943 -2023-09-11 14:14:19,328 - Epoch: [91][ 60/ 71] Overall Loss 0.176434 Objective Loss 0.176434 LR 0.000250 Time 0.314814 
-2023-09-11 14:14:21,251 - Epoch: [91][ 70/ 71] Overall Loss 0.176318 Objective Loss 0.176318 Top1 93.359375 LR 0.000250 Time 0.297315 -2023-09-11 14:14:21,336 - Epoch: [91][ 71/ 71] Overall Loss 0.175632 Objective Loss 0.175632 Top1 93.452381 LR 0.000250 Time 0.294311 -2023-09-11 14:14:21,431 - --- validate (epoch=91)----------- -2023-09-11 14:14:21,431 - 2000 samples (256 per mini-batch) -2023-09-11 14:14:24,344 - Epoch: [91][ 8/ 8] Loss 0.198640 Top1 91.250000 -2023-09-11 14:14:24,444 - ==> Top1: 91.250 Loss: 0.199 - -2023-09-11 14:14:24,444 - ==> Confusion: -[[917 68] - [107 908]] + [146 869]] + +2025-05-20 17:19:57,268 - ==> Best [Top1: 92.700 Params: 57776 on epoch: 79] +2025-05-20 17:19:57,268 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:19:57,275 - + +2025-05-20 17:19:57,276 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:20:03,457 - Epoch: [82][ 10/ 71] Overall Loss 0.167714 Objective Loss 0.167714 LR 0.000250 Time 0.618111 +2025-05-20 17:20:06,230 - Epoch: [82][ 20/ 71] Overall Loss 0.163003 Objective Loss 0.163003 LR 0.000250 Time 0.447650 +2025-05-20 17:20:10,297 - Epoch: [82][ 30/ 71] Overall Loss 0.162849 Objective Loss 0.162849 LR 0.000250 Time 0.434015 +2025-05-20 17:20:13,691 - Epoch: [82][ 40/ 71] Overall Loss 0.162985 Objective Loss 0.162985 LR 0.000250 Time 0.410347 +2025-05-20 17:20:17,954 - Epoch: [82][ 50/ 71] Overall Loss 0.165148 Objective Loss 0.165148 LR 0.000250 Time 0.413522 +2025-05-20 17:20:22,153 - Epoch: [82][ 60/ 71] Overall Loss 0.166167 Objective Loss 0.166167 LR 0.000250 Time 0.414584 +2025-05-20 17:20:25,223 - Epoch: [82][ 70/ 71] Overall Loss 0.168268 Objective Loss 0.168268 Top1 93.750000 LR 0.000250 Time 0.399209 +2025-05-20 17:20:25,317 - Epoch: [82][ 71/ 71] Overall Loss 0.167606 Objective Loss 0.167606 Top1 93.750000 LR 0.000250 Time 0.394905 +2025-05-20 17:20:25,352 - --- validate (epoch=82)----------- +2025-05-20 17:20:25,352 
- 2000 samples (256 per mini-batch) +2025-05-20 17:20:29,019 - Epoch: [82][ 8/ 8] Loss 0.185763 Top1 92.350000 +2025-05-20 17:20:29,050 - ==> Top1: 92.350 Loss: 0.186 + +2025-05-20 17:20:29,050 - ==> Confusion: +[[910 75] + [ 78 937]] -2023-09-11 14:14:24,459 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:14:24,459 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:14:24,462 - - -2023-09-11 14:14:24,462 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:14:27,666 - Epoch: [92][ 10/ 71] Overall Loss 0.172365 Objective Loss 0.172365 LR 0.000250 Time 0.320323 -2023-09-11 14:14:31,475 - Epoch: [92][ 20/ 71] Overall Loss 0.171160 Objective Loss 0.171160 LR 0.000250 Time 0.350642 -2023-09-11 14:14:34,053 - Epoch: [92][ 30/ 71] Overall Loss 0.167699 Objective Loss 0.167699 LR 0.000250 Time 0.319678 -2023-09-11 14:14:37,057 - Epoch: [92][ 40/ 71] Overall Loss 0.166092 Objective Loss 0.166092 LR 0.000250 Time 0.314838 -2023-09-11 14:14:39,215 - Epoch: [92][ 50/ 71] Overall Loss 0.165662 Objective Loss 0.165662 LR 0.000250 Time 0.295023 -2023-09-11 14:14:41,873 - Epoch: [92][ 60/ 71] Overall Loss 0.165927 Objective Loss 0.165927 LR 0.000250 Time 0.290150 -2023-09-11 14:14:43,942 - Epoch: [92][ 70/ 71] Overall Loss 0.166586 Objective Loss 0.166586 Top1 93.750000 LR 0.000250 Time 0.278247 -2023-09-11 14:14:44,019 - Epoch: [92][ 71/ 71] Overall Loss 0.166952 Objective Loss 0.166952 Top1 92.559524 LR 0.000250 Time 0.275421 -2023-09-11 14:14:44,111 - --- validate (epoch=92)----------- -2023-09-11 14:14:44,112 - 2000 samples (256 per mini-batch) -2023-09-11 14:14:47,472 - Epoch: [92][ 8/ 8] Loss 0.196228 Top1 91.700000 -2023-09-11 14:14:47,575 - ==> Top1: 91.700 Loss: 0.196 - -2023-09-11 14:14:47,575 - ==> Confusion: -[[878 107] - [ 59 956]] +2025-05-20 17:20:29,064 - ==> Best [Top1: 92.700 Params: 57776 on epoch: 79] +2025-05-20 17:20:29,064 - Saving checkpoint to: 
logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:20:29,071 - + +2025-05-20 17:20:29,071 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:20:34,479 - Epoch: [83][ 10/ 71] Overall Loss 0.148194 Objective Loss 0.148194 LR 0.000250 Time 0.540708 +2025-05-20 17:20:38,553 - Epoch: [83][ 20/ 71] Overall Loss 0.156851 Objective Loss 0.156851 LR 0.000250 Time 0.474011 +2025-05-20 17:20:43,219 - Epoch: [83][ 30/ 71] Overall Loss 0.157372 Objective Loss 0.157372 LR 0.000250 Time 0.471547 +2025-05-20 17:20:46,893 - Epoch: [83][ 40/ 71] Overall Loss 0.157848 Objective Loss 0.157848 LR 0.000250 Time 0.445514 +2025-05-20 17:20:51,433 - Epoch: [83][ 50/ 71] Overall Loss 0.155144 Objective Loss 0.155144 LR 0.000250 Time 0.447194 +2025-05-20 17:20:54,575 - Epoch: [83][ 60/ 71] Overall Loss 0.157437 Objective Loss 0.157437 LR 0.000250 Time 0.425026 +2025-05-20 17:20:58,259 - Epoch: [83][ 70/ 71] Overall Loss 0.159353 Objective Loss 0.159353 Top1 94.531250 LR 0.000250 Time 0.416932 +2025-05-20 17:20:58,351 - Epoch: [83][ 71/ 71] Overall Loss 0.159547 Objective Loss 0.159547 Top1 94.345238 LR 0.000250 Time 0.412355 +2025-05-20 17:20:58,382 - --- validate (epoch=83)----------- +2025-05-20 17:20:58,382 - 2000 samples (256 per mini-batch) +2025-05-20 17:21:02,381 - Epoch: [83][ 8/ 8] Loss 0.183204 Top1 92.200000 +2025-05-20 17:21:02,414 - ==> Top1: 92.200 Loss: 0.183 + +2025-05-20 17:21:02,414 - ==> Confusion: +[[931 54] + [102 913]] + +2025-05-20 17:21:02,431 - ==> Best [Top1: 92.700 Params: 57776 on epoch: 79] +2025-05-20 17:21:02,431 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:21:02,438 - + +2025-05-20 17:21:02,438 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:21:08,804 - Epoch: [84][ 10/ 71] Overall Loss 0.172514 Objective Loss 0.172514 LR 0.000250 Time 0.636482 +2025-05-20 17:21:11,690 - Epoch: [84][ 20/ 71] Overall Loss 
0.165112 Objective Loss 0.165112 LR 0.000250 Time 0.462553 +2025-05-20 17:21:16,096 - Epoch: [84][ 30/ 71] Overall Loss 0.162559 Objective Loss 0.162559 LR 0.000250 Time 0.455208 +2025-05-20 17:21:19,274 - Epoch: [84][ 40/ 71] Overall Loss 0.162091 Objective Loss 0.162091 LR 0.000250 Time 0.420850 +2025-05-20 17:21:24,406 - Epoch: [84][ 50/ 71] Overall Loss 0.162568 Objective Loss 0.162568 LR 0.000250 Time 0.439303 +2025-05-20 17:21:27,637 - Epoch: [84][ 60/ 71] Overall Loss 0.162159 Objective Loss 0.162159 LR 0.000250 Time 0.419944 +2025-05-20 17:21:31,710 - Epoch: [84][ 70/ 71] Overall Loss 0.160856 Objective Loss 0.160856 Top1 92.968750 LR 0.000250 Time 0.418124 +2025-05-20 17:21:31,806 - Epoch: [84][ 71/ 71] Overall Loss 0.161893 Objective Loss 0.161893 Top1 93.154762 LR 0.000250 Time 0.413593 +2025-05-20 17:21:31,848 - --- validate (epoch=84)----------- +2025-05-20 17:21:31,848 - 2000 samples (256 per mini-batch) +2025-05-20 17:21:35,618 - Epoch: [84][ 8/ 8] Loss 0.203820 Top1 92.200000 +2025-05-20 17:21:35,655 - ==> Top1: 92.200 Loss: 0.204 + +2025-05-20 17:21:35,655 - ==> Confusion: +[[887 98] + [ 58 957]] -2023-09-11 14:14:47,591 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:14:47,591 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:14:47,594 - - -2023-09-11 14:14:47,594 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:14:51,853 - Epoch: [93][ 10/ 71] Overall Loss 0.180006 Objective Loss 0.180006 LR 0.000250 Time 0.425942 -2023-09-11 14:14:54,538 - Epoch: [93][ 20/ 71] Overall Loss 0.172435 Objective Loss 0.172435 LR 0.000250 Time 0.347173 -2023-09-11 14:14:56,761 - Epoch: [93][ 30/ 71] Overall Loss 0.171962 Objective Loss 0.171962 LR 0.000250 Time 0.305546 -2023-09-11 14:14:59,415 - Epoch: [93][ 40/ 71] Overall Loss 0.168024 Objective Loss 0.168024 LR 0.000250 Time 0.295492 -2023-09-11 14:15:01,623 - Epoch: [93][ 50/ 71] Overall Loss 0.167567 Objective Loss 0.167567 
LR 0.000250 Time 0.280543 -2023-09-11 14:15:04,289 - Epoch: [93][ 60/ 71] Overall Loss 0.164762 Objective Loss 0.164762 LR 0.000250 Time 0.278227 -2023-09-11 14:15:06,233 - Epoch: [93][ 70/ 71] Overall Loss 0.164611 Objective Loss 0.164611 Top1 95.703125 LR 0.000250 Time 0.266246 -2023-09-11 14:15:06,309 - Epoch: [93][ 71/ 71] Overall Loss 0.164235 Objective Loss 0.164235 Top1 95.238095 LR 0.000250 Time 0.263561 -2023-09-11 14:15:06,411 - --- validate (epoch=93)----------- -2023-09-11 14:15:06,411 - 2000 samples (256 per mini-batch) -2023-09-11 14:15:09,553 - Epoch: [93][ 8/ 8] Loss 0.210347 Top1 90.950000 -2023-09-11 14:15:09,658 - ==> Top1: 90.950 Loss: 0.210 - -2023-09-11 14:15:09,659 - ==> Confusion: -[[852 133] - [ 48 967]] +2025-05-20 17:21:35,666 - ==> Best [Top1: 92.700 Params: 57776 on epoch: 79] +2025-05-20 17:21:35,666 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:21:35,674 - + +2025-05-20 17:21:35,674 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:21:41,886 - Epoch: [85][ 10/ 71] Overall Loss 0.155006 Objective Loss 0.155006 LR 0.000250 Time 0.621117 +2025-05-20 17:21:45,284 - Epoch: [85][ 20/ 71] Overall Loss 0.158380 Objective Loss 0.158380 LR 0.000250 Time 0.480474 +2025-05-20 17:21:50,197 - Epoch: [85][ 30/ 71] Overall Loss 0.162665 Objective Loss 0.162665 LR 0.000250 Time 0.484045 +2025-05-20 17:21:54,547 - Epoch: [85][ 40/ 71] Overall Loss 0.162533 Objective Loss 0.162533 LR 0.000250 Time 0.471781 +2025-05-20 17:21:58,729 - Epoch: [85][ 50/ 71] Overall Loss 0.162693 Objective Loss 0.162693 LR 0.000250 Time 0.461052 +2025-05-20 17:22:03,086 - Epoch: [85][ 60/ 71] Overall Loss 0.162720 Objective Loss 0.162720 LR 0.000250 Time 0.456827 +2025-05-20 17:22:06,972 - Epoch: [85][ 70/ 71] Overall Loss 0.162145 Objective Loss 0.162145 Top1 92.578125 LR 0.000250 Time 0.447069 +2025-05-20 17:22:07,080 - Epoch: [85][ 71/ 71] Overall Loss 0.163218 Objective Loss 
0.163218 Top1 91.666667 LR 0.000250 Time 0.442295 +2025-05-20 17:22:07,118 - --- validate (epoch=85)----------- +2025-05-20 17:22:07,118 - 2000 samples (256 per mini-batch) +2025-05-20 17:22:11,474 - Epoch: [85][ 8/ 8] Loss 0.195463 Top1 91.950000 +2025-05-20 17:22:11,509 - ==> Top1: 91.950 Loss: 0.195 + +2025-05-20 17:22:11,509 - ==> Confusion: +[[912 73] + [ 88 927]] -2023-09-11 14:15:09,673 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:15:09,674 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:15:09,678 - - -2023-09-11 14:15:09,678 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:15:14,266 - Epoch: [94][ 10/ 71] Overall Loss 0.177586 Objective Loss 0.177586 LR 0.000250 Time 0.458744 -2023-09-11 14:15:16,228 - Epoch: [94][ 20/ 71] Overall Loss 0.167646 Objective Loss 0.167646 LR 0.000250 Time 0.327475 -2023-09-11 14:15:19,611 - Epoch: [94][ 30/ 71] Overall Loss 0.175110 Objective Loss 0.175110 LR 0.000250 Time 0.331057 -2023-09-11 14:15:22,241 - Epoch: [94][ 40/ 71] Overall Loss 0.176991 Objective Loss 0.176991 LR 0.000250 Time 0.314024 -2023-09-11 14:15:24,586 - Epoch: [94][ 50/ 71] Overall Loss 0.177384 Objective Loss 0.177384 LR 0.000250 Time 0.298116 -2023-09-11 14:15:27,639 - Epoch: [94][ 60/ 71] Overall Loss 0.175930 Objective Loss 0.175930 LR 0.000250 Time 0.299307 -2023-09-11 14:15:29,653 - Epoch: [94][ 70/ 71] Overall Loss 0.176024 Objective Loss 0.176024 Top1 92.968750 LR 0.000250 Time 0.285317 -2023-09-11 14:15:29,737 - Epoch: [94][ 71/ 71] Overall Loss 0.175764 Objective Loss 0.175764 Top1 93.154762 LR 0.000250 Time 0.282482 -2023-09-11 14:15:29,836 - --- validate (epoch=94)----------- -2023-09-11 14:15:29,836 - 2000 samples (256 per mini-batch) -2023-09-11 14:15:32,673 - Epoch: [94][ 8/ 8] Loss 0.193169 Top1 91.600000 -2023-09-11 14:15:32,772 - ==> Top1: 91.600 Loss: 0.193 - -2023-09-11 14:15:32,772 - ==> Confusion: -[[913 72] - [ 96 919]] - -2023-09-11 
14:15:32,788 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:15:32,788 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:15:32,790 - - -2023-09-11 14:15:32,791 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:15:36,370 - Epoch: [95][ 10/ 71] Overall Loss 0.159713 Objective Loss 0.159713 LR 0.000250 Time 0.357888 -2023-09-11 14:15:38,712 - Epoch: [95][ 20/ 71] Overall Loss 0.158096 Objective Loss 0.158096 LR 0.000250 Time 0.296047 -2023-09-11 14:15:41,455 - Epoch: [95][ 30/ 71] Overall Loss 0.159871 Objective Loss 0.159871 LR 0.000250 Time 0.288785 -2023-09-11 14:15:43,522 - Epoch: [95][ 40/ 71] Overall Loss 0.166333 Objective Loss 0.166333 LR 0.000250 Time 0.268247 -2023-09-11 14:15:46,255 - Epoch: [95][ 50/ 71] Overall Loss 0.166696 Objective Loss 0.166696 LR 0.000250 Time 0.269244 -2023-09-11 14:15:49,384 - Epoch: [95][ 60/ 71] Overall Loss 0.167923 Objective Loss 0.167923 LR 0.000250 Time 0.276514 -2023-09-11 14:15:51,799 - Epoch: [95][ 70/ 71] Overall Loss 0.171762 Objective Loss 0.171762 Top1 89.453125 LR 0.000250 Time 0.271514 -2023-09-11 14:15:51,882 - Epoch: [95][ 71/ 71] Overall Loss 0.173798 Objective Loss 0.173798 Top1 88.988095 LR 0.000250 Time 0.268863 -2023-09-11 14:15:51,984 - --- validate (epoch=95)----------- -2023-09-11 14:15:51,985 - 2000 samples (256 per mini-batch) -2023-09-11 14:15:54,605 - Epoch: [95][ 8/ 8] Loss 0.210212 Top1 91.200000 -2023-09-11 14:15:54,705 - ==> Top1: 91.200 Loss: 0.210 - -2023-09-11 14:15:54,705 - ==> Confusion: -[[951 34] - [142 873]] - -2023-09-11 14:15:54,706 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:15:54,706 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:15:54,711 - - -2023-09-11 14:15:54,711 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:15:58,701 - Epoch: [96][ 10/ 71] Overall Loss 0.167610 Objective Loss 0.167610 LR 0.000250 
Time 0.398873 -2023-09-11 14:16:01,519 - Epoch: [96][ 20/ 71] Overall Loss 0.163180 Objective Loss 0.163180 LR 0.000250 Time 0.340349 -2023-09-11 14:16:04,035 - Epoch: [96][ 30/ 71] Overall Loss 0.168081 Objective Loss 0.168081 LR 0.000250 Time 0.310737 -2023-09-11 14:16:07,382 - Epoch: [96][ 40/ 71] Overall Loss 0.170437 Objective Loss 0.170437 LR 0.000250 Time 0.316735 -2023-09-11 14:16:09,525 - Epoch: [96][ 50/ 71] Overall Loss 0.168819 Objective Loss 0.168819 LR 0.000250 Time 0.296234 -2023-09-11 14:16:12,332 - Epoch: [96][ 60/ 71] Overall Loss 0.167712 Objective Loss 0.167712 LR 0.000250 Time 0.293639 -2023-09-11 14:16:14,599 - Epoch: [96][ 70/ 71] Overall Loss 0.169499 Objective Loss 0.169499 Top1 94.140625 LR 0.000250 Time 0.284071 -2023-09-11 14:16:14,699 - Epoch: [96][ 71/ 71] Overall Loss 0.170389 Objective Loss 0.170389 Top1 93.154762 LR 0.000250 Time 0.281479 -2023-09-11 14:16:14,793 - --- validate (epoch=96)----------- -2023-09-11 14:16:14,793 - 2000 samples (256 per mini-batch) -2023-09-11 14:16:18,069 - Epoch: [96][ 8/ 8] Loss 0.200266 Top1 91.300000 -2023-09-11 14:16:18,161 - ==> Top1: 91.300 Loss: 0.200 - -2023-09-11 14:16:18,161 - ==> Confusion: -[[907 78] - [ 96 919]] - -2023-09-11 14:16:18,175 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84] -2023-09-11 14:16:18,175 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:16:18,180 - - -2023-09-11 14:16:18,180 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:16:23,239 - Epoch: [97][ 10/ 71] Overall Loss 0.168783 Objective Loss 0.168783 LR 0.000250 Time 0.505807 -2023-09-11 14:16:25,324 - Epoch: [97][ 20/ 71] Overall Loss 0.176425 Objective Loss 0.176425 LR 0.000250 Time 0.357156 -2023-09-11 14:16:28,169 - Epoch: [97][ 30/ 71] Overall Loss 0.173409 Objective Loss 0.173409 LR 0.000250 Time 0.332932 -2023-09-11 14:16:30,470 - Epoch: [97][ 40/ 71] Overall Loss 0.171410 Objective Loss 0.171410 LR 0.000250 Time 0.307223 -2023-09-11 
14:16:33,026 - Epoch: [97][ 50/ 71] Overall Loss 0.172385 Objective Loss 0.172385 LR 0.000250 Time 0.296889 -2023-09-11 14:16:35,798 - Epoch: [97][ 60/ 71] Overall Loss 0.169745 Objective Loss 0.169745 LR 0.000250 Time 0.293595 -2023-09-11 14:16:37,863 - Epoch: [97][ 70/ 71] Overall Loss 0.166689 Objective Loss 0.166689 Top1 92.187500 LR 0.000250 Time 0.281150 -2023-09-11 14:16:37,945 - Epoch: [97][ 71/ 71] Overall Loss 0.167816 Objective Loss 0.167816 Top1 90.773810 LR 0.000250 Time 0.278347 -2023-09-11 14:16:38,053 - --- validate (epoch=97)----------- -2023-09-11 14:16:38,053 - 2000 samples (256 per mini-batch) -2023-09-11 14:16:41,124 - Epoch: [97][ 8/ 8] Loss 0.189017 Top1 92.250000 -2023-09-11 14:16:41,219 - ==> Top1: 92.250 Loss: 0.189 - -2023-09-11 14:16:41,219 - ==> Confusion: -[[895 90] - [ 65 950]] +2025-05-20 17:22:11,522 - ==> Best [Top1: 92.700 Params: 57776 on epoch: 79] +2025-05-20 17:22:11,522 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:22:11,530 - + +2025-05-20 17:22:11,530 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:22:17,985 - Epoch: [86][ 10/ 71] Overall Loss 0.172872 Objective Loss 0.172872 LR 0.000250 Time 0.645347 +2025-05-20 17:22:21,872 - Epoch: [86][ 20/ 71] Overall Loss 0.164317 Objective Loss 0.164317 LR 0.000250 Time 0.517018 +2025-05-20 17:22:27,091 - Epoch: [86][ 30/ 71] Overall Loss 0.163247 Objective Loss 0.163247 LR 0.000250 Time 0.518640 +2025-05-20 17:22:30,841 - Epoch: [86][ 40/ 71] Overall Loss 0.164009 Objective Loss 0.164009 LR 0.000250 Time 0.482726 +2025-05-20 17:22:35,807 - Epoch: [86][ 50/ 71] Overall Loss 0.161920 Objective Loss 0.161920 LR 0.000250 Time 0.485500 +2025-05-20 17:22:38,927 - Epoch: [86][ 60/ 71] Overall Loss 0.163193 Objective Loss 0.163193 LR 0.000250 Time 0.456571 +2025-05-20 17:22:42,787 - Epoch: [86][ 70/ 71] Overall Loss 0.165353 Objective Loss 0.165353 Top1 92.187500 LR 0.000250 Time 0.446478 
+2025-05-20 17:22:42,891 - Epoch: [86][ 71/ 71] Overall Loss 0.165220 Objective Loss 0.165220 Top1 91.964286 LR 0.000250 Time 0.441663
+2025-05-20 17:22:42,946 - --- validate (epoch=86)-----------
+2025-05-20 17:22:42,947 - 2000 samples (256 per mini-batch)
+2025-05-20 17:22:46,760 - Epoch: [86][ 8/ 8] Loss 0.185827 Top1 92.350000
+2025-05-20 17:22:46,793 - ==> Top1: 92.350 Loss: 0.186
+
+2025-05-20 17:22:46,794 - ==> Confusion:
+[[932 53]
+ [100 915]]
-2023-09-11 14:16:41,234 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84]
-2023-09-11 14:16:41,234 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:16:41,236 - 
-
-2023-09-11 14:16:41,237 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:16:45,754 - Epoch: [98][ 10/ 71] Overall Loss 0.162816 Objective Loss 0.162816 LR 0.000250 Time 0.451734
-2023-09-11 14:16:47,774 - Epoch: [98][ 20/ 71] Overall Loss 0.168544 Objective Loss 0.168544 LR 0.000250 Time 0.326831
-2023-09-11 14:16:50,929 - Epoch: [98][ 30/ 71] Overall Loss 0.167241 Objective Loss 0.167241 LR 0.000250 Time 0.323054
-2023-09-11 14:16:52,955 - Epoch: [98][ 40/ 71] Overall Loss 0.163767 Objective Loss 0.163767 LR 0.000250 Time 0.292937
-2023-09-11 14:16:55,840 - Epoch: [98][ 50/ 71] Overall Loss 0.164618 Objective Loss 0.164618 LR 0.000250 Time 0.292042
-2023-09-11 14:16:57,943 - Epoch: [98][ 60/ 71] Overall Loss 0.163995 Objective Loss 0.163995 LR 0.000250 Time 0.278409
-2023-09-11 14:17:00,308 - Epoch: [98][ 70/ 71] Overall Loss 0.166320 Objective Loss 0.166320 Top1 90.234375 LR 0.000250 Time 0.272411
-2023-09-11 14:17:00,418 - Epoch: [98][ 71/ 71] Overall Loss 0.166477 Objective Loss 0.166477 Top1 89.583333 LR 0.000250 Time 0.270122
-2023-09-11 14:17:00,526 - --- validate (epoch=98)-----------
-2023-09-11 14:17:00,526 - 2000 samples (256 per mini-batch)
-2023-09-11 14:17:03,760 - Epoch: [98][ 8/ 8] Loss 0.204048 Top1 91.600000
-2023-09-11 14:17:03,853 - ==> Top1: 91.600 Loss: 0.204
-
-2023-09-11 14:17:03,853 - ==> Confusion:
-[[939 46]
- [122 893]]
+2025-05-20 17:22:46,807 - ==> Best [Top1: 92.700 Params: 57776 on epoch: 79]
+2025-05-20 17:22:46,807 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:22:46,814 - 
+
+2025-05-20 17:22:46,814 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:22:52,715 - Epoch: [87][ 10/ 71] Overall Loss 0.154966 Objective Loss 0.154966 LR 0.000250 Time 0.590022
+2025-05-20 17:22:56,621 - Epoch: [87][ 20/ 71] Overall Loss 0.149700 Objective Loss 0.149700 LR 0.000250 Time 0.490298
+2025-05-20 17:23:01,042 - Epoch: [87][ 30/ 71] Overall Loss 0.151863 Objective Loss 0.151863 LR 0.000250 Time 0.474229
+2025-05-20 17:23:05,172 - Epoch: [87][ 40/ 71] Overall Loss 0.151690 Objective Loss 0.151690 LR 0.000250 Time 0.458917
+2025-05-20 17:23:09,847 - Epoch: [87][ 50/ 71] Overall Loss 0.153559 Objective Loss 0.153559 LR 0.000250 Time 0.460618
+2025-05-20 17:23:12,758 - Epoch: [87][ 60/ 71] Overall Loss 0.151374 Objective Loss 0.151374 LR 0.000250 Time 0.432361
+2025-05-20 17:23:17,046 - Epoch: [87][ 70/ 71] Overall Loss 0.152742 Objective Loss 0.152742 Top1 93.359375 LR 0.000250 Time 0.431842
+2025-05-20 17:23:17,148 - Epoch: [87][ 71/ 71] Overall Loss 0.152207 Objective Loss 0.152207 Top1 94.047619 LR 0.000250 Time 0.427203
+2025-05-20 17:23:17,192 - --- validate (epoch=87)-----------
+2025-05-20 17:23:17,192 - 2000 samples (256 per mini-batch)
+2025-05-20 17:23:21,320 - Epoch: [87][ 8/ 8] Loss 0.173041 Top1 93.250000
+2025-05-20 17:23:21,360 - ==> Top1: 93.250 Loss: 0.173
+
+2025-05-20 17:23:21,361 - ==> Confusion:
+[[914 71]
+ [ 64 951]]
+
+2025-05-20 17:23:21,376 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 87]
+2025-05-20 17:23:21,376 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:23:21,384 - 
+
+2025-05-20 17:23:21,384 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:23:27,740 - Epoch: [88][ 10/ 71] Overall Loss 0.168857 Objective Loss 0.168857 LR 0.000250 Time 0.635490
+2025-05-20 17:23:31,319 - Epoch: [88][ 20/ 71] Overall Loss 0.162935 Objective Loss 0.162935 LR 0.000250 Time 0.496696
+2025-05-20 17:23:35,527 - Epoch: [88][ 30/ 71] Overall Loss 0.162943 Objective Loss 0.162943 LR 0.000250 Time 0.471386
+2025-05-20 17:23:40,272 - Epoch: [88][ 40/ 71] Overall Loss 0.157977 Objective Loss 0.157977 LR 0.000250 Time 0.472154
+2025-05-20 17:23:44,054 - Epoch: [88][ 50/ 71] Overall Loss 0.158536 Objective Loss 0.158536 LR 0.000250 Time 0.453350
+2025-05-20 17:23:47,960 - Epoch: [88][ 60/ 71] Overall Loss 0.161063 Objective Loss 0.161063 LR 0.000250 Time 0.442895
+2025-05-20 17:23:52,397 - Epoch: [88][ 70/ 71] Overall Loss 0.163266 Objective Loss 0.163266 Top1 91.406250 LR 0.000250 Time 0.443006
+2025-05-20 17:23:52,476 - Epoch: [88][ 71/ 71] Overall Loss 0.162862 Objective Loss 0.162862 Top1 91.369048 LR 0.000250 Time 0.437870
+2025-05-20 17:23:52,510 - --- validate (epoch=88)-----------
+2025-05-20 17:23:52,511 - 2000 samples (256 per mini-batch)
+2025-05-20 17:23:57,138 - Epoch: [88][ 8/ 8] Loss 0.187349 Top1 92.650000
+2025-05-20 17:23:57,183 - ==> Top1: 92.650 Loss: 0.187
+
+2025-05-20 17:23:57,183 - ==> Confusion:
+[[933 52]
+ [ 95 920]]
-2023-09-11 14:17:03,868 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84]
-2023-09-11 14:17:03,869 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:17:03,873 - 
-
-2023-09-11 14:17:03,873 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:17:07,766 - Epoch: [99][ 10/ 71] Overall Loss 0.184852 Objective Loss 0.184852 LR 0.000250 Time 0.389240
-2023-09-11 14:17:10,809 - Epoch: [99][ 20/ 71] Overall Loss 0.174011 Objective Loss 0.174011 LR 0.000250 Time 0.346735
-2023-09-11 14:17:12,805 - Epoch: [99][ 30/ 71] Overall Loss 0.170335 Objective Loss 0.170335 LR 0.000250 Time 0.297681
-2023-09-11 14:17:15,829 - Epoch: [99][ 40/ 71] Overall Loss 0.172405 Objective Loss 0.172405 LR 0.000250 Time 0.298847
-2023-09-11 14:17:17,904 - Epoch: [99][ 50/ 71] Overall Loss 0.170454 Objective Loss 0.170454 LR 0.000250 Time 0.280575
-2023-09-11 14:17:20,678 - Epoch: [99][ 60/ 71] Overall Loss 0.169586 Objective Loss 0.169586 LR 0.000250 Time 0.280044
-2023-09-11 14:17:22,523 - Epoch: [99][ 70/ 71] Overall Loss 0.167114 Objective Loss 0.167114 Top1 95.312500 LR 0.000250 Time 0.266379
-2023-09-11 14:17:22,574 - Epoch: [99][ 71/ 71] Overall Loss 0.166522 Objective Loss 0.166522 Top1 94.940476 LR 0.000250 Time 0.263345
-2023-09-11 14:17:22,677 - --- validate (epoch=99)-----------
-2023-09-11 14:17:22,678 - 2000 samples (256 per mini-batch)
-2023-09-11 14:17:26,116 - Epoch: [99][ 8/ 8] Loss 0.204708 Top1 91.050000
-2023-09-11 14:17:26,214 - ==> Top1: 91.050 Loss: 0.205
-
-2023-09-11 14:17:26,214 - ==> Confusion:
-[[860 125]
- [ 54 961]]
-
-2023-09-11 14:17:26,229 - ==> Best [Top1: 92.300 Sparsity:0.00 Params: 57776 on epoch: 84]
-2023-09-11 14:17:26,229 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:17:26,234 - 
-
-2023-09-11 14:17:26,234 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:17:30,223 - Epoch: [100][ 10/ 71] Overall Loss 0.156231 Objective Loss 0.156231 LR 0.000250 Time 0.398900
-2023-09-11 14:17:33,094 - Epoch: [100][ 20/ 71] Overall Loss 0.163714 Objective Loss 0.163714 LR 0.000250 Time 0.342976
-2023-09-11 14:17:36,343 - Epoch: [100][ 30/ 71] Overall Loss 0.163452 Objective Loss 0.163452 LR 0.000250 Time 0.336919
-2023-09-11 14:17:38,985 - Epoch: [100][ 40/ 71] Overall Loss 0.161386 Objective Loss 0.161386 LR 0.000250 Time 0.318743
-2023-09-11 14:17:41,673 - Epoch: [100][ 50/ 71] Overall Loss 0.162253 Objective Loss 0.162253 LR 0.000250 Time 0.308749
-2023-09-11 14:17:45,049 - Epoch: [100][ 60/ 71] Overall Loss 0.161087 Objective Loss 0.161087 LR 0.000250 Time 0.313548
-2023-09-11 14:17:47,489 - Epoch: [100][ 70/ 71] Overall Loss 0.161462 Objective Loss 0.161462 Top1 93.750000 LR 0.000250 Time 0.303610
-2023-09-11 14:17:47,569 - Epoch: [100][ 71/ 71] Overall Loss 0.161992 Objective Loss 0.161992 Top1 93.452381 LR 0.000250 Time 0.300463
-2023-09-11 14:17:47,668 - --- validate (epoch=100)-----------
-2023-09-11 14:17:47,668 - 2000 samples (256 per mini-batch)
-2023-09-11 14:17:50,807 - Epoch: [100][ 8/ 8] Loss 0.185727 Top1 92.800000
-2023-09-11 14:17:50,902 - ==> Top1: 92.800 Loss: 0.186
-
-2023-09-11 14:17:50,902 - ==> Confusion:
-[[921 64]
+2025-05-20 17:23:57,195 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 87]
+2025-05-20 17:23:57,195 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:23:57,203 - 
+
+2025-05-20 17:23:57,203 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:24:03,747 - Epoch: [89][ 10/ 71] Overall Loss 0.149263 Objective Loss 0.149263 LR 0.000250 Time 0.654312
+2025-05-20 17:24:07,720 - Epoch: [89][ 20/ 71] Overall Loss 0.162597 Objective Loss 0.162597 LR 0.000250 Time 0.525795
+2025-05-20 17:24:13,231 - Epoch: [89][ 30/ 71] Overall Loss 0.167750 Objective Loss 0.167750 LR 0.000250 Time 0.534216
+2025-05-20 17:24:17,572 - Epoch: [89][ 40/ 71] Overall Loss 0.165228 Objective Loss 0.165228 LR 0.000250 Time 0.509156
+2025-05-20 17:24:22,840 - Epoch: [89][ 50/ 71] Overall Loss 0.160560 Objective Loss 0.160560 LR 0.000250 Time 0.512694
+2025-05-20 17:24:26,768 - Epoch: [89][ 60/ 71] Overall Loss 0.158169 Objective Loss 0.158169 LR 0.000250 Time 0.492699
+2025-05-20 17:24:31,927 - Epoch: [89][ 70/ 71] Overall Loss 0.158210 Objective Loss 0.158210 Top1 92.968750 LR 0.000250 Time 0.496010
+2025-05-20 17:24:32,026 - Epoch: [89][ 71/ 71] Overall Loss 0.159376 Objective Loss 0.159376 Top1 91.964286 LR 0.000250 Time 0.490421
+2025-05-20 17:24:32,072 - --- validate (epoch=89)-----------
+2025-05-20 17:24:32,072 - 2000 samples (256 per mini-batch)
+2025-05-20 17:24:36,634 - Epoch: [89][ 8/ 8] Loss 0.194191 Top1 91.800000
+2025-05-20 17:24:36,684 - ==> Top1: 91.800 Loss: 0.194
+
+2025-05-20 17:24:36,684 - ==> Confusion:
+[[901 84]
  [ 80 935]]
+2025-05-20 17:24:36,699 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 87]
+2025-05-20 17:24:36,699 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:24:36,706 - 
+
+2025-05-20 17:24:36,706 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:24:43,583 - Epoch: [90][ 10/ 71] Overall Loss 0.153639 Objective Loss 0.153639 LR 0.000250 Time 0.687617
+2025-05-20 17:24:47,146 - Epoch: [90][ 20/ 71] Overall Loss 0.167463 Objective Loss 0.167463 LR 0.000250 Time 0.521938
+2025-05-20 17:24:51,886 - Epoch: [90][ 30/ 71] Overall Loss 0.163201 Objective Loss 0.163201 LR 0.000250 Time 0.505936
+2025-05-20 17:24:55,807 - Epoch: [90][ 40/ 71] Overall Loss 0.159472 Objective Loss 0.159472 LR 0.000250 Time 0.477473
+2025-05-20 17:25:01,000 - Epoch: [90][ 50/ 71] Overall Loss 0.159917 Objective Loss 0.159917 LR 0.000250 Time 0.485831
+2025-05-20 17:25:04,197 - Epoch: [90][ 60/ 71] Overall Loss 0.158400 Objective Loss 0.158400 LR 0.000250 Time 0.458137
+2025-05-20 17:25:09,124 - Epoch: [90][ 70/ 71] Overall Loss 0.159092 Objective Loss 0.159092 Top1 94.140625 LR 0.000250 Time 0.463077
+2025-05-20 17:25:09,233 - Epoch: [90][ 71/ 71] Overall Loss 0.158299 Objective Loss 0.158299 Top1 94.047619 LR 0.000250 Time 0.458088
+2025-05-20 17:25:09,273 - --- validate (epoch=90)-----------
+2025-05-20 17:25:09,274 - 2000 samples (256 per mini-batch)
+2025-05-20 17:25:13,545 - Epoch: [90][ 8/ 8] Loss 0.185794 Top1 92.550000
+2025-05-20 17:25:13,597 - ==> Top1: 92.550 Loss: 0.186
+
+2025-05-20 17:25:13,597 - ==> Confusion:
+[[914 71]
+ [ 78 937]]
-2023-09-11 14:17:50,918 - ==> Best [Top1: 92.800 Sparsity:0.00 Params: 57776 on epoch: 100]
-2023-09-11 14:17:50,918 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:17:50,921 - 
-
-2023-09-11 14:17:50,921 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:17:55,178 - Epoch: [101][ 10/ 71] Overall Loss 0.156850 Objective Loss 0.156850 LR 0.000250 Time 0.425591
-2023-09-11 14:17:57,469 - Epoch: [101][ 20/ 71] Overall Loss 0.154226 Objective Loss 0.154226 LR 0.000250 Time 0.327335
-2023-09-11 14:18:00,465 - Epoch: [101][ 30/ 71] Overall Loss 0.156152 Objective Loss 0.156152 LR 0.000250 Time 0.318095
-2023-09-11 14:18:02,444 - Epoch: [101][ 40/ 71] Overall Loss 0.158698 Objective Loss 0.158698 LR 0.000250 Time 0.288022
-2023-09-11 14:18:05,184 - Epoch: [101][ 50/ 71] Overall Loss 0.157699 Objective Loss 0.157699 LR 0.000250 Time 0.285218
-2023-09-11 14:18:07,277 - Epoch: [101][ 60/ 71] Overall Loss 0.156583 Objective Loss 0.156583 LR 0.000250 Time 0.272565
-2023-09-11 14:18:09,915 - Epoch: [101][ 70/ 71] Overall Loss 0.158038 Objective Loss 0.158038 Top1 91.406250 LR 0.000250 Time 0.271304
-2023-09-11 14:18:10,040 - Epoch: [101][ 71/ 71] Overall Loss 0.157563 Objective Loss 0.157563 Top1 91.964286 LR 0.000250 Time 0.269245
-2023-09-11 14:18:10,143 - --- validate (epoch=101)-----------
-2023-09-11 14:18:10,143 - 2000 samples (256 per mini-batch)
-2023-09-11 14:18:13,017 - Epoch: [101][ 8/ 8] Loss 0.206197 Top1 91.600000
-2023-09-11 14:18:13,117 - ==> Top1: 91.600 Loss: 0.206
-
-2023-09-11 14:18:13,118 - ==> Confusion:
-[[873 112]
- [ 56 959]]
-
-2023-09-11 14:18:13,133 - ==> Best [Top1: 92.800 Sparsity:0.00 Params: 57776 on epoch: 100]
-2023-09-11 14:18:13,133 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:18:13,136 - 
-
-2023-09-11 14:18:13,136 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:18:17,916 - Epoch: [102][ 10/ 71] Overall Loss 0.154106 Objective Loss 0.154106 LR 0.000250 Time 0.477956
-2023-09-11 14:18:19,923 - Epoch: [102][ 20/ 71] Overall Loss 0.159820 Objective Loss 0.159820 LR 0.000250 Time 0.339308
-2023-09-11 14:18:23,444 - Epoch: [102][ 30/ 71] Overall Loss 0.154715 Objective Loss 0.154715 LR 0.000250 Time 0.343550
-2023-09-11 14:18:25,515 - Epoch: [102][ 40/ 71] Overall Loss 0.155759 Objective Loss 0.155759 LR 0.000250 Time 0.309447
-2023-09-11 14:18:28,842 - Epoch: [102][ 50/ 71] Overall Loss 0.157523 Objective Loss 0.157523 LR 0.000250 Time 0.314094
-2023-09-11 14:18:30,907 - Epoch: [102][ 60/ 71] Overall Loss 0.158726 Objective Loss 0.158726 LR 0.000250 Time 0.296155
-2023-09-11 14:18:34,011 - Epoch: [102][ 70/ 71] Overall Loss 0.159022 Objective Loss 0.159022 Top1 94.921875 LR 0.000250 Time 0.298183
-2023-09-11 14:18:34,079 - Epoch: [102][ 71/ 71] Overall Loss 0.159669 Objective Loss 0.159669 Top1 93.750000 LR 0.000250 Time 0.294932
-2023-09-11 14:18:34,187 - --- validate (epoch=102)-----------
-2023-09-11 14:18:34,187 - 2000 samples (256 per mini-batch)
-2023-09-11 14:18:37,417 - Epoch: [102][ 8/ 8] Loss 0.182677 Top1 92.200000
-2023-09-11 14:18:37,517 - ==> Top1: 92.200 Loss: 0.183
-
-2023-09-11 14:18:37,517 - ==> Confusion:
-[[927 58]
- [ 98 917]]
+2025-05-20 17:25:13,608 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 87]
+2025-05-20 17:25:13,608 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:25:13,616 - 
+
+2025-05-20 17:25:13,616 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:25:19,590 - Epoch: [91][ 10/ 71] Overall Loss 0.175776 Objective Loss 0.175776 LR 0.000250 Time 0.597408
+2025-05-20 17:25:23,296 - Epoch: [91][ 20/ 71] Overall Loss 0.161762 Objective Loss 0.161762 LR 0.000250 Time 0.483936
+2025-05-20 17:25:27,702 - Epoch: [91][ 30/ 71] Overall Loss 0.159980 Objective Loss 0.159980 LR 0.000250 Time 0.469489
+2025-05-20 17:25:32,726 - Epoch: [91][ 40/ 71] Overall Loss 0.163513 Objective Loss 0.163513 LR 0.000250 Time 0.477715
+2025-05-20 17:25:37,062 - Epoch: [91][ 50/ 71] Overall Loss 0.162824 Objective Loss 0.162824 LR 0.000250 Time 0.468879
+2025-05-20 17:25:41,282 - Epoch: [91][ 60/ 71] Overall Loss 0.160335 Objective Loss 0.160335 LR 0.000250 Time 0.461064
+2025-05-20 17:25:45,411 - Epoch: [91][ 70/ 71] Overall Loss 0.156768 Objective Loss 0.156768 Top1 92.578125 LR 0.000250 Time 0.454175
+2025-05-20 17:25:45,512 - Epoch: [91][ 71/ 71] Overall Loss 0.157433 Objective Loss 0.157433 Top1 92.857143 LR 0.000250 Time 0.449206
+2025-05-20 17:25:45,556 - --- validate (epoch=91)-----------
+2025-05-20 17:25:45,557 - 2000 samples (256 per mini-batch)
+2025-05-20 17:25:49,863 - Epoch: [91][ 8/ 8] Loss 0.181684 Top1 93.250000
+2025-05-20 17:25:49,899 - ==> Top1: 93.250 Loss: 0.182
+
+2025-05-20 17:25:49,900 - ==> Confusion:
+[[901 84]
+ [ 51 964]]
-2023-09-11 14:18:37,521 - ==> Best [Top1: 92.800 Sparsity:0.00 Params: 57776 on epoch: 100]
-2023-09-11 14:18:37,521 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:18:37,524 - 
-
-2023-09-11 14:18:37,524 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:18:40,928 - Epoch: [103][ 10/ 71] Overall Loss 0.162412 Objective Loss 0.162412 LR 0.000250 Time 0.340391
-2023-09-11 14:18:43,275 - Epoch: [103][ 20/ 71] Overall Loss 0.158648 Objective Loss 0.158648 LR 0.000250 Time 0.287517
-2023-09-11 14:18:46,157 - Epoch: [103][ 30/ 71] Overall Loss 0.161908 Objective Loss 0.161908 LR 0.000250 Time 0.287743
-2023-09-11 14:18:49,232 - Epoch: [103][ 40/ 71] Overall Loss 0.162365 Objective Loss 0.162365 LR 0.000250 Time 0.292664
-2023-09-11 14:18:51,461 - Epoch: [103][ 50/ 71] Overall Loss 0.160468 Objective Loss 0.160468 LR 0.000250 Time 0.278699
-2023-09-11 14:18:54,662 - Epoch: [103][ 60/ 71] Overall Loss 0.161803 Objective Loss 0.161803 LR 0.000250 Time 0.285605
-2023-09-11 14:18:56,668 - Epoch: [103][ 70/ 71] Overall Loss 0.160846 Objective Loss 0.160846 Top1 91.796875 LR 0.000250 Time 0.273456
-2023-09-11 14:18:56,776 - Epoch: [103][ 71/ 71] Overall Loss 0.160586 Objective Loss 0.160586 Top1 92.857143 LR 0.000250 Time 0.271116
-2023-09-11 14:18:56,871 - --- validate (epoch=103)-----------
-2023-09-11 14:18:56,872 - 2000 samples (256 per mini-batch)
-2023-09-11 14:19:00,086 - Epoch: [103][ 8/ 8] Loss 0.218527 Top1 91.400000
-2023-09-11 14:19:00,184 - ==> Top1: 91.400 Loss: 0.219
-
-2023-09-11 14:19:00,184 - ==> Confusion:
-[[926 59]
- [113 902]]
+2025-05-20 17:25:49,916 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 91]
+2025-05-20 17:25:49,916 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:25:49,924 - 
+
+2025-05-20 17:25:49,924 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:25:56,145 - Epoch: [92][ 10/ 71] Overall Loss 0.143111 Objective Loss 0.143111 LR 0.000250 Time 0.622053
+2025-05-20 17:25:59,895 - Epoch: [92][ 20/ 71] Overall Loss 0.151448 Objective Loss 0.151448 LR 0.000250 Time 0.498501
+2025-05-20 17:26:05,127 - Epoch: [92][ 30/ 71] Overall Loss 0.153936 Objective Loss 0.153936 LR 0.000250 Time 0.506722
+2025-05-20 17:26:08,824 - Epoch: [92][ 40/ 71] Overall Loss 0.150679 Objective Loss 0.150679 LR 0.000250 Time 0.472456
+2025-05-20 17:26:13,655 - Epoch: [92][ 50/ 71] Overall Loss 0.154621 Objective Loss 0.154621 LR 0.000250 Time 0.474584
+2025-05-20 17:26:17,395 - Epoch: [92][ 60/ 71] Overall Loss 0.155590 Objective Loss 0.155590 LR 0.000250 Time 0.457811
+2025-05-20 17:26:21,950 - Epoch: [92][ 70/ 71] Overall Loss 0.155963 Objective Loss 0.155963 Top1 92.187500 LR 0.000250 Time 0.457473
+2025-05-20 17:26:22,042 - Epoch: [92][ 71/ 71] Overall Loss 0.155315 Objective Loss 0.155315 Top1 93.750000 LR 0.000250 Time 0.452329
+2025-05-20 17:26:22,095 - --- validate (epoch=92)-----------
+2025-05-20 17:26:22,095 - 2000 samples (256 per mini-batch)
+2025-05-20 17:26:26,568 - Epoch: [92][ 8/ 8] Loss 0.186931 Top1 92.700000
+2025-05-20 17:26:26,606 - ==> Top1: 92.700 Loss: 0.187
+
+2025-05-20 17:26:26,606 - ==> Confusion:
+[[902 83]
+ [ 63 952]]
-2023-09-11 14:19:00,199 - ==> Best [Top1: 92.800 Sparsity:0.00 Params: 57776 on epoch: 100]
-2023-09-11 14:19:00,199 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:19:00,201 - 
-
-2023-09-11 14:19:00,202 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:19:03,798 - Epoch: [104][ 10/ 71] Overall Loss 0.172566 Objective Loss 0.172566 LR 0.000250 Time 0.359571
-2023-09-11 14:19:05,779 - Epoch: [104][ 20/ 71] Overall Loss 0.175436 Objective Loss 0.175436 LR 0.000250 Time 0.278811
-2023-09-11 14:19:08,552 - Epoch: [104][ 30/ 71] Overall Loss 0.175752 Objective Loss 0.175752 LR 0.000250 Time 0.278327
-2023-09-11 14:19:10,626 - Epoch: [104][ 40/ 71] Overall Loss 0.178091 Objective Loss 0.178091 LR 0.000250 Time 0.260577
-2023-09-11 14:19:13,439 - Epoch: [104][ 50/ 71] Overall Loss 0.180067 Objective Loss 0.180067 LR 0.000250 Time 0.264718
-2023-09-11 14:19:16,719 - Epoch: [104][ 60/ 71] Overall Loss 0.176319 Objective Loss 0.176319 LR 0.000250 Time 0.275252
-2023-09-11 14:19:19,070 - Epoch: [104][ 70/ 71] Overall Loss 0.172017 Objective Loss 0.172017 Top1 93.750000 LR 0.000250 Time 0.269518
-2023-09-11 14:19:19,118 - Epoch: [104][ 71/ 71] Overall Loss 0.171663 Objective Loss 0.171663 Top1 93.750000 LR 0.000250 Time 0.266394
-2023-09-11 14:19:19,204 - --- validate (epoch=104)-----------
-2023-09-11 14:19:19,204 - 2000 samples (256 per mini-batch)
-2023-09-11 14:19:22,168 - Epoch: [104][ 8/ 8] Loss 0.191922 Top1 92.150000
-2023-09-11 14:19:22,266 - ==> Top1: 92.150 Loss: 0.192
-
-2023-09-11 14:19:22,266 - ==> Confusion:
-[[905 80]
- [ 77 938]]
+2025-05-20 17:26:26,624 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 91]
+2025-05-20 17:26:26,624 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:26:26,631 - 
+
+2025-05-20 17:26:26,631 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:26:32,610 - Epoch: [93][ 10/ 71] Overall Loss 0.166794 Objective Loss 0.166794 LR 0.000250 Time 0.597811
+2025-05-20 17:26:36,744 - Epoch: [93][ 20/ 71] Overall Loss 0.160323 Objective Loss 0.160323 LR 0.000250 Time 0.505565
+2025-05-20 17:26:42,264 - Epoch: [93][ 30/ 71] Overall Loss 0.161882 Objective Loss 0.161882 LR 0.000250 Time 0.521028
+2025-05-20 17:26:45,824 - Epoch: [93][ 40/ 71] Overall Loss 0.163639 Objective Loss 0.163639 LR 0.000250 Time 0.479780
+2025-05-20 17:26:50,729 - Epoch: [93][ 50/ 71] Overall Loss 0.168834 Objective Loss 0.168834 LR 0.000250 Time 0.481919
+2025-05-20 17:26:54,201 - Epoch: [93][ 60/ 71] Overall Loss 0.167313 Objective Loss 0.167313 LR 0.000250 Time 0.459452
+2025-05-20 17:26:58,399 - Epoch: [93][ 70/ 71] Overall Loss 0.167552 Objective Loss 0.167552 Top1 92.968750 LR 0.000250 Time 0.453778
+2025-05-20 17:26:58,506 - Epoch: [93][ 71/ 71] Overall Loss 0.169153 Objective Loss 0.169153 Top1 91.369048 LR 0.000250 Time 0.448902
+2025-05-20 17:26:58,537 - --- validate (epoch=93)-----------
+2025-05-20 17:26:58,537 - 2000 samples (256 per mini-batch)
+2025-05-20 17:27:02,826 - Epoch: [93][ 8/ 8] Loss 0.209215 Top1 92.200000
+2025-05-20 17:27:02,868 - ==> Top1: 92.200 Loss: 0.209
+
+2025-05-20 17:27:02,868 - ==> Confusion:
+[[880 105]
+ [ 51 964]]
-2023-09-11 14:19:22,268 - ==> Best [Top1: 92.800 Sparsity:0.00 Params: 57776 on epoch: 100]
-2023-09-11 14:19:22,268 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:19:22,273 - 
-
-2023-09-11 14:19:22,273 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:19:26,132 - Epoch: [105][ 10/ 71] Overall Loss 0.159154 Objective Loss 0.159154 LR 0.000250 Time 0.385879
-2023-09-11 14:19:28,342 - Epoch: [105][ 20/ 71] Overall Loss 0.170387 Objective Loss 0.170387 LR 0.000250 Time 0.303428
-2023-09-11 14:19:31,124 - Epoch: [105][ 30/ 71] Overall Loss 0.166630 Objective Loss 0.166630 LR 0.000250 Time 0.294988
-2023-09-11 14:19:33,683 - Epoch: [105][ 40/ 71] Overall Loss 0.161344 Objective Loss 0.161344 LR 0.000250 Time 0.285207
-2023-09-11 14:19:37,314 - Epoch: [105][ 50/ 71] Overall Loss 0.163882 Objective Loss 0.163882 LR 0.000250 Time 0.300781
-2023-09-11 14:19:39,823 - Epoch: [105][ 60/ 71] Overall Loss 0.166138 Objective Loss 0.166138 LR 0.000250 Time 0.292465
-2023-09-11 14:19:41,790 - Epoch: [105][ 70/ 71] Overall Loss 0.166402 Objective Loss 0.166402 Top1 94.531250 LR 0.000250 Time 0.278774
-2023-09-11 14:19:41,884 - Epoch: [105][ 71/ 71] Overall Loss 0.165853 Objective Loss 0.165853 Top1 94.642857 LR 0.000250 Time 0.276174
-2023-09-11 14:19:41,979 - --- validate (epoch=105)-----------
-2023-09-11 14:19:41,979 - 2000 samples (256 per mini-batch)
-2023-09-11 14:19:45,122 - Epoch: [105][ 8/ 8] Loss 0.171953 Top1 93.400000
-2023-09-11 14:19:45,225 - ==> Top1: 93.400 Loss: 0.172
-
-2023-09-11 14:19:45,225 - ==> Confusion:
-[[914 71]
- [ 61 954]]
+2025-05-20 17:27:02,886 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 91]
+2025-05-20 17:27:02,887 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:27:02,894 - 
+
+2025-05-20 17:27:02,895 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:27:09,304 - Epoch: [94][ 10/ 71] Overall Loss 0.158132 Objective Loss 0.158132 LR 0.000250 Time 0.640877
+2025-05-20 17:27:12,972 - Epoch: [94][ 20/ 71] Overall Loss 0.163195 Objective Loss 0.163195 LR 0.000250 Time 0.503797
+2025-05-20 17:27:17,437 - Epoch: [94][ 30/ 71] Overall Loss 0.160888 Objective Loss 0.160888 LR 0.000250 Time 0.484691
+2025-05-20 17:27:22,448 - Epoch: [94][ 40/ 71] Overall Loss 0.161220 Objective Loss 0.161220 LR 0.000250 Time 0.488778
+2025-05-20 17:27:26,478 - Epoch: [94][ 50/ 71] Overall Loss 0.159978 Objective Loss 0.159978 LR 0.000250 Time 0.471623
+2025-05-20 17:27:31,373 - Epoch: [94][ 60/ 71] Overall Loss 0.159628 Objective Loss 0.159628 LR 0.000250 Time 0.474599
+2025-05-20 17:27:35,135 - Epoch: [94][ 70/ 71] Overall Loss 0.159464 Objective Loss 0.159464 Top1 93.359375 LR 0.000250 Time 0.460531
+2025-05-20 17:27:35,243 - Epoch: [94][ 71/ 71] Overall Loss 0.158541 Objective Loss 0.158541 Top1 94.345238 LR 0.000250 Time 0.455562
+2025-05-20 17:27:35,283 - --- validate (epoch=94)-----------
+2025-05-20 17:27:35,284 - 2000 samples (256 per mini-batch)
+2025-05-20 17:27:39,592 - Epoch: [94][ 8/ 8] Loss 0.186576 Top1 92.550000
+2025-05-20 17:27:39,628 - ==> Top1: 92.550 Loss: 0.187
+
+2025-05-20 17:27:39,628 - ==> Confusion:
+[[907 78]
+ [ 71 944]]
-2023-09-11 14:19:45,241 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:19:45,241 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:19:45,246 - 
-
-2023-09-11 14:19:45,247 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:19:49,572 - Epoch: [106][ 10/ 71] Overall Loss 0.145345 Objective Loss 0.145345 LR 0.000250 Time 0.432542
-2023-09-11 14:19:51,623 - Epoch: [106][ 20/ 71] Overall Loss 0.157191 Objective Loss 0.157191 LR 0.000250 Time 0.318779
-2023-09-11 14:19:54,571 - Epoch: [106][ 30/ 71] Overall Loss 0.157885 Objective Loss 0.157885 LR 0.000250 Time 0.310763
-2023-09-11 14:19:56,673 - Epoch: [106][ 40/ 71] Overall Loss 0.156442 Objective Loss 0.156442 LR 0.000250 Time 0.285632
-2023-09-11 14:19:59,727 - Epoch: [106][ 50/ 71] Overall Loss 0.157988 Objective Loss 0.157988 LR 0.000250 Time 0.289568
-2023-09-11 14:20:02,372 - Epoch: [106][ 60/ 71] Overall Loss 0.158815 Objective Loss 0.158815 LR 0.000250 Time 0.285390
-2023-09-11 14:20:05,191 - Epoch: [106][ 70/ 71] Overall Loss 0.156338 Objective Loss 0.156338 Top1 94.140625 LR 0.000250 Time 0.284879
-2023-09-11 14:20:05,244 - Epoch: [106][ 71/ 71] Overall Loss 0.156162 Objective Loss 0.156162 Top1 94.047619 LR 0.000250 Time 0.281609
-2023-09-11 14:20:05,341 - --- validate (epoch=106)-----------
-2023-09-11 14:20:05,341 - 2000 samples (256 per mini-batch)
-2023-09-11 14:20:08,541 - Epoch: [106][ 8/ 8] Loss 0.177933 Top1 92.350000
-2023-09-11 14:20:08,640 - ==> Top1: 92.350 Loss: 0.178
-
-2023-09-11 14:20:08,640 - ==> Confusion:
-[[893 92]
- [ 61 954]]
+2025-05-20 17:27:39,644 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 91]
+2025-05-20 17:27:39,645 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:27:39,652 - 
+
+2025-05-20 17:27:39,652 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:27:45,591 - Epoch: [95][ 10/ 71] Overall Loss 0.147944 Objective Loss 0.147944 LR 0.000250 Time 0.593854
+2025-05-20 17:27:48,897 - Epoch: [95][ 20/ 71] Overall Loss 0.155392 Objective Loss 0.155392 LR 0.000250 Time 0.462199
+2025-05-20 17:27:53,585 - Epoch: [95][ 30/ 71] Overall Loss 0.155864 Objective Loss 0.155864 LR 0.000250 Time 0.464373
+2025-05-20 17:27:57,760 - Epoch: [95][ 40/ 71] Overall Loss 0.155018 Objective Loss 0.155018 LR 0.000250 Time 0.452648
+2025-05-20 17:28:02,294 - Epoch: [95][ 50/ 71] Overall Loss 0.154060 Objective Loss 0.154060 LR 0.000250 Time 0.452804
+2025-05-20 17:28:06,080 - Epoch: [95][ 60/ 71] Overall Loss 0.158236 Objective Loss 0.158236 LR 0.000250 Time 0.440417
+2025-05-20 17:28:11,050 - Epoch: [95][ 70/ 71] Overall Loss 0.159595 Objective Loss 0.159595 Top1 91.796875 LR 0.000250 Time 0.448505
+2025-05-20 17:28:11,142 - Epoch: [95][ 71/ 71] Overall Loss 0.159220 Objective Loss 0.159220 Top1 92.559524 LR 0.000250 Time 0.443482
+2025-05-20 17:28:11,178 - --- validate (epoch=95)-----------
+2025-05-20 17:28:11,178 - 2000 samples (256 per mini-batch)
+2025-05-20 17:28:15,449 - Epoch: [95][ 8/ 8] Loss 0.181657 Top1 92.700000
+2025-05-20 17:28:15,491 - ==> Top1: 92.700 Loss: 0.182
+
+2025-05-20 17:28:15,491 - ==> Confusion:
+[[928 57]
+ [ 89 926]]
-2023-09-11 14:20:08,644 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:20:08,644 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:20:08,647 - 
-
-2023-09-11 14:20:08,647 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:20:13,112 - Epoch: [107][ 10/ 71] Overall Loss 0.145079 Objective Loss 0.145079 LR 0.000250 Time 0.446379
-2023-09-11 14:20:15,186 - Epoch: [107][ 20/ 71] Overall Loss 0.164126 Objective Loss 0.164126 LR 0.000250 Time 0.326894
-2023-09-11 14:20:18,451 - Epoch: [107][ 30/ 71] Overall Loss 0.161951 Objective Loss 0.161951 LR 0.000250 Time 0.326754
-2023-09-11 14:20:21,022 - Epoch: [107][ 40/ 71] Overall Loss 0.160587 Objective Loss 0.160587 LR 0.000250 Time 0.309332
-2023-09-11 14:20:23,666 - Epoch: [107][ 50/ 71] Overall Loss 0.160446 Objective Loss 0.160446 LR 0.000250 Time 0.300329
-2023-09-11 14:20:26,492 - Epoch: [107][ 60/ 71] Overall Loss 0.159784 Objective Loss 0.159784 LR 0.000250 Time 0.297371
-2023-09-11 14:20:28,836 - Epoch: [107][ 70/ 71] Overall Loss 0.159109 Objective Loss 0.159109 Top1 93.750000 LR 0.000250 Time 0.288377
-2023-09-11 14:20:28,878 - Epoch: [107][ 71/ 71] Overall Loss 0.159157 Objective Loss 0.159157 Top1 93.750000 LR 0.000250 Time 0.284903
-2023-09-11 14:20:28,976 - --- validate (epoch=107)-----------
-2023-09-11 14:20:28,976 - 2000 samples (256 per mini-batch)
-2023-09-11 14:20:31,494 - Epoch: [107][ 8/ 8] Loss 0.180029 Top1 92.600000
-2023-09-11 14:20:31,593 - ==> Top1: 92.600 Loss: 0.180
-
-2023-09-11 14:20:31,594 - ==> Confusion:
-[[888 97]
- [ 51 964]]
+2025-05-20 17:28:15,507 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 91]
+2025-05-20 17:28:15,508 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:28:15,517 - 
+
+2025-05-20 17:28:15,518 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:28:21,605 - Epoch: [96][ 10/ 71] Overall Loss 0.148368 Objective Loss 0.148368 LR 0.000250 Time 0.608643
+2025-05-20 17:28:25,130 - Epoch: [96][ 20/ 71] Overall Loss 0.150668 Objective Loss 0.150668 LR 0.000250 Time 0.480565
+2025-05-20 17:28:30,150 - Epoch: [96][ 30/ 71] Overall Loss 0.153365 Objective Loss 0.153365 LR 0.000250 Time 0.487683
+2025-05-20 17:28:34,681 - Epoch: [96][ 40/ 71] Overall Loss 0.155736 Objective Loss 0.155736 LR 0.000250 Time 0.479015
+2025-05-20 17:28:39,224 - Epoch: [96][ 50/ 71] Overall Loss 0.153537 Objective Loss 0.153537 LR 0.000250 Time 0.474070
+2025-05-20 17:28:43,141 - Epoch: [96][ 60/ 71] Overall Loss 0.157000 Objective Loss 0.157000 LR 0.000250 Time 0.460339
+2025-05-20 17:28:47,837 - Epoch: [96][ 70/ 71] Overall Loss 0.155991 Objective Loss 0.155991 Top1 92.968750 LR 0.000250 Time 0.461659
+2025-05-20 17:28:47,945 - Epoch: [96][ 71/ 71] Overall Loss 0.155475 Objective Loss 0.155475 Top1 93.154762 LR 0.000250 Time 0.456675
+2025-05-20 17:28:47,990 - --- validate (epoch=96)-----------
+2025-05-20 17:28:47,991 - 2000 samples (256 per mini-batch)
+2025-05-20 17:28:52,203 - Epoch: [96][ 8/ 8] Loss 0.201455 Top1 91.050000
+2025-05-20 17:28:52,235 - ==> Top1: 91.050 Loss: 0.201
+
+2025-05-20 17:28:52,235 - ==> Confusion:
+[[864 121]
+ [ 58 957]]
-2023-09-11 14:20:31,595 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:20:31,595 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:20:31,600 - 
-
-2023-09-11 14:20:31,600 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:20:36,742 - Epoch: [108][ 10/ 71] Overall Loss 0.156600 Objective Loss 0.156600 LR 0.000250 Time 0.514130
-2023-09-11 14:20:38,937 - Epoch: [108][ 20/ 71] Overall Loss 0.164639 Objective Loss 0.164639 LR 0.000250 Time 0.366836
-2023-09-11 14:20:41,768 - Epoch: [108][ 30/ 71] Overall Loss 0.161595 Objective Loss 0.161595 LR 0.000250 Time 0.338904
-2023-09-11 14:20:44,087 - Epoch: [108][ 40/ 71] Overall Loss 0.160896 Objective Loss 0.160896 LR 0.000250 Time 0.312134
-2023-09-11 14:20:46,945 - Epoch: [108][ 50/ 71] Overall Loss 0.159346 Objective Loss 0.159346 LR 0.000250 Time 0.306861
-2023-09-11 14:20:49,072 - Epoch: [108][ 60/ 71] Overall Loss 0.159753 Objective Loss 0.159753 LR 0.000250 Time 0.291168
-2023-09-11 14:20:51,415 - Epoch: [108][ 70/ 71] Overall Loss 0.159803 Objective Loss 0.159803 Top1 91.796875 LR 0.000250 Time 0.283037
-2023-09-11 14:20:51,520 - Epoch: [108][ 71/ 71] Overall Loss 0.159666 Objective Loss 0.159666 Top1 91.964286 LR 0.000250 Time 0.280526
-2023-09-11 14:20:51,625 - --- validate (epoch=108)-----------
-2023-09-11 14:20:51,625 - 2000 samples (256 per mini-batch)
-2023-09-11 14:20:54,684 - Epoch: [108][ 8/ 8] Loss 0.188573 Top1 92.000000
-2023-09-11 14:20:54,791 - ==> Top1: 92.000 Loss: 0.189
-
-2023-09-11 14:20:54,791 - ==> Confusion:
-[[895 90]
- [ 70 945]]
+2025-05-20 17:28:52,251 - ==> Best [Top1: 93.250 Params: 57776 on epoch: 91]
+2025-05-20 17:28:52,251 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:28:52,258 - 
+
+2025-05-20 17:28:52,258 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:28:58,679 - Epoch: [97][ 10/ 71] Overall Loss 0.163892 Objective Loss 0.163892 LR 0.000250 Time 0.641990
+2025-05-20 17:29:02,032 - Epoch: [97][ 20/ 71] Overall Loss 0.159090 Objective Loss 0.159090 LR 0.000250 Time 0.488625
+2025-05-20 17:29:07,359 - Epoch: [97][ 30/ 71] Overall Loss 0.154054 Objective Loss 0.154054 LR 0.000250 Time 0.503289
+2025-05-20 17:29:10,915 - Epoch: [97][ 40/ 71] Overall Loss 0.152173 Objective Loss 0.152173 LR 0.000250 Time 0.466362
+2025-05-20 17:29:16,142 - Epoch: [97][ 50/ 71] Overall Loss 0.155783 Objective Loss 0.155783 LR 0.000250 Time 0.477622
+2025-05-20 17:29:19,781 - Epoch: [97][ 60/ 71] Overall Loss 0.155411 Objective Loss 0.155411 LR 0.000250 Time 0.458670
+2025-05-20 17:29:24,655 - Epoch: [97][ 70/ 71] Overall Loss 0.157780 Objective Loss 0.157780 Top1 92.968750 LR 0.000250 Time 0.462761
+2025-05-20 17:29:24,763 - Epoch: [97][ 71/ 71] Overall Loss 0.156765 Objective Loss 0.156765 Top1 94.047619 LR 0.000250 Time 0.457763
+2025-05-20 17:29:24,804 - --- validate (epoch=97)-----------
+2025-05-20 17:29:24,804 - 2000 samples (256 per mini-batch)
+2025-05-20 17:29:28,971 - Epoch: [97][ 8/ 8] Loss 0.178135 Top1 93.350000
+2025-05-20 17:29:29,011 - ==> Top1: 93.350 Loss: 0.178
+
+2025-05-20 17:29:29,011 - ==> Confusion:
+[[943 42]
+ [ 91 924]]
-2023-09-11 14:20:54,806 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:20:54,806 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:20:54,808 - 
-
-2023-09-11 14:20:54,809 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:20:58,725 - Epoch: [109][ 10/ 71] Overall Loss 0.138860 Objective Loss 0.138860 LR 0.000250 Time 0.391557
-2023-09-11 14:21:02,147 - Epoch: [109][ 20/ 71] Overall Loss 0.150663 Objective Loss 0.150663 LR 0.000250 Time 0.366900
-2023-09-11 14:21:04,240 - Epoch: [109][ 30/ 71] Overall Loss 0.151050 Objective Loss 0.151050 LR 0.000250 Time 0.314343
-2023-09-11 14:21:07,602 - Epoch: [109][ 40/ 71] Overall Loss 0.155698 Objective Loss 0.155698 LR 0.000250 Time 0.319789
-2023-09-11 14:21:09,712 - Epoch: [109][ 50/ 71] Overall Loss 0.155425 Objective Loss 0.155425 LR 0.000250 Time 0.298039
-2023-09-11 14:21:12,773 - Epoch: [109][ 60/ 71] Overall Loss 0.157343 Objective Loss 0.157343 LR 0.000250 Time 0.299374
-2023-09-11 14:21:15,142 - Epoch: [109][ 70/ 71] Overall Loss 0.155515 Objective Loss 0.155515 Top1 94.531250 LR 0.000250 Time 0.290452
-2023-09-11 14:21:15,232 - Epoch: [109][ 71/ 71] Overall Loss 0.155382 Objective Loss 0.155382 Top1 94.940476 LR 0.000250 Time 0.287615
-2023-09-11 14:21:15,331 - --- validate (epoch=109)-----------
-2023-09-11 14:21:15,332 - 2000 samples (256 per mini-batch)
-2023-09-11 14:21:18,437 - Epoch: [109][ 8/ 8] Loss 0.182867 Top1 91.900000
-2023-09-11 14:21:18,556 - ==> Top1: 91.900 Loss: 0.183
-
-2023-09-11 14:21:18,556 - ==> Confusion:
-[[905 80]
+2025-05-20 17:29:29,027 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97]
+2025-05-20 17:29:29,027 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:29:29,036 - 
+
+2025-05-20 17:29:29,036 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:29:35,730 - Epoch: [98][ 10/ 71] Overall Loss 0.157439 Objective Loss 0.157439 LR 0.000250 Time 0.669287
+2025-05-20 17:29:39,140 - Epoch: [98][ 20/ 71] Overall Loss 0.147738 Objective Loss 0.147738 LR 0.000250 Time 0.505128
+2025-05-20 17:29:43,532 - Epoch: [98][ 30/ 71] Overall Loss 0.149281 Objective Loss 0.149281 LR 0.000250 Time 0.483134
+2025-05-20 17:29:47,691 - Epoch: [98][ 40/ 71] Overall Loss 0.152160 Objective Loss 0.152160 LR 0.000250 Time 0.466322
+2025-05-20 17:29:52,135 - Epoch: [98][ 50/ 71] Overall Loss 0.157240 Objective Loss 0.157240 LR 0.000250 Time 0.461934
+2025-05-20 17:29:57,014 - Epoch: [98][ 60/ 71] Overall Loss 0.154193 Objective Loss 0.154193 LR 0.000250 Time 0.466141
+2025-05-20 17:30:00,451 - Epoch: [98][ 70/ 71] Overall Loss 0.152970 Objective Loss 0.152970 Top1 93.750000 LR 0.000250 Time 0.448649
+2025-05-20 17:30:00,549 - Epoch: [98][ 71/ 71] Overall Loss 0.153736 Objective Loss 0.153736 Top1 93.154762 LR 0.000250 Time 0.443701
+2025-05-20 17:30:00,589 - --- validate (epoch=98)-----------
+2025-05-20 17:30:00,589 - 2000 samples (256 per mini-batch)
+2025-05-20 17:30:04,824 - Epoch: [98][ 8/ 8] Loss 0.207026 Top1 91.650000
+2025-05-20 17:30:04,865 - ==> Top1: 91.650 Loss: 0.207
+
+2025-05-20 17:30:04,866 - ==> Confusion:
+[[940 45]
+ [122 893]]
+
+2025-05-20 17:30:04,882 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97]
+2025-05-20 17:30:04,882 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:30:04,890 - 
+
+2025-05-20 17:30:04,890 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:30:10,980 - Epoch: [99][ 10/ 71] Overall Loss 0.149483 Objective Loss 0.149483 LR 0.000250 Time 0.608933
+2025-05-20 17:30:14,893 - Epoch: [99][ 20/ 71] Overall Loss 0.153796 Objective Loss 0.153796 LR 0.000250 Time 0.500132
+2025-05-20 17:30:19,489 - Epoch: [99][ 30/ 71] Overall Loss 0.154332 Objective Loss 0.154332 LR 0.000250 Time 0.486604
+2025-05-20 17:30:23,835 - Epoch: [99][ 40/ 71] Overall Loss 0.156601 Objective Loss 0.156601 LR 0.000250 Time 0.473590
+2025-05-20 17:30:28,765 - Epoch: [99][ 50/ 71] Overall Loss 0.155205 Objective Loss 0.155205 LR 0.000250 Time 0.477468
+2025-05-20 17:30:32,397 - Epoch: [99][ 60/ 71] Overall Loss 0.154509 Objective Loss 0.154509 LR 0.000250 Time 0.458405
+2025-05-20 17:30:36,970 - Epoch: [99][ 70/ 71] Overall Loss 0.155089 Objective Loss 0.155089 Top1 92.578125 LR 0.000250 Time 0.458246
+2025-05-20 17:30:37,078 - Epoch: [99][ 71/ 71] Overall Loss 0.155788 Objective Loss 0.155788 Top1 92.559524 LR 0.000250 Time 0.453321
+2025-05-20 17:30:37,116 - --- validate (epoch=99)-----------
+2025-05-20 17:30:37,116 - 2000 samples (256 per mini-batch)
+2025-05-20 17:30:41,459 - Epoch: [99][ 8/ 8] Loss 0.177794 Top1 92.650000
+2025-05-20 17:30:41,502 - ==> Top1: 92.650 Loss: 0.178
+
+2025-05-20 17:30:41,502 - ==> Confusion:
+[[898 87]
+ [ 60 955]]
+
+2025-05-20 17:30:41,518 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97]
+2025-05-20 17:30:41,518 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:30:41,525 - 
+
+2025-05-20 17:30:41,526 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:30:47,956 - Epoch: [100][ 10/ 71] Overall Loss 0.165021 Objective Loss 0.165021 LR 0.000250 Time 0.642969
+2025-05-20 17:30:51,654 - Epoch: [100][ 20/ 71] Overall Loss 0.156189 Objective Loss 0.156189 LR 0.000250 Time 0.506380
+2025-05-20 17:30:57,243 - Epoch: [100][ 30/ 71] Overall Loss 0.160364 Objective Loss 0.160364 LR 0.000250 Time 0.523862
+2025-05-20 17:31:01,593 - Epoch: [100][ 40/ 71] Overall Loss 0.158523 Objective Loss 0.158523 LR 0.000250 Time 0.501626
+2025-05-20 17:31:06,489 - Epoch: [100][ 50/ 71] Overall Loss 0.159701 Objective Loss 0.159701 LR 0.000250 Time 0.499225
+2025-05-20 17:31:10,330 - Epoch: [100][ 60/ 71] Overall Loss 0.159212 Objective Loss 0.159212 LR 0.000250 Time 0.480028 +2025-05-20 17:31:14,637 - Epoch: [100][ 70/ 71] Overall Loss 0.157859 Objective Loss 0.157859 Top1 89.453125 LR 0.000250 Time 0.472972 +2025-05-20 17:31:14,730 - Epoch: [100][ 71/ 71] Overall Loss 0.157156 Objective Loss 0.157156 Top1 91.071429 LR 0.000250 Time 0.467628 +2025-05-20 17:31:14,764 - --- validate (epoch=100)----------- +2025-05-20 17:31:14,764 - 2000 samples (256 per mini-batch) +2025-05-20 17:31:19,053 - Epoch: [100][ 8/ 8] Loss 0.184021 Top1 92.600000 +2025-05-20 17:31:19,094 - ==> Top1: 92.600 Loss: 0.184 + +2025-05-20 17:31:19,094 - ==> Confusion: +[[922 63] + [ 85 930]] + +2025-05-20 17:31:19,111 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:31:19,112 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:31:19,119 - + +2025-05-20 17:31:19,119 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:31:25,269 - Epoch: [101][ 10/ 71] Overall Loss 0.169913 Objective Loss 0.169913 LR 0.000250 Time 0.614908 +2025-05-20 17:31:28,800 - Epoch: [101][ 20/ 71] Overall Loss 0.157828 Objective Loss 0.157828 LR 0.000250 Time 0.484011 +2025-05-20 17:31:32,801 - Epoch: [101][ 30/ 71] Overall Loss 0.159211 Objective Loss 0.159211 LR 0.000250 Time 0.456033 +2025-05-20 17:31:38,321 - Epoch: [101][ 40/ 71] Overall Loss 0.156449 Objective Loss 0.156449 LR 0.000250 Time 0.480003 +2025-05-20 17:31:42,552 - Epoch: [101][ 50/ 71] Overall Loss 0.158864 Objective Loss 0.158864 LR 0.000250 Time 0.468614 +2025-05-20 17:31:47,804 - Epoch: [101][ 60/ 71] Overall Loss 0.156143 Objective Loss 0.156143 LR 0.000250 Time 0.478050 +2025-05-20 17:31:51,328 - Epoch: [101][ 70/ 71] Overall Loss 0.157250 Objective Loss 0.157250 Top1 94.921875 LR 0.000250 Time 0.460096 +2025-05-20 17:31:51,438 - Epoch: [101][ 71/ 71] Overall Loss 0.157448 Objective Loss 0.157448 Top1 
94.345238 LR 0.000250 Time 0.455162 +2025-05-20 17:31:51,480 - --- validate (epoch=101)----------- +2025-05-20 17:31:51,480 - 2000 samples (256 per mini-batch) +2025-05-20 17:31:55,533 - Epoch: [101][ 8/ 8] Loss 0.182528 Top1 92.500000 +2025-05-20 17:31:55,565 - ==> Top1: 92.500 Loss: 0.183 + +2025-05-20 17:31:55,565 - ==> Confusion: +[[917 68] [ 82 933]] -2023-09-11 14:21:18,572 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:21:18,572 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:21:18,577 - - -2023-09-11 14:21:18,577 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:21:23,414 - Epoch: [110][ 10/ 71] Overall Loss 0.146526 Objective Loss 0.146526 LR 0.000250 Time 0.483640 -2023-09-11 14:21:25,508 - Epoch: [110][ 20/ 71] Overall Loss 0.156178 Objective Loss 0.156178 LR 0.000250 Time 0.346506 -2023-09-11 14:21:28,581 - Epoch: [110][ 30/ 71] Overall Loss 0.153178 Objective Loss 0.153178 LR 0.000250 Time 0.333429 -2023-09-11 14:21:30,653 - Epoch: [110][ 40/ 71] Overall Loss 0.153481 Objective Loss 0.153481 LR 0.000250 Time 0.301863 -2023-09-11 14:21:33,555 - Epoch: [110][ 50/ 71] Overall Loss 0.153613 Objective Loss 0.153613 LR 0.000250 Time 0.299521 -2023-09-11 14:21:35,728 - Epoch: [110][ 60/ 71] Overall Loss 0.153514 Objective Loss 0.153514 LR 0.000250 Time 0.285816 -2023-09-11 14:21:38,587 - Epoch: [110][ 70/ 71] Overall Loss 0.155230 Objective Loss 0.155230 Top1 95.312500 LR 0.000250 Time 0.285823 -2023-09-11 14:21:38,664 - Epoch: [110][ 71/ 71] Overall Loss 0.154238 Objective Loss 0.154238 Top1 95.535714 LR 0.000250 Time 0.282880 -2023-09-11 14:21:38,763 - --- validate (epoch=110)----------- -2023-09-11 14:21:38,763 - 2000 samples (256 per mini-batch) -2023-09-11 14:21:41,919 - Epoch: [110][ 8/ 8] Loss 0.192630 Top1 92.250000 -2023-09-11 14:21:42,015 - ==> Top1: 92.250 Loss: 0.193 - -2023-09-11 14:21:42,015 - ==> Confusion: -[[910 75] - [ 80 935]] +2025-05-20 17:31:55,582 
- ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:31:55,582 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:31:55,590 - + +2025-05-20 17:31:55,590 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:32:01,334 - Epoch: [102][ 10/ 71] Overall Loss 0.167005 Objective Loss 0.167005 LR 0.000250 Time 0.574370 +2025-05-20 17:32:05,208 - Epoch: [102][ 20/ 71] Overall Loss 0.158221 Objective Loss 0.158221 LR 0.000250 Time 0.480866 +2025-05-20 17:32:10,036 - Epoch: [102][ 30/ 71] Overall Loss 0.151981 Objective Loss 0.151981 LR 0.000250 Time 0.481481 +2025-05-20 17:32:13,535 - Epoch: [102][ 40/ 71] Overall Loss 0.148612 Objective Loss 0.148612 LR 0.000250 Time 0.448584 +2025-05-20 17:32:18,417 - Epoch: [102][ 50/ 71] Overall Loss 0.151737 Objective Loss 0.151737 LR 0.000250 Time 0.456497 +2025-05-20 17:32:22,386 - Epoch: [102][ 60/ 71] Overall Loss 0.150272 Objective Loss 0.150272 LR 0.000250 Time 0.446560 +2025-05-20 17:32:26,931 - Epoch: [102][ 70/ 71] Overall Loss 0.150861 Objective Loss 0.150861 Top1 91.406250 LR 0.000250 Time 0.447695 +2025-05-20 17:32:27,039 - Epoch: [102][ 71/ 71] Overall Loss 0.151367 Objective Loss 0.151367 Top1 91.666667 LR 0.000250 Time 0.442897 +2025-05-20 17:32:27,084 - --- validate (epoch=102)----------- +2025-05-20 17:32:27,085 - 2000 samples (256 per mini-batch) +2025-05-20 17:32:31,506 - Epoch: [102][ 8/ 8] Loss 0.196211 Top1 92.050000 +2025-05-20 17:32:31,542 - ==> Top1: 92.050 Loss: 0.196 + +2025-05-20 17:32:31,542 - ==> Confusion: +[[912 73] + [ 86 929]] -2023-09-11 14:21:42,031 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:21:42,031 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:21:42,034 - - -2023-09-11 14:21:42,034 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:21:45,271 - Epoch: [111][ 10/ 71] Overall Loss 0.154457 Objective Loss 
0.154457 LR 0.000250 Time 0.323700 -2023-09-11 14:21:48,159 - Epoch: [111][ 20/ 71] Overall Loss 0.158500 Objective Loss 0.158500 LR 0.000250 Time 0.306227 -2023-09-11 14:21:50,968 - Epoch: [111][ 30/ 71] Overall Loss 0.158053 Objective Loss 0.158053 LR 0.000250 Time 0.297743 -2023-09-11 14:21:53,024 - Epoch: [111][ 40/ 71] Overall Loss 0.159179 Objective Loss 0.159179 LR 0.000250 Time 0.274719 -2023-09-11 14:21:55,809 - Epoch: [111][ 50/ 71] Overall Loss 0.160190 Objective Loss 0.160190 LR 0.000250 Time 0.275454 -2023-09-11 14:21:58,737 - Epoch: [111][ 60/ 71] Overall Loss 0.158338 Objective Loss 0.158338 LR 0.000250 Time 0.278340 -2023-09-11 14:22:01,271 - Epoch: [111][ 70/ 71] Overall Loss 0.159379 Objective Loss 0.159379 Top1 90.625000 LR 0.000250 Time 0.274786 -2023-09-11 14:22:01,395 - Epoch: [111][ 71/ 71] Overall Loss 0.160600 Objective Loss 0.160600 Top1 90.178571 LR 0.000250 Time 0.272660 -2023-09-11 14:22:01,491 - --- validate (epoch=111)----------- -2023-09-11 14:22:01,492 - 2000 samples (256 per mini-batch) -2023-09-11 14:22:04,273 - Epoch: [111][ 8/ 8] Loss 0.187871 Top1 92.450000 -2023-09-11 14:22:04,373 - ==> Top1: 92.450 Loss: 0.188 - -2023-09-11 14:22:04,373 - ==> Confusion: -[[924 61] - [ 90 925]] +2025-05-20 17:32:31,559 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:32:31,559 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:32:31,567 - + +2025-05-20 17:32:31,568 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:32:37,844 - Epoch: [103][ 10/ 71] Overall Loss 0.162493 Objective Loss 0.162493 LR 0.000250 Time 0.627540 +2025-05-20 17:32:41,870 - Epoch: [103][ 20/ 71] Overall Loss 0.154862 Objective Loss 0.154862 LR 0.000250 Time 0.515061 +2025-05-20 17:32:45,718 - Epoch: [103][ 30/ 71] Overall Loss 0.152220 Objective Loss 0.152220 LR 0.000250 Time 0.471640 +2025-05-20 17:32:50,760 - Epoch: [103][ 40/ 71] Overall Loss 0.147383 Objective Loss 
0.147383 LR 0.000250 Time 0.479760 +2025-05-20 17:32:54,727 - Epoch: [103][ 50/ 71] Overall Loss 0.150551 Objective Loss 0.150551 LR 0.000250 Time 0.463133 +2025-05-20 17:32:59,294 - Epoch: [103][ 60/ 71] Overall Loss 0.150882 Objective Loss 0.150882 LR 0.000250 Time 0.462067 +2025-05-20 17:33:02,983 - Epoch: [103][ 70/ 71] Overall Loss 0.151728 Objective Loss 0.151728 Top1 94.140625 LR 0.000250 Time 0.448752 +2025-05-20 17:33:03,081 - Epoch: [103][ 71/ 71] Overall Loss 0.150971 Objective Loss 0.150971 Top1 94.940476 LR 0.000250 Time 0.443807 +2025-05-20 17:33:03,119 - --- validate (epoch=103)----------- +2025-05-20 17:33:03,119 - 2000 samples (256 per mini-batch) +2025-05-20 17:33:07,259 - Epoch: [103][ 8/ 8] Loss 0.189891 Top1 92.850000 +2025-05-20 17:33:07,302 - ==> Top1: 92.850 Loss: 0.190 + +2025-05-20 17:33:07,302 - ==> Confusion: +[[929 56] + [ 87 928]] -2023-09-11 14:22:04,386 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:22:04,387 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:22:04,391 - - -2023-09-11 14:22:04,391 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:22:08,195 - Epoch: [112][ 10/ 71] Overall Loss 0.162705 Objective Loss 0.162705 LR 0.000250 Time 0.380333 -2023-09-11 14:22:11,752 - Epoch: [112][ 20/ 71] Overall Loss 0.159159 Objective Loss 0.159159 LR 0.000250 Time 0.368011 -2023-09-11 14:22:13,854 - Epoch: [112][ 30/ 71] Overall Loss 0.161378 Objective Loss 0.161378 LR 0.000250 Time 0.315389 -2023-09-11 14:22:16,358 - Epoch: [112][ 40/ 71] Overall Loss 0.161000 Objective Loss 0.161000 LR 0.000250 Time 0.299131 -2023-09-11 14:22:19,110 - Epoch: [112][ 50/ 71] Overall Loss 0.161352 Objective Loss 0.161352 LR 0.000250 Time 0.294335 -2023-09-11 14:22:22,233 - Epoch: [112][ 60/ 71] Overall Loss 0.162280 Objective Loss 0.162280 LR 0.000250 Time 0.297330 -2023-09-11 14:22:24,888 - Epoch: [112][ 70/ 71] Overall Loss 0.163199 Objective Loss 0.163199 Top1 
96.093750 LR 0.000250 Time 0.292774 -2023-09-11 14:22:25,011 - Epoch: [112][ 71/ 71] Overall Loss 0.163719 Objective Loss 0.163719 Top1 94.940476 LR 0.000250 Time 0.290382 -2023-09-11 14:22:25,113 - --- validate (epoch=112)----------- -2023-09-11 14:22:25,114 - 2000 samples (256 per mini-batch) -2023-09-11 14:22:28,184 - Epoch: [112][ 8/ 8] Loss 0.183618 Top1 92.550000 -2023-09-11 14:22:28,285 - ==> Top1: 92.550 Loss: 0.184 - -2023-09-11 14:22:28,286 - ==> Confusion: -[[909 76] - [ 73 942]] +2025-05-20 17:33:07,320 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:33:07,320 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:33:07,329 - + +2025-05-20 17:33:07,329 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:33:13,408 - Epoch: [104][ 10/ 71] Overall Loss 0.149889 Objective Loss 0.149889 LR 0.000250 Time 0.607811 +2025-05-20 17:33:17,173 - Epoch: [104][ 20/ 71] Overall Loss 0.146326 Objective Loss 0.146326 LR 0.000250 Time 0.492147 +2025-05-20 17:33:22,305 - Epoch: [104][ 30/ 71] Overall Loss 0.144175 Objective Loss 0.144175 LR 0.000250 Time 0.499144 +2025-05-20 17:33:26,644 - Epoch: [104][ 40/ 71] Overall Loss 0.147957 Objective Loss 0.147957 LR 0.000250 Time 0.482832 +2025-05-20 17:33:30,704 - Epoch: [104][ 50/ 71] Overall Loss 0.146925 Objective Loss 0.146925 LR 0.000250 Time 0.467455 +2025-05-20 17:33:34,199 - Epoch: [104][ 60/ 71] Overall Loss 0.145124 Objective Loss 0.145124 LR 0.000250 Time 0.447789 +2025-05-20 17:33:39,204 - Epoch: [104][ 70/ 71] Overall Loss 0.146753 Objective Loss 0.146753 Top1 93.750000 LR 0.000250 Time 0.455318 +2025-05-20 17:33:39,305 - Epoch: [104][ 71/ 71] Overall Loss 0.147038 Objective Loss 0.147038 Top1 92.261905 LR 0.000250 Time 0.450329 +2025-05-20 17:33:39,342 - --- validate (epoch=104)----------- +2025-05-20 17:33:39,343 - 2000 samples (256 per mini-batch) +2025-05-20 17:33:43,565 - Epoch: [104][ 8/ 8] Loss 0.175187 Top1 
92.400000 +2025-05-20 17:33:43,604 - ==> Top1: 92.400 Loss: 0.175 + +2025-05-20 17:33:43,604 - ==> Confusion: +[[891 94] + [ 58 957]] -2023-09-11 14:22:28,302 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:22:28,302 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:22:28,304 - - -2023-09-11 14:22:28,304 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:22:32,187 - Epoch: [113][ 10/ 71] Overall Loss 0.157719 Objective Loss 0.157719 LR 0.000250 Time 0.388172 -2023-09-11 14:22:34,580 - Epoch: [113][ 20/ 71] Overall Loss 0.152581 Objective Loss 0.152581 LR 0.000250 Time 0.313753 -2023-09-11 14:22:37,333 - Epoch: [113][ 30/ 71] Overall Loss 0.147929 Objective Loss 0.147929 LR 0.000250 Time 0.300911 -2023-09-11 14:22:40,643 - Epoch: [113][ 40/ 71] Overall Loss 0.151903 Objective Loss 0.151903 LR 0.000250 Time 0.308424 -2023-09-11 14:22:42,774 - Epoch: [113][ 50/ 71] Overall Loss 0.150648 Objective Loss 0.150648 LR 0.000250 Time 0.289357 -2023-09-11 14:22:45,554 - Epoch: [113][ 60/ 71] Overall Loss 0.155184 Objective Loss 0.155184 LR 0.000250 Time 0.287464 -2023-09-11 14:22:47,769 - Epoch: [113][ 70/ 71] Overall Loss 0.155835 Objective Loss 0.155835 Top1 92.968750 LR 0.000250 Time 0.278038 -2023-09-11 14:22:47,877 - Epoch: [113][ 71/ 71] Overall Loss 0.155112 Objective Loss 0.155112 Top1 93.750000 LR 0.000250 Time 0.275637 -2023-09-11 14:22:47,976 - --- validate (epoch=113)----------- -2023-09-11 14:22:47,977 - 2000 samples (256 per mini-batch) -2023-09-11 14:22:50,825 - Epoch: [113][ 8/ 8] Loss 0.198310 Top1 91.500000 -2023-09-11 14:22:50,925 - ==> Top1: 91.500 Loss: 0.198 - -2023-09-11 14:22:50,925 - ==> Confusion: -[[925 60] - [110 905]] - -2023-09-11 14:22:50,941 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:22:50,941 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:22:50,945 - - -2023-09-11 14:22:50,945 - 
Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:22:54,450 - Epoch: [114][ 10/ 71] Overall Loss 0.142089 Objective Loss 0.142089 LR 0.000250 Time 0.350382 -2023-09-11 14:22:56,900 - Epoch: [114][ 20/ 71] Overall Loss 0.148716 Objective Loss 0.148716 LR 0.000250 Time 0.297707 -2023-09-11 14:22:59,151 - Epoch: [114][ 30/ 71] Overall Loss 0.154499 Objective Loss 0.154499 LR 0.000250 Time 0.273480 -2023-09-11 14:23:02,225 - Epoch: [114][ 40/ 71] Overall Loss 0.153955 Objective Loss 0.153955 LR 0.000250 Time 0.281947 -2023-09-11 14:23:05,573 - Epoch: [114][ 50/ 71] Overall Loss 0.154632 Objective Loss 0.154632 LR 0.000250 Time 0.292515 -2023-09-11 14:23:07,906 - Epoch: [114][ 60/ 71] Overall Loss 0.151727 Objective Loss 0.151727 LR 0.000250 Time 0.282650 -2023-09-11 14:23:10,556 - Epoch: [114][ 70/ 71] Overall Loss 0.153685 Objective Loss 0.153685 Top1 88.281250 LR 0.000250 Time 0.280124 -2023-09-11 14:23:10,676 - Epoch: [114][ 71/ 71] Overall Loss 0.153362 Objective Loss 0.153362 Top1 89.583333 LR 0.000250 Time 0.277864 -2023-09-11 14:23:10,768 - --- validate (epoch=114)----------- -2023-09-11 14:23:10,768 - 2000 samples (256 per mini-batch) -2023-09-11 14:23:13,240 - Epoch: [114][ 8/ 8] Loss 0.217304 Top1 90.900000 -2023-09-11 14:23:13,339 - ==> Top1: 90.900 Loss: 0.217 - -2023-09-11 14:23:13,340 - ==> Confusion: -[[943 42] - [140 875]] - -2023-09-11 14:23:13,345 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:23:13,345 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:23:13,348 - - -2023-09-11 14:23:13,348 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:23:17,207 - Epoch: [115][ 10/ 71] Overall Loss 0.153090 Objective Loss 0.153090 LR 0.000250 Time 0.385915 -2023-09-11 14:23:20,282 - Epoch: [115][ 20/ 71] Overall Loss 0.165374 Objective Loss 0.165374 LR 0.000250 Time 0.346697 -2023-09-11 14:23:22,272 - Epoch: [115][ 30/ 71] Overall Loss 0.164887 Objective Loss 
0.164887 LR 0.000250 Time 0.297433 -2023-09-11 14:23:24,986 - Epoch: [115][ 40/ 71] Overall Loss 0.160846 Objective Loss 0.160846 LR 0.000250 Time 0.290928 -2023-09-11 14:23:27,839 - Epoch: [115][ 50/ 71] Overall Loss 0.160231 Objective Loss 0.160231 LR 0.000250 Time 0.289780 -2023-09-11 14:23:30,667 - Epoch: [115][ 60/ 71] Overall Loss 0.161147 Objective Loss 0.161147 LR 0.000250 Time 0.288623 -2023-09-11 14:23:32,519 - Epoch: [115][ 70/ 71] Overall Loss 0.161751 Objective Loss 0.161751 Top1 94.921875 LR 0.000250 Time 0.273843 -2023-09-11 14:23:32,597 - Epoch: [115][ 71/ 71] Overall Loss 0.161790 Objective Loss 0.161790 Top1 94.345238 LR 0.000250 Time 0.271074 -2023-09-11 14:23:32,697 - --- validate (epoch=115)----------- -2023-09-11 14:23:32,697 - 2000 samples (256 per mini-batch) -2023-09-11 14:23:35,769 - Epoch: [115][ 8/ 8] Loss 0.182288 Top1 91.800000 -2023-09-11 14:23:35,868 - ==> Top1: 91.800 Loss: 0.182 - -2023-09-11 14:23:35,869 - ==> Confusion: -[[914 71] - [ 93 922]] +2025-05-20 17:33:43,621 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:33:43,622 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:33:43,634 - + +2025-05-20 17:33:43,635 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:33:50,073 - Epoch: [105][ 10/ 71] Overall Loss 0.164534 Objective Loss 0.164534 LR 0.000250 Time 0.643804 +2025-05-20 17:33:53,856 - Epoch: [105][ 20/ 71] Overall Loss 0.155750 Objective Loss 0.155750 LR 0.000250 Time 0.511029 +2025-05-20 17:33:59,146 - Epoch: [105][ 30/ 71] Overall Loss 0.151451 Objective Loss 0.151451 LR 0.000250 Time 0.516986 +2025-05-20 17:34:03,390 - Epoch: [105][ 40/ 71] Overall Loss 0.150003 Objective Loss 0.150003 LR 0.000250 Time 0.493843 +2025-05-20 17:34:08,764 - Epoch: [105][ 50/ 71] Overall Loss 0.149006 Objective Loss 0.149006 LR 0.000250 Time 0.502544 +2025-05-20 17:34:11,830 - Epoch: [105][ 60/ 71] Overall Loss 0.149388 Objective Loss 
0.149388 LR 0.000250 Time 0.469881 +2025-05-20 17:34:15,819 - Epoch: [105][ 70/ 71] Overall Loss 0.149842 Objective Loss 0.149842 Top1 92.968750 LR 0.000250 Time 0.459731 +2025-05-20 17:34:15,917 - Epoch: [105][ 71/ 71] Overall Loss 0.149216 Objective Loss 0.149216 Top1 94.047619 LR 0.000250 Time 0.454636 +2025-05-20 17:34:15,959 - --- validate (epoch=105)----------- +2025-05-20 17:34:15,959 - 2000 samples (256 per mini-batch) +2025-05-20 17:34:20,143 - Epoch: [105][ 8/ 8] Loss 0.179802 Top1 92.650000 +2025-05-20 17:34:20,180 - ==> Top1: 92.650 Loss: 0.180 + +2025-05-20 17:34:20,180 - ==> Confusion: +[[904 81] + [ 66 949]] -2023-09-11 14:23:35,883 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:23:35,883 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:23:35,886 - - -2023-09-11 14:23:35,886 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:23:40,531 - Epoch: [116][ 10/ 71] Overall Loss 0.138990 Objective Loss 0.138990 LR 0.000250 Time 0.464511 -2023-09-11 14:23:42,669 - Epoch: [116][ 20/ 71] Overall Loss 0.158642 Objective Loss 0.158642 LR 0.000250 Time 0.339139 -2023-09-11 14:23:45,689 - Epoch: [116][ 30/ 71] Overall Loss 0.154690 Objective Loss 0.154690 LR 0.000250 Time 0.326750 -2023-09-11 14:23:48,768 - Epoch: [116][ 40/ 71] Overall Loss 0.154020 Objective Loss 0.154020 LR 0.000250 Time 0.322020 -2023-09-11 14:23:50,948 - Epoch: [116][ 50/ 71] Overall Loss 0.155154 Objective Loss 0.155154 LR 0.000250 Time 0.301215 -2023-09-11 14:23:53,545 - Epoch: [116][ 60/ 71] Overall Loss 0.156255 Objective Loss 0.156255 LR 0.000250 Time 0.294286 -2023-09-11 14:23:55,403 - Epoch: [116][ 70/ 71] Overall Loss 0.156868 Objective Loss 0.156868 Top1 93.750000 LR 0.000250 Time 0.278786 -2023-09-11 14:23:55,461 - Epoch: [116][ 71/ 71] Overall Loss 0.158173 Objective Loss 0.158173 Top1 92.857143 LR 0.000250 Time 0.275666 -2023-09-11 14:23:55,551 - --- validate (epoch=116)----------- -2023-09-11 
14:23:55,551 - 2000 samples (256 per mini-batch) -2023-09-11 14:23:58,648 - Epoch: [116][ 8/ 8] Loss 0.205931 Top1 91.200000 -2023-09-11 14:23:58,747 - ==> Top1: 91.200 Loss: 0.206 - -2023-09-11 14:23:58,747 - ==> Confusion: -[[872 113] +2025-05-20 17:34:20,185 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:34:20,186 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:34:20,193 - + +2025-05-20 17:34:20,193 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:34:26,379 - Epoch: [106][ 10/ 71] Overall Loss 0.151536 Objective Loss 0.151536 LR 0.000250 Time 0.618544 +2025-05-20 17:34:29,950 - Epoch: [106][ 20/ 71] Overall Loss 0.150531 Objective Loss 0.150531 LR 0.000250 Time 0.487776 +2025-05-20 17:34:35,053 - Epoch: [106][ 30/ 71] Overall Loss 0.157028 Objective Loss 0.157028 LR 0.000250 Time 0.495293 +2025-05-20 17:34:39,148 - Epoch: [106][ 40/ 71] Overall Loss 0.155452 Objective Loss 0.155452 LR 0.000250 Time 0.473820 +2025-05-20 17:34:43,930 - Epoch: [106][ 50/ 71] Overall Loss 0.153760 Objective Loss 0.153760 LR 0.000250 Time 0.474693 +2025-05-20 17:34:47,440 - Epoch: [106][ 60/ 71] Overall Loss 0.153097 Objective Loss 0.153097 LR 0.000250 Time 0.454077 +2025-05-20 17:34:52,127 - Epoch: [106][ 70/ 71] Overall Loss 0.153301 Objective Loss 0.153301 Top1 91.015625 LR 0.000250 Time 0.456160 +2025-05-20 17:34:52,236 - Epoch: [106][ 71/ 71] Overall Loss 0.152658 Objective Loss 0.152658 Top1 91.964286 LR 0.000250 Time 0.451272 +2025-05-20 17:34:52,280 - --- validate (epoch=106)----------- +2025-05-20 17:34:52,280 - 2000 samples (256 per mini-batch) +2025-05-20 17:34:56,391 - Epoch: [106][ 8/ 8] Loss 0.193721 Top1 91.900000 +2025-05-20 17:34:56,430 - ==> Top1: 91.900 Loss: 0.194 + +2025-05-20 17:34:56,431 - ==> Confusion: +[[882 103] + [ 59 956]] + +2025-05-20 17:34:56,446 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:34:56,447 - Saving checkpoint 
to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:34:56,454 - + +2025-05-20 17:34:56,454 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:35:02,797 - Epoch: [107][ 10/ 71] Overall Loss 0.148370 Objective Loss 0.148370 LR 0.000250 Time 0.634228 +2025-05-20 17:35:06,261 - Epoch: [107][ 20/ 71] Overall Loss 0.143088 Objective Loss 0.143088 LR 0.000250 Time 0.490289 +2025-05-20 17:35:11,432 - Epoch: [107][ 30/ 71] Overall Loss 0.144455 Objective Loss 0.144455 LR 0.000250 Time 0.499213 +2025-05-20 17:35:14,906 - Epoch: [107][ 40/ 71] Overall Loss 0.149773 Objective Loss 0.149773 LR 0.000250 Time 0.461254 +2025-05-20 17:35:19,280 - Epoch: [107][ 50/ 71] Overall Loss 0.151895 Objective Loss 0.151895 LR 0.000250 Time 0.456472 +2025-05-20 17:35:22,876 - Epoch: [107][ 60/ 71] Overall Loss 0.150726 Objective Loss 0.150726 LR 0.000250 Time 0.440322 +2025-05-20 17:35:27,714 - Epoch: [107][ 70/ 71] Overall Loss 0.148927 Objective Loss 0.148927 Top1 93.750000 LR 0.000250 Time 0.446526 +2025-05-20 17:35:27,820 - Epoch: [107][ 71/ 71] Overall Loss 0.148960 Objective Loss 0.148960 Top1 93.750000 LR 0.000250 Time 0.441723 +2025-05-20 17:35:27,859 - --- validate (epoch=107)----------- +2025-05-20 17:35:27,860 - 2000 samples (256 per mini-batch) +2025-05-20 17:35:31,934 - Epoch: [107][ 8/ 8] Loss 0.198329 Top1 92.200000 +2025-05-20 17:35:31,976 - ==> Top1: 92.200 Loss: 0.198 + +2025-05-20 17:35:31,976 - ==> Confusion: +[[892 93] [ 63 952]] -2023-09-11 14:23:58,763 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:23:58,764 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:23:58,766 - - -2023-09-11 14:23:58,766 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:24:03,279 - Epoch: [117][ 10/ 71] Overall Loss 0.138909 Objective Loss 0.138909 LR 0.000250 Time 0.451268 -2023-09-11 14:24:05,330 - Epoch: [117][ 20/ 71] Overall Loss 0.148037 
Objective Loss 0.148037 LR 0.000250 Time 0.328171 -2023-09-11 14:24:07,976 - Epoch: [117][ 30/ 71] Overall Loss 0.146152 Objective Loss 0.146152 LR 0.000250 Time 0.306953 -2023-09-11 14:24:11,319 - Epoch: [117][ 40/ 71] Overall Loss 0.148480 Objective Loss 0.148480 LR 0.000250 Time 0.313789 -2023-09-11 14:24:14,015 - Epoch: [117][ 50/ 71] Overall Loss 0.149088 Objective Loss 0.149088 LR 0.000250 Time 0.304941 -2023-09-11 14:24:16,346 - Epoch: [117][ 60/ 71] Overall Loss 0.149498 Objective Loss 0.149498 LR 0.000250 Time 0.292967 -2023-09-11 14:24:18,668 - Epoch: [117][ 70/ 71] Overall Loss 0.149300 Objective Loss 0.149300 Top1 96.093750 LR 0.000250 Time 0.284278 -2023-09-11 14:24:18,760 - Epoch: [117][ 71/ 71] Overall Loss 0.149729 Objective Loss 0.149729 Top1 95.535714 LR 0.000250 Time 0.281562 -2023-09-11 14:24:18,847 - --- validate (epoch=117)----------- -2023-09-11 14:24:18,847 - 2000 samples (256 per mini-batch) -2023-09-11 14:24:21,987 - Epoch: [117][ 8/ 8] Loss 0.182746 Top1 92.500000 -2023-09-11 14:24:22,092 - ==> Top1: 92.500 Loss: 0.183 - -2023-09-11 14:24:22,092 - ==> Confusion: -[[928 57] - [ 93 922]] +2025-05-20 17:35:31,993 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:35:31,993 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:35:32,002 - + +2025-05-20 17:35:32,002 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:35:38,064 - Epoch: [108][ 10/ 71] Overall Loss 0.135008 Objective Loss 0.135008 LR 0.000250 Time 0.606147 +2025-05-20 17:35:41,372 - Epoch: [108][ 20/ 71] Overall Loss 0.143136 Objective Loss 0.143136 LR 0.000250 Time 0.468413 +2025-05-20 17:35:46,552 - Epoch: [108][ 30/ 71] Overall Loss 0.144396 Objective Loss 0.144396 LR 0.000250 Time 0.484936 +2025-05-20 17:35:51,011 - Epoch: [108][ 40/ 71] Overall Loss 0.144684 Objective Loss 0.144684 LR 0.000250 Time 0.475164 +2025-05-20 17:35:56,255 - Epoch: [108][ 50/ 71] Overall Loss 0.146784 
Objective Loss 0.146784 LR 0.000250 Time 0.485019 +2025-05-20 17:35:59,621 - Epoch: [108][ 60/ 71] Overall Loss 0.147669 Objective Loss 0.147669 LR 0.000250 Time 0.460271 +2025-05-20 17:36:04,580 - Epoch: [108][ 70/ 71] Overall Loss 0.148136 Objective Loss 0.148136 Top1 94.531250 LR 0.000250 Time 0.465356 +2025-05-20 17:36:04,688 - Epoch: [108][ 71/ 71] Overall Loss 0.148519 Objective Loss 0.148519 Top1 94.345238 LR 0.000250 Time 0.460317 +2025-05-20 17:36:04,725 - --- validate (epoch=108)----------- +2025-05-20 17:36:04,725 - 2000 samples (256 per mini-batch) +2025-05-20 17:36:08,931 - Epoch: [108][ 8/ 8] Loss 0.199211 Top1 91.900000 +2025-05-20 17:36:08,966 - ==> Top1: 91.900 Loss: 0.199 + +2025-05-20 17:36:08,967 - ==> Confusion: +[[935 50] + [112 903]] + +2025-05-20 17:36:08,979 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:36:08,979 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:36:08,987 - + +2025-05-20 17:36:08,987 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:36:15,112 - Epoch: [109][ 10/ 71] Overall Loss 0.136617 Objective Loss 0.136617 LR 0.000250 Time 0.612402 +2025-05-20 17:36:18,533 - Epoch: [109][ 20/ 71] Overall Loss 0.161030 Objective Loss 0.161030 LR 0.000250 Time 0.477264 +2025-05-20 17:36:23,050 - Epoch: [109][ 30/ 71] Overall Loss 0.164583 Objective Loss 0.164583 LR 0.000250 Time 0.468711 +2025-05-20 17:36:27,656 - Epoch: [109][ 40/ 71] Overall Loss 0.158711 Objective Loss 0.158711 LR 0.000250 Time 0.466678 +2025-05-20 17:36:31,792 - Epoch: [109][ 50/ 71] Overall Loss 0.153408 Objective Loss 0.153408 LR 0.000250 Time 0.456053 +2025-05-20 17:36:35,990 - Epoch: [109][ 60/ 71] Overall Loss 0.152872 Objective Loss 0.152872 LR 0.000250 Time 0.450001 +2025-05-20 17:36:39,853 - Epoch: [109][ 70/ 71] Overall Loss 0.151476 Objective Loss 0.151476 Top1 95.703125 LR 0.000250 Time 0.440898 +2025-05-20 17:36:39,961 - Epoch: [109][ 71/ 71] 
Overall Loss 0.150863 Objective Loss 0.150863 Top1 95.238095 LR 0.000250 Time 0.436211 +2025-05-20 17:36:40,002 - --- validate (epoch=109)----------- +2025-05-20 17:36:40,002 - 2000 samples (256 per mini-batch) +2025-05-20 17:36:44,236 - Epoch: [109][ 8/ 8] Loss 0.181350 Top1 92.600000 +2025-05-20 17:36:44,276 - ==> Top1: 92.600 Loss: 0.181 + +2025-05-20 17:36:44,276 - ==> Confusion: +[[917 68] + [ 80 935]] + +2025-05-20 17:36:44,291 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:36:44,291 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:36:44,298 - + +2025-05-20 17:36:44,298 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:36:51,286 - Epoch: [110][ 10/ 71] Overall Loss 0.159075 Objective Loss 0.159075 LR 0.000250 Time 0.698685 +2025-05-20 17:36:54,276 - Epoch: [110][ 20/ 71] Overall Loss 0.154416 Objective Loss 0.154416 LR 0.000250 Time 0.498831 +2025-05-20 17:36:59,334 - Epoch: [110][ 30/ 71] Overall Loss 0.154911 Objective Loss 0.154911 LR 0.000250 Time 0.501128 +2025-05-20 17:37:03,396 - Epoch: [110][ 40/ 71] Overall Loss 0.151287 Objective Loss 0.151287 LR 0.000250 Time 0.477405 +2025-05-20 17:37:08,505 - Epoch: [110][ 50/ 71] Overall Loss 0.150658 Objective Loss 0.150658 LR 0.000250 Time 0.484092 +2025-05-20 17:37:12,293 - Epoch: [110][ 60/ 71] Overall Loss 0.148885 Objective Loss 0.148885 LR 0.000250 Time 0.466539 +2025-05-20 17:37:16,873 - Epoch: [110][ 70/ 71] Overall Loss 0.149147 Objective Loss 0.149147 Top1 94.531250 LR 0.000250 Time 0.465314 +2025-05-20 17:37:16,981 - Epoch: [110][ 71/ 71] Overall Loss 0.149535 Objective Loss 0.149535 Top1 94.345238 LR 0.000250 Time 0.460280 +2025-05-20 17:37:17,023 - --- validate (epoch=110)----------- +2025-05-20 17:37:17,024 - 2000 samples (256 per mini-batch) +2025-05-20 17:37:21,402 - Epoch: [110][ 8/ 8] Loss 0.184789 Top1 92.300000 +2025-05-20 17:37:21,446 - ==> Top1: 92.300 Loss: 0.185 + +2025-05-20 
17:37:21,446 - ==> Confusion: +[[884 101] + [ 53 962]] + +2025-05-20 17:37:21,463 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:37:21,463 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:37:21,470 - + +2025-05-20 17:37:21,470 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:37:27,930 - Epoch: [111][ 10/ 71] Overall Loss 0.139471 Objective Loss 0.139471 LR 0.000250 Time 0.645984 +2025-05-20 17:37:31,508 - Epoch: [111][ 20/ 71] Overall Loss 0.150218 Objective Loss 0.150218 LR 0.000250 Time 0.501847 +2025-05-20 17:37:36,222 - Epoch: [111][ 30/ 71] Overall Loss 0.147771 Objective Loss 0.147771 LR 0.000250 Time 0.491697 +2025-05-20 17:37:40,387 - Epoch: [111][ 40/ 71] Overall Loss 0.146535 Objective Loss 0.146535 LR 0.000250 Time 0.472887 +2025-05-20 17:37:45,644 - Epoch: [111][ 50/ 71] Overall Loss 0.148177 Objective Loss 0.148177 LR 0.000250 Time 0.483440 +2025-05-20 17:37:49,590 - Epoch: [111][ 60/ 71] Overall Loss 0.149029 Objective Loss 0.149029 LR 0.000250 Time 0.468619 +2025-05-20 17:37:53,674 - Epoch: [111][ 70/ 71] Overall Loss 0.147974 Objective Loss 0.147974 Top1 92.578125 LR 0.000250 Time 0.460013 +2025-05-20 17:37:53,772 - Epoch: [111][ 71/ 71] Overall Loss 0.148194 Objective Loss 0.148194 Top1 93.154762 LR 0.000250 Time 0.454913 +2025-05-20 17:37:53,812 - --- validate (epoch=111)----------- +2025-05-20 17:37:53,812 - 2000 samples (256 per mini-batch) +2025-05-20 17:37:58,110 - Epoch: [111][ 8/ 8] Loss 0.240542 Top1 91.000000 +2025-05-20 17:37:58,150 - ==> Top1: 91.000 Loss: 0.241 + +2025-05-20 17:37:58,150 - ==> Confusion: +[[951 34] + [146 869]] + +2025-05-20 17:37:58,167 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:37:58,167 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:37:58,174 - + +2025-05-20 17:37:58,174 - Training epoch: 18000 samples (256 per mini-batch, 
world size: 1) +2025-05-20 17:38:04,706 - Epoch: [112][ 10/ 71] Overall Loss 0.159922 Objective Loss 0.159922 LR 0.000250 Time 0.653157 +2025-05-20 17:38:08,458 - Epoch: [112][ 20/ 71] Overall Loss 0.154808 Objective Loss 0.154808 LR 0.000250 Time 0.514129 +2025-05-20 17:38:13,596 - Epoch: [112][ 30/ 71] Overall Loss 0.153449 Objective Loss 0.153449 LR 0.000250 Time 0.514002 +2025-05-20 17:38:17,427 - Epoch: [112][ 40/ 71] Overall Loss 0.146985 Objective Loss 0.146985 LR 0.000250 Time 0.481283 +2025-05-20 17:38:22,668 - Epoch: [112][ 50/ 71] Overall Loss 0.143145 Objective Loss 0.143145 LR 0.000250 Time 0.489823 +2025-05-20 17:38:26,649 - Epoch: [112][ 60/ 71] Overall Loss 0.145426 Objective Loss 0.145426 LR 0.000250 Time 0.474547 +2025-05-20 17:38:31,203 - Epoch: [112][ 70/ 71] Overall Loss 0.148987 Objective Loss 0.148987 Top1 92.578125 LR 0.000250 Time 0.471803 +2025-05-20 17:38:31,298 - Epoch: [112][ 71/ 71] Overall Loss 0.150203 Objective Loss 0.150203 Top1 90.476190 LR 0.000250 Time 0.466494 +2025-05-20 17:38:31,346 - --- validate (epoch=112)----------- +2025-05-20 17:38:31,346 - 2000 samples (256 per mini-batch) +2025-05-20 17:38:35,711 - Epoch: [112][ 8/ 8] Loss 0.194047 Top1 92.300000 +2025-05-20 17:38:35,752 - ==> Top1: 92.300 Loss: 0.194 + +2025-05-20 17:38:35,753 - ==> Confusion: +[[886 99] + [ 55 960]] + +2025-05-20 17:38:35,770 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:38:35,770 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:38:35,777 - + +2025-05-20 17:38:35,777 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:38:41,724 - Epoch: [113][ 10/ 71] Overall Loss 0.146663 Objective Loss 0.146663 LR 0.000250 Time 0.594661 +2025-05-20 17:38:45,574 - Epoch: [113][ 20/ 71] Overall Loss 0.146442 Objective Loss 0.146442 LR 0.000250 Time 0.489779 +2025-05-20 17:38:50,542 - Epoch: [113][ 30/ 71] Overall Loss 0.148351 Objective Loss 0.148351 LR 
0.000250 Time 0.492089 +2025-05-20 17:38:54,461 - Epoch: [113][ 40/ 71] Overall Loss 0.148242 Objective Loss 0.148242 LR 0.000250 Time 0.467037 +2025-05-20 17:38:59,124 - Epoch: [113][ 50/ 71] Overall Loss 0.146797 Objective Loss 0.146797 LR 0.000250 Time 0.466891 +2025-05-20 17:39:03,015 - Epoch: [113][ 60/ 71] Overall Loss 0.147751 Objective Loss 0.147751 LR 0.000250 Time 0.453906 +2025-05-20 17:39:08,103 - Epoch: [113][ 70/ 71] Overall Loss 0.147201 Objective Loss 0.147201 Top1 91.015625 LR 0.000250 Time 0.461745 +2025-05-20 17:39:08,210 - Epoch: [113][ 71/ 71] Overall Loss 0.147339 Objective Loss 0.147339 Top1 91.964286 LR 0.000250 Time 0.456750 +2025-05-20 17:39:08,254 - --- validate (epoch=113)----------- +2025-05-20 17:39:08,254 - 2000 samples (256 per mini-batch) +2025-05-20 17:39:12,655 - Epoch: [113][ 8/ 8] Loss 0.189131 Top1 92.650000 +2025-05-20 17:39:12,695 - ==> Top1: 92.650 Loss: 0.189 + +2025-05-20 17:39:12,695 - ==> Confusion: +[[932 53] + [ 94 921]] -2023-09-11 14:24:22,108 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:24:22,108 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:24:22,112 - - -2023-09-11 14:24:22,113 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:24:26,460 - Epoch: [118][ 10/ 71] Overall Loss 0.136541 Objective Loss 0.136541 LR 0.000250 Time 0.434664 -2023-09-11 14:24:28,529 - Epoch: [118][ 20/ 71] Overall Loss 0.143755 Objective Loss 0.143755 LR 0.000250 Time 0.320767 -2023-09-11 14:24:31,693 - Epoch: [118][ 30/ 71] Overall Loss 0.143971 Objective Loss 0.143971 LR 0.000250 Time 0.319312 -2023-09-11 14:24:34,227 - Epoch: [118][ 40/ 71] Overall Loss 0.148381 Objective Loss 0.148381 LR 0.000250 Time 0.302809 -2023-09-11 14:24:36,566 - Epoch: [118][ 50/ 71] Overall Loss 0.151889 Objective Loss 0.151889 LR 0.000250 Time 0.289030 -2023-09-11 14:24:39,012 - Epoch: [118][ 60/ 71] Overall Loss 0.152913 Objective Loss 0.152913 LR 0.000250 Time 
0.281612 -2023-09-11 14:24:41,083 - Epoch: [118][ 70/ 71] Overall Loss 0.154050 Objective Loss 0.154050 Top1 94.531250 LR 0.000250 Time 0.270977 -2023-09-11 14:24:41,160 - Epoch: [118][ 71/ 71] Overall Loss 0.155474 Objective Loss 0.155474 Top1 93.154762 LR 0.000250 Time 0.268240 -2023-09-11 14:24:41,258 - --- validate (epoch=118)----------- -2023-09-11 14:24:41,258 - 2000 samples (256 per mini-batch) -2023-09-11 14:24:43,811 - Epoch: [118][ 8/ 8] Loss 0.193220 Top1 91.700000 -2023-09-11 14:24:43,906 - ==> Top1: 91.700 Loss: 0.193 - -2023-09-11 14:24:43,907 - ==> Confusion: +2025-05-20 17:39:12,709 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:39:12,710 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:39:12,718 - + +2025-05-20 17:39:12,718 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:39:19,305 - Epoch: [114][ 10/ 71] Overall Loss 0.146474 Objective Loss 0.146474 LR 0.000250 Time 0.658602 +2025-05-20 17:39:23,424 - Epoch: [114][ 20/ 71] Overall Loss 0.152880 Objective Loss 0.152880 LR 0.000250 Time 0.535223 +2025-05-20 17:39:28,599 - Epoch: [114][ 30/ 71] Overall Loss 0.154151 Objective Loss 0.154151 LR 0.000250 Time 0.529319 +2025-05-20 17:39:33,025 - Epoch: [114][ 40/ 71] Overall Loss 0.153188 Objective Loss 0.153188 LR 0.000250 Time 0.507626 +2025-05-20 17:39:37,619 - Epoch: [114][ 50/ 71] Overall Loss 0.151510 Objective Loss 0.151510 LR 0.000250 Time 0.497975 +2025-05-20 17:39:40,817 - Epoch: [114][ 60/ 71] Overall Loss 0.152574 Objective Loss 0.152574 LR 0.000250 Time 0.468282 +2025-05-20 17:39:44,818 - Epoch: [114][ 70/ 71] Overall Loss 0.150923 Objective Loss 0.150923 Top1 94.531250 LR 0.000250 Time 0.458534 +2025-05-20 17:39:44,915 - Epoch: [114][ 71/ 71] Overall Loss 0.150047 Objective Loss 0.150047 Top1 94.940476 LR 0.000250 Time 0.453441 +2025-05-20 17:39:44,953 - --- validate (epoch=114)----------- +2025-05-20 17:39:44,953 - 2000 samples 
(256 per mini-batch) +2025-05-20 17:39:48,888 - Epoch: [114][ 8/ 8] Loss 0.178189 Top1 92.700000 +2025-05-20 17:39:48,920 - ==> Top1: 92.700 Loss: 0.178 + +2025-05-20 17:39:48,921 - ==> Confusion: +[[923 62] + [ 84 931]] + +2025-05-20 17:39:48,930 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:39:48,930 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:39:48,938 - + +2025-05-20 17:39:48,938 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:39:53,695 - Epoch: [115][ 10/ 71] Overall Loss 0.141023 Objective Loss 0.141023 LR 0.000250 Time 0.475658 +2025-05-20 17:39:58,670 - Epoch: [115][ 20/ 71] Overall Loss 0.143323 Objective Loss 0.143323 LR 0.000250 Time 0.486566 +2025-05-20 17:40:01,684 - Epoch: [115][ 30/ 71] Overall Loss 0.142525 Objective Loss 0.142525 LR 0.000250 Time 0.424838 +2025-05-20 17:40:05,718 - Epoch: [115][ 40/ 71] Overall Loss 0.144886 Objective Loss 0.144886 LR 0.000250 Time 0.419464 +2025-05-20 17:40:08,973 - Epoch: [115][ 50/ 71] Overall Loss 0.145959 Objective Loss 0.145959 LR 0.000250 Time 0.400659 +2025-05-20 17:40:12,485 - Epoch: [115][ 60/ 71] Overall Loss 0.147920 Objective Loss 0.147920 LR 0.000250 Time 0.392417 +2025-05-20 17:40:15,723 - Epoch: [115][ 70/ 71] Overall Loss 0.147691 Objective Loss 0.147691 Top1 93.750000 LR 0.000250 Time 0.382613 +2025-05-20 17:40:15,833 - Epoch: [115][ 71/ 71] Overall Loss 0.146439 Objective Loss 0.146439 Top1 95.238095 LR 0.000250 Time 0.378774 +2025-05-20 17:40:15,867 - --- validate (epoch=115)----------- +2025-05-20 17:40:15,867 - 2000 samples (256 per mini-batch) +2025-05-20 17:40:19,559 - Epoch: [115][ 8/ 8] Loss 0.180223 Top1 93.050000 +2025-05-20 17:40:19,594 - ==> Top1: 93.050 Loss: 0.180 + +2025-05-20 17:40:19,594 - ==> Confusion: [[907 78] - [ 88 927]] + [ 61 954]] -2023-09-11 14:24:43,922 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:24:43,922 - Saving 
checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:24:43,924 - - -2023-09-11 14:24:43,924 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:24:48,445 - Epoch: [119][ 10/ 71] Overall Loss 0.143458 Objective Loss 0.143458 LR 0.000250 Time 0.452039 -2023-09-11 14:24:50,502 - Epoch: [119][ 20/ 71] Overall Loss 0.148373 Objective Loss 0.148373 LR 0.000250 Time 0.328819 -2023-09-11 14:24:53,490 - Epoch: [119][ 30/ 71] Overall Loss 0.152654 Objective Loss 0.152654 LR 0.000250 Time 0.318825 -2023-09-11 14:24:56,081 - Epoch: [119][ 40/ 71] Overall Loss 0.155670 Objective Loss 0.155670 LR 0.000250 Time 0.303888 -2023-09-11 14:24:59,009 - Epoch: [119][ 50/ 71] Overall Loss 0.151970 Objective Loss 0.151970 LR 0.000250 Time 0.301663 -2023-09-11 14:25:01,519 - Epoch: [119][ 60/ 71] Overall Loss 0.154865 Objective Loss 0.154865 LR 0.000250 Time 0.293203 -2023-09-11 14:25:04,066 - Epoch: [119][ 70/ 71] Overall Loss 0.154402 Objective Loss 0.154402 Top1 94.921875 LR 0.000250 Time 0.287699 -2023-09-11 14:25:04,142 - Epoch: [119][ 71/ 71] Overall Loss 0.153025 Objective Loss 0.153025 Top1 95.535714 LR 0.000250 Time 0.284722 -2023-09-11 14:25:04,253 - --- validate (epoch=119)----------- -2023-09-11 14:25:04,253 - 2000 samples (256 per mini-batch) -2023-09-11 14:25:06,684 - Epoch: [119][ 8/ 8] Loss 0.190002 Top1 91.950000 -2023-09-11 14:25:06,791 - ==> Top1: 91.950 Loss: 0.190 - -2023-09-11 14:25:06,791 - ==> Confusion: -[[902 83] - [ 78 937]] +2025-05-20 17:40:19,612 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:40:19,612 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:40:19,620 - + +2025-05-20 17:40:19,620 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:40:24,886 - Epoch: [116][ 10/ 71] Overall Loss 0.128982 Objective Loss 0.128982 LR 0.000250 Time 0.526583 +2025-05-20 17:40:28,289 - Epoch: [116][ 20/ 71] Overall Loss 0.136912 
Objective Loss 0.136912 LR 0.000250 Time 0.433414 +2025-05-20 17:40:32,958 - Epoch: [116][ 30/ 71] Overall Loss 0.139740 Objective Loss 0.139740 LR 0.000250 Time 0.444549 +2025-05-20 17:40:36,084 - Epoch: [116][ 40/ 71] Overall Loss 0.140511 Objective Loss 0.140511 LR 0.000250 Time 0.411557 +2025-05-20 17:40:40,412 - Epoch: [116][ 50/ 71] Overall Loss 0.142179 Objective Loss 0.142179 LR 0.000250 Time 0.415802 +2025-05-20 17:40:43,553 - Epoch: [116][ 60/ 71] Overall Loss 0.143216 Objective Loss 0.143216 LR 0.000250 Time 0.398849 +2025-05-20 17:40:47,555 - Epoch: [116][ 70/ 71] Overall Loss 0.143645 Objective Loss 0.143645 Top1 92.968750 LR 0.000250 Time 0.399042 +2025-05-20 17:40:47,645 - Epoch: [116][ 71/ 71] Overall Loss 0.142804 Objective Loss 0.142804 Top1 94.047619 LR 0.000250 Time 0.394686 +2025-05-20 17:40:47,681 - --- validate (epoch=116)----------- +2025-05-20 17:40:47,681 - 2000 samples (256 per mini-batch) +2025-05-20 17:40:51,111 - Epoch: [116][ 8/ 8] Loss 0.196859 Top1 91.850000 +2025-05-20 17:40:51,149 - ==> Top1: 91.850 Loss: 0.197 + +2025-05-20 17:40:51,149 - ==> Confusion: +[[893 92] + [ 71 944]] -2023-09-11 14:25:06,807 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:25:06,807 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:25:06,810 - - -2023-09-11 14:25:06,810 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:25:10,145 - Epoch: [120][ 10/ 71] Overall Loss 0.158165 Objective Loss 0.158165 LR 0.000250 Time 0.333444 -2023-09-11 14:25:12,969 - Epoch: [120][ 20/ 71] Overall Loss 0.155748 Objective Loss 0.155748 LR 0.000250 Time 0.307904 -2023-09-11 14:25:15,102 - Epoch: [120][ 30/ 71] Overall Loss 0.152589 Objective Loss 0.152589 LR 0.000250 Time 0.276370 -2023-09-11 14:25:17,982 - Epoch: [120][ 40/ 71] Overall Loss 0.151822 Objective Loss 0.151822 LR 0.000250 Time 0.279252 -2023-09-11 14:25:20,772 - Epoch: [120][ 50/ 71] Overall Loss 0.151450 Objective Loss 
0.151450 LR 0.000250 Time 0.279197 -2023-09-11 14:25:22,949 - Epoch: [120][ 60/ 71] Overall Loss 0.151350 Objective Loss 0.151350 LR 0.000250 Time 0.268955 -2023-09-11 14:25:25,387 - Epoch: [120][ 70/ 71] Overall Loss 0.151768 Objective Loss 0.151768 Top1 94.140625 LR 0.000250 Time 0.265348 -2023-09-11 14:25:25,432 - Epoch: [120][ 71/ 71] Overall Loss 0.152307 Objective Loss 0.152307 Top1 93.750000 LR 0.000250 Time 0.262243 -2023-09-11 14:25:25,542 - --- validate (epoch=120)----------- -2023-09-11 14:25:25,542 - 2000 samples (256 per mini-batch) -2023-09-11 14:25:28,668 - Epoch: [120][ 8/ 8] Loss 0.176358 Top1 93.350000 -2023-09-11 14:25:28,767 - ==> Top1: 93.350 Loss: 0.176 - -2023-09-11 14:25:28,767 - ==> Confusion: +2025-05-20 17:40:51,164 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:40:51,164 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:40:51,171 - + +2025-05-20 17:40:51,171 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:40:56,251 - Epoch: [117][ 10/ 71] Overall Loss 0.152580 Objective Loss 0.152580 LR 0.000250 Time 0.507947 +2025-05-20 17:41:00,631 - Epoch: [117][ 20/ 71] Overall Loss 0.150659 Objective Loss 0.150659 LR 0.000250 Time 0.472946 +2025-05-20 17:41:04,133 - Epoch: [117][ 30/ 71] Overall Loss 0.148864 Objective Loss 0.148864 LR 0.000250 Time 0.432012 +2025-05-20 17:41:07,844 - Epoch: [117][ 40/ 71] Overall Loss 0.147181 Objective Loss 0.147181 LR 0.000250 Time 0.416784 +2025-05-20 17:41:11,326 - Epoch: [117][ 50/ 71] Overall Loss 0.143109 Objective Loss 0.143109 LR 0.000250 Time 0.403056 +2025-05-20 17:41:14,295 - Epoch: [117][ 60/ 71] Overall Loss 0.142307 Objective Loss 0.142307 LR 0.000250 Time 0.385355 +2025-05-20 17:41:18,110 - Epoch: [117][ 70/ 71] Overall Loss 0.141838 Objective Loss 0.141838 Top1 94.921875 LR 0.000250 Time 0.384806 +2025-05-20 17:41:18,219 - Epoch: [117][ 71/ 71] Overall Loss 0.142744 Objective Loss 0.142744 
Top1 92.559524 LR 0.000250 Time 0.380912 +2025-05-20 17:41:18,250 - --- validate (epoch=117)----------- +2025-05-20 17:41:18,250 - 2000 samples (256 per mini-batch) +2025-05-20 17:41:21,835 - Epoch: [117][ 8/ 8] Loss 0.180268 Top1 92.650000 +2025-05-20 17:41:21,874 - ==> Top1: 92.650 Loss: 0.180 + +2025-05-20 17:41:21,874 - ==> Confusion: [[931 54] - [ 79 936]] + [ 93 922]] -2023-09-11 14:25:28,782 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:25:28,782 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:25:28,784 - - -2023-09-11 14:25:28,784 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:25:32,725 - Epoch: [121][ 10/ 71] Overall Loss 0.147763 Objective Loss 0.147763 LR 0.000250 Time 0.394072 -2023-09-11 14:25:35,356 - Epoch: [121][ 20/ 71] Overall Loss 0.148104 Objective Loss 0.148104 LR 0.000250 Time 0.328562 -2023-09-11 14:25:37,380 - Epoch: [121][ 30/ 71] Overall Loss 0.148219 Objective Loss 0.148219 LR 0.000250 Time 0.286481 -2023-09-11 14:25:40,656 - Epoch: [121][ 40/ 71] Overall Loss 0.147496 Objective Loss 0.147496 LR 0.000250 Time 0.296770 -2023-09-11 14:25:43,915 - Epoch: [121][ 50/ 71] Overall Loss 0.148962 Objective Loss 0.148962 LR 0.000250 Time 0.302577 -2023-09-11 14:25:47,096 - Epoch: [121][ 60/ 71] Overall Loss 0.152575 Objective Loss 0.152575 LR 0.000250 Time 0.305168 -2023-09-11 14:25:49,233 - Epoch: [121][ 70/ 71] Overall Loss 0.154188 Objective Loss 0.154188 Top1 94.921875 LR 0.000250 Time 0.292088 -2023-09-11 14:25:49,306 - Epoch: [121][ 71/ 71] Overall Loss 0.153444 Objective Loss 0.153444 Top1 95.238095 LR 0.000250 Time 0.289001 -2023-09-11 14:25:49,399 - --- validate (epoch=121)----------- -2023-09-11 14:25:49,399 - 2000 samples (256 per mini-batch) -2023-09-11 14:25:52,387 - Epoch: [121][ 8/ 8] Loss 0.181445 Top1 92.900000 -2023-09-11 14:25:52,492 - ==> Top1: 92.900 Loss: 0.181 - -2023-09-11 14:25:52,493 - ==> Confusion: -[[903 82] - [ 60 955]] 
+2025-05-20 17:41:21,890 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:41:21,890 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:41:21,898 - + +2025-05-20 17:41:21,898 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:41:28,323 - Epoch: [118][ 10/ 71] Overall Loss 0.136129 Objective Loss 0.136129 LR 0.000250 Time 0.642472 +2025-05-20 17:41:31,140 - Epoch: [118][ 20/ 71] Overall Loss 0.141898 Objective Loss 0.141898 LR 0.000250 Time 0.462044 +2025-05-20 17:41:34,852 - Epoch: [118][ 30/ 71] Overall Loss 0.145089 Objective Loss 0.145089 LR 0.000250 Time 0.431769 +2025-05-20 17:41:39,072 - Epoch: [118][ 40/ 71] Overall Loss 0.141558 Objective Loss 0.141558 LR 0.000250 Time 0.429307 +2025-05-20 17:41:42,257 - Epoch: [118][ 50/ 71] Overall Loss 0.142093 Objective Loss 0.142093 LR 0.000250 Time 0.407154 +2025-05-20 17:41:45,799 - Epoch: [118][ 60/ 71] Overall Loss 0.141396 Objective Loss 0.141396 LR 0.000250 Time 0.398327 +2025-05-20 17:41:48,684 - Epoch: [118][ 70/ 71] Overall Loss 0.143323 Objective Loss 0.143323 Top1 95.703125 LR 0.000250 Time 0.382629 +2025-05-20 17:41:48,793 - Epoch: [118][ 71/ 71] Overall Loss 0.142423 Objective Loss 0.142423 Top1 96.130952 LR 0.000250 Time 0.378770 +2025-05-20 17:41:48,829 - --- validate (epoch=118)----------- +2025-05-20 17:41:48,829 - 2000 samples (256 per mini-batch) +2025-05-20 17:41:52,224 - Epoch: [118][ 8/ 8] Loss 0.169842 Top1 93.000000 +2025-05-20 17:41:52,259 - ==> Top1: 93.000 Loss: 0.170 + +2025-05-20 17:41:52,259 - ==> Confusion: +[[930 55] + [ 85 930]] -2023-09-11 14:25:52,509 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:25:52,509 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:25:52,513 - - -2023-09-11 14:25:52,513 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:25:56,139 - Epoch: [122][ 10/ 71] Overall Loss 
0.159403 Objective Loss 0.159403 LR 0.000250 Time 0.362527 -2023-09-11 14:25:58,402 - Epoch: [122][ 20/ 71] Overall Loss 0.155574 Objective Loss 0.155574 LR 0.000250 Time 0.294397 -2023-09-11 14:26:01,099 - Epoch: [122][ 30/ 71] Overall Loss 0.155011 Objective Loss 0.155011 LR 0.000250 Time 0.286175 -2023-09-11 14:26:04,594 - Epoch: [122][ 40/ 71] Overall Loss 0.154934 Objective Loss 0.154934 LR 0.000250 Time 0.301995 -2023-09-11 14:26:06,698 - Epoch: [122][ 50/ 71] Overall Loss 0.154435 Objective Loss 0.154435 LR 0.000250 Time 0.283671 -2023-09-11 14:26:09,396 - Epoch: [122][ 60/ 71] Overall Loss 0.154462 Objective Loss 0.154462 LR 0.000250 Time 0.281352 -2023-09-11 14:26:11,330 - Epoch: [122][ 70/ 71] Overall Loss 0.153412 Objective Loss 0.153412 Top1 94.921875 LR 0.000250 Time 0.268780 -2023-09-11 14:26:11,390 - Epoch: [122][ 71/ 71] Overall Loss 0.153494 Objective Loss 0.153494 Top1 94.345238 LR 0.000250 Time 0.265833 -2023-09-11 14:26:11,488 - --- validate (epoch=122)----------- -2023-09-11 14:26:11,488 - 2000 samples (256 per mini-batch) -2023-09-11 14:26:14,508 - Epoch: [122][ 8/ 8] Loss 0.172276 Top1 92.650000 -2023-09-11 14:26:14,606 - ==> Top1: 92.650 Loss: 0.172 - -2023-09-11 14:26:14,606 - ==> Confusion: -[[907 78] - [ 69 946]] +2025-05-20 17:41:52,276 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:41:52,276 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:41:52,283 - + +2025-05-20 17:41:52,283 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:41:57,693 - Epoch: [119][ 10/ 71] Overall Loss 0.154871 Objective Loss 0.154871 LR 0.000250 Time 0.540921 +2025-05-20 17:42:00,813 - Epoch: [119][ 20/ 71] Overall Loss 0.151868 Objective Loss 0.151868 LR 0.000250 Time 0.426468 +2025-05-20 17:42:04,348 - Epoch: [119][ 30/ 71] Overall Loss 0.142720 Objective Loss 0.142720 LR 0.000250 Time 0.402133 +2025-05-20 17:42:08,549 - Epoch: [119][ 40/ 71] Overall Loss 
0.143883 Objective Loss 0.143883 LR 0.000250 Time 0.406603 +2025-05-20 17:42:12,296 - Epoch: [119][ 50/ 71] Overall Loss 0.141876 Objective Loss 0.141876 LR 0.000250 Time 0.400222 +2025-05-20 17:42:15,461 - Epoch: [119][ 60/ 71] Overall Loss 0.143364 Objective Loss 0.143364 LR 0.000250 Time 0.386268 +2025-05-20 17:42:19,048 - Epoch: [119][ 70/ 71] Overall Loss 0.145708 Objective Loss 0.145708 Top1 92.187500 LR 0.000250 Time 0.382325 +2025-05-20 17:42:19,140 - Epoch: [119][ 71/ 71] Overall Loss 0.145765 Objective Loss 0.145765 Top1 92.857143 LR 0.000250 Time 0.378229 +2025-05-20 17:42:19,168 - --- validate (epoch=119)----------- +2025-05-20 17:42:19,168 - 2000 samples (256 per mini-batch) +2025-05-20 17:42:23,098 - Epoch: [119][ 8/ 8] Loss 0.194125 Top1 92.150000 +2025-05-20 17:42:23,136 - ==> Top1: 92.150 Loss: 0.194 + +2025-05-20 17:42:23,136 - ==> Confusion: +[[929 56] + [101 914]] -2023-09-11 14:26:14,621 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:26:14,621 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:26:14,623 - - -2023-09-11 14:26:14,623 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:26:19,779 - Epoch: [123][ 10/ 71] Overall Loss 0.145175 Objective Loss 0.145175 LR 0.000250 Time 0.515550 -2023-09-11 14:26:21,805 - Epoch: [123][ 20/ 71] Overall Loss 0.146461 Objective Loss 0.146461 LR 0.000250 Time 0.359057 -2023-09-11 14:26:25,099 - Epoch: [123][ 30/ 71] Overall Loss 0.148759 Objective Loss 0.148759 LR 0.000250 Time 0.348988 -2023-09-11 14:26:27,280 - Epoch: [123][ 40/ 71] Overall Loss 0.149970 Objective Loss 0.149970 LR 0.000250 Time 0.316264 -2023-09-11 14:26:29,981 - Epoch: [123][ 50/ 71] Overall Loss 0.150674 Objective Loss 0.150674 LR 0.000250 Time 0.307028 -2023-09-11 14:26:32,799 - Epoch: [123][ 60/ 71] Overall Loss 0.149635 Objective Loss 0.149635 LR 0.000250 Time 0.302819 -2023-09-11 14:26:34,998 - Epoch: [123][ 70/ 71] Overall Loss 0.149368 Objective 
Loss 0.149368 Top1 94.531250 LR 0.000250 Time 0.290970 -2023-09-11 14:26:35,130 - Epoch: [123][ 71/ 71] Overall Loss 0.149840 Objective Loss 0.149840 Top1 94.345238 LR 0.000250 Time 0.288720 -2023-09-11 14:26:35,227 - --- validate (epoch=123)----------- -2023-09-11 14:26:35,227 - 2000 samples (256 per mini-batch) -2023-09-11 14:26:38,214 - Epoch: [123][ 8/ 8] Loss 0.187360 Top1 91.500000 -2023-09-11 14:26:38,313 - ==> Top1: 91.500 Loss: 0.187 - -2023-09-11 14:26:38,313 - ==> Confusion: +2025-05-20 17:42:23,153 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:42:23,153 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:42:23,161 - + +2025-05-20 17:42:23,161 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:42:29,102 - Epoch: [120][ 10/ 71] Overall Loss 0.149321 Objective Loss 0.149321 LR 0.000250 Time 0.594103 +2025-05-20 17:42:32,833 - Epoch: [120][ 20/ 71] Overall Loss 0.135715 Objective Loss 0.135715 LR 0.000250 Time 0.483579 +2025-05-20 17:42:36,939 - Epoch: [120][ 30/ 71] Overall Loss 0.139324 Objective Loss 0.139324 LR 0.000250 Time 0.459238 +2025-05-20 17:42:40,043 - Epoch: [120][ 40/ 71] Overall Loss 0.139638 Objective Loss 0.139638 LR 0.000250 Time 0.422025 +2025-05-20 17:42:43,797 - Epoch: [120][ 50/ 71] Overall Loss 0.140431 Objective Loss 0.140431 LR 0.000250 Time 0.412692 +2025-05-20 17:42:46,709 - Epoch: [120][ 60/ 71] Overall Loss 0.141129 Objective Loss 0.141129 LR 0.000250 Time 0.392440 +2025-05-20 17:42:50,709 - Epoch: [120][ 70/ 71] Overall Loss 0.141001 Objective Loss 0.141001 Top1 94.140625 LR 0.000250 Time 0.393508 +2025-05-20 17:42:50,817 - Epoch: [120][ 71/ 71] Overall Loss 0.141249 Objective Loss 0.141249 Top1 94.642857 LR 0.000250 Time 0.389481 +2025-05-20 17:42:50,846 - --- validate (epoch=120)----------- +2025-05-20 17:42:50,846 - 2000 samples (256 per mini-batch) +2025-05-20 17:42:54,254 - Epoch: [120][ 8/ 8] Loss 0.187462 Top1 
92.550000 +2025-05-20 17:42:54,293 - ==> Top1: 92.550 Loss: 0.187 + +2025-05-20 17:42:54,293 - ==> Confusion: [[905 80] - [ 90 925]] + [ 69 946]] -2023-09-11 14:26:38,328 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:26:38,328 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:26:38,330 - - -2023-09-11 14:26:38,330 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:26:41,812 - Epoch: [124][ 10/ 71] Overall Loss 0.153550 Objective Loss 0.153550 LR 0.000250 Time 0.348103 -2023-09-11 14:26:45,586 - Epoch: [124][ 20/ 71] Overall Loss 0.161497 Objective Loss 0.161497 LR 0.000250 Time 0.362750 -2023-09-11 14:26:47,782 - Epoch: [124][ 30/ 71] Overall Loss 0.161224 Objective Loss 0.161224 LR 0.000250 Time 0.315013 -2023-09-11 14:26:50,475 - Epoch: [124][ 40/ 71] Overall Loss 0.159362 Objective Loss 0.159362 LR 0.000250 Time 0.303582 -2023-09-11 14:26:52,907 - Epoch: [124][ 50/ 71] Overall Loss 0.156961 Objective Loss 0.156961 LR 0.000250 Time 0.291492 -2023-09-11 14:26:55,522 - Epoch: [124][ 60/ 71] Overall Loss 0.155239 Objective Loss 0.155239 LR 0.000250 Time 0.286494 -2023-09-11 14:26:57,473 - Epoch: [124][ 70/ 71] Overall Loss 0.152487 Objective Loss 0.152487 Top1 92.578125 LR 0.000250 Time 0.273430 -2023-09-11 14:26:57,539 - Epoch: [124][ 71/ 71] Overall Loss 0.151267 Objective Loss 0.151267 Top1 93.750000 LR 0.000250 Time 0.270511 -2023-09-11 14:26:57,632 - --- validate (epoch=124)----------- -2023-09-11 14:26:57,632 - 2000 samples (256 per mini-batch) -2023-09-11 14:27:00,719 - Epoch: [124][ 8/ 8] Loss 0.178087 Top1 92.950000 -2023-09-11 14:27:00,818 - ==> Top1: 92.950 Loss: 0.178 - -2023-09-11 14:27:00,818 - ==> Confusion: -[[917 68] - [ 73 942]] +2025-05-20 17:42:54,311 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:42:54,311 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:42:54,318 - + +2025-05-20 
17:42:54,318 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:43:00,563 - Epoch: [121][ 10/ 71] Overall Loss 0.151962 Objective Loss 0.151962 LR 0.000250 Time 0.624426 +2025-05-20 17:43:03,790 - Epoch: [121][ 20/ 71] Overall Loss 0.143205 Objective Loss 0.143205 LR 0.000250 Time 0.473507 +2025-05-20 17:43:07,824 - Epoch: [121][ 30/ 71] Overall Loss 0.142985 Objective Loss 0.142985 LR 0.000250 Time 0.450122 +2025-05-20 17:43:11,126 - Epoch: [121][ 40/ 71] Overall Loss 0.141182 Objective Loss 0.141182 LR 0.000250 Time 0.420142 +2025-05-20 17:43:15,351 - Epoch: [121][ 50/ 71] Overall Loss 0.144198 Objective Loss 0.144198 LR 0.000250 Time 0.420616 +2025-05-20 17:43:19,328 - Epoch: [121][ 60/ 71] Overall Loss 0.145244 Objective Loss 0.145244 LR 0.000250 Time 0.416786 +2025-05-20 17:43:23,212 - Epoch: [121][ 70/ 71] Overall Loss 0.145708 Objective Loss 0.145708 Top1 91.015625 LR 0.000250 Time 0.412734 +2025-05-20 17:43:23,308 - Epoch: [121][ 71/ 71] Overall Loss 0.146131 Objective Loss 0.146131 Top1 91.666667 LR 0.000250 Time 0.408259 +2025-05-20 17:43:23,347 - --- validate (epoch=121)----------- +2025-05-20 17:43:23,347 - 2000 samples (256 per mini-batch) +2025-05-20 17:43:27,147 - Epoch: [121][ 8/ 8] Loss 0.211110 Top1 92.300000 +2025-05-20 17:43:27,180 - ==> Top1: 92.300 Loss: 0.211 + +2025-05-20 17:43:27,180 - ==> Confusion: +[[949 36] + [118 897]] + +2025-05-20 17:43:27,198 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:43:27,198 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:43:27,205 - + +2025-05-20 17:43:27,205 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:43:31,839 - Epoch: [122][ 10/ 71] Overall Loss 0.141194 Objective Loss 0.141194 LR 0.000250 Time 0.463280 +2025-05-20 17:43:35,100 - Epoch: [122][ 20/ 71] Overall Loss 0.137514 Objective Loss 0.137514 LR 0.000250 Time 0.394716 +2025-05-20 17:43:38,307 - Epoch: 
[122][ 30/ 71] Overall Loss 0.137958 Objective Loss 0.137958 LR 0.000250 Time 0.370026 +2025-05-20 17:43:42,737 - Epoch: [122][ 40/ 71] Overall Loss 0.141251 Objective Loss 0.141251 LR 0.000250 Time 0.388246 +2025-05-20 17:43:45,861 - Epoch: [122][ 50/ 71] Overall Loss 0.141776 Objective Loss 0.141776 LR 0.000250 Time 0.373073 +2025-05-20 17:43:49,340 - Epoch: [122][ 60/ 71] Overall Loss 0.142603 Objective Loss 0.142603 LR 0.000250 Time 0.368866 +2025-05-20 17:43:52,247 - Epoch: [122][ 70/ 71] Overall Loss 0.143772 Objective Loss 0.143772 Top1 95.703125 LR 0.000250 Time 0.357708 +2025-05-20 17:43:52,357 - Epoch: [122][ 71/ 71] Overall Loss 0.144234 Objective Loss 0.144234 Top1 94.940476 LR 0.000250 Time 0.354207 +2025-05-20 17:43:52,389 - --- validate (epoch=122)----------- +2025-05-20 17:43:52,389 - 2000 samples (256 per mini-batch) +2025-05-20 17:43:56,383 - Epoch: [122][ 8/ 8] Loss 0.180052 Top1 92.000000 +2025-05-20 17:43:56,422 - ==> Top1: 92.000 Loss: 0.180 + +2025-05-20 17:43:56,422 - ==> Confusion: +[[906 79] + [ 81 934]] -2023-09-11 14:27:00,832 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:27:00,832 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:27:00,835 - - -2023-09-11 14:27:00,835 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:27:05,308 - Epoch: [125][ 10/ 71] Overall Loss 0.146957 Objective Loss 0.146957 LR 0.000250 Time 0.447231 -2023-09-11 14:27:07,395 - Epoch: [125][ 20/ 71] Overall Loss 0.148726 Objective Loss 0.148726 LR 0.000250 Time 0.327946 -2023-09-11 14:27:10,184 - Epoch: [125][ 30/ 71] Overall Loss 0.150275 Objective Loss 0.150275 LR 0.000250 Time 0.311585 -2023-09-11 14:27:13,416 - Epoch: [125][ 40/ 71] Overall Loss 0.153439 Objective Loss 0.153439 LR 0.000250 Time 0.314498 -2023-09-11 14:27:16,237 - Epoch: [125][ 50/ 71] Overall Loss 0.154053 Objective Loss 0.154053 LR 0.000250 Time 0.308005 -2023-09-11 14:27:19,235 - Epoch: [125][ 60/ 71] 
Overall Loss 0.151248 Objective Loss 0.151248 LR 0.000250 Time 0.306637 -2023-09-11 14:27:21,512 - Epoch: [125][ 70/ 71] Overall Loss 0.153314 Objective Loss 0.153314 Top1 95.312500 LR 0.000250 Time 0.295348 -2023-09-11 14:27:21,593 - Epoch: [125][ 71/ 71] Overall Loss 0.153839 Objective Loss 0.153839 Top1 94.345238 LR 0.000250 Time 0.292330 -2023-09-11 14:27:21,684 - --- validate (epoch=125)----------- -2023-09-11 14:27:21,684 - 2000 samples (256 per mini-batch) -2023-09-11 14:27:24,768 - Epoch: [125][ 8/ 8] Loss 0.212439 Top1 91.200000 -2023-09-11 14:27:24,866 - ==> Top1: 91.200 Loss: 0.212 - -2023-09-11 14:27:24,866 - ==> Confusion: -[[852 133] - [ 43 972]] - -2023-09-11 14:27:24,883 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:27:24,883 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:27:24,888 - - -2023-09-11 14:27:24,888 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:27:28,180 - Epoch: [126][ 10/ 71] Overall Loss 0.159838 Objective Loss 0.159838 LR 0.000250 Time 0.329214 -2023-09-11 14:27:31,786 - Epoch: [126][ 20/ 71] Overall Loss 0.154857 Objective Loss 0.154857 LR 0.000250 Time 0.344889 -2023-09-11 14:27:33,863 - Epoch: [126][ 30/ 71] Overall Loss 0.151944 Objective Loss 0.151944 LR 0.000250 Time 0.299120 -2023-09-11 14:27:36,715 - Epoch: [126][ 40/ 71] Overall Loss 0.151772 Objective Loss 0.151772 LR 0.000250 Time 0.295652 -2023-09-11 14:27:38,923 - Epoch: [126][ 50/ 71] Overall Loss 0.151669 Objective Loss 0.151669 LR 0.000250 Time 0.280665 -2023-09-11 14:27:42,280 - Epoch: [126][ 60/ 71] Overall Loss 0.153885 Objective Loss 0.153885 LR 0.000250 Time 0.289832 -2023-09-11 14:27:44,987 - Epoch: [126][ 70/ 71] Overall Loss 0.152757 Objective Loss 0.152757 Top1 94.921875 LR 0.000250 Time 0.287090 -2023-09-11 14:27:45,052 - Epoch: [126][ 71/ 71] Overall Loss 0.153143 Objective Loss 0.153143 Top1 94.642857 LR 0.000250 Time 0.283969 -2023-09-11 14:27:45,144 - --- 
validate (epoch=126)----------- -2023-09-11 14:27:45,145 - 2000 samples (256 per mini-batch) -2023-09-11 14:27:48,224 - Epoch: [126][ 8/ 8] Loss 0.173259 Top1 92.150000 -2023-09-11 14:27:48,324 - ==> Top1: 92.150 Loss: 0.173 - -2023-09-11 14:27:48,325 - ==> Confusion: -[[898 87] - [ 70 945]] +2025-05-20 17:43:56,437 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:43:56,437 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:43:56,444 - + +2025-05-20 17:43:56,444 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:44:02,563 - Epoch: [123][ 10/ 71] Overall Loss 0.146543 Objective Loss 0.146543 LR 0.000250 Time 0.611778 +2025-05-20 17:44:05,427 - Epoch: [123][ 20/ 71] Overall Loss 0.143877 Objective Loss 0.143877 LR 0.000250 Time 0.449077 +2025-05-20 17:44:09,524 - Epoch: [123][ 30/ 71] Overall Loss 0.146919 Objective Loss 0.146919 LR 0.000250 Time 0.435938 +2025-05-20 17:44:13,288 - Epoch: [123][ 40/ 71] Overall Loss 0.144743 Objective Loss 0.144743 LR 0.000250 Time 0.421067 +2025-05-20 17:44:17,770 - Epoch: [123][ 50/ 71] Overall Loss 0.143991 Objective Loss 0.143991 LR 0.000250 Time 0.426476 +2025-05-20 17:44:21,394 - Epoch: [123][ 60/ 71] Overall Loss 0.144616 Objective Loss 0.144616 LR 0.000250 Time 0.415800 +2025-05-20 17:44:25,466 - Epoch: [123][ 70/ 71] Overall Loss 0.144053 Objective Loss 0.144053 Top1 94.921875 LR 0.000250 Time 0.414558 +2025-05-20 17:44:25,560 - Epoch: [123][ 71/ 71] Overall Loss 0.143516 Objective Loss 0.143516 Top1 95.238095 LR 0.000250 Time 0.410048 +2025-05-20 17:44:25,596 - --- validate (epoch=123)----------- +2025-05-20 17:44:25,596 - 2000 samples (256 per mini-batch) +2025-05-20 17:44:28,859 - Epoch: [123][ 8/ 8] Loss 0.175703 Top1 92.450000 +2025-05-20 17:44:28,897 - ==> Top1: 92.450 Loss: 0.176 + +2025-05-20 17:44:28,898 - ==> Confusion: +[[921 64] + [ 87 928]] -2023-09-11 14:27:48,332 - ==> Best [Top1: 93.400 Sparsity:0.00 
Params: 57776 on epoch: 105] -2023-09-11 14:27:48,332 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:27:48,335 - - -2023-09-11 14:27:48,335 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:27:51,917 - Epoch: [127][ 10/ 71] Overall Loss 0.156472 Objective Loss 0.156472 LR 0.000250 Time 0.358186 -2023-09-11 14:27:54,680 - Epoch: [127][ 20/ 71] Overall Loss 0.158909 Objective Loss 0.158909 LR 0.000250 Time 0.317199 -2023-09-11 14:27:57,898 - Epoch: [127][ 30/ 71] Overall Loss 0.157736 Objective Loss 0.157736 LR 0.000250 Time 0.318736 -2023-09-11 14:27:59,904 - Epoch: [127][ 40/ 71] Overall Loss 0.156398 Objective Loss 0.156398 LR 0.000250 Time 0.289190 -2023-09-11 14:28:02,615 - Epoch: [127][ 50/ 71] Overall Loss 0.154198 Objective Loss 0.154198 LR 0.000250 Time 0.285568 -2023-09-11 14:28:05,250 - Epoch: [127][ 60/ 71] Overall Loss 0.151700 Objective Loss 0.151700 LR 0.000250 Time 0.281887 -2023-09-11 14:28:07,321 - Epoch: [127][ 70/ 71] Overall Loss 0.152589 Objective Loss 0.152589 Top1 94.921875 LR 0.000250 Time 0.271199 -2023-09-11 14:28:07,398 - Epoch: [127][ 71/ 71] Overall Loss 0.151469 Objective Loss 0.151469 Top1 95.535714 LR 0.000250 Time 0.268454 -2023-09-11 14:28:07,514 - --- validate (epoch=127)----------- -2023-09-11 14:28:07,514 - 2000 samples (256 per mini-batch) -2023-09-11 14:28:10,211 - Epoch: [127][ 8/ 8] Loss 0.203122 Top1 91.650000 -2023-09-11 14:28:10,309 - ==> Top1: 91.650 Loss: 0.203 - -2023-09-11 14:28:10,309 - ==> Confusion: -[[925 60] - [107 908]] +2025-05-20 17:44:28,914 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:44:28,914 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:44:28,922 - + +2025-05-20 17:44:28,922 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:44:34,462 - Epoch: [124][ 10/ 71] Overall Loss 0.133024 Objective Loss 0.133024 LR 0.000250 Time 0.553987 
+2025-05-20 17:44:38,093 - Epoch: [124][ 20/ 71] Overall Loss 0.134184 Objective Loss 0.134184 LR 0.000250 Time 0.458490 +2025-05-20 17:44:42,123 - Epoch: [124][ 30/ 71] Overall Loss 0.136399 Objective Loss 0.136399 LR 0.000250 Time 0.440011 +2025-05-20 17:44:45,366 - Epoch: [124][ 40/ 71] Overall Loss 0.138197 Objective Loss 0.138197 LR 0.000250 Time 0.411070 +2025-05-20 17:44:49,702 - Epoch: [124][ 50/ 71] Overall Loss 0.140745 Objective Loss 0.140745 LR 0.000250 Time 0.415557 +2025-05-20 17:44:53,282 - Epoch: [124][ 60/ 71] Overall Loss 0.140619 Objective Loss 0.140619 LR 0.000250 Time 0.405968 +2025-05-20 17:44:57,375 - Epoch: [124][ 70/ 71] Overall Loss 0.140074 Objective Loss 0.140074 Top1 94.921875 LR 0.000250 Time 0.406435 +2025-05-20 17:44:57,466 - Epoch: [124][ 71/ 71] Overall Loss 0.139421 Objective Loss 0.139421 Top1 95.238095 LR 0.000250 Time 0.401991 +2025-05-20 17:44:57,498 - --- validate (epoch=124)----------- +2025-05-20 17:44:57,498 - 2000 samples (256 per mini-batch) +2025-05-20 17:45:01,230 - Epoch: [124][ 8/ 8] Loss 0.192012 Top1 92.750000 +2025-05-20 17:45:01,263 - ==> Top1: 92.750 Loss: 0.192 + +2025-05-20 17:45:01,263 - ==> Confusion: +[[926 59] + [ 86 929]] -2023-09-11 14:28:10,325 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:28:10,325 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:28:10,327 - - -2023-09-11 14:28:10,328 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:28:14,639 - Epoch: [128][ 10/ 71] Overall Loss 0.152323 Objective Loss 0.152323 LR 0.000250 Time 0.431095 -2023-09-11 14:28:16,637 - Epoch: [128][ 20/ 71] Overall Loss 0.146072 Objective Loss 0.146072 LR 0.000250 Time 0.315449 -2023-09-11 14:28:19,132 - Epoch: [128][ 30/ 71] Overall Loss 0.145345 Objective Loss 0.145345 LR 0.000250 Time 0.293457 -2023-09-11 14:28:21,393 - Epoch: [128][ 40/ 71] Overall Loss 0.145795 Objective Loss 0.145795 LR 0.000250 Time 0.276598 -2023-09-11 
14:28:24,776 - Epoch: [128][ 50/ 71] Overall Loss 0.143390 Objective Loss 0.143390 LR 0.000250 Time 0.288941 -2023-09-11 14:28:26,924 - Epoch: [128][ 60/ 71] Overall Loss 0.147056 Objective Loss 0.147056 LR 0.000250 Time 0.276578 -2023-09-11 14:28:29,257 - Epoch: [128][ 70/ 71] Overall Loss 0.148932 Objective Loss 0.148932 Top1 94.921875 LR 0.000250 Time 0.270382 -2023-09-11 14:28:29,341 - Epoch: [128][ 71/ 71] Overall Loss 0.147760 Objective Loss 0.147760 Top1 96.130952 LR 0.000250 Time 0.267753 -2023-09-11 14:28:29,437 - --- validate (epoch=128)----------- -2023-09-11 14:28:29,437 - 2000 samples (256 per mini-batch) -2023-09-11 14:28:32,611 - Epoch: [128][ 8/ 8] Loss 0.172684 Top1 92.400000 -2023-09-11 14:28:32,711 - ==> Top1: 92.400 Loss: 0.173 - -2023-09-11 14:28:32,711 - ==> Confusion: -[[920 65] - [ 87 928]] +2025-05-20 17:45:01,270 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:45:01,270 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:45:01,277 - + +2025-05-20 17:45:01,277 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:45:07,594 - Epoch: [125][ 10/ 71] Overall Loss 0.136750 Objective Loss 0.136750 LR 0.000250 Time 0.631622 +2025-05-20 17:45:10,663 - Epoch: [125][ 20/ 71] Overall Loss 0.146409 Objective Loss 0.146409 LR 0.000250 Time 0.469222 +2025-05-20 17:45:14,448 - Epoch: [125][ 30/ 71] Overall Loss 0.143006 Objective Loss 0.143006 LR 0.000250 Time 0.438988 +2025-05-20 17:45:18,028 - Epoch: [125][ 40/ 71] Overall Loss 0.138526 Objective Loss 0.138526 LR 0.000250 Time 0.418736 +2025-05-20 17:45:21,759 - Epoch: [125][ 50/ 71] Overall Loss 0.137745 Objective Loss 0.137745 LR 0.000250 Time 0.409598 +2025-05-20 17:45:25,044 - Epoch: [125][ 60/ 71] Overall Loss 0.139647 Objective Loss 0.139647 LR 0.000250 Time 0.396078 +2025-05-20 17:45:29,327 - Epoch: [125][ 70/ 71] Overall Loss 0.137221 Objective Loss 0.137221 Top1 96.093750 LR 0.000250 Time 
0.400676 +2025-05-20 17:45:29,425 - Epoch: [125][ 71/ 71] Overall Loss 0.137475 Objective Loss 0.137475 Top1 95.238095 LR 0.000250 Time 0.396407 +2025-05-20 17:45:29,463 - --- validate (epoch=125)----------- +2025-05-20 17:45:29,464 - 2000 samples (256 per mini-batch) +2025-05-20 17:45:33,471 - Epoch: [125][ 8/ 8] Loss 0.194108 Top1 93.150000 +2025-05-20 17:45:33,504 - ==> Top1: 93.150 Loss: 0.194 + +2025-05-20 17:45:33,504 - ==> Confusion: +[[939 46] + [ 91 924]] -2023-09-11 14:28:32,726 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:28:32,726 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:28:32,729 - - -2023-09-11 14:28:32,729 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:28:36,730 - Epoch: [129][ 10/ 71] Overall Loss 0.143648 Objective Loss 0.143648 LR 0.000250 Time 0.400093 -2023-09-11 14:28:38,847 - Epoch: [129][ 20/ 71] Overall Loss 0.143172 Objective Loss 0.143172 LR 0.000250 Time 0.305862 -2023-09-11 14:28:42,151 - Epoch: [129][ 30/ 71] Overall Loss 0.146932 Objective Loss 0.146932 LR 0.000250 Time 0.314029 -2023-09-11 14:28:44,391 - Epoch: [129][ 40/ 71] Overall Loss 0.145313 Objective Loss 0.145313 LR 0.000250 Time 0.291516 -2023-09-11 14:28:47,172 - Epoch: [129][ 50/ 71] Overall Loss 0.144002 Objective Loss 0.144002 LR 0.000250 Time 0.288827 -2023-09-11 14:28:49,332 - Epoch: [129][ 60/ 71] Overall Loss 0.144306 Objective Loss 0.144306 LR 0.000250 Time 0.276680 -2023-09-11 14:28:51,777 - Epoch: [129][ 70/ 71] Overall Loss 0.143729 Objective Loss 0.143729 Top1 94.531250 LR 0.000250 Time 0.272077 -2023-09-11 14:28:51,841 - Epoch: [129][ 71/ 71] Overall Loss 0.144571 Objective Loss 0.144571 Top1 93.750000 LR 0.000250 Time 0.269144 -2023-09-11 14:28:51,936 - --- validate (epoch=129)----------- -2023-09-11 14:28:51,937 - 2000 samples (256 per mini-batch) -2023-09-11 14:28:55,163 - Epoch: [129][ 8/ 8] Loss 0.176960 Top1 92.950000 -2023-09-11 14:28:55,254 - ==> Top1: 
92.950 Loss: 0.177 - -2023-09-11 14:28:55,254 - ==> Confusion: -[[910 75] - [ 66 949]] +2025-05-20 17:45:33,520 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:45:33,520 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:45:33,527 - + +2025-05-20 17:45:33,527 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:45:38,766 - Epoch: [126][ 10/ 71] Overall Loss 0.140975 Objective Loss 0.140975 LR 0.000250 Time 0.523839 +2025-05-20 17:45:42,112 - Epoch: [126][ 20/ 71] Overall Loss 0.143656 Objective Loss 0.143656 LR 0.000250 Time 0.429197 +2025-05-20 17:45:45,914 - Epoch: [126][ 30/ 71] Overall Loss 0.143215 Objective Loss 0.143215 LR 0.000250 Time 0.412835 +2025-05-20 17:45:49,868 - Epoch: [126][ 40/ 71] Overall Loss 0.139499 Objective Loss 0.139499 LR 0.000250 Time 0.408475 +2025-05-20 17:45:54,015 - Epoch: [126][ 50/ 71] Overall Loss 0.140551 Objective Loss 0.140551 LR 0.000250 Time 0.409718 +2025-05-20 17:45:57,053 - Epoch: [126][ 60/ 71] Overall Loss 0.140511 Objective Loss 0.140511 LR 0.000250 Time 0.392054 +2025-05-20 17:46:01,146 - Epoch: [126][ 70/ 71] Overall Loss 0.141274 Objective Loss 0.141274 Top1 95.312500 LR 0.000250 Time 0.394513 +2025-05-20 17:46:01,254 - Epoch: [126][ 71/ 71] Overall Loss 0.142307 Objective Loss 0.142307 Top1 93.452381 LR 0.000250 Time 0.390473 +2025-05-20 17:46:01,290 - --- validate (epoch=126)----------- +2025-05-20 17:46:01,291 - 2000 samples (256 per mini-batch) +2025-05-20 17:46:05,222 - Epoch: [126][ 8/ 8] Loss 0.187760 Top1 92.800000 +2025-05-20 17:46:05,256 - ==> Top1: 92.800 Loss: 0.188 + +2025-05-20 17:46:05,256 - ==> Confusion: +[[921 64] + [ 80 935]] -2023-09-11 14:28:55,270 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:28:55,271 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:28:55,273 - - -2023-09-11 14:28:55,273 - Training epoch: 18000 samples 
(256 per mini-batch) -2023-09-11 14:28:59,767 - Epoch: [130][ 10/ 71] Overall Loss 0.133989 Objective Loss 0.133989 LR 0.000250 Time 0.449328 -2023-09-11 14:29:01,864 - Epoch: [130][ 20/ 71] Overall Loss 0.139110 Objective Loss 0.139110 LR 0.000250 Time 0.329508 -2023-09-11 14:29:04,431 - Epoch: [130][ 30/ 71] Overall Loss 0.145812 Objective Loss 0.145812 LR 0.000250 Time 0.305207 -2023-09-11 14:29:06,811 - Epoch: [130][ 40/ 71] Overall Loss 0.148222 Objective Loss 0.148222 LR 0.000250 Time 0.288416 -2023-09-11 14:29:09,411 - Epoch: [130][ 50/ 71] Overall Loss 0.148935 Objective Loss 0.148935 LR 0.000250 Time 0.282722 -2023-09-11 14:29:12,589 - Epoch: [130][ 60/ 71] Overall Loss 0.148580 Objective Loss 0.148580 LR 0.000250 Time 0.288555 -2023-09-11 14:29:14,827 - Epoch: [130][ 70/ 71] Overall Loss 0.147858 Objective Loss 0.147858 Top1 92.968750 LR 0.000250 Time 0.279303 -2023-09-11 14:29:14,878 - Epoch: [130][ 71/ 71] Overall Loss 0.147621 Objective Loss 0.147621 Top1 93.154762 LR 0.000250 Time 0.276083 -2023-09-11 14:29:14,970 - --- validate (epoch=130)----------- -2023-09-11 14:29:14,970 - 2000 samples (256 per mini-batch) -2023-09-11 14:29:17,980 - Epoch: [130][ 8/ 8] Loss 0.194279 Top1 92.500000 -2023-09-11 14:29:18,090 - ==> Top1: 92.500 Loss: 0.194 - -2023-09-11 14:29:18,090 - ==> Confusion: -[[896 89] - [ 61 954]] +2025-05-20 17:46:05,273 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:46:05,273 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:46:05,280 - + +2025-05-20 17:46:05,280 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:46:09,945 - Epoch: [127][ 10/ 71] Overall Loss 0.124472 Objective Loss 0.124472 LR 0.000250 Time 0.466450 +2025-05-20 17:46:14,276 - Epoch: [127][ 20/ 71] Overall Loss 0.133793 Objective Loss 0.133793 LR 0.000250 Time 0.449744 +2025-05-20 17:46:17,366 - Epoch: [127][ 30/ 71] Overall Loss 0.136698 Objective Loss 0.136698 LR 
0.000250 Time 0.402810 +2025-05-20 17:46:21,256 - Epoch: [127][ 40/ 71] Overall Loss 0.137408 Objective Loss 0.137408 LR 0.000250 Time 0.399351 +2025-05-20 17:46:24,810 - Epoch: [127][ 50/ 71] Overall Loss 0.139959 Objective Loss 0.139959 LR 0.000250 Time 0.390552 +2025-05-20 17:46:27,707 - Epoch: [127][ 60/ 71] Overall Loss 0.142732 Objective Loss 0.142732 LR 0.000250 Time 0.373739 +2025-05-20 17:46:31,461 - Epoch: [127][ 70/ 71] Overall Loss 0.143036 Objective Loss 0.143036 Top1 91.796875 LR 0.000250 Time 0.373971 +2025-05-20 17:46:31,555 - Epoch: [127][ 71/ 71] Overall Loss 0.142832 Objective Loss 0.142832 Top1 92.261905 LR 0.000250 Time 0.370024 +2025-05-20 17:46:31,590 - --- validate (epoch=127)----------- +2025-05-20 17:46:31,590 - 2000 samples (256 per mini-batch) +2025-05-20 17:46:35,121 - Epoch: [127][ 8/ 8] Loss 0.193854 Top1 91.500000 +2025-05-20 17:46:35,154 - ==> Top1: 91.500 Loss: 0.194 + +2025-05-20 17:46:35,154 - ==> Confusion: +[[865 120] + [ 50 965]] + +2025-05-20 17:46:35,172 - ==> Best [Top1: 93.350 Params: 57776 on epoch: 97] +2025-05-20 17:46:35,172 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:46:35,179 - + +2025-05-20 17:46:35,179 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:46:40,157 - Epoch: [128][ 10/ 71] Overall Loss 0.117907 Objective Loss 0.117907 LR 0.000250 Time 0.497725 +2025-05-20 17:46:42,939 - Epoch: [128][ 20/ 71] Overall Loss 0.133441 Objective Loss 0.133441 LR 0.000250 Time 0.387926 +2025-05-20 17:46:47,067 - Epoch: [128][ 30/ 71] Overall Loss 0.136226 Objective Loss 0.136226 LR 0.000250 Time 0.396204 +2025-05-20 17:46:50,214 - Epoch: [128][ 40/ 71] Overall Loss 0.136142 Objective Loss 0.136142 LR 0.000250 Time 0.375826 +2025-05-20 17:46:54,100 - Epoch: [128][ 50/ 71] Overall Loss 0.135945 Objective Loss 0.135945 LR 0.000250 Time 0.378366 +2025-05-20 17:46:57,275 - Epoch: [128][ 60/ 71] Overall Loss 0.138208 Objective Loss 0.138208 
LR 0.000250 Time 0.368225 +2025-05-20 17:47:00,973 - Epoch: [128][ 70/ 71] Overall Loss 0.138344 Objective Loss 0.138344 Top1 96.484375 LR 0.000250 Time 0.368446 +2025-05-20 17:47:01,079 - Epoch: [128][ 71/ 71] Overall Loss 0.137643 Objective Loss 0.137643 Top1 96.726190 LR 0.000250 Time 0.364744 +2025-05-20 17:47:01,108 - --- validate (epoch=128)----------- +2025-05-20 17:47:01,108 - 2000 samples (256 per mini-batch) +2025-05-20 17:47:04,609 - Epoch: [128][ 8/ 8] Loss 0.172276 Top1 93.450000 +2025-05-20 17:47:04,643 - ==> Top1: 93.450 Loss: 0.172 + +2025-05-20 17:47:04,643 - ==> Confusion: +[[947 38] + [ 93 922]] -2023-09-11 14:29:18,107 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:29:18,107 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:29:18,109 - - -2023-09-11 14:29:18,109 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:29:22,195 - Epoch: [131][ 10/ 71] Overall Loss 0.169378 Objective Loss 0.169378 LR 0.000250 Time 0.408476 -2023-09-11 14:29:24,407 - Epoch: [131][ 20/ 71] Overall Loss 0.172940 Objective Loss 0.172940 LR 0.000250 Time 0.314824 -2023-09-11 14:29:27,895 - Epoch: [131][ 30/ 71] Overall Loss 0.162268 Objective Loss 0.162268 LR 0.000250 Time 0.326135 -2023-09-11 14:29:29,981 - Epoch: [131][ 40/ 71] Overall Loss 0.159843 Objective Loss 0.159843 LR 0.000250 Time 0.296755 -2023-09-11 14:29:32,767 - Epoch: [131][ 50/ 71] Overall Loss 0.160467 Objective Loss 0.160467 LR 0.000250 Time 0.293111 -2023-09-11 14:29:35,049 - Epoch: [131][ 60/ 71] Overall Loss 0.158357 Objective Loss 0.158357 LR 0.000250 Time 0.282297 -2023-09-11 14:29:37,745 - Epoch: [131][ 70/ 71] Overall Loss 0.154357 Objective Loss 0.154357 Top1 93.359375 LR 0.000250 Time 0.280465 -2023-09-11 14:29:37,833 - Epoch: [131][ 71/ 71] Overall Loss 0.153234 Objective Loss 0.153234 Top1 94.047619 LR 0.000250 Time 0.277757 -2023-09-11 14:29:37,925 - --- validate (epoch=131)----------- -2023-09-11 
14:29:37,925 - 2000 samples (256 per mini-batch) -2023-09-11 14:29:40,897 - Epoch: [131][ 8/ 8] Loss 0.180635 Top1 92.300000 -2023-09-11 14:29:40,996 - ==> Top1: 92.300 Loss: 0.181 - -2023-09-11 14:29:40,997 - ==> Confusion: -[[918 67] - [ 87 928]] +2025-05-20 17:47:04,658 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 128] +2025-05-20 17:47:04,658 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:47:04,672 - + +2025-05-20 17:47:04,672 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:47:09,763 - Epoch: [129][ 10/ 71] Overall Loss 0.143961 Objective Loss 0.143961 LR 0.000250 Time 0.508973 +2025-05-20 17:47:12,982 - Epoch: [129][ 20/ 71] Overall Loss 0.149478 Objective Loss 0.149478 LR 0.000250 Time 0.415444 +2025-05-20 17:47:17,120 - Epoch: [129][ 30/ 71] Overall Loss 0.145577 Objective Loss 0.145577 LR 0.000250 Time 0.414865 +2025-05-20 17:47:20,591 - Epoch: [129][ 40/ 71] Overall Loss 0.142993 Objective Loss 0.142993 LR 0.000250 Time 0.397924 +2025-05-20 17:47:24,511 - Epoch: [129][ 50/ 71] Overall Loss 0.139479 Objective Loss 0.139479 LR 0.000250 Time 0.396725 +2025-05-20 17:47:27,589 - Epoch: [129][ 60/ 71] Overall Loss 0.139054 Objective Loss 0.139054 LR 0.000250 Time 0.381901 +2025-05-20 17:47:31,061 - Epoch: [129][ 70/ 71] Overall Loss 0.139680 Objective Loss 0.139680 Top1 94.921875 LR 0.000250 Time 0.376945 +2025-05-20 17:47:31,157 - Epoch: [129][ 71/ 71] Overall Loss 0.139327 Objective Loss 0.139327 Top1 95.238095 LR 0.000250 Time 0.372980 +2025-05-20 17:47:31,192 - --- validate (epoch=129)----------- +2025-05-20 17:47:31,192 - 2000 samples (256 per mini-batch) +2025-05-20 17:47:34,553 - Epoch: [129][ 8/ 8] Loss 0.186203 Top1 92.250000 +2025-05-20 17:47:34,587 - ==> Top1: 92.250 Loss: 0.186 + +2025-05-20 17:47:34,587 - ==> Confusion: +[[937 48] + [107 908]] + +2025-05-20 17:47:34,603 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 128] +2025-05-20 17:47:34,603 - 
Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:47:34,610 - + +2025-05-20 17:47:34,610 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:47:39,303 - Epoch: [130][ 10/ 71] Overall Loss 0.148824 Objective Loss 0.148824 LR 0.000250 Time 0.469228 +2025-05-20 17:47:42,946 - Epoch: [130][ 20/ 71] Overall Loss 0.145021 Objective Loss 0.145021 LR 0.000250 Time 0.416721 +2025-05-20 17:47:46,386 - Epoch: [130][ 30/ 71] Overall Loss 0.144404 Objective Loss 0.144404 LR 0.000250 Time 0.392486 +2025-05-20 17:47:50,754 - Epoch: [130][ 40/ 71] Overall Loss 0.142326 Objective Loss 0.142326 LR 0.000250 Time 0.403541 +2025-05-20 17:47:54,312 - Epoch: [130][ 50/ 71] Overall Loss 0.140824 Objective Loss 0.140824 LR 0.000250 Time 0.393994 +2025-05-20 17:47:57,798 - Epoch: [130][ 60/ 71] Overall Loss 0.143766 Objective Loss 0.143766 LR 0.000250 Time 0.386429 +2025-05-20 17:48:01,060 - Epoch: [130][ 70/ 71] Overall Loss 0.141929 Objective Loss 0.141929 Top1 94.531250 LR 0.000250 Time 0.377810 +2025-05-20 17:48:01,154 - Epoch: [130][ 71/ 71] Overall Loss 0.144615 Objective Loss 0.144615 Top1 93.154762 LR 0.000250 Time 0.373824 +2025-05-20 17:48:01,186 - --- validate (epoch=130)----------- +2025-05-20 17:48:01,186 - 2000 samples (256 per mini-batch) +2025-05-20 17:48:04,577 - Epoch: [130][ 8/ 8] Loss 0.171090 Top1 93.400000 +2025-05-20 17:48:04,613 - ==> Top1: 93.400 Loss: 0.171 + +2025-05-20 17:48:04,613 - ==> Confusion: +[[921 64] + [ 68 947]] + +2025-05-20 17:48:04,629 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 128] +2025-05-20 17:48:04,629 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:48:04,636 - + +2025-05-20 17:48:04,637 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:48:11,105 - Epoch: [131][ 10/ 71] Overall Loss 0.139184 Objective Loss 0.139184 LR 0.000250 Time 0.646747 +2025-05-20 17:48:13,870 - 
Epoch: [131][ 20/ 71] Overall Loss 0.137615 Objective Loss 0.137615 LR 0.000250 Time 0.461630 +2025-05-20 17:48:17,592 - Epoch: [131][ 30/ 71] Overall Loss 0.137859 Objective Loss 0.137859 LR 0.000250 Time 0.431810 +2025-05-20 17:48:21,386 - Epoch: [131][ 40/ 71] Overall Loss 0.140075 Objective Loss 0.140075 LR 0.000250 Time 0.418706 +2025-05-20 17:48:25,107 - Epoch: [131][ 50/ 71] Overall Loss 0.140019 Objective Loss 0.140019 LR 0.000250 Time 0.409366 +2025-05-20 17:48:29,438 - Epoch: [131][ 60/ 71] Overall Loss 0.143426 Objective Loss 0.143426 LR 0.000250 Time 0.413326 +2025-05-20 17:48:32,632 - Epoch: [131][ 70/ 71] Overall Loss 0.143889 Objective Loss 0.143889 Top1 93.359375 LR 0.000250 Time 0.399903 +2025-05-20 17:48:32,740 - Epoch: [131][ 71/ 71] Overall Loss 0.144506 Objective Loss 0.144506 Top1 93.750000 LR 0.000250 Time 0.395788 +2025-05-20 17:48:32,774 - --- validate (epoch=131)----------- +2025-05-20 17:48:32,774 - 2000 samples (256 per mini-batch) +2025-05-20 17:48:36,354 - Epoch: [131][ 8/ 8] Loss 0.186805 Top1 92.050000 +2025-05-20 17:48:36,387 - ==> Top1: 92.050 Loss: 0.187 + +2025-05-20 17:48:36,387 - ==> Confusion: +[[890 95] + [ 64 951]] + +2025-05-20 17:48:36,404 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 128] +2025-05-20 17:48:36,404 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:48:36,411 - + +2025-05-20 17:48:36,412 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:48:41,280 - Epoch: [132][ 10/ 71] Overall Loss 0.152028 Objective Loss 0.152028 LR 0.000250 Time 0.486761 +2025-05-20 17:48:44,349 - Epoch: [132][ 20/ 71] Overall Loss 0.149574 Objective Loss 0.149574 LR 0.000250 Time 0.396798 +2025-05-20 17:48:48,317 - Epoch: [132][ 30/ 71] Overall Loss 0.141457 Objective Loss 0.141457 LR 0.000250 Time 0.396791 +2025-05-20 17:48:53,672 - Epoch: [132][ 40/ 71] Overall Loss 0.138643 Objective Loss 0.138643 LR 0.000250 Time 0.431458 +2025-05-20 
17:48:57,039 - Epoch: [132][ 50/ 71] Overall Loss 0.138989 Objective Loss 0.138989 LR 0.000250 Time 0.412518 +2025-05-20 17:49:00,762 - Epoch: [132][ 60/ 71] Overall Loss 0.138521 Objective Loss 0.138521 LR 0.000250 Time 0.405802 +2025-05-20 17:49:03,728 - Epoch: [132][ 70/ 71] Overall Loss 0.137334 Objective Loss 0.137334 Top1 95.703125 LR 0.000250 Time 0.390195 +2025-05-20 17:49:03,836 - Epoch: [132][ 71/ 71] Overall Loss 0.137580 Objective Loss 0.137580 Top1 95.535714 LR 0.000250 Time 0.386226 +2025-05-20 17:49:03,873 - --- validate (epoch=132)----------- +2025-05-20 17:49:03,873 - 2000 samples (256 per mini-batch) +2025-05-20 17:49:07,150 - Epoch: [132][ 8/ 8] Loss 0.184299 Top1 93.050000 +2025-05-20 17:49:07,183 - ==> Top1: 93.050 Loss: 0.184 + +2025-05-20 17:49:07,184 - ==> Confusion: +[[921 64] + [ 75 940]] -2023-09-11 14:29:41,003 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:29:41,003 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:29:41,008 - - -2023-09-11 14:29:41,008 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:29:44,732 - Epoch: [132][ 10/ 71] Overall Loss 0.155653 Objective Loss 0.155653 LR 0.000250 Time 0.372401 -2023-09-11 14:29:47,010 - Epoch: [132][ 20/ 71] Overall Loss 0.152960 Objective Loss 0.152960 LR 0.000250 Time 0.300032 -2023-09-11 14:29:50,533 - Epoch: [132][ 30/ 71] Overall Loss 0.150961 Objective Loss 0.150961 LR 0.000250 Time 0.317469 -2023-09-11 14:29:52,556 - Epoch: [132][ 40/ 71] Overall Loss 0.153491 Objective Loss 0.153491 LR 0.000250 Time 0.288667 -2023-09-11 14:29:56,016 - Epoch: [132][ 50/ 71] Overall Loss 0.150734 Objective Loss 0.150734 LR 0.000250 Time 0.300114 -2023-09-11 14:29:58,085 - Epoch: [132][ 60/ 71] Overall Loss 0.151509 Objective Loss 0.151509 LR 0.000250 Time 0.284582 -2023-09-11 14:30:00,426 - Epoch: [132][ 70/ 71] Overall Loss 0.151010 Objective Loss 0.151010 Top1 93.750000 LR 0.000250 Time 0.277358 -2023-09-11 
14:30:00,502 - Epoch: [132][ 71/ 71] Overall Loss 0.151110 Objective Loss 0.151110 Top1 93.750000 LR 0.000250 Time 0.274529 -2023-09-11 14:30:00,599 - --- validate (epoch=132)----------- -2023-09-11 14:30:00,599 - 2000 samples (256 per mini-batch) -2023-09-11 14:30:03,308 - Epoch: [132][ 8/ 8] Loss 0.195264 Top1 92.350000 -2023-09-11 14:30:03,398 - ==> Top1: 92.350 Loss: 0.195 - -2023-09-11 14:30:03,399 - ==> Confusion: +2025-05-20 17:49:07,189 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 128] +2025-05-20 17:49:07,190 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:49:07,197 - + +2025-05-20 17:49:07,197 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:49:12,153 - Epoch: [133][ 10/ 71] Overall Loss 0.136334 Objective Loss 0.136334 LR 0.000250 Time 0.495523 +2025-05-20 17:49:15,809 - Epoch: [133][ 20/ 71] Overall Loss 0.135759 Objective Loss 0.135759 LR 0.000250 Time 0.430547 +2025-05-20 17:49:19,403 - Epoch: [133][ 30/ 71] Overall Loss 0.137714 Objective Loss 0.137714 LR 0.000250 Time 0.406821 +2025-05-20 17:49:24,146 - Epoch: [133][ 40/ 71] Overall Loss 0.138220 Objective Loss 0.138220 LR 0.000250 Time 0.423684 +2025-05-20 17:49:27,699 - Epoch: [133][ 50/ 71] Overall Loss 0.135480 Objective Loss 0.135480 LR 0.000250 Time 0.410002 +2025-05-20 17:49:31,218 - Epoch: [133][ 60/ 71] Overall Loss 0.137271 Objective Loss 0.137271 LR 0.000250 Time 0.400305 +2025-05-20 17:49:34,128 - Epoch: [133][ 70/ 71] Overall Loss 0.138890 Objective Loss 0.138890 Top1 92.968750 LR 0.000250 Time 0.384686 +2025-05-20 17:49:34,237 - Epoch: [133][ 71/ 71] Overall Loss 0.138944 Objective Loss 0.138944 Top1 93.154762 LR 0.000250 Time 0.380800 +2025-05-20 17:49:34,266 - --- validate (epoch=133)----------- +2025-05-20 17:49:34,266 - 2000 samples (256 per mini-batch) +2025-05-20 17:49:38,131 - Epoch: [133][ 8/ 8] Loss 0.177455 Top1 92.950000 +2025-05-20 17:49:38,168 - ==> Top1: 92.950 Loss: 0.177 + 
+2025-05-20 17:49:38,168 - ==> Confusion: [[924 61] - [ 92 923]] + [ 80 935]] -2023-09-11 14:30:03,409 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:30:03,409 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:30:03,414 - - -2023-09-11 14:30:03,414 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:30:06,881 - Epoch: [133][ 10/ 71] Overall Loss 0.154421 Objective Loss 0.154421 LR 0.000250 Time 0.346661 -2023-09-11 14:30:09,227 - Epoch: [133][ 20/ 71] Overall Loss 0.153585 Objective Loss 0.153585 LR 0.000250 Time 0.290646 -2023-09-11 14:30:12,033 - Epoch: [133][ 30/ 71] Overall Loss 0.145166 Objective Loss 0.145166 LR 0.000250 Time 0.287281 -2023-09-11 14:30:14,808 - Epoch: [133][ 40/ 71] Overall Loss 0.144886 Objective Loss 0.144886 LR 0.000250 Time 0.284809 -2023-09-11 14:30:17,544 - Epoch: [133][ 50/ 71] Overall Loss 0.146801 Objective Loss 0.146801 LR 0.000250 Time 0.282572 -2023-09-11 14:30:20,064 - Epoch: [133][ 60/ 71] Overall Loss 0.147362 Objective Loss 0.147362 LR 0.000250 Time 0.277475 -2023-09-11 14:30:22,796 - Epoch: [133][ 70/ 71] Overall Loss 0.146584 Objective Loss 0.146584 Top1 94.921875 LR 0.000250 Time 0.276847 -2023-09-11 14:30:22,872 - Epoch: [133][ 71/ 71] Overall Loss 0.145839 Objective Loss 0.145839 Top1 95.238095 LR 0.000250 Time 0.274022 -2023-09-11 14:30:22,969 - --- validate (epoch=133)----------- -2023-09-11 14:30:22,969 - 2000 samples (256 per mini-batch) -2023-09-11 14:30:25,818 - Epoch: [133][ 8/ 8] Loss 0.179392 Top1 93.000000 -2023-09-11 14:30:25,925 - ==> Top1: 93.000 Loss: 0.179 - -2023-09-11 14:30:25,925 - ==> Confusion: -[[917 68] - [ 72 943]] +2025-05-20 17:49:38,184 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 128] +2025-05-20 17:49:38,184 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:49:38,191 - + +2025-05-20 17:49:38,191 - Training epoch: 18000 samples (256 per mini-batch, 
world size: 1) +2025-05-20 17:49:44,174 - Epoch: [134][ 10/ 71] Overall Loss 0.133268 Objective Loss 0.133268 LR 0.000250 Time 0.598227 +2025-05-20 17:49:47,130 - Epoch: [134][ 20/ 71] Overall Loss 0.141196 Objective Loss 0.141196 LR 0.000250 Time 0.446858 +2025-05-20 17:49:51,352 - Epoch: [134][ 30/ 71] Overall Loss 0.137128 Objective Loss 0.137128 LR 0.000250 Time 0.438624 +2025-05-20 17:49:54,717 - Epoch: [134][ 40/ 71] Overall Loss 0.135735 Objective Loss 0.135735 LR 0.000250 Time 0.413093 +2025-05-20 17:49:58,999 - Epoch: [134][ 50/ 71] Overall Loss 0.135431 Objective Loss 0.135431 LR 0.000250 Time 0.416108 +2025-05-20 17:50:02,000 - Epoch: [134][ 60/ 71] Overall Loss 0.135563 Objective Loss 0.135563 LR 0.000250 Time 0.396774 +2025-05-20 17:50:06,040 - Epoch: [134][ 70/ 71] Overall Loss 0.136386 Objective Loss 0.136386 Top1 92.187500 LR 0.000250 Time 0.397807 +2025-05-20 17:50:06,143 - Epoch: [134][ 71/ 71] Overall Loss 0.136068 Objective Loss 0.136068 Top1 93.154762 LR 0.000250 Time 0.393652 +2025-05-20 17:50:06,179 - --- validate (epoch=134)----------- +2025-05-20 17:50:06,179 - 2000 samples (256 per mini-batch) +2025-05-20 17:50:09,881 - Epoch: [134][ 8/ 8] Loss 0.173176 Top1 93.200000 +2025-05-20 17:50:09,918 - ==> Top1: 93.200 Loss: 0.173 + +2025-05-20 17:50:09,919 - ==> Confusion: +[[910 75] + [ 61 954]] -2023-09-11 14:30:25,936 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:30:25,937 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:30:25,941 - - -2023-09-11 14:30:25,941 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:30:29,421 - Epoch: [134][ 10/ 71] Overall Loss 0.142843 Objective Loss 0.142843 LR 0.000250 Time 0.347966 -2023-09-11 14:30:32,341 - Epoch: [134][ 20/ 71] Overall Loss 0.147513 Objective Loss 0.147513 LR 0.000250 Time 0.319964 -2023-09-11 14:30:34,454 - Epoch: [134][ 30/ 71] Overall Loss 0.149436 Objective Loss 0.149436 LR 0.000250 Time 0.283716 
-2023-09-11 14:30:37,281 - Epoch: [134][ 40/ 71] Overall Loss 0.149348 Objective Loss 0.149348 LR 0.000250 Time 0.283463
-2023-09-11 14:30:40,393 - Epoch: [134][ 50/ 71] Overall Loss 0.150265 Objective Loss 0.150265 LR 0.000250 Time 0.288989
-2023-09-11 14:30:43,671 - Epoch: [134][ 60/ 71] Overall Loss 0.152650 Objective Loss 0.152650 LR 0.000250 Time 0.295454
-2023-09-11 14:30:46,107 - Epoch: [134][ 70/ 71] Overall Loss 0.151760 Objective Loss 0.151760 Top1 95.703125 LR 0.000250 Time 0.288046
-2023-09-11 14:30:46,223 - Epoch: [134][ 71/ 71] Overall Loss 0.151662 Objective Loss 0.151662 Top1 95.535714 LR 0.000250 Time 0.285624
-2023-09-11 14:30:46,314 - --- validate (epoch=134)-----------
-2023-09-11 14:30:46,314 - 2000 samples (256 per mini-batch)
-2023-09-11 14:30:49,064 - Epoch: [134][ 8/ 8] Loss 0.190496 Top1 91.750000
-2023-09-11 14:30:49,168 - ==> Top1: 91.750 Loss: 0.190
-
-2023-09-11 14:30:49,168 - ==> Confusion:
-[[861 124]
- [ 41 974]]
-
-2023-09-11 14:30:49,183 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:30:49,183 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:30:49,185 - 
-
-2023-09-11 14:30:49,185 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:30:52,477 - Epoch: [135][ 10/ 71] Overall Loss 0.163237 Objective Loss 0.163237 LR 0.000250 Time 0.329069
-2023-09-11 14:30:54,920 - Epoch: [135][ 20/ 71] Overall Loss 0.153894 Objective Loss 0.153894 LR 0.000250 Time 0.286677
-2023-09-11 14:30:57,480 - Epoch: [135][ 30/ 71] Overall Loss 0.150857 Objective Loss 0.150857 LR 0.000250 Time 0.276437
-2023-09-11 14:31:00,120 - Epoch: [135][ 40/ 71] Overall Loss 0.150887 Objective Loss 0.150887 LR 0.000250 Time 0.273328
-2023-09-11 14:31:02,458 - Epoch: [135][ 50/ 71] Overall Loss 0.150766 Objective Loss 0.150766 LR 0.000250 Time 0.265405
-2023-09-11 14:31:05,079 - Epoch: [135][ 60/ 71] Overall Loss 0.154346 Objective Loss 0.154346 LR 0.000250 Time 0.264852
-2023-09-11 14:31:07,554 - Epoch: [135][ 70/ 71] Overall Loss 0.154839 Objective Loss 0.154839 Top1 92.968750 LR 0.000250 Time 0.262373
-2023-09-11 14:31:07,678 - Epoch: [135][ 71/ 71] Overall Loss 0.154324 Objective Loss 0.154324 Top1 93.750000 LR 0.000250 Time 0.260419
-2023-09-11 14:31:07,768 - --- validate (epoch=135)-----------
-2023-09-11 14:31:07,769 - 2000 samples (256 per mini-batch)
-2023-09-11 14:31:10,486 - Epoch: [135][ 8/ 8] Loss 0.201336 Top1 91.950000
-2023-09-11 14:31:10,581 - ==> Top1: 91.950 Loss: 0.201
-
-2023-09-11 14:31:10,582 - ==> Confusion:
-[[938 47]
- [114 901]]
-
-2023-09-11 14:31:10,583 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:31:10,583 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:31:10,586 - 
-
-2023-09-11 14:31:10,586 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:31:14,353 - Epoch: [136][ 10/ 71] Overall Loss 0.155226 Objective Loss 0.155226 LR 0.000250 Time 0.376665
-2023-09-11 14:31:16,377 - Epoch: [136][ 20/ 71] Overall Loss 0.160189 Objective Loss 0.160189 LR 0.000250 Time 0.289501
-2023-09-11 14:31:19,151 - Epoch: [136][ 30/ 71] Overall Loss 0.152189 Objective Loss 0.152189 LR 0.000250 Time 0.285432
-2023-09-11 14:31:21,876 - Epoch: [136][ 40/ 71] Overall Loss 0.148189 Objective Loss 0.148189 LR 0.000250 Time 0.282204
-2023-09-11 14:31:24,545 - Epoch: [136][ 50/ 71] Overall Loss 0.149102 Objective Loss 0.149102 LR 0.000250 Time 0.279139
-2023-09-11 14:31:27,991 - Epoch: [136][ 60/ 71] Overall Loss 0.145139 Objective Loss 0.145139 LR 0.000250 Time 0.290042
-2023-09-11 14:31:30,436 - Epoch: [136][ 70/ 71] Overall Loss 0.143204 Objective Loss 0.143204 Top1 96.484375 LR 0.000250 Time 0.283526
-2023-09-11 14:31:30,510 - Epoch: [136][ 71/ 71] Overall Loss 0.143074 Objective Loss 0.143074 Top1 95.535714 LR 0.000250 Time 0.280579
-2023-09-11 14:31:30,605 - --- validate (epoch=136)-----------
-2023-09-11 14:31:30,605 - 2000 samples (256 per mini-batch)
-2023-09-11 14:31:33,725 - Epoch: [136][ 8/ 8] Loss 0.183390 Top1 92.650000
-2023-09-11 14:31:33,827 - ==> Top1: 92.650 Loss: 0.183
-
-2023-09-11 14:31:33,827 - ==> Confusion:
-[[913 72]
- [ 75 940]]
+2025-05-20 17:50:09,932 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 128]
+2025-05-20 17:50:09,932 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:50:09,940 - 
+
+2025-05-20 17:50:09,940 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:50:15,493 - Epoch: [135][ 10/ 71] Overall Loss 0.126243 Objective Loss 0.126243 LR 0.000250 Time 0.555271
+2025-05-20 17:50:18,743 - Epoch: [135][ 20/ 71] Overall Loss 0.129998 Objective Loss 0.129998 LR 0.000250 Time 0.440131
+2025-05-20 17:50:23,022 - Epoch: [135][ 30/ 71] Overall Loss 0.129617 Objective Loss 0.129617 LR 0.000250 Time 0.436030
+2025-05-20 17:50:26,388 - Epoch: [135][ 40/ 71] Overall Loss 0.133227 Objective Loss 0.133227 LR 0.000250 Time 0.411170
+2025-05-20 17:50:30,429 - Epoch: [135][ 50/ 71] Overall Loss 0.134123 Objective Loss 0.134123 LR 0.000250 Time 0.409755
+2025-05-20 17:50:33,501 - Epoch: [135][ 60/ 71] Overall Loss 0.137416 Objective Loss 0.137416 LR 0.000250 Time 0.392657
+2025-05-20 17:50:37,472 - Epoch: [135][ 70/ 71] Overall Loss 0.139838 Objective Loss 0.139838 Top1 90.625000 LR 0.000250 Time 0.393283
+2025-05-20 17:50:37,580 - Epoch: [135][ 71/ 71] Overall Loss 0.139797 Objective Loss 0.139797 Top1 91.666667 LR 0.000250 Time 0.389268
+2025-05-20 17:50:37,612 - --- validate (epoch=135)-----------
+2025-05-20 17:50:37,612 - 2000 samples (256 per mini-batch)
+2025-05-20 17:50:41,549 - Epoch: [135][ 8/ 8] Loss 0.167213 Top1 93.150000
+2025-05-20 17:50:41,578 - ==> Top1: 93.150 Loss: 0.167
+
+2025-05-20 17:50:41,579 - ==> Confusion:
+[[907 78]
+ [ 59 956]]
-2023-09-11 14:31:33,843 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:31:33,843 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:31:33,845 - 
-
-2023-09-11 14:31:33,845 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:31:37,943 - Epoch: [137][ 10/ 71] Overall Loss 0.152552 Objective Loss 0.152552 LR 0.000250 Time 0.409689
-2023-09-11 14:31:39,999 - Epoch: [137][ 20/ 71] Overall Loss 0.140332 Objective Loss 0.140332 LR 0.000250 Time 0.307619
-2023-09-11 14:31:43,179 - Epoch: [137][ 30/ 71] Overall Loss 0.146501 Objective Loss 0.146501 LR 0.000250 Time 0.311092
-2023-09-11 14:31:45,487 - Epoch: [137][ 40/ 71] Overall Loss 0.143306 Objective Loss 0.143306 LR 0.000250 Time 0.291005
-2023-09-11 14:31:48,250 - Epoch: [137][ 50/ 71] Overall Loss 0.143334 Objective Loss 0.143334 LR 0.000250 Time 0.288060
-2023-09-11 14:31:50,315 - Epoch: [137][ 60/ 71] Overall Loss 0.145178 Objective Loss 0.145178 LR 0.000250 Time 0.274452
-2023-09-11 14:31:52,790 - Epoch: [137][ 70/ 71] Overall Loss 0.144973 Objective Loss 0.144973 Top1 93.359375 LR 0.000250 Time 0.270606
-2023-09-11 14:31:52,868 - Epoch: [137][ 71/ 71] Overall Loss 0.144332 Objective Loss 0.144332 Top1 94.345238 LR 0.000250 Time 0.267882
-2023-09-11 14:31:52,949 - --- validate (epoch=137)-----------
-2023-09-11 14:31:52,949 - 2000 samples (256 per mini-batch)
-2023-09-11 14:31:55,338 - Epoch: [137][ 8/ 8] Loss 0.182730 Top1 93.100000
-2023-09-11 14:31:55,433 - ==> Top1: 93.100 Loss: 0.183
-
-2023-09-11 14:31:55,433 - ==> Confusion:
+2025-05-20 17:50:41,595 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 128]
+2025-05-20 17:50:41,595 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:50:41,602 - 
+
+2025-05-20 17:50:41,602 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:50:46,222 - Epoch: [136][ 10/ 71] Overall Loss 0.130686 Objective Loss 0.130686 LR 0.000250 Time 0.461921
+2025-05-20 17:50:50,267 - Epoch: [136][ 20/ 71] Overall Loss 0.135785 Objective Loss 0.135785 LR 0.000250 Time 0.433198
+2025-05-20 17:50:53,989 - Epoch: [136][ 30/ 71] Overall Loss 0.134360 Objective Loss 0.134360 LR 0.000250 Time 0.412859
+2025-05-20 17:50:58,312 - Epoch: [136][ 40/ 71] Overall Loss 0.135667 Objective Loss 0.135667 LR 0.000250 Time 0.417696
+2025-05-20 17:51:02,338 - Epoch: [136][ 50/ 71] Overall Loss 0.136629 Objective Loss 0.136629 LR 0.000250 Time 0.414674
+2025-05-20 17:51:05,368 - Epoch: [136][ 60/ 71] Overall Loss 0.136785 Objective Loss 0.136785 LR 0.000250 Time 0.396059
+2025-05-20 17:51:09,023 - Epoch: [136][ 70/ 71] Overall Loss 0.136111 Objective Loss 0.136111 Top1 95.312500 LR 0.000250 Time 0.391684
+2025-05-20 17:51:09,131 - Epoch: [136][ 71/ 71] Overall Loss 0.136596 Objective Loss 0.136596 Top1 94.940476 LR 0.000250 Time 0.387694
+2025-05-20 17:51:09,167 - --- validate (epoch=136)-----------
+2025-05-20 17:51:09,167 - 2000 samples (256 per mini-batch)
+2025-05-20 17:51:13,045 - Epoch: [136][ 8/ 8] Loss 0.172207 Top1 93.200000
+2025-05-20 17:51:13,079 - ==> Top1: 93.200 Loss: 0.172
+
+2025-05-20 17:51:13,079 - ==> Confusion:
 [[922 63]
- [ 75 940]]
+ [ 73 942]]
-2023-09-11 14:31:55,449 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:31:55,449 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:31:55,452 - 
-
-2023-09-11 14:31:55,452 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:32:00,008 - Epoch: [138][ 10/ 71] Overall Loss 0.162620 Objective Loss 0.162620 LR 0.000250 Time 0.455615
-2023-09-11 14:32:02,014 - Epoch: [138][ 20/ 71] Overall Loss 0.160275 Objective Loss 0.160275 LR 0.000250 Time 0.328078
-2023-09-11 14:32:04,714 - Epoch: [138][ 30/ 71] Overall Loss 0.156930 Objective Loss 0.156930 LR 0.000250 Time 0.308695
-2023-09-11 14:32:07,266 - Epoch: [138][ 40/ 71] Overall Loss 0.155821 Objective Loss 0.155821 LR 0.000250 Time 0.295312
-2023-09-11 14:32:10,463 - Epoch: [138][ 50/ 71] Overall Loss 0.154124 Objective Loss 0.154124 LR 0.000250 Time 0.300185
-2023-09-11 14:32:12,760 - Epoch: [138][ 60/ 71] Overall Loss 0.151248 Objective Loss 0.151248 LR 0.000250 Time 0.288442
-2023-09-11 14:32:15,119 - Epoch: [138][ 70/ 71] Overall Loss 0.151575 Objective Loss 0.151575 Top1 96.484375 LR 0.000250 Time 0.280923
-2023-09-11 14:32:15,196 - Epoch: [138][ 71/ 71] Overall Loss 0.151564 Objective Loss 0.151564 Top1 95.535714 LR 0.000250 Time 0.278055
-2023-09-11 14:32:15,296 - --- validate (epoch=138)-----------
-2023-09-11 14:32:15,297 - 2000 samples (256 per mini-batch)
-2023-09-11 14:32:18,342 - Epoch: [138][ 8/ 8] Loss 0.198425 Top1 91.400000
-2023-09-11 14:32:18,438 - ==> Top1: 91.400 Loss: 0.198
-
-2023-09-11 14:32:18,438 - ==> Confusion:
-[[928 57]
- [115 900]]
-
-2023-09-11 14:32:18,454 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:32:18,454 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:32:18,456 - 
-
-2023-09-11 14:32:18,456 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:32:22,814 - Epoch: [139][ 10/ 71] Overall Loss 0.143969 Objective Loss 0.143969 LR 0.000250 Time 0.435681
-2023-09-11 14:32:24,858 - Epoch: [139][ 20/ 71] Overall Loss 0.138549 Objective Loss 0.138549 LR 0.000250 Time 0.320047
-2023-09-11 14:32:27,663 - Epoch: [139][ 30/ 71] Overall Loss 0.143566 Objective Loss 0.143566 LR 0.000250 Time 0.306859
-2023-09-11 14:32:30,683 - Epoch: [139][ 40/ 71] Overall Loss 0.148024 Objective Loss 0.148024 LR 0.000250 Time 0.305632
-2023-09-11 14:32:32,787 - Epoch: [139][ 50/ 71] Overall Loss 0.148305 Objective Loss 0.148305 LR 0.000250 Time 0.286574
-2023-09-11 14:32:35,424 - Epoch: [139][ 60/ 71] Overall Loss 0.147937 Objective Loss 0.147937 LR 0.000250 Time 0.282750
-2023-09-11 14:32:37,371 - Epoch: [139][ 70/ 71] Overall Loss 0.146612 Objective Loss 0.146612 Top1 95.312500 LR 0.000250 Time 0.270175
-2023-09-11 14:32:37,443 - Epoch: [139][ 71/ 71] Overall Loss 0.145893 Objective Loss 0.145893 Top1 95.535714 LR 0.000250 Time 0.267386
-2023-09-11 14:32:37,524 - --- validate (epoch=139)-----------
-2023-09-11 14:32:37,524 - 2000 samples (256 per mini-batch)
-2023-09-11 14:32:39,944 - Epoch: [139][ 8/ 8] Loss 0.172109 Top1 92.900000
-2023-09-11 14:32:40,056 - ==> Top1: 92.900 Loss: 0.172
-
-2023-09-11 14:32:40,057 - ==> Confusion:
-[[910 75]
- [ 67 948]]
-2023-09-11 14:32:40,072 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:32:40,072 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:32:40,074 - 
-
-2023-09-11 14:32:40,074 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:32:44,531 - Epoch: [140][ 10/ 71] Overall Loss 0.137844 Objective Loss 0.137844 LR 0.000250 Time 0.445661
-2023-09-11 14:32:46,908 - Epoch: [140][ 20/ 71] Overall Loss 0.138486 Objective Loss 0.138486 LR 0.000250 Time 0.341625
-2023-09-11 14:32:49,805 - Epoch: [140][ 30/ 71] Overall Loss 0.140440 Objective Loss 0.140440 LR 0.000250 Time 0.324324
-2023-09-11 14:32:51,799 - Epoch: [140][ 40/ 71] Overall Loss 0.143244 Objective Loss 0.143244 LR 0.000250 Time 0.293076
-2023-09-11 14:32:54,679 - Epoch: [140][ 50/ 71] Overall Loss 0.145232 Objective Loss 0.145232 LR 0.000250 Time 0.292055
-2023-09-11 14:32:57,264 - Epoch: [140][ 60/ 71] Overall Loss 0.148538 Objective Loss 0.148538 LR 0.000250 Time 0.286452
-2023-09-11 14:32:59,797 - Epoch: [140][ 70/ 71] Overall Loss 0.147153 Objective Loss 0.147153 Top1 94.140625 LR 0.000250 Time 0.281722
-2023-09-11 14:32:59,915 - Epoch: [140][ 71/ 71] Overall Loss 0.146148 Objective Loss 0.146148 Top1 94.940476 LR 0.000250 Time 0.279412
-2023-09-11 14:33:00,008 - --- validate (epoch=140)-----------
-2023-09-11 14:33:00,008 - 2000 samples (256 per mini-batch)
-2023-09-11 14:33:02,845 - Epoch: [140][ 8/ 8] Loss 0.212978 Top1 91.200000
-2023-09-11 14:33:02,931 - ==> Top1: 91.200 Loss: 0.213
-
-2023-09-11 14:33:02,932 - ==> Confusion:
-[[946 39]
- [137 878]]
-
-2023-09-11 14:33:02,932 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:33:02,932 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:33:02,935 - 
-
-2023-09-11 14:33:02,935 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:33:07,354 - Epoch: [141][ 10/ 71] Overall Loss 0.146552 Objective Loss 0.146552 LR 0.000250 Time 0.441844
-2023-09-11 14:33:09,210 - Epoch: [141][ 20/ 71] Overall Loss 0.153975 Objective Loss 0.153975 LR 0.000250 Time 0.313699
-2023-09-11 14:33:11,908 - Epoch: [141][ 30/ 71] Overall Loss 0.151210 Objective Loss 0.151210 LR 0.000250 Time 0.299062
-2023-09-11 14:33:14,042 - Epoch: [141][ 40/ 71] Overall Loss 0.148038 Objective Loss 0.148038 LR 0.000250 Time 0.277629
-2023-09-11 14:33:16,690 - Epoch: [141][ 50/ 71] Overall Loss 0.144898 Objective Loss 0.144898 LR 0.000250 Time 0.275058
-2023-09-11 14:33:20,918 - Epoch: [141][ 60/ 71] Overall Loss 0.145915 Objective Loss 0.145915 LR 0.000250 Time 0.299683
-2023-09-11 14:33:22,934 - Epoch: [141][ 70/ 71] Overall Loss 0.145070 Objective Loss 0.145070 Top1 92.187500 LR 0.000250 Time 0.285660
-2023-09-11 14:33:23,070 - Epoch: [141][ 71/ 71] Overall Loss 0.144928 Objective Loss 0.144928 Top1 92.857143 LR 0.000250 Time 0.283549
-2023-09-11 14:33:23,164 - --- validate (epoch=141)-----------
-2023-09-11 14:33:23,165 - 2000 samples (256 per mini-batch)
-2023-09-11 14:33:25,959 - Epoch: [141][ 8/ 8] Loss 0.178595 Top1 92.150000
-2023-09-11 14:33:26,059 - ==> Top1: 92.150 Loss: 0.179
-
-2023-09-11 14:33:26,060 - ==> Confusion:
-[[938 47]
- [110 905]]
-
-2023-09-11 14:33:26,076 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:33:26,076 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:33:26,081 - 
-
-2023-09-11 14:33:26,081 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:33:30,694 - Epoch: [142][ 10/ 71] Overall Loss 0.142283 Objective Loss 0.142283 LR 0.000250 Time 0.461272
-2023-09-11 14:33:33,190 - Epoch: [142][ 20/ 71] Overall Loss 0.144333 Objective Loss 0.144333 LR 0.000250 Time 0.355395
-2023-09-11 14:33:35,658 - Epoch: [142][ 30/ 71] Overall Loss 0.141440 Objective Loss 0.141440 LR 0.000250 Time 0.319183
-2023-09-11 14:33:38,139 - Epoch: [142][ 40/ 71] Overall Loss 0.142345 Objective Loss 0.142345 LR 0.000250 Time 0.301405
-2023-09-11 14:33:41,181 - Epoch: [142][ 50/ 71] Overall Loss 0.145485 Objective Loss 0.145485 LR 0.000250 Time 0.301959
-2023-09-11 14:33:44,043 - Epoch: [142][ 60/ 71] Overall Loss 0.146029 Objective Loss 0.146029 LR 0.000250 Time 0.299342
-2023-09-11 14:33:46,149 - Epoch: [142][ 70/ 71] Overall Loss 0.145071 Objective Loss 0.145071 Top1 94.921875 LR 0.000250 Time 0.286657
-2023-09-11 14:33:46,229 - Epoch: [142][ 71/ 71] Overall Loss 0.145138 Objective Loss 0.145138 Top1 94.642857 LR 0.000250 Time 0.283734
-2023-09-11 14:33:46,334 - --- validate (epoch=142)-----------
-2023-09-11 14:33:46,334 - 2000 samples (256 per mini-batch)
-2023-09-11 14:33:49,469 - Epoch: [142][ 8/ 8] Loss 0.189400 Top1 92.150000
-2023-09-11 14:33:49,559 - ==> Top1: 92.150 Loss: 0.189
-
-2023-09-11 14:33:49,560 - ==> Confusion:
-[[934 51]
- [106 909]]
-
-2023-09-11 14:33:49,575 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:33:49,575 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:33:49,578 - 
-
-2023-09-11 14:33:49,578 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:33:54,477 - Epoch: [143][ 10/ 71] Overall Loss 0.147861 Objective Loss 0.147861 LR 0.000250 Time 0.489829
-2023-09-11 14:33:56,524 - Epoch: [143][ 20/ 71] Overall Loss 0.146568 Objective Loss 0.146568 LR 0.000250 Time 0.347284
-2023-09-11 14:33:59,150 - Epoch: [143][ 30/ 71] Overall Loss 0.146154 Objective Loss 0.146154 LR 0.000250 Time 0.319049
-2023-09-11 14:34:01,201 - Epoch: [143][ 40/ 71] Overall Loss 0.142727 Objective Loss 0.142727 LR 0.000250 Time 0.290547
-2023-09-11 14:34:03,930 - Epoch: [143][ 50/ 71] Overall Loss 0.142083 Objective Loss 0.142083 LR 0.000250 Time 0.287009
-2023-09-11 14:34:07,278 - Epoch: [143][ 60/ 71] Overall Loss 0.142703 Objective Loss 0.142703 LR 0.000250 Time 0.294978
-2023-09-11 14:34:09,256 - Epoch: [143][ 70/ 71] Overall Loss 0.143171 Objective Loss 0.143171 Top1 96.093750 LR 0.000250 Time 0.281080
-2023-09-11 14:34:09,333 - Epoch: [143][ 71/ 71] Overall Loss 0.144350 Objective Loss 0.144350 Top1 94.940476 LR 0.000250 Time 0.278205
-2023-09-11 14:34:09,432 - --- validate (epoch=143)-----------
-2023-09-11 14:34:09,432 - 2000 samples (256 per mini-batch)
-2023-09-11 14:34:12,660 - Epoch: [143][ 8/ 8] Loss 0.179815 Top1 92.700000
-2023-09-11 14:34:12,762 - ==> Top1: 92.700 Loss: 0.180
-
-2023-09-11 14:34:12,762 - ==> Confusion:
-[[930 55]
- [ 91 924]]
+2025-05-20 17:51:45,074 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 128]
+2025-05-20 17:51:45,074 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:51:45,081 - 
+
+2025-05-20 17:51:45,081 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:51:49,848 - Epoch: [138][ 10/ 71] Overall Loss 0.137613 Objective Loss 0.137613 LR 0.000250 Time 0.476601
+2025-05-20 17:51:53,951 - Epoch: [138][ 20/ 71] Overall Loss 0.141682 Objective Loss 0.141682 LR 0.000250 Time 0.443444
+2025-05-20 17:51:58,809 - Epoch: [138][ 30/ 71] Overall Loss 0.137253 Objective Loss 0.137253 LR 0.000250 Time 0.457529
+2025-05-20 17:52:01,904 - Epoch: [138][ 40/ 71] Overall Loss 0.136379 Objective Loss 0.136379 LR 0.000250 Time 0.420523
+2025-05-20 17:52:05,717 - Epoch: [138][ 50/ 71] Overall Loss 0.139042 Objective Loss 0.139042 LR 0.000250 Time 0.412679
+2025-05-20 17:52:08,763 - Epoch: [138][ 60/ 71] Overall Loss 0.138644 Objective Loss 0.138644 LR 0.000250 Time 0.394661
+2025-05-20 17:52:12,232 - Epoch: [138][ 70/ 71] Overall Loss 0.138922 Objective Loss 0.138922 Top1 94.531250 LR 0.000250 Time 0.387833
+2025-05-20 17:52:12,340 - Epoch: [138][ 71/ 71] Overall Loss 0.139286 Objective Loss 0.139286 Top1 93.154762 LR 0.000250 Time 0.383890
+2025-05-20 17:52:12,370 - --- validate (epoch=138)-----------
+2025-05-20 17:52:12,370 - 2000 samples (256 per mini-batch)
+2025-05-20 17:52:16,297 - Epoch: [138][ 8/ 8] Loss 0.181796 Top1 93.450000
+2025-05-20 17:52:16,333 - ==> Top1: 93.450 Loss: 0.182
+
+2025-05-20 17:52:16,334 - ==> Confusion:
+[[920 65]
+ [ 66 949]]
-2023-09-11 14:34:12,778 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:34:12,778 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:34:12,780 - 
-
-2023-09-11 14:34:12,781 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:34:17,143 - Epoch: [144][ 10/ 71] Overall Loss 0.136959 Objective Loss 0.136959 LR 0.000250 Time 0.436228
-2023-09-11 14:34:19,136 - Epoch: [144][ 20/ 71] Overall Loss 0.141286 Objective Loss 0.141286 LR 0.000250 Time 0.317729
-2023-09-11 14:34:21,716 - Epoch: [144][ 30/ 71] Overall Loss 0.135099 Objective Loss 0.135099 LR 0.000250 Time 0.297587
-2023-09-11 14:34:23,711 - Epoch: [144][ 40/ 71] Overall Loss 0.142435 Objective Loss 0.142435 LR 0.000250 Time 0.273054
-2023-09-11 14:34:26,302 - Epoch: [144][ 50/ 71] Overall Loss 0.143734 Objective Loss 0.143734 LR 0.000250 Time 0.270259
-2023-09-11 14:34:28,562 - Epoch: [144][ 60/ 71] Overall Loss 0.141154 Objective Loss 0.141154 LR 0.000250 Time 0.262872
-2023-09-11 14:34:31,279 - Epoch: [144][ 70/ 71] Overall Loss 0.141921 Objective Loss 0.141921 Top1 93.359375 LR 0.000250 Time 0.264138
-2023-09-11 14:34:31,399 - Epoch: [144][ 71/ 71] Overall Loss 0.141126 Objective Loss 0.141126 Top1 94.345238 LR 0.000250 Time 0.262103
-2023-09-11 14:34:31,497 - --- validate (epoch=144)-----------
-2023-09-11 14:34:31,497 - 2000 samples (256 per mini-batch)
-2023-09-11 14:34:34,799 - Epoch: [144][ 8/ 8] Loss 0.186136 Top1 92.500000
-2023-09-11 14:34:34,897 - ==> Top1: 92.500 Loss: 0.186
-
-2023-09-11 14:34:34,897 - ==> Confusion:
+2025-05-20 17:52:16,349 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 138]
+2025-05-20 17:52:16,349 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:52:16,357 - 
+
+2025-05-20 17:52:16,357 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:52:21,062 - Epoch: [139][ 10/ 71] Overall Loss 0.131214 Objective Loss 0.131214 LR 0.000250 Time 0.470451
+2025-05-20 17:52:24,870 - Epoch: [139][ 20/ 71] Overall Loss 0.143516 Objective Loss 0.143516 LR 0.000250 Time 0.425588
+2025-05-20 17:52:28,384 - Epoch: [139][ 30/ 71] Overall Loss 0.140689 Objective Loss 0.140689 LR 0.000250 Time 0.400850
+2025-05-20 17:52:32,407 - Epoch: [139][ 40/ 71] Overall Loss 0.141392 Objective Loss 0.141392 LR 0.000250 Time 0.401206
+2025-05-20 17:52:36,859 - Epoch: [139][ 50/ 71] Overall Loss 0.144676 Objective Loss 0.144676 LR 0.000250 Time 0.409994
+2025-05-20 17:52:41,165 - Epoch: [139][ 60/ 71] Overall Loss 0.144647 Objective Loss 0.144647 LR 0.000250 Time 0.413428
+2025-05-20 17:52:44,569 - Epoch: [139][ 70/ 71] Overall Loss 0.143637 Objective Loss 0.143637 Top1 92.968750 LR 0.000250 Time 0.403001
+2025-05-20 17:52:44,668 - Epoch: [139][ 71/ 71] Overall Loss 0.143242 Objective Loss 0.143242 Top1 93.154762 LR 0.000250 Time 0.398705
+2025-05-20 17:52:44,699 - --- validate (epoch=139)-----------
+2025-05-20 17:52:44,700 - 2000 samples (256 per mini-batch)
+2025-05-20 17:52:48,652 - Epoch: [139][ 8/ 8] Loss 0.198741 Top1 91.450000
+2025-05-20 17:52:48,687 - ==> Top1: 91.450 Loss: 0.199
+
+2025-05-20 17:52:48,687 - ==> Confusion:
 [[948 37]
- [113 902]]
+ [134 881]]
+
+2025-05-20 17:52:48,703 - ==> Best [Top1: 93.450 Params: 57776 on epoch: 138]
+2025-05-20 17:52:48,703 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:52:48,710 - 
+
+2025-05-20 17:52:48,710 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:52:54,107 - Epoch: [140][ 10/ 71] Overall Loss 0.158588 Objective Loss 0.158588 LR 0.000250 Time 0.539631
+2025-05-20 17:52:57,137 - Epoch: [140][ 20/ 71] Overall Loss 0.154967 Objective Loss 0.154967 LR 0.000250 Time 0.421268
+2025-05-20 17:53:01,581 - Epoch: [140][ 30/ 71] Overall Loss 0.145178 Objective Loss 0.145178 LR 0.000250 Time 0.428967
+2025-05-20 17:53:05,299 - Epoch: [140][ 40/ 71] Overall Loss 0.142972 Objective Loss 0.142972 LR 0.000250 Time 0.414674
+2025-05-20 17:53:08,959 - Epoch: [140][ 50/ 71] Overall Loss 0.141546 Objective Loss 0.141546 LR 0.000250 Time 0.404938
+2025-05-20 17:53:12,135 - Epoch: [140][ 60/ 71] Overall Loss 0.140332 Objective Loss 0.140332 LR 0.000250 Time 0.390379
+2025-05-20 17:53:15,858 - Epoch: [140][ 70/ 71] Overall Loss 0.138154 Objective Loss 0.138154 Top1 96.484375 LR 0.000250 Time 0.387785
+2025-05-20 17:53:15,957 - Epoch: [140][ 71/ 71] Overall Loss 0.137814 Objective Loss 0.137814 Top1 96.130952 LR 0.000250 Time 0.383713
+2025-05-20 17:53:15,989 - --- validate (epoch=140)-----------
+2025-05-20 17:53:15,989 - 2000 samples (256 per mini-batch)
+2025-05-20 17:53:19,437 - Epoch: [140][ 8/ 8] Loss 0.169402 Top1 94.150000
+2025-05-20 17:53:19,469 - ==> Top1: 94.150 Loss: 0.169
+
+2025-05-20 17:53:19,469 - ==> Confusion:
+[[920 65]
+ [ 52 963]]
-2023-09-11 14:34:34,913 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:34:34,913 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:34:34,918 - 
-
-2023-09-11 14:34:34,918 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:34:38,668 - Epoch: [145][ 10/ 71] Overall Loss 0.139226 Objective Loss 0.139226 LR 0.000250 Time 0.375008
-2023-09-11 14:34:41,102 - Epoch: [145][ 20/ 71] Overall Loss 0.143591 Objective Loss 0.143591 LR 0.000250 Time 0.309175
-2023-09-11 14:34:43,970 - Epoch: [145][ 30/ 71] Overall Loss 0.140378 Objective Loss 0.140378 LR 0.000250 Time 0.301715
-2023-09-11 14:34:47,510 - Epoch: [145][ 40/ 71] Overall Loss 0.140806 Objective Loss 0.140806 LR 0.000250 Time 0.314775
-2023-09-11 14:34:49,569 - Epoch: [145][ 50/ 71] Overall Loss 0.141182 Objective Loss 0.141182 LR 0.000250 Time 0.292982
-2023-09-11 14:34:53,098 - Epoch: [145][ 60/ 71] Overall Loss 0.141860 Objective Loss 0.141860 LR 0.000250 Time 0.302971
-2023-09-11 14:34:54,916 - Epoch: [145][ 70/ 71] Overall Loss 0.142169 Objective Loss 0.142169 Top1 93.750000 LR 0.000250 Time 0.285657
-2023-09-11 14:34:55,011 - Epoch: [145][ 71/ 71] Overall Loss 0.144079 Objective Loss 0.144079 Top1 91.964286 LR 0.000250 Time 0.282971
-2023-09-11 14:34:55,098 - --- validate (epoch=145)-----------
-2023-09-11 14:34:55,099 - 2000 samples (256 per mini-batch)
-2023-09-11 14:34:58,148 - Epoch: [145][ 8/ 8] Loss 0.172893 Top1 92.750000
-2023-09-11 14:34:58,243 - ==> Top1: 92.750 Loss: 0.173
-
-2023-09-11 14:34:58,243 - ==> Confusion:
-[[896 89]
- [ 56 959]]
+2025-05-20 17:53:19,485 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 17:53:19,485 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:53:19,493 - 
+
+2025-05-20 17:53:19,493 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:53:24,068 - Epoch: [141][ 10/ 71] Overall Loss 0.134658 Objective Loss 0.134658 LR 0.000250 Time 0.457490
+2025-05-20 17:53:28,232 - Epoch: [141][ 20/ 71] Overall Loss 0.129290 Objective Loss 0.129290 LR 0.000250 Time 0.436904
+2025-05-20 17:53:31,902 - Epoch: [141][ 30/ 71] Overall Loss 0.133399 Objective Loss 0.133399 LR 0.000250 Time 0.413587
+2025-05-20 17:53:36,572 - Epoch: [141][ 40/ 71] Overall Loss 0.130865 Objective Loss 0.130865 LR 0.000250 Time 0.426934
+2025-05-20 17:53:40,119 - Epoch: [141][ 50/ 71] Overall Loss 0.131891 Objective Loss 0.131891 LR 0.000250 Time 0.412489
+2025-05-20 17:53:43,489 - Epoch: [141][ 60/ 71] Overall Loss 0.134488 Objective Loss 0.134488 LR 0.000250 Time 0.399894
+2025-05-20 17:53:46,960 - Epoch: [141][ 70/ 71] Overall Loss 0.136548 Objective Loss 0.136548 Top1 93.750000 LR 0.000250 Time 0.392349
+2025-05-20 17:53:47,060 - Epoch: [141][ 71/ 71] Overall Loss 0.137421 Objective Loss 0.137421 Top1 92.559524 LR 0.000250 Time 0.388224
+2025-05-20 17:53:47,094 - --- validate (epoch=141)-----------
+2025-05-20 17:53:47,094 - 2000 samples (256 per mini-batch)
+2025-05-20 17:53:50,842 - Epoch: [141][ 8/ 8] Loss 0.200519 Top1 92.700000
+2025-05-20 17:53:50,878 - ==> Top1: 92.700 Loss: 0.201
+
+2025-05-20 17:53:50,878 - ==> Confusion:
+[[875 110]
+ [ 36 979]]
+
+2025-05-20 17:53:50,895 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 17:53:50,895 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:53:50,902 - 
+
+2025-05-20 17:53:50,902 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:53:55,937 - Epoch: [142][ 10/ 71] Overall Loss 0.129949 Objective Loss 0.129949 LR 0.000250 Time 0.503428
+2025-05-20 17:53:59,114 - Epoch: [142][ 20/ 71] Overall Loss 0.131609 Objective Loss 0.131609 LR 0.000250 Time 0.410543
+2025-05-20 17:54:03,074 - Epoch: [142][ 30/ 71] Overall Loss 0.132511 Objective Loss 0.132511 LR 0.000250 Time 0.405702
+2025-05-20 17:54:06,770 - Epoch: [142][ 40/ 71] Overall Loss 0.136254 Objective Loss 0.136254 LR 0.000250 Time 0.396651
+2025-05-20 17:54:10,791 - Epoch: [142][ 50/ 71] Overall Loss 0.140883 Objective Loss 0.140883 LR 0.000250 Time 0.397745
+2025-05-20 17:54:13,989 - Epoch: [142][ 60/ 71] Overall Loss 0.141582 Objective Loss 0.141582 LR 0.000250 Time 0.384745
+2025-05-20 17:54:17,607 - Epoch: [142][ 70/ 71] Overall Loss 0.142520 Objective Loss 0.142520 Top1 94.140625 LR 0.000250 Time 0.381460
+2025-05-20 17:54:17,700 - Epoch: [142][ 71/ 71] Overall Loss 0.141066 Objective Loss 0.141066 Top1 95.238095 LR 0.000250 Time 0.377401
+2025-05-20 17:54:17,732 - --- validate (epoch=142)-----------
+2025-05-20 17:54:17,733 - 2000 samples (256 per mini-batch)
+2025-05-20 17:54:21,590 - Epoch: [142][ 8/ 8] Loss 0.209275 Top1 91.300000
+2025-05-20 17:54:21,624 - ==> Top1: 91.300 Loss: 0.209
+
+2025-05-20 17:54:21,625 - ==> Confusion:
+[[860 125]
+ [ 49 966]]
-2023-09-11 14:34:58,259 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:34:58,259 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:34:58,262 - 
-
-2023-09-11 14:34:58,262 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:35:01,651 - Epoch: [146][ 10/ 71] Overall Loss 0.138115 Objective Loss 0.138115 LR 0.000250 Time 0.338853
-2023-09-11 14:35:04,728 - Epoch: [146][ 20/ 71] Overall Loss 0.144828 Objective Loss 0.144828 LR 0.000250 Time 0.323263
-2023-09-11 14:35:06,782 - Epoch: [146][ 30/ 71] Overall Loss 0.143555 Objective Loss 0.143555 LR 0.000250 Time 0.283981
-2023-09-11 14:35:09,726 - Epoch: [146][ 40/ 71] Overall Loss 0.145013 Objective Loss 0.145013 LR 0.000250 Time 0.286577
-2023-09-11 14:35:12,435 - Epoch: [146][ 50/ 71] Overall Loss 0.144679 Objective Loss 0.144679 LR 0.000250 Time 0.283426
-2023-09-11 14:35:14,546 - Epoch: [146][ 60/ 71] Overall Loss 0.144077 Objective Loss 0.144077 LR 0.000250 Time 0.271362
-2023-09-11 14:35:17,093 - Epoch: [146][ 70/ 71] Overall Loss 0.144371 Objective Loss 0.144371 Top1 92.968750 LR 0.000250 Time 0.268980
-2023-09-11 14:35:17,141 - Epoch: [146][ 71/ 71] Overall Loss 0.144123 Objective Loss 0.144123 Top1 93.750000 LR 0.000250 Time 0.265867
-2023-09-11 14:35:17,237 - --- validate (epoch=146)-----------
-2023-09-11 14:35:17,237 - 2000 samples (256 per mini-batch)
-2023-09-11 14:35:20,061 - Epoch: [146][ 8/ 8] Loss 0.182260 Top1 92.000000
-2023-09-11 14:35:20,175 - ==> Top1: 92.000 Loss: 0.182
-
-2023-09-11 14:35:20,175 - ==> Confusion:
-[[908 77]
- [ 83 932]]
+2025-05-20 17:54:21,638 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 17:54:21,639 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:54:21,646 - 
+
+2025-05-20 17:54:21,646 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:54:27,134 - Epoch: [143][ 10/ 71] Overall Loss 0.154877 Objective Loss 0.154877 LR 0.000250 Time 0.548799
+2025-05-20 17:54:30,717 - Epoch: [143][ 20/ 71] Overall Loss 0.138430 Objective Loss 0.138430 LR 0.000250 Time 0.453502
+2025-05-20 17:54:34,689 - Epoch: [143][ 30/ 71] Overall Loss 0.128948 Objective Loss 0.128948 LR 0.000250 Time 0.434736
+2025-05-20 17:54:38,475 - Epoch: [143][ 40/ 71] Overall Loss 0.131228 Objective Loss 0.131228 LR 0.000250 Time 0.420689
+2025-05-20 17:54:42,154 - Epoch: [143][ 50/ 71] Overall Loss 0.135011 Objective Loss 0.135011 LR 0.000250 Time 0.410132
+2025-05-20 17:54:45,826 - Epoch: [143][ 60/ 71] Overall Loss 0.137232 Objective Loss 0.137232 LR 0.000250 Time 0.402963
+2025-05-20 17:54:48,965 - Epoch: [143][ 70/ 71] Overall Loss 0.140876 Objective Loss 0.140876 Top1 94.531250 LR 0.000250 Time 0.390241
+2025-05-20 17:54:49,074 - Epoch: [143][ 71/ 71] Overall Loss 0.141686 Objective Loss 0.141686 Top1 93.750000 LR 0.000250 Time 0.386271
+2025-05-20 17:54:49,104 - --- validate (epoch=143)-----------
+2025-05-20 17:54:49,104 - 2000 samples (256 per mini-batch)
+2025-05-20 17:54:52,950 - Epoch: [143][ 8/ 8] Loss 0.167918 Top1 92.500000
+2025-05-20 17:54:52,986 - ==> Top1: 92.500 Loss: 0.168
+
+2025-05-20 17:54:52,986 - ==> Confusion:
+[[884 101]
+ [ 49 966]]
-2023-09-11 14:35:20,191 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105]
-2023-09-11 14:35:20,191 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:35:20,193 - 
-
-2023-09-11 14:35:20,194 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:35:24,726 - Epoch: [147][ 10/ 71] Overall Loss 0.152309 Objective Loss 0.152309 LR 0.000250 Time 0.453163
-2023-09-11 14:35:26,871 - Epoch: [147][ 20/ 71] Overall Loss 0.150362 Objective Loss 0.150362 LR 0.000250 Time 0.333837
-2023-09-11 14:35:29,891 - Epoch: [147][ 30/ 71] Overall Loss 0.143293 Objective Loss 0.143293 LR 0.000250 Time 0.323206
-2023-09-11 14:35:32,022 - Epoch: [147][ 40/ 71] Overall Loss 0.139178 Objective Loss 0.139178 LR 0.000250 Time 0.295670
-2023-09-11 14:35:34,909 - Epoch: [147][ 50/ 71] Overall Loss 0.137350 Objective Loss 0.137350 LR 0.000250 Time 0.294273
-2023-09-11 14:35:37,061 - Epoch: [147][ 60/ 71] Overall Loss 0.138778 Objective Loss 0.138778 LR 0.000250 Time 0.281092
-2023-09-11 14:35:39,628 - Epoch: [147][ 70/ 71] Overall Loss 0.140749 Objective Loss 0.140749 Top1 94.531250 LR 0.000250 Time 0.277607
-2023-09-11 14:35:39,675 - Epoch: [147][ 71/ 71] Overall Loss 0.141748 Objective Loss 0.141748 Top1 94.047619 LR 0.000250 Time 0.274354
-2023-09-11 14:35:39,774 - --- validate (epoch=147)-----------
-2023-09-11 14:35:39,774 - 2000 samples (256 per mini-batch)
-2023-09-11 14:35:42,982 - Epoch: [147][ 8/ 8] Loss 0.180102 Top1 92.450000
-2023-09-11 14:35:43,076 - ==> Top1: 92.450 Loss: 0.180
-
-2023-09-11 14:35:43,077 - ==> Confusion:
-[[907 78]
- [ 73 942]]
+2025-05-20 17:54:53,002 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 17:54:53,002 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 17:54:53,010 - 
+
+2025-05-20 17:54:53,010 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 17:54:58,347 - Epoch: [144][ 10/ 71] Overall Loss 0.132752 Objective Loss 0.132752 LR 0.000250 Time 0.533677
+2025-05-20 17:55:01,751 - Epoch: [144][ 20/ 71] Overall Loss 0.134171 Objective Loss 0.134171 LR 0.000250 Time 0.437027
+2025-05-20 17:55:05,755 - Epoch: [144][ 30/ 71] Overall Loss 0.131199 Objective Loss 0.131199 LR 0.000250 Time 0.424812
+2025-05-20 17:55:09,167 - Epoch: [144][ 40/ 71] Overall Loss 0.132946 Objective Loss 0.132946 LR 0.000250 Time 0.403889
+2025-05-20 17:55:14,014 - Epoch: [144][ 50/ 71] Overall Loss 0.130603 Objective Loss 0.130603 LR 0.000250 Time 0.420041
+2025-05-20 17:55:17,231 - Epoch: [144][ 60/ 71] Overall Loss 0.129403 Objective Loss 0.129403 LR 0.000250 Time 0.403642
+2025-05-20 17:55:21,108 - Epoch: [144][ 70/ 71] Overall Loss 0.129748 Objective Loss 0.129748 Top1 93.359375 LR 0.000250 Time 0.401369
+2025-05-20 17:55:21,217 - Epoch: [144][ 71/ 71] Overall Loss 0.129852 Objective Loss 0.129852 Top1 93.750000 LR 0.000250 Time 0.397240 +2025-05-20 17:55:21,247 - --- validate (epoch=144)----------- +2025-05-20 17:55:21,247 - 2000 samples (256 per mini-batch) +2025-05-20 17:55:24,920 - Epoch: [144][ 8/ 8] Loss 0.179115 Top1 92.900000 +2025-05-20 17:55:24,964 - ==> Top1: 92.900 Loss: 0.179 + +2025-05-20 17:55:24,964 - ==> Confusion: +[[905 80] + [ 62 953]] + +2025-05-20 17:55:24,981 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 17:55:24,981 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:55:24,989 - + +2025-05-20 17:55:24,989 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:55:29,715 - Epoch: [145][ 10/ 71] Overall Loss 0.137716 Objective Loss 0.137716 LR 0.000250 Time 0.472529 +2025-05-20 17:55:34,266 - Epoch: [145][ 20/ 71] Overall Loss 0.144436 Objective Loss 0.144436 LR 0.000250 Time 0.463803 +2025-05-20 17:55:37,787 - Epoch: [145][ 30/ 71] Overall Loss 0.140962 Objective Loss 0.140962 LR 0.000250 Time 0.426572 +2025-05-20 17:55:42,207 - Epoch: [145][ 40/ 71] Overall Loss 0.141265 Objective Loss 0.141265 LR 0.000250 Time 0.430407 +2025-05-20 17:55:45,775 - Epoch: [145][ 50/ 71] Overall Loss 0.141014 Objective Loss 0.141014 LR 0.000250 Time 0.415679 +2025-05-20 17:55:49,964 - Epoch: [145][ 60/ 71] Overall Loss 0.142808 Objective Loss 0.142808 LR 0.000250 Time 0.416215 +2025-05-20 17:55:52,947 - Epoch: [145][ 70/ 71] Overall Loss 0.139721 Objective Loss 0.139721 Top1 96.875000 LR 0.000250 Time 0.399361 +2025-05-20 17:55:53,055 - Epoch: [145][ 71/ 71] Overall Loss 0.140183 Objective Loss 0.140183 Top1 95.535714 LR 0.000250 Time 0.395259 +2025-05-20 17:55:53,090 - --- validate (epoch=145)----------- +2025-05-20 17:55:53,090 - 2000 samples (256 per mini-batch) +2025-05-20 17:55:56,906 - Epoch: [145][ 8/ 8] Loss 0.172197 Top1 93.150000 +2025-05-20 
17:55:56,939 - ==> Top1: 93.150 Loss: 0.172 + +2025-05-20 17:55:56,939 - ==> Confusion: +[[917 68] + [ 69 946]] -2023-09-11 14:35:43,093 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:35:43,093 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:35:43,095 - - -2023-09-11 14:35:43,096 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:35:47,241 - Epoch: [148][ 10/ 71] Overall Loss 0.135503 Objective Loss 0.135503 LR 0.000250 Time 0.414532 -2023-09-11 14:35:49,275 - Epoch: [148][ 20/ 71] Overall Loss 0.142288 Objective Loss 0.142288 LR 0.000250 Time 0.308926 -2023-09-11 14:35:51,815 - Epoch: [148][ 30/ 71] Overall Loss 0.145302 Objective Loss 0.145302 LR 0.000250 Time 0.290606 -2023-09-11 14:35:54,254 - Epoch: [148][ 40/ 71] Overall Loss 0.148457 Objective Loss 0.148457 LR 0.000250 Time 0.278931 -2023-09-11 14:35:57,754 - Epoch: [148][ 50/ 71] Overall Loss 0.147964 Objective Loss 0.147964 LR 0.000250 Time 0.293128 -2023-09-11 14:35:59,859 - Epoch: [148][ 60/ 71] Overall Loss 0.146237 Objective Loss 0.146237 LR 0.000250 Time 0.279351 -2023-09-11 14:36:02,135 - Epoch: [148][ 70/ 71] Overall Loss 0.144249 Objective Loss 0.144249 Top1 94.531250 LR 0.000250 Time 0.271963 -2023-09-11 14:36:02,214 - Epoch: [148][ 71/ 71] Overall Loss 0.144060 Objective Loss 0.144060 Top1 94.642857 LR 0.000250 Time 0.269236 -2023-09-11 14:36:02,317 - --- validate (epoch=148)----------- -2023-09-11 14:36:02,317 - 2000 samples (256 per mini-batch) -2023-09-11 14:36:05,429 - Epoch: [148][ 8/ 8] Loss 0.174375 Top1 92.700000 -2023-09-11 14:36:05,511 - ==> Top1: 92.700 Loss: 0.174 - -2023-09-11 14:36:05,512 - ==> Confusion: -[[933 52] - [ 94 921]] +2025-05-20 17:55:56,948 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 17:55:56,949 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:55:56,956 - + +2025-05-20 17:55:56,956 - Training epoch: 
18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:56:02,008 - Epoch: [146][ 10/ 71] Overall Loss 0.129140 Objective Loss 0.129140 LR 0.000250 Time 0.505162 +2025-05-20 17:56:05,000 - Epoch: [146][ 20/ 71] Overall Loss 0.134352 Objective Loss 0.134352 LR 0.000250 Time 0.402160 +2025-05-20 17:56:09,621 - Epoch: [146][ 30/ 71] Overall Loss 0.134734 Objective Loss 0.134734 LR 0.000250 Time 0.422120 +2025-05-20 17:56:12,626 - Epoch: [146][ 40/ 71] Overall Loss 0.137017 Objective Loss 0.137017 LR 0.000250 Time 0.391721 +2025-05-20 17:56:16,325 - Epoch: [146][ 50/ 71] Overall Loss 0.136125 Objective Loss 0.136125 LR 0.000250 Time 0.387355 +2025-05-20 17:56:20,718 - Epoch: [146][ 60/ 71] Overall Loss 0.134321 Objective Loss 0.134321 LR 0.000250 Time 0.395993 +2025-05-20 17:56:24,014 - Epoch: [146][ 70/ 71] Overall Loss 0.134213 Objective Loss 0.134213 Top1 94.531250 LR 0.000250 Time 0.386512 +2025-05-20 17:56:24,109 - Epoch: [146][ 71/ 71] Overall Loss 0.134766 Objective Loss 0.134766 Top1 94.940476 LR 0.000250 Time 0.382397 +2025-05-20 17:56:24,138 - --- validate (epoch=146)----------- +2025-05-20 17:56:24,138 - 2000 samples (256 per mini-batch) +2025-05-20 17:56:27,756 - Epoch: [146][ 8/ 8] Loss 0.176243 Top1 92.850000 +2025-05-20 17:56:27,794 - ==> Top1: 92.850 Loss: 0.176 + +2025-05-20 17:56:27,794 - ==> Confusion: +[[937 48] + [ 95 920]] -2023-09-11 14:36:05,528 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:36:05,528 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:36:05,530 - - -2023-09-11 14:36:05,531 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:36:09,485 - Epoch: [149][ 10/ 71] Overall Loss 0.130062 Objective Loss 0.130062 LR 0.000250 Time 0.395369 -2023-09-11 14:36:12,746 - Epoch: [149][ 20/ 71] Overall Loss 0.135317 Objective Loss 0.135317 LR 0.000250 Time 0.360711 -2023-09-11 14:36:14,888 - Epoch: [149][ 30/ 71] Overall Loss 0.134581 Objective Loss 
0.134581 LR 0.000250 Time 0.311866 -2023-09-11 14:36:18,332 - Epoch: [149][ 40/ 71] Overall Loss 0.135434 Objective Loss 0.135434 LR 0.000250 Time 0.319995 -2023-09-11 14:36:21,587 - Epoch: [149][ 50/ 71] Overall Loss 0.133118 Objective Loss 0.133118 LR 0.000250 Time 0.321104 -2023-09-11 14:36:23,621 - Epoch: [149][ 60/ 71] Overall Loss 0.133327 Objective Loss 0.133327 LR 0.000250 Time 0.301473 -2023-09-11 14:36:26,165 - Epoch: [149][ 70/ 71] Overall Loss 0.134538 Objective Loss 0.134538 Top1 96.093750 LR 0.000250 Time 0.294750 -2023-09-11 14:36:26,230 - Epoch: [149][ 71/ 71] Overall Loss 0.134893 Objective Loss 0.134893 Top1 95.238095 LR 0.000250 Time 0.291507 -2023-09-11 14:36:26,319 - --- validate (epoch=149)----------- -2023-09-11 14:36:26,319 - 2000 samples (256 per mini-batch) -2023-09-11 14:36:29,465 - Epoch: [149][ 8/ 8] Loss 0.159635 Top1 93.100000 -2023-09-11 14:36:29,566 - ==> Top1: 93.100 Loss: 0.160 - -2023-09-11 14:36:29,566 - ==> Confusion: -[[925 60] - [ 78 937]] +2025-05-20 17:56:27,811 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 17:56:27,811 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:56:27,818 - + +2025-05-20 17:56:27,818 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:56:34,204 - Epoch: [147][ 10/ 71] Overall Loss 0.135942 Objective Loss 0.135942 LR 0.000250 Time 0.638478 +2025-05-20 17:56:37,399 - Epoch: [147][ 20/ 71] Overall Loss 0.130581 Objective Loss 0.130581 LR 0.000250 Time 0.479008 +2025-05-20 17:56:41,008 - Epoch: [147][ 30/ 71] Overall Loss 0.136982 Objective Loss 0.136982 LR 0.000250 Time 0.439606 +2025-05-20 17:56:44,816 - Epoch: [147][ 40/ 71] Overall Loss 0.135904 Objective Loss 0.135904 LR 0.000250 Time 0.424906 +2025-05-20 17:56:48,778 - Epoch: [147][ 50/ 71] Overall Loss 0.132055 Objective Loss 0.132055 LR 0.000250 Time 0.419166 +2025-05-20 17:56:52,108 - Epoch: [147][ 60/ 71] Overall Loss 0.133130 Objective 
Loss 0.133130 LR 0.000250 Time 0.404786 +2025-05-20 17:56:55,411 - Epoch: [147][ 70/ 71] Overall Loss 0.132986 Objective Loss 0.132986 Top1 95.312500 LR 0.000250 Time 0.394146 +2025-05-20 17:56:55,515 - Epoch: [147][ 71/ 71] Overall Loss 0.132582 Objective Loss 0.132582 Top1 95.238095 LR 0.000250 Time 0.390063 +2025-05-20 17:56:55,551 - --- validate (epoch=147)----------- +2025-05-20 17:56:55,551 - 2000 samples (256 per mini-batch) +2025-05-20 17:56:58,839 - Epoch: [147][ 8/ 8] Loss 0.191527 Top1 93.050000 +2025-05-20 17:56:58,879 - ==> Top1: 93.050 Loss: 0.192 + +2025-05-20 17:56:58,879 - ==> Confusion: +[[945 40] + [ 99 916]] -2023-09-11 14:36:29,582 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:36:29,582 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:36:29,584 - - -2023-09-11 14:36:29,584 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:36:33,979 - Epoch: [150][ 10/ 71] Overall Loss 0.158186 Objective Loss 0.158186 LR 0.000250 Time 0.439386 -2023-09-11 14:36:36,035 - Epoch: [150][ 20/ 71] Overall Loss 0.149389 Objective Loss 0.149389 LR 0.000250 Time 0.322478 -2023-09-11 14:36:39,870 - Epoch: [150][ 30/ 71] Overall Loss 0.148036 Objective Loss 0.148036 LR 0.000250 Time 0.342817 -2023-09-11 14:36:42,212 - Epoch: [150][ 40/ 71] Overall Loss 0.143113 Objective Loss 0.143113 LR 0.000250 Time 0.315662 -2023-09-11 14:36:44,956 - Epoch: [150][ 50/ 71] Overall Loss 0.144990 Objective Loss 0.144990 LR 0.000250 Time 0.307394 -2023-09-11 14:36:47,080 - Epoch: [150][ 60/ 71] Overall Loss 0.146210 Objective Loss 0.146210 LR 0.000250 Time 0.291561 -2023-09-11 14:36:49,414 - Epoch: [150][ 70/ 71] Overall Loss 0.144060 Objective Loss 0.144060 Top1 94.531250 LR 0.000250 Time 0.283244 -2023-09-11 14:36:49,492 - Epoch: [150][ 71/ 71] Overall Loss 0.144973 Objective Loss 0.144973 Top1 94.345238 LR 0.000250 Time 0.280348 -2023-09-11 14:36:49,587 - --- validate (epoch=150)----------- 
-2023-09-11 14:36:49,588 - 2000 samples (256 per mini-batch) -2023-09-11 14:36:52,549 - Epoch: [150][ 8/ 8] Loss 0.187073 Top1 92.650000 -2023-09-11 14:36:52,652 - ==> Top1: 92.650 Loss: 0.187 - -2023-09-11 14:36:52,652 - ==> Confusion: -[[930 55] - [ 92 923]] +2025-05-20 17:56:58,893 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 17:56:58,893 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:56:58,900 - + +2025-05-20 17:56:58,900 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:57:04,918 - Epoch: [148][ 10/ 71] Overall Loss 0.133789 Objective Loss 0.133789 LR 0.000250 Time 0.601730 +2025-05-20 17:57:08,302 - Epoch: [148][ 20/ 71] Overall Loss 0.140332 Objective Loss 0.140332 LR 0.000250 Time 0.470056 +2025-05-20 17:57:12,799 - Epoch: [148][ 30/ 71] Overall Loss 0.142240 Objective Loss 0.142240 LR 0.000250 Time 0.463256 +2025-05-20 17:57:15,936 - Epoch: [148][ 40/ 71] Overall Loss 0.141554 Objective Loss 0.141554 LR 0.000250 Time 0.425869 +2025-05-20 17:57:19,633 - Epoch: [148][ 50/ 71] Overall Loss 0.140317 Objective Loss 0.140317 LR 0.000250 Time 0.414616 +2025-05-20 17:57:22,963 - Epoch: [148][ 60/ 71] Overall Loss 0.138542 Objective Loss 0.138542 LR 0.000250 Time 0.401013 +2025-05-20 17:57:26,774 - Epoch: [148][ 70/ 71] Overall Loss 0.138535 Objective Loss 0.138535 Top1 95.312500 LR 0.000250 Time 0.398170 +2025-05-20 17:57:26,882 - Epoch: [148][ 71/ 71] Overall Loss 0.137939 Objective Loss 0.137939 Top1 95.535714 LR 0.000250 Time 0.394082 +2025-05-20 17:57:26,918 - --- validate (epoch=148)----------- +2025-05-20 17:57:26,918 - 2000 samples (256 per mini-batch) +2025-05-20 17:57:30,740 - Epoch: [148][ 8/ 8] Loss 0.177713 Top1 92.850000 +2025-05-20 17:57:30,775 - ==> Top1: 92.850 Loss: 0.178 + +2025-05-20 17:57:30,775 - ==> Confusion: +[[907 78] + [ 65 950]] -2023-09-11 14:36:52,668 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] 
-2023-09-11 14:36:52,668 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:36:52,671 - - -2023-09-11 14:36:52,671 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:36:56,786 - Epoch: [151][ 10/ 71] Overall Loss 0.135560 Objective Loss 0.135560 LR 0.000250 Time 0.411463 -2023-09-11 14:36:58,847 - Epoch: [151][ 20/ 71] Overall Loss 0.131192 Objective Loss 0.131192 LR 0.000250 Time 0.308799 -2023-09-11 14:37:01,599 - Epoch: [151][ 30/ 71] Overall Loss 0.130698 Objective Loss 0.130698 LR 0.000250 Time 0.297567 -2023-09-11 14:37:03,660 - Epoch: [151][ 40/ 71] Overall Loss 0.131475 Objective Loss 0.131475 LR 0.000250 Time 0.274703 -2023-09-11 14:37:07,067 - Epoch: [151][ 50/ 71] Overall Loss 0.134438 Objective Loss 0.134438 LR 0.000250 Time 0.287890 -2023-09-11 14:37:09,829 - Epoch: [151][ 60/ 71] Overall Loss 0.137150 Objective Loss 0.137150 LR 0.000250 Time 0.285940 -2023-09-11 14:37:12,296 - Epoch: [151][ 70/ 71] Overall Loss 0.140684 Objective Loss 0.140684 Top1 94.140625 LR 0.000250 Time 0.280335 -2023-09-11 14:37:12,369 - Epoch: [151][ 71/ 71] Overall Loss 0.140162 Objective Loss 0.140162 Top1 94.345238 LR 0.000250 Time 0.277402 -2023-09-11 14:37:12,451 - --- validate (epoch=151)----------- -2023-09-11 14:37:12,452 - 2000 samples (256 per mini-batch) -2023-09-11 14:37:14,812 - Epoch: [151][ 8/ 8] Loss 0.180883 Top1 92.400000 -2023-09-11 14:37:14,917 - ==> Top1: 92.400 Loss: 0.181 - -2023-09-11 14:37:14,917 - ==> Confusion: -[[893 92] - [ 60 955]] +2025-05-20 17:57:30,783 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 17:57:30,783 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:57:30,793 - + +2025-05-20 17:57:30,793 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:57:35,870 - Epoch: [149][ 10/ 71] Overall Loss 0.131129 Objective Loss 0.131129 LR 0.000250 Time 0.507598 +2025-05-20 17:57:39,048 - Epoch: 
[149][ 20/ 71] Overall Loss 0.133587 Objective Loss 0.133587 LR 0.000250 Time 0.412656 +2025-05-20 17:57:43,567 - Epoch: [149][ 30/ 71] Overall Loss 0.131407 Objective Loss 0.131407 LR 0.000250 Time 0.425757 +2025-05-20 17:57:47,037 - Epoch: [149][ 40/ 71] Overall Loss 0.132287 Objective Loss 0.132287 LR 0.000250 Time 0.406040 +2025-05-20 17:57:50,709 - Epoch: [149][ 50/ 71] Overall Loss 0.132498 Objective Loss 0.132498 LR 0.000250 Time 0.398276 +2025-05-20 17:57:53,802 - Epoch: [149][ 60/ 71] Overall Loss 0.133010 Objective Loss 0.133010 LR 0.000250 Time 0.383443 +2025-05-20 17:57:57,847 - Epoch: [149][ 70/ 71] Overall Loss 0.135747 Objective Loss 0.135747 Top1 94.531250 LR 0.000250 Time 0.386435 +2025-05-20 17:57:57,955 - Epoch: [149][ 71/ 71] Overall Loss 0.136638 Objective Loss 0.136638 Top1 94.047619 LR 0.000250 Time 0.382519 +2025-05-20 17:57:57,993 - --- validate (epoch=149)----------- +2025-05-20 17:57:57,993 - 2000 samples (256 per mini-batch) +2025-05-20 17:58:01,641 - Epoch: [149][ 8/ 8] Loss 0.171367 Top1 93.650000 +2025-05-20 17:58:01,676 - ==> Top1: 93.650 Loss: 0.171 + +2025-05-20 17:58:01,676 - ==> Confusion: +[[935 50] + [ 77 938]] -2023-09-11 14:37:14,933 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:37:14,933 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:37:14,936 - - -2023-09-11 14:37:14,936 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:37:18,203 - Epoch: [152][ 10/ 71] Overall Loss 0.144504 Objective Loss 0.144504 LR 0.000250 Time 0.326706 -2023-09-11 14:37:20,642 - Epoch: [152][ 20/ 71] Overall Loss 0.144285 Objective Loss 0.144285 LR 0.000250 Time 0.285251 -2023-09-11 14:37:23,730 - Epoch: [152][ 30/ 71] Overall Loss 0.141729 Objective Loss 0.141729 LR 0.000250 Time 0.293095 -2023-09-11 14:37:26,534 - Epoch: [152][ 40/ 71] Overall Loss 0.143139 Objective Loss 0.143139 LR 0.000250 Time 0.289913 -2023-09-11 14:37:29,483 - Epoch: [152][ 50/ 71] 
Overall Loss 0.141997 Objective Loss 0.141997 LR 0.000250 Time 0.290899 -2023-09-11 14:37:31,602 - Epoch: [152][ 60/ 71] Overall Loss 0.141396 Objective Loss 0.141396 LR 0.000250 Time 0.277737 -2023-09-11 14:37:34,127 - Epoch: [152][ 70/ 71] Overall Loss 0.141193 Objective Loss 0.141193 Top1 94.140625 LR 0.000250 Time 0.274118 -2023-09-11 14:37:34,205 - Epoch: [152][ 71/ 71] Overall Loss 0.141199 Objective Loss 0.141199 Top1 94.642857 LR 0.000250 Time 0.271361 -2023-09-11 14:37:34,286 - --- validate (epoch=152)----------- -2023-09-11 14:37:34,286 - 2000 samples (256 per mini-batch) -2023-09-11 14:37:37,464 - Epoch: [152][ 8/ 8] Loss 0.197767 Top1 92.050000 -2023-09-11 14:37:37,565 - ==> Top1: 92.050 Loss: 0.198 - -2023-09-11 14:37:37,565 - ==> Confusion: -[[939 46] - [113 902]] +2025-05-20 17:58:01,693 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 17:58:01,693 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:58:01,701 - + +2025-05-20 17:58:01,701 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:58:07,991 - Epoch: [150][ 10/ 71] Overall Loss 0.123131 Objective Loss 0.123131 LR 0.000250 Time 0.628996 +2025-05-20 17:58:10,902 - Epoch: [150][ 20/ 71] Overall Loss 0.125711 Objective Loss 0.125711 LR 0.000250 Time 0.460038 +2025-05-20 17:58:15,130 - Epoch: [150][ 30/ 71] Overall Loss 0.129340 Objective Loss 0.129340 LR 0.000250 Time 0.447611 +2025-05-20 17:58:19,021 - Epoch: [150][ 40/ 71] Overall Loss 0.130725 Objective Loss 0.130725 LR 0.000250 Time 0.432972 +2025-05-20 17:58:22,675 - Epoch: [150][ 50/ 71] Overall Loss 0.129742 Objective Loss 0.129742 LR 0.000250 Time 0.419445 +2025-05-20 17:58:26,627 - Epoch: [150][ 60/ 71] Overall Loss 0.127501 Objective Loss 0.127501 LR 0.000250 Time 0.415397 +2025-05-20 17:58:29,508 - Epoch: [150][ 70/ 71] Overall Loss 0.129790 Objective Loss 0.129790 Top1 96.875000 LR 0.000250 Time 0.397205 +2025-05-20 17:58:29,600 - 
Epoch: [150][ 71/ 71] Overall Loss 0.129159 Objective Loss 0.129159 Top1 97.023810 LR 0.000250 Time 0.392910 +2025-05-20 17:58:29,630 - --- validate (epoch=150)----------- +2025-05-20 17:58:29,630 - 2000 samples (256 per mini-batch) +2025-05-20 17:58:33,072 - Epoch: [150][ 8/ 8] Loss 0.171167 Top1 93.000000 +2025-05-20 17:58:33,103 - ==> Top1: 93.000 Loss: 0.171 + +2025-05-20 17:58:33,103 - ==> Confusion: +[[900 85] + [ 55 960]] -2023-09-11 14:37:37,566 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:37:37,566 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:37:37,568 - - -2023-09-11 14:37:37,569 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:37:41,232 - Epoch: [153][ 10/ 71] Overall Loss 0.141990 Objective Loss 0.141990 LR 0.000250 Time 0.366271 -2023-09-11 14:37:43,748 - Epoch: [153][ 20/ 71] Overall Loss 0.140454 Objective Loss 0.140454 LR 0.000250 Time 0.308905 -2023-09-11 14:37:45,940 - Epoch: [153][ 30/ 71] Overall Loss 0.136106 Objective Loss 0.136106 LR 0.000250 Time 0.279006 -2023-09-11 14:37:48,454 - Epoch: [153][ 40/ 71] Overall Loss 0.136150 Objective Loss 0.136150 LR 0.000250 Time 0.272089 -2023-09-11 14:37:50,776 - Epoch: [153][ 50/ 71] Overall Loss 0.139164 Objective Loss 0.139164 LR 0.000250 Time 0.264108 -2023-09-11 14:37:53,525 - Epoch: [153][ 60/ 71] Overall Loss 0.136480 Objective Loss 0.136480 LR 0.000250 Time 0.265900 -2023-09-11 14:37:55,787 - Epoch: [153][ 70/ 71] Overall Loss 0.138509 Objective Loss 0.138509 Top1 94.531250 LR 0.000250 Time 0.260229 -2023-09-11 14:37:55,859 - Epoch: [153][ 71/ 71] Overall Loss 0.137816 Objective Loss 0.137816 Top1 95.238095 LR 0.000250 Time 0.257581 -2023-09-11 14:37:55,959 - --- validate (epoch=153)----------- -2023-09-11 14:37:55,959 - 2000 samples (256 per mini-batch) -2023-09-11 14:37:58,353 - Epoch: [153][ 8/ 8] Loss 0.215560 Top1 91.250000 -2023-09-11 14:37:58,448 - ==> Top1: 91.250 Loss: 0.216 - -2023-09-11 
14:37:58,449 - ==> Confusion: -[[836 149] - [ 26 989]] - -2023-09-11 14:37:58,465 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:37:58,465 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:37:58,468 - - -2023-09-11 14:37:58,468 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:38:02,687 - Epoch: [154][ 10/ 71] Overall Loss 0.140922 Objective Loss 0.140922 LR 0.000250 Time 0.421873 -2023-09-11 14:38:04,720 - Epoch: [154][ 20/ 71] Overall Loss 0.135275 Objective Loss 0.135275 LR 0.000250 Time 0.312544 -2023-09-11 14:38:07,525 - Epoch: [154][ 30/ 71] Overall Loss 0.142118 Objective Loss 0.142118 LR 0.000250 Time 0.301861 -2023-09-11 14:38:09,852 - Epoch: [154][ 40/ 71] Overall Loss 0.138506 Objective Loss 0.138506 LR 0.000250 Time 0.284549 -2023-09-11 14:38:12,837 - Epoch: [154][ 50/ 71] Overall Loss 0.139098 Objective Loss 0.139098 LR 0.000250 Time 0.287338 -2023-09-11 14:38:14,928 - Epoch: [154][ 60/ 71] Overall Loss 0.141296 Objective Loss 0.141296 LR 0.000250 Time 0.274302 -2023-09-11 14:38:17,225 - Epoch: [154][ 70/ 71] Overall Loss 0.142905 Objective Loss 0.142905 Top1 91.406250 LR 0.000250 Time 0.267918 -2023-09-11 14:38:17,328 - Epoch: [154][ 71/ 71] Overall Loss 0.142904 Objective Loss 0.142904 Top1 92.261905 LR 0.000250 Time 0.265590 -2023-09-11 14:38:17,430 - --- validate (epoch=154)----------- -2023-09-11 14:38:17,430 - 2000 samples (256 per mini-batch) -2023-09-11 14:38:20,199 - Epoch: [154][ 8/ 8] Loss 0.193705 Top1 91.450000 -2023-09-11 14:38:20,290 - ==> Top1: 91.450 Loss: 0.194 - -2023-09-11 14:38:20,290 - ==> Confusion: -[[937 48] - [123 892]] - -2023-09-11 14:38:20,306 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:38:20,306 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:38:20,308 - - -2023-09-11 14:38:20,308 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:38:24,149 - 
Epoch: [155][ 10/ 71] Overall Loss 0.155127 Objective Loss 0.155127 LR 0.000250 Time 0.384032 -2023-09-11 14:38:26,754 - Epoch: [155][ 20/ 71] Overall Loss 0.152524 Objective Loss 0.152524 LR 0.000250 Time 0.322225 -2023-09-11 14:38:28,740 - Epoch: [155][ 30/ 71] Overall Loss 0.150550 Objective Loss 0.150550 LR 0.000250 Time 0.281035 -2023-09-11 14:38:31,374 - Epoch: [155][ 40/ 71] Overall Loss 0.145926 Objective Loss 0.145926 LR 0.000250 Time 0.276599 -2023-09-11 14:38:34,581 - Epoch: [155][ 50/ 71] Overall Loss 0.145236 Objective Loss 0.145236 LR 0.000250 Time 0.285423 -2023-09-11 14:38:38,402 - Epoch: [155][ 60/ 71] Overall Loss 0.145329 Objective Loss 0.145329 LR 0.000250 Time 0.301523 -2023-09-11 14:38:40,809 - Epoch: [155][ 70/ 71] Overall Loss 0.145206 Objective Loss 0.145206 Top1 96.093750 LR 0.000250 Time 0.292837 -2023-09-11 14:38:40,908 - Epoch: [155][ 71/ 71] Overall Loss 0.146487 Objective Loss 0.146487 Top1 94.940476 LR 0.000250 Time 0.290097 -2023-09-11 14:38:41,004 - --- validate (epoch=155)----------- -2023-09-11 14:38:41,004 - 2000 samples (256 per mini-batch) -2023-09-11 14:38:43,556 - Epoch: [155][ 8/ 8] Loss 0.180358 Top1 92.350000 -2023-09-11 14:38:43,659 - ==> Top1: 92.350 Loss: 0.180 - -2023-09-11 14:38:43,660 - ==> Confusion: +2025-05-20 17:58:33,118 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 17:58:33,119 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:58:33,126 - + +2025-05-20 17:58:33,126 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:58:37,873 - Epoch: [151][ 10/ 71] Overall Loss 0.127451 Objective Loss 0.127451 LR 0.000250 Time 0.474584 +2025-05-20 17:58:40,965 - Epoch: [151][ 20/ 71] Overall Loss 0.130377 Objective Loss 0.130377 LR 0.000250 Time 0.391889 +2025-05-20 17:58:45,306 - Epoch: [151][ 30/ 71] Overall Loss 0.128303 Objective Loss 0.128303 LR 0.000250 Time 0.405961 +2025-05-20 17:58:49,162 - Epoch: [151][ 40/ 71] 
Overall Loss 0.127750 Objective Loss 0.127750 LR 0.000250 Time 0.400869 +2025-05-20 17:58:53,726 - Epoch: [151][ 50/ 71] Overall Loss 0.130532 Objective Loss 0.130532 LR 0.000250 Time 0.411956 +2025-05-20 17:58:56,974 - Epoch: [151][ 60/ 71] Overall Loss 0.131277 Objective Loss 0.131277 LR 0.000250 Time 0.397429 +2025-05-20 17:59:00,754 - Epoch: [151][ 70/ 71] Overall Loss 0.131904 Objective Loss 0.131904 Top1 92.578125 LR 0.000250 Time 0.394653 +2025-05-20 17:59:00,862 - Epoch: [151][ 71/ 71] Overall Loss 0.130810 Objective Loss 0.130810 Top1 93.750000 LR 0.000250 Time 0.390611 +2025-05-20 17:59:00,894 - --- validate (epoch=151)----------- +2025-05-20 17:59:00,894 - 2000 samples (256 per mini-batch) +2025-05-20 17:59:04,398 - Epoch: [151][ 8/ 8] Loss 0.165314 Top1 93.100000 +2025-05-20 17:59:04,433 - ==> Top1: 93.100 Loss: 0.165 + +2025-05-20 17:59:04,433 - ==> Confusion: +[[900 85] + [ 53 962]] + +2025-05-20 17:59:04,449 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 17:59:04,449 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:59:04,456 - + +2025-05-20 17:59:04,456 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:59:09,316 - Epoch: [152][ 10/ 71] Overall Loss 0.134909 Objective Loss 0.134909 LR 0.000250 Time 0.485961 +2025-05-20 17:59:12,637 - Epoch: [152][ 20/ 71] Overall Loss 0.131502 Objective Loss 0.131502 LR 0.000250 Time 0.408994 +2025-05-20 17:59:16,551 - Epoch: [152][ 30/ 71] Overall Loss 0.133772 Objective Loss 0.133772 LR 0.000250 Time 0.403097 +2025-05-20 17:59:19,594 - Epoch: [152][ 40/ 71] Overall Loss 0.132202 Objective Loss 0.132202 LR 0.000250 Time 0.378406 +2025-05-20 17:59:23,503 - Epoch: [152][ 50/ 71] Overall Loss 0.131552 Objective Loss 0.131552 LR 0.000250 Time 0.380894 +2025-05-20 17:59:26,604 - Epoch: [152][ 60/ 71] Overall Loss 0.133022 Objective Loss 0.133022 LR 0.000250 Time 0.369095 +2025-05-20 17:59:30,315 - Epoch: [152][ 70/ 
71] Overall Loss 0.133526 Objective Loss 0.133526 Top1 92.968750 LR 0.000250 Time 0.369375 +2025-05-20 17:59:30,418 - Epoch: [152][ 71/ 71] Overall Loss 0.132456 Objective Loss 0.132456 Top1 94.047619 LR 0.000250 Time 0.365619 +2025-05-20 17:59:30,451 - --- validate (epoch=152)----------- +2025-05-20 17:59:30,452 - 2000 samples (256 per mini-batch) +2025-05-20 17:59:34,015 - Epoch: [152][ 8/ 8] Loss 0.185545 Top1 93.000000 +2025-05-20 17:59:34,052 - ==> Top1: 93.000 Loss: 0.186 + +2025-05-20 17:59:34,052 - ==> Confusion: +[[909 76] + [ 64 951]] + +2025-05-20 17:59:34,069 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 17:59:34,069 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 17:59:34,076 - + +2025-05-20 17:59:34,076 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 17:59:40,454 - Epoch: [153][ 10/ 71] Overall Loss 0.134782 Objective Loss 0.134782 LR 0.000250 Time 0.637723 +2025-05-20 17:59:43,344 - Epoch: [153][ 20/ 71] Overall Loss 0.130883 Objective Loss 0.130883 LR 0.000250 Time 0.463340 +2025-05-20 17:59:47,270 - Epoch: [153][ 30/ 71] Overall Loss 0.135771 Objective Loss 0.135771 LR 0.000250 Time 0.439748 +2025-05-20 17:59:50,664 - Epoch: [153][ 40/ 71] Overall Loss 0.131503 Objective Loss 0.131503 LR 0.000250 Time 0.414653 +2025-05-20 17:59:54,880 - Epoch: [153][ 50/ 71] Overall Loss 0.131947 Objective Loss 0.131947 LR 0.000250 Time 0.416043 +2025-05-20 17:59:58,207 - Epoch: [153][ 60/ 71] Overall Loss 0.131397 Objective Loss 0.131397 LR 0.000250 Time 0.402149 +2025-05-20 18:00:01,796 - Epoch: [153][ 70/ 71] Overall Loss 0.131815 Objective Loss 0.131815 Top1 94.140625 LR 0.000250 Time 0.395969 +2025-05-20 18:00:01,895 - Epoch: [153][ 71/ 71] Overall Loss 0.131055 Objective Loss 0.131055 Top1 94.940476 LR 0.000250 Time 0.391780 +2025-05-20 18:00:01,926 - --- validate (epoch=153)----------- +2025-05-20 18:00:01,926 - 2000 samples (256 per mini-batch) 
+2025-05-20 18:00:05,851 - Epoch: [153][ 8/ 8] Loss 0.163007 Top1 93.450000 +2025-05-20 18:00:05,890 - ==> Top1: 93.450 Loss: 0.163 + +2025-05-20 18:00:05,890 - ==> Confusion: [[906 79] - [ 74 941]] - -2023-09-11 14:38:43,675 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:38:43,675 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:38:43,677 - - -2023-09-11 14:38:43,677 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:38:47,396 - Epoch: [156][ 10/ 71] Overall Loss 0.125541 Objective Loss 0.125541 LR 0.000250 Time 0.371853 -2023-09-11 14:38:50,323 - Epoch: [156][ 20/ 71] Overall Loss 0.130025 Objective Loss 0.130025 LR 0.000250 Time 0.332216 -2023-09-11 14:38:53,578 - Epoch: [156][ 30/ 71] Overall Loss 0.126393 Objective Loss 0.126393 LR 0.000250 Time 0.329967 -2023-09-11 14:38:55,752 - Epoch: [156][ 40/ 71] Overall Loss 0.131066 Objective Loss 0.131066 LR 0.000250 Time 0.301830 -2023-09-11 14:38:59,598 - Epoch: [156][ 50/ 71] Overall Loss 0.130813 Objective Loss 0.130813 LR 0.000250 Time 0.318367 -2023-09-11 14:39:01,743 - Epoch: [156][ 60/ 71] Overall Loss 0.133392 Objective Loss 0.133392 LR 0.000250 Time 0.301063 -2023-09-11 14:39:04,103 - Epoch: [156][ 70/ 71] Overall Loss 0.134936 Objective Loss 0.134936 Top1 92.578125 LR 0.000250 Time 0.291763 -2023-09-11 14:39:04,196 - Epoch: [156][ 71/ 71] Overall Loss 0.133993 Objective Loss 0.133993 Top1 93.750000 LR 0.000250 Time 0.288962 -2023-09-11 14:39:04,294 - --- validate (epoch=156)----------- -2023-09-11 14:39:04,294 - 2000 samples (256 per mini-batch) -2023-09-11 14:39:07,145 - Epoch: [156][ 8/ 8] Loss 0.195592 Top1 92.350000 -2023-09-11 14:39:07,242 - ==> Top1: 92.350 Loss: 0.196 - -2023-09-11 14:39:07,242 - ==> Confusion: -[[937 48] - [105 910]] - -2023-09-11 14:39:07,258 - ==> Best [Top1: 93.400 Sparsity:0.00 Params: 57776 on epoch: 105] -2023-09-11 14:39:07,258 - Saving checkpoint to: 
logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:39:07,260 - - -2023-09-11 14:39:07,260 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:39:10,520 - Epoch: [157][ 10/ 71] Overall Loss 0.158947 Objective Loss 0.158947 LR 0.000250 Time 0.325914 -2023-09-11 14:39:12,782 - Epoch: [157][ 20/ 71] Overall Loss 0.145077 Objective Loss 0.145077 LR 0.000250 Time 0.276053 -2023-09-11 14:39:15,555 - Epoch: [157][ 30/ 71] Overall Loss 0.137404 Objective Loss 0.137404 LR 0.000250 Time 0.276456 -2023-09-11 14:39:19,291 - Epoch: [157][ 40/ 71] Overall Loss 0.137514 Objective Loss 0.137514 LR 0.000250 Time 0.300726 -2023-09-11 14:39:21,593 - Epoch: [157][ 50/ 71] Overall Loss 0.138335 Objective Loss 0.138335 LR 0.000250 Time 0.286609 -2023-09-11 14:39:24,762 - Epoch: [157][ 60/ 71] Overall Loss 0.136350 Objective Loss 0.136350 LR 0.000250 Time 0.291654 -2023-09-11 14:39:26,820 - Epoch: [157][ 70/ 71] Overall Loss 0.135475 Objective Loss 0.135475 Top1 94.531250 LR 0.000250 Time 0.279390 -2023-09-11 14:39:26,893 - Epoch: [157][ 71/ 71] Overall Loss 0.135719 Objective Loss 0.135719 Top1 93.750000 LR 0.000250 Time 0.276481 -2023-09-11 14:39:26,978 - --- validate (epoch=157)----------- -2023-09-11 14:39:26,978 - 2000 samples (256 per mini-batch) -2023-09-11 14:39:29,965 - Epoch: [157][ 8/ 8] Loss 0.164704 Top1 93.700000 -2023-09-11 14:39:30,065 - ==> Top1: 93.700 Loss: 0.165 - -2023-09-11 14:39:30,065 - ==> Confusion: -[[911 74] [ 52 963]] -2023-09-11 14:39:30,078 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:39:30,078 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:39:30,081 - - -2023-09-11 14:39:30,081 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:39:34,044 - Epoch: [158][ 10/ 71] Overall Loss 0.148368 Objective Loss 0.148368 LR 0.000250 Time 0.396215 -2023-09-11 14:39:36,725 - Epoch: [158][ 20/ 71] Overall Loss 0.139819 Objective Loss 0.139819 LR 0.000250 
Time 0.332168 -2023-09-11 14:39:38,775 - Epoch: [158][ 30/ 71] Overall Loss 0.139671 Objective Loss 0.139671 LR 0.000250 Time 0.289761 -2023-09-11 14:39:42,068 - Epoch: [158][ 40/ 71] Overall Loss 0.138871 Objective Loss 0.138871 LR 0.000250 Time 0.299630 -2023-09-11 14:39:44,863 - Epoch: [158][ 50/ 71] Overall Loss 0.137978 Objective Loss 0.137978 LR 0.000250 Time 0.295608 -2023-09-11 14:39:47,464 - Epoch: [158][ 60/ 71] Overall Loss 0.137981 Objective Loss 0.137981 LR 0.000250 Time 0.289672 -2023-09-11 14:39:49,396 - Epoch: [158][ 70/ 71] Overall Loss 0.135996 Objective Loss 0.135996 Top1 94.921875 LR 0.000250 Time 0.275889 -2023-09-11 14:39:49,480 - Epoch: [158][ 71/ 71] Overall Loss 0.135544 Objective Loss 0.135544 Top1 95.238095 LR 0.000250 Time 0.273192 -2023-09-11 14:39:49,585 - --- validate (epoch=158)----------- -2023-09-11 14:39:49,585 - 2000 samples (256 per mini-batch) -2023-09-11 14:39:52,646 - Epoch: [158][ 8/ 8] Loss 0.194998 Top1 91.850000 -2023-09-11 14:39:52,753 - ==> Top1: 91.850 Loss: 0.195 - -2023-09-11 14:39:52,754 - ==> Confusion: -[[921 64] - [ 99 916]] - -2023-09-11 14:39:52,770 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:39:52,770 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:39:52,772 - - -2023-09-11 14:39:52,772 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:39:57,277 - Epoch: [159][ 10/ 71] Overall Loss 0.127449 Objective Loss 0.127449 LR 0.000250 Time 0.450389 -2023-09-11 14:39:59,476 - Epoch: [159][ 20/ 71] Overall Loss 0.134861 Objective Loss 0.134861 LR 0.000250 Time 0.335161 -2023-09-11 14:40:02,391 - Epoch: [159][ 30/ 71] Overall Loss 0.138033 Objective Loss 0.138033 LR 0.000250 Time 0.320580 -2023-09-11 14:40:04,443 - Epoch: [159][ 40/ 71] Overall Loss 0.136145 Objective Loss 0.136145 LR 0.000250 Time 0.291724 -2023-09-11 14:40:08,015 - Epoch: [159][ 50/ 71] Overall Loss 0.136789 Objective Loss 0.136789 LR 0.000250 Time 0.304820 
-2023-09-11 14:40:11,050 - Epoch: [159][ 60/ 71] Overall Loss 0.137358 Objective Loss 0.137358 LR 0.000250 Time 0.304588 -2023-09-11 14:40:13,214 - Epoch: [159][ 70/ 71] Overall Loss 0.137390 Objective Loss 0.137390 Top1 95.703125 LR 0.000250 Time 0.291986 -2023-09-11 14:40:13,258 - Epoch: [159][ 71/ 71] Overall Loss 0.136949 Objective Loss 0.136949 Top1 95.833333 LR 0.000250 Time 0.288494 -2023-09-11 14:40:13,358 - --- validate (epoch=159)----------- -2023-09-11 14:40:13,358 - 2000 samples (256 per mini-batch) -2023-09-11 14:40:16,482 - Epoch: [159][ 8/ 8] Loss 0.172339 Top1 93.250000 -2023-09-11 14:40:16,581 - ==> Top1: 93.250 Loss: 0.172 - -2023-09-11 14:40:16,582 - ==> Confusion: -[[939 46] - [ 89 926]] +2025-05-20 18:00:05,907 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:00:05,907 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:00:05,914 - + +2025-05-20 18:00:05,915 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:00:10,758 - Epoch: [154][ 10/ 71] Overall Loss 0.119667 Objective Loss 0.119667 LR 0.000250 Time 0.484302 +2025-05-20 18:00:13,730 - Epoch: [154][ 20/ 71] Overall Loss 0.128246 Objective Loss 0.128246 LR 0.000250 Time 0.390735 +2025-05-20 18:00:17,562 - Epoch: [154][ 30/ 71] Overall Loss 0.130449 Objective Loss 0.130449 LR 0.000250 Time 0.388195 +2025-05-20 18:00:20,849 - Epoch: [154][ 40/ 71] Overall Loss 0.135114 Objective Loss 0.135114 LR 0.000250 Time 0.373323 +2025-05-20 18:00:25,633 - Epoch: [154][ 50/ 71] Overall Loss 0.135057 Objective Loss 0.135057 LR 0.000250 Time 0.394322 +2025-05-20 18:00:29,002 - Epoch: [154][ 60/ 71] Overall Loss 0.134935 Objective Loss 0.134935 LR 0.000250 Time 0.384748 +2025-05-20 18:00:32,805 - Epoch: [154][ 70/ 71] Overall Loss 0.133839 Objective Loss 0.133839 Top1 92.578125 LR 0.000250 Time 0.384111 +2025-05-20 18:00:32,915 - Epoch: [154][ 71/ 71] Overall Loss 0.133673 Objective Loss 0.133673 Top1 
92.559524 LR 0.000250 Time 0.380241 +2025-05-20 18:00:32,945 - --- validate (epoch=154)----------- +2025-05-20 18:00:32,945 - 2000 samples (256 per mini-batch) +2025-05-20 18:00:36,235 - Epoch: [154][ 8/ 8] Loss 0.174793 Top1 93.550000 +2025-05-20 18:00:36,271 - ==> Top1: 93.550 Loss: 0.175 + +2025-05-20 18:00:36,271 - ==> Confusion: +[[922 63] + [ 66 949]] -2023-09-11 14:40:16,598 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:40:16,598 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:40:16,602 - - -2023-09-11 14:40:16,602 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:40:21,185 - Epoch: [160][ 10/ 71] Overall Loss 0.127272 Objective Loss 0.127272 LR 0.000250 Time 0.458257 -2023-09-11 14:40:23,712 - Epoch: [160][ 20/ 71] Overall Loss 0.130602 Objective Loss 0.130602 LR 0.000250 Time 0.355428 -2023-09-11 14:40:26,511 - Epoch: [160][ 30/ 71] Overall Loss 0.136898 Objective Loss 0.136898 LR 0.000250 Time 0.330237 -2023-09-11 14:40:29,016 - Epoch: [160][ 40/ 71] Overall Loss 0.140281 Objective Loss 0.140281 LR 0.000250 Time 0.310310 -2023-09-11 14:40:32,526 - Epoch: [160][ 50/ 71] Overall Loss 0.140610 Objective Loss 0.140610 LR 0.000250 Time 0.318439 -2023-09-11 14:40:35,245 - Epoch: [160][ 60/ 71] Overall Loss 0.138607 Objective Loss 0.138607 LR 0.000250 Time 0.310684 -2023-09-11 14:40:37,308 - Epoch: [160][ 70/ 71] Overall Loss 0.137954 Objective Loss 0.137954 Top1 92.968750 LR 0.000250 Time 0.295758 -2023-09-11 14:40:37,427 - Epoch: [160][ 71/ 71] Overall Loss 0.137916 Objective Loss 0.137916 Top1 92.857143 LR 0.000250 Time 0.293266 -2023-09-11 14:40:37,512 - --- validate (epoch=160)----------- -2023-09-11 14:40:37,512 - 2000 samples (256 per mini-batch) -2023-09-11 14:40:40,498 - Epoch: [160][ 8/ 8] Loss 0.183178 Top1 92.700000 -2023-09-11 14:40:40,598 - ==> Top1: 92.700 Loss: 0.183 - -2023-09-11 14:40:40,599 - ==> Confusion: -[[904 81] - [ 65 950]] +2025-05-20 
18:00:36,288 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:00:36,288 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:00:36,295 - + +2025-05-20 18:00:36,295 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:00:42,921 - Epoch: [155][ 10/ 71] Overall Loss 0.128888 Objective Loss 0.128888 LR 0.000250 Time 0.662544 +2025-05-20 18:00:45,733 - Epoch: [155][ 20/ 71] Overall Loss 0.126488 Objective Loss 0.126488 LR 0.000250 Time 0.471829 +2025-05-20 18:00:49,278 - Epoch: [155][ 30/ 71] Overall Loss 0.123493 Objective Loss 0.123493 LR 0.000250 Time 0.432718 +2025-05-20 18:00:52,355 - Epoch: [155][ 40/ 71] Overall Loss 0.129058 Objective Loss 0.129058 LR 0.000250 Time 0.401446 +2025-05-20 18:00:56,941 - Epoch: [155][ 50/ 71] Overall Loss 0.128136 Objective Loss 0.128136 LR 0.000250 Time 0.412883 +2025-05-20 18:01:00,713 - Epoch: [155][ 60/ 71] Overall Loss 0.129348 Objective Loss 0.129348 LR 0.000250 Time 0.406925 +2025-05-20 18:01:04,505 - Epoch: [155][ 70/ 71] Overall Loss 0.129122 Objective Loss 0.129122 Top1 95.703125 LR 0.000250 Time 0.402962 +2025-05-20 18:01:04,608 - Epoch: [155][ 71/ 71] Overall Loss 0.129789 Objective Loss 0.129789 Top1 94.642857 LR 0.000250 Time 0.398730 +2025-05-20 18:01:04,638 - --- validate (epoch=155)----------- +2025-05-20 18:01:04,638 - 2000 samples (256 per mini-batch) +2025-05-20 18:01:08,744 - Epoch: [155][ 8/ 8] Loss 0.202881 Top1 91.750000 +2025-05-20 18:01:08,778 - ==> Top1: 91.750 Loss: 0.203 + +2025-05-20 18:01:08,778 - ==> Confusion: +[[948 37] + [128 887]] + +2025-05-20 18:01:08,790 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:01:08,790 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:01:08,797 - + +2025-05-20 18:01:08,798 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:01:13,435 - Epoch: [156][ 10/ 71] 
Overall Loss 0.145247 Objective Loss 0.145247 LR 0.000250 Time 0.463667 +2025-05-20 18:01:16,679 - Epoch: [156][ 20/ 71] Overall Loss 0.134773 Objective Loss 0.134773 LR 0.000250 Time 0.394000 +2025-05-20 18:01:20,166 - Epoch: [156][ 30/ 71] Overall Loss 0.128784 Objective Loss 0.128784 LR 0.000250 Time 0.378896 +2025-05-20 18:01:24,116 - Epoch: [156][ 40/ 71] Overall Loss 0.127616 Objective Loss 0.127616 LR 0.000250 Time 0.382932 +2025-05-20 18:01:27,137 - Epoch: [156][ 50/ 71] Overall Loss 0.126286 Objective Loss 0.126286 LR 0.000250 Time 0.366753 +2025-05-20 18:01:30,796 - Epoch: [156][ 60/ 71] Overall Loss 0.128118 Objective Loss 0.128118 LR 0.000250 Time 0.366611 +2025-05-20 18:01:34,611 - Epoch: [156][ 70/ 71] Overall Loss 0.128126 Objective Loss 0.128126 Top1 95.703125 LR 0.000250 Time 0.368731 +2025-05-20 18:01:34,720 - Epoch: [156][ 71/ 71] Overall Loss 0.127189 Objective Loss 0.127189 Top1 96.428571 LR 0.000250 Time 0.365063 +2025-05-20 18:01:34,751 - --- validate (epoch=156)----------- +2025-05-20 18:01:34,751 - 2000 samples (256 per mini-batch) +2025-05-20 18:01:38,728 - Epoch: [156][ 8/ 8] Loss 0.174603 Top1 92.750000 +2025-05-20 18:01:38,765 - ==> Top1: 92.750 Loss: 0.175 + +2025-05-20 18:01:38,765 - ==> Confusion: +[[923 62] + [ 83 932]] -2023-09-11 14:40:40,613 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:40:40,613 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:40:40,616 - - -2023-09-11 14:40:40,616 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:40:44,323 - Epoch: [161][ 10/ 71] Overall Loss 0.123808 Objective Loss 0.123808 LR 0.000250 Time 0.370695 -2023-09-11 14:40:47,189 - Epoch: [161][ 20/ 71] Overall Loss 0.131230 Objective Loss 0.131230 LR 0.000250 Time 0.328594 -2023-09-11 14:40:50,092 - Epoch: [161][ 30/ 71] Overall Loss 0.131855 Objective Loss 0.131855 LR 0.000250 Time 0.315826 -2023-09-11 14:40:52,700 - Epoch: [161][ 40/ 71] Overall Loss 
0.133592 Objective Loss 0.133592 LR 0.000250 Time 0.302070 -2023-09-11 14:40:55,680 - Epoch: [161][ 50/ 71] Overall Loss 0.133337 Objective Loss 0.133337 LR 0.000250 Time 0.301253 -2023-09-11 14:40:58,916 - Epoch: [161][ 60/ 71] Overall Loss 0.133530 Objective Loss 0.133530 LR 0.000250 Time 0.304971 -2023-09-11 14:41:01,013 - Epoch: [161][ 70/ 71] Overall Loss 0.133453 Objective Loss 0.133453 Top1 92.578125 LR 0.000250 Time 0.291347 -2023-09-11 14:41:01,097 - Epoch: [161][ 71/ 71] Overall Loss 0.133963 Objective Loss 0.133963 Top1 92.559524 LR 0.000250 Time 0.288426 -2023-09-11 14:41:01,194 - --- validate (epoch=161)----------- -2023-09-11 14:41:01,194 - 2000 samples (256 per mini-batch) -2023-09-11 14:41:04,124 - Epoch: [161][ 8/ 8] Loss 0.192709 Top1 92.050000 -2023-09-11 14:41:04,210 - ==> Top1: 92.050 Loss: 0.193 - -2023-09-11 14:41:04,211 - ==> Confusion: +2025-05-20 18:01:38,782 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:01:38,782 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:01:38,790 - + +2025-05-20 18:01:38,790 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:01:44,696 - Epoch: [157][ 10/ 71] Overall Loss 0.133732 Objective Loss 0.133732 LR 0.000250 Time 0.590538 +2025-05-20 18:01:47,355 - Epoch: [157][ 20/ 71] Overall Loss 0.134617 Objective Loss 0.134617 LR 0.000250 Time 0.428235 +2025-05-20 18:01:51,154 - Epoch: [157][ 30/ 71] Overall Loss 0.132913 Objective Loss 0.132913 LR 0.000250 Time 0.412097 +2025-05-20 18:01:54,418 - Epoch: [157][ 40/ 71] Overall Loss 0.133123 Objective Loss 0.133123 LR 0.000250 Time 0.390676 +2025-05-20 18:01:58,053 - Epoch: [157][ 50/ 71] Overall Loss 0.133407 Objective Loss 0.133407 LR 0.000250 Time 0.385222 +2025-05-20 18:02:01,942 - Epoch: [157][ 60/ 71] Overall Loss 0.132263 Objective Loss 0.132263 LR 0.000250 Time 0.385829 +2025-05-20 18:02:05,449 - Epoch: [157][ 70/ 71] Overall Loss 0.131831 Objective 
Loss 0.131831 Top1 93.359375 LR 0.000250 Time 0.380817 +2025-05-20 18:02:05,545 - Epoch: [157][ 71/ 71] Overall Loss 0.131602 Objective Loss 0.131602 Top1 93.452381 LR 0.000250 Time 0.376798 +2025-05-20 18:02:05,574 - --- validate (epoch=157)----------- +2025-05-20 18:02:05,575 - 2000 samples (256 per mini-batch) +2025-05-20 18:02:09,246 - Epoch: [157][ 8/ 8] Loss 0.170979 Top1 93.100000 +2025-05-20 18:02:09,277 - ==> Top1: 93.100 Loss: 0.171 + +2025-05-20 18:02:09,277 - ==> Confusion: [[946 39] - [120 895]] - -2023-09-11 14:41:04,227 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:41:04,227 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:41:04,231 - - -2023-09-11 14:41:04,232 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:41:08,070 - Epoch: [162][ 10/ 71] Overall Loss 0.141698 Objective Loss 0.141698 LR 0.000250 Time 0.383790 -2023-09-11 14:41:10,116 - Epoch: [162][ 20/ 71] Overall Loss 0.152747 Objective Loss 0.152747 LR 0.000250 Time 0.294192 -2023-09-11 14:41:12,711 - Epoch: [162][ 30/ 71] Overall Loss 0.144171 Objective Loss 0.144171 LR 0.000250 Time 0.282613 -2023-09-11 14:41:15,349 - Epoch: [162][ 40/ 71] Overall Loss 0.140177 Objective Loss 0.140177 LR 0.000250 Time 0.277901 -2023-09-11 14:41:18,961 - Epoch: [162][ 50/ 71] Overall Loss 0.139428 Objective Loss 0.139428 LR 0.000250 Time 0.294547 -2023-09-11 14:41:21,076 - Epoch: [162][ 60/ 71] Overall Loss 0.140653 Objective Loss 0.140653 LR 0.000250 Time 0.280707 -2023-09-11 14:41:23,431 - Epoch: [162][ 70/ 71] Overall Loss 0.138707 Objective Loss 0.138707 Top1 96.484375 LR 0.000250 Time 0.274240 -2023-09-11 14:41:23,523 - Epoch: [162][ 71/ 71] Overall Loss 0.139048 Objective Loss 0.139048 Top1 95.535714 LR 0.000250 Time 0.271673 -2023-09-11 14:41:23,626 - --- validate (epoch=162)----------- -2023-09-11 14:41:23,627 - 2000 samples (256 per mini-batch) -2023-09-11 14:41:26,288 - Epoch: [162][ 8/ 8] Loss 0.172017 Top1 
92.800000 -2023-09-11 14:41:26,386 - ==> Top1: 92.800 Loss: 0.172 - -2023-09-11 14:41:26,386 - ==> Confusion: -[[932 53] - [ 91 924]] + [ 99 916]] -2023-09-11 14:41:26,387 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:41:26,387 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:41:26,392 - - -2023-09-11 14:41:26,392 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:41:29,860 - Epoch: [163][ 10/ 71] Overall Loss 0.134971 Objective Loss 0.134971 LR 0.000250 Time 0.346731 -2023-09-11 14:41:32,202 - Epoch: [163][ 20/ 71] Overall Loss 0.139776 Objective Loss 0.139776 LR 0.000250 Time 0.290417 -2023-09-11 14:41:35,453 - Epoch: [163][ 30/ 71] Overall Loss 0.140548 Objective Loss 0.140548 LR 0.000250 Time 0.301996 -2023-09-11 14:41:38,055 - Epoch: [163][ 40/ 71] Overall Loss 0.136687 Objective Loss 0.136687 LR 0.000250 Time 0.291534 -2023-09-11 14:41:41,291 - Epoch: [163][ 50/ 71] Overall Loss 0.131695 Objective Loss 0.131695 LR 0.000250 Time 0.297937 -2023-09-11 14:41:43,865 - Epoch: [163][ 60/ 71] Overall Loss 0.133672 Objective Loss 0.133672 LR 0.000250 Time 0.291178 -2023-09-11 14:41:45,890 - Epoch: [163][ 70/ 71] Overall Loss 0.131670 Objective Loss 0.131670 Top1 94.531250 LR 0.000250 Time 0.278510 -2023-09-11 14:41:45,972 - Epoch: [163][ 71/ 71] Overall Loss 0.134635 Objective Loss 0.134635 Top1 91.964286 LR 0.000250 Time 0.275728 -2023-09-11 14:41:46,076 - --- validate (epoch=163)----------- -2023-09-11 14:41:46,076 - 2000 samples (256 per mini-batch) -2023-09-11 14:41:48,472 - Epoch: [163][ 8/ 8] Loss 0.176663 Top1 92.500000 -2023-09-11 14:41:48,569 - ==> Top1: 92.500 Loss: 0.177 - -2023-09-11 14:41:48,570 - ==> Confusion: -[[935 50] - [100 915]] +2025-05-20 18:02:09,292 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:02:09,292 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:02:09,300 - + +2025-05-20 
18:02:09,300 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:02:14,032 - Epoch: [158][ 10/ 71] Overall Loss 0.135822 Objective Loss 0.135822 LR 0.000250 Time 0.473211 +2025-05-20 18:02:17,542 - Epoch: [158][ 20/ 71] Overall Loss 0.138536 Objective Loss 0.138536 LR 0.000250 Time 0.412047 +2025-05-20 18:02:21,606 - Epoch: [158][ 30/ 71] Overall Loss 0.135685 Objective Loss 0.135685 LR 0.000250 Time 0.410149 +2025-05-20 18:02:25,633 - Epoch: [158][ 40/ 71] Overall Loss 0.133216 Objective Loss 0.133216 LR 0.000250 Time 0.408284 +2025-05-20 18:02:28,966 - Epoch: [158][ 50/ 71] Overall Loss 0.130924 Objective Loss 0.130924 LR 0.000250 Time 0.393275 +2025-05-20 18:02:32,867 - Epoch: [158][ 60/ 71] Overall Loss 0.134442 Objective Loss 0.134442 LR 0.000250 Time 0.392744 +2025-05-20 18:02:35,959 - Epoch: [158][ 70/ 71] Overall Loss 0.135489 Objective Loss 0.135489 Top1 95.312500 LR 0.000250 Time 0.380811 +2025-05-20 18:02:36,056 - Epoch: [158][ 71/ 71] Overall Loss 0.136167 Objective Loss 0.136167 Top1 94.047619 LR 0.000250 Time 0.376806 +2025-05-20 18:02:36,085 - --- validate (epoch=158)----------- +2025-05-20 18:02:36,085 - 2000 samples (256 per mini-batch) +2025-05-20 18:02:40,059 - Epoch: [158][ 8/ 8] Loss 0.182006 Top1 92.950000 +2025-05-20 18:02:40,098 - ==> Top1: 92.950 Loss: 0.182 + +2025-05-20 18:02:40,098 - ==> Confusion: +[[936 49] + [ 92 923]] -2023-09-11 14:41:48,585 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:41:48,585 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:41:48,587 - - -2023-09-11 14:41:48,587 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:41:53,130 - Epoch: [164][ 10/ 71] Overall Loss 0.129067 Objective Loss 0.129067 LR 0.000250 Time 0.454192 -2023-09-11 14:41:55,290 - Epoch: [164][ 20/ 71] Overall Loss 0.133608 Objective Loss 0.133608 LR 0.000250 Time 0.335100 -2023-09-11 14:41:57,981 - Epoch: [164][ 30/ 71] Overall 
Loss 0.129368 Objective Loss 0.129368 LR 0.000250 Time 0.313100 -2023-09-11 14:42:00,282 - Epoch: [164][ 40/ 71] Overall Loss 0.131453 Objective Loss 0.131453 LR 0.000250 Time 0.292328 -2023-09-11 14:42:03,817 - Epoch: [164][ 50/ 71] Overall Loss 0.130713 Objective Loss 0.130713 LR 0.000250 Time 0.304550 -2023-09-11 14:42:06,200 - Epoch: [164][ 60/ 71] Overall Loss 0.130805 Objective Loss 0.130805 LR 0.000250 Time 0.293514 -2023-09-11 14:42:08,454 - Epoch: [164][ 70/ 71] Overall Loss 0.131235 Objective Loss 0.131235 Top1 93.359375 LR 0.000250 Time 0.283769 -2023-09-11 14:42:08,572 - Epoch: [164][ 71/ 71] Overall Loss 0.130858 Objective Loss 0.130858 Top1 94.047619 LR 0.000250 Time 0.281442 -2023-09-11 14:42:08,678 - --- validate (epoch=164)----------- -2023-09-11 14:42:08,678 - 2000 samples (256 per mini-batch) -2023-09-11 14:42:11,954 - Epoch: [164][ 8/ 8] Loss 0.176193 Top1 92.350000 -2023-09-11 14:42:12,097 - ==> Top1: 92.350 Loss: 0.176 - -2023-09-11 14:42:12,098 - ==> Confusion: -[[902 83] - [ 70 945]] +2025-05-20 18:02:40,114 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:02:40,114 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:02:40,122 - + +2025-05-20 18:02:40,122 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:02:46,225 - Epoch: [159][ 10/ 71] Overall Loss 0.138093 Objective Loss 0.138093 LR 0.000250 Time 0.610221 +2025-05-20 18:02:49,194 - Epoch: [159][ 20/ 71] Overall Loss 0.134152 Objective Loss 0.134152 LR 0.000250 Time 0.453543 +2025-05-20 18:02:53,334 - Epoch: [159][ 30/ 71] Overall Loss 0.130633 Objective Loss 0.130633 LR 0.000250 Time 0.440350 +2025-05-20 18:02:56,909 - Epoch: [159][ 40/ 71] Overall Loss 0.130667 Objective Loss 0.130667 LR 0.000250 Time 0.419651 +2025-05-20 18:03:00,431 - Epoch: [159][ 50/ 71] Overall Loss 0.130817 Objective Loss 0.130817 LR 0.000250 Time 0.406141 +2025-05-20 18:03:03,498 - Epoch: [159][ 60/ 71] 
Overall Loss 0.132910 Objective Loss 0.132910 LR 0.000250 Time 0.389573 +2025-05-20 18:03:08,294 - Epoch: [159][ 70/ 71] Overall Loss 0.132430 Objective Loss 0.132430 Top1 93.359375 LR 0.000250 Time 0.402429 +2025-05-20 18:03:08,403 - Epoch: [159][ 71/ 71] Overall Loss 0.132691 Objective Loss 0.132691 Top1 93.154762 LR 0.000250 Time 0.398288 +2025-05-20 18:03:08,437 - --- validate (epoch=159)----------- +2025-05-20 18:03:08,437 - 2000 samples (256 per mini-batch) +2025-05-20 18:03:11,831 - Epoch: [159][ 8/ 8] Loss 0.200749 Top1 91.800000 +2025-05-20 18:03:11,865 - ==> Top1: 91.800 Loss: 0.201 + +2025-05-20 18:03:11,865 - ==> Confusion: +[[930 55] + [109 906]] + +2025-05-20 18:03:11,881 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:03:11,881 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:03:11,889 - + +2025-05-20 18:03:11,889 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:03:17,936 - Epoch: [160][ 10/ 71] Overall Loss 0.137887 Objective Loss 0.137887 LR 0.000250 Time 0.604643 +2025-05-20 18:03:21,010 - Epoch: [160][ 20/ 71] Overall Loss 0.135545 Objective Loss 0.135545 LR 0.000250 Time 0.456008 +2025-05-20 18:03:25,332 - Epoch: [160][ 30/ 71] Overall Loss 0.132198 Objective Loss 0.132198 LR 0.000250 Time 0.448059 +2025-05-20 18:03:28,399 - Epoch: [160][ 40/ 71] Overall Loss 0.132445 Objective Loss 0.132445 LR 0.000250 Time 0.412726 +2025-05-20 18:03:31,941 - Epoch: [160][ 50/ 71] Overall Loss 0.133717 Objective Loss 0.133717 LR 0.000250 Time 0.401016 +2025-05-20 18:03:35,139 - Epoch: [160][ 60/ 71] Overall Loss 0.133291 Objective Loss 0.133291 LR 0.000250 Time 0.387467 +2025-05-20 18:03:38,271 - Epoch: [160][ 70/ 71] Overall Loss 0.134729 Objective Loss 0.134729 Top1 92.578125 LR 0.000250 Time 0.376854 +2025-05-20 18:03:38,376 - Epoch: [160][ 71/ 71] Overall Loss 0.134716 Objective Loss 0.134716 Top1 92.857143 LR 0.000250 Time 0.373024 +2025-05-20 
18:03:38,404 - --- validate (epoch=160)----------- +2025-05-20 18:03:38,404 - 2000 samples (256 per mini-batch) +2025-05-20 18:03:41,663 - Epoch: [160][ 8/ 8] Loss 0.185646 Top1 92.450000 +2025-05-20 18:03:41,696 - ==> Top1: 92.450 Loss: 0.186 + +2025-05-20 18:03:41,696 - ==> Confusion: +[[928 57] + [ 94 921]] -2023-09-11 14:42:12,113 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:42:12,113 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:42:12,118 - - -2023-09-11 14:42:12,118 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:42:17,170 - Epoch: [165][ 10/ 71] Overall Loss 0.141816 Objective Loss 0.141816 LR 0.000250 Time 0.505176 -2023-09-11 14:42:19,398 - Epoch: [165][ 20/ 71] Overall Loss 0.140926 Objective Loss 0.140926 LR 0.000250 Time 0.363960 -2023-09-11 14:42:22,592 - Epoch: [165][ 30/ 71] Overall Loss 0.139346 Objective Loss 0.139346 LR 0.000250 Time 0.349096 -2023-09-11 14:42:24,693 - Epoch: [165][ 40/ 71] Overall Loss 0.142904 Objective Loss 0.142904 LR 0.000250 Time 0.314334 -2023-09-11 14:42:27,504 - Epoch: [165][ 50/ 71] Overall Loss 0.141788 Objective Loss 0.141788 LR 0.000250 Time 0.307692 -2023-09-11 14:42:31,208 - Epoch: [165][ 60/ 71] Overall Loss 0.144623 Objective Loss 0.144623 LR 0.000250 Time 0.318123 -2023-09-11 14:42:33,136 - Epoch: [165][ 70/ 71] Overall Loss 0.144675 Objective Loss 0.144675 Top1 95.703125 LR 0.000250 Time 0.300222 -2023-09-11 14:42:33,214 - Epoch: [165][ 71/ 71] Overall Loss 0.146900 Objective Loss 0.146900 Top1 95.238095 LR 0.000250 Time 0.297095 -2023-09-11 14:42:33,305 - --- validate (epoch=165)----------- -2023-09-11 14:42:33,306 - 2000 samples (256 per mini-batch) -2023-09-11 14:42:36,111 - Epoch: [165][ 8/ 8] Loss 0.175504 Top1 92.800000 -2023-09-11 14:42:36,203 - ==> Top1: 92.800 Loss: 0.176 - -2023-09-11 14:42:36,203 - ==> Confusion: -[[912 73] - [ 71 944]] +2025-05-20 18:03:41,713 - ==> Best [Top1: 94.150 Params: 57776 on 
epoch: 140] +2025-05-20 18:03:41,713 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:03:41,720 - + +2025-05-20 18:03:41,721 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:03:47,766 - Epoch: [161][ 10/ 71] Overall Loss 0.113033 Objective Loss 0.113033 LR 0.000250 Time 0.604480 +2025-05-20 18:03:51,442 - Epoch: [161][ 20/ 71] Overall Loss 0.119248 Objective Loss 0.119248 LR 0.000250 Time 0.486041 +2025-05-20 18:03:55,672 - Epoch: [161][ 30/ 71] Overall Loss 0.123114 Objective Loss 0.123114 LR 0.000250 Time 0.464998 +2025-05-20 18:03:58,829 - Epoch: [161][ 40/ 71] Overall Loss 0.123603 Objective Loss 0.123603 LR 0.000250 Time 0.427677 +2025-05-20 18:04:03,488 - Epoch: [161][ 50/ 71] Overall Loss 0.123289 Objective Loss 0.123289 LR 0.000250 Time 0.435313 +2025-05-20 18:04:06,848 - Epoch: [161][ 60/ 71] Overall Loss 0.124363 Objective Loss 0.124363 LR 0.000250 Time 0.418759 +2025-05-20 18:04:10,322 - Epoch: [161][ 70/ 71] Overall Loss 0.125494 Objective Loss 0.125494 Top1 94.531250 LR 0.000250 Time 0.408556 +2025-05-20 18:04:10,415 - Epoch: [161][ 71/ 71] Overall Loss 0.126511 Objective Loss 0.126511 Top1 93.750000 LR 0.000250 Time 0.404107 +2025-05-20 18:04:10,448 - --- validate (epoch=161)----------- +2025-05-20 18:04:10,448 - 2000 samples (256 per mini-batch) +2025-05-20 18:04:13,890 - Epoch: [161][ 8/ 8] Loss 0.173062 Top1 93.550000 +2025-05-20 18:04:13,923 - ==> Top1: 93.550 Loss: 0.173 + +2025-05-20 18:04:13,923 - ==> Confusion: +[[932 53] + [ 76 939]] -2023-09-11 14:42:36,218 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:42:36,218 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:42:36,220 - - -2023-09-11 14:42:36,220 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:42:39,537 - Epoch: [166][ 10/ 71] Overall Loss 0.125566 Objective Loss 0.125566 LR 0.000250 Time 0.331599 -2023-09-11 
14:42:41,803 - Epoch: [166][ 20/ 71] Overall Loss 0.130810 Objective Loss 0.130810 LR 0.000250 Time 0.279105 -2023-09-11 14:42:44,601 - Epoch: [166][ 30/ 71] Overall Loss 0.137340 Objective Loss 0.137340 LR 0.000250 Time 0.279331 -2023-09-11 14:42:46,560 - Epoch: [166][ 40/ 71] Overall Loss 0.136391 Objective Loss 0.136391 LR 0.000250 Time 0.258458 -2023-09-11 14:42:49,205 - Epoch: [166][ 50/ 71] Overall Loss 0.136528 Objective Loss 0.136528 LR 0.000250 Time 0.259661 -2023-09-11 14:42:52,525 - Epoch: [166][ 60/ 71] Overall Loss 0.136732 Objective Loss 0.136732 LR 0.000250 Time 0.271719 -2023-09-11 14:42:54,348 - Epoch: [166][ 70/ 71] Overall Loss 0.135025 Objective Loss 0.135025 Top1 93.359375 LR 0.000250 Time 0.258931 -2023-09-11 14:42:54,404 - Epoch: [166][ 71/ 71] Overall Loss 0.134384 Objective Loss 0.134384 Top1 94.047619 LR 0.000250 Time 0.256075 -2023-09-11 14:42:54,489 - --- validate (epoch=166)----------- -2023-09-11 14:42:54,489 - 2000 samples (256 per mini-batch) -2023-09-11 14:42:57,520 - Epoch: [166][ 8/ 8] Loss 0.175191 Top1 92.950000 -2023-09-11 14:42:57,621 - ==> Top1: 92.950 Loss: 0.175 - -2023-09-11 14:42:57,622 - ==> Confusion: -[[889 96] - [ 45 970]] +2025-05-20 18:04:13,939 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:04:13,939 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:04:13,946 - + +2025-05-20 18:04:13,946 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:04:18,652 - Epoch: [162][ 10/ 71] Overall Loss 0.128482 Objective Loss 0.128482 LR 0.000250 Time 0.470551 +2025-05-20 18:04:23,695 - Epoch: [162][ 20/ 71] Overall Loss 0.129413 Objective Loss 0.129413 LR 0.000250 Time 0.487390 +2025-05-20 18:04:27,236 - Epoch: [162][ 30/ 71] Overall Loss 0.135204 Objective Loss 0.135204 LR 0.000250 Time 0.442951 +2025-05-20 18:04:31,980 - Epoch: [162][ 40/ 71] Overall Loss 0.137008 Objective Loss 0.137008 LR 0.000250 Time 0.450799 
+2025-05-20 18:04:34,966 - Epoch: [162][ 50/ 71] Overall Loss 0.131815 Objective Loss 0.131815 LR 0.000250 Time 0.420351 +2025-05-20 18:04:38,381 - Epoch: [162][ 60/ 71] Overall Loss 0.129539 Objective Loss 0.129539 LR 0.000250 Time 0.407205 +2025-05-20 18:04:41,205 - Epoch: [162][ 70/ 71] Overall Loss 0.131135 Objective Loss 0.131135 Top1 94.531250 LR 0.000250 Time 0.389382 +2025-05-20 18:04:41,308 - Epoch: [162][ 71/ 71] Overall Loss 0.130595 Objective Loss 0.130595 Top1 94.940476 LR 0.000250 Time 0.385343 +2025-05-20 18:04:41,342 - --- validate (epoch=162)----------- +2025-05-20 18:04:41,342 - 2000 samples (256 per mini-batch) +2025-05-20 18:04:45,503 - Epoch: [162][ 8/ 8] Loss 0.173997 Top1 93.150000 +2025-05-20 18:04:45,544 - ==> Top1: 93.150 Loss: 0.174 + +2025-05-20 18:04:45,544 - ==> Confusion: +[[906 79] + [ 58 957]] -2023-09-11 14:42:57,636 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:42:57,636 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:42:57,639 - - -2023-09-11 14:42:57,639 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:43:02,045 - Epoch: [167][ 10/ 71] Overall Loss 0.134221 Objective Loss 0.134221 LR 0.000250 Time 0.440543 -2023-09-11 14:43:04,048 - Epoch: [167][ 20/ 71] Overall Loss 0.135043 Objective Loss 0.135043 LR 0.000250 Time 0.320392 -2023-09-11 14:43:07,176 - Epoch: [167][ 30/ 71] Overall Loss 0.146022 Objective Loss 0.146022 LR 0.000250 Time 0.317874 -2023-09-11 14:43:09,729 - Epoch: [167][ 40/ 71] Overall Loss 0.144319 Objective Loss 0.144319 LR 0.000250 Time 0.302224 -2023-09-11 14:43:12,314 - Epoch: [167][ 50/ 71] Overall Loss 0.141617 Objective Loss 0.141617 LR 0.000250 Time 0.293471 -2023-09-11 14:43:15,151 - Epoch: [167][ 60/ 71] Overall Loss 0.142042 Objective Loss 0.142042 LR 0.000250 Time 0.291841 -2023-09-11 14:43:16,942 - Epoch: [167][ 70/ 71] Overall Loss 0.143411 Objective Loss 0.143411 Top1 93.359375 LR 0.000250 Time 0.275726 
-2023-09-11 14:43:17,072 - Epoch: [167][ 71/ 71] Overall Loss 0.143113 Objective Loss 0.143113 Top1 93.750000 LR 0.000250 Time 0.273665 -2023-09-11 14:43:17,170 - --- validate (epoch=167)----------- -2023-09-11 14:43:17,171 - 2000 samples (256 per mini-batch) -2023-09-11 14:43:20,016 - Epoch: [167][ 8/ 8] Loss 0.165769 Top1 93.050000 -2023-09-11 14:43:20,112 - ==> Top1: 93.050 Loss: 0.166 - -2023-09-11 14:43:20,113 - ==> Confusion: -[[921 64] +2025-05-20 18:04:45,555 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:04:45,555 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:04:45,562 - + +2025-05-20 18:04:45,562 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:04:52,283 - Epoch: [163][ 10/ 71] Overall Loss 0.125648 Objective Loss 0.125648 LR 0.000250 Time 0.671984 +2025-05-20 18:04:55,005 - Epoch: [163][ 20/ 71] Overall Loss 0.126816 Objective Loss 0.126816 LR 0.000250 Time 0.472061 +2025-05-20 18:04:58,798 - Epoch: [163][ 30/ 71] Overall Loss 0.127119 Objective Loss 0.127119 LR 0.000250 Time 0.441140 +2025-05-20 18:05:04,220 - Epoch: [163][ 40/ 71] Overall Loss 0.124987 Objective Loss 0.124987 LR 0.000250 Time 0.466391 +2025-05-20 18:05:07,740 - Epoch: [163][ 50/ 71] Overall Loss 0.128694 Objective Loss 0.128694 LR 0.000250 Time 0.443517 +2025-05-20 18:05:11,231 - Epoch: [163][ 60/ 71] Overall Loss 0.131987 Objective Loss 0.131987 LR 0.000250 Time 0.427766 +2025-05-20 18:05:14,226 - Epoch: [163][ 70/ 71] Overall Loss 0.133842 Objective Loss 0.133842 Top1 92.968750 LR 0.000250 Time 0.409441 +2025-05-20 18:05:14,334 - Epoch: [163][ 71/ 71] Overall Loss 0.132823 Objective Loss 0.132823 Top1 94.047619 LR 0.000250 Time 0.405197 +2025-05-20 18:05:14,366 - --- validate (epoch=163)----------- +2025-05-20 18:05:14,366 - 2000 samples (256 per mini-batch) +2025-05-20 18:05:17,936 - Epoch: [163][ 8/ 8] Loss 0.166472 Top1 92.700000 +2025-05-20 18:05:17,972 - ==> Top1: 
92.700 Loss: 0.166 + +2025-05-20 18:05:17,972 - ==> Confusion: +[[918 67] + [ 79 936]] + +2025-05-20 18:05:17,984 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:05:17,984 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:05:17,991 - + +2025-05-20 18:05:17,991 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:05:22,849 - Epoch: [164][ 10/ 71] Overall Loss 0.135386 Objective Loss 0.135386 LR 0.000250 Time 0.485789 +2025-05-20 18:05:27,166 - Epoch: [164][ 20/ 71] Overall Loss 0.122608 Objective Loss 0.122608 LR 0.000250 Time 0.458715 +2025-05-20 18:05:30,107 - Epoch: [164][ 30/ 71] Overall Loss 0.126198 Objective Loss 0.126198 LR 0.000250 Time 0.403822 +2025-05-20 18:05:34,093 - Epoch: [164][ 40/ 71] Overall Loss 0.125250 Objective Loss 0.125250 LR 0.000250 Time 0.402522 +2025-05-20 18:05:37,420 - Epoch: [164][ 50/ 71] Overall Loss 0.123965 Objective Loss 0.123965 LR 0.000250 Time 0.388546 +2025-05-20 18:05:40,935 - Epoch: [164][ 60/ 71] Overall Loss 0.126672 Objective Loss 0.126672 LR 0.000250 Time 0.382370 +2025-05-20 18:05:44,206 - Epoch: [164][ 70/ 71] Overall Loss 0.126768 Objective Loss 0.126768 Top1 96.093750 LR 0.000250 Time 0.374462 +2025-05-20 18:05:44,314 - Epoch: [164][ 71/ 71] Overall Loss 0.126281 Objective Loss 0.126281 Top1 96.130952 LR 0.000250 Time 0.370707 +2025-05-20 18:05:44,345 - --- validate (epoch=164)----------- +2025-05-20 18:05:44,345 - 2000 samples (256 per mini-batch) +2025-05-20 18:05:48,258 - Epoch: [164][ 8/ 8] Loss 0.183816 Top1 93.150000 +2025-05-20 18:05:48,294 - ==> Top1: 93.150 Loss: 0.184 + +2025-05-20 18:05:48,294 - ==> Confusion: +[[923 62] [ 75 940]] -2023-09-11 14:43:20,128 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:43:20,128 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:43:20,131 - - -2023-09-11 14:43:20,131 - Training epoch: 18000 samples 
(256 per mini-batch) -2023-09-11 14:43:23,618 - Epoch: [168][ 10/ 71] Overall Loss 0.122227 Objective Loss 0.122227 LR 0.000250 Time 0.348689 -2023-09-11 14:43:26,085 - Epoch: [168][ 20/ 71] Overall Loss 0.128404 Objective Loss 0.128404 LR 0.000250 Time 0.297652 -2023-09-11 14:43:28,900 - Epoch: [168][ 30/ 71] Overall Loss 0.132952 Objective Loss 0.132952 LR 0.000250 Time 0.292259 -2023-09-11 14:43:30,854 - Epoch: [168][ 40/ 71] Overall Loss 0.133645 Objective Loss 0.133645 LR 0.000250 Time 0.268048 -2023-09-11 14:43:33,493 - Epoch: [168][ 50/ 71] Overall Loss 0.133681 Objective Loss 0.133681 LR 0.000250 Time 0.267214 -2023-09-11 14:43:35,541 - Epoch: [168][ 60/ 71] Overall Loss 0.136324 Objective Loss 0.136324 LR 0.000250 Time 0.256803 -2023-09-11 14:43:37,892 - Epoch: [168][ 70/ 71] Overall Loss 0.135753 Objective Loss 0.135753 Top1 94.531250 LR 0.000250 Time 0.253694 -2023-09-11 14:43:37,968 - Epoch: [168][ 71/ 71] Overall Loss 0.135240 Objective Loss 0.135240 Top1 95.238095 LR 0.000250 Time 0.251194 -2023-09-11 14:43:38,069 - --- validate (epoch=168)----------- -2023-09-11 14:43:38,069 - 2000 samples (256 per mini-batch) -2023-09-11 14:43:40,500 - Epoch: [168][ 8/ 8] Loss 0.175973 Top1 92.800000 -2023-09-11 14:43:40,597 - ==> Top1: 92.800 Loss: 0.176 - -2023-09-11 14:43:40,597 - ==> Confusion: -[[947 38] - [106 909]] - -2023-09-11 14:43:40,612 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:43:40,612 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:43:40,614 - - -2023-09-11 14:43:40,614 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:43:43,811 - Epoch: [169][ 10/ 71] Overall Loss 0.152357 Objective Loss 0.152357 LR 0.000250 Time 0.319678 -2023-09-11 14:43:45,874 - Epoch: [169][ 20/ 71] Overall Loss 0.143123 Objective Loss 0.143123 LR 0.000250 Time 0.262980 -2023-09-11 14:43:48,354 - Epoch: [169][ 30/ 71] Overall Loss 0.137579 Objective Loss 0.137579 LR 0.000250 Time 
0.257950 -2023-09-11 14:43:50,901 - Epoch: [169][ 40/ 71] Overall Loss 0.135455 Objective Loss 0.135455 LR 0.000250 Time 0.257132 -2023-09-11 14:43:54,020 - Epoch: [169][ 50/ 71] Overall Loss 0.136118 Objective Loss 0.136118 LR 0.000250 Time 0.268083 -2023-09-11 14:43:56,032 - Epoch: [169][ 60/ 71] Overall Loss 0.135532 Objective Loss 0.135532 LR 0.000250 Time 0.256929 -2023-09-11 14:43:58,329 - Epoch: [169][ 70/ 71] Overall Loss 0.137433 Objective Loss 0.137433 Top1 93.750000 LR 0.000250 Time 0.253037 -2023-09-11 14:43:58,418 - Epoch: [169][ 71/ 71] Overall Loss 0.137431 Objective Loss 0.137431 Top1 93.750000 LR 0.000250 Time 0.250722 -2023-09-11 14:43:58,512 - --- validate (epoch=169)----------- -2023-09-11 14:43:58,512 - 2000 samples (256 per mini-batch) -2023-09-11 14:44:01,559 - Epoch: [169][ 8/ 8] Loss 0.184606 Top1 92.900000 -2023-09-11 14:44:01,655 - ==> Top1: 92.900 Loss: 0.185 - -2023-09-11 14:44:01,656 - ==> Confusion: -[[939 46] - [ 96 919]] - -2023-09-11 14:44:01,663 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:44:01,664 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:44:01,666 - - -2023-09-11 14:44:01,666 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:44:05,054 - Epoch: [170][ 10/ 71] Overall Loss 0.134194 Objective Loss 0.134194 LR 0.000250 Time 0.338796 -2023-09-11 14:44:07,131 - Epoch: [170][ 20/ 71] Overall Loss 0.137751 Objective Loss 0.137751 LR 0.000250 Time 0.273196 -2023-09-11 14:44:10,411 - Epoch: [170][ 30/ 71] Overall Loss 0.137994 Objective Loss 0.137994 LR 0.000250 Time 0.291443 -2023-09-11 14:44:12,396 - Epoch: [170][ 40/ 71] Overall Loss 0.134324 Objective Loss 0.134324 LR 0.000250 Time 0.268207 -2023-09-11 14:44:14,973 - Epoch: [170][ 50/ 71] Overall Loss 0.133608 Objective Loss 0.133608 LR 0.000250 Time 0.266111 -2023-09-11 14:44:17,017 - Epoch: [170][ 60/ 71] Overall Loss 0.131404 Objective Loss 0.131404 LR 0.000250 Time 0.255815 
-2023-09-11 14:44:19,184 - Epoch: [170][ 70/ 71] Overall Loss 0.131643 Objective Loss 0.131643 Top1 95.312500 LR 0.000250 Time 0.250223 -2023-09-11 14:44:19,263 - Epoch: [170][ 71/ 71] Overall Loss 0.132586 Objective Loss 0.132586 Top1 94.940476 LR 0.000250 Time 0.247805 -2023-09-11 14:44:19,354 - --- validate (epoch=170)----------- -2023-09-11 14:44:19,355 - 2000 samples (256 per mini-batch) -2023-09-11 14:44:22,346 - Epoch: [170][ 8/ 8] Loss 0.167478 Top1 93.050000 -2023-09-11 14:44:22,444 - ==> Top1: 93.050 Loss: 0.167 - -2023-09-11 14:44:22,445 - ==> Confusion: -[[920 65] - [ 74 941]] +2025-05-20 18:05:48,299 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:05:48,299 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:05:48,307 - + +2025-05-20 18:05:48,307 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:05:53,794 - Epoch: [165][ 10/ 71] Overall Loss 0.108288 Objective Loss 0.108288 LR 0.000250 Time 0.548664 +2025-05-20 18:05:57,162 - Epoch: [165][ 20/ 71] Overall Loss 0.122706 Objective Loss 0.122706 LR 0.000250 Time 0.442711 +2025-05-20 18:06:01,714 - Epoch: [165][ 30/ 71] Overall Loss 0.126670 Objective Loss 0.126670 LR 0.000250 Time 0.446854 +2025-05-20 18:06:04,715 - Epoch: [165][ 40/ 71] Overall Loss 0.128398 Objective Loss 0.128398 LR 0.000250 Time 0.410170 +2025-05-20 18:06:08,404 - Epoch: [165][ 50/ 71] Overall Loss 0.127365 Objective Loss 0.127365 LR 0.000250 Time 0.401908 +2025-05-20 18:06:11,803 - Epoch: [165][ 60/ 71] Overall Loss 0.125781 Objective Loss 0.125781 LR 0.000250 Time 0.391572 +2025-05-20 18:06:14,997 - Epoch: [165][ 70/ 71] Overall Loss 0.126540 Objective Loss 0.126540 Top1 94.140625 LR 0.000250 Time 0.381251 +2025-05-20 18:06:15,105 - Epoch: [165][ 71/ 71] Overall Loss 0.126509 Objective Loss 0.126509 Top1 94.047619 LR 0.000250 Time 0.377405 +2025-05-20 18:06:15,134 - --- validate (epoch=165)----------- +2025-05-20 18:06:15,134 - 
2000 samples (256 per mini-batch) +2025-05-20 18:06:18,834 - Epoch: [165][ 8/ 8] Loss 0.189927 Top1 92.650000 +2025-05-20 18:06:18,866 - ==> Top1: 92.650 Loss: 0.190 + +2025-05-20 18:06:18,866 - ==> Confusion: +[[875 110] + [ 37 978]] -2023-09-11 14:44:22,459 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:44:22,459 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:44:22,463 - - -2023-09-11 14:44:22,463 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:44:25,544 - Epoch: [171][ 10/ 71] Overall Loss 0.133054 Objective Loss 0.133054 LR 0.000250 Time 0.308069 -2023-09-11 14:44:27,958 - Epoch: [171][ 20/ 71] Overall Loss 0.135992 Objective Loss 0.135992 LR 0.000250 Time 0.274680 -2023-09-11 14:44:30,561 - Epoch: [171][ 30/ 71] Overall Loss 0.136042 Objective Loss 0.136042 LR 0.000250 Time 0.269895 -2023-09-11 14:44:32,773 - Epoch: [171][ 40/ 71] Overall Loss 0.135843 Objective Loss 0.135843 LR 0.000250 Time 0.257710 -2023-09-11 14:44:35,810 - Epoch: [171][ 50/ 71] Overall Loss 0.131087 Objective Loss 0.131087 LR 0.000250 Time 0.266910 -2023-09-11 14:44:37,829 - Epoch: [171][ 60/ 71] Overall Loss 0.132278 Objective Loss 0.132278 LR 0.000250 Time 0.256061 -2023-09-11 14:44:40,216 - Epoch: [171][ 70/ 71] Overall Loss 0.132423 Objective Loss 0.132423 Top1 95.312500 LR 0.000250 Time 0.253585 -2023-09-11 14:44:40,305 - Epoch: [171][ 71/ 71] Overall Loss 0.132213 Objective Loss 0.132213 Top1 95.238095 LR 0.000250 Time 0.251255 -2023-09-11 14:44:40,396 - --- validate (epoch=171)----------- -2023-09-11 14:44:40,396 - 2000 samples (256 per mini-batch) -2023-09-11 14:44:42,933 - Epoch: [171][ 8/ 8] Loss 0.171834 Top1 93.150000 -2023-09-11 14:44:43,051 - ==> Top1: 93.150 Loss: 0.172 - -2023-09-11 14:44:43,051 - ==> Confusion: -[[913 72] +2025-05-20 18:06:18,883 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:06:18,884 - Saving checkpoint to: 
logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:06:18,891 - + +2025-05-20 18:06:18,891 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:06:24,697 - Epoch: [166][ 10/ 71] Overall Loss 0.132945 Objective Loss 0.132945 LR 0.000250 Time 0.580516 +2025-05-20 18:06:28,263 - Epoch: [166][ 20/ 71] Overall Loss 0.129539 Objective Loss 0.129539 LR 0.000250 Time 0.468550 +2025-05-20 18:06:31,918 - Epoch: [166][ 30/ 71] Overall Loss 0.129105 Objective Loss 0.129105 LR 0.000250 Time 0.434198 +2025-05-20 18:06:35,429 - Epoch: [166][ 40/ 71] Overall Loss 0.126852 Objective Loss 0.126852 LR 0.000250 Time 0.413404 +2025-05-20 18:06:39,049 - Epoch: [166][ 50/ 71] Overall Loss 0.127179 Objective Loss 0.127179 LR 0.000250 Time 0.403113 +2025-05-20 18:06:43,395 - Epoch: [166][ 60/ 71] Overall Loss 0.126789 Objective Loss 0.126789 LR 0.000250 Time 0.408367 +2025-05-20 18:06:46,417 - Epoch: [166][ 70/ 71] Overall Loss 0.128136 Objective Loss 0.128136 Top1 92.187500 LR 0.000250 Time 0.393188 +2025-05-20 18:06:46,525 - Epoch: [166][ 71/ 71] Overall Loss 0.128671 Objective Loss 0.128671 Top1 93.154762 LR 0.000250 Time 0.389179 +2025-05-20 18:06:46,554 - --- validate (epoch=166)----------- +2025-05-20 18:06:46,555 - 2000 samples (256 per mini-batch) +2025-05-20 18:06:50,579 - Epoch: [166][ 8/ 8] Loss 0.185029 Top1 93.150000 +2025-05-20 18:06:50,610 - ==> Top1: 93.150 Loss: 0.185 + +2025-05-20 18:06:50,610 - ==> Confusion: +[[948 37] + [100 915]] + +2025-05-20 18:06:50,614 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:06:50,614 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:06:50,622 - + +2025-05-20 18:06:50,622 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:06:55,451 - Epoch: [167][ 10/ 71] Overall Loss 0.127305 Objective Loss 0.127305 LR 0.000250 Time 0.482832 +2025-05-20 18:06:58,835 - Epoch: [167][ 20/ 71] 
Overall Loss 0.138484 Objective Loss 0.138484 LR 0.000250 Time 0.410588 +2025-05-20 18:07:03,228 - Epoch: [167][ 30/ 71] Overall Loss 0.131446 Objective Loss 0.131446 LR 0.000250 Time 0.420149 +2025-05-20 18:07:06,664 - Epoch: [167][ 40/ 71] Overall Loss 0.129080 Objective Loss 0.129080 LR 0.000250 Time 0.401018 +2025-05-20 18:07:10,881 - Epoch: [167][ 50/ 71] Overall Loss 0.127720 Objective Loss 0.127720 LR 0.000250 Time 0.405149 +2025-05-20 18:07:14,841 - Epoch: [167][ 60/ 71] Overall Loss 0.126622 Objective Loss 0.126622 LR 0.000250 Time 0.403612 +2025-05-20 18:07:18,072 - Epoch: [167][ 70/ 71] Overall Loss 0.125021 Objective Loss 0.125021 Top1 94.531250 LR 0.000250 Time 0.392115 +2025-05-20 18:07:18,175 - Epoch: [167][ 71/ 71] Overall Loss 0.123763 Objective Loss 0.123763 Top1 95.833333 LR 0.000250 Time 0.388028 +2025-05-20 18:07:18,211 - --- validate (epoch=167)----------- +2025-05-20 18:07:18,211 - 2000 samples (256 per mini-batch) +2025-05-20 18:07:21,756 - Epoch: [167][ 8/ 8] Loss 0.168911 Top1 93.500000 +2025-05-20 18:07:21,795 - ==> Top1: 93.500 Loss: 0.169 + +2025-05-20 18:07:21,795 - ==> Confusion: +[[917 68] + [ 62 953]] + +2025-05-20 18:07:21,813 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:07:21,813 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:07:21,820 - + +2025-05-20 18:07:21,820 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:07:27,006 - Epoch: [168][ 10/ 71] Overall Loss 0.129502 Objective Loss 0.129502 LR 0.000250 Time 0.518501 +2025-05-20 18:07:32,236 - Epoch: [168][ 20/ 71] Overall Loss 0.130960 Objective Loss 0.130960 LR 0.000250 Time 0.520721 +2025-05-20 18:07:35,225 - Epoch: [168][ 30/ 71] Overall Loss 0.126506 Objective Loss 0.126506 LR 0.000250 Time 0.446780 +2025-05-20 18:07:39,257 - Epoch: [168][ 40/ 71] Overall Loss 0.124302 Objective Loss 0.124302 LR 0.000250 Time 0.435867 +2025-05-20 18:07:42,374 - Epoch: [168][ 50/ 
71] Overall Loss 0.129151 Objective Loss 0.129151 LR 0.000250 Time 0.411031 +2025-05-20 18:07:45,823 - Epoch: [168][ 60/ 71] Overall Loss 0.130062 Objective Loss 0.130062 LR 0.000250 Time 0.400010 +2025-05-20 18:07:48,693 - Epoch: [168][ 70/ 71] Overall Loss 0.130197 Objective Loss 0.130197 Top1 92.968750 LR 0.000250 Time 0.383850 +2025-05-20 18:07:48,800 - Epoch: [168][ 71/ 71] Overall Loss 0.129791 Objective Loss 0.129791 Top1 94.047619 LR 0.000250 Time 0.379956 +2025-05-20 18:07:48,830 - --- validate (epoch=168)----------- +2025-05-20 18:07:48,831 - 2000 samples (256 per mini-batch) +2025-05-20 18:07:52,327 - Epoch: [168][ 8/ 8] Loss 0.164876 Top1 93.050000 +2025-05-20 18:07:52,363 - ==> Top1: 93.050 Loss: 0.165 + +2025-05-20 18:07:52,363 - ==> Confusion: +[[911 74] [ 65 950]] -2023-09-11 14:44:43,064 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:44:43,064 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:44:43,066 - - -2023-09-11 14:44:43,066 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:44:46,188 - Epoch: [172][ 10/ 71] Overall Loss 0.132363 Objective Loss 0.132363 LR 0.000250 Time 0.312066 -2023-09-11 14:44:48,399 - Epoch: [172][ 20/ 71] Overall Loss 0.135917 Objective Loss 0.135917 LR 0.000250 Time 0.266600 -2023-09-11 14:44:51,019 - Epoch: [172][ 30/ 71] Overall Loss 0.135509 Objective Loss 0.135509 LR 0.000250 Time 0.265050 -2023-09-11 14:44:53,348 - Epoch: [172][ 40/ 71] Overall Loss 0.135018 Objective Loss 0.135018 LR 0.000250 Time 0.257009 -2023-09-11 14:44:55,695 - Epoch: [172][ 50/ 71] Overall Loss 0.132839 Objective Loss 0.132839 LR 0.000250 Time 0.252533 -2023-09-11 14:44:58,583 - Epoch: [172][ 60/ 71] Overall Loss 0.132216 Objective Loss 0.132216 LR 0.000250 Time 0.258569 -2023-09-11 14:45:00,472 - Epoch: [172][ 70/ 71] Overall Loss 0.133331 Objective Loss 0.133331 Top1 92.187500 LR 0.000250 Time 0.248621 -2023-09-11 14:45:00,536 - Epoch: [172][ 71/ 71] 
Overall Loss 0.132964 Objective Loss 0.132964 Top1 93.154762 LR 0.000250 Time 0.246009 -2023-09-11 14:45:00,648 - --- validate (epoch=172)----------- -2023-09-11 14:45:00,648 - 2000 samples (256 per mini-batch) -2023-09-11 14:45:03,515 - Epoch: [172][ 8/ 8] Loss 0.172758 Top1 92.700000 -2023-09-11 14:45:03,612 - ==> Top1: 92.700 Loss: 0.173 - -2023-09-11 14:45:03,612 - ==> Confusion: -[[937 48] - [ 98 917]] +2025-05-20 18:07:52,379 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:07:52,379 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:07:52,386 - + +2025-05-20 18:07:52,386 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:07:57,136 - Epoch: [169][ 10/ 71] Overall Loss 0.117428 Objective Loss 0.117428 LR 0.000250 Time 0.474893 +2025-05-20 18:08:02,274 - Epoch: [169][ 20/ 71] Overall Loss 0.122728 Objective Loss 0.122728 LR 0.000250 Time 0.494350 +2025-05-20 18:08:05,165 - Epoch: [169][ 30/ 71] Overall Loss 0.125824 Objective Loss 0.125824 LR 0.000250 Time 0.425922 +2025-05-20 18:08:09,644 - Epoch: [169][ 40/ 71] Overall Loss 0.124704 Objective Loss 0.124704 LR 0.000250 Time 0.431392 +2025-05-20 18:08:13,140 - Epoch: [169][ 50/ 71] Overall Loss 0.126901 Objective Loss 0.126901 LR 0.000250 Time 0.415022 +2025-05-20 18:08:17,680 - Epoch: [169][ 60/ 71] Overall Loss 0.127351 Objective Loss 0.127351 LR 0.000250 Time 0.421520 +2025-05-20 18:08:20,843 - Epoch: [169][ 70/ 71] Overall Loss 0.126391 Objective Loss 0.126391 Top1 94.140625 LR 0.000250 Time 0.406489 +2025-05-20 18:08:20,951 - Epoch: [169][ 71/ 71] Overall Loss 0.127249 Objective Loss 0.127249 Top1 93.452381 LR 0.000250 Time 0.402276 +2025-05-20 18:08:20,985 - --- validate (epoch=169)----------- +2025-05-20 18:08:20,985 - 2000 samples (256 per mini-batch) +2025-05-20 18:08:24,667 - Epoch: [169][ 8/ 8] Loss 0.153234 Top1 93.850000 +2025-05-20 18:08:24,701 - ==> Top1: 93.850 Loss: 0.153 + +2025-05-20 
18:08:24,701 - ==> Confusion: +[[906 79] + [ 44 971]] -2023-09-11 14:45:03,628 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:45:03,628 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:45:03,633 - - -2023-09-11 14:45:03,633 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:45:06,772 - Epoch: [173][ 10/ 71] Overall Loss 0.127918 Objective Loss 0.127918 LR 0.000250 Time 0.313812 -2023-09-11 14:45:09,034 - Epoch: [173][ 20/ 71] Overall Loss 0.130674 Objective Loss 0.130674 LR 0.000250 Time 0.269995 -2023-09-11 14:45:11,497 - Epoch: [173][ 30/ 71] Overall Loss 0.133067 Objective Loss 0.133067 LR 0.000250 Time 0.262079 -2023-09-11 14:45:13,657 - Epoch: [173][ 40/ 71] Overall Loss 0.136387 Objective Loss 0.136387 LR 0.000250 Time 0.250566 -2023-09-11 14:45:16,126 - Epoch: [173][ 50/ 71] Overall Loss 0.134064 Objective Loss 0.134064 LR 0.000250 Time 0.249821 -2023-09-11 14:45:18,247 - Epoch: [173][ 60/ 71] Overall Loss 0.133562 Objective Loss 0.133562 LR 0.000250 Time 0.243522 -2023-09-11 14:45:20,435 - Epoch: [173][ 70/ 71] Overall Loss 0.133620 Objective Loss 0.133620 Top1 94.140625 LR 0.000250 Time 0.239992 -2023-09-11 14:45:20,510 - Epoch: [173][ 71/ 71] Overall Loss 0.133404 Objective Loss 0.133404 Top1 94.345238 LR 0.000250 Time 0.237662 -2023-09-11 14:45:20,607 - --- validate (epoch=173)----------- -2023-09-11 14:45:20,607 - 2000 samples (256 per mini-batch) -2023-09-11 14:45:23,143 - Epoch: [173][ 8/ 8] Loss 0.170964 Top1 92.950000 -2023-09-11 14:45:23,234 - ==> Top1: 92.950 Loss: 0.171 - -2023-09-11 14:45:23,234 - ==> Confusion: -[[892 93] - [ 48 967]] +2025-05-20 18:08:24,717 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:08:24,717 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:08:24,725 - + +2025-05-20 18:08:24,725 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 
18:08:29,844 - Epoch: [170][ 10/ 71] Overall Loss 0.116569 Objective Loss 0.116569 LR 0.000250 Time 0.511853 +2025-05-20 18:08:33,201 - Epoch: [170][ 20/ 71] Overall Loss 0.125086 Objective Loss 0.125086 LR 0.000250 Time 0.423740 +2025-05-20 18:08:38,388 - Epoch: [170][ 30/ 71] Overall Loss 0.124545 Objective Loss 0.124545 LR 0.000250 Time 0.455401 +2025-05-20 18:08:42,056 - Epoch: [170][ 40/ 71] Overall Loss 0.120737 Objective Loss 0.120737 LR 0.000250 Time 0.433239 +2025-05-20 18:08:47,374 - Epoch: [170][ 50/ 71] Overall Loss 0.121162 Objective Loss 0.121162 LR 0.000250 Time 0.452934 +2025-05-20 18:08:50,591 - Epoch: [170][ 60/ 71] Overall Loss 0.122473 Objective Loss 0.122473 LR 0.000250 Time 0.431060 +2025-05-20 18:08:54,771 - Epoch: [170][ 70/ 71] Overall Loss 0.121793 Objective Loss 0.121793 Top1 94.140625 LR 0.000250 Time 0.429159 +2025-05-20 18:08:54,862 - Epoch: [170][ 71/ 71] Overall Loss 0.121649 Objective Loss 0.121649 Top1 94.047619 LR 0.000250 Time 0.424387 +2025-05-20 18:08:54,898 - --- validate (epoch=170)----------- +2025-05-20 18:08:54,899 - 2000 samples (256 per mini-batch) +2025-05-20 18:08:58,949 - Epoch: [170][ 8/ 8] Loss 0.185146 Top1 92.950000 +2025-05-20 18:08:58,987 - ==> Top1: 92.950 Loss: 0.185 + +2025-05-20 18:08:58,987 - ==> Confusion: +[[913 72] + [ 69 946]] -2023-09-11 14:45:23,249 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:45:23,249 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:45:23,251 - - -2023-09-11 14:45:23,252 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:45:27,498 - Epoch: [174][ 10/ 71] Overall Loss 0.128723 Objective Loss 0.128723 LR 0.000250 Time 0.424620 -2023-09-11 14:45:29,522 - Epoch: [174][ 20/ 71] Overall Loss 0.126560 Objective Loss 0.126560 LR 0.000250 Time 0.313457 -2023-09-11 14:45:32,188 - Epoch: [174][ 30/ 71] Overall Loss 0.129779 Objective Loss 0.129779 LR 0.000250 Time 0.297843 -2023-09-11 14:45:34,612 - 
Epoch: [174][ 40/ 71] Overall Loss 0.133742 Objective Loss 0.133742 LR 0.000250 Time 0.283980 -2023-09-11 14:45:37,138 - Epoch: [174][ 50/ 71] Overall Loss 0.132168 Objective Loss 0.132168 LR 0.000250 Time 0.277691 -2023-09-11 14:45:39,145 - Epoch: [174][ 60/ 71] Overall Loss 0.131761 Objective Loss 0.131761 LR 0.000250 Time 0.264858 -2023-09-11 14:45:41,444 - Epoch: [174][ 70/ 71] Overall Loss 0.131278 Objective Loss 0.131278 Top1 93.750000 LR 0.000250 Time 0.259852 -2023-09-11 14:45:41,494 - Epoch: [174][ 71/ 71] Overall Loss 0.130672 Objective Loss 0.130672 Top1 94.642857 LR 0.000250 Time 0.256903 -2023-09-11 14:45:41,590 - --- validate (epoch=174)----------- -2023-09-11 14:45:41,591 - 2000 samples (256 per mini-batch) -2023-09-11 14:45:44,038 - Epoch: [174][ 8/ 8] Loss 0.170665 Top1 93.300000 -2023-09-11 14:45:44,127 - ==> Top1: 93.300 Loss: 0.171 - -2023-09-11 14:45:44,128 - ==> Confusion: -[[900 85] - [ 49 966]] +2025-05-20 18:08:58,995 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:08:58,995 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:08:59,002 - + +2025-05-20 18:08:59,002 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:09:04,965 - Epoch: [171][ 10/ 71] Overall Loss 0.137195 Objective Loss 0.137195 LR 0.000250 Time 0.596200 +2025-05-20 18:09:08,626 - Epoch: [171][ 20/ 71] Overall Loss 0.137810 Objective Loss 0.137810 LR 0.000250 Time 0.481146 +2025-05-20 18:09:13,638 - Epoch: [171][ 30/ 71] Overall Loss 0.138913 Objective Loss 0.138913 LR 0.000250 Time 0.487824 +2025-05-20 18:09:17,697 - Epoch: [171][ 40/ 71] Overall Loss 0.137196 Objective Loss 0.137196 LR 0.000250 Time 0.467319 +2025-05-20 18:09:22,309 - Epoch: [171][ 50/ 71] Overall Loss 0.134428 Objective Loss 0.134428 LR 0.000250 Time 0.466086 +2025-05-20 18:09:25,479 - Epoch: [171][ 60/ 71] Overall Loss 0.133119 Objective Loss 0.133119 LR 0.000250 Time 0.441241 +2025-05-20 18:09:29,873 
- Epoch: [171][ 70/ 71] Overall Loss 0.134781 Objective Loss 0.134781 Top1 94.140625 LR 0.000250 Time 0.440965 +2025-05-20 18:09:29,981 - Epoch: [171][ 71/ 71] Overall Loss 0.135947 Objective Loss 0.135947 Top1 93.452381 LR 0.000250 Time 0.436274 +2025-05-20 18:09:30,013 - --- validate (epoch=171)----------- +2025-05-20 18:09:30,014 - 2000 samples (256 per mini-batch) +2025-05-20 18:09:33,915 - Epoch: [171][ 8/ 8] Loss 0.157675 Top1 93.600000 +2025-05-20 18:09:33,946 - ==> Top1: 93.600 Loss: 0.158 + +2025-05-20 18:09:33,946 - ==> Confusion: +[[930 55] + [ 73 942]] -2023-09-11 14:45:44,139 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:45:44,140 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:45:44,142 - - -2023-09-11 14:45:44,142 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:45:48,033 - Epoch: [175][ 10/ 71] Overall Loss 0.143341 Objective Loss 0.143341 LR 0.000250 Time 0.389067 -2023-09-11 14:45:49,984 - Epoch: [175][ 20/ 71] Overall Loss 0.148124 Objective Loss 0.148124 LR 0.000250 Time 0.292067 -2023-09-11 14:45:52,734 - Epoch: [175][ 30/ 71] Overall Loss 0.141271 Objective Loss 0.141271 LR 0.000250 Time 0.286359 -2023-09-11 14:45:54,923 - Epoch: [175][ 40/ 71] Overall Loss 0.140253 Objective Loss 0.140253 LR 0.000250 Time 0.269495 -2023-09-11 14:45:57,512 - Epoch: [175][ 50/ 71] Overall Loss 0.139174 Objective Loss 0.139174 LR 0.000250 Time 0.267358 -2023-09-11 14:46:00,775 - Epoch: [175][ 60/ 71] Overall Loss 0.137687 Objective Loss 0.137687 LR 0.000250 Time 0.277170 -2023-09-11 14:46:02,587 - Epoch: [175][ 70/ 71] Overall Loss 0.135066 Objective Loss 0.135066 Top1 95.312500 LR 0.000250 Time 0.263468 -2023-09-11 14:46:02,665 - Epoch: [175][ 71/ 71] Overall Loss 0.134892 Objective Loss 0.134892 Top1 95.238095 LR 0.000250 Time 0.260851 -2023-09-11 14:46:02,762 - --- validate (epoch=175)----------- -2023-09-11 14:46:02,762 - 2000 samples (256 per mini-batch) -2023-09-11 
14:46:05,169 - Epoch: [175][ 8/ 8] Loss 0.184913 Top1 92.150000 -2023-09-11 14:46:05,264 - ==> Top1: 92.150 Loss: 0.185 - -2023-09-11 14:46:05,265 - ==> Confusion: +2025-05-20 18:09:33,963 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:09:33,963 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:09:33,970 - + +2025-05-20 18:09:33,971 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:09:38,870 - Epoch: [172][ 10/ 71] Overall Loss 0.121411 Objective Loss 0.121411 LR 0.000250 Time 0.489865 +2025-05-20 18:09:42,149 - Epoch: [172][ 20/ 71] Overall Loss 0.126045 Objective Loss 0.126045 LR 0.000250 Time 0.408862 +2025-05-20 18:09:46,233 - Epoch: [172][ 30/ 71] Overall Loss 0.126905 Objective Loss 0.126905 LR 0.000250 Time 0.408714 +2025-05-20 18:09:49,815 - Epoch: [172][ 40/ 71] Overall Loss 0.127569 Objective Loss 0.127569 LR 0.000250 Time 0.396049 +2025-05-20 18:09:54,262 - Epoch: [172][ 50/ 71] Overall Loss 0.126722 Objective Loss 0.126722 LR 0.000250 Time 0.405786 +2025-05-20 18:09:57,458 - Epoch: [172][ 60/ 71] Overall Loss 0.126798 Objective Loss 0.126798 LR 0.000250 Time 0.391409 +2025-05-20 18:10:01,354 - Epoch: [172][ 70/ 71] Overall Loss 0.127993 Objective Loss 0.127993 Top1 94.531250 LR 0.000250 Time 0.391149 +2025-05-20 18:10:01,462 - Epoch: [172][ 71/ 71] Overall Loss 0.129027 Objective Loss 0.129027 Top1 93.452381 LR 0.000250 Time 0.387157 +2025-05-20 18:10:01,494 - --- validate (epoch=172)----------- +2025-05-20 18:10:01,494 - 2000 samples (256 per mini-batch) +2025-05-20 18:10:04,864 - Epoch: [172][ 8/ 8] Loss 0.164023 Top1 93.600000 +2025-05-20 18:10:04,900 - ==> Top1: 93.600 Loss: 0.164 + +2025-05-20 18:10:04,901 - ==> Confusion: [[929 56] - [101 914]] - -2023-09-11 14:46:05,267 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:46:05,267 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar 
-2023-09-11 14:46:05,269 - - -2023-09-11 14:46:05,269 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:46:08,522 - Epoch: [176][ 10/ 71] Overall Loss 0.142931 Objective Loss 0.142931 LR 0.000250 Time 0.325181 -2023-09-11 14:46:11,589 - Epoch: [176][ 20/ 71] Overall Loss 0.138541 Objective Loss 0.138541 LR 0.000250 Time 0.315956 -2023-09-11 14:46:14,179 - Epoch: [176][ 30/ 71] Overall Loss 0.132532 Objective Loss 0.132532 LR 0.000250 Time 0.296945 -2023-09-11 14:46:16,220 - Epoch: [176][ 40/ 71] Overall Loss 0.133170 Objective Loss 0.133170 LR 0.000250 Time 0.273735 -2023-09-11 14:46:18,871 - Epoch: [176][ 50/ 71] Overall Loss 0.140070 Objective Loss 0.140070 LR 0.000250 Time 0.271986 -2023-09-11 14:46:20,942 - Epoch: [176][ 60/ 71] Overall Loss 0.140011 Objective Loss 0.140011 LR 0.000250 Time 0.261166 -2023-09-11 14:46:23,207 - Epoch: [176][ 70/ 71] Overall Loss 0.138617 Objective Loss 0.138617 Top1 92.968750 LR 0.000250 Time 0.256216 -2023-09-11 14:46:23,284 - Epoch: [176][ 71/ 71] Overall Loss 0.138779 Objective Loss 0.138779 Top1 93.154762 LR 0.000250 Time 0.253687 -2023-09-11 14:46:23,372 - --- validate (epoch=176)----------- -2023-09-11 14:46:23,372 - 2000 samples (256 per mini-batch) -2023-09-11 14:46:25,665 - Epoch: [176][ 8/ 8] Loss 0.179898 Top1 93.250000 -2023-09-11 14:46:25,761 - ==> Top1: 93.250 Loss: 0.180 - -2023-09-11 14:46:25,761 - ==> Confusion: -[[937 48] - [ 87 928]] + [ 72 943]] -2023-09-11 14:46:25,778 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:46:25,778 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:46:25,780 - - -2023-09-11 14:46:25,780 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:46:28,788 - Epoch: [177][ 10/ 71] Overall Loss 0.127826 Objective Loss 0.127826 LR 0.000250 Time 0.300687 -2023-09-11 14:46:30,933 - Epoch: [177][ 20/ 71] Overall Loss 0.134463 Objective Loss 0.134463 LR 0.000250 Time 0.257609 -2023-09-11 
14:46:33,454 - Epoch: [177][ 30/ 71] Overall Loss 0.134255 Objective Loss 0.134255 LR 0.000250 Time 0.255741 -2023-09-11 14:46:35,328 - Epoch: [177][ 40/ 71] Overall Loss 0.133685 Objective Loss 0.133685 LR 0.000250 Time 0.238661 -2023-09-11 14:46:38,021 - Epoch: [177][ 50/ 71] Overall Loss 0.132056 Objective Loss 0.132056 LR 0.000250 Time 0.244776 -2023-09-11 14:46:40,100 - Epoch: [177][ 60/ 71] Overall Loss 0.135550 Objective Loss 0.135550 LR 0.000250 Time 0.238617 -2023-09-11 14:46:42,492 - Epoch: [177][ 70/ 71] Overall Loss 0.131738 Objective Loss 0.131738 Top1 96.484375 LR 0.000250 Time 0.238700 -2023-09-11 14:46:42,608 - Epoch: [177][ 71/ 71] Overall Loss 0.132341 Objective Loss 0.132341 Top1 95.238095 LR 0.000250 Time 0.236979 -2023-09-11 14:46:42,702 - --- validate (epoch=177)----------- -2023-09-11 14:46:42,703 - 2000 samples (256 per mini-batch) -2023-09-11 14:46:45,844 - Epoch: [177][ 8/ 8] Loss 0.169291 Top1 92.950000 -2023-09-11 14:46:45,942 - ==> Top1: 92.950 Loss: 0.169 - -2023-09-11 14:46:45,943 - ==> Confusion: -[[904 81] - [ 60 955]] +2025-05-20 18:10:04,916 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:10:04,916 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:10:04,924 - + +2025-05-20 18:10:04,924 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:10:09,755 - Epoch: [173][ 10/ 71] Overall Loss 0.111461 Objective Loss 0.111461 LR 0.000250 Time 0.483026 +2025-05-20 18:10:12,585 - Epoch: [173][ 20/ 71] Overall Loss 0.114428 Objective Loss 0.114428 LR 0.000250 Time 0.382989 +2025-05-20 18:10:16,836 - Epoch: [173][ 30/ 71] Overall Loss 0.123212 Objective Loss 0.123212 LR 0.000250 Time 0.397039 +2025-05-20 18:10:20,745 - Epoch: [173][ 40/ 71] Overall Loss 0.124147 Objective Loss 0.124147 LR 0.000250 Time 0.395503 +2025-05-20 18:10:24,540 - Epoch: [173][ 50/ 71] Overall Loss 0.125985 Objective Loss 0.125985 LR 0.000250 Time 0.392280 
+2025-05-20 18:10:27,717 - Epoch: [173][ 60/ 71] Overall Loss 0.126285 Objective Loss 0.126285 LR 0.000250 Time 0.379859 +2025-05-20 18:10:31,602 - Epoch: [173][ 70/ 71] Overall Loss 0.124082 Objective Loss 0.124082 Top1 95.703125 LR 0.000250 Time 0.381089 +2025-05-20 18:10:31,699 - Epoch: [173][ 71/ 71] Overall Loss 0.124100 Objective Loss 0.124100 Top1 95.535714 LR 0.000250 Time 0.377082 +2025-05-20 18:10:31,732 - --- validate (epoch=173)----------- +2025-05-20 18:10:31,732 - 2000 samples (256 per mini-batch) +2025-05-20 18:10:35,451 - Epoch: [173][ 8/ 8] Loss 0.183864 Top1 93.250000 +2025-05-20 18:10:35,489 - ==> Top1: 93.250 Loss: 0.184 + +2025-05-20 18:10:35,489 - ==> Confusion: +[[942 43] + [ 92 923]] -2023-09-11 14:46:45,959 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:46:45,959 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:46:45,964 - - -2023-09-11 14:46:45,964 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:46:50,214 - Epoch: [178][ 10/ 71] Overall Loss 0.112617 Objective Loss 0.112617 LR 0.000250 Time 0.424936 -2023-09-11 14:46:52,422 - Epoch: [178][ 20/ 71] Overall Loss 0.123554 Objective Loss 0.123554 LR 0.000250 Time 0.322865 -2023-09-11 14:46:54,938 - Epoch: [178][ 30/ 71] Overall Loss 0.122626 Objective Loss 0.122626 LR 0.000250 Time 0.299089 -2023-09-11 14:46:57,180 - Epoch: [178][ 40/ 71] Overall Loss 0.126571 Objective Loss 0.126571 LR 0.000250 Time 0.280359 -2023-09-11 14:46:59,884 - Epoch: [178][ 50/ 71] Overall Loss 0.124641 Objective Loss 0.124641 LR 0.000250 Time 0.278363 -2023-09-11 14:47:02,016 - Epoch: [178][ 60/ 71] Overall Loss 0.125534 Objective Loss 0.125534 LR 0.000250 Time 0.267498 -2023-09-11 14:47:04,302 - Epoch: [178][ 70/ 71] Overall Loss 0.127335 Objective Loss 0.127335 Top1 94.140625 LR 0.000250 Time 0.261939 -2023-09-11 14:47:04,367 - Epoch: [178][ 71/ 71] Overall Loss 0.127490 Objective Loss 0.127490 Top1 93.750000 LR 0.000250 
Time 0.259163 -2023-09-11 14:47:04,465 - --- validate (epoch=178)----------- -2023-09-11 14:47:04,465 - 2000 samples (256 per mini-batch) -2023-09-11 14:47:07,577 - Epoch: [178][ 8/ 8] Loss 0.182017 Top1 92.150000 -2023-09-11 14:47:07,676 - ==> Top1: 92.150 Loss: 0.182 - -2023-09-11 14:47:07,676 - ==> Confusion: -[[884 101] - [ 56 959]] +2025-05-20 18:10:35,501 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:10:35,501 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:10:35,508 - + +2025-05-20 18:10:35,508 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:10:41,557 - Epoch: [174][ 10/ 71] Overall Loss 0.115783 Objective Loss 0.115783 LR 0.000250 Time 0.604838 +2025-05-20 18:10:44,248 - Epoch: [174][ 20/ 71] Overall Loss 0.121393 Objective Loss 0.121393 LR 0.000250 Time 0.436940 +2025-05-20 18:10:48,050 - Epoch: [174][ 30/ 71] Overall Loss 0.128194 Objective Loss 0.128194 LR 0.000250 Time 0.418012 +2025-05-20 18:10:52,843 - Epoch: [174][ 40/ 71] Overall Loss 0.128585 Objective Loss 0.128585 LR 0.000250 Time 0.433326 +2025-05-20 18:10:56,628 - Epoch: [174][ 50/ 71] Overall Loss 0.128664 Objective Loss 0.128664 LR 0.000250 Time 0.422347 +2025-05-20 18:11:00,219 - Epoch: [174][ 60/ 71] Overall Loss 0.128613 Objective Loss 0.128613 LR 0.000250 Time 0.411805 +2025-05-20 18:11:03,521 - Epoch: [174][ 70/ 71] Overall Loss 0.129944 Objective Loss 0.129944 Top1 94.531250 LR 0.000250 Time 0.400145 +2025-05-20 18:11:03,622 - Epoch: [174][ 71/ 71] Overall Loss 0.129663 Objective Loss 0.129663 Top1 95.238095 LR 0.000250 Time 0.395933 +2025-05-20 18:11:03,657 - --- validate (epoch=174)----------- +2025-05-20 18:11:03,657 - 2000 samples (256 per mini-batch) +2025-05-20 18:11:07,371 - Epoch: [174][ 8/ 8] Loss 0.167823 Top1 93.300000 +2025-05-20 18:11:07,407 - ==> Top1: 93.300 Loss: 0.168 + +2025-05-20 18:11:07,407 - ==> Confusion: +[[916 69] + [ 65 950]] -2023-09-11 
14:47:07,691 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:47:07,691 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:47:07,693 - - -2023-09-11 14:47:07,693 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:47:12,004 - Epoch: [179][ 10/ 71] Overall Loss 0.141197 Objective Loss 0.141197 LR 0.000250 Time 0.431028 -2023-09-11 14:47:14,195 - Epoch: [179][ 20/ 71] Overall Loss 0.136102 Objective Loss 0.136102 LR 0.000250 Time 0.325012 -2023-09-11 14:47:16,675 - Epoch: [179][ 30/ 71] Overall Loss 0.139273 Objective Loss 0.139273 LR 0.000250 Time 0.299339 -2023-09-11 14:47:18,748 - Epoch: [179][ 40/ 71] Overall Loss 0.132180 Objective Loss 0.132180 LR 0.000250 Time 0.276333 -2023-09-11 14:47:21,378 - Epoch: [179][ 50/ 71] Overall Loss 0.130518 Objective Loss 0.130518 LR 0.000250 Time 0.273661 -2023-09-11 14:47:23,404 - Epoch: [179][ 60/ 71] Overall Loss 0.131219 Objective Loss 0.131219 LR 0.000250 Time 0.261808 -2023-09-11 14:47:25,676 - Epoch: [179][ 70/ 71] Overall Loss 0.129990 Objective Loss 0.129990 Top1 95.312500 LR 0.000250 Time 0.256862 -2023-09-11 14:47:25,755 - Epoch: [179][ 71/ 71] Overall Loss 0.130563 Objective Loss 0.130563 Top1 94.642857 LR 0.000250 Time 0.254347 -2023-09-11 14:47:25,861 - --- validate (epoch=179)----------- -2023-09-11 14:47:25,861 - 2000 samples (256 per mini-batch) -2023-09-11 14:47:29,148 - Epoch: [179][ 8/ 8] Loss 0.174360 Top1 93.500000 -2023-09-11 14:47:29,251 - ==> Top1: 93.500 Loss: 0.174 - -2023-09-11 14:47:29,251 - ==> Confusion: -[[929 56] +2025-05-20 18:11:07,424 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:11:07,424 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:11:07,431 - + +2025-05-20 18:11:07,431 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:11:13,192 - Epoch: [175][ 10/ 71] Overall Loss 0.112718 Objective Loss 
0.112718 LR 0.000250 Time 0.576008 +2025-05-20 18:11:16,601 - Epoch: [175][ 20/ 71] Overall Loss 0.123704 Objective Loss 0.123704 LR 0.000250 Time 0.458412 +2025-05-20 18:11:20,296 - Epoch: [175][ 30/ 71] Overall Loss 0.125704 Objective Loss 0.125704 LR 0.000250 Time 0.428770 +2025-05-20 18:11:25,353 - Epoch: [175][ 40/ 71] Overall Loss 0.123662 Objective Loss 0.123662 LR 0.000250 Time 0.448001 +2025-05-20 18:11:29,398 - Epoch: [175][ 50/ 71] Overall Loss 0.122994 Objective Loss 0.122994 LR 0.000250 Time 0.439291 +2025-05-20 18:11:33,553 - Epoch: [175][ 60/ 71] Overall Loss 0.121655 Objective Loss 0.121655 LR 0.000250 Time 0.435331 +2025-05-20 18:11:36,657 - Epoch: [175][ 70/ 71] Overall Loss 0.121636 Objective Loss 0.121636 Top1 92.968750 LR 0.000250 Time 0.417472 +2025-05-20 18:11:36,765 - Epoch: [175][ 71/ 71] Overall Loss 0.123123 Objective Loss 0.123123 Top1 93.154762 LR 0.000250 Time 0.413114 +2025-05-20 18:11:36,798 - --- validate (epoch=175)----------- +2025-05-20 18:11:36,798 - 2000 samples (256 per mini-batch) +2025-05-20 18:11:40,751 - Epoch: [175][ 8/ 8] Loss 0.176355 Top1 92.900000 +2025-05-20 18:11:40,784 - ==> Top1: 92.900 Loss: 0.176 + +2025-05-20 18:11:40,785 - ==> Confusion: +[[917 68] [ 74 941]] -2023-09-11 14:47:29,256 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:47:29,256 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:47:29,260 - - -2023-09-11 14:47:29,261 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:47:32,336 - Epoch: [180][ 10/ 71] Overall Loss 0.137520 Objective Loss 0.137520 LR 0.000250 Time 0.307458 -2023-09-11 14:47:35,656 - Epoch: [180][ 20/ 71] Overall Loss 0.137803 Objective Loss 0.137803 LR 0.000250 Time 0.319717 -2023-09-11 14:47:38,262 - Epoch: [180][ 30/ 71] Overall Loss 0.139084 Objective Loss 0.139084 LR 0.000250 Time 0.300019 -2023-09-11 14:47:40,875 - Epoch: [180][ 40/ 71] Overall Loss 0.137650 Objective Loss 0.137650 LR 0.000250 
Time 0.290317 -2023-09-11 14:47:43,006 - Epoch: [180][ 50/ 71] Overall Loss 0.136043 Objective Loss 0.136043 LR 0.000250 Time 0.274865 -2023-09-11 14:47:46,529 - Epoch: [180][ 60/ 71] Overall Loss 0.133523 Objective Loss 0.133523 LR 0.000250 Time 0.287772 -2023-09-11 14:47:48,365 - Epoch: [180][ 70/ 71] Overall Loss 0.131087 Objective Loss 0.131087 Top1 95.703125 LR 0.000250 Time 0.272890 -2023-09-11 14:47:48,442 - Epoch: [180][ 71/ 71] Overall Loss 0.131108 Objective Loss 0.131108 Top1 95.238095 LR 0.000250 Time 0.270119 -2023-09-11 14:47:48,537 - --- validate (epoch=180)----------- -2023-09-11 14:47:48,537 - 2000 samples (256 per mini-batch) -2023-09-11 14:47:50,852 - Epoch: [180][ 8/ 8] Loss 0.185665 Top1 92.350000 -2023-09-11 14:47:50,945 - ==> Top1: 92.350 Loss: 0.186 - -2023-09-11 14:47:50,945 - ==> Confusion: -[[962 23] - [130 885]] - -2023-09-11 14:47:50,961 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:47:50,961 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:47:50,965 - - -2023-09-11 14:47:50,965 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:47:55,381 - Epoch: [181][ 10/ 71] Overall Loss 0.127112 Objective Loss 0.127112 LR 0.000250 Time 0.441536 -2023-09-11 14:47:57,339 - Epoch: [181][ 20/ 71] Overall Loss 0.127441 Objective Loss 0.127441 LR 0.000250 Time 0.318639 -2023-09-11 14:48:00,548 - Epoch: [181][ 30/ 71] Overall Loss 0.128379 Objective Loss 0.128379 LR 0.000250 Time 0.319383 -2023-09-11 14:48:02,749 - Epoch: [181][ 40/ 71] Overall Loss 0.131104 Objective Loss 0.131104 LR 0.000250 Time 0.294551 -2023-09-11 14:48:05,404 - Epoch: [181][ 50/ 71] Overall Loss 0.128096 Objective Loss 0.128096 LR 0.000250 Time 0.288736 -2023-09-11 14:48:07,484 - Epoch: [181][ 60/ 71] Overall Loss 0.125050 Objective Loss 0.125050 LR 0.000250 Time 0.275283 -2023-09-11 14:48:09,658 - Epoch: [181][ 70/ 71] Overall Loss 0.126111 Objective Loss 0.126111 Top1 94.531250 LR 0.000250 
Time 0.267005 -2023-09-11 14:48:09,785 - Epoch: [181][ 71/ 71] Overall Loss 0.126659 Objective Loss 0.126659 Top1 94.047619 LR 0.000250 Time 0.265034 -2023-09-11 14:48:09,880 - --- validate (epoch=181)----------- -2023-09-11 14:48:09,880 - 2000 samples (256 per mini-batch) -2023-09-11 14:48:13,059 - Epoch: [181][ 8/ 8] Loss 0.180538 Top1 92.800000 -2023-09-11 14:48:13,139 - ==> Top1: 92.800 Loss: 0.181 - -2023-09-11 14:48:13,140 - ==> Confusion: -[[878 107] - [ 37 978]] +2025-05-20 18:11:40,798 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:11:40,799 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:11:40,806 - + +2025-05-20 18:11:40,806 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:11:45,650 - Epoch: [176][ 10/ 71] Overall Loss 0.136916 Objective Loss 0.136916 LR 0.000250 Time 0.484367 +2025-05-20 18:11:49,086 - Epoch: [176][ 20/ 71] Overall Loss 0.136515 Objective Loss 0.136515 LR 0.000250 Time 0.413949 +2025-05-20 18:11:53,755 - Epoch: [176][ 30/ 71] Overall Loss 0.136656 Objective Loss 0.136656 LR 0.000250 Time 0.431580 +2025-05-20 18:11:57,290 - Epoch: [176][ 40/ 71] Overall Loss 0.132833 Objective Loss 0.132833 LR 0.000250 Time 0.412065 +2025-05-20 18:12:01,131 - Epoch: [176][ 50/ 71] Overall Loss 0.128377 Objective Loss 0.128377 LR 0.000250 Time 0.406470 +2025-05-20 18:12:04,041 - Epoch: [176][ 60/ 71] Overall Loss 0.126315 Objective Loss 0.126315 LR 0.000250 Time 0.387205 +2025-05-20 18:12:07,732 - Epoch: [176][ 70/ 71] Overall Loss 0.123386 Objective Loss 0.123386 Top1 94.140625 LR 0.000250 Time 0.384627 +2025-05-20 18:12:07,826 - Epoch: [176][ 71/ 71] Overall Loss 0.124480 Objective Loss 0.124480 Top1 93.452381 LR 0.000250 Time 0.380530 +2025-05-20 18:12:07,857 - --- validate (epoch=176)----------- +2025-05-20 18:12:07,857 - 2000 samples (256 per mini-batch) +2025-05-20 18:12:11,783 - Epoch: [176][ 8/ 8] Loss 0.188228 Top1 93.000000 
+2025-05-20 18:12:11,816 - ==> Top1: 93.000 Loss: 0.188 + +2025-05-20 18:12:11,816 - ==> Confusion: +[[890 95] + [ 45 970]] -2023-09-11 14:48:13,150 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:48:13,150 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:48:13,153 - - -2023-09-11 14:48:13,154 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:48:17,478 - Epoch: [182][ 10/ 71] Overall Loss 0.131095 Objective Loss 0.131095 LR 0.000250 Time 0.432397 -2023-09-11 14:48:19,696 - Epoch: [182][ 20/ 71] Overall Loss 0.133607 Objective Loss 0.133607 LR 0.000250 Time 0.327067 -2023-09-11 14:48:22,480 - Epoch: [182][ 30/ 71] Overall Loss 0.127322 Objective Loss 0.127322 LR 0.000250 Time 0.310830 -2023-09-11 14:48:24,617 - Epoch: [182][ 40/ 71] Overall Loss 0.126136 Objective Loss 0.126136 LR 0.000250 Time 0.286538 -2023-09-11 14:48:28,254 - Epoch: [182][ 50/ 71] Overall Loss 0.126390 Objective Loss 0.126390 LR 0.000250 Time 0.301978 -2023-09-11 14:48:30,405 - Epoch: [182][ 60/ 71] Overall Loss 0.126358 Objective Loss 0.126358 LR 0.000250 Time 0.287492 -2023-09-11 14:48:33,568 - Epoch: [182][ 70/ 71] Overall Loss 0.125898 Objective Loss 0.125898 Top1 94.140625 LR 0.000250 Time 0.291593 -2023-09-11 14:48:33,674 - Epoch: [182][ 71/ 71] Overall Loss 0.125612 Objective Loss 0.125612 Top1 94.345238 LR 0.000250 Time 0.288980 -2023-09-11 14:48:33,757 - --- validate (epoch=182)----------- -2023-09-11 14:48:33,757 - 2000 samples (256 per mini-batch) -2023-09-11 14:48:36,151 - Epoch: [182][ 8/ 8] Loss 0.189474 Top1 92.300000 -2023-09-11 14:48:36,247 - ==> Top1: 92.300 Loss: 0.189 - -2023-09-11 14:48:36,247 - ==> Confusion: -[[919 66] - [ 88 927]] +2025-05-20 18:12:11,823 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:12:11,823 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:12:11,831 - + +2025-05-20 18:12:11,831 - Training 
epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:12:16,718 - Epoch: [177][ 10/ 71] Overall Loss 0.141841 Objective Loss 0.141841 LR 0.000250 Time 0.488622 +2025-05-20 18:12:20,739 - Epoch: [177][ 20/ 71] Overall Loss 0.134050 Objective Loss 0.134050 LR 0.000250 Time 0.445350 +2025-05-20 18:12:24,732 - Epoch: [177][ 30/ 71] Overall Loss 0.127501 Objective Loss 0.127501 LR 0.000250 Time 0.429984 +2025-05-20 18:12:28,034 - Epoch: [177][ 40/ 71] Overall Loss 0.127354 Objective Loss 0.127354 LR 0.000250 Time 0.405035 +2025-05-20 18:12:31,858 - Epoch: [177][ 50/ 71] Overall Loss 0.125436 Objective Loss 0.125436 LR 0.000250 Time 0.400502 +2025-05-20 18:12:36,373 - Epoch: [177][ 60/ 71] Overall Loss 0.123878 Objective Loss 0.123878 LR 0.000250 Time 0.409002 +2025-05-20 18:12:39,428 - Epoch: [177][ 70/ 71] Overall Loss 0.124026 Objective Loss 0.124026 Top1 94.531250 LR 0.000250 Time 0.394213 +2025-05-20 18:12:39,520 - Epoch: [177][ 71/ 71] Overall Loss 0.123813 Objective Loss 0.123813 Top1 94.345238 LR 0.000250 Time 0.389948 +2025-05-20 18:12:39,554 - --- validate (epoch=177)----------- +2025-05-20 18:12:39,554 - 2000 samples (256 per mini-batch) +2025-05-20 18:12:43,295 - Epoch: [177][ 8/ 8] Loss 0.179051 Top1 93.000000 +2025-05-20 18:12:43,334 - ==> Top1: 93.000 Loss: 0.179 + +2025-05-20 18:12:43,335 - ==> Confusion: +[[924 61] + [ 79 936]] -2023-09-11 14:48:36,263 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:48:36,263 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:48:36,268 - - -2023-09-11 14:48:36,268 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:48:41,284 - Epoch: [183][ 10/ 71] Overall Loss 0.124312 Objective Loss 0.124312 LR 0.000250 Time 0.501569 -2023-09-11 14:48:44,425 - Epoch: [183][ 20/ 71] Overall Loss 0.123127 Objective Loss 0.123127 LR 0.000250 Time 0.407821 -2023-09-11 14:48:46,442 - Epoch: [183][ 30/ 71] Overall Loss 0.126233 Objective 
Loss 0.126233 LR 0.000250 Time 0.339107 -2023-09-11 14:48:49,075 - Epoch: [183][ 40/ 71] Overall Loss 0.128457 Objective Loss 0.128457 LR 0.000250 Time 0.320142 -2023-09-11 14:48:51,205 - Epoch: [183][ 50/ 71] Overall Loss 0.130799 Objective Loss 0.130799 LR 0.000250 Time 0.298710 -2023-09-11 14:48:53,901 - Epoch: [183][ 60/ 71] Overall Loss 0.130739 Objective Loss 0.130739 LR 0.000250 Time 0.293853 -2023-09-11 14:48:56,176 - Epoch: [183][ 70/ 71] Overall Loss 0.132916 Objective Loss 0.132916 Top1 93.359375 LR 0.000250 Time 0.284366 -2023-09-11 14:48:56,270 - Epoch: [183][ 71/ 71] Overall Loss 0.134186 Objective Loss 0.134186 Top1 92.559524 LR 0.000250 Time 0.281688 -2023-09-11 14:48:56,358 - --- validate (epoch=183)----------- -2023-09-11 14:48:56,359 - 2000 samples (256 per mini-batch) -2023-09-11 14:48:59,690 - Epoch: [183][ 8/ 8] Loss 0.170402 Top1 92.750000 -2023-09-11 14:48:59,782 - ==> Top1: 92.750 Loss: 0.170 - -2023-09-11 14:48:59,782 - ==> Confusion: -[[881 104] - [ 41 974]] - -2023-09-11 14:48:59,791 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:48:59,791 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:48:59,794 - - -2023-09-11 14:48:59,794 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:49:03,810 - Epoch: [184][ 10/ 71] Overall Loss 0.143028 Objective Loss 0.143028 LR 0.000250 Time 0.401600 -2023-09-11 14:49:06,404 - Epoch: [184][ 20/ 71] Overall Loss 0.131458 Objective Loss 0.131458 LR 0.000250 Time 0.330479 -2023-09-11 14:49:08,600 - Epoch: [184][ 30/ 71] Overall Loss 0.129792 Objective Loss 0.129792 LR 0.000250 Time 0.293509 -2023-09-11 14:49:12,310 - Epoch: [184][ 40/ 71] Overall Loss 0.129037 Objective Loss 0.129037 LR 0.000250 Time 0.312867 -2023-09-11 14:49:14,612 - Epoch: [184][ 50/ 71] Overall Loss 0.129801 Objective Loss 0.129801 LR 0.000250 Time 0.296331 -2023-09-11 14:49:17,197 - Epoch: [184][ 60/ 71] Overall Loss 0.130348 Objective Loss 0.130348 LR 
0.000250 Time 0.290012 -2023-09-11 14:49:19,132 - Epoch: [184][ 70/ 71] Overall Loss 0.127710 Objective Loss 0.127710 Top1 96.484375 LR 0.000250 Time 0.276221 -2023-09-11 14:49:19,215 - Epoch: [184][ 71/ 71] Overall Loss 0.127220 Objective Loss 0.127220 Top1 96.428571 LR 0.000250 Time 0.273504 -2023-09-11 14:49:19,310 - --- validate (epoch=184)----------- -2023-09-11 14:49:19,310 - 2000 samples (256 per mini-batch) -2023-09-11 14:49:21,724 - Epoch: [184][ 8/ 8] Loss 0.181878 Top1 92.850000 -2023-09-11 14:49:21,832 - ==> Top1: 92.850 Loss: 0.182 - -2023-09-11 14:49:21,833 - ==> Confusion: -[[923 62] - [ 81 934]] +2025-05-20 18:12:43,345 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:12:43,345 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:12:43,352 - + +2025-05-20 18:12:43,352 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:12:49,560 - Epoch: [178][ 10/ 71] Overall Loss 0.136575 Objective Loss 0.136575 LR 0.000250 Time 0.620707 +2025-05-20 18:12:52,688 - Epoch: [178][ 20/ 71] Overall Loss 0.138215 Objective Loss 0.138215 LR 0.000250 Time 0.466721 +2025-05-20 18:12:57,288 - Epoch: [178][ 30/ 71] Overall Loss 0.134212 Objective Loss 0.134212 LR 0.000250 Time 0.464477 +2025-05-20 18:13:00,648 - Epoch: [178][ 40/ 71] Overall Loss 0.129333 Objective Loss 0.129333 LR 0.000250 Time 0.432357 +2025-05-20 18:13:04,888 - Epoch: [178][ 50/ 71] Overall Loss 0.127834 Objective Loss 0.127834 LR 0.000250 Time 0.430669 +2025-05-20 18:13:07,766 - Epoch: [178][ 60/ 71] Overall Loss 0.127397 Objective Loss 0.127397 LR 0.000250 Time 0.406858 +2025-05-20 18:13:11,512 - Epoch: [178][ 70/ 71] Overall Loss 0.125787 Objective Loss 0.125787 Top1 94.140625 LR 0.000250 Time 0.402243 +2025-05-20 18:13:11,606 - Epoch: [178][ 71/ 71] Overall Loss 0.126436 Objective Loss 0.126436 Top1 93.750000 LR 0.000250 Time 0.397905 +2025-05-20 18:13:11,636 - --- validate (epoch=178)----------- 
+2025-05-20 18:13:11,636 - 2000 samples (256 per mini-batch) +2025-05-20 18:13:15,327 - Epoch: [178][ 8/ 8] Loss 0.168053 Top1 92.700000 +2025-05-20 18:13:15,358 - ==> Top1: 92.700 Loss: 0.168 + +2025-05-20 18:13:15,358 - ==> Confusion: +[[913 72] + [ 74 941]] -2023-09-11 14:49:21,849 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:49:21,849 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:49:21,854 - - -2023-09-11 14:49:21,854 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:49:27,056 - Epoch: [185][ 10/ 71] Overall Loss 0.125143 Objective Loss 0.125143 LR 0.000250 Time 0.520174 -2023-09-11 14:49:30,496 - Epoch: [185][ 20/ 71] Overall Loss 0.125934 Objective Loss 0.125934 LR 0.000250 Time 0.432060 -2023-09-11 14:49:33,068 - Epoch: [185][ 30/ 71] Overall Loss 0.126115 Objective Loss 0.126115 LR 0.000250 Time 0.373753 -2023-09-11 14:49:35,122 - Epoch: [185][ 40/ 71] Overall Loss 0.129154 Objective Loss 0.129154 LR 0.000250 Time 0.331661 -2023-09-11 14:49:37,882 - Epoch: [185][ 50/ 71] Overall Loss 0.128861 Objective Loss 0.128861 LR 0.000250 Time 0.320507 -2023-09-11 14:49:40,428 - Epoch: [185][ 60/ 71] Overall Loss 0.130212 Objective Loss 0.130212 LR 0.000250 Time 0.309522 -2023-09-11 14:49:42,322 - Epoch: [185][ 70/ 71] Overall Loss 0.129389 Objective Loss 0.129389 Top1 95.312500 LR 0.000250 Time 0.292357 -2023-09-11 14:49:42,428 - Epoch: [185][ 71/ 71] Overall Loss 0.128962 Objective Loss 0.128962 Top1 95.833333 LR 0.000250 Time 0.289727 -2023-09-11 14:49:42,505 - --- validate (epoch=185)----------- -2023-09-11 14:49:42,505 - 2000 samples (256 per mini-batch) -2023-09-11 14:49:45,178 - Epoch: [185][ 8/ 8] Loss 0.171780 Top1 92.600000 -2023-09-11 14:49:45,269 - ==> Top1: 92.600 Loss: 0.172 - -2023-09-11 14:49:45,269 - ==> Confusion: -[[900 85] - [ 63 952]] +2025-05-20 18:13:15,372 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:13:15,373 - Saving 
checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:13:15,380 - + +2025-05-20 18:13:15,380 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:13:20,147 - Epoch: [179][ 10/ 71] Overall Loss 0.123568 Objective Loss 0.123568 LR 0.000250 Time 0.476687 +2025-05-20 18:13:23,114 - Epoch: [179][ 20/ 71] Overall Loss 0.117097 Objective Loss 0.117097 LR 0.000250 Time 0.386654 +2025-05-20 18:13:26,958 - Epoch: [179][ 30/ 71] Overall Loss 0.120801 Objective Loss 0.120801 LR 0.000250 Time 0.385871 +2025-05-20 18:13:30,246 - Epoch: [179][ 40/ 71] Overall Loss 0.116027 Objective Loss 0.116027 LR 0.000250 Time 0.371617 +2025-05-20 18:13:33,985 - Epoch: [179][ 50/ 71] Overall Loss 0.118165 Objective Loss 0.118165 LR 0.000250 Time 0.372056 +2025-05-20 18:13:36,916 - Epoch: [179][ 60/ 71] Overall Loss 0.117086 Objective Loss 0.117086 LR 0.000250 Time 0.358892 +2025-05-20 18:13:40,475 - Epoch: [179][ 70/ 71] Overall Loss 0.121896 Objective Loss 0.121896 Top1 96.093750 LR 0.000250 Time 0.358460 +2025-05-20 18:13:40,583 - Epoch: [179][ 71/ 71] Overall Loss 0.121087 Objective Loss 0.121087 Top1 96.130952 LR 0.000250 Time 0.354929 +2025-05-20 18:13:40,624 - --- validate (epoch=179)----------- +2025-05-20 18:13:40,624 - 2000 samples (256 per mini-batch) +2025-05-20 18:13:44,186 - Epoch: [179][ 8/ 8] Loss 0.171225 Top1 93.150000 +2025-05-20 18:13:44,219 - ==> Top1: 93.150 Loss: 0.171 + +2025-05-20 18:13:44,220 - ==> Confusion: +[[915 70] + [ 67 948]] -2023-09-11 14:49:45,284 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:49:45,284 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:49:45,288 - - -2023-09-11 14:49:45,289 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:49:50,512 - Epoch: [186][ 10/ 71] Overall Loss 0.136277 Objective Loss 0.136277 LR 0.000250 Time 0.522292 -2023-09-11 14:49:53,162 - Epoch: [186][ 20/ 71] Overall 
Loss 0.136336 Objective Loss 0.136336 LR 0.000250 Time 0.393613 -2023-09-11 14:49:55,653 - Epoch: [186][ 30/ 71] Overall Loss 0.132441 Objective Loss 0.132441 LR 0.000250 Time 0.345428 -2023-09-11 14:49:57,665 - Epoch: [186][ 40/ 71] Overall Loss 0.133013 Objective Loss 0.133013 LR 0.000250 Time 0.309374 -2023-09-11 14:50:00,344 - Epoch: [186][ 50/ 71] Overall Loss 0.133572 Objective Loss 0.133572 LR 0.000250 Time 0.301065 -2023-09-11 14:50:02,394 - Epoch: [186][ 60/ 71] Overall Loss 0.131363 Objective Loss 0.131363 LR 0.000250 Time 0.285063 -2023-09-11 14:50:04,774 - Epoch: [186][ 70/ 71] Overall Loss 0.131570 Objective Loss 0.131570 Top1 89.843750 LR 0.000250 Time 0.278323 -2023-09-11 14:50:04,859 - Epoch: [186][ 71/ 71] Overall Loss 0.131634 Objective Loss 0.131634 Top1 90.773810 LR 0.000250 Time 0.275597 -2023-09-11 14:50:04,935 - --- validate (epoch=186)----------- -2023-09-11 14:50:04,935 - 2000 samples (256 per mini-batch) -2023-09-11 14:50:07,444 - Epoch: [186][ 8/ 8] Loss 0.168164 Top1 92.850000 -2023-09-11 14:50:07,553 - ==> Top1: 92.850 Loss: 0.168 - -2023-09-11 14:50:07,553 - ==> Confusion: -[[897 88] - [ 55 960]] +2025-05-20 18:13:44,229 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:13:44,229 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:13:44,236 - + +2025-05-20 18:13:44,236 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:13:50,197 - Epoch: [180][ 10/ 71] Overall Loss 0.126294 Objective Loss 0.126294 LR 0.000250 Time 0.596096 +2025-05-20 18:13:53,698 - Epoch: [180][ 20/ 71] Overall Loss 0.120207 Objective Loss 0.120207 LR 0.000250 Time 0.473031 +2025-05-20 18:13:57,853 - Epoch: [180][ 30/ 71] Overall Loss 0.118375 Objective Loss 0.118375 LR 0.000250 Time 0.453844 +2025-05-20 18:14:00,954 - Epoch: [180][ 40/ 71] Overall Loss 0.118775 Objective Loss 0.118775 LR 0.000250 Time 0.417910 +2025-05-20 18:14:04,797 - Epoch: [180][ 50/ 71] 
Overall Loss 0.122524 Objective Loss 0.122524 LR 0.000250 Time 0.411188 +2025-05-20 18:14:07,758 - Epoch: [180][ 60/ 71] Overall Loss 0.124488 Objective Loss 0.124488 LR 0.000250 Time 0.391991 +2025-05-20 18:14:11,590 - Epoch: [180][ 70/ 71] Overall Loss 0.124384 Objective Loss 0.124384 Top1 94.140625 LR 0.000250 Time 0.390730 +2025-05-20 18:14:11,698 - Epoch: [180][ 71/ 71] Overall Loss 0.123380 Objective Loss 0.123380 Top1 95.535714 LR 0.000250 Time 0.386753 +2025-05-20 18:14:11,729 - --- validate (epoch=180)----------- +2025-05-20 18:14:11,729 - 2000 samples (256 per mini-batch) +2025-05-20 18:14:15,354 - Epoch: [180][ 8/ 8] Loss 0.173934 Top1 92.950000 +2025-05-20 18:14:15,393 - ==> Top1: 92.950 Loss: 0.174 + +2025-05-20 18:14:15,394 - ==> Confusion: +[[888 97] + [ 44 971]] -2023-09-11 14:50:07,569 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:50:07,569 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:50:07,573 - - -2023-09-11 14:50:07,573 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:50:12,098 - Epoch: [187][ 10/ 71] Overall Loss 0.132983 Objective Loss 0.132983 LR 0.000250 Time 0.452409 -2023-09-11 14:50:14,476 - Epoch: [187][ 20/ 71] Overall Loss 0.131615 Objective Loss 0.131615 LR 0.000250 Time 0.345077 -2023-09-11 14:50:16,996 - Epoch: [187][ 30/ 71] Overall Loss 0.128881 Objective Loss 0.128881 LR 0.000250 Time 0.314040 -2023-09-11 14:50:19,012 - Epoch: [187][ 40/ 71] Overall Loss 0.129055 Objective Loss 0.129055 LR 0.000250 Time 0.285913 -2023-09-11 14:50:21,796 - Epoch: [187][ 50/ 71] Overall Loss 0.131309 Objective Loss 0.131309 LR 0.000250 Time 0.284419 -2023-09-11 14:50:23,881 - Epoch: [187][ 60/ 71] Overall Loss 0.130936 Objective Loss 0.130936 LR 0.000250 Time 0.271760 -2023-09-11 14:50:26,171 - Epoch: [187][ 70/ 71] Overall Loss 0.130248 Objective Loss 0.130248 Top1 96.093750 LR 0.000250 Time 0.265647 -2023-09-11 14:50:26,274 - Epoch: [187][ 71/ 71] 
Overall Loss 0.129915 Objective Loss 0.129915 Top1 95.535714 LR 0.000250 Time 0.263348 -2023-09-11 14:50:26,375 - --- validate (epoch=187)----------- -2023-09-11 14:50:26,375 - 2000 samples (256 per mini-batch) -2023-09-11 14:50:29,068 - Epoch: [187][ 8/ 8] Loss 0.170634 Top1 93.450000 -2023-09-11 14:50:29,171 - ==> Top1: 93.450 Loss: 0.171 - -2023-09-11 14:50:29,171 - ==> Confusion: -[[928 57] - [ 74 941]] +2025-05-20 18:14:15,411 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:14:15,411 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:14:15,418 - + +2025-05-20 18:14:15,418 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:14:20,603 - Epoch: [181][ 10/ 71] Overall Loss 0.129640 Objective Loss 0.129640 LR 0.000250 Time 0.518369 +2025-05-20 18:14:24,869 - Epoch: [181][ 20/ 71] Overall Loss 0.121468 Objective Loss 0.121468 LR 0.000250 Time 0.472482 +2025-05-20 18:14:29,014 - Epoch: [181][ 30/ 71] Overall Loss 0.120353 Objective Loss 0.120353 LR 0.000250 Time 0.453150 +2025-05-20 18:14:32,525 - Epoch: [181][ 40/ 71] Overall Loss 0.120401 Objective Loss 0.120401 LR 0.000250 Time 0.427617 +2025-05-20 18:14:36,409 - Epoch: [181][ 50/ 71] Overall Loss 0.118592 Objective Loss 0.118592 LR 0.000250 Time 0.419764 +2025-05-20 18:14:40,335 - Epoch: [181][ 60/ 71] Overall Loss 0.120171 Objective Loss 0.120171 LR 0.000250 Time 0.415233 +2025-05-20 18:14:44,005 - Epoch: [181][ 70/ 71] Overall Loss 0.120763 Objective Loss 0.120763 Top1 95.312500 LR 0.000250 Time 0.408338 +2025-05-20 18:14:44,113 - Epoch: [181][ 71/ 71] Overall Loss 0.120985 Objective Loss 0.120985 Top1 95.535714 LR 0.000250 Time 0.404106 +2025-05-20 18:14:44,149 - --- validate (epoch=181)----------- +2025-05-20 18:14:44,149 - 2000 samples (256 per mini-batch) +2025-05-20 18:14:47,899 - Epoch: [181][ 8/ 8] Loss 0.171550 Top1 92.950000 +2025-05-20 18:14:47,935 - ==> Top1: 92.950 Loss: 0.172 + +2025-05-20 
18:14:47,935 - ==> Confusion: +[[907 78] + [ 63 952]] -2023-09-11 14:50:29,187 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157] -2023-09-11 14:50:29,187 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar -2023-09-11 14:50:29,189 - - -2023-09-11 14:50:29,190 - Training epoch: 18000 samples (256 per mini-batch) -2023-09-11 14:50:33,681 - Epoch: [188][ 10/ 71] Overall Loss 0.113077 Objective Loss 0.113077 LR 0.000250 Time 0.449051 -2023-09-11 14:50:35,690 - Epoch: [188][ 20/ 71] Overall Loss 0.119249 Objective Loss 0.119249 LR 0.000250 Time 0.324976 -2023-09-11 14:50:38,123 - Epoch: [188][ 30/ 71] Overall Loss 0.129002 Objective Loss 0.129002 LR 0.000250 Time 0.297744 -2023-09-11 14:50:40,940 - Epoch: [188][ 40/ 71] Overall Loss 0.130348 Objective Loss 0.130348 LR 0.000250 Time 0.293724 -2023-09-11 14:50:43,114 - Epoch: [188][ 50/ 71] Overall Loss 0.129375 Objective Loss 0.129375 LR 0.000250 Time 0.278445 -2023-09-11 14:50:45,705 - Epoch: [188][ 60/ 71] Overall Loss 0.128252 Objective Loss 0.128252 LR 0.000250 Time 0.275215 -2023-09-11 14:50:47,621 - Epoch: [188][ 70/ 71] Overall Loss 0.128180 Objective Loss 0.128180 Top1 94.531250 LR 0.000250 Time 0.263274 -2023-09-11 14:50:47,734 - Epoch: [188][ 71/ 71] Overall Loss 0.127477 Objective Loss 0.127477 Top1 94.940476 LR 0.000250 Time 0.261146 -2023-09-11 14:50:47,832 - --- validate (epoch=188)----------- -2023-09-11 14:50:47,832 - 2000 samples (256 per mini-batch) -2023-09-11 14:50:50,868 - Epoch: [188][ 8/ 8] Loss 0.169592 Top1 93.400000 -2023-09-11 14:50:50,965 - ==> Top1: 93.400 Loss: 0.170 - -2023-09-11 14:50:50,965 - ==> Confusion: -[[927 58] - [ 74 941]] +2025-05-20 18:14:47,951 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:14:47,951 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:14:47,958 - + +2025-05-20 18:14:47,959 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 
18:14:53,838 - Epoch: [182][ 10/ 71] Overall Loss 0.113278 Objective Loss 0.113278 LR 0.000250 Time 0.587911 +2025-05-20 18:14:57,168 - Epoch: [182][ 20/ 71] Overall Loss 0.119230 Objective Loss 0.119230 LR 0.000250 Time 0.460411 +2025-05-20 18:15:01,238 - Epoch: [182][ 30/ 71] Overall Loss 0.122119 Objective Loss 0.122119 LR 0.000250 Time 0.442602 +2025-05-20 18:15:04,183 - Epoch: [182][ 40/ 71] Overall Loss 0.124017 Objective Loss 0.124017 LR 0.000250 Time 0.405576 +2025-05-20 18:15:07,785 - Epoch: [182][ 50/ 71] Overall Loss 0.121758 Objective Loss 0.121758 LR 0.000250 Time 0.396485 +2025-05-20 18:15:11,727 - Epoch: [182][ 60/ 71] Overall Loss 0.120466 Objective Loss 0.120466 LR 0.000250 Time 0.396107 +2025-05-20 18:15:15,432 - Epoch: [182][ 70/ 71] Overall Loss 0.123065 Objective Loss 0.123065 Top1 94.140625 LR 0.000250 Time 0.392445 +2025-05-20 18:15:15,532 - Epoch: [182][ 71/ 71] Overall Loss 0.123036 Objective Loss 0.123036 Top1 94.047619 LR 0.000250 Time 0.388313 +2025-05-20 18:15:15,565 - --- validate (epoch=182)----------- +2025-05-20 18:15:15,565 - 2000 samples (256 per mini-batch) +2025-05-20 18:15:19,220 - Epoch: [182][ 8/ 8] Loss 0.172317 Top1 93.450000 +2025-05-20 18:15:19,251 - ==> Top1: 93.450 Loss: 0.172 + +2025-05-20 18:15:19,251 - ==> Confusion: +[[907 78] + [ 53 962]] + +2025-05-20 18:15:19,268 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:15:19,268 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:15:19,275 - + +2025-05-20 18:15:19,275 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:15:24,288 - Epoch: [183][ 10/ 71] Overall Loss 0.121276 Objective Loss 0.121276 LR 0.000250 Time 0.501209 +2025-05-20 18:15:27,955 - Epoch: [183][ 20/ 71] Overall Loss 0.120800 Objective Loss 0.120800 LR 0.000250 Time 0.433919 +2025-05-20 18:15:31,296 - Epoch: [183][ 30/ 71] Overall Loss 0.126355 Objective Loss 0.126355 LR 0.000250 Time 0.400621 
+2025-05-20 18:15:36,206 - Epoch: [183][ 40/ 71] Overall Loss 0.122297 Objective Loss 0.122297 LR 0.000250 Time 0.423208
+2025-05-20 18:15:39,128 - Epoch: [183][ 50/ 71] Overall Loss 0.122637 Objective Loss 0.122637 LR 0.000250 Time 0.397015
+2025-05-20 18:15:42,490 - Epoch: [183][ 60/ 71] Overall Loss 0.124236 Objective Loss 0.124236 LR 0.000250 Time 0.386867
+2025-05-20 18:15:45,873 - Epoch: [183][ 70/ 71] Overall Loss 0.125734 Objective Loss 0.125734 Top1 91.406250 LR 0.000250 Time 0.379927
+2025-05-20 18:15:45,975 - Epoch: [183][ 71/ 71] Overall Loss 0.126259 Objective Loss 0.126259 Top1 91.666667 LR 0.000250 Time 0.376015
+2025-05-20 18:15:46,007 - --- validate (epoch=183)-----------
+2025-05-20 18:15:46,007 - 2000 samples (256 per mini-batch)
+2025-05-20 18:15:49,308 - Epoch: [183][ 8/ 8] Loss 0.171394 Top1 93.650000
+2025-05-20 18:15:49,344 - ==> Top1: 93.650 Loss: 0.171
+
+2025-05-20 18:15:49,344 - ==> Confusion:
+[[922 63]
+ [ 64 951]]
+
+2025-05-20 18:15:49,361 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:15:49,361 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:15:49,368 - 
+
+2025-05-20 18:15:49,368 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:15:54,319 - Epoch: [184][ 10/ 71] Overall Loss 0.130651 Objective Loss 0.130651 LR 0.000250 Time 0.494976
+2025-05-20 18:15:58,907 - Epoch: [184][ 20/ 71] Overall Loss 0.134064 Objective Loss 0.134064 LR 0.000250 Time 0.476881
+2025-05-20 18:16:02,763 - Epoch: [184][ 30/ 71] Overall Loss 0.134191 Objective Loss 0.134191 LR 0.000250 Time 0.446442
+2025-05-20 18:16:07,325 - Epoch: [184][ 40/ 71] Overall Loss 0.133697 Objective Loss 0.133697 LR 0.000250 Time 0.448886
+2025-05-20 18:16:11,221 - Epoch: [184][ 50/ 71] Overall Loss 0.133047 Objective Loss 0.133047 LR 0.000250 Time 0.437017
+2025-05-20 18:16:14,703 - Epoch: [184][ 60/ 71] Overall Loss 0.132079 Objective Loss 0.132079 LR 0.000250 Time 0.422211
+2025-05-20 18:16:18,034 - Epoch: [184][ 70/ 71] Overall Loss 0.130983 Objective Loss 0.130983 Top1 95.703125 LR 0.000250 Time 0.409479
+2025-05-20 18:16:18,126 - Epoch: [184][ 71/ 71] Overall Loss 0.130808 Objective Loss 0.130808 Top1 95.535714 LR 0.000250 Time 0.405005
+2025-05-20 18:16:18,162 - --- validate (epoch=184)-----------
+2025-05-20 18:16:18,163 - 2000 samples (256 per mini-batch)
+2025-05-20 18:16:21,755 - Epoch: [184][ 8/ 8] Loss 0.180645 Top1 92.600000
+2025-05-20 18:16:21,786 - ==> Top1: 92.600 Loss: 0.181
+
+2025-05-20 18:16:21,787 - ==> Confusion:
+[[894 91]
+ [ 57 958]]
+
+2025-05-20 18:16:21,803 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:16:21,803 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:16:21,810 - 
+
+2025-05-20 18:16:21,810 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:16:26,321 - Epoch: [185][ 10/ 71] Overall Loss 0.121398 Objective Loss 0.121398 LR 0.000250 Time 0.450961
+2025-05-20 18:16:29,455 - Epoch: [185][ 20/ 71] Overall Loss 0.120678 Objective Loss 0.120678 LR 0.000250 Time 0.382172
+2025-05-20 18:16:33,010 - Epoch: [185][ 30/ 71] Overall Loss 0.114624 Objective Loss 0.114624 LR 0.000250 Time 0.373274
+2025-05-20 18:16:38,135 - Epoch: [185][ 40/ 71] Overall Loss 0.117751 Objective Loss 0.117751 LR 0.000250 Time 0.408080
+2025-05-20 18:16:41,662 - Epoch: [185][ 50/ 71] Overall Loss 0.122873 Objective Loss 0.122873 LR 0.000250 Time 0.396988
+2025-05-20 18:16:45,141 - Epoch: [185][ 60/ 71] Overall Loss 0.125932 Objective Loss 0.125932 LR 0.000250 Time 0.388808
+2025-05-20 18:16:48,081 - Epoch: [185][ 70/ 71] Overall Loss 0.125631 Objective Loss 0.125631 Top1 93.359375 LR 0.000250 Time 0.375260
+2025-05-20 18:16:48,191 - Epoch: [185][ 71/ 71] Overall Loss 0.125229 Objective Loss 0.125229 Top1 94.345238 LR 0.000250 Time 0.371519
+2025-05-20 18:16:48,223 - --- validate (epoch=185)-----------
+2025-05-20 18:16:48,223 - 2000 samples (256 per mini-batch)
+2025-05-20 18:16:51,516 - Epoch: [185][ 8/ 8] Loss 0.169579 Top1 93.850000
+2025-05-20 18:16:51,555 - ==> Top1: 93.850 Loss: 0.170
+
+2025-05-20 18:16:51,555 - ==> Confusion:
+[[910 75]
+ [ 48 967]]
 
-2023-09-11 14:50:50,967 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:50:50,967 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:50:50,970 - 
-
-2023-09-11 14:50:50,970 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:50:54,295 - Epoch: [189][ 10/ 71] Overall Loss 0.121154 Objective Loss 0.121154 LR 0.000250 Time 0.332412
-2023-09-11 14:50:57,374 - Epoch: [189][ 20/ 71] Overall Loss 0.134965 Objective Loss 0.134965 LR 0.000250 Time 0.320142
-2023-09-11 14:50:59,933 - Epoch: [189][ 30/ 71] Overall Loss 0.128277 Objective Loss 0.128277 LR 0.000250 Time 0.298725
-2023-09-11 14:51:01,975 - Epoch: [189][ 40/ 71] Overall Loss 0.130441 Objective Loss 0.130441 LR 0.000250 Time 0.275076
-2023-09-11 14:51:04,632 - Epoch: [189][ 50/ 71] Overall Loss 0.128900 Objective Loss 0.128900 LR 0.000250 Time 0.273207
-2023-09-11 14:51:06,693 - Epoch: [189][ 60/ 71] Overall Loss 0.127390 Objective Loss 0.127390 LR 0.000250 Time 0.262012
-2023-09-11 14:51:08,971 - Epoch: [189][ 70/ 71] Overall Loss 0.125540 Objective Loss 0.125540 Top1 95.703125 LR 0.000250 Time 0.257119
-2023-09-11 14:51:09,052 - Epoch: [189][ 71/ 71] Overall Loss 0.126030 Objective Loss 0.126030 Top1 95.238095 LR 0.000250 Time 0.254633
-2023-09-11 14:51:09,141 - --- validate (epoch=189)-----------
-2023-09-11 14:51:09,141 - 2000 samples (256 per mini-batch)
-2023-09-11 14:51:12,092 - Epoch: [189][ 8/ 8] Loss 0.170420 Top1 93.050000
-2023-09-11 14:51:12,194 - ==> Top1: 93.050 Loss: 0.170
-
-2023-09-11 14:51:12,194 - ==> Confusion:
-[[936 49]
- [ 90 925]]
+2025-05-20 18:16:51,571 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:16:51,571 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:16:51,578 - 
+
+2025-05-20 18:16:51,578 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:16:56,676 - Epoch: [186][ 10/ 71] Overall Loss 0.107258 Objective Loss 0.107258 LR 0.000250 Time 0.509669
+2025-05-20 18:17:00,598 - Epoch: [186][ 20/ 71] Overall Loss 0.114366 Objective Loss 0.114366 LR 0.000250 Time 0.450910
+2025-05-20 18:17:04,511 - Epoch: [186][ 30/ 71] Overall Loss 0.120344 Objective Loss 0.120344 LR 0.000250 Time 0.431043
+2025-05-20 18:17:08,778 - Epoch: [186][ 40/ 71] Overall Loss 0.120218 Objective Loss 0.120218 LR 0.000250 Time 0.429946
+2025-05-20 18:17:13,003 - Epoch: [186][ 50/ 71] Overall Loss 0.121200 Objective Loss 0.121200 LR 0.000250 Time 0.428446
+2025-05-20 18:17:16,563 - Epoch: [186][ 60/ 71] Overall Loss 0.120323 Objective Loss 0.120323 LR 0.000250 Time 0.416372
+2025-05-20 18:17:19,870 - Epoch: [186][ 70/ 71] Overall Loss 0.121317 Objective Loss 0.121317 Top1 94.140625 LR 0.000250 Time 0.404123
+2025-05-20 18:17:19,970 - Epoch: [186][ 71/ 71] Overall Loss 0.120235 Objective Loss 0.120235 Top1 95.238095 LR 0.000250 Time 0.399839
+2025-05-20 18:17:20,002 - --- validate (epoch=186)-----------
+2025-05-20 18:17:20,002 - 2000 samples (256 per mini-batch)
+2025-05-20 18:17:23,711 - Epoch: [186][ 8/ 8] Loss 0.169186 Top1 94.000000
+2025-05-20 18:17:23,745 - ==> Top1: 94.000 Loss: 0.169
+
+2025-05-20 18:17:23,745 - ==> Confusion:
+[[942 43]
+ [ 77 938]]
 
-2023-09-11 14:51:12,209 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:51:12,209 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:51:12,214 - 
-
-2023-09-11 14:51:12,214 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:51:16,576 - Epoch: [190][ 10/ 71] Overall Loss 0.131297 Objective Loss 0.131297 LR 0.000250 Time 0.436184
-2023-09-11 14:51:18,570 - Epoch: [190][ 20/ 71] Overall Loss 0.132174 Objective Loss 0.132174 LR 0.000250 Time 0.317788
-2023-09-11 14:51:21,239 - Epoch: [190][ 30/ 71] Overall Loss 0.131618 Objective Loss 0.131618 LR 0.000250 Time 0.300805
-2023-09-11 14:51:23,316 - Epoch: [190][ 40/ 71] Overall Loss 0.133098 Objective Loss 0.133098 LR 0.000250 Time 0.277512
-2023-09-11 14:51:25,940 - Epoch: [190][ 50/ 71] Overall Loss 0.132999 Objective Loss 0.132999 LR 0.000250 Time 0.274496
-2023-09-11 14:51:28,600 - Epoch: [190][ 60/ 71] Overall Loss 0.131805 Objective Loss 0.131805 LR 0.000250 Time 0.273066
-2023-09-11 14:51:30,819 - Epoch: [190][ 70/ 71] Overall Loss 0.130346 Objective Loss 0.130346 Top1 95.703125 LR 0.000250 Time 0.265753
-2023-09-11 14:51:30,897 - Epoch: [190][ 71/ 71] Overall Loss 0.130403 Objective Loss 0.130403 Top1 95.833333 LR 0.000250 Time 0.263111
-2023-09-11 14:51:30,994 - --- validate (epoch=190)-----------
-2023-09-11 14:51:30,994 - 2000 samples (256 per mini-batch)
-2023-09-11 14:51:34,055 - Epoch: [190][ 8/ 8] Loss 0.168069 Top1 92.700000
-2023-09-11 14:51:34,167 - ==> Top1: 92.700 Loss: 0.168
-
-2023-09-11 14:51:34,167 - ==> Confusion:
-[[914 71]
- [ 75 940]]
+2025-05-20 18:17:23,758 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:17:23,758 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:17:23,765 - 
+
+2025-05-20 18:17:23,765 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:17:29,735 - Epoch: [187][ 10/ 71] Overall Loss 0.110216 Objective Loss 0.110216 LR 0.000250 Time 0.596959
+2025-05-20 18:17:33,602 - Epoch: [187][ 20/ 71] Overall Loss 0.119227 Objective Loss 0.119227 LR 0.000250 Time 0.491764
+2025-05-20 18:17:37,418 - Epoch: [187][ 30/ 71] Overall Loss 0.122239 Objective Loss 0.122239 LR 0.000250 Time 0.455042
+2025-05-20 18:17:41,177 - Epoch: [187][ 40/ 71] Overall Loss 0.121775 Objective Loss 0.121775 LR 0.000250 Time 0.435263
+2025-05-20 18:17:45,111 - Epoch: [187][ 50/ 71] Overall Loss 0.119690 Objective Loss 0.119690 LR 0.000250 Time 0.426879
+2025-05-20 18:17:48,358 - Epoch: [187][ 60/ 71] Overall Loss 0.120503 Objective Loss 0.120503 LR 0.000250 Time 0.409838
+2025-05-20 18:17:52,390 - Epoch: [187][ 70/ 71] Overall Loss 0.119898 Objective Loss 0.119898 Top1 95.703125 LR 0.000250 Time 0.408891
+2025-05-20 18:17:52,499 - Epoch: [187][ 71/ 71] Overall Loss 0.121008 Objective Loss 0.121008 Top1 94.940476 LR 0.000250 Time 0.404661
+2025-05-20 18:17:52,535 - --- validate (epoch=187)-----------
+2025-05-20 18:17:52,535 - 2000 samples (256 per mini-batch)
+2025-05-20 18:17:55,970 - Epoch: [187][ 8/ 8] Loss 0.165146 Top1 93.550000
+2025-05-20 18:17:56,006 - ==> Top1: 93.550 Loss: 0.165
+
+2025-05-20 18:17:56,007 - ==> Confusion:
+[[933 52]
+ [ 77 938]]
 
-2023-09-11 14:51:34,184 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:51:34,184 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:51:34,186 - 
-
-2023-09-11 14:51:34,186 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:51:38,640 - Epoch: [191][ 10/ 71] Overall Loss 0.120242 Objective Loss 0.120242 LR 0.000250 Time 0.445329
-2023-09-11 14:51:41,832 - Epoch: [191][ 20/ 71] Overall Loss 0.132689 Objective Loss 0.132689 LR 0.000250 Time 0.382260
-2023-09-11 14:51:43,724 - Epoch: [191][ 30/ 71] Overall Loss 0.131703 Objective Loss 0.131703 LR 0.000250 Time 0.317870
-2023-09-11 14:51:46,368 - Epoch: [191][ 40/ 71] Overall Loss 0.127508 Objective Loss 0.127508 LR 0.000250 Time 0.304506
-2023-09-11 14:51:48,496 - Epoch: [191][ 50/ 71] Overall Loss 0.127173 Objective Loss 0.127173 LR 0.000250 Time 0.286167
-2023-09-11 14:51:51,831 - Epoch: [191][ 60/ 71] Overall Loss 0.128063 Objective Loss 0.128063 LR 0.000250 Time 0.294043
-2023-09-11 14:51:54,143 - Epoch: [191][ 70/ 71] Overall Loss 0.129054 Objective Loss 0.129054 Top1 92.187500 LR 0.000250 Time 0.285057
-2023-09-11 14:51:54,224 - Epoch: [191][ 71/ 71] Overall Loss 0.130081 Objective Loss 0.130081 Top1 92.261905 LR 0.000250 Time 0.282188
-2023-09-11 14:51:54,316 - --- validate (epoch=191)-----------
-2023-09-11 14:51:54,316 - 2000 samples (256 per mini-batch)
-2023-09-11 14:51:56,727 - Epoch: [191][ 8/ 8] Loss 0.166168 Top1 92.800000
-2023-09-11 14:51:56,815 - ==> Top1: 92.800 Loss: 0.166
-
-2023-09-11 14:51:56,816 - ==> Confusion:
-[[929 56]
- [ 88 927]]
+2025-05-20 18:17:56,023 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:17:56,023 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:17:56,030 - 
+
+2025-05-20 18:17:56,030 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:18:00,664 - Epoch: [188][ 10/ 71] Overall Loss 0.142731 Objective Loss 0.142731 LR 0.000250 Time 0.463285
+2025-05-20 18:18:05,570 - Epoch: [188][ 20/ 71] Overall Loss 0.129689 Objective Loss 0.129689 LR 0.000250 Time 0.476914
+2025-05-20 18:18:08,535 - Epoch: [188][ 30/ 71] Overall Loss 0.129325 Objective Loss 0.129325 LR 0.000250 Time 0.416769
+2025-05-20 18:18:12,462 - Epoch: [188][ 40/ 71] Overall Loss 0.124962 Objective Loss 0.124962 LR 0.000250 Time 0.410762
+2025-05-20 18:18:15,512 - Epoch: [188][ 50/ 71] Overall Loss 0.120553 Objective Loss 0.120553 LR 0.000250 Time 0.389602
+2025-05-20 18:18:19,184 - Epoch: [188][ 60/ 71] Overall Loss 0.120443 Objective Loss 0.120443 LR 0.000250 Time 0.385855
+2025-05-20 18:18:22,054 - Epoch: [188][ 70/ 71] Overall Loss 0.119590 Objective Loss 0.119590 Top1 93.750000 LR 0.000250 Time 0.371729
+2025-05-20 18:18:22,154 - Epoch: [188][ 71/ 71] Overall Loss 0.119768 Objective Loss 0.119768 Top1 93.452381 LR 0.000250 Time 0.367900
+2025-05-20 18:18:22,186 - --- validate (epoch=188)-----------
+2025-05-20 18:18:22,187 - 2000 samples (256 per mini-batch)
+2025-05-20 18:18:26,170 - Epoch: [188][ 8/ 8] Loss 0.173229 Top1 93.600000
+2025-05-20 18:18:26,207 - ==> Top1: 93.600 Loss: 0.173
+
+2025-05-20 18:18:26,207 - ==> Confusion:
+[[937 48]
+ [ 80 935]]
 
-2023-09-11 14:51:56,817 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:51:56,817 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:51:56,819 - 
-
-2023-09-11 14:51:56,819 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:52:01,261 - Epoch: [192][ 10/ 71] Overall Loss 0.140559 Objective Loss 0.140559 LR 0.000250 Time 0.444128
-2023-09-11 14:52:03,277 - Epoch: [192][ 20/ 71] Overall Loss 0.135898 Objective Loss 0.135898 LR 0.000250 Time 0.322822
-2023-09-11 14:52:05,714 - Epoch: [192][ 30/ 71] Overall Loss 0.124206 Objective Loss 0.124206 LR 0.000250 Time 0.296460
-2023-09-11 14:52:07,929 - Epoch: [192][ 40/ 71] Overall Loss 0.124901 Objective Loss 0.124901 LR 0.000250 Time 0.277700
-2023-09-11 14:52:10,285 - Epoch: [192][ 50/ 71] Overall Loss 0.125522 Objective Loss 0.125522 LR 0.000250 Time 0.269277
-2023-09-11 14:52:12,550 - Epoch: [192][ 60/ 71] Overall Loss 0.126222 Objective Loss 0.126222 LR 0.000250 Time 0.262138
-2023-09-11 14:52:14,796 - Epoch: [192][ 70/ 71] Overall Loss 0.124071 Objective Loss 0.124071 Top1 95.703125 LR 0.000250 Time 0.256767
-2023-09-11 14:52:14,904 - Epoch: [192][ 71/ 71] Overall Loss 0.124169 Objective Loss 0.124169 Top1 95.238095 LR 0.000250 Time 0.254670
-2023-09-11 14:52:14,989 - --- validate (epoch=192)-----------
-2023-09-11 14:52:14,990 - 2000 samples (256 per mini-batch)
-2023-09-11 14:52:17,373 - Epoch: [192][ 8/ 8] Loss 0.200578 Top1 92.150000
-2023-09-11 14:52:17,467 - ==> Top1: 92.150 Loss: 0.201
-
-2023-09-11 14:52:17,467 - ==> Confusion:
-[[938 47]
- [110 905]]
-
-2023-09-11 14:52:17,479 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:52:17,479 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:52:17,483 - 
-
-2023-09-11 14:52:17,484 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:52:20,789 - Epoch: [193][ 10/ 71] Overall Loss 0.124455 Objective Loss 0.124455 LR 0.000250 Time 0.330448
-2023-09-11 14:52:22,861 - Epoch: [193][ 20/ 71] Overall Loss 0.124016 Objective Loss 0.124016 LR 0.000250 Time 0.268818
-2023-09-11 14:52:26,099 - Epoch: [193][ 30/ 71] Overall Loss 0.125611 Objective Loss 0.125611 LR 0.000250 Time 0.287150
-2023-09-11 14:52:28,028 - Epoch: [193][ 40/ 71] Overall Loss 0.127387 Objective Loss 0.127387 LR 0.000250 Time 0.263575
-2023-09-11 14:52:30,736 - Epoch: [193][ 50/ 71] Overall Loss 0.130728 Objective Loss 0.130728 LR 0.000250 Time 0.264998
-2023-09-11 14:52:32,894 - Epoch: [193][ 60/ 71] Overall Loss 0.130074 Objective Loss 0.130074 LR 0.000250 Time 0.256805
-2023-09-11 14:52:35,148 - Epoch: [193][ 70/ 71] Overall Loss 0.129025 Objective Loss 0.129025 Top1 94.531250 LR 0.000250 Time 0.252304
-2023-09-11 14:52:35,241 - Epoch: [193][ 71/ 71] Overall Loss 0.128438 Objective Loss 0.128438 Top1 95.238095 LR 0.000250 Time 0.250056
-2023-09-11 14:52:35,331 - --- validate (epoch=193)-----------
-2023-09-11 14:52:35,331 - 2000 samples (256 per mini-batch)
-2023-09-11 14:52:38,282 - Epoch: [193][ 8/ 8] Loss 0.201018 Top1 92.250000
-2023-09-11 14:52:38,387 - ==> Top1: 92.250 Loss: 0.201
-
-2023-09-11 14:52:38,387 - ==> Confusion:
-[[946 39]
- [116 899]]
+2025-05-20 18:18:26,224 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:18:26,224 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:18:26,232 - 
+
+2025-05-20 18:18:26,232 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:18:32,641 - Epoch: [189][ 10/ 71] Overall Loss 0.140018 Objective Loss 0.140018 LR 0.000250 Time 0.640846
+2025-05-20 18:18:35,413 - Epoch: [189][ 20/ 71] Overall Loss 0.129912 Objective Loss 0.129912 LR 0.000250 Time 0.458991
+2025-05-20 18:18:39,178 - Epoch: [189][ 30/ 71] Overall Loss 0.128024 Objective Loss 0.128024 LR 0.000250 Time 0.431514
+2025-05-20 18:18:42,255 - Epoch: [189][ 40/ 71] Overall Loss 0.129280 Objective Loss 0.129280 LR 0.000250 Time 0.400531
+2025-05-20 18:18:46,063 - Epoch: [189][ 50/ 71] Overall Loss 0.127223 Objective Loss 0.127223 LR 0.000250 Time 0.396590
+2025-05-20 18:18:49,085 - Epoch: [189][ 60/ 71] Overall Loss 0.124202 Objective Loss 0.124202 LR 0.000250 Time 0.380851
+2025-05-20 18:18:52,851 - Epoch: [189][ 70/ 71] Overall Loss 0.124657 Objective Loss 0.124657 Top1 94.531250 LR 0.000250 Time 0.380243
+2025-05-20 18:18:52,959 - Epoch: [189][ 71/ 71] Overall Loss 0.125514 Objective Loss 0.125514 Top1 94.345238 LR 0.000250 Time 0.376408
+2025-05-20 18:18:52,986 - --- validate (epoch=189)-----------
+2025-05-20 18:18:52,986 - 2000 samples (256 per mini-batch)
+2025-05-20 18:18:57,074 - Epoch: [189][ 8/ 8] Loss 0.208680 Top1 92.650000
+2025-05-20 18:18:57,107 - ==> Top1: 92.650 Loss: 0.209
+
+2025-05-20 18:18:57,107 - ==> Confusion:
+[[950 35]
+ [112 903]]
+
+2025-05-20 18:18:57,124 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:18:57,124 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:18:57,131 - 
+
+2025-05-20 18:18:57,132 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:19:01,813 - Epoch: [190][ 10/ 71] Overall Loss 0.113298 Objective Loss 0.113298 LR 0.000250 Time 0.468109
+2025-05-20 18:19:06,149 - Epoch: [190][ 20/ 71] Overall Loss 0.120718 Objective Loss 0.120718 LR 0.000250 Time 0.450810
+2025-05-20 18:19:09,137 - Epoch: [190][ 30/ 71] Overall Loss 0.119756 Objective Loss 0.119756 LR 0.000250 Time 0.400144
+2025-05-20 18:19:13,196 - Epoch: [190][ 40/ 71] Overall Loss 0.118779 Objective Loss 0.118779 LR 0.000250 Time 0.401581
+2025-05-20 18:19:16,354 - Epoch: [190][ 50/ 71] Overall Loss 0.122371 Objective Loss 0.122371 LR 0.000250 Time 0.384405
+2025-05-20 18:19:19,953 - Epoch: [190][ 60/ 71] Overall Loss 0.121125 Objective Loss 0.121125 LR 0.000250 Time 0.380278
+2025-05-20 18:19:23,918 - Epoch: [190][ 70/ 71] Overall Loss 0.121388 Objective Loss 0.121388 Top1 93.750000 LR 0.000250 Time 0.382587
+2025-05-20 18:19:24,017 - Epoch: [190][ 71/ 71] Overall Loss 0.122251 Objective Loss 0.122251 Top1 93.452381 LR 0.000250 Time 0.378592
+2025-05-20 18:19:24,052 - --- validate (epoch=190)-----------
+2025-05-20 18:19:24,052 - 2000 samples (256 per mini-batch)
+2025-05-20 18:19:27,948 - Epoch: [190][ 8/ 8] Loss 0.189354 Top1 92.700000
+2025-05-20 18:19:27,982 - ==> Top1: 92.700 Loss: 0.189
+
+2025-05-20 18:19:27,982 - ==> Confusion:
+[[915 70]
+ [ 76 939]]
 
-2023-09-11 14:52:38,392 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:52:38,392 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:52:38,396 - 
-
-2023-09-11 14:52:38,397 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:52:42,866 - Epoch: [194][ 10/ 71] Overall Loss 0.119948 Objective Loss 0.119948 LR 0.000250 Time 0.446848
-2023-09-11 14:52:44,836 - Epoch: [194][ 20/ 71] Overall Loss 0.120730 Objective Loss 0.120730 LR 0.000250 Time 0.321910
-2023-09-11 14:52:48,601 - Epoch: [194][ 30/ 71] Overall Loss 0.122618 Objective Loss 0.122618 LR 0.000250 Time 0.340102
-2023-09-11 14:52:50,735 - Epoch: [194][ 40/ 71] Overall Loss 0.124211 Objective Loss 0.124211 LR 0.000250 Time 0.308432
-2023-09-11 14:52:53,295 - Epoch: [194][ 50/ 71] Overall Loss 0.123295 Objective Loss 0.123295 LR 0.000250 Time 0.297922
-2023-09-11 14:52:56,517 - Epoch: [194][ 60/ 71] Overall Loss 0.121289 Objective Loss 0.121289 LR 0.000250 Time 0.301970
-2023-09-11 14:52:58,416 - Epoch: [194][ 70/ 71] Overall Loss 0.122792 Objective Loss 0.122792 Top1 95.312500 LR 0.000250 Time 0.285961
-2023-09-11 14:52:58,503 - Epoch: [194][ 71/ 71] Overall Loss 0.123008 Objective Loss 0.123008 Top1 94.642857 LR 0.000250 Time 0.283150
-2023-09-11 14:52:58,597 - --- validate (epoch=194)-----------
-2023-09-11 14:52:58,598 - 2000 samples (256 per mini-batch)
-2023-09-11 14:53:01,568 - Epoch: [194][ 8/ 8] Loss 0.178160 Top1 92.500000
-2023-09-11 14:53:01,664 - ==> Top1: 92.500 Loss: 0.178
-
-2023-09-11 14:53:01,665 - ==> Confusion:
-[[931 54]
- [ 96 919]]
-
-2023-09-11 14:53:01,680 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:53:01,680 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:53:01,682 - 
-
-2023-09-11 14:53:01,683 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:53:05,618 - Epoch: [195][ 10/ 71] Overall Loss 0.128515 Objective Loss 0.128515 LR 0.000250 Time 0.393507
-2023-09-11 14:53:08,576 - Epoch: [195][ 20/ 71] Overall Loss 0.127631 Objective Loss 0.127631 LR 0.000250 Time 0.344627
-2023-09-11 14:53:10,609 - Epoch: [195][ 30/ 71] Overall Loss 0.127666 Objective Loss 0.127666 LR 0.000250 Time 0.297494
-2023-09-11 14:53:13,688 - Epoch: [195][ 40/ 71] Overall Loss 0.126972 Objective Loss 0.126972 LR 0.000250 Time 0.300093
-2023-09-11 14:53:16,519 - Epoch: [195][ 50/ 71] Overall Loss 0.128343 Objective Loss 0.128343 LR 0.000250 Time 0.296697
-2023-09-11 14:53:19,085 - Epoch: [195][ 60/ 71] Overall Loss 0.131771 Objective Loss 0.131771 LR 0.000250 Time 0.290001
-2023-09-11 14:53:21,012 - Epoch: [195][ 70/ 71] Overall Loss 0.132276 Objective Loss 0.132276 Top1 93.750000 LR 0.000250 Time 0.276102
-2023-09-11 14:53:21,092 - Epoch: [195][ 71/ 71] Overall Loss 0.132354 Objective Loss 0.132354 Top1 94.047619 LR 0.000250 Time 0.273331
-2023-09-11 14:53:21,178 - --- validate (epoch=195)-----------
-2023-09-11 14:53:21,178 - 2000 samples (256 per mini-batch)
-2023-09-11 14:53:24,095 - Epoch: [195][ 8/ 8] Loss 0.177391 Top1 92.600000
-2023-09-11 14:53:24,178 - ==> Top1: 92.600 Loss: 0.177
-
-2023-09-11 14:53:24,179 - ==> Confusion:
-[[868 117]
- [ 31 984]]
-
-2023-09-11 14:53:24,195 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:53:24,195 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:53:24,197 - 
-
-2023-09-11 14:53:24,197 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:53:28,679 - Epoch: [196][ 10/ 71] Overall Loss 0.149076 Objective Loss 0.149076 LR 0.000250 Time 0.448113
-2023-09-11 14:53:30,731 - Epoch: [196][ 20/ 71] Overall Loss 0.139415 Objective Loss 0.139415 LR 0.000250 Time 0.326638
-2023-09-11 14:53:33,831 - Epoch: [196][ 30/ 71] Overall Loss 0.137735 Objective Loss 0.137735 LR 0.000250 Time 0.321105
-2023-09-11 14:53:36,480 - Epoch: [196][ 40/ 71] Overall Loss 0.137124 Objective Loss 0.137124 LR 0.000250 Time 0.307039
-2023-09-11 14:53:38,739 - Epoch: [196][ 50/ 71] Overall Loss 0.134784 Objective Loss 0.134784 LR 0.000250 Time 0.290808
-2023-09-11 14:53:41,132 - Epoch: [196][ 60/ 71] Overall Loss 0.131577 Objective Loss 0.131577 LR 0.000250 Time 0.282210
-2023-09-11 14:53:43,205 - Epoch: [196][ 70/ 71] Overall Loss 0.130031 Objective Loss 0.130031 Top1 93.359375 LR 0.000250 Time 0.271505
-2023-09-11 14:53:43,279 - Epoch: [196][ 71/ 71] Overall Loss 0.129977 Objective Loss 0.129977 Top1 93.452381 LR 0.000250 Time 0.268722
-2023-09-11 14:53:43,373 - --- validate (epoch=196)-----------
-2023-09-11 14:53:43,373 - 2000 samples (256 per mini-batch)
-2023-09-11 14:53:45,697 - Epoch: [196][ 8/ 8] Loss 0.169219 Top1 93.050000
-2023-09-11 14:53:45,808 - ==> Top1: 93.050 Loss: 0.169
-
-2023-09-11 14:53:45,809 - ==> Confusion:
+2025-05-20 18:19:27,997 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:19:27,997 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:19:28,005 - 
+
+2025-05-20 18:19:28,005 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:19:33,694 - Epoch: [191][ 10/ 71] Overall Loss 0.116230 Objective Loss 0.116230 LR 0.000250 Time 0.568852
+2025-05-20 18:19:36,857 - Epoch: [191][ 20/ 71] Overall Loss 0.124866 Objective Loss 0.124866 LR 0.000250 Time 0.442578
+2025-05-20 18:19:40,504 - Epoch: [191][ 30/ 71] Overall Loss 0.123676 Objective Loss 0.123676 LR 0.000250 Time 0.416574
+2025-05-20 18:19:44,072 - Epoch: [191][ 40/ 71] Overall Loss 0.122774 Objective Loss 0.122774 LR 0.000250 Time 0.401644
+2025-05-20 18:19:48,575 - Epoch: [191][ 50/ 71] Overall Loss 0.125472 Objective Loss 0.125472 LR 0.000250 Time 0.411364
+2025-05-20 18:19:51,949 - Epoch: [191][ 60/ 71] Overall Loss 0.129345 Objective Loss 0.129345 LR 0.000250 Time 0.399036
+2025-05-20 18:19:55,969 - Epoch: [191][ 70/ 71] Overall Loss 0.128974 Objective Loss 0.128974 Top1 94.531250 LR 0.000250 Time 0.399442
+2025-05-20 18:19:56,064 - Epoch: [191][ 71/ 71] Overall Loss 0.127825 Objective Loss 0.127825 Top1 95.833333 LR 0.000250 Time 0.395154
+2025-05-20 18:19:56,097 - --- validate (epoch=191)-----------
+2025-05-20 18:19:56,098 - 2000 samples (256 per mini-batch)
+2025-05-20 18:19:59,849 - Epoch: [191][ 8/ 8] Loss 0.165991 Top1 93.250000
+2025-05-20 18:19:59,886 - ==> Top1: 93.250 Loss: 0.166
+
+2025-05-20 18:19:59,886 - ==> Confusion:
+[[912 73]
+ [ 62 953]]
+
+2025-05-20 18:19:59,898 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:19:59,898 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:19:59,905 - 
+
+2025-05-20 18:19:59,905 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:20:04,685 - Epoch: [192][ 10/ 71] Overall Loss 0.119132 Objective Loss 0.119132 LR 0.000250 Time 0.477911
+2025-05-20 18:20:09,327 - Epoch: [192][ 20/ 71] Overall Loss 0.119311 Objective Loss 0.119311 LR 0.000250 Time 0.471059
+2025-05-20 18:20:12,993 - Epoch: [192][ 30/ 71] Overall Loss 0.117836 Objective Loss 0.117836 LR 0.000250 Time 0.436219
+2025-05-20 18:20:17,139 - Epoch: [192][ 40/ 71] Overall Loss 0.117945 Objective Loss 0.117945 LR 0.000250 Time 0.430801
+2025-05-20 18:20:20,496 - Epoch: [192][ 50/ 71] Overall Loss 0.119131 Objective Loss 0.119131 LR 0.000250 Time 0.411787
+2025-05-20 18:20:25,025 - Epoch: [192][ 60/ 71] Overall Loss 0.118618 Objective Loss 0.118618 LR 0.000250 Time 0.418625
+2025-05-20 18:20:28,043 - Epoch: [192][ 70/ 71] Overall Loss 0.120792 Objective Loss 0.120792 Top1 94.921875 LR 0.000250 Time 0.401931
+2025-05-20 18:20:28,137 - Epoch: [192][ 71/ 71] Overall Loss 0.120379 Objective Loss 0.120379 Top1 95.535714 LR 0.000250 Time 0.397594
+2025-05-20 18:20:28,171 - --- validate (epoch=192)-----------
+2025-05-20 18:20:28,171 - 2000 samples (256 per mini-batch)
+2025-05-20 18:20:32,073 - Epoch: [192][ 8/ 8] Loss 0.184291 Top1 92.650000
+2025-05-20 18:20:32,110 - ==> Top1: 92.650 Loss: 0.184
+
+2025-05-20 18:20:32,110 - ==> Confusion:
+[[878 107]
+ [ 40 975]]
+
+2025-05-20 18:20:32,121 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:20:32,121 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:20:32,128 - 
+
+2025-05-20 18:20:32,128 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:20:37,100 - Epoch: [193][ 10/ 71] Overall Loss 0.133353 Objective Loss 0.133353 LR 0.000250 Time 0.497133
+2025-05-20 18:20:41,834 - Epoch: [193][ 20/ 71] Overall Loss 0.121168 Objective Loss 0.121168 LR 0.000250 Time 0.485244
+2025-05-20 18:20:45,267 - Epoch: [193][ 30/ 71] Overall Loss 0.118693 Objective Loss 0.118693 LR 0.000250 Time 0.437915
+2025-05-20 18:20:49,560 - Epoch: [193][ 40/ 71] Overall Loss 0.118962 Objective Loss 0.118962 LR 0.000250 Time 0.435763
+2025-05-20 18:20:53,757 - Epoch: [193][ 50/ 71] Overall Loss 0.119046 Objective Loss 0.119046 LR 0.000250 Time 0.432541
+2025-05-20 18:20:56,960 - Epoch: [193][ 60/ 71] Overall Loss 0.121572 Objective Loss 0.121572 LR 0.000250 Time 0.413830
+2025-05-20 18:21:00,577 - Epoch: [193][ 70/ 71] Overall Loss 0.121740 Objective Loss 0.121740 Top1 97.265625 LR 0.000250 Time 0.406372
+2025-05-20 18:21:00,684 - Epoch: [193][ 71/ 71] Overall Loss 0.121487 Objective Loss 0.121487 Top1 97.023810 LR 0.000250 Time 0.402158
+2025-05-20 18:21:00,716 - --- validate (epoch=193)-----------
+2025-05-20 18:21:00,716 - 2000 samples (256 per mini-batch)
+2025-05-20 18:21:04,771 - Epoch: [193][ 8/ 8] Loss 0.177546 Top1 92.700000
+2025-05-20 18:21:04,812 - ==> Top1: 92.700 Loss: 0.178
+
+2025-05-20 18:21:04,812 - ==> Confusion:
+[[892 93]
+ [ 53 962]]
+
+2025-05-20 18:21:04,828 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:21:04,828 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:21:04,836 - 
+
+2025-05-20 18:21:04,836 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:21:11,258 - Epoch: [194][ 10/ 71] Overall Loss 0.118345 Objective Loss 0.118345 LR 0.000250 Time 0.642208
+2025-05-20 18:21:14,559 - Epoch: [194][ 20/ 71] Overall Loss 0.123278 Objective Loss 0.123278 LR 0.000250 Time 0.486132
+2025-05-20 18:21:18,573 - Epoch: [194][ 30/ 71] Overall Loss 0.124391 Objective Loss 0.124391 LR 0.000250 Time 0.457860
+2025-05-20 18:21:21,782 - Epoch: [194][ 40/ 71] Overall Loss 0.120781 Objective Loss 0.120781 LR 0.000250 Time 0.423625
+2025-05-20 18:21:27,551 - Epoch: [194][ 50/ 71] Overall Loss 0.120750 Objective Loss 0.120750 LR 0.000250 Time 0.454278
+2025-05-20 18:21:31,060 - Epoch: [194][ 60/ 71] Overall Loss 0.118957 Objective Loss 0.118957 LR 0.000250 Time 0.437035
+2025-05-20 18:21:34,675 - Epoch: [194][ 70/ 71] Overall Loss 0.118910 Objective Loss 0.118910 Top1 96.875000 LR 0.000250 Time 0.426232
+2025-05-20 18:21:34,769 - Epoch: [194][ 71/ 71] Overall Loss 0.118910 Objective Loss 0.118910 Top1 95.833333 LR 0.000250 Time 0.421551
+2025-05-20 18:21:34,803 - --- validate (epoch=194)-----------
+2025-05-20 18:21:34,803 - 2000 samples (256 per mini-batch)
+2025-05-20 18:21:38,226 - Epoch: [194][ 8/ 8] Loss 0.175389 Top1 93.400000
+2025-05-20 18:21:38,264 - ==> Top1: 93.400 Loss: 0.175
+
+2025-05-20 18:21:38,265 - ==> Confusion:
 [[928 57]
- [ 82 933]]
+ [ 75 940]]
 
-2023-09-11 14:53:45,823 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:53:45,824 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:53:45,828 - 
-
-2023-09-11 14:53:45,828 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:53:50,058 - Epoch: [197][ 10/ 71] Overall Loss 0.117869 Objective Loss 0.117869 LR 0.000250 Time 0.422893
-2023-09-11 14:53:52,130 - Epoch: [197][ 20/ 71] Overall Loss 0.127838 Objective Loss 0.127838 LR 0.000250 Time 0.315029
-2023-09-11 14:53:54,634 - Epoch: [197][ 30/ 71] Overall Loss 0.128143 Objective Loss 0.128143 LR 0.000250 Time 0.293472
-2023-09-11 14:53:56,753 - Epoch: [197][ 40/ 71] Overall Loss 0.130649 Objective Loss 0.130649 LR 0.000250 Time 0.273079
-2023-09-11 14:53:59,374 - Epoch: [197][ 50/ 71] Overall Loss 0.131871 Objective Loss 0.131871 LR 0.000250 Time 0.270875
-2023-09-11 14:54:02,828 - Epoch: [197][ 60/ 71] Overall Loss 0.130391 Objective Loss 0.130391 LR 0.000250 Time 0.283287
-2023-09-11 14:54:04,829 - Epoch: [197][ 70/ 71] Overall Loss 0.130029 Objective Loss 0.130029 Top1 96.093750 LR 0.000250 Time 0.271397
-2023-09-11 14:54:04,908 - Epoch: [197][ 71/ 71] Overall Loss 0.130208 Objective Loss 0.130208 Top1 94.940476 LR 0.000250 Time 0.268695
-2023-09-11 14:54:05,006 - --- validate (epoch=197)-----------
-2023-09-11 14:54:05,006 - 2000 samples (256 per mini-batch)
-2023-09-11 14:54:08,150 - Epoch: [197][ 8/ 8] Loss 0.167798 Top1 93.650000
-2023-09-11 14:54:08,249 - ==> Top1: 93.650 Loss: 0.168
-
-2023-09-11 14:54:08,249 - ==> Confusion:
-[[918 67]
- [ 60 955]]
+ [ 75 940]]
 
-2023-09-11 14:54:08,265 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:54:08,265 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:54:08,268 - 
-
-2023-09-11 14:54:08,268 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:54:12,700 - Epoch: [198][ 10/ 71] Overall Loss 0.119994 Objective Loss 0.119994 LR 0.000250 Time 0.443182
-2023-09-11 14:54:14,710 - Epoch: [198][ 20/ 71] Overall Loss 0.127062 Objective Loss 0.127062 LR 0.000250 Time 0.322079
-2023-09-11 14:54:17,194 - Epoch: [198][ 30/ 71] Overall Loss 0.129556 Objective Loss 0.129556 LR 0.000250 Time 0.297482
-2023-09-11 14:54:19,327 - Epoch: [198][ 40/ 71] Overall Loss 0.129119 Objective Loss 0.129119 LR 0.000250 Time 0.276445
-2023-09-11 14:54:21,879 - Epoch: [198][ 50/ 71] Overall Loss 0.129296 Objective Loss 0.129296 LR 0.000250 Time 0.272181
-2023-09-11 14:54:24,357 - Epoch: [198][ 60/ 71] Overall Loss 0.127042 Objective Loss 0.127042 LR 0.000250 Time 0.268110
-2023-09-11 14:54:26,250 - Epoch: [198][ 70/ 71] Overall Loss 0.128644 Objective Loss 0.128644 Top1 94.140625 LR 0.000250 Time 0.256853
-2023-09-11 14:54:26,331 - Epoch: [198][ 71/ 71] Overall Loss 0.129553 Objective Loss 0.129553 Top1 93.750000 LR 0.000250 Time 0.254369
-2023-09-11 14:54:26,428 - --- validate (epoch=198)-----------
-2023-09-11 14:54:26,428 - 2000 samples (256 per mini-batch)
-2023-09-11 14:54:29,604 - Epoch: [198][ 8/ 8] Loss 0.176949 Top1 92.600000
-2023-09-11 14:54:29,695 - ==> Top1: 92.600 Loss: 0.177
-
-2023-09-11 14:54:29,695 - ==> Confusion:
-[[881 104]
- [ 44 971]]
+2025-05-20 18:21:38,271 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:21:38,271 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:21:38,278 - 
+
+2025-05-20 18:21:38,278 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:21:43,038 - Epoch: [195][ 10/ 71] Overall Loss 0.113502 Objective Loss 0.113502 LR 0.000250 Time 0.475892
+2025-05-20 18:21:47,546 - Epoch: [195][ 20/ 71] Overall Loss 0.118911 Objective Loss 0.118911 LR 0.000250 Time 0.463308
+2025-05-20 18:21:50,453 - Epoch: [195][ 30/ 71] Overall Loss 0.122580 Objective Loss 0.122580 LR 0.000250 Time 0.405777
+2025-05-20 18:21:54,714 - Epoch: [195][ 40/ 71] Overall Loss 0.119580 Objective Loss 0.119580 LR 0.000250 Time 0.410844
+2025-05-20 18:21:59,375 - Epoch: [195][ 50/ 71] Overall Loss 0.120672 Objective Loss 0.120672 LR 0.000250 Time 0.421889
+2025-05-20 18:22:02,248 - Epoch: [195][ 60/ 71] Overall Loss 0.121257 Objective Loss 0.121257 LR 0.000250 Time 0.399449
+2025-05-20 18:22:06,210 - Epoch: [195][ 70/ 71] Overall Loss 0.118658 Objective Loss 0.118658 Top1 96.093750 LR 0.000250 Time 0.398989
+2025-05-20 18:22:06,302 - Epoch: [195][ 71/ 71] Overall Loss 0.117976 Objective Loss 0.117976 Top1 96.726190 LR 0.000250 Time 0.394657
+2025-05-20 18:22:06,341 - --- validate (epoch=195)-----------
+2025-05-20 18:22:06,341 - 2000 samples (256 per mini-batch)
+2025-05-20 18:22:10,433 - Epoch: [195][ 8/ 8] Loss 0.179122 Top1 93.700000
+2025-05-20 18:22:10,465 - ==> Top1: 93.700 Loss: 0.179
+
+2025-05-20 18:22:10,465 - ==> Confusion:
+[[915 70]
+ [ 56 959]]
 
-2023-09-11 14:54:29,711 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:54:29,711 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:54:29,713 - 
-
-2023-09-11 14:54:29,713 - Training epoch: 18000 samples (256 per mini-batch)
-2023-09-11 14:54:33,871 - Epoch: [199][ 10/ 71] Overall Loss 0.130919 Objective Loss 0.130919 LR 0.000250 Time 0.415678
-2023-09-11 14:54:35,773 - Epoch: [199][ 20/ 71] Overall Loss 0.125064 Objective Loss 0.125064 LR 0.000250 Time 0.302931
-2023-09-11 14:54:38,336 - Epoch: [199][ 30/ 71] Overall Loss 0.119365 Objective Loss 0.119365 LR 0.000250 Time 0.287394
-2023-09-11 14:54:40,303 - Epoch: [199][ 40/ 71] Overall Loss 0.124143 Objective Loss 0.124143 LR 0.000250 Time 0.264711
-2023-09-11 14:54:42,939 - Epoch: [199][ 50/ 71] Overall Loss 0.125196 Objective Loss 0.125196 LR 0.000250 Time 0.264475
-2023-09-11 14:54:45,004 - Epoch: [199][ 60/ 71] Overall Loss 0.126671 Objective Loss 0.126671 LR 0.000250 Time 0.254804
-2023-09-11 14:54:47,959 - Epoch: [199][ 70/ 71] Overall Loss 0.126687 Objective Loss 0.126687 Top1 92.578125 LR 0.000250 Time 0.260622
-2023-09-11 14:54:48,040 - Epoch: [199][ 71/ 71] Overall Loss 0.126936 Objective Loss 0.126936 Top1 92.559524 LR 0.000250 Time 0.258083
-2023-09-11 14:54:48,139 - --- validate (epoch=199)-----------
-2023-09-11 14:54:48,139 - 2000 samples (256 per mini-batch)
-2023-09-11 14:54:50,795 - Epoch: [199][ 8/ 8] Loss 0.185939 Top1 91.800000
-2023-09-11 14:54:50,884 - ==> Top1: 91.800 Loss: 0.186
-
-2023-09-11 14:54:50,885 - ==> Confusion:
-[[924 61]
- [103 912]]
+2025-05-20 18:22:10,481 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:22:10,481 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:22:10,488 - 
+
+2025-05-20 18:22:10,489 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:22:16,942 - Epoch: [196][ 10/ 71] Overall Loss 0.136363 Objective Loss 0.136363 LR 0.000250 Time 0.645312
+2025-05-20 18:22:19,888 - Epoch: [196][ 20/ 71] Overall Loss 0.133464 Objective Loss 0.133464 LR 0.000250 Time 0.469944
+2025-05-20 18:22:24,558 - Epoch: [196][ 30/ 71] Overall Loss 0.127702 Objective Loss 0.127702 LR 0.000250 Time 0.468921
+2025-05-20 18:22:27,829 - Epoch: [196][ 40/ 71] Overall Loss 0.126862 Objective Loss 0.126862 LR 0.000250 Time 0.433468
+2025-05-20 18:22:31,883 - Epoch: [196][ 50/ 71] Overall Loss 0.126916 Objective Loss 0.126916 LR 0.000250 Time 0.427856
+2025-05-20 18:22:35,136 - Epoch: [196][ 60/ 71] Overall Loss 0.125930 Objective Loss 0.125930 LR 0.000250 Time 0.410742
+2025-05-20 18:22:38,641 - Epoch: [196][ 70/ 71] Overall Loss 0.123325 Objective Loss 0.123325 Top1 94.531250 LR 0.000250 Time 0.402131
+2025-05-20 18:22:38,733 - Epoch: [196][ 71/ 71] Overall Loss 0.124383 Objective Loss 0.124383 Top1 94.345238 LR 0.000250 Time 0.397769
+2025-05-20 18:22:38,767 - --- validate (epoch=196)-----------
+2025-05-20 18:22:38,767 - 2000 samples (256 per mini-batch)
+2025-05-20 18:22:42,448 - Epoch: [196][ 8/ 8] Loss 0.173885 Top1 92.900000
+2025-05-20 18:22:42,478 - ==> Top1: 92.900 Loss: 0.174
+
+2025-05-20 18:22:42,479 - ==> Confusion:
+[[906 79]
+ [ 63 952]]
 
-2023-09-11 14:54:50,893 - ==> Best [Top1: 93.700 Sparsity:0.00 Params: 57776 on epoch: 157]
-2023-09-11 14:54:50,893 - Saving checkpoint to: logs/2023.09.11-134109/qat_checkpoint.pth.tar
-2023-09-11 14:54:50,897 - --- test ---------------------
-2023-09-11 14:54:50,897 - 5000 samples (256 per mini-batch)
-2023-09-11 14:54:52,546 - Test: [ 10/ 20] Loss 0.138967 Top1 94.218750
-2023-09-11 14:54:53,483 - Test: [ 20/ 20] Loss 0.131839 Top1 94.620000
-2023-09-11 14:54:53,577 - ==> Top1: 94.620 Loss: 0.132
+2025-05-20 18:22:42,490 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140]
+2025-05-20 18:22:42,490 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar
+2025-05-20 18:22:42,497 - 
+
+2025-05-20 18:22:42,497 - Training epoch: 18000 samples (256 per mini-batch, world size: 1)
+2025-05-20 18:22:47,139 - Epoch: [197][ 10/ 71] Overall Loss 0.133547 Objective Loss 0.133547 LR 0.000250 Time 0.464182
+2025-05-20 18:22:50,851 - Epoch: [197][ 20/ 71] Overall Loss 0.124207 Objective Loss 0.124207 LR 0.000250 Time 0.417663
+2025-05-20 18:22:54,211 - Epoch: [197][ 30/ 71] Overall Loss 0.125031 Objective Loss 0.125031 LR 0.000250 Time 0.390429
+2025-05-20 18:22:58,159 - Epoch: [197][ 40/ 71] Overall Loss 0.122675 Objective Loss 0.122675 LR 0.000250 Time 0.391510
+2025-05-20 18:23:02,097 - Epoch: [197][ 50/ 71] Overall Loss 0.121816 Objective Loss 0.121816 LR 0.000250 Time 0.391958
+2025-05-20 18:23:06,001 - Epoch: [197][ 60/ 71] Overall Loss 0.122898 Objective Loss 0.122898 LR 0.000250 Time 0.391695
+2025-05-20 18:23:08,851 - Epoch: [197][ 70/ 71] Overall Loss 0.123579 Objective Loss 0.123579 Top1 93.359375 LR 0.000250 Time 0.376442
+2025-05-20 18:23:08,951 - Epoch: [197][ 71/ 71] Overall Loss 0.123501 Objective Loss 0.123501 Top1 93.750000 LR 0.000250 Time 0.372546
+2025-05-20 18:23:08,983 - --- validate
(epoch=197)----------- +2025-05-20 18:23:08,983 - 2000 samples (256 per mini-batch) +2025-05-20 18:23:12,588 - Epoch: [197][ 8/ 8] Loss 0.159361 Top1 93.650000 +2025-05-20 18:23:12,619 - ==> Top1: 93.650 Loss: 0.159 + +2025-05-20 18:23:12,620 - ==> Confusion: +[[920 65] + [ 62 953]] + +2025-05-20 18:23:12,637 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:23:12,637 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:23:12,644 - + +2025-05-20 18:23:12,644 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:23:18,222 - Epoch: [198][ 10/ 71] Overall Loss 0.114052 Objective Loss 0.114052 LR 0.000250 Time 0.557685 +2025-05-20 18:23:21,462 - Epoch: [198][ 20/ 71] Overall Loss 0.114121 Objective Loss 0.114121 LR 0.000250 Time 0.440860 +2025-05-20 18:23:25,495 - Epoch: [198][ 30/ 71] Overall Loss 0.115468 Objective Loss 0.115468 LR 0.000250 Time 0.428302 +2025-05-20 18:23:28,480 - Epoch: [198][ 40/ 71] Overall Loss 0.115693 Objective Loss 0.115693 LR 0.000250 Time 0.395852 +2025-05-20 18:23:32,321 - Epoch: [198][ 50/ 71] Overall Loss 0.116231 Objective Loss 0.116231 LR 0.000250 Time 0.393498 +2025-05-20 18:23:36,180 - Epoch: [198][ 60/ 71] Overall Loss 0.116960 Objective Loss 0.116960 LR 0.000250 Time 0.392220 +2025-05-20 18:23:39,199 - Epoch: [198][ 70/ 71] Overall Loss 0.119269 Objective Loss 0.119269 Top1 94.531250 LR 0.000250 Time 0.379323 +2025-05-20 18:23:39,308 - Epoch: [198][ 71/ 71] Overall Loss 0.119466 Objective Loss 0.119466 Top1 94.642857 LR 0.000250 Time 0.375503 +2025-05-20 18:23:39,338 - --- validate (epoch=198)----------- +2025-05-20 18:23:39,338 - 2000 samples (256 per mini-batch) +2025-05-20 18:23:42,763 - Epoch: [198][ 8/ 8] Loss 0.157917 Top1 94.050000 +2025-05-20 18:23:42,793 - ==> Top1: 94.050 Loss: 0.158 + +2025-05-20 18:23:42,794 - ==> Confusion: +[[922 63] + [ 56 959]] -2023-09-11 14:54:53,577 - ==> Confusion: -[[2390 110] - [ 159 2341]] 
+2025-05-20 18:23:42,812 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:23:42,812 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar +2025-05-20 18:23:42,819 - + +2025-05-20 18:23:42,819 - Training epoch: 18000 samples (256 per mini-batch, world size: 1) +2025-05-20 18:23:47,984 - Epoch: [199][ 10/ 71] Overall Loss 0.116605 Objective Loss 0.116605 LR 0.000250 Time 0.516426 +2025-05-20 18:23:50,971 - Epoch: [199][ 20/ 71] Overall Loss 0.109016 Objective Loss 0.109016 LR 0.000250 Time 0.407536 +2025-05-20 18:23:54,904 - Epoch: [199][ 30/ 71] Overall Loss 0.110404 Objective Loss 0.110404 LR 0.000250 Time 0.402788 +2025-05-20 18:23:58,371 - Epoch: [199][ 40/ 71] Overall Loss 0.111240 Objective Loss 0.111240 LR 0.000250 Time 0.388750 +2025-05-20 18:24:02,751 - Epoch: [199][ 50/ 71] Overall Loss 0.113112 Objective Loss 0.113112 LR 0.000250 Time 0.398589 +2025-05-20 18:24:06,890 - Epoch: [199][ 60/ 71] Overall Loss 0.113482 Objective Loss 0.113482 LR 0.000250 Time 0.401142 +2025-05-20 18:24:09,830 - Epoch: [199][ 70/ 71] Overall Loss 0.114246 Objective Loss 0.114246 Top1 94.531250 LR 0.000250 Time 0.385827 +2025-05-20 18:24:09,922 - Epoch: [199][ 71/ 71] Overall Loss 0.114994 Objective Loss 0.114994 Top1 94.345238 LR 0.000250 Time 0.381689 +2025-05-20 18:24:09,950 - --- validate (epoch=199)----------- +2025-05-20 18:24:09,950 - 2000 samples (256 per mini-batch) +2025-05-20 18:24:13,480 - Epoch: [199][ 8/ 8] Loss 0.193749 Top1 92.700000 +2025-05-20 18:24:13,515 - ==> Top1: 92.700 Loss: 0.194 + +2025-05-20 18:24:13,515 - ==> Confusion: +[[937 48] + [ 98 917]] -2023-09-11 14:54:53,579 - -2023-09-11 14:54:53,579 - Log file for this run: /home/ermanokman/repos/ai8x-training/logs/2023.09.11-134109/2023.09.11-134109.log +2025-05-20 18:24:13,518 - ==> Best [Top1: 94.150 Params: 57776 on epoch: 140] +2025-05-20 18:24:13,518 - Saving checkpoint to: logs/catsdogs___2025.05.20-163652/catsdogs_qat_checkpoint.pth.tar 
+2025-05-20 18:24:13,525 - --- test (ckpt) --------------------- +2025-05-20 18:24:13,525 - 5000 samples (256 per mini-batch) +2025-05-20 18:24:14,672 - Test: [ 10/ 20] Loss 0.119601 Top1 95.156250 +2025-05-20 18:24:15,376 - Test: [ 20/ 20] Loss 0.123886 Top1 94.880000 +2025-05-20 18:24:15,406 - ==> Top1: 94.880 Loss: 0.124 + +2025-05-20 18:24:15,406 - ==> Confusion: +[[2391 109] + [ 147 2353]] + +2025-05-20 18:24:15,406 - --- test (best) --------------------- +2025-05-20 18:24:15,406 - => loading checkpoint logs/catsdogs___2025.05.20-163652/catsdogs_qat_best.pth.tar +2025-05-20 18:24:15,413 - => Checkpoint contents: ++----------------------+-------------+-----------+ +| Key | Type | Value | +|----------------------+-------------+-----------| +| arch | str | ai85cdnet | +| compression_sched | dict | | +| epoch | int | 140 | +| extras | dict | | +| optimizer_state_dict | dict | | +| optimizer_type | type | Adam | +| state_dict | OrderedDict | | ++----------------------+-------------+-----------+ + +2025-05-20 18:24:15,414 - => Checkpoint['extras'] contents: ++--------------+--------+---------+ +| Key | Type | Value | +|--------------+--------+---------| +| best_epoch | int | 140 | +| best_mAP | int | 0 | +| best_top1 | float | 94.15 | +| current_mAP | int | 0 | +| current_top1 | float | 94.15 | ++--------------+--------+---------+ + +2025-05-20 18:24:15,414 - Loaded compression schedule from checkpoint (epoch 140) +2025-05-20 18:24:15,519 - => loaded 'state_dict' from checkpoint 'logs/catsdogs___2025.05.20-163652/catsdogs_qat_best.pth.tar' +2025-05-20 18:24:15,519 - 5000 samples (256 per mini-batch) +2025-05-20 18:24:16,583 - Test: [ 10/ 20] Loss 0.122296 Top1 95.195312 +2025-05-20 18:24:17,226 - Test: [ 20/ 20] Loss 0.125033 Top1 95.100000 +2025-05-20 18:24:17,257 - ==> Top1: 95.100 Loss: 0.125 + +2025-05-20 18:24:17,257 - ==> Confusion: +[[2364 136] + [ 109 2391]] + +2025-05-20 18:24:17,260 - +2025-05-20 18:24:17,260 - Log file for this run: 
/home/asyaturhal/ai8x-training/logs/catsdogs___2025.05.20-163652/catsdogs___2025.05.20-163652.log diff --git a/trained/ai85-catsdogs-qat8.pth.tar b/trained/ai85-catsdogs-qat8.pth.tar index 66280b54..5012cee6 100644 Binary files a/trained/ai85-catsdogs-qat8.pth.tar and b/trained/ai85-catsdogs-qat8.pth.tar differ diff --git a/trained/ai85-qrcode-tinierssd-kpts-qat8-q.pth.tar b/trained/ai85-qrcode-tinierssd-kpts-qat8-q.pth.tar index 186e159c..538f05e0 100644 Binary files a/trained/ai85-qrcode-tinierssd-kpts-qat8-q.pth.tar and b/trained/ai85-qrcode-tinierssd-kpts-qat8-q.pth.tar differ diff --git a/trained/ai85-qrcode-tinierssd-kpts-qat8.log b/trained/ai85-qrcode-tinierssd-kpts-qat8.log index c3558f23..f3e6a2ac 100644 --- a/trained/ai85-qrcode-tinierssd-kpts-qat8.log +++ b/trained/ai85-qrcode-tinierssd-kpts-qat8.log @@ -1,4440 +1,4460 @@ -2024-05-11 02:56:27,764 - Log file for this run: /home/ermanokman/repos/ai8x-training/logs/2024.05.11-025627/2024.05.11-025627.log -2024-05-11 02:56:27,764 - The open file limit is 1024. Please raise the limit (see documentation). -2024-05-11 02:56:27,764 - Configuring device: MAX78000, simulate=False. 
-2024-05-11 02:56:27,842 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 02:56:27,865 - Optimizer Type: -2024-05-11 02:56:27,866 - Optimizer Args: {'lr': 0.0002, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0.0005, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None} -2024-05-11 02:56:29,330 - /home/ermanokman/repos/ai8x-training/.venv/lib/python3.11/site-packages/pydantic/main.py:347: UserWarning: Pydantic serializer warnings: - Expected `Union[float, tuple[float, float]]` but got `list` - serialized value may not be as expected - Expected `Union[float, tuple[float, float]]` but got `list` - serialized value may not be as expected - Expected `Union[float, tuple[float, float]]` but got `list` - serialized value may not be as expected - Expected `Union[float, tuple[float, float]]` but got `list` - serialized value may not be as expected - return self.__pydantic_serializer__.to_python( - -2024-05-11 02:56:32,751 - /home/ermanokman/repos/ai8x-training/datasets/image_mixer.py:390: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:274.) - boxes = torch.as_tensor(boxes, dtype=torch.float32) - -2024-05-11 02:56:32,751 - Reading compression schedule from: policies/schedule-qrcode.yaml -2024-05-11 02:56:32,757 - torch.compile() not available, using "eager" mode -2024-05-11 02:56:32,757 - Use distributed training to enable torch.compile() with multiple GPUs -2024-05-11 02:56:32,757 - Dataset sizes: +2025-05-15 13:23:05,810 - Log file for this run: /home/asyaturhal/ai8x-training/logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr___2025.05.15-132305.log +2025-05-15 13:23:05,810 - The open file limit is 1024. 
Please raise the limit (see documentation). +2025-05-15 13:23:05,811 - Configuring device: MAX78000, simulate=False. +2025-05-15 13:23:05,967 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 13:23:05,989 - Optimizer Type: +2025-05-15 13:23:05,989 - Optimizer Args: {'lr': 0.0002, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0.0005, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None} +2025-05-15 13:23:08,259 - Reading compression schedule from: policies/schedule-qrcode.yaml +2025-05-15 13:23:08,264 - torch.compile() not available, using "eager" mode +2025-05-15 13:23:08,264 - Use distributed training to enable torch.compile() with multiple GPUs +2025-05-15 13:23:08,265 - Dataset sizes: training=13000 validation=3250 test=3250 -2024-05-11 02:56:32,757 - - -2024-05-11 02:56:32,757 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 02:56:35,894 - /home/ermanokman/repos/ai8x-training/.venv/lib/python3.11/site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) 
- return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass - -2024-05-11 02:57:26,157 - Epoch: [0][ 100/ 813] Overall Loss 5.761258 Objective Loss 5.761258 LR 0.000200 Time 0.533971 -2024-05-11 02:58:20,032 - Epoch: [0][ 200/ 813] Overall Loss 5.136388 Objective Loss 5.136388 LR 0.000200 Time 0.536356 -2024-05-11 02:59:12,107 - Epoch: [0][ 300/ 813] Overall Loss 4.813736 Objective Loss 4.813736 LR 0.000200 Time 0.531138 -2024-05-11 03:00:03,382 - Epoch: [0][ 400/ 813] Overall Loss 4.616221 Objective Loss 4.616221 LR 0.000200 Time 0.526536 -2024-05-11 03:00:55,317 - Epoch: [0][ 500/ 813] Overall Loss 4.470787 Objective Loss 4.470787 LR 0.000200 Time 0.525097 -2024-05-11 03:01:49,218 - Epoch: [0][ 600/ 813] Overall Loss 4.352911 Objective Loss 4.352911 LR 0.000200 Time 0.527412 -2024-05-11 03:02:41,985 - Epoch: [0][ 700/ 813] Overall Loss 4.258163 Objective Loss 4.258163 LR 0.000200 Time 0.527448 -2024-05-11 03:03:36,788 - Epoch: [0][ 800/ 813] Overall Loss 4.178626 Objective Loss 4.178626 LR 0.000200 Time 0.530010 -2024-05-11 03:03:41,125 - Epoch: [0][ 813/ 813] Overall Loss 4.169941 Objective Loss 4.169941 LR 0.000200 Time 0.526869 -2024-05-11 03:03:41,140 - --- validate (epoch=0)----------- -2024-05-11 03:03:41,141 - 3250 samples (16 per mini-batch) -2024-05-11 03:03:41,142 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 03:03:44,208 - /home/ermanokman/repos/ai8x-training/.venv/lib/python3.11/site-packages/torchmetrics/utilities/prints.py:43: UserWarning: Encountered more than 100 detections in a single image. This means that certain detections with the lowest scores will be ignored, that may have an undesirable impact on performance. Please consider adjusting the `max_detection_threshold` to suit your use case. To disable this warning, set attribute class `warn_on_many_detections=False`, after initializing the metric. 
- warnings.warn(*args, **kwargs) # noqa: B028 - -2024-05-11 03:04:40,892 - Epoch: [0][ 100/ 204] Loss 3.647782 mAP 0.630841 -2024-05-11 03:05:35,796 - Epoch: [0][ 200/ 204] Loss 3.650331 mAP 0.633880 -2024-05-11 03:05:37,206 - Epoch: [0][ 204/ 204] Loss 3.650087 mAP 0.634220 -2024-05-11 03:05:37,230 - ==> mAP: 0.63422 Loss: 3.650 - -2024-05-11 03:05:37,234 - ==> Best [mAP: 0.634220 vloss: 3.650087 Params: 368352 on epoch: 0] -2024-05-11 03:05:37,234 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 03:05:37,250 - - -2024-05-11 03:05:37,250 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 03:06:33,629 - Epoch: [1][ 100/ 813] Overall Loss 3.586945 Objective Loss 3.586945 LR 0.000200 Time 0.563772 -2024-05-11 03:07:26,211 - Epoch: [1][ 200/ 813] Overall Loss 3.554324 Objective Loss 3.554324 LR 0.000200 Time 0.544789 -2024-05-11 03:08:17,539 - Epoch: [1][ 300/ 813] Overall Loss 3.543970 Objective Loss 3.543970 LR 0.000200 Time 0.534282 -2024-05-11 03:09:08,955 - Epoch: [1][ 400/ 813] Overall Loss 3.526261 Objective Loss 3.526261 LR 0.000200 Time 0.529241 -2024-05-11 03:09:59,322 - Epoch: [1][ 500/ 813] Overall Loss 3.511030 Objective Loss 3.511030 LR 0.000200 Time 0.524123 -2024-05-11 03:10:53,186 - Epoch: [1][ 600/ 813] Overall Loss 3.492895 Objective Loss 3.492895 LR 0.000200 Time 0.526477 -2024-05-11 03:11:45,743 - Epoch: [1][ 700/ 813] Overall Loss 3.473511 Objective Loss 3.473511 LR 0.000200 Time 0.526342 -2024-05-11 03:12:38,777 - Epoch: [1][ 800/ 813] Overall Loss 3.457797 Objective Loss 3.457797 LR 0.000200 Time 0.526840 -2024-05-11 03:12:43,058 - Epoch: [1][ 813/ 813] Overall Loss 3.455861 Objective Loss 3.455861 LR 0.000200 Time 0.523680 -2024-05-11 03:12:43,083 - --- validate (epoch=1)----------- -2024-05-11 03:12:43,084 - 3250 samples (16 per mini-batch) -2024-05-11 03:12:43,085 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} 
-2024-05-11 03:13:43,173 - Epoch: [1][ 100/ 204] Loss 3.306525 mAP 0.766591 -2024-05-11 03:14:39,170 - Epoch: [1][ 200/ 204] Loss 3.333250 mAP 0.765651 -2024-05-11 03:14:40,558 - Epoch: [1][ 204/ 204] Loss 3.338234 mAP 0.764050 -2024-05-11 03:14:40,583 - ==> mAP: 0.76405 Loss: 3.338 - -2024-05-11 03:14:40,586 - ==> Best [mAP: 0.764050 vloss: 3.338234 Params: 368352 on epoch: 1] -2024-05-11 03:14:40,586 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 03:14:40,615 - - -2024-05-11 03:14:40,615 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 03:15:34,986 - Epoch: [2][ 100/ 813] Overall Loss 3.318462 Objective Loss 3.318462 LR 0.000200 Time 0.543694 -2024-05-11 03:16:28,841 - Epoch: [2][ 200/ 813] Overall Loss 3.301657 Objective Loss 3.301657 LR 0.000200 Time 0.541102 -2024-05-11 03:17:22,747 - Epoch: [2][ 300/ 813] Overall Loss 3.287267 Objective Loss 3.287267 LR 0.000200 Time 0.540414 -2024-05-11 03:18:12,631 - Epoch: [2][ 400/ 813] Overall Loss 3.282798 Objective Loss 3.282798 LR 0.000200 Time 0.530002 -2024-05-11 03:19:04,993 - Epoch: [2][ 500/ 813] Overall Loss 3.267914 Objective Loss 3.267914 LR 0.000200 Time 0.528721 -2024-05-11 03:19:57,468 - Epoch: [2][ 600/ 813] Overall Loss 3.251227 Objective Loss 3.251227 LR 0.000200 Time 0.528058 -2024-05-11 03:20:50,834 - Epoch: [2][ 700/ 813] Overall Loss 3.238872 Objective Loss 3.238872 LR 0.000200 Time 0.528855 -2024-05-11 03:21:43,207 - Epoch: [2][ 800/ 813] Overall Loss 3.226508 Objective Loss 3.226508 LR 0.000200 Time 0.528213 -2024-05-11 03:21:48,013 - Epoch: [2][ 813/ 813] Overall Loss 3.225150 Objective Loss 3.225150 LR 0.000200 Time 0.525665 -2024-05-11 03:21:48,038 - --- validate (epoch=2)----------- -2024-05-11 03:21:48,039 - 3250 samples (16 per mini-batch) -2024-05-11 03:21:48,040 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 03:22:49,212 - Epoch: [2][ 100/ 204] Loss 
3.102314 mAP 0.820712 -2024-05-11 03:23:45,346 - Epoch: [2][ 200/ 204] Loss 3.113218 mAP 0.814790 -2024-05-11 03:23:46,811 - Epoch: [2][ 204/ 204] Loss 3.114789 mAP 0.814285 -2024-05-11 03:23:46,836 - ==> mAP: 0.81428 Loss: 3.115 - -2024-05-11 03:23:46,839 - ==> Best [mAP: 0.814285 vloss: 3.114789 Params: 368352 on epoch: 2] -2024-05-11 03:23:46,839 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 03:23:46,867 - - -2024-05-11 03:23:46,867 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 03:24:41,110 - Epoch: [3][ 100/ 813] Overall Loss 3.124215 Objective Loss 3.124215 LR 0.000200 Time 0.542410 -2024-05-11 03:25:35,594 - Epoch: [3][ 200/ 813] Overall Loss 3.120406 Objective Loss 3.120406 LR 0.000200 Time 0.543608 -2024-05-11 03:26:28,250 - Epoch: [3][ 300/ 813] Overall Loss 3.123073 Objective Loss 3.123073 LR 0.000200 Time 0.537897 -2024-05-11 03:27:19,454 - Epoch: [3][ 400/ 813] Overall Loss 3.115062 Objective Loss 3.115062 LR 0.000200 Time 0.531429 -2024-05-11 03:28:11,916 - Epoch: [3][ 500/ 813] Overall Loss 3.101012 Objective Loss 3.101012 LR 0.000200 Time 0.530066 -2024-05-11 03:29:06,333 - Epoch: [3][ 600/ 813] Overall Loss 3.092760 Objective Loss 3.092760 LR 0.000200 Time 0.532413 -2024-05-11 03:29:59,414 - Epoch: [3][ 700/ 813] Overall Loss 3.082187 Objective Loss 3.082187 LR 0.000200 Time 0.532181 -2024-05-11 03:30:52,033 - Epoch: [3][ 800/ 813] Overall Loss 3.070014 Objective Loss 3.070014 LR 0.000200 Time 0.531431 -2024-05-11 03:30:57,281 - Epoch: [3][ 813/ 813] Overall Loss 3.068205 Objective Loss 3.068205 LR 0.000200 Time 0.529388 -2024-05-11 03:30:57,307 - --- validate (epoch=3)----------- -2024-05-11 03:30:57,308 - 3250 samples (16 per mini-batch) -2024-05-11 03:30:57,309 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 03:31:58,327 - Epoch: [3][ 100/ 204] Loss 2.975857 mAP 0.811308 -2024-05-11 03:32:52,708 - Epoch: 
[3][ 200/ 204] Loss 2.974174 mAP 0.810398 -2024-05-11 03:32:54,080 - Epoch: [3][ 204/ 204] Loss 2.971976 mAP 0.810600 -2024-05-11 03:32:54,106 - ==> mAP: 0.81060 Loss: 2.972 - -2024-05-11 03:32:54,108 - ==> Best [mAP: 0.814285 vloss: 3.114789 Params: 368352 on epoch: 2] -2024-05-11 03:32:54,108 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 03:32:54,133 - - -2024-05-11 03:32:54,134 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 03:33:49,232 - Epoch: [4][ 100/ 813] Overall Loss 2.998242 Objective Loss 2.998242 LR 0.000200 Time 0.550964 -2024-05-11 03:34:43,095 - Epoch: [4][ 200/ 813] Overall Loss 2.972234 Objective Loss 2.972234 LR 0.000200 Time 0.544791 -2024-05-11 03:35:35,643 - Epoch: [4][ 300/ 813] Overall Loss 2.973364 Objective Loss 2.973364 LR 0.000200 Time 0.538347 -2024-05-11 03:36:27,959 - Epoch: [4][ 400/ 813] Overall Loss 2.962270 Objective Loss 2.962270 LR 0.000200 Time 0.534548 -2024-05-11 03:37:21,461 - Epoch: [4][ 500/ 813] Overall Loss 2.952077 Objective Loss 2.952077 LR 0.000200 Time 0.534636 -2024-05-11 03:38:15,030 - Epoch: [4][ 600/ 813] Overall Loss 2.941944 Objective Loss 2.941944 LR 0.000200 Time 0.534809 -2024-05-11 03:39:07,255 - Epoch: [4][ 700/ 813] Overall Loss 2.933064 Objective Loss 2.933064 LR 0.000200 Time 0.533013 -2024-05-11 03:40:00,376 - Epoch: [4][ 800/ 813] Overall Loss 2.924538 Objective Loss 2.924538 LR 0.000200 Time 0.532786 -2024-05-11 03:40:04,711 - Epoch: [4][ 813/ 813] Overall Loss 2.924461 Objective Loss 2.924461 LR 0.000200 Time 0.529597 -2024-05-11 03:40:04,738 - --- validate (epoch=4)----------- -2024-05-11 03:40:04,739 - 3250 samples (16 per mini-batch) -2024-05-11 03:40:04,740 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 03:41:06,513 - Epoch: [4][ 100/ 204] Loss 2.830865 mAP 0.833219 -2024-05-11 03:42:02,053 - Epoch: [4][ 200/ 204] Loss 2.843822 mAP 0.829765 -2024-05-11 
03:42:03,413 - Epoch: [4][ 204/ 204] Loss 2.841039 mAP 0.829287 -2024-05-11 03:42:03,439 - ==> mAP: 0.82929 Loss: 2.841 - -2024-05-11 03:42:03,442 - ==> Best [mAP: 0.829287 vloss: 2.841039 Params: 368352 on epoch: 4] -2024-05-11 03:42:03,442 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 03:42:03,470 - - -2024-05-11 03:42:03,470 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 03:42:58,825 - Epoch: [5][ 100/ 813] Overall Loss 2.868287 Objective Loss 2.868287 LR 0.000200 Time 0.553525 -2024-05-11 03:43:52,402 - Epoch: [5][ 200/ 813] Overall Loss 2.847662 Objective Loss 2.847662 LR 0.000200 Time 0.544640 -2024-05-11 03:44:46,083 - Epoch: [5][ 300/ 813] Overall Loss 2.834858 Objective Loss 2.834858 LR 0.000200 Time 0.542025 -2024-05-11 03:45:35,354 - Epoch: [5][ 400/ 813] Overall Loss 2.826397 Objective Loss 2.826397 LR 0.000200 Time 0.529692 -2024-05-11 03:46:27,783 - Epoch: [5][ 500/ 813] Overall Loss 2.816822 Objective Loss 2.816822 LR 0.000200 Time 0.528590 -2024-05-11 03:47:21,289 - Epoch: [5][ 600/ 813] Overall Loss 2.814036 Objective Loss 2.814036 LR 0.000200 Time 0.529666 -2024-05-11 03:48:15,056 - Epoch: [5][ 700/ 813] Overall Loss 2.802630 Objective Loss 2.802630 LR 0.000200 Time 0.530806 -2024-05-11 03:49:06,858 - Epoch: [5][ 800/ 813] Overall Loss 2.793053 Objective Loss 2.793053 LR 0.000200 Time 0.529207 -2024-05-11 03:49:12,152 - Epoch: [5][ 813/ 813] Overall Loss 2.792033 Objective Loss 2.792033 LR 0.000200 Time 0.527247 -2024-05-11 03:49:12,178 - --- validate (epoch=5)----------- -2024-05-11 03:49:12,179 - 3250 samples (16 per mini-batch) -2024-05-11 03:49:12,180 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 03:50:11,370 - Epoch: [5][ 100/ 204] Loss 2.706617 mAP 0.844725 -2024-05-11 03:51:06,373 - Epoch: [5][ 200/ 204] Loss 2.720989 mAP 0.837952 -2024-05-11 03:51:07,686 - Epoch: [5][ 204/ 204] Loss 2.721850 mAP 
0.836616 -2024-05-11 03:51:07,711 - ==> mAP: 0.83662 Loss: 2.722 - -2024-05-11 03:51:07,713 - ==> Best [mAP: 0.836616 vloss: 2.721850 Params: 368352 on epoch: 5] -2024-05-11 03:51:07,714 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 03:51:07,743 - - -2024-05-11 03:51:07,743 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 03:52:04,155 - Epoch: [6][ 100/ 813] Overall Loss 2.758943 Objective Loss 2.758943 LR 0.000200 Time 0.564090 -2024-05-11 03:52:57,856 - Epoch: [6][ 200/ 813] Overall Loss 2.741519 Objective Loss 2.741519 LR 0.000200 Time 0.550542 -2024-05-11 03:53:50,784 - Epoch: [6][ 300/ 813] Overall Loss 2.748044 Objective Loss 2.748044 LR 0.000200 Time 0.543449 -2024-05-11 03:54:40,593 - Epoch: [6][ 400/ 813] Overall Loss 2.733760 Objective Loss 2.733760 LR 0.000200 Time 0.532093 -2024-05-11 03:55:32,205 - Epoch: [6][ 500/ 813] Overall Loss 2.727219 Objective Loss 2.727219 LR 0.000200 Time 0.528896 -2024-05-11 03:56:26,574 - Epoch: [6][ 600/ 813] Overall Loss 2.724164 Objective Loss 2.724164 LR 0.000200 Time 0.531358 -2024-05-11 03:57:19,682 - Epoch: [6][ 700/ 813] Overall Loss 2.717753 Objective Loss 2.717753 LR 0.000200 Time 0.531315 -2024-05-11 03:58:13,132 - Epoch: [6][ 800/ 813] Overall Loss 2.708513 Objective Loss 2.708513 LR 0.000200 Time 0.531710 -2024-05-11 03:58:18,058 - Epoch: [6][ 813/ 813] Overall Loss 2.707837 Objective Loss 2.707837 LR 0.000200 Time 0.529267 -2024-05-11 03:58:18,087 - --- validate (epoch=6)----------- -2024-05-11 03:58:18,088 - 3250 samples (16 per mini-batch) -2024-05-11 03:58:18,090 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 03:59:13,044 - Epoch: [6][ 100/ 204] Loss 2.637729 mAP 0.832093 -2024-05-11 04:00:06,916 - Epoch: [6][ 200/ 204] Loss 2.647083 mAP 0.829857 -2024-05-11 04:00:07,217 - Epoch: [6][ 204/ 204] Loss 2.650470 mAP 0.828620 -2024-05-11 04:00:07,242 - ==> mAP: 0.82862 
Loss: 2.650
-
-2024-05-11 04:00:07,245 - ==> Best [mAP: 0.836616 vloss: 2.721850 Params: 368352 on epoch: 5]
-2024-05-11 04:00:07,246 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 04:00:07,270 - 
-
-2024-05-11 04:00:07,270 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 04:01:02,298 - Epoch: [7][ 100/ 813] Overall Loss 2.645429 Objective Loss 2.645429 LR 0.000200 Time 0.550163
-2024-05-11 04:01:56,856 - Epoch: [7][ 200/ 813] Overall Loss 2.635383 Objective Loss 2.635383 LR 0.000200 Time 0.547866
-2024-05-11 04:02:47,764 - Epoch: [7][ 300/ 813] Overall Loss 2.644736 Objective Loss 2.644736 LR 0.000200 Time 0.534930
-2024-05-11 04:03:39,171 - Epoch: [7][ 400/ 813] Overall Loss 2.642372 Objective Loss 2.642372 LR 0.000200 Time 0.529713
-2024-05-11 04:04:30,353 - Epoch: [7][ 500/ 813] Overall Loss 2.634276 Objective Loss 2.634276 LR 0.000200 Time 0.526125
-2024-05-11 04:05:23,301 - Epoch: [7][ 600/ 813] Overall Loss 2.630013 Objective Loss 2.630013 LR 0.000200 Time 0.526681
-2024-05-11 04:06:16,957 - Epoch: [7][ 700/ 813] Overall Loss 2.621411 Objective Loss 2.621411 LR 0.000200 Time 0.528088
-2024-05-11 04:07:09,317 - Epoch: [7][ 800/ 813] Overall Loss 2.613349 Objective Loss 2.613349 LR 0.000200 Time 0.527499
-2024-05-11 04:07:14,209 - Epoch: [7][ 813/ 813] Overall Loss 2.612566 Objective Loss 2.612566 LR 0.000200 Time 0.525078
-2024-05-11 04:07:14,235 - --- validate (epoch=7)-----------
-2024-05-11 04:07:14,236 - 3250 samples (16 per mini-batch)
-2024-05-11 04:07:14,237 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 04:08:09,484 - Epoch: [7][ 100/ 204] Loss 2.566684 mAP 0.790770
-2024-05-11 04:09:03,059 - Epoch: [7][ 200/ 204] Loss 2.580117 mAP 0.790542
-2024-05-11 04:09:03,412 - Epoch: [7][ 204/ 204] Loss 2.583805 mAP 0.790364
-2024-05-11 04:09:03,439 - ==> mAP: 0.79036 Loss: 2.584
-
-2024-05-11 04:09:03,441 - ==> Best [mAP: 0.836616 vloss: 2.721850 Params: 368352 on epoch: 5]
-2024-05-11 04:09:03,441 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 04:09:03,467 - 
-
-2024-05-11 04:09:03,467 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 04:09:58,838 - Epoch: [8][ 100/ 813] Overall Loss 2.566394 Objective Loss 2.566394 LR 0.000200 Time 0.553680
-2024-05-11 04:10:52,695 - Epoch: [8][ 200/ 813] Overall Loss 2.547686 Objective Loss 2.547686 LR 0.000200 Time 0.546062
-2024-05-11 04:11:45,176 - Epoch: [8][ 300/ 813] Overall Loss 2.555562 Objective Loss 2.555562 LR 0.000200 Time 0.538961
-2024-05-11 04:12:35,782 - Epoch: [8][ 400/ 813] Overall Loss 2.547120 Objective Loss 2.547120 LR 0.000200 Time 0.530733
-2024-05-11 04:13:27,545 - Epoch: [8][ 500/ 813] Overall Loss 2.547796 Objective Loss 2.547796 LR 0.000200 Time 0.528109
-2024-05-11 04:14:21,435 - Epoch: [8][ 600/ 813] Overall Loss 2.543483 Objective Loss 2.543483 LR 0.000200 Time 0.529896
-2024-05-11 04:15:14,043 - Epoch: [8][ 700/ 813] Overall Loss 2.538887 Objective Loss 2.538887 LR 0.000200 Time 0.529349
-2024-05-11 04:16:05,992 - Epoch: [8][ 800/ 813] Overall Loss 2.531115 Objective Loss 2.531115 LR 0.000200 Time 0.528115
-2024-05-11 04:16:11,618 - Epoch: [8][ 813/ 813] Overall Loss 2.530897 Objective Loss 2.530897 LR 0.000200 Time 0.526589
-2024-05-11 04:16:11,651 - --- validate (epoch=8)-----------
-2024-05-11 04:16:11,652 - 3250 samples (16 per mini-batch)
-2024-05-11 04:16:11,654 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 04:17:07,660 - Epoch: [8][ 100/ 204] Loss 2.496317 mAP 0.799493
-2024-05-11 04:18:01,006 - Epoch: [8][ 200/ 204] Loss 2.504969 mAP 0.789859
-2024-05-11 04:18:01,579 - Epoch: [8][ 204/ 204] Loss 2.501488 mAP 0.789903
-2024-05-11 04:18:01,604 - ==> mAP: 0.78990 Loss: 2.501
-
-2024-05-11 04:18:01,606 - ==> Best [mAP: 0.836616 vloss: 2.721850 Params: 368352 on epoch: 5]
-2024-05-11 04:18:01,606 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 04:18:01,630 - 
-
-2024-05-11 04:18:01,631 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 04:18:56,160 - Epoch: [9][ 100/ 813] Overall Loss 2.472949 Objective Loss 2.472949 LR 0.000200 Time 0.545277
-2024-05-11 04:19:50,098 - Epoch: [9][ 200/ 813] Overall Loss 2.459516 Objective Loss 2.459516 LR 0.000200 Time 0.542294
-2024-05-11 04:20:43,885 - Epoch: [9][ 300/ 813] Overall Loss 2.465455 Objective Loss 2.465455 LR 0.000200 Time 0.540815
-2024-05-11 04:21:33,340 - Epoch: [9][ 400/ 813] Overall Loss 2.471252 Objective Loss 2.471252 LR 0.000200 Time 0.529238
-2024-05-11 04:22:23,686 - Epoch: [9][ 500/ 813] Overall Loss 2.464231 Objective Loss 2.464231 LR 0.000200 Time 0.524074
-2024-05-11 04:23:18,361 - Epoch: [9][ 600/ 813] Overall Loss 2.466089 Objective Loss 2.466089 LR 0.000200 Time 0.527850
-2024-05-11 04:24:11,044 - Epoch: [9][ 700/ 813] Overall Loss 2.463948 Objective Loss 2.463948 LR 0.000200 Time 0.527702
-2024-05-11 04:25:03,359 - Epoch: [9][ 800/ 813] Overall Loss 2.457034 Objective Loss 2.457034 LR 0.000200 Time 0.527130
-2024-05-11 04:25:09,034 - Epoch: [9][ 813/ 813] Overall Loss 2.455866 Objective Loss 2.455866 LR 0.000200 Time 0.525681
-2024-05-11 04:25:09,060 - --- validate (epoch=9)-----------
-2024-05-11 04:25:09,061 - 3250 samples (16 per mini-batch)
-2024-05-11 04:25:09,062 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 04:26:03,257 - Epoch: [9][ 100/ 204] Loss 2.405929 mAP 0.791062
-2024-05-11 04:26:56,314 - Epoch: [9][ 200/ 204] Loss 2.416355 mAP 0.790734
-2024-05-11 04:26:56,947 - Epoch: [9][ 204/ 204] Loss 2.418564 mAP 0.781129
-2024-05-11 04:26:56,972 - ==> mAP: 0.78113 Loss: 2.419
-
-2024-05-11 04:26:56,975 - ==> Best [mAP: 0.836616 vloss: 2.721850 Params: 368352 on epoch: 5]
-2024-05-11 04:26:56,975 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 04:26:56,999 - 
-
-2024-05-11 04:26:57,000 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 04:27:51,356 - Epoch: [10][ 100/ 813] Overall Loss 2.409355 Objective Loss 2.409355 LR 0.000200 Time 0.543540
-2024-05-11 04:28:45,320 - Epoch: [10][ 200/ 813] Overall Loss 2.405104 Objective Loss 2.405104 LR 0.000200 Time 0.541581
-2024-05-11 04:29:39,859 - Epoch: [10][ 300/ 813] Overall Loss 2.405719 Objective Loss 2.405719 LR 0.000200 Time 0.542834
-2024-05-11 04:30:29,562 - Epoch: [10][ 400/ 813] Overall Loss 2.401213 Objective Loss 2.401213 LR 0.000200 Time 0.531379
-2024-05-11 04:31:23,033 - Epoch: [10][ 500/ 813] Overall Loss 2.400501 Objective Loss 2.400501 LR 0.000200 Time 0.532042
-2024-05-11 04:32:15,715 - Epoch: [10][ 600/ 813] Overall Loss 2.396199 Objective Loss 2.396199 LR 0.000200 Time 0.531169
-2024-05-11 04:33:08,498 - Epoch: [10][ 700/ 813] Overall Loss 2.393770 Objective Loss 2.393770 LR 0.000200 Time 0.530685
-2024-05-11 04:34:02,050 - Epoch: [10][ 800/ 813] Overall Loss 2.383955 Objective Loss 2.383955 LR 0.000200 Time 0.531278
-2024-05-11 04:34:05,613 - Epoch: [10][ 813/ 813] Overall Loss 2.383192 Objective Loss 2.383192 LR 0.000200 Time 0.527162
-2024-05-11 04:34:05,640 - --- validate (epoch=10)-----------
-2024-05-11 04:34:05,640 - 3250 samples (16 per mini-batch)
-2024-05-11 04:34:05,641 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 04:35:00,475 - Epoch: [10][ 100/ 204] Loss 2.344288 mAP 0.780135
-2024-05-11 04:35:53,836 - Epoch: [10][ 200/ 204] Loss 2.342435 mAP 0.789930
-2024-05-11 04:35:54,653 - Epoch: [10][ 204/ 204] Loss 2.338881 mAP 0.789982
-2024-05-11 04:35:54,678 - ==> mAP: 0.78998 Loss: 2.339
-
-2024-05-11 04:35:54,681 - ==> Best [mAP: 0.836616 vloss: 2.721850 Params: 368352 on epoch: 5]
-2024-05-11 04:35:54,681 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 04:35:54,706 - 
-
-2024-05-11 04:35:54,706 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 04:36:50,329 - Epoch: [11][ 100/ 813] Overall Loss 2.330355 Objective Loss 2.330355 LR 0.000200 Time 0.556217
-2024-05-11 04:37:44,883 - Epoch: [11][ 200/ 813] Overall Loss 2.326880 Objective Loss 2.326880 LR 0.000200 Time 0.550868
-2024-05-11 04:38:36,871 - Epoch: [11][ 300/ 813] Overall Loss 2.344526 Objective Loss 2.344526 LR 0.000200 Time 0.540511
-2024-05-11 04:39:27,535 - Epoch: [11][ 400/ 813] Overall Loss 2.338015 Objective Loss 2.338015 LR 0.000200 Time 0.532031
-2024-05-11 04:40:21,380 - Epoch: [11][ 500/ 813] Overall Loss 2.338461 Objective Loss 2.338461 LR 0.000200 Time 0.533305
-2024-05-11 04:41:14,707 - Epoch: [11][ 600/ 813] Overall Loss 2.332194 Objective Loss 2.332194 LR 0.000200 Time 0.533297
-2024-05-11 04:42:07,502 - Epoch: [11][ 700/ 813] Overall Loss 2.329546 Objective Loss 2.329546 LR 0.000200 Time 0.532525
-2024-05-11 04:42:58,958 - Epoch: [11][ 800/ 813] Overall Loss 2.325969 Objective Loss 2.325969 LR 0.000200 Time 0.530277
-2024-05-11 04:43:04,024 - Epoch: [11][ 813/ 813] Overall Loss 2.325317 Objective Loss 2.325317 LR 0.000200 Time 0.528021
-2024-05-11 04:43:04,050 - --- validate (epoch=11)-----------
-2024-05-11 04:43:04,051 - 3250 samples (16 per mini-batch)
-2024-05-11 04:43:04,052 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 04:43:59,105 - Epoch: [11][ 100/ 204] Loss 2.289907 mAP 0.858552
-2024-05-11 04:44:53,077 - Epoch: [11][ 200/ 204] Loss 2.297838 mAP 0.849701
-2024-05-11 04:44:53,461 - Epoch: [11][ 204/ 204] Loss 2.305691 mAP 0.849737
-2024-05-11 04:44:53,486 - ==> mAP: 0.84974 Loss: 2.306
-
-2024-05-11 04:44:53,489 - ==> Best [mAP: 0.849737 vloss: 2.305691 Params: 368352 on epoch: 11]
-2024-05-11 04:44:53,489 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 04:44:53,517 - 
-
-2024-05-11 04:44:53,517 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 04:45:50,334 - Epoch: [12][ 100/ 813] Overall Loss 2.276725 Objective Loss 2.276725 LR 0.000200 Time 0.568151
-2024-05-11 04:46:43,912 - Epoch: [12][ 200/ 813] Overall Loss 2.283934 Objective Loss 2.283934 LR 0.000200 Time 0.551958
-2024-05-11 04:47:36,455 - Epoch: [12][ 300/ 813] Overall Loss 2.289318 Objective Loss 2.289318 LR 0.000200 Time 0.543110
-2024-05-11 04:48:27,072 - Epoch: [12][ 400/ 813] Overall Loss 2.290374 Objective Loss 2.290374 LR 0.000200 Time 0.533870
-2024-05-11 04:49:18,817 - Epoch: [12][ 500/ 813] Overall Loss 2.285454 Objective Loss 2.285454 LR 0.000200 Time 0.530576
-2024-05-11 04:50:13,552 - Epoch: [12][ 600/ 813] Overall Loss 2.276049 Objective Loss 2.276049 LR 0.000200 Time 0.533370
-2024-05-11 04:51:06,708 - Epoch: [12][ 700/ 813] Overall Loss 2.270345 Objective Loss 2.270345 LR 0.000200 Time 0.533105
-2024-05-11 04:51:58,003 - Epoch: [12][ 800/ 813] Overall Loss 2.266513 Objective Loss 2.266513 LR 0.000200 Time 0.530579
-2024-05-11 04:52:02,373 - Epoch: [12][ 813/ 813] Overall Loss 2.265849 Objective Loss 2.265849 LR 0.000200 Time 0.527465
-2024-05-11 04:52:02,399 - --- validate (epoch=12)-----------
-2024-05-11 04:52:02,399 - 3250 samples (16 per mini-batch)
-2024-05-11 04:52:02,401 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 04:52:58,703 - Epoch: [12][ 100/ 204] Loss 2.228369 mAP 0.868752
-2024-05-11 04:53:53,369 - Epoch: [12][ 200/ 204] Loss 2.225449 mAP 0.878468
-2024-05-11 04:53:53,889 - Epoch: [12][ 204/ 204] Loss 2.224604 mAP 0.878530
-2024-05-11 04:53:53,914 - ==> mAP: 0.87853 Loss: 2.225
-
-2024-05-11 04:53:53,917 - ==> Best [mAP: 0.878530 vloss: 2.224604 Params: 368352 on epoch: 12]
-2024-05-11 04:53:53,917 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 04:53:53,946 - 
-
-2024-05-11 04:53:53,946 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 04:54:48,858 - Epoch: [13][ 100/ 813] Overall Loss 2.208760 Objective Loss 2.208760 LR 0.000200 Time 0.549098
-2024-05-11 04:55:42,871 - Epoch: [13][ 200/ 813] Overall Loss 2.216185 Objective Loss 2.216185 LR 0.000200 Time 0.544595
-2024-05-11 04:56:36,830 - Epoch: [13][ 300/ 813] Overall Loss 2.226103 Objective Loss 2.226103 LR 0.000200 Time 0.542923
-2024-05-11 04:57:26,734 - Epoch: [13][ 400/ 813] Overall Loss 2.220477 Objective Loss 2.220477 LR 0.000200 Time 0.531947
-2024-05-11 04:58:20,080 - Epoch: [13][ 500/ 813] Overall Loss 2.223593 Objective Loss 2.223593 LR 0.000200 Time 0.532242
-2024-05-11 04:59:13,884 - Epoch: [13][ 600/ 813] Overall Loss 2.220461 Objective Loss 2.220461 LR 0.000200 Time 0.533206
-2024-05-11 05:00:07,760 - Epoch: [13][ 700/ 813] Overall Loss 2.214832 Objective Loss 2.214832 LR 0.000200 Time 0.533997
-2024-05-11 05:01:01,016 - Epoch: [13][ 800/ 813] Overall Loss 2.209670 Objective Loss 2.209670 LR 0.000200 Time 0.533811
-2024-05-11 05:01:06,438 - Epoch: [13][ 813/ 813] Overall Loss 2.208950 Objective Loss 2.208950 LR 0.000200 Time 0.531943
-2024-05-11 05:01:06,464 - --- validate (epoch=13)-----------
-2024-05-11 05:01:06,465 - 3250 samples (16 per mini-batch)
-2024-05-11 05:01:06,466 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 05:02:00,810 - Epoch: [13][ 100/ 204] Loss 2.193568 mAP 0.870616
-2024-05-11 05:02:54,521 - Epoch: [13][ 200/ 204] Loss 2.189048 mAP 0.880271
-2024-05-11 05:02:55,062 - Epoch: [13][ 204/ 204] Loss 2.191081 mAP 0.880280
-2024-05-11 05:02:55,087 - ==> mAP: 0.88028 Loss: 2.191
-
-2024-05-11 05:02:55,089 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 05:02:55,090 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 05:02:55,118 - 
-
-2024-05-11 05:02:55,118 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 05:03:51,876 - Epoch: [14][ 100/ 813] Overall Loss 2.158804 Objective Loss 2.158804 LR 0.000200 Time 0.567556
-2024-05-11 05:04:45,744 - Epoch: [14][ 200/ 813] Overall Loss 2.149307 Objective Loss 2.149307 LR 0.000200 Time 0.553112
-2024-05-11 05:05:37,836 - Epoch: [14][ 300/ 813] Overall Loss 2.160702 Objective Loss 2.160702 LR 0.000200 Time 0.542375
-2024-05-11 05:06:29,074 - Epoch: [14][ 400/ 813] Overall Loss 2.150768 Objective Loss 2.150768 LR 0.000200 Time 0.534873
-2024-05-11 05:07:21,074 - Epoch: [14][ 500/ 813] Overall Loss 2.145451 Objective Loss 2.145451 LR 0.000200 Time 0.531895
-2024-05-11 05:08:14,326 - Epoch: [14][ 600/ 813] Overall Loss 2.143632 Objective Loss 2.143632 LR 0.000200 Time 0.531996
-2024-05-11 05:09:06,435 - Epoch: [14][ 700/ 813] Overall Loss 2.140745 Objective Loss 2.140745 LR 0.000200 Time 0.530436
-2024-05-11 05:09:58,902 - Epoch: [14][ 800/ 813] Overall Loss 2.132678 Objective Loss 2.132678 LR 0.000200 Time 0.529713
-2024-05-11 05:10:04,162 - Epoch: [14][ 813/ 813] Overall Loss 2.132768 Objective Loss 2.132768 LR 0.000200 Time 0.527713
-2024-05-11 05:10:04,187 - --- validate (epoch=14)-----------
-2024-05-11 05:10:04,187 - 3250 samples (16 per mini-batch)
-2024-05-11 05:10:04,189 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 05:10:59,219 - Epoch: [14][ 100/ 204] Loss 2.140189 mAP 0.879176
-2024-05-11 05:11:52,479 - Epoch: [14][ 200/ 204] Loss 2.124005 mAP 0.878669
-2024-05-11 05:11:52,684 - Epoch: [14][ 204/ 204] Loss 2.128480 mAP 0.878621
-2024-05-11 05:11:52,708 - ==> mAP: 0.87862 Loss: 2.128
-
-2024-05-11 05:11:52,711 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 05:11:52,711 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 05:11:52,736 - 
-
-2024-05-11 05:11:52,736 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 05:12:48,385 - Epoch: [15][ 100/ 813] Overall Loss 2.086109 Objective Loss 2.086109 LR 0.000200 Time 0.556466
-2024-05-11 05:13:43,768 - Epoch: [15][ 200/ 813] Overall Loss 2.086957 Objective Loss 2.086957 LR 0.000200 Time 0.555142
-2024-05-11 05:14:35,475 - Epoch: [15][ 300/ 813] Overall Loss 2.091040 Objective Loss 2.091040 LR 0.000200 Time 0.542444
-2024-05-11 05:15:26,221 - Epoch: [15][ 400/ 813] Overall Loss 2.102451 Objective Loss 2.102451 LR 0.000200 Time 0.533694
-2024-05-11 05:16:18,988 - Epoch: [15][ 500/ 813] Overall Loss 2.098601 Objective Loss 2.098601 LR 0.000200 Time 0.532487
-2024-05-11 05:17:12,278 - Epoch: [15][ 600/ 813] Overall Loss 2.096398 Objective Loss 2.096398 LR 0.000200 Time 0.532539
-2024-05-11 05:18:05,659 - Epoch: [15][ 700/ 813] Overall Loss 2.090170 Objective Loss 2.090170 LR 0.000200 Time 0.532706
-2024-05-11 05:18:58,245 - Epoch: [15][ 800/ 813] Overall Loss 2.086924 Objective Loss 2.086924 LR 0.000200 Time 0.531848
-2024-05-11 05:19:02,927 - Epoch: [15][ 813/ 813] Overall Loss 2.087310 Objective Loss 2.087310 LR 0.000200 Time 0.529102
-2024-05-11 05:19:02,955 - --- validate (epoch=15)-----------
-2024-05-11 05:19:02,956 - 3250 samples (16 per mini-batch)
-2024-05-11 05:19:02,957 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 05:19:58,908 - Epoch: [15][ 100/ 204] Loss 2.100547 mAP 0.869846
-2024-05-11 05:20:53,612 - Epoch: [15][ 200/ 204] Loss 2.094679 mAP 0.869325
-2024-05-11 05:20:53,805 - Epoch: [15][ 204/ 204] Loss 2.095165 mAP 0.869333
-2024-05-11 05:20:53,830 - ==> mAP: 0.86933 Loss: 2.095
-
-2024-05-11 05:20:53,834 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 05:20:53,834 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 05:20:53,858 - 
-
-2024-05-11 05:20:53,859 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 05:21:49,070 - Epoch: [16][ 100/ 813] Overall Loss 2.050890 Objective Loss 2.050890 LR 0.000200 Time 0.552096
-2024-05-11 05:22:43,442 - Epoch: [16][ 200/ 813] Overall Loss 2.043581 Objective Loss 2.043581 LR 0.000200 Time 0.547900
-2024-05-11 05:23:35,629 - Epoch: [16][ 300/ 813] Overall Loss 2.044674 Objective Loss 2.044674 LR 0.000200 Time 0.539217
-2024-05-11 05:24:26,571 - Epoch: [16][ 400/ 813] Overall Loss 2.041512 Objective Loss 2.041512 LR 0.000200 Time 0.531765
-2024-05-11 05:25:18,753 - Epoch: [16][ 500/ 813] Overall Loss 2.039895 Objective Loss 2.039895 LR 0.000200 Time 0.529771
-2024-05-11 05:26:13,148 - Epoch: [16][ 600/ 813] Overall Loss 2.039737 Objective Loss 2.039737 LR 0.000200 Time 0.532122
-2024-05-11 05:27:05,249 - Epoch: [16][ 700/ 813] Overall Loss 2.034857 Objective Loss 2.034857 LR 0.000200 Time 0.530528
-2024-05-11 05:27:57,828 - Epoch: [16][ 800/ 813] Overall Loss 2.031690 Objective Loss 2.031690 LR 0.000200 Time 0.529931
-2024-05-11 05:28:03,386 - Epoch: [16][ 813/ 813] Overall Loss 2.030716 Objective Loss 2.030716 LR 0.000200 Time 0.528293
-2024-05-11 05:28:03,420 - --- validate (epoch=16)-----------
-2024-05-11 05:28:03,421 - 3250 samples (16 per mini-batch)
-2024-05-11 05:28:03,422 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 05:28:59,469 - Epoch: [16][ 100/ 204] Loss 2.043303 mAP 0.851000
-2024-05-11 05:29:52,076 - Epoch: [16][ 200/ 204] Loss 2.050525 mAP 0.841166
-2024-05-11 05:29:52,391 - Epoch: [16][ 204/ 204] Loss 2.048552 mAP 0.841172
-2024-05-11 05:29:52,416 - ==> mAP: 0.84117 Loss: 2.049
-
-2024-05-11 05:29:52,418 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 05:29:52,418 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 05:29:52,443 - 
-
-2024-05-11 05:29:52,443 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 05:30:49,063 - Epoch: [17][ 100/ 813] Overall Loss 1.994020 Objective Loss 1.994020 LR 0.000200 Time 0.566174
-2024-05-11 05:31:41,938 - Epoch: [17][ 200/ 813] Overall Loss 1.983020 Objective Loss 1.983020 LR 0.000200 Time 0.547456
-2024-05-11 05:32:34,504 - Epoch: [17][ 300/ 813] Overall Loss 1.997261 Objective Loss 1.997261 LR 0.000200 Time 0.540111
-2024-05-11 05:33:23,537 - Epoch: [17][ 400/ 813] Overall Loss 1.995912 Objective Loss 1.995912 LR 0.000200 Time 0.527658
-2024-05-11 05:34:16,493 - Epoch: [17][ 500/ 813] Overall Loss 1.995735 Objective Loss 1.995735 LR 0.000200 Time 0.528021
-2024-05-11 05:35:09,976 - Epoch: [17][ 600/ 813] Overall Loss 1.992281 Objective Loss 1.992281 LR 0.000200 Time 0.529147
-2024-05-11 05:36:03,295 - Epoch: [17][ 700/ 813] Overall Loss 1.992092 Objective Loss 1.992092 LR 0.000200 Time 0.529723
-2024-05-11 05:36:55,719 - Epoch: [17][ 800/ 813] Overall Loss 1.985115 Objective Loss 1.985115 LR 0.000200 Time 0.529035
-2024-05-11 05:37:00,850 - Epoch: [17][ 813/ 813] Overall Loss 1.985128 Objective Loss 1.985128 LR 0.000200 Time 0.526886
-2024-05-11 05:37:00,876 - --- validate (epoch=17)-----------
-2024-05-11 05:37:00,877 - 3250 samples (16 per mini-batch)
-2024-05-11 05:37:00,878 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 05:37:56,385 - Epoch: [17][ 100/ 204] Loss 2.012869 mAP 0.879087
-2024-05-11 05:38:49,101 - Epoch: [17][ 200/ 204] Loss 2.002012 mAP 0.879253
-2024-05-11 05:38:49,675 - Epoch: [17][ 204/ 204] Loss 2.000230 mAP 0.879245
-2024-05-11 05:38:49,701 - ==> mAP: 0.87924 Loss: 2.000
-
-2024-05-11 05:38:49,704 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 05:38:49,704 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 05:38:49,728 - 
-
-2024-05-11 05:38:49,728 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 05:39:45,702 - Epoch: [18][ 100/ 813] Overall Loss 1.963162 Objective Loss 1.963162 LR 0.000200 Time 0.559714
-2024-05-11 05:40:38,286 - Epoch: [18][ 200/ 813] Overall Loss 1.950384 Objective Loss 1.950384 LR 0.000200 Time 0.542746
-2024-05-11 05:41:30,607 - Epoch: [18][ 300/ 813] Overall Loss 1.959739 Objective Loss 1.959739 LR 0.000200 Time 0.536207
-2024-05-11 05:42:21,456 - Epoch: [18][ 400/ 813] Overall Loss 1.958439 Objective Loss 1.958439 LR 0.000200 Time 0.529274
-2024-05-11 05:43:13,034 - Epoch: [18][ 500/ 813] Overall Loss 1.951825 Objective Loss 1.951825 LR 0.000200 Time 0.526559
-2024-05-11 05:44:06,950 - Epoch: [18][ 600/ 813] Overall Loss 1.945294 Objective Loss 1.945294 LR 0.000200 Time 0.528650
-2024-05-11 05:44:59,334 - Epoch: [18][ 700/ 813] Overall Loss 1.941085 Objective Loss 1.941085 LR 0.000200 Time 0.527962
-2024-05-11 05:45:52,825 - Epoch: [18][ 800/ 813] Overall Loss 1.938058 Objective Loss 1.938058 LR 0.000200 Time 0.528828
-2024-05-11 05:45:57,943 - Epoch: [18][ 813/ 813] Overall Loss 1.937840 Objective Loss 1.937840 LR 0.000200 Time 0.526667
-2024-05-11 05:45:57,968 - --- validate (epoch=18)-----------
-2024-05-11 05:45:57,969 - 3250 samples (16 per mini-batch)
-2024-05-11 05:45:57,970 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 05:46:53,329 - Epoch: [18][ 100/ 204] Loss 1.926444 mAP 0.878738
-2024-05-11 05:47:47,154 - Epoch: [18][ 200/ 204] Loss 1.934458 mAP 0.878820
-2024-05-11 05:47:47,485 - Epoch: [18][ 204/ 204] Loss 1.935007 mAP 0.878775
-2024-05-11 05:47:47,510 - ==> mAP: 0.87877 Loss: 1.935
-
-2024-05-11 05:47:47,513 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 05:47:47,513 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 05:47:47,538 - 
-
-2024-05-11 05:47:47,538 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 05:48:42,631 - Epoch: [19][ 100/ 813] Overall Loss 1.900590 Objective Loss 1.900590 LR 0.000200 Time 0.550902
-2024-05-11 05:49:37,077 - Epoch: [19][ 200/ 813] Overall Loss 1.907077 Objective Loss 1.907077 LR 0.000200 Time 0.547677
-2024-05-11 05:50:29,878 - Epoch: [19][ 300/ 813] Overall Loss 1.909780 Objective Loss 1.909780 LR 0.000200 Time 0.541097
-2024-05-11 05:51:20,440 - Epoch: [19][ 400/ 813] Overall Loss 1.913881 Objective Loss 1.913881 LR 0.000200 Time 0.532153
-2024-05-11 05:52:13,896 - Epoch: [19][ 500/ 813] Overall Loss 1.908191 Objective Loss 1.908191 LR 0.000200 Time 0.532631
-2024-05-11 05:53:06,935 - Epoch: [19][ 600/ 813] Overall Loss 1.910210 Objective Loss 1.910210 LR 0.000200 Time 0.532255
-2024-05-11 05:53:59,956 - Epoch: [19][ 700/ 813] Overall Loss 1.910796 Objective Loss 1.910796 LR 0.000200 Time 0.531961
-2024-05-11 05:54:51,772 - Epoch: [19][ 800/ 813] Overall Loss 1.905258 Objective Loss 1.905258 LR 0.000200 Time 0.530225
-2024-05-11 05:54:57,842 - Epoch: [19][ 813/ 813] Overall Loss 1.905486 Objective Loss 1.905486 LR 0.000200 Time 0.529212
-2024-05-11 05:54:57,869 - --- validate (epoch=19)-----------
-2024-05-11 05:54:57,869 - 3250 samples (16 per mini-batch)
-2024-05-11 05:54:57,870 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 05:55:54,540 - Epoch: [19][ 100/ 204] Loss 1.919830 mAP 0.850434
-2024-05-11 05:56:46,892 - Epoch: [19][ 200/ 204] Loss 1.926336 mAP 0.850343
-2024-05-11 05:56:47,071 - Epoch: [19][ 204/ 204] Loss 1.924686 mAP 0.850359
-2024-05-11 05:56:47,096 - ==> mAP: 0.85036 Loss: 1.925
-
-2024-05-11 05:56:47,101 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 05:56:47,101 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 05:56:47,128 - 
-
-2024-05-11 05:56:47,128 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 05:57:44,018 - Epoch: [20][ 100/ 813] Overall Loss 1.889916 Objective Loss 1.889916 LR 0.000200 Time 0.568870
-2024-05-11 05:58:35,409 - Epoch: [20][ 200/ 813] Overall Loss 1.879123 Objective Loss 1.879123 LR 0.000200 Time 0.541352
-2024-05-11 05:59:29,114 - Epoch: [20][ 300/ 813] Overall Loss 1.882823 Objective Loss 1.882823 LR 0.000200 Time 0.539899
-2024-05-11 06:00:19,269 - Epoch: [20][ 400/ 813] Overall Loss 1.874272 Objective Loss 1.874272 LR 0.000200 Time 0.530299
-2024-05-11 06:01:11,773 - Epoch: [20][ 500/ 813] Overall Loss 1.879463 Objective Loss 1.879463 LR 0.000200 Time 0.529244
-2024-05-11 06:02:05,432 - Epoch: [20][ 600/ 813] Overall Loss 1.876814 Objective Loss 1.876814 LR 0.000200 Time 0.530466
-2024-05-11 06:02:58,601 - Epoch: [20][ 700/ 813] Overall Loss 1.872808 Objective Loss 1.872808 LR 0.000200 Time 0.530639
-2024-05-11 06:03:50,135 - Epoch: [20][ 800/ 813] Overall Loss 1.870860 Objective Loss 1.870860 LR 0.000200 Time 0.528724
-2024-05-11 06:03:55,385 - Epoch: [20][ 813/ 813] Overall Loss 1.871376 Objective Loss 1.871376 LR 0.000200 Time 0.526727
-2024-05-11 06:03:55,411 - --- validate (epoch=20)-----------
-2024-05-11 06:03:55,411 - 3250 samples (16 per mini-batch)
-2024-05-11 06:03:55,413 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 06:04:49,737 - Epoch: [20][ 100/ 204] Loss 1.900064 mAP 0.850182
-2024-05-11 06:05:42,474 - Epoch: [20][ 200/ 204] Loss 1.884574 mAP 0.860194
-2024-05-11 06:05:42,660 - Epoch: [20][ 204/ 204] Loss 1.881205 mAP 0.860216
-2024-05-11 06:05:42,685 - ==> mAP: 0.86022 Loss: 1.881
-
-2024-05-11 06:05:42,687 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 06:05:42,687 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 06:05:42,712 - 
-
-2024-05-11 06:05:42,712 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 06:06:38,644 - Epoch: [21][ 100/ 813] Overall Loss 1.863617 Objective Loss 1.863617 LR 0.000200 Time 0.559295
-2024-05-11 06:07:32,530 - Epoch: [21][ 200/ 813] Overall Loss 1.848845 Objective Loss 1.848845 LR 0.000200 Time 0.549044
-2024-05-11 06:08:25,557 - Epoch: [21][ 300/ 813] Overall Loss 1.859142 Objective Loss 1.859142 LR 0.000200 Time 0.542781
-2024-05-11 06:09:15,975 - Epoch: [21][ 400/ 813] Overall Loss 1.853418 Objective Loss 1.853418 LR 0.000200 Time 0.533119
-2024-05-11 06:10:07,429 - Epoch: [21][ 500/ 813] Overall Loss 1.851055 Objective Loss 1.851055 LR 0.000200 Time 0.529394
-2024-05-11 06:11:02,137 - Epoch: [21][ 600/ 813] Overall Loss 1.848780 Objective Loss 1.848780 LR 0.000200 Time 0.532338
-2024-05-11 06:11:54,114 - Epoch: [21][ 700/ 813] Overall Loss 1.844253 Objective Loss 1.844253 LR 0.000200 Time 0.530541
-2024-05-11 06:12:46,114 - Epoch: [21][ 800/ 813] Overall Loss 1.837036 Objective Loss 1.837036 LR 0.000200 Time 0.529215
-2024-05-11 06:12:51,842 - Epoch: [21][ 813/ 813] Overall Loss 1.836304 Objective Loss 1.836304 LR 0.000200 Time 0.527798
-2024-05-11 06:12:51,869 - --- validate (epoch=21)-----------
-2024-05-11 06:12:51,870 - 3250 samples (16 per mini-batch)
-2024-05-11 06:12:51,873 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 06:13:47,446 - Epoch: [21][ 100/ 204] Loss 1.810496 mAP 0.879160
-2024-05-11 06:14:39,817 - Epoch: [21][ 200/ 204] Loss 1.818099 mAP 0.879028
-2024-05-11 06:14:40,984 - Epoch: [21][ 204/ 204] Loss 1.815664 mAP 0.879073
-2024-05-11 06:14:41,008 - ==> mAP: 0.87907 Loss: 1.816
-
-2024-05-11 06:14:41,011 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 06:14:41,011 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 06:14:41,036 - 
-
-2024-05-11 06:14:41,036 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 06:15:36,301 - Epoch: [22][ 100/ 813] Overall Loss 1.793960 Objective Loss 1.793960 LR 0.000200 Time 0.552621
-2024-05-11 06:16:29,274 - Epoch: [22][ 200/ 813] Overall Loss 1.786006 Objective Loss 1.786006 LR 0.000200 Time 0.541172
-2024-05-11 06:17:22,356 - Epoch: [22][ 300/ 813] Overall Loss 1.806188 Objective Loss 1.806188 LR 0.000200 Time 0.537714
-2024-05-11 06:18:13,197 - Epoch: [22][ 400/ 813] Overall Loss 1.800505 Objective Loss 1.800505 LR 0.000200 Time 0.530386
-2024-05-11 06:19:04,903 - Epoch: [22][ 500/ 813] Overall Loss 1.800698 Objective Loss 1.800698 LR 0.000200 Time 0.527716
-2024-05-11 06:19:57,938 - Epoch: [22][ 600/ 813] Overall Loss 1.801239 Objective Loss 1.801239 LR 0.000200 Time 0.528130
-2024-05-11 06:20:50,483 - Epoch: [22][ 700/ 813] Overall Loss 1.792815 Objective Loss 1.792815 LR 0.000200 Time 0.527745
-2024-05-11 06:21:42,104 - Epoch: [22][ 800/ 813] Overall Loss 1.789564 Objective Loss 1.789564 LR 0.000200 Time 0.526293
-2024-05-11 06:21:47,389 - Epoch: [22][ 813/ 813] Overall Loss 1.788436 Objective Loss 1.788436 LR 0.000200 Time 0.524378
-2024-05-11 06:21:47,414 - --- validate (epoch=22)-----------
-2024-05-11 06:21:47,415 - 3250 samples (16 per mini-batch)
-2024-05-11 06:21:47,416 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 06:22:43,758 - Epoch: [22][ 100/ 204] Loss 1.798302 mAP 0.869160
-2024-05-11 06:23:36,209 - Epoch: [22][ 200/ 204] Loss 1.802069 mAP 0.869250
-2024-05-11 06:23:36,404 - Epoch: [22][ 204/ 204] Loss 1.800040 mAP 0.869282
-2024-05-11 06:23:36,430 - ==> mAP: 0.86928 Loss: 1.800
-
-2024-05-11 06:23:36,432 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 06:23:36,433 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 06:23:36,457 - 
-
-2024-05-11 06:23:36,457 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 06:24:31,205 - Epoch: [23][ 100/ 813] Overall Loss 1.738549 Objective Loss 1.738549 LR 0.000200 Time 0.547461
-2024-05-11 06:25:27,847 - Epoch: [23][ 200/ 813] Overall Loss 1.745658 Objective Loss 1.745658 LR 0.000200 Time 0.556929
-2024-05-11 06:26:19,308 - Epoch: [23][ 300/ 813] Overall Loss 1.759877 Objective Loss 1.759877 LR 0.000200 Time 0.542792
-2024-05-11 06:27:09,728 - Epoch: [23][ 400/ 813] Overall Loss 1.755965 Objective Loss 1.755965 LR 0.000200 Time 0.533135
-2024-05-11 06:28:01,453 - Epoch: [23][ 500/ 813] Overall Loss 1.762635 Objective Loss 1.762635 LR 0.000200 Time 0.529955
-2024-05-11 06:28:55,439 - Epoch: [23][ 600/ 813] Overall Loss 1.768196 Objective Loss 1.768196 LR 0.000200 Time 0.531540
-2024-05-11 06:29:48,696 - Epoch: [23][ 700/ 813] Overall Loss 1.764650 Objective Loss 1.764650 LR 0.000200 Time 0.531680
-2024-05-11 06:30:41,843 - Epoch: [23][ 800/ 813] Overall Loss 1.760543 Objective Loss 1.760543 LR 0.000200 Time 0.531652
-2024-05-11 06:30:46,915 - Epoch: [23][ 813/ 813] Overall Loss 1.762698 Objective Loss 1.762698 LR 0.000200 Time 0.529389
-2024-05-11 06:30:46,941 - --- validate (epoch=23)-----------
-2024-05-11 06:30:46,942 - 3250 samples (16 per mini-batch)
-2024-05-11 06:30:46,943 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 06:31:41,801 - Epoch: [23][ 100/ 204] Loss 1.750666 mAP 0.879363
-2024-05-11 06:32:33,567 - Epoch: [23][ 200/ 204] Loss 1.770428 mAP 0.869467
-2024-05-11 06:32:34,143 - Epoch: [23][ 204/ 204] Loss 1.769687 mAP 0.869464
-2024-05-11 06:32:34,167 - ==> mAP: 0.86946 Loss: 1.770
-
-2024-05-11 06:32:34,169 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 06:32:34,170 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 06:32:34,194 - 
-
-2024-05-11 06:32:34,194 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 06:33:28,704 - Epoch: [24][ 100/ 813] Overall Loss 1.737877 Objective Loss 1.737877 LR 0.000200 Time 0.545072
-2024-05-11 06:34:22,210 - Epoch: [24][ 200/ 813] Overall Loss 1.716627 Objective Loss 1.716627 LR 0.000200 Time 0.540043
-2024-05-11 06:35:14,944 - Epoch: [24][ 300/ 813] Overall Loss 1.724068 Objective Loss 1.724068 LR 0.000200 Time 0.535804
-2024-05-11 06:36:05,661 - Epoch: [24][ 400/ 813] Overall Loss 1.726042 Objective Loss 1.726042 LR 0.000200 Time 0.528635
-2024-05-11 06:36:57,677 - Epoch: [24][ 500/ 813] Overall Loss 1.727196 Objective Loss 1.727196 LR 0.000200 Time 0.526936
-2024-05-11 06:37:51,067 - Epoch: [24][ 600/ 813] Overall Loss 1.727001 Objective Loss 1.727001 LR 0.000200 Time 0.528084
-2024-05-11 06:38:44,484 - Epoch: [24][ 700/ 813] Overall Loss 1.725120 Objective Loss 1.725120 LR 0.000200 Time 0.528950
-2024-05-11 06:39:38,663 - Epoch: [24][ 800/ 813] Overall Loss 1.722814 Objective Loss 1.722814 LR 0.000200 Time 0.530554
-2024-05-11 06:39:43,069 - Epoch: [24][ 813/ 813] Overall Loss 1.723358 Objective Loss 1.723358 LR 0.000200 Time 0.527488
-2024-05-11 06:39:43,094 - --- validate (epoch=24)-----------
-2024-05-11 06:39:43,095 - 3250 samples (16 per mini-batch)
-2024-05-11 06:39:43,096 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 06:40:38,636 - Epoch: [24][ 100/ 204] Loss 1.751206 mAP 0.880409
-2024-05-11 06:41:32,219 - Epoch: [24][ 200/ 204] Loss 1.752977 mAP 0.879962
-2024-05-11 06:41:32,401 - Epoch: [24][ 204/ 204] Loss 1.753204 mAP 0.879967
-2024-05-11 06:41:32,426 - ==> mAP: 0.87997 Loss: 1.753
-
-2024-05-11 06:41:32,429 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 06:41:32,429 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 06:41:32,454 - 
-
-2024-05-11 06:41:32,454 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 06:42:27,782 - Epoch: [25][ 100/ 813] Overall Loss 1.656398 Objective Loss 1.656398 LR 0.000100 Time 0.553256
-2024-05-11 06:43:22,162 - Epoch: [25][ 200/ 813] Overall Loss 1.649665 Objective Loss 1.649665 LR 0.000100 Time 0.548522
-2024-05-11 06:44:14,632 - Epoch: [25][ 300/ 813] Overall Loss 1.658117 Objective Loss 1.658117 LR 0.000100 Time 0.540576
-2024-05-11 06:45:05,528 - Epoch: [25][ 400/ 813] Overall Loss 1.660473 Objective Loss 1.660473 LR 0.000100 Time 0.532668
-2024-05-11 06:45:57,036 - Epoch: [25][ 500/ 813] Overall Loss 1.664400 Objective Loss 1.664400 LR 0.000100 Time 0.529149
-2024-05-11 06:46:51,974 - Epoch: [25][ 600/ 813] Overall Loss 1.665962 Objective Loss 1.665962 LR 0.000100 Time 0.532518
-2024-05-11 06:47:44,148 - Epoch: [25][ 700/ 813] Overall Loss 1.667505 Objective Loss 1.667505 LR 0.000100 Time 0.530975
-2024-05-11 06:48:35,290 - Epoch: [25][ 800/ 813] Overall Loss 1.665666 Objective Loss 1.665666 LR 0.000100 Time 0.528526
-2024-05-11 06:48:41,164 - Epoch: [25][ 813/ 813] Overall Loss 1.664882 Objective Loss 1.664882 LR 0.000100 Time 0.527300
-2024-05-11 06:48:41,191 - --- validate (epoch=25)-----------
-2024-05-11 06:48:41,191 - 3250 samples (16 per mini-batch)
-2024-05-11 06:48:41,193 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 06:49:36,637 - Epoch: [25][ 100/ 204] Loss 1.709663 mAP 0.878377
-2024-05-11 06:50:28,581 - Epoch: [25][ 200/ 204] Loss 1.708104 mAP 0.868190
-2024-05-11 06:50:29,418 - Epoch: [25][ 204/ 204] Loss 1.715792 mAP 0.868211
-2024-05-11 06:50:29,442 - ==> mAP: 0.86821 Loss: 1.716
-
-2024-05-11 06:50:29,446 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 06:50:29,447 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 06:50:29,472 - 
-
-2024-05-11 06:50:29,472 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 06:51:25,993 - Epoch: [26][ 100/ 813] Overall Loss 1.634204 Objective Loss 1.634204 LR 0.000100 Time 0.565192
-2024-05-11 06:52:18,929 - Epoch: [26][ 200/ 813] Overall Loss 1.639075 Objective Loss 1.639075 LR 0.000100 Time 0.547270
-2024-05-11 06:53:10,805 - Epoch: [26][ 300/ 813] Overall Loss 1.656330 Objective Loss 1.656330 LR 0.000100 Time 0.537758
-2024-05-11 06:54:01,740 - Epoch: [26][ 400/ 813] Overall Loss 1.659945 Objective Loss 1.659945 LR 0.000100 Time 0.530652
-2024-05-11 06:54:54,233 - Epoch: [26][ 500/ 813] Overall Loss 1.658194 Objective Loss 1.658194 LR 0.000100 Time 0.529506
-2024-05-11 06:55:47,843 - Epoch: [26][ 600/ 813] Overall Loss 1.660197 Objective Loss 1.660197 LR 0.000100 Time 0.530591
-2024-05-11 06:56:40,602 - Epoch: [26][ 700/ 813] Overall Loss 1.656886 Objective Loss 1.656886 LR 0.000100 Time 0.530161
-2024-05-11 06:57:32,862 - Epoch: [26][ 800/ 813] Overall Loss 1.653860 Objective Loss 1.653860 LR 0.000100 Time 0.529214
-2024-05-11 06:57:37,789 - Epoch: [26][ 813/ 813] Overall Loss 1.653028 Objective Loss 1.653028 LR 0.000100 Time 0.526811
-2024-05-11 06:57:37,814 - --- validate (epoch=26)-----------
-2024-05-11 06:57:37,815 - 3250 samples (16 per mini-batch)
-2024-05-11 06:57:37,816 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 06:58:31,979 - Epoch: [26][ 100/ 204] Loss 1.659689 mAP 0.880087
-2024-05-11 06:59:25,987 - Epoch: [26][ 200/ 204] Loss 1.655337 mAP 0.879377
-2024-05-11 06:59:26,517 - Epoch: [26][ 204/ 204] Loss 1.656499 mAP 0.869625
-2024-05-11 06:59:26,542 - ==> mAP: 0.86962 Loss: 1.656
-
-2024-05-11 06:59:26,545 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 06:59:26,545 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 06:59:26,570 - 
-
-2024-05-11 06:59:26,570 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 07:00:21,506 - Epoch: [27][ 100/ 813] Overall Loss 1.631272 Objective Loss 1.631272 LR 0.000100 Time 0.549332
-2024-05-11 07:01:15,729 - Epoch: [27][ 200/ 813] Overall Loss 1.625177 Objective Loss 1.625177 LR 0.000100 Time 0.545775
-2024-05-11 07:02:08,873 - Epoch: [27][ 300/ 813] Overall Loss 1.637732 Objective Loss 1.637732 LR 0.000100 Time 0.540992
-2024-05-11 07:02:59,197 - Epoch: [27][ 400/ 813] Overall Loss 1.639437 Objective Loss 1.639437 LR 0.000100 Time 0.531549
-2024-05-11 07:03:52,179 - Epoch: [27][ 500/ 813] Overall Loss 1.640810 Objective Loss 1.640810 LR 0.000100 Time 0.531201
-2024-05-11 07:04:46,881 - Epoch: [27][ 600/ 813] Overall Loss 1.640000 Objective Loss 1.640000 LR 0.000100 Time 0.533828
-2024-05-11 07:05:38,587 - Epoch: [27][ 700/ 813] Overall Loss 1.636306 Objective Loss 1.636306 LR 0.000100 Time 0.531432
-2024-05-11 07:06:30,205 - Epoch: [27][ 800/ 813] Overall Loss 1.633981 Objective Loss 1.633981 LR 0.000100 Time 0.529511
-2024-05-11 07:06:36,083 - Epoch: [27][ 813/ 813] Overall Loss 1.634332 Objective Loss 1.634332 LR 0.000100 Time 0.528273
-2024-05-11 07:06:36,109 - --- validate (epoch=27)-----------
-2024-05-11 07:06:36,110 - 3250 samples (16 per mini-batch)
-2024-05-11 07:06:36,111 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 07:07:30,646 - Epoch: [27][ 100/ 204] Loss 1.660441 mAP 0.859176
-2024-05-11 07:08:24,027 - Epoch: [27][ 200/ 204] Loss 1.654880 mAP 0.879301
-2024-05-11 07:08:24,763 - Epoch: [27][ 204/ 204] Loss 1.656489 mAP 0.879308
-2024-05-11 07:08:24,791 - ==> mAP: 0.87931 Loss: 1.656
-
-2024-05-11 07:08:24,794 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13]
-2024-05-11 07:08:24,794 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar
-2024-05-11 07:08:24,818 - 
-
-2024-05-11 07:08:24,819 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 07:09:19,507 - Epoch: [28][ 100/ 813] Overall Loss 1.604495 Objective Loss 1.604495 LR 0.000100 Time 0.546868
-2024-05-11 07:10:14,376 - Epoch: [28][ 200/ 813] Overall Loss 1.608577 Objective Loss 1.608577 LR 0.000100 Time 0.547768
-2024-05-11 07:11:07,509 - Epoch: [28][ 300/ 813] Overall Loss 1.621631 Objective Loss 1.621631 LR 0.000100 Time 0.542284
-2024-05-11 07:11:58,142 - Epoch: [28][ 400/ 813] Overall Loss 1.620451 Objective Loss 1.620451 LR 0.000100 Time 0.533292
-2024-05-11 07:12:50,117 - Epoch: [28][ 500/ 813] Overall Loss 1.619229 Objective 
Loss 1.619229 LR 0.000100 Time 0.530480 -2024-05-11 07:13:44,648 - Epoch: [28][ 600/ 813] Overall Loss 1.621264 Objective Loss 1.621264 LR 0.000100 Time 0.532948 -2024-05-11 07:14:37,276 - Epoch: [28][ 700/ 813] Overall Loss 1.620422 Objective Loss 1.620422 LR 0.000100 Time 0.531994 -2024-05-11 07:15:28,214 - Epoch: [28][ 800/ 813] Overall Loss 1.621539 Objective Loss 1.621539 LR 0.000100 Time 0.529165 -2024-05-11 07:15:34,044 - Epoch: [28][ 813/ 813] Overall Loss 1.621366 Objective Loss 1.621366 LR 0.000100 Time 0.527874 -2024-05-11 07:15:34,075 - --- validate (epoch=28)----------- -2024-05-11 07:15:34,075 - 3250 samples (16 per mini-batch) -2024-05-11 07:15:34,077 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 07:16:27,658 - Epoch: [28][ 100/ 204] Loss 1.641786 mAP 0.877794 -2024-05-11 07:17:21,241 - Epoch: [28][ 200/ 204] Loss 1.637492 mAP 0.878124 -2024-05-11 07:17:21,671 - Epoch: [28][ 204/ 204] Loss 1.637013 mAP 0.878147 -2024-05-11 07:17:21,697 - ==> mAP: 0.87815 Loss: 1.637 - -2024-05-11 07:17:21,700 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13] -2024-05-11 07:17:21,700 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 07:17:21,725 - - -2024-05-11 07:17:21,725 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 07:18:16,531 - Epoch: [29][ 100/ 813] Overall Loss 1.588476 Objective Loss 1.588476 LR 0.000100 Time 0.548039 -2024-05-11 07:19:10,357 - Epoch: [29][ 200/ 813] Overall Loss 1.596301 Objective Loss 1.596301 LR 0.000100 Time 0.543122 -2024-05-11 07:20:02,241 - Epoch: [29][ 300/ 813] Overall Loss 1.603796 Objective Loss 1.603796 LR 0.000100 Time 0.535014 -2024-05-11 07:20:53,342 - Epoch: [29][ 400/ 813] Overall Loss 1.608212 Objective Loss 1.608212 LR 0.000100 Time 0.529008 -2024-05-11 07:21:45,737 - Epoch: [29][ 500/ 813] Overall Loss 1.603609 Objective Loss 1.603609 LR 0.000100 Time 0.527994 
-2024-05-11 07:22:39,499 - Epoch: [29][ 600/ 813] Overall Loss 1.604841 Objective Loss 1.604841 LR 0.000100 Time 0.529595 -2024-05-11 07:23:32,324 - Epoch: [29][ 700/ 813] Overall Loss 1.605857 Objective Loss 1.605857 LR 0.000100 Time 0.529398 -2024-05-11 07:24:25,024 - Epoch: [29][ 800/ 813] Overall Loss 1.601988 Objective Loss 1.601988 LR 0.000100 Time 0.529096 -2024-05-11 07:24:29,770 - Epoch: [29][ 813/ 813] Overall Loss 1.600932 Objective Loss 1.600932 LR 0.000100 Time 0.526469 -2024-05-11 07:24:29,795 - --- validate (epoch=29)----------- -2024-05-11 07:24:29,796 - 3250 samples (16 per mini-batch) -2024-05-11 07:24:29,797 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 07:25:26,011 - Epoch: [29][ 100/ 204] Loss 1.632972 mAP 0.879581 -2024-05-11 07:26:19,196 - Epoch: [29][ 200/ 204] Loss 1.626281 mAP 0.879526 -2024-05-11 07:26:19,854 - Epoch: [29][ 204/ 204] Loss 1.635412 mAP 0.879537 -2024-05-11 07:26:19,879 - ==> mAP: 0.87954 Loss: 1.635 - -2024-05-11 07:26:19,882 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13] -2024-05-11 07:26:19,882 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 07:26:19,907 - - -2024-05-11 07:26:19,907 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 07:27:16,000 - Epoch: [30][ 100/ 813] Overall Loss 1.562159 Objective Loss 1.562159 LR 0.000100 Time 0.560906 -2024-05-11 07:28:08,879 - Epoch: [30][ 200/ 813] Overall Loss 1.563570 Objective Loss 1.563570 LR 0.000100 Time 0.544823 -2024-05-11 07:29:02,380 - Epoch: [30][ 300/ 813] Overall Loss 1.579061 Objective Loss 1.579061 LR 0.000100 Time 0.541548 -2024-05-11 07:29:53,900 - Epoch: [30][ 400/ 813] Overall Loss 1.578599 Objective Loss 1.578599 LR 0.000100 Time 0.534957 -2024-05-11 07:30:46,293 - Epoch: [30][ 500/ 813] Overall Loss 1.580977 Objective Loss 1.580977 LR 0.000100 Time 0.532749 -2024-05-11 07:31:40,777 - Epoch: [30][ 
600/ 813] Overall Loss 1.583787 Objective Loss 1.583787 LR 0.000100 Time 0.534762 -2024-05-11 07:32:32,440 - Epoch: [30][ 700/ 813] Overall Loss 1.576057 Objective Loss 1.576057 LR 0.000100 Time 0.532169 -2024-05-11 07:33:25,599 - Epoch: [30][ 800/ 813] Overall Loss 1.576939 Objective Loss 1.576939 LR 0.000100 Time 0.532095 -2024-05-11 07:33:30,936 - Epoch: [30][ 813/ 813] Overall Loss 1.576567 Objective Loss 1.576567 LR 0.000100 Time 0.530146 -2024-05-11 07:33:30,962 - --- validate (epoch=30)----------- -2024-05-11 07:33:30,962 - 3250 samples (16 per mini-batch) -2024-05-11 07:33:30,963 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 07:34:26,525 - Epoch: [30][ 100/ 204] Loss 1.636609 mAP 0.869404 -2024-05-11 07:35:20,502 - Epoch: [30][ 200/ 204] Loss 1.623377 mAP 0.878861 -2024-05-11 07:35:21,198 - Epoch: [30][ 204/ 204] Loss 1.622501 mAP 0.878861 -2024-05-11 07:35:21,223 - ==> mAP: 0.87886 Loss: 1.623 - -2024-05-11 07:35:21,226 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13] -2024-05-11 07:35:21,226 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 07:35:21,251 - - -2024-05-11 07:35:21,251 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 07:36:17,355 - Epoch: [31][ 100/ 813] Overall Loss 1.559302 Objective Loss 1.559302 LR 0.000100 Time 0.561016 -2024-05-11 07:37:10,870 - Epoch: [31][ 200/ 813] Overall Loss 1.555705 Objective Loss 1.555705 LR 0.000100 Time 0.548075 -2024-05-11 07:38:04,673 - Epoch: [31][ 300/ 813] Overall Loss 1.580240 Objective Loss 1.580240 LR 0.000100 Time 0.544722 -2024-05-11 07:38:54,504 - Epoch: [31][ 400/ 813] Overall Loss 1.584565 Objective Loss 1.584565 LR 0.000100 Time 0.533115 -2024-05-11 07:39:48,568 - Epoch: [31][ 500/ 813] Overall Loss 1.584545 Objective Loss 1.584545 LR 0.000100 Time 0.534603 -2024-05-11 07:40:41,349 - Epoch: [31][ 600/ 813] Overall Loss 1.581199 Objective 
Loss 1.581199 LR 0.000100 Time 0.533469 -2024-05-11 07:41:33,709 - Epoch: [31][ 700/ 813] Overall Loss 1.580878 Objective Loss 1.580878 LR 0.000100 Time 0.532039 -2024-05-11 07:42:25,309 - Epoch: [31][ 800/ 813] Overall Loss 1.580275 Objective Loss 1.580275 LR 0.000100 Time 0.530027 -2024-05-11 07:42:30,488 - Epoch: [31][ 813/ 813] Overall Loss 1.580081 Objective Loss 1.580081 LR 0.000100 Time 0.527916 -2024-05-11 07:42:30,513 - --- validate (epoch=31)----------- -2024-05-11 07:42:30,513 - 3250 samples (16 per mini-batch) -2024-05-11 07:42:30,514 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 07:43:25,506 - Epoch: [31][ 100/ 204] Loss 1.601106 mAP 0.879178 -2024-05-11 07:44:20,221 - Epoch: [31][ 200/ 204] Loss 1.605816 mAP 0.869109 -2024-05-11 07:44:20,743 - Epoch: [31][ 204/ 204] Loss 1.607205 mAP 0.869147 -2024-05-11 07:44:20,768 - ==> mAP: 0.86915 Loss: 1.607 - -2024-05-11 07:44:20,773 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13] -2024-05-11 07:44:20,773 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 07:44:20,798 - - -2024-05-11 07:44:20,798 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 07:45:17,174 - Epoch: [32][ 100/ 813] Overall Loss 1.539336 Objective Loss 1.539336 LR 0.000100 Time 0.563736 -2024-05-11 07:46:09,883 - Epoch: [32][ 200/ 813] Overall Loss 1.541760 Objective Loss 1.541760 LR 0.000100 Time 0.545409 -2024-05-11 07:47:01,920 - Epoch: [32][ 300/ 813] Overall Loss 1.560573 Objective Loss 1.560573 LR 0.000100 Time 0.537057 -2024-05-11 07:47:52,972 - Epoch: [32][ 400/ 813] Overall Loss 1.562417 Objective Loss 1.562417 LR 0.000100 Time 0.530409 -2024-05-11 07:48:44,232 - Epoch: [32][ 500/ 813] Overall Loss 1.565411 Objective Loss 1.565411 LR 0.000100 Time 0.526845 -2024-05-11 07:49:38,953 - Epoch: [32][ 600/ 813] Overall Loss 1.566700 Objective Loss 1.566700 LR 0.000100 Time 0.530229 
-2024-05-11 07:50:31,712 - Epoch: [32][ 700/ 813] Overall Loss 1.565750 Objective Loss 1.565750 LR 0.000100 Time 0.529850 -2024-05-11 07:51:24,695 - Epoch: [32][ 800/ 813] Overall Loss 1.563359 Objective Loss 1.563359 LR 0.000100 Time 0.529840 -2024-05-11 07:51:30,765 - Epoch: [32][ 813/ 813] Overall Loss 1.564363 Objective Loss 1.564363 LR 0.000100 Time 0.528834 -2024-05-11 07:51:30,791 - --- validate (epoch=32)----------- -2024-05-11 07:51:30,792 - 3250 samples (16 per mini-batch) -2024-05-11 07:51:30,793 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 07:52:24,984 - Epoch: [32][ 100/ 204] Loss 1.595037 mAP 0.879091 -2024-05-11 07:53:17,504 - Epoch: [32][ 200/ 204] Loss 1.591754 mAP 0.878837 -2024-05-11 07:53:18,046 - Epoch: [32][ 204/ 204] Loss 1.594754 mAP 0.878855 -2024-05-11 07:53:18,071 - ==> mAP: 0.87885 Loss: 1.595 - -2024-05-11 07:53:18,075 - ==> Best [mAP: 0.880280 vloss: 2.191081 Params: 368352 on epoch: 13] -2024-05-11 07:53:18,075 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 07:53:18,099 - - -2024-05-11 07:53:18,099 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 07:54:13,686 - Epoch: [33][ 100/ 813] Overall Loss 1.517837 Objective Loss 1.517837 LR 0.000100 Time 0.555844 -2024-05-11 07:55:06,178 - Epoch: [33][ 200/ 813] Overall Loss 1.522776 Objective Loss 1.522776 LR 0.000100 Time 0.540374 -2024-05-11 07:55:59,060 - Epoch: [33][ 300/ 813] Overall Loss 1.538558 Objective Loss 1.538558 LR 0.000100 Time 0.536507 -2024-05-11 07:56:49,851 - Epoch: [33][ 400/ 813] Overall Loss 1.540618 Objective Loss 1.540618 LR 0.000100 Time 0.529349 -2024-05-11 07:57:43,176 - Epoch: [33][ 500/ 813] Overall Loss 1.546612 Objective Loss 1.546612 LR 0.000100 Time 0.530128 -2024-05-11 07:58:35,935 - Epoch: [33][ 600/ 813] Overall Loss 1.548097 Objective Loss 1.548097 LR 0.000100 Time 0.529693 -2024-05-11 07:59:27,887 - Epoch: [33][ 
700/ 813] Overall Loss 1.548673 Objective Loss 1.548673 LR 0.000100 Time 0.528239 -2024-05-11 08:00:21,931 - Epoch: [33][ 800/ 813] Overall Loss 1.546876 Objective Loss 1.546876 LR 0.000100 Time 0.529753 -2024-05-11 08:00:26,425 - Epoch: [33][ 813/ 813] Overall Loss 1.545750 Objective Loss 1.545750 LR 0.000100 Time 0.526810 -2024-05-11 08:00:26,451 - --- validate (epoch=33)----------- -2024-05-11 08:00:26,451 - 3250 samples (16 per mini-batch) -2024-05-11 08:00:26,453 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 08:01:20,561 - Epoch: [33][ 100/ 204] Loss 1.573421 mAP 0.890285 -2024-05-11 08:02:15,837 - Epoch: [33][ 200/ 204] Loss 1.569714 mAP 0.880348 -2024-05-11 08:02:16,665 - Epoch: [33][ 204/ 204] Loss 1.569009 mAP 0.880357 -2024-05-11 08:02:16,690 - ==> mAP: 0.88036 Loss: 1.569 - -2024-05-11 08:02:16,694 - ==> Best [mAP: 0.880357 vloss: 1.569009 Params: 368352 on epoch: 33] -2024-05-11 08:02:16,694 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 08:02:16,723 - - -2024-05-11 08:02:16,723 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 08:03:12,393 - Epoch: [34][ 100/ 813] Overall Loss 1.490632 Objective Loss 1.490632 LR 0.000100 Time 0.556674 -2024-05-11 08:04:06,167 - Epoch: [34][ 200/ 813] Overall Loss 1.509663 Objective Loss 1.509663 LR 0.000100 Time 0.547197 -2024-05-11 08:04:57,639 - Epoch: [34][ 300/ 813] Overall Loss 1.526937 Objective Loss 1.526937 LR 0.000100 Time 0.536346 -2024-05-11 08:05:49,650 - Epoch: [34][ 400/ 813] Overall Loss 1.531281 Objective Loss 1.531281 LR 0.000100 Time 0.532282 -2024-05-11 08:06:42,115 - Epoch: [34][ 500/ 813] Overall Loss 1.535154 Objective Loss 1.535154 LR 0.000100 Time 0.530752 -2024-05-11 08:07:36,288 - Epoch: [34][ 600/ 813] Overall Loss 1.534614 Objective Loss 1.534614 LR 0.000100 Time 0.532579 -2024-05-11 08:08:29,627 - Epoch: [34][ 700/ 813] Overall Loss 1.537803 Objective 
Loss 1.537803 LR 0.000100 Time 0.532693 -2024-05-11 08:09:21,471 - Epoch: [34][ 800/ 813] Overall Loss 1.539472 Objective Loss 1.539472 LR 0.000100 Time 0.530909 -2024-05-11 08:09:26,900 - Epoch: [34][ 813/ 813] Overall Loss 1.539459 Objective Loss 1.539459 LR 0.000100 Time 0.529097 -2024-05-11 08:09:26,925 - --- validate (epoch=34)----------- -2024-05-11 08:09:26,925 - 3250 samples (16 per mini-batch) -2024-05-11 08:09:26,926 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 08:10:21,646 - Epoch: [34][ 100/ 204] Loss 1.556755 mAP 0.893832 -2024-05-11 08:11:16,747 - Epoch: [34][ 200/ 204] Loss 1.549186 mAP 0.896131 -2024-05-11 08:11:17,263 - Epoch: [34][ 204/ 204] Loss 1.552641 mAP 0.896102 -2024-05-11 08:11:17,290 - ==> mAP: 0.89610 Loss: 1.553 - -2024-05-11 08:11:17,293 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 08:11:17,293 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 08:11:17,322 - - -2024-05-11 08:11:17,322 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 08:12:12,686 - Epoch: [35][ 100/ 813] Overall Loss 1.495393 Objective Loss 1.495393 LR 0.000100 Time 0.553613 -2024-05-11 08:13:06,958 - Epoch: [35][ 200/ 813] Overall Loss 1.501399 Objective Loss 1.501399 LR 0.000100 Time 0.548142 -2024-05-11 08:13:59,443 - Epoch: [35][ 300/ 813] Overall Loss 1.521559 Objective Loss 1.521559 LR 0.000100 Time 0.540368 -2024-05-11 08:14:50,670 - Epoch: [35][ 400/ 813] Overall Loss 1.518771 Objective Loss 1.518771 LR 0.000100 Time 0.533341 -2024-05-11 08:15:43,918 - Epoch: [35][ 500/ 813] Overall Loss 1.523378 Objective Loss 1.523378 LR 0.000100 Time 0.533165 -2024-05-11 08:16:37,678 - Epoch: [35][ 600/ 813] Overall Loss 1.526588 Objective Loss 1.526588 LR 0.000100 Time 0.533902 -2024-05-11 08:17:30,788 - Epoch: [35][ 700/ 813] Overall Loss 1.531879 Objective Loss 1.531879 LR 0.000100 Time 0.533495 
-2024-05-11 08:18:23,247 - Epoch: [35][ 800/ 813] Overall Loss 1.529383 Objective Loss 1.529383 LR 0.000100 Time 0.532379 -2024-05-11 08:18:28,297 - Epoch: [35][ 813/ 813] Overall Loss 1.528684 Objective Loss 1.528684 LR 0.000100 Time 0.530078 -2024-05-11 08:18:28,322 - --- validate (epoch=35)----------- -2024-05-11 08:18:28,323 - 3250 samples (16 per mini-batch) -2024-05-11 08:18:28,324 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 08:19:25,109 - Epoch: [35][ 100/ 204] Loss 1.583292 mAP 0.867845 -2024-05-11 08:20:16,306 - Epoch: [35][ 200/ 204] Loss 1.578594 mAP 0.867154 -2024-05-11 08:20:17,166 - Epoch: [35][ 204/ 204] Loss 1.578332 mAP 0.867176 -2024-05-11 08:20:17,191 - ==> mAP: 0.86718 Loss: 1.578 - -2024-05-11 08:20:17,194 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 08:20:17,194 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 08:20:17,220 - - -2024-05-11 08:20:17,220 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 08:21:12,757 - Epoch: [36][ 100/ 813] Overall Loss 1.513740 Objective Loss 1.513740 LR 0.000100 Time 0.555338 -2024-05-11 08:22:06,934 - Epoch: [36][ 200/ 813] Overall Loss 1.506598 Objective Loss 1.506598 LR 0.000100 Time 0.548548 -2024-05-11 08:22:59,604 - Epoch: [36][ 300/ 813] Overall Loss 1.528036 Objective Loss 1.528036 LR 0.000100 Time 0.541237 -2024-05-11 08:23:49,851 - Epoch: [36][ 400/ 813] Overall Loss 1.526608 Objective Loss 1.526608 LR 0.000100 Time 0.531541 -2024-05-11 08:24:42,060 - Epoch: [36][ 500/ 813] Overall Loss 1.526003 Objective Loss 1.526003 LR 0.000100 Time 0.529647 -2024-05-11 08:25:35,045 - Epoch: [36][ 600/ 813] Overall Loss 1.524202 Objective Loss 1.524202 LR 0.000100 Time 0.529679 -2024-05-11 08:26:28,653 - Epoch: [36][ 700/ 813] Overall Loss 1.525886 Objective Loss 1.525886 LR 0.000100 Time 0.530592 -2024-05-11 08:27:20,690 - Epoch: [36][ 
800/ 813] Overall Loss 1.527290 Objective Loss 1.527290 LR 0.000100 Time 0.529312 -2024-05-11 08:27:26,661 - Epoch: [36][ 813/ 813] Overall Loss 1.527037 Objective Loss 1.527037 LR 0.000100 Time 0.528193 -2024-05-11 08:27:26,688 - --- validate (epoch=36)----------- -2024-05-11 08:27:26,688 - 3250 samples (16 per mini-batch) -2024-05-11 08:27:26,690 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 08:28:21,550 - Epoch: [36][ 100/ 204] Loss 1.558106 mAP 0.879650 -2024-05-11 08:29:15,479 - Epoch: [36][ 200/ 204] Loss 1.553163 mAP 0.879521 -2024-05-11 08:29:15,666 - Epoch: [36][ 204/ 204] Loss 1.551751 mAP 0.879577 -2024-05-11 08:29:15,691 - ==> mAP: 0.87958 Loss: 1.552 - -2024-05-11 08:29:15,694 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 08:29:15,694 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 08:29:15,718 - - -2024-05-11 08:29:15,719 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 08:30:11,932 - Epoch: [37][ 100/ 813] Overall Loss 1.490096 Objective Loss 1.490096 LR 0.000100 Time 0.562058 -2024-05-11 08:31:05,785 - Epoch: [37][ 200/ 813] Overall Loss 1.490735 Objective Loss 1.490735 LR 0.000100 Time 0.550289 -2024-05-11 08:31:57,835 - Epoch: [37][ 300/ 813] Overall Loss 1.512221 Objective Loss 1.512221 LR 0.000100 Time 0.540352 -2024-05-11 08:32:47,827 - Epoch: [37][ 400/ 813] Overall Loss 1.515435 Objective Loss 1.515435 LR 0.000100 Time 0.530241 -2024-05-11 08:33:40,816 - Epoch: [37][ 500/ 813] Overall Loss 1.517295 Objective Loss 1.517295 LR 0.000100 Time 0.530167 -2024-05-11 08:34:34,850 - Epoch: [37][ 600/ 813] Overall Loss 1.519162 Objective Loss 1.519162 LR 0.000100 Time 0.531860 -2024-05-11 08:35:27,964 - Epoch: [37][ 700/ 813] Overall Loss 1.516300 Objective Loss 1.516300 LR 0.000100 Time 0.531756 -2024-05-11 08:36:19,666 - Epoch: [37][ 800/ 813] Overall Loss 1.516442 Objective 
Loss 1.516442 LR 0.000100 Time 0.529911 -2024-05-11 08:36:25,695 - Epoch: [37][ 813/ 813] Overall Loss 1.515384 Objective Loss 1.515384 LR 0.000100 Time 0.528853 -2024-05-11 08:36:25,721 - --- validate (epoch=37)----------- -2024-05-11 08:36:25,722 - 3250 samples (16 per mini-batch) -2024-05-11 08:36:25,723 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 08:37:21,274 - Epoch: [37][ 100/ 204] Loss 1.528831 mAP 0.869431 -2024-05-11 08:38:13,957 - Epoch: [37][ 200/ 204] Loss 1.547503 mAP 0.868999 -2024-05-11 08:38:14,406 - Epoch: [37][ 204/ 204] Loss 1.550027 mAP 0.869016 -2024-05-11 08:38:14,431 - ==> mAP: 0.86902 Loss: 1.550 - -2024-05-11 08:38:14,434 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 08:38:14,434 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 08:38:14,459 - - -2024-05-11 08:38:14,459 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 08:39:10,340 - Epoch: [38][ 100/ 813] Overall Loss 1.473846 Objective Loss 1.473846 LR 0.000100 Time 0.558785 -2024-05-11 08:40:03,733 - Epoch: [38][ 200/ 813] Overall Loss 1.485842 Objective Loss 1.485842 LR 0.000100 Time 0.546352 -2024-05-11 08:40:55,289 - Epoch: [38][ 300/ 813] Overall Loss 1.502814 Objective Loss 1.502814 LR 0.000100 Time 0.536081 -2024-05-11 08:41:46,553 - Epoch: [38][ 400/ 813] Overall Loss 1.505864 Objective Loss 1.505864 LR 0.000100 Time 0.530180 -2024-05-11 08:42:38,603 - Epoch: [38][ 500/ 813] Overall Loss 1.507282 Objective Loss 1.507282 LR 0.000100 Time 0.528242 -2024-05-11 08:43:32,643 - Epoch: [38][ 600/ 813] Overall Loss 1.506553 Objective Loss 1.506553 LR 0.000100 Time 0.530255 -2024-05-11 08:44:25,842 - Epoch: [38][ 700/ 813] Overall Loss 1.506956 Objective Loss 1.506956 LR 0.000100 Time 0.530500 -2024-05-11 08:45:18,318 - Epoch: [38][ 800/ 813] Overall Loss 1.502163 Objective Loss 1.502163 LR 0.000100 Time 0.529781 
-2024-05-11 08:45:23,195 - Epoch: [38][ 813/ 813] Overall Loss 1.501618 Objective Loss 1.501618 LR 0.000100 Time 0.527303 -2024-05-11 08:45:23,221 - --- validate (epoch=38)----------- -2024-05-11 08:45:23,222 - 3250 samples (16 per mini-batch) -2024-05-11 08:45:23,223 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 08:46:19,023 - Epoch: [38][ 100/ 204] Loss 1.533225 mAP 0.878110 -2024-05-11 08:47:10,685 - Epoch: [38][ 200/ 204] Loss 1.536782 mAP 0.877285 -2024-05-11 08:47:11,468 - Epoch: [38][ 204/ 204] Loss 1.534057 mAP 0.877223 -2024-05-11 08:47:11,494 - ==> mAP: 0.87722 Loss: 1.534 - -2024-05-11 08:47:11,498 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 08:47:11,498 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 08:47:11,523 - - -2024-05-11 08:47:11,523 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 08:48:07,612 - Epoch: [39][ 100/ 813] Overall Loss 1.457865 Objective Loss 1.457865 LR 0.000100 Time 0.560870 -2024-05-11 08:49:00,474 - Epoch: [39][ 200/ 813] Overall Loss 1.472334 Objective Loss 1.472334 LR 0.000100 Time 0.544735 -2024-05-11 08:49:53,651 - Epoch: [39][ 300/ 813] Overall Loss 1.481861 Objective Loss 1.481861 LR 0.000100 Time 0.540408 -2024-05-11 08:50:44,245 - Epoch: [39][ 400/ 813] Overall Loss 1.484891 Objective Loss 1.484891 LR 0.000100 Time 0.531787 -2024-05-11 08:51:36,230 - Epoch: [39][ 500/ 813] Overall Loss 1.482408 Objective Loss 1.482408 LR 0.000100 Time 0.529396 -2024-05-11 08:52:30,512 - Epoch: [39][ 600/ 813] Overall Loss 1.484827 Objective Loss 1.484827 LR 0.000100 Time 0.531626 -2024-05-11 08:53:21,685 - Epoch: [39][ 700/ 813] Overall Loss 1.484378 Objective Loss 1.484378 LR 0.000100 Time 0.528782 -2024-05-11 08:54:15,248 - Epoch: [39][ 800/ 813] Overall Loss 1.486967 Objective Loss 1.486967 LR 0.000100 Time 0.529636 -2024-05-11 08:54:20,224 - Epoch: [39][ 
813/ 813] Overall Loss 1.487040 Objective Loss 1.487040 LR 0.000100 Time 0.527287 -2024-05-11 08:54:20,249 - --- validate (epoch=39)----------- -2024-05-11 08:54:20,250 - 3250 samples (16 per mini-batch) -2024-05-11 08:54:20,258 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 08:55:16,578 - Epoch: [39][ 100/ 204] Loss 1.533482 mAP 0.869156 -2024-05-11 08:56:08,776 - Epoch: [39][ 200/ 204] Loss 1.542946 mAP 0.868585 -2024-05-11 08:56:09,324 - Epoch: [39][ 204/ 204] Loss 1.540002 mAP 0.868609 -2024-05-11 08:56:09,349 - ==> mAP: 0.86861 Loss: 1.540 - -2024-05-11 08:56:09,352 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 08:56:09,352 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 08:56:09,377 - - -2024-05-11 08:56:09,377 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 08:57:05,802 - Epoch: [40][ 100/ 813] Overall Loss 1.449190 Objective Loss 1.449190 LR 0.000100 Time 0.564233 -2024-05-11 08:58:00,234 - Epoch: [40][ 200/ 813] Overall Loss 1.461019 Objective Loss 1.461019 LR 0.000100 Time 0.554267 -2024-05-11 08:58:53,422 - Epoch: [40][ 300/ 813] Overall Loss 1.474224 Objective Loss 1.474224 LR 0.000100 Time 0.546800 -2024-05-11 08:59:42,576 - Epoch: [40][ 400/ 813] Overall Loss 1.481722 Objective Loss 1.481722 LR 0.000100 Time 0.532966 -2024-05-11 09:00:34,712 - Epoch: [40][ 500/ 813] Overall Loss 1.483904 Objective Loss 1.483904 LR 0.000100 Time 0.530635 -2024-05-11 09:01:27,379 - Epoch: [40][ 600/ 813] Overall Loss 1.482749 Objective Loss 1.482749 LR 0.000100 Time 0.529972 -2024-05-11 09:02:19,980 - Epoch: [40][ 700/ 813] Overall Loss 1.482037 Objective Loss 1.482037 LR 0.000100 Time 0.529404 -2024-05-11 09:03:11,843 - Epoch: [40][ 800/ 813] Overall Loss 1.479508 Objective Loss 1.479508 LR 0.000100 Time 0.528043 -2024-05-11 09:03:17,092 - Epoch: [40][ 813/ 813] Overall Loss 1.479246 Objective 
Loss 1.479246 LR 0.000100 Time 0.526042 -2024-05-11 09:03:17,119 - --- validate (epoch=40)----------- -2024-05-11 09:03:17,120 - 3250 samples (16 per mini-batch) -2024-05-11 09:03:17,121 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 09:04:13,432 - Epoch: [40][ 100/ 204] Loss 1.542175 mAP 0.878035 -2024-05-11 09:05:06,257 - Epoch: [40][ 200/ 204] Loss 1.526005 mAP 0.878263 -2024-05-11 09:05:06,552 - Epoch: [40][ 204/ 204] Loss 1.526157 mAP 0.878281 -2024-05-11 09:05:06,578 - ==> mAP: 0.87828 Loss: 1.526 - -2024-05-11 09:05:06,581 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 09:05:06,581 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 09:05:06,605 - - -2024-05-11 09:05:06,605 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 09:06:02,383 - Epoch: [41][ 100/ 813] Overall Loss 1.447921 Objective Loss 1.447921 LR 0.000100 Time 0.557758 -2024-05-11 09:06:56,921 - Epoch: [41][ 200/ 813] Overall Loss 1.446187 Objective Loss 1.446187 LR 0.000100 Time 0.551562 -2024-05-11 09:07:50,431 - Epoch: [41][ 300/ 813] Overall Loss 1.463025 Objective Loss 1.463025 LR 0.000100 Time 0.546070 -2024-05-11 09:08:39,473 - Epoch: [41][ 400/ 813] Overall Loss 1.469947 Objective Loss 1.469947 LR 0.000100 Time 0.532152 -2024-05-11 09:09:32,072 - Epoch: [41][ 500/ 813] Overall Loss 1.468334 Objective Loss 1.468334 LR 0.000100 Time 0.530886 -2024-05-11 09:10:26,044 - Epoch: [41][ 600/ 813] Overall Loss 1.468522 Objective Loss 1.468522 LR 0.000100 Time 0.532345 -2024-05-11 09:11:18,418 - Epoch: [41][ 700/ 813] Overall Loss 1.467569 Objective Loss 1.467569 LR 0.000100 Time 0.531114 -2024-05-11 09:12:10,881 - Epoch: [41][ 800/ 813] Overall Loss 1.468127 Objective Loss 1.468127 LR 0.000100 Time 0.530298 -2024-05-11 09:12:17,436 - Epoch: [41][ 813/ 813] Overall Loss 1.467384 Objective Loss 1.467384 LR 0.000100 Time 0.529880 
-2024-05-11 09:12:17,461 - --- validate (epoch=41)----------- -2024-05-11 09:12:17,461 - 3250 samples (16 per mini-batch) -2024-05-11 09:12:17,462 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 09:13:11,920 - Epoch: [41][ 100/ 204] Loss 1.505888 mAP 0.879674 -2024-05-11 09:14:05,189 - Epoch: [41][ 200/ 204] Loss 1.522550 mAP 0.879243 -2024-05-11 09:14:05,651 - Epoch: [41][ 204/ 204] Loss 1.522649 mAP 0.869456 -2024-05-11 09:14:05,677 - ==> mAP: 0.86946 Loss: 1.523 - -2024-05-11 09:14:05,680 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 09:14:05,680 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 09:14:05,704 - - -2024-05-11 09:14:05,704 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 09:15:01,412 - Epoch: [42][ 100/ 813] Overall Loss 1.440009 Objective Loss 1.440009 LR 0.000100 Time 0.557060 -2024-05-11 09:15:54,996 - Epoch: [42][ 200/ 813] Overall Loss 1.439228 Objective Loss 1.439228 LR 0.000100 Time 0.546420 -2024-05-11 09:16:48,370 - Epoch: [42][ 300/ 813] Overall Loss 1.449793 Objective Loss 1.449793 LR 0.000100 Time 0.542171 -2024-05-11 09:17:38,818 - Epoch: [42][ 400/ 813] Overall Loss 1.448798 Objective Loss 1.448798 LR 0.000100 Time 0.532739 -2024-05-11 09:18:30,883 - Epoch: [42][ 500/ 813] Overall Loss 1.448824 Objective Loss 1.448824 LR 0.000100 Time 0.530318 -2024-05-11 09:19:24,940 - Epoch: [42][ 600/ 813] Overall Loss 1.453719 Objective Loss 1.453719 LR 0.000100 Time 0.532020 -2024-05-11 09:20:17,602 - Epoch: [42][ 700/ 813] Overall Loss 1.453272 Objective Loss 1.453272 LR 0.000100 Time 0.531236 -2024-05-11 09:21:10,200 - Epoch: [42][ 800/ 813] Overall Loss 1.454087 Objective Loss 1.454087 LR 0.000100 Time 0.530577 -2024-05-11 09:21:15,380 - Epoch: [42][ 813/ 813] Overall Loss 1.454602 Objective Loss 1.454602 LR 0.000100 Time 0.528462 -2024-05-11 09:21:15,407 - --- validate 
(epoch=42)----------- -2024-05-11 09:21:15,407 - 3250 samples (16 per mini-batch) -2024-05-11 09:21:15,409 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 09:22:11,637 - Epoch: [42][ 100/ 204] Loss 1.558950 mAP 0.850674 -2024-05-11 09:23:03,776 - Epoch: [42][ 200/ 204] Loss 1.540229 mAP 0.859813 -2024-05-11 09:23:04,495 - Epoch: [42][ 204/ 204] Loss 1.539118 mAP 0.859802 -2024-05-11 09:23:04,520 - ==> mAP: 0.85980 Loss: 1.539 - -2024-05-11 09:23:04,523 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 09:23:04,523 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 09:23:04,547 - - -2024-05-11 09:23:04,547 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 09:23:59,877 - Epoch: [43][ 100/ 813] Overall Loss 1.420660 Objective Loss 1.420660 LR 0.000100 Time 0.553280 -2024-05-11 09:24:53,100 - Epoch: [43][ 200/ 813] Overall Loss 1.436271 Objective Loss 1.436271 LR 0.000100 Time 0.542746 -2024-05-11 09:25:45,652 - Epoch: [43][ 300/ 813] Overall Loss 1.456156 Objective Loss 1.456156 LR 0.000100 Time 0.536999 -2024-05-11 09:26:36,583 - Epoch: [43][ 400/ 813] Overall Loss 1.456301 Objective Loss 1.456301 LR 0.000100 Time 0.530069 -2024-05-11 09:27:28,657 - Epoch: [43][ 500/ 813] Overall Loss 1.456979 Objective Loss 1.456979 LR 0.000100 Time 0.528196 -2024-05-11 09:28:22,656 - Epoch: [43][ 600/ 813] Overall Loss 1.463208 Objective Loss 1.463208 LR 0.000100 Time 0.530155 -2024-05-11 09:29:16,382 - Epoch: [43][ 700/ 813] Overall Loss 1.458495 Objective Loss 1.458495 LR 0.000100 Time 0.531167 -2024-05-11 09:30:08,912 - Epoch: [43][ 800/ 813] Overall Loss 1.455535 Objective Loss 1.455535 LR 0.000100 Time 0.530432 -2024-05-11 09:30:14,689 - Epoch: [43][ 813/ 813] Overall Loss 1.454453 Objective Loss 1.454453 LR 0.000100 Time 0.529056 -2024-05-11 09:30:14,715 - --- validate (epoch=43)----------- -2024-05-11 
09:30:14,716 - 3250 samples (16 per mini-batch) -2024-05-11 09:30:14,717 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 09:31:09,100 - Epoch: [43][ 100/ 204] Loss 1.521750 mAP 0.864592 -2024-05-11 09:32:02,593 - Epoch: [43][ 200/ 204] Loss 1.519558 mAP 0.866362 -2024-05-11 09:32:03,443 - Epoch: [43][ 204/ 204] Loss 1.518761 mAP 0.866404 -2024-05-11 09:32:03,468 - ==> mAP: 0.86640 Loss: 1.519 - -2024-05-11 09:32:03,471 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 09:32:03,471 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 09:32:03,495 - - -2024-05-11 09:32:03,496 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 09:32:59,648 - Epoch: [44][ 100/ 813] Overall Loss 1.422788 Objective Loss 1.422788 LR 0.000100 Time 0.561507 -2024-05-11 09:33:53,311 - Epoch: [44][ 200/ 813] Overall Loss 1.423647 Objective Loss 1.423647 LR 0.000100 Time 0.548987 -2024-05-11 09:34:45,207 - Epoch: [44][ 300/ 813] Overall Loss 1.438066 Objective Loss 1.438066 LR 0.000100 Time 0.538972 -2024-05-11 09:35:35,767 - Epoch: [44][ 400/ 813] Overall Loss 1.438427 Objective Loss 1.438427 LR 0.000100 Time 0.530625 -2024-05-11 09:36:28,020 - Epoch: [44][ 500/ 813] Overall Loss 1.439167 Objective Loss 1.439167 LR 0.000100 Time 0.529003 -2024-05-11 09:37:21,476 - Epoch: [44][ 600/ 813] Overall Loss 1.440368 Objective Loss 1.440368 LR 0.000100 Time 0.529927 -2024-05-11 09:38:14,326 - Epoch: [44][ 700/ 813] Overall Loss 1.444200 Objective Loss 1.444200 LR 0.000100 Time 0.529721 -2024-05-11 09:39:07,026 - Epoch: [44][ 800/ 813] Overall Loss 1.444225 Objective Loss 1.444225 LR 0.000100 Time 0.529357 -2024-05-11 09:39:11,938 - Epoch: [44][ 813/ 813] Overall Loss 1.443913 Objective Loss 1.443913 LR 0.000100 Time 0.526931 -2024-05-11 09:39:11,964 - --- validate (epoch=44)----------- -2024-05-11 09:39:11,964 - 3250 samples (16 per 
mini-batch) -2024-05-11 09:39:11,965 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 09:40:08,428 - Epoch: [44][ 100/ 204] Loss 1.482953 mAP 0.861263 -2024-05-11 09:41:02,235 - Epoch: [44][ 200/ 204] Loss 1.485159 mAP 0.861141 -2024-05-11 09:41:02,491 - Epoch: [44][ 204/ 204] Loss 1.482054 mAP 0.861138 -2024-05-11 09:41:02,515 - ==> mAP: 0.86114 Loss: 1.482 - -2024-05-11 09:41:02,518 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 09:41:02,518 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 09:41:02,543 - - -2024-05-11 09:41:02,543 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 09:41:57,629 - Epoch: [45][ 100/ 813] Overall Loss 1.379220 Objective Loss 1.379220 LR 0.000100 Time 0.550552 -2024-05-11 09:42:50,181 - Epoch: [45][ 200/ 813] Overall Loss 1.398938 Objective Loss 1.398938 LR 0.000100 Time 0.538026 -2024-05-11 09:43:42,218 - Epoch: [45][ 300/ 813] Overall Loss 1.412307 Objective Loss 1.412307 LR 0.000100 Time 0.532135 -2024-05-11 09:44:33,201 - Epoch: [45][ 400/ 813] Overall Loss 1.422858 Objective Loss 1.422858 LR 0.000100 Time 0.526547 -2024-05-11 09:45:26,088 - Epoch: [45][ 500/ 813] Overall Loss 1.424869 Objective Loss 1.424869 LR 0.000100 Time 0.527001 -2024-05-11 09:46:21,048 - Epoch: [45][ 600/ 813] Overall Loss 1.428468 Objective Loss 1.428468 LR 0.000100 Time 0.530766 -2024-05-11 09:47:13,319 - Epoch: [45][ 700/ 813] Overall Loss 1.430420 Objective Loss 1.430420 LR 0.000100 Time 0.529613 -2024-05-11 09:48:05,450 - Epoch: [45][ 800/ 813] Overall Loss 1.430439 Objective Loss 1.430439 LR 0.000100 Time 0.528573 -2024-05-11 09:48:10,226 - Epoch: [45][ 813/ 813] Overall Loss 1.430611 Objective Loss 1.430611 LR 0.000100 Time 0.525995 -2024-05-11 09:48:10,253 - --- validate (epoch=45)----------- -2024-05-11 09:48:10,254 - 3250 samples (16 per mini-batch) -2024-05-11 09:48:10,255 - 
{'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 09:49:06,015 - Epoch: [45][ 100/ 204] Loss 1.471195 mAP 0.878872 -2024-05-11 09:49:58,407 - Epoch: [45][ 200/ 204] Loss 1.486397 mAP 0.879070 -2024-05-11 09:49:59,094 - Epoch: [45][ 204/ 204] Loss 1.488688 mAP 0.879110 -2024-05-11 09:49:59,118 - ==> mAP: 0.87911 Loss: 1.489 - -2024-05-11 09:49:59,122 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 09:49:59,122 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 09:49:59,146 - - -2024-05-11 09:49:59,146 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 09:50:55,025 - Epoch: [46][ 100/ 813] Overall Loss 1.401658 Objective Loss 1.401658 LR 0.000100 Time 0.558772 -2024-05-11 09:51:48,278 - Epoch: [46][ 200/ 813] Overall Loss 1.403740 Objective Loss 1.403740 LR 0.000100 Time 0.545590 -2024-05-11 09:52:41,596 - Epoch: [46][ 300/ 813] Overall Loss 1.423321 Objective Loss 1.423321 LR 0.000100 Time 0.541449 -2024-05-11 09:53:33,647 - Epoch: [46][ 400/ 813] Overall Loss 1.420035 Objective Loss 1.420035 LR 0.000100 Time 0.536211 -2024-05-11 09:54:26,526 - Epoch: [46][ 500/ 813] Overall Loss 1.419934 Objective Loss 1.419934 LR 0.000100 Time 0.534723 -2024-05-11 09:55:20,536 - Epoch: [46][ 600/ 813] Overall Loss 1.420694 Objective Loss 1.420694 LR 0.000100 Time 0.535580 -2024-05-11 09:56:13,619 - Epoch: [46][ 700/ 813] Overall Loss 1.421285 Objective Loss 1.421285 LR 0.000100 Time 0.534900 -2024-05-11 09:57:05,255 - Epoch: [46][ 800/ 813] Overall Loss 1.416578 Objective Loss 1.416578 LR 0.000100 Time 0.532580 -2024-05-11 09:57:10,542 - Epoch: [46][ 813/ 813] Overall Loss 1.415672 Objective Loss 1.415672 LR 0.000100 Time 0.530567 -2024-05-11 09:57:10,568 - --- validate (epoch=46)----------- -2024-05-11 09:57:10,569 - 3250 samples (16 per mini-batch) -2024-05-11 09:57:10,570 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 09:58:05,548 - Epoch: [46][ 100/ 204] Loss 1.472300 mAP 0.877306 -2024-05-11 09:58:59,327 - Epoch: [46][ 200/ 204] Loss 1.466368 mAP 0.877933 -2024-05-11 09:58:59,852 - Epoch: [46][ 204/ 204] Loss 1.463826 mAP 0.877883 -2024-05-11 09:58:59,878 - ==> mAP: 0.87788 Loss: 1.464 - -2024-05-11 09:58:59,882 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 09:58:59,882 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 09:58:59,907 - - -2024-05-11 09:58:59,907 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 09:59:55,309 - Epoch: [47][ 100/ 813] Overall Loss 1.371450 Objective Loss 1.371450 LR 0.000100 Time 0.554000 -2024-05-11 10:00:49,067 - Epoch: [47][ 200/ 813] Overall Loss 1.385472 Objective Loss 1.385472 LR 0.000100 Time 0.545781 -2024-05-11 10:01:42,779 - Epoch: [47][ 300/ 813] Overall Loss 1.399623 Objective Loss 1.399623 LR 0.000100 Time 0.542888 -2024-05-11 10:02:33,508 - Epoch: [47][ 400/ 813] Overall Loss 1.407500 Objective Loss 1.407500 LR 0.000100 Time 0.533977 -2024-05-11 10:03:27,216 - Epoch: [47][ 500/ 813] Overall Loss 1.410534 Objective Loss 1.410534 LR 0.000100 Time 0.534593 -2024-05-11 10:04:20,100 - Epoch: [47][ 600/ 813] Overall Loss 1.412380 Objective Loss 1.412380 LR 0.000100 Time 0.533632 -2024-05-11 10:05:12,510 - Epoch: [47][ 700/ 813] Overall Loss 1.413479 Objective Loss 1.413479 LR 0.000100 Time 0.532268 -2024-05-11 10:06:04,674 - Epoch: [47][ 800/ 813] Overall Loss 1.414280 Objective Loss 1.414280 LR 0.000100 Time 0.530933 -2024-05-11 10:06:09,830 - Epoch: [47][ 813/ 813] Overall Loss 1.414415 Objective Loss 1.414415 LR 0.000100 Time 0.528785 -2024-05-11 10:06:09,862 - --- validate (epoch=47)----------- -2024-05-11 10:06:09,862 - 3250 samples (16 per mini-batch) -2024-05-11 10:06:09,864 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 
0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 10:07:03,923 - Epoch: [47][ 100/ 204] Loss 1.456741 mAP 0.888601 -2024-05-11 10:07:57,734 - Epoch: [47][ 200/ 204] Loss 1.470239 mAP 0.878647 -2024-05-11 10:07:58,401 - Epoch: [47][ 204/ 204] Loss 1.473616 mAP 0.878687 -2024-05-11 10:07:58,426 - ==> mAP: 0.87869 Loss: 1.474 - -2024-05-11 10:07:58,430 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 10:07:58,430 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 10:07:58,455 - - -2024-05-11 10:07:58,455 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 10:08:54,971 - Epoch: [48][ 100/ 813] Overall Loss 1.399621 Objective Loss 1.399621 LR 0.000100 Time 0.565138 -2024-05-11 10:09:48,506 - Epoch: [48][ 200/ 813] Overall Loss 1.401182 Objective Loss 1.401182 LR 0.000100 Time 0.550221 -2024-05-11 10:10:40,372 - Epoch: [48][ 300/ 813] Overall Loss 1.413703 Objective Loss 1.413703 LR 0.000100 Time 0.539695 -2024-05-11 10:11:31,164 - Epoch: [48][ 400/ 813] Overall Loss 1.420598 Objective Loss 1.420598 LR 0.000100 Time 0.531749 -2024-05-11 10:12:23,952 - Epoch: [48][ 500/ 813] Overall Loss 1.422989 Objective Loss 1.422989 LR 0.000100 Time 0.530957 -2024-05-11 10:13:18,649 - Epoch: [48][ 600/ 813] Overall Loss 1.420266 Objective Loss 1.420266 LR 0.000100 Time 0.533541 -2024-05-11 10:14:11,156 - Epoch: [48][ 700/ 813] Overall Loss 1.416566 Objective Loss 1.416566 LR 0.000100 Time 0.532325 -2024-05-11 10:15:03,793 - Epoch: [48][ 800/ 813] Overall Loss 1.413902 Objective Loss 1.413902 LR 0.000100 Time 0.531579 -2024-05-11 10:15:08,661 - Epoch: [48][ 813/ 813] Overall Loss 1.414514 Objective Loss 1.414514 LR 0.000100 Time 0.529063 -2024-05-11 10:15:08,686 - --- validate (epoch=48)----------- -2024-05-11 10:15:08,687 - 3250 samples (16 per mini-batch) -2024-05-11 10:15:08,688 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} 
-2024-05-11 10:16:01,448 - Epoch: [48][ 100/ 204] Loss 1.495449 mAP 0.887514 -2024-05-11 10:16:54,681 - Epoch: [48][ 200/ 204] Loss 1.486485 mAP 0.886844 -2024-05-11 10:16:54,873 - Epoch: [48][ 204/ 204] Loss 1.486148 mAP 0.886871 -2024-05-11 10:16:54,898 - ==> mAP: 0.88687 Loss: 1.486 - -2024-05-11 10:16:54,902 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 10:16:54,902 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 10:16:54,927 - - -2024-05-11 10:16:54,928 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 10:17:50,407 - Epoch: [49][ 100/ 813] Overall Loss 1.378055 Objective Loss 1.378055 LR 0.000100 Time 0.554771 -2024-05-11 10:18:43,891 - Epoch: [49][ 200/ 813] Overall Loss 1.389251 Objective Loss 1.389251 LR 0.000100 Time 0.544800 -2024-05-11 10:19:35,711 - Epoch: [49][ 300/ 813] Overall Loss 1.402859 Objective Loss 1.402859 LR 0.000100 Time 0.535919 -2024-05-11 10:20:26,813 - Epoch: [49][ 400/ 813] Overall Loss 1.401135 Objective Loss 1.401135 LR 0.000100 Time 0.529690 -2024-05-11 10:21:19,223 - Epoch: [49][ 500/ 813] Overall Loss 1.400663 Objective Loss 1.400663 LR 0.000100 Time 0.528568 -2024-05-11 10:22:13,132 - Epoch: [49][ 600/ 813] Overall Loss 1.408334 Objective Loss 1.408334 LR 0.000100 Time 0.530319 -2024-05-11 10:23:05,359 - Epoch: [49][ 700/ 813] Overall Loss 1.405701 Objective Loss 1.405701 LR 0.000100 Time 0.529169 -2024-05-11 10:23:57,211 - Epoch: [49][ 800/ 813] Overall Loss 1.406860 Objective Loss 1.406860 LR 0.000100 Time 0.527835 -2024-05-11 10:24:02,025 - Epoch: [49][ 813/ 813] Overall Loss 1.406239 Objective Loss 1.406239 LR 0.000100 Time 0.525316 -2024-05-11 10:24:02,050 - --- validate (epoch=49)----------- -2024-05-11 10:24:02,050 - 3250 samples (16 per mini-batch) -2024-05-11 10:24:02,051 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 10:24:57,384 - Epoch: [49][ 
100/ 204] Loss 1.470565 mAP 0.877326 -2024-05-11 10:25:51,136 - Epoch: [49][ 200/ 204] Loss 1.455415 mAP 0.878475 -2024-05-11 10:25:52,176 - Epoch: [49][ 204/ 204] Loss 1.453747 mAP 0.878492 -2024-05-11 10:25:52,202 - ==> mAP: 0.87849 Loss: 1.454 - -2024-05-11 10:25:52,205 - ==> Best [mAP: 0.896102 vloss: 1.552641 Params: 368352 on epoch: 34] -2024-05-11 10:25:52,205 - Saving checkpoint to: logs/2024.05.11-025627/checkpoint.pth.tar -2024-05-11 10:25:52,230 - Initiating quantization aware training (QAT)... -2024-05-11 10:25:52,250 - - -2024-05-11 10:25:52,250 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 10:26:48,072 - Epoch: [50][ 100/ 813] Overall Loss 1.485550 Objective Loss 1.485550 LR 0.000050 Time 0.558207 -2024-05-11 10:27:41,787 - Epoch: [50][ 200/ 813] Overall Loss 1.446572 Objective Loss 1.446572 LR 0.000050 Time 0.547667 -2024-05-11 10:28:35,095 - Epoch: [50][ 300/ 813] Overall Loss 1.427168 Objective Loss 1.427168 LR 0.000050 Time 0.542799 -2024-05-11 10:29:26,378 - Epoch: [50][ 400/ 813] Overall Loss 1.411920 Objective Loss 1.411920 LR 0.000050 Time 0.535305 -2024-05-11 10:30:19,340 - Epoch: [50][ 500/ 813] Overall Loss 1.397931 Objective Loss 1.397931 LR 0.000050 Time 0.534164 -2024-05-11 10:31:13,420 - Epoch: [50][ 600/ 813] Overall Loss 1.391944 Objective Loss 1.391944 LR 0.000050 Time 0.535268 -2024-05-11 10:32:05,000 - Epoch: [50][ 700/ 813] Overall Loss 1.382878 Objective Loss 1.382878 LR 0.000050 Time 0.532481 -2024-05-11 10:32:58,852 - Epoch: [50][ 800/ 813] Overall Loss 1.377418 Objective Loss 1.377418 LR 0.000050 Time 0.533234 -2024-05-11 10:33:03,216 - Epoch: [50][ 813/ 813] Overall Loss 1.376027 Objective Loss 1.376027 LR 0.000050 Time 0.530075 -2024-05-11 10:33:03,241 - --- validate (epoch=50)----------- -2024-05-11 10:33:03,241 - 3250 samples (16 per mini-batch) -2024-05-11 10:33:03,243 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} 
-2024-05-11 10:33:57,742 - Epoch: [50][ 100/ 204] Loss 1.430944 mAP 0.859140 -2024-05-11 10:34:52,077 - Epoch: [50][ 200/ 204] Loss 1.424694 mAP 0.868853 -2024-05-11 10:34:52,845 - Epoch: [50][ 204/ 204] Loss 1.421656 mAP 0.868871 -2024-05-11 10:34:52,871 - ==> mAP: 0.86887 Loss: 1.422 - -2024-05-11 10:34:52,873 - ==> Best [mAP: 0.868871 vloss: 1.421656 Params: 368350 on epoch: 50] -2024-05-11 10:34:52,873 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 10:34:52,885 - - -2024-05-11 10:34:52,886 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 10:35:48,024 - Epoch: [51][ 100/ 813] Overall Loss 1.290279 Objective Loss 1.290279 LR 0.000050 Time 0.551360 -2024-05-11 10:36:42,020 - Epoch: [51][ 200/ 813] Overall Loss 1.291661 Objective Loss 1.291661 LR 0.000050 Time 0.545653 -2024-05-11 10:37:34,039 - Epoch: [51][ 300/ 813] Overall Loss 1.304628 Objective Loss 1.304628 LR 0.000050 Time 0.537155 -2024-05-11 10:38:24,602 - Epoch: [51][ 400/ 813] Overall Loss 1.302485 Objective Loss 1.302485 LR 0.000050 Time 0.529269 -2024-05-11 10:39:17,252 - Epoch: [51][ 500/ 813] Overall Loss 1.302937 Objective Loss 1.302937 LR 0.000050 Time 0.528714 -2024-05-11 10:40:12,673 - Epoch: [51][ 600/ 813] Overall Loss 1.302472 Objective Loss 1.302472 LR 0.000050 Time 0.532956 -2024-05-11 10:41:05,540 - Epoch: [51][ 700/ 813] Overall Loss 1.300317 Objective Loss 1.300317 LR 0.000050 Time 0.532340 -2024-05-11 10:41:57,298 - Epoch: [51][ 800/ 813] Overall Loss 1.301971 Objective Loss 1.301971 LR 0.000050 Time 0.530494 -2024-05-11 10:42:02,839 - Epoch: [51][ 813/ 813] Overall Loss 1.302201 Objective Loss 1.302201 LR 0.000050 Time 0.528826 -2024-05-11 10:42:02,864 - --- validate (epoch=51)----------- -2024-05-11 10:42:02,865 - 3250 samples (16 per mini-batch) -2024-05-11 10:42:02,866 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 10:42:58,117 - Epoch: 
[51][ 100/ 204] Loss 1.390092 mAP 0.883954 -2024-05-11 10:43:50,483 - Epoch: [51][ 200/ 204] Loss 1.380060 mAP 0.885332 -2024-05-11 10:43:50,954 - Epoch: [51][ 204/ 204] Loss 1.377729 mAP 0.885327 -2024-05-11 10:43:50,980 - ==> mAP: 0.88533 Loss: 1.378 - -2024-05-11 10:43:50,982 - ==> Best [mAP: 0.885327 vloss: 1.377729 Params: 368350 on epoch: 51] -2024-05-11 10:43:50,982 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 10:43:51,006 - - -2024-05-11 10:43:51,006 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 10:44:46,786 - Epoch: [52][ 100/ 813] Overall Loss 1.256556 Objective Loss 1.256556 LR 0.000050 Time 0.557600 -2024-05-11 10:45:40,426 - Epoch: [52][ 200/ 813] Overall Loss 1.278322 Objective Loss 1.278322 LR 0.000050 Time 0.546983 -2024-05-11 10:46:32,889 - Epoch: [52][ 300/ 813] Overall Loss 1.284153 Objective Loss 1.284153 LR 0.000050 Time 0.539529 -2024-05-11 10:47:23,334 - Epoch: [52][ 400/ 813] Overall Loss 1.288443 Objective Loss 1.288443 LR 0.000050 Time 0.530756 -2024-05-11 10:48:15,722 - Epoch: [52][ 500/ 813] Overall Loss 1.288987 Objective Loss 1.288987 LR 0.000050 Time 0.529377 -2024-05-11 10:49:08,988 - Epoch: [52][ 600/ 813] Overall Loss 1.289177 Objective Loss 1.289177 LR 0.000050 Time 0.529920 -2024-05-11 10:50:01,629 - Epoch: [52][ 700/ 813] Overall Loss 1.282708 Objective Loss 1.282708 LR 0.000050 Time 0.529417 -2024-05-11 10:50:54,162 - Epoch: [52][ 800/ 813] Overall Loss 1.280164 Objective Loss 1.280164 LR 0.000050 Time 0.528905 -2024-05-11 10:51:00,193 - Epoch: [52][ 813/ 813] Overall Loss 1.279582 Objective Loss 1.279582 LR 0.000050 Time 0.527865 -2024-05-11 10:51:00,219 - --- validate (epoch=52)----------- -2024-05-11 10:51:00,220 - 3250 samples (16 per mini-batch) -2024-05-11 10:51:00,221 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 10:51:55,531 - Epoch: [52][ 100/ 204] Loss 1.396759 mAP 
0.853203 -2024-05-11 10:52:49,344 - Epoch: [52][ 200/ 204] Loss 1.398111 mAP 0.863485 -2024-05-11 10:52:49,936 - Epoch: [52][ 204/ 204] Loss 1.402423 mAP 0.863424 -2024-05-11 10:52:49,961 - ==> mAP: 0.86342 Loss: 1.402 - -2024-05-11 10:52:49,963 - ==> Best [mAP: 0.885327 vloss: 1.377729 Params: 368350 on epoch: 51] -2024-05-11 10:52:49,963 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 10:52:49,984 - - -2024-05-11 10:52:49,984 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 10:53:44,266 - Epoch: [53][ 100/ 813] Overall Loss 1.244421 Objective Loss 1.244421 LR 0.000050 Time 0.542802 -2024-05-11 10:54:38,132 - Epoch: [53][ 200/ 813] Overall Loss 1.252316 Objective Loss 1.252316 LR 0.000050 Time 0.540709 -2024-05-11 10:55:29,694 - Epoch: [53][ 300/ 813] Overall Loss 1.264927 Objective Loss 1.264927 LR 0.000050 Time 0.532344 -2024-05-11 10:56:21,114 - Epoch: [53][ 400/ 813] Overall Loss 1.270130 Objective Loss 1.270130 LR 0.000050 Time 0.527797 -2024-05-11 10:57:13,780 - Epoch: [53][ 500/ 813] Overall Loss 1.269075 Objective Loss 1.269075 LR 0.000050 Time 0.527567 -2024-05-11 10:58:08,333 - Epoch: [53][ 600/ 813] Overall Loss 1.270462 Objective Loss 1.270462 LR 0.000050 Time 0.530554 -2024-05-11 10:59:00,708 - Epoch: [53][ 700/ 813] Overall Loss 1.269808 Objective Loss 1.269808 LR 0.000050 Time 0.529579 -2024-05-11 10:59:52,889 - Epoch: [53][ 800/ 813] Overall Loss 1.266717 Objective Loss 1.266717 LR 0.000050 Time 0.528606 -2024-05-11 10:59:58,212 - Epoch: [53][ 813/ 813] Overall Loss 1.265947 Objective Loss 1.265947 LR 0.000050 Time 0.526691 -2024-05-11 10:59:58,238 - --- validate (epoch=53)----------- -2024-05-11 10:59:58,239 - 3250 samples (16 per mini-batch) -2024-05-11 10:59:58,240 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 11:00:54,044 - Epoch: [53][ 100/ 204] Loss 1.363930 mAP 0.887187 -2024-05-11 11:01:46,338 - 
Epoch: [53][ 200/ 204] Loss 1.372319 mAP 0.876573 -2024-05-11 11:01:46,798 - Epoch: [53][ 204/ 204] Loss 1.370883 mAP 0.876601 -2024-05-11 11:01:46,823 - ==> mAP: 0.87660 Loss: 1.371 - -2024-05-11 11:01:46,825 - ==> Best [mAP: 0.885327 vloss: 1.377729 Params: 368350 on epoch: 51] -2024-05-11 11:01:46,826 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 11:01:46,846 - - -2024-05-11 11:01:46,846 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 11:02:42,740 - Epoch: [54][ 100/ 813] Overall Loss 1.251563 Objective Loss 1.251563 LR 0.000050 Time 0.558912 -2024-05-11 11:03:37,038 - Epoch: [54][ 200/ 813] Overall Loss 1.249178 Objective Loss 1.249178 LR 0.000050 Time 0.550941 -2024-05-11 11:04:30,271 - Epoch: [54][ 300/ 813] Overall Loss 1.258711 Objective Loss 1.258711 LR 0.000050 Time 0.544721 -2024-05-11 11:05:21,342 - Epoch: [54][ 400/ 813] Overall Loss 1.263301 Objective Loss 1.263301 LR 0.000050 Time 0.536214 -2024-05-11 11:06:14,098 - Epoch: [54][ 500/ 813] Overall Loss 1.266476 Objective Loss 1.266476 LR 0.000050 Time 0.534482 -2024-05-11 11:07:07,626 - Epoch: [54][ 600/ 813] Overall Loss 1.264740 Objective Loss 1.264740 LR 0.000050 Time 0.534611 -2024-05-11 11:08:00,736 - Epoch: [54][ 700/ 813] Overall Loss 1.266087 Objective Loss 1.266087 LR 0.000050 Time 0.534103 -2024-05-11 11:08:53,625 - Epoch: [54][ 800/ 813] Overall Loss 1.266131 Objective Loss 1.266131 LR 0.000050 Time 0.533448 -2024-05-11 11:08:57,940 - Epoch: [54][ 813/ 813] Overall Loss 1.265837 Objective Loss 1.265837 LR 0.000050 Time 0.530225 -2024-05-11 11:08:57,965 - --- validate (epoch=54)----------- -2024-05-11 11:08:57,966 - 3250 samples (16 per mini-batch) -2024-05-11 11:08:57,967 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 11:09:52,712 - Epoch: [54][ 100/ 204] Loss 1.342205 mAP 0.877642 -2024-05-11 11:10:44,859 - Epoch: [54][ 200/ 204] Loss 1.357009 
mAP 0.878664 -2024-05-11 11:10:45,536 - Epoch: [54][ 204/ 204] Loss 1.356105 mAP 0.878612 -2024-05-11 11:10:45,561 - ==> mAP: 0.87861 Loss: 1.356 - -2024-05-11 11:10:45,564 - ==> Best [mAP: 0.885327 vloss: 1.377729 Params: 368350 on epoch: 51] -2024-05-11 11:10:45,564 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 11:10:45,584 - - -2024-05-11 11:10:45,584 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 11:11:41,694 - Epoch: [55][ 100/ 813] Overall Loss 1.220621 Objective Loss 1.220621 LR 0.000050 Time 0.561075 -2024-05-11 11:12:35,512 - Epoch: [55][ 200/ 813] Overall Loss 1.231543 Objective Loss 1.231543 LR 0.000050 Time 0.549619 -2024-05-11 11:13:27,865 - Epoch: [55][ 300/ 813] Overall Loss 1.248368 Objective Loss 1.248368 LR 0.000050 Time 0.540909 -2024-05-11 11:14:18,061 - Epoch: [55][ 400/ 813] Overall Loss 1.249384 Objective Loss 1.249384 LR 0.000050 Time 0.531168 -2024-05-11 11:15:10,769 - Epoch: [55][ 500/ 813] Overall Loss 1.253910 Objective Loss 1.253910 LR 0.000050 Time 0.530342 -2024-05-11 11:16:03,854 - Epoch: [55][ 600/ 813] Overall Loss 1.254635 Objective Loss 1.254635 LR 0.000050 Time 0.530421 -2024-05-11 11:16:57,823 - Epoch: [55][ 700/ 813] Overall Loss 1.254284 Objective Loss 1.254284 LR 0.000050 Time 0.531742 -2024-05-11 11:17:49,200 - Epoch: [55][ 800/ 813] Overall Loss 1.254032 Objective Loss 1.254032 LR 0.000050 Time 0.529493 -2024-05-11 11:17:54,805 - Epoch: [55][ 813/ 813] Overall Loss 1.253754 Objective Loss 1.253754 LR 0.000050 Time 0.527921 -2024-05-11 11:17:54,836 - --- validate (epoch=55)----------- -2024-05-11 11:17:54,836 - 3250 samples (16 per mini-batch) -2024-05-11 11:17:54,837 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 11:18:50,332 - Epoch: [55][ 100/ 204] Loss 1.346606 mAP 0.875043 -2024-05-11 11:19:43,213 - Epoch: [55][ 200/ 204] Loss 1.353457 mAP 0.875737 -2024-05-11 11:19:43,738 
- Epoch: [55][ 204/ 204] Loss 1.353318 mAP 0.875770 -2024-05-11 11:19:43,763 - ==> mAP: 0.87577 Loss: 1.353 - -2024-05-11 11:19:43,766 - ==> Best [mAP: 0.885327 vloss: 1.377729 Params: 368350 on epoch: 51] -2024-05-11 11:19:43,766 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 11:19:43,786 - - -2024-05-11 11:19:43,786 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 11:20:39,636 - Epoch: [56][ 100/ 813] Overall Loss 1.233299 Objective Loss 1.233299 LR 0.000050 Time 0.558471 -2024-05-11 11:21:34,157 - Epoch: [56][ 200/ 813] Overall Loss 1.237905 Objective Loss 1.237905 LR 0.000050 Time 0.551832 -2024-05-11 11:22:27,280 - Epoch: [56][ 300/ 813] Overall Loss 1.253689 Objective Loss 1.253689 LR 0.000050 Time 0.544953 -2024-05-11 11:23:17,243 - Epoch: [56][ 400/ 813] Overall Loss 1.259936 Objective Loss 1.259936 LR 0.000050 Time 0.533594 -2024-05-11 11:24:09,262 - Epoch: [56][ 500/ 813] Overall Loss 1.264309 Objective Loss 1.264309 LR 0.000050 Time 0.530910 -2024-05-11 11:25:02,722 - Epoch: [56][ 600/ 813] Overall Loss 1.264807 Objective Loss 1.264807 LR 0.000050 Time 0.531521 -2024-05-11 11:25:56,482 - Epoch: [56][ 700/ 813] Overall Loss 1.262640 Objective Loss 1.262640 LR 0.000050 Time 0.532388 -2024-05-11 11:26:48,609 - Epoch: [56][ 800/ 813] Overall Loss 1.260687 Objective Loss 1.260687 LR 0.000050 Time 0.530992 -2024-05-11 11:26:54,939 - Epoch: [56][ 813/ 813] Overall Loss 1.260229 Objective Loss 1.260229 LR 0.000050 Time 0.530287 -2024-05-11 11:26:54,967 - --- validate (epoch=56)----------- -2024-05-11 11:26:54,969 - 3250 samples (16 per mini-batch) -2024-05-11 11:26:54,970 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 11:27:49,131 - Epoch: [56][ 100/ 204] Loss 1.376120 mAP 0.867428 -2024-05-11 11:28:41,936 - Epoch: [56][ 200/ 204] Loss 1.356715 mAP 0.876943 -2024-05-11 11:28:42,614 - Epoch: [56][ 204/ 204] Loss 1.356651 
mAP 0.876920 -2024-05-11 11:28:42,640 - ==> mAP: 0.87692 Loss: 1.357 - -2024-05-11 11:28:42,642 - ==> Best [mAP: 0.885327 vloss: 1.377729 Params: 368350 on epoch: 51] -2024-05-11 11:28:42,642 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 11:28:42,663 - - -2024-05-11 11:28:42,663 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 11:29:38,236 - Epoch: [57][ 100/ 813] Overall Loss 1.238841 Objective Loss 1.238841 LR 0.000050 Time 0.555706 -2024-05-11 11:30:32,189 - Epoch: [57][ 200/ 813] Overall Loss 1.238138 Objective Loss 1.238138 LR 0.000050 Time 0.547612 -2024-05-11 11:31:25,353 - Epoch: [57][ 300/ 813] Overall Loss 1.248474 Objective Loss 1.248474 LR 0.000050 Time 0.542272 -2024-05-11 11:32:15,340 - Epoch: [57][ 400/ 813] Overall Loss 1.252581 Objective Loss 1.252581 LR 0.000050 Time 0.531663 -2024-05-11 11:33:07,806 - Epoch: [57][ 500/ 813] Overall Loss 1.254193 Objective Loss 1.254193 LR 0.000050 Time 0.530259 -2024-05-11 11:34:02,173 - Epoch: [57][ 600/ 813] Overall Loss 1.257332 Objective Loss 1.257332 LR 0.000050 Time 0.532482 -2024-05-11 11:34:56,447 - Epoch: [57][ 700/ 813] Overall Loss 1.253290 Objective Loss 1.253290 LR 0.000050 Time 0.533944 -2024-05-11 11:35:48,648 - Epoch: [57][ 800/ 813] Overall Loss 1.253695 Objective Loss 1.253695 LR 0.000050 Time 0.532450 -2024-05-11 11:35:53,656 - Epoch: [57][ 813/ 813] Overall Loss 1.254173 Objective Loss 1.254173 LR 0.000050 Time 0.530095 -2024-05-11 11:35:53,682 - --- validate (epoch=57)----------- -2024-05-11 11:35:53,682 - 3250 samples (16 per mini-batch) -2024-05-11 11:35:53,683 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 11:36:48,040 - Epoch: [57][ 100/ 204] Loss 1.367743 mAP 0.887564 -2024-05-11 11:37:41,782 - Epoch: [57][ 200/ 204] Loss 1.358504 mAP 0.887584 -2024-05-11 11:37:42,243 - Epoch: [57][ 204/ 204] Loss 1.359812 mAP 0.887562 -2024-05-11 11:37:42,268 
- ==> mAP: 0.88756 Loss: 1.360 - -2024-05-11 11:37:42,270 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57] -2024-05-11 11:37:42,271 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 11:37:42,295 - - -2024-05-11 11:37:42,295 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 11:38:36,962 - Epoch: [58][ 100/ 813] Overall Loss 1.223678 Objective Loss 1.223678 LR 0.000050 Time 0.546651 -2024-05-11 11:39:32,177 - Epoch: [58][ 200/ 813] Overall Loss 1.232022 Objective Loss 1.232022 LR 0.000050 Time 0.549392 -2024-05-11 11:40:26,233 - Epoch: [58][ 300/ 813] Overall Loss 1.238181 Objective Loss 1.238181 LR 0.000050 Time 0.546442 -2024-05-11 11:41:15,311 - Epoch: [58][ 400/ 813] Overall Loss 1.242006 Objective Loss 1.242006 LR 0.000050 Time 0.532524 -2024-05-11 11:42:07,263 - Epoch: [58][ 500/ 813] Overall Loss 1.244735 Objective Loss 1.244735 LR 0.000050 Time 0.529919 -2024-05-11 11:43:02,254 - Epoch: [58][ 600/ 813] Overall Loss 1.242535 Objective Loss 1.242535 LR 0.000050 Time 0.533249 -2024-05-11 11:43:54,721 - Epoch: [58][ 700/ 813] Overall Loss 1.241184 Objective Loss 1.241184 LR 0.000050 Time 0.532022 -2024-05-11 11:44:46,982 - Epoch: [58][ 800/ 813] Overall Loss 1.240172 Objective Loss 1.240172 LR 0.000050 Time 0.530837 -2024-05-11 11:44:52,610 - Epoch: [58][ 813/ 813] Overall Loss 1.238942 Objective Loss 1.238942 LR 0.000050 Time 0.529271 -2024-05-11 11:44:52,637 - --- validate (epoch=58)----------- -2024-05-11 11:44:52,638 - 3250 samples (16 per mini-batch) -2024-05-11 11:44:52,639 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 11:45:47,484 - Epoch: [58][ 100/ 204] Loss 1.332733 mAP 0.868336 -2024-05-11 11:46:40,570 - Epoch: [58][ 200/ 204] Loss 1.327288 mAP 0.877910 -2024-05-11 11:46:41,381 - Epoch: [58][ 204/ 204] Loss 1.328281 mAP 0.877841 -2024-05-11 11:46:41,406 - ==> mAP: 0.87784 Loss: 1.328 - 
-2024-05-11 11:46:41,411 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57] -2024-05-11 11:46:41,411 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 11:46:41,435 - - -2024-05-11 11:46:41,435 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 11:47:36,372 - Epoch: [59][ 100/ 813] Overall Loss 1.201073 Objective Loss 1.201073 LR 0.000050 Time 0.549350 -2024-05-11 11:48:29,694 - Epoch: [59][ 200/ 813] Overall Loss 1.210784 Objective Loss 1.210784 LR 0.000050 Time 0.541274 -2024-05-11 11:49:22,022 - Epoch: [59][ 300/ 813] Overall Loss 1.218889 Objective Loss 1.218889 LR 0.000050 Time 0.535273 -2024-05-11 11:50:13,235 - Epoch: [59][ 400/ 813] Overall Loss 1.225712 Objective Loss 1.225712 LR 0.000050 Time 0.529476 -2024-05-11 11:51:04,445 - Epoch: [59][ 500/ 813] Overall Loss 1.231837 Objective Loss 1.231837 LR 0.000050 Time 0.525997 -2024-05-11 11:51:57,573 - Epoch: [59][ 600/ 813] Overall Loss 1.234113 Objective Loss 1.234113 LR 0.000050 Time 0.526876 -2024-05-11 11:52:50,057 - Epoch: [59][ 700/ 813] Overall Loss 1.236439 Objective Loss 1.236439 LR 0.000050 Time 0.526579 -2024-05-11 11:53:43,066 - Epoch: [59][ 800/ 813] Overall Loss 1.237298 Objective Loss 1.237298 LR 0.000050 Time 0.527011 -2024-05-11 11:53:49,163 - Epoch: [59][ 813/ 813] Overall Loss 1.237510 Objective Loss 1.237510 LR 0.000050 Time 0.526083 -2024-05-11 11:53:49,191 - --- validate (epoch=59)----------- -2024-05-11 11:53:49,191 - 3250 samples (16 per mini-batch) -2024-05-11 11:53:49,193 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 11:54:43,703 - Epoch: [59][ 100/ 204] Loss 1.338928 mAP 0.896450 -2024-05-11 11:55:37,987 - Epoch: [59][ 200/ 204] Loss 1.338232 mAP 0.887537 -2024-05-11 11:55:38,746 - Epoch: [59][ 204/ 204] Loss 1.338376 mAP 0.887555 -2024-05-11 11:55:38,771 - ==> mAP: 0.88755 Loss: 1.338 - -2024-05-11 11:55:38,773 - ==> Best 
[mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 11:55:38,773 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 11:55:38,793 - 
-
-2024-05-11 11:55:38,793 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 11:56:35,956 - Epoch: [60][ 100/ 813] Overall Loss 1.221954 Objective Loss 1.221954 LR 0.000050 Time 0.571607
-2024-05-11 11:57:29,831 - Epoch: [60][ 200/ 813] Overall Loss 1.225518 Objective Loss 1.225518 LR 0.000050 Time 0.555168
-2024-05-11 11:58:22,541 - Epoch: [60][ 300/ 813] Overall Loss 1.233231 Objective Loss 1.233231 LR 0.000050 Time 0.545808
-2024-05-11 11:59:13,137 - Epoch: [60][ 400/ 813] Overall Loss 1.234742 Objective Loss 1.234742 LR 0.000050 Time 0.535840
-2024-05-11 12:00:04,563 - Epoch: [60][ 500/ 813] Overall Loss 1.235560 Objective Loss 1.235560 LR 0.000050 Time 0.531513
-2024-05-11 12:00:58,363 - Epoch: [60][ 600/ 813] Overall Loss 1.237016 Objective Loss 1.237016 LR 0.000050 Time 0.532587
-2024-05-11 12:01:52,116 - Epoch: [60][ 700/ 813] Overall Loss 1.240154 Objective Loss 1.240154 LR 0.000050 Time 0.533290
-2024-05-11 12:02:44,069 - Epoch: [60][ 800/ 813] Overall Loss 1.240174 Objective Loss 1.240174 LR 0.000050 Time 0.531569
-2024-05-11 12:02:49,294 - Epoch: [60][ 813/ 813] Overall Loss 1.240304 Objective Loss 1.240304 LR 0.000050 Time 0.529492
-2024-05-11 12:02:49,328 - --- validate (epoch=60)-----------
-2024-05-11 12:02:49,329 - 3250 samples (16 per mini-batch)
-2024-05-11 12:02:49,330 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 12:03:43,140 - Epoch: [60][ 100/ 204] Loss 1.365944 mAP 0.877847
-2024-05-11 12:04:36,540 - Epoch: [60][ 200/ 204] Loss 1.347244 mAP 0.885645
-2024-05-11 12:04:36,773 - Epoch: [60][ 204/ 204] Loss 1.347422 mAP 0.885561
-2024-05-11 12:04:36,798 - ==> mAP: 0.88556 Loss: 1.347
-
-2024-05-11 12:04:36,801 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 12:04:36,801 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 12:04:36,821 - 
-
-2024-05-11 12:04:36,821 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 12:05:32,565 - Epoch: [61][ 100/ 813] Overall Loss 1.196725 Objective Loss 1.196725 LR 0.000050 Time 0.557407
-2024-05-11 12:06:26,137 - Epoch: [61][ 200/ 813] Overall Loss 1.217596 Objective Loss 1.217596 LR 0.000050 Time 0.546559
-2024-05-11 12:07:18,705 - Epoch: [61][ 300/ 813] Overall Loss 1.227777 Objective Loss 1.227777 LR 0.000050 Time 0.539594
-2024-05-11 12:08:08,935 - Epoch: [61][ 400/ 813] Overall Loss 1.235343 Objective Loss 1.235343 LR 0.000050 Time 0.530266
-2024-05-11 12:09:02,377 - Epoch: [61][ 500/ 813] Overall Loss 1.240303 Objective Loss 1.240303 LR 0.000050 Time 0.531094
-2024-05-11 12:09:56,714 - Epoch: [61][ 600/ 813] Overall Loss 1.241193 Objective Loss 1.241193 LR 0.000050 Time 0.533138
-2024-05-11 12:10:48,425 - Epoch: [61][ 700/ 813] Overall Loss 1.239855 Objective Loss 1.239855 LR 0.000050 Time 0.530846
-2024-05-11 12:11:40,494 - Epoch: [61][ 800/ 813] Overall Loss 1.237228 Objective Loss 1.237228 LR 0.000050 Time 0.529575
-2024-05-11 12:11:46,734 - Epoch: [61][ 813/ 813] Overall Loss 1.236388 Objective Loss 1.236388 LR 0.000050 Time 0.528781
-2024-05-11 12:11:46,761 - --- validate (epoch=61)-----------
-2024-05-11 12:11:46,762 - 3250 samples (16 per mini-batch)
-2024-05-11 12:11:46,763 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 12:12:41,196 - Epoch: [61][ 100/ 204] Loss 1.386816 mAP 0.863978
-2024-05-11 12:13:35,303 - Epoch: [61][ 200/ 204] Loss 1.356333 mAP 0.874154
-2024-05-11 12:13:35,675 - Epoch: [61][ 204/ 204] Loss 1.360437 mAP 0.874185
-2024-05-11 12:13:35,701 - ==> mAP: 0.87418 Loss: 1.360
-
-2024-05-11 12:13:35,703 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 12:13:35,703 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 12:13:35,723 - 
-
-2024-05-11 12:13:35,723 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 12:14:30,043 - Epoch: [62][ 100/ 813] Overall Loss 1.207957 Objective Loss 1.207957 LR 0.000050 Time 0.543076
-2024-05-11 12:15:23,570 - Epoch: [62][ 200/ 813] Overall Loss 1.213504 Objective Loss 1.213504 LR 0.000050 Time 0.539164
-2024-05-11 12:16:16,469 - Epoch: [62][ 300/ 813] Overall Loss 1.229521 Objective Loss 1.229521 LR 0.000050 Time 0.535732
-2024-05-11 12:17:06,824 - Epoch: [62][ 400/ 813] Overall Loss 1.232922 Objective Loss 1.232922 LR 0.000050 Time 0.527681
-2024-05-11 12:17:59,891 - Epoch: [62][ 500/ 813] Overall Loss 1.237818 Objective Loss 1.237818 LR 0.000050 Time 0.528277
-2024-05-11 12:18:53,199 - Epoch: [62][ 600/ 813] Overall Loss 1.243276 Objective Loss 1.243276 LR 0.000050 Time 0.529075
-2024-05-11 12:19:47,049 - Epoch: [62][ 700/ 813] Overall Loss 1.244697 Objective Loss 1.244697 LR 0.000050 Time 0.530418
-2024-05-11 12:20:38,486 - Epoch: [62][ 800/ 813] Overall Loss 1.243011 Objective Loss 1.243011 LR 0.000050 Time 0.528398
-2024-05-11 12:20:43,629 - Epoch: [62][ 813/ 813] Overall Loss 1.242364 Objective Loss 1.242364 LR 0.000050 Time 0.526274
-2024-05-11 12:20:43,655 - --- validate (epoch=62)-----------
-2024-05-11 12:20:43,657 - 3250 samples (16 per mini-batch)
-2024-05-11 12:20:43,658 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 12:21:39,824 - Epoch: [62][ 100/ 204] Loss 1.350199 mAP 0.878500
-2024-05-11 12:22:33,832 - Epoch: [62][ 200/ 204] Loss 1.335736 mAP 0.878368
-2024-05-11 12:22:34,114 - Epoch: [62][ 204/ 204] Loss 1.337615 mAP 0.878367
-2024-05-11 12:22:34,138 - ==> mAP: 0.87837 Loss: 1.338
-
-2024-05-11 12:22:34,143 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 12:22:34,143 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 12:22:34,168 - 
-
-2024-05-11 12:22:34,168 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 12:23:29,609 - Epoch: [63][ 100/ 813] Overall Loss 1.211566 Objective Loss 1.211566 LR 0.000050 Time 0.554278
-2024-05-11 12:24:23,089 - Epoch: [63][ 200/ 813] Overall Loss 1.217221 Objective Loss 1.217221 LR 0.000050 Time 0.544517
-2024-05-11 12:25:14,729 - Epoch: [63][ 300/ 813] Overall Loss 1.223344 Objective Loss 1.223344 LR 0.000050 Time 0.535138
-2024-05-11 12:26:05,760 - Epoch: [63][ 400/ 813] Overall Loss 1.230767 Objective Loss 1.230767 LR 0.000050 Time 0.528927
-2024-05-11 12:26:57,158 - Epoch: [63][ 500/ 813] Overall Loss 1.233578 Objective Loss 1.233578 LR 0.000050 Time 0.525935
-2024-05-11 12:27:51,430 - Epoch: [63][ 600/ 813] Overall Loss 1.234935 Objective Loss 1.234935 LR 0.000050 Time 0.528730
-2024-05-11 12:28:43,827 - Epoch: [63][ 700/ 813] Overall Loss 1.233076 Objective Loss 1.233076 LR 0.000050 Time 0.528048
-2024-05-11 12:29:36,446 - Epoch: [63][ 800/ 813] Overall Loss 1.234815 Objective Loss 1.234815 LR 0.000050 Time 0.527812
-2024-05-11 12:29:41,769 - Epoch: [63][ 813/ 813] Overall Loss 1.234033 Objective Loss 1.234033 LR 0.000050 Time 0.525915
-2024-05-11 12:29:41,799 - --- validate (epoch=63)-----------
-2024-05-11 12:29:41,799 - 3250 samples (16 per mini-batch)
-2024-05-11 12:29:41,800 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 12:30:38,201 - Epoch: [63][ 100/ 204] Loss 1.375176 mAP 0.858584
-2024-05-11 12:31:31,154 - Epoch: [63][ 200/ 204] Loss 1.360364 mAP 0.877635
-2024-05-11 12:31:32,019 - Epoch: [63][ 204/ 204] Loss 1.357997 mAP 0.877648
-2024-05-11 12:31:32,044 - ==> mAP: 0.87765 Loss: 1.358
-
-2024-05-11 12:31:32,046 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 12:31:32,046 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 12:31:32,067 - 
-
-2024-05-11 12:31:32,067 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 12:32:29,014 - Epoch: [64][ 100/ 813] Overall Loss 1.221101 Objective Loss 1.221101 LR 0.000050 Time 0.569451
-2024-05-11 12:33:22,613 - Epoch: [64][ 200/ 813] Overall Loss 1.229459 Objective Loss 1.229459 LR 0.000050 Time 0.552698
-2024-05-11 12:34:14,583 - Epoch: [64][ 300/ 813] Overall Loss 1.232473 Objective Loss 1.232473 LR 0.000050 Time 0.541693
-2024-05-11 12:35:06,641 - Epoch: [64][ 400/ 813] Overall Loss 1.246461 Objective Loss 1.246461 LR 0.000050 Time 0.536407
-2024-05-11 12:35:58,996 - Epoch: [64][ 500/ 813] Overall Loss 1.249893 Objective Loss 1.249893 LR 0.000050 Time 0.533832
-2024-05-11 12:36:52,512 - Epoch: [64][ 600/ 813] Overall Loss 1.248209 Objective Loss 1.248209 LR 0.000050 Time 0.534046
-2024-05-11 12:37:46,037 - Epoch: [64][ 700/ 813] Overall Loss 1.246465 Objective Loss 1.246465 LR 0.000050 Time 0.534210
-2024-05-11 12:38:37,662 - Epoch: [64][ 800/ 813] Overall Loss 1.247545 Objective Loss 1.247545 LR 0.000050 Time 0.531963
-2024-05-11 12:38:43,666 - Epoch: [64][ 813/ 813] Overall Loss 1.245666 Objective Loss 1.245666 LR 0.000050 Time 0.530841
-2024-05-11 12:38:43,691 - --- validate (epoch=64)-----------
-2024-05-11 12:38:43,691 - 3250 samples (16 per mini-batch)
-2024-05-11 12:38:43,693 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 12:39:41,138 - Epoch: [64][ 100/ 204] Loss 1.342194 mAP 0.886159
-2024-05-11 12:40:34,160 - Epoch: [64][ 200/ 204] Loss 1.333352 mAP 0.886351
-2024-05-11 12:40:34,527 - Epoch: [64][ 204/ 204] Loss 1.331999 mAP 0.886401
-2024-05-11 12:40:34,553 - ==> mAP: 0.88640 Loss: 1.332
-
-2024-05-11 12:40:34,556 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 12:40:34,556 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 12:40:34,576 - 
-
-2024-05-11 12:40:34,576 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 12:41:29,871 - Epoch: [65][ 100/ 813] Overall Loss 1.223878 Objective Loss 1.223878 LR 0.000050 Time 0.552921
-2024-05-11 12:42:24,025 - Epoch: [65][ 200/ 813] Overall Loss 1.217631 Objective Loss 1.217631 LR 0.000050 Time 0.547225
-2024-05-11 12:43:16,010 - Epoch: [65][ 300/ 813] Overall Loss 1.223097 Objective Loss 1.223097 LR 0.000050 Time 0.538094
-2024-05-11 12:44:06,812 - Epoch: [65][ 400/ 813] Overall Loss 1.226798 Objective Loss 1.226798 LR 0.000050 Time 0.530571
-2024-05-11 12:44:59,288 - Epoch: [65][ 500/ 813] Overall Loss 1.232970 Objective Loss 1.232970 LR 0.000050 Time 0.529406
-2024-05-11 12:45:53,297 - Epoch: [65][ 600/ 813] Overall Loss 1.235176 Objective Loss 1.235176 LR 0.000050 Time 0.531185
-2024-05-11 12:46:46,156 - Epoch: [65][ 700/ 813] Overall Loss 1.234119 Objective Loss 1.234119 LR 0.000050 Time 0.530812
-2024-05-11 12:47:37,945 - Epoch: [65][ 800/ 813] Overall Loss 1.236241 Objective Loss 1.236241 LR 0.000050 Time 0.529191
-2024-05-11 12:47:43,690 - Epoch: [65][ 813/ 813] Overall Loss 1.235529 Objective Loss 1.235529 LR 0.000050 Time 0.527795
-2024-05-11 12:47:43,725 - --- validate (epoch=65)-----------
-2024-05-11 12:47:43,725 - 3250 samples (16 per mini-batch)
-2024-05-11 12:47:43,727 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 12:48:39,423 - Epoch: [65][ 100/ 204] Loss 1.338577 mAP 0.880354
-2024-05-11 12:49:32,342 - Epoch: [65][ 200/ 204] Loss 1.343164 mAP 0.879569
-2024-05-11 12:49:32,839 - Epoch: [65][ 204/ 204] Loss 1.340205 mAP 0.879596
-2024-05-11 12:49:32,864 - ==> mAP: 0.87960 Loss: 1.340
-
-2024-05-11 12:49:32,869 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 12:49:32,869 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 12:49:32,892 - 
-
-2024-05-11 12:49:32,892 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 12:50:29,322 - Epoch: [66][ 100/ 813] Overall Loss 1.208551 Objective Loss 1.208551 LR 0.000050 Time 0.564275
-2024-05-11 12:51:23,380 - Epoch: [66][ 200/ 813] Overall Loss 1.213926 Objective Loss 1.213926 LR 0.000050 Time 0.552408
-2024-05-11 12:52:16,270 - Epoch: [66][ 300/ 813] Overall Loss 1.221791 Objective Loss 1.221791 LR 0.000050 Time 0.544566
-2024-05-11 12:53:05,519 - Epoch: [66][ 400/ 813] Overall Loss 1.222522 Objective Loss 1.222522 LR 0.000050 Time 0.531528
-2024-05-11 12:53:58,824 - Epoch: [66][ 500/ 813] Overall Loss 1.222354 Objective Loss 1.222354 LR 0.000050 Time 0.531829
-2024-05-11 12:54:52,204 - Epoch: [66][ 600/ 813] Overall Loss 1.225545 Objective Loss 1.225545 LR 0.000050 Time 0.532149
-2024-05-11 12:55:46,315 - Epoch: [66][ 700/ 813] Overall Loss 1.226374 Objective Loss 1.226374 LR 0.000050 Time 0.533419
-2024-05-11 12:56:38,481 - Epoch: [66][ 800/ 813] Overall Loss 1.224660 Objective Loss 1.224660 LR 0.000050 Time 0.531947
-2024-05-11 12:56:44,475 - Epoch: [66][ 813/ 813] Overall Loss 1.224719 Objective Loss 1.224719 LR 0.000050 Time 0.530814
-2024-05-11 12:56:44,502 - --- validate (epoch=66)-----------
-2024-05-11 12:56:44,503 - 3250 samples (16 per mini-batch)
-2024-05-11 12:56:44,504 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 12:57:39,271 - Epoch: [66][ 100/ 204] Loss 1.343046 mAP 0.876592
-2024-05-11 12:58:31,610 - Epoch: [66][ 200/ 204] Loss 1.341089 mAP 0.878010
-2024-05-11 12:58:32,242 - Epoch: [66][ 204/ 204] Loss 1.341134 mAP 0.878018
-2024-05-11 12:58:32,267 - ==> mAP: 0.87802 Loss: 1.341
-
-2024-05-11 12:58:32,270 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 12:58:32,270 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 12:58:32,290 - 
-
-2024-05-11 12:58:32,290 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 12:59:27,068 - Epoch: [67][ 100/ 813] Overall Loss 1.217987 Objective Loss 1.217987 LR 0.000050 Time 0.547753
-2024-05-11 13:00:21,476 - Epoch: [67][ 200/ 813] Overall Loss 1.224194 Objective Loss 1.224194 LR 0.000050 Time 0.545896
-2024-05-11 13:01:13,815 - Epoch: [67][ 300/ 813] Overall Loss 1.224636 Objective Loss 1.224636 LR 0.000050 Time 0.538369
-2024-05-11 13:02:04,477 - Epoch: [67][ 400/ 813] Overall Loss 1.229831 Objective Loss 1.229831 LR 0.000050 Time 0.530428
-2024-05-11 13:02:57,347 - Epoch: [67][ 500/ 813] Overall Loss 1.229967 Objective Loss 1.229967 LR 0.000050 Time 0.530075
-2024-05-11 13:03:52,183 - Epoch: [67][ 600/ 813] Overall Loss 1.229586 Objective Loss 1.229586 LR 0.000050 Time 0.533119
-2024-05-11 13:04:45,402 - Epoch: [67][ 700/ 813] Overall Loss 1.228623 Objective Loss 1.228623 LR 0.000050 Time 0.532985
-2024-05-11 13:05:37,817 - Epoch: [67][ 800/ 813] Overall Loss 1.228198 Objective Loss 1.228198 LR 0.000050 Time 0.531878
-2024-05-11 13:05:42,859 - Epoch: [67][ 813/ 813] Overall Loss 1.227549 Objective Loss 1.227549 LR 0.000050 Time 0.529575
-2024-05-11 13:05:42,883 - --- validate (epoch=67)-----------
-2024-05-11 13:05:42,884 - 3250 samples (16 per mini-batch)
-2024-05-11 13:05:42,885 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 13:06:38,363 - Epoch: [67][ 100/ 204] Loss 1.333822 mAP 0.876961
-2024-05-11 13:07:31,803 - Epoch: [67][ 200/ 204] Loss 1.337794 mAP 0.876025
-2024-05-11 13:07:32,435 - Epoch: [67][ 204/ 204] Loss 1.359106 mAP 0.876058
-2024-05-11 13:07:32,461 - ==> mAP: 0.87606 Loss: 1.359
-
-2024-05-11 13:07:32,464 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 13:07:32,464 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 13:07:32,484 - 
-
-2024-05-11 13:07:32,484 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 13:08:28,526 - Epoch: [68][ 100/ 813] Overall Loss 1.201960 Objective Loss 1.201960 LR 0.000050 Time 0.560401
-2024-05-11 13:09:23,014 - Epoch: [68][ 200/ 813] Overall Loss 1.212630 Objective Loss 1.212630 LR 0.000050 Time 0.552630
-2024-05-11 13:10:14,901 - Epoch: [68][ 300/ 813] Overall Loss 1.225819 Objective Loss 1.225819 LR 0.000050 Time 0.541365
-2024-05-11 13:11:05,688 - Epoch: [68][ 400/ 813] Overall Loss 1.228325 Objective Loss 1.228325 LR 0.000050 Time 0.532986
-2024-05-11 13:11:57,407 - Epoch: [68][ 500/ 813] Overall Loss 1.231687 Objective Loss 1.231687 LR 0.000050 Time 0.529825
-2024-05-11 13:12:52,106 - Epoch: [68][ 600/ 813] Overall Loss 1.233543 Objective Loss 1.233543 LR 0.000050 Time 0.532682
-2024-05-11 13:13:43,231 - Epoch: [68][ 700/ 813] Overall Loss 1.232145 Objective Loss 1.232145 LR 0.000050 Time 0.529619
-2024-05-11 13:14:36,645 - Epoch: [68][ 800/ 813] Overall Loss 1.230225 Objective Loss 1.230225 LR 0.000050 Time 0.530181
-2024-05-11 13:14:41,689 - Epoch: [68][ 813/ 813] Overall Loss 1.229257 Objective Loss 1.229257 LR 0.000050 Time 0.527908
-2024-05-11 13:14:41,715 - --- validate (epoch=68)-----------
-2024-05-11 13:14:41,716 - 3250 samples (16 per mini-batch)
-2024-05-11 13:14:41,717 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 13:15:36,921 - Epoch: [68][ 100/ 204] Loss 1.335580 mAP 0.880036
-2024-05-11 13:16:29,902 - Epoch: [68][ 200/ 204] Loss 1.332633 mAP 0.879577
-2024-05-11 13:16:31,206 - Epoch: [68][ 204/ 204] Loss 1.332766 mAP 0.879568
-2024-05-11 13:16:31,231 - ==> mAP: 0.87957 Loss: 1.333
-
-2024-05-11 13:16:31,234 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 13:16:31,234 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 13:16:31,254 - 
-
-2024-05-11 13:16:31,255 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 13:17:26,222 - Epoch: [69][ 100/ 813] Overall Loss 1.205293 Objective Loss 1.205293 LR 0.000050 Time 0.549651
-2024-05-11 13:18:20,718 - Epoch: [69][ 200/ 813] Overall Loss 1.207457 Objective Loss 1.207457 LR 0.000050 Time 0.547299
-2024-05-11 13:19:14,156 - Epoch: [69][ 300/ 813] Overall Loss 1.221346 Objective Loss 1.221346 LR 0.000050 Time 0.542987
-2024-05-11 13:20:04,719 - Epoch: [69][ 400/ 813] Overall Loss 1.218761 Objective Loss 1.218761 LR 0.000050 Time 0.533644
-2024-05-11 13:20:57,276 - Epoch: [69][ 500/ 813] Overall Loss 1.222989 Objective Loss 1.222989 LR 0.000050 Time 0.532027
-2024-05-11 13:21:50,609 - Epoch: [69][ 600/ 813] Overall Loss 1.227755 Objective Loss 1.227755 LR 0.000050 Time 0.532238
-2024-05-11 13:22:43,304 - Epoch: [69][ 700/ 813] Overall Loss 1.226736 Objective Loss 1.226736 LR 0.000050 Time 0.531480
-2024-05-11 13:23:36,576 - Epoch: [69][ 800/ 813] Overall Loss 1.227737 Objective Loss 1.227737 LR 0.000050 Time 0.531633
-2024-05-11 13:23:41,845 - Epoch: [69][ 813/ 813] Overall Loss 1.227318 Objective Loss 1.227318 LR 0.000050 Time 0.529613
-2024-05-11 13:23:41,880 - --- validate (epoch=69)-----------
-2024-05-11 13:23:41,880 - 3250 samples (16 per mini-batch)
-2024-05-11 13:23:41,881 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 13:24:35,978 - Epoch: [69][ 100/ 204] Loss 1.363586 mAP 0.878443
-2024-05-11 13:25:29,822 - Epoch: [69][ 200/ 204] Loss 1.347335 mAP 0.887020
-2024-05-11 13:25:30,264 - Epoch: [69][ 204/ 204] Loss 1.348207 mAP 0.886997
-2024-05-11 13:25:30,291 - ==> mAP: 0.88700 Loss: 1.348
-
-2024-05-11 13:25:30,293 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 13:25:30,294 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 13:25:30,314 - 
-
-2024-05-11 13:25:30,314 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 13:26:26,253 - Epoch: [70][ 100/ 813] Overall Loss 1.186665 Objective Loss 1.186665 LR 0.000050 Time 0.559371
-2024-05-11 13:27:19,151 - Epoch: [70][ 200/ 813] Overall Loss 1.198897 Objective Loss 1.198897 LR 0.000050 Time 0.544139
-2024-05-11 13:28:11,466 - Epoch: [70][ 300/ 813] Overall Loss 1.212746 Objective Loss 1.212746 LR 0.000050 Time 0.537138
-2024-05-11 13:29:03,598 - Epoch: [70][ 400/ 813] Overall Loss 1.216052 Objective Loss 1.216052 LR 0.000050 Time 0.533180
-2024-05-11 13:29:54,528 - Epoch: [70][ 500/ 813] Overall Loss 1.216607 Objective Loss 1.216607 LR 0.000050 Time 0.528400
-2024-05-11 13:30:48,469 - Epoch: [70][ 600/ 813] Overall Loss 1.215893 Objective Loss 1.215893 LR 0.000050 Time 0.530232
-2024-05-11 13:31:39,683 - Epoch: [70][ 700/ 813] Overall Loss 1.215540 Objective Loss 1.215540 LR 0.000050 Time 0.527642
-2024-05-11 13:32:32,224 - Epoch: [70][ 800/ 813] Overall Loss 1.216744 Objective Loss 1.216744 LR 0.000050 Time 0.527361
-2024-05-11 13:32:37,445 - Epoch: [70][ 813/ 813] Overall Loss 1.216311 Objective Loss 1.216311 LR 0.000050 Time 0.525350
-2024-05-11 13:32:37,470 - --- validate (epoch=70)-----------
-2024-05-11 13:32:37,471 - 3250 samples (16 per mini-batch)
-2024-05-11 13:32:37,472 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 13:33:32,375 - Epoch: [70][ 100/ 204] Loss 1.345440 mAP 0.888774
-2024-05-11 13:34:26,137 - Epoch: [70][ 200/ 204] Loss 1.337534 mAP 0.888480
-2024-05-11 13:34:26,522 - Epoch: [70][ 204/ 204] Loss 1.338205 mAP 0.878744
-2024-05-11 13:34:26,547 - ==> mAP: 0.87874 Loss: 1.338
-
-2024-05-11 13:34:26,550 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 13:34:26,550 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 13:34:26,570 - 
-
-2024-05-11 13:34:26,571 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 13:35:22,155 - Epoch: [71][ 100/ 813] Overall Loss 1.198101 Objective Loss 1.198101 LR 0.000050 Time 0.555819
-2024-05-11 13:36:15,663 - Epoch: [71][ 200/ 813] Overall Loss 1.193430 Objective Loss 1.193430 LR 0.000050 Time 0.545445
-2024-05-11 13:37:07,646 - Epoch: [71][ 300/ 813] Overall Loss 1.208419 Objective Loss 1.208419 LR 0.000050 Time 0.536899
-2024-05-11 13:37:57,486 - Epoch: [71][ 400/ 813] Overall Loss 1.216733 Objective Loss 1.216733 LR 0.000050 Time 0.527271
-2024-05-11 13:38:50,643 - Epoch: [71][ 500/ 813] Overall Loss 1.225600 Objective Loss 1.225600 LR 0.000050 Time 0.528127
-2024-05-11 13:39:44,960 - Epoch: [71][ 600/ 813] Overall Loss 1.227111 Objective Loss 1.227111 LR 0.000050 Time 0.530632
-2024-05-11 13:40:36,531 - Epoch: [71][ 700/ 813] Overall Loss 1.227665 Objective Loss 1.227665 LR 0.000050 Time 0.528498
-2024-05-11 13:41:29,305 - Epoch: [71][ 800/ 813] Overall Loss 1.227146 Objective Loss 1.227146 LR 0.000050 Time 0.528399
-2024-05-11 13:41:34,923 - Epoch: [71][ 813/ 813] Overall Loss 1.225910 Objective Loss 1.225910 LR 0.000050 Time 0.526859
-2024-05-11 13:41:34,951 - --- validate (epoch=71)-----------
-2024-05-11 13:41:34,952 - 3250 samples (16 per mini-batch)
-2024-05-11 13:41:34,953 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 13:42:29,468 - Epoch: [71][ 100/ 204] Loss 1.352711 mAP 0.884569
-2024-05-11 13:43:23,846 - Epoch: [71][ 200/ 204] Loss 1.348542 mAP 0.883996
-2024-05-11 13:43:24,246 - Epoch: [71][ 204/ 204] Loss 1.348490 mAP 0.884016
-2024-05-11 13:43:24,271 - ==> mAP: 0.88402 Loss: 1.348
-
-2024-05-11 13:43:24,274 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 13:43:24,274 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 13:43:24,294 - 
-
-2024-05-11 13:43:24,295 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 13:44:19,200 - Epoch: [72][ 100/ 813] Overall Loss 1.181303 Objective Loss 1.181303 LR 0.000050 Time 0.549030
-2024-05-11 13:45:13,236 - Epoch: [72][ 200/ 813] Overall Loss 1.196923 Objective Loss 1.196923 LR 0.000050 Time 0.544689
-2024-05-11 13:46:06,737 - Epoch: [72][ 300/ 813] Overall Loss 1.207176 Objective Loss 1.207176 LR 0.000050 Time 0.541440
-2024-05-11 13:46:56,378 - Epoch: [72][ 400/ 813] Overall Loss 1.209841 Objective Loss 1.209841 LR 0.000050 Time 0.530168
-2024-05-11 13:47:49,546 - Epoch: [72][ 500/ 813] Overall Loss 1.210075 Objective Loss 1.210075 LR 0.000050 Time 0.530466
-2024-05-11 13:48:43,708 - Epoch: [72][ 600/ 813] Overall Loss 1.213096 Objective Loss 1.213096 LR 0.000050 Time 0.532322
-2024-05-11 13:49:34,757 - Epoch: [72][ 700/ 813] Overall Loss 1.215691 Objective Loss 1.215691 LR 0.000050 Time 0.529201
-2024-05-11 13:50:25,619 - Epoch: [72][ 800/ 813] Overall Loss 1.216119 Objective Loss 1.216119 LR 0.000050 Time 0.526627
-2024-05-11 13:50:31,499 - Epoch: [72][ 813/ 813] Overall Loss 1.216304 Objective Loss 1.216304 LR 0.000050 Time 0.525437
-2024-05-11 13:50:31,533 - --- validate (epoch=72)-----------
-2024-05-11 13:50:31,534 - 3250 samples (16 per mini-batch)
-2024-05-11 13:50:31,535 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 13:51:27,483 - Epoch: [72][ 100/ 204] Loss 1.340430 mAP 0.876973
-2024-05-11 13:52:18,878 - Epoch: [72][ 200/ 204] Loss 1.357473 mAP 0.875687
-2024-05-11 13:52:19,265 - Epoch: [72][ 204/ 204] Loss 1.360778 mAP 0.875684
-2024-05-11 13:52:19,291 - ==> mAP: 0.87568 Loss: 1.361
-
-2024-05-11 13:52:19,293 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 13:52:19,293 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 13:52:19,314 - 
-
-2024-05-11 13:52:19,314 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 13:53:14,234 - Epoch: [73][ 100/ 813] Overall Loss 1.201620 Objective Loss 1.201620 LR 0.000050 Time 0.549180
-2024-05-11 13:54:07,165 - Epoch: [73][ 200/ 813] Overall Loss 1.190772 Objective Loss 1.190772 LR 0.000050 Time 0.539236
-2024-05-11 13:55:00,595 - Epoch: [73][ 300/ 813] Overall Loss 1.203160 Objective Loss 1.203160 LR 0.000050 Time 0.537578
-2024-05-11 13:55:50,966 - Epoch: [73][ 400/ 813] Overall Loss 1.209058 Objective Loss 1.209058 LR 0.000050 Time 0.529107
-2024-05-11 13:56:43,805 - Epoch: [73][ 500/ 813] Overall Loss 1.208568 Objective Loss 1.208568 LR 0.000050 Time 0.528961
-2024-05-11 13:57:37,498 - Epoch: [73][ 600/ 813] Overall Loss 1.214388 Objective Loss 1.214388 LR 0.000050 Time 0.530283
-2024-05-11 13:58:30,765 - Epoch: [73][ 700/ 813] Overall Loss 1.217149 Objective Loss 1.217149 LR 0.000050 Time 0.530621
-2024-05-11 13:59:23,706 - Epoch: [73][ 800/ 813] Overall Loss 1.217971 Objective Loss 1.217971 LR 0.000050 Time 0.530465
-2024-05-11 13:59:29,388 - Epoch: [73][ 813/ 813] Overall Loss 1.217149 Objective Loss 1.217149 LR 0.000050 Time 0.528971
-2024-05-11 13:59:29,414 - --- validate (epoch=73)-----------
-2024-05-11 13:59:29,414 - 3250 samples (16 per mini-batch)
-2024-05-11 13:59:29,415 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 14:00:24,064 - Epoch: [73][ 100/ 204] Loss 1.322772 mAP 0.889337
-2024-05-11 14:01:16,200 - Epoch: [73][ 200/ 204] Loss 1.340554 mAP 0.879156
-2024-05-11 14:01:17,066 - Epoch: [73][ 204/ 204] Loss 1.337532 mAP 0.879167
-2024-05-11 14:01:17,092 - ==> mAP: 0.87917 Loss: 1.338
-
-2024-05-11 14:01:17,095 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 14:01:17,095 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 14:01:17,115 - 
-
-2024-05-11 14:01:17,115 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 14:02:13,169 - Epoch: [74][ 100/ 813] Overall Loss 1.183485 Objective Loss 1.183485 LR 0.000050 Time 0.560517
-2024-05-11 14:03:07,923 - Epoch: [74][ 200/ 813] Overall Loss 1.197077 Objective Loss 1.197077 LR 0.000050 Time 0.554023
-2024-05-11 14:04:00,983 - Epoch: [74][ 300/ 813] Overall Loss 1.204763 Objective Loss 1.204763 LR 0.000050 Time 0.546209
-2024-05-11 14:04:50,627 - Epoch: [74][ 400/ 813] Overall Loss 1.211339 Objective Loss 1.211339 LR 0.000050 Time 0.533765
-2024-05-11 14:05:43,156 - Epoch: [74][ 500/ 813] Overall Loss 1.214338 Objective Loss 1.214338 LR 0.000050 Time 0.532066
-2024-05-11 14:06:37,221 - Epoch: [74][ 600/ 813] Overall Loss 1.215246 Objective Loss 1.215246 LR 0.000050 Time 0.533494
-2024-05-11 14:07:30,566 - Epoch: [74][ 700/ 813] Overall Loss 1.212132 Objective Loss 1.212132 LR 0.000050 Time 0.533485
-2024-05-11 14:08:23,090 - Epoch: [74][ 800/ 813] Overall Loss 1.211877 Objective Loss 1.211877 LR 0.000050 Time 0.532453
-2024-05-11 14:08:27,376 - Epoch: [74][ 813/ 813] Overall Loss 1.211540 Objective Loss 1.211540 LR 0.000050 Time 0.529210
-2024-05-11 14:08:27,403 - --- validate (epoch=74)-----------
-2024-05-11 14:08:27,404 - 3250 samples (16 per mini-batch)
-2024-05-11 14:08:27,405 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 14:09:23,746 - Epoch: [74][ 100/ 204] Loss 1.316117 mAP 0.887229
-2024-05-11 14:10:16,704 - Epoch: [74][ 200/ 204] Loss 1.325229 mAP 0.877707
-2024-05-11 14:10:16,988 - Epoch: [74][ 204/ 204] Loss 1.322676 mAP 0.877743
-2024-05-11 14:10:17,014 - ==> mAP: 0.87774 Loss: 1.323
-
-2024-05-11 14:10:17,016 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 14:10:17,017 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 14:10:17,037 - 
-
-2024-05-11 14:10:17,037 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 14:11:11,636 - Epoch: [75][ 100/ 813] Overall Loss 1.191205 Objective Loss 1.191205 LR 0.000050 Time 0.545968
-2024-05-11 14:12:06,256 - Epoch: [75][ 200/ 813] Overall Loss 1.191525 Objective Loss 1.191525 LR 0.000050 Time 0.546076
-2024-05-11 14:12:59,571 - Epoch: [75][ 300/ 813] Overall Loss 1.206894 Objective Loss 1.206894 LR 0.000050 Time 0.541755
-2024-05-11 14:13:49,713 - Epoch: [75][ 400/ 813] Overall Loss 1.207311 Objective Loss 1.207311 LR 0.000050 Time 0.531661
-2024-05-11 14:14:42,133 - Epoch: [75][ 500/ 813] Overall Loss 1.211671 Objective Loss 1.211671 LR 0.000050 Time 0.530165
-2024-05-11 14:15:34,741 - Epoch: [75][ 600/ 813] Overall Loss 1.213443 Objective Loss 1.213443 LR 0.000050 Time 0.529476
-2024-05-11 14:16:27,402 - Epoch: [75][ 700/ 813] Overall Loss 1.210660 Objective Loss 1.210660 LR 0.000050 Time 0.529066
-2024-05-11 14:17:19,615 - Epoch: [75][ 800/ 813] Overall Loss 1.211984 Objective Loss 1.211984 LR 0.000050 Time 0.528194
-2024-05-11 14:17:24,757 - Epoch: [75][ 813/ 813] Overall Loss 1.211305 Objective Loss 1.211305 LR 0.000050 Time 0.526072
-2024-05-11 14:17:24,783 - --- validate (epoch=75)-----------
-2024-05-11 14:17:24,784 - 3250 samples (16 per mini-batch)
-2024-05-11 14:17:24,785 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 14:18:20,688 - Epoch: [75][ 100/ 204] Loss 1.324885 mAP 0.874856
-2024-05-11 14:19:13,429 - Epoch: [75][ 200/ 204] Loss 1.337244 mAP 0.874981
-2024-05-11 14:19:14,025 - Epoch: [75][ 204/ 204] Loss 1.337583 mAP 0.874939
-2024-05-11 14:19:14,052 - ==> mAP: 0.87494 Loss: 1.338
-
-2024-05-11 14:19:14,055 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 14:19:14,055 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 14:19:14,075 - 
-
-2024-05-11 14:19:14,075 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 14:20:09,596 - Epoch: [76][ 100/ 813] Overall Loss 1.193845 Objective Loss 1.193845 LR 0.000050 Time 0.555188
-2024-05-11 14:21:03,171 - Epoch: [76][ 200/ 813] Overall Loss 1.197415 Objective Loss 1.197415 LR 0.000050 Time 0.545462
-2024-05-11 14:21:55,481 - Epoch: [76][ 300/ 813] Overall Loss 1.213155 Objective Loss 1.213155 LR 0.000050 Time 0.538002
-2024-05-11 14:22:45,848 - Epoch: [76][ 400/ 813] Overall Loss 1.214147 Objective Loss 1.214147 LR 0.000050 Time 0.529414
-2024-05-11 14:23:38,637 - Epoch: [76][ 500/ 813] Overall Loss 1.212086 Objective Loss 1.212086 LR 0.000050 Time 0.529108
-2024-05-11 14:24:32,994 - Epoch: [76][ 600/ 813] Overall Loss 1.210550 Objective Loss 1.210550 LR 0.000050 Time 0.531512
-2024-05-11 14:25:24,454 - Epoch: [76][ 700/ 813] Overall Loss 1.212975 Objective Loss 1.212975 LR 0.000050 Time 0.529094
-2024-05-11 14:26:15,951 - Epoch: [76][ 800/ 813] Overall Loss 1.213195 Objective Loss 1.213195 LR 0.000050 Time 0.527327
-2024-05-11 14:26:22,194 - Epoch: [76][ 813/ 813] Overall Loss 1.213376 Objective Loss 1.213376 LR 0.000050 Time 0.526573
-2024-05-11 14:26:22,220 - --- validate (epoch=76)-----------
-2024-05-11 14:26:22,221 - 3250 samples (16 per mini-batch)
-2024-05-11 14:26:22,222 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 14:27:18,070 - Epoch: [76][ 100/ 204] Loss 1.318840 mAP 0.878049
-2024-05-11 14:28:10,427 - Epoch: [76][ 200/ 204] Loss 1.328322 mAP 0.878171
-2024-05-11 14:28:11,331 - Epoch: [76][ 204/ 204] Loss 1.326833 mAP 0.878187
-2024-05-11 14:28:11,357 - ==> mAP: 0.87819 Loss: 1.327
-
-2024-05-11 14:28:11,359 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 14:28:11,359 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 14:28:11,380 - 
-
-2024-05-11 14:28:11,380 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 14:29:06,245 - Epoch: [77][ 100/ 813] Overall Loss 1.167581 Objective Loss 1.167581 LR 0.000050 Time 0.548624
-2024-05-11 14:30:01,305 - Epoch: [77][ 200/ 813] Overall Loss 1.184191 Objective Loss 1.184191 LR 0.000050 Time 0.549607
-2024-05-11 14:30:53,221 - Epoch: [77][ 300/ 813] Overall Loss 1.198988 Objective Loss 1.198988 LR 0.000050 Time 0.539451
-2024-05-11 14:31:43,841 - Epoch: [77][ 400/ 813] Overall Loss 1.198098 Objective Loss 1.198098 LR 0.000050 Time 0.531130
-2024-05-11 14:32:36,567 - Epoch: [77][ 500/ 813] Overall Loss 1.198920 Objective Loss 1.198920 LR 0.000050 Time 0.530354
-2024-05-11 14:33:28,910 - Epoch: [77][ 600/ 813] Overall Loss 1.200014 Objective Loss 1.200014 LR 0.000050 Time 0.529192
-2024-05-11 14:34:22,215 - Epoch: [77][ 700/ 813] Overall Loss 1.200967 Objective Loss 1.200967 LR 0.000050 Time 0.529741
-2024-05-11 14:35:14,325 - Epoch: [77][ 800/ 813] Overall Loss 1.201168 Objective Loss 1.201168 LR 0.000050 Time 0.528653
-2024-05-11 14:35:19,790 - Epoch: [77][ 813/ 813] Overall Loss 1.201089 Objective Loss 1.201089 LR 0.000050 Time 0.526922
-2024-05-11 14:35:19,816 - --- validate (epoch=77)-----------
-2024-05-11 14:35:19,817 - 3250 samples (16 per mini-batch)
-2024-05-11 14:35:19,818 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 14:36:13,385 - Epoch: [77][ 100/ 204] Loss 1.315289 mAP 0.889608
-2024-05-11 14:37:06,541 - Epoch: [77][ 200/ 204] Loss 1.316933 mAP 0.878574
-2024-05-11 14:37:07,264 - Epoch: [77][ 204/ 204] Loss 1.317593 mAP 0.878559
-2024-05-11 14:37:07,290 - ==> mAP: 0.87856 Loss: 1.318
-
-2024-05-11 14:37:07,293 - ==> Best [mAP: 0.887562 vloss: 1.359812 Params: 368351 on epoch: 57]
-2024-05-11 14:37:07,294 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 14:37:07,314 - 
-
-2024-05-11 14:37:07,314 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 14:38:03,056 - Epoch: [78][ 100/ 813] Overall Loss 1.187317 Objective Loss 1.187317 LR 0.000050 Time 0.557396
-2024-05-11 14:38:55,862 - Epoch: [78][ 200/ 813] Overall Loss 1.200074 Objective Loss 1.200074 LR 0.000050 Time 0.542721
-2024-05-11 14:39:48,615 - Epoch: [78][ 300/ 813] Overall Loss 1.210132 Objective Loss 1.210132 LR 0.000050 Time 0.537584
-2024-05-11 14:40:38,608 - Epoch: [78][ 400/ 813] Overall Loss 1.206354 Objective Loss 1.206354 LR 0.000050 Time 0.528169
-2024-05-11 14:41:31,674 - Epoch: [78][ 500/ 813] Overall Loss 1.210318 Objective Loss 1.210318 LR 0.000050 Time 0.528664
-2024-05-11 14:42:26,588 - Epoch: [78][ 600/ 813] Overall Loss 1.209354 Objective Loss 1.209354 LR 0.000050 Time 0.532038
-2024-05-11 14:43:19,398 - Epoch: [78][ 700/ 813] Overall Loss 1.207650 Objective Loss 1.207650 LR 0.000050 Time 0.531473
-2024-05-11 14:44:12,478 - Epoch: [78][ 800/ 813] Overall Loss 1.205437 Objective Loss 1.205437 LR 0.000050 Time 0.531381
-2024-05-11 14:44:16,799 - Epoch: [78][ 813/ 813] Overall Loss 1.204897 Objective Loss 1.204897 LR 0.000050 Time 0.528198
-2024-05-11 14:44:16,825 - --- validate (epoch=78)-----------
-2024-05-11 14:44:16,825 - 3250 samples (16 per mini-batch)
-2024-05-11 14:44:16,826 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 14:45:12,329 - Epoch: [78][ 100/ 204] Loss 1.307929 mAP 0.889273
-2024-05-11 14:46:05,636 - Epoch: [78][ 200/ 204] Loss 1.329826 mAP 0.889209
-2024-05-11 14:46:06,299 - Epoch: [78][ 204/ 204] Loss 1.328652 mAP 0.889215
-2024-05-11 14:46:06,324 - ==> mAP: 0.88921 Loss: 1.329
-
-2024-05-11 14:46:06,327 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78]
-2024-05-11 14:46:06,327 - Saving checkpoint to: logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 14:46:06,351 - 
-
-2024-05-11 14:46:06,351 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 14:47:01,929 - Epoch: [79][ 100/ 813] Overall Loss 1.186194 Objective Loss 1.186194 LR 0.000050 Time 0.555760
-2024-05-11 14:47:54,602 - Epoch: [79][ 200/ 813] Overall Loss 1.177654 Objective Loss 1.177654 LR 0.000050 Time 0.541236
-2024-05-11 14:48:46,883 - Epoch: [79][ 300/ 813] Overall Loss 1.198739 Objective Loss 1.198739 LR 0.000050 Time 0.535089
-2024-05-11 14:49:37,216 - Epoch: [79][ 400/ 813] Overall Loss 1.198758 Objective Loss 1.198758 LR 0.000050 Time 0.527146
-2024-05-11 14:50:29,833 - Epoch: [79][ 500/ 813] Overall Loss 1.205697 Objective Loss 1.205697 LR 0.000050 Time 0.526946
-2024-05-11 14:51:23,879 - Epoch: [79][ 600/ 813] Overall Loss 1.211943 Objective Loss 1.211943 LR 0.000050 Time 0.529196
-2024-05-11 14:52:16,691 - Epoch: [79][ 700/ 813] Overall Loss 1.212588 Objective Loss 1.212588 LR 0.000050 Time 0.529041
-2024-05-11 14:53:08,891 - Epoch: [79][ 800/ 813] Overall Loss 1.213153 Objective Loss 1.213153 LR 0.000050 Time 0.528158
-2024-05-11 14:53:14,197 - Epoch: [79][ 813/ 813] Overall Loss 1.213228 Objective Loss 1.213228 LR 0.000050 Time 0.526239
-2024-05-11 14:53:14,223 - --- validate (epoch=79)-----------
-2024-05-11 14:53:14,224 - 3250 samples (16 per mini-batch)
-2024-05-11 14:53:14,225 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 14:54:09,813 - Epoch: [79][ 100/ 204] Loss 1.334656 mAP 0.878842
-2024-05-11 14:55:03,094 - Epoch: [79][ 200/ 204] Loss 1.328748 mAP 0.878407
-2024-05-11 14:55:03,662 - Epoch: [79][ 204/ 204] Loss 1.327388 mAP 0.878340
-2024-05-11 14:55:03,688 - ==> mAP: 0.87834 Loss: 1.327
-
-2024-05-11 14:55:03,690 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78]
-2024-05-11 14:55:03,690 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 14:55:03,710 - - -2024-05-11 14:55:03,710 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 14:55:58,522 - Epoch: [80][ 100/ 813] Overall Loss 1.204982 Objective Loss 1.204982 LR 0.000050 Time 0.548101 -2024-05-11 14:56:53,078 - Epoch: [80][ 200/ 813] Overall Loss 1.209859 Objective Loss 1.209859 LR 0.000050 Time 0.546821 -2024-05-11 14:57:45,398 - Epoch: [80][ 300/ 813] Overall Loss 1.219933 Objective Loss 1.219933 LR 0.000050 Time 0.538943 -2024-05-11 14:58:37,764 - Epoch: [80][ 400/ 813] Overall Loss 1.218217 Objective Loss 1.218217 LR 0.000050 Time 0.535119 -2024-05-11 14:59:29,212 - Epoch: [80][ 500/ 813] Overall Loss 1.220413 Objective Loss 1.220413 LR 0.000050 Time 0.530987 -2024-05-11 15:00:22,236 - Epoch: [80][ 600/ 813] Overall Loss 1.222839 Objective Loss 1.222839 LR 0.000050 Time 0.530860 -2024-05-11 15:01:15,727 - Epoch: [80][ 700/ 813] Overall Loss 1.219864 Objective Loss 1.219864 LR 0.000050 Time 0.531436 -2024-05-11 15:02:08,455 - Epoch: [80][ 800/ 813] Overall Loss 1.221799 Objective Loss 1.221799 LR 0.000050 Time 0.530915 -2024-05-11 15:02:13,653 - Epoch: [80][ 813/ 813] Overall Loss 1.222569 Objective Loss 1.222569 LR 0.000050 Time 0.528813 -2024-05-11 15:02:13,680 - --- validate (epoch=80)----------- -2024-05-11 15:02:13,681 - 3250 samples (16 per mini-batch) -2024-05-11 15:02:13,682 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 15:03:08,515 - Epoch: [80][ 100/ 204] Loss 1.325462 mAP 0.878179 -2024-05-11 15:04:01,562 - Epoch: [80][ 200/ 204] Loss 1.319120 mAP 0.878258 -2024-05-11 15:04:01,774 - Epoch: [80][ 204/ 204] Loss 1.318918 mAP 0.878213 -2024-05-11 15:04:01,800 - ==> mAP: 0.87821 Loss: 1.319 - -2024-05-11 15:04:01,802 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 15:04:01,802 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 15:04:01,822 - - -2024-05-11 15:04:01,822 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 15:04:56,644 - Epoch: [81][ 100/ 813] Overall Loss 1.178159 Objective Loss 1.178159 LR 0.000050 Time 0.548190 -2024-05-11 15:05:50,764 - Epoch: [81][ 200/ 813] Overall Loss 1.186230 Objective Loss 1.186230 LR 0.000050 Time 0.544675 -2024-05-11 15:06:44,281 - Epoch: [81][ 300/ 813] Overall Loss 1.204234 Objective Loss 1.204234 LR 0.000050 Time 0.541494 -2024-05-11 15:07:36,214 - Epoch: [81][ 400/ 813] Overall Loss 1.204804 Objective Loss 1.204804 LR 0.000050 Time 0.535949 -2024-05-11 15:08:27,680 - Epoch: [81][ 500/ 813] Overall Loss 1.213262 Objective Loss 1.213262 LR 0.000050 Time 0.531689 -2024-05-11 15:09:22,271 - Epoch: [81][ 600/ 813] Overall Loss 1.211447 Objective Loss 1.211447 LR 0.000050 Time 0.534056 -2024-05-11 15:10:15,499 - Epoch: [81][ 700/ 813] Overall Loss 1.210264 Objective Loss 1.210264 LR 0.000050 Time 0.533797 -2024-05-11 15:11:06,430 - Epoch: [81][ 800/ 813] Overall Loss 1.209447 Objective Loss 1.209447 LR 0.000050 Time 0.530733 -2024-05-11 15:11:12,047 - Epoch: [81][ 813/ 813] Overall Loss 1.208688 Objective Loss 1.208688 LR 0.000050 Time 0.529156 -2024-05-11 15:11:12,076 - --- validate (epoch=81)----------- -2024-05-11 15:11:12,077 - 3250 samples (16 per mini-batch) -2024-05-11 15:11:12,078 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 15:12:07,085 - Epoch: [81][ 100/ 204] Loss 1.336778 mAP 0.877873 -2024-05-11 15:13:00,552 - Epoch: [81][ 200/ 204] Loss 1.317543 mAP 0.888245 -2024-05-11 15:13:01,245 - Epoch: [81][ 204/ 204] Loss 1.318516 mAP 0.888196 -2024-05-11 15:13:01,269 - ==> mAP: 0.88820 Loss: 1.319 - -2024-05-11 15:13:01,275 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 15:13:01,275 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 15:13:01,299 - - -2024-05-11 15:13:01,299 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 15:13:56,694 - Epoch: [82][ 100/ 813] Overall Loss 1.188438 Objective Loss 1.188438 LR 0.000050 Time 0.553933 -2024-05-11 15:14:49,654 - Epoch: [82][ 200/ 813] Overall Loss 1.189384 Objective Loss 1.189384 LR 0.000050 Time 0.541757 -2024-05-11 15:15:42,743 - Epoch: [82][ 300/ 813] Overall Loss 1.200001 Objective Loss 1.200001 LR 0.000050 Time 0.538129 -2024-05-11 15:16:33,957 - Epoch: [82][ 400/ 813] Overall Loss 1.201988 Objective Loss 1.201988 LR 0.000050 Time 0.531629 -2024-05-11 15:17:26,861 - Epoch: [82][ 500/ 813] Overall Loss 1.205476 Objective Loss 1.205476 LR 0.000050 Time 0.531107 -2024-05-11 15:18:20,361 - Epoch: [82][ 600/ 813] Overall Loss 1.205353 Objective Loss 1.205353 LR 0.000050 Time 0.531753 -2024-05-11 15:19:13,605 - Epoch: [82][ 700/ 813] Overall Loss 1.208918 Objective Loss 1.208918 LR 0.000050 Time 0.531849 -2024-05-11 15:20:05,415 - Epoch: [82][ 800/ 813] Overall Loss 1.208441 Objective Loss 1.208441 LR 0.000050 Time 0.530126 -2024-05-11 15:20:10,687 - Epoch: [82][ 813/ 813] Overall Loss 1.209154 Objective Loss 1.209154 LR 0.000050 Time 0.528134 -2024-05-11 15:20:10,712 - --- validate (epoch=82)----------- -2024-05-11 15:20:10,713 - 3250 samples (16 per mini-batch) -2024-05-11 15:20:10,714 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 15:21:05,358 - Epoch: [82][ 100/ 204] Loss 1.309440 mAP 0.888621 -2024-05-11 15:21:58,321 - Epoch: [82][ 200/ 204] Loss 1.334642 mAP 0.877679 -2024-05-11 15:21:58,722 - Epoch: [82][ 204/ 204] Loss 1.331599 mAP 0.877626 -2024-05-11 15:21:58,748 - ==> mAP: 0.87763 Loss: 1.332 - -2024-05-11 15:21:58,750 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 15:21:58,750 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 15:21:58,771 - - -2024-05-11 15:21:58,771 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 15:22:54,019 - Epoch: [83][ 100/ 813] Overall Loss 1.197074 Objective Loss 1.197074 LR 0.000050 Time 0.552454 -2024-05-11 15:23:48,073 - Epoch: [83][ 200/ 813] Overall Loss 1.204640 Objective Loss 1.204640 LR 0.000050 Time 0.546481 -2024-05-11 15:24:40,536 - Epoch: [83][ 300/ 813] Overall Loss 1.218297 Objective Loss 1.218297 LR 0.000050 Time 0.539192 -2024-05-11 15:25:29,974 - Epoch: [83][ 400/ 813] Overall Loss 1.221722 Objective Loss 1.221722 LR 0.000050 Time 0.527983 -2024-05-11 15:26:23,597 - Epoch: [83][ 500/ 813] Overall Loss 1.219454 Objective Loss 1.219454 LR 0.000050 Time 0.529631 -2024-05-11 15:27:17,380 - Epoch: [83][ 600/ 813] Overall Loss 1.218877 Objective Loss 1.218877 LR 0.000050 Time 0.530995 -2024-05-11 15:28:10,309 - Epoch: [83][ 700/ 813] Overall Loss 1.219380 Objective Loss 1.219380 LR 0.000050 Time 0.530749 -2024-05-11 15:29:03,339 - Epoch: [83][ 800/ 813] Overall Loss 1.216628 Objective Loss 1.216628 LR 0.000050 Time 0.530690 -2024-05-11 15:29:07,598 - Epoch: [83][ 813/ 813] Overall Loss 1.217167 Objective Loss 1.217167 LR 0.000050 Time 0.527444 -2024-05-11 15:29:07,625 - --- validate (epoch=83)----------- -2024-05-11 15:29:07,625 - 3250 samples (16 per mini-batch) -2024-05-11 15:29:07,626 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 15:30:02,622 - Epoch: [83][ 100/ 204] Loss 1.355179 mAP 0.859859 -2024-05-11 15:30:56,026 - Epoch: [83][ 200/ 204] Loss 1.349827 mAP 0.858728 -2024-05-11 15:30:56,502 - Epoch: [83][ 204/ 204] Loss 1.347368 mAP 0.858711 -2024-05-11 15:30:56,528 - ==> mAP: 0.85871 Loss: 1.347 - -2024-05-11 15:30:56,530 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 15:30:56,531 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 15:30:56,551 - - -2024-05-11 15:30:56,551 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 15:31:51,766 - Epoch: [84][ 100/ 813] Overall Loss 1.163809 Objective Loss 1.163809 LR 0.000050 Time 0.552125 -2024-05-11 15:32:45,887 - Epoch: [84][ 200/ 813] Overall Loss 1.180120 Objective Loss 1.180120 LR 0.000050 Time 0.546648 -2024-05-11 15:33:38,908 - Epoch: [84][ 300/ 813] Overall Loss 1.202780 Objective Loss 1.202780 LR 0.000050 Time 0.541164 -2024-05-11 15:34:30,206 - Epoch: [84][ 400/ 813] Overall Loss 1.211567 Objective Loss 1.211567 LR 0.000050 Time 0.534114 -2024-05-11 15:35:22,669 - Epoch: [84][ 500/ 813] Overall Loss 1.214637 Objective Loss 1.214637 LR 0.000050 Time 0.532213 -2024-05-11 15:36:16,370 - Epoch: [84][ 600/ 813] Overall Loss 1.219330 Objective Loss 1.219330 LR 0.000050 Time 0.533011 -2024-05-11 15:37:09,303 - Epoch: [84][ 700/ 813] Overall Loss 1.218907 Objective Loss 1.218907 LR 0.000050 Time 0.532483 -2024-05-11 15:38:00,729 - Epoch: [84][ 800/ 813] Overall Loss 1.214977 Objective Loss 1.214977 LR 0.000050 Time 0.530203 -2024-05-11 15:38:06,947 - Epoch: [84][ 813/ 813] Overall Loss 1.213375 Objective Loss 1.213375 LR 0.000050 Time 0.529370 -2024-05-11 15:38:06,973 - --- validate (epoch=84)----------- -2024-05-11 15:38:06,974 - 3250 samples (16 per mini-batch) -2024-05-11 15:38:06,975 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 15:39:01,859 - Epoch: [84][ 100/ 204] Loss 1.307083 mAP 0.887574 -2024-05-11 15:39:55,166 - Epoch: [84][ 200/ 204] Loss 1.317892 mAP 0.887735 -2024-05-11 15:39:55,554 - Epoch: [84][ 204/ 204] Loss 1.316993 mAP 0.887669 -2024-05-11 15:39:55,580 - ==> mAP: 0.88767 Loss: 1.317 - -2024-05-11 15:39:55,583 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 15:39:55,583 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 15:39:55,603 - - -2024-05-11 15:39:55,603 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 15:40:49,891 - Epoch: [85][ 100/ 813] Overall Loss 1.195820 Objective Loss 1.195820 LR 0.000050 Time 0.542865 -2024-05-11 15:41:44,763 - Epoch: [85][ 200/ 813] Overall Loss 1.203141 Objective Loss 1.203141 LR 0.000050 Time 0.545781 -2024-05-11 15:42:36,358 - Epoch: [85][ 300/ 813] Overall Loss 1.213466 Objective Loss 1.213466 LR 0.000050 Time 0.535835 -2024-05-11 15:43:27,567 - Epoch: [85][ 400/ 813] Overall Loss 1.214655 Objective Loss 1.214655 LR 0.000050 Time 0.529891 -2024-05-11 15:44:21,479 - Epoch: [85][ 500/ 813] Overall Loss 1.214634 Objective Loss 1.214634 LR 0.000050 Time 0.531733 -2024-05-11 15:45:15,113 - Epoch: [85][ 600/ 813] Overall Loss 1.215143 Objective Loss 1.215143 LR 0.000050 Time 0.532498 -2024-05-11 15:46:06,930 - Epoch: [85][ 700/ 813] Overall Loss 1.214191 Objective Loss 1.214191 LR 0.000050 Time 0.530449 -2024-05-11 15:46:59,157 - Epoch: [85][ 800/ 813] Overall Loss 1.213403 Objective Loss 1.213403 LR 0.000050 Time 0.529423 -2024-05-11 15:47:04,345 - Epoch: [85][ 813/ 813] Overall Loss 1.212063 Objective Loss 1.212063 LR 0.000050 Time 0.527338 -2024-05-11 15:47:04,370 - --- validate (epoch=85)----------- -2024-05-11 15:47:04,371 - 3250 samples (16 per mini-batch) -2024-05-11 15:47:04,372 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 15:47:59,955 - Epoch: [85][ 100/ 204] Loss 1.341468 mAP 0.876751 -2024-05-11 15:48:53,007 - Epoch: [85][ 200/ 204] Loss 1.343889 mAP 0.865674 -2024-05-11 15:48:53,629 - Epoch: [85][ 204/ 204] Loss 1.344551 mAP 0.865682 -2024-05-11 15:48:53,654 - ==> mAP: 0.86568 Loss: 1.345 - -2024-05-11 15:48:53,657 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 15:48:53,657 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 15:48:53,677 - - -2024-05-11 15:48:53,677 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 15:49:48,637 - Epoch: [86][ 100/ 813] Overall Loss 1.195819 Objective Loss 1.195819 LR 0.000050 Time 0.549581 -2024-05-11 15:50:42,955 - Epoch: [86][ 200/ 813] Overall Loss 1.201067 Objective Loss 1.201067 LR 0.000050 Time 0.546372 -2024-05-11 15:51:35,942 - Epoch: [86][ 300/ 813] Overall Loss 1.204744 Objective Loss 1.204744 LR 0.000050 Time 0.540835 -2024-05-11 15:52:27,144 - Epoch: [86][ 400/ 813] Overall Loss 1.210222 Objective Loss 1.210222 LR 0.000050 Time 0.533626 -2024-05-11 15:53:20,019 - Epoch: [86][ 500/ 813] Overall Loss 1.210729 Objective Loss 1.210729 LR 0.000050 Time 0.532648 -2024-05-11 15:54:13,399 - Epoch: [86][ 600/ 813] Overall Loss 1.215539 Objective Loss 1.215539 LR 0.000050 Time 0.532838 -2024-05-11 15:55:07,257 - Epoch: [86][ 700/ 813] Overall Loss 1.217747 Objective Loss 1.217747 LR 0.000050 Time 0.533656 -2024-05-11 15:55:59,667 - Epoch: [86][ 800/ 813] Overall Loss 1.215616 Objective Loss 1.215616 LR 0.000050 Time 0.532459 -2024-05-11 15:56:04,433 - Epoch: [86][ 813/ 813] Overall Loss 1.213905 Objective Loss 1.213905 LR 0.000050 Time 0.529807 -2024-05-11 15:56:04,459 - --- validate (epoch=86)----------- -2024-05-11 15:56:04,460 - 3250 samples (16 per mini-batch) -2024-05-11 15:56:04,461 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 15:57:00,409 - Epoch: [86][ 100/ 204] Loss 1.291428 mAP 0.884746 -2024-05-11 15:57:51,705 - Epoch: [86][ 200/ 204] Loss 1.305351 mAP 0.875683 -2024-05-11 15:57:52,476 - Epoch: [86][ 204/ 204] Loss 1.306768 mAP 0.875747 -2024-05-11 15:57:52,503 - ==> mAP: 0.87575 Loss: 1.307 - -2024-05-11 15:57:52,506 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 15:57:52,506 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 15:57:52,525 - - -2024-05-11 15:57:52,526 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 15:58:48,423 - Epoch: [87][ 100/ 813] Overall Loss 1.176110 Objective Loss 1.176110 LR 0.000050 Time 0.558954 -2024-05-11 15:59:41,808 - Epoch: [87][ 200/ 813] Overall Loss 1.195003 Objective Loss 1.195003 LR 0.000050 Time 0.546395 -2024-05-11 16:00:35,837 - Epoch: [87][ 300/ 813] Overall Loss 1.206130 Objective Loss 1.206130 LR 0.000050 Time 0.544355 -2024-05-11 16:01:25,505 - Epoch: [87][ 400/ 813] Overall Loss 1.201131 Objective Loss 1.201131 LR 0.000050 Time 0.532431 -2024-05-11 16:02:18,375 - Epoch: [87][ 500/ 813] Overall Loss 1.205429 Objective Loss 1.205429 LR 0.000050 Time 0.531676 -2024-05-11 16:03:12,558 - Epoch: [87][ 600/ 813] Overall Loss 1.209740 Objective Loss 1.209740 LR 0.000050 Time 0.533365 -2024-05-11 16:04:04,859 - Epoch: [87][ 700/ 813] Overall Loss 1.206136 Objective Loss 1.206136 LR 0.000050 Time 0.531877 -2024-05-11 16:04:57,155 - Epoch: [87][ 800/ 813] Overall Loss 1.205549 Objective Loss 1.205549 LR 0.000050 Time 0.530760 -2024-05-11 16:05:03,531 - Epoch: [87][ 813/ 813] Overall Loss 1.204991 Objective Loss 1.204991 LR 0.000050 Time 0.530115 -2024-05-11 16:05:03,559 - --- validate (epoch=87)----------- -2024-05-11 16:05:03,560 - 3250 samples (16 per mini-batch) -2024-05-11 16:05:03,561 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 16:05:59,195 - Epoch: [87][ 100/ 204] Loss 1.346800 mAP 0.878851 -2024-05-11 16:06:53,187 - Epoch: [87][ 200/ 204] Loss 1.322223 mAP 0.877709 -2024-05-11 16:06:53,733 - Epoch: [87][ 204/ 204] Loss 1.326632 mAP 0.877677 -2024-05-11 16:06:53,759 - ==> mAP: 0.87768 Loss: 1.327 - -2024-05-11 16:06:53,762 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 16:06:53,762 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 16:06:53,783 - - -2024-05-11 16:06:53,783 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 16:07:49,765 - Epoch: [88][ 100/ 813] Overall Loss 1.165363 Objective Loss 1.165363 LR 0.000050 Time 0.559798 -2024-05-11 16:08:42,913 - Epoch: [88][ 200/ 813] Overall Loss 1.183983 Objective Loss 1.183983 LR 0.000050 Time 0.545619 -2024-05-11 16:09:35,155 - Epoch: [88][ 300/ 813] Overall Loss 1.197887 Objective Loss 1.197887 LR 0.000050 Time 0.537880 -2024-05-11 16:10:26,761 - Epoch: [88][ 400/ 813] Overall Loss 1.206303 Objective Loss 1.206303 LR 0.000050 Time 0.532421 -2024-05-11 16:11:18,242 - Epoch: [88][ 500/ 813] Overall Loss 1.210950 Objective Loss 1.210950 LR 0.000050 Time 0.528895 -2024-05-11 16:12:11,399 - Epoch: [88][ 600/ 813] Overall Loss 1.211197 Objective Loss 1.211197 LR 0.000050 Time 0.529331 -2024-05-11 16:13:04,251 - Epoch: [88][ 700/ 813] Overall Loss 1.211535 Objective Loss 1.211535 LR 0.000050 Time 0.529213 -2024-05-11 16:13:57,492 - Epoch: [88][ 800/ 813] Overall Loss 1.213456 Objective Loss 1.213456 LR 0.000050 Time 0.529610 -2024-05-11 16:14:02,204 - Epoch: [88][ 813/ 813] Overall Loss 1.213078 Objective Loss 1.213078 LR 0.000050 Time 0.526938 -2024-05-11 16:14:02,230 - --- validate (epoch=88)----------- -2024-05-11 16:14:02,231 - 3250 samples (16 per mini-batch) -2024-05-11 16:14:02,232 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 16:14:57,901 - Epoch: [88][ 100/ 204] Loss 1.320902 mAP 0.886732 -2024-05-11 16:15:50,913 - Epoch: [88][ 200/ 204] Loss 1.322857 mAP 0.886923 -2024-05-11 16:15:51,135 - Epoch: [88][ 204/ 204] Loss 1.325979 mAP 0.886902 -2024-05-11 16:15:51,163 - ==> mAP: 0.88690 Loss: 1.326 - -2024-05-11 16:15:51,166 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 16:15:51,166 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 16:15:51,187 - - -2024-05-11 16:15:51,187 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 16:16:47,658 - Epoch: [89][ 100/ 813] Overall Loss 1.182149 Objective Loss 1.182149 LR 0.000050 Time 0.564689 -2024-05-11 16:17:41,680 - Epoch: [89][ 200/ 813] Overall Loss 1.181711 Objective Loss 1.181711 LR 0.000050 Time 0.552446 -2024-05-11 16:18:32,723 - Epoch: [89][ 300/ 813] Overall Loss 1.195434 Objective Loss 1.195434 LR 0.000050 Time 0.538426 -2024-05-11 16:19:23,738 - Epoch: [89][ 400/ 813] Overall Loss 1.201334 Objective Loss 1.201334 LR 0.000050 Time 0.531353 -2024-05-11 16:20:15,190 - Epoch: [89][ 500/ 813] Overall Loss 1.203177 Objective Loss 1.203177 LR 0.000050 Time 0.527961 -2024-05-11 16:21:09,822 - Epoch: [89][ 600/ 813] Overall Loss 1.202252 Objective Loss 1.202252 LR 0.000050 Time 0.531019 -2024-05-11 16:22:03,853 - Epoch: [89][ 700/ 813] Overall Loss 1.202862 Objective Loss 1.202862 LR 0.000050 Time 0.532344 -2024-05-11 16:22:55,336 - Epoch: [89][ 800/ 813] Overall Loss 1.205514 Objective Loss 1.205514 LR 0.000050 Time 0.530153 -2024-05-11 16:23:00,433 - Epoch: [89][ 813/ 813] Overall Loss 1.206238 Objective Loss 1.206238 LR 0.000050 Time 0.527944 -2024-05-11 16:23:00,459 - --- validate (epoch=89)----------- -2024-05-11 16:23:00,460 - 3250 samples (16 per mini-batch) -2024-05-11 16:23:00,461 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 16:23:56,757 - Epoch: [89][ 100/ 204] Loss 1.319365 mAP 0.877901 -2024-05-11 16:24:50,656 - Epoch: [89][ 200/ 204] Loss 1.314251 mAP 0.887094 -2024-05-11 16:24:51,686 - Epoch: [89][ 204/ 204] Loss 1.313558 mAP 0.887066 -2024-05-11 16:24:51,712 - ==> mAP: 0.88707 Loss: 1.314 - -2024-05-11 16:24:51,714 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 16:24:51,714 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 16:24:51,735 - - -2024-05-11 16:24:51,735 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 16:25:47,760 - Epoch: [90][ 100/ 813] Overall Loss 1.154420 Objective Loss 1.154420 LR 0.000050 Time 0.560233 -2024-05-11 16:26:42,807 - Epoch: [90][ 200/ 813] Overall Loss 1.177277 Objective Loss 1.177277 LR 0.000050 Time 0.555328 -2024-05-11 16:27:34,315 - Epoch: [90][ 300/ 813] Overall Loss 1.203931 Objective Loss 1.203931 LR 0.000050 Time 0.541905 -2024-05-11 16:28:25,117 - Epoch: [90][ 400/ 813] Overall Loss 1.203673 Objective Loss 1.203673 LR 0.000050 Time 0.533432 -2024-05-11 16:29:16,850 - Epoch: [90][ 500/ 813] Overall Loss 1.204741 Objective Loss 1.204741 LR 0.000050 Time 0.530208 -2024-05-11 16:30:10,028 - Epoch: [90][ 600/ 813] Overall Loss 1.207636 Objective Loss 1.207636 LR 0.000050 Time 0.530467 -2024-05-11 16:31:03,148 - Epoch: [90][ 700/ 813] Overall Loss 1.205842 Objective Loss 1.205842 LR 0.000050 Time 0.530570 -2024-05-11 16:31:57,383 - Epoch: [90][ 800/ 813] Overall Loss 1.206303 Objective Loss 1.206303 LR 0.000050 Time 0.532040 -2024-05-11 16:32:01,817 - Epoch: [90][ 813/ 813] Overall Loss 1.207463 Objective Loss 1.207463 LR 0.000050 Time 0.528986 -2024-05-11 16:32:01,843 - --- validate (epoch=90)----------- -2024-05-11 16:32:01,844 - 3250 samples (16 per mini-batch) -2024-05-11 16:32:01,845 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 16:32:55,666 - Epoch: [90][ 100/ 204] Loss 1.343115 mAP 0.888264 -2024-05-11 16:33:48,676 - Epoch: [90][ 200/ 204] Loss 1.339831 mAP 0.887935 -2024-05-11 16:33:49,389 - Epoch: [90][ 204/ 204] Loss 1.341392 mAP 0.887945 -2024-05-11 16:33:49,415 - ==> mAP: 0.88794 Loss: 1.341 - -2024-05-11 16:33:49,418 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 16:33:49,418 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 16:33:49,439 - - -2024-05-11 16:33:49,439 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 16:34:44,136 - Epoch: [91][ 100/ 813] Overall Loss 1.173123 Objective Loss 1.173123 LR 0.000050 Time 0.546950 -2024-05-11 16:35:38,006 - Epoch: [91][ 200/ 813] Overall Loss 1.187004 Objective Loss 1.187004 LR 0.000050 Time 0.542808 -2024-05-11 16:36:30,685 - Epoch: [91][ 300/ 813] Overall Loss 1.201482 Objective Loss 1.201482 LR 0.000050 Time 0.537464 -2024-05-11 16:37:21,500 - Epoch: [91][ 400/ 813] Overall Loss 1.205708 Objective Loss 1.205708 LR 0.000050 Time 0.530131 -2024-05-11 16:38:13,504 - Epoch: [91][ 500/ 813] Overall Loss 1.208972 Objective Loss 1.208972 LR 0.000050 Time 0.528074 -2024-05-11 16:39:07,760 - Epoch: [91][ 600/ 813] Overall Loss 1.212494 Objective Loss 1.212494 LR 0.000050 Time 0.530487 -2024-05-11 16:40:01,233 - Epoch: [91][ 700/ 813] Overall Loss 1.211593 Objective Loss 1.211593 LR 0.000050 Time 0.531089 -2024-05-11 16:40:54,111 - Epoch: [91][ 800/ 813] Overall Loss 1.208365 Objective Loss 1.208365 LR 0.000050 Time 0.530799 -2024-05-11 16:40:59,316 - Epoch: [91][ 813/ 813] Overall Loss 1.208444 Objective Loss 1.208444 LR 0.000050 Time 0.528710 -2024-05-11 16:40:59,343 - --- validate (epoch=91)----------- -2024-05-11 16:40:59,343 - 3250 samples (16 per mini-batch) -2024-05-11 16:40:59,345 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 16:41:57,198 - Epoch: [91][ 100/ 204] Loss 1.317091 mAP 0.889904 -2024-05-11 16:42:49,738 - Epoch: [91][ 200/ 204] Loss 1.322107 mAP 0.888555 -2024-05-11 16:42:50,440 - Epoch: [91][ 204/ 204] Loss 1.324953 mAP 0.888531 -2024-05-11 16:42:50,465 - ==> mAP: 0.88853 Loss: 1.325 - -2024-05-11 16:42:50,468 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 16:42:50,468 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 16:42:50,488 - - -2024-05-11 16:42:50,488 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 16:43:47,458 - Epoch: [92][ 100/ 813] Overall Loss 1.174642 Objective Loss 1.174642 LR 0.000050 Time 0.569670 -2024-05-11 16:44:40,728 - Epoch: [92][ 200/ 813] Overall Loss 1.186922 Objective Loss 1.186922 LR 0.000050 Time 0.551181 -2024-05-11 16:45:33,391 - Epoch: [92][ 300/ 813] Overall Loss 1.195932 Objective Loss 1.195932 LR 0.000050 Time 0.542984 -2024-05-11 16:46:23,181 - Epoch: [92][ 400/ 813] Overall Loss 1.205170 Objective Loss 1.205170 LR 0.000050 Time 0.531680 -2024-05-11 16:47:15,449 - Epoch: [92][ 500/ 813] Overall Loss 1.206545 Objective Loss 1.206545 LR 0.000050 Time 0.529876 -2024-05-11 16:48:10,868 - Epoch: [92][ 600/ 813] Overall Loss 1.208699 Objective Loss 1.208699 LR 0.000050 Time 0.533925 -2024-05-11 16:49:04,628 - Epoch: [92][ 700/ 813] Overall Loss 1.206441 Objective Loss 1.206441 LR 0.000050 Time 0.534448 -2024-05-11 16:49:55,198 - Epoch: [92][ 800/ 813] Overall Loss 1.206930 Objective Loss 1.206930 LR 0.000050 Time 0.530852 -2024-05-11 16:50:01,195 - Epoch: [92][ 813/ 813] Overall Loss 1.206542 Objective Loss 1.206542 LR 0.000050 Time 0.529730 -2024-05-11 16:50:01,222 - --- validate (epoch=92)----------- -2024-05-11 16:50:01,223 - 3250 samples (16 per mini-batch) -2024-05-11 16:50:01,224 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 16:50:55,101 - Epoch: [92][ 100/ 204] Loss 1.328760 mAP 0.886192 -2024-05-11 16:51:48,398 - Epoch: [92][ 200/ 204] Loss 1.322770 mAP 0.886326 -2024-05-11 16:51:49,336 - Epoch: [92][ 204/ 204] Loss 1.320581 mAP 0.886273 -2024-05-11 16:51:49,360 - ==> mAP: 0.88627 Loss: 1.321 - -2024-05-11 16:51:49,363 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 16:51:49,363 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 16:51:49,384 - - -2024-05-11 16:51:49,384 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 16:52:44,209 - Epoch: [93][ 100/ 813] Overall Loss 1.162784 Objective Loss 1.162784 LR 0.000050 Time 0.548232 -2024-05-11 16:53:37,808 - Epoch: [93][ 200/ 813] Overall Loss 1.187563 Objective Loss 1.187563 LR 0.000050 Time 0.542105 -2024-05-11 16:54:30,403 - Epoch: [93][ 300/ 813] Overall Loss 1.195633 Objective Loss 1.195633 LR 0.000050 Time 0.536709 -2024-05-11 16:55:21,137 - Epoch: [93][ 400/ 813] Overall Loss 1.194264 Objective Loss 1.194264 LR 0.000050 Time 0.529362 -2024-05-11 16:56:14,272 - Epoch: [93][ 500/ 813] Overall Loss 1.198621 Objective Loss 1.198621 LR 0.000050 Time 0.529758 -2024-05-11 16:57:08,323 - Epoch: [93][ 600/ 813] Overall Loss 1.196307 Objective Loss 1.196307 LR 0.000050 Time 0.531544 -2024-05-11 16:58:00,922 - Epoch: [93][ 700/ 813] Overall Loss 1.198749 Objective Loss 1.198749 LR 0.000050 Time 0.530747 -2024-05-11 16:58:53,908 - Epoch: [93][ 800/ 813] Overall Loss 1.199613 Objective Loss 1.199613 LR 0.000050 Time 0.530635 -2024-05-11 16:58:58,544 - Epoch: [93][ 813/ 813] Overall Loss 1.198608 Objective Loss 1.198608 LR 0.000050 Time 0.527851 -2024-05-11 16:58:58,569 - --- validate (epoch=93)----------- -2024-05-11 16:58:58,570 - 3250 samples (16 per mini-batch) -2024-05-11 16:58:58,571 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 16:59:54,463 - Epoch: [93][ 100/ 204] Loss 1.337958 mAP 0.877898 -2024-05-11 17:00:47,126 - Epoch: [93][ 200/ 204] Loss 1.320006 mAP 0.877421 -2024-05-11 17:00:47,860 - Epoch: [93][ 204/ 204] Loss 1.317807 mAP 0.877436 -2024-05-11 17:00:47,887 - ==> mAP: 0.87744 Loss: 1.318 - -2024-05-11 17:00:47,890 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 17:00:47,890 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 17:00:47,910 - - -2024-05-11 17:00:47,910 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 17:01:43,186 - Epoch: [94][ 100/ 813] Overall Loss 1.173577 Objective Loss 1.173577 LR 0.000050 Time 0.552736 -2024-05-11 17:02:37,341 - Epoch: [94][ 200/ 813] Overall Loss 1.179151 Objective Loss 1.179151 LR 0.000050 Time 0.547126 -2024-05-11 17:03:30,686 - Epoch: [94][ 300/ 813] Overall Loss 1.190808 Objective Loss 1.190808 LR 0.000050 Time 0.542543 -2024-05-11 17:04:21,027 - Epoch: [94][ 400/ 813] Overall Loss 1.196896 Objective Loss 1.196896 LR 0.000050 Time 0.532758 -2024-05-11 17:05:12,724 - Epoch: [94][ 500/ 813] Overall Loss 1.202362 Objective Loss 1.202362 LR 0.000050 Time 0.529589 -2024-05-11 17:06:06,986 - Epoch: [94][ 600/ 813] Overall Loss 1.205205 Objective Loss 1.205205 LR 0.000050 Time 0.531750 -2024-05-11 17:07:00,787 - Epoch: [94][ 700/ 813] Overall Loss 1.203906 Objective Loss 1.203906 LR 0.000050 Time 0.532642 -2024-05-11 17:07:52,326 - Epoch: [94][ 800/ 813] Overall Loss 1.203659 Objective Loss 1.203659 LR 0.000050 Time 0.530471 -2024-05-11 17:07:57,106 - Epoch: [94][ 813/ 813] Overall Loss 1.203927 Objective Loss 1.203927 LR 0.000050 Time 0.527867 -2024-05-11 17:07:57,131 - --- validate (epoch=94)----------- -2024-05-11 17:07:57,132 - 3250 samples (16 per mini-batch) -2024-05-11 17:07:57,133 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 17:08:51,967 - Epoch: [94][ 100/ 204] Loss 1.322303 mAP 0.884700 -2024-05-11 17:09:43,739 - Epoch: [94][ 200/ 204] Loss 1.325230 mAP 0.874688 -2024-05-11 17:09:44,715 - Epoch: [94][ 204/ 204] Loss 1.322777 mAP 0.884233 -2024-05-11 17:09:44,740 - ==> mAP: 0.88423 Loss: 1.323 - -2024-05-11 17:09:44,743 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78] -2024-05-11 17:09:44,743 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 17:09:44,764 - 
-
-2024-05-11 17:09:44,764 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 17:10:40,345 - Epoch: [95][ 100/ 813] Overall Loss 1.162028 Objective Loss 1.162028 LR 0.000050 Time 0.555784
-2024-05-11 17:11:34,709 - Epoch: [95][ 200/ 813] Overall Loss 1.174339 Objective Loss 1.174339 LR 0.000050 Time 0.549705
-2024-05-11 17:12:26,923 - Epoch: [95][ 300/ 813] Overall Loss 1.191310 Objective Loss 1.191310 LR 0.000050 Time 0.540512
-2024-05-11 17:13:16,752 - Epoch: [95][ 400/ 813] Overall Loss 1.198639 Objective Loss 1.198639 LR 0.000050 Time 0.529952
-2024-05-11 17:14:08,679 - Epoch: [95][ 500/ 813] Overall Loss 1.200417 Objective Loss 1.200417 LR 0.000050 Time 0.527801
-2024-05-11 17:15:02,302 - Epoch: [95][ 600/ 813] Overall Loss 1.201094 Objective Loss 1.201094 LR 0.000050 Time 0.529199
-2024-05-11 17:15:55,844 - Epoch: [95][ 700/ 813] Overall Loss 1.203281 Objective Loss 1.203281 LR 0.000050 Time 0.530085
-2024-05-11 17:16:47,844 - Epoch: [95][ 800/ 813] Overall Loss 1.204030 Objective Loss 1.204030 LR 0.000050 Time 0.528822
-2024-05-11 17:16:53,626 - Epoch: [95][ 813/ 813] Overall Loss 1.203283 Objective Loss 1.203283 LR 0.000050 Time 0.527478
-2024-05-11 17:16:53,653 - --- validate (epoch=95)-----------
-2024-05-11 17:16:53,653 - 3250 samples (16 per mini-batch)
-2024-05-11 17:16:53,655 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 17:17:48,584 - Epoch: [95][ 100/ 204] Loss 1.311038 mAP 0.898529
-2024-05-11 17:18:41,201 - Epoch: [95][ 200/ 204] Loss 1.328743 mAP 0.887708
-2024-05-11 17:18:41,410 - Epoch: [95][ 204/ 204] Loss 1.332446 mAP 0.878055
-2024-05-11 17:18:41,435 - ==> mAP: 0.87805 Loss: 1.332
-
-2024-05-11 17:18:41,439 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78]
-2024-05-11 17:18:41,439 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 17:18:41,460 - 
-
-2024-05-11 17:18:41,460 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 17:19:36,954 - Epoch: [96][ 100/ 813] Overall Loss 1.184778 Objective Loss 1.184778 LR 0.000050 Time 0.554919
-2024-05-11 17:20:30,542 - Epoch: [96][ 200/ 813] Overall Loss 1.198053 Objective Loss 1.198053 LR 0.000050 Time 0.545394
-2024-05-11 17:21:23,126 - Epoch: [96][ 300/ 813] Overall Loss 1.204325 Objective Loss 1.204325 LR 0.000050 Time 0.538870
-2024-05-11 17:22:13,523 - Epoch: [96][ 400/ 813] Overall Loss 1.212823 Objective Loss 1.212823 LR 0.000050 Time 0.530140
-2024-05-11 17:23:05,779 - Epoch: [96][ 500/ 813] Overall Loss 1.214435 Objective Loss 1.214435 LR 0.000050 Time 0.528616
-2024-05-11 17:24:00,552 - Epoch: [96][ 600/ 813] Overall Loss 1.212973 Objective Loss 1.212973 LR 0.000050 Time 0.531791
-2024-05-11 17:24:52,583 - Epoch: [96][ 700/ 813] Overall Loss 1.212083 Objective Loss 1.212083 LR 0.000050 Time 0.530148
-2024-05-11 17:25:44,962 - Epoch: [96][ 800/ 813] Overall Loss 1.207821 Objective Loss 1.207821 LR 0.000050 Time 0.529352
-2024-05-11 17:25:50,844 - Epoch: [96][ 813/ 813] Overall Loss 1.207481 Objective Loss 1.207481 LR 0.000050 Time 0.528122
-2024-05-11 17:25:50,872 - --- validate (epoch=96)-----------
-2024-05-11 17:25:50,872 - 3250 samples (16 per mini-batch)
-2024-05-11 17:25:50,873 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 17:26:45,406 - Epoch: [96][ 100/ 204] Loss 1.322343 mAP 0.865851
-2024-05-11 17:27:39,059 - Epoch: [96][ 200/ 204] Loss 1.333622 mAP 0.865970
-2024-05-11 17:27:39,625 - Epoch: [96][ 204/ 204] Loss 1.330877 mAP 0.866027
-2024-05-11 17:27:39,651 - ==> mAP: 0.86603 Loss: 1.331
-
-2024-05-11 17:27:39,654 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78]
-2024-05-11 17:27:39,654 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 17:27:39,674 - 
-
-2024-05-11 17:27:39,674 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 17:28:35,593 - Epoch: [97][ 100/ 813] Overall Loss 1.179474 Objective Loss 1.179474 LR 0.000050 Time 0.559163
-2024-05-11 17:29:28,526 - Epoch: [97][ 200/ 813] Overall Loss 1.192666 Objective Loss 1.192666 LR 0.000050 Time 0.544239
-2024-05-11 17:30:21,059 - Epoch: [97][ 300/ 813] Overall Loss 1.200050 Objective Loss 1.200050 LR 0.000050 Time 0.537932
-2024-05-11 17:31:12,112 - Epoch: [97][ 400/ 813] Overall Loss 1.200976 Objective Loss 1.200976 LR 0.000050 Time 0.531077
-2024-05-11 17:32:05,335 - Epoch: [97][ 500/ 813] Overall Loss 1.201096 Objective Loss 1.201096 LR 0.000050 Time 0.531305
-2024-05-11 17:32:57,913 - Epoch: [97][ 600/ 813] Overall Loss 1.202735 Objective Loss 1.202735 LR 0.000050 Time 0.530381
-2024-05-11 17:33:51,393 - Epoch: [97][ 700/ 813] Overall Loss 1.205336 Objective Loss 1.205336 LR 0.000050 Time 0.531011
-2024-05-11 17:34:43,771 - Epoch: [97][ 800/ 813] Overall Loss 1.206982 Objective Loss 1.206982 LR 0.000050 Time 0.530105
-2024-05-11 17:34:48,906 - Epoch: [97][ 813/ 813] Overall Loss 1.207396 Objective Loss 1.207396 LR 0.000050 Time 0.527939
-2024-05-11 17:34:48,932 - --- validate (epoch=97)-----------
-2024-05-11 17:34:48,933 - 3250 samples (16 per mini-batch)
-2024-05-11 17:34:48,935 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 17:35:43,253 - Epoch: [97][ 100/ 204] Loss 1.332658 mAP 0.887651
-2024-05-11 17:36:36,361 - Epoch: [97][ 200/ 204] Loss 1.339178 mAP 0.887082
-2024-05-11 17:36:36,883 - Epoch: [97][ 204/ 204] Loss 1.340324 mAP 0.887044
-2024-05-11 17:36:36,907 - ==> mAP: 0.88704 Loss: 1.340
-
-2024-05-11 17:36:36,910 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78]
-2024-05-11 17:36:36,910 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 17:36:36,930 - 
-
-2024-05-11 17:36:36,930 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 17:37:32,381 - Epoch: [98][ 100/ 813] Overall Loss 1.173657 Objective Loss 1.173657 LR 0.000050 Time 0.554492
-2024-05-11 17:38:26,343 - Epoch: [98][ 200/ 813] Overall Loss 1.187174 Objective Loss 1.187174 LR 0.000050 Time 0.547046
-2024-05-11 17:39:19,308 - Epoch: [98][ 300/ 813] Overall Loss 1.195180 Objective Loss 1.195180 LR 0.000050 Time 0.541241
-2024-05-11 17:40:11,044 - Epoch: [98][ 400/ 813] Overall Loss 1.200087 Objective Loss 1.200087 LR 0.000050 Time 0.535268
-2024-05-11 17:41:03,025 - Epoch: [98][ 500/ 813] Overall Loss 1.199572 Objective Loss 1.199572 LR 0.000050 Time 0.532172
-2024-05-11 17:41:57,387 - Epoch: [98][ 600/ 813] Overall Loss 1.197442 Objective Loss 1.197442 LR 0.000050 Time 0.534077
-2024-05-11 17:42:50,456 - Epoch: [98][ 700/ 813] Overall Loss 1.192632 Objective Loss 1.192632 LR 0.000050 Time 0.533586
-2024-05-11 17:43:42,143 - Epoch: [98][ 800/ 813] Overall Loss 1.191714 Objective Loss 1.191714 LR 0.000050 Time 0.531492
-2024-05-11 17:43:48,210 - Epoch: [98][ 813/ 813] Overall Loss 1.191933 Objective Loss 1.191933 LR 0.000050 Time 0.530449
-2024-05-11 17:43:48,237 - --- validate (epoch=98)-----------
-2024-05-11 17:43:48,237 - 3250 samples (16 per mini-batch)
-2024-05-11 17:43:48,238 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 17:44:43,411 - Epoch: [98][ 100/ 204] Loss 1.310049 mAP 0.878352
-2024-05-11 17:45:37,946 - Epoch: [98][ 200/ 204] Loss 1.314192 mAP 0.888830
-2024-05-11 17:45:38,147 - Epoch: [98][ 204/ 204] Loss 1.317138 mAP 0.888846
-2024-05-11 17:45:38,172 - ==> mAP: 0.88885 Loss: 1.317
-
-2024-05-11 17:45:38,178 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78]
-2024-05-11 17:45:38,178 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 17:45:38,201 - 
-
-2024-05-11 17:45:38,201 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 17:46:34,198 - Epoch: [99][ 100/ 813] Overall Loss 1.181133 Objective Loss 1.181133 LR 0.000050 Time 0.559947
-2024-05-11 17:47:27,523 - Epoch: [99][ 200/ 813] Overall Loss 1.186767 Objective Loss 1.186767 LR 0.000050 Time 0.546590
-2024-05-11 17:48:19,191 - Epoch: [99][ 300/ 813] Overall Loss 1.192807 Objective Loss 1.192807 LR 0.000050 Time 0.536614
-2024-05-11 17:49:09,785 - Epoch: [99][ 400/ 813] Overall Loss 1.198784 Objective Loss 1.198784 LR 0.000050 Time 0.528942
-2024-05-11 17:50:03,823 - Epoch: [99][ 500/ 813] Overall Loss 1.200953 Objective Loss 1.200953 LR 0.000050 Time 0.531219
-2024-05-11 17:50:56,264 - Epoch: [99][ 600/ 813] Overall Loss 1.200091 Objective Loss 1.200091 LR 0.000050 Time 0.530080
-2024-05-11 17:51:49,420 - Epoch: [99][ 700/ 813] Overall Loss 1.200071 Objective Loss 1.200071 LR 0.000050 Time 0.530290
-2024-05-11 17:52:41,699 - Epoch: [99][ 800/ 813] Overall Loss 1.199414 Objective Loss 1.199414 LR 0.000050 Time 0.529349
-2024-05-11 17:52:46,670 - Epoch: [99][ 813/ 813] Overall Loss 1.199720 Objective Loss 1.199720 LR 0.000050 Time 0.526999
-2024-05-11 17:52:46,696 - --- validate (epoch=99)-----------
-2024-05-11 17:52:46,696 - 3250 samples (16 per mini-batch)
-2024-05-11 17:52:46,697 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 17:53:42,380 - Epoch: [99][ 100/ 204] Loss 1.315643 mAP 0.887298
-2024-05-11 17:54:36,652 - Epoch: [99][ 200/ 204] Loss 1.324411 mAP 0.878024
-2024-05-11 17:54:37,250 - Epoch: [99][ 204/ 204] Loss 1.326370 mAP 0.878065
-2024-05-11 17:54:37,276 - ==> mAP: 0.87807 Loss: 1.326
-
-2024-05-11 17:54:37,278 - ==> Best [mAP: 0.889215 vloss: 1.328652 Params: 368350 on epoch: 78]
-2024-05-11 17:54:37,278 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 17:54:37,299 - 
-
-2024-05-11 17:54:37,299 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 17:55:32,475 - Epoch: [100][ 100/ 813] Overall Loss 1.151493 Objective Loss 1.151493 LR 0.000025 Time 0.551737
-2024-05-11 17:56:26,427 - Epoch: [100][ 200/ 813] Overall Loss 1.172084 Objective Loss 1.172084 LR 0.000025 Time 0.545590
-2024-05-11 17:57:18,030 - Epoch: [100][ 300/ 813] Overall Loss 1.183460 Objective Loss 1.183460 LR 0.000025 Time 0.535707
-2024-05-11 17:58:08,734 - Epoch: [100][ 400/ 813] Overall Loss 1.187757 Objective Loss 1.187757 LR 0.000025 Time 0.528535
-2024-05-11 17:59:00,770 - Epoch: [100][ 500/ 813] Overall Loss 1.190226 Objective Loss 1.190226 LR 0.000025 Time 0.526898
-2024-05-11 17:59:55,593 - Epoch: [100][ 600/ 813] Overall Loss 1.189986 Objective Loss 1.189986 LR 0.000025 Time 0.530446
-2024-05-11 18:00:48,006 - Epoch: [100][ 700/ 813] Overall Loss 1.186304 Objective Loss 1.186304 LR 0.000025 Time 0.529542
-2024-05-11 18:01:40,423 - Epoch: [100][ 800/ 813] Overall Loss 1.188468 Objective Loss 1.188468 LR 0.000025 Time 0.528868
-2024-05-11 18:01:46,151 - Epoch: [100][ 813/ 813] Overall Loss 1.187867 Objective Loss 1.187867 LR 0.000025 Time 0.527453
-2024-05-11 18:01:46,177 - --- validate (epoch=100)-----------
-2024-05-11 18:01:46,178 - 3250 samples (16 per mini-batch)
-2024-05-11 18:01:46,179 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 18:02:40,896 - Epoch: [100][ 100/ 204] Loss 1.303083 mAP 0.885254
-2024-05-11 18:03:35,550 - Epoch: [100][ 200/ 204] Loss 1.298604 mAP 0.895506
-2024-05-11 18:03:36,059 - Epoch: [100][ 204/ 204] Loss 1.301162 mAP 0.895510
-2024-05-11 18:03:36,085 - ==> mAP: 0.89551 Loss: 1.301
-
-2024-05-11 18:03:36,088 - ==> Best [mAP: 0.895510 vloss: 1.301162 Params: 368352 on epoch: 100]
-2024-05-11 18:03:36,088 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 18:03:36,112 - 
-
-2024-05-11 18:03:36,112 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 18:04:31,021 - Epoch: [101][ 100/ 813] Overall Loss 1.169231 Objective Loss 1.169231 LR 0.000025 Time 0.549066
-2024-05-11 18:05:25,729 - Epoch: [101][ 200/ 813] Overall Loss 1.172525 Objective Loss 1.172525 LR 0.000025 Time 0.548049
-2024-05-11 18:06:17,388 - Epoch: [101][ 300/ 813] Overall Loss 1.181362 Objective Loss 1.181362 LR 0.000025 Time 0.537559
-2024-05-11 18:07:07,626 - Epoch: [101][ 400/ 813] Overall Loss 1.185232 Objective Loss 1.185232 LR 0.000025 Time 0.528756
-2024-05-11 18:08:02,526 - Epoch: [101][ 500/ 813] Overall Loss 1.185834 Objective Loss 1.185834 LR 0.000025 Time 0.532800
-2024-05-11 18:08:55,824 - Epoch: [101][ 600/ 813] Overall Loss 1.189182 Objective Loss 1.189182 LR 0.000025 Time 0.532829
-2024-05-11 18:09:48,084 - Epoch: [101][ 700/ 813] Overall Loss 1.188391 Objective Loss 1.188391 LR 0.000025 Time 0.531365
-2024-05-11 18:10:42,135 - Epoch: [101][ 800/ 813] Overall Loss 1.187143 Objective Loss 1.187143 LR 0.000025 Time 0.532505
-2024-05-11 18:10:47,162 - Epoch: [101][ 813/ 813] Overall Loss 1.186477 Objective Loss 1.186477 LR 0.000025 Time 0.530172
-2024-05-11 18:10:47,188 - --- validate (epoch=101)-----------
-2024-05-11 18:10:47,189 - 3250 samples (16 per mini-batch)
-2024-05-11 18:10:47,190 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 18:11:42,209 - Epoch: [101][ 100/ 204] Loss 1.319003 mAP 0.900015
-2024-05-11 18:12:36,728 - Epoch: [101][ 200/ 204] Loss 1.326255 mAP 0.889745
-2024-05-11 18:12:37,066 - Epoch: [101][ 204/ 204] Loss 1.325703 mAP 0.889720
-2024-05-11 18:12:37,091 - ==> mAP: 0.88972 Loss: 1.326
-
-2024-05-11 18:12:37,095 - ==> Best [mAP: 0.895510 vloss: 1.301162 Params: 368352 on epoch: 100]
-2024-05-11 18:12:37,095 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 18:12:37,115 - 
-
-2024-05-11 18:12:37,116 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 18:13:33,091 - Epoch: [102][ 100/ 813] Overall Loss 1.159317 Objective Loss 1.159317 LR 0.000025 Time 0.559729
-2024-05-11 18:14:27,006 - Epoch: [102][ 200/ 813] Overall Loss 1.161218 Objective Loss 1.161218 LR 0.000025 Time 0.549435
-2024-05-11 18:15:19,781 - Epoch: [102][ 300/ 813] Overall Loss 1.169438 Objective Loss 1.169438 LR 0.000025 Time 0.542203
-2024-05-11 18:16:10,228 - Epoch: [102][ 400/ 813] Overall Loss 1.177074 Objective Loss 1.177074 LR 0.000025 Time 0.532765
-2024-05-11 18:17:03,036 - Epoch: [102][ 500/ 813] Overall Loss 1.183760 Objective Loss 1.183760 LR 0.000025 Time 0.531824
-2024-05-11 18:17:55,953 - Epoch: [102][ 600/ 813] Overall Loss 1.188556 Objective Loss 1.188556 LR 0.000025 Time 0.531376
-2024-05-11 18:18:49,306 - Epoch: [102][ 700/ 813] Overall Loss 1.186690 Objective Loss 1.186690 LR 0.000025 Time 0.531682
-2024-05-11 18:19:42,139 - Epoch: [102][ 800/ 813] Overall Loss 1.187025 Objective Loss 1.187025 LR 0.000025 Time 0.531250
-2024-05-11 18:19:47,307 - Epoch: [102][ 813/ 813] Overall Loss 1.187631 Objective Loss 1.187631 LR 0.000025 Time 0.529106
-2024-05-11 18:19:47,333 - --- validate (epoch=102)-----------
-2024-05-11 18:19:47,333 - 3250 samples (16 per mini-batch)
-2024-05-11 18:19:47,334 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 18:20:41,714 - Epoch: [102][ 100/ 204] Loss 1.299103 mAP 0.877894
-2024-05-11 18:21:35,342 - Epoch: [102][ 200/ 204] Loss 1.314396 mAP 0.867662
-2024-05-11 18:21:35,809 - Epoch: [102][ 204/ 204] Loss 1.313559 mAP 0.877331
-2024-05-11 18:21:35,835 - ==> mAP: 0.87733 Loss: 1.314
-
-2024-05-11 18:21:35,838 - ==> Best [mAP: 0.895510 vloss: 1.301162 Params: 368352 on epoch: 100]
-2024-05-11 18:21:35,838 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 18:21:35,859 - 
-
-2024-05-11 18:21:35,860 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 18:22:30,971 - Epoch: [103][ 100/ 813] Overall Loss 1.172751 Objective Loss 1.172751 LR 0.000025 Time 0.551087
-2024-05-11 18:23:24,770 - Epoch: [103][ 200/ 813] Overall Loss 1.188434 Objective Loss 1.188434 LR 0.000025 Time 0.544533
-2024-05-11 18:24:16,702 - Epoch: [103][ 300/ 813] Overall Loss 1.197575 Objective Loss 1.197575 LR 0.000025 Time 0.536123
-2024-05-11 18:25:07,884 - Epoch: [103][ 400/ 813] Overall Loss 1.199213 Objective Loss 1.199213 LR 0.000025 Time 0.530042
-2024-05-11 18:26:00,856 - Epoch: [103][ 500/ 813] Overall Loss 1.199561 Objective Loss 1.199561 LR 0.000025 Time 0.529975
-2024-05-11 18:26:53,973 - Epoch: [103][ 600/ 813] Overall Loss 1.198278 Objective Loss 1.198278 LR 0.000025 Time 0.530171
-2024-05-11 18:27:48,023 - Epoch: [103][ 700/ 813] Overall Loss 1.197489 Objective Loss 1.197489 LR 0.000025 Time 0.531645
-2024-05-11 18:28:40,576 - Epoch: [103][ 800/ 813] Overall Loss 1.192941 Objective Loss 1.192941 LR 0.000025 Time 0.530875
-2024-05-11 18:28:46,138 - Epoch: [103][ 813/ 813] Overall Loss 1.192461 Objective Loss 1.192461 LR 0.000025 Time 0.529227
-2024-05-11 18:28:46,165 - --- validate (epoch=103)-----------
-2024-05-11 18:28:46,165 - 3250 samples (16 per mini-batch)
-2024-05-11 18:28:46,166 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 18:29:40,447 - Epoch: [103][ 100/ 204] Loss 1.295076 mAP 0.875355
-2024-05-11 18:30:32,845 - Epoch: [103][ 200/ 204] Loss 1.298781 mAP 0.876381
-2024-05-11 18:30:33,708 - Epoch: [103][ 204/ 204] Loss 1.301472 mAP 0.876439
-2024-05-11 18:30:33,734 - ==> mAP: 0.87644 Loss: 1.301
-
-2024-05-11 18:30:33,737 - ==> Best [mAP: 0.895510 vloss: 1.301162 Params: 368352 on epoch: 100]
-2024-05-11 18:30:33,738 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 18:30:33,758 - 
-
-2024-05-11 18:30:33,758 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 18:31:28,391 - Epoch: [104][ 100/ 813] Overall Loss 1.186891 Objective Loss 1.186891 LR 0.000025 Time 0.546308
-2024-05-11 18:32:22,774 - Epoch: [104][ 200/ 813] Overall Loss 1.189089 Objective Loss 1.189089 LR 0.000025 Time 0.545063
-2024-05-11 18:33:15,365 - Epoch: [104][ 300/ 813] Overall Loss 1.195770 Objective Loss 1.195770 LR 0.000025 Time 0.538668
-2024-05-11 18:34:06,360 - Epoch: [104][ 400/ 813] Overall Loss 1.199649 Objective Loss 1.199649 LR 0.000025 Time 0.531483
-2024-05-11 18:34:58,270 - Epoch: [104][ 500/ 813] Overall Loss 1.199362 Objective Loss 1.199362 LR 0.000025 Time 0.528993
-2024-05-11 18:35:52,588 - Epoch: [104][ 600/ 813] Overall Loss 1.198436 Objective Loss 1.198436 LR 0.000025 Time 0.531353
-2024-05-11 18:36:45,016 - Epoch: [104][ 700/ 813] Overall Loss 1.196388 Objective Loss 1.196388 LR 0.000025 Time 0.530333
-2024-05-11 18:37:38,159 - Epoch: [104][ 800/ 813] Overall Loss 1.193700 Objective Loss 1.193700 LR 0.000025 Time 0.530465
-2024-05-11 18:37:43,170 - Epoch: [104][ 813/ 813] Overall Loss 1.193644 Objective Loss 1.193644 LR 0.000025 Time 0.528143
-2024-05-11 18:37:43,197 - --- validate (epoch=104)-----------
-2024-05-11 18:37:43,197 - 3250 samples (16 per mini-batch)
-2024-05-11 18:37:43,199 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 18:38:39,816 - Epoch: [104][ 100/ 204] Loss 1.329669 mAP 0.876163
-2024-05-11 18:39:33,095 - Epoch: [104][ 200/ 204] Loss 1.315086 mAP 0.876400
-2024-05-11 18:39:33,365 - Epoch: [104][ 204/ 204] Loss 1.324274 mAP 0.876481
-2024-05-11 18:39:33,391 - ==> mAP: 0.87648 Loss: 1.324
-
-2024-05-11 18:39:33,394 - ==> Best [mAP: 0.895510 vloss: 1.301162 Params: 368352 on epoch: 100]
-2024-05-11 18:39:33,394 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 18:39:33,414 - 
-
-2024-05-11 18:39:33,414 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 18:40:28,006 - Epoch: [105][ 100/ 813] Overall Loss 1.145603 Objective Loss 1.145603 LR 0.000025 Time 0.545901
-2024-05-11 18:41:21,900 - Epoch: [105][ 200/ 813] Overall Loss 1.165776 Objective Loss 1.165776 LR 0.000025 Time 0.542411
-2024-05-11 18:42:15,087 - Epoch: [105][ 300/ 813] Overall Loss 1.179838 Objective Loss 1.179838 LR 0.000025 Time 0.538892
-2024-05-11 18:43:05,468 - Epoch: [105][ 400/ 813] Overall Loss 1.180040 Objective Loss 1.180040 LR 0.000025 Time 0.530118
-2024-05-11 18:43:57,554 - Epoch: [105][ 500/ 813] Overall Loss 1.182381 Objective Loss 1.182381 LR 0.000025 Time 0.528261
-2024-05-11 18:44:51,963 - Epoch: [105][ 600/ 813] Overall Loss 1.183832 Objective Loss 1.183832 LR 0.000025 Time 0.530892
-2024-05-11 18:45:42,638 - Epoch: [105][ 700/ 813] Overall Loss 1.186062 Objective Loss 1.186062 LR 0.000025 Time 0.527440
-2024-05-11 18:46:33,931 - Epoch: [105][ 800/ 813] Overall Loss 1.186729 Objective Loss 1.186729 LR 0.000025 Time 0.525625
-2024-05-11 18:46:39,648 - Epoch: [105][ 813/ 813] Overall Loss 1.186472 Objective Loss 1.186472 LR 0.000025 Time 0.524252
-2024-05-11 18:46:39,674 - --- validate (epoch=105)-----------
-2024-05-11 18:46:39,675 - 3250 samples (16 per mini-batch)
-2024-05-11 18:46:39,676 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 18:47:34,451 - Epoch: [105][ 100/ 204] Loss 1.323958 mAP 0.869468
-2024-05-11 18:48:27,296 - Epoch: [105][ 200/ 204] Loss 1.313580 mAP 0.878507
-2024-05-11 18:48:27,526 - Epoch: [105][ 204/ 204] Loss 1.312312 mAP 0.878529
-2024-05-11 18:48:27,551 - ==> mAP: 0.87853 Loss: 1.312
-
-2024-05-11 18:48:27,555 - ==> Best [mAP: 0.895510 vloss: 1.301162 Params: 368352 on epoch: 100]
-2024-05-11 18:48:27,555 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 18:48:27,576 - 
-
-2024-05-11 18:48:27,576 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 18:49:24,239 - Epoch: [106][ 100/ 813] Overall Loss 1.164907 Objective Loss 1.164907 LR 0.000025 Time 0.566612
-2024-05-11 18:50:17,389 - Epoch: [106][ 200/ 813] Overall Loss 1.164369 Objective Loss 1.164369 LR 0.000025 Time 0.549034
-2024-05-11 18:51:10,184 - Epoch: [106][ 300/ 813] Overall Loss 1.175515 Objective Loss 1.175515 LR 0.000025 Time 0.541992
-2024-05-11 18:52:01,326 - Epoch: [106][ 400/ 813] Overall Loss 1.187882 Objective Loss 1.187882 LR 0.000025 Time 0.534326
-2024-05-11 18:52:53,942 - Epoch: [106][ 500/ 813] Overall Loss 1.196992 Objective Loss 1.196992 LR 0.000025 Time 0.532689
-2024-05-11 18:53:47,588 - Epoch: [106][ 600/ 813] Overall Loss 1.197932 Objective Loss 1.197932 LR 0.000025 Time 0.533311
-2024-05-11 18:54:42,001 - Epoch: [106][ 700/ 813] Overall Loss 1.195343 Objective Loss 1.195343 LR 0.000025 Time 0.534851
-2024-05-11 18:55:34,373 - Epoch: [106][ 800/ 813] Overall Loss 1.193077 Objective Loss 1.193077 LR 0.000025 Time 0.533458
-2024-05-11 18:55:39,082 - Epoch: [106][ 813/ 813] Overall Loss 1.193379 Objective Loss 1.193379 LR 0.000025 Time 0.530719
-2024-05-11 18:55:39,108 - --- validate (epoch=106)-----------
-2024-05-11 18:55:39,109 - 3250 samples (16 per mini-batch)
-2024-05-11 18:55:39,110 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 18:56:33,884 - Epoch: [106][ 100/ 204] Loss 1.301972 mAP 0.887486
-2024-05-11 18:57:25,472 - Epoch: [106][ 200/ 204] Loss 1.306425 mAP 0.877549
-2024-05-11 18:57:26,229 - Epoch: [106][ 204/ 204] Loss 1.305302 mAP 0.877557
-2024-05-11 18:57:26,255 - ==> mAP: 0.87756 Loss: 1.305
-
-2024-05-11 18:57:26,259 - ==> Best [mAP: 0.895510 vloss: 1.301162 Params: 368352 on epoch: 100]
-2024-05-11 18:57:26,259 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 18:57:26,279 - 
-
-2024-05-11 18:57:26,279 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 18:58:21,411 - Epoch: [107][ 100/ 813] Overall Loss 1.163039 Objective Loss 1.163039 LR 0.000025 Time 0.551298
-2024-05-11 18:59:16,022 - Epoch: [107][ 200/ 813] Overall Loss 1.168299 Objective Loss 1.168299 LR 0.000025 Time 0.548685
-2024-05-11 19:00:08,993 - Epoch: [107][ 300/ 813] Overall Loss 1.178970 Objective Loss 1.178970 LR 0.000025 Time 0.542335
-2024-05-11 19:01:00,666 - Epoch: [107][ 400/ 813] Overall Loss 1.183689 Objective Loss 1.183689 LR 0.000025 Time 0.535924
-2024-05-11 19:01:49,979 - Epoch: [107][ 500/ 813] Overall Loss 1.188060 Objective Loss 1.188060 LR 0.000025 Time 0.527362
-2024-05-11 19:02:44,396 - Epoch: [107][ 600/ 813] Overall Loss 1.188050 Objective Loss 1.188050 LR 0.000025 Time 0.530157
-2024-05-11 19:03:37,474 - Epoch: [107][ 700/ 813] Overall Loss 1.191556 Objective Loss 1.191556 LR 0.000025 Time 0.530243
-2024-05-11 19:04:30,277 - Epoch: [107][ 800/ 813] Overall Loss 1.191135 Objective Loss 1.191135 LR 0.000025 Time 0.529964
-2024-05-11 19:04:35,943 - Epoch: [107][ 813/ 813] Overall Loss 1.191398 Objective Loss 1.191398 LR 0.000025 Time 0.528458
-2024-05-11 19:04:35,968 - --- validate (epoch=107)-----------
-2024-05-11 19:04:35,969 - 3250 samples (16 per mini-batch)
-2024-05-11 19:04:35,970 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 19:05:31,512 - Epoch: [107][ 100/ 204] Loss 1.319705 mAP 0.888966
-2024-05-11 19:06:24,780 - Epoch: [107][ 200/ 204] Loss 1.320565 mAP 0.878220
-2024-05-11 19:06:25,125 - Epoch: [107][ 204/ 204] Loss 1.318971 mAP 0.878237
-2024-05-11 19:06:25,151 - ==> mAP: 0.87824 Loss: 1.319
-
-2024-05-11 19:06:25,154 - ==> Best [mAP: 0.895510 vloss: 1.301162 Params: 368352 on epoch: 100]
-2024-05-11 19:06:25,154 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 19:06:25,174 - 
-
-2024-05-11 19:06:25,174 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 19:07:20,561 - Epoch: [108][ 100/ 813] Overall Loss 1.139496 Objective Loss 1.139496 LR 0.000025 Time 0.553849
-2024-05-11 19:08:12,402 - Epoch: [108][ 200/ 813] Overall Loss 1.165568 Objective Loss 1.165568 LR 0.000025 Time 0.536094
-2024-05-11 19:09:04,851 - Epoch: [108][ 300/ 813] Overall Loss 1.179505 Objective Loss 1.179505 LR 0.000025 Time 0.532221
-2024-05-11 19:09:55,888 - Epoch: [108][ 400/ 813] Overall Loss 1.190008 Objective Loss 1.190008 LR 0.000025 Time 0.526753
-2024-05-11 19:10:49,165 - Epoch: [108][ 500/ 813] Overall Loss 1.192873 Objective Loss 1.192873 LR 0.000025 Time 0.527953
-2024-05-11 19:11:43,576 - Epoch: [108][ 600/ 813] Overall Loss 1.193769 Objective Loss 1.193769 LR 0.000025 Time 0.530634
-2024-05-11 19:12:34,720 - Epoch: [108][ 700/ 813] Overall Loss 1.190909 Objective Loss 1.190909 LR 0.000025 Time 0.527888
-2024-05-11 19:13:28,663 - Epoch: [108][ 800/ 813] Overall Loss 1.190835 Objective Loss 1.190835 LR 0.000025 Time 0.529330
-2024-05-11 19:13:33,839 - Epoch: [108][ 813/ 813] Overall Loss 1.190898 Objective Loss 1.190898 LR 0.000025 Time 0.527228
-2024-05-11 19:13:33,875 - --- validate (epoch=108)-----------
-2024-05-11 19:13:33,876 - 3250 samples (16 per mini-batch)
-2024-05-11 19:13:33,877 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 19:14:28,492 - Epoch: [108][ 100/ 204] Loss 1.300695 mAP 0.886436
-2024-05-11 19:15:21,033 - Epoch: [108][ 200/ 204] Loss 1.312264 mAP 0.886653
-2024-05-11 19:15:21,564 - Epoch: [108][ 204/ 204] Loss 1.311596 mAP 0.886654
-2024-05-11 19:15:21,591 - ==> mAP: 0.88665 Loss: 1.312
-
-2024-05-11 19:15:21,594 - ==> Best [mAP: 0.895510 vloss: 1.301162 Params: 368352 on epoch: 100]
-2024-05-11 19:15:21,594 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 19:15:21,614 - 
-
-2024-05-11 19:15:21,614 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 19:16:17,904 - Epoch: [109][ 100/ 813] Overall Loss 1.148394 Objective Loss 1.148394 LR 0.000025 Time 0.562876
-2024-05-11 19:17:10,301 - Epoch: [109][ 200/ 813] Overall Loss 1.158896 Objective Loss 1.158896 LR 0.000025 Time 0.543415
-2024-05-11 19:18:02,972 - Epoch: [109][ 300/ 813] Overall Loss 1.182643 Objective Loss 1.182643 LR 0.000025 Time 0.537842
-2024-05-11 19:18:54,124 - Epoch: [109][ 400/ 813] Overall Loss 1.185959 Objective Loss 1.185959 LR 0.000025 Time 0.531256
-2024-05-11 19:19:46,580 - Epoch: [109][ 500/ 813] Overall Loss 1.188613 Objective Loss 1.188613 LR 0.000025 Time 0.529914
-2024-05-11 19:20:40,282 - Epoch: [109][ 600/ 813] Overall Loss 1.186210 Objective Loss 1.186210 LR 0.000025 Time 0.531097
-2024-05-11 19:21:32,970 - Epoch: [109][ 700/ 813] Overall Loss 1.187380 Objective Loss 1.187380 LR 0.000025 Time 0.530492
-2024-05-11 19:22:25,461 - Epoch: [109][ 800/ 813] Overall Loss 1.188303 Objective Loss 1.188303 LR 0.000025 Time 0.529793
-2024-05-11 19:22:30,153 - Epoch: [109][ 813/ 813] Overall Loss 1.188768 Objective Loss 1.188768 LR 0.000025 Time 0.527076
-2024-05-11 19:22:30,180 - --- validate (epoch=109)-----------
-2024-05-11 19:22:30,181 - 3250 samples (16 per mini-batch)
-2024-05-11 19:22:30,182 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 19:23:25,181 - Epoch: [109][ 100/ 204] Loss 1.310516 mAP 0.889519
-2024-05-11 19:24:19,041 - Epoch: [109][ 200/ 204] Loss 1.308507 mAP 0.898508
-2024-05-11 19:24:19,445 - Epoch: [109][ 204/ 204] Loss 1.308378 mAP 0.898425
-2024-05-11 19:24:19,470 - ==> mAP: 0.89842 Loss: 1.308
-
-2024-05-11 19:24:19,473 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109]
-2024-05-11 19:24:19,473 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 19:24:19,497 - 
-
-2024-05-11 19:24:19,497 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 19:25:15,106 - Epoch: [110][ 100/ 813] Overall Loss 1.172711 Objective Loss 1.172711 LR 0.000025 Time 0.556065
-2024-05-11 19:26:09,029 - Epoch: [110][ 200/ 813] Overall Loss 1.173101 Objective Loss 1.173101 LR 0.000025 Time 0.547601
-2024-05-11 19:27:02,359 - Epoch: [110][ 300/ 813] Overall Loss 1.181867 Objective Loss 1.181867 LR 0.000025 Time 0.542826
-2024-05-11 19:27:51,934 - Epoch: [110][ 400/ 813] Overall Loss 1.186648 Objective Loss 1.186648 LR 0.000025 Time 0.531047
-2024-05-11 19:28:44,553 - Epoch: [110][ 500/ 813] Overall Loss 1.193272 Objective Loss 1.193272 LR 0.000025 Time 0.530073
-2024-05-11 19:29:37,451 - Epoch: [110][ 600/ 813] Overall Loss 1.195689 Objective Loss 1.195689 LR 0.000025 Time 0.529889
-2024-05-11 19:30:30,047 - Epoch: [110][ 700/ 813] Overall Loss 1.193173 Objective Loss 1.193173 LR 0.000025 Time 0.529323
-2024-05-11 19:31:22,654 - Epoch: [110][ 800/ 813] Overall Loss 1.191128 Objective Loss 1.191128 LR 0.000025 Time 0.528914
-2024-05-11 19:31:28,140 - Epoch: [110][ 813/ 813] Overall Loss 1.190955 Objective Loss 1.190955 LR 0.000025 Time 0.527204
-2024-05-11 19:31:28,170 - --- validate (epoch=110)-----------
-2024-05-11 19:31:28,170 - 3250 samples (16 per mini-batch)
-2024-05-11 19:31:28,172 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 19:32:21,789 - Epoch: [110][ 100/ 204] Loss 1.300126 mAP 0.886520
-2024-05-11 19:33:15,947 - Epoch: [110][ 200/ 204] Loss 1.307943 mAP 0.885796
-2024-05-11 19:33:16,631 - Epoch: [110][ 204/ 204] Loss 1.305984 mAP 0.885723
-2024-05-11 19:33:16,656 - ==> mAP: 0.88572 Loss: 1.306
-
-2024-05-11 19:33:16,659 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109]
-2024-05-11 19:33:16,659 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar
-2024-05-11 19:33:16,679 - 
-
-2024-05-11 19:33:16,679 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
-2024-05-11 19:34:12,343 - Epoch: [111][ 100/ 813] Overall Loss 1.135320 Objective Loss 1.135320 LR 0.000025 Time 0.556614
-2024-05-11 19:35:05,727 - Epoch: [111][ 200/ 813] Overall Loss 1.150075 Objective Loss 1.150075 LR 0.000025 Time 0.545132
-2024-05-11 19:35:59,317 - Epoch: [111][ 300/ 813] Overall Loss 1.174232 Objective Loss 1.174232 LR 0.000025 Time 0.542045
-2024-05-11 19:36:49,361 - Epoch: [111][ 400/ 813] Overall Loss 1.179434 Objective Loss 1.179434 LR 0.000025 Time 0.531635
-2024-05-11 19:37:42,501 - Epoch: [111][ 500/ 813] Overall Loss 1.178404 Objective Loss 1.178404 LR 0.000025 Time 0.531584
-2024-05-11 19:38:37,313 - Epoch: [111][ 600/ 813] Overall Loss 1.180252 Objective Loss 1.180252 LR 0.000025 Time 0.534332
-2024-05-11 19:39:29,777 - Epoch: [111][ 700/ 813] Overall Loss 1.182091 Objective Loss 1.182091 LR 0.000025 Time 0.532934
-2024-05-11 19:40:20,892 - Epoch: [111][ 800/ 813] Overall Loss 1.185811 Objective Loss 1.185811 LR 0.000025 Time 0.530207
-2024-05-11 19:40:26,968 - Epoch: [111][ 813/ 813] Overall Loss 1.186703 Objective Loss 1.186703 LR 0.000025 Time 0.529202
-2024-05-11 19:40:27,011 - --- validate (epoch=111)-----------
-2024-05-11 19:40:27,012 - 3250 samples (16 per mini-batch)
-2024-05-11 19:40:27,013 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
-2024-05-11 19:41:22,874 - Epoch: [111][ 100/ 204] Loss 1.331285 mAP 0.867051
-2024-05-11 19:42:15,666 - Epoch: [111][ 200/ 204] Loss 1.332637 mAP 0.866696
-2024-05-11 19:42:16,224 - Epoch: [111][ 204/ 204] Loss 1.334194 mAP 0.866715
-2024-05-11 19:42:16,249 - ==> mAP: 0.86671 Loss: 1.334
-
-2024-05-11 19:42:16,252 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109]
-2024-05-11 19:42:16,252 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 19:42:16,273 - - -2024-05-11 19:42:16,273 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 19:43:11,675 - Epoch: [112][ 100/ 813] Overall Loss 1.159489 Objective Loss 1.159489 LR 0.000025 Time 0.553997 -2024-05-11 19:44:04,944 - Epoch: [112][ 200/ 813] Overall Loss 1.176461 Objective Loss 1.176461 LR 0.000025 Time 0.543337 -2024-05-11 19:44:57,538 - Epoch: [112][ 300/ 813] Overall Loss 1.186368 Objective Loss 1.186368 LR 0.000025 Time 0.537533 -2024-05-11 19:45:48,937 - Epoch: [112][ 400/ 813] Overall Loss 1.191178 Objective Loss 1.191178 LR 0.000025 Time 0.531644 -2024-05-11 19:46:41,296 - Epoch: [112][ 500/ 813] Overall Loss 1.189073 Objective Loss 1.189073 LR 0.000025 Time 0.530030 -2024-05-11 19:47:35,488 - Epoch: [112][ 600/ 813] Overall Loss 1.194708 Objective Loss 1.194708 LR 0.000025 Time 0.532008 -2024-05-11 19:48:28,687 - Epoch: [112][ 700/ 813] Overall Loss 1.189654 Objective Loss 1.189654 LR 0.000025 Time 0.532004 -2024-05-11 19:49:20,617 - Epoch: [112][ 800/ 813] Overall Loss 1.189747 Objective Loss 1.189747 LR 0.000025 Time 0.530414 -2024-05-11 19:49:25,844 - Epoch: [112][ 813/ 813] Overall Loss 1.189493 Objective Loss 1.189493 LR 0.000025 Time 0.528362 -2024-05-11 19:49:25,870 - --- validate (epoch=112)----------- -2024-05-11 19:49:25,871 - 3250 samples (16 per mini-batch) -2024-05-11 19:49:25,872 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 19:50:22,104 - Epoch: [112][ 100/ 204] Loss 1.323834 mAP 0.874652 -2024-05-11 19:51:14,139 - Epoch: [112][ 200/ 204] Loss 1.310892 mAP 0.876587 -2024-05-11 19:51:14,745 - Epoch: [112][ 204/ 204] Loss 1.313375 mAP 0.876658 -2024-05-11 19:51:14,771 - ==> mAP: 0.87666 Loss: 1.313 - -2024-05-11 19:51:14,774 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 19:51:14,774 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 19:51:14,794 - - -2024-05-11 19:51:14,794 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 19:52:10,255 - Epoch: [113][ 100/ 813] Overall Loss 1.144859 Objective Loss 1.144859 LR 0.000025 Time 0.554584 -2024-05-11 19:53:03,952 - Epoch: [113][ 200/ 813] Overall Loss 1.158736 Objective Loss 1.158736 LR 0.000025 Time 0.545769 -2024-05-11 19:53:56,210 - Epoch: [113][ 300/ 813] Overall Loss 1.173477 Objective Loss 1.173477 LR 0.000025 Time 0.538037 -2024-05-11 19:54:46,253 - Epoch: [113][ 400/ 813] Overall Loss 1.176418 Objective Loss 1.176418 LR 0.000025 Time 0.528624 -2024-05-11 19:55:38,907 - Epoch: [113][ 500/ 813] Overall Loss 1.180005 Objective Loss 1.180005 LR 0.000025 Time 0.528205 -2024-05-11 19:56:33,052 - Epoch: [113][ 600/ 813] Overall Loss 1.182683 Objective Loss 1.182683 LR 0.000025 Time 0.530410 -2024-05-11 19:57:24,360 - Epoch: [113][ 700/ 813] Overall Loss 1.184027 Objective Loss 1.184027 LR 0.000025 Time 0.527925 -2024-05-11 19:58:17,703 - Epoch: [113][ 800/ 813] Overall Loss 1.187102 Objective Loss 1.187102 LR 0.000025 Time 0.528612 -2024-05-11 19:58:22,995 - Epoch: [113][ 813/ 813] Overall Loss 1.187746 Objective Loss 1.187746 LR 0.000025 Time 0.526668 -2024-05-11 19:58:23,030 - --- validate (epoch=113)----------- -2024-05-11 19:58:23,031 - 3250 samples (16 per mini-batch) -2024-05-11 19:58:23,032 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 19:59:20,582 - Epoch: [113][ 100/ 204] Loss 1.297524 mAP 0.889329 -2024-05-11 20:00:14,545 - Epoch: [113][ 200/ 204] Loss 1.314668 mAP 0.877791 -2024-05-11 20:00:14,922 - Epoch: [113][ 204/ 204] Loss 1.312675 mAP 0.877780 -2024-05-11 20:00:14,948 - ==> mAP: 0.87778 Loss: 1.313 - -2024-05-11 20:00:14,951 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 20:00:14,951 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 20:00:14,971 - - -2024-05-11 20:00:14,971 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 20:01:10,023 - Epoch: [114][ 100/ 813] Overall Loss 1.168670 Objective Loss 1.168670 LR 0.000025 Time 0.550494 -2024-05-11 20:02:03,343 - Epoch: [114][ 200/ 813] Overall Loss 1.172829 Objective Loss 1.172829 LR 0.000025 Time 0.541828 -2024-05-11 20:02:56,446 - Epoch: [114][ 300/ 813] Overall Loss 1.182142 Objective Loss 1.182142 LR 0.000025 Time 0.538215 -2024-05-11 20:03:47,202 - Epoch: [114][ 400/ 813] Overall Loss 1.187454 Objective Loss 1.187454 LR 0.000025 Time 0.530545 -2024-05-11 20:04:39,995 - Epoch: [114][ 500/ 813] Overall Loss 1.192167 Objective Loss 1.192167 LR 0.000025 Time 0.530020 -2024-05-11 20:05:32,393 - Epoch: [114][ 600/ 813] Overall Loss 1.194447 Objective Loss 1.194447 LR 0.000025 Time 0.529010 -2024-05-11 20:06:25,558 - Epoch: [114][ 700/ 813] Overall Loss 1.191439 Objective Loss 1.191439 LR 0.000025 Time 0.529386 -2024-05-11 20:07:18,310 - Epoch: [114][ 800/ 813] Overall Loss 1.191013 Objective Loss 1.191013 LR 0.000025 Time 0.529150 -2024-05-11 20:07:23,345 - Epoch: [114][ 813/ 813] Overall Loss 1.190686 Objective Loss 1.190686 LR 0.000025 Time 0.526881 -2024-05-11 20:07:23,372 - --- validate (epoch=114)----------- -2024-05-11 20:07:23,373 - 3250 samples (16 per mini-batch) -2024-05-11 20:07:23,374 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 20:08:19,632 - Epoch: [114][ 100/ 204] Loss 1.317976 mAP 0.887407 -2024-05-11 20:09:12,319 - Epoch: [114][ 200/ 204] Loss 1.316293 mAP 0.887294 -2024-05-11 20:09:12,690 - Epoch: [114][ 204/ 204] Loss 1.314663 mAP 0.887308 -2024-05-11 20:09:12,714 - ==> mAP: 0.88731 Loss: 1.315 - -2024-05-11 20:09:12,717 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 20:09:12,717 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 20:09:12,738 - - -2024-05-11 20:09:12,738 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 20:10:09,137 - Epoch: [115][ 100/ 813] Overall Loss 1.127761 Objective Loss 1.127761 LR 0.000025 Time 0.563889 -2024-05-11 20:11:03,812 - Epoch: [115][ 200/ 813] Overall Loss 1.163218 Objective Loss 1.163218 LR 0.000025 Time 0.555296 -2024-05-11 20:11:56,052 - Epoch: [115][ 300/ 813] Overall Loss 1.171962 Objective Loss 1.171962 LR 0.000025 Time 0.544326 -2024-05-11 20:12:45,969 - Epoch: [115][ 400/ 813] Overall Loss 1.174340 Objective Loss 1.174340 LR 0.000025 Time 0.533033 -2024-05-11 20:13:37,726 - Epoch: [115][ 500/ 813] Overall Loss 1.177019 Objective Loss 1.177019 LR 0.000025 Time 0.529939 -2024-05-11 20:14:32,125 - Epoch: [115][ 600/ 813] Overall Loss 1.177933 Objective Loss 1.177933 LR 0.000025 Time 0.532277 -2024-05-11 20:15:26,221 - Epoch: [115][ 700/ 813] Overall Loss 1.179342 Objective Loss 1.179342 LR 0.000025 Time 0.533515 -2024-05-11 20:16:17,423 - Epoch: [115][ 800/ 813] Overall Loss 1.180991 Objective Loss 1.180991 LR 0.000025 Time 0.530827 -2024-05-11 20:16:23,614 - Epoch: [115][ 813/ 813] Overall Loss 1.180686 Objective Loss 1.180686 LR 0.000025 Time 0.529951 -2024-05-11 20:16:23,641 - --- validate (epoch=115)----------- -2024-05-11 20:16:23,641 - 3250 samples (16 per mini-batch) -2024-05-11 20:16:23,643 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 20:17:17,947 - Epoch: [115][ 100/ 204] Loss 1.307557 mAP 0.888110 -2024-05-11 20:18:11,296 - Epoch: [115][ 200/ 204] Loss 1.296077 mAP 0.887522 -2024-05-11 20:18:11,842 - Epoch: [115][ 204/ 204] Loss 1.314162 mAP 0.887531 -2024-05-11 20:18:11,867 - ==> mAP: 0.88753 Loss: 1.314 - -2024-05-11 20:18:11,870 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 20:18:11,870 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 20:18:11,890 - - -2024-05-11 20:18:11,890 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 20:19:08,267 - Epoch: [116][ 100/ 813] Overall Loss 1.158300 Objective Loss 1.158300 LR 0.000025 Time 0.563749 -2024-05-11 20:20:02,145 - Epoch: [116][ 200/ 813] Overall Loss 1.171606 Objective Loss 1.171606 LR 0.000025 Time 0.551257 -2024-05-11 20:20:56,273 - Epoch: [116][ 300/ 813] Overall Loss 1.181206 Objective Loss 1.181206 LR 0.000025 Time 0.547926 -2024-05-11 20:21:46,556 - Epoch: [116][ 400/ 813] Overall Loss 1.180389 Objective Loss 1.180389 LR 0.000025 Time 0.536647 -2024-05-11 20:22:38,700 - Epoch: [116][ 500/ 813] Overall Loss 1.184544 Objective Loss 1.184544 LR 0.000025 Time 0.533604 -2024-05-11 20:23:32,172 - Epoch: [116][ 600/ 813] Overall Loss 1.189359 Objective Loss 1.189359 LR 0.000025 Time 0.533787 -2024-05-11 20:24:25,912 - Epoch: [116][ 700/ 813] Overall Loss 1.186636 Objective Loss 1.186636 LR 0.000025 Time 0.534300 -2024-05-11 20:25:16,779 - Epoch: [116][ 800/ 813] Overall Loss 1.186362 Objective Loss 1.186362 LR 0.000025 Time 0.531095 -2024-05-11 20:25:22,632 - Epoch: [116][ 813/ 813] Overall Loss 1.185921 Objective Loss 1.185921 LR 0.000025 Time 0.529801 -2024-05-11 20:25:22,657 - --- validate (epoch=116)----------- -2024-05-11 20:25:22,658 - 3250 samples (16 per mini-batch) -2024-05-11 20:25:22,659 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 20:26:17,728 - Epoch: [116][ 100/ 204] Loss 1.297153 mAP 0.908271 -2024-05-11 20:27:11,777 - Epoch: [116][ 200/ 204] Loss 1.292510 mAP 0.898265 -2024-05-11 20:27:12,212 - Epoch: [116][ 204/ 204] Loss 1.292057 mAP 0.898171 -2024-05-11 20:27:12,237 - ==> mAP: 0.89817 Loss: 1.292 - -2024-05-11 20:27:12,240 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 20:27:12,240 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 20:27:12,260 - - -2024-05-11 20:27:12,261 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 20:28:06,514 - Epoch: [117][ 100/ 813] Overall Loss 1.163257 Objective Loss 1.163257 LR 0.000025 Time 0.542512 -2024-05-11 20:29:00,975 - Epoch: [117][ 200/ 813] Overall Loss 1.156327 Objective Loss 1.156327 LR 0.000025 Time 0.543555 -2024-05-11 20:29:54,112 - Epoch: [117][ 300/ 813] Overall Loss 1.176157 Objective Loss 1.176157 LR 0.000025 Time 0.539471 -2024-05-11 20:30:44,215 - Epoch: [117][ 400/ 813] Overall Loss 1.177406 Objective Loss 1.177406 LR 0.000025 Time 0.529849 -2024-05-11 20:31:36,288 - Epoch: [117][ 500/ 813] Overall Loss 1.179097 Objective Loss 1.179097 LR 0.000025 Time 0.528024 -2024-05-11 20:32:29,191 - Epoch: [117][ 600/ 813] Overall Loss 1.182183 Objective Loss 1.182183 LR 0.000025 Time 0.528188 -2024-05-11 20:33:21,513 - Epoch: [117][ 700/ 813] Overall Loss 1.183870 Objective Loss 1.183870 LR 0.000025 Time 0.527476 -2024-05-11 20:34:13,388 - Epoch: [117][ 800/ 813] Overall Loss 1.185310 Objective Loss 1.185310 LR 0.000025 Time 0.526381 -2024-05-11 20:34:18,925 - Epoch: [117][ 813/ 813] Overall Loss 1.184253 Objective Loss 1.184253 LR 0.000025 Time 0.524775 -2024-05-11 20:34:18,952 - --- validate (epoch=117)----------- -2024-05-11 20:34:18,953 - 3250 samples (16 per mini-batch) -2024-05-11 20:34:18,954 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 20:35:14,459 - Epoch: [117][ 100/ 204] Loss 1.318760 mAP 0.868476 -2024-05-11 20:36:08,356 - Epoch: [117][ 200/ 204] Loss 1.315475 mAP 0.877435 -2024-05-11 20:36:08,969 - Epoch: [117][ 204/ 204] Loss 1.314764 mAP 0.877430 -2024-05-11 20:36:08,995 - ==> mAP: 0.87743 Loss: 1.315 - -2024-05-11 20:36:08,998 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 20:36:08,999 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 20:36:09,019 - - -2024-05-11 20:36:09,019 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 20:37:04,274 - Epoch: [118][ 100/ 813] Overall Loss 1.132084 Objective Loss 1.132084 LR 0.000025 Time 0.552531 -2024-05-11 20:37:57,344 - Epoch: [118][ 200/ 813] Overall Loss 1.148150 Objective Loss 1.148150 LR 0.000025 Time 0.541608 -2024-05-11 20:38:50,079 - Epoch: [118][ 300/ 813] Overall Loss 1.167820 Objective Loss 1.167820 LR 0.000025 Time 0.536848 -2024-05-11 20:39:41,501 - Epoch: [118][ 400/ 813] Overall Loss 1.170717 Objective Loss 1.170717 LR 0.000025 Time 0.531186 -2024-05-11 20:40:34,397 - Epoch: [118][ 500/ 813] Overall Loss 1.173873 Objective Loss 1.173873 LR 0.000025 Time 0.530737 -2024-05-11 20:41:27,104 - Epoch: [118][ 600/ 813] Overall Loss 1.175831 Objective Loss 1.175831 LR 0.000025 Time 0.530119 -2024-05-11 20:42:19,599 - Epoch: [118][ 700/ 813] Overall Loss 1.177795 Objective Loss 1.177795 LR 0.000025 Time 0.529380 -2024-05-11 20:43:13,110 - Epoch: [118][ 800/ 813] Overall Loss 1.179439 Objective Loss 1.179439 LR 0.000025 Time 0.530094 -2024-05-11 20:43:18,501 - Epoch: [118][ 813/ 813] Overall Loss 1.179265 Objective Loss 1.179265 LR 0.000025 Time 0.528238 -2024-05-11 20:43:18,528 - --- validate (epoch=118)----------- -2024-05-11 20:43:18,529 - 3250 samples (16 per mini-batch) -2024-05-11 20:43:18,530 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 20:44:12,436 - Epoch: [118][ 100/ 204] Loss 1.316706 mAP 0.889282 -2024-05-11 20:45:06,415 - Epoch: [118][ 200/ 204] Loss 1.320378 mAP 0.888432 -2024-05-11 20:45:06,618 - Epoch: [118][ 204/ 204] Loss 1.325546 mAP 0.888445 -2024-05-11 20:45:06,643 - ==> mAP: 0.88844 Loss: 1.326 - -2024-05-11 20:45:06,646 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 20:45:06,646 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 20:45:06,666 - - -2024-05-11 20:45:06,666 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 20:46:01,981 - Epoch: [119][ 100/ 813] Overall Loss 1.151268 Objective Loss 1.151268 LR 0.000025 Time 0.553125 -2024-05-11 20:46:55,861 - Epoch: [119][ 200/ 813] Overall Loss 1.170428 Objective Loss 1.170428 LR 0.000025 Time 0.545956 -2024-05-11 20:47:48,545 - Epoch: [119][ 300/ 813] Overall Loss 1.174037 Objective Loss 1.174037 LR 0.000025 Time 0.539560 -2024-05-11 20:48:38,710 - Epoch: [119][ 400/ 813] Overall Loss 1.184652 Objective Loss 1.184652 LR 0.000025 Time 0.530079 -2024-05-11 20:49:30,230 - Epoch: [119][ 500/ 813] Overall Loss 1.181464 Objective Loss 1.181464 LR 0.000025 Time 0.527099 -2024-05-11 20:50:24,800 - Epoch: [119][ 600/ 813] Overall Loss 1.186007 Objective Loss 1.186007 LR 0.000025 Time 0.530197 -2024-05-11 20:51:17,701 - Epoch: [119][ 700/ 813] Overall Loss 1.186278 Objective Loss 1.186278 LR 0.000025 Time 0.530026 -2024-05-11 20:52:09,323 - Epoch: [119][ 800/ 813] Overall Loss 1.185521 Objective Loss 1.185521 LR 0.000025 Time 0.528298 -2024-05-11 20:52:15,424 - Epoch: [119][ 813/ 813] Overall Loss 1.184458 Objective Loss 1.184458 LR 0.000025 Time 0.527350 -2024-05-11 20:52:15,451 - --- validate (epoch=119)----------- -2024-05-11 20:52:15,451 - 3250 samples (16 per mini-batch) -2024-05-11 20:52:15,452 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 20:53:11,834 - Epoch: [119][ 100/ 204] Loss 1.319696 mAP 0.888020 -2024-05-11 20:54:03,871 - Epoch: [119][ 200/ 204] Loss 1.317155 mAP 0.878105 -2024-05-11 20:54:04,087 - Epoch: [119][ 204/ 204] Loss 1.316302 mAP 0.878131 -2024-05-11 20:54:04,116 - ==> mAP: 0.87813 Loss: 1.316 - -2024-05-11 20:54:04,119 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 20:54:04,119 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 20:54:04,139 - - -2024-05-11 20:54:04,140 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 20:54:57,903 - Epoch: [120][ 100/ 813] Overall Loss 1.153992 Objective Loss 1.153992 LR 0.000025 Time 0.537611 -2024-05-11 20:55:53,856 - Epoch: [120][ 200/ 813] Overall Loss 1.172521 Objective Loss 1.172521 LR 0.000025 Time 0.548565 -2024-05-11 20:56:45,201 - Epoch: [120][ 300/ 813] Overall Loss 1.182806 Objective Loss 1.182806 LR 0.000025 Time 0.536855 -2024-05-11 20:57:35,671 - Epoch: [120][ 400/ 813] Overall Loss 1.184639 Objective Loss 1.184639 LR 0.000025 Time 0.528811 -2024-05-11 20:58:29,807 - Epoch: [120][ 500/ 813] Overall Loss 1.182622 Objective Loss 1.182622 LR 0.000025 Time 0.531318 -2024-05-11 20:59:22,600 - Epoch: [120][ 600/ 813] Overall Loss 1.186435 Objective Loss 1.186435 LR 0.000025 Time 0.530750 -2024-05-11 21:00:14,842 - Epoch: [120][ 700/ 813] Overall Loss 1.185129 Objective Loss 1.185129 LR 0.000025 Time 0.529558 -2024-05-11 21:01:06,439 - Epoch: [120][ 800/ 813] Overall Loss 1.187610 Objective Loss 1.187610 LR 0.000025 Time 0.527858 -2024-05-11 21:01:12,041 - Epoch: [120][ 813/ 813] Overall Loss 1.186728 Objective Loss 1.186728 LR 0.000025 Time 0.526307 -2024-05-11 21:01:12,075 - --- validate (epoch=120)----------- -2024-05-11 21:01:12,076 - 3250 samples (16 per mini-batch) -2024-05-11 21:01:12,077 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 21:02:08,610 - Epoch: [120][ 100/ 204] Loss 1.313291 mAP 0.878711 -2024-05-11 21:03:02,670 - Epoch: [120][ 200/ 204] Loss 1.319577 mAP 0.878843 -2024-05-11 21:03:03,293 - Epoch: [120][ 204/ 204] Loss 1.320014 mAP 0.878839 -2024-05-11 21:03:03,318 - ==> mAP: 0.87884 Loss: 1.320 - -2024-05-11 21:03:03,322 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 21:03:03,322 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 21:03:03,343 - - -2024-05-11 21:03:03,343 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 21:03:59,703 - Epoch: [121][ 100/ 813] Overall Loss 1.148017 Objective Loss 1.148017 LR 0.000025 Time 0.563366 -2024-05-11 21:04:52,006 - Epoch: [121][ 200/ 813] Overall Loss 1.151617 Objective Loss 1.151617 LR 0.000025 Time 0.543177 -2024-05-11 21:05:44,071 - Epoch: [121][ 300/ 813] Overall Loss 1.167228 Objective Loss 1.167228 LR 0.000025 Time 0.535655 -2024-05-11 21:06:36,244 - Epoch: [121][ 400/ 813] Overall Loss 1.169814 Objective Loss 1.169814 LR 0.000025 Time 0.532169 -2024-05-11 21:07:28,923 - Epoch: [121][ 500/ 813] Overall Loss 1.171693 Objective Loss 1.171693 LR 0.000025 Time 0.531090 -2024-05-11 21:08:23,341 - Epoch: [121][ 600/ 813] Overall Loss 1.175800 Objective Loss 1.175800 LR 0.000025 Time 0.533268 -2024-05-11 21:09:17,586 - Epoch: [121][ 700/ 813] Overall Loss 1.179527 Objective Loss 1.179527 LR 0.000025 Time 0.534573 -2024-05-11 21:10:09,887 - Epoch: [121][ 800/ 813] Overall Loss 1.179477 Objective Loss 1.179477 LR 0.000025 Time 0.533119 -2024-05-11 21:10:15,821 - Epoch: [121][ 813/ 813] Overall Loss 1.178703 Objective Loss 1.178703 LR 0.000025 Time 0.531893 -2024-05-11 21:10:15,848 - --- validate (epoch=121)----------- -2024-05-11 21:10:15,849 - 3250 samples (16 per mini-batch) -2024-05-11 21:10:15,850 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 21:11:09,919 - Epoch: [121][ 100/ 204] Loss 1.324119 mAP 0.873151 -2024-05-11 21:12:03,124 - Epoch: [121][ 200/ 204] Loss 1.312306 mAP 0.873833 -2024-05-11 21:12:03,638 - Epoch: [121][ 204/ 204] Loss 1.315005 mAP 0.864072 -2024-05-11 21:12:03,664 - ==> mAP: 0.86407 Loss: 1.315 - -2024-05-11 21:12:03,666 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 21:12:03,666 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 21:12:03,687 - - -2024-05-11 21:12:03,687 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 21:12:58,878 - Epoch: [122][ 100/ 813] Overall Loss 1.148669 Objective Loss 1.148669 LR 0.000025 Time 0.551896 -2024-05-11 21:13:52,864 - Epoch: [122][ 200/ 813] Overall Loss 1.158580 Objective Loss 1.158580 LR 0.000025 Time 0.545868 -2024-05-11 21:14:45,433 - Epoch: [122][ 300/ 813] Overall Loss 1.181991 Objective Loss 1.181991 LR 0.000025 Time 0.539136 -2024-05-11 21:15:35,017 - Epoch: [122][ 400/ 813] Overall Loss 1.184255 Objective Loss 1.184255 LR 0.000025 Time 0.528308 -2024-05-11 21:16:26,369 - Epoch: [122][ 500/ 813] Overall Loss 1.187760 Objective Loss 1.187760 LR 0.000025 Time 0.525347 -2024-05-11 21:17:20,094 - Epoch: [122][ 600/ 813] Overall Loss 1.189502 Objective Loss 1.189502 LR 0.000025 Time 0.527329 -2024-05-11 21:18:13,521 - Epoch: [122][ 700/ 813] Overall Loss 1.191581 Objective Loss 1.191581 LR 0.000025 Time 0.528318 -2024-05-11 21:19:05,106 - Epoch: [122][ 800/ 813] Overall Loss 1.189384 Objective Loss 1.189384 LR 0.000025 Time 0.526757 -2024-05-11 21:19:10,166 - Epoch: [122][ 813/ 813] Overall Loss 1.190160 Objective Loss 1.190160 LR 0.000025 Time 0.524559 -2024-05-11 21:19:10,192 - --- validate (epoch=122)----------- -2024-05-11 21:19:10,193 - 3250 samples (16 per mini-batch) -2024-05-11 21:19:10,194 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 21:20:05,052 - Epoch: [122][ 100/ 204] Loss 1.308314 mAP 0.884431 -2024-05-11 21:20:57,490 - Epoch: [122][ 200/ 204] Loss 1.304109 mAP 0.895098 -2024-05-11 21:20:57,823 - Epoch: [122][ 204/ 204] Loss 1.305835 mAP 0.895039 -2024-05-11 21:20:57,849 - ==> mAP: 0.89504 Loss: 1.306 - -2024-05-11 21:20:57,852 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 21:20:57,852 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 21:20:57,873 - - -2024-05-11 21:20:57,873 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 21:21:53,726 - Epoch: [123][ 100/ 813] Overall Loss 1.163583 Objective Loss 1.163583 LR 0.000025 Time 0.558515 -2024-05-11 21:22:48,509 - Epoch: [123][ 200/ 813] Overall Loss 1.165883 Objective Loss 1.165883 LR 0.000025 Time 0.553140 -2024-05-11 21:23:41,694 - Epoch: [123][ 300/ 813] Overall Loss 1.174188 Objective Loss 1.174188 LR 0.000025 Time 0.546038 -2024-05-11 21:24:32,113 - Epoch: [123][ 400/ 813] Overall Loss 1.180161 Objective Loss 1.180161 LR 0.000025 Time 0.535565 -2024-05-11 21:25:23,463 - Epoch: [123][ 500/ 813] Overall Loss 1.187917 Objective Loss 1.187917 LR 0.000025 Time 0.531125 -2024-05-11 21:26:17,060 - Epoch: [123][ 600/ 813] Overall Loss 1.188903 Objective Loss 1.188903 LR 0.000025 Time 0.531931 -2024-05-11 21:27:09,801 - Epoch: [123][ 700/ 813] Overall Loss 1.188712 Objective Loss 1.188712 LR 0.000025 Time 0.531283 -2024-05-11 21:28:01,367 - Epoch: [123][ 800/ 813] Overall Loss 1.191911 Objective Loss 1.191911 LR 0.000025 Time 0.529328 -2024-05-11 21:28:06,121 - Epoch: [123][ 813/ 813] Overall Loss 1.191993 Objective Loss 1.191993 LR 0.000025 Time 0.526711 -2024-05-11 21:28:06,147 - --- validate (epoch=123)----------- -2024-05-11 21:28:06,148 - 3250 samples (16 per mini-batch) -2024-05-11 21:28:06,149 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 21:29:01,556 - Epoch: [123][ 100/ 204] Loss 1.311521 mAP 0.884618 -2024-05-11 21:29:54,939 - Epoch: [123][ 200/ 204] Loss 1.307937 mAP 0.874730 -2024-05-11 21:29:55,815 - Epoch: [123][ 204/ 204] Loss 1.307000 mAP 0.874666 -2024-05-11 21:29:55,839 - ==> mAP: 0.87467 Loss: 1.307 - -2024-05-11 21:29:55,842 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 21:29:55,842 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 21:29:55,863 - - -2024-05-11 21:29:55,863 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 21:30:52,861 - Epoch: [124][ 100/ 813] Overall Loss 1.125834 Objective Loss 1.125834 LR 0.000025 Time 0.569917 -2024-05-11 21:31:45,972 - Epoch: [124][ 200/ 813] Overall Loss 1.137671 Objective Loss 1.137671 LR 0.000025 Time 0.550506 -2024-05-11 21:32:37,933 - Epoch: [124][ 300/ 813] Overall Loss 1.154835 Objective Loss 1.154835 LR 0.000025 Time 0.540186 -2024-05-11 21:33:27,688 - Epoch: [124][ 400/ 813] Overall Loss 1.161858 Objective Loss 1.161858 LR 0.000025 Time 0.529523 -2024-05-11 21:34:20,390 - Epoch: [124][ 500/ 813] Overall Loss 1.166230 Objective Loss 1.166230 LR 0.000025 Time 0.529018 -2024-05-11 21:35:12,792 - Epoch: [124][ 600/ 813] Overall Loss 1.171716 Objective Loss 1.171716 LR 0.000025 Time 0.528176 -2024-05-11 21:36:05,971 - Epoch: [124][ 700/ 813] Overall Loss 1.173602 Objective Loss 1.173602 LR 0.000025 Time 0.528687 -2024-05-11 21:36:59,999 - Epoch: [124][ 800/ 813] Overall Loss 1.173245 Objective Loss 1.173245 LR 0.000025 Time 0.530134 -2024-05-11 21:37:05,360 - Epoch: [124][ 813/ 813] Overall Loss 1.172733 Objective Loss 1.172733 LR 0.000025 Time 0.528251 -2024-05-11 21:37:05,386 - --- validate (epoch=124)----------- -2024-05-11 21:37:05,386 - 3250 samples (16 per mini-batch) -2024-05-11 21:37:05,387 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 21:38:00,849 - Epoch: [124][ 100/ 204] Loss 1.290827 mAP 0.879775 -2024-05-11 21:38:53,326 - Epoch: [124][ 200/ 204] Loss 1.307866 mAP 0.878523 -2024-05-11 21:38:53,853 - Epoch: [124][ 204/ 204] Loss 1.308001 mAP 0.878485 -2024-05-11 21:38:53,878 - ==> mAP: 0.87849 Loss: 1.308 - -2024-05-11 21:38:53,881 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 21:38:53,881 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 21:38:53,902 - - -2024-05-11 21:38:53,902 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 21:39:50,297 - Epoch: [125][ 100/ 813] Overall Loss 1.154062 Objective Loss 1.154062 LR 0.000025 Time 0.563928 -2024-05-11 21:40:43,513 - Epoch: [125][ 200/ 813] Overall Loss 1.167521 Objective Loss 1.167521 LR 0.000025 Time 0.548033 -2024-05-11 21:41:36,760 - Epoch: [125][ 300/ 813] Overall Loss 1.184647 Objective Loss 1.184647 LR 0.000025 Time 0.542841 -2024-05-11 21:42:25,509 - Epoch: [125][ 400/ 813] Overall Loss 1.189015 Objective Loss 1.189015 LR 0.000025 Time 0.528999 -2024-05-11 21:43:17,281 - Epoch: [125][ 500/ 813] Overall Loss 1.187549 Objective Loss 1.187549 LR 0.000025 Time 0.526741 -2024-05-11 21:44:11,578 - Epoch: [125][ 600/ 813] Overall Loss 1.189429 Objective Loss 1.189429 LR 0.000025 Time 0.529442 -2024-05-11 21:45:04,745 - Epoch: [125][ 700/ 813] Overall Loss 1.184303 Objective Loss 1.184303 LR 0.000025 Time 0.529759 -2024-05-11 21:45:57,728 - Epoch: [125][ 800/ 813] Overall Loss 1.185133 Objective Loss 1.185133 LR 0.000025 Time 0.529766 -2024-05-11 21:46:02,637 - Epoch: [125][ 813/ 813] Overall Loss 1.185221 Objective Loss 1.185221 LR 0.000025 Time 0.527333 -2024-05-11 21:46:02,663 - --- validate (epoch=125)----------- -2024-05-11 21:46:02,663 - 3250 samples (16 per mini-batch) -2024-05-11 21:46:02,665 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 21:46:58,862 - Epoch: [125][ 100/ 204] Loss 1.326228 mAP 0.887268 -2024-05-11 21:47:52,272 - Epoch: [125][ 200/ 204] Loss 1.321198 mAP 0.886981 -2024-05-11 21:47:52,474 - Epoch: [125][ 204/ 204] Loss 1.318384 mAP 0.886992 -2024-05-11 21:47:52,498 - ==> mAP: 0.88699 Loss: 1.318 - -2024-05-11 21:47:52,501 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 21:47:52,501 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 21:47:52,521 - - -2024-05-11 21:47:52,521 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 21:48:48,706 - Epoch: [126][ 100/ 813] Overall Loss 1.151557 Objective Loss 1.151557 LR 0.000025 Time 0.561829 -2024-05-11 21:49:41,371 - Epoch: [126][ 200/ 813] Overall Loss 1.172184 Objective Loss 1.172184 LR 0.000025 Time 0.544204 -2024-05-11 21:50:34,121 - Epoch: [126][ 300/ 813] Overall Loss 1.183792 Objective Loss 1.183792 LR 0.000025 Time 0.538629 -2024-05-11 21:51:25,494 - Epoch: [126][ 400/ 813] Overall Loss 1.187057 Objective Loss 1.187057 LR 0.000025 Time 0.532401 -2024-05-11 21:52:16,349 - Epoch: [126][ 500/ 813] Overall Loss 1.193957 Objective Loss 1.193957 LR 0.000025 Time 0.527627 -2024-05-11 21:53:10,615 - Epoch: [126][ 600/ 813] Overall Loss 1.193629 Objective Loss 1.193629 LR 0.000025 Time 0.530130 -2024-05-11 21:54:03,668 - Epoch: [126][ 700/ 813] Overall Loss 1.194346 Objective Loss 1.194346 LR 0.000025 Time 0.530185 -2024-05-11 21:54:55,042 - Epoch: [126][ 800/ 813] Overall Loss 1.192188 Objective Loss 1.192188 LR 0.000025 Time 0.528125 -2024-05-11 21:55:00,805 - Epoch: [126][ 813/ 813] Overall Loss 1.192071 Objective Loss 1.192071 LR 0.000025 Time 0.526768 -2024-05-11 21:55:00,831 - --- validate (epoch=126)----------- -2024-05-11 21:55:00,832 - 3250 samples (16 per mini-batch) -2024-05-11 21:55:00,833 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 21:55:56,523 - Epoch: [126][ 100/ 204] Loss 1.337088 mAP 0.879253 -2024-05-11 21:56:50,448 - Epoch: [126][ 200/ 204] Loss 1.320696 mAP 0.878667 -2024-05-11 21:56:51,031 - Epoch: [126][ 204/ 204] Loss 1.320897 mAP 0.878644 -2024-05-11 21:56:51,056 - ==> mAP: 0.87864 Loss: 1.321 - -2024-05-11 21:56:51,059 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 21:56:51,059 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 21:56:51,080 - - -2024-05-11 21:56:51,080 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 21:57:45,521 - Epoch: [127][ 100/ 813] Overall Loss 1.151113 Objective Loss 1.151113 LR 0.000025 Time 0.544381 -2024-05-11 21:58:39,602 - Epoch: [127][ 200/ 813] Overall Loss 1.164430 Objective Loss 1.164430 LR 0.000025 Time 0.542590 -2024-05-11 21:59:31,589 - Epoch: [127][ 300/ 813] Overall Loss 1.179254 Objective Loss 1.179254 LR 0.000025 Time 0.535010 -2024-05-11 22:00:22,938 - Epoch: [127][ 400/ 813] Overall Loss 1.186043 Objective Loss 1.186043 LR 0.000025 Time 0.529627 -2024-05-11 22:01:15,700 - Epoch: [127][ 500/ 813] Overall Loss 1.192366 Objective Loss 1.192366 LR 0.000025 Time 0.529222 -2024-05-11 22:02:08,975 - Epoch: [127][ 600/ 813] Overall Loss 1.196170 Objective Loss 1.196170 LR 0.000025 Time 0.529808 -2024-05-11 22:03:02,368 - Epoch: [127][ 700/ 813] Overall Loss 1.191035 Objective Loss 1.191035 LR 0.000025 Time 0.530394 -2024-05-11 22:03:55,318 - Epoch: [127][ 800/ 813] Overall Loss 1.191555 Objective Loss 1.191555 LR 0.000025 Time 0.530280 -2024-05-11 22:04:01,099 - Epoch: [127][ 813/ 813] Overall Loss 1.190574 Objective Loss 1.190574 LR 0.000025 Time 0.528912 -2024-05-11 22:04:01,126 - --- validate (epoch=127)----------- -2024-05-11 22:04:01,127 - 3250 samples (16 per mini-batch) -2024-05-11 22:04:01,128 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 22:04:55,740 - Epoch: [127][ 100/ 204] Loss 1.320110 mAP 0.886408 -2024-05-11 22:05:47,734 - Epoch: [127][ 200/ 204] Loss 1.313895 mAP 0.886612 -2024-05-11 22:05:47,944 - Epoch: [127][ 204/ 204] Loss 1.317108 mAP 0.886642 -2024-05-11 22:05:47,970 - ==> mAP: 0.88664 Loss: 1.317 - -2024-05-11 22:05:47,973 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 22:05:47,973 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 22:05:47,993 - - -2024-05-11 22:05:47,993 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 22:06:43,127 - Epoch: [128][ 100/ 813] Overall Loss 1.157320 Objective Loss 1.157320 LR 0.000025 Time 0.551313 -2024-05-11 22:07:38,188 - Epoch: [128][ 200/ 813] Overall Loss 1.180504 Objective Loss 1.180504 LR 0.000025 Time 0.550955 -2024-05-11 22:08:29,366 - Epoch: [128][ 300/ 813] Overall Loss 1.185724 Objective Loss 1.185724 LR 0.000025 Time 0.537894 -2024-05-11 22:09:19,858 - Epoch: [128][ 400/ 813] Overall Loss 1.191047 Objective Loss 1.191047 LR 0.000025 Time 0.529646 -2024-05-11 22:10:11,694 - Epoch: [128][ 500/ 813] Overall Loss 1.190293 Objective Loss 1.190293 LR 0.000025 Time 0.527385 -2024-05-11 22:11:05,765 - Epoch: [128][ 600/ 813] Overall Loss 1.190194 Objective Loss 1.190194 LR 0.000025 Time 0.529594 -2024-05-11 22:11:58,662 - Epoch: [128][ 700/ 813] Overall Loss 1.191319 Objective Loss 1.191319 LR 0.000025 Time 0.529503 -2024-05-11 22:12:50,569 - Epoch: [128][ 800/ 813] Overall Loss 1.193574 Objective Loss 1.193574 LR 0.000025 Time 0.528196 -2024-05-11 22:12:56,278 - Epoch: [128][ 813/ 813] Overall Loss 1.192414 Objective Loss 1.192414 LR 0.000025 Time 0.526768 -2024-05-11 22:12:56,312 - --- validate (epoch=128)----------- -2024-05-11 22:12:56,313 - 3250 samples (16 per mini-batch) -2024-05-11 22:12:56,314 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 22:13:50,641 - Epoch: [128][ 100/ 204] Loss 1.331729 mAP 0.876860 -2024-05-11 22:14:45,195 - Epoch: [128][ 200/ 204] Loss 1.319554 mAP 0.876769 -2024-05-11 22:14:45,903 - Epoch: [128][ 204/ 204] Loss 1.315174 mAP 0.876685 -2024-05-11 22:14:45,928 - ==> mAP: 0.87668 Loss: 1.315 - -2024-05-11 22:14:45,930 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 22:14:45,930 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 22:14:45,950 - - -2024-05-11 22:14:45,951 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 22:15:43,633 - Epoch: [129][ 100/ 813] Overall Loss 1.148998 Objective Loss 1.148998 LR 0.000025 Time 0.576801 -2024-05-11 22:16:35,622 - Epoch: [129][ 200/ 813] Overall Loss 1.152907 Objective Loss 1.152907 LR 0.000025 Time 0.548339 -2024-05-11 22:17:28,330 - Epoch: [129][ 300/ 813] Overall Loss 1.172609 Objective Loss 1.172609 LR 0.000025 Time 0.541247 -2024-05-11 22:18:17,528 - Epoch: [129][ 400/ 813] Overall Loss 1.168671 Objective Loss 1.168671 LR 0.000025 Time 0.528927 -2024-05-11 22:19:10,601 - Epoch: [129][ 500/ 813] Overall Loss 1.172699 Objective Loss 1.172699 LR 0.000025 Time 0.529284 -2024-05-11 22:20:04,971 - Epoch: [129][ 600/ 813] Overall Loss 1.173002 Objective Loss 1.173002 LR 0.000025 Time 0.531684 -2024-05-11 22:20:58,851 - Epoch: [129][ 700/ 813] Overall Loss 1.172328 Objective Loss 1.172328 LR 0.000025 Time 0.532691 -2024-05-11 22:21:50,213 - Epoch: [129][ 800/ 813] Overall Loss 1.172289 Objective Loss 1.172289 LR 0.000025 Time 0.530305 -2024-05-11 22:21:55,592 - Epoch: [129][ 813/ 813] Overall Loss 1.172118 Objective Loss 1.172118 LR 0.000025 Time 0.528441 -2024-05-11 22:21:55,618 - --- validate (epoch=129)----------- -2024-05-11 22:21:55,619 - 3250 samples (16 per mini-batch) -2024-05-11 22:21:55,620 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 22:22:51,576 - Epoch: [129][ 100/ 204] Loss 1.261245 mAP 0.886036 -2024-05-11 22:23:45,648 - Epoch: [129][ 200/ 204] Loss 1.282450 mAP 0.886404 -2024-05-11 22:23:46,203 - Epoch: [129][ 204/ 204] Loss 1.286818 mAP 0.886392 -2024-05-11 22:23:46,228 - ==> mAP: 0.88639 Loss: 1.287 - -2024-05-11 22:23:46,231 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 22:23:46,231 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 22:23:46,251 - - -2024-05-11 22:23:46,251 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 22:24:41,048 - Epoch: [130][ 100/ 813] Overall Loss 1.128598 Objective Loss 1.128598 LR 0.000025 Time 0.547950 -2024-05-11 22:25:34,028 - Epoch: [130][ 200/ 813] Overall Loss 1.146739 Objective Loss 1.146739 LR 0.000025 Time 0.538865 -2024-05-11 22:26:26,556 - Epoch: [130][ 300/ 813] Overall Loss 1.162491 Objective Loss 1.162491 LR 0.000025 Time 0.534324 -2024-05-11 22:27:17,994 - Epoch: [130][ 400/ 813] Overall Loss 1.170957 Objective Loss 1.170957 LR 0.000025 Time 0.529314 -2024-05-11 22:28:09,783 - Epoch: [130][ 500/ 813] Overall Loss 1.178823 Objective Loss 1.178823 LR 0.000025 Time 0.527025 -2024-05-11 22:29:02,900 - Epoch: [130][ 600/ 813] Overall Loss 1.177318 Objective Loss 1.177318 LR 0.000025 Time 0.527708 -2024-05-11 22:29:55,554 - Epoch: [130][ 700/ 813] Overall Loss 1.178291 Objective Loss 1.178291 LR 0.000025 Time 0.527539 -2024-05-11 22:30:47,436 - Epoch: [130][ 800/ 813] Overall Loss 1.178219 Objective Loss 1.178219 LR 0.000025 Time 0.526448 -2024-05-11 22:30:53,159 - Epoch: [130][ 813/ 813] Overall Loss 1.177235 Objective Loss 1.177235 LR 0.000025 Time 0.525059 -2024-05-11 22:30:53,190 - --- validate (epoch=130)----------- -2024-05-11 22:30:53,191 - 3250 samples (16 per mini-batch) -2024-05-11 22:30:53,192 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 22:31:48,538 - Epoch: [130][ 100/ 204] Loss 1.301864 mAP 0.876783 -2024-05-11 22:32:42,359 - Epoch: [130][ 200/ 204] Loss 1.313982 mAP 0.865879 -2024-05-11 22:32:42,573 - Epoch: [130][ 204/ 204] Loss 1.313884 mAP 0.865882 -2024-05-11 22:32:42,598 - ==> mAP: 0.86588 Loss: 1.314 - -2024-05-11 22:32:42,604 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 22:32:42,605 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 22:32:42,627 - - -2024-05-11 22:32:42,627 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 22:33:39,063 - Epoch: [131][ 100/ 813] Overall Loss 1.155066 Objective Loss 1.155066 LR 0.000025 Time 0.564329 -2024-05-11 22:34:31,901 - Epoch: [131][ 200/ 813] Overall Loss 1.170159 Objective Loss 1.170159 LR 0.000025 Time 0.546333 -2024-05-11 22:35:23,997 - Epoch: [131][ 300/ 813] Overall Loss 1.175033 Objective Loss 1.175033 LR 0.000025 Time 0.537869 -2024-05-11 22:36:14,599 - Epoch: [131][ 400/ 813] Overall Loss 1.182121 Objective Loss 1.182121 LR 0.000025 Time 0.529902 -2024-05-11 22:37:07,047 - Epoch: [131][ 500/ 813] Overall Loss 1.183732 Objective Loss 1.183732 LR 0.000025 Time 0.528779 -2024-05-11 22:38:00,873 - Epoch: [131][ 600/ 813] Overall Loss 1.185041 Objective Loss 1.185041 LR 0.000025 Time 0.530351 -2024-05-11 22:38:53,743 - Epoch: [131][ 700/ 813] Overall Loss 1.183889 Objective Loss 1.183889 LR 0.000025 Time 0.530109 -2024-05-11 22:39:45,878 - Epoch: [131][ 800/ 813] Overall Loss 1.185366 Objective Loss 1.185366 LR 0.000025 Time 0.529013 -2024-05-11 22:39:51,664 - Epoch: [131][ 813/ 813] Overall Loss 1.185478 Objective Loss 1.185478 LR 0.000025 Time 0.527670 -2024-05-11 22:39:51,692 - --- validate (epoch=131)----------- -2024-05-11 22:39:51,693 - 3250 samples (16 per mini-batch) -2024-05-11 22:39:51,694 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 22:40:47,382 - Epoch: [131][ 100/ 204] Loss 1.310962 mAP 0.886042 -2024-05-11 22:41:38,960 - Epoch: [131][ 200/ 204] Loss 1.312987 mAP 0.886191 -2024-05-11 22:41:39,458 - Epoch: [131][ 204/ 204] Loss 1.309747 mAP 0.886135 -2024-05-11 22:41:39,483 - ==> mAP: 0.88614 Loss: 1.310 - -2024-05-11 22:41:39,485 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 22:41:39,485 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 22:41:39,505 - - -2024-05-11 22:41:39,505 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 22:42:34,699 - Epoch: [132][ 100/ 813] Overall Loss 1.141744 Objective Loss 1.141744 LR 0.000025 Time 0.551912 -2024-05-11 22:43:29,059 - Epoch: [132][ 200/ 813] Overall Loss 1.161711 Objective Loss 1.161711 LR 0.000025 Time 0.547749 -2024-05-11 22:44:22,460 - Epoch: [132][ 300/ 813] Overall Loss 1.172597 Objective Loss 1.172597 LR 0.000025 Time 0.543165 -2024-05-11 22:45:12,183 - Epoch: [132][ 400/ 813] Overall Loss 1.183644 Objective Loss 1.183644 LR 0.000025 Time 0.531676 -2024-05-11 22:46:04,257 - Epoch: [132][ 500/ 813] Overall Loss 1.182146 Objective Loss 1.182146 LR 0.000025 Time 0.529486 -2024-05-11 22:46:57,031 - Epoch: [132][ 600/ 813] Overall Loss 1.181937 Objective Loss 1.181937 LR 0.000025 Time 0.529192 -2024-05-11 22:47:49,944 - Epoch: [132][ 700/ 813] Overall Loss 1.183802 Objective Loss 1.183802 LR 0.000025 Time 0.529182 -2024-05-11 22:48:42,032 - Epoch: [132][ 800/ 813] Overall Loss 1.185212 Objective Loss 1.185212 LR 0.000025 Time 0.528138 -2024-05-11 22:48:48,282 - Epoch: [132][ 813/ 813] Overall Loss 1.186282 Objective Loss 1.186282 LR 0.000025 Time 0.527382 -2024-05-11 22:48:48,309 - --- validate (epoch=132)----------- -2024-05-11 22:48:48,309 - 3250 samples (16 per mini-batch) -2024-05-11 22:48:48,311 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 22:49:43,413 - Epoch: [132][ 100/ 204] Loss 1.319189 mAP 0.875540 -2024-05-11 22:50:36,992 - Epoch: [132][ 200/ 204] Loss 1.309740 mAP 0.875632 -2024-05-11 22:50:37,592 - Epoch: [132][ 204/ 204] Loss 1.308149 mAP 0.875659 -2024-05-11 22:50:37,619 - ==> mAP: 0.87566 Loss: 1.308 - -2024-05-11 22:50:37,622 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 22:50:37,622 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 22:50:37,642 - - -2024-05-11 22:50:37,642 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 22:51:33,651 - Epoch: [133][ 100/ 813] Overall Loss 1.132830 Objective Loss 1.132830 LR 0.000025 Time 0.560067 -2024-05-11 22:52:26,666 - Epoch: [133][ 200/ 813] Overall Loss 1.164064 Objective Loss 1.164064 LR 0.000025 Time 0.544981 -2024-05-11 22:53:19,878 - Epoch: [133][ 300/ 813] Overall Loss 1.181428 Objective Loss 1.181428 LR 0.000025 Time 0.540681 -2024-05-11 22:54:10,563 - Epoch: [133][ 400/ 813] Overall Loss 1.184870 Objective Loss 1.184870 LR 0.000025 Time 0.532218 -2024-05-11 22:55:03,352 - Epoch: [133][ 500/ 813] Overall Loss 1.184707 Objective Loss 1.184707 LR 0.000025 Time 0.531350 -2024-05-11 22:55:56,651 - Epoch: [133][ 600/ 813] Overall Loss 1.183882 Objective Loss 1.183882 LR 0.000025 Time 0.531621 -2024-05-11 22:56:49,754 - Epoch: [133][ 700/ 813] Overall Loss 1.185980 Objective Loss 1.185980 LR 0.000025 Time 0.531534 -2024-05-11 22:57:42,619 - Epoch: [133][ 800/ 813] Overall Loss 1.183498 Objective Loss 1.183498 LR 0.000025 Time 0.531172 -2024-05-11 22:57:47,208 - Epoch: [133][ 813/ 813] Overall Loss 1.183777 Objective Loss 1.183777 LR 0.000025 Time 0.528296 -2024-05-11 22:57:47,233 - --- validate (epoch=133)----------- -2024-05-11 22:57:47,234 - 3250 samples (16 per mini-batch) -2024-05-11 22:57:47,235 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 22:58:42,926 - Epoch: [133][ 100/ 204] Loss 1.320869 mAP 0.889772 -2024-05-11 22:59:37,078 - Epoch: [133][ 200/ 204] Loss 1.307458 mAP 0.888031 -2024-05-11 22:59:37,394 - Epoch: [133][ 204/ 204] Loss 1.313139 mAP 0.888002 -2024-05-11 22:59:37,420 - ==> mAP: 0.88800 Loss: 1.313 - -2024-05-11 22:59:37,423 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 22:59:37,423 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 22:59:37,444 - - -2024-05-11 22:59:37,444 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 23:00:32,466 - Epoch: [134][ 100/ 813] Overall Loss 1.165977 Objective Loss 1.165977 LR 0.000025 Time 0.550201 -2024-05-11 23:01:27,489 - Epoch: [134][ 200/ 813] Overall Loss 1.168218 Objective Loss 1.168218 LR 0.000025 Time 0.550207 -2024-05-11 23:02:19,684 - Epoch: [134][ 300/ 813] Overall Loss 1.171058 Objective Loss 1.171058 LR 0.000025 Time 0.540784 -2024-05-11 23:03:10,822 - Epoch: [134][ 400/ 813] Overall Loss 1.180814 Objective Loss 1.180814 LR 0.000025 Time 0.533429 -2024-05-11 23:04:02,468 - Epoch: [134][ 500/ 813] Overall Loss 1.180957 Objective Loss 1.180957 LR 0.000025 Time 0.530028 -2024-05-11 23:04:56,782 - Epoch: [134][ 600/ 813] Overall Loss 1.180940 Objective Loss 1.180940 LR 0.000025 Time 0.532211 -2024-05-11 23:05:49,679 - Epoch: [134][ 700/ 813] Overall Loss 1.182652 Objective Loss 1.182652 LR 0.000025 Time 0.531745 -2024-05-11 23:06:41,713 - Epoch: [134][ 800/ 813] Overall Loss 1.185752 Objective Loss 1.185752 LR 0.000025 Time 0.530317 -2024-05-11 23:06:47,085 - Epoch: [134][ 813/ 813] Overall Loss 1.186149 Objective Loss 1.186149 LR 0.000025 Time 0.528444 -2024-05-11 23:06:47,111 - --- validate (epoch=134)----------- -2024-05-11 23:06:47,112 - 3250 samples (16 per mini-batch) -2024-05-11 23:06:47,113 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 23:07:42,456 - Epoch: [134][ 100/ 204] Loss 1.296623 mAP 0.895349 -2024-05-11 23:08:35,706 - Epoch: [134][ 200/ 204] Loss 1.300470 mAP 0.894452 -2024-05-11 23:08:36,199 - Epoch: [134][ 204/ 204] Loss 1.303300 mAP 0.894399 -2024-05-11 23:08:36,226 - ==> mAP: 0.89440 Loss: 1.303 - -2024-05-11 23:08:36,229 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 23:08:36,229 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 23:08:36,250 - - -2024-05-11 23:08:36,250 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 23:09:31,200 - Epoch: [135][ 100/ 813] Overall Loss 1.164173 Objective Loss 1.164173 LR 0.000025 Time 0.549478 -2024-05-11 23:10:23,250 - Epoch: [135][ 200/ 813] Overall Loss 1.169513 Objective Loss 1.169513 LR 0.000025 Time 0.534978 -2024-05-11 23:11:17,126 - Epoch: [135][ 300/ 813] Overall Loss 1.174797 Objective Loss 1.174797 LR 0.000025 Time 0.536236 -2024-05-11 23:12:08,504 - Epoch: [135][ 400/ 813] Overall Loss 1.177043 Objective Loss 1.177043 LR 0.000025 Time 0.530618 -2024-05-11 23:13:00,541 - Epoch: [135][ 500/ 813] Overall Loss 1.180574 Objective Loss 1.180574 LR 0.000025 Time 0.528564 -2024-05-11 23:13:53,872 - Epoch: [135][ 600/ 813] Overall Loss 1.183555 Objective Loss 1.183555 LR 0.000025 Time 0.529344 -2024-05-11 23:14:46,359 - Epoch: [135][ 700/ 813] Overall Loss 1.183560 Objective Loss 1.183560 LR 0.000025 Time 0.528703 -2024-05-11 23:15:38,392 - Epoch: [135][ 800/ 813] Overall Loss 1.182181 Objective Loss 1.182181 LR 0.000025 Time 0.527654 -2024-05-11 23:15:43,845 - Epoch: [135][ 813/ 813] Overall Loss 1.181535 Objective Loss 1.181535 LR 0.000025 Time 0.525924 -2024-05-11 23:15:43,872 - --- validate (epoch=135)----------- -2024-05-11 23:15:43,872 - 3250 samples (16 per mini-batch) -2024-05-11 23:15:43,873 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 23:16:39,171 - Epoch: [135][ 100/ 204] Loss 1.324425 mAP 0.876489 -2024-05-11 23:17:32,810 - Epoch: [135][ 200/ 204] Loss 1.318262 mAP 0.887119 -2024-05-11 23:17:33,402 - Epoch: [135][ 204/ 204] Loss 1.315639 mAP 0.887169 -2024-05-11 23:17:33,429 - ==> mAP: 0.88717 Loss: 1.316 - -2024-05-11 23:17:33,432 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 23:17:33,432 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 23:17:33,453 - - -2024-05-11 23:17:33,453 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 23:18:27,752 - Epoch: [136][ 100/ 813] Overall Loss 1.161525 Objective Loss 1.161525 LR 0.000025 Time 0.542973 -2024-05-11 23:19:22,480 - Epoch: [136][ 200/ 813] Overall Loss 1.166588 Objective Loss 1.166588 LR 0.000025 Time 0.545117 -2024-05-11 23:20:15,512 - Epoch: [136][ 300/ 813] Overall Loss 1.181883 Objective Loss 1.181883 LR 0.000025 Time 0.540174 -2024-05-11 23:21:06,171 - Epoch: [136][ 400/ 813] Overall Loss 1.182500 Objective Loss 1.182500 LR 0.000025 Time 0.531775 -2024-05-11 23:21:59,554 - Epoch: [136][ 500/ 813] Overall Loss 1.185197 Objective Loss 1.185197 LR 0.000025 Time 0.532172 -2024-05-11 23:22:53,992 - Epoch: [136][ 600/ 813] Overall Loss 1.184940 Objective Loss 1.184940 LR 0.000025 Time 0.534200 -2024-05-11 23:23:46,023 - Epoch: [136][ 700/ 813] Overall Loss 1.181687 Objective Loss 1.181687 LR 0.000025 Time 0.532214 -2024-05-11 23:24:39,873 - Epoch: [136][ 800/ 813] Overall Loss 1.182712 Objective Loss 1.182712 LR 0.000025 Time 0.532996 -2024-05-11 23:24:43,604 - Epoch: [136][ 813/ 813] Overall Loss 1.183066 Objective Loss 1.183066 LR 0.000025 Time 0.529059 -2024-05-11 23:24:43,630 - --- validate (epoch=136)----------- -2024-05-11 23:24:43,631 - 3250 samples (16 per mini-batch) -2024-05-11 23:24:43,632 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 23:25:39,007 - Epoch: [136][ 100/ 204] Loss 1.298770 mAP 0.886736 -2024-05-11 23:26:32,786 - Epoch: [136][ 200/ 204] Loss 1.301071 mAP 0.876349 -2024-05-11 23:26:33,457 - Epoch: [136][ 204/ 204] Loss 1.301050 mAP 0.876350 -2024-05-11 23:26:33,481 - ==> mAP: 0.87635 Loss: 1.301 - -2024-05-11 23:26:33,485 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 23:26:33,485 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 23:26:33,505 - - -2024-05-11 23:26:33,505 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 23:27:30,510 - Epoch: [137][ 100/ 813] Overall Loss 1.138424 Objective Loss 1.138424 LR 0.000025 Time 0.570022 -2024-05-11 23:28:24,451 - Epoch: [137][ 200/ 813] Overall Loss 1.167448 Objective Loss 1.167448 LR 0.000025 Time 0.554711 -2024-05-11 23:29:16,901 - Epoch: [137][ 300/ 813] Overall Loss 1.179071 Objective Loss 1.179071 LR 0.000025 Time 0.544635 -2024-05-11 23:30:07,015 - Epoch: [137][ 400/ 813] Overall Loss 1.181451 Objective Loss 1.181451 LR 0.000025 Time 0.533759 -2024-05-11 23:30:58,167 - Epoch: [137][ 500/ 813] Overall Loss 1.183473 Objective Loss 1.183473 LR 0.000025 Time 0.529306 -2024-05-11 23:31:50,800 - Epoch: [137][ 600/ 813] Overall Loss 1.188556 Objective Loss 1.188556 LR 0.000025 Time 0.528808 -2024-05-11 23:32:44,338 - Epoch: [137][ 700/ 813] Overall Loss 1.187287 Objective Loss 1.187287 LR 0.000025 Time 0.529741 -2024-05-11 23:33:36,798 - Epoch: [137][ 800/ 813] Overall Loss 1.187159 Objective Loss 1.187159 LR 0.000025 Time 0.529096 -2024-05-11 23:33:42,456 - Epoch: [137][ 813/ 813] Overall Loss 1.186542 Objective Loss 1.186542 LR 0.000025 Time 0.527592 -2024-05-11 23:33:42,484 - --- validate (epoch=137)----------- -2024-05-11 23:33:42,485 - 3250 samples (16 per mini-batch) -2024-05-11 23:33:42,486 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 23:34:37,584 - Epoch: [137][ 100/ 204] Loss 1.307732 mAP 0.875402 -2024-05-11 23:35:30,764 - Epoch: [137][ 200/ 204] Loss 1.306495 mAP 0.885770 -2024-05-11 23:35:31,237 - Epoch: [137][ 204/ 204] Loss 1.304745 mAP 0.885785 -2024-05-11 23:35:31,263 - ==> mAP: 0.88578 Loss: 1.305 - -2024-05-11 23:35:31,266 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 23:35:31,267 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 23:35:31,287 - - -2024-05-11 23:35:31,287 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 23:36:27,304 - Epoch: [138][ 100/ 813] Overall Loss 1.153139 Objective Loss 1.153139 LR 0.000025 Time 0.560145 -2024-05-11 23:37:20,764 - Epoch: [138][ 200/ 813] Overall Loss 1.162225 Objective Loss 1.162225 LR 0.000025 Time 0.547350 -2024-05-11 23:38:13,451 - Epoch: [138][ 300/ 813] Overall Loss 1.169354 Objective Loss 1.169354 LR 0.000025 Time 0.540520 -2024-05-11 23:39:03,476 - Epoch: [138][ 400/ 813] Overall Loss 1.179392 Objective Loss 1.179392 LR 0.000025 Time 0.530442 -2024-05-11 23:39:55,939 - Epoch: [138][ 500/ 813] Overall Loss 1.184979 Objective Loss 1.184979 LR 0.000025 Time 0.529276 -2024-05-11 23:40:49,492 - Epoch: [138][ 600/ 813] Overall Loss 1.186608 Objective Loss 1.186608 LR 0.000025 Time 0.530315 -2024-05-11 23:41:43,385 - Epoch: [138][ 700/ 813] Overall Loss 1.186163 Objective Loss 1.186163 LR 0.000025 Time 0.531529 -2024-05-11 23:42:35,353 - Epoch: [138][ 800/ 813] Overall Loss 1.186367 Objective Loss 1.186367 LR 0.000025 Time 0.530046 -2024-05-11 23:42:39,993 - Epoch: [138][ 813/ 813] Overall Loss 1.186882 Objective Loss 1.186882 LR 0.000025 Time 0.527278 -2024-05-11 23:42:40,019 - --- validate (epoch=138)----------- -2024-05-11 23:42:40,020 - 3250 samples (16 per mini-batch) -2024-05-11 23:42:40,021 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 23:43:34,981 - Epoch: [138][ 100/ 204] Loss 1.324361 mAP 0.880100 -2024-05-11 23:44:28,914 - Epoch: [138][ 200/ 204] Loss 1.316336 mAP 0.878442 -2024-05-11 23:44:29,360 - Epoch: [138][ 204/ 204] Loss 1.319011 mAP 0.878453 -2024-05-11 23:44:29,386 - ==> mAP: 0.87845 Loss: 1.319 - -2024-05-11 23:44:29,389 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 23:44:29,389 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 23:44:29,410 - - -2024-05-11 23:44:29,410 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 23:45:24,698 - Epoch: [139][ 100/ 813] Overall Loss 1.138009 Objective Loss 1.138009 LR 0.000025 Time 0.552860 -2024-05-11 23:46:18,662 - Epoch: [139][ 200/ 813] Overall Loss 1.163358 Objective Loss 1.163358 LR 0.000025 Time 0.546242 -2024-05-11 23:47:11,319 - Epoch: [139][ 300/ 813] Overall Loss 1.173129 Objective Loss 1.173129 LR 0.000025 Time 0.539679 -2024-05-11 23:48:01,545 - Epoch: [139][ 400/ 813] Overall Loss 1.178889 Objective Loss 1.178889 LR 0.000025 Time 0.530320 -2024-05-11 23:48:52,942 - Epoch: [139][ 500/ 813] Overall Loss 1.177120 Objective Loss 1.177120 LR 0.000025 Time 0.527048 -2024-05-11 23:49:46,966 - Epoch: [139][ 600/ 813] Overall Loss 1.176446 Objective Loss 1.176446 LR 0.000025 Time 0.529218 -2024-05-11 23:50:39,883 - Epoch: [139][ 700/ 813] Overall Loss 1.177437 Objective Loss 1.177437 LR 0.000025 Time 0.529209 -2024-05-11 23:51:32,423 - Epoch: [139][ 800/ 813] Overall Loss 1.176962 Objective Loss 1.176962 LR 0.000025 Time 0.528730 -2024-05-11 23:51:37,246 - Epoch: [139][ 813/ 813] Overall Loss 1.176831 Objective Loss 1.176831 LR 0.000025 Time 0.526201 -2024-05-11 23:51:37,273 - --- validate (epoch=139)----------- -2024-05-11 23:51:37,273 - 3250 samples (16 per mini-batch) -2024-05-11 23:51:37,274 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-11 23:52:31,513 - Epoch: [139][ 100/ 204] Loss 1.275929 mAP 0.897443 -2024-05-11 23:53:24,963 - Epoch: [139][ 200/ 204] Loss 1.281496 mAP 0.897581 -2024-05-11 23:53:25,172 - Epoch: [139][ 204/ 204] Loss 1.278437 mAP 0.897553 -2024-05-11 23:53:25,198 - ==> mAP: 0.89755 Loss: 1.278 - -2024-05-11 23:53:25,202 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-11 23:53:25,202 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-11 23:53:25,223 - - -2024-05-11 23:53:25,223 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-11 23:54:20,699 - Epoch: [140][ 100/ 813] Overall Loss 1.132789 Objective Loss 1.132789 LR 0.000025 Time 0.554733 -2024-05-11 23:55:14,197 - Epoch: [140][ 200/ 813] Overall Loss 1.153770 Objective Loss 1.153770 LR 0.000025 Time 0.544849 -2024-05-11 23:56:05,489 - Epoch: [140][ 300/ 813] Overall Loss 1.165898 Objective Loss 1.165898 LR 0.000025 Time 0.534201 -2024-05-11 23:56:55,906 - Epoch: [140][ 400/ 813] Overall Loss 1.176482 Objective Loss 1.176482 LR 0.000025 Time 0.526677 -2024-05-11 23:57:48,605 - Epoch: [140][ 500/ 813] Overall Loss 1.178600 Objective Loss 1.178600 LR 0.000025 Time 0.526736 -2024-05-11 23:58:42,578 - Epoch: [140][ 600/ 813] Overall Loss 1.179130 Objective Loss 1.179130 LR 0.000025 Time 0.528899 -2024-05-11 23:59:35,952 - Epoch: [140][ 700/ 813] Overall Loss 1.176370 Objective Loss 1.176370 LR 0.000025 Time 0.529589 -2024-05-12 00:00:28,035 - Epoch: [140][ 800/ 813] Overall Loss 1.179468 Objective Loss 1.179468 LR 0.000025 Time 0.528492 -2024-05-12 00:00:32,820 - Epoch: [140][ 813/ 813] Overall Loss 1.179498 Objective Loss 1.179498 LR 0.000025 Time 0.525923 -2024-05-12 00:00:32,846 - --- validate (epoch=140)----------- -2024-05-12 00:00:32,847 - 3250 samples (16 per mini-batch) -2024-05-12 00:00:32,848 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 00:01:26,880 - Epoch: [140][ 100/ 204] Loss 1.283491 mAP 0.896751 -2024-05-12 00:02:19,397 - Epoch: [140][ 200/ 204] Loss 1.302275 mAP 0.886074 -2024-05-12 00:02:20,502 - Epoch: [140][ 204/ 204] Loss 1.306338 mAP 0.886077 -2024-05-12 00:02:20,528 - ==> mAP: 0.88608 Loss: 1.306 - -2024-05-12 00:02:20,530 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 00:02:20,531 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 00:02:20,551 - - -2024-05-12 00:02:20,551 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 00:03:15,571 - Epoch: [141][ 100/ 813] Overall Loss 1.128011 Objective Loss 1.128011 LR 0.000025 Time 0.550172 -2024-05-12 00:04:09,166 - Epoch: [141][ 200/ 813] Overall Loss 1.138620 Objective Loss 1.138620 LR 0.000025 Time 0.543046 -2024-05-12 00:05:01,689 - Epoch: [141][ 300/ 813] Overall Loss 1.161479 Objective Loss 1.161479 LR 0.000025 Time 0.537104 -2024-05-12 00:05:52,921 - Epoch: [141][ 400/ 813] Overall Loss 1.168446 Objective Loss 1.168446 LR 0.000025 Time 0.530891 -2024-05-12 00:06:45,852 - Epoch: [141][ 500/ 813] Overall Loss 1.177521 Objective Loss 1.177521 LR 0.000025 Time 0.530570 -2024-05-12 00:07:40,077 - Epoch: [141][ 600/ 813] Overall Loss 1.181549 Objective Loss 1.181549 LR 0.000025 Time 0.532515 -2024-05-12 00:08:32,606 - Epoch: [141][ 700/ 813] Overall Loss 1.181278 Objective Loss 1.181278 LR 0.000025 Time 0.531480 -2024-05-12 00:09:25,040 - Epoch: [141][ 800/ 813] Overall Loss 1.181336 Objective Loss 1.181336 LR 0.000025 Time 0.530585 -2024-05-12 00:09:30,339 - Epoch: [141][ 813/ 813] Overall Loss 1.180567 Objective Loss 1.180567 LR 0.000025 Time 0.528619 -2024-05-12 00:09:30,364 - --- validate (epoch=141)----------- -2024-05-12 00:09:30,365 - 3250 samples (16 per mini-batch) -2024-05-12 00:09:30,366 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 00:10:24,465 - Epoch: [141][ 100/ 204] Loss 1.279893 mAP 0.885946 -2024-05-12 00:11:19,409 - Epoch: [141][ 200/ 204] Loss 1.288158 mAP 0.887282 -2024-05-12 00:11:19,828 - Epoch: [141][ 204/ 204] Loss 1.298110 mAP 0.887284 -2024-05-12 00:11:19,854 - ==> mAP: 0.88728 Loss: 1.298 - -2024-05-12 00:11:19,857 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 00:11:19,857 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 00:11:19,877 - - -2024-05-12 00:11:19,877 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 00:12:15,909 - Epoch: [142][ 100/ 813] Overall Loss 1.144798 Objective Loss 1.144798 LR 0.000025 Time 0.560293 -2024-05-12 00:13:09,362 - Epoch: [142][ 200/ 813] Overall Loss 1.153402 Objective Loss 1.153402 LR 0.000025 Time 0.547402 -2024-05-12 00:14:02,274 - Epoch: [142][ 300/ 813] Overall Loss 1.163050 Objective Loss 1.163050 LR 0.000025 Time 0.541297 -2024-05-12 00:14:53,412 - Epoch: [142][ 400/ 813] Overall Loss 1.170399 Objective Loss 1.170399 LR 0.000025 Time 0.533814 -2024-05-12 00:15:44,192 - Epoch: [142][ 500/ 813] Overall Loss 1.171302 Objective Loss 1.171302 LR 0.000025 Time 0.528609 -2024-05-12 00:16:38,812 - Epoch: [142][ 600/ 813] Overall Loss 1.174563 Objective Loss 1.174563 LR 0.000025 Time 0.531537 -2024-05-12 00:17:31,454 - Epoch: [142][ 700/ 813] Overall Loss 1.172332 Objective Loss 1.172332 LR 0.000025 Time 0.530805 -2024-05-12 00:18:22,445 - Epoch: [142][ 800/ 813] Overall Loss 1.173234 Objective Loss 1.173234 LR 0.000025 Time 0.528191 -2024-05-12 00:18:27,305 - Epoch: [142][ 813/ 813] Overall Loss 1.173192 Objective Loss 1.173192 LR 0.000025 Time 0.525721 -2024-05-12 00:18:27,331 - --- validate (epoch=142)----------- -2024-05-12 00:18:27,332 - 3250 samples (16 per mini-batch) -2024-05-12 00:18:27,333 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 00:19:22,325 - Epoch: [142][ 100/ 204] Loss 1.333454 mAP 0.879502 -2024-05-12 00:20:14,195 - Epoch: [142][ 200/ 204] Loss 1.310573 mAP 0.888618 -2024-05-12 00:20:14,757 - Epoch: [142][ 204/ 204] Loss 1.309993 mAP 0.888590 -2024-05-12 00:20:14,782 - ==> mAP: 0.88859 Loss: 1.310 - -2024-05-12 00:20:14,785 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 00:20:14,785 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 00:20:14,806 - - -2024-05-12 00:20:14,806 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 00:21:09,718 - Epoch: [143][ 100/ 813] Overall Loss 1.147972 Objective Loss 1.147972 LR 0.000025 Time 0.549101 -2024-05-12 00:22:02,373 - Epoch: [143][ 200/ 813] Overall Loss 1.165116 Objective Loss 1.165116 LR 0.000025 Time 0.537802 -2024-05-12 00:22:55,593 - Epoch: [143][ 300/ 813] Overall Loss 1.179555 Objective Loss 1.179555 LR 0.000025 Time 0.535914 -2024-05-12 00:23:46,754 - Epoch: [143][ 400/ 813] Overall Loss 1.183819 Objective Loss 1.183819 LR 0.000025 Time 0.529836 -2024-05-12 00:24:38,793 - Epoch: [143][ 500/ 813] Overall Loss 1.184722 Objective Loss 1.184722 LR 0.000025 Time 0.527944 -2024-05-12 00:25:31,915 - Epoch: [143][ 600/ 813] Overall Loss 1.182515 Objective Loss 1.182515 LR 0.000025 Time 0.528487 -2024-05-12 00:26:25,072 - Epoch: [143][ 700/ 813] Overall Loss 1.179927 Objective Loss 1.179927 LR 0.000025 Time 0.528913 -2024-05-12 00:27:16,946 - Epoch: [143][ 800/ 813] Overall Loss 1.181525 Objective Loss 1.181525 LR 0.000025 Time 0.527634 -2024-05-12 00:27:22,740 - Epoch: [143][ 813/ 813] Overall Loss 1.179845 Objective Loss 1.179845 LR 0.000025 Time 0.526323 -2024-05-12 00:27:22,767 - --- validate (epoch=143)----------- -2024-05-12 00:27:22,768 - 3250 samples (16 per mini-batch) -2024-05-12 00:27:22,769 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 00:28:18,045 - Epoch: [143][ 100/ 204] Loss 1.308170 mAP 0.895936 -2024-05-12 00:29:12,207 - Epoch: [143][ 200/ 204] Loss 1.304287 mAP 0.895760 -2024-05-12 00:29:12,489 - Epoch: [143][ 204/ 204] Loss 1.303278 mAP 0.895764 -2024-05-12 00:29:12,515 - ==> mAP: 0.89576 Loss: 1.303 - -2024-05-12 00:29:12,518 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 00:29:12,518 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 00:29:12,538 - - -2024-05-12 00:29:12,538 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 00:30:07,631 - Epoch: [144][ 100/ 813] Overall Loss 1.157441 Objective Loss 1.157441 LR 0.000025 Time 0.550904 -2024-05-12 00:31:00,611 - Epoch: [144][ 200/ 813] Overall Loss 1.167595 Objective Loss 1.167595 LR 0.000025 Time 0.540347 -2024-05-12 00:31:52,671 - Epoch: [144][ 300/ 813] Overall Loss 1.178090 Objective Loss 1.178090 LR 0.000025 Time 0.533757 -2024-05-12 00:32:43,424 - Epoch: [144][ 400/ 813] Overall Loss 1.177608 Objective Loss 1.177608 LR 0.000025 Time 0.527196 -2024-05-12 00:33:35,933 - Epoch: [144][ 500/ 813] Overall Loss 1.178257 Objective Loss 1.178257 LR 0.000025 Time 0.526773 -2024-05-12 00:34:30,750 - Epoch: [144][ 600/ 813] Overall Loss 1.179592 Objective Loss 1.179592 LR 0.000025 Time 0.530336 -2024-05-12 00:35:22,775 - Epoch: [144][ 700/ 813] Overall Loss 1.179601 Objective Loss 1.179601 LR 0.000025 Time 0.528894 -2024-05-12 00:36:16,271 - Epoch: [144][ 800/ 813] Overall Loss 1.178937 Objective Loss 1.178937 LR 0.000025 Time 0.529642 -2024-05-12 00:36:20,194 - Epoch: [144][ 813/ 813] Overall Loss 1.178413 Objective Loss 1.178413 LR 0.000025 Time 0.525998 -2024-05-12 00:36:20,220 - --- validate (epoch=144)----------- -2024-05-12 00:36:20,221 - 3250 samples (16 per mini-batch) -2024-05-12 00:36:20,222 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 00:37:15,024 - Epoch: [144][ 100/ 204] Loss 1.307268 mAP 0.887527 -2024-05-12 00:38:08,747 - Epoch: [144][ 200/ 204] Loss 1.298787 mAP 0.887086 -2024-05-12 00:38:09,437 - Epoch: [144][ 204/ 204] Loss 1.307918 mAP 0.887091 -2024-05-12 00:38:09,463 - ==> mAP: 0.88709 Loss: 1.308 - -2024-05-12 00:38:09,466 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 00:38:09,466 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 00:38:09,487 - - -2024-05-12 00:38:09,487 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 00:39:05,585 - Epoch: [145][ 100/ 813] Overall Loss 1.152057 Objective Loss 1.152057 LR 0.000025 Time 0.560961 -2024-05-12 00:39:59,930 - Epoch: [145][ 200/ 813] Overall Loss 1.147317 Objective Loss 1.147317 LR 0.000025 Time 0.552181 -2024-05-12 00:40:52,028 - Epoch: [145][ 300/ 813] Overall Loss 1.168067 Objective Loss 1.168067 LR 0.000025 Time 0.541776 -2024-05-12 00:41:43,103 - Epoch: [145][ 400/ 813] Overall Loss 1.169023 Objective Loss 1.169023 LR 0.000025 Time 0.534016 -2024-05-12 00:42:34,390 - Epoch: [145][ 500/ 813] Overall Loss 1.171586 Objective Loss 1.171586 LR 0.000025 Time 0.529783 -2024-05-12 00:43:27,627 - Epoch: [145][ 600/ 813] Overall Loss 1.170688 Objective Loss 1.170688 LR 0.000025 Time 0.530212 -2024-05-12 00:44:21,771 - Epoch: [145][ 700/ 813] Overall Loss 1.171509 Objective Loss 1.171509 LR 0.000025 Time 0.531811 -2024-05-12 00:45:14,645 - Epoch: [145][ 800/ 813] Overall Loss 1.172823 Objective Loss 1.172823 LR 0.000025 Time 0.531425 -2024-05-12 00:45:19,201 - Epoch: [145][ 813/ 813] Overall Loss 1.172839 Objective Loss 1.172839 LR 0.000025 Time 0.528532 -2024-05-12 00:45:19,227 - --- validate (epoch=145)----------- -2024-05-12 00:45:19,227 - 3250 samples (16 per mini-batch) -2024-05-12 00:45:19,228 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 00:46:13,986 - Epoch: [145][ 100/ 204] Loss 1.278389 mAP 0.898218 -2024-05-12 00:47:06,689 - Epoch: [145][ 200/ 204] Loss 1.283447 mAP 0.897414 -2024-05-12 00:47:07,208 - Epoch: [145][ 204/ 204] Loss 1.284104 mAP 0.897397 -2024-05-12 00:47:07,234 - ==> mAP: 0.89740 Loss: 1.284 - -2024-05-12 00:47:07,237 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 00:47:07,237 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 00:47:07,258 - - -2024-05-12 00:47:07,258 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 00:48:02,591 - Epoch: [146][ 100/ 813] Overall Loss 1.155656 Objective Loss 1.155656 LR 0.000025 Time 0.553309 -2024-05-12 00:48:56,226 - Epoch: [146][ 200/ 813] Overall Loss 1.167039 Objective Loss 1.167039 LR 0.000025 Time 0.544808 -2024-05-12 00:49:49,827 - Epoch: [146][ 300/ 813] Overall Loss 1.179784 Objective Loss 1.179784 LR 0.000025 Time 0.541869 -2024-05-12 00:50:41,091 - Epoch: [146][ 400/ 813] Overall Loss 1.188991 Objective Loss 1.188991 LR 0.000025 Time 0.534559 -2024-05-12 00:51:35,346 - Epoch: [146][ 500/ 813] Overall Loss 1.185502 Objective Loss 1.185502 LR 0.000025 Time 0.536153 -2024-05-12 00:52:27,894 - Epoch: [146][ 600/ 813] Overall Loss 1.183841 Objective Loss 1.183841 LR 0.000025 Time 0.534367 -2024-05-12 00:53:21,691 - Epoch: [146][ 700/ 813] Overall Loss 1.182860 Objective Loss 1.182860 LR 0.000025 Time 0.534879 -2024-05-12 00:54:13,208 - Epoch: [146][ 800/ 813] Overall Loss 1.184563 Objective Loss 1.184563 LR 0.000025 Time 0.532408 -2024-05-12 00:54:18,552 - Epoch: [146][ 813/ 813] Overall Loss 1.184972 Objective Loss 1.184972 LR 0.000025 Time 0.530467 -2024-05-12 00:54:18,588 - --- validate (epoch=146)----------- -2024-05-12 00:54:18,589 - 3250 samples (16 per mini-batch) -2024-05-12 00:54:18,590 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 00:55:14,794 - Epoch: [146][ 100/ 204] Loss 1.288050 mAP 0.886491 -2024-05-12 00:56:07,455 - Epoch: [146][ 200/ 204] Loss 1.302662 mAP 0.885864 -2024-05-12 00:56:08,030 - Epoch: [146][ 204/ 204] Loss 1.301606 mAP 0.885845 -2024-05-12 00:56:08,056 - ==> mAP: 0.88585 Loss: 1.302 - -2024-05-12 00:56:08,058 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 00:56:08,058 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 00:56:08,079 - - -2024-05-12 00:56:08,079 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 00:57:02,919 - Epoch: [147][ 100/ 813] Overall Loss 1.137913 Objective Loss 1.137913 LR 0.000025 Time 0.548378 -2024-05-12 00:57:57,116 - Epoch: [147][ 200/ 813] Overall Loss 1.156726 Objective Loss 1.156726 LR 0.000025 Time 0.545161 -2024-05-12 00:58:49,283 - Epoch: [147][ 300/ 813] Overall Loss 1.171922 Objective Loss 1.171922 LR 0.000025 Time 0.537325 -2024-05-12 00:59:39,500 - Epoch: [147][ 400/ 813] Overall Loss 1.179847 Objective Loss 1.179847 LR 0.000025 Time 0.528532 -2024-05-12 01:00:32,246 - Epoch: [147][ 500/ 813] Overall Loss 1.179569 Objective Loss 1.179569 LR 0.000025 Time 0.528315 -2024-05-12 01:01:26,814 - Epoch: [147][ 600/ 813] Overall Loss 1.178504 Objective Loss 1.178504 LR 0.000025 Time 0.531200 -2024-05-12 01:02:20,850 - Epoch: [147][ 700/ 813] Overall Loss 1.176727 Objective Loss 1.176727 LR 0.000025 Time 0.532506 -2024-05-12 01:03:13,870 - Epoch: [147][ 800/ 813] Overall Loss 1.179313 Objective Loss 1.179313 LR 0.000025 Time 0.532216 -2024-05-12 01:03:17,866 - Epoch: [147][ 813/ 813] Overall Loss 1.178863 Objective Loss 1.178863 LR 0.000025 Time 0.528621 -2024-05-12 01:03:17,893 - --- validate (epoch=147)----------- -2024-05-12 01:03:17,894 - 3250 samples (16 per mini-batch) -2024-05-12 01:03:17,895 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 01:04:13,565 - Epoch: [147][ 100/ 204] Loss 1.307368 mAP 0.888658 -2024-05-12 01:05:05,770 - Epoch: [147][ 200/ 204] Loss 1.302467 mAP 0.888083 -2024-05-12 01:05:06,043 - Epoch: [147][ 204/ 204] Loss 1.307443 mAP 0.888075 -2024-05-12 01:05:06,071 - ==> mAP: 0.88807 Loss: 1.307 - -2024-05-12 01:05:06,074 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 01:05:06,074 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 01:05:06,094 - - -2024-05-12 01:05:06,095 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 01:06:01,151 - Epoch: [148][ 100/ 813] Overall Loss 1.158313 Objective Loss 1.158313 LR 0.000025 Time 0.550540 -2024-05-12 01:06:54,731 - Epoch: [148][ 200/ 813] Overall Loss 1.164533 Objective Loss 1.164533 LR 0.000025 Time 0.543162 -2024-05-12 01:07:46,161 - Epoch: [148][ 300/ 813] Overall Loss 1.177124 Objective Loss 1.177124 LR 0.000025 Time 0.533539 -2024-05-12 01:08:37,487 - Epoch: [148][ 400/ 813] Overall Loss 1.178058 Objective Loss 1.178058 LR 0.000025 Time 0.528457 -2024-05-12 01:09:30,191 - Epoch: [148][ 500/ 813] Overall Loss 1.182582 Objective Loss 1.182582 LR 0.000025 Time 0.528165 -2024-05-12 01:10:23,705 - Epoch: [148][ 600/ 813] Overall Loss 1.183261 Objective Loss 1.183261 LR 0.000025 Time 0.529325 -2024-05-12 01:11:15,377 - Epoch: [148][ 700/ 813] Overall Loss 1.184389 Objective Loss 1.184389 LR 0.000025 Time 0.527523 -2024-05-12 01:12:08,841 - Epoch: [148][ 800/ 813] Overall Loss 1.184080 Objective Loss 1.184080 LR 0.000025 Time 0.528408 -2024-05-12 01:12:13,917 - Epoch: [148][ 813/ 813] Overall Loss 1.184145 Objective Loss 1.184145 LR 0.000025 Time 0.526201 -2024-05-12 01:12:13,951 - --- validate (epoch=148)----------- -2024-05-12 01:12:13,952 - 3250 samples (16 per mini-batch) -2024-05-12 01:12:13,953 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 01:13:09,608 - Epoch: [148][ 100/ 204] Loss 1.292184 mAP 0.885510 -2024-05-12 01:14:02,733 - Epoch: [148][ 200/ 204] Loss 1.316300 mAP 0.885137 -2024-05-12 01:14:02,954 - Epoch: [148][ 204/ 204] Loss 1.316213 mAP 0.885112 -2024-05-12 01:14:02,979 - ==> mAP: 0.88511 Loss: 1.316 - -2024-05-12 01:14:02,982 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 01:14:02,982 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 01:14:03,003 - - -2024-05-12 01:14:03,003 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 01:14:59,530 - Epoch: [149][ 100/ 813] Overall Loss 1.147310 Objective Loss 1.147310 LR 0.000025 Time 0.565251 -2024-05-12 01:15:53,474 - Epoch: [149][ 200/ 813] Overall Loss 1.160546 Objective Loss 1.160546 LR 0.000025 Time 0.552338 -2024-05-12 01:16:46,451 - Epoch: [149][ 300/ 813] Overall Loss 1.174064 Objective Loss 1.174064 LR 0.000025 Time 0.544810 -2024-05-12 01:17:36,495 - Epoch: [149][ 400/ 813] Overall Loss 1.177999 Objective Loss 1.177999 LR 0.000025 Time 0.533714 -2024-05-12 01:18:29,722 - Epoch: [149][ 500/ 813] Overall Loss 1.184007 Objective Loss 1.184007 LR 0.000025 Time 0.533423 -2024-05-12 01:19:23,227 - Epoch: [149][ 600/ 813] Overall Loss 1.184862 Objective Loss 1.184862 LR 0.000025 Time 0.533692 -2024-05-12 01:20:15,742 - Epoch: [149][ 700/ 813] Overall Loss 1.182454 Objective Loss 1.182454 LR 0.000025 Time 0.532463 -2024-05-12 01:21:08,047 - Epoch: [149][ 800/ 813] Overall Loss 1.185371 Objective Loss 1.185371 LR 0.000025 Time 0.531280 -2024-05-12 01:21:12,923 - Epoch: [149][ 813/ 813] Overall Loss 1.184384 Objective Loss 1.184384 LR 0.000025 Time 0.528783 -2024-05-12 01:21:12,948 - --- validate (epoch=149)----------- -2024-05-12 01:21:12,949 - 3250 samples (16 per mini-batch) -2024-05-12 01:21:12,950 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 01:22:08,877 - Epoch: [149][ 100/ 204] Loss 1.283941 mAP 0.887064 -2024-05-12 01:23:02,364 - Epoch: [149][ 200/ 204] Loss 1.282663 mAP 0.886731 -2024-05-12 01:23:03,280 - Epoch: [149][ 204/ 204] Loss 1.284331 mAP 0.886748 -2024-05-12 01:23:03,305 - ==> mAP: 0.88675 Loss: 1.284 - -2024-05-12 01:23:03,308 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 01:23:03,309 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 01:23:03,331 - - -2024-05-12 01:23:03,331 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 01:23:58,815 - Epoch: [150][ 100/ 813] Overall Loss 1.131674 Objective Loss 1.131674 LR 0.000025 Time 0.554819 -2024-05-12 01:24:53,368 - Epoch: [150][ 200/ 813] Overall Loss 1.144561 Objective Loss 1.144561 LR 0.000025 Time 0.550157 -2024-05-12 01:25:46,292 - Epoch: [150][ 300/ 813] Overall Loss 1.158075 Objective Loss 1.158075 LR 0.000025 Time 0.543180 -2024-05-12 01:26:36,212 - Epoch: [150][ 400/ 813] Overall Loss 1.162134 Objective Loss 1.162134 LR 0.000025 Time 0.532181 -2024-05-12 01:27:28,897 - Epoch: [150][ 500/ 813] Overall Loss 1.160003 Objective Loss 1.160003 LR 0.000025 Time 0.531111 -2024-05-12 01:28:22,062 - Epoch: [150][ 600/ 813] Overall Loss 1.161847 Objective Loss 1.161847 LR 0.000025 Time 0.531199 -2024-05-12 01:29:13,592 - Epoch: [150][ 700/ 813] Overall Loss 1.164277 Objective Loss 1.164277 LR 0.000025 Time 0.528926 -2024-05-12 01:30:06,737 - Epoch: [150][ 800/ 813] Overall Loss 1.164822 Objective Loss 1.164822 LR 0.000025 Time 0.529239 -2024-05-12 01:30:12,266 - Epoch: [150][ 813/ 813] Overall Loss 1.163757 Objective Loss 1.163757 LR 0.000025 Time 0.527576 -2024-05-12 01:30:12,300 - --- validate (epoch=150)----------- -2024-05-12 01:30:12,301 - 3250 samples (16 per mini-batch) -2024-05-12 01:30:12,302 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 01:31:07,296 - Epoch: [150][ 100/ 204] Loss 1.321377 mAP 0.885823 -2024-05-12 01:32:00,070 - Epoch: [150][ 200/ 204] Loss 1.312467 mAP 0.895043 -2024-05-12 01:32:00,639 - Epoch: [150][ 204/ 204] Loss 1.312325 mAP 0.885393 -2024-05-12 01:32:00,664 - ==> mAP: 0.88539 Loss: 1.312 - -2024-05-12 01:32:00,667 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 01:32:00,667 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 01:32:00,688 - - -2024-05-12 01:32:00,688 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 01:32:56,675 - Epoch: [151][ 100/ 813] Overall Loss 1.148848 Objective Loss 1.148848 LR 0.000025 Time 0.559854 -2024-05-12 01:33:50,627 - Epoch: [151][ 200/ 813] Overall Loss 1.162594 Objective Loss 1.162594 LR 0.000025 Time 0.549676 -2024-05-12 01:34:43,453 - Epoch: [151][ 300/ 813] Overall Loss 1.178718 Objective Loss 1.178718 LR 0.000025 Time 0.542534 -2024-05-12 01:35:33,018 - Epoch: [151][ 400/ 813] Overall Loss 1.187559 Objective Loss 1.187559 LR 0.000025 Time 0.530801 -2024-05-12 01:36:25,646 - Epoch: [151][ 500/ 813] Overall Loss 1.187911 Objective Loss 1.187911 LR 0.000025 Time 0.529893 -2024-05-12 01:37:20,237 - Epoch: [151][ 600/ 813] Overall Loss 1.188864 Objective Loss 1.188864 LR 0.000025 Time 0.532556 -2024-05-12 01:38:13,808 - Epoch: [151][ 700/ 813] Overall Loss 1.187871 Objective Loss 1.187871 LR 0.000025 Time 0.533005 -2024-05-12 01:39:04,308 - Epoch: [151][ 800/ 813] Overall Loss 1.186720 Objective Loss 1.186720 LR 0.000025 Time 0.529498 -2024-05-12 01:39:11,181 - Epoch: [151][ 813/ 813] Overall Loss 1.186682 Objective Loss 1.186682 LR 0.000025 Time 0.529486 -2024-05-12 01:39:11,209 - --- validate (epoch=151)----------- -2024-05-12 01:39:11,210 - 3250 samples (16 per mini-batch) -2024-05-12 01:39:11,211 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 01:40:06,997 - Epoch: [151][ 100/ 204] Loss 1.307315 mAP 0.884294 -2024-05-12 01:40:59,331 - Epoch: [151][ 200/ 204] Loss 1.301798 mAP 0.886257 -2024-05-12 01:41:00,265 - Epoch: [151][ 204/ 204] Loss 1.301197 mAP 0.886255 -2024-05-12 01:41:00,291 - ==> mAP: 0.88625 Loss: 1.301 - -2024-05-12 01:41:00,295 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 01:41:00,295 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 01:41:00,315 - - -2024-05-12 01:41:00,316 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 01:41:56,000 - Epoch: [152][ 100/ 813] Overall Loss 1.163813 Objective Loss 1.163813 LR 0.000025 Time 0.556820 -2024-05-12 01:42:48,677 - Epoch: [152][ 200/ 813] Overall Loss 1.154629 Objective Loss 1.154629 LR 0.000025 Time 0.541775 -2024-05-12 01:43:40,298 - Epoch: [152][ 300/ 813] Overall Loss 1.171894 Objective Loss 1.171894 LR 0.000025 Time 0.533249 -2024-05-12 01:44:30,942 - Epoch: [152][ 400/ 813] Overall Loss 1.178232 Objective Loss 1.178232 LR 0.000025 Time 0.526541 -2024-05-12 01:45:24,668 - Epoch: [152][ 500/ 813] Overall Loss 1.178961 Objective Loss 1.178961 LR 0.000025 Time 0.528682 -2024-05-12 01:46:17,899 - Epoch: [152][ 600/ 813] Overall Loss 1.181964 Objective Loss 1.181964 LR 0.000025 Time 0.529279 -2024-05-12 01:47:10,878 - Epoch: [152][ 700/ 813] Overall Loss 1.179092 Objective Loss 1.179092 LR 0.000025 Time 0.529347 -2024-05-12 01:48:04,210 - Epoch: [152][ 800/ 813] Overall Loss 1.175923 Objective Loss 1.175923 LR 0.000025 Time 0.529842 -2024-05-12 01:48:09,590 - Epoch: [152][ 813/ 813] Overall Loss 1.176270 Objective Loss 1.176270 LR 0.000025 Time 0.527986 -2024-05-12 01:48:09,616 - --- validate (epoch=152)----------- -2024-05-12 01:48:09,617 - 3250 samples (16 per mini-batch) -2024-05-12 01:48:09,618 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 01:49:03,992 - Epoch: [152][ 100/ 204] Loss 1.291201 mAP 0.886748 -2024-05-12 01:49:58,220 - Epoch: [152][ 200/ 204] Loss 1.286311 mAP 0.887361 -2024-05-12 01:49:58,738 - Epoch: [152][ 204/ 204] Loss 1.290425 mAP 0.887354 -2024-05-12 01:49:58,762 - ==> mAP: 0.88735 Loss: 1.290 - -2024-05-12 01:49:58,766 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 01:49:58,766 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 01:49:58,777 - - -2024-05-12 01:49:58,777 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 01:50:54,610 - Epoch: [153][ 100/ 813] Overall Loss 1.138460 Objective Loss 1.138460 LR 0.000025 Time 0.558306 -2024-05-12 01:51:49,603 - Epoch: [153][ 200/ 813] Overall Loss 1.148755 Objective Loss 1.148755 LR 0.000025 Time 0.554109 -2024-05-12 01:52:41,201 - Epoch: [153][ 300/ 813] Overall Loss 1.158889 Objective Loss 1.158889 LR 0.000025 Time 0.541394 -2024-05-12 01:53:32,196 - Epoch: [153][ 400/ 813] Overall Loss 1.168272 Objective Loss 1.168272 LR 0.000025 Time 0.533530 -2024-05-12 01:54:23,100 - Epoch: [153][ 500/ 813] Overall Loss 1.169248 Objective Loss 1.169248 LR 0.000025 Time 0.528628 -2024-05-12 01:55:17,665 - Epoch: [153][ 600/ 813] Overall Loss 1.171287 Objective Loss 1.171287 LR 0.000025 Time 0.531462 -2024-05-12 01:56:11,748 - Epoch: [153][ 700/ 813] Overall Loss 1.174561 Objective Loss 1.174561 LR 0.000025 Time 0.532793 -2024-05-12 01:57:05,008 - Epoch: [153][ 800/ 813] Overall Loss 1.179344 Objective Loss 1.179344 LR 0.000025 Time 0.532765 -2024-05-12 01:57:10,250 - Epoch: [153][ 813/ 813] Overall Loss 1.179529 Objective Loss 1.179529 LR 0.000025 Time 0.530692 -2024-05-12 01:57:10,276 - --- validate (epoch=153)----------- -2024-05-12 01:57:10,277 - 3250 samples (16 per mini-batch) -2024-05-12 01:57:10,278 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 01:58:04,463 - Epoch: [153][ 100/ 204] Loss 1.298584 mAP 0.887391 -2024-05-12 01:58:59,811 - Epoch: [153][ 200/ 204] Loss 1.305037 mAP 0.886966 -2024-05-12 01:59:00,299 - Epoch: [153][ 204/ 204] Loss 1.304253 mAP 0.886949 -2024-05-12 01:59:00,325 - ==> mAP: 0.88695 Loss: 1.304 - -2024-05-12 01:59:00,328 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 01:59:00,328 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 01:59:00,349 - - -2024-05-12 01:59:00,349 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 01:59:55,845 - Epoch: [154][ 100/ 813] Overall Loss 1.133174 Objective Loss 1.133174 LR 0.000025 Time 0.554936 -2024-05-12 02:00:49,574 - Epoch: [154][ 200/ 813] Overall Loss 1.155783 Objective Loss 1.155783 LR 0.000025 Time 0.546106 -2024-05-12 02:01:40,601 - Epoch: [154][ 300/ 813] Overall Loss 1.163195 Objective Loss 1.163195 LR 0.000025 Time 0.534155 -2024-05-12 02:02:31,395 - Epoch: [154][ 400/ 813] Overall Loss 1.168795 Objective Loss 1.168795 LR 0.000025 Time 0.527590 -2024-05-12 02:03:23,591 - Epoch: [154][ 500/ 813] Overall Loss 1.172854 Objective Loss 1.172854 LR 0.000025 Time 0.526461 -2024-05-12 02:04:18,237 - Epoch: [154][ 600/ 813] Overall Loss 1.172918 Objective Loss 1.172918 LR 0.000025 Time 0.529789 -2024-05-12 02:05:10,129 - Epoch: [154][ 700/ 813] Overall Loss 1.175170 Objective Loss 1.175170 LR 0.000025 Time 0.528234 -2024-05-12 02:06:03,231 - Epoch: [154][ 800/ 813] Overall Loss 1.176345 Objective Loss 1.176345 LR 0.000025 Time 0.528580 -2024-05-12 02:06:07,687 - Epoch: [154][ 813/ 813] Overall Loss 1.175773 Objective Loss 1.175773 LR 0.000025 Time 0.525607 -2024-05-12 02:06:07,713 - --- validate (epoch=154)----------- -2024-05-12 02:06:07,714 - 3250 samples (16 per mini-batch) -2024-05-12 02:06:07,715 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 02:07:03,612 - Epoch: [154][ 100/ 204] Loss 1.311776 mAP 0.895502 -2024-05-12 02:07:59,314 - Epoch: [154][ 200/ 204] Loss 1.306627 mAP 0.886422 -2024-05-12 02:07:59,905 - Epoch: [154][ 204/ 204] Loss 1.304381 mAP 0.886299 -2024-05-12 02:07:59,931 - ==> mAP: 0.88630 Loss: 1.304 - -2024-05-12 02:07:59,935 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 02:07:59,935 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 02:07:59,956 - - -2024-05-12 02:07:59,956 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 02:08:54,878 - Epoch: [155][ 100/ 813] Overall Loss 1.131536 Objective Loss 1.131536 LR 0.000025 Time 0.549201 -2024-05-12 02:09:49,431 - Epoch: [155][ 200/ 813] Overall Loss 1.147716 Objective Loss 1.147716 LR 0.000025 Time 0.547343 -2024-05-12 02:10:43,549 - Epoch: [155][ 300/ 813] Overall Loss 1.161825 Objective Loss 1.161825 LR 0.000025 Time 0.545285 -2024-05-12 02:11:33,907 - Epoch: [155][ 400/ 813] Overall Loss 1.169168 Objective Loss 1.169168 LR 0.000025 Time 0.534841 -2024-05-12 02:12:26,172 - Epoch: [155][ 500/ 813] Overall Loss 1.172429 Objective Loss 1.172429 LR 0.000025 Time 0.532389 -2024-05-12 02:13:19,669 - Epoch: [155][ 600/ 813] Overall Loss 1.175063 Objective Loss 1.175063 LR 0.000025 Time 0.532816 -2024-05-12 02:14:13,338 - Epoch: [155][ 700/ 813] Overall Loss 1.175723 Objective Loss 1.175723 LR 0.000025 Time 0.533368 -2024-05-12 02:15:05,202 - Epoch: [155][ 800/ 813] Overall Loss 1.174085 Objective Loss 1.174085 LR 0.000025 Time 0.531525 -2024-05-12 02:15:10,709 - Epoch: [155][ 813/ 813] Overall Loss 1.174681 Objective Loss 1.174681 LR 0.000025 Time 0.529798 -2024-05-12 02:15:10,735 - --- validate (epoch=155)----------- -2024-05-12 02:15:10,735 - 3250 samples (16 per mini-batch) -2024-05-12 02:15:10,736 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 02:16:06,621 - Epoch: [155][ 100/ 204] Loss 1.310915 mAP 0.878004 -2024-05-12 02:16:58,508 - Epoch: [155][ 200/ 204] Loss 1.296486 mAP 0.878051 -2024-05-12 02:16:59,080 - Epoch: [155][ 204/ 204] Loss 1.294397 mAP 0.878075 -2024-05-12 02:16:59,104 - ==> mAP: 0.87807 Loss: 1.294 - -2024-05-12 02:16:59,108 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 02:16:59,108 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 02:16:59,128 - - -2024-05-12 02:16:59,129 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 02:17:54,918 - Epoch: [156][ 100/ 813] Overall Loss 1.130891 Objective Loss 1.130891 LR 0.000025 Time 0.557868 -2024-05-12 02:18:48,927 - Epoch: [156][ 200/ 813] Overall Loss 1.144789 Objective Loss 1.144789 LR 0.000025 Time 0.548971 -2024-05-12 02:19:40,947 - Epoch: [156][ 300/ 813] Overall Loss 1.157151 Objective Loss 1.157151 LR 0.000025 Time 0.539378 -2024-05-12 02:20:31,939 - Epoch: [156][ 400/ 813] Overall Loss 1.164122 Objective Loss 1.164122 LR 0.000025 Time 0.532008 -2024-05-12 02:21:23,914 - Epoch: [156][ 500/ 813] Overall Loss 1.171847 Objective Loss 1.171847 LR 0.000025 Time 0.529554 -2024-05-12 02:22:16,612 - Epoch: [156][ 600/ 813] Overall Loss 1.173505 Objective Loss 1.173505 LR 0.000025 Time 0.529120 -2024-05-12 02:23:09,306 - Epoch: [156][ 700/ 813] Overall Loss 1.173563 Objective Loss 1.173563 LR 0.000025 Time 0.528803 -2024-05-12 02:24:02,387 - Epoch: [156][ 800/ 813] Overall Loss 1.173857 Objective Loss 1.173857 LR 0.000025 Time 0.529049 -2024-05-12 02:24:07,738 - Epoch: [156][ 813/ 813] Overall Loss 1.173409 Objective Loss 1.173409 LR 0.000025 Time 0.527172 -2024-05-12 02:24:07,765 - --- validate (epoch=156)----------- -2024-05-12 02:24:07,765 - 3250 samples (16 per mini-batch) -2024-05-12 02:24:07,767 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 02:25:01,642 - Epoch: [156][ 100/ 204] Loss 1.302381 mAP 0.898653 -2024-05-12 02:25:54,384 - Epoch: [156][ 200/ 204] Loss 1.302756 mAP 0.887603 -2024-05-12 02:25:55,186 - Epoch: [156][ 204/ 204] Loss 1.304231 mAP 0.887508 -2024-05-12 02:25:55,211 - ==> mAP: 0.88751 Loss: 1.304 - -2024-05-12 02:25:55,214 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 02:25:55,214 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 02:25:55,235 - - -2024-05-12 02:25:55,235 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 02:26:50,991 - Epoch: [157][ 100/ 813] Overall Loss 1.147409 Objective Loss 1.147409 LR 0.000025 Time 0.557534 -2024-05-12 02:27:44,414 - Epoch: [157][ 200/ 813] Overall Loss 1.148891 Objective Loss 1.148891 LR 0.000025 Time 0.545860 -2024-05-12 02:28:37,006 - Epoch: [157][ 300/ 813] Overall Loss 1.157256 Objective Loss 1.157256 LR 0.000025 Time 0.539209 -2024-05-12 02:29:26,153 - Epoch: [157][ 400/ 813] Overall Loss 1.162089 Objective Loss 1.162089 LR 0.000025 Time 0.527270 -2024-05-12 02:30:18,542 - Epoch: [157][ 500/ 813] Overall Loss 1.167884 Objective Loss 1.167884 LR 0.000025 Time 0.526591 -2024-05-12 02:31:11,686 - Epoch: [157][ 600/ 813] Overall Loss 1.168760 Objective Loss 1.168760 LR 0.000025 Time 0.527397 -2024-05-12 02:32:04,275 - Epoch: [157][ 700/ 813] Overall Loss 1.170565 Objective Loss 1.170565 LR 0.000025 Time 0.527180 -2024-05-12 02:32:57,660 - Epoch: [157][ 800/ 813] Overall Loss 1.174746 Objective Loss 1.174746 LR 0.000025 Time 0.528009 -2024-05-12 02:33:02,213 - Epoch: [157][ 813/ 813] Overall Loss 1.175549 Objective Loss 1.175549 LR 0.000025 Time 0.525164 -2024-05-12 02:33:02,240 - --- validate (epoch=157)----------- -2024-05-12 02:33:02,240 - 3250 samples (16 per mini-batch) -2024-05-12 02:33:02,241 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 02:33:57,657 - Epoch: [157][ 100/ 204] Loss 1.305689 mAP 0.876756 -2024-05-12 02:34:52,437 - Epoch: [157][ 200/ 204] Loss 1.310238 mAP 0.876303 -2024-05-12 02:34:53,338 - Epoch: [157][ 204/ 204] Loss 1.309447 mAP 0.876298 -2024-05-12 02:34:53,364 - ==> mAP: 0.87630 Loss: 1.309 - -2024-05-12 02:34:53,367 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 02:34:53,367 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 02:34:53,387 - - -2024-05-12 02:34:53,387 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 02:35:48,448 - Epoch: [158][ 100/ 813] Overall Loss 1.135553 Objective Loss 1.135553 LR 0.000025 Time 0.550585 -2024-05-12 02:36:41,857 - Epoch: [158][ 200/ 813] Overall Loss 1.144952 Objective Loss 1.144952 LR 0.000025 Time 0.542323 -2024-05-12 02:37:33,904 - Epoch: [158][ 300/ 813] Overall Loss 1.160236 Objective Loss 1.160236 LR 0.000025 Time 0.535024 -2024-05-12 02:38:24,257 - Epoch: [158][ 400/ 813] Overall Loss 1.169376 Objective Loss 1.169376 LR 0.000025 Time 0.527149 -2024-05-12 02:39:16,093 - Epoch: [158][ 500/ 813] Overall Loss 1.173033 Objective Loss 1.173033 LR 0.000025 Time 0.525388 -2024-05-12 02:40:12,111 - Epoch: [158][ 600/ 813] Overall Loss 1.173268 Objective Loss 1.173268 LR 0.000025 Time 0.531175 -2024-05-12 02:41:05,368 - Epoch: [158][ 700/ 813] Overall Loss 1.174926 Objective Loss 1.174926 LR 0.000025 Time 0.531373 -2024-05-12 02:41:57,555 - Epoch: [158][ 800/ 813] Overall Loss 1.175163 Objective Loss 1.175163 LR 0.000025 Time 0.530179 -2024-05-12 02:42:02,045 - Epoch: [158][ 813/ 813] Overall Loss 1.175787 Objective Loss 1.175787 LR 0.000025 Time 0.527225 -2024-05-12 02:42:02,072 - --- validate (epoch=158)----------- -2024-05-12 02:42:02,073 - 3250 samples (16 per mini-batch) -2024-05-12 02:42:02,074 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 02:42:55,009 - Epoch: [158][ 100/ 204] Loss 1.308912 mAP 0.862570 -2024-05-12 02:43:48,047 - Epoch: [158][ 200/ 204] Loss 1.313565 mAP 0.864095 -2024-05-12 02:43:49,102 - Epoch: [158][ 204/ 204] Loss 1.310091 mAP 0.864221 -2024-05-12 02:43:49,128 - ==> mAP: 0.86422 Loss: 1.310 - -2024-05-12 02:43:49,130 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 02:43:49,131 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 02:43:49,150 - - -2024-05-12 02:43:49,151 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 02:44:44,557 - Epoch: [159][ 100/ 813] Overall Loss 1.149551 Objective Loss 1.149551 LR 0.000025 Time 0.554042 -2024-05-12 02:45:37,527 - Epoch: [159][ 200/ 813] Overall Loss 1.160066 Objective Loss 1.160066 LR 0.000025 Time 0.541836 -2024-05-12 02:46:29,680 - Epoch: [159][ 300/ 813] Overall Loss 1.177798 Objective Loss 1.177798 LR 0.000025 Time 0.535063 -2024-05-12 02:47:21,011 - Epoch: [159][ 400/ 813] Overall Loss 1.187804 Objective Loss 1.187804 LR 0.000025 Time 0.529619 -2024-05-12 02:48:12,595 - Epoch: [159][ 500/ 813] Overall Loss 1.185731 Objective Loss 1.185731 LR 0.000025 Time 0.526857 -2024-05-12 02:49:07,103 - Epoch: [159][ 600/ 813] Overall Loss 1.181858 Objective Loss 1.181858 LR 0.000025 Time 0.529893 -2024-05-12 02:50:00,051 - Epoch: [159][ 700/ 813] Overall Loss 1.178610 Objective Loss 1.178610 LR 0.000025 Time 0.529827 -2024-05-12 02:50:52,895 - Epoch: [159][ 800/ 813] Overall Loss 1.179567 Objective Loss 1.179567 LR 0.000025 Time 0.529645 -2024-05-12 02:50:58,071 - Epoch: [159][ 813/ 813] Overall Loss 1.179857 Objective Loss 1.179857 LR 0.000025 Time 0.527539 -2024-05-12 02:50:58,098 - --- validate (epoch=159)----------- -2024-05-12 02:50:58,099 - 3250 samples (16 per mini-batch) -2024-05-12 02:50:58,100 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 02:51:52,671 - Epoch: [159][ 100/ 204] Loss 1.293087 mAP 0.887358 -2024-05-12 02:52:46,722 - Epoch: [159][ 200/ 204] Loss 1.306178 mAP 0.886213 -2024-05-12 02:52:46,968 - Epoch: [159][ 204/ 204] Loss 1.306101 mAP 0.886081 -2024-05-12 02:52:46,993 - ==> mAP: 0.88608 Loss: 1.306 - -2024-05-12 02:52:46,996 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 02:52:46,996 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 02:52:47,016 - - -2024-05-12 02:52:47,016 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 02:53:43,222 - Epoch: [160][ 100/ 813] Overall Loss 1.136505 Objective Loss 1.136505 LR 0.000025 Time 0.562039 -2024-05-12 02:54:36,712 - Epoch: [160][ 200/ 813] Overall Loss 1.153757 Objective Loss 1.153757 LR 0.000025 Time 0.548461 -2024-05-12 02:55:31,022 - Epoch: [160][ 300/ 813] Overall Loss 1.161318 Objective Loss 1.161318 LR 0.000025 Time 0.546659 -2024-05-12 02:56:21,005 - Epoch: [160][ 400/ 813] Overall Loss 1.168849 Objective Loss 1.168849 LR 0.000025 Time 0.534947 -2024-05-12 02:57:10,811 - Epoch: [160][ 500/ 813] Overall Loss 1.171186 Objective Loss 1.171186 LR 0.000025 Time 0.527544 -2024-05-12 02:58:04,610 - Epoch: [160][ 600/ 813] Overall Loss 1.174316 Objective Loss 1.174316 LR 0.000025 Time 0.529281 -2024-05-12 02:58:57,509 - Epoch: [160][ 700/ 813] Overall Loss 1.173808 Objective Loss 1.173808 LR 0.000025 Time 0.529236 -2024-05-12 02:59:49,128 - Epoch: [160][ 800/ 813] Overall Loss 1.172513 Objective Loss 1.172513 LR 0.000025 Time 0.527604 -2024-05-12 02:59:55,124 - Epoch: [160][ 813/ 813] Overall Loss 1.172609 Objective Loss 1.172609 LR 0.000025 Time 0.526543 -2024-05-12 02:59:55,152 - --- validate (epoch=160)----------- -2024-05-12 02:59:55,153 - 3250 samples (16 per mini-batch) -2024-05-12 02:59:55,154 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 03:00:49,739 - Epoch: [160][ 100/ 204] Loss 1.322865 mAP 0.887658 -2024-05-12 03:01:43,156 - Epoch: [160][ 200/ 204] Loss 1.308489 mAP 0.885680 -2024-05-12 03:01:43,362 - Epoch: [160][ 204/ 204] Loss 1.308402 mAP 0.885686 -2024-05-12 03:01:43,387 - ==> mAP: 0.88569 Loss: 1.308 - -2024-05-12 03:01:43,394 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 03:01:43,394 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 03:01:43,416 - - -2024-05-12 03:01:43,417 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 03:02:38,711 - Epoch: [161][ 100/ 813] Overall Loss 1.138220 Objective Loss 1.138220 LR 0.000025 Time 0.552906 -2024-05-12 03:03:33,551 - Epoch: [161][ 200/ 813] Overall Loss 1.143183 Objective Loss 1.143183 LR 0.000025 Time 0.550649 -2024-05-12 03:04:26,537 - Epoch: [161][ 300/ 813] Overall Loss 1.156597 Objective Loss 1.156597 LR 0.000025 Time 0.543713 -2024-05-12 03:05:17,000 - Epoch: [161][ 400/ 813] Overall Loss 1.165598 Objective Loss 1.165598 LR 0.000025 Time 0.533935 -2024-05-12 03:06:09,114 - Epoch: [161][ 500/ 813] Overall Loss 1.168382 Objective Loss 1.168382 LR 0.000025 Time 0.531372 -2024-05-12 03:07:03,373 - Epoch: [161][ 600/ 813] Overall Loss 1.171678 Objective Loss 1.171678 LR 0.000025 Time 0.533235 -2024-05-12 03:07:55,244 - Epoch: [161][ 700/ 813] Overall Loss 1.171176 Objective Loss 1.171176 LR 0.000025 Time 0.531158 -2024-05-12 03:08:48,288 - Epoch: [161][ 800/ 813] Overall Loss 1.171375 Objective Loss 1.171375 LR 0.000025 Time 0.531067 -2024-05-12 03:08:53,300 - Epoch: [161][ 813/ 813] Overall Loss 1.171053 Objective Loss 1.171053 LR 0.000025 Time 0.528738 -2024-05-12 03:08:53,325 - --- validate (epoch=161)----------- -2024-05-12 03:08:53,326 - 3250 samples (16 per mini-batch) -2024-05-12 03:08:53,327 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 03:09:49,992 - Epoch: [161][ 100/ 204] Loss 1.304845 mAP 0.886615 -2024-05-12 03:10:43,701 - Epoch: [161][ 200/ 204] Loss 1.295859 mAP 0.886828 -2024-05-12 03:10:43,912 - Epoch: [161][ 204/ 204] Loss 1.299741 mAP 0.886841 -2024-05-12 03:10:43,938 - ==> mAP: 0.88684 Loss: 1.300 - -2024-05-12 03:10:43,942 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 03:10:43,943 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 03:10:43,963 - - -2024-05-12 03:10:43,963 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 03:11:39,205 - Epoch: [162][ 100/ 813] Overall Loss 1.145224 Objective Loss 1.145224 LR 0.000025 Time 0.552401 -2024-05-12 03:12:32,995 - Epoch: [162][ 200/ 813] Overall Loss 1.150073 Objective Loss 1.150073 LR 0.000025 Time 0.545139 -2024-05-12 03:13:26,356 - Epoch: [162][ 300/ 813] Overall Loss 1.163960 Objective Loss 1.163960 LR 0.000025 Time 0.541293 -2024-05-12 03:14:17,089 - Epoch: [162][ 400/ 813] Overall Loss 1.173248 Objective Loss 1.173248 LR 0.000025 Time 0.532798 -2024-05-12 03:15:08,449 - Epoch: [162][ 500/ 813] Overall Loss 1.173982 Objective Loss 1.173982 LR 0.000025 Time 0.528955 -2024-05-12 03:16:03,807 - Epoch: [162][ 600/ 813] Overall Loss 1.177349 Objective Loss 1.177349 LR 0.000025 Time 0.533053 -2024-05-12 03:16:56,869 - Epoch: [162][ 700/ 813] Overall Loss 1.173598 Objective Loss 1.173598 LR 0.000025 Time 0.532696 -2024-05-12 03:17:49,426 - Epoch: [162][ 800/ 813] Overall Loss 1.178274 Objective Loss 1.178274 LR 0.000025 Time 0.531802 -2024-05-12 03:17:54,142 - Epoch: [162][ 813/ 813] Overall Loss 1.178136 Objective Loss 1.178136 LR 0.000025 Time 0.529100 -2024-05-12 03:17:54,167 - --- validate (epoch=162)----------- -2024-05-12 03:17:54,168 - 3250 samples (16 per mini-batch) -2024-05-12 03:17:54,169 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 03:18:50,252 - Epoch: [162][ 100/ 204] Loss 1.295656 mAP 0.895811 -2024-05-12 03:19:43,234 - Epoch: [162][ 200/ 204] Loss 1.304371 mAP 0.895200 -2024-05-12 03:19:43,814 - Epoch: [162][ 204/ 204] Loss 1.309789 mAP 0.895200 -2024-05-12 03:19:43,839 - ==> mAP: 0.89520 Loss: 1.310 - -2024-05-12 03:19:43,842 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 03:19:43,842 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 03:19:43,862 - - -2024-05-12 03:19:43,862 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 03:20:38,918 - Epoch: [163][ 100/ 813] Overall Loss 1.134766 Objective Loss 1.134766 LR 0.000025 Time 0.550540 -2024-05-12 03:21:33,236 - Epoch: [163][ 200/ 813] Overall Loss 1.153308 Objective Loss 1.153308 LR 0.000025 Time 0.546839 -2024-05-12 03:22:27,101 - Epoch: [163][ 300/ 813] Overall Loss 1.165664 Objective Loss 1.165664 LR 0.000025 Time 0.544105 -2024-05-12 03:23:17,342 - Epoch: [163][ 400/ 813] Overall Loss 1.178537 Objective Loss 1.178537 LR 0.000025 Time 0.533676 -2024-05-12 03:24:10,057 - Epoch: [163][ 500/ 813] Overall Loss 1.178921 Objective Loss 1.178921 LR 0.000025 Time 0.532364 -2024-05-12 03:25:03,104 - Epoch: [163][ 600/ 813] Overall Loss 1.180250 Objective Loss 1.180250 LR 0.000025 Time 0.532045 -2024-05-12 03:25:56,464 - Epoch: [163][ 700/ 813] Overall Loss 1.178315 Objective Loss 1.178315 LR 0.000025 Time 0.532265 -2024-05-12 03:26:50,292 - Epoch: [163][ 800/ 813] Overall Loss 1.179369 Objective Loss 1.179369 LR 0.000025 Time 0.533011 -2024-05-12 03:26:54,693 - Epoch: [163][ 813/ 813] Overall Loss 1.179771 Objective Loss 1.179771 LR 0.000025 Time 0.529901 -2024-05-12 03:26:54,719 - --- validate (epoch=163)----------- -2024-05-12 03:26:54,720 - 3250 samples (16 per mini-batch) -2024-05-12 03:26:54,721 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 03:27:49,130 - Epoch: [163][ 100/ 204] Loss 1.318819 mAP 0.878884 -2024-05-12 03:28:40,171 - Epoch: [163][ 200/ 204] Loss 1.319707 mAP 0.868255 -2024-05-12 03:28:40,925 - Epoch: [163][ 204/ 204] Loss 1.317324 mAP 0.868272 -2024-05-12 03:28:40,951 - ==> mAP: 0.86827 Loss: 1.317 - -2024-05-12 03:28:40,955 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 03:28:40,955 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 03:28:40,975 - - -2024-05-12 03:28:40,975 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 03:29:36,583 - Epoch: [164][ 100/ 813] Overall Loss 1.134674 Objective Loss 1.134674 LR 0.000025 Time 0.556065 -2024-05-12 03:30:29,827 - Epoch: [164][ 200/ 813] Overall Loss 1.138873 Objective Loss 1.138873 LR 0.000025 Time 0.544109 -2024-05-12 03:31:21,849 - Epoch: [164][ 300/ 813] Overall Loss 1.159647 Objective Loss 1.159647 LR 0.000025 Time 0.536115 -2024-05-12 03:32:13,456 - Epoch: [164][ 400/ 813] Overall Loss 1.165869 Objective Loss 1.165869 LR 0.000025 Time 0.531100 -2024-05-12 03:33:05,305 - Epoch: [164][ 500/ 813] Overall Loss 1.166347 Objective Loss 1.166347 LR 0.000025 Time 0.528573 -2024-05-12 03:33:58,441 - Epoch: [164][ 600/ 813] Overall Loss 1.165467 Objective Loss 1.165467 LR 0.000025 Time 0.529036 -2024-05-12 03:34:51,552 - Epoch: [164][ 700/ 813] Overall Loss 1.166980 Objective Loss 1.166980 LR 0.000025 Time 0.529329 -2024-05-12 03:35:44,500 - Epoch: [164][ 800/ 813] Overall Loss 1.169182 Objective Loss 1.169182 LR 0.000025 Time 0.529328 -2024-05-12 03:35:49,555 - Epoch: [164][ 813/ 813] Overall Loss 1.168286 Objective Loss 1.168286 LR 0.000025 Time 0.527081 -2024-05-12 03:35:49,582 - --- validate (epoch=164)----------- -2024-05-12 03:35:49,583 - 3250 samples (16 per mini-batch) -2024-05-12 03:35:49,584 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 03:36:45,642 - Epoch: [164][ 100/ 204] Loss 1.324127 mAP 0.865077 -2024-05-12 03:37:39,414 - Epoch: [164][ 200/ 204] Loss 1.307599 mAP 0.876242 -2024-05-12 03:37:39,764 - Epoch: [164][ 204/ 204] Loss 1.305955 mAP 0.876314 -2024-05-12 03:37:39,789 - ==> mAP: 0.87631 Loss: 1.306 - -2024-05-12 03:37:39,793 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 03:37:39,793 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 03:37:39,813 - - -2024-05-12 03:37:39,813 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 03:38:35,205 - Epoch: [165][ 100/ 813] Overall Loss 1.155078 Objective Loss 1.155078 LR 0.000025 Time 0.553899 -2024-05-12 03:39:28,810 - Epoch: [165][ 200/ 813] Overall Loss 1.156379 Objective Loss 1.156379 LR 0.000025 Time 0.544966 -2024-05-12 03:40:21,668 - Epoch: [165][ 300/ 813] Overall Loss 1.167419 Objective Loss 1.167419 LR 0.000025 Time 0.539498 -2024-05-12 03:41:11,865 - Epoch: [165][ 400/ 813] Overall Loss 1.175172 Objective Loss 1.175172 LR 0.000025 Time 0.530113 -2024-05-12 03:42:04,540 - Epoch: [165][ 500/ 813] Overall Loss 1.179117 Objective Loss 1.179117 LR 0.000025 Time 0.529438 -2024-05-12 03:42:58,638 - Epoch: [165][ 600/ 813] Overall Loss 1.174506 Objective Loss 1.174506 LR 0.000025 Time 0.531357 -2024-05-12 03:43:51,658 - Epoch: [165][ 700/ 813] Overall Loss 1.175039 Objective Loss 1.175039 LR 0.000025 Time 0.531191 -2024-05-12 03:44:44,245 - Epoch: [165][ 800/ 813] Overall Loss 1.174587 Objective Loss 1.174587 LR 0.000025 Time 0.530523 -2024-05-12 03:44:50,160 - Epoch: [165][ 813/ 813] Overall Loss 1.174986 Objective Loss 1.174986 LR 0.000025 Time 0.529312 -2024-05-12 03:44:50,188 - --- validate (epoch=165)----------- -2024-05-12 03:44:50,188 - 3250 samples (16 per mini-batch) -2024-05-12 03:44:50,190 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 03:45:45,396 - Epoch: [165][ 100/ 204] Loss 1.305693 mAP 0.884499 -2024-05-12 03:46:38,515 - Epoch: [165][ 200/ 204] Loss 1.306268 mAP 0.885953 -2024-05-12 03:46:38,987 - Epoch: [165][ 204/ 204] Loss 1.303667 mAP 0.885865 -2024-05-12 03:46:39,012 - ==> mAP: 0.88586 Loss: 1.304 - -2024-05-12 03:46:39,015 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 03:46:39,015 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 03:46:39,036 - - -2024-05-12 03:46:39,036 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 03:47:35,173 - Epoch: [166][ 100/ 813] Overall Loss 1.134045 Objective Loss 1.134045 LR 0.000025 Time 0.561352 -2024-05-12 03:48:28,653 - Epoch: [166][ 200/ 813] Overall Loss 1.155432 Objective Loss 1.155432 LR 0.000025 Time 0.548067 -2024-05-12 03:49:21,212 - Epoch: [166][ 300/ 813] Overall Loss 1.166748 Objective Loss 1.166748 LR 0.000025 Time 0.540568 -2024-05-12 03:50:10,476 - Epoch: [166][ 400/ 813] Overall Loss 1.170356 Objective Loss 1.170356 LR 0.000025 Time 0.528583 -2024-05-12 03:51:03,251 - Epoch: [166][ 500/ 813] Overall Loss 1.172563 Objective Loss 1.172563 LR 0.000025 Time 0.528414 -2024-05-12 03:51:57,616 - Epoch: [166][ 600/ 813] Overall Loss 1.170967 Objective Loss 1.170967 LR 0.000025 Time 0.530950 -2024-05-12 03:52:49,096 - Epoch: [166][ 700/ 813] Overall Loss 1.172948 Objective Loss 1.172948 LR 0.000025 Time 0.528640 -2024-05-12 03:53:42,749 - Epoch: [166][ 800/ 813] Overall Loss 1.176050 Objective Loss 1.176050 LR 0.000025 Time 0.529625 -2024-05-12 03:53:47,927 - Epoch: [166][ 813/ 813] Overall Loss 1.174760 Objective Loss 1.174760 LR 0.000025 Time 0.527525 -2024-05-12 03:53:47,956 - --- validate (epoch=166)----------- -2024-05-12 03:53:47,957 - 3250 samples (16 per mini-batch) -2024-05-12 03:53:47,958 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 03:54:43,237 - Epoch: [166][ 100/ 204] Loss 1.283718 mAP 0.887554 -2024-05-12 03:55:36,645 - Epoch: [166][ 200/ 204] Loss 1.300106 mAP 0.877287 -2024-05-12 03:55:37,464 - Epoch: [166][ 204/ 204] Loss 1.296735 mAP 0.886868 -2024-05-12 03:55:37,489 - ==> mAP: 0.88687 Loss: 1.297 - -2024-05-12 03:55:37,492 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 03:55:37,492 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 03:55:37,513 - - -2024-05-12 03:55:37,513 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 03:56:33,364 - Epoch: [167][ 100/ 813] Overall Loss 1.147006 Objective Loss 1.147006 LR 0.000025 Time 0.558495 -2024-05-12 03:57:26,194 - Epoch: [167][ 200/ 813] Overall Loss 1.149297 Objective Loss 1.149297 LR 0.000025 Time 0.543387 -2024-05-12 03:58:19,778 - Epoch: [167][ 300/ 813] Overall Loss 1.160652 Objective Loss 1.160652 LR 0.000025 Time 0.540867 -2024-05-12 03:59:09,438 - Epoch: [167][ 400/ 813] Overall Loss 1.171067 Objective Loss 1.171067 LR 0.000025 Time 0.529797 -2024-05-12 04:00:01,977 - Epoch: [167][ 500/ 813] Overall Loss 1.172462 Objective Loss 1.172462 LR 0.000025 Time 0.528912 -2024-05-12 04:00:54,712 - Epoch: [167][ 600/ 813] Overall Loss 1.173611 Objective Loss 1.173611 LR 0.000025 Time 0.528644 -2024-05-12 04:01:47,774 - Epoch: [167][ 700/ 813] Overall Loss 1.174109 Objective Loss 1.174109 LR 0.000025 Time 0.528924 -2024-05-12 04:02:40,645 - Epoch: [167][ 800/ 813] Overall Loss 1.174592 Objective Loss 1.174592 LR 0.000025 Time 0.528892 -2024-05-12 04:02:46,568 - Epoch: [167][ 813/ 813] Overall Loss 1.173593 Objective Loss 1.173593 LR 0.000025 Time 0.527720 -2024-05-12 04:02:46,599 - --- validate (epoch=167)----------- -2024-05-12 04:02:46,600 - 3250 samples (16 per mini-batch) -2024-05-12 04:02:46,601 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 04:03:42,636 - Epoch: [167][ 100/ 204] Loss 1.301253 mAP 0.879395 -2024-05-12 04:04:36,766 - Epoch: [167][ 200/ 204] Loss 1.302687 mAP 0.878833 -2024-05-12 04:04:36,972 - Epoch: [167][ 204/ 204] Loss 1.303967 mAP 0.878828 -2024-05-12 04:04:36,998 - ==> mAP: 0.87883 Loss: 1.304 - -2024-05-12 04:04:37,001 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 04:04:37,001 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 04:04:37,021 - - -2024-05-12 04:04:37,021 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 04:05:32,362 - Epoch: [168][ 100/ 813] Overall Loss 1.175788 Objective Loss 1.175788 LR 0.000025 Time 0.553384 -2024-05-12 04:06:26,841 - Epoch: [168][ 200/ 813] Overall Loss 1.174188 Objective Loss 1.174188 LR 0.000025 Time 0.549063 -2024-05-12 04:07:18,882 - Epoch: [168][ 300/ 813] Overall Loss 1.176085 Objective Loss 1.176085 LR 0.000025 Time 0.539506 -2024-05-12 04:08:08,988 - Epoch: [168][ 400/ 813] Overall Loss 1.174859 Objective Loss 1.174859 LR 0.000025 Time 0.529892 -2024-05-12 04:09:01,293 - Epoch: [168][ 500/ 813] Overall Loss 1.177922 Objective Loss 1.177922 LR 0.000025 Time 0.528517 -2024-05-12 04:09:55,090 - Epoch: [168][ 600/ 813] Overall Loss 1.180009 Objective Loss 1.180009 LR 0.000025 Time 0.530090 -2024-05-12 04:10:49,230 - Epoch: [168][ 700/ 813] Overall Loss 1.181916 Objective Loss 1.181916 LR 0.000025 Time 0.531703 -2024-05-12 04:11:40,918 - Epoch: [168][ 800/ 813] Overall Loss 1.183007 Objective Loss 1.183007 LR 0.000025 Time 0.529845 -2024-05-12 04:11:46,515 - Epoch: [168][ 813/ 813] Overall Loss 1.182513 Objective Loss 1.182513 LR 0.000025 Time 0.528251 -2024-05-12 04:11:46,543 - --- validate (epoch=168)----------- -2024-05-12 04:11:46,544 - 3250 samples (16 per mini-batch) -2024-05-12 04:11:46,546 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 04:12:40,925 - Epoch: [168][ 100/ 204] Loss 1.301197 mAP 0.877960 -2024-05-12 04:13:34,862 - Epoch: [168][ 200/ 204] Loss 1.307233 mAP 0.887928 -2024-05-12 04:13:35,373 - Epoch: [168][ 204/ 204] Loss 1.305427 mAP 0.887928 -2024-05-12 04:13:35,399 - ==> mAP: 0.88793 Loss: 1.305 - -2024-05-12 04:13:35,403 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 04:13:35,403 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 04:13:35,423 - - -2024-05-12 04:13:35,423 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 04:14:30,889 - Epoch: [169][ 100/ 813] Overall Loss 1.158652 Objective Loss 1.158652 LR 0.000025 Time 0.554637 -2024-05-12 04:15:25,155 - Epoch: [169][ 200/ 813] Overall Loss 1.161536 Objective Loss 1.161536 LR 0.000025 Time 0.548640 -2024-05-12 04:16:17,953 - Epoch: [169][ 300/ 813] Overall Loss 1.169948 Objective Loss 1.169948 LR 0.000025 Time 0.541746 -2024-05-12 04:17:09,469 - Epoch: [169][ 400/ 813] Overall Loss 1.178244 Objective Loss 1.178244 LR 0.000025 Time 0.535097 -2024-05-12 04:18:00,830 - Epoch: [169][ 500/ 813] Overall Loss 1.182093 Objective Loss 1.182093 LR 0.000025 Time 0.530791 -2024-05-12 04:18:55,689 - Epoch: [169][ 600/ 813] Overall Loss 1.181145 Objective Loss 1.181145 LR 0.000025 Time 0.533743 -2024-05-12 04:19:48,407 - Epoch: [169][ 700/ 813] Overall Loss 1.177859 Objective Loss 1.177859 LR 0.000025 Time 0.532802 -2024-05-12 04:20:41,844 - Epoch: [169][ 800/ 813] Overall Loss 1.177165 Objective Loss 1.177165 LR 0.000025 Time 0.532996 -2024-05-12 04:20:46,369 - Epoch: [169][ 813/ 813] Overall Loss 1.176535 Objective Loss 1.176535 LR 0.000025 Time 0.530039 -2024-05-12 04:20:46,401 - --- validate (epoch=169)----------- -2024-05-12 04:20:46,402 - 3250 samples (16 per mini-batch) -2024-05-12 04:20:46,404 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 04:21:42,835 - Epoch: [169][ 100/ 204] Loss 1.313710 mAP 0.887041 -2024-05-12 04:22:36,222 - Epoch: [169][ 200/ 204] Loss 1.302252 mAP 0.886882 -2024-05-12 04:22:36,870 - Epoch: [169][ 204/ 204] Loss 1.300614 mAP 0.886913 -2024-05-12 04:22:36,896 - ==> mAP: 0.88691 Loss: 1.301 - -2024-05-12 04:22:36,899 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 04:22:36,899 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 04:22:36,920 - - -2024-05-12 04:22:36,920 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 04:23:33,143 - Epoch: [170][ 100/ 813] Overall Loss 1.169083 Objective Loss 1.169083 LR 0.000025 Time 0.562212 -2024-05-12 04:24:27,109 - Epoch: [170][ 200/ 813] Overall Loss 1.171211 Objective Loss 1.171211 LR 0.000025 Time 0.550847 -2024-05-12 04:25:20,671 - Epoch: [170][ 300/ 813] Overall Loss 1.180369 Objective Loss 1.180369 LR 0.000025 Time 0.545766 -2024-05-12 04:26:10,969 - Epoch: [170][ 400/ 813] Overall Loss 1.179113 Objective Loss 1.179113 LR 0.000025 Time 0.535066 -2024-05-12 04:27:03,710 - Epoch: [170][ 500/ 813] Overall Loss 1.182349 Objective Loss 1.182349 LR 0.000025 Time 0.533532 -2024-05-12 04:27:57,887 - Epoch: [170][ 600/ 813] Overall Loss 1.185537 Objective Loss 1.185537 LR 0.000025 Time 0.534902 -2024-05-12 04:28:51,582 - Epoch: [170][ 700/ 813] Overall Loss 1.185400 Objective Loss 1.185400 LR 0.000025 Time 0.535192 -2024-05-12 04:29:43,389 - Epoch: [170][ 800/ 813] Overall Loss 1.186505 Objective Loss 1.186505 LR 0.000025 Time 0.533050 -2024-05-12 04:29:49,122 - Epoch: [170][ 813/ 813] Overall Loss 1.186328 Objective Loss 1.186328 LR 0.000025 Time 0.531576 -2024-05-12 04:29:49,148 - --- validate (epoch=170)----------- -2024-05-12 04:29:49,149 - 3250 samples (16 per mini-batch) -2024-05-12 04:29:49,150 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 04:30:43,040 - Epoch: [170][ 100/ 204] Loss 1.272385 mAP 0.898132 -2024-05-12 04:31:37,686 - Epoch: [170][ 200/ 204] Loss 1.286711 mAP 0.888392 -2024-05-12 04:31:38,183 - Epoch: [170][ 204/ 204] Loss 1.285816 mAP 0.888412 -2024-05-12 04:31:38,209 - ==> mAP: 0.88841 Loss: 1.286 - -2024-05-12 04:31:38,214 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 04:31:38,214 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 04:31:38,234 - - -2024-05-12 04:31:38,234 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 04:32:34,058 - Epoch: [171][ 100/ 813] Overall Loss 1.146935 Objective Loss 1.146935 LR 0.000025 Time 0.558222 -2024-05-12 04:33:27,579 - Epoch: [171][ 200/ 813] Overall Loss 1.156269 Objective Loss 1.156269 LR 0.000025 Time 0.546704 -2024-05-12 04:34:20,796 - Epoch: [171][ 300/ 813] Overall Loss 1.161679 Objective Loss 1.161679 LR 0.000025 Time 0.541833 -2024-05-12 04:35:10,589 - Epoch: [171][ 400/ 813] Overall Loss 1.166923 Objective Loss 1.166923 LR 0.000025 Time 0.530851 -2024-05-12 04:36:02,198 - Epoch: [171][ 500/ 813] Overall Loss 1.170058 Objective Loss 1.170058 LR 0.000025 Time 0.527897 -2024-05-12 04:36:57,147 - Epoch: [171][ 600/ 813] Overall Loss 1.173076 Objective Loss 1.173076 LR 0.000025 Time 0.531493 -2024-05-12 04:37:49,528 - Epoch: [171][ 700/ 813] Overall Loss 1.175280 Objective Loss 1.175280 LR 0.000025 Time 0.530393 -2024-05-12 04:38:42,667 - Epoch: [171][ 800/ 813] Overall Loss 1.175833 Objective Loss 1.175833 LR 0.000025 Time 0.530513 -2024-05-12 04:38:48,244 - Epoch: [171][ 813/ 813] Overall Loss 1.174025 Objective Loss 1.174025 LR 0.000025 Time 0.528887 -2024-05-12 04:38:48,270 - --- validate (epoch=171)----------- -2024-05-12 04:38:48,270 - 3250 samples (16 per mini-batch) -2024-05-12 04:38:48,271 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 04:39:43,615 - Epoch: [171][ 100/ 204] Loss 1.285406 mAP 0.896179 -2024-05-12 04:40:37,129 - Epoch: [171][ 200/ 204] Loss 1.277825 mAP 0.885384 -2024-05-12 04:40:37,607 - Epoch: [171][ 204/ 204] Loss 1.276038 mAP 0.885384 -2024-05-12 04:40:37,632 - ==> mAP: 0.88538 Loss: 1.276 - -2024-05-12 04:40:37,636 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 04:40:37,636 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 04:40:37,656 - - -2024-05-12 04:40:37,656 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 04:41:33,519 - Epoch: [172][ 100/ 813] Overall Loss 1.124677 Objective Loss 1.124677 LR 0.000025 Time 0.558612 -2024-05-12 04:42:27,054 - Epoch: [172][ 200/ 813] Overall Loss 1.143108 Objective Loss 1.143108 LR 0.000025 Time 0.546969 -2024-05-12 04:43:19,940 - Epoch: [172][ 300/ 813] Overall Loss 1.160761 Objective Loss 1.160761 LR 0.000025 Time 0.540929 -2024-05-12 04:44:11,243 - Epoch: [172][ 400/ 813] Overall Loss 1.167905 Objective Loss 1.167905 LR 0.000025 Time 0.533948 -2024-05-12 04:45:04,592 - Epoch: [172][ 500/ 813] Overall Loss 1.169052 Objective Loss 1.169052 LR 0.000025 Time 0.533855 -2024-05-12 04:45:57,326 - Epoch: [172][ 600/ 813] Overall Loss 1.172663 Objective Loss 1.172663 LR 0.000025 Time 0.532766 -2024-05-12 04:46:51,180 - Epoch: [172][ 700/ 813] Overall Loss 1.170005 Objective Loss 1.170005 LR 0.000025 Time 0.533589 -2024-05-12 04:47:43,555 - Epoch: [172][ 800/ 813] Overall Loss 1.170776 Objective Loss 1.170776 LR 0.000025 Time 0.532346 -2024-05-12 04:47:48,148 - Epoch: [172][ 813/ 813] Overall Loss 1.170834 Objective Loss 1.170834 LR 0.000025 Time 0.529483 -2024-05-12 04:47:48,175 - --- validate (epoch=172)----------- -2024-05-12 04:47:48,176 - 3250 samples (16 per mini-batch) -2024-05-12 04:47:48,177 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 04:48:41,936 - Epoch: [172][ 100/ 204] Loss 1.290522 mAP 0.897847 -2024-05-12 04:49:34,983 - Epoch: [172][ 200/ 204] Loss 1.291828 mAP 0.877634 -2024-05-12 04:49:35,942 - Epoch: [172][ 204/ 204] Loss 1.291132 mAP 0.877630 -2024-05-12 04:49:35,967 - ==> mAP: 0.87763 Loss: 1.291 - -2024-05-12 04:49:35,970 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 04:49:35,970 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 04:49:35,991 - - -2024-05-12 04:49:35,991 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 04:50:30,724 - Epoch: [173][ 100/ 813] Overall Loss 1.128987 Objective Loss 1.128987 LR 0.000025 Time 0.547311 -2024-05-12 04:51:23,816 - Epoch: [173][ 200/ 813] Overall Loss 1.134941 Objective Loss 1.134941 LR 0.000025 Time 0.539105 -2024-05-12 04:52:16,992 - Epoch: [173][ 300/ 813] Overall Loss 1.146175 Objective Loss 1.146175 LR 0.000025 Time 0.536652 -2024-05-12 04:53:07,296 - Epoch: [173][ 400/ 813] Overall Loss 1.155114 Objective Loss 1.155114 LR 0.000025 Time 0.528246 -2024-05-12 04:53:58,509 - Epoch: [173][ 500/ 813] Overall Loss 1.159356 Objective Loss 1.159356 LR 0.000025 Time 0.525018 -2024-05-12 04:54:52,082 - Epoch: [173][ 600/ 813] Overall Loss 1.163823 Objective Loss 1.163823 LR 0.000025 Time 0.526801 -2024-05-12 04:55:45,611 - Epoch: [173][ 700/ 813] Overall Loss 1.163925 Objective Loss 1.163925 LR 0.000025 Time 0.528004 -2024-05-12 04:56:38,537 - Epoch: [173][ 800/ 813] Overall Loss 1.166245 Objective Loss 1.166245 LR 0.000025 Time 0.528159 -2024-05-12 04:56:42,842 - Epoch: [173][ 813/ 813] Overall Loss 1.165717 Objective Loss 1.165717 LR 0.000025 Time 0.525009 -2024-05-12 04:56:42,868 - --- validate (epoch=173)----------- -2024-05-12 04:56:42,869 - 3250 samples (16 per mini-batch) -2024-05-12 04:56:42,870 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 04:57:37,680 - Epoch: [173][ 100/ 204] Loss 1.306696 mAP 0.877320 -2024-05-12 04:58:30,842 - Epoch: [173][ 200/ 204] Loss 1.303450 mAP 0.867415 -2024-05-12 04:58:31,337 - Epoch: [173][ 204/ 204] Loss 1.305896 mAP 0.867420 -2024-05-12 04:58:31,363 - ==> mAP: 0.86742 Loss: 1.306 - -2024-05-12 04:58:31,366 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 04:58:31,367 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 04:58:31,387 - - -2024-05-12 04:58:31,387 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 04:59:27,372 - Epoch: [174][ 100/ 813] Overall Loss 1.154914 Objective Loss 1.154914 LR 0.000025 Time 0.559831 -2024-05-12 05:00:20,623 - Epoch: [174][ 200/ 813] Overall Loss 1.161184 Objective Loss 1.161184 LR 0.000025 Time 0.546161 -2024-05-12 05:01:13,656 - Epoch: [174][ 300/ 813] Overall Loss 1.168663 Objective Loss 1.168663 LR 0.000025 Time 0.540874 -2024-05-12 05:02:05,723 - Epoch: [174][ 400/ 813] Overall Loss 1.171122 Objective Loss 1.171122 LR 0.000025 Time 0.535817 -2024-05-12 05:02:57,448 - Epoch: [174][ 500/ 813] Overall Loss 1.170496 Objective Loss 1.170496 LR 0.000025 Time 0.532097 -2024-05-12 05:03:51,540 - Epoch: [174][ 600/ 813] Overall Loss 1.176142 Objective Loss 1.176142 LR 0.000025 Time 0.533566 -2024-05-12 05:04:43,149 - Epoch: [174][ 700/ 813] Overall Loss 1.174163 Objective Loss 1.174163 LR 0.000025 Time 0.531067 -2024-05-12 05:05:34,869 - Epoch: [174][ 800/ 813] Overall Loss 1.174720 Objective Loss 1.174720 LR 0.000025 Time 0.529331 -2024-05-12 05:05:40,385 - Epoch: [174][ 813/ 813] Overall Loss 1.175080 Objective Loss 1.175080 LR 0.000025 Time 0.527651 -2024-05-12 05:05:40,411 - --- validate (epoch=174)----------- -2024-05-12 05:05:40,412 - 3250 samples (16 per mini-batch) -2024-05-12 05:05:40,413 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 05:06:34,897 - Epoch: [174][ 100/ 204] Loss 1.340831 mAP 0.876833 -2024-05-12 05:07:28,879 - Epoch: [174][ 200/ 204] Loss 1.331112 mAP 0.876964 -2024-05-12 05:07:29,557 - Epoch: [174][ 204/ 204] Loss 1.332021 mAP 0.877006 -2024-05-12 05:07:29,584 - ==> mAP: 0.87701 Loss: 1.332 - -2024-05-12 05:07:29,587 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 05:07:29,587 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 05:07:29,608 - - -2024-05-12 05:07:29,608 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 05:08:24,877 - Epoch: [175][ 100/ 813] Overall Loss 1.114535 Objective Loss 1.114535 LR 0.000025 Time 0.552669 -2024-05-12 05:09:18,605 - Epoch: [175][ 200/ 813] Overall Loss 1.141647 Objective Loss 1.141647 LR 0.000025 Time 0.544969 -2024-05-12 05:10:11,779 - Epoch: [175][ 300/ 813] Overall Loss 1.156004 Objective Loss 1.156004 LR 0.000025 Time 0.540554 -2024-05-12 05:11:03,110 - Epoch: [175][ 400/ 813] Overall Loss 1.164958 Objective Loss 1.164958 LR 0.000025 Time 0.533737 -2024-05-12 05:11:56,744 - Epoch: [175][ 500/ 813] Overall Loss 1.171065 Objective Loss 1.171065 LR 0.000025 Time 0.534256 -2024-05-12 05:12:49,344 - Epoch: [175][ 600/ 813] Overall Loss 1.171489 Objective Loss 1.171489 LR 0.000025 Time 0.532874 -2024-05-12 05:13:42,046 - Epoch: [175][ 700/ 813] Overall Loss 1.171104 Objective Loss 1.171104 LR 0.000025 Time 0.532035 -2024-05-12 05:14:34,847 - Epoch: [175][ 800/ 813] Overall Loss 1.170422 Objective Loss 1.170422 LR 0.000025 Time 0.531530 -2024-05-12 05:14:40,340 - Epoch: [175][ 813/ 813] Overall Loss 1.170457 Objective Loss 1.170457 LR 0.000025 Time 0.529786 -2024-05-12 05:14:40,375 - --- validate (epoch=175)----------- -2024-05-12 05:14:40,377 - 3250 samples (16 per mini-batch) -2024-05-12 05:14:40,378 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 05:15:34,332 - Epoch: [175][ 100/ 204] Loss 1.318343 mAP 0.877668 -2024-05-12 05:16:28,038 - Epoch: [175][ 200/ 204] Loss 1.306139 mAP 0.888629 -2024-05-12 05:16:28,267 - Epoch: [175][ 204/ 204] Loss 1.309964 mAP 0.888573 -2024-05-12 05:16:28,293 - ==> mAP: 0.88857 Loss: 1.310 - -2024-05-12 05:16:28,297 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 05:16:28,297 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 05:16:28,317 - - -2024-05-12 05:16:28,317 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 05:17:23,975 - Epoch: [176][ 100/ 813] Overall Loss 1.120114 Objective Loss 1.120114 LR 0.000025 Time 0.556557 -2024-05-12 05:18:17,301 - Epoch: [176][ 200/ 813] Overall Loss 1.147182 Objective Loss 1.147182 LR 0.000025 Time 0.544824 -2024-05-12 05:19:09,668 - Epoch: [176][ 300/ 813] Overall Loss 1.153620 Objective Loss 1.153620 LR 0.000025 Time 0.537758 -2024-05-12 05:19:59,917 - Epoch: [176][ 400/ 813] Overall Loss 1.159051 Objective Loss 1.159051 LR 0.000025 Time 0.528937 -2024-05-12 05:20:51,735 - Epoch: [176][ 500/ 813] Overall Loss 1.163946 Objective Loss 1.163946 LR 0.000025 Time 0.526784 -2024-05-12 05:21:46,389 - Epoch: [176][ 600/ 813] Overall Loss 1.169205 Objective Loss 1.169205 LR 0.000025 Time 0.530073 -2024-05-12 05:22:39,197 - Epoch: [176][ 700/ 813] Overall Loss 1.170308 Objective Loss 1.170308 LR 0.000025 Time 0.529786 -2024-05-12 05:23:30,976 - Epoch: [176][ 800/ 813] Overall Loss 1.167287 Objective Loss 1.167287 LR 0.000025 Time 0.528283 -2024-05-12 05:23:36,074 - Epoch: [176][ 813/ 813] Overall Loss 1.168294 Objective Loss 1.168294 LR 0.000025 Time 0.526107 -2024-05-12 05:23:36,102 - --- validate (epoch=176)----------- -2024-05-12 05:23:36,102 - 3250 samples (16 per mini-batch) -2024-05-12 05:23:36,103 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 05:24:32,473 - Epoch: [176][ 100/ 204] Loss 1.298519 mAP 0.885794 -2024-05-12 05:25:25,520 - Epoch: [176][ 200/ 204] Loss 1.300586 mAP 0.876448 -2024-05-12 05:25:26,352 - Epoch: [176][ 204/ 204] Loss 1.303117 mAP 0.876480 -2024-05-12 05:25:26,377 - ==> mAP: 0.87648 Loss: 1.303 - -2024-05-12 05:25:26,380 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 05:25:26,380 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 05:25:26,401 - - -2024-05-12 05:25:26,401 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 05:26:21,135 - Epoch: [177][ 100/ 813] Overall Loss 1.145245 Objective Loss 1.145245 LR 0.000025 Time 0.547313 -2024-05-12 05:27:14,998 - Epoch: [177][ 200/ 813] Overall Loss 1.149000 Objective Loss 1.149000 LR 0.000025 Time 0.542964 -2024-05-12 05:28:08,550 - Epoch: [177][ 300/ 813] Overall Loss 1.156762 Objective Loss 1.156762 LR 0.000025 Time 0.540478 -2024-05-12 05:28:59,643 - Epoch: [177][ 400/ 813] Overall Loss 1.157212 Objective Loss 1.157212 LR 0.000025 Time 0.533086 -2024-05-12 05:29:51,641 - Epoch: [177][ 500/ 813] Overall Loss 1.164459 Objective Loss 1.164459 LR 0.000025 Time 0.530461 -2024-05-12 05:30:45,331 - Epoch: [177][ 600/ 813] Overall Loss 1.164944 Objective Loss 1.164944 LR 0.000025 Time 0.531533 -2024-05-12 05:31:36,959 - Epoch: [177][ 700/ 813] Overall Loss 1.167924 Objective Loss 1.167924 LR 0.000025 Time 0.529344 -2024-05-12 05:32:29,636 - Epoch: [177][ 800/ 813] Overall Loss 1.165523 Objective Loss 1.165523 LR 0.000025 Time 0.529017 -2024-05-12 05:32:34,955 - Epoch: [177][ 813/ 813] Overall Loss 1.166151 Objective Loss 1.166151 LR 0.000025 Time 0.527100 -2024-05-12 05:32:34,981 - --- validate (epoch=177)----------- -2024-05-12 05:32:34,981 - 3250 samples (16 per mini-batch) -2024-05-12 05:32:34,982 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 05:33:29,071 - Epoch: [177][ 100/ 204] Loss 1.317964 mAP 0.878737 -2024-05-12 05:34:22,886 - Epoch: [177][ 200/ 204] Loss 1.307089 mAP 0.887530 -2024-05-12 05:34:23,164 - Epoch: [177][ 204/ 204] Loss 1.305078 mAP 0.887532 -2024-05-12 05:34:23,190 - ==> mAP: 0.88753 Loss: 1.305 - -2024-05-12 05:34:23,192 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 05:34:23,192 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 05:34:23,213 - - -2024-05-12 05:34:23,213 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 05:35:20,083 - Epoch: [178][ 100/ 813] Overall Loss 1.131297 Objective Loss 1.131297 LR 0.000025 Time 0.568678 -2024-05-12 05:36:14,189 - Epoch: [178][ 200/ 813] Overall Loss 1.148945 Objective Loss 1.148945 LR 0.000025 Time 0.554853 -2024-05-12 05:37:05,129 - Epoch: [178][ 300/ 813] Overall Loss 1.163080 Objective Loss 1.163080 LR 0.000025 Time 0.539697 -2024-05-12 05:37:55,225 - Epoch: [178][ 400/ 813] Overall Loss 1.171333 Objective Loss 1.171333 LR 0.000025 Time 0.530010 -2024-05-12 05:38:47,627 - Epoch: [178][ 500/ 813] Overall Loss 1.173693 Objective Loss 1.173693 LR 0.000025 Time 0.528799 -2024-05-12 05:39:41,236 - Epoch: [178][ 600/ 813] Overall Loss 1.173755 Objective Loss 1.173755 LR 0.000025 Time 0.530013 -2024-05-12 05:40:34,756 - Epoch: [178][ 700/ 813] Overall Loss 1.174038 Objective Loss 1.174038 LR 0.000025 Time 0.530751 -2024-05-12 05:41:26,840 - Epoch: [178][ 800/ 813] Overall Loss 1.173959 Objective Loss 1.173959 LR 0.000025 Time 0.529507 -2024-05-12 05:41:31,993 - Epoch: [178][ 813/ 813] Overall Loss 1.173974 Objective Loss 1.173974 LR 0.000025 Time 0.527378 -2024-05-12 05:41:32,019 - --- validate (epoch=178)----------- -2024-05-12 05:41:32,020 - 3250 samples (16 per mini-batch) -2024-05-12 05:41:32,022 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 05:42:28,175 - Epoch: [178][ 100/ 204] Loss 1.297964 mAP 0.885718 -2024-05-12 05:43:21,237 - Epoch: [178][ 200/ 204] Loss 1.301421 mAP 0.875837 -2024-05-12 05:43:22,390 - Epoch: [178][ 204/ 204] Loss 1.299032 mAP 0.875833 -2024-05-12 05:43:22,417 - ==> mAP: 0.87583 Loss: 1.299 - -2024-05-12 05:43:22,420 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 05:43:22,420 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 05:43:22,440 - - -2024-05-12 05:43:22,440 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 05:44:19,037 - Epoch: [179][ 100/ 813] Overall Loss 1.120120 Objective Loss 1.120120 LR 0.000025 Time 0.565951 -2024-05-12 05:45:12,104 - Epoch: [179][ 200/ 813] Overall Loss 1.144014 Objective Loss 1.144014 LR 0.000025 Time 0.548301 -2024-05-12 05:46:04,594 - Epoch: [179][ 300/ 813] Overall Loss 1.151203 Objective Loss 1.151203 LR 0.000025 Time 0.540497 -2024-05-12 05:46:55,051 - Epoch: [179][ 400/ 813] Overall Loss 1.170236 Objective Loss 1.170236 LR 0.000025 Time 0.531504 -2024-05-12 05:47:47,300 - Epoch: [179][ 500/ 813] Overall Loss 1.175089 Objective Loss 1.175089 LR 0.000025 Time 0.529693 -2024-05-12 05:48:41,574 - Epoch: [179][ 600/ 813] Overall Loss 1.177068 Objective Loss 1.177068 LR 0.000025 Time 0.531866 -2024-05-12 05:49:36,465 - Epoch: [179][ 700/ 813] Overall Loss 1.174282 Objective Loss 1.174282 LR 0.000025 Time 0.534298 -2024-05-12 05:50:27,450 - Epoch: [179][ 800/ 813] Overall Loss 1.173471 Objective Loss 1.173471 LR 0.000025 Time 0.531234 -2024-05-12 05:50:33,145 - Epoch: [179][ 813/ 813] Overall Loss 1.172564 Objective Loss 1.172564 LR 0.000025 Time 0.529744 -2024-05-12 05:50:33,172 - --- validate (epoch=179)----------- -2024-05-12 05:50:33,172 - 3250 samples (16 per mini-batch) -2024-05-12 05:50:33,173 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 05:51:29,973 - Epoch: [179][ 100/ 204] Loss 1.299739 mAP 0.876213 -2024-05-12 05:52:21,248 - Epoch: [179][ 200/ 204] Loss 1.288939 mAP 0.877463 -2024-05-12 05:52:22,007 - Epoch: [179][ 204/ 204] Loss 1.299157 mAP 0.877522 -2024-05-12 05:52:22,032 - ==> mAP: 0.87752 Loss: 1.299 - -2024-05-12 05:52:22,035 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 05:52:22,035 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 05:52:22,055 - - -2024-05-12 05:52:22,055 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 05:53:16,161 - Epoch: [180][ 100/ 813] Overall Loss 1.150323 Objective Loss 1.150323 LR 0.000025 Time 0.540826 -2024-05-12 05:54:11,232 - Epoch: [180][ 200/ 813] Overall Loss 1.149312 Objective Loss 1.149312 LR 0.000025 Time 0.545762 -2024-05-12 05:55:03,176 - Epoch: [180][ 300/ 813] Overall Loss 1.159897 Objective Loss 1.159897 LR 0.000025 Time 0.536982 -2024-05-12 05:55:53,778 - Epoch: [180][ 400/ 813] Overall Loss 1.170250 Objective Loss 1.170250 LR 0.000025 Time 0.529238 -2024-05-12 05:56:47,507 - Epoch: [180][ 500/ 813] Overall Loss 1.175190 Objective Loss 1.175190 LR 0.000025 Time 0.530839 -2024-05-12 05:57:40,436 - Epoch: [180][ 600/ 813] Overall Loss 1.173172 Objective Loss 1.173172 LR 0.000025 Time 0.530570 -2024-05-12 05:58:33,116 - Epoch: [180][ 700/ 813] Overall Loss 1.176446 Objective Loss 1.176446 LR 0.000025 Time 0.530030 -2024-05-12 05:59:26,314 - Epoch: [180][ 800/ 813] Overall Loss 1.176402 Objective Loss 1.176402 LR 0.000025 Time 0.530271 -2024-05-12 05:59:31,573 - Epoch: [180][ 813/ 813] Overall Loss 1.176991 Objective Loss 1.176991 LR 0.000025 Time 0.528261 -2024-05-12 05:59:31,599 - --- validate (epoch=180)----------- -2024-05-12 05:59:31,600 - 3250 samples (16 per mini-batch) -2024-05-12 05:59:31,601 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 06:00:26,046 - Epoch: [180][ 100/ 204] Loss 1.270338 mAP 0.895910 -2024-05-12 06:01:18,920 - Epoch: [180][ 200/ 204] Loss 1.286097 mAP 0.896576 -2024-05-12 06:01:19,560 - Epoch: [180][ 204/ 204] Loss 1.292334 mAP 0.896549 -2024-05-12 06:01:19,585 - ==> mAP: 0.89655 Loss: 1.292 - -2024-05-12 06:01:19,588 - ==> Best [mAP: 0.898425 vloss: 1.308378 Params: 368352 on epoch: 109] -2024-05-12 06:01:19,588 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 06:01:19,608 - - -2024-05-12 06:01:19,608 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 06:02:14,916 - Epoch: [181][ 100/ 813] Overall Loss 1.129084 Objective Loss 1.129084 LR 0.000025 Time 0.553060 -2024-05-12 06:03:08,702 - Epoch: [181][ 200/ 813] Overall Loss 1.142895 Objective Loss 1.142895 LR 0.000025 Time 0.545451 -2024-05-12 06:04:01,916 - Epoch: [181][ 300/ 813] Overall Loss 1.159463 Objective Loss 1.159463 LR 0.000025 Time 0.541009 -2024-05-12 06:04:52,860 - Epoch: [181][ 400/ 813] Overall Loss 1.168617 Objective Loss 1.168617 LR 0.000025 Time 0.533113 -2024-05-12 06:05:46,199 - Epoch: [181][ 500/ 813] Overall Loss 1.171092 Objective Loss 1.171092 LR 0.000025 Time 0.533166 -2024-05-12 06:06:38,631 - Epoch: [181][ 600/ 813] Overall Loss 1.170793 Objective Loss 1.170793 LR 0.000025 Time 0.531683 -2024-05-12 06:07:31,469 - Epoch: [181][ 700/ 813] Overall Loss 1.166589 Objective Loss 1.166589 LR 0.000025 Time 0.531209 -2024-05-12 06:08:25,276 - Epoch: [181][ 800/ 813] Overall Loss 1.167495 Objective Loss 1.167495 LR 0.000025 Time 0.532065 -2024-05-12 06:08:30,089 - Epoch: [181][ 813/ 813] Overall Loss 1.167045 Objective Loss 1.167045 LR 0.000025 Time 0.529471 -2024-05-12 06:08:30,115 - --- validate (epoch=181)----------- -2024-05-12 06:08:30,116 - 3250 samples (16 per mini-batch) -2024-05-12 06:08:30,117 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 06:09:26,385 - Epoch: [181][ 100/ 204] Loss 1.310435 mAP 0.898678 -2024-05-12 06:10:20,692 - Epoch: [181][ 200/ 204] Loss 1.304932 mAP 0.899133 -2024-05-12 06:10:21,312 - Epoch: [181][ 204/ 204] Loss 1.305753 mAP 0.899096 -2024-05-12 06:10:21,338 - ==> mAP: 0.89910 Loss: 1.306 - -2024-05-12 06:10:21,341 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 06:10:21,341 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 06:10:21,366 - - -2024-05-12 06:10:21,366 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 06:11:17,736 - Epoch: [182][ 100/ 813] Overall Loss 1.145339 Objective Loss 1.145339 LR 0.000025 Time 0.563682 -2024-05-12 06:12:10,525 - Epoch: [182][ 200/ 813] Overall Loss 1.154217 Objective Loss 1.154217 LR 0.000025 Time 0.545779 -2024-05-12 06:13:02,844 - Epoch: [182][ 300/ 813] Overall Loss 1.162362 Objective Loss 1.162362 LR 0.000025 Time 0.538243 -2024-05-12 06:13:53,484 - Epoch: [182][ 400/ 813] Overall Loss 1.170999 Objective Loss 1.170999 LR 0.000025 Time 0.530277 -2024-05-12 06:14:45,934 - Epoch: [182][ 500/ 813] Overall Loss 1.175616 Objective Loss 1.175616 LR 0.000025 Time 0.529120 -2024-05-12 06:15:41,131 - Epoch: [182][ 600/ 813] Overall Loss 1.177997 Objective Loss 1.177997 LR 0.000025 Time 0.532926 -2024-05-12 06:16:35,114 - Epoch: [182][ 700/ 813] Overall Loss 1.177549 Objective Loss 1.177549 LR 0.000025 Time 0.533909 -2024-05-12 06:17:27,832 - Epoch: [182][ 800/ 813] Overall Loss 1.179830 Objective Loss 1.179830 LR 0.000025 Time 0.533067 -2024-05-12 06:17:32,500 - Epoch: [182][ 813/ 813] Overall Loss 1.178819 Objective Loss 1.178819 LR 0.000025 Time 0.530284 -2024-05-12 06:17:32,525 - --- validate (epoch=182)----------- -2024-05-12 06:17:32,526 - 3250 samples (16 per mini-batch) -2024-05-12 06:17:32,527 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 06:18:28,445 - Epoch: [182][ 100/ 204] Loss 1.312202 mAP 0.886019 -2024-05-12 06:19:20,202 - Epoch: [182][ 200/ 204] Loss 1.306271 mAP 0.886187 -2024-05-12 06:19:20,945 - Epoch: [182][ 204/ 204] Loss 1.306689 mAP 0.886171 -2024-05-12 06:19:20,970 - ==> mAP: 0.88617 Loss: 1.307 - -2024-05-12 06:19:20,973 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 06:19:20,973 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 06:19:20,994 - - -2024-05-12 06:19:20,994 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 06:20:16,272 - Epoch: [183][ 100/ 813] Overall Loss 1.132989 Objective Loss 1.132989 LR 0.000025 Time 0.552757 -2024-05-12 06:21:11,875 - Epoch: [183][ 200/ 813] Overall Loss 1.141478 Objective Loss 1.141478 LR 0.000025 Time 0.554375 -2024-05-12 06:22:04,456 - Epoch: [183][ 300/ 813] Overall Loss 1.144227 Objective Loss 1.144227 LR 0.000025 Time 0.544840 -2024-05-12 06:22:55,677 - Epoch: [183][ 400/ 813] Overall Loss 1.155445 Objective Loss 1.155445 LR 0.000025 Time 0.536679 -2024-05-12 06:23:48,369 - Epoch: [183][ 500/ 813] Overall Loss 1.160966 Objective Loss 1.160966 LR 0.000025 Time 0.534721 -2024-05-12 06:24:41,145 - Epoch: [183][ 600/ 813] Overall Loss 1.162742 Objective Loss 1.162742 LR 0.000025 Time 0.533557 -2024-05-12 06:25:32,340 - Epoch: [183][ 700/ 813] Overall Loss 1.165476 Objective Loss 1.165476 LR 0.000025 Time 0.530468 -2024-05-12 06:26:26,094 - Epoch: [183][ 800/ 813] Overall Loss 1.169998 Objective Loss 1.169998 LR 0.000025 Time 0.531351 -2024-05-12 06:26:31,193 - Epoch: [183][ 813/ 813] Overall Loss 1.170361 Objective Loss 1.170361 LR 0.000025 Time 0.529126 -2024-05-12 06:26:31,220 - --- validate (epoch=183)----------- -2024-05-12 06:26:31,221 - 3250 samples (16 per mini-batch) -2024-05-12 06:26:31,222 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 06:27:25,744 - Epoch: [183][ 100/ 204] Loss 1.304694 mAP 0.879668 -2024-05-12 06:28:19,257 - Epoch: [183][ 200/ 204] Loss 1.305894 mAP 0.879388 -2024-05-12 06:28:19,560 - Epoch: [183][ 204/ 204] Loss 1.307799 mAP 0.879395 -2024-05-12 06:28:19,586 - ==> mAP: 0.87940 Loss: 1.308 - -2024-05-12 06:28:19,589 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 06:28:19,589 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 06:28:19,609 - - -2024-05-12 06:28:19,609 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 06:29:14,923 - Epoch: [184][ 100/ 813] Overall Loss 1.116420 Objective Loss 1.116420 LR 0.000025 Time 0.553113 -2024-05-12 06:30:07,695 - Epoch: [184][ 200/ 813] Overall Loss 1.139996 Objective Loss 1.139996 LR 0.000025 Time 0.540407 -2024-05-12 06:31:01,368 - Epoch: [184][ 300/ 813] Overall Loss 1.153350 Objective Loss 1.153350 LR 0.000025 Time 0.539177 -2024-05-12 06:31:52,576 - Epoch: [184][ 400/ 813] Overall Loss 1.157024 Objective Loss 1.157024 LR 0.000025 Time 0.532398 -2024-05-12 06:32:44,684 - Epoch: [184][ 500/ 813] Overall Loss 1.162273 Objective Loss 1.162273 LR 0.000025 Time 0.530120 -2024-05-12 06:33:39,324 - Epoch: [184][ 600/ 813] Overall Loss 1.164421 Objective Loss 1.164421 LR 0.000025 Time 0.532832 -2024-05-12 06:34:31,560 - Epoch: [184][ 700/ 813] Overall Loss 1.166124 Objective Loss 1.166124 LR 0.000025 Time 0.531330 -2024-05-12 06:35:24,174 - Epoch: [184][ 800/ 813] Overall Loss 1.165254 Objective Loss 1.165254 LR 0.000025 Time 0.530678 -2024-05-12 06:35:29,883 - Epoch: [184][ 813/ 813] Overall Loss 1.165702 Objective Loss 1.165702 LR 0.000025 Time 0.529212 -2024-05-12 06:35:29,911 - --- validate (epoch=184)----------- -2024-05-12 06:35:29,911 - 3250 samples (16 per mini-batch) -2024-05-12 06:35:29,913 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 06:36:25,048 - Epoch: [184][ 100/ 204] Loss 1.314580 mAP 0.876879 -2024-05-12 06:37:17,691 - Epoch: [184][ 200/ 204] Loss 1.296300 mAP 0.877105 -2024-05-12 06:37:18,064 - Epoch: [184][ 204/ 204] Loss 1.298195 mAP 0.886798 -2024-05-12 06:37:18,090 - ==> mAP: 0.88680 Loss: 1.298 - -2024-05-12 06:37:18,093 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 06:37:18,093 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 06:37:18,113 - - -2024-05-12 06:37:18,114 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 06:38:13,557 - Epoch: [185][ 100/ 813] Overall Loss 1.137857 Objective Loss 1.137857 LR 0.000025 Time 0.554412 -2024-05-12 06:39:06,592 - Epoch: [185][ 200/ 813] Overall Loss 1.153895 Objective Loss 1.153895 LR 0.000025 Time 0.542375 -2024-05-12 06:39:59,034 - Epoch: [185][ 300/ 813] Overall Loss 1.169614 Objective Loss 1.169614 LR 0.000025 Time 0.536384 -2024-05-12 06:40:50,559 - Epoch: [185][ 400/ 813] Overall Loss 1.169455 Objective Loss 1.169455 LR 0.000025 Time 0.531096 -2024-05-12 06:41:43,435 - Epoch: [185][ 500/ 813] Overall Loss 1.171091 Objective Loss 1.171091 LR 0.000025 Time 0.530626 -2024-05-12 06:42:37,251 - Epoch: [185][ 600/ 813] Overall Loss 1.173226 Objective Loss 1.173226 LR 0.000025 Time 0.531876 -2024-05-12 06:43:29,002 - Epoch: [185][ 700/ 813] Overall Loss 1.172236 Objective Loss 1.172236 LR 0.000025 Time 0.529821 -2024-05-12 06:44:22,773 - Epoch: [185][ 800/ 813] Overall Loss 1.170659 Objective Loss 1.170659 LR 0.000025 Time 0.530806 -2024-05-12 06:44:27,412 - Epoch: [185][ 813/ 813] Overall Loss 1.169961 Objective Loss 1.169961 LR 0.000025 Time 0.528024 -2024-05-12 06:44:27,439 - --- validate (epoch=185)----------- -2024-05-12 06:44:27,439 - 3250 samples (16 per mini-batch) -2024-05-12 06:44:27,441 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 06:45:23,911 - Epoch: [185][ 100/ 204] Loss 1.315438 mAP 0.876854 -2024-05-12 06:46:17,525 - Epoch: [185][ 200/ 204] Loss 1.307326 mAP 0.885121 -2024-05-12 06:46:17,740 - Epoch: [185][ 204/ 204] Loss 1.305895 mAP 0.885104 -2024-05-12 06:46:17,768 - ==> mAP: 0.88510 Loss: 1.306 - -2024-05-12 06:46:17,771 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 06:46:17,771 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 06:46:17,791 - - -2024-05-12 06:46:17,791 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 06:47:13,762 - Epoch: [186][ 100/ 813] Overall Loss 1.130189 Objective Loss 1.130189 LR 0.000025 Time 0.559686 -2024-05-12 06:48:07,541 - Epoch: [186][ 200/ 813] Overall Loss 1.157556 Objective Loss 1.157556 LR 0.000025 Time 0.548730 -2024-05-12 06:48:59,249 - Epoch: [186][ 300/ 813] Overall Loss 1.166983 Objective Loss 1.166983 LR 0.000025 Time 0.538175 -2024-05-12 06:49:50,581 - Epoch: [186][ 400/ 813] Overall Loss 1.169418 Objective Loss 1.169418 LR 0.000025 Time 0.531950 -2024-05-12 06:50:42,587 - Epoch: [186][ 500/ 813] Overall Loss 1.168086 Objective Loss 1.168086 LR 0.000025 Time 0.529570 -2024-05-12 06:51:36,458 - Epoch: [186][ 600/ 813] Overall Loss 1.165204 Objective Loss 1.165204 LR 0.000025 Time 0.531087 -2024-05-12 06:52:29,785 - Epoch: [186][ 700/ 813] Overall Loss 1.165913 Objective Loss 1.165913 LR 0.000025 Time 0.531397 -2024-05-12 06:53:22,591 - Epoch: [186][ 800/ 813] Overall Loss 1.168090 Objective Loss 1.168090 LR 0.000025 Time 0.530977 -2024-05-12 06:53:27,254 - Epoch: [186][ 813/ 813] Overall Loss 1.166891 Objective Loss 1.166891 LR 0.000025 Time 0.528222 -2024-05-12 06:53:27,280 - --- validate (epoch=186)----------- -2024-05-12 06:53:27,281 - 3250 samples (16 per mini-batch) -2024-05-12 06:53:27,282 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 06:54:21,048 - Epoch: [186][ 100/ 204] Loss 1.299064 mAP 0.878561 -2024-05-12 06:55:15,797 - Epoch: [186][ 200/ 204] Loss 1.309066 mAP 0.887326 -2024-05-12 06:55:16,207 - Epoch: [186][ 204/ 204] Loss 1.319784 mAP 0.887237 -2024-05-12 06:55:16,233 - ==> mAP: 0.88724 Loss: 1.320 - -2024-05-12 06:55:16,236 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 06:55:16,236 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 06:55:16,256 - - -2024-05-12 06:55:16,256 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 06:56:11,480 - Epoch: [187][ 100/ 813] Overall Loss 1.141778 Objective Loss 1.141778 LR 0.000025 Time 0.552215 -2024-05-12 06:57:06,227 - Epoch: [187][ 200/ 813] Overall Loss 1.152162 Objective Loss 1.152162 LR 0.000025 Time 0.549836 -2024-05-12 06:57:59,612 - Epoch: [187][ 300/ 813] Overall Loss 1.165276 Objective Loss 1.165276 LR 0.000025 Time 0.544502 -2024-05-12 06:58:49,383 - Epoch: [187][ 400/ 813] Overall Loss 1.170388 Objective Loss 1.170388 LR 0.000025 Time 0.532800 -2024-05-12 06:59:41,309 - Epoch: [187][ 500/ 813] Overall Loss 1.171370 Objective Loss 1.171370 LR 0.000025 Time 0.530084 -2024-05-12 07:00:34,389 - Epoch: [187][ 600/ 813] Overall Loss 1.172820 Objective Loss 1.172820 LR 0.000025 Time 0.530196 -2024-05-12 07:01:26,565 - Epoch: [187][ 700/ 813] Overall Loss 1.171167 Objective Loss 1.171167 LR 0.000025 Time 0.528989 -2024-05-12 07:02:18,584 - Epoch: [187][ 800/ 813] Overall Loss 1.170254 Objective Loss 1.170254 LR 0.000025 Time 0.527887 -2024-05-12 07:02:24,956 - Epoch: [187][ 813/ 813] Overall Loss 1.169392 Objective Loss 1.169392 LR 0.000025 Time 0.527280 -2024-05-12 07:02:24,990 - --- validate (epoch=187)----------- -2024-05-12 07:02:24,991 - 3250 samples (16 per mini-batch) -2024-05-12 07:02:24,993 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 07:03:19,404 - Epoch: [187][ 100/ 204] Loss 1.330402 mAP 0.879031 -2024-05-12 07:04:11,155 - Epoch: [187][ 200/ 204] Loss 1.316026 mAP 0.887900 -2024-05-12 07:04:12,214 - Epoch: [187][ 204/ 204] Loss 1.317055 mAP 0.887847 -2024-05-12 07:04:12,239 - ==> mAP: 0.88785 Loss: 1.317 - -2024-05-12 07:04:12,242 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 07:04:12,243 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 07:04:12,263 - - -2024-05-12 07:04:12,263 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 07:05:07,354 - Epoch: [188][ 100/ 813] Overall Loss 1.139670 Objective Loss 1.139670 LR 0.000025 Time 0.550891 -2024-05-12 07:06:01,363 - Epoch: [188][ 200/ 813] Overall Loss 1.137031 Objective Loss 1.137031 LR 0.000025 Time 0.545466 -2024-05-12 07:06:54,192 - Epoch: [188][ 300/ 813] Overall Loss 1.157276 Objective Loss 1.157276 LR 0.000025 Time 0.539735 -2024-05-12 07:07:43,601 - Epoch: [188][ 400/ 813] Overall Loss 1.164840 Objective Loss 1.164840 LR 0.000025 Time 0.528321 -2024-05-12 07:08:34,934 - Epoch: [188][ 500/ 813] Overall Loss 1.165908 Objective Loss 1.165908 LR 0.000025 Time 0.525315 -2024-05-12 07:09:29,652 - Epoch: [188][ 600/ 813] Overall Loss 1.166918 Objective Loss 1.166918 LR 0.000025 Time 0.528950 -2024-05-12 07:10:22,933 - Epoch: [188][ 700/ 813] Overall Loss 1.170955 Objective Loss 1.170955 LR 0.000025 Time 0.529499 -2024-05-12 07:11:14,490 - Epoch: [188][ 800/ 813] Overall Loss 1.169617 Objective Loss 1.169617 LR 0.000025 Time 0.527752 -2024-05-12 07:11:19,444 - Epoch: [188][ 813/ 813] Overall Loss 1.169717 Objective Loss 1.169717 LR 0.000025 Time 0.525406 -2024-05-12 07:11:19,471 - --- validate (epoch=188)----------- -2024-05-12 07:11:19,471 - 3250 samples (16 per mini-batch) -2024-05-12 07:11:19,473 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 07:12:15,372 - Epoch: [188][ 100/ 204] Loss 1.323378 mAP 0.876014 -2024-05-12 07:13:07,191 - Epoch: [188][ 200/ 204] Loss 1.326981 mAP 0.875837 -2024-05-12 07:13:07,491 - Epoch: [188][ 204/ 204] Loss 1.325551 mAP 0.875825 -2024-05-12 07:13:07,515 - ==> mAP: 0.87583 Loss: 1.326 - -2024-05-12 07:13:07,518 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 07:13:07,518 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 07:13:07,539 - - -2024-05-12 07:13:07,539 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 07:14:02,146 - Epoch: [189][ 100/ 813] Overall Loss 1.148902 Objective Loss 1.148902 LR 0.000025 Time 0.546045 -2024-05-12 07:14:56,941 - Epoch: [189][ 200/ 813] Overall Loss 1.152425 Objective Loss 1.152425 LR 0.000025 Time 0.546993 -2024-05-12 07:15:49,247 - Epoch: [189][ 300/ 813] Overall Loss 1.162947 Objective Loss 1.162947 LR 0.000025 Time 0.539010 -2024-05-12 07:16:39,175 - Epoch: [189][ 400/ 813] Overall Loss 1.166033 Objective Loss 1.166033 LR 0.000025 Time 0.529067 -2024-05-12 07:17:31,795 - Epoch: [189][ 500/ 813] Overall Loss 1.166629 Objective Loss 1.166629 LR 0.000025 Time 0.528493 -2024-05-12 07:18:26,401 - Epoch: [189][ 600/ 813] Overall Loss 1.165838 Objective Loss 1.165838 LR 0.000025 Time 0.531417 -2024-05-12 07:19:19,345 - Epoch: [189][ 700/ 813] Overall Loss 1.167460 Objective Loss 1.167460 LR 0.000025 Time 0.531133 -2024-05-12 07:20:12,095 - Epoch: [189][ 800/ 813] Overall Loss 1.167894 Objective Loss 1.167894 LR 0.000025 Time 0.530677 -2024-05-12 07:20:17,193 - Epoch: [189][ 813/ 813] Overall Loss 1.168193 Objective Loss 1.168193 LR 0.000025 Time 0.528455 -2024-05-12 07:20:17,219 - --- validate (epoch=189)----------- -2024-05-12 07:20:17,219 - 3250 samples (16 per mini-batch) -2024-05-12 07:20:17,220 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 07:21:12,266 - Epoch: [189][ 100/ 204] Loss 1.286759 mAP 0.873492 -2024-05-12 07:22:06,832 - Epoch: [189][ 200/ 204] Loss 1.287138 mAP 0.874725 -2024-05-12 07:22:07,231 - Epoch: [189][ 204/ 204] Loss 1.288260 mAP 0.874792 -2024-05-12 07:22:07,257 - ==> mAP: 0.87479 Loss: 1.288 - -2024-05-12 07:22:07,262 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 07:22:07,262 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 07:22:07,273 - - -2024-05-12 07:22:07,273 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 07:23:03,080 - Epoch: [190][ 100/ 813] Overall Loss 1.141750 Objective Loss 1.141750 LR 0.000025 Time 0.558041 -2024-05-12 07:23:56,935 - Epoch: [190][ 200/ 813] Overall Loss 1.151078 Objective Loss 1.151078 LR 0.000025 Time 0.548292 -2024-05-12 07:24:48,713 - Epoch: [190][ 300/ 813] Overall Loss 1.154972 Objective Loss 1.154972 LR 0.000025 Time 0.538097 -2024-05-12 07:25:39,399 - Epoch: [190][ 400/ 813] Overall Loss 1.159222 Objective Loss 1.159222 LR 0.000025 Time 0.530284 -2024-05-12 07:26:32,521 - Epoch: [190][ 500/ 813] Overall Loss 1.164072 Objective Loss 1.164072 LR 0.000025 Time 0.530468 -2024-05-12 07:27:26,309 - Epoch: [190][ 600/ 813] Overall Loss 1.164320 Objective Loss 1.164320 LR 0.000025 Time 0.531701 -2024-05-12 07:28:19,137 - Epoch: [190][ 700/ 813] Overall Loss 1.164987 Objective Loss 1.164987 LR 0.000025 Time 0.531210 -2024-05-12 07:29:13,040 - Epoch: [190][ 800/ 813] Overall Loss 1.167243 Objective Loss 1.167243 LR 0.000025 Time 0.532183 -2024-05-12 07:29:17,649 - Epoch: [190][ 813/ 813] Overall Loss 1.167264 Objective Loss 1.167264 LR 0.000025 Time 0.529338 -2024-05-12 07:29:17,675 - --- validate (epoch=190)----------- -2024-05-12 07:29:17,676 - 3250 samples (16 per mini-batch) -2024-05-12 07:29:17,678 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 07:30:12,532 - Epoch: [190][ 100/ 204] Loss 1.310531 mAP 0.867393 -2024-05-12 07:31:05,026 - Epoch: [190][ 200/ 204] Loss 1.324858 mAP 0.866908 -2024-05-12 07:31:05,734 - Epoch: [190][ 204/ 204] Loss 1.324290 mAP 0.866850 -2024-05-12 07:31:05,760 - ==> mAP: 0.86685 Loss: 1.324 - -2024-05-12 07:31:05,762 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 07:31:05,763 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 07:31:05,783 - - -2024-05-12 07:31:05,783 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 07:32:00,911 - Epoch: [191][ 100/ 813] Overall Loss 1.162927 Objective Loss 1.162927 LR 0.000025 Time 0.551260 -2024-05-12 07:32:54,781 - Epoch: [191][ 200/ 813] Overall Loss 1.156116 Objective Loss 1.156116 LR 0.000025 Time 0.544961 -2024-05-12 07:33:47,695 - Epoch: [191][ 300/ 813] Overall Loss 1.163257 Objective Loss 1.163257 LR 0.000025 Time 0.539672 -2024-05-12 07:34:36,735 - Epoch: [191][ 400/ 813] Overall Loss 1.171202 Objective Loss 1.171202 LR 0.000025 Time 0.527349 -2024-05-12 07:35:29,439 - Epoch: [191][ 500/ 813] Overall Loss 1.168922 Objective Loss 1.168922 LR 0.000025 Time 0.527284 -2024-05-12 07:36:23,105 - Epoch: [191][ 600/ 813] Overall Loss 1.168085 Objective Loss 1.168085 LR 0.000025 Time 0.528844 -2024-05-12 07:37:17,225 - Epoch: [191][ 700/ 813] Overall Loss 1.172642 Objective Loss 1.172642 LR 0.000025 Time 0.530608 -2024-05-12 07:38:09,301 - Epoch: [191][ 800/ 813] Overall Loss 1.172132 Objective Loss 1.172132 LR 0.000025 Time 0.529374 -2024-05-12 07:38:14,755 - Epoch: [191][ 813/ 813] Overall Loss 1.172840 Objective Loss 1.172840 LR 0.000025 Time 0.527612 -2024-05-12 07:38:14,783 - --- validate (epoch=191)----------- -2024-05-12 07:38:14,784 - 3250 samples (16 per mini-batch) -2024-05-12 07:38:14,785 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 07:39:10,786 - Epoch: [191][ 100/ 204] Loss 1.301525 mAP 0.899520 -2024-05-12 07:40:04,075 - Epoch: [191][ 200/ 204] Loss 1.297993 mAP 0.898600 -2024-05-12 07:40:04,778 - Epoch: [191][ 204/ 204] Loss 1.294997 mAP 0.898541 -2024-05-12 07:40:04,803 - ==> mAP: 0.89854 Loss: 1.295 - -2024-05-12 07:40:04,807 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 07:40:04,807 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 07:40:04,828 - - -2024-05-12 07:40:04,828 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 07:41:00,169 - Epoch: [192][ 100/ 813] Overall Loss 1.139595 Objective Loss 1.139595 LR 0.000025 Time 0.553385 -2024-05-12 07:41:54,031 - Epoch: [192][ 200/ 813] Overall Loss 1.147879 Objective Loss 1.147879 LR 0.000025 Time 0.545995 -2024-05-12 07:42:46,774 - Epoch: [192][ 300/ 813] Overall Loss 1.158927 Objective Loss 1.158927 LR 0.000025 Time 0.539800 -2024-05-12 07:43:37,958 - Epoch: [192][ 400/ 813] Overall Loss 1.169086 Objective Loss 1.169086 LR 0.000025 Time 0.532807 -2024-05-12 07:44:30,376 - Epoch: [192][ 500/ 813] Overall Loss 1.170527 Objective Loss 1.170527 LR 0.000025 Time 0.531077 -2024-05-12 07:45:25,174 - Epoch: [192][ 600/ 813] Overall Loss 1.173484 Objective Loss 1.173484 LR 0.000025 Time 0.533892 -2024-05-12 07:46:18,664 - Epoch: [192][ 700/ 813] Overall Loss 1.170779 Objective Loss 1.170779 LR 0.000025 Time 0.534034 -2024-05-12 07:47:10,989 - Epoch: [192][ 800/ 813] Overall Loss 1.171806 Objective Loss 1.171806 LR 0.000025 Time 0.532685 -2024-05-12 07:47:15,218 - Epoch: [192][ 813/ 813] Overall Loss 1.171276 Objective Loss 1.171276 LR 0.000025 Time 0.529368 -2024-05-12 07:47:15,245 - --- validate (epoch=192)----------- -2024-05-12 07:47:15,245 - 3250 samples (16 per mini-batch) -2024-05-12 07:47:15,246 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 07:48:12,962 - Epoch: [192][ 100/ 204] Loss 1.317350 mAP 0.887801 -2024-05-12 07:49:04,759 - Epoch: [192][ 200/ 204] Loss 1.297479 mAP 0.886742 -2024-05-12 07:49:05,301 - Epoch: [192][ 204/ 204] Loss 1.302256 mAP 0.886749 -2024-05-12 07:49:05,326 - ==> mAP: 0.88675 Loss: 1.302 - -2024-05-12 07:49:05,329 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 07:49:05,329 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 07:49:05,350 - - -2024-05-12 07:49:05,350 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 07:50:01,495 - Epoch: [193][ 100/ 813] Overall Loss 1.153408 Objective Loss 1.153408 LR 0.000025 Time 0.561429 -2024-05-12 07:50:53,872 - Epoch: [193][ 200/ 813] Overall Loss 1.158151 Objective Loss 1.158151 LR 0.000025 Time 0.542595 -2024-05-12 07:51:47,923 - Epoch: [193][ 300/ 813] Overall Loss 1.166789 Objective Loss 1.166789 LR 0.000025 Time 0.541891 -2024-05-12 07:52:39,182 - Epoch: [193][ 400/ 813] Overall Loss 1.168176 Objective Loss 1.168176 LR 0.000025 Time 0.534562 -2024-05-12 07:53:30,582 - Epoch: [193][ 500/ 813] Overall Loss 1.169647 Objective Loss 1.169647 LR 0.000025 Time 0.530448 -2024-05-12 07:54:21,521 - Epoch: [193][ 600/ 813] Overall Loss 1.169184 Objective Loss 1.169184 LR 0.000025 Time 0.526935 -2024-05-12 07:55:15,346 - Epoch: [193][ 700/ 813] Overall Loss 1.171400 Objective Loss 1.171400 LR 0.000025 Time 0.528548 -2024-05-12 07:56:07,941 - Epoch: [193][ 800/ 813] Overall Loss 1.171980 Objective Loss 1.171980 LR 0.000025 Time 0.528222 -2024-05-12 07:56:13,330 - Epoch: [193][ 813/ 813] Overall Loss 1.171173 Objective Loss 1.171173 LR 0.000025 Time 0.526398 -2024-05-12 07:56:13,357 - --- validate (epoch=193)----------- -2024-05-12 07:56:13,357 - 3250 samples (16 per mini-batch) -2024-05-12 07:56:13,358 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 07:57:08,794 - Epoch: [193][ 100/ 204] Loss 1.280210 mAP 0.878846 -2024-05-12 07:58:01,432 - Epoch: [193][ 200/ 204] Loss 1.290038 mAP 0.886558 -2024-05-12 07:58:01,982 - Epoch: [193][ 204/ 204] Loss 1.302150 mAP 0.886553 -2024-05-12 07:58:02,008 - ==> mAP: 0.88655 Loss: 1.302 - -2024-05-12 07:58:02,012 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 07:58:02,012 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 07:58:02,033 - - -2024-05-12 07:58:02,033 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 07:58:57,013 - Epoch: [194][ 100/ 813] Overall Loss 1.127590 Objective Loss 1.127590 LR 0.000025 Time 0.549777 -2024-05-12 07:59:51,602 - Epoch: [194][ 200/ 813] Overall Loss 1.144642 Objective Loss 1.144642 LR 0.000025 Time 0.547824 -2024-05-12 08:00:44,826 - Epoch: [194][ 300/ 813] Overall Loss 1.157992 Objective Loss 1.157992 LR 0.000025 Time 0.542618 -2024-05-12 08:01:35,384 - Epoch: [194][ 400/ 813] Overall Loss 1.157548 Objective Loss 1.157548 LR 0.000025 Time 0.533354 -2024-05-12 08:02:27,544 - Epoch: [194][ 500/ 813] Overall Loss 1.161345 Objective Loss 1.161345 LR 0.000025 Time 0.531001 -2024-05-12 08:03:20,902 - Epoch: [194][ 600/ 813] Overall Loss 1.161856 Objective Loss 1.161856 LR 0.000025 Time 0.531427 -2024-05-12 08:04:12,772 - Epoch: [194][ 700/ 813] Overall Loss 1.163218 Objective Loss 1.163218 LR 0.000025 Time 0.529607 -2024-05-12 08:05:04,519 - Epoch: [194][ 800/ 813] Overall Loss 1.164876 Objective Loss 1.164876 LR 0.000025 Time 0.528088 -2024-05-12 08:05:09,273 - Epoch: [194][ 813/ 813] Overall Loss 1.165064 Objective Loss 1.165064 LR 0.000025 Time 0.525490 -2024-05-12 08:05:09,298 - --- validate (epoch=194)----------- -2024-05-12 08:05:09,299 - 3250 samples (16 per mini-batch) -2024-05-12 08:05:09,301 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 08:06:05,172 - Epoch: [194][ 100/ 204] Loss 1.300071 mAP 0.883173 -2024-05-12 08:06:57,494 - Epoch: [194][ 200/ 204] Loss 1.305389 mAP 0.884085 -2024-05-12 08:06:57,862 - Epoch: [194][ 204/ 204] Loss 1.309084 mAP 0.884055 -2024-05-12 08:06:57,887 - ==> mAP: 0.88405 Loss: 1.309 - -2024-05-12 08:06:57,890 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 08:06:57,890 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 08:06:57,911 - - -2024-05-12 08:06:57,911 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 08:07:53,879 - Epoch: [195][ 100/ 813] Overall Loss 1.149280 Objective Loss 1.149280 LR 0.000025 Time 0.559660 -2024-05-12 08:08:46,909 - Epoch: [195][ 200/ 813] Overall Loss 1.162378 Objective Loss 1.162378 LR 0.000025 Time 0.544972 -2024-05-12 08:09:39,126 - Epoch: [195][ 300/ 813] Overall Loss 1.164149 Objective Loss 1.164149 LR 0.000025 Time 0.537366 -2024-05-12 08:10:29,285 - Epoch: [195][ 400/ 813] Overall Loss 1.167955 Objective Loss 1.167955 LR 0.000025 Time 0.528417 -2024-05-12 08:11:21,149 - Epoch: [195][ 500/ 813] Overall Loss 1.171400 Objective Loss 1.171400 LR 0.000025 Time 0.526459 -2024-05-12 08:12:14,235 - Epoch: [195][ 600/ 813] Overall Loss 1.172275 Objective Loss 1.172275 LR 0.000025 Time 0.527191 -2024-05-12 08:13:07,010 - Epoch: [195][ 700/ 813] Overall Loss 1.170971 Objective Loss 1.170971 LR 0.000025 Time 0.527265 -2024-05-12 08:13:59,846 - Epoch: [195][ 800/ 813] Overall Loss 1.171349 Objective Loss 1.171349 LR 0.000025 Time 0.527393 -2024-05-12 08:14:04,362 - Epoch: [195][ 813/ 813] Overall Loss 1.171262 Objective Loss 1.171262 LR 0.000025 Time 0.524514 -2024-05-12 08:14:04,388 - --- validate (epoch=195)----------- -2024-05-12 08:14:04,389 - 3250 samples (16 per mini-batch) -2024-05-12 08:14:04,390 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 08:14:58,610 - Epoch: [195][ 100/ 204] Loss 1.297586 mAP 0.896154 -2024-05-12 08:15:51,782 - Epoch: [195][ 200/ 204] Loss 1.314791 mAP 0.885514 -2024-05-12 08:15:52,166 - Epoch: [195][ 204/ 204] Loss 1.313729 mAP 0.885510 -2024-05-12 08:15:52,192 - ==> mAP: 0.88551 Loss: 1.314 - -2024-05-12 08:15:52,195 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 08:15:52,195 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 08:15:52,215 - - -2024-05-12 08:15:52,215 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 08:16:47,153 - Epoch: [196][ 100/ 813] Overall Loss 1.140930 Objective Loss 1.140930 LR 0.000025 Time 0.549361 -2024-05-12 08:17:41,522 - Epoch: [196][ 200/ 813] Overall Loss 1.161000 Objective Loss 1.161000 LR 0.000025 Time 0.546517 -2024-05-12 08:18:34,615 - Epoch: [196][ 300/ 813] Overall Loss 1.165436 Objective Loss 1.165436 LR 0.000025 Time 0.541315 -2024-05-12 08:19:24,932 - Epoch: [196][ 400/ 813] Overall Loss 1.173765 Objective Loss 1.173765 LR 0.000025 Time 0.531774 -2024-05-12 08:20:16,960 - Epoch: [196][ 500/ 813] Overall Loss 1.171372 Objective Loss 1.171372 LR 0.000025 Time 0.529472 -2024-05-12 08:21:10,482 - Epoch: [196][ 600/ 813] Overall Loss 1.174966 Objective Loss 1.174966 LR 0.000025 Time 0.530423 -2024-05-12 08:22:03,087 - Epoch: [196][ 700/ 813] Overall Loss 1.171483 Objective Loss 1.171483 LR 0.000025 Time 0.529797 -2024-05-12 08:22:55,206 - Epoch: [196][ 800/ 813] Overall Loss 1.168971 Objective Loss 1.168971 LR 0.000025 Time 0.528719 -2024-05-12 08:23:00,748 - Epoch: [196][ 813/ 813] Overall Loss 1.169421 Objective Loss 1.169421 LR 0.000025 Time 0.527081 -2024-05-12 08:23:00,775 - --- validate (epoch=196)----------- -2024-05-12 08:23:00,775 - 3250 samples (16 per mini-batch) -2024-05-12 08:23:00,777 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 08:23:56,727 - Epoch: [196][ 100/ 204] Loss 1.280746 mAP 0.875310 -2024-05-12 08:24:50,025 - Epoch: [196][ 200/ 204] Loss 1.294617 mAP 0.875612 -2024-05-12 08:24:50,867 - Epoch: [196][ 204/ 204] Loss 1.292837 mAP 0.875656 -2024-05-12 08:24:50,893 - ==> mAP: 0.87566 Loss: 1.293 - -2024-05-12 08:24:50,897 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 08:24:50,897 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 08:24:50,918 - - -2024-05-12 08:24:50,918 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 08:25:45,495 - Epoch: [197][ 100/ 813] Overall Loss 1.121144 Objective Loss 1.121144 LR 0.000025 Time 0.545746 -2024-05-12 08:26:38,512 - Epoch: [197][ 200/ 813] Overall Loss 1.139449 Objective Loss 1.139449 LR 0.000025 Time 0.537952 -2024-05-12 08:27:31,669 - Epoch: [197][ 300/ 813] Overall Loss 1.149605 Objective Loss 1.149605 LR 0.000025 Time 0.535818 -2024-05-12 08:28:21,773 - Epoch: [197][ 400/ 813] Overall Loss 1.156154 Objective Loss 1.156154 LR 0.000025 Time 0.527119 -2024-05-12 08:29:14,278 - Epoch: [197][ 500/ 813] Overall Loss 1.160077 Objective Loss 1.160077 LR 0.000025 Time 0.526698 -2024-05-12 08:30:07,118 - Epoch: [197][ 600/ 813] Overall Loss 1.164550 Objective Loss 1.164550 LR 0.000025 Time 0.526979 -2024-05-12 08:31:00,712 - Epoch: [197][ 700/ 813] Overall Loss 1.166013 Objective Loss 1.166013 LR 0.000025 Time 0.528221 -2024-05-12 08:31:54,323 - Epoch: [197][ 800/ 813] Overall Loss 1.167731 Objective Loss 1.167731 LR 0.000025 Time 0.529199 -2024-05-12 08:31:59,062 - Epoch: [197][ 813/ 813] Overall Loss 1.167745 Objective Loss 1.167745 LR 0.000025 Time 0.526566 -2024-05-12 08:31:59,088 - --- validate (epoch=197)----------- -2024-05-12 08:31:59,089 - 3250 samples (16 per mini-batch) -2024-05-12 08:31:59,090 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 08:32:54,414 - Epoch: [197][ 100/ 204] Loss 1.302627 mAP 0.885927 -2024-05-12 08:33:46,905 - Epoch: [197][ 200/ 204] Loss 1.295872 mAP 0.885620 -2024-05-12 08:33:47,519 - Epoch: [197][ 204/ 204] Loss 1.293196 mAP 0.885639 -2024-05-12 08:33:47,544 - ==> mAP: 0.88564 Loss: 1.293 - -2024-05-12 08:33:47,546 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 08:33:47,547 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 08:33:47,568 - - -2024-05-12 08:33:47,568 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 08:34:41,403 - Epoch: [198][ 100/ 813] Overall Loss 1.132204 Objective Loss 1.132204 LR 0.000025 Time 0.538332 -2024-05-12 08:35:36,474 - Epoch: [198][ 200/ 813] Overall Loss 1.144708 Objective Loss 1.144708 LR 0.000025 Time 0.544512 -2024-05-12 08:36:28,477 - Epoch: [198][ 300/ 813] Overall Loss 1.155357 Objective Loss 1.155357 LR 0.000025 Time 0.536344 -2024-05-12 08:37:18,515 - Epoch: [198][ 400/ 813] Overall Loss 1.161685 Objective Loss 1.161685 LR 0.000025 Time 0.527349 -2024-05-12 08:38:11,646 - Epoch: [198][ 500/ 813] Overall Loss 1.163226 Objective Loss 1.163226 LR 0.000025 Time 0.528138 -2024-05-12 08:39:06,544 - Epoch: [198][ 600/ 813] Overall Loss 1.164027 Objective Loss 1.164027 LR 0.000025 Time 0.531606 -2024-05-12 08:39:58,682 - Epoch: [198][ 700/ 813] Overall Loss 1.165585 Objective Loss 1.165585 LR 0.000025 Time 0.530143 -2024-05-12 08:40:51,217 - Epoch: [198][ 800/ 813] Overall Loss 1.166199 Objective Loss 1.166199 LR 0.000025 Time 0.529542 -2024-05-12 08:40:55,930 - Epoch: [198][ 813/ 813] Overall Loss 1.166085 Objective Loss 1.166085 LR 0.000025 Time 0.526871 -2024-05-12 08:40:55,957 - --- validate (epoch=198)----------- -2024-05-12 08:40:55,958 - 3250 samples (16 per mini-batch) -2024-05-12 08:40:55,959 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 08:41:51,174 - Epoch: [198][ 100/ 204] Loss 1.315704 mAP 0.876520 -2024-05-12 08:42:43,820 - Epoch: [198][ 200/ 204] Loss 1.307660 mAP 0.876014 -2024-05-12 08:42:44,414 - Epoch: [198][ 204/ 204] Loss 1.306681 mAP 0.876077 -2024-05-12 08:42:44,440 - ==> mAP: 0.87608 Loss: 1.307 - -2024-05-12 08:42:44,443 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 08:42:44,443 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 08:42:44,463 - - -2024-05-12 08:42:44,463 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) -2024-05-12 08:43:40,380 - Epoch: [199][ 100/ 813] Overall Loss 1.152401 Objective Loss 1.152401 LR 0.000025 Time 0.559142 -2024-05-12 08:44:33,278 - Epoch: [199][ 200/ 813] Overall Loss 1.152691 Objective Loss 1.152691 LR 0.000025 Time 0.544054 -2024-05-12 08:45:25,380 - Epoch: [199][ 300/ 813] Overall Loss 1.163875 Objective Loss 1.163875 LR 0.000025 Time 0.536371 -2024-05-12 08:46:16,304 - Epoch: [199][ 400/ 813] Overall Loss 1.168279 Objective Loss 1.168279 LR 0.000025 Time 0.529585 -2024-05-12 08:47:10,054 - Epoch: [199][ 500/ 813] Overall Loss 1.172100 Objective Loss 1.172100 LR 0.000025 Time 0.531164 -2024-05-12 08:48:02,559 - Epoch: [199][ 600/ 813] Overall Loss 1.169116 Objective Loss 1.169116 LR 0.000025 Time 0.530143 -2024-05-12 08:48:55,743 - Epoch: [199][ 700/ 813] Overall Loss 1.170772 Objective Loss 1.170772 LR 0.000025 Time 0.530382 -2024-05-12 08:49:45,732 - Epoch: [199][ 800/ 813] Overall Loss 1.173154 Objective Loss 1.173154 LR 0.000025 Time 0.526569 -2024-05-12 08:49:51,866 - Epoch: [199][ 813/ 813] Overall Loss 1.173389 Objective Loss 1.173389 LR 0.000025 Time 0.525693 -2024-05-12 08:49:51,891 - --- validate (epoch=199)----------- -2024-05-12 08:49:51,892 - 3250 samples (16 per mini-batch) -2024-05-12 08:49:51,893 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 08:50:46,968 - Epoch: [199][ 100/ 204] Loss 1.291166 mAP 0.876399 -2024-05-12 08:51:39,145 - Epoch: [199][ 200/ 204] Loss 1.306374 mAP 0.885319 -2024-05-12 08:51:40,068 - Epoch: [199][ 204/ 204] Loss 1.308403 mAP 0.885326 -2024-05-12 08:51:40,093 - ==> mAP: 0.88533 Loss: 1.308 - -2024-05-12 08:51:40,097 - ==> Best [mAP: 0.899096 vloss: 1.305753 Params: 368352 on epoch: 181] -2024-05-12 08:51:40,097 - Saving checkpoint to: 
logs/2024.05.11-025627/qat_checkpoint.pth.tar -2024-05-12 08:51:40,118 - --- test --------------------- -2024-05-12 08:51:40,118 - 3250 samples (16 per mini-batch) -2024-05-12 08:51:40,119 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} -2024-05-12 08:52:35,157 - Test: [ 100/ 204] Loss 1.316698 mAP 0.874996 -2024-05-12 08:53:28,260 - Test: [ 200/ 204] Loss 1.312490 mAP 0.875578 -2024-05-12 08:53:29,283 - Test: [ 204/ 204] Loss 1.313750 mAP 0.875623 -2024-05-12 08:53:29,309 - ==> mAP: 0.87562 Loss: 1.314 - -2024-05-12 08:53:29,315 - -2024-05-12 08:53:29,315 - Log file for this run: /home/ermanokman/repos/ai8x-training/logs/2024.05.11-025627/2024.05.11-025627.log +2025-05-15 13:23:08,265 - + +2025-05-15 13:23:08,265 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 13:24:02,823 - Epoch: [0][ 100/ 813] Overall Loss 10.124517 Objective Loss 10.124517 LR 0.000200 Time 0.545555 +2025-05-15 13:24:56,299 - Epoch: [0][ 200/ 813] Overall Loss 8.186104 Objective Loss 8.186104 LR 0.000200 Time 0.540149 +2025-05-15 13:25:47,890 - Epoch: [0][ 300/ 813] Overall Loss 7.214131 Objective Loss 7.214131 LR 0.000200 Time 0.532065 +2025-05-15 13:26:38,444 - Epoch: [0][ 400/ 813] Overall Loss 6.602227 Objective Loss 6.602227 LR 0.000200 Time 0.525431 +2025-05-15 13:27:31,238 - Epoch: [0][ 500/ 813] Overall Loss 6.152803 Objective Loss 6.152803 LR 0.000200 Time 0.525930 +2025-05-15 13:28:23,717 - Epoch: [0][ 600/ 813] Overall Loss 5.826249 Objective Loss 5.826249 LR 0.000200 Time 0.525725 +2025-05-15 13:29:15,643 - Epoch: [0][ 700/ 813] Overall Loss 5.566117 Objective Loss 5.566117 LR 0.000200 Time 0.524799 +2025-05-15 13:30:07,857 - Epoch: [0][ 800/ 813] Overall Loss 5.356024 Objective Loss 5.356024 LR 0.000200 Time 0.524464 +2025-05-15 13:30:12,720 - Epoch: [0][ 813/ 813] Overall Loss 5.332231 Objective Loss 5.332231 LR 0.000200 Time 0.522060 +2025-05-15 13:30:12,741 - --- validate 
(epoch=0)----------- +2025-05-15 13:30:12,742 - 3250 samples (16 per mini-batch) +2025-05-15 13:30:12,744 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 13:31:16,439 - Epoch: [0][ 100/ 204] Loss 3.797962 mAP 0.505031 +2025-05-15 13:32:16,407 - Epoch: [0][ 200/ 204] Loss 3.819586 mAP 0.504779 +2025-05-15 13:32:19,565 - Epoch: [0][ 204/ 204] Loss 3.825673 mAP 0.504014 +2025-05-15 13:32:19,594 - ==> mAP: 0.50401 Loss: 3.826 + +2025-05-15 13:32:19,599 - ==> Best [mAP: 0.504014 vloss: 3.825673 Params: 368352 on epoch: 0] +2025-05-15 13:32:19,599 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 13:32:19,637 - + +2025-05-15 13:32:19,637 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 13:33:16,096 - Epoch: [1][ 100/ 813] Overall Loss 3.719389 Objective Loss 3.719389 LR 0.000200 Time 0.564568 +2025-05-15 13:34:08,120 - Epoch: [1][ 200/ 813] Overall Loss 3.636197 Objective Loss 3.636197 LR 0.000200 Time 0.542376 +2025-05-15 13:35:02,383 - Epoch: [1][ 300/ 813] Overall Loss 3.577566 Objective Loss 3.577566 LR 0.000200 Time 0.542455 +2025-05-15 13:35:51,385 - Epoch: [1][ 400/ 813] Overall Loss 3.534075 Objective Loss 3.534075 LR 0.000200 Time 0.529341 +2025-05-15 13:36:44,437 - Epoch: [1][ 500/ 813] Overall Loss 3.487971 Objective Loss 3.487971 LR 0.000200 Time 0.529574 +2025-05-15 13:37:37,269 - Epoch: [1][ 600/ 813] Overall Loss 3.446655 Objective Loss 3.446655 LR 0.000200 Time 0.529363 +2025-05-15 13:38:29,090 - Epoch: [1][ 700/ 813] Overall Loss 3.410469 Objective Loss 3.410469 LR 0.000200 Time 0.527766 +2025-05-15 13:39:20,598 - Epoch: [1][ 800/ 813] Overall Loss 3.367450 Objective Loss 3.367450 LR 0.000200 Time 0.526179 +2025-05-15 13:39:25,633 - Epoch: [1][ 813/ 813] Overall Loss 3.363718 Objective Loss 3.363718 LR 0.000200 Time 0.523957 +2025-05-15 13:39:25,664 - --- validate (epoch=1)----------- 
+2025-05-15 13:39:25,664 - 3250 samples (16 per mini-batch) +2025-05-15 13:39:25,667 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 13:40:26,118 - Epoch: [1][ 100/ 204] Loss 3.170118 mAP 0.658413 +2025-05-15 13:41:22,571 - Epoch: [1][ 200/ 204] Loss 3.166556 mAP 0.657545 +2025-05-15 13:41:25,396 - Epoch: [1][ 204/ 204] Loss 3.170338 mAP 0.655380 +2025-05-15 13:41:25,423 - ==> mAP: 0.65538 Loss: 3.170 + +2025-05-15 13:41:25,427 - ==> Best [mAP: 0.655380 vloss: 3.170338 Params: 368352 on epoch: 1] +2025-05-15 13:41:25,427 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 13:41:25,475 - + +2025-05-15 13:41:25,475 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 13:42:19,350 - Epoch: [2][ 100/ 813] Overall Loss 3.088835 Objective Loss 3.088835 LR 0.000200 Time 0.538721 +2025-05-15 13:43:13,117 - Epoch: [2][ 200/ 813] Overall Loss 3.042161 Objective Loss 3.042161 LR 0.000200 Time 0.538189 +2025-05-15 13:44:05,884 - Epoch: [2][ 300/ 813] Overall Loss 2.991517 Objective Loss 2.991517 LR 0.000200 Time 0.534679 +2025-05-15 13:44:56,318 - Epoch: [2][ 400/ 813] Overall Loss 2.969819 Objective Loss 2.969819 LR 0.000200 Time 0.527089 +2025-05-15 13:45:48,503 - Epoch: [2][ 500/ 813] Overall Loss 2.946748 Objective Loss 2.946748 LR 0.000200 Time 0.526038 +2025-05-15 13:46:42,798 - Epoch: [2][ 600/ 813] Overall Loss 2.924713 Objective Loss 2.924713 LR 0.000200 Time 0.528846 +2025-05-15 13:47:37,133 - Epoch: [2][ 700/ 813] Overall Loss 2.902045 Objective Loss 2.902045 LR 0.000200 Time 0.530916 +2025-05-15 13:48:27,810 - Epoch: [2][ 800/ 813] Overall Loss 2.875021 Objective Loss 2.875021 LR 0.000200 Time 0.527886 +2025-05-15 13:48:32,852 - Epoch: [2][ 813/ 813] Overall Loss 2.872612 Objective Loss 2.872612 LR 0.000200 Time 0.525648 +2025-05-15 13:48:32,887 - --- validate (epoch=2)----------- +2025-05-15 
13:48:32,888 - 3250 samples (16 per mini-batch) +2025-05-15 13:48:32,890 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 13:49:28,014 - Epoch: [2][ 100/ 204] Loss 2.769882 mAP 0.649721 +2025-05-15 13:50:19,564 - Epoch: [2][ 200/ 204] Loss 2.795195 mAP 0.649756 +2025-05-15 13:50:20,016 - Epoch: [2][ 204/ 204] Loss 2.797914 mAP 0.649648 +2025-05-15 13:50:20,047 - ==> mAP: 0.64965 Loss: 2.798 + +2025-05-15 13:50:20,051 - ==> Best [mAP: 0.655380 vloss: 3.170338 Params: 368352 on epoch: 1] +2025-05-15 13:50:20,051 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 13:50:20,096 - + +2025-05-15 13:50:20,096 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 13:51:14,943 - Epoch: [3][ 100/ 813] Overall Loss 2.644576 Objective Loss 2.644576 LR 0.000200 Time 0.548445 +2025-05-15 13:52:06,298 - Epoch: [3][ 200/ 813] Overall Loss 2.638438 Objective Loss 2.638438 LR 0.000200 Time 0.530987 +2025-05-15 13:52:59,258 - Epoch: [3][ 300/ 813] Overall Loss 2.610677 Objective Loss 2.610677 LR 0.000200 Time 0.530519 +2025-05-15 13:53:49,271 - Epoch: [3][ 400/ 813] Overall Loss 2.603392 Objective Loss 2.603392 LR 0.000200 Time 0.522918 +2025-05-15 13:54:41,456 - Epoch: [3][ 500/ 813] Overall Loss 2.582317 Objective Loss 2.582317 LR 0.000200 Time 0.522700 +2025-05-15 13:55:34,990 - Epoch: [3][ 600/ 813] Overall Loss 2.560227 Objective Loss 2.560227 LR 0.000200 Time 0.524804 +2025-05-15 13:56:28,257 - Epoch: [3][ 700/ 813] Overall Loss 2.543063 Objective Loss 2.543063 LR 0.000200 Time 0.525926 +2025-05-15 13:57:19,277 - Epoch: [3][ 800/ 813] Overall Loss 2.523634 Objective Loss 2.523634 LR 0.000200 Time 0.523959 +2025-05-15 13:57:25,570 - Epoch: [3][ 813/ 813] Overall Loss 2.521658 Objective Loss 2.521658 LR 0.000200 Time 0.523320 +2025-05-15 13:57:25,601 - --- validate (epoch=3)----------- +2025-05-15 13:57:25,602 - 3250 
samples (16 per mini-batch) +2025-05-15 13:57:25,604 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 13:58:20,490 - Epoch: [3][ 100/ 204] Loss 2.389732 mAP 0.756655 +2025-05-15 13:59:11,812 - Epoch: [3][ 200/ 204] Loss 2.387764 mAP 0.745733 +2025-05-15 13:59:12,641 - Epoch: [3][ 204/ 204] Loss 2.386518 mAP 0.745579 +2025-05-15 13:59:12,672 - ==> mAP: 0.74558 Loss: 2.387 + +2025-05-15 13:59:12,675 - ==> Best [mAP: 0.745579 vloss: 2.386518 Params: 368352 on epoch: 3] +2025-05-15 13:59:12,676 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 13:59:12,725 - + +2025-05-15 13:59:12,725 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 14:00:05,903 - Epoch: [4][ 100/ 813] Overall Loss 2.399078 Objective Loss 2.399078 LR 0.000200 Time 0.531750 +2025-05-15 14:00:59,380 - Epoch: [4][ 200/ 813] Overall Loss 2.372715 Objective Loss 2.372715 LR 0.000200 Time 0.533252 +2025-05-15 14:01:51,100 - Epoch: [4][ 300/ 813] Overall Loss 2.346172 Objective Loss 2.346172 LR 0.000200 Time 0.527895 +2025-05-15 14:02:41,117 - Epoch: [4][ 400/ 813] Overall Loss 2.338342 Objective Loss 2.338342 LR 0.000200 Time 0.520942 +2025-05-15 14:03:33,371 - Epoch: [4][ 500/ 813] Overall Loss 2.318029 Objective Loss 2.318029 LR 0.000200 Time 0.521259 +2025-05-15 14:04:26,233 - Epoch: [4][ 600/ 813] Overall Loss 2.305108 Objective Loss 2.305108 LR 0.000200 Time 0.522471 +2025-05-15 14:05:18,125 - Epoch: [4][ 700/ 813] Overall Loss 2.290316 Objective Loss 2.290316 LR 0.000200 Time 0.521961 +2025-05-15 14:06:10,825 - Epoch: [4][ 800/ 813] Overall Loss 2.270362 Objective Loss 2.270362 LR 0.000200 Time 0.522585 +2025-05-15 14:06:15,449 - Epoch: [4][ 813/ 813] Overall Loss 2.268993 Objective Loss 2.268993 LR 0.000200 Time 0.519916 +2025-05-15 14:06:15,480 - --- validate (epoch=4)----------- +2025-05-15 14:06:15,481 - 3250 samples (16 per 
mini-batch) +2025-05-15 14:06:15,483 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 14:07:09,617 - Epoch: [4][ 100/ 204] Loss 2.106288 mAP 0.771077 +2025-05-15 14:08:01,245 - Epoch: [4][ 200/ 204] Loss 2.105297 mAP 0.769971 +2025-05-15 14:08:02,244 - Epoch: [4][ 204/ 204] Loss 2.108900 mAP 0.769983 +2025-05-15 14:08:02,277 - ==> mAP: 0.76998 Loss: 2.109 + +2025-05-15 14:08:02,281 - ==> Best [mAP: 0.769983 vloss: 2.108900 Params: 368352 on epoch: 4] +2025-05-15 14:08:02,282 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 14:08:02,329 - + +2025-05-15 14:08:02,329 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 14:08:56,917 - Epoch: [5][ 100/ 813] Overall Loss 2.142011 Objective Loss 2.142011 LR 0.000200 Time 0.545850 +2025-05-15 14:09:50,885 - Epoch: [5][ 200/ 813] Overall Loss 2.115770 Objective Loss 2.115770 LR 0.000200 Time 0.542737 +2025-05-15 14:10:42,100 - Epoch: [5][ 300/ 813] Overall Loss 2.085581 Objective Loss 2.085581 LR 0.000200 Time 0.532524 +2025-05-15 14:11:31,298 - Epoch: [5][ 400/ 813] Overall Loss 2.074696 Objective Loss 2.074696 LR 0.000200 Time 0.522383 +2025-05-15 14:12:25,106 - Epoch: [5][ 500/ 813] Overall Loss 2.061228 Objective Loss 2.061228 LR 0.000200 Time 0.525473 +2025-05-15 14:13:18,498 - Epoch: [5][ 600/ 813] Overall Loss 2.054529 Objective Loss 2.054529 LR 0.000200 Time 0.526879 +2025-05-15 14:14:09,677 - Epoch: [5][ 700/ 813] Overall Loss 2.047435 Objective Loss 2.047435 LR 0.000200 Time 0.524720 +2025-05-15 14:15:01,322 - Epoch: [5][ 800/ 813] Overall Loss 2.031293 Objective Loss 2.031293 LR 0.000200 Time 0.523684 +2025-05-15 14:15:07,895 - Epoch: [5][ 813/ 813] Overall Loss 2.030353 Objective Loss 2.030353 LR 0.000200 Time 0.523395 +2025-05-15 14:15:07,921 - --- validate (epoch=5)----------- +2025-05-15 14:15:07,922 - 3250 samples (16 per mini-batch) 
+2025-05-15 14:15:07,924 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 14:16:02,757 - Epoch: [5][ 100/ 204] Loss 2.003645 mAP 0.731069 +2025-05-15 14:16:54,240 - Epoch: [5][ 200/ 204] Loss 1.988495 mAP 0.721366 +2025-05-15 14:16:55,139 - Epoch: [5][ 204/ 204] Loss 1.991320 mAP 0.721266 +2025-05-15 14:16:55,170 - ==> mAP: 0.72127 Loss: 1.991 + +2025-05-15 14:16:55,174 - ==> Best [mAP: 0.769983 vloss: 2.108900 Params: 368352 on epoch: 4] +2025-05-15 14:16:55,174 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 14:16:55,220 - + +2025-05-15 14:16:55,220 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 14:17:49,899 - Epoch: [6][ 100/ 813] Overall Loss 1.912766 Objective Loss 1.912766 LR 0.000200 Time 0.546763 +2025-05-15 14:18:43,545 - Epoch: [6][ 200/ 813] Overall Loss 1.902217 Objective Loss 1.902217 LR 0.000200 Time 0.541596 +2025-05-15 14:19:35,577 - Epoch: [6][ 300/ 813] Overall Loss 1.887340 Objective Loss 1.887340 LR 0.000200 Time 0.534501 +2025-05-15 14:20:26,116 - Epoch: [6][ 400/ 813] Overall Loss 1.889158 Objective Loss 1.889158 LR 0.000200 Time 0.527218 +2025-05-15 14:21:18,118 - Epoch: [6][ 500/ 813] Overall Loss 1.883047 Objective Loss 1.883047 LR 0.000200 Time 0.525774 +2025-05-15 14:22:10,859 - Epoch: [6][ 600/ 813] Overall Loss 1.874293 Objective Loss 1.874293 LR 0.000200 Time 0.526044 +2025-05-15 14:23:02,835 - Epoch: [6][ 700/ 813] Overall Loss 1.864791 Objective Loss 1.864791 LR 0.000200 Time 0.525144 +2025-05-15 14:23:55,011 - Epoch: [6][ 800/ 813] Overall Loss 1.850614 Objective Loss 1.850614 LR 0.000200 Time 0.524719 +2025-05-15 14:24:00,332 - Epoch: [6][ 813/ 813] Overall Loss 1.848163 Objective Loss 1.848163 LR 0.000200 Time 0.522874 +2025-05-15 14:24:00,367 - --- validate (epoch=6)----------- +2025-05-15 14:24:00,368 - 3250 samples (16 per mini-batch) +2025-05-15 
14:24:00,370 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 14:24:54,259 - Epoch: [6][ 100/ 204] Loss 1.780934 mAP 0.799874 +2025-05-15 14:25:46,556 - Epoch: [6][ 200/ 204] Loss 1.781053 mAP 0.799464 +2025-05-15 14:25:47,253 - Epoch: [6][ 204/ 204] Loss 1.778199 mAP 0.799474 +2025-05-15 14:25:47,283 - ==> mAP: 0.79947 Loss: 1.778 + +2025-05-15 14:25:47,287 - ==> Best [mAP: 0.799474 vloss: 1.778199 Params: 368352 on epoch: 6] +2025-05-15 14:25:47,287 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 14:25:47,335 - + +2025-05-15 14:25:47,335 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 14:26:42,295 - Epoch: [7][ 100/ 813] Overall Loss 1.774983 Objective Loss 1.774983 LR 0.000200 Time 0.549573 +2025-05-15 14:27:34,855 - Epoch: [7][ 200/ 813] Overall Loss 1.749810 Objective Loss 1.749810 LR 0.000200 Time 0.537555 +2025-05-15 14:28:27,566 - Epoch: [7][ 300/ 813] Overall Loss 1.739234 Objective Loss 1.739234 LR 0.000200 Time 0.534068 +2025-05-15 14:29:18,301 - Epoch: [7][ 400/ 813] Overall Loss 1.729411 Objective Loss 1.729411 LR 0.000200 Time 0.527385 +2025-05-15 14:30:09,429 - Epoch: [7][ 500/ 813] Overall Loss 1.718895 Objective Loss 1.718895 LR 0.000200 Time 0.524160 +2025-05-15 14:31:02,266 - Epoch: [7][ 600/ 813] Overall Loss 1.708443 Objective Loss 1.708443 LR 0.000200 Time 0.524859 +2025-05-15 14:31:54,403 - Epoch: [7][ 700/ 813] Overall Loss 1.701580 Objective Loss 1.701580 LR 0.000200 Time 0.524358 +2025-05-15 14:32:46,258 - Epoch: [7][ 800/ 813] Overall Loss 1.694590 Objective Loss 1.694590 LR 0.000200 Time 0.523630 +2025-05-15 14:32:51,705 - Epoch: [7][ 813/ 813] Overall Loss 1.694382 Objective Loss 1.694382 LR 0.000200 Time 0.521957 +2025-05-15 14:32:51,741 - --- validate (epoch=7)----------- +2025-05-15 14:32:51,742 - 3250 samples (16 per mini-batch) +2025-05-15 14:32:51,744 - 
{'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 14:33:45,939 - Epoch: [7][ 100/ 204] Loss 1.668945 mAP 0.799753 +2025-05-15 14:34:38,265 - Epoch: [7][ 200/ 204] Loss 1.671421 mAP 0.809377 +2025-05-15 14:34:39,322 - Epoch: [7][ 204/ 204] Loss 1.666772 mAP 0.809408 +2025-05-15 14:34:39,353 - ==> mAP: 0.80941 Loss: 1.667 + +2025-05-15 14:34:39,358 - ==> Best [mAP: 0.809408 vloss: 1.666772 Params: 368352 on epoch: 7] +2025-05-15 14:34:39,358 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 14:34:39,408 - + +2025-05-15 14:34:39,408 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 14:35:33,754 - Epoch: [8][ 100/ 813] Overall Loss 1.569718 Objective Loss 1.569718 LR 0.000200 Time 0.543347 +2025-05-15 14:36:27,462 - Epoch: [8][ 200/ 813] Overall Loss 1.597122 Objective Loss 1.597122 LR 0.000200 Time 0.540206 +2025-05-15 14:37:18,883 - Epoch: [8][ 300/ 813] Overall Loss 1.586963 Objective Loss 1.586963 LR 0.000200 Time 0.531533 +2025-05-15 14:38:09,914 - Epoch: [8][ 400/ 813] Overall Loss 1.593396 Objective Loss 1.593396 LR 0.000200 Time 0.526225 +2025-05-15 14:39:01,809 - Epoch: [8][ 500/ 813] Overall Loss 1.587569 Objective Loss 1.587569 LR 0.000200 Time 0.524766 +2025-05-15 14:39:55,766 - Epoch: [8][ 600/ 813] Overall Loss 1.583254 Objective Loss 1.583254 LR 0.000200 Time 0.527219 +2025-05-15 14:40:46,757 - Epoch: [8][ 700/ 813] Overall Loss 1.573624 Objective Loss 1.573624 LR 0.000200 Time 0.524743 +2025-05-15 14:41:39,173 - Epoch: [8][ 800/ 813] Overall Loss 1.562236 Objective Loss 1.562236 LR 0.000200 Time 0.524669 +2025-05-15 14:41:44,126 - Epoch: [8][ 813/ 813] Overall Loss 1.561563 Objective Loss 1.561563 LR 0.000200 Time 0.522371 +2025-05-15 14:41:44,157 - --- validate (epoch=8)----------- +2025-05-15 14:41:44,158 - 3250 samples (16 per mini-batch) +2025-05-15 14:41:44,160 - {'multi_box_loss': 
{'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 14:42:38,844 - Epoch: [8][ 100/ 204] Loss 1.528980 mAP 0.799423 +2025-05-15 14:43:31,046 - Epoch: [8][ 200/ 204] Loss 1.524667 mAP 0.799975 +2025-05-15 14:43:32,051 - Epoch: [8][ 204/ 204] Loss 1.526562 mAP 0.800001 +2025-05-15 14:43:32,081 - ==> mAP: 0.80000 Loss: 1.527 + +2025-05-15 14:43:32,085 - ==> Best [mAP: 0.809408 vloss: 1.666772 Params: 368352 on epoch: 7] +2025-05-15 14:43:32,085 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 14:43:32,131 - + +2025-05-15 14:43:32,131 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 14:44:27,137 - Epoch: [9][ 100/ 813] Overall Loss 1.478348 Objective Loss 1.478348 LR 0.000200 Time 0.550026 +2025-05-15 14:45:19,295 - Epoch: [9][ 200/ 813] Overall Loss 1.461713 Objective Loss 1.461713 LR 0.000200 Time 0.535798 +2025-05-15 14:46:11,387 - Epoch: [9][ 300/ 813] Overall Loss 1.456256 Objective Loss 1.456256 LR 0.000200 Time 0.530833 +2025-05-15 14:47:02,300 - Epoch: [9][ 400/ 813] Overall Loss 1.456536 Objective Loss 1.456536 LR 0.000200 Time 0.525404 +2025-05-15 14:47:53,234 - Epoch: [9][ 500/ 813] Overall Loss 1.453512 Objective Loss 1.453512 LR 0.000200 Time 0.522187 +2025-05-15 14:48:46,953 - Epoch: [9][ 600/ 813] Overall Loss 1.447939 Objective Loss 1.447939 LR 0.000200 Time 0.524685 +2025-05-15 14:49:39,232 - Epoch: [9][ 700/ 813] Overall Loss 1.438029 Objective Loss 1.438029 LR 0.000200 Time 0.524412 +2025-05-15 14:50:31,723 - Epoch: [9][ 800/ 813] Overall Loss 1.434761 Objective Loss 1.434761 LR 0.000200 Time 0.524468 +2025-05-15 14:50:37,485 - Epoch: [9][ 813/ 813] Overall Loss 1.433002 Objective Loss 1.433002 LR 0.000200 Time 0.523169 +2025-05-15 14:50:37,519 - --- validate (epoch=9)----------- +2025-05-15 14:50:37,520 - 3250 samples (16 per mini-batch) +2025-05-15 14:50:37,522 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 14:51:31,653 - Epoch: [9][ 100/ 204] Loss 1.462508 mAP 0.809202 +2025-05-15 14:52:24,044 - Epoch: [9][ 200/ 204] Loss 1.468938 mAP 0.799131 +2025-05-15 14:52:24,501 - Epoch: [9][ 204/ 204] Loss 1.469188 mAP 0.799115 +2025-05-15 14:52:24,532 - ==> mAP: 0.79912 Loss: 1.469 + +2025-05-15 14:52:24,536 - ==> Best [mAP: 0.809408 vloss: 1.666772 Params: 368352 on epoch: 7] +2025-05-15 14:52:24,536 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 14:52:24,582 - + +2025-05-15 14:52:24,582 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 14:53:19,280 - Epoch: [10][ 100/ 813] Overall Loss 1.382012 Objective Loss 1.382012 LR 0.000200 Time 0.546955 +2025-05-15 14:54:11,401 - Epoch: [10][ 200/ 813] Overall Loss 1.370194 Objective Loss 1.370194 LR 0.000200 Time 0.534075 +2025-05-15 14:55:04,778 - Epoch: [10][ 300/ 813] Overall Loss 1.353280 Objective Loss 1.353280 LR 0.000200 Time 0.533967 +2025-05-15 14:55:56,621 - Epoch: [10][ 400/ 813] Overall Loss 1.349511 Objective Loss 1.349511 LR 0.000200 Time 0.530079 +2025-05-15 14:56:47,784 - Epoch: [10][ 500/ 813] Overall Loss 1.349346 Objective Loss 1.349346 LR 0.000200 Time 0.526368 +2025-05-15 14:57:40,198 - Epoch: [10][ 600/ 813] Overall Loss 1.349142 Objective Loss 1.349142 LR 0.000200 Time 0.525993 +2025-05-15 14:58:34,987 - Epoch: [10][ 700/ 813] Overall Loss 1.346316 Objective Loss 1.346316 LR 0.000200 Time 0.529119 +2025-05-15 14:59:27,681 - Epoch: [10][ 800/ 813] Overall Loss 1.340818 Objective Loss 1.340818 LR 0.000200 Time 0.528844 +2025-05-15 14:59:33,182 - Epoch: [10][ 813/ 813] Overall Loss 1.340286 Objective Loss 1.340286 LR 0.000200 Time 0.527154 +2025-05-15 14:59:33,214 - --- validate (epoch=10)----------- +2025-05-15 14:59:33,215 - 3250 samples (16 per mini-batch) +2025-05-15 14:59:33,217 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 15:00:29,936 - Epoch: [10][ 100/ 204] Loss 1.355537 mAP 0.819785 +2025-05-15 15:01:20,818 - Epoch: [10][ 200/ 204] Loss 1.339168 mAP 0.829711 +2025-05-15 15:01:21,826 - Epoch: [10][ 204/ 204] Loss 1.339359 mAP 0.820055 +2025-05-15 15:01:21,854 - ==> mAP: 0.82005 Loss: 1.339 + +2025-05-15 15:01:21,858 - ==> Best [mAP: 0.820055 vloss: 1.339359 Params: 368352 on epoch: 10] +2025-05-15 15:01:21,858 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 15:01:21,900 - + +2025-05-15 15:01:21,900 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 15:02:15,937 - Epoch: [11][ 100/ 813] Overall Loss 1.256139 Objective Loss 1.256139 LR 0.000200 Time 0.540335 +2025-05-15 15:03:09,279 - Epoch: [11][ 200/ 813] Overall Loss 1.248950 Objective Loss 1.248950 LR 0.000200 Time 0.536868 +2025-05-15 15:04:01,480 - Epoch: [11][ 300/ 813] Overall Loss 1.264192 Objective Loss 1.264192 LR 0.000200 Time 0.531911 +2025-05-15 15:04:52,451 - Epoch: [11][ 400/ 813] Overall Loss 1.271330 Objective Loss 1.271330 LR 0.000200 Time 0.526357 +2025-05-15 15:05:45,003 - Epoch: [11][ 500/ 813] Overall Loss 1.272043 Objective Loss 1.272043 LR 0.000200 Time 0.526185 +2025-05-15 15:06:36,936 - Epoch: [11][ 600/ 813] Overall Loss 1.271176 Objective Loss 1.271176 LR 0.000200 Time 0.525040 +2025-05-15 15:07:30,896 - Epoch: [11][ 700/ 813] Overall Loss 1.267616 Objective Loss 1.267616 LR 0.000200 Time 0.527118 +2025-05-15 15:08:23,016 - Epoch: [11][ 800/ 813] Overall Loss 1.258691 Objective Loss 1.258691 LR 0.000200 Time 0.526375 +2025-05-15 15:08:28,334 - Epoch: [11][ 813/ 813] Overall Loss 1.258061 Objective Loss 1.258061 LR 0.000200 Time 0.524500 +2025-05-15 15:08:28,367 - --- validate (epoch=11)----------- +2025-05-15 15:08:28,368 - 3250 samples (16 per mini-batch) +2025-05-15 15:08:28,370 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 15:09:22,408 - Epoch: [11][ 100/ 204] Loss 1.275079 mAP 0.811494 +2025-05-15 15:10:15,623 - Epoch: [11][ 200/ 204] Loss 1.292349 mAP 0.801517 +2025-05-15 15:10:16,173 - Epoch: [11][ 204/ 204] Loss 1.289204 mAP 0.801475 +2025-05-15 15:10:16,200 - ==> mAP: 0.80148 Loss: 1.289 + +2025-05-15 15:10:16,350 - ==> Best [mAP: 0.820055 vloss: 1.339359 Params: 368352 on epoch: 10] +2025-05-15 15:10:16,350 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 15:10:16,395 - + +2025-05-15 15:10:16,395 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 15:11:10,100 - Epoch: [12][ 100/ 813] Overall Loss 1.212731 Objective Loss 1.212731 LR 0.000200 Time 0.537027 +2025-05-15 15:12:03,094 - Epoch: [12][ 200/ 813] Overall Loss 1.214297 Objective Loss 1.214297 LR 0.000200 Time 0.533473 +2025-05-15 15:12:55,899 - Epoch: [12][ 300/ 813] Overall Loss 1.212544 Objective Loss 1.212544 LR 0.000200 Time 0.531661 +2025-05-15 15:13:46,305 - Epoch: [12][ 400/ 813] Overall Loss 1.212756 Objective Loss 1.212756 LR 0.000200 Time 0.524728 +2025-05-15 15:14:37,797 - Epoch: [12][ 500/ 813] Overall Loss 1.201762 Objective Loss 1.201762 LR 0.000200 Time 0.522756 +2025-05-15 15:15:31,942 - Epoch: [12][ 600/ 813] Overall Loss 1.205018 Objective Loss 1.205018 LR 0.000200 Time 0.525869 +2025-05-15 15:16:24,489 - Epoch: [12][ 700/ 813] Overall Loss 1.208832 Objective Loss 1.208832 LR 0.000200 Time 0.525809 +2025-05-15 15:17:17,547 - Epoch: [12][ 800/ 813] Overall Loss 1.203371 Objective Loss 1.203371 LR 0.000200 Time 0.526403 +2025-05-15 15:17:22,809 - Epoch: [12][ 813/ 813] Overall Loss 1.203019 Objective Loss 1.203019 LR 0.000200 Time 0.524455 +2025-05-15 15:17:22,838 - --- validate (epoch=12)----------- +2025-05-15 15:17:22,839 - 3250 samples (16 per mini-batch) +2025-05-15 15:17:22,841 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 15:18:17,382 - Epoch: [12][ 100/ 204] Loss 1.253805 mAP 0.848920 +2025-05-15 15:19:09,858 - Epoch: [12][ 200/ 204] Loss 1.251445 mAP 0.839727 +2025-05-15 15:19:10,848 - Epoch: [12][ 204/ 204] Loss 1.251222 mAP 0.839713 +2025-05-15 15:19:10,875 - ==> mAP: 0.83971 Loss: 1.251 + +2025-05-15 15:19:10,879 - ==> Best [mAP: 0.839713 vloss: 1.251222 Params: 368352 on epoch: 12] +2025-05-15 15:19:10,879 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 15:19:10,927 - + +2025-05-15 15:19:10,927 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 15:20:05,934 - Epoch: [13][ 100/ 813] Overall Loss 1.147959 Objective Loss 1.147959 LR 0.000200 Time 0.550041 +2025-05-15 15:20:59,794 - Epoch: [13][ 200/ 813] Overall Loss 1.128463 Objective Loss 1.128463 LR 0.000200 Time 0.544311 +2025-05-15 15:21:51,638 - Epoch: [13][ 300/ 813] Overall Loss 1.131836 Objective Loss 1.131836 LR 0.000200 Time 0.535683 +2025-05-15 15:22:41,879 - Epoch: [13][ 400/ 813] Overall Loss 1.130161 Objective Loss 1.130161 LR 0.000200 Time 0.527361 +2025-05-15 15:23:33,887 - Epoch: [13][ 500/ 813] Overall Loss 1.126600 Objective Loss 1.126600 LR 0.000200 Time 0.525900 +2025-05-15 15:24:26,757 - Epoch: [13][ 600/ 813] Overall Loss 1.132102 Objective Loss 1.132102 LR 0.000200 Time 0.526365 +2025-05-15 15:25:19,593 - Epoch: [13][ 700/ 813] Overall Loss 1.128916 Objective Loss 1.128916 LR 0.000200 Time 0.526647 +2025-05-15 15:26:11,513 - Epoch: [13][ 800/ 813] Overall Loss 1.125522 Objective Loss 1.125522 LR 0.000200 Time 0.525714 +2025-05-15 15:26:16,208 - Epoch: [13][ 813/ 813] Overall Loss 1.123880 Objective Loss 1.123880 LR 0.000200 Time 0.523079 +2025-05-15 15:26:16,234 - --- validate (epoch=13)----------- +2025-05-15 15:26:16,234 - 3250 samples (16 per mini-batch) +2025-05-15 15:26:16,236 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 15:27:11,099 - Epoch: [13][ 100/ 204] Loss 1.157643 mAP 0.848827 +2025-05-15 15:28:03,006 - Epoch: [13][ 200/ 204] Loss 1.149642 mAP 0.849377 +2025-05-15 15:28:03,483 - Epoch: [13][ 204/ 204] Loss 1.152988 mAP 0.849197 +2025-05-15 15:28:03,510 - ==> mAP: 0.84920 Loss: 1.153 + +2025-05-15 15:28:03,515 - ==> Best [mAP: 0.849197 vloss: 1.152988 Params: 368352 on epoch: 13] +2025-05-15 15:28:03,515 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 15:28:03,565 - + +2025-05-15 15:28:03,565 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 15:28:58,606 - Epoch: [14][ 100/ 813] Overall Loss 1.095625 Objective Loss 1.095625 LR 0.000200 Time 0.550381 +2025-05-15 15:29:50,300 - Epoch: [14][ 200/ 813] Overall Loss 1.102102 Objective Loss 1.102102 LR 0.000200 Time 0.533656 +2025-05-15 15:30:43,037 - Epoch: [14][ 300/ 813] Overall Loss 1.091360 Objective Loss 1.091360 LR 0.000200 Time 0.531556 +2025-05-15 15:31:34,733 - Epoch: [14][ 400/ 813] Overall Loss 1.094689 Objective Loss 1.094689 LR 0.000200 Time 0.527903 +2025-05-15 15:32:26,169 - Epoch: [14][ 500/ 813] Overall Loss 1.088055 Objective Loss 1.088055 LR 0.000200 Time 0.525190 +2025-05-15 15:33:19,287 - Epoch: [14][ 600/ 813] Overall Loss 1.085448 Objective Loss 1.085448 LR 0.000200 Time 0.526180 +2025-05-15 15:34:11,582 - Epoch: [14][ 700/ 813] Overall Loss 1.088347 Objective Loss 1.088347 LR 0.000200 Time 0.525717 +2025-05-15 15:35:03,856 - Epoch: [14][ 800/ 813] Overall Loss 1.082870 Objective Loss 1.082870 LR 0.000200 Time 0.525339 +2025-05-15 15:35:09,513 - Epoch: [14][ 813/ 813] Overall Loss 1.084262 Objective Loss 1.084262 LR 0.000200 Time 0.523897 +2025-05-15 15:35:09,545 - --- validate (epoch=14)----------- +2025-05-15 15:35:09,546 - 3250 samples (16 per mini-batch) +2025-05-15 15:35:09,548 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 15:36:04,071 - Epoch: [14][ 100/ 204] Loss 1.182224 mAP 0.829646 +2025-05-15 15:36:56,291 - Epoch: [14][ 200/ 204] Loss 1.156715 mAP 0.830356 +2025-05-15 15:36:57,326 - Epoch: [14][ 204/ 204] Loss 1.158254 mAP 0.830380 +2025-05-15 15:36:57,358 - ==> mAP: 0.83038 Loss: 1.158 + +2025-05-15 15:36:57,362 - ==> Best [mAP: 0.849197 vloss: 1.152988 Params: 368352 on epoch: 13] +2025-05-15 15:36:57,363 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 15:36:57,407 - + +2025-05-15 15:36:57,407 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 15:37:53,169 - Epoch: [15][ 100/ 813] Overall Loss 1.043507 Objective Loss 1.043507 LR 0.000200 Time 0.557591 +2025-05-15 15:38:44,474 - Epoch: [15][ 200/ 813] Overall Loss 1.041009 Objective Loss 1.041009 LR 0.000200 Time 0.535109 +2025-05-15 15:39:37,017 - Epoch: [15][ 300/ 813] Overall Loss 1.052627 Objective Loss 1.052627 LR 0.000200 Time 0.531878 +2025-05-15 15:40:29,521 - Epoch: [15][ 400/ 813] Overall Loss 1.041482 Objective Loss 1.041482 LR 0.000200 Time 0.530158 +2025-05-15 15:41:19,989 - Epoch: [15][ 500/ 813] Overall Loss 1.044633 Objective Loss 1.044633 LR 0.000200 Time 0.525059 +2025-05-15 15:42:12,590 - Epoch: [15][ 600/ 813] Overall Loss 1.046219 Objective Loss 1.046219 LR 0.000200 Time 0.525214 +2025-05-15 15:43:06,072 - Epoch: [15][ 700/ 813] Overall Loss 1.049529 Objective Loss 1.049529 LR 0.000200 Time 0.526584 +2025-05-15 15:43:58,398 - Epoch: [15][ 800/ 813] Overall Loss 1.046883 Objective Loss 1.046883 LR 0.000200 Time 0.526167 +2025-05-15 15:44:03,534 - Epoch: [15][ 813/ 813] Overall Loss 1.048433 Objective Loss 1.048433 LR 0.000200 Time 0.524070 +2025-05-15 15:44:03,567 - --- validate (epoch=15)----------- +2025-05-15 15:44:03,567 - 3250 samples (16 per mini-batch) +2025-05-15 15:44:03,569 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 15:44:56,947 - Epoch: [15][ 100/ 204] Loss 1.075308 mAP 0.820902 +2025-05-15 15:45:50,676 - Epoch: [15][ 200/ 204] Loss 1.082926 mAP 0.830734 +2025-05-15 15:45:51,950 - Epoch: [15][ 204/ 204] Loss 1.086301 mAP 0.820903 +2025-05-15 15:45:51,979 - ==> mAP: 0.82090 Loss: 1.086 + +2025-05-15 15:45:51,983 - ==> Best [mAP: 0.849197 vloss: 1.152988 Params: 368352 on epoch: 13] +2025-05-15 15:45:51,983 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 15:45:52,029 - + +2025-05-15 15:45:52,029 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 15:46:45,943 - Epoch: [16][ 100/ 813] Overall Loss 0.998583 Objective Loss 0.998583 LR 0.000200 Time 0.539114 +2025-05-15 15:47:39,478 - Epoch: [16][ 200/ 813] Overall Loss 0.994015 Objective Loss 0.994015 LR 0.000200 Time 0.537221 +2025-05-15 15:48:32,180 - Epoch: [16][ 300/ 813] Overall Loss 1.018692 Objective Loss 1.018692 LR 0.000200 Time 0.533816 +2025-05-15 15:49:22,913 - Epoch: [16][ 400/ 813] Overall Loss 1.015606 Objective Loss 1.015606 LR 0.000200 Time 0.527191 +2025-05-15 15:50:15,659 - Epoch: [16][ 500/ 813] Overall Loss 1.011688 Objective Loss 1.011688 LR 0.000200 Time 0.527240 +2025-05-15 15:51:08,128 - Epoch: [16][ 600/ 813] Overall Loss 1.011280 Objective Loss 1.011280 LR 0.000200 Time 0.526813 +2025-05-15 15:52:01,428 - Epoch: [16][ 700/ 813] Overall Loss 1.014358 Objective Loss 1.014358 LR 0.000200 Time 0.527695 +2025-05-15 15:52:52,259 - Epoch: [16][ 800/ 813] Overall Loss 1.008607 Objective Loss 1.008607 LR 0.000200 Time 0.525260 +2025-05-15 15:52:57,140 - Epoch: [16][ 813/ 813] Overall Loss 1.009936 Objective Loss 1.009936 LR 0.000200 Time 0.522865 +2025-05-15 15:52:57,172 - --- validate (epoch=16)----------- +2025-05-15 15:52:57,173 - 3250 samples (16 per mini-batch) +2025-05-15 15:52:57,175 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 15:53:51,435 - Epoch: [16][ 100/ 204] Loss 1.012208 mAP 0.869335 +2025-05-15 15:54:42,919 - Epoch: [16][ 200/ 204] Loss 1.025243 mAP 0.859021 +2025-05-15 15:54:43,759 - Epoch: [16][ 204/ 204] Loss 1.024664 mAP 0.859069 +2025-05-15 15:54:43,790 - ==> mAP: 0.85907 Loss: 1.025 + +2025-05-15 15:54:43,795 - ==> Best [mAP: 0.859069 vloss: 1.024664 Params: 368352 on epoch: 16] +2025-05-15 15:54:43,795 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 15:54:43,844 - + +2025-05-15 15:54:43,844 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 15:55:38,264 - Epoch: [17][ 100/ 813] Overall Loss 0.986830 Objective Loss 0.986830 LR 0.000200 Time 0.544167 +2025-05-15 15:56:31,639 - Epoch: [17][ 200/ 813] Overall Loss 0.985585 Objective Loss 0.985585 LR 0.000200 Time 0.538952 +2025-05-15 15:57:24,530 - Epoch: [17][ 300/ 813] Overall Loss 0.984086 Objective Loss 0.984086 LR 0.000200 Time 0.535598 +2025-05-15 15:58:14,075 - Epoch: [17][ 400/ 813] Overall Loss 0.981641 Objective Loss 0.981641 LR 0.000200 Time 0.525557 +2025-05-15 15:59:04,738 - Epoch: [17][ 500/ 813] Overall Loss 0.985470 Objective Loss 0.985470 LR 0.000200 Time 0.521768 +2025-05-15 15:59:57,092 - Epoch: [17][ 600/ 813] Overall Loss 0.984207 Objective Loss 0.984207 LR 0.000200 Time 0.522047 +2025-05-15 16:00:52,148 - Epoch: [17][ 700/ 813] Overall Loss 0.985444 Objective Loss 0.985444 LR 0.000200 Time 0.526117 +2025-05-15 16:01:45,055 - Epoch: [17][ 800/ 813] Overall Loss 0.981604 Objective Loss 0.981604 LR 0.000200 Time 0.526476 +2025-05-15 16:01:49,479 - Epoch: [17][ 813/ 813] Overall Loss 0.983156 Objective Loss 0.983156 LR 0.000200 Time 0.523498 +2025-05-15 16:01:49,510 - --- validate (epoch=17)----------- +2025-05-15 16:01:49,511 - 3250 samples (16 per mini-batch) +2025-05-15 16:01:49,513 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 16:02:44,294 - Epoch: [17][ 100/ 204] Loss 0.994773 mAP 0.860326 +2025-05-15 16:03:34,940 - Epoch: [17][ 200/ 204] Loss 1.025991 mAP 0.859415 +2025-05-15 16:03:36,053 - Epoch: [17][ 204/ 204] Loss 1.026449 mAP 0.859286 +2025-05-15 16:03:36,083 - ==> mAP: 0.85929 Loss: 1.026 + +2025-05-15 16:03:36,087 - ==> Best [mAP: 0.859286 vloss: 1.026449 Params: 368352 on epoch: 17] +2025-05-15 16:03:36,087 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 16:03:36,136 - + +2025-05-15 16:03:36,137 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 16:04:30,219 - Epoch: [18][ 100/ 813] Overall Loss 0.954044 Objective Loss 0.954044 LR 0.000200 Time 0.540796 +2025-05-15 16:05:25,000 - Epoch: [18][ 200/ 813] Overall Loss 0.954627 Objective Loss 0.954627 LR 0.000200 Time 0.544281 +2025-05-15 16:06:16,479 - Epoch: [18][ 300/ 813] Overall Loss 0.946776 Objective Loss 0.946776 LR 0.000200 Time 0.534433 +2025-05-15 16:07:06,542 - Epoch: [18][ 400/ 813] Overall Loss 0.939368 Objective Loss 0.939368 LR 0.000200 Time 0.525978 +2025-05-15 16:07:59,881 - Epoch: [18][ 500/ 813] Overall Loss 0.938001 Objective Loss 0.938001 LR 0.000200 Time 0.527457 +2025-05-15 16:08:52,143 - Epoch: [18][ 600/ 813] Overall Loss 0.939419 Objective Loss 0.939419 LR 0.000200 Time 0.526649 +2025-05-15 16:09:45,620 - Epoch: [18][ 700/ 813] Overall Loss 0.943125 Objective Loss 0.943125 LR 0.000200 Time 0.527807 +2025-05-15 16:10:38,052 - Epoch: [18][ 800/ 813] Overall Loss 0.938659 Objective Loss 0.938659 LR 0.000200 Time 0.527364 +2025-05-15 16:10:42,470 - Epoch: [18][ 813/ 813] Overall Loss 0.938365 Objective Loss 0.938365 LR 0.000200 Time 0.524364 +2025-05-15 16:10:42,500 - --- validate (epoch=18)----------- +2025-05-15 16:10:42,501 - 3250 samples (16 per mini-batch) +2025-05-15 16:10:42,503 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 16:11:39,072 - Epoch: [18][ 100/ 204] Loss 0.996894 mAP 0.879663 +2025-05-15 16:12:30,834 - Epoch: [18][ 200/ 204] Loss 1.014814 mAP 0.869579 +2025-05-15 16:12:31,293 - Epoch: [18][ 204/ 204] Loss 1.016605 mAP 0.869596 +2025-05-15 16:12:31,325 - ==> mAP: 0.86960 Loss: 1.017 + +2025-05-15 16:12:31,329 - ==> Best [mAP: 0.869596 vloss: 1.016605 Params: 368352 on epoch: 18] +2025-05-15 16:12:31,329 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 16:12:31,378 - + +2025-05-15 16:12:31,379 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 16:13:26,264 - Epoch: [19][ 100/ 813] Overall Loss 0.890102 Objective Loss 0.890102 LR 0.000200 Time 0.548826 +2025-05-15 16:14:21,121 - Epoch: [19][ 200/ 813] Overall Loss 0.909521 Objective Loss 0.909521 LR 0.000200 Time 0.548687 +2025-05-15 16:15:12,818 - Epoch: [19][ 300/ 813] Overall Loss 0.913391 Objective Loss 0.913391 LR 0.000200 Time 0.538109 +2025-05-15 16:16:03,242 - Epoch: [19][ 400/ 813] Overall Loss 0.922029 Objective Loss 0.922029 LR 0.000200 Time 0.529634 +2025-05-15 16:16:54,721 - Epoch: [19][ 500/ 813] Overall Loss 0.918384 Objective Loss 0.918384 LR 0.000200 Time 0.526661 +2025-05-15 16:17:48,538 - Epoch: [19][ 600/ 813] Overall Loss 0.917417 Objective Loss 0.917417 LR 0.000200 Time 0.528577 +2025-05-15 16:18:40,693 - Epoch: [19][ 700/ 813] Overall Loss 0.924418 Objective Loss 0.924418 LR 0.000200 Time 0.527571 +2025-05-15 16:19:32,748 - Epoch: [19][ 800/ 813] Overall Loss 0.922585 Objective Loss 0.922585 LR 0.000200 Time 0.526690 +2025-05-15 16:19:37,441 - Epoch: [19][ 813/ 813] Overall Loss 0.923455 Objective Loss 0.923455 LR 0.000200 Time 0.524041 +2025-05-15 16:19:37,472 - --- validate (epoch=19)----------- +2025-05-15 16:19:37,473 - 3250 samples (16 per mini-batch) +2025-05-15 16:19:37,475 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 16:20:31,895 - Epoch: [19][ 100/ 204] Loss 1.003249 mAP 0.878976 +2025-05-15 16:21:25,351 - Epoch: [19][ 200/ 204] Loss 0.985129 mAP 0.879500 +2025-05-15 16:21:26,264 - Epoch: [19][ 204/ 204] Loss 0.983651 mAP 0.869750 +2025-05-15 16:21:26,295 - ==> mAP: 0.86975 Loss: 0.984 + +2025-05-15 16:21:26,300 - ==> Best [mAP: 0.869750 vloss: 0.983651 Params: 368352 on epoch: 19] +2025-05-15 16:21:26,300 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 16:21:26,350 - + +2025-05-15 16:21:26,350 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 16:22:21,860 - Epoch: [20][ 100/ 813] Overall Loss 0.864502 Objective Loss 0.864502 LR 0.000200 Time 0.555076 +2025-05-15 16:23:16,486 - Epoch: [20][ 200/ 813] Overall Loss 0.850059 Objective Loss 0.850059 LR 0.000200 Time 0.550660 +2025-05-15 16:24:07,493 - Epoch: [20][ 300/ 813] Overall Loss 0.856546 Objective Loss 0.856546 LR 0.000200 Time 0.537123 +2025-05-15 16:24:56,263 - Epoch: [20][ 400/ 813] Overall Loss 0.867714 Objective Loss 0.867714 LR 0.000200 Time 0.524764 +2025-05-15 16:25:47,999 - Epoch: [20][ 500/ 813] Overall Loss 0.870212 Objective Loss 0.870212 LR 0.000200 Time 0.523280 +2025-05-15 16:26:41,326 - Epoch: [20][ 600/ 813] Overall Loss 0.870546 Objective Loss 0.870546 LR 0.000200 Time 0.524941 +2025-05-15 16:27:33,209 - Epoch: [20][ 700/ 813] Overall Loss 0.878199 Objective Loss 0.878199 LR 0.000200 Time 0.524056 +2025-05-15 16:28:26,099 - Epoch: [20][ 800/ 813] Overall Loss 0.878185 Objective Loss 0.878185 LR 0.000200 Time 0.524654 +2025-05-15 16:28:31,593 - Epoch: [20][ 813/ 813] Overall Loss 0.878424 Objective Loss 0.878424 LR 0.000200 Time 0.523023 +2025-05-15 16:28:31,620 - --- validate (epoch=20)----------- +2025-05-15 16:28:31,621 - 3250 samples (16 per mini-batch) +2025-05-15 16:28:31,623 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 16:29:25,135 - Epoch: [20][ 100/ 204] Loss 0.989549 mAP 0.858906 +2025-05-15 16:30:16,953 - Epoch: [20][ 200/ 204] Loss 0.998209 mAP 0.858955 +2025-05-15 16:30:18,017 - Epoch: [20][ 204/ 204] Loss 1.000600 mAP 0.858954 +2025-05-15 16:30:18,045 - ==> mAP: 0.85895 Loss: 1.001 + +2025-05-15 16:30:18,049 - ==> Best [mAP: 0.869750 vloss: 0.983651 Params: 368352 on epoch: 19] +2025-05-15 16:30:18,049 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 16:30:18,095 - + +2025-05-15 16:30:18,095 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 16:31:13,349 - Epoch: [21][ 100/ 813] Overall Loss 0.861246 Objective Loss 0.861246 LR 0.000200 Time 0.552513 +2025-05-15 16:32:06,046 - Epoch: [21][ 200/ 813] Overall Loss 0.863896 Objective Loss 0.863896 LR 0.000200 Time 0.539731 +2025-05-15 16:32:59,076 - Epoch: [21][ 300/ 813] Overall Loss 0.871026 Objective Loss 0.871026 LR 0.000200 Time 0.536581 +2025-05-15 16:33:50,145 - Epoch: [21][ 400/ 813] Overall Loss 0.869359 Objective Loss 0.869359 LR 0.000200 Time 0.530105 +2025-05-15 16:34:41,021 - Epoch: [21][ 500/ 813] Overall Loss 0.871730 Objective Loss 0.871730 LR 0.000200 Time 0.525832 +2025-05-15 16:35:35,426 - Epoch: [21][ 600/ 813] Overall Loss 0.873939 Objective Loss 0.873939 LR 0.000200 Time 0.528867 +2025-05-15 16:36:28,715 - Epoch: [21][ 700/ 813] Overall Loss 0.874013 Objective Loss 0.874013 LR 0.000200 Time 0.529439 +2025-05-15 16:37:19,257 - Epoch: [21][ 800/ 813] Overall Loss 0.870728 Objective Loss 0.870728 LR 0.000200 Time 0.526434 +2025-05-15 16:37:25,235 - Epoch: [21][ 813/ 813] Overall Loss 0.871283 Objective Loss 0.871283 LR 0.000200 Time 0.525369 +2025-05-15 16:37:25,266 - --- validate (epoch=21)----------- +2025-05-15 16:37:25,266 - 3250 samples (16 per mini-batch) +2025-05-15 16:37:25,268 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 16:38:18,984 - Epoch: [21][ 100/ 204] Loss 0.914565 mAP 0.870503 +2025-05-15 16:39:11,864 - Epoch: [21][ 200/ 204] Loss 0.924681 mAP 0.870671 +2025-05-15 16:39:12,829 - Epoch: [21][ 204/ 204] Loss 0.922767 mAP 0.870680 +2025-05-15 16:39:12,855 - ==> mAP: 0.87068 Loss: 0.923 + +2025-05-15 16:39:12,860 - ==> Best [mAP: 0.870680 vloss: 0.922767 Params: 368352 on epoch: 21] +2025-05-15 16:39:12,860 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 16:39:12,907 - + +2025-05-15 16:39:12,908 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 16:40:08,147 - Epoch: [22][ 100/ 813] Overall Loss 0.842537 Objective Loss 0.842537 LR 0.000200 Time 0.552367 +2025-05-15 16:41:02,218 - Epoch: [22][ 200/ 813] Overall Loss 0.835614 Objective Loss 0.835614 LR 0.000200 Time 0.546530 +2025-05-15 16:41:54,207 - Epoch: [22][ 300/ 813] Overall Loss 0.844446 Objective Loss 0.844446 LR 0.000200 Time 0.537644 +2025-05-15 16:42:45,716 - Epoch: [22][ 400/ 813] Overall Loss 0.848525 Objective Loss 0.848525 LR 0.000200 Time 0.532000 +2025-05-15 16:43:37,846 - Epoch: [22][ 500/ 813] Overall Loss 0.845405 Objective Loss 0.845405 LR 0.000200 Time 0.529858 +2025-05-15 16:44:31,500 - Epoch: [22][ 600/ 813] Overall Loss 0.848254 Objective Loss 0.848254 LR 0.000200 Time 0.530969 +2025-05-15 16:45:23,956 - Epoch: [22][ 700/ 813] Overall Loss 0.851773 Objective Loss 0.851773 LR 0.000200 Time 0.530050 +2025-05-15 16:46:17,245 - Epoch: [22][ 800/ 813] Overall Loss 0.845637 Objective Loss 0.845637 LR 0.000200 Time 0.530403 +2025-05-15 16:46:21,025 - Epoch: [22][ 813/ 813] Overall Loss 0.844950 Objective Loss 0.844950 LR 0.000200 Time 0.526571 +2025-05-15 16:46:21,056 - --- validate (epoch=22)----------- +2025-05-15 16:46:21,057 - 3250 samples (16 per mini-batch) +2025-05-15 16:46:21,059 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 16:47:15,457 - Epoch: [22][ 100/ 204] Loss 0.917976 mAP 0.880844 +2025-05-15 16:48:09,336 - Epoch: [22][ 200/ 204] Loss 0.927920 mAP 0.860811 +2025-05-15 16:48:10,141 - Epoch: [22][ 204/ 204] Loss 0.936698 mAP 0.860816 +2025-05-15 16:48:10,171 - ==> mAP: 0.86082 Loss: 0.937 + +2025-05-15 16:48:10,176 - ==> Best [mAP: 0.870680 vloss: 0.922767 Params: 368352 on epoch: 21] +2025-05-15 16:48:10,176 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 16:48:10,222 - + +2025-05-15 16:48:10,222 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 16:49:05,377 - Epoch: [23][ 100/ 813] Overall Loss 0.827089 Objective Loss 0.827089 LR 0.000200 Time 0.551515 +2025-05-15 16:49:58,870 - Epoch: [23][ 200/ 813] Overall Loss 0.821231 Objective Loss 0.821231 LR 0.000200 Time 0.543217 +2025-05-15 16:50:51,727 - Epoch: [23][ 300/ 813] Overall Loss 0.837220 Objective Loss 0.837220 LR 0.000200 Time 0.538328 +2025-05-15 16:51:41,069 - Epoch: [23][ 400/ 813] Overall Loss 0.832069 Objective Loss 0.832069 LR 0.000200 Time 0.527098 +2025-05-15 16:52:32,374 - Epoch: [23][ 500/ 813] Overall Loss 0.836285 Objective Loss 0.836285 LR 0.000200 Time 0.524286 +2025-05-15 16:53:25,090 - Epoch: [23][ 600/ 813] Overall Loss 0.836891 Objective Loss 0.836891 LR 0.000200 Time 0.524761 +2025-05-15 16:54:18,711 - Epoch: [23][ 700/ 813] Overall Loss 0.835519 Objective Loss 0.835519 LR 0.000200 Time 0.526386 +2025-05-15 16:55:10,172 - Epoch: [23][ 800/ 813] Overall Loss 0.838240 Objective Loss 0.838240 LR 0.000200 Time 0.524902 +2025-05-15 16:55:15,959 - Epoch: [23][ 813/ 813] Overall Loss 0.839865 Objective Loss 0.839865 LR 0.000200 Time 0.523624 +2025-05-15 16:55:15,999 - --- validate (epoch=23)----------- +2025-05-15 16:55:16,000 - 3250 samples (16 per mini-batch) +2025-05-15 16:55:16,002 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 16:56:09,579 - Epoch: [23][ 100/ 204] Loss 0.889371 mAP 0.870559 +2025-05-15 16:57:01,017 - Epoch: [23][ 200/ 204] Loss 0.907466 mAP 0.870356 +2025-05-15 16:57:01,763 - Epoch: [23][ 204/ 204] Loss 0.909743 mAP 0.870339 +2025-05-15 16:57:01,791 - ==> mAP: 0.87034 Loss: 0.910 + +2025-05-15 16:57:01,795 - ==> Best [mAP: 0.870680 vloss: 0.922767 Params: 368352 on epoch: 21] +2025-05-15 16:57:01,796 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 16:57:01,841 - + +2025-05-15 16:57:01,842 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 16:57:57,376 - Epoch: [24][ 100/ 813] Overall Loss 0.823567 Objective Loss 0.823567 LR 0.000200 Time 0.555318 +2025-05-15 16:58:49,964 - Epoch: [24][ 200/ 813] Overall Loss 0.810403 Objective Loss 0.810403 LR 0.000200 Time 0.540570 +2025-05-15 16:59:43,008 - Epoch: [24][ 300/ 813] Overall Loss 0.812753 Objective Loss 0.812753 LR 0.000200 Time 0.537188 +2025-05-15 17:00:31,896 - Epoch: [24][ 400/ 813] Overall Loss 0.808816 Objective Loss 0.808816 LR 0.000200 Time 0.525081 +2025-05-15 17:01:23,118 - Epoch: [24][ 500/ 813] Overall Loss 0.808327 Objective Loss 0.808327 LR 0.000200 Time 0.522505 +2025-05-15 17:02:18,159 - Epoch: [24][ 600/ 813] Overall Loss 0.815737 Objective Loss 0.815737 LR 0.000200 Time 0.527110 +2025-05-15 17:03:09,417 - Epoch: [24][ 700/ 813] Overall Loss 0.820324 Objective Loss 0.820324 LR 0.000200 Time 0.525032 +2025-05-15 17:04:01,531 - Epoch: [24][ 800/ 813] Overall Loss 0.824300 Objective Loss 0.824300 LR 0.000200 Time 0.524534 +2025-05-15 17:04:06,860 - Epoch: [24][ 813/ 813] Overall Loss 0.825980 Objective Loss 0.825980 LR 0.000200 Time 0.522700 +2025-05-15 17:04:06,896 - --- validate (epoch=24)----------- +2025-05-15 17:04:06,897 - 3250 samples (16 per mini-batch) +2025-05-15 17:04:06,899 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 17:05:01,101 - Epoch: [24][ 100/ 204] Loss 0.921848 mAP 0.869930 +2025-05-15 17:05:54,660 - Epoch: [24][ 200/ 204] Loss 0.897045 mAP 0.869881 +2025-05-15 17:05:55,107 - Epoch: [24][ 204/ 204] Loss 0.893259 mAP 0.869903 +2025-05-15 17:05:55,137 - ==> mAP: 0.86990 Loss: 0.893 + +2025-05-15 17:05:55,141 - ==> Best [mAP: 0.870680 vloss: 0.922767 Params: 368352 on epoch: 21] +2025-05-15 17:05:55,141 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 17:05:55,185 - + +2025-05-15 17:05:55,186 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 17:06:50,671 - Epoch: [25][ 100/ 813] Overall Loss 0.730689 Objective Loss 0.730689 LR 0.000100 Time 0.554823 +2025-05-15 17:07:43,685 - Epoch: [25][ 200/ 813] Overall Loss 0.721543 Objective Loss 0.721543 LR 0.000100 Time 0.542473 +2025-05-15 17:08:34,860 - Epoch: [25][ 300/ 813] Overall Loss 0.725506 Objective Loss 0.725506 LR 0.000100 Time 0.532229 +2025-05-15 17:09:25,244 - Epoch: [25][ 400/ 813] Overall Loss 0.734864 Objective Loss 0.734864 LR 0.000100 Time 0.525119 +2025-05-15 17:10:18,511 - Epoch: [25][ 500/ 813] Overall Loss 0.736871 Objective Loss 0.736871 LR 0.000100 Time 0.526623 +2025-05-15 17:11:09,756 - Epoch: [25][ 600/ 813] Overall Loss 0.741113 Objective Loss 0.741113 LR 0.000100 Time 0.524258 +2025-05-15 17:12:03,940 - Epoch: [25][ 700/ 813] Overall Loss 0.744686 Objective Loss 0.744686 LR 0.000100 Time 0.526757 +2025-05-15 17:12:53,924 - Epoch: [25][ 800/ 813] Overall Loss 0.746367 Objective Loss 0.746367 LR 0.000100 Time 0.523391 +2025-05-15 17:12:59,872 - Epoch: [25][ 813/ 813] Overall Loss 0.746590 Objective Loss 0.746590 LR 0.000100 Time 0.522333 +2025-05-15 17:12:59,910 - --- validate (epoch=25)----------- +2025-05-15 17:12:59,910 - 3250 samples (16 per mini-batch) +2025-05-15 17:12:59,913 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 17:13:55,338 - Epoch: [25][ 100/ 204] Loss 0.871289 mAP 0.850301 +2025-05-15 17:14:46,839 - Epoch: [25][ 200/ 204] Loss 0.865403 mAP 0.860534 +2025-05-15 17:14:47,781 - Epoch: [25][ 204/ 204] Loss 0.866750 mAP 0.860539 +2025-05-15 17:14:47,812 - ==> mAP: 0.86054 Loss: 0.867 + +2025-05-15 17:14:47,816 - ==> Best [mAP: 0.870680 vloss: 0.922767 Params: 368352 on epoch: 21] +2025-05-15 17:14:47,816 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 17:14:47,860 - + +2025-05-15 17:14:47,860 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 17:15:43,815 - Epoch: [26][ 100/ 813] Overall Loss 0.715966 Objective Loss 0.715966 LR 0.000100 Time 0.559519 +2025-05-15 17:16:36,167 - Epoch: [26][ 200/ 813] Overall Loss 0.700430 Objective Loss 0.700430 LR 0.000100 Time 0.541513 +2025-05-15 17:17:29,011 - Epoch: [26][ 300/ 813] Overall Loss 0.714601 Objective Loss 0.714601 LR 0.000100 Time 0.537148 +2025-05-15 17:18:18,779 - Epoch: [26][ 400/ 813] Overall Loss 0.725395 Objective Loss 0.725395 LR 0.000100 Time 0.527253 +2025-05-15 17:19:11,165 - Epoch: [26][ 500/ 813] Overall Loss 0.729729 Objective Loss 0.729729 LR 0.000100 Time 0.526563 +2025-05-15 17:20:03,638 - Epoch: [26][ 600/ 813] Overall Loss 0.730209 Objective Loss 0.730209 LR 0.000100 Time 0.526256 +2025-05-15 17:20:56,763 - Epoch: [26][ 700/ 813] Overall Loss 0.731533 Objective Loss 0.731533 LR 0.000100 Time 0.526967 +2025-05-15 17:21:49,119 - Epoch: [26][ 800/ 813] Overall Loss 0.731086 Objective Loss 0.731086 LR 0.000100 Time 0.526539 +2025-05-15 17:21:53,681 - Epoch: [26][ 813/ 813] Overall Loss 0.732255 Objective Loss 0.732255 LR 0.000100 Time 0.523731 +2025-05-15 17:21:53,714 - --- validate (epoch=26)----------- +2025-05-15 17:21:53,715 - 3250 samples (16 per mini-batch) +2025-05-15 17:21:53,717 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 17:22:46,865 - Epoch: [26][ 100/ 204] Loss 0.840995 mAP 0.869537 +2025-05-15 17:23:39,753 - Epoch: [26][ 200/ 204] Loss 0.831644 mAP 0.879560 +2025-05-15 17:23:40,422 - Epoch: [26][ 204/ 204] Loss 0.836037 mAP 0.879574 +2025-05-15 17:23:40,447 - ==> mAP: 0.87957 Loss: 0.836 + +2025-05-15 17:23:40,451 - ==> Best [mAP: 0.879574 vloss: 0.836037 Params: 368352 on epoch: 26] +2025-05-15 17:23:40,451 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 17:23:40,499 - + +2025-05-15 17:23:40,499 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 17:24:35,536 - Epoch: [27][ 100/ 813] Overall Loss 0.709832 Objective Loss 0.709832 LR 0.000100 Time 0.550335 +2025-05-15 17:25:28,212 - Epoch: [27][ 200/ 813] Overall Loss 0.699366 Objective Loss 0.699366 LR 0.000100 Time 0.538539 +2025-05-15 17:26:21,701 - Epoch: [27][ 300/ 813] Overall Loss 0.701001 Objective Loss 0.701001 LR 0.000100 Time 0.537319 +2025-05-15 17:27:13,245 - Epoch: [27][ 400/ 813] Overall Loss 0.706716 Objective Loss 0.706716 LR 0.000100 Time 0.531845 +2025-05-15 17:28:04,246 - Epoch: [27][ 500/ 813] Overall Loss 0.711540 Objective Loss 0.711540 LR 0.000100 Time 0.527475 +2025-05-15 17:28:57,417 - Epoch: [27][ 600/ 813] Overall Loss 0.716924 Objective Loss 0.716924 LR 0.000100 Time 0.528178 +2025-05-15 17:29:50,589 - Epoch: [27][ 700/ 813] Overall Loss 0.721614 Objective Loss 0.721614 LR 0.000100 Time 0.528682 +2025-05-15 17:30:42,301 - Epoch: [27][ 800/ 813] Overall Loss 0.722913 Objective Loss 0.722913 LR 0.000100 Time 0.527235 +2025-05-15 17:30:47,617 - Epoch: [27][ 813/ 813] Overall Loss 0.725980 Objective Loss 0.725980 LR 0.000100 Time 0.525342 +2025-05-15 17:30:47,647 - --- validate (epoch=27)----------- +2025-05-15 17:30:47,647 - 3250 samples (16 per mini-batch) +2025-05-15 17:30:47,649 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 17:31:40,544 - Epoch: [27][ 100/ 204] Loss 0.854426 mAP 0.870226 +2025-05-15 17:32:33,730 - Epoch: [27][ 200/ 204] Loss 0.854267 mAP 0.870175 +2025-05-15 17:32:34,707 - Epoch: [27][ 204/ 204] Loss 0.850674 mAP 0.870203 +2025-05-15 17:32:34,732 - ==> mAP: 0.87020 Loss: 0.851 + +2025-05-15 17:32:34,736 - ==> Best [mAP: 0.879574 vloss: 0.836037 Params: 368352 on epoch: 26] +2025-05-15 17:32:34,736 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 17:32:34,780 - + +2025-05-15 17:32:34,781 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 17:33:28,589 - Epoch: [28][ 100/ 813] Overall Loss 0.718772 Objective Loss 0.718772 LR 0.000100 Time 0.538054 +2025-05-15 17:34:21,967 - Epoch: [28][ 200/ 813] Overall Loss 0.693455 Objective Loss 0.693455 LR 0.000100 Time 0.535908 +2025-05-15 17:35:14,466 - Epoch: [28][ 300/ 813] Overall Loss 0.697921 Objective Loss 0.697921 LR 0.000100 Time 0.532265 +2025-05-15 17:36:03,267 - Epoch: [28][ 400/ 813] Overall Loss 0.699143 Objective Loss 0.699143 LR 0.000100 Time 0.521187 +2025-05-15 17:36:55,794 - Epoch: [28][ 500/ 813] Overall Loss 0.701954 Objective Loss 0.701954 LR 0.000100 Time 0.522000 +2025-05-15 17:37:49,210 - Epoch: [28][ 600/ 813] Overall Loss 0.708387 Objective Loss 0.708387 LR 0.000100 Time 0.524024 +2025-05-15 17:38:40,304 - Epoch: [28][ 700/ 813] Overall Loss 0.713940 Objective Loss 0.713940 LR 0.000100 Time 0.522152 +2025-05-15 17:39:31,331 - Epoch: [28][ 800/ 813] Overall Loss 0.712009 Objective Loss 0.712009 LR 0.000100 Time 0.520665 +2025-05-15 17:39:37,404 - Epoch: [28][ 813/ 813] Overall Loss 0.712999 Objective Loss 0.712999 LR 0.000100 Time 0.519809 +2025-05-15 17:39:37,433 - --- validate (epoch=28)----------- +2025-05-15 17:39:37,434 - 3250 samples (16 per mini-batch) +2025-05-15 17:39:37,436 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 17:40:30,129 - Epoch: [28][ 100/ 204] Loss 0.816434 mAP 0.880540 +2025-05-15 17:41:23,161 - Epoch: [28][ 200/ 204] Loss 0.824122 mAP 0.880057 +2025-05-15 17:41:24,204 - Epoch: [28][ 204/ 204] Loss 0.829517 mAP 0.880093 +2025-05-15 17:41:24,236 - ==> mAP: 0.88009 Loss: 0.830 + +2025-05-15 17:41:24,240 - ==> Best [mAP: 0.880093 vloss: 0.829517 Params: 368352 on epoch: 28] +2025-05-15 17:41:24,241 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 17:41:24,289 - + +2025-05-15 17:41:24,289 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 17:42:19,675 - Epoch: [29][ 100/ 813] Overall Loss 0.660132 Objective Loss 0.660132 LR 0.000100 Time 0.553749 +2025-05-15 17:43:13,131 - Epoch: [29][ 200/ 813] Overall Loss 0.681434 Objective Loss 0.681434 LR 0.000100 Time 0.544145 +2025-05-15 17:44:05,603 - Epoch: [29][ 300/ 813] Overall Loss 0.687823 Objective Loss 0.687823 LR 0.000100 Time 0.537664 +2025-05-15 17:44:54,979 - Epoch: [29][ 400/ 813] Overall Loss 0.699263 Objective Loss 0.699263 LR 0.000100 Time 0.526685 +2025-05-15 17:45:47,091 - Epoch: [29][ 500/ 813] Overall Loss 0.701587 Objective Loss 0.701587 LR 0.000100 Time 0.525569 +2025-05-15 17:46:39,235 - Epoch: [29][ 600/ 813] Overall Loss 0.706192 Objective Loss 0.706192 LR 0.000100 Time 0.524878 +2025-05-15 17:47:31,577 - Epoch: [29][ 700/ 813] Overall Loss 0.715169 Objective Loss 0.715169 LR 0.000100 Time 0.524662 +2025-05-15 17:48:24,215 - Epoch: [29][ 800/ 813] Overall Loss 0.713904 Objective Loss 0.713904 LR 0.000100 Time 0.524874 +2025-05-15 17:48:30,143 - Epoch: [29][ 813/ 813] Overall Loss 0.713807 Objective Loss 0.713807 LR 0.000100 Time 0.523773 +2025-05-15 17:48:30,182 - --- validate (epoch=29)----------- +2025-05-15 17:48:30,183 - 3250 samples (16 per mini-batch) +2025-05-15 17:48:30,185 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 17:49:25,045 - Epoch: [29][ 100/ 204] Loss 0.818095 mAP 0.890125 +2025-05-15 17:50:18,372 - Epoch: [29][ 200/ 204] Loss 0.816715 mAP 0.890202 +2025-05-15 17:50:18,842 - Epoch: [29][ 204/ 204] Loss 0.814781 mAP 0.890220 +2025-05-15 17:50:18,874 - ==> mAP: 0.89022 Loss: 0.815 + +2025-05-15 17:50:18,879 - ==> Best [mAP: 0.890220 vloss: 0.814781 Params: 368352 on epoch: 29] +2025-05-15 17:50:18,879 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 17:50:18,929 - + +2025-05-15 17:50:18,929 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 17:51:14,022 - Epoch: [30][ 100/ 813] Overall Loss 0.674074 Objective Loss 0.674074 LR 0.000100 Time 0.550907 +2025-05-15 17:52:08,056 - Epoch: [30][ 200/ 813] Overall Loss 0.684901 Objective Loss 0.684901 LR 0.000100 Time 0.545612 +2025-05-15 17:53:00,316 - Epoch: [30][ 300/ 813] Overall Loss 0.679468 Objective Loss 0.679468 LR 0.000100 Time 0.537920 +2025-05-15 17:53:51,320 - Epoch: [30][ 400/ 813] Overall Loss 0.688896 Objective Loss 0.688896 LR 0.000100 Time 0.530946 +2025-05-15 17:54:43,470 - Epoch: [30][ 500/ 813] Overall Loss 0.691870 Objective Loss 0.691870 LR 0.000100 Time 0.529055 +2025-05-15 17:55:37,150 - Epoch: [30][ 600/ 813] Overall Loss 0.692308 Objective Loss 0.692308 LR 0.000100 Time 0.530341 +2025-05-15 17:56:29,516 - Epoch: [30][ 700/ 813] Overall Loss 0.695742 Objective Loss 0.695742 LR 0.000100 Time 0.529385 +2025-05-15 17:57:23,251 - Epoch: [30][ 800/ 813] Overall Loss 0.694769 Objective Loss 0.694769 LR 0.000100 Time 0.530379 +2025-05-15 17:57:27,322 - Epoch: [30][ 813/ 813] Overall Loss 0.695203 Objective Loss 0.695203 LR 0.000100 Time 0.526905 +2025-05-15 17:57:27,360 - --- validate (epoch=30)----------- +2025-05-15 17:57:27,361 - 3250 samples (16 per mini-batch) +2025-05-15 17:57:27,363 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 17:58:21,018 - Epoch: [30][ 100/ 204] Loss 0.776403 mAP 0.890564 +2025-05-15 17:59:14,313 - Epoch: [30][ 200/ 204] Loss 0.788728 mAP 0.890568 +2025-05-15 17:59:14,864 - Epoch: [30][ 204/ 204] Loss 0.786327 mAP 0.890576 +2025-05-15 17:59:14,897 - ==> mAP: 0.89058 Loss: 0.786 + +2025-05-15 17:59:14,901 - ==> Best [mAP: 0.890576 vloss: 0.786327 Params: 368352 on epoch: 30] +2025-05-15 17:59:14,901 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 17:59:14,949 - + +2025-05-15 17:59:14,950 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 18:00:10,026 - Epoch: [31][ 100/ 813] Overall Loss 0.638762 Objective Loss 0.638762 LR 0.000100 Time 0.550733 +2025-05-15 18:01:02,237 - Epoch: [31][ 200/ 813] Overall Loss 0.647786 Objective Loss 0.647786 LR 0.000100 Time 0.536360 +2025-05-15 18:01:55,764 - Epoch: [31][ 300/ 813] Overall Loss 0.659914 Objective Loss 0.659914 LR 0.000100 Time 0.535991 +2025-05-15 18:02:46,303 - Epoch: [31][ 400/ 813] Overall Loss 0.671655 Objective Loss 0.671655 LR 0.000100 Time 0.528337 +2025-05-15 18:03:38,761 - Epoch: [31][ 500/ 813] Overall Loss 0.670720 Objective Loss 0.670720 LR 0.000100 Time 0.527580 +2025-05-15 18:04:30,814 - Epoch: [31][ 600/ 813] Overall Loss 0.675787 Objective Loss 0.675787 LR 0.000100 Time 0.526401 +2025-05-15 18:05:22,881 - Epoch: [31][ 700/ 813] Overall Loss 0.682629 Objective Loss 0.682629 LR 0.000100 Time 0.525580 +2025-05-15 18:06:14,881 - Epoch: [31][ 800/ 813] Overall Loss 0.683690 Objective Loss 0.683690 LR 0.000100 Time 0.524878 +2025-05-15 18:06:20,845 - Epoch: [31][ 813/ 813] Overall Loss 0.682515 Objective Loss 0.682515 LR 0.000100 Time 0.523820 +2025-05-15 18:06:20,878 - --- validate (epoch=31)----------- +2025-05-15 18:06:20,878 - 3250 samples (16 per mini-batch) +2025-05-15 18:06:20,881 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 18:07:17,046 - Epoch: [31][ 100/ 204] Loss 0.823536 mAP 0.869476 +2025-05-15 18:08:08,618 - Epoch: [31][ 200/ 204] Loss 0.820887 mAP 0.869944 +2025-05-15 18:08:09,623 - Epoch: [31][ 204/ 204] Loss 0.825271 mAP 0.869964 +2025-05-15 18:08:09,657 - ==> mAP: 0.86996 Loss: 0.825 + +2025-05-15 18:08:09,661 - ==> Best [mAP: 0.890576 vloss: 0.786327 Params: 368352 on epoch: 30] +2025-05-15 18:08:09,661 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 18:08:09,707 - + +2025-05-15 18:08:09,707 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 18:09:03,939 - Epoch: [32][ 100/ 813] Overall Loss 0.639121 Objective Loss 0.639121 LR 0.000100 Time 0.542297 +2025-05-15 18:09:56,484 - Epoch: [32][ 200/ 813] Overall Loss 0.649171 Objective Loss 0.649171 LR 0.000100 Time 0.533861 +2025-05-15 18:10:48,805 - Epoch: [32][ 300/ 813] Overall Loss 0.656809 Objective Loss 0.656809 LR 0.000100 Time 0.530302 +2025-05-15 18:11:39,244 - Epoch: [32][ 400/ 813] Overall Loss 0.664380 Objective Loss 0.664380 LR 0.000100 Time 0.523822 +2025-05-15 18:12:32,161 - Epoch: [32][ 500/ 813] Overall Loss 0.670969 Objective Loss 0.670969 LR 0.000100 Time 0.524887 +2025-05-15 18:13:24,771 - Epoch: [32][ 600/ 813] Overall Loss 0.677921 Objective Loss 0.677921 LR 0.000100 Time 0.525080 +2025-05-15 18:14:17,681 - Epoch: [32][ 700/ 813] Overall Loss 0.687025 Objective Loss 0.687025 LR 0.000100 Time 0.525647 +2025-05-15 18:15:12,638 - Epoch: [32][ 800/ 813] Overall Loss 0.687379 Objective Loss 0.687379 LR 0.000100 Time 0.528635 +2025-05-15 18:15:17,894 - Epoch: [32][ 813/ 813] Overall Loss 0.687145 Objective Loss 0.687145 LR 0.000100 Time 0.526647 +2025-05-15 18:15:17,923 - --- validate (epoch=32)----------- +2025-05-15 18:15:17,924 - 3250 samples (16 per mini-batch) +2025-05-15 18:15:17,925 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 18:16:12,288 - Epoch: [32][ 100/ 204] Loss 0.841048 mAP 0.895955 +2025-05-15 18:17:04,380 - Epoch: [32][ 200/ 204] Loss 0.818804 mAP 0.888071 +2025-05-15 18:17:05,732 - Epoch: [32][ 204/ 204] Loss 0.816770 mAP 0.888014 +2025-05-15 18:17:05,759 - ==> mAP: 0.88801 Loss: 0.817 + +2025-05-15 18:17:05,764 - ==> Best [mAP: 0.890576 vloss: 0.786327 Params: 368352 on epoch: 30] +2025-05-15 18:17:05,764 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 18:17:05,808 - + +2025-05-15 18:17:05,808 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 18:18:01,193 - Epoch: [33][ 100/ 813] Overall Loss 0.631592 Objective Loss 0.631592 LR 0.000100 Time 0.553821 +2025-05-15 18:18:55,395 - Epoch: [33][ 200/ 813] Overall Loss 0.618409 Objective Loss 0.618409 LR 0.000100 Time 0.547869 +2025-05-15 18:19:47,802 - Epoch: [33][ 300/ 813] Overall Loss 0.628562 Objective Loss 0.628562 LR 0.000100 Time 0.539920 +2025-05-15 18:20:38,244 - Epoch: [33][ 400/ 813] Overall Loss 0.643824 Objective Loss 0.643824 LR 0.000100 Time 0.531040 +2025-05-15 18:21:29,321 - Epoch: [33][ 500/ 813] Overall Loss 0.652115 Objective Loss 0.652115 LR 0.000100 Time 0.526977 +2025-05-15 18:22:23,456 - Epoch: [33][ 600/ 813] Overall Loss 0.657380 Objective Loss 0.657380 LR 0.000100 Time 0.529369 +2025-05-15 18:23:14,836 - Epoch: [33][ 700/ 813] Overall Loss 0.664123 Objective Loss 0.664123 LR 0.000100 Time 0.527143 +2025-05-15 18:24:06,440 - Epoch: [33][ 800/ 813] Overall Loss 0.664705 Objective Loss 0.664705 LR 0.000100 Time 0.525753 +2025-05-15 18:24:12,449 - Epoch: [33][ 813/ 813] Overall Loss 0.664435 Objective Loss 0.664435 LR 0.000100 Time 0.524737 +2025-05-15 18:24:12,481 - --- validate (epoch=33)----------- +2025-05-15 18:24:12,482 - 3250 samples (16 per mini-batch) +2025-05-15 18:24:12,484 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 18:25:05,826 - Epoch: [33][ 100/ 204] Loss 0.839697 mAP 0.889634 +2025-05-15 18:25:59,807 - Epoch: [33][ 200/ 204] Loss 0.818184 mAP 0.889731 +2025-05-15 18:26:00,373 - Epoch: [33][ 204/ 204] Loss 0.817732 mAP 0.889768 +2025-05-15 18:26:00,401 - ==> mAP: 0.88977 Loss: 0.818 + +2025-05-15 18:26:00,406 - ==> Best [mAP: 0.890576 vloss: 0.786327 Params: 368352 on epoch: 30] +2025-05-15 18:26:00,406 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 18:26:00,452 - + +2025-05-15 18:26:00,452 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 18:26:54,095 - Epoch: [34][ 100/ 813] Overall Loss 0.602598 Objective Loss 0.602598 LR 0.000100 Time 0.536405 +2025-05-15 18:27:46,197 - Epoch: [34][ 200/ 813] Overall Loss 0.605583 Objective Loss 0.605583 LR 0.000100 Time 0.528657 +2025-05-15 18:28:39,064 - Epoch: [34][ 300/ 813] Overall Loss 0.628832 Objective Loss 0.628832 LR 0.000100 Time 0.528657 +2025-05-15 18:29:30,280 - Epoch: [34][ 400/ 813] Overall Loss 0.641480 Objective Loss 0.641480 LR 0.000100 Time 0.524530 +2025-05-15 18:30:22,942 - Epoch: [34][ 500/ 813] Overall Loss 0.647700 Objective Loss 0.647700 LR 0.000100 Time 0.524943 +2025-05-15 18:31:16,207 - Epoch: [34][ 600/ 813] Overall Loss 0.650460 Objective Loss 0.650460 LR 0.000100 Time 0.526226 +2025-05-15 18:32:07,644 - Epoch: [34][ 700/ 813] Overall Loss 0.651143 Objective Loss 0.651143 LR 0.000100 Time 0.524530 +2025-05-15 18:33:00,699 - Epoch: [34][ 800/ 813] Overall Loss 0.651702 Objective Loss 0.651702 LR 0.000100 Time 0.525280 +2025-05-15 18:33:05,706 - Epoch: [34][ 813/ 813] Overall Loss 0.652144 Objective Loss 0.652144 LR 0.000100 Time 0.522997 +2025-05-15 18:33:05,740 - --- validate (epoch=34)----------- +2025-05-15 18:33:05,741 - 3250 samples (16 per mini-batch) +2025-05-15 18:33:05,743 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 18:33:59,613 - Epoch: [34][ 100/ 204] Loss 0.818621 mAP 0.890172 +2025-05-15 18:34:52,995 - Epoch: [34][ 200/ 204] Loss 0.795242 mAP 0.880500 +2025-05-15 18:34:54,057 - Epoch: [34][ 204/ 204] Loss 0.797776 mAP 0.880433 +2025-05-15 18:34:54,080 - ==> mAP: 0.88043 Loss: 0.798 + +2025-05-15 18:34:54,084 - ==> Best [mAP: 0.890576 vloss: 0.786327 Params: 368352 on epoch: 30] +2025-05-15 18:34:54,084 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 18:34:54,128 - + +2025-05-15 18:34:54,129 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 18:35:48,605 - Epoch: [35][ 100/ 813] Overall Loss 0.604752 Objective Loss 0.604752 LR 0.000100 Time 0.544736 +2025-05-15 18:36:42,017 - Epoch: [35][ 200/ 813] Overall Loss 0.647174 Objective Loss 0.647174 LR 0.000100 Time 0.539417 +2025-05-15 18:37:34,357 - Epoch: [35][ 300/ 813] Overall Loss 0.649656 Objective Loss 0.649656 LR 0.000100 Time 0.534074 +2025-05-15 18:38:24,705 - Epoch: [35][ 400/ 813] Overall Loss 0.664996 Objective Loss 0.664996 LR 0.000100 Time 0.526421 +2025-05-15 18:39:17,311 - Epoch: [35][ 500/ 813] Overall Loss 0.661614 Objective Loss 0.661614 LR 0.000100 Time 0.526345 +2025-05-15 18:40:09,879 - Epoch: [35][ 600/ 813] Overall Loss 0.666629 Objective Loss 0.666629 LR 0.000100 Time 0.526231 +2025-05-15 18:41:03,644 - Epoch: [35][ 700/ 813] Overall Loss 0.668810 Objective Loss 0.668810 LR 0.000100 Time 0.527860 +2025-05-15 18:41:55,064 - Epoch: [35][ 800/ 813] Overall Loss 0.671632 Objective Loss 0.671632 LR 0.000100 Time 0.526150 +2025-05-15 18:42:00,510 - Epoch: [35][ 813/ 813] Overall Loss 0.672013 Objective Loss 0.672013 LR 0.000100 Time 0.524425 +2025-05-15 18:42:00,540 - --- validate (epoch=35)----------- +2025-05-15 18:42:00,541 - 3250 samples (16 per mini-batch) +2025-05-15 18:42:00,543 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 18:42:56,233 - Epoch: [35][ 100/ 204] Loss 0.767724 mAP 0.898754 +2025-05-15 18:43:47,438 - Epoch: [35][ 200/ 204] Loss 0.768820 mAP 0.898815 +2025-05-15 18:43:48,798 - Epoch: [35][ 204/ 204] Loss 0.765439 mAP 0.898834 +2025-05-15 18:43:48,829 - ==> mAP: 0.89883 Loss: 0.765 + +2025-05-15 18:43:48,834 - ==> Best [mAP: 0.898834 vloss: 0.765439 Params: 368352 on epoch: 35] +2025-05-15 18:43:48,834 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 18:43:48,884 - + +2025-05-15 18:43:48,884 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 18:44:42,932 - Epoch: [36][ 100/ 813] Overall Loss 0.619543 Objective Loss 0.619543 LR 0.000100 Time 0.540446 +2025-05-15 18:45:35,394 - Epoch: [36][ 200/ 813] Overall Loss 0.612524 Objective Loss 0.612524 LR 0.000100 Time 0.532526 +2025-05-15 18:46:28,733 - Epoch: [36][ 300/ 813] Overall Loss 0.631302 Objective Loss 0.631302 LR 0.000100 Time 0.532810 +2025-05-15 18:47:17,887 - Epoch: [36][ 400/ 813] Overall Loss 0.639768 Objective Loss 0.639768 LR 0.000100 Time 0.522477 +2025-05-15 18:48:10,181 - Epoch: [36][ 500/ 813] Overall Loss 0.641235 Objective Loss 0.641235 LR 0.000100 Time 0.522568 +2025-05-15 18:49:04,131 - Epoch: [36][ 600/ 813] Overall Loss 0.644634 Objective Loss 0.644634 LR 0.000100 Time 0.525381 +2025-05-15 18:49:56,753 - Epoch: [36][ 700/ 813] Overall Loss 0.645021 Objective Loss 0.645021 LR 0.000100 Time 0.525498 +2025-05-15 18:50:48,818 - Epoch: [36][ 800/ 813] Overall Loss 0.645052 Objective Loss 0.645052 LR 0.000100 Time 0.524889 +2025-05-15 18:50:53,591 - Epoch: [36][ 813/ 813] Overall Loss 0.644282 Objective Loss 0.644282 LR 0.000100 Time 0.522367 +2025-05-15 18:50:53,617 - --- validate (epoch=36)----------- +2025-05-15 18:50:53,618 - 3250 samples (16 per mini-batch) +2025-05-15 18:50:53,620 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 18:51:48,185 - Epoch: [36][ 100/ 204] Loss 0.760258 mAP 0.899270 +2025-05-15 18:52:41,201 - Epoch: [36][ 200/ 204] Loss 0.787890 mAP 0.889762 +2025-05-15 18:52:42,166 - Epoch: [36][ 204/ 204] Loss 0.794829 mAP 0.889728 +2025-05-15 18:52:42,191 - ==> mAP: 0.88973 Loss: 0.795 + +2025-05-15 18:52:42,196 - ==> Best [mAP: 0.898834 vloss: 0.765439 Params: 368352 on epoch: 35] +2025-05-15 18:52:42,196 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 18:52:42,240 - + +2025-05-15 18:52:42,240 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 18:53:35,820 - Epoch: [37][ 100/ 813] Overall Loss 0.628248 Objective Loss 0.628248 LR 0.000100 Time 0.535768 +2025-05-15 18:54:30,623 - Epoch: [37][ 200/ 813] Overall Loss 0.623444 Objective Loss 0.623444 LR 0.000100 Time 0.541894 +2025-05-15 18:55:23,877 - Epoch: [37][ 300/ 813] Overall Loss 0.625152 Objective Loss 0.625152 LR 0.000100 Time 0.538769 +2025-05-15 18:56:14,350 - Epoch: [37][ 400/ 813] Overall Loss 0.622593 Objective Loss 0.622593 LR 0.000100 Time 0.530255 +2025-05-15 18:57:07,123 - Epoch: [37][ 500/ 813] Overall Loss 0.628695 Objective Loss 0.628695 LR 0.000100 Time 0.529746 +2025-05-15 18:58:01,323 - Epoch: [37][ 600/ 813] Overall Loss 0.637682 Objective Loss 0.637682 LR 0.000100 Time 0.531787 +2025-05-15 18:58:51,333 - Epoch: [37][ 700/ 813] Overall Loss 0.644815 Objective Loss 0.644815 LR 0.000100 Time 0.527257 +2025-05-15 18:59:45,233 - Epoch: [37][ 800/ 813] Overall Loss 0.643307 Objective Loss 0.643307 LR 0.000100 Time 0.528723 +2025-05-15 18:59:49,566 - Epoch: [37][ 813/ 813] Overall Loss 0.643379 Objective Loss 0.643379 LR 0.000100 Time 0.525597 +2025-05-15 18:59:49,597 - --- validate (epoch=37)----------- +2025-05-15 18:59:49,597 - 3250 samples (16 per mini-batch) +2025-05-15 18:59:49,599 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 19:00:43,114 - Epoch: [37][ 100/ 204] Loss 0.790425 mAP 0.879798 +2025-05-15 19:01:36,434 - Epoch: [37][ 200/ 204] Loss 0.777647 mAP 0.879595 +2025-05-15 19:01:37,107 - Epoch: [37][ 204/ 204] Loss 0.778064 mAP 0.879595 +2025-05-15 19:01:37,136 - ==> mAP: 0.87960 Loss: 0.778 + +2025-05-15 19:01:37,141 - ==> Best [mAP: 0.898834 vloss: 0.765439 Params: 368352 on epoch: 35] +2025-05-15 19:01:37,141 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 19:01:37,186 - + +2025-05-15 19:01:37,186 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 19:02:31,807 - Epoch: [38][ 100/ 813] Overall Loss 0.603116 Objective Loss 0.603116 LR 0.000100 Time 0.546101 +2025-05-15 19:03:25,205 - Epoch: [38][ 200/ 813] Overall Loss 0.621495 Objective Loss 0.621495 LR 0.000100 Time 0.540033 +2025-05-15 19:04:16,790 - Epoch: [38][ 300/ 813] Overall Loss 0.617197 Objective Loss 0.617197 LR 0.000100 Time 0.531965 +2025-05-15 19:05:07,725 - Epoch: [38][ 400/ 813] Overall Loss 0.628915 Objective Loss 0.628915 LR 0.000100 Time 0.526307 +2025-05-15 19:06:00,219 - Epoch: [38][ 500/ 813] Overall Loss 0.632174 Objective Loss 0.632174 LR 0.000100 Time 0.526016 +2025-05-15 19:06:53,267 - Epoch: [38][ 600/ 813] Overall Loss 0.636383 Objective Loss 0.636383 LR 0.000100 Time 0.526757 +2025-05-15 19:07:44,644 - Epoch: [38][ 700/ 813] Overall Loss 0.642108 Objective Loss 0.642108 LR 0.000100 Time 0.524887 +2025-05-15 19:08:36,357 - Epoch: [38][ 800/ 813] Overall Loss 0.643670 Objective Loss 0.643670 LR 0.000100 Time 0.523915 +2025-05-15 19:08:42,389 - Epoch: [38][ 813/ 813] Overall Loss 0.643543 Objective Loss 0.643543 LR 0.000100 Time 0.522957 +2025-05-15 19:08:42,421 - --- validate (epoch=38)----------- +2025-05-15 19:08:42,422 - 3250 samples (16 per mini-batch) +2025-05-15 19:08:42,424 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 19:09:37,860 - Epoch: [38][ 100/ 204] Loss 0.735002 mAP 0.899823 +2025-05-15 19:10:29,686 - Epoch: [38][ 200/ 204] Loss 0.758872 mAP 0.889781 +2025-05-15 19:10:30,216 - Epoch: [38][ 204/ 204] Loss 0.760502 mAP 0.889767 +2025-05-15 19:10:30,244 - ==> mAP: 0.88977 Loss: 0.761 + +2025-05-15 19:10:30,248 - ==> Best [mAP: 0.898834 vloss: 0.765439 Params: 368352 on epoch: 35] +2025-05-15 19:10:30,249 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 19:10:30,294 - + +2025-05-15 19:10:30,294 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 19:11:24,868 - Epoch: [39][ 100/ 813] Overall Loss 0.586042 Objective Loss 0.586042 LR 0.000100 Time 0.545707 +2025-05-15 19:12:17,650 - Epoch: [39][ 200/ 813] Overall Loss 0.582832 Objective Loss 0.582832 LR 0.000100 Time 0.536755 +2025-05-15 19:13:10,468 - Epoch: [39][ 300/ 813] Overall Loss 0.598415 Objective Loss 0.598415 LR 0.000100 Time 0.533889 +2025-05-15 19:14:02,230 - Epoch: [39][ 400/ 813] Overall Loss 0.603044 Objective Loss 0.603044 LR 0.000100 Time 0.529810 +2025-05-15 19:14:54,235 - Epoch: [39][ 500/ 813] Overall Loss 0.610579 Objective Loss 0.610579 LR 0.000100 Time 0.527856 +2025-05-15 19:15:46,519 - Epoch: [39][ 600/ 813] Overall Loss 0.614747 Objective Loss 0.614747 LR 0.000100 Time 0.527016 +2025-05-15 19:16:38,605 - Epoch: [39][ 700/ 813] Overall Loss 0.626916 Objective Loss 0.626916 LR 0.000100 Time 0.526134 +2025-05-15 19:17:34,272 - Epoch: [39][ 800/ 813] Overall Loss 0.632361 Objective Loss 0.632361 LR 0.000100 Time 0.529948 +2025-05-15 19:17:38,690 - Epoch: [39][ 813/ 813] Overall Loss 0.632018 Objective Loss 0.632018 LR 0.000100 Time 0.526908 +2025-05-15 19:17:38,720 - --- validate (epoch=39)----------- +2025-05-15 19:17:38,721 - 3250 samples (16 per mini-batch) +2025-05-15 19:17:38,723 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 19:18:33,080 - Epoch: [39][ 100/ 204] Loss 0.781065 mAP 0.868843 +2025-05-15 19:19:25,057 - Epoch: [39][ 200/ 204] Loss 0.771016 mAP 0.869722 +2025-05-15 19:19:25,499 - Epoch: [39][ 204/ 204] Loss 0.776117 mAP 0.869662 +2025-05-15 19:19:25,529 - ==> mAP: 0.86966 Loss: 0.776 + +2025-05-15 19:19:25,534 - ==> Best [mAP: 0.898834 vloss: 0.765439 Params: 368352 on epoch: 35] +2025-05-15 19:19:25,534 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 19:19:25,579 - + +2025-05-15 19:19:25,579 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 19:20:21,080 - Epoch: [40][ 100/ 813] Overall Loss 0.610463 Objective Loss 0.610463 LR 0.000100 Time 0.554983 +2025-05-15 19:21:14,091 - Epoch: [40][ 200/ 813] Overall Loss 0.602933 Objective Loss 0.602933 LR 0.000100 Time 0.542528 +2025-05-15 19:22:05,885 - Epoch: [40][ 300/ 813] Overall Loss 0.606299 Objective Loss 0.606299 LR 0.000100 Time 0.534328 +2025-05-15 19:22:55,098 - Epoch: [40][ 400/ 813] Overall Loss 0.604996 Objective Loss 0.604996 LR 0.000100 Time 0.523775 +2025-05-15 19:23:47,610 - Epoch: [40][ 500/ 813] Overall Loss 0.611200 Objective Loss 0.611200 LR 0.000100 Time 0.524040 +2025-05-15 19:24:40,458 - Epoch: [40][ 600/ 813] Overall Loss 0.616873 Objective Loss 0.616873 LR 0.000100 Time 0.524776 +2025-05-15 19:25:32,329 - Epoch: [40][ 700/ 813] Overall Loss 0.623228 Objective Loss 0.623228 LR 0.000100 Time 0.523908 +2025-05-15 19:26:24,870 - Epoch: [40][ 800/ 813] Overall Loss 0.628720 Objective Loss 0.628720 LR 0.000100 Time 0.524093 +2025-05-15 19:26:29,867 - Epoch: [40][ 813/ 813] Overall Loss 0.630212 Objective Loss 0.630212 LR 0.000100 Time 0.521859 +2025-05-15 19:26:29,899 - --- validate (epoch=40)----------- +2025-05-15 19:26:29,899 - 3250 samples (16 per mini-batch) +2025-05-15 19:26:29,901 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 19:27:25,028 - Epoch: [40][ 100/ 204] Loss 0.810264 mAP 0.880884 +2025-05-15 19:28:17,134 - Epoch: [40][ 200/ 204] Loss 0.790667 mAP 0.880850 +2025-05-15 19:28:18,055 - Epoch: [40][ 204/ 204] Loss 0.789921 mAP 0.880834 +2025-05-15 19:28:18,086 - ==> mAP: 0.88083 Loss: 0.790 + +2025-05-15 19:28:18,091 - ==> Best [mAP: 0.898834 vloss: 0.765439 Params: 368352 on epoch: 35] +2025-05-15 19:28:18,092 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 19:28:18,136 - + +2025-05-15 19:28:18,136 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 19:29:13,086 - Epoch: [41][ 100/ 813] Overall Loss 0.597312 Objective Loss 0.597312 LR 0.000100 Time 0.549474 +2025-05-15 19:30:05,023 - Epoch: [41][ 200/ 813] Overall Loss 0.592637 Objective Loss 0.592637 LR 0.000100 Time 0.534414 +2025-05-15 19:30:58,590 - Epoch: [41][ 300/ 813] Overall Loss 0.591813 Objective Loss 0.591813 LR 0.000100 Time 0.534827 +2025-05-15 19:31:48,881 - Epoch: [41][ 400/ 813] Overall Loss 0.599123 Objective Loss 0.599123 LR 0.000100 Time 0.526842 +2025-05-15 19:32:41,460 - Epoch: [41][ 500/ 813] Overall Loss 0.600725 Objective Loss 0.600725 LR 0.000100 Time 0.526630 +2025-05-15 19:33:33,544 - Epoch: [41][ 600/ 813] Overall Loss 0.610323 Objective Loss 0.610323 LR 0.000100 Time 0.525656 +2025-05-15 19:34:26,881 - Epoch: [41][ 700/ 813] Overall Loss 0.614780 Objective Loss 0.614780 LR 0.000100 Time 0.526755 +2025-05-15 19:35:18,243 - Epoch: [41][ 800/ 813] Overall Loss 0.612533 Objective Loss 0.612533 LR 0.000100 Time 0.525112 +2025-05-15 19:35:24,253 - Epoch: [41][ 813/ 813] Overall Loss 0.613375 Objective Loss 0.613375 LR 0.000100 Time 0.524106 +2025-05-15 19:35:24,290 - --- validate (epoch=41)----------- +2025-05-15 19:35:24,290 - 3250 samples (16 per mini-batch) +2025-05-15 19:35:24,292 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 19:36:18,848 - Epoch: [41][ 100/ 204] Loss 0.783737 mAP 0.869930 +2025-05-15 19:37:09,652 - Epoch: [41][ 200/ 204] Loss 0.780272 mAP 0.870232 +2025-05-15 19:37:10,986 - Epoch: [41][ 204/ 204] Loss 0.777283 mAP 0.880015 +2025-05-15 19:37:11,013 - ==> mAP: 0.88002 Loss: 0.777 + +2025-05-15 19:37:11,018 - ==> Best [mAP: 0.898834 vloss: 0.765439 Params: 368352 on epoch: 35] +2025-05-15 19:37:11,018 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 19:37:11,062 - + +2025-05-15 19:37:11,062 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 19:38:05,611 - Epoch: [42][ 100/ 813] Overall Loss 0.597048 Objective Loss 0.597048 LR 0.000100 Time 0.545466 +2025-05-15 19:38:59,450 - Epoch: [42][ 200/ 813] Overall Loss 0.580756 Objective Loss 0.580756 LR 0.000100 Time 0.541919 +2025-05-15 19:39:53,133 - Epoch: [42][ 300/ 813] Overall Loss 0.609007 Objective Loss 0.609007 LR 0.000100 Time 0.540206 +2025-05-15 19:40:42,580 - Epoch: [42][ 400/ 813] Overall Loss 0.605262 Objective Loss 0.605262 LR 0.000100 Time 0.528765 +2025-05-15 19:41:34,969 - Epoch: [42][ 500/ 813] Overall Loss 0.611275 Objective Loss 0.611275 LR 0.000100 Time 0.527788 +2025-05-15 19:42:27,518 - Epoch: [42][ 600/ 813] Overall Loss 0.621745 Objective Loss 0.621745 LR 0.000100 Time 0.527402 +2025-05-15 19:43:20,328 - Epoch: [42][ 700/ 813] Overall Loss 0.621702 Objective Loss 0.621702 LR 0.000100 Time 0.527482 +2025-05-15 19:44:12,359 - Epoch: [42][ 800/ 813] Overall Loss 0.628214 Objective Loss 0.628214 LR 0.000100 Time 0.526583 +2025-05-15 19:44:17,173 - Epoch: [42][ 813/ 813] Overall Loss 0.628815 Objective Loss 0.628815 LR 0.000100 Time 0.524084 +2025-05-15 19:44:17,201 - --- validate (epoch=42)----------- +2025-05-15 19:44:17,202 - 3250 samples (16 per mini-batch) +2025-05-15 19:44:17,204 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 19:45:12,974 - Epoch: [42][ 100/ 204] Loss 0.789590 mAP 0.880221 +2025-05-15 19:46:04,896 - Epoch: [42][ 200/ 204] Loss 0.795128 mAP 0.880087 +2025-05-15 19:46:05,563 - Epoch: [42][ 204/ 204] Loss 0.794691 mAP 0.870258 +2025-05-15 19:46:05,592 - ==> mAP: 0.87026 Loss: 0.795 + +2025-05-15 19:46:05,596 - ==> Best [mAP: 0.898834 vloss: 0.765439 Params: 368352 on epoch: 35] +2025-05-15 19:46:05,597 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 19:46:05,641 - + +2025-05-15 19:46:05,641 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 19:47:00,412 - Epoch: [43][ 100/ 813] Overall Loss 0.583412 Objective Loss 0.583412 LR 0.000100 Time 0.547678 +2025-05-15 19:47:54,055 - Epoch: [43][ 200/ 813] Overall Loss 0.582443 Objective Loss 0.582443 LR 0.000100 Time 0.542046 +2025-05-15 19:48:45,188 - Epoch: [43][ 300/ 813] Overall Loss 0.588971 Objective Loss 0.588971 LR 0.000100 Time 0.531799 +2025-05-15 19:49:36,700 - Epoch: [43][ 400/ 813] Overall Loss 0.602941 Objective Loss 0.602941 LR 0.000100 Time 0.527625 +2025-05-15 19:50:28,533 - Epoch: [43][ 500/ 813] Overall Loss 0.610611 Objective Loss 0.610611 LR 0.000100 Time 0.525764 +2025-05-15 19:51:21,260 - Epoch: [43][ 600/ 813] Overall Loss 0.613124 Objective Loss 0.613124 LR 0.000100 Time 0.526013 +2025-05-15 19:52:13,461 - Epoch: [43][ 700/ 813] Overall Loss 0.621214 Objective Loss 0.621214 LR 0.000100 Time 0.525429 +2025-05-15 19:53:05,734 - Epoch: [43][ 800/ 813] Overall Loss 0.625706 Objective Loss 0.625706 LR 0.000100 Time 0.525090 +2025-05-15 19:53:10,864 - Epoch: [43][ 813/ 813] Overall Loss 0.626627 Objective Loss 0.626627 LR 0.000100 Time 0.523002 +2025-05-15 19:53:10,896 - --- validate (epoch=43)----------- +2025-05-15 19:53:10,897 - 3250 samples (16 per mini-batch) +2025-05-15 19:53:10,899 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 19:54:04,660 - Epoch: [43][ 100/ 204] Loss 0.760744 mAP 0.890054 +2025-05-15 19:54:58,338 - Epoch: [43][ 200/ 204] Loss 0.766660 mAP 0.890025 +2025-05-15 19:54:59,022 - Epoch: [43][ 204/ 204] Loss 0.768504 mAP 0.890007 +2025-05-15 19:54:59,049 - ==> mAP: 0.89001 Loss: 0.769 + +2025-05-15 19:54:59,054 - ==> Best [mAP: 0.898834 vloss: 0.765439 Params: 368352 on epoch: 35] +2025-05-15 19:54:59,054 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 19:54:59,098 - + +2025-05-15 19:54:59,098 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 19:55:54,235 - Epoch: [44][ 100/ 813] Overall Loss 0.558838 Objective Loss 0.558838 LR 0.000100 Time 0.551337 +2025-05-15 19:56:46,733 - Epoch: [44][ 200/ 813] Overall Loss 0.556138 Objective Loss 0.556138 LR 0.000100 Time 0.538149 +2025-05-15 19:57:39,991 - Epoch: [44][ 300/ 813] Overall Loss 0.572399 Objective Loss 0.572399 LR 0.000100 Time 0.536290 +2025-05-15 19:58:29,816 - Epoch: [44][ 400/ 813] Overall Loss 0.587737 Objective Loss 0.587737 LR 0.000100 Time 0.526775 +2025-05-15 19:59:22,837 - Epoch: [44][ 500/ 813] Overall Loss 0.592149 Objective Loss 0.592149 LR 0.000100 Time 0.527456 +2025-05-15 20:00:14,894 - Epoch: [44][ 600/ 813] Overall Loss 0.597410 Objective Loss 0.597410 LR 0.000100 Time 0.526305 +2025-05-15 20:01:08,815 - Epoch: [44][ 700/ 813] Overall Loss 0.602014 Objective Loss 0.602014 LR 0.000100 Time 0.528147 +2025-05-15 20:02:01,118 - Epoch: [44][ 800/ 813] Overall Loss 0.606612 Objective Loss 0.606612 LR 0.000100 Time 0.527504 +2025-05-15 20:02:05,647 - Epoch: [44][ 813/ 813] Overall Loss 0.606965 Objective Loss 0.606965 LR 0.000100 Time 0.524640 +2025-05-15 20:02:05,679 - --- validate (epoch=44)----------- +2025-05-15 20:02:05,679 - 3250 samples (16 per mini-batch) +2025-05-15 20:02:05,682 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 20:02:59,634 - Epoch: [44][ 100/ 204] Loss 0.726118 mAP 0.910360 +2025-05-15 20:03:52,772 - Epoch: [44][ 200/ 204] Loss 0.748592 mAP 0.900142 +2025-05-15 20:03:53,227 - Epoch: [44][ 204/ 204] Loss 0.753084 mAP 0.900154 +2025-05-15 20:03:53,253 - ==> mAP: 0.90015 Loss: 0.753 + +2025-05-15 20:03:53,258 - ==> Best [mAP: 0.900154 vloss: 0.753084 Params: 368352 on epoch: 44] +2025-05-15 20:03:53,258 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 20:03:53,306 - + +2025-05-15 20:03:53,306 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 20:04:46,841 - Epoch: [45][ 100/ 813] Overall Loss 0.540763 Objective Loss 0.540763 LR 0.000100 Time 0.535321 +2025-05-15 20:05:40,077 - Epoch: [45][ 200/ 813] Overall Loss 0.548922 Objective Loss 0.548922 LR 0.000100 Time 0.533828 +2025-05-15 20:06:33,623 - Epoch: [45][ 300/ 813] Overall Loss 0.565072 Objective Loss 0.565072 LR 0.000100 Time 0.534355 +2025-05-15 20:07:23,738 - Epoch: [45][ 400/ 813] Overall Loss 0.581886 Objective Loss 0.581886 LR 0.000100 Time 0.526049 +2025-05-15 20:08:15,741 - Epoch: [45][ 500/ 813] Overall Loss 0.586774 Objective Loss 0.586774 LR 0.000100 Time 0.524837 +2025-05-15 20:09:08,267 - Epoch: [45][ 600/ 813] Overall Loss 0.592589 Objective Loss 0.592589 LR 0.000100 Time 0.524904 +2025-05-15 20:10:00,074 - Epoch: [45][ 700/ 813] Overall Loss 0.599871 Objective Loss 0.599871 LR 0.000100 Time 0.523925 +2025-05-15 20:10:53,034 - Epoch: [45][ 800/ 813] Overall Loss 0.604652 Objective Loss 0.604652 LR 0.000100 Time 0.524633 +2025-05-15 20:10:58,299 - Epoch: [45][ 813/ 813] Overall Loss 0.605382 Objective Loss 0.605382 LR 0.000100 Time 0.522715 +2025-05-15 20:10:58,330 - --- validate (epoch=45)----------- +2025-05-15 20:10:58,331 - 3250 samples (16 per mini-batch) +2025-05-15 20:10:58,333 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 20:11:52,029 - Epoch: [45][ 100/ 204] Loss 0.818650 mAP 0.870139 +2025-05-15 20:12:44,149 - Epoch: [45][ 200/ 204] Loss 0.795666 mAP 0.879590 +2025-05-15 20:12:45,016 - Epoch: [45][ 204/ 204] Loss 0.811139 mAP 0.879590 +2025-05-15 20:12:45,043 - ==> mAP: 0.87959 Loss: 0.811 + +2025-05-15 20:12:45,049 - ==> Best [mAP: 0.900154 vloss: 0.753084 Params: 368352 on epoch: 44] +2025-05-15 20:12:45,049 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 20:12:45,094 - + +2025-05-15 20:12:45,094 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 20:13:39,762 - Epoch: [46][ 100/ 813] Overall Loss 0.618723 Objective Loss 0.618723 LR 0.000100 Time 0.546651 +2025-05-15 20:14:31,757 - Epoch: [46][ 200/ 813] Overall Loss 0.591355 Objective Loss 0.591355 LR 0.000100 Time 0.533293 +2025-05-15 20:15:23,677 - Epoch: [46][ 300/ 813] Overall Loss 0.595085 Objective Loss 0.595085 LR 0.000100 Time 0.528586 +2025-05-15 20:16:14,767 - Epoch: [46][ 400/ 813] Overall Loss 0.599486 Objective Loss 0.599486 LR 0.000100 Time 0.524161 +2025-05-15 20:17:06,970 - Epoch: [46][ 500/ 813] Overall Loss 0.601493 Objective Loss 0.601493 LR 0.000100 Time 0.523730 +2025-05-15 20:17:58,744 - Epoch: [46][ 600/ 813] Overall Loss 0.602310 Objective Loss 0.602310 LR 0.000100 Time 0.522729 +2025-05-15 20:18:52,976 - Epoch: [46][ 700/ 813] Overall Loss 0.606655 Objective Loss 0.606655 LR 0.000100 Time 0.525525 +2025-05-15 20:19:44,136 - Epoch: [46][ 800/ 813] Overall Loss 0.606348 Objective Loss 0.606348 LR 0.000100 Time 0.523782 +2025-05-15 20:19:49,597 - Epoch: [46][ 813/ 813] Overall Loss 0.606310 Objective Loss 0.606310 LR 0.000100 Time 0.522124 +2025-05-15 20:19:49,624 - --- validate (epoch=46)----------- +2025-05-15 20:19:49,625 - 3250 samples (16 per mini-batch) +2025-05-15 20:19:49,627 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 20:20:43,652 - Epoch: [46][ 100/ 204] Loss 0.733081 mAP 0.890232 +2025-05-15 20:21:36,893 - Epoch: [46][ 200/ 204] Loss 0.761724 mAP 0.889393 +2025-05-15 20:21:37,359 - Epoch: [46][ 204/ 204] Loss 0.757877 mAP 0.889336 +2025-05-15 20:21:37,390 - ==> mAP: 0.88934 Loss: 0.758 + +2025-05-15 20:21:37,395 - ==> Best [mAP: 0.900154 vloss: 0.753084 Params: 368352 on epoch: 44] +2025-05-15 20:21:37,395 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 20:21:37,440 - + +2025-05-15 20:21:37,440 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 20:22:33,055 - Epoch: [47][ 100/ 813] Overall Loss 0.567339 Objective Loss 0.567339 LR 0.000100 Time 0.556115 +2025-05-15 20:23:25,499 - Epoch: [47][ 200/ 813] Overall Loss 0.564438 Objective Loss 0.564438 LR 0.000100 Time 0.540270 +2025-05-15 20:24:18,294 - Epoch: [47][ 300/ 813] Overall Loss 0.564288 Objective Loss 0.564288 LR 0.000100 Time 0.536157 +2025-05-15 20:25:08,139 - Epoch: [47][ 400/ 813] Overall Loss 0.570742 Objective Loss 0.570742 LR 0.000100 Time 0.526728 +2025-05-15 20:25:59,654 - Epoch: [47][ 500/ 813] Overall Loss 0.575288 Objective Loss 0.575288 LR 0.000100 Time 0.524408 +2025-05-15 20:26:51,868 - Epoch: [47][ 600/ 813] Overall Loss 0.580005 Objective Loss 0.580005 LR 0.000100 Time 0.524027 +2025-05-15 20:27:45,273 - Epoch: [47][ 700/ 813] Overall Loss 0.587631 Objective Loss 0.587631 LR 0.000100 Time 0.525457 +2025-05-15 20:28:38,193 - Epoch: [47][ 800/ 813] Overall Loss 0.590574 Objective Loss 0.590574 LR 0.000100 Time 0.525920 +2025-05-15 20:28:42,734 - Epoch: [47][ 813/ 813] Overall Loss 0.590385 Objective Loss 0.590385 LR 0.000100 Time 0.523096 +2025-05-15 20:28:42,764 - --- validate (epoch=47)----------- +2025-05-15 20:28:42,765 - 3250 samples (16 per mini-batch) +2025-05-15 20:28:42,767 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 20:29:36,055 - Epoch: [47][ 100/ 204] Loss 0.735700 mAP 0.889684 +2025-05-15 20:30:28,784 - Epoch: [47][ 200/ 204] Loss 0.751340 mAP 0.879818 +2025-05-15 20:30:29,649 - Epoch: [47][ 204/ 204] Loss 0.748446 mAP 0.879840 +2025-05-15 20:30:29,680 - ==> mAP: 0.87984 Loss: 0.748 + +2025-05-15 20:30:29,685 - ==> Best [mAP: 0.900154 vloss: 0.753084 Params: 368352 on epoch: 44] +2025-05-15 20:30:29,685 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 20:30:29,731 - + +2025-05-15 20:30:29,731 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 20:31:23,499 - Epoch: [48][ 100/ 813] Overall Loss 0.521946 Objective Loss 0.521946 LR 0.000100 Time 0.537647 +2025-05-15 20:32:16,091 - Epoch: [48][ 200/ 813] Overall Loss 0.533452 Objective Loss 0.533452 LR 0.000100 Time 0.531779 +2025-05-15 20:33:08,079 - Epoch: [48][ 300/ 813] Overall Loss 0.567058 Objective Loss 0.567058 LR 0.000100 Time 0.527807 +2025-05-15 20:33:58,794 - Epoch: [48][ 400/ 813] Overall Loss 0.568822 Objective Loss 0.568822 LR 0.000100 Time 0.522638 +2025-05-15 20:34:50,534 - Epoch: [48][ 500/ 813] Overall Loss 0.575238 Objective Loss 0.575238 LR 0.000100 Time 0.521587 +2025-05-15 20:35:43,781 - Epoch: [48][ 600/ 813] Overall Loss 0.580312 Objective Loss 0.580312 LR 0.000100 Time 0.523399 +2025-05-15 20:36:34,965 - Epoch: [48][ 700/ 813] Overall Loss 0.589132 Objective Loss 0.589132 LR 0.000100 Time 0.521745 +2025-05-15 20:37:28,861 - Epoch: [48][ 800/ 813] Overall Loss 0.592847 Objective Loss 0.592847 LR 0.000100 Time 0.523890 +2025-05-15 20:37:34,685 - Epoch: [48][ 813/ 813] Overall Loss 0.592224 Objective Loss 0.592224 LR 0.000100 Time 0.522672 +2025-05-15 20:37:34,722 - --- validate (epoch=48)----------- +2025-05-15 20:37:34,723 - 3250 samples (16 per mini-batch) +2025-05-15 20:37:34,725 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 20:38:27,481 - Epoch: [48][ 100/ 204] Loss 0.754156 mAP 0.889725 +2025-05-15 20:39:21,746 - Epoch: [48][ 200/ 204] Loss 0.749966 mAP 0.889654 +2025-05-15 20:39:22,800 - Epoch: [48][ 204/ 204] Loss 0.750797 mAP 0.889671 +2025-05-15 20:39:22,828 - ==> mAP: 0.88967 Loss: 0.751 + +2025-05-15 20:39:22,833 - ==> Best [mAP: 0.900154 vloss: 0.753084 Params: 368352 on epoch: 44] +2025-05-15 20:39:22,833 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 20:39:22,879 - + +2025-05-15 20:39:22,879 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 20:40:18,265 - Epoch: [49][ 100/ 813] Overall Loss 0.548669 Objective Loss 0.548669 LR 0.000100 Time 0.553822 +2025-05-15 20:41:10,160 - Epoch: [49][ 200/ 813] Overall Loss 0.565602 Objective Loss 0.565602 LR 0.000100 Time 0.536379 +2025-05-15 20:42:02,233 - Epoch: [49][ 300/ 813] Overall Loss 0.582384 Objective Loss 0.582384 LR 0.000100 Time 0.531157 +2025-05-15 20:42:52,973 - Epoch: [49][ 400/ 813] Overall Loss 0.581300 Objective Loss 0.581300 LR 0.000100 Time 0.525213 +2025-05-15 20:43:44,956 - Epoch: [49][ 500/ 813] Overall Loss 0.577886 Objective Loss 0.577886 LR 0.000100 Time 0.524135 +2025-05-15 20:44:37,471 - Epoch: [49][ 600/ 813] Overall Loss 0.581657 Objective Loss 0.581657 LR 0.000100 Time 0.524301 +2025-05-15 20:45:30,012 - Epoch: [49][ 700/ 813] Overall Loss 0.584321 Objective Loss 0.584321 LR 0.000100 Time 0.524457 +2025-05-15 20:46:22,405 - Epoch: [49][ 800/ 813] Overall Loss 0.584577 Objective Loss 0.584577 LR 0.000100 Time 0.524387 +2025-05-15 20:46:27,640 - Epoch: [49][ 813/ 813] Overall Loss 0.586285 Objective Loss 0.586285 LR 0.000100 Time 0.522437 +2025-05-15 20:46:27,673 - --- validate (epoch=49)----------- +2025-05-15 20:46:27,674 - 3250 samples (16 per mini-batch) +2025-05-15 20:46:27,676 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 20:47:22,551 - Epoch: [49][ 100/ 204] Loss 0.791955 mAP 0.889926 +2025-05-15 20:48:15,135 - Epoch: [49][ 200/ 204] Loss 0.791475 mAP 0.889998 +2025-05-15 20:48:15,951 - Epoch: [49][ 204/ 204] Loss 0.792341 mAP 0.889988 +2025-05-15 20:48:15,982 - ==> mAP: 0.88999 Loss: 0.792 + +2025-05-15 20:48:15,987 - ==> Best [mAP: 0.900154 vloss: 0.753084 Params: 368352 on epoch: 44] +2025-05-15 20:48:15,987 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_checkpoint.pth.tar +2025-05-15 20:48:16,033 - Initiating quantization aware training (QAT)... +2025-05-15 20:48:16,036 - Collecting statistics for quantization aware training (QAT)... +2025-05-15 20:55:24,764 - + +2025-05-15 20:55:24,764 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 20:56:20,537 - Epoch: [50][ 100/ 813] Overall Loss 0.753537 Objective Loss 0.753537 LR 0.000050 Time 0.557700 +2025-05-15 20:57:13,346 - Epoch: [50][ 200/ 813] Overall Loss 0.661668 Objective Loss 0.661668 LR 0.000050 Time 0.542887 +2025-05-15 20:58:05,104 - Epoch: [50][ 300/ 813] Overall Loss 0.627097 Objective Loss 0.627097 LR 0.000050 Time 0.534445 +2025-05-15 20:58:55,525 - Epoch: [50][ 400/ 813] Overall Loss 0.609928 Objective Loss 0.609928 LR 0.000050 Time 0.526874 +2025-05-15 20:59:48,786 - Epoch: [50][ 500/ 813] Overall Loss 0.598570 Objective Loss 0.598570 LR 0.000050 Time 0.528017 +2025-05-15 21:00:41,683 - Epoch: [50][ 600/ 813] Overall Loss 0.593831 Objective Loss 0.593831 LR 0.000050 Time 0.528173 +2025-05-15 21:01:35,661 - Epoch: [50][ 700/ 813] Overall Loss 0.590844 Objective Loss 0.590844 LR 0.000050 Time 0.529829 +2025-05-15 21:02:28,592 - Epoch: [50][ 800/ 813] Overall Loss 0.586768 Objective Loss 0.586768 LR 0.000050 Time 0.529762 +2025-05-15 21:02:33,471 - Epoch: [50][ 813/ 813] Overall Loss 0.587238 Objective Loss 0.587238 LR 0.000050 Time 0.527285 +2025-05-15 21:02:33,515 
- --- validate (epoch=50)----------- +2025-05-15 21:02:33,515 - 3250 samples (16 per mini-batch) +2025-05-15 21:02:33,517 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 21:03:28,099 - Epoch: [50][ 100/ 204] Loss 0.753598 mAP 0.889459 +2025-05-15 21:04:20,815 - Epoch: [50][ 200/ 204] Loss 0.757582 mAP 0.898550 +2025-05-15 21:04:21,619 - Epoch: [50][ 204/ 204] Loss 0.755726 mAP 0.898575 +2025-05-15 21:04:21,658 - ==> mAP: 0.89858 Loss: 0.756 + +2025-05-15 21:04:21,661 - ==> Best [mAP: 0.898575 vloss: 0.755726 Params: 368352 on epoch: 50] +2025-05-15 21:04:21,661 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 21:04:21,690 - + +2025-05-15 21:04:21,690 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 21:05:16,872 - Epoch: [51][ 100/ 813] Overall Loss 0.503175 Objective Loss 0.503175 LR 0.000050 Time 0.551790 +2025-05-15 21:06:11,274 - Epoch: [51][ 200/ 813] Overall Loss 0.517205 Objective Loss 0.517205 LR 0.000050 Time 0.547881 +2025-05-15 21:07:01,705 - Epoch: [51][ 300/ 813] Overall Loss 0.522690 Objective Loss 0.522690 LR 0.000050 Time 0.533353 +2025-05-15 21:07:52,768 - Epoch: [51][ 400/ 813] Overall Loss 0.533753 Objective Loss 0.533753 LR 0.000050 Time 0.527666 +2025-05-15 21:08:44,161 - Epoch: [51][ 500/ 813] Overall Loss 0.533458 Objective Loss 0.533458 LR 0.000050 Time 0.524917 +2025-05-15 21:09:36,976 - Epoch: [51][ 600/ 813] Overall Loss 0.532917 Objective Loss 0.532917 LR 0.000050 Time 0.525452 +2025-05-15 21:10:29,395 - Epoch: [51][ 700/ 813] Overall Loss 0.532734 Objective Loss 0.532734 LR 0.000050 Time 0.525261 +2025-05-15 21:11:22,551 - Epoch: [51][ 800/ 813] Overall Loss 0.533792 Objective Loss 0.533792 LR 0.000050 Time 0.526046 +2025-05-15 21:11:27,536 - Epoch: [51][ 813/ 813] Overall Loss 0.534828 Objective Loss 0.534828 LR 0.000050 Time 0.523765 +2025-05-15 21:11:27,579 - 
--- validate (epoch=51)----------- +2025-05-15 21:11:27,580 - 3250 samples (16 per mini-batch) +2025-05-15 21:11:27,582 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 21:12:22,807 - Epoch: [51][ 100/ 204] Loss 0.711346 mAP 0.908747 +2025-05-15 21:13:14,630 - Epoch: [51][ 200/ 204] Loss 0.710590 mAP 0.899020 +2025-05-15 21:13:15,779 - Epoch: [51][ 204/ 204] Loss 0.712283 mAP 0.899054 +2025-05-15 21:13:15,822 - ==> mAP: 0.89905 Loss: 0.712 + +2025-05-15 21:13:15,826 - ==> Best [mAP: 0.899054 vloss: 0.712283 Params: 368352 on epoch: 51] +2025-05-15 21:13:15,826 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 21:13:15,866 - + +2025-05-15 21:13:15,866 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 21:14:10,191 - Epoch: [52][ 100/ 813] Overall Loss 0.496149 Objective Loss 0.496149 LR 0.000050 Time 0.543228 +2025-05-15 21:15:05,259 - Epoch: [52][ 200/ 813] Overall Loss 0.491887 Objective Loss 0.491887 LR 0.000050 Time 0.546942 +2025-05-15 21:15:56,535 - Epoch: [52][ 300/ 813] Overall Loss 0.500516 Objective Loss 0.500516 LR 0.000050 Time 0.535543 +2025-05-15 21:16:47,409 - Epoch: [52][ 400/ 813] Overall Loss 0.506693 Objective Loss 0.506693 LR 0.000050 Time 0.528838 +2025-05-15 21:17:39,892 - Epoch: [52][ 500/ 813] Overall Loss 0.503369 Objective Loss 0.503369 LR 0.000050 Time 0.528033 +2025-05-15 21:18:31,615 - Epoch: [52][ 600/ 813] Overall Loss 0.502639 Objective Loss 0.502639 LR 0.000050 Time 0.526218 +2025-05-15 21:19:26,086 - Epoch: [52][ 700/ 813] Overall Loss 0.509640 Objective Loss 0.509640 LR 0.000050 Time 0.528858 +2025-05-15 21:20:16,820 - Epoch: [52][ 800/ 813] Overall Loss 0.510012 Objective Loss 0.510012 LR 0.000050 Time 0.526165 +2025-05-15 21:20:22,496 - Epoch: [52][ 813/ 813] Overall Loss 0.510229 Objective Loss 0.510229 LR 0.000050 Time 0.524729 +2025-05-15 21:20:22,534 - --- 
validate (epoch=52)----------- +2025-05-15 21:20:22,534 - 3250 samples (16 per mini-batch) +2025-05-15 21:20:22,536 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 21:21:17,531 - Epoch: [52][ 100/ 204] Loss 0.716960 mAP 0.889418 +2025-05-15 21:22:09,805 - Epoch: [52][ 200/ 204] Loss 0.699814 mAP 0.889887 +2025-05-15 21:22:10,511 - Epoch: [52][ 204/ 204] Loss 0.700352 mAP 0.889902 +2025-05-15 21:22:10,552 - ==> mAP: 0.88990 Loss: 0.700 + +2025-05-15 21:22:10,556 - ==> Best [mAP: 0.899054 vloss: 0.712283 Params: 368352 on epoch: 51] +2025-05-15 21:22:10,556 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 21:22:10,591 - + +2025-05-15 21:22:10,592 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 21:23:05,692 - Epoch: [53][ 100/ 813] Overall Loss 0.518589 Objective Loss 0.518589 LR 0.000050 Time 0.550978 +2025-05-15 21:23:58,540 - Epoch: [53][ 200/ 813] Overall Loss 0.533486 Objective Loss 0.533486 LR 0.000050 Time 0.539720 +2025-05-15 21:24:51,576 - Epoch: [53][ 300/ 813] Overall Loss 0.537253 Objective Loss 0.537253 LR 0.000050 Time 0.536593 +2025-05-15 21:25:41,274 - Epoch: [53][ 400/ 813] Overall Loss 0.541864 Objective Loss 0.541864 LR 0.000050 Time 0.526686 +2025-05-15 21:26:33,953 - Epoch: [53][ 500/ 813] Overall Loss 0.545499 Objective Loss 0.545499 LR 0.000050 Time 0.526701 +2025-05-15 21:27:27,173 - Epoch: [53][ 600/ 813] Overall Loss 0.548212 Objective Loss 0.548212 LR 0.000050 Time 0.527616 +2025-05-15 21:28:19,542 - Epoch: [53][ 700/ 813] Overall Loss 0.552557 Objective Loss 0.552557 LR 0.000050 Time 0.527052 +2025-05-15 21:29:12,072 - Epoch: [53][ 800/ 813] Overall Loss 0.549642 Objective Loss 0.549642 LR 0.000050 Time 0.526832 +2025-05-15 21:29:16,801 - Epoch: [53][ 813/ 813] Overall Loss 0.550073 Objective Loss 0.550073 LR 0.000050 Time 0.524223 +2025-05-15 21:29:16,836 - --- 
validate (epoch=53)----------- +2025-05-15 21:29:16,837 - 3250 samples (16 per mini-batch) +2025-05-15 21:29:16,839 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 21:30:10,578 - Epoch: [53][ 100/ 204] Loss 0.741481 mAP 0.889324 +2025-05-15 21:31:03,940 - Epoch: [53][ 200/ 204] Loss 0.713396 mAP 0.889900 +2025-05-15 21:31:04,734 - Epoch: [53][ 204/ 204] Loss 0.711279 mAP 0.889930 +2025-05-15 21:31:04,781 - ==> mAP: 0.88993 Loss: 0.711 + +2025-05-15 21:31:04,785 - ==> Best [mAP: 0.899054 vloss: 0.712283 Params: 368352 on epoch: 51] +2025-05-15 21:31:04,785 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 21:31:04,814 - + +2025-05-15 21:31:04,814 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 21:31:59,598 - Epoch: [54][ 100/ 813] Overall Loss 0.496482 Objective Loss 0.496482 LR 0.000050 Time 0.547810 +2025-05-15 21:32:53,854 - Epoch: [54][ 200/ 813] Overall Loss 0.520954 Objective Loss 0.520954 LR 0.000050 Time 0.545176 +2025-05-15 21:33:45,351 - Epoch: [54][ 300/ 813] Overall Loss 0.515055 Objective Loss 0.515055 LR 0.000050 Time 0.535101 +2025-05-15 21:34:35,573 - Epoch: [54][ 400/ 813] Overall Loss 0.516070 Objective Loss 0.516070 LR 0.000050 Time 0.526877 +2025-05-15 21:35:26,252 - Epoch: [54][ 500/ 813] Overall Loss 0.517090 Objective Loss 0.517090 LR 0.000050 Time 0.522840 +2025-05-15 21:36:19,338 - Epoch: [54][ 600/ 813] Overall Loss 0.516604 Objective Loss 0.516604 LR 0.000050 Time 0.524175 +2025-05-15 21:37:14,340 - Epoch: [54][ 700/ 813] Overall Loss 0.520399 Objective Loss 0.520399 LR 0.000050 Time 0.527864 +2025-05-15 21:38:06,543 - Epoch: [54][ 800/ 813] Overall Loss 0.526299 Objective Loss 0.526299 LR 0.000050 Time 0.527133 +2025-05-15 21:38:11,855 - Epoch: [54][ 813/ 813] Overall Loss 0.528153 Objective Loss 0.528153 LR 0.000050 Time 0.525237 +2025-05-15 21:38:11,893 - --- 
validate (epoch=54)----------- +2025-05-15 21:38:11,893 - 3250 samples (16 per mini-batch) +2025-05-15 21:38:11,895 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 21:39:07,140 - Epoch: [54][ 100/ 204] Loss 0.723086 mAP 0.880314 +2025-05-15 21:39:59,696 - Epoch: [54][ 200/ 204] Loss 0.719127 mAP 0.880554 +2025-05-15 21:40:00,200 - Epoch: [54][ 204/ 204] Loss 0.718330 mAP 0.880567 +2025-05-15 21:40:00,238 - ==> mAP: 0.88057 Loss: 0.718 + +2025-05-15 21:40:00,242 - ==> Best [mAP: 0.899054 vloss: 0.712283 Params: 368352 on epoch: 51] +2025-05-15 21:40:00,242 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 21:40:00,276 - + +2025-05-15 21:40:00,276 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 21:40:54,517 - Epoch: [55][ 100/ 813] Overall Loss 0.466437 Objective Loss 0.466437 LR 0.000050 Time 0.542381 +2025-05-15 21:41:47,817 - Epoch: [55][ 200/ 813] Overall Loss 0.445431 Objective Loss 0.445431 LR 0.000050 Time 0.537668 +2025-05-15 21:42:40,603 - Epoch: [55][ 300/ 813] Overall Loss 0.457960 Objective Loss 0.457960 LR 0.000050 Time 0.534392 +2025-05-15 21:43:31,579 - Epoch: [55][ 400/ 813] Overall Loss 0.465042 Objective Loss 0.465042 LR 0.000050 Time 0.528229 +2025-05-15 21:44:24,338 - Epoch: [55][ 500/ 813] Overall Loss 0.474221 Objective Loss 0.474221 LR 0.000050 Time 0.528098 +2025-05-15 21:45:17,643 - Epoch: [55][ 600/ 813] Overall Loss 0.476032 Objective Loss 0.476032 LR 0.000050 Time 0.528920 +2025-05-15 21:46:10,093 - Epoch: [55][ 700/ 813] Overall Loss 0.484284 Objective Loss 0.484284 LR 0.000050 Time 0.528283 +2025-05-15 21:47:01,182 - Epoch: [55][ 800/ 813] Overall Loss 0.487239 Objective Loss 0.487239 LR 0.000050 Time 0.526107 +2025-05-15 21:47:06,979 - Epoch: [55][ 813/ 813] Overall Loss 0.486921 Objective Loss 0.486921 LR 0.000050 Time 0.524824 +2025-05-15 21:47:07,019 - --- 
validate (epoch=55)----------- +2025-05-15 21:47:07,020 - 3250 samples (16 per mini-batch) +2025-05-15 21:47:07,022 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 21:48:00,584 - Epoch: [55][ 100/ 204] Loss 0.688248 mAP 0.899879 +2025-05-15 21:48:55,120 - Epoch: [55][ 200/ 204] Loss 0.690056 mAP 0.899539 +2025-05-15 21:48:55,598 - Epoch: [55][ 204/ 204] Loss 0.691983 mAP 0.899562 +2025-05-15 21:48:55,647 - ==> mAP: 0.89956 Loss: 0.692 + +2025-05-15 21:48:55,651 - ==> Best [mAP: 0.899562 vloss: 0.691983 Params: 368352 on epoch: 55] +2025-05-15 21:48:55,651 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 21:48:55,689 - + +2025-05-15 21:48:55,689 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 21:49:53,566 - Epoch: [56][ 100/ 813] Overall Loss 0.425739 Objective Loss 0.425739 LR 0.000050 Time 0.578744 +2025-05-15 21:50:45,952 - Epoch: [56][ 200/ 813] Overall Loss 0.437936 Objective Loss 0.437936 LR 0.000050 Time 0.551293 +2025-05-15 21:51:38,427 - Epoch: [56][ 300/ 813] Overall Loss 0.457367 Objective Loss 0.457367 LR 0.000050 Time 0.542442 +2025-05-15 21:52:28,101 - Epoch: [56][ 400/ 813] Overall Loss 0.469259 Objective Loss 0.469259 LR 0.000050 Time 0.531011 +2025-05-15 21:53:20,168 - Epoch: [56][ 500/ 813] Overall Loss 0.475756 Objective Loss 0.475756 LR 0.000050 Time 0.528940 +2025-05-15 21:54:13,825 - Epoch: [56][ 600/ 813] Overall Loss 0.489116 Objective Loss 0.489116 LR 0.000050 Time 0.530208 +2025-05-15 21:55:07,060 - Epoch: [56][ 700/ 813] Overall Loss 0.489840 Objective Loss 0.489840 LR 0.000050 Time 0.530511 +2025-05-15 21:56:00,807 - Epoch: [56][ 800/ 813] Overall Loss 0.495210 Objective Loss 0.495210 LR 0.000050 Time 0.531380 +2025-05-15 21:56:05,382 - Epoch: [56][ 813/ 813] Overall Loss 0.498517 Objective Loss 0.498517 LR 0.000050 Time 0.528509 +2025-05-15 21:56:05,425 - --- 
validate (epoch=56)----------- +2025-05-15 21:56:05,426 - 3250 samples (16 per mini-batch) +2025-05-15 21:56:05,428 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 21:57:01,142 - Epoch: [56][ 100/ 204] Loss 0.687693 mAP 0.888637 +2025-05-15 21:57:52,907 - Epoch: [56][ 200/ 204] Loss 0.680532 mAP 0.898384 +2025-05-15 21:57:53,858 - Epoch: [56][ 204/ 204] Loss 0.682461 mAP 0.888641 +2025-05-15 21:57:53,900 - ==> mAP: 0.88864 Loss: 0.682 + +2025-05-15 21:57:53,904 - ==> Best [mAP: 0.899562 vloss: 0.691983 Params: 368352 on epoch: 55] +2025-05-15 21:57:53,904 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 21:57:53,940 - + +2025-05-15 21:57:53,940 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 21:58:48,764 - Epoch: [57][ 100/ 813] Overall Loss 0.456267 Objective Loss 0.456267 LR 0.000050 Time 0.548146 +2025-05-15 21:59:42,814 - Epoch: [57][ 200/ 813] Overall Loss 0.473221 Objective Loss 0.473221 LR 0.000050 Time 0.544312 +2025-05-15 22:00:35,716 - Epoch: [57][ 300/ 813] Overall Loss 0.480097 Objective Loss 0.480097 LR 0.000050 Time 0.539211 +2025-05-15 22:01:24,775 - Epoch: [57][ 400/ 813] Overall Loss 0.482970 Objective Loss 0.482970 LR 0.000050 Time 0.527050 +2025-05-15 22:02:16,012 - Epoch: [57][ 500/ 813] Overall Loss 0.482126 Objective Loss 0.482126 LR 0.000050 Time 0.524110 +2025-05-15 22:03:09,651 - Epoch: [57][ 600/ 813] Overall Loss 0.480575 Objective Loss 0.480575 LR 0.000050 Time 0.526145 +2025-05-15 22:04:01,972 - Epoch: [57][ 700/ 813] Overall Loss 0.487314 Objective Loss 0.487314 LR 0.000050 Time 0.525723 +2025-05-15 22:04:55,463 - Epoch: [57][ 800/ 813] Overall Loss 0.488353 Objective Loss 0.488353 LR 0.000050 Time 0.526866 +2025-05-15 22:04:59,864 - Epoch: [57][ 813/ 813] Overall Loss 0.490122 Objective Loss 0.490122 LR 0.000050 Time 0.523846 +2025-05-15 22:04:59,896 - --- 
validate (epoch=57)----------- +2025-05-15 22:04:59,897 - 3250 samples (16 per mini-batch) +2025-05-15 22:04:59,899 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 22:05:54,936 - Epoch: [57][ 100/ 204] Loss 0.696295 mAP 0.898347 +2025-05-15 22:06:45,936 - Epoch: [57][ 200/ 204] Loss 0.683088 mAP 0.898688 +2025-05-15 22:06:46,738 - Epoch: [57][ 204/ 204] Loss 0.681620 mAP 0.898704 +2025-05-15 22:06:46,784 - ==> mAP: 0.89870 Loss: 0.682 + +2025-05-15 22:06:46,788 - ==> Best [mAP: 0.899562 vloss: 0.691983 Params: 368352 on epoch: 55] +2025-05-15 22:06:46,788 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 22:06:46,824 - + +2025-05-15 22:06:46,824 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 22:07:41,000 - Epoch: [58][ 100/ 813] Overall Loss 0.478004 Objective Loss 0.478004 LR 0.000050 Time 0.541732 +2025-05-15 22:08:36,392 - Epoch: [58][ 200/ 813] Overall Loss 0.463049 Objective Loss 0.463049 LR 0.000050 Time 0.547816 +2025-05-15 22:09:27,957 - Epoch: [58][ 300/ 813] Overall Loss 0.476187 Objective Loss 0.476187 LR 0.000050 Time 0.537090 +2025-05-15 22:10:19,876 - Epoch: [58][ 400/ 813] Overall Loss 0.480976 Objective Loss 0.480976 LR 0.000050 Time 0.532609 +2025-05-15 22:11:10,777 - Epoch: [58][ 500/ 813] Overall Loss 0.481846 Objective Loss 0.481846 LR 0.000050 Time 0.527886 +2025-05-15 22:12:03,996 - Epoch: [58][ 600/ 813] Overall Loss 0.483014 Objective Loss 0.483014 LR 0.000050 Time 0.528601 +2025-05-15 22:12:57,400 - Epoch: [58][ 700/ 813] Overall Loss 0.491412 Objective Loss 0.491412 LR 0.000050 Time 0.529372 +2025-05-15 22:13:50,716 - Epoch: [58][ 800/ 813] Overall Loss 0.493234 Objective Loss 0.493234 LR 0.000050 Time 0.529843 +2025-05-15 22:13:55,247 - Epoch: [58][ 813/ 813] Overall Loss 0.493872 Objective Loss 0.493872 LR 0.000050 Time 0.526943 +2025-05-15 22:13:55,283 - --- 
validate (epoch=58)----------- +2025-05-15 22:13:55,284 - 3250 samples (16 per mini-batch) +2025-05-15 22:13:55,286 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 22:14:50,597 - Epoch: [58][ 100/ 204] Loss 0.709794 mAP 0.899219 +2025-05-15 22:15:42,895 - Epoch: [58][ 200/ 204] Loss 0.697303 mAP 0.899266 +2025-05-15 22:15:43,993 - Epoch: [58][ 204/ 204] Loss 0.693628 mAP 0.889537 +2025-05-15 22:15:44,035 - ==> mAP: 0.88954 Loss: 0.694 + +2025-05-15 22:15:44,039 - ==> Best [mAP: 0.899562 vloss: 0.691983 Params: 368352 on epoch: 55] +2025-05-15 22:15:44,039 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 22:15:44,074 - + +2025-05-15 22:15:44,074 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 22:16:38,826 - Epoch: [59][ 100/ 813] Overall Loss 0.451380 Objective Loss 0.451380 LR 0.000050 Time 0.547488 +2025-05-15 22:17:32,780 - Epoch: [59][ 200/ 813] Overall Loss 0.472933 Objective Loss 0.472933 LR 0.000050 Time 0.543507 +2025-05-15 22:18:25,576 - Epoch: [59][ 300/ 813] Overall Loss 0.470079 Objective Loss 0.470079 LR 0.000050 Time 0.538317 +2025-05-15 22:19:14,819 - Epoch: [59][ 400/ 813] Overall Loss 0.471888 Objective Loss 0.471888 LR 0.000050 Time 0.526832 +2025-05-15 22:20:07,997 - Epoch: [59][ 500/ 813] Overall Loss 0.476266 Objective Loss 0.476266 LR 0.000050 Time 0.527819 +2025-05-15 22:21:00,757 - Epoch: [59][ 600/ 813] Overall Loss 0.484440 Objective Loss 0.484440 LR 0.000050 Time 0.527779 +2025-05-15 22:21:55,469 - Epoch: [59][ 700/ 813] Overall Loss 0.490242 Objective Loss 0.490242 LR 0.000050 Time 0.530539 +2025-05-15 22:22:47,755 - Epoch: [59][ 800/ 813] Overall Loss 0.490326 Objective Loss 0.490326 LR 0.000050 Time 0.529577 +2025-05-15 22:22:52,735 - Epoch: [59][ 813/ 813] Overall Loss 0.491105 Objective Loss 0.491105 LR 0.000050 Time 0.527234 +2025-05-15 22:22:52,782 - --- 
validate (epoch=59)----------- +2025-05-15 22:22:52,783 - 3250 samples (16 per mini-batch) +2025-05-15 22:22:52,785 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 22:23:46,941 - Epoch: [59][ 100/ 204] Loss 0.667400 mAP 0.890496 +2025-05-15 22:24:38,934 - Epoch: [59][ 200/ 204] Loss 0.684182 mAP 0.899952 +2025-05-15 22:24:39,587 - Epoch: [59][ 204/ 204] Loss 0.687309 mAP 0.899883 +2025-05-15 22:24:39,629 - ==> mAP: 0.89988 Loss: 0.687 + +2025-05-15 22:24:39,633 - ==> Best [mAP: 0.899883 vloss: 0.687309 Params: 368352 on epoch: 59] +2025-05-15 22:24:39,633 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 22:24:39,671 - + +2025-05-15 22:24:39,671 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 22:25:34,824 - Epoch: [60][ 100/ 813] Overall Loss 0.442040 Objective Loss 0.442040 LR 0.000050 Time 0.551498 +2025-05-15 22:26:28,231 - Epoch: [60][ 200/ 813] Overall Loss 0.446712 Objective Loss 0.446712 LR 0.000050 Time 0.542780 +2025-05-15 22:27:18,596 - Epoch: [60][ 300/ 813] Overall Loss 0.460846 Objective Loss 0.460846 LR 0.000050 Time 0.529728 +2025-05-15 22:28:10,230 - Epoch: [60][ 400/ 813] Overall Loss 0.460527 Objective Loss 0.460527 LR 0.000050 Time 0.526370 +2025-05-15 22:29:02,382 - Epoch: [60][ 500/ 813] Overall Loss 0.462522 Objective Loss 0.462522 LR 0.000050 Time 0.525397 +2025-05-15 22:29:54,677 - Epoch: [60][ 600/ 813] Overall Loss 0.468137 Objective Loss 0.468137 LR 0.000050 Time 0.524986 +2025-05-15 22:30:49,485 - Epoch: [60][ 700/ 813] Overall Loss 0.469571 Objective Loss 0.469571 LR 0.000050 Time 0.528269 +2025-05-15 22:31:41,855 - Epoch: [60][ 800/ 813] Overall Loss 0.470508 Objective Loss 0.470508 LR 0.000050 Time 0.527655 +2025-05-15 22:31:47,541 - Epoch: [60][ 813/ 813] Overall Loss 0.473363 Objective Loss 0.473363 LR 0.000050 Time 0.526212 +2025-05-15 22:31:47,582 - --- 
validate (epoch=60)----------- +2025-05-15 22:31:47,583 - 3250 samples (16 per mini-batch) +2025-05-15 22:31:47,585 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 22:32:41,500 - Epoch: [60][ 100/ 204] Loss 0.719133 mAP 0.889299 +2025-05-15 22:33:35,093 - Epoch: [60][ 200/ 204] Loss 0.714261 mAP 0.889388 +2025-05-15 22:33:35,637 - Epoch: [60][ 204/ 204] Loss 0.712393 mAP 0.889326 +2025-05-15 22:33:35,680 - ==> mAP: 0.88933 Loss: 0.712 + +2025-05-15 22:33:35,684 - ==> Best [mAP: 0.899883 vloss: 0.687309 Params: 368352 on epoch: 59] +2025-05-15 22:33:35,684 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 22:33:35,718 - + +2025-05-15 22:33:35,719 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 22:34:32,400 - Epoch: [61][ 100/ 813] Overall Loss 0.440294 Objective Loss 0.440294 LR 0.000050 Time 0.566790 +2025-05-15 22:35:24,144 - Epoch: [61][ 200/ 813] Overall Loss 0.440907 Objective Loss 0.440907 LR 0.000050 Time 0.542102 +2025-05-15 22:36:17,398 - Epoch: [61][ 300/ 813] Overall Loss 0.470094 Objective Loss 0.470094 LR 0.000050 Time 0.538911 +2025-05-15 22:37:06,998 - Epoch: [61][ 400/ 813] Overall Loss 0.483716 Objective Loss 0.483716 LR 0.000050 Time 0.528180 +2025-05-15 22:37:58,911 - Epoch: [61][ 500/ 813] Overall Loss 0.485877 Objective Loss 0.485877 LR 0.000050 Time 0.526364 +2025-05-15 22:38:51,987 - Epoch: [61][ 600/ 813] Overall Loss 0.486016 Objective Loss 0.486016 LR 0.000050 Time 0.527095 +2025-05-15 22:39:44,858 - Epoch: [61][ 700/ 813] Overall Loss 0.493095 Objective Loss 0.493095 LR 0.000050 Time 0.527323 +2025-05-15 22:40:36,252 - Epoch: [61][ 800/ 813] Overall Loss 0.490630 Objective Loss 0.490630 LR 0.000050 Time 0.525648 +2025-05-15 22:40:42,164 - Epoch: [61][ 813/ 813] Overall Loss 0.489855 Objective Loss 0.489855 LR 0.000050 Time 0.524514 +2025-05-15 22:40:42,207 - --- 
validate (epoch=61)----------- +2025-05-15 22:40:42,208 - 3250 samples (16 per mini-batch) +2025-05-15 22:40:42,210 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 22:41:36,839 - Epoch: [61][ 100/ 204] Loss 0.716331 mAP 0.908390 +2025-05-15 22:42:29,696 - Epoch: [61][ 200/ 204] Loss 0.710978 mAP 0.898679 +2025-05-15 22:42:30,773 - Epoch: [61][ 204/ 204] Loss 0.707602 mAP 0.898718 +2025-05-15 22:42:30,818 - ==> mAP: 0.89872 Loss: 0.708 + +2025-05-15 22:42:30,822 - ==> Best [mAP: 0.899883 vloss: 0.687309 Params: 368352 on epoch: 59] +2025-05-15 22:42:30,822 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 22:42:30,857 - + +2025-05-15 22:42:30,857 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 22:43:25,194 - Epoch: [62][ 100/ 813] Overall Loss 0.461374 Objective Loss 0.461374 LR 0.000050 Time 0.543332 +2025-05-15 22:44:19,120 - Epoch: [62][ 200/ 813] Overall Loss 0.457964 Objective Loss 0.457964 LR 0.000050 Time 0.541290 +2025-05-15 22:45:12,271 - Epoch: [62][ 300/ 813] Overall Loss 0.465024 Objective Loss 0.465024 LR 0.000050 Time 0.538024 +2025-05-15 22:46:04,116 - Epoch: [62][ 400/ 813] Overall Loss 0.470598 Objective Loss 0.470598 LR 0.000050 Time 0.533125 +2025-05-15 22:46:56,598 - Epoch: [62][ 500/ 813] Overall Loss 0.469755 Objective Loss 0.469755 LR 0.000050 Time 0.531461 +2025-05-15 22:47:48,033 - Epoch: [62][ 600/ 813] Overall Loss 0.470412 Objective Loss 0.470412 LR 0.000050 Time 0.528606 +2025-05-15 22:48:42,210 - Epoch: [62][ 700/ 813] Overall Loss 0.470603 Objective Loss 0.470603 LR 0.000050 Time 0.530483 +2025-05-15 22:49:33,710 - Epoch: [62][ 800/ 813] Overall Loss 0.469546 Objective Loss 0.469546 LR 0.000050 Time 0.528546 +2025-05-15 22:49:39,289 - Epoch: [62][ 813/ 813] Overall Loss 0.469647 Objective Loss 0.469647 LR 0.000050 Time 0.526956 +2025-05-15 22:49:39,331 - --- 
validate (epoch=62)----------- +2025-05-15 22:49:39,332 - 3250 samples (16 per mini-batch) +2025-05-15 22:49:39,334 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 22:50:34,627 - Epoch: [62][ 100/ 204] Loss 0.685376 mAP 0.908437 +2025-05-15 22:51:29,287 - Epoch: [62][ 200/ 204] Loss 0.672073 mAP 0.909024 +2025-05-15 22:51:29,788 - Epoch: [62][ 204/ 204] Loss 0.670311 mAP 0.909048 +2025-05-15 22:51:29,832 - ==> mAP: 0.90905 Loss: 0.670 + +2025-05-15 22:51:29,836 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-15 22:51:29,836 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 22:51:29,876 - + +2025-05-15 22:51:29,876 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 22:52:24,840 - Epoch: [63][ 100/ 813] Overall Loss 0.463649 Objective Loss 0.463649 LR 0.000050 Time 0.549613 +2025-05-15 22:53:19,482 - Epoch: [63][ 200/ 813] Overall Loss 0.466498 Objective Loss 0.466498 LR 0.000050 Time 0.548003 +2025-05-15 22:54:09,132 - Epoch: [63][ 300/ 813] Overall Loss 0.466234 Objective Loss 0.466234 LR 0.000050 Time 0.530832 +2025-05-15 22:55:00,785 - Epoch: [63][ 400/ 813] Overall Loss 0.465331 Objective Loss 0.465331 LR 0.000050 Time 0.527249 +2025-05-15 22:55:51,665 - Epoch: [63][ 500/ 813] Overall Loss 0.468632 Objective Loss 0.468632 LR 0.000050 Time 0.523557 +2025-05-15 22:56:45,800 - Epoch: [63][ 600/ 813] Overall Loss 0.469186 Objective Loss 0.469186 LR 0.000050 Time 0.526518 +2025-05-15 22:57:38,191 - Epoch: [63][ 700/ 813] Overall Loss 0.474044 Objective Loss 0.474044 LR 0.000050 Time 0.526140 +2025-05-15 22:58:31,425 - Epoch: [63][ 800/ 813] Overall Loss 0.478700 Objective Loss 0.478700 LR 0.000050 Time 0.526914 +2025-05-15 22:58:36,433 - Epoch: [63][ 813/ 813] Overall Loss 0.478203 Objective Loss 0.478203 LR 0.000050 Time 0.524647 +2025-05-15 22:58:36,473 - --- 
validate (epoch=63)----------- +2025-05-15 22:58:36,474 - 3250 samples (16 per mini-batch) +2025-05-15 22:58:36,476 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 22:59:32,094 - Epoch: [63][ 100/ 204] Loss 0.744965 mAP 0.900089 +2025-05-15 23:00:24,858 - Epoch: [63][ 200/ 204] Loss 0.719630 mAP 0.899803 +2025-05-15 23:00:25,805 - Epoch: [63][ 204/ 204] Loss 0.714351 mAP 0.899800 +2025-05-15 23:00:25,849 - ==> mAP: 0.89980 Loss: 0.714 + +2025-05-15 23:00:25,853 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-15 23:00:25,853 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 23:00:25,889 - + +2025-05-15 23:00:25,889 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 23:01:22,173 - Epoch: [64][ 100/ 813] Overall Loss 0.423638 Objective Loss 0.423638 LR 0.000050 Time 0.562816 +2025-05-15 23:02:16,460 - Epoch: [64][ 200/ 813] Overall Loss 0.416220 Objective Loss 0.416220 LR 0.000050 Time 0.552834 +2025-05-15 23:03:10,013 - Epoch: [64][ 300/ 813] Overall Loss 0.436712 Objective Loss 0.436712 LR 0.000050 Time 0.547059 +2025-05-15 23:03:58,679 - Epoch: [64][ 400/ 813] Overall Loss 0.442098 Objective Loss 0.442098 LR 0.000050 Time 0.531947 +2025-05-15 23:04:51,263 - Epoch: [64][ 500/ 813] Overall Loss 0.448993 Objective Loss 0.448993 LR 0.000050 Time 0.530723 +2025-05-15 23:05:46,355 - Epoch: [64][ 600/ 813] Overall Loss 0.452073 Objective Loss 0.452073 LR 0.000050 Time 0.534087 +2025-05-15 23:06:38,122 - Epoch: [64][ 700/ 813] Overall Loss 0.457944 Objective Loss 0.457944 LR 0.000050 Time 0.531738 +2025-05-15 23:07:30,166 - Epoch: [64][ 800/ 813] Overall Loss 0.462388 Objective Loss 0.462388 LR 0.000050 Time 0.530324 +2025-05-15 23:07:35,382 - Epoch: [64][ 813/ 813] Overall Loss 0.465719 Objective Loss 0.465719 LR 0.000050 Time 0.528252 +2025-05-15 23:07:35,424 - --- 
validate (epoch=64)----------- +2025-05-15 23:07:35,425 - 3250 samples (16 per mini-batch) +2025-05-15 23:07:35,427 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 23:08:31,138 - Epoch: [64][ 100/ 204] Loss 0.700375 mAP 0.888943 +2025-05-15 23:09:23,441 - Epoch: [64][ 200/ 204] Loss 0.703527 mAP 0.898823 +2025-05-15 23:09:23,973 - Epoch: [64][ 204/ 204] Loss 0.702528 mAP 0.898840 +2025-05-15 23:09:24,011 - ==> mAP: 0.89884 Loss: 0.703 + +2025-05-15 23:09:24,015 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-15 23:09:24,015 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 23:09:24,051 - + +2025-05-15 23:09:24,051 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 23:10:19,463 - Epoch: [65][ 100/ 813] Overall Loss 0.453669 Objective Loss 0.453669 LR 0.000050 Time 0.554026 +2025-05-15 23:11:12,295 - Epoch: [65][ 200/ 813] Overall Loss 0.462915 Objective Loss 0.462915 LR 0.000050 Time 0.541166 +2025-05-15 23:12:04,149 - Epoch: [65][ 300/ 813] Overall Loss 0.476910 Objective Loss 0.476910 LR 0.000050 Time 0.533609 +2025-05-15 23:12:55,144 - Epoch: [65][ 400/ 813] Overall Loss 0.482228 Objective Loss 0.482228 LR 0.000050 Time 0.527688 +2025-05-15 23:13:46,524 - Epoch: [65][ 500/ 813] Overall Loss 0.482656 Objective Loss 0.482656 LR 0.000050 Time 0.524908 +2025-05-15 23:14:39,443 - Epoch: [65][ 600/ 813] Overall Loss 0.483331 Objective Loss 0.483331 LR 0.000050 Time 0.525619 +2025-05-15 23:15:31,897 - Epoch: [65][ 700/ 813] Overall Loss 0.487114 Objective Loss 0.487114 LR 0.000050 Time 0.525461 +2025-05-15 23:16:25,085 - Epoch: [65][ 800/ 813] Overall Loss 0.488684 Objective Loss 0.488684 LR 0.000050 Time 0.526262 +2025-05-15 23:16:29,990 - Epoch: [65][ 813/ 813] Overall Loss 0.489280 Objective Loss 0.489280 LR 0.000050 Time 0.523879 +2025-05-15 23:16:30,032 - --- 
validate (epoch=65)----------- +2025-05-15 23:16:30,033 - 3250 samples (16 per mini-batch) +2025-05-15 23:16:30,035 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 23:17:24,895 - Epoch: [65][ 100/ 204] Loss 0.647203 mAP 0.899651 +2025-05-15 23:18:16,706 - Epoch: [65][ 200/ 204] Loss 0.671681 mAP 0.899612 +2025-05-15 23:18:17,697 - Epoch: [65][ 204/ 204] Loss 0.672356 mAP 0.899614 +2025-05-15 23:18:17,741 - ==> mAP: 0.89961 Loss: 0.672 + +2025-05-15 23:18:17,745 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-15 23:18:17,745 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 23:18:17,780 - + +2025-05-15 23:18:17,781 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 23:19:13,485 - Epoch: [66][ 100/ 813] Overall Loss 0.408743 Objective Loss 0.408743 LR 0.000050 Time 0.557020 +2025-05-15 23:20:06,313 - Epoch: [66][ 200/ 813] Overall Loss 0.422733 Objective Loss 0.422733 LR 0.000050 Time 0.542641 +2025-05-15 23:21:00,078 - Epoch: [66][ 300/ 813] Overall Loss 0.438223 Objective Loss 0.438223 LR 0.000050 Time 0.540971 +2025-05-15 23:21:50,636 - Epoch: [66][ 400/ 813] Overall Loss 0.454691 Objective Loss 0.454691 LR 0.000050 Time 0.532118 +2025-05-15 23:22:43,176 - Epoch: [66][ 500/ 813] Overall Loss 0.456872 Objective Loss 0.456872 LR 0.000050 Time 0.530771 +2025-05-15 23:23:35,309 - Epoch: [66][ 600/ 813] Overall Loss 0.457896 Objective Loss 0.457896 LR 0.000050 Time 0.529192 +2025-05-15 23:24:28,860 - Epoch: [66][ 700/ 813] Overall Loss 0.461541 Objective Loss 0.461541 LR 0.000050 Time 0.530092 +2025-05-15 23:25:20,141 - Epoch: [66][ 800/ 813] Overall Loss 0.463534 Objective Loss 0.463534 LR 0.000050 Time 0.527930 +2025-05-15 23:25:25,961 - Epoch: [66][ 813/ 813] Overall Loss 0.465974 Objective Loss 0.465974 LR 0.000050 Time 0.526642 +2025-05-15 23:25:26,002 - --- 
validate (epoch=66)----------- +2025-05-15 23:25:26,003 - 3250 samples (16 per mini-batch) +2025-05-15 23:25:26,005 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 23:26:22,611 - Epoch: [66][ 100/ 204] Loss 0.657015 mAP 0.899670 +2025-05-15 23:27:14,990 - Epoch: [66][ 200/ 204] Loss 0.679779 mAP 0.889705 +2025-05-15 23:27:15,851 - Epoch: [66][ 204/ 204] Loss 0.676517 mAP 0.889734 +2025-05-15 23:27:15,895 - ==> mAP: 0.88973 Loss: 0.677 + +2025-05-15 23:27:15,899 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-15 23:27:15,899 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 23:27:15,933 - + +2025-05-15 23:27:15,934 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 23:28:11,424 - Epoch: [67][ 100/ 813] Overall Loss 0.427112 Objective Loss 0.427112 LR 0.000050 Time 0.554878 +2025-05-15 23:29:05,186 - Epoch: [67][ 200/ 813] Overall Loss 0.433410 Objective Loss 0.433410 LR 0.000050 Time 0.546241 +2025-05-15 23:29:56,254 - Epoch: [67][ 300/ 813] Overall Loss 0.444685 Objective Loss 0.444685 LR 0.000050 Time 0.534381 +2025-05-15 23:30:47,952 - Epoch: [67][ 400/ 813] Overall Loss 0.445232 Objective Loss 0.445232 LR 0.000050 Time 0.530022 +2025-05-15 23:31:40,840 - Epoch: [67][ 500/ 813] Overall Loss 0.449824 Objective Loss 0.449824 LR 0.000050 Time 0.529790 +2025-05-15 23:32:32,286 - Epoch: [67][ 600/ 813] Overall Loss 0.457517 Objective Loss 0.457517 LR 0.000050 Time 0.527232 +2025-05-15 23:33:26,354 - Epoch: [67][ 700/ 813] Overall Loss 0.463510 Objective Loss 0.463510 LR 0.000050 Time 0.529146 +2025-05-15 23:34:18,229 - Epoch: [67][ 800/ 813] Overall Loss 0.465262 Objective Loss 0.465262 LR 0.000050 Time 0.527845 +2025-05-15 23:34:23,665 - Epoch: [67][ 813/ 813] Overall Loss 0.465730 Objective Loss 0.465730 LR 0.000050 Time 0.526090 +2025-05-15 23:34:23,704 - --- 
validate (epoch=67)----------- +2025-05-15 23:34:23,705 - 3250 samples (16 per mini-batch) +2025-05-15 23:34:23,707 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 23:35:18,309 - Epoch: [67][ 100/ 204] Loss 0.713154 mAP 0.898035 +2025-05-15 23:36:12,194 - Epoch: [67][ 200/ 204] Loss 0.688865 mAP 0.897582 +2025-05-15 23:36:12,695 - Epoch: [67][ 204/ 204] Loss 0.687880 mAP 0.897539 +2025-05-15 23:36:12,738 - ==> mAP: 0.89754 Loss: 0.688 + +2025-05-15 23:36:12,742 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-15 23:36:12,742 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 23:36:12,776 - + +2025-05-15 23:36:12,777 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 23:37:07,400 - Epoch: [68][ 100/ 813] Overall Loss 0.457013 Objective Loss 0.457013 LR 0.000050 Time 0.546209 +2025-05-15 23:38:00,196 - Epoch: [68][ 200/ 813] Overall Loss 0.475570 Objective Loss 0.475570 LR 0.000050 Time 0.537072 +2025-05-15 23:38:53,557 - Epoch: [68][ 300/ 813] Overall Loss 0.478678 Objective Loss 0.478678 LR 0.000050 Time 0.535916 +2025-05-15 23:39:43,349 - Epoch: [68][ 400/ 813] Overall Loss 0.474643 Objective Loss 0.474643 LR 0.000050 Time 0.526410 +2025-05-15 23:40:36,310 - Epoch: [68][ 500/ 813] Overall Loss 0.475315 Objective Loss 0.475315 LR 0.000050 Time 0.527049 +2025-05-15 23:41:31,413 - Epoch: [68][ 600/ 813] Overall Loss 0.475698 Objective Loss 0.475698 LR 0.000050 Time 0.531041 +2025-05-15 23:42:22,504 - Epoch: [68][ 700/ 813] Overall Loss 0.479761 Objective Loss 0.479761 LR 0.000050 Time 0.528163 +2025-05-15 23:43:14,580 - Epoch: [68][ 800/ 813] Overall Loss 0.478817 Objective Loss 0.478817 LR 0.000050 Time 0.527236 +2025-05-15 23:43:20,316 - Epoch: [68][ 813/ 813] Overall Loss 0.478551 Objective Loss 0.478551 LR 0.000050 Time 0.525860 +2025-05-15 23:43:20,359 - --- 
validate (epoch=68)----------- +2025-05-15 23:43:20,360 - 3250 samples (16 per mini-batch) +2025-05-15 23:43:20,362 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 23:44:15,086 - Epoch: [68][ 100/ 204] Loss 0.710655 mAP 0.899248 +2025-05-15 23:45:07,906 - Epoch: [68][ 200/ 204] Loss 0.700294 mAP 0.899573 +2025-05-15 23:45:08,455 - Epoch: [68][ 204/ 204] Loss 0.699115 mAP 0.899603 +2025-05-15 23:45:08,498 - ==> mAP: 0.89960 Loss: 0.699 + +2025-05-15 23:45:08,502 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-15 23:45:08,502 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 23:45:08,538 - + +2025-05-15 23:45:08,538 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 23:46:03,866 - Epoch: [69][ 100/ 813] Overall Loss 0.420549 Objective Loss 0.420549 LR 0.000050 Time 0.553257 +2025-05-15 23:46:56,323 - Epoch: [69][ 200/ 813] Overall Loss 0.440346 Objective Loss 0.440346 LR 0.000050 Time 0.538906 +2025-05-15 23:47:49,041 - Epoch: [69][ 300/ 813] Overall Loss 0.446816 Objective Loss 0.446816 LR 0.000050 Time 0.534968 +2025-05-15 23:48:39,973 - Epoch: [69][ 400/ 813] Overall Loss 0.453194 Objective Loss 0.453194 LR 0.000050 Time 0.528551 +2025-05-15 23:49:32,933 - Epoch: [69][ 500/ 813] Overall Loss 0.458176 Objective Loss 0.458176 LR 0.000050 Time 0.528758 +2025-05-15 23:50:25,950 - Epoch: [69][ 600/ 813] Overall Loss 0.468723 Objective Loss 0.468723 LR 0.000050 Time 0.528991 +2025-05-15 23:51:19,107 - Epoch: [69][ 700/ 813] Overall Loss 0.471193 Objective Loss 0.471193 LR 0.000050 Time 0.529356 +2025-05-15 23:52:09,874 - Epoch: [69][ 800/ 813] Overall Loss 0.471951 Objective Loss 0.471951 LR 0.000050 Time 0.526643 +2025-05-15 23:52:15,289 - Epoch: [69][ 813/ 813] Overall Loss 0.472376 Objective Loss 0.472376 LR 0.000050 Time 0.524882 +2025-05-15 23:52:15,331 - --- 
validate (epoch=69)----------- +2025-05-15 23:52:15,331 - 3250 samples (16 per mini-batch) +2025-05-15 23:52:15,333 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-15 23:53:10,881 - Epoch: [69][ 100/ 204] Loss 0.726197 mAP 0.889490 +2025-05-15 23:54:03,341 - Epoch: [69][ 200/ 204] Loss 0.715704 mAP 0.889653 +2025-05-15 23:54:04,125 - Epoch: [69][ 204/ 204] Loss 0.711144 mAP 0.899455 +2025-05-15 23:54:04,164 - ==> mAP: 0.89945 Loss: 0.711 + +2025-05-15 23:54:04,168 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-15 23:54:04,168 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-15 23:54:04,204 - + +2025-05-15 23:54:04,204 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-15 23:55:00,349 - Epoch: [70][ 100/ 813] Overall Loss 0.448820 Objective Loss 0.448820 LR 0.000050 Time 0.561422 +2025-05-15 23:55:53,026 - Epoch: [70][ 200/ 813] Overall Loss 0.449632 Objective Loss 0.449632 LR 0.000050 Time 0.544087 +2025-05-15 23:56:45,436 - Epoch: [70][ 300/ 813] Overall Loss 0.455928 Objective Loss 0.455928 LR 0.000050 Time 0.537420 +2025-05-15 23:57:36,573 - Epoch: [70][ 400/ 813] Overall Loss 0.461039 Objective Loss 0.461039 LR 0.000050 Time 0.530903 +2025-05-15 23:58:29,091 - Epoch: [70][ 500/ 813] Overall Loss 0.460630 Objective Loss 0.460630 LR 0.000050 Time 0.529753 +2025-05-15 23:59:22,896 - Epoch: [70][ 600/ 813] Overall Loss 0.464382 Objective Loss 0.464382 LR 0.000050 Time 0.531123 +2025-05-16 00:00:15,191 - Epoch: [70][ 700/ 813] Overall Loss 0.464342 Objective Loss 0.464342 LR 0.000050 Time 0.529954 +2025-05-16 00:01:07,407 - Epoch: [70][ 800/ 813] Overall Loss 0.464639 Objective Loss 0.464639 LR 0.000050 Time 0.528977 +2025-05-16 00:01:13,175 - Epoch: [70][ 813/ 813] Overall Loss 0.464738 Objective Loss 0.464738 LR 0.000050 Time 0.527612 +2025-05-16 00:01:13,212 - --- 
validate (epoch=70)----------- +2025-05-16 00:01:13,213 - 3250 samples (16 per mini-batch) +2025-05-16 00:01:13,215 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 00:02:07,975 - Epoch: [70][ 100/ 204] Loss 0.690467 mAP 0.897838 +2025-05-16 00:03:00,924 - Epoch: [70][ 200/ 204] Loss 0.669796 mAP 0.898586 +2025-05-16 00:03:01,856 - Epoch: [70][ 204/ 204] Loss 0.670733 mAP 0.898602 +2025-05-16 00:03:01,893 - ==> mAP: 0.89860 Loss: 0.671 + +2025-05-16 00:03:01,897 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 00:03:01,898 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 00:03:01,933 - + +2025-05-16 00:03:01,933 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 00:03:57,055 - Epoch: [71][ 100/ 813] Overall Loss 0.437170 Objective Loss 0.437170 LR 0.000050 Time 0.551190 +2025-05-16 00:04:50,621 - Epoch: [71][ 200/ 813] Overall Loss 0.449064 Objective Loss 0.449064 LR 0.000050 Time 0.543417 +2025-05-16 00:05:42,331 - Epoch: [71][ 300/ 813] Overall Loss 0.467855 Objective Loss 0.467855 LR 0.000050 Time 0.534629 +2025-05-16 00:06:33,181 - Epoch: [71][ 400/ 813] Overall Loss 0.475047 Objective Loss 0.475047 LR 0.000050 Time 0.528092 +2025-05-16 00:07:24,434 - Epoch: [71][ 500/ 813] Overall Loss 0.474745 Objective Loss 0.474745 LR 0.000050 Time 0.524976 +2025-05-16 00:08:18,163 - Epoch: [71][ 600/ 813] Overall Loss 0.469064 Objective Loss 0.469064 LR 0.000050 Time 0.527026 +2025-05-16 00:09:12,282 - Epoch: [71][ 700/ 813] Overall Loss 0.469856 Objective Loss 0.469856 LR 0.000050 Time 0.529042 +2025-05-16 00:10:04,870 - Epoch: [71][ 800/ 813] Overall Loss 0.466832 Objective Loss 0.466832 LR 0.000050 Time 0.528645 +2025-05-16 00:10:09,713 - Epoch: [71][ 813/ 813] Overall Loss 0.467187 Objective Loss 0.467187 LR 0.000050 Time 0.526148 +2025-05-16 00:10:09,753 - --- 
validate (epoch=71)----------- +2025-05-16 00:10:09,754 - 3250 samples (16 per mini-batch) +2025-05-16 00:10:09,756 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 00:11:03,813 - Epoch: [71][ 100/ 204] Loss 0.667639 mAP 0.880430 +2025-05-16 00:11:56,964 - Epoch: [71][ 200/ 204] Loss 0.691746 mAP 0.879654 +2025-05-16 00:11:57,663 - Epoch: [71][ 204/ 204] Loss 0.688183 mAP 0.889476 +2025-05-16 00:11:57,701 - ==> mAP: 0.88948 Loss: 0.688 + +2025-05-16 00:11:57,705 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 00:11:57,705 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 00:11:57,740 - + +2025-05-16 00:11:57,740 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 00:12:53,144 - Epoch: [72][ 100/ 813] Overall Loss 0.411097 Objective Loss 0.411097 LR 0.000050 Time 0.554019 +2025-05-16 00:13:47,080 - Epoch: [72][ 200/ 813] Overall Loss 0.415860 Objective Loss 0.415860 LR 0.000050 Time 0.546680 +2025-05-16 00:14:39,642 - Epoch: [72][ 300/ 813] Overall Loss 0.436698 Objective Loss 0.436698 LR 0.000050 Time 0.539652 +2025-05-16 00:15:29,351 - Epoch: [72][ 400/ 813] Overall Loss 0.440629 Objective Loss 0.440629 LR 0.000050 Time 0.529008 +2025-05-16 00:16:22,746 - Epoch: [72][ 500/ 813] Overall Loss 0.442708 Objective Loss 0.442708 LR 0.000050 Time 0.529978 +2025-05-16 00:17:16,462 - Epoch: [72][ 600/ 813] Overall Loss 0.446885 Objective Loss 0.446885 LR 0.000050 Time 0.531172 +2025-05-16 00:18:06,857 - Epoch: [72][ 700/ 813] Overall Loss 0.455710 Objective Loss 0.455710 LR 0.000050 Time 0.527280 +2025-05-16 00:19:00,181 - Epoch: [72][ 800/ 813] Overall Loss 0.460809 Objective Loss 0.460809 LR 0.000050 Time 0.528023 +2025-05-16 00:19:05,240 - Epoch: [72][ 813/ 813] Overall Loss 0.460664 Objective Loss 0.460664 LR 0.000050 Time 0.525802 +2025-05-16 00:19:05,279 - --- 
validate (epoch=72)----------- +2025-05-16 00:19:05,280 - 3250 samples (16 per mini-batch) +2025-05-16 00:19:05,282 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 00:19:59,781 - Epoch: [72][ 100/ 204] Loss 0.712660 mAP 0.889807 +2025-05-16 00:20:53,099 - Epoch: [72][ 200/ 204] Loss 0.691431 mAP 0.889939 +2025-05-16 00:20:53,779 - Epoch: [72][ 204/ 204] Loss 0.689623 mAP 0.889972 +2025-05-16 00:20:53,819 - ==> mAP: 0.88997 Loss: 0.690 + +2025-05-16 00:20:53,823 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 00:20:53,823 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 00:20:53,858 - + +2025-05-16 00:20:53,858 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 00:21:49,011 - Epoch: [73][ 100/ 813] Overall Loss 0.393678 Objective Loss 0.393678 LR 0.000050 Time 0.551497 +2025-05-16 00:22:42,360 - Epoch: [73][ 200/ 813] Overall Loss 0.408558 Objective Loss 0.408558 LR 0.000050 Time 0.542487 +2025-05-16 00:23:35,437 - Epoch: [73][ 300/ 813] Overall Loss 0.425076 Objective Loss 0.425076 LR 0.000050 Time 0.538577 +2025-05-16 00:24:25,633 - Epoch: [73][ 400/ 813] Overall Loss 0.432003 Objective Loss 0.432003 LR 0.000050 Time 0.529411 +2025-05-16 00:25:18,268 - Epoch: [73][ 500/ 813] Overall Loss 0.441369 Objective Loss 0.441369 LR 0.000050 Time 0.528795 +2025-05-16 00:26:10,249 - Epoch: [73][ 600/ 813] Overall Loss 0.442070 Objective Loss 0.442070 LR 0.000050 Time 0.527294 +2025-05-16 00:27:03,723 - Epoch: [73][ 700/ 813] Overall Loss 0.452064 Objective Loss 0.452064 LR 0.000050 Time 0.528356 +2025-05-16 00:27:55,707 - Epoch: [73][ 800/ 813] Overall Loss 0.456940 Objective Loss 0.456940 LR 0.000050 Time 0.527289 +2025-05-16 00:28:00,543 - Epoch: [73][ 813/ 813] Overall Loss 0.458571 Objective Loss 0.458571 LR 0.000050 Time 0.524806 +2025-05-16 00:28:00,586 - --- 
validate (epoch=73)----------- +2025-05-16 00:28:00,587 - 3250 samples (16 per mini-batch) +2025-05-16 00:28:00,589 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 00:28:54,631 - Epoch: [73][ 100/ 204] Loss 0.727173 mAP 0.905073 +2025-05-16 00:29:48,159 - Epoch: [73][ 200/ 204] Loss 0.720604 mAP 0.905999 +2025-05-16 00:29:48,921 - Epoch: [73][ 204/ 204] Loss 0.715996 mAP 0.906058 +2025-05-16 00:29:48,964 - ==> mAP: 0.90606 Loss: 0.716 + +2025-05-16 00:29:48,968 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 00:29:48,968 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 00:29:49,003 - + +2025-05-16 00:29:49,003 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 00:30:44,648 - Epoch: [74][ 100/ 813] Overall Loss 0.437259 Objective Loss 0.437259 LR 0.000050 Time 0.556426 +2025-05-16 00:31:37,099 - Epoch: [74][ 200/ 813] Overall Loss 0.431008 Objective Loss 0.431008 LR 0.000050 Time 0.540454 +2025-05-16 00:32:29,393 - Epoch: [74][ 300/ 813] Overall Loss 0.461436 Objective Loss 0.461436 LR 0.000050 Time 0.534609 +2025-05-16 00:33:20,541 - Epoch: [74][ 400/ 813] Overall Loss 0.461551 Objective Loss 0.461551 LR 0.000050 Time 0.528822 +2025-05-16 00:34:12,381 - Epoch: [74][ 500/ 813] Overall Loss 0.461356 Objective Loss 0.461356 LR 0.000050 Time 0.526735 +2025-05-16 00:35:05,958 - Epoch: [74][ 600/ 813] Overall Loss 0.460919 Objective Loss 0.460919 LR 0.000050 Time 0.528237 +2025-05-16 00:35:58,585 - Epoch: [74][ 700/ 813] Overall Loss 0.461668 Objective Loss 0.461668 LR 0.000050 Time 0.527954 +2025-05-16 00:36:51,315 - Epoch: [74][ 800/ 813] Overall Loss 0.464018 Objective Loss 0.464018 LR 0.000050 Time 0.527869 +2025-05-16 00:36:57,034 - Epoch: [74][ 813/ 813] Overall Loss 0.464959 Objective Loss 0.464959 LR 0.000050 Time 0.526463 +2025-05-16 00:36:57,074 - --- 
validate (epoch=74)----------- +2025-05-16 00:36:57,075 - 3250 samples (16 per mini-batch) +2025-05-16 00:36:57,077 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 00:37:51,282 - Epoch: [74][ 100/ 204] Loss 0.744524 mAP 0.889377 +2025-05-16 00:38:44,863 - Epoch: [74][ 200/ 204] Loss 0.723844 mAP 0.899116 +2025-05-16 00:38:45,691 - Epoch: [74][ 204/ 204] Loss 0.723480 mAP 0.899105 +2025-05-16 00:38:45,734 - ==> mAP: 0.89911 Loss: 0.723 + +2025-05-16 00:38:45,738 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 00:38:45,739 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 00:38:45,773 - + +2025-05-16 00:38:45,773 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 00:39:40,448 - Epoch: [75][ 100/ 813] Overall Loss 0.433995 Objective Loss 0.433995 LR 0.000050 Time 0.546723 +2025-05-16 00:40:34,932 - Epoch: [75][ 200/ 813] Overall Loss 0.450066 Objective Loss 0.450066 LR 0.000050 Time 0.545758 +2025-05-16 00:41:27,777 - Epoch: [75][ 300/ 813] Overall Loss 0.461136 Objective Loss 0.461136 LR 0.000050 Time 0.539981 +2025-05-16 00:42:19,202 - Epoch: [75][ 400/ 813] Overall Loss 0.458844 Objective Loss 0.458844 LR 0.000050 Time 0.533542 +2025-05-16 00:43:11,128 - Epoch: [75][ 500/ 813] Overall Loss 0.462772 Objective Loss 0.462772 LR 0.000050 Time 0.530682 +2025-05-16 00:44:05,110 - Epoch: [75][ 600/ 813] Overall Loss 0.462559 Objective Loss 0.462559 LR 0.000050 Time 0.532203 +2025-05-16 00:44:57,365 - Epoch: [75][ 700/ 813] Overall Loss 0.465657 Objective Loss 0.465657 LR 0.000050 Time 0.530822 +2025-05-16 00:45:49,143 - Epoch: [75][ 800/ 813] Overall Loss 0.466113 Objective Loss 0.466113 LR 0.000050 Time 0.529189 +2025-05-16 00:45:54,180 - Epoch: [75][ 813/ 813] Overall Loss 0.469076 Objective Loss 0.469076 LR 0.000050 Time 0.526921 +2025-05-16 00:45:54,222 - --- 
validate (epoch=75)----------- +2025-05-16 00:45:54,223 - 3250 samples (16 per mini-batch) +2025-05-16 00:45:54,225 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 00:46:49,100 - Epoch: [75][ 100/ 204] Loss 0.687971 mAP 0.898706 +2025-05-16 00:47:41,732 - Epoch: [75][ 200/ 204] Loss 0.698848 mAP 0.888984 +2025-05-16 00:47:42,594 - Epoch: [75][ 204/ 204] Loss 0.699843 mAP 0.888984 +2025-05-16 00:47:42,639 - ==> mAP: 0.88898 Loss: 0.700 + +2025-05-16 00:47:42,643 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 00:47:42,643 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 00:47:42,678 - + +2025-05-16 00:47:42,678 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 00:48:37,587 - Epoch: [76][ 100/ 813] Overall Loss 0.426871 Objective Loss 0.426871 LR 0.000050 Time 0.549054 +2025-05-16 00:49:29,580 - Epoch: [76][ 200/ 813] Overall Loss 0.438460 Objective Loss 0.438460 LR 0.000050 Time 0.534484 +2025-05-16 00:50:22,850 - Epoch: [76][ 300/ 813] Overall Loss 0.440293 Objective Loss 0.440293 LR 0.000050 Time 0.533885 +2025-05-16 00:51:13,527 - Epoch: [76][ 400/ 813] Overall Loss 0.436270 Objective Loss 0.436270 LR 0.000050 Time 0.527084 +2025-05-16 00:52:04,965 - Epoch: [76][ 500/ 813] Overall Loss 0.434589 Objective Loss 0.434589 LR 0.000050 Time 0.524540 +2025-05-16 00:52:58,350 - Epoch: [76][ 600/ 813] Overall Loss 0.442371 Objective Loss 0.442371 LR 0.000050 Time 0.526088 +2025-05-16 00:53:51,673 - Epoch: [76][ 700/ 813] Overall Loss 0.447007 Objective Loss 0.447007 LR 0.000050 Time 0.527106 +2025-05-16 00:54:43,823 - Epoch: [76][ 800/ 813] Overall Loss 0.451410 Objective Loss 0.451410 LR 0.000050 Time 0.526400 +2025-05-16 00:54:48,675 - Epoch: [76][ 813/ 813] Overall Loss 0.450933 Objective Loss 0.450933 LR 0.000050 Time 0.523949 +2025-05-16 00:54:48,716 - --- 
validate (epoch=76)----------- +2025-05-16 00:54:48,716 - 3250 samples (16 per mini-batch) +2025-05-16 00:54:48,718 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 00:55:43,406 - Epoch: [76][ 100/ 204] Loss 0.687073 mAP 0.898768 +2025-05-16 00:56:36,663 - Epoch: [76][ 200/ 204] Loss 0.682582 mAP 0.899463 +2025-05-16 00:56:37,163 - Epoch: [76][ 204/ 204] Loss 0.699555 mAP 0.899472 +2025-05-16 00:56:37,207 - ==> mAP: 0.89947 Loss: 0.700 + +2025-05-16 00:56:37,211 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 00:56:37,211 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 00:56:37,247 - + +2025-05-16 00:56:37,247 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 00:57:31,767 - Epoch: [77][ 100/ 813] Overall Loss 0.387954 Objective Loss 0.387954 LR 0.000050 Time 0.545174 +2025-05-16 00:58:25,743 - Epoch: [77][ 200/ 813] Overall Loss 0.390204 Objective Loss 0.390204 LR 0.000050 Time 0.542443 +2025-05-16 00:59:17,372 - Epoch: [77][ 300/ 813] Overall Loss 0.415536 Objective Loss 0.415536 LR 0.000050 Time 0.533718 +2025-05-16 01:00:08,143 - Epoch: [77][ 400/ 813] Overall Loss 0.423643 Objective Loss 0.423643 LR 0.000050 Time 0.527212 +2025-05-16 01:01:01,933 - Epoch: [77][ 500/ 813] Overall Loss 0.430724 Objective Loss 0.430724 LR 0.000050 Time 0.529346 +2025-05-16 01:01:52,956 - Epoch: [77][ 600/ 813] Overall Loss 0.444600 Objective Loss 0.444600 LR 0.000050 Time 0.526158 +2025-05-16 01:02:45,755 - Epoch: [77][ 700/ 813] Overall Loss 0.451517 Objective Loss 0.451517 LR 0.000050 Time 0.526417 +2025-05-16 01:03:39,918 - Epoch: [77][ 800/ 813] Overall Loss 0.457400 Objective Loss 0.457400 LR 0.000050 Time 0.528316 +2025-05-16 01:03:44,449 - Epoch: [77][ 813/ 813] Overall Loss 0.458328 Objective Loss 0.458328 LR 0.000050 Time 0.525441 +2025-05-16 01:03:44,491 - --- 
validate (epoch=77)----------- +2025-05-16 01:03:44,492 - 3250 samples (16 per mini-batch) +2025-05-16 01:03:44,494 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 01:04:38,660 - Epoch: [77][ 100/ 204] Loss 0.731116 mAP 0.888537 +2025-05-16 01:05:32,233 - Epoch: [77][ 200/ 204] Loss 0.726177 mAP 0.898626 +2025-05-16 01:05:32,865 - Epoch: [77][ 204/ 204] Loss 0.723874 mAP 0.898677 +2025-05-16 01:05:32,908 - ==> mAP: 0.89868 Loss: 0.724 + +2025-05-16 01:05:32,913 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 01:05:32,913 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 01:05:32,948 - + +2025-05-16 01:05:32,948 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 01:06:28,382 - Epoch: [78][ 100/ 813] Overall Loss 0.430768 Objective Loss 0.430768 LR 0.000050 Time 0.554306 +2025-05-16 01:07:22,150 - Epoch: [78][ 200/ 813] Overall Loss 0.431167 Objective Loss 0.431167 LR 0.000050 Time 0.545969 +2025-05-16 01:08:14,007 - Epoch: [78][ 300/ 813] Overall Loss 0.433837 Objective Loss 0.433837 LR 0.000050 Time 0.536832 +2025-05-16 01:09:04,093 - Epoch: [78][ 400/ 813] Overall Loss 0.438952 Objective Loss 0.438952 LR 0.000050 Time 0.527834 +2025-05-16 01:09:58,035 - Epoch: [78][ 500/ 813] Overall Loss 0.443646 Objective Loss 0.443646 LR 0.000050 Time 0.530147 +2025-05-16 01:10:51,217 - Epoch: [78][ 600/ 813] Overall Loss 0.446632 Objective Loss 0.446632 LR 0.000050 Time 0.530424 +2025-05-16 01:11:43,807 - Epoch: [78][ 700/ 813] Overall Loss 0.453595 Objective Loss 0.453595 LR 0.000050 Time 0.529774 +2025-05-16 01:12:35,410 - Epoch: [78][ 800/ 813] Overall Loss 0.454832 Objective Loss 0.454832 LR 0.000050 Time 0.528054 +2025-05-16 01:12:41,104 - Epoch: [78][ 813/ 813] Overall Loss 0.456153 Objective Loss 0.456153 LR 0.000050 Time 0.526614 +2025-05-16 01:12:41,144 - --- 
validate (epoch=78)----------- +2025-05-16 01:12:41,145 - 3250 samples (16 per mini-batch) +2025-05-16 01:12:41,147 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 01:13:34,661 - Epoch: [78][ 100/ 204] Loss 0.729408 mAP 0.898890 +2025-05-16 01:14:28,475 - Epoch: [78][ 200/ 204] Loss 0.716216 mAP 0.889327 +2025-05-16 01:14:28,957 - Epoch: [78][ 204/ 204] Loss 0.716548 mAP 0.889356 +2025-05-16 01:14:28,999 - ==> mAP: 0.88936 Loss: 0.717 + +2025-05-16 01:14:29,004 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 01:14:29,004 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 01:14:29,031 - + +2025-05-16 01:14:29,031 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 01:15:23,850 - Epoch: [79][ 100/ 813] Overall Loss 0.426298 Objective Loss 0.426298 LR 0.000050 Time 0.548165 +2025-05-16 01:16:18,113 - Epoch: [79][ 200/ 813] Overall Loss 0.434763 Objective Loss 0.434763 LR 0.000050 Time 0.545385 +2025-05-16 01:17:10,037 - Epoch: [79][ 300/ 813] Overall Loss 0.442697 Objective Loss 0.442697 LR 0.000050 Time 0.536668 +2025-05-16 01:18:00,147 - Epoch: [79][ 400/ 813] Overall Loss 0.455284 Objective Loss 0.455284 LR 0.000050 Time 0.527770 +2025-05-16 01:18:52,130 - Epoch: [79][ 500/ 813] Overall Loss 0.459394 Objective Loss 0.459394 LR 0.000050 Time 0.526179 +2025-05-16 01:19:43,346 - Epoch: [79][ 600/ 813] Overall Loss 0.456855 Objective Loss 0.456855 LR 0.000050 Time 0.523838 +2025-05-16 01:20:36,436 - Epoch: [79][ 700/ 813] Overall Loss 0.459135 Objective Loss 0.459135 LR 0.000050 Time 0.524846 +2025-05-16 01:21:29,044 - Epoch: [79][ 800/ 813] Overall Loss 0.460269 Objective Loss 0.460269 LR 0.000050 Time 0.524997 +2025-05-16 01:21:33,978 - Epoch: [79][ 813/ 813] Overall Loss 0.462215 Objective Loss 0.462215 LR 0.000050 Time 0.522671 +2025-05-16 01:21:34,021 - --- 
validate (epoch=79)----------- +2025-05-16 01:21:34,022 - 3250 samples (16 per mini-batch) +2025-05-16 01:21:34,024 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 01:22:28,067 - Epoch: [79][ 100/ 204] Loss 0.701916 mAP 0.899619 +2025-05-16 01:23:23,211 - Epoch: [79][ 200/ 204] Loss 0.687851 mAP 0.889876 +2025-05-16 01:23:23,687 - Epoch: [79][ 204/ 204] Loss 0.697106 mAP 0.889886 +2025-05-16 01:23:23,732 - ==> mAP: 0.88989 Loss: 0.697 + +2025-05-16 01:23:23,736 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 01:23:23,736 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 01:23:23,771 - + +2025-05-16 01:23:23,771 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 01:24:18,879 - Epoch: [80][ 100/ 813] Overall Loss 0.408206 Objective Loss 0.408206 LR 0.000050 Time 0.551049 +2025-05-16 01:25:12,156 - Epoch: [80][ 200/ 813] Overall Loss 0.395952 Objective Loss 0.395952 LR 0.000050 Time 0.541902 +2025-05-16 01:26:05,216 - Epoch: [80][ 300/ 813] Overall Loss 0.409977 Objective Loss 0.409977 LR 0.000050 Time 0.538128 +2025-05-16 01:26:55,144 - Epoch: [80][ 400/ 813] Overall Loss 0.422734 Objective Loss 0.422734 LR 0.000050 Time 0.528385 +2025-05-16 01:27:46,928 - Epoch: [80][ 500/ 813] Overall Loss 0.427452 Objective Loss 0.427452 LR 0.000050 Time 0.526234 +2025-05-16 01:28:41,961 - Epoch: [80][ 600/ 813] Overall Loss 0.430915 Objective Loss 0.430915 LR 0.000050 Time 0.530246 +2025-05-16 01:29:33,467 - Epoch: [80][ 700/ 813] Overall Loss 0.443905 Objective Loss 0.443905 LR 0.000050 Time 0.528074 +2025-05-16 01:30:26,951 - Epoch: [80][ 800/ 813] Overall Loss 0.445063 Objective Loss 0.445063 LR 0.000050 Time 0.528917 +2025-05-16 01:30:31,355 - Epoch: [80][ 813/ 813] Overall Loss 0.447586 Objective Loss 0.447586 LR 0.000050 Time 0.525877 +2025-05-16 01:30:31,398 - --- 
validate (epoch=80)----------- +2025-05-16 01:30:31,399 - 3250 samples (16 per mini-batch) +2025-05-16 01:30:31,401 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 01:31:26,479 - Epoch: [80][ 100/ 204] Loss 0.728236 mAP 0.890255 +2025-05-16 01:32:19,680 - Epoch: [80][ 200/ 204] Loss 0.694076 mAP 0.890299 +2025-05-16 01:32:20,191 - Epoch: [80][ 204/ 204] Loss 0.694380 mAP 0.890308 +2025-05-16 01:32:20,234 - ==> mAP: 0.89031 Loss: 0.694 + +2025-05-16 01:32:20,238 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 01:32:20,239 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 01:32:20,274 - + +2025-05-16 01:32:20,274 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 01:33:16,055 - Epoch: [81][ 100/ 813] Overall Loss 0.408127 Objective Loss 0.408127 LR 0.000050 Time 0.557785 +2025-05-16 01:34:08,598 - Epoch: [81][ 200/ 813] Overall Loss 0.411942 Objective Loss 0.411942 LR 0.000050 Time 0.541568 +2025-05-16 01:35:01,746 - Epoch: [81][ 300/ 813] Overall Loss 0.436152 Objective Loss 0.436152 LR 0.000050 Time 0.538200 +2025-05-16 01:35:51,821 - Epoch: [81][ 400/ 813] Overall Loss 0.444182 Objective Loss 0.444182 LR 0.000050 Time 0.528828 +2025-05-16 01:36:44,067 - Epoch: [81][ 500/ 813] Overall Loss 0.444744 Objective Loss 0.444744 LR 0.000050 Time 0.527552 +2025-05-16 01:37:37,934 - Epoch: [81][ 600/ 813] Overall Loss 0.446193 Objective Loss 0.446193 LR 0.000050 Time 0.529402 +2025-05-16 01:38:30,159 - Epoch: [81][ 700/ 813] Overall Loss 0.449791 Objective Loss 0.449791 LR 0.000050 Time 0.528378 +2025-05-16 01:39:22,509 - Epoch: [81][ 800/ 813] Overall Loss 0.453531 Objective Loss 0.453531 LR 0.000050 Time 0.527759 +2025-05-16 01:39:27,875 - Epoch: [81][ 813/ 813] Overall Loss 0.454622 Objective Loss 0.454622 LR 0.000050 Time 0.525921 +2025-05-16 01:39:27,910 - --- 
validate (epoch=81)----------- +2025-05-16 01:39:27,911 - 3250 samples (16 per mini-batch) +2025-05-16 01:39:27,913 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 01:40:23,618 - Epoch: [81][ 100/ 204] Loss 0.705522 mAP 0.879635 +2025-05-16 01:41:17,459 - Epoch: [81][ 200/ 204] Loss 0.718103 mAP 0.879570 +2025-05-16 01:41:17,994 - Epoch: [81][ 204/ 204] Loss 0.717315 mAP 0.879578 +2025-05-16 01:41:18,039 - ==> mAP: 0.87958 Loss: 0.717 + +2025-05-16 01:41:18,043 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 01:41:18,043 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 01:41:18,079 - + +2025-05-16 01:41:18,079 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 01:42:12,569 - Epoch: [82][ 100/ 813] Overall Loss 0.405661 Objective Loss 0.405661 LR 0.000050 Time 0.544872 +2025-05-16 01:43:07,153 - Epoch: [82][ 200/ 813] Overall Loss 0.432016 Objective Loss 0.432016 LR 0.000050 Time 0.545344 +2025-05-16 01:43:59,662 - Epoch: [82][ 300/ 813] Overall Loss 0.444490 Objective Loss 0.444490 LR 0.000050 Time 0.538589 +2025-05-16 01:44:50,257 - Epoch: [82][ 400/ 813] Overall Loss 0.451031 Objective Loss 0.451031 LR 0.000050 Time 0.530423 +2025-05-16 01:45:43,023 - Epoch: [82][ 500/ 813] Overall Loss 0.446317 Objective Loss 0.446317 LR 0.000050 Time 0.529867 +2025-05-16 01:46:36,375 - Epoch: [82][ 600/ 813] Overall Loss 0.448840 Objective Loss 0.448840 LR 0.000050 Time 0.530473 +2025-05-16 01:47:27,984 - Epoch: [82][ 700/ 813] Overall Loss 0.451268 Objective Loss 0.451268 LR 0.000050 Time 0.528416 +2025-05-16 01:48:20,793 - Epoch: [82][ 800/ 813] Overall Loss 0.453404 Objective Loss 0.453404 LR 0.000050 Time 0.528365 +2025-05-16 01:48:25,910 - Epoch: [82][ 813/ 813] Overall Loss 0.453597 Objective Loss 0.453597 LR 0.000050 Time 0.526209 +2025-05-16 01:48:25,952 - --- 
validate (epoch=82)----------- +2025-05-16 01:48:25,953 - 3250 samples (16 per mini-batch) +2025-05-16 01:48:25,955 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 01:49:21,924 - Epoch: [82][ 100/ 204] Loss 0.652993 mAP 0.890430 +2025-05-16 01:50:14,117 - Epoch: [82][ 200/ 204] Loss 0.695092 mAP 0.889518 +2025-05-16 01:50:15,215 - Epoch: [82][ 204/ 204] Loss 0.696932 mAP 0.889552 +2025-05-16 01:50:15,258 - ==> mAP: 0.88955 Loss: 0.697 + +2025-05-16 01:50:15,262 - ==> Best [mAP: 0.909048 vloss: 0.670311 Params: 368352 on epoch: 62] +2025-05-16 01:50:15,262 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 01:50:15,297 - + +2025-05-16 01:50:15,297 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 01:51:12,086 - Epoch: [83][ 100/ 813] Overall Loss 0.412084 Objective Loss 0.412084 LR 0.000050 Time 0.567860 +2025-05-16 01:52:04,105 - Epoch: [83][ 200/ 813] Overall Loss 0.424142 Objective Loss 0.424142 LR 0.000050 Time 0.544018 +2025-05-16 01:52:57,494 - Epoch: [83][ 300/ 813] Overall Loss 0.433752 Objective Loss 0.433752 LR 0.000050 Time 0.540636 +2025-05-16 01:53:47,689 - Epoch: [83][ 400/ 813] Overall Loss 0.434961 Objective Loss 0.434961 LR 0.000050 Time 0.530959 +2025-05-16 01:54:38,262 - Epoch: [83][ 500/ 813] Overall Loss 0.433234 Objective Loss 0.433234 LR 0.000050 Time 0.525898 +2025-05-16 01:55:31,675 - Epoch: [83][ 600/ 813] Overall Loss 0.435810 Objective Loss 0.435810 LR 0.000050 Time 0.527268 +2025-05-16 01:56:24,602 - Epoch: [83][ 700/ 813] Overall Loss 0.438687 Objective Loss 0.438687 LR 0.000050 Time 0.527551 +2025-05-16 01:57:16,098 - Epoch: [83][ 800/ 813] Overall Loss 0.440995 Objective Loss 0.440995 LR 0.000050 Time 0.525975 +2025-05-16 01:57:21,434 - Epoch: [83][ 813/ 813] Overall Loss 0.442472 Objective Loss 0.442472 LR 0.000050 Time 0.524127 +2025-05-16 01:57:21,475 - --- 
validate (epoch=83)----------- +2025-05-16 01:57:21,476 - 3250 samples (16 per mini-batch) +2025-05-16 01:57:21,478 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 01:58:16,017 - Epoch: [83][ 100/ 204] Loss 0.646332 mAP 0.918670 +2025-05-16 01:59:10,603 - Epoch: [83][ 200/ 204] Loss 0.664977 mAP 0.909077 +2025-05-16 01:59:11,920 - Epoch: [83][ 204/ 204] Loss 0.666068 mAP 0.909098 +2025-05-16 01:59:11,959 - ==> mAP: 0.90910 Loss: 0.666 + +2025-05-16 01:59:11,963 - ==> Best [mAP: 0.909098 vloss: 0.666068 Params: 368352 on epoch: 83] +2025-05-16 01:59:11,963 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 01:59:12,002 - + +2025-05-16 01:59:12,003 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 02:00:06,623 - Epoch: [84][ 100/ 813] Overall Loss 0.416067 Objective Loss 0.416067 LR 0.000050 Time 0.546172 +2025-05-16 02:00:58,299 - Epoch: [84][ 200/ 813] Overall Loss 0.438803 Objective Loss 0.438803 LR 0.000050 Time 0.531457 +2025-05-16 02:01:51,422 - Epoch: [84][ 300/ 813] Overall Loss 0.452493 Objective Loss 0.452493 LR 0.000050 Time 0.531377 +2025-05-16 02:02:41,802 - Epoch: [84][ 400/ 813] Overall Loss 0.461083 Objective Loss 0.461083 LR 0.000050 Time 0.524476 +2025-05-16 02:03:33,583 - Epoch: [84][ 500/ 813] Overall Loss 0.465985 Objective Loss 0.465985 LR 0.000050 Time 0.523139 +2025-05-16 02:04:26,061 - Epoch: [84][ 600/ 813] Overall Loss 0.466640 Objective Loss 0.466640 LR 0.000050 Time 0.523404 +2025-05-16 02:05:20,464 - Epoch: [84][ 700/ 813] Overall Loss 0.467594 Objective Loss 0.467594 LR 0.000050 Time 0.526349 +2025-05-16 02:06:11,605 - Epoch: [84][ 800/ 813] Overall Loss 0.472048 Objective Loss 0.472048 LR 0.000050 Time 0.524479 +2025-05-16 02:06:17,055 - Epoch: [84][ 813/ 813] Overall Loss 0.472981 Objective Loss 0.472981 LR 0.000050 Time 0.522794 +2025-05-16 02:06:17,096 - --- 
validate (epoch=84)----------- +2025-05-16 02:06:17,097 - 3250 samples (16 per mini-batch) +2025-05-16 02:06:17,099 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 02:07:11,492 - Epoch: [84][ 100/ 204] Loss 0.755347 mAP 0.879331 +2025-05-16 02:08:03,602 - Epoch: [84][ 200/ 204] Loss 0.693004 mAP 0.889615 +2025-05-16 02:08:04,673 - Epoch: [84][ 204/ 204] Loss 0.692520 mAP 0.889663 +2025-05-16 02:08:04,722 - ==> mAP: 0.88966 Loss: 0.693 + +2025-05-16 02:08:04,726 - ==> Best [mAP: 0.909098 vloss: 0.666068 Params: 368352 on epoch: 83] +2025-05-16 02:08:04,727 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 02:08:04,758 - + +2025-05-16 02:08:04,759 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 02:08:59,793 - Epoch: [85][ 100/ 813] Overall Loss 0.433676 Objective Loss 0.433676 LR 0.000050 Time 0.550317 +2025-05-16 02:09:53,149 - Epoch: [85][ 200/ 813] Overall Loss 0.428490 Objective Loss 0.428490 LR 0.000050 Time 0.541913 +2025-05-16 02:10:42,922 - Epoch: [85][ 300/ 813] Overall Loss 0.441239 Objective Loss 0.441239 LR 0.000050 Time 0.527179 +2025-05-16 02:11:33,642 - Epoch: [85][ 400/ 813] Overall Loss 0.443771 Objective Loss 0.443771 LR 0.000050 Time 0.522175 +2025-05-16 02:12:28,392 - Epoch: [85][ 500/ 813] Overall Loss 0.446299 Objective Loss 0.446299 LR 0.000050 Time 0.527235 +2025-05-16 02:13:22,158 - Epoch: [85][ 600/ 813] Overall Loss 0.447421 Objective Loss 0.447421 LR 0.000050 Time 0.528970 +2025-05-16 02:14:14,430 - Epoch: [85][ 700/ 813] Overall Loss 0.455645 Objective Loss 0.455645 LR 0.000050 Time 0.528075 +2025-05-16 02:15:05,239 - Epoch: [85][ 800/ 813] Overall Loss 0.460348 Objective Loss 0.460348 LR 0.000050 Time 0.525574 +2025-05-16 02:15:10,702 - Epoch: [85][ 813/ 813] Overall Loss 0.461374 Objective Loss 0.461374 LR 0.000050 Time 0.523890 +2025-05-16 02:15:10,737 - --- 
validate (epoch=85)----------- +2025-05-16 02:15:10,738 - 3250 samples (16 per mini-batch) +2025-05-16 02:15:10,740 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 02:16:05,840 - Epoch: [85][ 100/ 204] Loss 0.745578 mAP 0.890269 +2025-05-16 02:16:57,993 - Epoch: [85][ 200/ 204] Loss 0.713885 mAP 0.900146 +2025-05-16 02:16:58,874 - Epoch: [85][ 204/ 204] Loss 0.709054 mAP 0.900152 +2025-05-16 02:16:58,918 - ==> mAP: 0.90015 Loss: 0.709 + +2025-05-16 02:16:58,922 - ==> Best [mAP: 0.909098 vloss: 0.666068 Params: 368352 on epoch: 83] +2025-05-16 02:16:58,923 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 02:16:58,957 - + +2025-05-16 02:16:58,957 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 02:17:53,723 - Epoch: [86][ 100/ 813] Overall Loss 0.402147 Objective Loss 0.402147 LR 0.000050 Time 0.547629 +2025-05-16 02:18:46,404 - Epoch: [86][ 200/ 813] Overall Loss 0.428370 Objective Loss 0.428370 LR 0.000050 Time 0.537210 +2025-05-16 02:19:38,859 - Epoch: [86][ 300/ 813] Overall Loss 0.430619 Objective Loss 0.430619 LR 0.000050 Time 0.532974 +2025-05-16 02:20:29,333 - Epoch: [86][ 400/ 813] Overall Loss 0.438270 Objective Loss 0.438270 LR 0.000050 Time 0.525912 +2025-05-16 02:21:21,038 - Epoch: [86][ 500/ 813] Overall Loss 0.444913 Objective Loss 0.444913 LR 0.000050 Time 0.524126 +2025-05-16 02:22:15,948 - Epoch: [86][ 600/ 813] Overall Loss 0.451693 Objective Loss 0.451693 LR 0.000050 Time 0.528285 +2025-05-16 02:23:08,707 - Epoch: [86][ 700/ 813] Overall Loss 0.455465 Objective Loss 0.455465 LR 0.000050 Time 0.528180 +2025-05-16 02:23:59,826 - Epoch: [86][ 800/ 813] Overall Loss 0.461121 Objective Loss 0.461121 LR 0.000050 Time 0.526054 +2025-05-16 02:24:05,714 - Epoch: [86][ 813/ 813] Overall Loss 0.461351 Objective Loss 0.461351 LR 0.000050 Time 0.524884 +2025-05-16 02:24:05,758 - --- 
validate (epoch=86)----------- +2025-05-16 02:24:05,759 - 3250 samples (16 per mini-batch) +2025-05-16 02:24:05,761 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 02:25:01,460 - Epoch: [86][ 100/ 204] Loss 0.694468 mAP 0.889890 +2025-05-16 02:25:53,735 - Epoch: [86][ 200/ 204] Loss 0.678127 mAP 0.899995 +2025-05-16 02:25:54,521 - Epoch: [86][ 204/ 204] Loss 0.681148 mAP 0.900018 +2025-05-16 02:25:54,564 - ==> mAP: 0.90002 Loss: 0.681 + +2025-05-16 02:25:54,569 - ==> Best [mAP: 0.909098 vloss: 0.666068 Params: 368352 on epoch: 83] +2025-05-16 02:25:54,569 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 02:25:54,603 - + +2025-05-16 02:25:54,603 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 02:26:49,625 - Epoch: [87][ 100/ 813] Overall Loss 0.388233 Objective Loss 0.388233 LR 0.000050 Time 0.550190 +2025-05-16 02:27:42,922 - Epoch: [87][ 200/ 813] Overall Loss 0.399599 Objective Loss 0.399599 LR 0.000050 Time 0.541572 +2025-05-16 02:28:35,327 - Epoch: [87][ 300/ 813] Overall Loss 0.418864 Objective Loss 0.418864 LR 0.000050 Time 0.535715 +2025-05-16 02:29:23,825 - Epoch: [87][ 400/ 813] Overall Loss 0.424553 Objective Loss 0.424553 LR 0.000050 Time 0.523027 +2025-05-16 02:30:17,981 - Epoch: [87][ 500/ 813] Overall Loss 0.431805 Objective Loss 0.431805 LR 0.000050 Time 0.526729 +2025-05-16 02:31:11,842 - Epoch: [87][ 600/ 813] Overall Loss 0.435956 Objective Loss 0.435956 LR 0.000050 Time 0.528707 +2025-05-16 02:32:04,177 - Epoch: [87][ 700/ 813] Overall Loss 0.439137 Objective Loss 0.439137 LR 0.000050 Time 0.527938 +2025-05-16 02:32:56,433 - Epoch: [87][ 800/ 813] Overall Loss 0.442531 Objective Loss 0.442531 LR 0.000050 Time 0.527265 +2025-05-16 02:33:02,081 - Epoch: [87][ 813/ 813] Overall Loss 0.442345 Objective Loss 0.442345 LR 0.000050 Time 0.525780 +2025-05-16 02:33:02,123 - --- 
validate (epoch=87)----------- +2025-05-16 02:33:02,123 - 3250 samples (16 per mini-batch) +2025-05-16 02:33:02,125 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 02:33:56,620 - Epoch: [87][ 100/ 204] Loss 0.670875 mAP 0.898187 +2025-05-16 02:34:51,000 - Epoch: [87][ 200/ 204] Loss 0.667549 mAP 0.898748 +2025-05-16 02:34:51,482 - Epoch: [87][ 204/ 204] Loss 0.671590 mAP 0.898737 +2025-05-16 02:34:51,524 - ==> mAP: 0.89874 Loss: 0.672 + +2025-05-16 02:34:51,528 - ==> Best [mAP: 0.909098 vloss: 0.666068 Params: 368352 on epoch: 83] +2025-05-16 02:34:51,528 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 02:34:51,562 - + +2025-05-16 02:34:51,563 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 02:35:47,014 - Epoch: [88][ 100/ 813] Overall Loss 0.403381 Objective Loss 0.403381 LR 0.000050 Time 0.554486 +2025-05-16 02:36:40,396 - Epoch: [88][ 200/ 813] Overall Loss 0.407503 Objective Loss 0.407503 LR 0.000050 Time 0.544144 +2025-05-16 02:37:33,024 - Epoch: [88][ 300/ 813] Overall Loss 0.427009 Objective Loss 0.427009 LR 0.000050 Time 0.538179 +2025-05-16 02:38:23,274 - Epoch: [88][ 400/ 813] Overall Loss 0.441084 Objective Loss 0.441084 LR 0.000050 Time 0.529256 +2025-05-16 02:39:15,104 - Epoch: [88][ 500/ 813] Overall Loss 0.449262 Objective Loss 0.449262 LR 0.000050 Time 0.527061 +2025-05-16 02:40:09,105 - Epoch: [88][ 600/ 813] Overall Loss 0.457547 Objective Loss 0.457547 LR 0.000050 Time 0.529216 +2025-05-16 02:41:02,524 - Epoch: [88][ 700/ 813] Overall Loss 0.458371 Objective Loss 0.458371 LR 0.000050 Time 0.529924 +2025-05-16 02:41:54,554 - Epoch: [88][ 800/ 813] Overall Loss 0.460954 Objective Loss 0.460954 LR 0.000050 Time 0.528717 +2025-05-16 02:41:59,174 - Epoch: [88][ 813/ 813] Overall Loss 0.460305 Objective Loss 0.460305 LR 0.000050 Time 0.525945 +2025-05-16 02:41:59,216 - --- 
validate (epoch=88)----------- +2025-05-16 02:41:59,217 - 3250 samples (16 per mini-batch) +2025-05-16 02:41:59,219 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 02:42:54,234 - Epoch: [88][ 100/ 204] Loss 0.745469 mAP 0.889664 +2025-05-16 02:43:46,523 - Epoch: [88][ 200/ 204] Loss 0.732652 mAP 0.890060 +2025-05-16 02:43:47,269 - Epoch: [88][ 204/ 204] Loss 0.727735 mAP 0.889960 +2025-05-16 02:43:47,314 - ==> mAP: 0.88996 Loss: 0.728 + +2025-05-16 02:43:47,318 - ==> Best [mAP: 0.909098 vloss: 0.666068 Params: 368352 on epoch: 83] +2025-05-16 02:43:47,319 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 02:43:47,353 - + +2025-05-16 02:43:47,353 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 02:44:43,259 - Epoch: [89][ 100/ 813] Overall Loss 0.391056 Objective Loss 0.391056 LR 0.000050 Time 0.559031 +2025-05-16 02:45:35,834 - Epoch: [89][ 200/ 813] Overall Loss 0.421948 Objective Loss 0.421948 LR 0.000050 Time 0.542384 +2025-05-16 02:46:28,613 - Epoch: [89][ 300/ 813] Overall Loss 0.436400 Objective Loss 0.436400 LR 0.000050 Time 0.537512 +2025-05-16 02:47:20,342 - Epoch: [89][ 400/ 813] Overall Loss 0.441480 Objective Loss 0.441480 LR 0.000050 Time 0.532453 +2025-05-16 02:48:11,573 - Epoch: [89][ 500/ 813] Overall Loss 0.444244 Objective Loss 0.444244 LR 0.000050 Time 0.528419 +2025-05-16 02:49:04,866 - Epoch: [89][ 600/ 813] Overall Loss 0.440629 Objective Loss 0.440629 LR 0.000050 Time 0.529168 +2025-05-16 02:49:57,462 - Epoch: [89][ 700/ 813] Overall Loss 0.446509 Objective Loss 0.446509 LR 0.000050 Time 0.528707 +2025-05-16 02:50:50,306 - Epoch: [89][ 800/ 813] Overall Loss 0.450306 Objective Loss 0.450306 LR 0.000050 Time 0.528672 +2025-05-16 02:50:54,746 - Epoch: [89][ 813/ 813] Overall Loss 0.450466 Objective Loss 0.450466 LR 0.000050 Time 0.525679 +2025-05-16 02:50:54,783 - --- 
validate (epoch=89)----------- +2025-05-16 02:50:54,784 - 3250 samples (16 per mini-batch) +2025-05-16 02:50:54,786 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 02:51:49,159 - Epoch: [89][ 100/ 204] Loss 0.694009 mAP 0.900133 +2025-05-16 02:52:42,150 - Epoch: [89][ 200/ 204] Loss 0.702374 mAP 0.900130 +2025-05-16 02:52:42,902 - Epoch: [89][ 204/ 204] Loss 0.700465 mAP 0.900140 +2025-05-16 02:52:42,946 - ==> mAP: 0.90014 Loss: 0.700 + +2025-05-16 02:52:42,951 - ==> Best [mAP: 0.909098 vloss: 0.666068 Params: 368352 on epoch: 83] +2025-05-16 02:52:42,951 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 02:52:42,986 - + +2025-05-16 02:52:42,986 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 02:53:39,107 - Epoch: [90][ 100/ 813] Overall Loss 0.442570 Objective Loss 0.442570 LR 0.000050 Time 0.561180 +2025-05-16 02:54:31,945 - Epoch: [90][ 200/ 813] Overall Loss 0.419052 Objective Loss 0.419052 LR 0.000050 Time 0.544759 +2025-05-16 02:55:23,743 - Epoch: [90][ 300/ 813] Overall Loss 0.426292 Objective Loss 0.426292 LR 0.000050 Time 0.535806 +2025-05-16 02:56:13,859 - Epoch: [90][ 400/ 813] Overall Loss 0.432817 Objective Loss 0.432817 LR 0.000050 Time 0.527139 +2025-05-16 02:57:05,105 - Epoch: [90][ 500/ 813] Overall Loss 0.434475 Objective Loss 0.434475 LR 0.000050 Time 0.524201 +2025-05-16 02:57:58,627 - Epoch: [90][ 600/ 813] Overall Loss 0.438953 Objective Loss 0.438953 LR 0.000050 Time 0.526025 +2025-05-16 02:58:50,278 - Epoch: [90][ 700/ 813] Overall Loss 0.444970 Objective Loss 0.444970 LR 0.000050 Time 0.524663 +2025-05-16 02:59:42,415 - Epoch: [90][ 800/ 813] Overall Loss 0.447623 Objective Loss 0.447623 LR 0.000050 Time 0.524249 +2025-05-16 02:59:47,974 - Epoch: [90][ 813/ 813] Overall Loss 0.448890 Objective Loss 0.448890 LR 0.000050 Time 0.522703 +2025-05-16 02:59:48,015 - --- 
validate (epoch=90)----------- +2025-05-16 02:59:48,016 - 3250 samples (16 per mini-batch) +2025-05-16 02:59:48,018 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 03:00:42,619 - Epoch: [90][ 100/ 204] Loss 0.702602 mAP 0.889256 +2025-05-16 03:01:35,460 - Epoch: [90][ 200/ 204] Loss 0.689358 mAP 0.899542 +2025-05-16 03:01:35,961 - Epoch: [90][ 204/ 204] Loss 0.686908 mAP 0.899551 +2025-05-16 03:01:36,008 - ==> mAP: 0.89955 Loss: 0.687 + +2025-05-16 03:01:36,013 - ==> Best [mAP: 0.909098 vloss: 0.666068 Params: 368352 on epoch: 83] +2025-05-16 03:01:36,013 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 03:01:36,048 - + +2025-05-16 03:01:36,049 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 03:02:31,347 - Epoch: [91][ 100/ 813] Overall Loss 0.424204 Objective Loss 0.424204 LR 0.000050 Time 0.552952 +2025-05-16 03:03:25,272 - Epoch: [91][ 200/ 813] Overall Loss 0.438365 Objective Loss 0.438365 LR 0.000050 Time 0.546094 +2025-05-16 03:04:18,595 - Epoch: [91][ 300/ 813] Overall Loss 0.438095 Objective Loss 0.438095 LR 0.000050 Time 0.541800 +2025-05-16 03:05:08,931 - Epoch: [91][ 400/ 813] Overall Loss 0.441517 Objective Loss 0.441517 LR 0.000050 Time 0.532184 +2025-05-16 03:06:00,184 - Epoch: [91][ 500/ 813] Overall Loss 0.443083 Objective Loss 0.443083 LR 0.000050 Time 0.528251 +2025-05-16 03:06:52,907 - Epoch: [91][ 600/ 813] Overall Loss 0.449205 Objective Loss 0.449205 LR 0.000050 Time 0.528077 +2025-05-16 03:07:44,891 - Epoch: [91][ 700/ 813] Overall Loss 0.451258 Objective Loss 0.451258 LR 0.000050 Time 0.526898 +2025-05-16 03:08:37,894 - Epoch: [91][ 800/ 813] Overall Loss 0.454226 Objective Loss 0.454226 LR 0.000050 Time 0.527283 +2025-05-16 03:08:42,218 - Epoch: [91][ 813/ 813] Overall Loss 0.454531 Objective Loss 0.454531 LR 0.000050 Time 0.524170 +2025-05-16 03:08:42,259 - --- 
validate (epoch=91)----------- +2025-05-16 03:08:42,260 - 3250 samples (16 per mini-batch) +2025-05-16 03:08:42,262 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 03:09:37,755 - Epoch: [91][ 100/ 204] Loss 0.718156 mAP 0.879470 +2025-05-16 03:10:30,840 - Epoch: [91][ 200/ 204] Loss 0.709062 mAP 0.889708 +2025-05-16 03:10:31,492 - Epoch: [91][ 204/ 204] Loss 0.707428 mAP 0.889729 +2025-05-16 03:10:31,530 - ==> mAP: 0.88973 Loss: 0.707 + +2025-05-16 03:10:31,535 - ==> Best [mAP: 0.909098 vloss: 0.666068 Params: 368352 on epoch: 83] +2025-05-16 03:10:31,535 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 03:10:31,570 - + +2025-05-16 03:10:31,570 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 03:11:27,612 - Epoch: [92][ 100/ 813] Overall Loss 0.415150 Objective Loss 0.415150 LR 0.000050 Time 0.560388 +2025-05-16 03:12:21,372 - Epoch: [92][ 200/ 813] Overall Loss 0.406664 Objective Loss 0.406664 LR 0.000050 Time 0.548985 +2025-05-16 03:13:12,903 - Epoch: [92][ 300/ 813] Overall Loss 0.421852 Objective Loss 0.421852 LR 0.000050 Time 0.537755 +2025-05-16 03:14:02,940 - Epoch: [92][ 400/ 813] Overall Loss 0.423821 Objective Loss 0.423821 LR 0.000050 Time 0.528391 +2025-05-16 03:14:55,829 - Epoch: [92][ 500/ 813] Overall Loss 0.427485 Objective Loss 0.427485 LR 0.000050 Time 0.528479 +2025-05-16 03:15:48,251 - Epoch: [92][ 600/ 813] Overall Loss 0.427716 Objective Loss 0.427716 LR 0.000050 Time 0.527766 +2025-05-16 03:16:43,144 - Epoch: [92][ 700/ 813] Overall Loss 0.437070 Objective Loss 0.437070 LR 0.000050 Time 0.530787 +2025-05-16 03:17:35,664 - Epoch: [92][ 800/ 813] Overall Loss 0.443033 Objective Loss 0.443033 LR 0.000050 Time 0.530086 +2025-05-16 03:17:39,819 - Epoch: [92][ 813/ 813] Overall Loss 0.443278 Objective Loss 0.443278 LR 0.000050 Time 0.526720 +2025-05-16 03:17:39,858 - --- 
validate (epoch=92)----------- +2025-05-16 03:17:39,859 - 3250 samples (16 per mini-batch) +2025-05-16 03:17:39,861 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 03:18:36,452 - Epoch: [92][ 100/ 204] Loss 0.684844 mAP 0.899410 +2025-05-16 03:19:28,912 - Epoch: [92][ 200/ 204] Loss 0.698786 mAP 0.909277 +2025-05-16 03:19:30,021 - Epoch: [92][ 204/ 204] Loss 0.703166 mAP 0.909177 +2025-05-16 03:19:30,071 - ==> mAP: 0.90918 Loss: 0.703 + +2025-05-16 03:19:30,077 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 03:19:30,077 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 03:19:30,116 - + +2025-05-16 03:19:30,116 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 03:20:24,541 - Epoch: [93][ 100/ 813] Overall Loss 0.421480 Objective Loss 0.421480 LR 0.000050 Time 0.544220 +2025-05-16 03:21:18,741 - Epoch: [93][ 200/ 813] Overall Loss 0.406618 Objective Loss 0.406618 LR 0.000050 Time 0.543099 +2025-05-16 03:22:13,408 - Epoch: [93][ 300/ 813] Overall Loss 0.420348 Objective Loss 0.420348 LR 0.000050 Time 0.544286 +2025-05-16 03:23:02,925 - Epoch: [93][ 400/ 813] Overall Loss 0.419761 Objective Loss 0.419761 LR 0.000050 Time 0.532002 +2025-05-16 03:23:55,409 - Epoch: [93][ 500/ 813] Overall Loss 0.420965 Objective Loss 0.420965 LR 0.000050 Time 0.530566 +2025-05-16 03:24:49,793 - Epoch: [93][ 600/ 813] Overall Loss 0.427013 Objective Loss 0.427013 LR 0.000050 Time 0.532774 +2025-05-16 03:25:43,002 - Epoch: [93][ 700/ 813] Overall Loss 0.432648 Objective Loss 0.432648 LR 0.000050 Time 0.532673 +2025-05-16 03:26:35,436 - Epoch: [93][ 800/ 813] Overall Loss 0.436906 Objective Loss 0.436906 LR 0.000050 Time 0.531629 +2025-05-16 03:26:40,279 - Epoch: [93][ 813/ 813] Overall Loss 0.437986 Objective Loss 0.437986 LR 0.000050 Time 0.529084 +2025-05-16 03:26:40,319 - --- 
validate (epoch=93)----------- +2025-05-16 03:26:40,320 - 3250 samples (16 per mini-batch) +2025-05-16 03:26:40,322 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 03:27:36,098 - Epoch: [93][ 100/ 204] Loss 0.743807 mAP 0.879728 +2025-05-16 03:28:28,602 - Epoch: [93][ 200/ 204] Loss 0.710472 mAP 0.889772 +2025-05-16 03:28:29,089 - Epoch: [93][ 204/ 204] Loss 0.709126 mAP 0.889684 +2025-05-16 03:28:29,131 - ==> mAP: 0.88968 Loss: 0.709 + +2025-05-16 03:28:29,135 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 03:28:29,135 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 03:28:29,170 - + +2025-05-16 03:28:29,170 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 03:29:24,862 - Epoch: [94][ 100/ 813] Overall Loss 0.409442 Objective Loss 0.409442 LR 0.000050 Time 0.556894 +2025-05-16 03:30:17,449 - Epoch: [94][ 200/ 813] Overall Loss 0.414248 Objective Loss 0.414248 LR 0.000050 Time 0.541372 +2025-05-16 03:31:10,108 - Epoch: [94][ 300/ 813] Overall Loss 0.416837 Objective Loss 0.416837 LR 0.000050 Time 0.536437 +2025-05-16 03:32:00,033 - Epoch: [94][ 400/ 813] Overall Loss 0.423107 Objective Loss 0.423107 LR 0.000050 Time 0.527136 +2025-05-16 03:32:51,900 - Epoch: [94][ 500/ 813] Overall Loss 0.424541 Objective Loss 0.424541 LR 0.000050 Time 0.525427 +2025-05-16 03:33:46,215 - Epoch: [94][ 600/ 813] Overall Loss 0.431704 Objective Loss 0.431704 LR 0.000050 Time 0.528377 +2025-05-16 03:34:38,608 - Epoch: [94][ 700/ 813] Overall Loss 0.436709 Objective Loss 0.436709 LR 0.000050 Time 0.527740 +2025-05-16 03:35:31,770 - Epoch: [94][ 800/ 813] Overall Loss 0.442354 Objective Loss 0.442354 LR 0.000050 Time 0.528222 +2025-05-16 03:35:36,867 - Epoch: [94][ 813/ 813] Overall Loss 0.444172 Objective Loss 0.444172 LR 0.000050 Time 0.526045 +2025-05-16 03:35:36,909 - --- 
validate (epoch=94)----------- +2025-05-16 03:35:36,910 - 3250 samples (16 per mini-batch) +2025-05-16 03:35:36,912 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 03:36:30,988 - Epoch: [94][ 100/ 204] Loss 0.670911 mAP 0.908456 +2025-05-16 03:37:24,133 - Epoch: [94][ 200/ 204] Loss 0.678223 mAP 0.899020 +2025-05-16 03:37:24,733 - Epoch: [94][ 204/ 204] Loss 0.682141 mAP 0.899078 +2025-05-16 03:37:24,778 - ==> mAP: 0.89908 Loss: 0.682 + +2025-05-16 03:37:24,783 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 03:37:24,783 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 03:37:24,818 - + +2025-05-16 03:37:24,819 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 03:38:20,276 - Epoch: [95][ 100/ 813] Overall Loss 0.425204 Objective Loss 0.425204 LR 0.000050 Time 0.554543 +2025-05-16 03:39:13,678 - Epoch: [95][ 200/ 813] Overall Loss 0.431939 Objective Loss 0.431939 LR 0.000050 Time 0.544272 +2025-05-16 03:40:04,686 - Epoch: [95][ 300/ 813] Overall Loss 0.451538 Objective Loss 0.451538 LR 0.000050 Time 0.532869 +2025-05-16 03:40:55,431 - Epoch: [95][ 400/ 813] Overall Loss 0.451645 Objective Loss 0.451645 LR 0.000050 Time 0.526511 +2025-05-16 03:41:47,008 - Epoch: [95][ 500/ 813] Overall Loss 0.457041 Objective Loss 0.457041 LR 0.000050 Time 0.524360 +2025-05-16 03:42:40,631 - Epoch: [95][ 600/ 813] Overall Loss 0.459421 Objective Loss 0.459421 LR 0.000050 Time 0.526335 +2025-05-16 03:43:33,659 - Epoch: [95][ 700/ 813] Overall Loss 0.460269 Objective Loss 0.460269 LR 0.000050 Time 0.526886 +2025-05-16 03:44:24,566 - Epoch: [95][ 800/ 813] Overall Loss 0.460542 Objective Loss 0.460542 LR 0.000050 Time 0.524657 +2025-05-16 03:44:31,033 - Epoch: [95][ 813/ 813] Overall Loss 0.460029 Objective Loss 0.460029 LR 0.000050 Time 0.524221 +2025-05-16 03:44:31,075 - --- 
validate (epoch=95)----------- +2025-05-16 03:44:31,076 - 3250 samples (16 per mini-batch) +2025-05-16 03:44:31,078 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 03:45:26,556 - Epoch: [95][ 100/ 204] Loss 0.681345 mAP 0.899620 +2025-05-16 03:46:19,042 - Epoch: [95][ 200/ 204] Loss 0.689515 mAP 0.899695 +2025-05-16 03:46:19,808 - Epoch: [95][ 204/ 204] Loss 0.691190 mAP 0.899709 +2025-05-16 03:46:19,852 - ==> mAP: 0.89971 Loss: 0.691 + +2025-05-16 03:46:19,857 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 03:46:19,857 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 03:46:19,891 - + +2025-05-16 03:46:19,892 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 03:47:15,241 - Epoch: [96][ 100/ 813] Overall Loss 0.411034 Objective Loss 0.411034 LR 0.000050 Time 0.553469 +2025-05-16 03:48:08,535 - Epoch: [96][ 200/ 813] Overall Loss 0.402274 Objective Loss 0.402274 LR 0.000050 Time 0.543193 +2025-05-16 03:49:01,723 - Epoch: [96][ 300/ 813] Overall Loss 0.412039 Objective Loss 0.412039 LR 0.000050 Time 0.539416 +2025-05-16 03:49:52,806 - Epoch: [96][ 400/ 813] Overall Loss 0.416496 Objective Loss 0.416496 LR 0.000050 Time 0.532265 +2025-05-16 03:50:45,001 - Epoch: [96][ 500/ 813] Overall Loss 0.423414 Objective Loss 0.423414 LR 0.000050 Time 0.530199 +2025-05-16 03:51:37,885 - Epoch: [96][ 600/ 813] Overall Loss 0.424090 Objective Loss 0.424090 LR 0.000050 Time 0.529970 +2025-05-16 03:52:30,885 - Epoch: [96][ 700/ 813] Overall Loss 0.430678 Objective Loss 0.430678 LR 0.000050 Time 0.529970 +2025-05-16 03:53:24,568 - Epoch: [96][ 800/ 813] Overall Loss 0.436562 Objective Loss 0.436562 LR 0.000050 Time 0.530826 +2025-05-16 03:53:29,054 - Epoch: [96][ 813/ 813] Overall Loss 0.437515 Objective Loss 0.437515 LR 0.000050 Time 0.527856 +2025-05-16 03:53:29,095 - --- 
validate (epoch=96)----------- +2025-05-16 03:53:29,096 - 3250 samples (16 per mini-batch) +2025-05-16 03:53:29,098 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 03:54:24,897 - Epoch: [96][ 100/ 204] Loss 0.690810 mAP 0.908952 +2025-05-16 03:55:17,681 - Epoch: [96][ 200/ 204] Loss 0.716337 mAP 0.899520 +2025-05-16 03:55:18,196 - Epoch: [96][ 204/ 204] Loss 0.715419 mAP 0.899526 +2025-05-16 03:55:18,236 - ==> mAP: 0.89953 Loss: 0.715 + +2025-05-16 03:55:18,240 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 03:55:18,240 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 03:55:18,276 - + +2025-05-16 03:55:18,276 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 03:56:14,820 - Epoch: [97][ 100/ 813] Overall Loss 0.438050 Objective Loss 0.438050 LR 0.000050 Time 0.565414 +2025-05-16 03:57:05,806 - Epoch: [97][ 200/ 813] Overall Loss 0.436336 Objective Loss 0.436336 LR 0.000050 Time 0.537629 +2025-05-16 03:57:59,583 - Epoch: [97][ 300/ 813] Overall Loss 0.439351 Objective Loss 0.439351 LR 0.000050 Time 0.537658 +2025-05-16 03:58:48,990 - Epoch: [97][ 400/ 813] Overall Loss 0.444632 Objective Loss 0.444632 LR 0.000050 Time 0.526714 +2025-05-16 03:59:41,991 - Epoch: [97][ 500/ 813] Overall Loss 0.444831 Objective Loss 0.444831 LR 0.000050 Time 0.527364 +2025-05-16 04:00:35,154 - Epoch: [97][ 600/ 813] Overall Loss 0.443636 Objective Loss 0.443636 LR 0.000050 Time 0.528071 +2025-05-16 04:01:28,627 - Epoch: [97][ 700/ 813] Overall Loss 0.447132 Objective Loss 0.447132 LR 0.000050 Time 0.529020 +2025-05-16 04:02:20,912 - Epoch: [97][ 800/ 813] Overall Loss 0.448284 Objective Loss 0.448284 LR 0.000050 Time 0.528239 +2025-05-16 04:02:26,663 - Epoch: [97][ 813/ 813] Overall Loss 0.449282 Objective Loss 0.449282 LR 0.000050 Time 0.526866 +2025-05-16 04:02:26,706 - --- 
validate (epoch=97)----------- +2025-05-16 04:02:26,707 - 3250 samples (16 per mini-batch) +2025-05-16 04:02:26,709 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 04:03:21,486 - Epoch: [97][ 100/ 204] Loss 0.675595 mAP 0.909866 +2025-05-16 04:04:12,936 - Epoch: [97][ 200/ 204] Loss 0.707904 mAP 0.899194 +2025-05-16 04:04:13,699 - Epoch: [97][ 204/ 204] Loss 0.705516 mAP 0.899222 +2025-05-16 04:04:13,744 - ==> mAP: 0.89922 Loss: 0.706 + +2025-05-16 04:04:13,749 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 04:04:13,749 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 04:04:13,783 - + +2025-05-16 04:04:13,783 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 04:05:07,880 - Epoch: [98][ 100/ 813] Overall Loss 0.440761 Objective Loss 0.440761 LR 0.000050 Time 0.540935 +2025-05-16 04:06:01,331 - Epoch: [98][ 200/ 813] Overall Loss 0.434430 Objective Loss 0.434430 LR 0.000050 Time 0.537713 +2025-05-16 04:06:54,160 - Epoch: [98][ 300/ 813] Overall Loss 0.442038 Objective Loss 0.442038 LR 0.000050 Time 0.534541 +2025-05-16 04:07:45,062 - Epoch: [98][ 400/ 813] Overall Loss 0.456059 Objective Loss 0.456059 LR 0.000050 Time 0.528157 +2025-05-16 04:08:38,087 - Epoch: [98][ 500/ 813] Overall Loss 0.455209 Objective Loss 0.455209 LR 0.000050 Time 0.528571 +2025-05-16 04:09:30,266 - Epoch: [98][ 600/ 813] Overall Loss 0.456854 Objective Loss 0.456854 LR 0.000050 Time 0.527439 +2025-05-16 04:10:23,959 - Epoch: [98][ 700/ 813] Overall Loss 0.457118 Objective Loss 0.457118 LR 0.000050 Time 0.528792 +2025-05-16 04:11:15,870 - Epoch: [98][ 800/ 813] Overall Loss 0.457919 Objective Loss 0.457919 LR 0.000050 Time 0.527579 +2025-05-16 04:11:20,417 - Epoch: [98][ 813/ 813] Overall Loss 0.457389 Objective Loss 0.457389 LR 0.000050 Time 0.524736 +2025-05-16 04:11:20,460 - --- 
validate (epoch=98)----------- +2025-05-16 04:11:20,461 - 3250 samples (16 per mini-batch) +2025-05-16 04:11:20,463 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 04:12:16,150 - Epoch: [98][ 100/ 204] Loss 0.737720 mAP 0.888627 +2025-05-16 04:13:07,782 - Epoch: [98][ 200/ 204] Loss 0.702758 mAP 0.888878 +2025-05-16 04:13:08,611 - Epoch: [98][ 204/ 204] Loss 0.701752 mAP 0.888907 +2025-05-16 04:13:08,653 - ==> mAP: 0.88891 Loss: 0.702 + +2025-05-16 04:13:08,657 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 04:13:08,658 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 04:13:08,692 - + +2025-05-16 04:13:08,692 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 04:14:05,607 - Epoch: [99][ 100/ 813] Overall Loss 0.402654 Objective Loss 0.402654 LR 0.000050 Time 0.569126 +2025-05-16 04:14:58,823 - Epoch: [99][ 200/ 813] Overall Loss 0.401767 Objective Loss 0.401767 LR 0.000050 Time 0.550632 +2025-05-16 04:15:51,511 - Epoch: [99][ 300/ 813] Overall Loss 0.416931 Objective Loss 0.416931 LR 0.000050 Time 0.542594 +2025-05-16 04:16:41,566 - Epoch: [99][ 400/ 813] Overall Loss 0.424924 Objective Loss 0.424924 LR 0.000050 Time 0.532078 +2025-05-16 04:17:34,417 - Epoch: [99][ 500/ 813] Overall Loss 0.426917 Objective Loss 0.426917 LR 0.000050 Time 0.531361 +2025-05-16 04:18:27,262 - Epoch: [99][ 600/ 813] Overall Loss 0.424598 Objective Loss 0.424598 LR 0.000050 Time 0.530874 +2025-05-16 04:19:19,586 - Epoch: [99][ 700/ 813] Overall Loss 0.430881 Objective Loss 0.430881 LR 0.000050 Time 0.529772 +2025-05-16 04:20:11,532 - Epoch: [99][ 800/ 813] Overall Loss 0.433505 Objective Loss 0.433505 LR 0.000050 Time 0.528480 +2025-05-16 04:20:17,720 - Epoch: [99][ 813/ 813] Overall Loss 0.436153 Objective Loss 0.436153 LR 0.000050 Time 0.527640 +2025-05-16 04:20:17,763 - --- 
validate (epoch=99)----------- +2025-05-16 04:20:17,764 - 3250 samples (16 per mini-batch) +2025-05-16 04:20:17,766 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 04:21:12,435 - Epoch: [99][ 100/ 204] Loss 0.639063 mAP 0.899504 +2025-05-16 04:22:05,110 - Epoch: [99][ 200/ 204] Loss 0.663970 mAP 0.899120 +2025-05-16 04:22:05,616 - Epoch: [99][ 204/ 204] Loss 0.667214 mAP 0.898995 +2025-05-16 04:22:05,659 - ==> mAP: 0.89899 Loss: 0.667 + +2025-05-16 04:22:05,664 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 04:22:05,664 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 04:22:05,699 - + +2025-05-16 04:22:05,700 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 04:23:00,667 - Epoch: [100][ 100/ 813] Overall Loss 0.424910 Objective Loss 0.424910 LR 0.000025 Time 0.549646 +2025-05-16 04:23:54,379 - Epoch: [100][ 200/ 813] Overall Loss 0.449229 Objective Loss 0.449229 LR 0.000025 Time 0.543371 +2025-05-16 04:24:46,599 - Epoch: [100][ 300/ 813] Overall Loss 0.447019 Objective Loss 0.447019 LR 0.000025 Time 0.536310 +2025-05-16 04:25:37,731 - Epoch: [100][ 400/ 813] Overall Loss 0.444078 Objective Loss 0.444078 LR 0.000025 Time 0.530057 +2025-05-16 04:26:30,249 - Epoch: [100][ 500/ 813] Overall Loss 0.442353 Objective Loss 0.442353 LR 0.000025 Time 0.529080 +2025-05-16 04:27:24,012 - Epoch: [100][ 600/ 813] Overall Loss 0.437383 Objective Loss 0.437383 LR 0.000025 Time 0.530501 +2025-05-16 04:28:15,969 - Epoch: [100][ 700/ 813] Overall Loss 0.436408 Objective Loss 0.436408 LR 0.000025 Time 0.528937 +2025-05-16 04:29:07,365 - Epoch: [100][ 800/ 813] Overall Loss 0.437593 Objective Loss 0.437593 LR 0.000025 Time 0.527063 +2025-05-16 04:29:12,768 - Epoch: [100][ 813/ 813] Overall Loss 0.437373 Objective Loss 0.437373 LR 0.000025 Time 0.525280 +2025-05-16 04:29:12,810 - 
--- validate (epoch=100)----------- +2025-05-16 04:29:12,811 - 3250 samples (16 per mini-batch) +2025-05-16 04:29:12,813 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 04:30:08,380 - Epoch: [100][ 100/ 204] Loss 0.743848 mAP 0.898699 +2025-05-16 04:31:01,604 - Epoch: [100][ 200/ 204] Loss 0.685471 mAP 0.899469 +2025-05-16 04:31:02,628 - Epoch: [100][ 204/ 204] Loss 0.686317 mAP 0.899492 +2025-05-16 04:31:02,670 - ==> mAP: 0.89949 Loss: 0.686 + +2025-05-16 04:31:02,675 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 04:31:02,675 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 04:31:02,703 - + +2025-05-16 04:31:02,703 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 04:31:57,018 - Epoch: [101][ 100/ 813] Overall Loss 0.395541 Objective Loss 0.395541 LR 0.000025 Time 0.543123 +2025-05-16 04:32:51,388 - Epoch: [101][ 200/ 813] Overall Loss 0.386298 Objective Loss 0.386298 LR 0.000025 Time 0.543398 +2025-05-16 04:33:43,161 - Epoch: [101][ 300/ 813] Overall Loss 0.398566 Objective Loss 0.398566 LR 0.000025 Time 0.534827 +2025-05-16 04:34:33,964 - Epoch: [101][ 400/ 813] Overall Loss 0.402518 Objective Loss 0.402518 LR 0.000025 Time 0.528124 +2025-05-16 04:35:25,718 - Epoch: [101][ 500/ 813] Overall Loss 0.404041 Objective Loss 0.404041 LR 0.000025 Time 0.526005 +2025-05-16 04:36:19,200 - Epoch: [101][ 600/ 813] Overall Loss 0.408554 Objective Loss 0.408554 LR 0.000025 Time 0.527471 +2025-05-16 04:37:11,124 - Epoch: [101][ 700/ 813] Overall Loss 0.414275 Objective Loss 0.414275 LR 0.000025 Time 0.526292 +2025-05-16 04:38:04,798 - Epoch: [101][ 800/ 813] Overall Loss 0.420324 Objective Loss 0.420324 LR 0.000025 Time 0.527592 +2025-05-16 04:38:09,142 - Epoch: [101][ 813/ 813] Overall Loss 0.420651 Objective Loss 0.420651 LR 0.000025 Time 0.524498 +2025-05-16 
04:38:09,180 - --- validate (epoch=101)----------- +2025-05-16 04:38:09,181 - 3250 samples (16 per mini-batch) +2025-05-16 04:38:09,183 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 04:39:05,191 - Epoch: [101][ 100/ 204] Loss 0.646033 mAP 0.899770 +2025-05-16 04:39:57,165 - Epoch: [101][ 200/ 204] Loss 0.686132 mAP 0.899272 +2025-05-16 04:39:57,994 - Epoch: [101][ 204/ 204] Loss 0.710150 mAP 0.899201 +2025-05-16 04:39:58,035 - ==> mAP: 0.89920 Loss: 0.710 + +2025-05-16 04:39:58,040 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 04:39:58,040 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 04:39:58,075 - + +2025-05-16 04:39:58,076 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 04:40:54,316 - Epoch: [102][ 100/ 813] Overall Loss 0.378347 Objective Loss 0.378347 LR 0.000025 Time 0.562372 +2025-05-16 04:41:47,447 - Epoch: [102][ 200/ 813] Overall Loss 0.390089 Objective Loss 0.390089 LR 0.000025 Time 0.546835 +2025-05-16 04:42:38,459 - Epoch: [102][ 300/ 813] Overall Loss 0.407629 Objective Loss 0.407629 LR 0.000025 Time 0.534590 +2025-05-16 04:43:29,063 - Epoch: [102][ 400/ 813] Overall Loss 0.411414 Objective Loss 0.411414 LR 0.000025 Time 0.527446 +2025-05-16 04:44:21,902 - Epoch: [102][ 500/ 813] Overall Loss 0.416714 Objective Loss 0.416714 LR 0.000025 Time 0.527626 +2025-05-16 04:45:14,711 - Epoch: [102][ 600/ 813] Overall Loss 0.421199 Objective Loss 0.421199 LR 0.000025 Time 0.527701 +2025-05-16 04:46:06,746 - Epoch: [102][ 700/ 813] Overall Loss 0.428324 Objective Loss 0.428324 LR 0.000025 Time 0.526648 +2025-05-16 04:46:58,079 - Epoch: [102][ 800/ 813] Overall Loss 0.429255 Objective Loss 0.429255 LR 0.000025 Time 0.524982 +2025-05-16 04:47:04,774 - Epoch: [102][ 813/ 813] Overall Loss 0.430427 Objective Loss 0.430427 LR 0.000025 Time 0.524819 
+2025-05-16 04:47:04,817 - --- validate (epoch=102)----------- +2025-05-16 04:47:04,818 - 3250 samples (16 per mini-batch) +2025-05-16 04:47:04,820 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 04:47:59,703 - Epoch: [102][ 100/ 204] Loss 0.682535 mAP 0.899522 +2025-05-16 04:48:53,055 - Epoch: [102][ 200/ 204] Loss 0.709683 mAP 0.899380 +2025-05-16 04:48:53,574 - Epoch: [102][ 204/ 204] Loss 0.707135 mAP 0.899413 +2025-05-16 04:48:53,618 - ==> mAP: 0.89941 Loss: 0.707 + +2025-05-16 04:48:53,623 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 04:48:53,623 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 04:48:53,659 - + +2025-05-16 04:48:53,659 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 04:49:49,265 - Epoch: [103][ 100/ 813] Overall Loss 0.392769 Objective Loss 0.392769 LR 0.000025 Time 0.556033 +2025-05-16 04:50:40,689 - Epoch: [103][ 200/ 813] Overall Loss 0.395270 Objective Loss 0.395270 LR 0.000025 Time 0.535128 +2025-05-16 04:51:34,133 - Epoch: [103][ 300/ 813] Overall Loss 0.402152 Objective Loss 0.402152 LR 0.000025 Time 0.534893 +2025-05-16 04:52:24,111 - Epoch: [103][ 400/ 813] Overall Loss 0.402801 Objective Loss 0.402801 LR 0.000025 Time 0.526111 +2025-05-16 04:53:17,187 - Epoch: [103][ 500/ 813] Overall Loss 0.402170 Objective Loss 0.402170 LR 0.000025 Time 0.527037 +2025-05-16 04:54:11,218 - Epoch: [103][ 600/ 813] Overall Loss 0.415940 Objective Loss 0.415940 LR 0.000025 Time 0.529241 +2025-05-16 04:55:02,130 - Epoch: [103][ 700/ 813] Overall Loss 0.422773 Objective Loss 0.422773 LR 0.000025 Time 0.526364 +2025-05-16 04:55:54,495 - Epoch: [103][ 800/ 813] Overall Loss 0.426257 Objective Loss 0.426257 LR 0.000025 Time 0.526022 +2025-05-16 04:55:59,946 - Epoch: [103][ 813/ 813] Overall Loss 0.427596 Objective Loss 0.427596 LR 0.000025 Time 
0.524315 +2025-05-16 04:55:59,987 - --- validate (epoch=103)----------- +2025-05-16 04:55:59,988 - 3250 samples (16 per mini-batch) +2025-05-16 04:55:59,990 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 04:56:55,480 - Epoch: [103][ 100/ 204] Loss 0.665785 mAP 0.900232 +2025-05-16 04:57:47,758 - Epoch: [103][ 200/ 204] Loss 0.669029 mAP 0.900115 +2025-05-16 04:57:48,243 - Epoch: [103][ 204/ 204] Loss 0.667845 mAP 0.900129 +2025-05-16 04:57:48,291 - ==> mAP: 0.90013 Loss: 0.668 + +2025-05-16 04:57:48,297 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 04:57:48,297 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 04:57:48,331 - + +2025-05-16 04:57:48,331 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 04:58:42,989 - Epoch: [104][ 100/ 813] Overall Loss 0.385643 Objective Loss 0.385643 LR 0.000025 Time 0.546546 +2025-05-16 04:59:35,772 - Epoch: [104][ 200/ 813] Overall Loss 0.373731 Objective Loss 0.373731 LR 0.000025 Time 0.537180 +2025-05-16 05:00:27,898 - Epoch: [104][ 300/ 813] Overall Loss 0.397379 Objective Loss 0.397379 LR 0.000025 Time 0.531868 +2025-05-16 05:01:18,696 - Epoch: [104][ 400/ 813] Overall Loss 0.411597 Objective Loss 0.411597 LR 0.000025 Time 0.525891 +2025-05-16 05:02:09,750 - Epoch: [104][ 500/ 813] Overall Loss 0.417867 Objective Loss 0.417867 LR 0.000025 Time 0.522818 +2025-05-16 05:03:03,069 - Epoch: [104][ 600/ 813] Overall Loss 0.421292 Objective Loss 0.421292 LR 0.000025 Time 0.524543 +2025-05-16 05:03:57,802 - Epoch: [104][ 700/ 813] Overall Loss 0.427609 Objective Loss 0.427609 LR 0.000025 Time 0.527796 +2025-05-16 05:04:50,053 - Epoch: [104][ 800/ 813] Overall Loss 0.431733 Objective Loss 0.431733 LR 0.000025 Time 0.527133 +2025-05-16 05:04:55,928 - Epoch: [104][ 813/ 813] Overall Loss 0.432973 Objective Loss 0.432973 LR 
0.000025 Time 0.525929 +2025-05-16 05:04:55,967 - --- validate (epoch=104)----------- +2025-05-16 05:04:55,968 - 3250 samples (16 per mini-batch) +2025-05-16 05:04:55,970 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 05:05:51,903 - Epoch: [104][ 100/ 204] Loss 0.733140 mAP 0.909160 +2025-05-16 05:06:44,722 - Epoch: [104][ 200/ 204] Loss 0.689140 mAP 0.908873 +2025-05-16 05:06:45,221 - Epoch: [104][ 204/ 204] Loss 0.686855 mAP 0.908835 +2025-05-16 05:06:45,265 - ==> mAP: 0.90884 Loss: 0.687 + +2025-05-16 05:06:45,270 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 05:06:45,270 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 05:06:45,297 - + +2025-05-16 05:06:45,298 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 05:07:40,767 - Epoch: [105][ 100/ 813] Overall Loss 0.384285 Objective Loss 0.384285 LR 0.000025 Time 0.554665 +2025-05-16 05:08:33,067 - Epoch: [105][ 200/ 813] Overall Loss 0.384993 Objective Loss 0.384993 LR 0.000025 Time 0.538825 +2025-05-16 05:09:26,241 - Epoch: [105][ 300/ 813] Overall Loss 0.397987 Objective Loss 0.397987 LR 0.000025 Time 0.536458 +2025-05-16 05:10:16,087 - Epoch: [105][ 400/ 813] Overall Loss 0.403390 Objective Loss 0.403390 LR 0.000025 Time 0.526954 +2025-05-16 05:11:09,599 - Epoch: [105][ 500/ 813] Overall Loss 0.409412 Objective Loss 0.409412 LR 0.000025 Time 0.528584 +2025-05-16 05:12:02,140 - Epoch: [105][ 600/ 813] Overall Loss 0.409708 Objective Loss 0.409708 LR 0.000025 Time 0.528053 +2025-05-16 05:12:54,280 - Epoch: [105][ 700/ 813] Overall Loss 0.418312 Objective Loss 0.418312 LR 0.000025 Time 0.527099 +2025-05-16 05:13:47,775 - Epoch: [105][ 800/ 813] Overall Loss 0.424447 Objective Loss 0.424447 LR 0.000025 Time 0.528078 +2025-05-16 05:13:52,709 - Epoch: [105][ 813/ 813] Overall Loss 0.424893 Objective Loss 
0.424893 LR 0.000025 Time 0.525702 +2025-05-16 05:13:52,746 - --- validate (epoch=105)----------- +2025-05-16 05:13:52,747 - 3250 samples (16 per mini-batch) +2025-05-16 05:13:52,748 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 05:14:46,241 - Epoch: [105][ 100/ 204] Loss 0.688025 mAP 0.908832 +2025-05-16 05:15:37,234 - Epoch: [105][ 200/ 204] Loss 0.690371 mAP 0.909136 +2025-05-16 05:15:37,856 - Epoch: [105][ 204/ 204] Loss 0.690858 mAP 0.909158 +2025-05-16 05:15:37,897 - ==> mAP: 0.90916 Loss: 0.691 + +2025-05-16 05:15:37,903 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 05:15:37,903 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 05:15:37,937 - + +2025-05-16 05:15:37,937 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 05:16:34,056 - Epoch: [106][ 100/ 813] Overall Loss 0.399620 Objective Loss 0.399620 LR 0.000025 Time 0.561156 +2025-05-16 05:17:26,209 - Epoch: [106][ 200/ 813] Overall Loss 0.408854 Objective Loss 0.408854 LR 0.000025 Time 0.541333 +2025-05-16 05:18:19,971 - Epoch: [106][ 300/ 813] Overall Loss 0.410582 Objective Loss 0.410582 LR 0.000025 Time 0.540091 +2025-05-16 05:19:10,736 - Epoch: [106][ 400/ 813] Overall Loss 0.418667 Objective Loss 0.418667 LR 0.000025 Time 0.531975 +2025-05-16 05:20:03,844 - Epoch: [106][ 500/ 813] Overall Loss 0.426967 Objective Loss 0.426967 LR 0.000025 Time 0.531794 +2025-05-16 05:20:56,997 - Epoch: [106][ 600/ 813] Overall Loss 0.427127 Objective Loss 0.427127 LR 0.000025 Time 0.531746 +2025-05-16 05:21:49,589 - Epoch: [106][ 700/ 813] Overall Loss 0.433119 Objective Loss 0.433119 LR 0.000025 Time 0.530912 +2025-05-16 05:22:40,310 - Epoch: [106][ 800/ 813] Overall Loss 0.436496 Objective Loss 0.436496 LR 0.000025 Time 0.527946 +2025-05-16 05:22:46,350 - Epoch: [106][ 813/ 813] Overall Loss 0.435894 
Objective Loss 0.435894 LR 0.000025 Time 0.526933 +2025-05-16 05:22:46,386 - --- validate (epoch=106)----------- +2025-05-16 05:22:46,387 - 3250 samples (16 per mini-batch) +2025-05-16 05:22:46,389 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 05:23:40,782 - Epoch: [106][ 100/ 204] Loss 0.653944 mAP 0.900357 +2025-05-16 05:24:33,271 - Epoch: [106][ 200/ 204] Loss 0.677664 mAP 0.900232 +2025-05-16 05:24:33,975 - Epoch: [106][ 204/ 204] Loss 0.675347 mAP 0.900240 +2025-05-16 05:24:34,018 - ==> mAP: 0.90024 Loss: 0.675 + +2025-05-16 05:24:34,023 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 05:24:34,023 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 05:24:34,058 - + +2025-05-16 05:24:34,059 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 05:25:28,510 - Epoch: [107][ 100/ 813] Overall Loss 0.383715 Objective Loss 0.383715 LR 0.000025 Time 0.544487 +2025-05-16 05:26:23,132 - Epoch: [107][ 200/ 813] Overall Loss 0.378135 Objective Loss 0.378135 LR 0.000025 Time 0.545341 +2025-05-16 05:27:15,852 - Epoch: [107][ 300/ 813] Overall Loss 0.398166 Objective Loss 0.398166 LR 0.000025 Time 0.539289 +2025-05-16 05:28:05,017 - Epoch: [107][ 400/ 813] Overall Loss 0.408357 Objective Loss 0.408357 LR 0.000025 Time 0.527376 +2025-05-16 05:28:58,422 - Epoch: [107][ 500/ 813] Overall Loss 0.410000 Objective Loss 0.410000 LR 0.000025 Time 0.528707 +2025-05-16 05:29:50,735 - Epoch: [107][ 600/ 813] Overall Loss 0.410144 Objective Loss 0.410144 LR 0.000025 Time 0.527775 +2025-05-16 05:30:43,682 - Epoch: [107][ 700/ 813] Overall Loss 0.416708 Objective Loss 0.416708 LR 0.000025 Time 0.528014 +2025-05-16 05:31:35,695 - Epoch: [107][ 800/ 813] Overall Loss 0.420398 Objective Loss 0.420398 LR 0.000025 Time 0.527027 +2025-05-16 05:31:41,309 - Epoch: [107][ 813/ 813] Overall Loss 
0.420879 Objective Loss 0.420879 LR 0.000025 Time 0.525505 +2025-05-16 05:31:41,347 - --- validate (epoch=107)----------- +2025-05-16 05:31:41,348 - 3250 samples (16 per mini-batch) +2025-05-16 05:31:41,350 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 05:32:34,800 - Epoch: [107][ 100/ 204] Loss 0.665918 mAP 0.898644 +2025-05-16 05:33:27,152 - Epoch: [107][ 200/ 204] Loss 0.646247 mAP 0.899523 +2025-05-16 05:33:28,112 - Epoch: [107][ 204/ 204] Loss 0.644506 mAP 0.899572 +2025-05-16 05:33:28,159 - ==> mAP: 0.89957 Loss: 0.645 + +2025-05-16 05:33:28,164 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 05:33:28,164 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 05:33:28,199 - + +2025-05-16 05:33:28,199 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 05:34:23,632 - Epoch: [108][ 100/ 813] Overall Loss 0.386167 Objective Loss 0.386167 LR 0.000025 Time 0.554295 +2025-05-16 05:35:17,340 - Epoch: [108][ 200/ 813] Overall Loss 0.392487 Objective Loss 0.392487 LR 0.000025 Time 0.545681 +2025-05-16 05:36:09,621 - Epoch: [108][ 300/ 813] Overall Loss 0.415445 Objective Loss 0.415445 LR 0.000025 Time 0.538051 +2025-05-16 05:37:01,327 - Epoch: [108][ 400/ 813] Overall Loss 0.414889 Objective Loss 0.414889 LR 0.000025 Time 0.532797 +2025-05-16 05:37:53,807 - Epoch: [108][ 500/ 813] Overall Loss 0.411710 Objective Loss 0.411710 LR 0.000025 Time 0.531195 +2025-05-16 05:38:47,049 - Epoch: [108][ 600/ 813] Overall Loss 0.416866 Objective Loss 0.416866 LR 0.000025 Time 0.531397 +2025-05-16 05:39:39,933 - Epoch: [108][ 700/ 813] Overall Loss 0.420850 Objective Loss 0.420850 LR 0.000025 Time 0.531028 +2025-05-16 05:40:32,919 - Epoch: [108][ 800/ 813] Overall Loss 0.424301 Objective Loss 0.424301 LR 0.000025 Time 0.530880 +2025-05-16 05:40:38,183 - Epoch: [108][ 813/ 813] 
Overall Loss 0.425424 Objective Loss 0.425424 LR 0.000025 Time 0.528865 +2025-05-16 05:40:38,222 - --- validate (epoch=108)----------- +2025-05-16 05:40:38,223 - 3250 samples (16 per mini-batch) +2025-05-16 05:40:38,225 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 05:41:33,662 - Epoch: [108][ 100/ 204] Loss 0.662689 mAP 0.899373 +2025-05-16 05:42:25,500 - Epoch: [108][ 200/ 204] Loss 0.682196 mAP 0.899346 +2025-05-16 05:42:26,675 - Epoch: [108][ 204/ 204] Loss 0.677900 mAP 0.899363 +2025-05-16 05:42:26,717 - ==> mAP: 0.89936 Loss: 0.678 + +2025-05-16 05:42:26,722 - ==> Best [mAP: 0.909177 vloss: 0.703166 Params: 368352 on epoch: 92] +2025-05-16 05:42:26,722 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 05:42:26,757 - + +2025-05-16 05:42:26,757 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 05:43:22,376 - Epoch: [109][ 100/ 813] Overall Loss 0.379000 Objective Loss 0.379000 LR 0.000025 Time 0.556165 +2025-05-16 05:44:14,075 - Epoch: [109][ 200/ 813] Overall Loss 0.404722 Objective Loss 0.404722 LR 0.000025 Time 0.536567 +2025-05-16 05:45:06,505 - Epoch: [109][ 300/ 813] Overall Loss 0.416407 Objective Loss 0.416407 LR 0.000025 Time 0.532473 +2025-05-16 05:45:57,224 - Epoch: [109][ 400/ 813] Overall Loss 0.419975 Objective Loss 0.419975 LR 0.000025 Time 0.526147 +2025-05-16 05:46:49,790 - Epoch: [109][ 500/ 813] Overall Loss 0.419887 Objective Loss 0.419887 LR 0.000025 Time 0.526047 +2025-05-16 05:47:41,980 - Epoch: [109][ 600/ 813] Overall Loss 0.420604 Objective Loss 0.420604 LR 0.000025 Time 0.525353 +2025-05-16 05:48:35,047 - Epoch: [109][ 700/ 813] Overall Loss 0.426722 Objective Loss 0.426722 LR 0.000025 Time 0.526110 +2025-05-16 05:49:27,172 - Epoch: [109][ 800/ 813] Overall Loss 0.426092 Objective Loss 0.426092 LR 0.000025 Time 0.525500 +2025-05-16 05:49:32,503 - Epoch: [109][ 
813/ 813] Overall Loss 0.426643 Objective Loss 0.426643 LR 0.000025 Time 0.523654 +2025-05-16 05:49:32,539 - --- validate (epoch=109)----------- +2025-05-16 05:49:32,540 - 3250 samples (16 per mini-batch) +2025-05-16 05:49:32,542 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 05:50:27,279 - Epoch: [109][ 100/ 204] Loss 0.661278 mAP 0.909538 +2025-05-16 05:51:21,223 - Epoch: [109][ 200/ 204] Loss 0.684372 mAP 0.909560 +2025-05-16 05:51:21,881 - Epoch: [109][ 204/ 204] Loss 0.681648 mAP 0.909573 +2025-05-16 05:51:21,922 - ==> mAP: 0.90957 Loss: 0.682 + +2025-05-16 05:51:21,927 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 05:51:21,928 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 05:51:21,966 - + +2025-05-16 05:51:21,966 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 05:52:15,778 - Epoch: [110][ 100/ 813] Overall Loss 0.356007 Objective Loss 0.356007 LR 0.000025 Time 0.538097 +2025-05-16 05:53:08,969 - Epoch: [110][ 200/ 813] Overall Loss 0.364352 Objective Loss 0.364352 LR 0.000025 Time 0.534992 +2025-05-16 05:54:01,549 - Epoch: [110][ 300/ 813] Overall Loss 0.377646 Objective Loss 0.377646 LR 0.000025 Time 0.531924 +2025-05-16 05:54:52,393 - Epoch: [110][ 400/ 813] Overall Loss 0.393268 Objective Loss 0.393268 LR 0.000025 Time 0.526047 +2025-05-16 05:55:46,254 - Epoch: [110][ 500/ 813] Overall Loss 0.399512 Objective Loss 0.399512 LR 0.000025 Time 0.528556 +2025-05-16 05:56:40,393 - Epoch: [110][ 600/ 813] Overall Loss 0.404478 Objective Loss 0.404478 LR 0.000025 Time 0.530693 +2025-05-16 05:57:31,943 - Epoch: [110][ 700/ 813] Overall Loss 0.410237 Objective Loss 0.410237 LR 0.000025 Time 0.528520 +2025-05-16 05:58:23,203 - Epoch: [110][ 800/ 813] Overall Loss 0.413832 Objective Loss 0.413832 LR 0.000025 Time 0.526524 +2025-05-16 05:58:28,746 - 
Epoch: [110][ 813/ 813] Overall Loss 0.415040 Objective Loss 0.415040 LR 0.000025 Time 0.524922 +2025-05-16 05:58:28,790 - --- validate (epoch=110)----------- +2025-05-16 05:58:28,791 - 3250 samples (16 per mini-batch) +2025-05-16 05:58:28,793 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 05:59:24,434 - Epoch: [110][ 100/ 204] Loss 0.708079 mAP 0.899236 +2025-05-16 06:00:17,306 - Epoch: [110][ 200/ 204] Loss 0.674279 mAP 0.899540 +2025-05-16 06:00:17,813 - Epoch: [110][ 204/ 204] Loss 0.673097 mAP 0.899550 +2025-05-16 06:00:17,850 - ==> mAP: 0.89955 Loss: 0.673 + +2025-05-16 06:00:17,855 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 06:00:17,855 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 06:00:17,891 - + +2025-05-16 06:00:17,891 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 06:01:12,741 - Epoch: [111][ 100/ 813] Overall Loss 0.365746 Objective Loss 0.365746 LR 0.000025 Time 0.548474 +2025-05-16 06:02:05,737 - Epoch: [111][ 200/ 813] Overall Loss 0.372628 Objective Loss 0.372628 LR 0.000025 Time 0.539208 +2025-05-16 06:02:58,334 - Epoch: [111][ 300/ 813] Overall Loss 0.395770 Objective Loss 0.395770 LR 0.000025 Time 0.534781 +2025-05-16 06:03:50,015 - Epoch: [111][ 400/ 813] Overall Loss 0.411734 Objective Loss 0.411734 LR 0.000025 Time 0.530284 +2025-05-16 06:04:42,183 - Epoch: [111][ 500/ 813] Overall Loss 0.410657 Objective Loss 0.410657 LR 0.000025 Time 0.528558 +2025-05-16 06:05:34,281 - Epoch: [111][ 600/ 813] Overall Loss 0.413791 Objective Loss 0.413791 LR 0.000025 Time 0.527287 +2025-05-16 06:06:28,206 - Epoch: [111][ 700/ 813] Overall Loss 0.414209 Objective Loss 0.414209 LR 0.000025 Time 0.528993 +2025-05-16 06:07:19,012 - Epoch: [111][ 800/ 813] Overall Loss 0.416729 Objective Loss 0.416729 LR 0.000025 Time 0.526375 +2025-05-16 
06:07:24,911 - Epoch: [111][ 813/ 813] Overall Loss 0.416976 Objective Loss 0.416976 LR 0.000025 Time 0.525212 +2025-05-16 06:07:24,953 - --- validate (epoch=111)----------- +2025-05-16 06:07:24,953 - 3250 samples (16 per mini-batch) +2025-05-16 06:07:24,955 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 06:08:19,852 - Epoch: [111][ 100/ 204] Loss 0.691125 mAP 0.899192 +2025-05-16 06:09:12,515 - Epoch: [111][ 200/ 204] Loss 0.671974 mAP 0.899632 +2025-05-16 06:09:13,224 - Epoch: [111][ 204/ 204] Loss 0.670114 mAP 0.899661 +2025-05-16 06:09:13,265 - ==> mAP: 0.89966 Loss: 0.670 + +2025-05-16 06:09:13,270 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 06:09:13,270 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 06:09:13,304 - + +2025-05-16 06:09:13,305 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 06:10:08,165 - Epoch: [112][ 100/ 813] Overall Loss 0.347916 Objective Loss 0.347916 LR 0.000025 Time 0.548571 +2025-05-16 06:11:00,400 - Epoch: [112][ 200/ 813] Overall Loss 0.375732 Objective Loss 0.375732 LR 0.000025 Time 0.535455 +2025-05-16 06:11:53,438 - Epoch: [112][ 300/ 813] Overall Loss 0.398376 Objective Loss 0.398376 LR 0.000025 Time 0.533756 +2025-05-16 06:12:43,643 - Epoch: [112][ 400/ 813] Overall Loss 0.404151 Objective Loss 0.404151 LR 0.000025 Time 0.525825 +2025-05-16 06:13:37,233 - Epoch: [112][ 500/ 813] Overall Loss 0.413691 Objective Loss 0.413691 LR 0.000025 Time 0.527836 +2025-05-16 06:14:29,517 - Epoch: [112][ 600/ 813] Overall Loss 0.419570 Objective Loss 0.419570 LR 0.000025 Time 0.527001 +2025-05-16 06:15:23,539 - Epoch: [112][ 700/ 813] Overall Loss 0.425040 Objective Loss 0.425040 LR 0.000025 Time 0.528887 +2025-05-16 06:16:13,994 - Epoch: [112][ 800/ 813] Overall Loss 0.429554 Objective Loss 0.429554 LR 0.000025 Time 0.525843 
+2025-05-16 06:16:20,138 - Epoch: [112][ 813/ 813] Overall Loss 0.431453 Objective Loss 0.431453 LR 0.000025 Time 0.524991 +2025-05-16 06:16:20,177 - --- validate (epoch=112)----------- +2025-05-16 06:16:20,178 - 3250 samples (16 per mini-batch) +2025-05-16 06:16:20,180 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 06:17:15,416 - Epoch: [112][ 100/ 204] Loss 0.709957 mAP 0.898727 +2025-05-16 06:18:08,631 - Epoch: [112][ 200/ 204] Loss 0.706712 mAP 0.898354 +2025-05-16 06:18:09,501 - Epoch: [112][ 204/ 204] Loss 0.710421 mAP 0.898381 +2025-05-16 06:18:09,544 - ==> mAP: 0.89838 Loss: 0.710 + +2025-05-16 06:18:09,549 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 06:18:09,549 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 06:18:09,584 - + +2025-05-16 06:18:09,584 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 06:19:04,762 - Epoch: [113][ 100/ 813] Overall Loss 0.391060 Objective Loss 0.391060 LR 0.000025 Time 0.551752 +2025-05-16 06:19:59,296 - Epoch: [113][ 200/ 813] Overall Loss 0.396469 Objective Loss 0.396469 LR 0.000025 Time 0.548538 +2025-05-16 06:20:51,069 - Epoch: [113][ 300/ 813] Overall Loss 0.406492 Objective Loss 0.406492 LR 0.000025 Time 0.538261 +2025-05-16 06:21:41,570 - Epoch: [113][ 400/ 813] Overall Loss 0.416333 Objective Loss 0.416333 LR 0.000025 Time 0.529945 +2025-05-16 06:22:33,472 - Epoch: [113][ 500/ 813] Overall Loss 0.418230 Objective Loss 0.418230 LR 0.000025 Time 0.527756 +2025-05-16 06:23:26,764 - Epoch: [113][ 600/ 813] Overall Loss 0.424129 Objective Loss 0.424129 LR 0.000025 Time 0.528614 +2025-05-16 06:24:21,463 - Epoch: [113][ 700/ 813] Overall Loss 0.424595 Objective Loss 0.424595 LR 0.000025 Time 0.531236 +2025-05-16 06:25:13,284 - Epoch: [113][ 800/ 813] Overall Loss 0.427755 Objective Loss 0.427755 LR 0.000025 Time 
0.529605 +2025-05-16 06:25:18,465 - Epoch: [113][ 813/ 813] Overall Loss 0.427850 Objective Loss 0.427850 LR 0.000025 Time 0.527510 +2025-05-16 06:25:18,502 - --- validate (epoch=113)----------- +2025-05-16 06:25:18,503 - 3250 samples (16 per mini-batch) +2025-05-16 06:25:18,504 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 06:26:13,911 - Epoch: [113][ 100/ 204] Loss 0.710134 mAP 0.910013 +2025-05-16 06:27:07,469 - Epoch: [113][ 200/ 204] Loss 0.678005 mAP 0.909377 +2025-05-16 06:27:08,131 - Epoch: [113][ 204/ 204] Loss 0.680347 mAP 0.909369 +2025-05-16 06:27:08,171 - ==> mAP: 0.90937 Loss: 0.680 + +2025-05-16 06:27:08,176 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 06:27:08,176 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 06:27:08,211 - + +2025-05-16 06:27:08,212 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 06:28:03,932 - Epoch: [114][ 100/ 813] Overall Loss 0.372232 Objective Loss 0.372232 LR 0.000025 Time 0.557176 +2025-05-16 06:28:56,195 - Epoch: [114][ 200/ 813] Overall Loss 0.391170 Objective Loss 0.391170 LR 0.000025 Time 0.539894 +2025-05-16 06:29:48,897 - Epoch: [114][ 300/ 813] Overall Loss 0.411434 Objective Loss 0.411434 LR 0.000025 Time 0.535597 +2025-05-16 06:30:38,700 - Epoch: [114][ 400/ 813] Overall Loss 0.421786 Objective Loss 0.421786 LR 0.000025 Time 0.526200 +2025-05-16 06:31:31,117 - Epoch: [114][ 500/ 813] Overall Loss 0.420695 Objective Loss 0.420695 LR 0.000025 Time 0.525791 +2025-05-16 06:32:23,905 - Epoch: [114][ 600/ 813] Overall Loss 0.426827 Objective Loss 0.426827 LR 0.000025 Time 0.526137 +2025-05-16 06:33:17,225 - Epoch: [114][ 700/ 813] Overall Loss 0.433508 Objective Loss 0.433508 LR 0.000025 Time 0.527135 +2025-05-16 06:34:08,447 - Epoch: [114][ 800/ 813] Overall Loss 0.435309 Objective Loss 0.435309 LR 
0.000025 Time 0.525269 +2025-05-16 06:34:14,353 - Epoch: [114][ 813/ 813] Overall Loss 0.435648 Objective Loss 0.435648 LR 0.000025 Time 0.524133 +2025-05-16 06:34:14,393 - --- validate (epoch=114)----------- +2025-05-16 06:34:14,394 - 3250 samples (16 per mini-batch) +2025-05-16 06:34:14,396 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 06:35:08,606 - Epoch: [114][ 100/ 204] Loss 0.680649 mAP 0.890177 +2025-05-16 06:36:01,401 - Epoch: [114][ 200/ 204] Loss 0.674555 mAP 0.889606 +2025-05-16 06:36:02,416 - Epoch: [114][ 204/ 204] Loss 0.673079 mAP 0.889636 +2025-05-16 06:36:02,457 - ==> mAP: 0.88964 Loss: 0.673 + +2025-05-16 06:36:02,462 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 06:36:02,462 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 06:36:02,496 - + +2025-05-16 06:36:02,496 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 06:36:57,408 - Epoch: [115][ 100/ 813] Overall Loss 0.368149 Objective Loss 0.368149 LR 0.000025 Time 0.549088 +2025-05-16 06:37:50,517 - Epoch: [115][ 200/ 813] Overall Loss 0.377722 Objective Loss 0.377722 LR 0.000025 Time 0.540081 +2025-05-16 06:38:42,937 - Epoch: [115][ 300/ 813] Overall Loss 0.381140 Objective Loss 0.381140 LR 0.000025 Time 0.534781 +2025-05-16 06:39:32,316 - Epoch: [115][ 400/ 813] Overall Loss 0.393180 Objective Loss 0.393180 LR 0.000025 Time 0.524529 +2025-05-16 06:40:24,615 - Epoch: [115][ 500/ 813] Overall Loss 0.402527 Objective Loss 0.402527 LR 0.000025 Time 0.524218 +2025-05-16 06:41:18,740 - Epoch: [115][ 600/ 813] Overall Loss 0.405554 Objective Loss 0.405554 LR 0.000025 Time 0.527053 +2025-05-16 06:42:11,040 - Epoch: [115][ 700/ 813] Overall Loss 0.407614 Objective Loss 0.407614 LR 0.000025 Time 0.526471 +2025-05-16 06:43:05,244 - Epoch: [115][ 800/ 813] Overall Loss 0.410790 Objective Loss 
0.410790 LR 0.000025 Time 0.528415 +2025-05-16 06:43:09,978 - Epoch: [115][ 813/ 813] Overall Loss 0.411855 Objective Loss 0.411855 LR 0.000025 Time 0.525789 +2025-05-16 06:43:10,023 - --- validate (epoch=115)----------- +2025-05-16 06:43:10,024 - 3250 samples (16 per mini-batch) +2025-05-16 06:43:10,026 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 06:44:04,364 - Epoch: [115][ 100/ 204] Loss 0.666779 mAP 0.899653 +2025-05-16 06:44:56,131 - Epoch: [115][ 200/ 204] Loss 0.683470 mAP 0.900084 +2025-05-16 06:44:57,079 - Epoch: [115][ 204/ 204] Loss 0.680763 mAP 0.900083 +2025-05-16 06:44:57,122 - ==> mAP: 0.90008 Loss: 0.681 + +2025-05-16 06:44:57,127 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 06:44:57,127 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 06:44:57,162 - + +2025-05-16 06:44:57,163 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 06:45:54,354 - Epoch: [116][ 100/ 813] Overall Loss 0.417253 Objective Loss 0.417253 LR 0.000025 Time 0.571881 +2025-05-16 06:46:46,941 - Epoch: [116][ 200/ 813] Overall Loss 0.438653 Objective Loss 0.438653 LR 0.000025 Time 0.548870 +2025-05-16 06:47:39,093 - Epoch: [116][ 300/ 813] Overall Loss 0.438911 Objective Loss 0.438911 LR 0.000025 Time 0.539743 +2025-05-16 06:48:29,371 - Epoch: [116][ 400/ 813] Overall Loss 0.441589 Objective Loss 0.441589 LR 0.000025 Time 0.530498 +2025-05-16 06:49:20,636 - Epoch: [116][ 500/ 813] Overall Loss 0.444659 Objective Loss 0.444659 LR 0.000025 Time 0.526925 +2025-05-16 06:50:14,004 - Epoch: [116][ 600/ 813] Overall Loss 0.448424 Objective Loss 0.448424 LR 0.000025 Time 0.528048 +2025-05-16 06:51:06,473 - Epoch: [116][ 700/ 813] Overall Loss 0.450434 Objective Loss 0.450434 LR 0.000025 Time 0.527565 +2025-05-16 06:51:58,546 - Epoch: [116][ 800/ 813] Overall Loss 0.449676 
Objective Loss 0.449676 LR 0.000025 Time 0.526709 +2025-05-16 06:52:03,911 - Epoch: [116][ 813/ 813] Overall Loss 0.450820 Objective Loss 0.450820 LR 0.000025 Time 0.524885 +2025-05-16 06:52:03,953 - --- validate (epoch=116)----------- +2025-05-16 06:52:03,953 - 3250 samples (16 per mini-batch) +2025-05-16 06:52:03,955 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 06:53:00,189 - Epoch: [116][ 100/ 204] Loss 0.696659 mAP 0.898826 +2025-05-16 06:53:53,476 - Epoch: [116][ 200/ 204] Loss 0.682570 mAP 0.907701 +2025-05-16 06:53:53,979 - Epoch: [116][ 204/ 204] Loss 0.679045 mAP 0.907741 +2025-05-16 06:53:54,021 - ==> mAP: 0.90774 Loss: 0.679 + +2025-05-16 06:53:54,026 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 06:53:54,026 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 06:53:54,062 - + +2025-05-16 06:53:54,062 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 06:54:48,361 - Epoch: [117][ 100/ 813] Overall Loss 0.396616 Objective Loss 0.396616 LR 0.000025 Time 0.542964 +2025-05-16 06:55:43,215 - Epoch: [117][ 200/ 813] Overall Loss 0.425333 Objective Loss 0.425333 LR 0.000025 Time 0.545742 +2025-05-16 06:56:34,660 - Epoch: [117][ 300/ 813] Overall Loss 0.426474 Objective Loss 0.426474 LR 0.000025 Time 0.535303 +2025-05-16 06:57:25,597 - Epoch: [117][ 400/ 813] Overall Loss 0.423473 Objective Loss 0.423473 LR 0.000025 Time 0.528809 +2025-05-16 06:58:16,413 - Epoch: [117][ 500/ 813] Overall Loss 0.419097 Objective Loss 0.419097 LR 0.000025 Time 0.524676 +2025-05-16 06:59:09,653 - Epoch: [117][ 600/ 813] Overall Loss 0.422518 Objective Loss 0.422518 LR 0.000025 Time 0.525961 +2025-05-16 07:00:02,717 - Epoch: [117][ 700/ 813] Overall Loss 0.419816 Objective Loss 0.419816 LR 0.000025 Time 0.526627 +2025-05-16 07:00:55,178 - Epoch: [117][ 800/ 813] Overall Loss 
0.419412 Objective Loss 0.419412 LR 0.000025 Time 0.526371 +2025-05-16 07:01:00,814 - Epoch: [117][ 813/ 813] Overall Loss 0.420502 Objective Loss 0.420502 LR 0.000025 Time 0.524888 +2025-05-16 07:01:00,853 - --- validate (epoch=117)----------- +2025-05-16 07:01:00,854 - 3250 samples (16 per mini-batch) +2025-05-16 07:01:00,856 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 07:01:54,790 - Epoch: [117][ 100/ 204] Loss 0.705152 mAP 0.888847 +2025-05-16 07:02:47,554 - Epoch: [117][ 200/ 204] Loss 0.691690 mAP 0.889286 +2025-05-16 07:02:48,031 - Epoch: [117][ 204/ 204] Loss 0.695261 mAP 0.889197 +2025-05-16 07:02:48,075 - ==> mAP: 0.88920 Loss: 0.695 + +2025-05-16 07:02:48,080 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 07:02:48,080 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 07:02:48,114 - + +2025-05-16 07:02:48,114 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 07:03:42,984 - Epoch: [118][ 100/ 813] Overall Loss 0.418216 Objective Loss 0.418216 LR 0.000025 Time 0.548672 +2025-05-16 07:04:35,749 - Epoch: [118][ 200/ 813] Overall Loss 0.422191 Objective Loss 0.422191 LR 0.000025 Time 0.538151 +2025-05-16 07:05:27,438 - Epoch: [118][ 300/ 813] Overall Loss 0.428196 Objective Loss 0.428196 LR 0.000025 Time 0.531038 +2025-05-16 07:06:18,126 - Epoch: [118][ 400/ 813] Overall Loss 0.422263 Objective Loss 0.422263 LR 0.000025 Time 0.524979 +2025-05-16 07:07:10,034 - Epoch: [118][ 500/ 813] Overall Loss 0.420119 Objective Loss 0.420119 LR 0.000025 Time 0.523795 +2025-05-16 07:08:02,890 - Epoch: [118][ 600/ 813] Overall Loss 0.421039 Objective Loss 0.421039 LR 0.000025 Time 0.524585 +2025-05-16 07:08:55,345 - Epoch: [118][ 700/ 813] Overall Loss 0.422763 Objective Loss 0.422763 LR 0.000025 Time 0.524579 +2025-05-16 07:09:47,743 - Epoch: [118][ 800/ 813] 
Overall Loss 0.426423 Objective Loss 0.426423 LR 0.000025 Time 0.524497 +2025-05-16 07:09:53,446 - Epoch: [118][ 813/ 813] Overall Loss 0.427425 Objective Loss 0.427425 LR 0.000025 Time 0.523125 +2025-05-16 07:09:53,491 - --- validate (epoch=118)----------- +2025-05-16 07:09:53,492 - 3250 samples (16 per mini-batch) +2025-05-16 07:09:53,493 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 07:10:50,669 - Epoch: [118][ 100/ 204] Loss 0.663210 mAP 0.908345 +2025-05-16 07:11:42,858 - Epoch: [118][ 200/ 204] Loss 0.656644 mAP 0.898456 +2025-05-16 07:11:43,408 - Epoch: [118][ 204/ 204] Loss 0.656112 mAP 0.908217 +2025-05-16 07:11:43,454 - ==> mAP: 0.90822 Loss: 0.656 + +2025-05-16 07:11:43,459 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 07:11:43,459 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 07:11:43,494 - + +2025-05-16 07:11:43,494 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 07:12:38,211 - Epoch: [119][ 100/ 813] Overall Loss 0.367138 Objective Loss 0.367138 LR 0.000025 Time 0.547137 +2025-05-16 07:13:30,798 - Epoch: [119][ 200/ 813] Overall Loss 0.385783 Objective Loss 0.385783 LR 0.000025 Time 0.536491 +2025-05-16 07:14:25,932 - Epoch: [119][ 300/ 813] Overall Loss 0.397429 Objective Loss 0.397429 LR 0.000025 Time 0.541434 +2025-05-16 07:15:14,770 - Epoch: [119][ 400/ 813] Overall Loss 0.404838 Objective Loss 0.404838 LR 0.000025 Time 0.528168 +2025-05-16 07:16:08,124 - Epoch: [119][ 500/ 813] Overall Loss 0.406018 Objective Loss 0.406018 LR 0.000025 Time 0.529238 +2025-05-16 07:17:01,070 - Epoch: [119][ 600/ 813] Overall Loss 0.410883 Objective Loss 0.410883 LR 0.000025 Time 0.529273 +2025-05-16 07:17:55,280 - Epoch: [119][ 700/ 813] Overall Loss 0.411963 Objective Loss 0.411963 LR 0.000025 Time 0.531103 +2025-05-16 07:18:45,846 - Epoch: [119][ 
800/ 813] Overall Loss 0.413720 Objective Loss 0.413720 LR 0.000025 Time 0.527916 +2025-05-16 07:18:51,205 - Epoch: [119][ 813/ 813] Overall Loss 0.413663 Objective Loss 0.413663 LR 0.000025 Time 0.526045 +2025-05-16 07:18:51,245 - --- validate (epoch=119)----------- +2025-05-16 07:18:51,245 - 3250 samples (16 per mini-batch) +2025-05-16 07:18:51,247 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 07:19:44,664 - Epoch: [119][ 100/ 204] Loss 0.716968 mAP 0.889547 +2025-05-16 07:20:37,646 - Epoch: [119][ 200/ 204] Loss 0.701210 mAP 0.899098 +2025-05-16 07:20:38,700 - Epoch: [119][ 204/ 204] Loss 0.698687 mAP 0.899130 +2025-05-16 07:20:38,745 - ==> mAP: 0.89913 Loss: 0.699 + +2025-05-16 07:20:38,749 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 07:20:38,750 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 07:20:38,784 - + +2025-05-16 07:20:38,784 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 07:21:32,796 - Epoch: [120][ 100/ 813] Overall Loss 0.391902 Objective Loss 0.391902 LR 0.000025 Time 0.540091 +2025-05-16 07:22:26,812 - Epoch: [120][ 200/ 813] Overall Loss 0.391554 Objective Loss 0.391554 LR 0.000025 Time 0.540117 +2025-05-16 07:23:18,627 - Epoch: [120][ 300/ 813] Overall Loss 0.409420 Objective Loss 0.409420 LR 0.000025 Time 0.532789 +2025-05-16 07:24:08,922 - Epoch: [120][ 400/ 813] Overall Loss 0.417574 Objective Loss 0.417574 LR 0.000025 Time 0.525324 +2025-05-16 07:25:00,991 - Epoch: [120][ 500/ 813] Overall Loss 0.420600 Objective Loss 0.420600 LR 0.000025 Time 0.524394 +2025-05-16 07:25:54,173 - Epoch: [120][ 600/ 813] Overall Loss 0.419881 Objective Loss 0.419881 LR 0.000025 Time 0.525626 +2025-05-16 07:26:46,516 - Epoch: [120][ 700/ 813] Overall Loss 0.424240 Objective Loss 0.424240 LR 0.000025 Time 0.525310 +2025-05-16 07:27:39,271 - 
Epoch: [120][ 800/ 813] Overall Loss 0.427769 Objective Loss 0.427769 LR 0.000025 Time 0.525587 +2025-05-16 07:27:45,074 - Epoch: [120][ 813/ 813] Overall Loss 0.427600 Objective Loss 0.427600 LR 0.000025 Time 0.524320 +2025-05-16 07:27:45,113 - --- validate (epoch=120)----------- +2025-05-16 07:27:45,114 - 3250 samples (16 per mini-batch) +2025-05-16 07:27:45,116 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 07:28:39,769 - Epoch: [120][ 100/ 204] Loss 0.659926 mAP 0.899442 +2025-05-16 07:29:32,948 - Epoch: [120][ 200/ 204] Loss 0.661245 mAP 0.899418 +2025-05-16 07:29:33,431 - Epoch: [120][ 204/ 204] Loss 0.660937 mAP 0.899422 +2025-05-16 07:29:33,473 - ==> mAP: 0.89942 Loss: 0.661 + +2025-05-16 07:29:33,478 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 07:29:33,478 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 07:29:33,513 - + +2025-05-16 07:29:33,513 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 07:30:28,991 - Epoch: [121][ 100/ 813] Overall Loss 0.371514 Objective Loss 0.371514 LR 0.000025 Time 0.554753 +2025-05-16 07:31:21,377 - Epoch: [121][ 200/ 813] Overall Loss 0.406732 Objective Loss 0.406732 LR 0.000025 Time 0.539291 +2025-05-16 07:32:13,193 - Epoch: [121][ 300/ 813] Overall Loss 0.421956 Objective Loss 0.421956 LR 0.000025 Time 0.532242 +2025-05-16 07:33:04,185 - Epoch: [121][ 400/ 813] Overall Loss 0.432031 Objective Loss 0.432031 LR 0.000025 Time 0.526656 +2025-05-16 07:33:57,140 - Epoch: [121][ 500/ 813] Overall Loss 0.435763 Objective Loss 0.435763 LR 0.000025 Time 0.527232 +2025-05-16 07:34:51,078 - Epoch: [121][ 600/ 813] Overall Loss 0.435799 Objective Loss 0.435799 LR 0.000025 Time 0.529253 +2025-05-16 07:35:43,156 - Epoch: [121][ 700/ 813] Overall Loss 0.436967 Objective Loss 0.436967 LR 0.000025 Time 0.528040 +2025-05-16 
07:36:36,783 - Epoch: [121][ 800/ 813] Overall Loss 0.436978 Objective Loss 0.436978 LR 0.000025 Time 0.529067 +2025-05-16 07:36:40,652 - Epoch: [121][ 813/ 813] Overall Loss 0.439223 Objective Loss 0.439223 LR 0.000025 Time 0.525366 +2025-05-16 07:36:40,688 - --- validate (epoch=121)----------- +2025-05-16 07:36:40,688 - 3250 samples (16 per mini-batch) +2025-05-16 07:36:40,690 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 07:37:35,348 - Epoch: [121][ 100/ 204] Loss 0.691897 mAP 0.898491 +2025-05-16 07:38:28,098 - Epoch: [121][ 200/ 204] Loss 0.674465 mAP 0.908892 +2025-05-16 07:38:28,595 - Epoch: [121][ 204/ 204] Loss 0.675568 mAP 0.908927 +2025-05-16 07:38:28,637 - ==> mAP: 0.90893 Loss: 0.676 + +2025-05-16 07:38:28,642 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 07:38:28,642 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 07:38:28,676 - + +2025-05-16 07:38:28,677 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 07:39:23,132 - Epoch: [122][ 100/ 813] Overall Loss 0.366367 Objective Loss 0.366367 LR 0.000025 Time 0.544522 +2025-05-16 07:40:15,696 - Epoch: [122][ 200/ 813] Overall Loss 0.382539 Objective Loss 0.382539 LR 0.000025 Time 0.535073 +2025-05-16 07:41:09,582 - Epoch: [122][ 300/ 813] Overall Loss 0.398457 Objective Loss 0.398457 LR 0.000025 Time 0.536322 +2025-05-16 07:41:59,673 - Epoch: [122][ 400/ 813] Overall Loss 0.397499 Objective Loss 0.397499 LR 0.000025 Time 0.527463 +2025-05-16 07:42:53,906 - Epoch: [122][ 500/ 813] Overall Loss 0.392791 Objective Loss 0.392791 LR 0.000025 Time 0.530434 +2025-05-16 07:43:46,148 - Epoch: [122][ 600/ 813] Overall Loss 0.399447 Objective Loss 0.399447 LR 0.000025 Time 0.529096 +2025-05-16 07:44:39,489 - Epoch: [122][ 700/ 813] Overall Loss 0.407900 Objective Loss 0.407900 LR 0.000025 Time 0.529696 
+2025-05-16 07:45:30,965 - Epoch: [122][ 800/ 813] Overall Loss 0.412386 Objective Loss 0.412386 LR 0.000025 Time 0.527827 +2025-05-16 07:45:36,104 - Epoch: [122][ 813/ 813] Overall Loss 0.414464 Objective Loss 0.414464 LR 0.000025 Time 0.525707 +2025-05-16 07:45:36,153 - --- validate (epoch=122)----------- +2025-05-16 07:45:36,154 - 3250 samples (16 per mini-batch) +2025-05-16 07:45:36,156 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 07:46:32,165 - Epoch: [122][ 100/ 204] Loss 0.640365 mAP 0.899945 +2025-05-16 07:47:24,865 - Epoch: [122][ 200/ 204] Loss 0.679209 mAP 0.899127 +2025-05-16 07:47:26,374 - Epoch: [122][ 204/ 204] Loss 0.676923 mAP 0.899113 +2025-05-16 07:47:26,420 - ==> mAP: 0.89911 Loss: 0.677 + +2025-05-16 07:47:26,425 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 07:47:26,425 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 07:47:26,459 - + +2025-05-16 07:47:26,459 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 07:48:21,348 - Epoch: [123][ 100/ 813] Overall Loss 0.409937 Objective Loss 0.409937 LR 0.000025 Time 0.548854 +2025-05-16 07:49:14,801 - Epoch: [123][ 200/ 813] Overall Loss 0.409499 Objective Loss 0.409499 LR 0.000025 Time 0.541683 +2025-05-16 07:50:06,707 - Epoch: [123][ 300/ 813] Overall Loss 0.398655 Objective Loss 0.398655 LR 0.000025 Time 0.534139 +2025-05-16 07:50:57,391 - Epoch: [123][ 400/ 813] Overall Loss 0.406622 Objective Loss 0.406622 LR 0.000025 Time 0.527309 +2025-05-16 07:51:50,889 - Epoch: [123][ 500/ 813] Overall Loss 0.405763 Objective Loss 0.405763 LR 0.000025 Time 0.528806 +2025-05-16 07:52:43,420 - Epoch: [123][ 600/ 813] Overall Loss 0.408751 Objective Loss 0.408751 LR 0.000025 Time 0.528216 +2025-05-16 07:53:35,509 - Epoch: [123][ 700/ 813] Overall Loss 0.409827 Objective Loss 0.409827 LR 0.000025 Time 
0.527167 +2025-05-16 07:54:28,055 - Epoch: [123][ 800/ 813] Overall Loss 0.410487 Objective Loss 0.410487 LR 0.000025 Time 0.526947 +2025-05-16 07:54:33,596 - Epoch: [123][ 813/ 813] Overall Loss 0.410852 Objective Loss 0.410852 LR 0.000025 Time 0.525337 +2025-05-16 07:54:33,639 - --- validate (epoch=123)----------- +2025-05-16 07:54:33,639 - 3250 samples (16 per mini-batch) +2025-05-16 07:54:33,641 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 07:55:27,864 - Epoch: [123][ 100/ 204] Loss 0.682830 mAP 0.889175 +2025-05-16 07:56:22,421 - Epoch: [123][ 200/ 204] Loss 0.647368 mAP 0.889530 +2025-05-16 07:56:23,316 - Epoch: [123][ 204/ 204] Loss 0.648798 mAP 0.889554 +2025-05-16 07:56:23,358 - ==> mAP: 0.88955 Loss: 0.649 + +2025-05-16 07:56:23,363 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 07:56:23,363 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 07:56:23,398 - + +2025-05-16 07:56:23,398 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 07:57:20,063 - Epoch: [124][ 100/ 813] Overall Loss 0.346870 Objective Loss 0.346870 LR 0.000025 Time 0.566622 +2025-05-16 07:58:11,901 - Epoch: [124][ 200/ 813] Overall Loss 0.387626 Objective Loss 0.387626 LR 0.000025 Time 0.542491 +2025-05-16 07:59:04,212 - Epoch: [124][ 300/ 813] Overall Loss 0.416464 Objective Loss 0.416464 LR 0.000025 Time 0.536025 +2025-05-16 07:59:54,816 - Epoch: [124][ 400/ 813] Overall Loss 0.424857 Objective Loss 0.424857 LR 0.000025 Time 0.528526 +2025-05-16 08:00:47,044 - Epoch: [124][ 500/ 813] Overall Loss 0.423324 Objective Loss 0.423324 LR 0.000025 Time 0.527273 +2025-05-16 08:01:39,654 - Epoch: [124][ 600/ 813] Overall Loss 0.423519 Objective Loss 0.423519 LR 0.000025 Time 0.527074 +2025-05-16 08:02:31,613 - Epoch: [124][ 700/ 813] Overall Loss 0.425821 Objective Loss 0.425821 LR 
0.000025 Time 0.526003 +2025-05-16 08:03:27,080 - Epoch: [124][ 800/ 813] Overall Loss 0.430094 Objective Loss 0.430094 LR 0.000025 Time 0.529577 +2025-05-16 08:03:30,880 - Epoch: [124][ 813/ 813] Overall Loss 0.430583 Objective Loss 0.430583 LR 0.000025 Time 0.525782 +2025-05-16 08:03:30,921 - --- validate (epoch=124)----------- +2025-05-16 08:03:30,921 - 3250 samples (16 per mini-batch) +2025-05-16 08:03:30,923 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 08:04:24,870 - Epoch: [124][ 100/ 204] Loss 0.649043 mAP 0.909369 +2025-05-16 08:05:17,370 - Epoch: [124][ 200/ 204] Loss 0.697728 mAP 0.899273 +2025-05-16 08:05:18,028 - Epoch: [124][ 204/ 204] Loss 0.700178 mAP 0.899247 +2025-05-16 08:05:18,072 - ==> mAP: 0.89925 Loss: 0.700 + +2025-05-16 08:05:18,077 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 08:05:18,077 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 08:05:18,112 - + +2025-05-16 08:05:18,112 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 08:06:14,332 - Epoch: [125][ 100/ 813] Overall Loss 0.342521 Objective Loss 0.342521 LR 0.000025 Time 0.562178 +2025-05-16 08:07:08,960 - Epoch: [125][ 200/ 813] Overall Loss 0.361475 Objective Loss 0.361475 LR 0.000025 Time 0.554220 +2025-05-16 08:08:00,366 - Epoch: [125][ 300/ 813] Overall Loss 0.388864 Objective Loss 0.388864 LR 0.000025 Time 0.540825 +2025-05-16 08:08:51,045 - Epoch: [125][ 400/ 813] Overall Loss 0.394041 Objective Loss 0.394041 LR 0.000025 Time 0.532306 +2025-05-16 08:09:43,869 - Epoch: [125][ 500/ 813] Overall Loss 0.397120 Objective Loss 0.397120 LR 0.000025 Time 0.531490 +2025-05-16 08:10:36,680 - Epoch: [125][ 600/ 813] Overall Loss 0.398723 Objective Loss 0.398723 LR 0.000025 Time 0.530923 +2025-05-16 08:11:30,470 - Epoch: [125][ 700/ 813] Overall Loss 0.405809 Objective Loss 
0.405809 LR 0.000025 Time 0.531918 +2025-05-16 08:12:21,551 - Epoch: [125][ 800/ 813] Overall Loss 0.409658 Objective Loss 0.409658 LR 0.000025 Time 0.529277 +2025-05-16 08:12:27,257 - Epoch: [125][ 813/ 813] Overall Loss 0.411156 Objective Loss 0.411156 LR 0.000025 Time 0.527810 +2025-05-16 08:12:27,309 - --- validate (epoch=125)----------- +2025-05-16 08:12:27,310 - 3250 samples (16 per mini-batch) +2025-05-16 08:12:27,312 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 08:13:22,850 - Epoch: [125][ 100/ 204] Loss 0.671965 mAP 0.898236 +2025-05-16 08:14:14,694 - Epoch: [125][ 200/ 204] Loss 0.661981 mAP 0.898446 +2025-05-16 08:14:16,048 - Epoch: [125][ 204/ 204] Loss 0.658244 mAP 0.898501 +2025-05-16 08:14:16,091 - ==> mAP: 0.89850 Loss: 0.658 + +2025-05-16 08:14:16,096 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 08:14:16,096 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 08:14:16,130 - + +2025-05-16 08:14:16,130 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 08:15:11,015 - Epoch: [126][ 100/ 813] Overall Loss 0.378199 Objective Loss 0.378199 LR 0.000025 Time 0.548821 +2025-05-16 08:16:05,273 - Epoch: [126][ 200/ 813] Overall Loss 0.390216 Objective Loss 0.390216 LR 0.000025 Time 0.545687 +2025-05-16 08:16:58,404 - Epoch: [126][ 300/ 813] Overall Loss 0.403835 Objective Loss 0.403835 LR 0.000025 Time 0.540890 +2025-05-16 08:17:47,509 - Epoch: [126][ 400/ 813] Overall Loss 0.414604 Objective Loss 0.414604 LR 0.000025 Time 0.528427 +2025-05-16 08:18:40,863 - Epoch: [126][ 500/ 813] Overall Loss 0.415503 Objective Loss 0.415503 LR 0.000025 Time 0.529444 +2025-05-16 08:19:32,021 - Epoch: [126][ 600/ 813] Overall Loss 0.413989 Objective Loss 0.413989 LR 0.000025 Time 0.526465 +2025-05-16 08:20:25,854 - Epoch: [126][ 700/ 813] Overall Loss 0.417288 
Objective Loss 0.417288 LR 0.000025 Time 0.528158 +2025-05-16 08:21:16,979 - Epoch: [126][ 800/ 813] Overall Loss 0.419700 Objective Loss 0.419700 LR 0.000025 Time 0.526041 +2025-05-16 08:21:22,853 - Epoch: [126][ 813/ 813] Overall Loss 0.420374 Objective Loss 0.420374 LR 0.000025 Time 0.524847 +2025-05-16 08:21:22,896 - --- validate (epoch=126)----------- +2025-05-16 08:21:22,897 - 3250 samples (16 per mini-batch) +2025-05-16 08:21:22,899 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 08:22:16,578 - Epoch: [126][ 100/ 204] Loss 0.729867 mAP 0.888353 +2025-05-16 08:23:10,142 - Epoch: [126][ 200/ 204] Loss 0.676750 mAP 0.898835 +2025-05-16 08:23:11,177 - Epoch: [126][ 204/ 204] Loss 0.678770 mAP 0.898870 +2025-05-16 08:23:11,216 - ==> mAP: 0.89887 Loss: 0.679 + +2025-05-16 08:23:11,221 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 08:23:11,221 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 08:23:11,256 - + +2025-05-16 08:23:11,256 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 08:24:06,967 - Epoch: [127][ 100/ 813] Overall Loss 0.384797 Objective Loss 0.384797 LR 0.000025 Time 0.557079 +2025-05-16 08:24:59,686 - Epoch: [127][ 200/ 813] Overall Loss 0.417629 Objective Loss 0.417629 LR 0.000025 Time 0.542125 +2025-05-16 08:25:53,412 - Epoch: [127][ 300/ 813] Overall Loss 0.431062 Objective Loss 0.431062 LR 0.000025 Time 0.540488 +2025-05-16 08:26:44,474 - Epoch: [127][ 400/ 813] Overall Loss 0.429350 Objective Loss 0.429350 LR 0.000025 Time 0.533017 +2025-05-16 08:27:36,696 - Epoch: [127][ 500/ 813] Overall Loss 0.430534 Objective Loss 0.430534 LR 0.000025 Time 0.530854 +2025-05-16 08:28:29,670 - Epoch: [127][ 600/ 813] Overall Loss 0.435187 Objective Loss 0.435187 LR 0.000025 Time 0.530665 +2025-05-16 08:29:21,880 - Epoch: [127][ 700/ 813] Overall Loss 
0.444112 Objective Loss 0.444112 LR 0.000025 Time 0.529439 +2025-05-16 08:30:12,683 - Epoch: [127][ 800/ 813] Overall Loss 0.445854 Objective Loss 0.445854 LR 0.000025 Time 0.526761 +2025-05-16 08:30:18,191 - Epoch: [127][ 813/ 813] Overall Loss 0.445147 Objective Loss 0.445147 LR 0.000025 Time 0.525108 +2025-05-16 08:30:18,232 - --- validate (epoch=127)----------- +2025-05-16 08:30:18,233 - 3250 samples (16 per mini-batch) +2025-05-16 08:30:18,235 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 08:31:13,115 - Epoch: [127][ 100/ 204] Loss 0.665503 mAP 0.899053 +2025-05-16 08:32:06,649 - Epoch: [127][ 200/ 204] Loss 0.676536 mAP 0.898635 +2025-05-16 08:32:07,550 - Epoch: [127][ 204/ 204] Loss 0.669434 mAP 0.898685 +2025-05-16 08:32:07,591 - ==> mAP: 0.89868 Loss: 0.669 + +2025-05-16 08:32:07,597 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 08:32:07,597 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 08:32:07,632 - + +2025-05-16 08:32:07,632 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 08:33:03,694 - Epoch: [128][ 100/ 813] Overall Loss 0.345004 Objective Loss 0.345004 LR 0.000025 Time 0.560594 +2025-05-16 08:33:56,441 - Epoch: [128][ 200/ 813] Overall Loss 0.370792 Objective Loss 0.370792 LR 0.000025 Time 0.544022 +2025-05-16 08:34:47,992 - Epoch: [128][ 300/ 813] Overall Loss 0.390177 Objective Loss 0.390177 LR 0.000025 Time 0.534512 +2025-05-16 08:35:39,720 - Epoch: [128][ 400/ 813] Overall Loss 0.399046 Objective Loss 0.399046 LR 0.000025 Time 0.530201 +2025-05-16 08:36:31,040 - Epoch: [128][ 500/ 813] Overall Loss 0.400246 Objective Loss 0.400246 LR 0.000025 Time 0.526797 +2025-05-16 08:37:24,636 - Epoch: [128][ 600/ 813] Overall Loss 0.401986 Objective Loss 0.401986 LR 0.000025 Time 0.528316 +2025-05-16 08:38:18,498 - Epoch: [128][ 700/ 813] 
Overall Loss 0.407640 Objective Loss 0.407640 LR 0.000025 Time 0.529778 +2025-05-16 08:39:10,972 - Epoch: [128][ 800/ 813] Overall Loss 0.409130 Objective Loss 0.409130 LR 0.000025 Time 0.529142 +2025-05-16 08:39:15,536 - Epoch: [128][ 813/ 813] Overall Loss 0.410305 Objective Loss 0.410305 LR 0.000025 Time 0.526284 +2025-05-16 08:39:15,579 - --- validate (epoch=128)----------- +2025-05-16 08:39:15,580 - 3250 samples (16 per mini-batch) +2025-05-16 08:39:15,582 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 08:40:11,240 - Epoch: [128][ 100/ 204] Loss 0.708611 mAP 0.899534 +2025-05-16 08:41:02,247 - Epoch: [128][ 200/ 204] Loss 0.702040 mAP 0.899059 +2025-05-16 08:41:02,742 - Epoch: [128][ 204/ 204] Loss 0.701942 mAP 0.898928 +2025-05-16 08:41:02,785 - ==> mAP: 0.89893 Loss: 0.702 + +2025-05-16 08:41:02,791 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 08:41:02,791 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 08:41:02,825 - + +2025-05-16 08:41:02,825 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 08:41:56,799 - Epoch: [129][ 100/ 813] Overall Loss 0.391927 Objective Loss 0.391927 LR 0.000025 Time 0.539707 +2025-05-16 08:42:51,397 - Epoch: [129][ 200/ 813] Overall Loss 0.395231 Objective Loss 0.395231 LR 0.000025 Time 0.542835 +2025-05-16 08:43:42,469 - Epoch: [129][ 300/ 813] Overall Loss 0.399197 Objective Loss 0.399197 LR 0.000025 Time 0.532126 +2025-05-16 08:44:32,434 - Epoch: [129][ 400/ 813] Overall Loss 0.405522 Objective Loss 0.405522 LR 0.000025 Time 0.524002 +2025-05-16 08:45:27,018 - Epoch: [129][ 500/ 813] Overall Loss 0.409667 Objective Loss 0.409667 LR 0.000025 Time 0.528348 +2025-05-16 08:46:19,627 - Epoch: [129][ 600/ 813] Overall Loss 0.415546 Objective Loss 0.415546 LR 0.000025 Time 0.527969 +2025-05-16 08:47:12,701 - Epoch: [129][ 
700/ 813] Overall Loss 0.419625 Objective Loss 0.419625 LR 0.000025 Time 0.528362 +2025-05-16 08:48:03,645 - Epoch: [129][ 800/ 813] Overall Loss 0.418911 Objective Loss 0.418911 LR 0.000025 Time 0.525995 +2025-05-16 08:48:09,199 - Epoch: [129][ 813/ 813] Overall Loss 0.419471 Objective Loss 0.419471 LR 0.000025 Time 0.524414 +2025-05-16 08:48:09,249 - --- validate (epoch=129)----------- +2025-05-16 08:48:09,250 - 3250 samples (16 per mini-batch) +2025-05-16 08:48:09,252 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 08:49:03,406 - Epoch: [129][ 100/ 204] Loss 0.648945 mAP 0.909591 +2025-05-16 08:49:57,672 - Epoch: [129][ 200/ 204] Loss 0.685642 mAP 0.909144 +2025-05-16 08:49:58,449 - Epoch: [129][ 204/ 204] Loss 0.692091 mAP 0.909176 +2025-05-16 08:49:58,493 - ==> mAP: 0.90918 Loss: 0.692 + +2025-05-16 08:49:58,498 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109] +2025-05-16 08:49:58,498 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 08:49:58,534 - + +2025-05-16 08:49:58,534 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 08:50:53,242 - Epoch: [130][ 100/ 813] Overall Loss 0.363170 Objective Loss 0.363170 LR 0.000025 Time 0.546903 +2025-05-16 08:51:46,164 - Epoch: [130][ 200/ 813] Overall Loss 0.377090 Objective Loss 0.377090 LR 0.000025 Time 0.538046 +2025-05-16 08:52:40,248 - Epoch: [130][ 300/ 813] Overall Loss 0.390535 Objective Loss 0.390535 LR 0.000025 Time 0.538969 +2025-05-16 08:53:32,045 - Epoch: [130][ 400/ 813] Overall Loss 0.394833 Objective Loss 0.394833 LR 0.000025 Time 0.533714 +2025-05-16 08:54:23,085 - Epoch: [130][ 500/ 813] Overall Loss 0.394810 Objective Loss 0.394810 LR 0.000025 Time 0.529043 +2025-05-16 08:55:16,067 - Epoch: [130][ 600/ 813] Overall Loss 0.403134 Objective Loss 0.403134 LR 0.000025 Time 0.529169 +2025-05-16 08:56:09,631 - 
Epoch: [130][ 700/ 813] Overall Loss 0.409451 Objective Loss 0.409451 LR 0.000025 Time 0.530091
+2025-05-16 08:57:01,329 - Epoch: [130][ 800/ 813] Overall Loss 0.415677 Objective Loss 0.415677 LR 0.000025 Time 0.528450
+2025-05-16 08:57:07,475 - Epoch: [130][ 813/ 813] Overall Loss 0.416007 Objective Loss 0.416007 LR 0.000025 Time 0.527559
+2025-05-16 08:57:07,517 - --- validate (epoch=130)-----------
+2025-05-16 08:57:07,517 - 3250 samples (16 per mini-batch)
+2025-05-16 08:57:07,519 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 08:58:01,785 - Epoch: [130][ 100/ 204] Loss 0.657577 mAP 0.908548
+2025-05-16 08:58:54,759 - Epoch: [130][ 200/ 204] Loss 0.653989 mAP 0.907895
+2025-05-16 08:58:55,275 - Epoch: [130][ 204/ 204] Loss 0.649191 mAP 0.907995
+2025-05-16 08:58:55,318 - ==> mAP: 0.90800 Loss: 0.649
+
+2025-05-16 08:58:55,323 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109]
+2025-05-16 08:58:55,323 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 08:58:55,358 - 
+
+2025-05-16 08:58:55,358 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 08:59:51,189 - Epoch: [131][ 100/ 813] Overall Loss 0.397849 Objective Loss 0.397849 LR 0.000025 Time 0.558278
+2025-05-16 09:00:44,512 - Epoch: [131][ 200/ 813] Overall Loss 0.402900 Objective Loss 0.402900 LR 0.000025 Time 0.545742
+2025-05-16 09:01:37,228 - Epoch: [131][ 300/ 813] Overall Loss 0.416986 Objective Loss 0.416986 LR 0.000025 Time 0.539544
+2025-05-16 09:02:27,066 - Epoch: [131][ 400/ 813] Overall Loss 0.420061 Objective Loss 0.420061 LR 0.000025 Time 0.529248
+2025-05-16 09:03:19,554 - Epoch: [131][ 500/ 813] Overall Loss 0.426100 Objective Loss 0.426100 LR 0.000025 Time 0.528369
+2025-05-16 09:04:13,210 - Epoch: [131][ 600/ 813] Overall Loss 0.423532 Objective Loss 0.423532 LR 0.000025 Time 0.529731
+2025-05-16 09:05:05,094 - Epoch: [131][ 700/ 813] Overall Loss 0.427500 Objective Loss 0.427500 LR 0.000025 Time 0.528173
+2025-05-16 09:05:57,980 - Epoch: [131][ 800/ 813] Overall Loss 0.429220 Objective Loss 0.429220 LR 0.000025 Time 0.528256
+2025-05-16 09:06:02,665 - Epoch: [131][ 813/ 813] Overall Loss 0.428948 Objective Loss 0.428948 LR 0.000025 Time 0.525572
+2025-05-16 09:06:02,705 - --- validate (epoch=131)-----------
+2025-05-16 09:06:02,705 - 3250 samples (16 per mini-batch)
+2025-05-16 09:06:02,707 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 09:06:58,213 - Epoch: [131][ 100/ 204] Loss 0.626866 mAP 0.909207
+2025-05-16 09:07:49,269 - Epoch: [131][ 200/ 204] Loss 0.659118 mAP 0.899104
+2025-05-16 09:07:50,097 - Epoch: [131][ 204/ 204] Loss 0.661227 mAP 0.899073
+2025-05-16 09:07:50,138 - ==> mAP: 0.89907 Loss: 0.661
+
+2025-05-16 09:07:50,143 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109]
+2025-05-16 09:07:50,143 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 09:07:50,178 - 
+
+2025-05-16 09:07:50,179 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 09:08:44,409 - Epoch: [132][ 100/ 813] Overall Loss 0.379170 Objective Loss 0.379170 LR 0.000025 Time 0.542275
+2025-05-16 09:09:38,166 - Epoch: [132][ 200/ 813] Overall Loss 0.394643 Objective Loss 0.394643 LR 0.000025 Time 0.539911
+2025-05-16 09:10:31,331 - Epoch: [132][ 300/ 813] Overall Loss 0.408581 Objective Loss 0.408581 LR 0.000025 Time 0.537155
+2025-05-16 09:11:20,899 - Epoch: [132][ 400/ 813] Overall Loss 0.408313 Objective Loss 0.408313 LR 0.000025 Time 0.526780
+2025-05-16 09:12:16,121 - Epoch: [132][ 500/ 813] Overall Loss 0.411539 Objective Loss 0.411539 LR 0.000025 Time 0.531866
+2025-05-16 09:13:07,523 - Epoch: [132][ 600/ 813] Overall Loss 0.411829 Objective Loss 0.411829 LR 0.000025 Time 0.528889
+2025-05-16 09:14:01,316 - Epoch: [132][ 700/ 813] Overall Loss 0.413461 Objective Loss 0.413461 LR 0.000025 Time 0.530177
+2025-05-16 09:14:53,805 - Epoch: [132][ 800/ 813] Overall Loss 0.416318 Objective Loss 0.416318 LR 0.000025 Time 0.529515
+2025-05-16 09:14:58,396 - Epoch: [132][ 813/ 813] Overall Loss 0.415998 Objective Loss 0.415998 LR 0.000025 Time 0.526694
+2025-05-16 09:14:58,438 - --- validate (epoch=132)-----------
+2025-05-16 09:14:58,438 - 3250 samples (16 per mini-batch)
+2025-05-16 09:14:58,440 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 09:15:53,334 - Epoch: [132][ 100/ 204] Loss 0.663003 mAP 0.899154
+2025-05-16 09:16:46,035 - Epoch: [132][ 200/ 204] Loss 0.680756 mAP 0.889258
+2025-05-16 09:16:47,086 - Epoch: [132][ 204/ 204] Loss 0.685756 mAP 0.889233
+2025-05-16 09:16:47,130 - ==> mAP: 0.88923 Loss: 0.686
+
+2025-05-16 09:16:47,135 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109]
+2025-05-16 09:16:47,135 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 09:16:47,171 - 
+
+2025-05-16 09:16:47,171 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 09:17:40,831 - Epoch: [133][ 100/ 813] Overall Loss 0.375531 Objective Loss 0.375531 LR 0.000025 Time 0.536565
+2025-05-16 09:18:34,172 - Epoch: [133][ 200/ 813] Overall Loss 0.383694 Objective Loss 0.383694 LR 0.000025 Time 0.534981
+2025-05-16 09:19:26,704 - Epoch: [133][ 300/ 813] Overall Loss 0.390460 Objective Loss 0.390460 LR 0.000025 Time 0.531756
+2025-05-16 09:20:19,629 - Epoch: [133][ 400/ 813] Overall Loss 0.395405 Objective Loss 0.395405 LR 0.000025 Time 0.531116
+2025-05-16 09:21:09,876 - Epoch: [133][ 500/ 813] Overall Loss 0.396654 Objective Loss 0.396654 LR 0.000025 Time 0.525384
+2025-05-16 09:22:04,045 - Epoch: [133][ 600/ 813] Overall Loss 0.397598 Objective Loss 0.397598 LR 0.000025 Time 0.528099
+2025-05-16 09:22:56,705 - Epoch: [133][ 700/ 813] Overall Loss 0.403925 Objective Loss 0.403925 LR 0.000025 Time 0.527882
+2025-05-16 09:23:48,818 - Epoch: [133][ 800/ 813] Overall Loss 0.409246 Objective Loss 0.409246 LR 0.000025 Time 0.527035
+2025-05-16 09:23:54,330 - Epoch: [133][ 813/ 813] Overall Loss 0.410887 Objective Loss 0.410887 LR 0.000025 Time 0.525387
+2025-05-16 09:23:54,372 - --- validate (epoch=133)-----------
+2025-05-16 09:23:54,373 - 3250 samples (16 per mini-batch)
+2025-05-16 09:23:54,375 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 09:24:48,511 - Epoch: [133][ 100/ 204] Loss 0.715585 mAP 0.899409
+2025-05-16 09:25:41,113 - Epoch: [133][ 200/ 204] Loss 0.693335 mAP 0.899451
+2025-05-16 09:25:41,624 - Epoch: [133][ 204/ 204] Loss 0.693461 mAP 0.899475
+2025-05-16 09:25:41,666 - ==> mAP: 0.89948 Loss: 0.693
+
+2025-05-16 09:25:41,671 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109]
+2025-05-16 09:25:41,671 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 09:25:41,707 - 
+
+2025-05-16 09:25:41,707 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 09:26:36,962 - Epoch: [134][ 100/ 813] Overall Loss 0.367788 Objective Loss 0.367788 LR 0.000025 Time 0.552519
+2025-05-16 09:27:30,474 - Epoch: [134][ 200/ 813] Overall Loss 0.407613 Objective Loss 0.407613 LR 0.000025 Time 0.543814
+2025-05-16 09:28:23,905 - Epoch: [134][ 300/ 813] Overall Loss 0.420360 Objective Loss 0.420360 LR 0.000025 Time 0.540639
+2025-05-16 09:29:14,468 - Epoch: [134][ 400/ 813] Overall Loss 0.426913 Objective Loss 0.426913 LR 0.000025 Time 0.531883
+2025-05-16 09:30:05,356 - Epoch: [134][ 500/ 813] Overall Loss 0.425892 Objective Loss 0.425892 LR 0.000025 Time 0.527279
+2025-05-16 09:30:58,917 - Epoch: [134][ 600/ 813] Overall Loss 0.423442 Objective Loss 0.423442 LR 0.000025 Time 0.528654
+2025-05-16 09:31:50,977 - Epoch: [134][ 700/ 813] Overall Loss 0.425768 Objective Loss 0.425768 LR 0.000025 Time 0.527501
+2025-05-16 09:32:43,153 - Epoch: [134][ 800/ 813] Overall Loss 0.428183 Objective Loss 0.428183 LR 0.000025 Time 0.526781
+2025-05-16 09:32:48,570 - Epoch: [134][ 813/ 813] Overall Loss 0.428399 Objective Loss 0.428399 LR 0.000025 Time 0.525019
+2025-05-16 09:32:48,612 - --- validate (epoch=134)-----------
+2025-05-16 09:32:48,613 - 3250 samples (16 per mini-batch)
+2025-05-16 09:32:48,614 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 09:33:41,127 - Epoch: [134][ 100/ 204] Loss 0.693492 mAP 0.897958
+2025-05-16 09:34:34,225 - Epoch: [134][ 200/ 204] Loss 0.686987 mAP 0.897942
+2025-05-16 09:34:35,151 - Epoch: [134][ 204/ 204] Loss 0.686966 mAP 0.897979
+2025-05-16 09:34:35,193 - ==> mAP: 0.89798 Loss: 0.687
+
+2025-05-16 09:34:35,198 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109]
+2025-05-16 09:34:35,198 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 09:34:35,233 - 
+
+2025-05-16 09:34:35,234 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 09:35:29,151 - Epoch: [135][ 100/ 813] Overall Loss 0.412012 Objective Loss 0.412012 LR 0.000025 Time 0.539146
+2025-05-16 09:36:22,426 - Epoch: [135][ 200/ 813] Overall Loss 0.401343 Objective Loss 0.401343 LR 0.000025 Time 0.535937
+2025-05-16 09:37:15,775 - Epoch: [135][ 300/ 813] Overall Loss 0.405977 Objective Loss 0.405977 LR 0.000025 Time 0.535116
+2025-05-16 09:38:05,666 - Epoch: [135][ 400/ 813] Overall Loss 0.405222 Objective Loss 0.405222 LR 0.000025 Time 0.526060
+2025-05-16 09:38:57,068 - Epoch: [135][ 500/ 813] Overall Loss 0.406609 Objective Loss 0.406609 LR 0.000025 Time 0.523649
+2025-05-16 09:39:50,754 - Epoch: [135][ 600/ 813] Overall Loss 0.403406 Objective Loss 0.403406 LR 0.000025 Time 0.525848
+2025-05-16 09:40:42,707 - Epoch: [135][ 700/ 813] Overall Loss 0.414878 Objective Loss 0.414878 LR 0.000025 Time 0.524943
+2025-05-16 09:41:34,898 - Epoch: [135][ 800/ 813] Overall Loss 0.419712 Objective Loss 0.419712 LR 0.000025 Time 0.524560
+2025-05-16 09:41:40,385 - Epoch: [135][ 813/ 813] Overall Loss 0.420590 Objective Loss 0.420590 LR 0.000025 Time 0.522922
+2025-05-16 09:41:40,426 - --- validate (epoch=135)-----------
+2025-05-16 09:41:40,427 - 3250 samples (16 per mini-batch)
+2025-05-16 09:41:40,429 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 09:42:35,189 - Epoch: [135][ 100/ 204] Loss 0.685133 mAP 0.899131
+2025-05-16 09:43:28,647 - Epoch: [135][ 200/ 204] Loss 0.689493 mAP 0.909255
+2025-05-16 09:43:29,800 - Epoch: [135][ 204/ 204] Loss 0.697467 mAP 0.909048
+2025-05-16 09:43:29,844 - ==> mAP: 0.90905 Loss: 0.697
+
+2025-05-16 09:43:29,849 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109]
+2025-05-16 09:43:29,849 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 09:43:29,885 - 
+
+2025-05-16 09:43:29,885 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 09:44:25,283 - Epoch: [136][ 100/ 813] Overall Loss 0.392269 Objective Loss 0.392269 LR 0.000025 Time 0.553951
+2025-05-16 09:45:17,236 - Epoch: [136][ 200/ 813] Overall Loss 0.399418 Objective Loss 0.399418 LR 0.000025 Time 0.536733
+2025-05-16 09:46:10,172 - Epoch: [136][ 300/ 813] Overall Loss 0.415127 Objective Loss 0.415127 LR 0.000025 Time 0.534268
+2025-05-16 09:47:01,345 - Epoch: [136][ 400/ 813] Overall Loss 0.414696 Objective Loss 0.414696 LR 0.000025 Time 0.528629
+2025-05-16 09:47:53,190 - Epoch: [136][ 500/ 813] Overall Loss 0.410084 Objective Loss 0.410084 LR 0.000025 Time 0.526591
+2025-05-16 09:48:47,500 - Epoch: [136][ 600/ 813] Overall Loss 0.411887 Objective Loss 0.411887 LR 0.000025 Time 0.529339
+2025-05-16 09:49:40,054 - Epoch: [136][ 700/ 813] Overall Loss 0.415392 Objective Loss 0.415392 LR 0.000025 Time 0.528794
+2025-05-16 09:50:31,472 - Epoch: [136][ 800/ 813] Overall Loss 0.413590 Objective Loss 0.413590 LR 0.000025 Time 0.526965
+2025-05-16 09:50:36,816 - Epoch: [136][ 813/ 813] Overall Loss 0.414514 Objective Loss 0.414514 LR 0.000025 Time 0.525111
+2025-05-16 09:50:36,855 - --- validate (epoch=136)-----------
+2025-05-16 09:50:36,855 - 3250 samples (16 per mini-batch)
+2025-05-16 09:50:36,857 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 09:51:29,935 - Epoch: [136][ 100/ 204] Loss 0.668688 mAP 0.900154
+2025-05-16 09:52:22,888 - Epoch: [136][ 200/ 204] Loss 0.668519 mAP 0.899659
+2025-05-16 09:52:23,536 - Epoch: [136][ 204/ 204] Loss 0.669762 mAP 0.899678
+2025-05-16 09:52:23,578 - ==> mAP: 0.89968 Loss: 0.670
+
+2025-05-16 09:52:23,583 - ==> Best [mAP: 0.909573 vloss: 0.681648 Params: 368352 on epoch: 109]
+2025-05-16 09:52:23,583 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 09:52:23,618 - 
+
+2025-05-16 09:52:23,618 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 09:53:18,790 - Epoch: [137][ 100/ 813] Overall Loss 0.375205 Objective Loss 0.375205 LR 0.000025 Time 0.551692
+2025-05-16 09:54:11,723 - Epoch: [137][ 200/ 813] Overall Loss 0.375906 Objective Loss 0.375906 LR 0.000025 Time 0.540499
+2025-05-16 09:55:04,005 - Epoch: [137][ 300/ 813] Overall Loss 0.391846 Objective Loss 0.391846 LR 0.000025 Time 0.534602
+2025-05-16 09:55:54,965 - Epoch: [137][ 400/ 813] Overall Loss 0.397956 Objective Loss 0.397956 LR 0.000025 Time 0.528346
+2025-05-16 09:56:47,163 - Epoch: [137][ 500/ 813] Overall Loss 0.404896 Objective Loss 0.404896 LR 0.000025 Time 0.527070
+2025-05-16 09:57:41,334 - Epoch: [137][ 600/ 813] Overall Loss 0.410800 Objective Loss 0.410800 LR 0.000025 Time 0.529508
+2025-05-16 09:58:32,542 - Epoch: [137][ 700/ 813] Overall Loss 0.416730 Objective Loss 0.416730 LR 0.000025 Time 0.527016
+2025-05-16 09:59:25,434 - Epoch: [137][ 800/ 813] Overall Loss 0.422660 Objective Loss 0.422660 LR 0.000025 Time 0.527251
+2025-05-16 09:59:30,170 - Epoch: [137][ 813/ 813] Overall Loss 0.423330 Objective Loss 0.423330 LR 0.000025 Time 0.524645
+2025-05-16 09:59:30,212 - --- validate (epoch=137)-----------
+2025-05-16 09:59:30,213 - 3250 samples (16 per mini-batch)
+2025-05-16 09:59:30,215 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 10:00:25,350 - Epoch: [137][ 100/ 204] Loss 0.680567 mAP 0.910450
+2025-05-16 10:01:18,194 - Epoch: [137][ 200/ 204] Loss 0.680497 mAP 0.909666
+2025-05-16 10:01:18,944 - Epoch: [137][ 204/ 204] Loss 0.683046 mAP 0.909684
+2025-05-16 10:01:18,987 - ==> mAP: 0.90968 Loss: 0.683
+
+2025-05-16 10:01:18,992 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 10:01:18,992 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 10:01:19,032 - 
+
+2025-05-16 10:01:19,032 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 10:02:14,716 - Epoch: [138][ 100/ 813] Overall Loss 0.382387 Objective Loss 0.382387 LR 0.000025 Time 0.556811
+2025-05-16 10:03:07,308 - Epoch: [138][ 200/ 813] Overall Loss 0.382756 Objective Loss 0.382756 LR 0.000025 Time 0.541356
+2025-05-16 10:04:00,673 - Epoch: [138][ 300/ 813] Overall Loss 0.396569 Objective Loss 0.396569 LR 0.000025 Time 0.538783
+2025-05-16 10:04:50,258 - Epoch: [138][ 400/ 813] Overall Loss 0.403938 Objective Loss 0.403938 LR 0.000025 Time 0.528029
+2025-05-16 10:05:42,262 - Epoch: [138][ 500/ 813] Overall Loss 0.409775 Objective Loss 0.409775 LR 0.000025 Time 0.526428
+2025-05-16 10:06:35,760 - Epoch: [138][ 600/ 813] Overall Loss 0.408812 Objective Loss 0.408812 LR 0.000025 Time 0.527850
+2025-05-16 10:07:29,301 - Epoch: [138][ 700/ 813] Overall Loss 0.408423 Objective Loss 0.408423 LR 0.000025 Time 0.528928
+2025-05-16 10:08:20,969 - Epoch: [138][ 800/ 813] Overall Loss 0.408344 Objective Loss 0.408344 LR 0.000025 Time 0.527394
+2025-05-16 10:08:25,942 - Epoch: [138][ 813/ 813] Overall Loss 0.408597 Objective Loss 0.408597 LR 0.000025 Time 0.525078
+2025-05-16 10:08:25,982 - --- validate (epoch=138)-----------
+2025-05-16 10:08:25,983 - 3250 samples (16 per mini-batch)
+2025-05-16 10:08:25,985 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 10:09:21,403 - Epoch: [138][ 100/ 204] Loss 0.706353 mAP 0.889041
+2025-05-16 10:10:13,843 - Epoch: [138][ 200/ 204] Loss 0.690034 mAP 0.898915
+2025-05-16 10:10:14,746 - Epoch: [138][ 204/ 204] Loss 0.691924 mAP 0.898981
+2025-05-16 10:10:14,789 - ==> mAP: 0.89898 Loss: 0.692
+
+2025-05-16 10:10:14,795 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 10:10:14,795 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 10:10:14,830 - 
+
+2025-05-16 10:10:14,830 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 10:11:09,282 - Epoch: [139][ 100/ 813] Overall Loss 0.349375 Objective Loss 0.349375 LR 0.000025 Time 0.544488
+2025-05-16 10:12:01,919 - Epoch: [139][ 200/ 813] Overall Loss 0.363468 Objective Loss 0.363468 LR 0.000025 Time 0.535420
+2025-05-16 10:12:54,688 - Epoch: [139][ 300/ 813] Overall Loss 0.376927 Objective Loss 0.376927 LR 0.000025 Time 0.532839
+2025-05-16 10:13:44,969 - Epoch: [139][ 400/ 813] Overall Loss 0.390360 Objective Loss 0.390360 LR 0.000025 Time 0.525328
+2025-05-16 10:14:38,346 - Epoch: [139][ 500/ 813] Overall Loss 0.391768 Objective Loss 0.391768 LR 0.000025 Time 0.527012
+2025-05-16 10:15:30,425 - Epoch: [139][ 600/ 813] Overall Loss 0.397638 Objective Loss 0.397638 LR 0.000025 Time 0.525962
+2025-05-16 10:16:24,943 - Epoch: [139][ 700/ 813] Overall Loss 0.403922 Objective Loss 0.403922 LR 0.000025 Time 0.528705
+2025-05-16 10:17:16,836 - Epoch: [139][ 800/ 813] Overall Loss 0.407703 Objective Loss 0.407703 LR 0.000025 Time 0.527481
+2025-05-16 10:17:21,587 - Epoch: [139][ 813/ 813] Overall Loss 0.408739 Objective Loss 0.408739 LR 0.000025 Time 0.524890
+2025-05-16 10:17:21,623 - --- validate (epoch=139)-----------
+2025-05-16 10:17:21,624 - 3250 samples (16 per mini-batch)
+2025-05-16 10:17:21,626 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 10:18:16,561 - Epoch: [139][ 100/ 204] Loss 0.670060 mAP 0.909015
+2025-05-16 10:19:08,013 - Epoch: [139][ 200/ 204] Loss 0.660580 mAP 0.899223
+2025-05-16 10:19:08,768 - Epoch: [139][ 204/ 204] Loss 0.661971 mAP 0.899250
+2025-05-16 10:19:08,814 - ==> mAP: 0.89925 Loss: 0.662
+
+2025-05-16 10:19:08,820 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 10:19:08,820 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 10:19:08,848 - 
+
+2025-05-16 10:19:08,848 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 10:20:02,865 - Epoch: [140][ 100/ 813] Overall Loss 0.360403 Objective Loss 0.360403 LR 0.000025 Time 0.540142
+2025-05-16 10:20:57,239 - Epoch: [140][ 200/ 813] Overall Loss 0.394865 Objective Loss 0.394865 LR 0.000025 Time 0.541932
+2025-05-16 10:21:48,894 - Epoch: [140][ 300/ 813] Overall Loss 0.421312 Objective Loss 0.421312 LR 0.000025 Time 0.533465
+2025-05-16 10:22:38,514 - Epoch: [140][ 400/ 813] Overall Loss 0.422614 Objective Loss 0.422614 LR 0.000025 Time 0.524143
+2025-05-16 10:23:31,162 - Epoch: [140][ 500/ 813] Overall Loss 0.418907 Objective Loss 0.418907 LR 0.000025 Time 0.524607
+2025-05-16 10:24:23,465 - Epoch: [140][ 600/ 813] Overall Loss 0.418857 Objective Loss 0.418857 LR 0.000025 Time 0.524342
+2025-05-16 10:25:16,713 - Epoch: [140][ 700/ 813] Overall Loss 0.422305 Objective Loss 0.422305 LR 0.000025 Time 0.525502
+2025-05-16 10:26:08,260 - Epoch: [140][ 800/ 813] Overall Loss 0.424648 Objective Loss 0.424648 LR 0.000025 Time 0.524246
+2025-05-16 10:26:13,861 - Epoch: [140][ 813/ 813] Overall Loss 0.426393 Objective Loss 0.426393 LR 0.000025 Time 0.522752
+2025-05-16 10:26:13,898 - --- validate (epoch=140)-----------
+2025-05-16 10:26:13,899 - 3250 samples (16 per mini-batch)
+2025-05-16 10:26:13,901 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 10:27:08,839 - Epoch: [140][ 100/ 204] Loss 0.722900 mAP 0.899219
+2025-05-16 10:28:02,103 - Epoch: [140][ 200/ 204] Loss 0.710247 mAP 0.898807
+2025-05-16 10:28:02,607 - Epoch: [140][ 204/ 204] Loss 0.712755 mAP 0.898611
+2025-05-16 10:28:02,647 - ==> mAP: 0.89861 Loss: 0.713
+
+2025-05-16 10:28:02,652 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 10:28:02,652 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 10:28:02,687 - 
+
+2025-05-16 10:28:02,688 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 10:28:57,297 - Epoch: [141][ 100/ 813] Overall Loss 0.362279 Objective Loss 0.362279 LR 0.000025 Time 0.546065
+2025-05-16 10:29:51,891 - Epoch: [141][ 200/ 813] Overall Loss 0.376738 Objective Loss 0.376738 LR 0.000025 Time 0.545992
+2025-05-16 10:30:44,338 - Epoch: [141][ 300/ 813] Overall Loss 0.388180 Objective Loss 0.388180 LR 0.000025 Time 0.538811
+2025-05-16 10:31:33,969 - Epoch: [141][ 400/ 813] Overall Loss 0.391521 Objective Loss 0.391521 LR 0.000025 Time 0.528182
+2025-05-16 10:32:25,697 - Epoch: [141][ 500/ 813] Overall Loss 0.393330 Objective Loss 0.393330 LR 0.000025 Time 0.525993
+2025-05-16 10:33:19,184 - Epoch: [141][ 600/ 813] Overall Loss 0.396642 Objective Loss 0.396642 LR 0.000025 Time 0.527470
+2025-05-16 10:34:12,847 - Epoch: [141][ 700/ 813] Overall Loss 0.401902 Objective Loss 0.401902 LR 0.000025 Time 0.528775
+2025-05-16 10:35:05,914 - Epoch: [141][ 800/ 813] Overall Loss 0.405906 Objective Loss 0.405906 LR 0.000025 Time 0.529010
+2025-05-16 10:35:11,015 - Epoch: [141][ 813/ 813] Overall Loss 0.406881 Objective Loss 0.406881 LR 0.000025 Time 0.526825
+2025-05-16 10:35:11,057 - --- validate (epoch=141)-----------
+2025-05-16 10:35:11,057 - 3250 samples (16 per mini-batch)
+2025-05-16 10:35:11,059 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 10:36:06,757 - Epoch: [141][ 100/ 204] Loss 0.657682 mAP 0.888410
+2025-05-16 10:37:00,596 - Epoch: [141][ 200/ 204] Loss 0.661631 mAP 0.898906
+2025-05-16 10:37:01,098 - Epoch: [141][ 204/ 204] Loss 0.669336 mAP 0.898913
+2025-05-16 10:37:01,143 - ==> mAP: 0.89891 Loss: 0.669
+
+2025-05-16 10:37:01,149 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 10:37:01,149 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 10:37:01,184 - 
+
+2025-05-16 10:37:01,184 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 10:37:57,163 - Epoch: [142][ 100/ 813] Overall Loss 0.347275 Objective Loss 0.347275 LR 0.000025 Time 0.559757
+2025-05-16 10:38:51,297 - Epoch: [142][ 200/ 813] Overall Loss 0.358489 Objective Loss 0.358489 LR 0.000025 Time 0.550537
+2025-05-16 10:39:43,155 - Epoch: [142][ 300/ 813] Overall Loss 0.377699 Objective Loss 0.377699 LR 0.000025 Time 0.539881
+2025-05-16 10:40:33,799 - Epoch: [142][ 400/ 813] Overall Loss 0.381563 Objective Loss 0.381563 LR 0.000025 Time 0.531517
+2025-05-16 10:41:25,654 - Epoch: [142][ 500/ 813] Overall Loss 0.383985 Objective Loss 0.383985 LR 0.000025 Time 0.528913
+2025-05-16 10:42:18,910 - Epoch: [142][ 600/ 813] Overall Loss 0.389352 Objective Loss 0.389352 LR 0.000025 Time 0.529517
+2025-05-16 10:43:11,595 - Epoch: [142][ 700/ 813] Overall Loss 0.396229 Objective Loss 0.396229 LR 0.000025 Time 0.529133
+2025-05-16 10:44:03,798 - Epoch: [142][ 800/ 813] Overall Loss 0.401920 Objective Loss 0.401920 LR 0.000025 Time 0.528240
+2025-05-16 10:44:09,579 - Epoch: [142][ 813/ 813] Overall Loss 0.403843 Objective Loss 0.403843 LR 0.000025 Time 0.526903
+2025-05-16 10:44:09,618 - --- validate (epoch=142)-----------
+2025-05-16 10:44:09,619 - 3250 samples (16 per mini-batch)
+2025-05-16 10:44:09,621 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 10:45:05,157 - Epoch: [142][ 100/ 204] Loss 0.668900 mAP 0.888890
+2025-05-16 10:45:57,993 - Epoch: [142][ 200/ 204] Loss 0.662484 mAP 0.898621
+2025-05-16 10:45:58,491 - Epoch: [142][ 204/ 204] Loss 0.673292 mAP 0.898640
+2025-05-16 10:45:58,533 - ==> mAP: 0.89864 Loss: 0.673
+
+2025-05-16 10:45:58,538 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 10:45:58,538 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 10:45:58,572 - 
+
+2025-05-16 10:45:58,572 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 10:46:52,485 - Epoch: [143][ 100/ 813] Overall Loss 0.372414 Objective Loss 0.372414 LR 0.000025 Time 0.539098
+2025-05-16 10:47:45,926 - Epoch: [143][ 200/ 813] Overall Loss 0.379644 Objective Loss 0.379644 LR 0.000025 Time 0.536747
+2025-05-16 10:48:38,464 - Epoch: [143][ 300/ 813] Overall Loss 0.386053 Objective Loss 0.386053 LR 0.000025 Time 0.532951
+2025-05-16 10:49:29,253 - Epoch: [143][ 400/ 813] Overall Loss 0.387421 Objective Loss 0.387421 LR 0.000025 Time 0.526682
+2025-05-16 10:50:21,821 - Epoch: [143][ 500/ 813] Overall Loss 0.386051 Objective Loss 0.386051 LR 0.000025 Time 0.526478
+2025-05-16 10:51:15,597 - Epoch: [143][ 600/ 813] Overall Loss 0.390665 Objective Loss 0.390665 LR 0.000025 Time 0.528355
+2025-05-16 10:52:09,328 - Epoch: [143][ 700/ 813] Overall Loss 0.396315 Objective Loss 0.396315 LR 0.000025 Time 0.529633
+2025-05-16 10:53:01,374 - Epoch: [143][ 800/ 813] Overall Loss 0.398675 Objective Loss 0.398675 LR 0.000025 Time 0.528483
+2025-05-16 10:53:06,695 - Epoch: [143][ 813/ 813] Overall Loss 0.399368 Objective Loss 0.399368 LR 0.000025 Time 0.526577
+2025-05-16 10:53:06,733 - --- validate (epoch=143)-----------
+2025-05-16 10:53:06,734 - 3250 samples (16 per mini-batch)
+2025-05-16 10:53:06,735 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 10:54:01,973 - Epoch: [143][ 100/ 204] Loss 0.637252 mAP 0.909209
+2025-05-16 10:54:54,317 - Epoch: [143][ 200/ 204] Loss 0.654717 mAP 0.909243
+2025-05-16 10:54:54,796 - Epoch: [143][ 204/ 204] Loss 0.662723 mAP 0.909267
+2025-05-16 10:54:54,838 - ==> mAP: 0.90927 Loss: 0.663
+
+2025-05-16 10:54:54,843 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 10:54:54,843 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 10:54:54,878 - 
+
+2025-05-16 10:54:54,878 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 10:55:49,970 - Epoch: [144][ 100/ 813] Overall Loss 0.403468 Objective Loss 0.403468 LR 0.000025 Time 0.550898
+2025-05-16 10:56:43,949 - Epoch: [144][ 200/ 813] Overall Loss 0.399575 Objective Loss 0.399575 LR 0.000025 Time 0.545334
+2025-05-16 10:57:36,865 - Epoch: [144][ 300/ 813] Overall Loss 0.399344 Objective Loss 0.399344 LR 0.000025 Time 0.539935
+2025-05-16 10:58:27,609 - Epoch: [144][ 400/ 813] Overall Loss 0.404645 Objective Loss 0.404645 LR 0.000025 Time 0.531809
+2025-05-16 10:59:18,882 - Epoch: [144][ 500/ 813] Overall Loss 0.407179 Objective Loss 0.407179 LR 0.000025 Time 0.527989
+2025-05-16 11:00:13,417 - Epoch: [144][ 600/ 813] Overall Loss 0.407425 Objective Loss 0.407425 LR 0.000025 Time 0.530873
+2025-05-16 11:01:05,374 - Epoch: [144][ 700/ 813] Overall Loss 0.410768 Objective Loss 0.410768 LR 0.000025 Time 0.529256
+2025-05-16 11:01:57,878 - Epoch: [144][ 800/ 813] Overall Loss 0.415536 Objective Loss 0.415536 LR 0.000025 Time 0.528727
+2025-05-16 11:02:03,764 - Epoch: [144][ 813/ 813] Overall Loss 0.416852 Objective Loss 0.416852 LR 0.000025 Time 0.527512
+2025-05-16 11:02:03,810 - --- validate (epoch=144)-----------
+2025-05-16 11:02:03,811 - 3250 samples (16 per mini-batch)
+2025-05-16 11:02:03,813 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 11:03:05,022 - Epoch: [144][ 100/ 204] Loss 0.712455 mAP 0.899827
+2025-05-16 11:04:05,349 - Epoch: [144][ 200/ 204] Loss 0.702643 mAP 0.899963
+2025-05-16 11:04:06,916 - Epoch: [144][ 204/ 204] Loss 0.700247 mAP 0.899954
+2025-05-16 11:04:06,967 - ==> mAP: 0.89995 Loss: 0.700
+
+2025-05-16 11:04:06,980 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 11:04:06,980 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 11:04:07,009 - 
+
+2025-05-16 11:04:07,009 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 11:05:11,056 - Epoch: [145][ 100/ 813] Overall Loss 0.356528 Objective Loss 0.356528 LR 0.000025 Time 0.640436
+2025-05-16 11:06:11,639 - Epoch: [145][ 200/ 813] Overall Loss 0.361106 Objective Loss 0.361106 LR 0.000025 Time 0.623115
+2025-05-16 11:07:07,354 - Epoch: [145][ 300/ 813] Overall Loss 0.383495 Objective Loss 0.383495 LR 0.000025 Time 0.601093
+2025-05-16 11:08:01,699 - Epoch: [145][ 400/ 813] Overall Loss 0.386283 Objective Loss 0.386283 LR 0.000025 Time 0.586677
+2025-05-16 11:09:01,354 - Epoch: [145][ 500/ 813] Overall Loss 0.390523 Objective Loss 0.390523 LR 0.000025 Time 0.588645
+2025-05-16 11:10:04,370 - Epoch: [145][ 600/ 813] Overall Loss 0.402567 Objective Loss 0.402567 LR 0.000025 Time 0.595532
+2025-05-16 11:11:02,413 - Epoch: [145][ 700/ 813] Overall Loss 0.410018 Objective Loss 0.410018 LR 0.000025 Time 0.593370
+2025-05-16 11:11:59,281 - Epoch: [145][ 800/ 813] Overall Loss 0.411834 Objective Loss 0.411834 LR 0.000025 Time 0.590281
+2025-05-16 11:12:05,145 - Epoch: [145][ 813/ 813] Overall Loss 0.412428 Objective Loss 0.412428 LR 0.000025 Time 0.588055
+2025-05-16 11:12:05,182 - --- validate (epoch=145)-----------
+2025-05-16 11:12:05,183 - 3250 samples (16 per mini-batch)
+2025-05-16 11:12:05,184 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 11:13:06,853 - Epoch: [145][ 100/ 204] Loss 0.645035 mAP 0.919374
+2025-05-16 11:14:06,517 - Epoch: [145][ 200/ 204] Loss 0.662320 mAP 0.909408
+2025-05-16 11:14:07,778 - Epoch: [145][ 204/ 204] Loss 0.663927 mAP 0.909424
+2025-05-16 11:14:07,827 - ==> mAP: 0.90942 Loss: 0.664
+
+2025-05-16 11:14:07,890 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 11:14:07,890 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 11:14:07,927 - 
+
+2025-05-16 11:14:07,928 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 11:15:10,078 - Epoch: [146][ 100/ 813] Overall Loss 0.373654 Objective Loss 0.373654 LR 0.000025 Time 0.621468
+2025-05-16 11:16:11,209 - Epoch: [146][ 200/ 813] Overall Loss 0.373781 Objective Loss 0.373781 LR 0.000025 Time 0.616373
+2025-05-16 11:17:08,138 - Epoch: [146][ 300/ 813] Overall Loss 0.391934 Objective Loss 0.391934 LR 0.000025 Time 0.600672
+2025-05-16 11:18:05,437 - Epoch: [146][ 400/ 813] Overall Loss 0.394813 Objective Loss 0.394813 LR 0.000025 Time 0.593745
+2025-05-16 11:19:05,302 - Epoch: [146][ 500/ 813] Overall Loss 0.394816 Objective Loss 0.394816 LR 0.000025 Time 0.594719
+2025-05-16 11:20:06,740 - Epoch: [146][ 600/ 813] Overall Loss 0.400543 Objective Loss 0.400543 LR 0.000025 Time 0.597992
+2025-05-16 11:21:06,533 - Epoch: [146][ 700/ 813] Overall Loss 0.404473 Objective Loss 0.404473 LR 0.000025 Time 0.597978
+2025-05-16 11:22:01,132 - Epoch: [146][ 800/ 813] Overall Loss 0.410197 Objective Loss 0.410197 LR 0.000025 Time 0.591477
+2025-05-16 11:22:07,253 - Epoch: [146][ 813/ 813] Overall Loss 0.411101 Objective Loss 0.411101 LR 0.000025 Time 0.589547
+2025-05-16 11:22:07,317 - --- validate (epoch=146)-----------
+2025-05-16 11:22:07,319 - 3250 samples (16 per mini-batch)
+2025-05-16 11:22:07,321 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 11:23:12,319 - Epoch: [146][ 100/ 204] Loss 0.649721 mAP 0.909762
+2025-05-16 11:24:11,714 - Epoch: [146][ 200/ 204] Loss 0.653141 mAP 0.899596
+2025-05-16 11:24:13,017 - Epoch: [146][ 204/ 204] Loss 0.649961 mAP 0.899608
+2025-05-16 11:24:13,059 - ==> mAP: 0.89961 Loss: 0.650
+
+2025-05-16 11:24:13,066 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 11:24:13,066 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 11:24:13,098 - 
+
+2025-05-16 11:24:13,098 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 11:25:15,917 - Epoch: [147][ 100/ 813] Overall Loss 0.391861 Objective Loss 0.391861 LR 0.000025 Time 0.628151
+2025-05-16 11:26:15,935 - Epoch: [147][ 200/ 813] Overall Loss 0.402717 Objective Loss 0.402717 LR 0.000025 Time 0.614155
+2025-05-16 11:27:12,446 - Epoch: [147][ 300/ 813] Overall Loss 0.412992 Objective Loss 0.412992 LR 0.000025 Time 0.597799
+2025-05-16 11:28:10,176 - Epoch: [147][ 400/ 813] Overall Loss 0.413396 Objective Loss 0.413396 LR 0.000025 Time 0.592666
+2025-05-16 11:29:10,090 - Epoch: [147][ 500/ 813] Overall Loss 0.414202 Objective Loss 0.414202 LR 0.000025 Time 0.593943
+2025-05-16 11:30:09,851 - Epoch: [147][ 600/ 813] Overall Loss 0.420554 Objective Loss 0.420554 LR 0.000025 Time 0.594550
+2025-05-16 11:31:10,554 - Epoch: [147][ 700/ 813] Overall Loss 0.423657 Objective Loss 0.423657 LR 0.000025 Time 0.596328
+2025-05-16 11:32:03,461 - Epoch: [147][ 800/ 813] Overall Loss 0.425295 Objective Loss 0.425295 LR 0.000025 Time 0.587918
+2025-05-16 11:32:10,591 - Epoch: [147][ 813/ 813] Overall Loss 0.426746 Objective Loss 0.426746 LR 0.000025 Time 0.587287
+2025-05-16 11:32:10,660 - --- validate (epoch=147)-----------
+2025-05-16 11:32:10,663 - 3250 samples (16 per mini-batch)
+2025-05-16 11:32:10,666 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 11:33:14,290 - Epoch: [147][ 100/ 204] Loss 0.730281 mAP 0.906584
+2025-05-16 11:34:14,894 - Epoch: [147][ 200/ 204] Loss 0.701347 mAP 0.907716
+2025-05-16 11:34:16,150 - Epoch: [147][ 204/ 204] Loss 0.697683 mAP 0.907789
+2025-05-16 11:34:16,204 - ==> mAP: 0.90779 Loss: 0.698
+
+2025-05-16 11:34:16,267 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 11:34:16,268 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 11:34:16,305 - 
+
+2025-05-16 11:34:16,305 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 11:35:18,883 - Epoch: [148][ 100/ 813] Overall Loss 0.358785 Objective Loss 0.358785 LR 0.000025 Time 0.625740
+2025-05-16 11:36:17,409 - Epoch: [148][ 200/ 813] Overall Loss 0.372639 Objective Loss 0.372639 LR 0.000025 Time 0.605490
+2025-05-16 11:37:13,682 - Epoch: [148][ 300/ 813] Overall Loss 0.383446 Objective Loss 0.383446 LR 0.000025 Time 0.591230
+2025-05-16 11:38:11,109 - Epoch: [148][ 400/ 813] Overall Loss 0.395965 Objective Loss 0.395965 LR 0.000025 Time 0.586982
+2025-05-16 11:39:10,447 - Epoch: [148][ 500/ 813] Overall Loss 0.397714 Objective Loss 0.397714 LR 0.000025 Time 0.588257
+2025-05-16 11:40:10,710 - Epoch: [148][ 600/ 813] Overall Loss 0.398465 Objective Loss 0.398465 LR 0.000025 Time 0.590648
+2025-05-16 11:41:08,700 - Epoch: [148][ 700/ 813] Overall Loss 0.406230 Objective Loss 0.406230 LR 0.000025 Time 0.589108
+2025-05-16 11:42:03,171 - Epoch: [148][ 800/ 813] Overall Loss 0.407290 Objective Loss 0.407290 LR 0.000025 Time 0.583555
+2025-05-16 11:42:09,548 - Epoch: [148][ 813/ 813] Overall Loss 0.408553 Objective Loss 0.408553 LR 0.000025 Time 0.582059
+2025-05-16 11:42:09,617 - --- validate (epoch=148)-----------
+2025-05-16 11:42:09,618 - 3250 samples (16 per mini-batch)
+2025-05-16 11:42:09,621 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 11:43:12,418 - Epoch: [148][ 100/ 204] Loss 0.702213 mAP 0.888361
+2025-05-16 11:44:15,525 - Epoch: [148][ 200/ 204] Loss 0.683747 mAP 0.898997
+2025-05-16 11:44:16,860 - Epoch: [148][ 204/ 204] Loss 0.688131 mAP 0.898926
+2025-05-16 11:44:16,912 - ==> mAP: 0.89893 Loss: 0.688
+
+2025-05-16 11:44:16,928 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 11:44:16,928 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 11:44:16,965 - 
+
+2025-05-16 11:44:16,965 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 11:45:19,207 - Epoch: [149][ 100/ 813] Overall Loss 0.381107 Objective Loss 0.381107 LR 0.000025 Time 0.622378
+2025-05-16 11:46:18,368 - Epoch: [149][ 200/ 813] Overall Loss 0.388515 Objective Loss 0.388515 LR 0.000025 Time 0.606980
+2025-05-16 11:47:15,020 - Epoch: [149][ 300/ 813] Overall Loss 0.395188 Objective Loss 0.395188 LR 0.000025 Time 0.593485
+2025-05-16 11:48:14,702 - Epoch: [149][ 400/ 813] Overall Loss 0.404315 Objective Loss 0.404315 LR 0.000025 Time 0.594313
+2025-05-16 11:49:13,521 - Epoch: [149][ 500/ 813] Overall Loss 0.410026 Objective Loss 0.410026 LR 0.000025 Time 0.593076
+2025-05-16 11:50:15,643 - Epoch: [149][ 600/ 813] Overall Loss 0.417119 Objective Loss 0.417119 LR 0.000025 Time 0.597745
+2025-05-16 11:51:14,962 - Epoch: [149][ 700/ 813] Overall Loss 0.420567 Objective Loss 0.420567 LR 0.000025 Time 0.597090
+2025-05-16 11:52:09,431 - Epoch: [149][ 800/ 813] Overall Loss 0.421800 Objective Loss 0.421800 LR 0.000025 Time 0.590528
+2025-05-16 11:52:14,678 - Epoch: [149][ 813/ 813] Overall Loss 0.422771 Objective Loss 0.422771 LR 0.000025 Time 0.587540
+2025-05-16 11:52:14,744 - --- validate (epoch=149)-----------
+2025-05-16 11:52:14,745 - 3250 samples (16 per mini-batch)
+2025-05-16 11:52:14,748 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 11:53:20,297 - Epoch: [149][ 100/ 204] Loss 0.654914 mAP 0.899052
+2025-05-16 11:54:20,247 - Epoch: [149][ 200/ 204] Loss 0.676728 mAP 0.899092
+2025-05-16 11:54:21,597 - Epoch: [149][ 204/ 204] Loss 0.679264 mAP 0.908860
+2025-05-16 11:54:21,652 - ==> mAP: 0.90886 Loss: 0.679
+
+2025-05-16 11:54:21,667 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 11:54:21,667 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 11:54:21,703 - 
+
+2025-05-16 11:54:21,703 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 11:55:23,170 - Epoch: [150][ 100/ 813] Overall Loss 0.366076 Objective Loss 0.366076 LR 0.000025 Time 0.614629
+2025-05-16 11:56:24,362 - Epoch: [150][ 200/ 813] Overall Loss 0.367343 Objective Loss 0.367343 LR 0.000025 Time 0.613265
+2025-05-16 11:57:17,637 - Epoch: [150][ 300/ 813] Overall Loss 0.385569 Objective Loss 0.385569 LR 0.000025 Time 0.586418
+2025-05-16 11:58:16,211 - Epoch: [150][ 400/ 813] Overall Loss 0.399487 Objective Loss 0.399487 LR 0.000025 Time 0.586241
+2025-05-16 11:59:16,918 - Epoch: [150][ 500/ 813] Overall Loss 0.403133 Objective Loss 0.403133 LR 0.000025 Time 0.590402
+2025-05-16 12:00:17,546 - Epoch: [150][ 600/ 813] Overall Loss 0.401759 Objective Loss 0.401759 LR 0.000025 Time 0.593043
+2025-05-16 12:01:17,169 - Epoch: [150][ 700/ 813] Overall Loss 0.408019 Objective Loss 0.408019 LR 0.000025 Time 0.593495
+2025-05-16 12:02:12,010 - Epoch: [150][ 800/ 813] Overall Loss 0.415773 Objective Loss 0.415773 LR 0.000025 Time 0.587856
+2025-05-16 12:02:18,318 - Epoch: [150][ 813/ 813] Overall Loss 0.417442 Objective Loss 0.417442 LR 0.000025 Time 0.586214
+2025-05-16 12:02:18,374 - --- validate (epoch=150)-----------
+2025-05-16 12:02:18,378 - 3250 samples (16 per mini-batch)
+2025-05-16 12:02:18,380 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}}
+2025-05-16 12:03:22,790 - Epoch: [150][ 100/ 204] Loss 0.716572 mAP 0.899741
+2025-05-16 12:04:23,918 - Epoch: [150][ 200/ 204] Loss 0.701504 mAP 0.908962
+2025-05-16 12:04:25,224 - Epoch: [150][ 204/ 204] Loss 0.702400 mAP 0.908987
+2025-05-16 12:04:25,279 - ==> mAP: 0.90899 Loss: 0.702
+
+2025-05-16 12:04:25,286 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137]
+2025-05-16 12:04:25,287 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar
+2025-05-16 12:04:25,323 - 
+
+2025-05-16 12:04:25,324 - Training epoch: 13000 samples (16 per mini-batch, world size: 1)
+2025-05-16 12:05:29,058 - Epoch: [151][ 100/ 813] Overall Loss 0.384366 Objective Loss 0.384366 LR 0.000025 Time 0.637299
+2025-05-16 12:06:28,768 - Epoch: [151][ 200/ 813] Overall Loss 0.377616 Objective Loss 0.377616 LR 0.000025 Time 0.617185
+2025-05-16 12:07:23,943 - Epoch: [151][ 300/ 813] Overall Loss 0.395755 Objective Loss 0.395755 LR 0.000025 Time 0.595365
+2025-05-16 12:08:21,514 - Epoch: [151][ 400/ 813] Overall Loss 0.401664 Objective Loss 0.401664 LR 0.000025 Time 0.590444
+2025-05-16 
12:09:21,205 - Epoch: [151][ 500/ 813] Overall Loss 0.401517 Objective Loss 0.401517 LR 0.000025 Time 0.591729 +2025-05-16 12:10:22,064 - Epoch: [151][ 600/ 813] Overall Loss 0.400075 Objective Loss 0.400075 LR 0.000025 Time 0.594535 +2025-05-16 12:11:20,451 - Epoch: [151][ 700/ 813] Overall Loss 0.401835 Objective Loss 0.401835 LR 0.000025 Time 0.593007 +2025-05-16 12:12:15,897 - Epoch: [151][ 800/ 813] Overall Loss 0.406322 Objective Loss 0.406322 LR 0.000025 Time 0.588186 +2025-05-16 12:12:22,628 - Epoch: [151][ 813/ 813] Overall Loss 0.406156 Objective Loss 0.406156 LR 0.000025 Time 0.587059 +2025-05-16 12:12:22,704 - --- validate (epoch=151)----------- +2025-05-16 12:12:22,708 - 3250 samples (16 per mini-batch) +2025-05-16 12:12:22,711 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 12:13:26,340 - Epoch: [151][ 100/ 204] Loss 0.725203 mAP 0.889561 +2025-05-16 12:14:27,257 - Epoch: [151][ 200/ 204] Loss 0.707814 mAP 0.889645 +2025-05-16 12:14:28,779 - Epoch: [151][ 204/ 204] Loss 0.714276 mAP 0.889597 +2025-05-16 12:14:28,833 - ==> mAP: 0.88960 Loss: 0.714 + +2025-05-16 12:14:28,897 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137] +2025-05-16 12:14:28,897 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 12:14:28,937 - + +2025-05-16 12:14:28,937 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 12:15:31,368 - Epoch: [152][ 100/ 813] Overall Loss 0.394603 Objective Loss 0.394603 LR 0.000025 Time 0.624269 +2025-05-16 12:16:29,560 - Epoch: [152][ 200/ 813] Overall Loss 0.411278 Objective Loss 0.411278 LR 0.000025 Time 0.603084 +2025-05-16 12:17:24,618 - Epoch: [152][ 300/ 813] Overall Loss 0.410569 Objective Loss 0.410569 LR 0.000025 Time 0.585576 +2025-05-16 12:18:22,720 - Epoch: [152][ 400/ 813] Overall Loss 0.406785 Objective Loss 0.406785 LR 0.000025 Time 0.584431 
+2025-05-16 12:19:21,004 - Epoch: [152][ 500/ 813] Overall Loss 0.399722 Objective Loss 0.399722 LR 0.000025 Time 0.584106 +2025-05-16 12:20:22,029 - Epoch: [152][ 600/ 813] Overall Loss 0.403322 Objective Loss 0.403322 LR 0.000025 Time 0.588459 +2025-05-16 12:21:22,130 - Epoch: [152][ 700/ 813] Overall Loss 0.409735 Objective Loss 0.409735 LR 0.000025 Time 0.590247 +2025-05-16 12:22:15,534 - Epoch: [152][ 800/ 813] Overall Loss 0.411319 Objective Loss 0.411319 LR 0.000025 Time 0.583213 +2025-05-16 12:22:21,646 - Epoch: [152][ 813/ 813] Overall Loss 0.411633 Objective Loss 0.411633 LR 0.000025 Time 0.581405 +2025-05-16 12:22:21,689 - --- validate (epoch=152)----------- +2025-05-16 12:22:21,690 - 3250 samples (16 per mini-batch) +2025-05-16 12:22:21,693 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 12:23:25,410 - Epoch: [152][ 100/ 204] Loss 0.671370 mAP 0.898508 +2025-05-16 12:24:26,550 - Epoch: [152][ 200/ 204] Loss 0.670543 mAP 0.898258 +2025-05-16 12:24:27,834 - Epoch: [152][ 204/ 204] Loss 0.670186 mAP 0.898305 +2025-05-16 12:24:27,887 - ==> mAP: 0.89830 Loss: 0.670 + +2025-05-16 12:24:27,946 - ==> Best [mAP: 0.909684 vloss: 0.683046 Params: 368352 on epoch: 137] +2025-05-16 12:24:27,946 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.15-132305/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 12:24:27,975 - + +2025-05-16 12:27:19,376 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 12:28:23,213 - Epoch: [153][ 100/ 813] Overall Loss 0.384509 Objective Loss 0.384509 LR 0.000025 Time 0.638331 +2025-05-16 12:29:24,866 - Epoch: [153][ 200/ 813] Overall Loss 0.378456 Objective Loss 0.378456 LR 0.000025 Time 0.627415 +2025-05-16 12:30:23,900 - Epoch: [153][ 300/ 813] Overall Loss 0.389552 Objective Loss 0.389552 LR 0.000025 Time 0.615048 +2025-05-16 12:31:16,556 - Epoch: [153][ 400/ 813] Overall Loss 0.400991 Objective Loss 0.400991 LR 0.000025 Time 
0.592917 +2025-05-16 12:32:13,469 - Epoch: [153][ 500/ 813] Overall Loss 0.400616 Objective Loss 0.400616 LR 0.000025 Time 0.588157 +2025-05-16 12:33:15,761 - Epoch: [153][ 600/ 813] Overall Loss 0.399419 Objective Loss 0.399419 LR 0.000025 Time 0.593945 +2025-05-16 12:34:13,777 - Epoch: [153][ 700/ 813] Overall Loss 0.405107 Objective Loss 0.405107 LR 0.000025 Time 0.591973 +2025-05-16 12:35:14,444 - Epoch: [153][ 800/ 813] Overall Loss 0.409393 Objective Loss 0.409393 LR 0.000025 Time 0.593807 +2025-05-16 12:35:20,107 - Epoch: [153][ 813/ 813] Overall Loss 0.410482 Objective Loss 0.410482 LR 0.000025 Time 0.591276 +2025-05-16 12:35:20,131 - --- validate (epoch=153)----------- +2025-05-16 12:35:20,133 - 3250 samples (16 per mini-batch) +2025-05-16 12:35:20,135 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 12:36:18,920 - Epoch: [153][ 100/ 204] Loss 0.652740 mAP 0.918321 +2025-05-16 12:37:14,853 - Epoch: [153][ 200/ 204] Loss 0.653257 mAP 0.908780 +2025-05-16 12:37:16,407 - Epoch: [153][ 204/ 204] Loss 0.658628 mAP 0.908812 +2025-05-16 12:37:16,445 - ==> mAP: 0.90881 Loss: 0.659 + +2025-05-16 12:37:16,502 - ==> Best [mAP: 0.908812 vloss: 0.658628 Params: 368352 on epoch: 153] +2025-05-16 12:37:16,502 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 12:37:16,533 - + +2025-05-16 12:37:16,533 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 12:38:21,675 - Epoch: [154][ 100/ 813] Overall Loss 0.376442 Objective Loss 0.376442 LR 0.000025 Time 0.651378 +2025-05-16 12:39:21,404 - Epoch: [154][ 200/ 813] Overall Loss 0.389674 Objective Loss 0.389674 LR 0.000025 Time 0.624317 +2025-05-16 12:40:24,410 - Epoch: [154][ 300/ 813] Overall Loss 0.390495 Objective Loss 0.390495 LR 0.000025 Time 0.626212 +2025-05-16 12:41:18,079 - Epoch: [154][ 400/ 813] Overall Loss 0.403919 Objective Loss 0.403919 LR 
0.000025 Time 0.603825 +2025-05-16 12:42:13,171 - Epoch: [154][ 500/ 813] Overall Loss 0.403968 Objective Loss 0.403968 LR 0.000025 Time 0.593239 +2025-05-16 12:43:15,636 - Epoch: [154][ 600/ 813] Overall Loss 0.403516 Objective Loss 0.403516 LR 0.000025 Time 0.598469 +2025-05-16 12:44:13,764 - Epoch: [154][ 700/ 813] Overall Loss 0.408212 Objective Loss 0.408212 LR 0.000025 Time 0.596010 +2025-05-16 12:45:13,826 - Epoch: [154][ 800/ 813] Overall Loss 0.412328 Objective Loss 0.412328 LR 0.000025 Time 0.596582 +2025-05-16 12:45:20,364 - Epoch: [154][ 813/ 813] Overall Loss 0.413302 Objective Loss 0.413302 LR 0.000025 Time 0.595083 +2025-05-16 12:45:20,408 - --- validate (epoch=154)----------- +2025-05-16 12:45:20,411 - 3250 samples (16 per mini-batch) +2025-05-16 12:45:20,413 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 12:46:19,420 - Epoch: [154][ 100/ 204] Loss 0.653454 mAP 0.899560 +2025-05-16 12:47:16,089 - Epoch: [154][ 200/ 204] Loss 0.647646 mAP 0.899584 +2025-05-16 12:47:17,286 - Epoch: [154][ 204/ 204] Loss 0.655893 mAP 0.899575 +2025-05-16 12:47:17,325 - ==> mAP: 0.89957 Loss: 0.656 + +2025-05-16 12:47:17,359 - ==> Best [mAP: 0.908812 vloss: 0.658628 Params: 368352 on epoch: 153] +2025-05-16 12:47:17,359 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 12:47:17,395 - + +2025-05-16 12:47:17,395 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 12:48:19,549 - Epoch: [155][ 100/ 813] Overall Loss 0.355325 Objective Loss 0.355325 LR 0.000025 Time 0.621497 +2025-05-16 12:49:20,886 - Epoch: [155][ 200/ 813] Overall Loss 0.375453 Objective Loss 0.375453 LR 0.000025 Time 0.617420 +2025-05-16 12:50:21,316 - Epoch: [155][ 300/ 813] Overall Loss 0.382270 Objective Loss 0.382270 LR 0.000025 Time 0.613037 +2025-05-16 12:51:17,698 - Epoch: [155][ 400/ 813] Overall Loss 0.386679 Objective Loss 
0.386679 LR 0.000025 Time 0.600726 +2025-05-16 12:52:10,595 - Epoch: [155][ 500/ 813] Overall Loss 0.388154 Objective Loss 0.388154 LR 0.000025 Time 0.586372 +2025-05-16 12:53:14,054 - Epoch: [155][ 600/ 813] Overall Loss 0.395938 Objective Loss 0.395938 LR 0.000025 Time 0.594303 +2025-05-16 12:54:16,311 - Epoch: [155][ 700/ 813] Overall Loss 0.406140 Objective Loss 0.406140 LR 0.000025 Time 0.598312 +2025-05-16 12:55:14,812 - Epoch: [155][ 800/ 813] Overall Loss 0.408854 Objective Loss 0.408854 LR 0.000025 Time 0.596645 +2025-05-16 12:55:21,529 - Epoch: [155][ 813/ 813] Overall Loss 0.409122 Objective Loss 0.409122 LR 0.000025 Time 0.595366 +2025-05-16 12:55:21,566 - --- validate (epoch=155)----------- +2025-05-16 12:55:21,569 - 3250 samples (16 per mini-batch) +2025-05-16 12:55:21,571 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 12:56:21,877 - Epoch: [155][ 100/ 204] Loss 0.698721 mAP 0.899589 +2025-05-16 12:57:17,148 - Epoch: [155][ 200/ 204] Loss 0.695773 mAP 0.899549 +2025-05-16 12:57:17,855 - Epoch: [155][ 204/ 204] Loss 0.703592 mAP 0.899557 +2025-05-16 12:57:17,912 - ==> mAP: 0.89956 Loss: 0.704 + +2025-05-16 12:57:17,917 - ==> Best [mAP: 0.908812 vloss: 0.658628 Params: 368352 on epoch: 153] +2025-05-16 12:57:17,919 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 12:57:17,948 - + +2025-05-16 12:57:17,948 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 12:58:19,727 - Epoch: [156][ 100/ 813] Overall Loss 0.373957 Objective Loss 0.373957 LR 0.000025 Time 0.617746 +2025-05-16 12:59:20,671 - Epoch: [156][ 200/ 813] Overall Loss 0.370321 Objective Loss 0.370321 LR 0.000025 Time 0.613580 +2025-05-16 13:00:22,183 - Epoch: [156][ 300/ 813] Overall Loss 0.390504 Objective Loss 0.390504 LR 0.000025 Time 0.614081 +2025-05-16 13:01:18,594 - Epoch: [156][ 400/ 813] Overall Loss 0.401935 
Objective Loss 0.401935 LR 0.000025 Time 0.601582 +2025-05-16 13:02:12,494 - Epoch: [156][ 500/ 813] Overall Loss 0.406283 Objective Loss 0.406283 LR 0.000025 Time 0.589061 +2025-05-16 13:03:13,361 - Epoch: [156][ 600/ 813] Overall Loss 0.405641 Objective Loss 0.405641 LR 0.000025 Time 0.592324 +2025-05-16 13:04:15,151 - Epoch: [156][ 700/ 813] Overall Loss 0.411598 Objective Loss 0.411598 LR 0.000025 Time 0.595973 +2025-05-16 13:05:13,898 - Epoch: [156][ 800/ 813] Overall Loss 0.415139 Objective Loss 0.415139 LR 0.000025 Time 0.594908 +2025-05-16 13:05:20,365 - Epoch: [156][ 813/ 813] Overall Loss 0.416465 Objective Loss 0.416465 LR 0.000025 Time 0.593348 +2025-05-16 13:05:20,403 - --- validate (epoch=156)----------- +2025-05-16 13:05:20,405 - 3250 samples (16 per mini-batch) +2025-05-16 13:05:20,407 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 13:06:22,977 - Epoch: [156][ 100/ 204] Loss 0.677862 mAP 0.899242 +2025-05-16 13:07:17,628 - Epoch: [156][ 200/ 204] Loss 0.664252 mAP 0.899307 +2025-05-16 13:07:18,329 - Epoch: [156][ 204/ 204] Loss 0.669533 mAP 0.899238 +2025-05-16 13:07:18,357 - ==> mAP: 0.89924 Loss: 0.670 + +2025-05-16 13:07:18,365 - ==> Best [mAP: 0.908812 vloss: 0.658628 Params: 368352 on epoch: 153] +2025-05-16 13:07:18,365 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 13:07:18,401 - + +2025-05-16 13:07:18,401 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 13:08:19,717 - Epoch: [157][ 100/ 813] Overall Loss 0.386598 Objective Loss 0.386598 LR 0.000025 Time 0.613122 +2025-05-16 13:09:20,123 - Epoch: [157][ 200/ 813] Overall Loss 0.374075 Objective Loss 0.374075 LR 0.000025 Time 0.608573 +2025-05-16 13:10:19,699 - Epoch: [157][ 300/ 813] Overall Loss 0.378540 Objective Loss 0.378540 LR 0.000025 Time 0.604291 +2025-05-16 13:11:17,632 - Epoch: [157][ 400/ 813] Overall Loss 
0.387645 Objective Loss 0.387645 LR 0.000025 Time 0.598043 +2025-05-16 13:12:12,967 - Epoch: [157][ 500/ 813] Overall Loss 0.387718 Objective Loss 0.387718 LR 0.000025 Time 0.589101 +2025-05-16 13:13:11,441 - Epoch: [157][ 600/ 813] Overall Loss 0.401111 Objective Loss 0.401111 LR 0.000025 Time 0.588344 +2025-05-16 13:14:11,591 - Epoch: [157][ 700/ 813] Overall Loss 0.404093 Objective Loss 0.404093 LR 0.000025 Time 0.590219 +2025-05-16 13:15:12,227 - Epoch: [157][ 800/ 813] Overall Loss 0.409310 Objective Loss 0.409310 LR 0.000025 Time 0.592232 +2025-05-16 13:15:18,334 - Epoch: [157][ 813/ 813] Overall Loss 0.411598 Objective Loss 0.411598 LR 0.000025 Time 0.590273 +2025-05-16 13:15:18,374 - --- validate (epoch=157)----------- +2025-05-16 13:15:18,375 - 3250 samples (16 per mini-batch) +2025-05-16 13:15:18,377 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 13:16:21,124 - Epoch: [157][ 100/ 204] Loss 0.648001 mAP 0.918129 +2025-05-16 13:17:16,296 - Epoch: [157][ 200/ 204] Loss 0.633301 mAP 0.908482 +2025-05-16 13:17:17,064 - Epoch: [157][ 204/ 204] Loss 0.638980 mAP 0.908468 +2025-05-16 13:17:17,094 - ==> mAP: 0.90847 Loss: 0.639 + +2025-05-16 13:17:17,103 - ==> Best [mAP: 0.908812 vloss: 0.658628 Params: 368352 on epoch: 153] +2025-05-16 13:17:17,103 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 13:17:17,138 - + +2025-05-16 13:17:17,139 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 13:18:16,255 - Epoch: [158][ 100/ 813] Overall Loss 0.405150 Objective Loss 0.405150 LR 0.000025 Time 0.591132 +2025-05-16 13:19:18,929 - Epoch: [158][ 200/ 813] Overall Loss 0.397924 Objective Loss 0.397924 LR 0.000025 Time 0.608920 +2025-05-16 13:20:18,138 - Epoch: [158][ 300/ 813] Overall Loss 0.396341 Objective Loss 0.396341 LR 0.000025 Time 0.603302 +2025-05-16 13:21:14,592 - Epoch: [158][ 400/ 813] 
Overall Loss 0.397775 Objective Loss 0.397775 LR 0.000025 Time 0.593602 +2025-05-16 13:22:13,678 - Epoch: [158][ 500/ 813] Overall Loss 0.397374 Objective Loss 0.397374 LR 0.000025 Time 0.593048 +2025-05-16 13:23:10,324 - Epoch: [158][ 600/ 813] Overall Loss 0.398996 Objective Loss 0.398996 LR 0.000025 Time 0.588614 +2025-05-16 13:24:10,184 - Epoch: [158][ 700/ 813] Overall Loss 0.406037 Objective Loss 0.406037 LR 0.000025 Time 0.590035 +2025-05-16 13:25:11,610 - Epoch: [158][ 800/ 813] Overall Loss 0.407691 Objective Loss 0.407691 LR 0.000025 Time 0.593059 +2025-05-16 13:25:18,526 - Epoch: [158][ 813/ 813] Overall Loss 0.408426 Objective Loss 0.408426 LR 0.000025 Time 0.592082 +2025-05-16 13:25:18,564 - --- validate (epoch=158)----------- +2025-05-16 13:25:18,566 - 3250 samples (16 per mini-batch) +2025-05-16 13:25:18,568 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 13:26:21,743 - Epoch: [158][ 100/ 204] Loss 0.713058 mAP 0.898977 +2025-05-16 13:27:18,687 - Epoch: [158][ 200/ 204] Loss 0.701130 mAP 0.898438 +2025-05-16 13:27:19,537 - Epoch: [158][ 204/ 204] Loss 0.701551 mAP 0.898430 +2025-05-16 13:27:19,571 - ==> mAP: 0.89843 Loss: 0.702 + +2025-05-16 13:27:19,615 - ==> Best [mAP: 0.908812 vloss: 0.658628 Params: 368352 on epoch: 153] +2025-05-16 13:27:19,615 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 13:27:19,644 - + +2025-05-16 13:27:19,644 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 13:28:18,935 - Epoch: [159][ 100/ 813] Overall Loss 0.363350 Objective Loss 0.363350 LR 0.000025 Time 0.592880 +2025-05-16 13:29:18,529 - Epoch: [159][ 200/ 813] Overall Loss 0.357184 Objective Loss 0.357184 LR 0.000025 Time 0.594393 +2025-05-16 13:30:19,686 - Epoch: [159][ 300/ 813] Overall Loss 0.370475 Objective Loss 0.370475 LR 0.000025 Time 0.600109 +2025-05-16 13:31:17,522 - Epoch: [159][ 
400/ 813] Overall Loss 0.382664 Objective Loss 0.382664 LR 0.000025 Time 0.594667 +2025-05-16 13:32:16,157 - Epoch: [159][ 500/ 813] Overall Loss 0.388925 Objective Loss 0.388925 LR 0.000025 Time 0.592997 +2025-05-16 13:33:12,213 - Epoch: [159][ 600/ 813] Overall Loss 0.393425 Objective Loss 0.393425 LR 0.000025 Time 0.587588 +2025-05-16 13:34:12,982 - Epoch: [159][ 700/ 813] Overall Loss 0.400534 Objective Loss 0.400534 LR 0.000025 Time 0.590455 +2025-05-16 13:35:13,023 - Epoch: [159][ 800/ 813] Overall Loss 0.405582 Objective Loss 0.405582 LR 0.000025 Time 0.591697 +2025-05-16 13:35:19,438 - Epoch: [159][ 813/ 813] Overall Loss 0.405508 Objective Loss 0.405508 LR 0.000025 Time 0.590126 +2025-05-16 13:35:19,476 - --- validate (epoch=159)----------- +2025-05-16 13:35:19,477 - 3250 samples (16 per mini-batch) +2025-05-16 13:35:19,479 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 13:36:22,929 - Epoch: [159][ 100/ 204] Loss 0.697069 mAP 0.898919 +2025-05-16 13:37:21,066 - Epoch: [159][ 200/ 204] Loss 0.696389 mAP 0.898509 +2025-05-16 13:37:22,000 - Epoch: [159][ 204/ 204] Loss 0.691901 mAP 0.898521 +2025-05-16 13:37:22,035 - ==> mAP: 0.89852 Loss: 0.692 + +2025-05-16 13:37:22,055 - ==> Best [mAP: 0.908812 vloss: 0.658628 Params: 368352 on epoch: 153] +2025-05-16 13:37:22,055 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 13:37:22,091 - + +2025-05-16 13:37:22,091 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 13:38:19,101 - Epoch: [160][ 100/ 813] Overall Loss 0.362780 Objective Loss 0.362780 LR 0.000025 Time 0.570068 +2025-05-16 13:39:21,064 - Epoch: [160][ 200/ 813] Overall Loss 0.386308 Objective Loss 0.386308 LR 0.000025 Time 0.594835 +2025-05-16 13:40:22,223 - Epoch: [160][ 300/ 813] Overall Loss 0.395976 Objective Loss 0.395976 LR 0.000025 Time 0.600409 +2025-05-16 13:41:19,368 - 
Epoch: [160][ 400/ 813] Overall Loss 0.400076 Objective Loss 0.400076 LR 0.000025 Time 0.593161 +2025-05-16 13:42:19,205 - Epoch: [160][ 500/ 813] Overall Loss 0.401052 Objective Loss 0.401052 LR 0.000025 Time 0.594198 +2025-05-16 13:43:14,026 - Epoch: [160][ 600/ 813] Overall Loss 0.404943 Objective Loss 0.404943 LR 0.000025 Time 0.586526 +2025-05-16 13:44:13,202 - Epoch: [160][ 700/ 813] Overall Loss 0.412541 Objective Loss 0.412541 LR 0.000025 Time 0.587265 +2025-05-16 13:45:13,592 - Epoch: [160][ 800/ 813] Overall Loss 0.416329 Objective Loss 0.416329 LR 0.000025 Time 0.589331 +2025-05-16 13:45:19,652 - Epoch: [160][ 813/ 813] Overall Loss 0.416691 Objective Loss 0.416691 LR 0.000025 Time 0.587360 +2025-05-16 13:45:19,684 - --- validate (epoch=160)----------- +2025-05-16 13:45:19,685 - 3250 samples (16 per mini-batch) +2025-05-16 13:45:19,687 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 13:46:23,291 - Epoch: [160][ 100/ 204] Loss 0.696820 mAP 0.909157 +2025-05-16 13:47:23,095 - Epoch: [160][ 200/ 204] Loss 0.689919 mAP 0.909100 +2025-05-16 13:47:23,930 - Epoch: [160][ 204/ 204] Loss 0.687347 mAP 0.909136 +2025-05-16 13:47:23,962 - ==> mAP: 0.90914 Loss: 0.687 + +2025-05-16 13:47:23,971 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 13:47:23,971 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 13:47:24,164 - + +2025-05-16 13:47:24,164 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 13:48:22,252 - Epoch: [161][ 100/ 813] Overall Loss 0.336465 Objective Loss 0.336465 LR 0.000025 Time 0.580853 +2025-05-16 13:49:21,997 - Epoch: [161][ 200/ 813] Overall Loss 0.358805 Objective Loss 0.358805 LR 0.000025 Time 0.589136 +2025-05-16 13:50:22,305 - Epoch: [161][ 300/ 813] Overall Loss 0.371593 Objective Loss 0.371593 LR 0.000025 Time 0.593773 +2025-05-16 
13:51:20,102 - Epoch: [161][ 400/ 813] Overall Loss 0.379487 Objective Loss 0.379487 LR 0.000025 Time 0.589815 +2025-05-16 13:52:19,830 - Epoch: [161][ 500/ 813] Overall Loss 0.385750 Objective Loss 0.385750 LR 0.000025 Time 0.591301 +2025-05-16 13:53:16,410 - Epoch: [161][ 600/ 813] Overall Loss 0.394818 Objective Loss 0.394818 LR 0.000025 Time 0.587049 +2025-05-16 13:54:13,497 - Epoch: [161][ 700/ 813] Overall Loss 0.400940 Objective Loss 0.400940 LR 0.000025 Time 0.584734 +2025-05-16 13:55:14,687 - Epoch: [161][ 800/ 813] Overall Loss 0.402772 Objective Loss 0.402772 LR 0.000025 Time 0.588127 +2025-05-16 13:55:21,422 - Epoch: [161][ 813/ 813] Overall Loss 0.403751 Objective Loss 0.403751 LR 0.000025 Time 0.587005 +2025-05-16 13:55:21,470 - --- validate (epoch=161)----------- +2025-05-16 13:55:21,471 - 3250 samples (16 per mini-batch) +2025-05-16 13:55:21,473 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 13:56:24,353 - Epoch: [161][ 100/ 204] Loss 0.710558 mAP 0.898299 +2025-05-16 13:57:24,867 - Epoch: [161][ 200/ 204] Loss 0.674728 mAP 0.899065 +2025-05-16 13:57:26,207 - Epoch: [161][ 204/ 204] Loss 0.674549 mAP 0.899099 +2025-05-16 13:57:26,249 - ==> mAP: 0.89910 Loss: 0.675 + +2025-05-16 13:57:26,308 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 13:57:26,308 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 13:57:26,338 - + +2025-05-16 13:57:26,338 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 13:58:25,442 - Epoch: [162][ 100/ 813] Overall Loss 0.360467 Objective Loss 0.360467 LR 0.000025 Time 0.591008 +2025-05-16 13:59:21,606 - Epoch: [162][ 200/ 813] Overall Loss 0.375114 Objective Loss 0.375114 LR 0.000025 Time 0.576309 +2025-05-16 14:00:22,863 - Epoch: [162][ 300/ 813] Overall Loss 0.389649 Objective Loss 0.389649 LR 0.000025 Time 0.588386 
+2025-05-16 14:01:20,408 - Epoch: [162][ 400/ 813] Overall Loss 0.399306 Objective Loss 0.399306 LR 0.000025 Time 0.585142 +2025-05-16 14:02:19,873 - Epoch: [162][ 500/ 813] Overall Loss 0.403509 Objective Loss 0.403509 LR 0.000025 Time 0.586922 +2025-05-16 14:03:19,208 - Epoch: [162][ 600/ 813] Overall Loss 0.407271 Objective Loss 0.407271 LR 0.000025 Time 0.587990 +2025-05-16 14:04:14,269 - Epoch: [162][ 700/ 813] Overall Loss 0.409923 Objective Loss 0.409923 LR 0.000025 Time 0.582647 +2025-05-16 14:05:14,516 - Epoch: [162][ 800/ 813] Overall Loss 0.417355 Objective Loss 0.417355 LR 0.000025 Time 0.585120 +2025-05-16 14:05:21,552 - Epoch: [162][ 813/ 813] Overall Loss 0.417674 Objective Loss 0.417674 LR 0.000025 Time 0.584417 +2025-05-16 14:05:21,591 - --- validate (epoch=162)----------- +2025-05-16 14:05:21,592 - 3250 samples (16 per mini-batch) +2025-05-16 14:05:21,595 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 14:06:24,162 - Epoch: [162][ 100/ 204] Loss 0.698165 mAP 0.879997 +2025-05-16 14:07:25,494 - Epoch: [162][ 200/ 204] Loss 0.698489 mAP 0.889138 +2025-05-16 14:07:26,731 - Epoch: [162][ 204/ 204] Loss 0.697793 mAP 0.889159 +2025-05-16 14:07:26,765 - ==> mAP: 0.88916 Loss: 0.698 + +2025-05-16 14:07:26,826 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 14:07:26,826 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 14:07:26,864 - + +2025-05-16 14:07:26,864 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 14:08:26,854 - Epoch: [163][ 100/ 813] Overall Loss 0.362668 Objective Loss 0.362668 LR 0.000025 Time 0.599863 +2025-05-16 14:09:21,363 - Epoch: [163][ 200/ 813] Overall Loss 0.378470 Objective Loss 0.378470 LR 0.000025 Time 0.572465 +2025-05-16 14:10:20,861 - Epoch: [163][ 300/ 813] Overall Loss 0.386287 Objective Loss 0.386287 LR 0.000025 Time 
0.579949 +2025-05-16 14:11:21,278 - Epoch: [163][ 400/ 813] Overall Loss 0.391317 Objective Loss 0.391317 LR 0.000025 Time 0.585998 +2025-05-16 14:12:20,222 - Epoch: [163][ 500/ 813] Overall Loss 0.394864 Objective Loss 0.394864 LR 0.000025 Time 0.586674 +2025-05-16 14:13:19,936 - Epoch: [163][ 600/ 813] Overall Loss 0.397424 Objective Loss 0.397424 LR 0.000025 Time 0.588414 +2025-05-16 14:14:17,193 - Epoch: [163][ 700/ 813] Overall Loss 0.405546 Objective Loss 0.405546 LR 0.000025 Time 0.586147 +2025-05-16 14:15:17,142 - Epoch: [163][ 800/ 813] Overall Loss 0.410364 Objective Loss 0.410364 LR 0.000025 Time 0.587811 +2025-05-16 14:15:23,261 - Epoch: [163][ 813/ 813] Overall Loss 0.412011 Objective Loss 0.412011 LR 0.000025 Time 0.585938 +2025-05-16 14:15:23,298 - --- validate (epoch=163)----------- +2025-05-16 14:15:23,299 - 3250 samples (16 per mini-batch) +2025-05-16 14:15:23,302 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 14:16:26,606 - Epoch: [163][ 100/ 204] Loss 0.686638 mAP 0.900093 +2025-05-16 14:17:26,668 - Epoch: [163][ 200/ 204] Loss 0.674935 mAP 0.899779 +2025-05-16 14:17:28,052 - Epoch: [163][ 204/ 204] Loss 0.674063 mAP 0.899787 +2025-05-16 14:17:28,083 - ==> mAP: 0.89979 Loss: 0.674 + +2025-05-16 14:17:28,096 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 14:17:28,096 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 14:17:28,132 - + +2025-05-16 14:17:28,132 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 14:18:29,795 - Epoch: [164][ 100/ 813] Overall Loss 0.360023 Objective Loss 0.360023 LR 0.000025 Time 0.616590 +2025-05-16 14:19:25,901 - Epoch: [164][ 200/ 813] Overall Loss 0.367388 Objective Loss 0.367388 LR 0.000025 Time 0.588812 +2025-05-16 14:20:25,175 - Epoch: [164][ 300/ 813] Overall Loss 0.383491 Objective Loss 0.383491 LR 
0.000025 Time 0.590116 +2025-05-16 14:21:21,816 - Epoch: [164][ 400/ 813] Overall Loss 0.394319 Objective Loss 0.394319 LR 0.000025 Time 0.584182 +2025-05-16 14:22:21,382 - Epoch: [164][ 500/ 813] Overall Loss 0.396875 Objective Loss 0.396875 LR 0.000025 Time 0.586470 +2025-05-16 14:23:22,160 - Epoch: [164][ 600/ 813] Overall Loss 0.399081 Objective Loss 0.399081 LR 0.000025 Time 0.590017 +2025-05-16 14:24:17,408 - Epoch: [164][ 700/ 813] Overall Loss 0.401592 Objective Loss 0.401592 LR 0.000025 Time 0.584652 +2025-05-16 14:25:13,767 - Epoch: [164][ 800/ 813] Overall Loss 0.399809 Objective Loss 0.399809 LR 0.000025 Time 0.582016 +2025-05-16 14:25:20,693 - Epoch: [164][ 813/ 813] Overall Loss 0.401272 Objective Loss 0.401272 LR 0.000025 Time 0.581228 +2025-05-16 14:25:20,724 - --- validate (epoch=164)----------- +2025-05-16 14:25:20,727 - 3250 samples (16 per mini-batch) +2025-05-16 14:25:20,730 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 14:26:23,300 - Epoch: [164][ 100/ 204] Loss 0.650835 mAP 0.909190 +2025-05-16 14:27:26,531 - Epoch: [164][ 200/ 204] Loss 0.662340 mAP 0.899494 +2025-05-16 14:27:27,784 - Epoch: [164][ 204/ 204] Loss 0.663768 mAP 0.899366 +2025-05-16 14:27:27,828 - ==> mAP: 0.89937 Loss: 0.664 + +2025-05-16 14:27:27,889 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 14:27:27,889 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 14:27:27,918 - + +2025-05-16 14:27:27,918 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 14:28:30,962 - Epoch: [165][ 100/ 813] Overall Loss 0.377462 Objective Loss 0.377462 LR 0.000025 Time 0.630401 +2025-05-16 14:29:27,434 - Epoch: [165][ 200/ 813] Overall Loss 0.394943 Objective Loss 0.394943 LR 0.000025 Time 0.597548 +2025-05-16 14:30:25,911 - Epoch: [165][ 300/ 813] Overall Loss 0.408125 Objective Loss 
0.408125 LR 0.000025 Time 0.593280 +2025-05-16 14:31:23,554 - Epoch: [165][ 400/ 813] Overall Loss 0.406463 Objective Loss 0.406463 LR 0.000025 Time 0.589061 +2025-05-16 14:32:23,340 - Epoch: [165][ 500/ 813] Overall Loss 0.398013 Objective Loss 0.398013 LR 0.000025 Time 0.590806 +2025-05-16 14:33:25,412 - Epoch: [165][ 600/ 813] Overall Loss 0.401939 Objective Loss 0.401939 LR 0.000025 Time 0.595788 +2025-05-16 14:34:22,043 - Epoch: [165][ 700/ 813] Overall Loss 0.406426 Objective Loss 0.406426 LR 0.000025 Time 0.591573 +2025-05-16 14:35:18,110 - Epoch: [165][ 800/ 813] Overall Loss 0.407232 Objective Loss 0.407232 LR 0.000025 Time 0.587707 +2025-05-16 14:35:24,196 - Epoch: [165][ 813/ 813] Overall Loss 0.407893 Objective Loss 0.407893 LR 0.000025 Time 0.585792 +2025-05-16 14:35:24,232 - --- validate (epoch=165)----------- +2025-05-16 14:35:24,234 - 3250 samples (16 per mini-batch) +2025-05-16 14:35:24,236 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 14:36:28,685 - Epoch: [165][ 100/ 204] Loss 0.719076 mAP 0.908244 +2025-05-16 14:37:29,523 - Epoch: [165][ 200/ 204] Loss 0.706645 mAP 0.908886 +2025-05-16 14:37:30,781 - Epoch: [165][ 204/ 204] Loss 0.708525 mAP 0.908874 +2025-05-16 14:37:30,811 - ==> mAP: 0.90887 Loss: 0.709 + +2025-05-16 14:37:30,852 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 14:37:30,852 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 14:37:30,889 - + +2025-05-16 14:37:30,889 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 14:38:34,542 - Epoch: [166][ 100/ 813] Overall Loss 0.354090 Objective Loss 0.354090 LR 0.000025 Time 0.636489 +2025-05-16 14:39:31,274 - Epoch: [166][ 200/ 813] Overall Loss 0.353672 Objective Loss 0.353672 LR 0.000025 Time 0.601891 +2025-05-16 14:40:29,476 - Epoch: [166][ 300/ 813] Overall Loss 0.379914 
Objective Loss 0.379914 LR 0.000025 Time 0.595260 +2025-05-16 14:41:26,875 - Epoch: [166][ 400/ 813] Overall Loss 0.387375 Objective Loss 0.387375 LR 0.000025 Time 0.589934 +2025-05-16 14:42:28,376 - Epoch: [166][ 500/ 813] Overall Loss 0.390361 Objective Loss 0.390361 LR 0.000025 Time 0.594943 +2025-05-16 14:43:28,103 - Epoch: [166][ 600/ 813] Overall Loss 0.394287 Objective Loss 0.394287 LR 0.000025 Time 0.595325 +2025-05-16 14:44:28,117 - Epoch: [166][ 700/ 813] Overall Loss 0.397735 Objective Loss 0.397735 LR 0.000025 Time 0.596008 +2025-05-16 14:45:21,282 - Epoch: [166][ 800/ 813] Overall Loss 0.402118 Objective Loss 0.402118 LR 0.000025 Time 0.587961 +2025-05-16 14:45:27,659 - Epoch: [166][ 813/ 813] Overall Loss 0.401921 Objective Loss 0.401921 LR 0.000025 Time 0.586403 +2025-05-16 14:45:27,702 - --- validate (epoch=166)----------- +2025-05-16 14:45:27,706 - 3250 samples (16 per mini-batch) +2025-05-16 14:45:27,709 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 14:46:29,886 - Epoch: [166][ 100/ 204] Loss 0.668048 mAP 0.890521 +2025-05-16 14:47:30,561 - Epoch: [166][ 200/ 204] Loss 0.670346 mAP 0.900077 +2025-05-16 14:47:31,997 - Epoch: [166][ 204/ 204] Loss 0.675051 mAP 0.900000 +2025-05-16 14:47:32,034 - ==> mAP: 0.90000 Loss: 0.675 + +2025-05-16 14:47:32,067 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 14:47:32,067 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 14:47:32,103 - + +2025-05-16 14:47:32,103 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 14:48:35,850 - Epoch: [167][ 100/ 813] Overall Loss 0.397389 Objective Loss 0.397389 LR 0.000025 Time 0.637425 +2025-05-16 14:49:34,867 - Epoch: [167][ 200/ 813] Overall Loss 0.403438 Objective Loss 0.403438 LR 0.000025 Time 0.613745 +2025-05-16 14:50:29,743 - Epoch: [167][ 300/ 813] Overall Loss 
0.413240 Objective Loss 0.413240 LR 0.000025 Time 0.592075 +2025-05-16 14:51:29,213 - Epoch: [167][ 400/ 813] Overall Loss 0.414554 Objective Loss 0.414554 LR 0.000025 Time 0.592725 +2025-05-16 14:52:27,576 - Epoch: [167][ 500/ 813] Overall Loss 0.409498 Objective Loss 0.409498 LR 0.000025 Time 0.590900 +2025-05-16 14:53:28,913 - Epoch: [167][ 600/ 813] Overall Loss 0.403267 Objective Loss 0.403267 LR 0.000025 Time 0.594640 +2025-05-16 14:54:28,179 - Epoch: [167][ 700/ 813] Overall Loss 0.410236 Objective Loss 0.410236 LR 0.000025 Time 0.594352 +2025-05-16 14:55:22,638 - Epoch: [167][ 800/ 813] Overall Loss 0.410224 Objective Loss 0.410224 LR 0.000025 Time 0.588129 +2025-05-16 14:55:29,190 - Epoch: [167][ 813/ 813] Overall Loss 0.411094 Objective Loss 0.411094 LR 0.000025 Time 0.586783 +2025-05-16 14:55:29,223 - --- validate (epoch=167)----------- +2025-05-16 14:55:29,224 - 3250 samples (16 per mini-batch) +2025-05-16 14:55:29,227 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 14:56:29,785 - Epoch: [167][ 100/ 204] Loss 0.694620 mAP 0.908308 +2025-05-16 14:57:31,196 - Epoch: [167][ 200/ 204] Loss 0.691988 mAP 0.899271 +2025-05-16 14:57:32,727 - Epoch: [167][ 204/ 204] Loss 0.692401 mAP 0.899301 +2025-05-16 14:57:32,766 - ==> mAP: 0.89930 Loss: 0.692 + +2025-05-16 14:57:32,798 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 14:57:32,799 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 14:57:33,056 - + +2025-05-16 14:57:33,056 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 14:58:36,859 - Epoch: [168][ 100/ 813] Overall Loss 0.362003 Objective Loss 0.362003 LR 0.000025 Time 0.637983 +2025-05-16 14:59:36,189 - Epoch: [168][ 200/ 813] Overall Loss 0.388274 Objective Loss 0.388274 LR 0.000025 Time 0.615623 +2025-05-16 15:00:31,010 - Epoch: [168][ 300/ 813] 
Overall Loss 0.406966 Objective Loss 0.406966 LR 0.000025 Time 0.593148 +2025-05-16 15:01:27,456 - Epoch: [168][ 400/ 813] Overall Loss 0.399716 Objective Loss 0.399716 LR 0.000025 Time 0.585969 +2025-05-16 15:02:26,876 - Epoch: [168][ 500/ 813] Overall Loss 0.402797 Objective Loss 0.402797 LR 0.000025 Time 0.587610 +2025-05-16 15:03:26,579 - Epoch: [168][ 600/ 813] Overall Loss 0.401924 Objective Loss 0.401924 LR 0.000025 Time 0.589176 +2025-05-16 15:04:27,077 - Epoch: [168][ 700/ 813] Overall Loss 0.408778 Objective Loss 0.408778 LR 0.000025 Time 0.591428 +2025-05-16 15:05:25,358 - Epoch: [168][ 800/ 813] Overall Loss 0.412363 Objective Loss 0.412363 LR 0.000025 Time 0.590348 +2025-05-16 15:05:30,326 - Epoch: [168][ 813/ 813] Overall Loss 0.414167 Objective Loss 0.414167 LR 0.000025 Time 0.587011 +2025-05-16 15:05:30,357 - --- validate (epoch=168)----------- +2025-05-16 15:05:30,358 - 3250 samples (16 per mini-batch) +2025-05-16 15:05:30,360 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 15:06:28,177 - Epoch: [168][ 100/ 204] Loss 0.650219 mAP 0.909255 +2025-05-16 15:07:30,248 - Epoch: [168][ 200/ 204] Loss 0.656346 mAP 0.908622 +2025-05-16 15:07:32,509 - Epoch: [168][ 204/ 204] Loss 0.660597 mAP 0.908623 +2025-05-16 15:07:32,548 - ==> mAP: 0.90862 Loss: 0.661 + +2025-05-16 15:07:32,609 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 15:07:32,609 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 15:07:32,648 - + +2025-05-16 15:07:32,649 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 15:08:35,747 - Epoch: [169][ 100/ 813] Overall Loss 0.387146 Objective Loss 0.387146 LR 0.000025 Time 0.630948 +2025-05-16 15:09:37,637 - Epoch: [169][ 200/ 813] Overall Loss 0.372316 Objective Loss 0.372316 LR 0.000025 Time 0.624910 +2025-05-16 15:10:34,767 - Epoch: [169][ 
300/ 813] Overall Loss 0.391296 Objective Loss 0.391296 LR 0.000025 Time 0.607031 +2025-05-16 15:11:29,338 - Epoch: [169][ 400/ 813] Overall Loss 0.393044 Objective Loss 0.393044 LR 0.000025 Time 0.591696 +2025-05-16 15:12:29,571 - Epoch: [169][ 500/ 813] Overall Loss 0.392471 Objective Loss 0.392471 LR 0.000025 Time 0.593785 +2025-05-16 15:13:30,773 - Epoch: [169][ 600/ 813] Overall Loss 0.394897 Objective Loss 0.394897 LR 0.000025 Time 0.596818 +2025-05-16 15:14:30,566 - Epoch: [169][ 700/ 813] Overall Loss 0.403973 Objective Loss 0.403973 LR 0.000025 Time 0.596974 +2025-05-16 15:15:28,474 - Epoch: [169][ 800/ 813] Overall Loss 0.403921 Objective Loss 0.403921 LR 0.000025 Time 0.594733 +2025-05-16 15:15:34,062 - Epoch: [169][ 813/ 813] Overall Loss 0.406430 Objective Loss 0.406430 LR 0.000025 Time 0.592096 +2025-05-16 15:15:34,094 - --- validate (epoch=169)----------- +2025-05-16 15:15:34,095 - 3250 samples (16 per mini-batch) +2025-05-16 15:15:34,097 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 15:16:30,830 - Epoch: [169][ 100/ 204] Loss 0.652384 mAP 0.909300 +2025-05-16 15:17:31,586 - Epoch: [169][ 200/ 204] Loss 0.671346 mAP 0.898987 +2025-05-16 15:17:33,074 - Epoch: [169][ 204/ 204] Loss 0.669578 mAP 0.899023 +2025-05-16 15:17:33,109 - ==> mAP: 0.89902 Loss: 0.670 + +2025-05-16 15:17:33,170 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 15:17:33,170 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 15:17:33,200 - + +2025-05-16 15:17:33,200 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 15:18:37,425 - Epoch: [170][ 100/ 813] Overall Loss 0.373217 Objective Loss 0.373217 LR 0.000025 Time 0.642197 +2025-05-16 15:19:37,129 - Epoch: [170][ 200/ 813] Overall Loss 0.393403 Objective Loss 0.393403 LR 0.000025 Time 0.619606 +2025-05-16 15:20:36,751 - 
Epoch: [170][ 300/ 813] Overall Loss 0.400129 Objective Loss 0.400129 LR 0.000025 Time 0.611802 +2025-05-16 15:21:28,653 - Epoch: [170][ 400/ 813] Overall Loss 0.405414 Objective Loss 0.405414 LR 0.000025 Time 0.588601 +2025-05-16 15:22:27,398 - Epoch: [170][ 500/ 813] Overall Loss 0.406898 Objective Loss 0.406898 LR 0.000025 Time 0.588365 +2025-05-16 15:23:27,556 - Epoch: [170][ 600/ 813] Overall Loss 0.406688 Objective Loss 0.406688 LR 0.000025 Time 0.590564 +2025-05-16 15:24:30,470 - Epoch: [170][ 700/ 813] Overall Loss 0.410201 Objective Loss 0.410201 LR 0.000025 Time 0.596069 +2025-05-16 15:25:31,312 - Epoch: [170][ 800/ 813] Overall Loss 0.412859 Objective Loss 0.412859 LR 0.000025 Time 0.597610 +2025-05-16 15:25:35,755 - Epoch: [170][ 813/ 813] Overall Loss 0.413455 Objective Loss 0.413455 LR 0.000025 Time 0.593518 +2025-05-16 15:25:35,790 - --- validate (epoch=170)----------- +2025-05-16 15:25:35,792 - 3250 samples (16 per mini-batch) +2025-05-16 15:25:35,794 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 15:26:34,217 - Epoch: [170][ 100/ 204] Loss 0.624755 mAP 0.909543 +2025-05-16 15:27:32,281 - Epoch: [170][ 200/ 204] Loss 0.672715 mAP 0.899129 +2025-05-16 15:27:33,600 - Epoch: [170][ 204/ 204] Loss 0.676103 mAP 0.899006 +2025-05-16 15:27:33,638 - ==> mAP: 0.89901 Loss: 0.676 + +2025-05-16 15:27:33,667 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 15:27:33,667 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 15:27:33,704 - + +2025-05-16 15:27:33,704 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 15:28:36,438 - Epoch: [171][ 100/ 813] Overall Loss 0.359079 Objective Loss 0.359079 LR 0.000025 Time 0.627303 +2025-05-16 15:29:38,085 - Epoch: [171][ 200/ 813] Overall Loss 0.368452 Objective Loss 0.368452 LR 0.000025 Time 0.621873 +2025-05-16 
15:30:37,699 - Epoch: [171][ 300/ 813] Overall Loss 0.372860 Objective Loss 0.372860 LR 0.000025 Time 0.613285 +2025-05-16 15:31:31,267 - Epoch: [171][ 400/ 813] Overall Loss 0.374215 Objective Loss 0.374215 LR 0.000025 Time 0.593879 +2025-05-16 15:32:29,817 - Epoch: [171][ 500/ 813] Overall Loss 0.375869 Objective Loss 0.375869 LR 0.000025 Time 0.592197 +2025-05-16 15:33:30,381 - Epoch: [171][ 600/ 813] Overall Loss 0.380130 Objective Loss 0.380130 LR 0.000025 Time 0.594431 +2025-05-16 15:34:30,962 - Epoch: [171][ 700/ 813] Overall Loss 0.386642 Objective Loss 0.386642 LR 0.000025 Time 0.596052 +2025-05-16 15:35:30,336 - Epoch: [171][ 800/ 813] Overall Loss 0.386362 Objective Loss 0.386362 LR 0.000025 Time 0.595759 +2025-05-16 15:35:36,400 - Epoch: [171][ 813/ 813] Overall Loss 0.387328 Objective Loss 0.387328 LR 0.000025 Time 0.593692 +2025-05-16 15:35:36,439 - --- validate (epoch=171)----------- +2025-05-16 15:35:36,442 - 3250 samples (16 per mini-batch) +2025-05-16 15:35:36,445 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 15:36:37,846 - Epoch: [171][ 100/ 204] Loss 0.672325 mAP 0.909059 +2025-05-16 15:37:33,819 - Epoch: [171][ 200/ 204] Loss 0.693524 mAP 0.908805 +2025-05-16 15:37:35,015 - Epoch: [171][ 204/ 204] Loss 0.695202 mAP 0.908778 +2025-05-16 15:37:35,053 - ==> mAP: 0.90878 Loss: 0.695 + +2025-05-16 15:37:35,114 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 15:37:35,114 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 15:37:35,150 - + +2025-05-16 15:37:35,150 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 15:38:38,392 - Epoch: [172][ 100/ 813] Overall Loss 0.337849 Objective Loss 0.337849 LR 0.000025 Time 0.632378 +2025-05-16 15:39:41,673 - Epoch: [172][ 200/ 813] Overall Loss 0.371881 Objective Loss 0.371881 LR 0.000025 Time 0.632579 
+2025-05-16 15:40:42,280 - Epoch: [172][ 300/ 813] Overall Loss 0.390164 Objective Loss 0.390164 LR 0.000025 Time 0.623733 +2025-05-16 15:41:35,795 - Epoch: [172][ 400/ 813] Overall Loss 0.403061 Objective Loss 0.403061 LR 0.000025 Time 0.601580 +2025-05-16 15:42:30,648 - Epoch: [172][ 500/ 813] Overall Loss 0.404514 Objective Loss 0.404514 LR 0.000025 Time 0.590966 +2025-05-16 15:43:32,202 - Epoch: [172][ 600/ 813] Overall Loss 0.399057 Objective Loss 0.399057 LR 0.000025 Time 0.595057 +2025-05-16 15:44:33,364 - Epoch: [172][ 700/ 813] Overall Loss 0.403045 Objective Loss 0.403045 LR 0.000025 Time 0.597419 +2025-05-16 15:45:35,130 - Epoch: [172][ 800/ 813] Overall Loss 0.404743 Objective Loss 0.404743 LR 0.000025 Time 0.599945 +2025-05-16 15:45:40,638 - Epoch: [172][ 813/ 813] Overall Loss 0.405938 Objective Loss 0.405938 LR 0.000025 Time 0.597127 +2025-05-16 15:45:40,669 - --- validate (epoch=172)----------- +2025-05-16 15:45:40,670 - 3250 samples (16 per mini-batch) +2025-05-16 15:45:40,672 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 15:46:39,752 - Epoch: [172][ 100/ 204] Loss 0.697587 mAP 0.908536 +2025-05-16 15:47:34,875 - Epoch: [172][ 200/ 204] Loss 0.675307 mAP 0.908894 +2025-05-16 15:47:36,553 - Epoch: [172][ 204/ 204] Loss 0.671874 mAP 0.908914 +2025-05-16 15:47:36,592 - ==> mAP: 0.90891 Loss: 0.672 + +2025-05-16 15:47:36,606 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 15:47:36,606 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 15:47:36,642 - + +2025-05-16 15:47:36,642 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 15:48:40,860 - Epoch: [173][ 100/ 813] Overall Loss 0.368717 Objective Loss 0.368717 LR 0.000025 Time 0.642137 +2025-05-16 15:49:42,721 - Epoch: [173][ 200/ 813] Overall Loss 0.367688 Objective Loss 0.367688 LR 0.000025 Time 
0.630358 +2025-05-16 15:50:41,585 - Epoch: [173][ 300/ 813] Overall Loss 0.373084 Objective Loss 0.373084 LR 0.000025 Time 0.616244 +2025-05-16 15:51:36,692 - Epoch: [173][ 400/ 813] Overall Loss 0.385963 Objective Loss 0.385963 LR 0.000025 Time 0.599944 +2025-05-16 15:52:31,050 - Epoch: [173][ 500/ 813] Overall Loss 0.385969 Objective Loss 0.385969 LR 0.000025 Time 0.588666 +2025-05-16 15:53:30,462 - Epoch: [173][ 600/ 813] Overall Loss 0.388948 Objective Loss 0.388948 LR 0.000025 Time 0.589568 +2025-05-16 15:54:31,317 - Epoch: [173][ 700/ 813] Overall Loss 0.396993 Objective Loss 0.396993 LR 0.000025 Time 0.592259 +2025-05-16 15:55:31,964 - Epoch: [173][ 800/ 813] Overall Loss 0.398663 Objective Loss 0.398663 LR 0.000025 Time 0.594032 +2025-05-16 15:55:38,331 - Epoch: [173][ 813/ 813] Overall Loss 0.399227 Objective Loss 0.399227 LR 0.000025 Time 0.592364 +2025-05-16 15:55:38,372 - --- validate (epoch=173)----------- +2025-05-16 15:55:38,373 - 3250 samples (16 per mini-batch) +2025-05-16 15:55:38,375 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 15:56:40,463 - Epoch: [173][ 100/ 204] Loss 0.703947 mAP 0.898736 +2025-05-16 15:57:35,593 - Epoch: [173][ 200/ 204] Loss 0.709481 mAP 0.899024 +2025-05-16 15:57:36,353 - Epoch: [173][ 204/ 204] Loss 0.708202 mAP 0.899059 +2025-05-16 15:57:36,385 - ==> mAP: 0.89906 Loss: 0.708 + +2025-05-16 15:57:36,393 - ==> Best [mAP: 0.909136 vloss: 0.687347 Params: 368352 on epoch: 160] +2025-05-16 15:57:36,393 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 15:57:36,428 - + +2025-05-16 15:57:36,429 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 15:58:37,222 - Epoch: [174][ 100/ 813] Overall Loss 0.364834 Objective Loss 0.364834 LR 0.000025 Time 0.607903 +2025-05-16 15:59:36,709 - Epoch: [174][ 200/ 813] Overall Loss 0.366384 Objective Loss 0.366384 LR 
0.000025 Time 0.601370 +2025-05-16 16:00:38,701 - Epoch: [174][ 300/ 813] Overall Loss 0.380641 Objective Loss 0.380641 LR 0.000025 Time 0.607545 +2025-05-16 16:01:36,609 - Epoch: [174][ 400/ 813] Overall Loss 0.384193 Objective Loss 0.384193 LR 0.000025 Time 0.600420 +2025-05-16 16:02:31,808 - Epoch: [174][ 500/ 813] Overall Loss 0.390715 Objective Loss 0.390715 LR 0.000025 Time 0.590731 +2025-05-16 16:03:28,685 - Epoch: [174][ 600/ 813] Overall Loss 0.393254 Objective Loss 0.393254 LR 0.000025 Time 0.587063 +2025-05-16 16:04:29,431 - Epoch: [174][ 700/ 813] Overall Loss 0.398181 Objective Loss 0.398181 LR 0.000025 Time 0.589973 +2025-05-16 16:05:28,740 - Epoch: [174][ 800/ 813] Overall Loss 0.399316 Objective Loss 0.399316 LR 0.000025 Time 0.590359 +2025-05-16 16:05:35,774 - Epoch: [174][ 813/ 813] Overall Loss 0.400231 Objective Loss 0.400231 LR 0.000025 Time 0.589571 +2025-05-16 16:05:35,813 - --- validate (epoch=174)----------- +2025-05-16 16:05:35,816 - 3250 samples (16 per mini-batch) +2025-05-16 16:05:35,819 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 16:06:37,500 - Epoch: [174][ 100/ 204] Loss 0.692592 mAP 0.919080 +2025-05-16 16:07:35,716 - Epoch: [174][ 200/ 204] Loss 0.694924 mAP 0.909366 +2025-05-16 16:07:36,535 - Epoch: [174][ 204/ 204] Loss 0.690901 mAP 0.909379 +2025-05-16 16:07:36,570 - ==> mAP: 0.90938 Loss: 0.691 + +2025-05-16 16:07:36,626 - ==> Best [mAP: 0.909379 vloss: 0.690901 Params: 368352 on epoch: 174] +2025-05-16 16:07:36,626 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 16:07:36,666 - + +2025-05-16 16:07:36,666 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 16:08:34,175 - Epoch: [175][ 100/ 813] Overall Loss 0.395528 Objective Loss 0.395528 LR 0.000025 Time 0.575055 +2025-05-16 16:09:36,330 - Epoch: [175][ 200/ 813] Overall Loss 0.381734 Objective Loss 
0.381734 LR 0.000025 Time 0.598282 +2025-05-16 16:10:36,696 - Epoch: [175][ 300/ 813] Overall Loss 0.392619 Objective Loss 0.392619 LR 0.000025 Time 0.600064 +2025-05-16 16:11:36,338 - Epoch: [175][ 400/ 813] Overall Loss 0.398517 Objective Loss 0.398517 LR 0.000025 Time 0.599146 +2025-05-16 16:12:34,514 - Epoch: [175][ 500/ 813] Overall Loss 0.393922 Objective Loss 0.393922 LR 0.000025 Time 0.595664 +2025-05-16 16:13:30,081 - Epoch: [175][ 600/ 813] Overall Loss 0.398670 Objective Loss 0.398670 LR 0.000025 Time 0.588994 +2025-05-16 16:14:30,259 - Epoch: [175][ 700/ 813] Overall Loss 0.399513 Objective Loss 0.399513 LR 0.000025 Time 0.590817 +2025-05-16 16:15:31,947 - Epoch: [175][ 800/ 813] Overall Loss 0.397282 Objective Loss 0.397282 LR 0.000025 Time 0.594071 +2025-05-16 16:15:36,340 - Epoch: [175][ 813/ 813] Overall Loss 0.396590 Objective Loss 0.396590 LR 0.000025 Time 0.589975 +2025-05-16 16:15:36,374 - --- validate (epoch=175)----------- +2025-05-16 16:15:36,376 - 3250 samples (16 per mini-batch) +2025-05-16 16:15:36,378 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 16:16:38,609 - Epoch: [175][ 100/ 204] Loss 0.667624 mAP 0.909694 +2025-05-16 16:17:39,647 - Epoch: [175][ 200/ 204] Loss 0.665872 mAP 0.899272 +2025-05-16 16:17:40,514 - Epoch: [175][ 204/ 204] Loss 0.676971 mAP 0.899286 +2025-05-16 16:17:40,548 - ==> mAP: 0.89929 Loss: 0.677 + +2025-05-16 16:17:40,577 - ==> Best [mAP: 0.909379 vloss: 0.690901 Params: 368352 on epoch: 174] +2025-05-16 16:17:40,577 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 16:17:40,613 - + +2025-05-16 16:17:40,613 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 16:18:40,436 - Epoch: [176][ 100/ 813] Overall Loss 0.373286 Objective Loss 0.373286 LR 0.000025 Time 0.598197 +2025-05-16 16:19:38,537 - Epoch: [176][ 200/ 813] Overall Loss 0.391785 
Objective Loss 0.391785 LR 0.000025 Time 0.589593 +2025-05-16 16:20:40,195 - Epoch: [176][ 300/ 813] Overall Loss 0.408340 Objective Loss 0.408340 LR 0.000025 Time 0.598580 +2025-05-16 16:21:37,371 - Epoch: [176][ 400/ 813] Overall Loss 0.406270 Objective Loss 0.406270 LR 0.000025 Time 0.591864 +2025-05-16 16:22:37,485 - Epoch: [176][ 500/ 813] Overall Loss 0.407765 Objective Loss 0.407765 LR 0.000025 Time 0.593714 +2025-05-16 16:23:32,887 - Epoch: [176][ 600/ 813] Overall Loss 0.411968 Objective Loss 0.411968 LR 0.000025 Time 0.587095 +2025-05-16 16:24:30,926 - Epoch: [176][ 700/ 813] Overall Loss 0.410726 Objective Loss 0.410726 LR 0.000025 Time 0.586134 +2025-05-16 16:25:31,347 - Epoch: [176][ 800/ 813] Overall Loss 0.418872 Objective Loss 0.418872 LR 0.000025 Time 0.588389 +2025-05-16 16:25:37,847 - Epoch: [176][ 813/ 813] Overall Loss 0.421303 Objective Loss 0.421303 LR 0.000025 Time 0.586976 +2025-05-16 16:25:37,886 - --- validate (epoch=176)----------- +2025-05-16 16:25:37,888 - 3250 samples (16 per mini-batch) +2025-05-16 16:25:37,891 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 16:26:41,167 - Epoch: [176][ 100/ 204] Loss 0.676816 mAP 0.900031 +2025-05-16 16:27:42,039 - Epoch: [176][ 200/ 204] Loss 0.692773 mAP 0.899757 +2025-05-16 16:27:43,556 - Epoch: [176][ 204/ 204] Loss 0.694426 mAP 0.899714 +2025-05-16 16:27:43,593 - ==> mAP: 0.89971 Loss: 0.694 + +2025-05-16 16:27:43,656 - ==> Best [mAP: 0.909379 vloss: 0.690901 Params: 368352 on epoch: 174] +2025-05-16 16:27:43,656 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 16:27:43,694 - + +2025-05-16 16:27:43,694 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 16:28:42,908 - Epoch: [177][ 100/ 813] Overall Loss 0.377448 Objective Loss 0.377448 LR 0.000025 Time 0.592109 +2025-05-16 16:29:37,998 - Epoch: [177][ 200/ 813] Overall Loss 
0.380355 Objective Loss 0.380355 LR 0.000025 Time 0.571494 +2025-05-16 16:30:31,576 - Epoch: [177][ 300/ 813] Overall Loss 0.387993 Objective Loss 0.387993 LR 0.000025 Time 0.559581 +2025-05-16 16:31:20,761 - Epoch: [177][ 400/ 813] Overall Loss 0.389311 Objective Loss 0.389311 LR 0.000025 Time 0.542645 +2025-05-16 16:32:12,688 - Epoch: [177][ 500/ 813] Overall Loss 0.393916 Objective Loss 0.393916 LR 0.000025 Time 0.537966 +2025-05-16 16:33:07,705 - Epoch: [177][ 600/ 813] Overall Loss 0.399679 Objective Loss 0.399679 LR 0.000025 Time 0.539997 +2025-05-16 16:34:05,493 - Epoch: [177][ 700/ 813] Overall Loss 0.401801 Objective Loss 0.401801 LR 0.000025 Time 0.545405 +2025-05-16 16:35:04,118 - Epoch: [177][ 800/ 813] Overall Loss 0.405858 Objective Loss 0.405858 LR 0.000025 Time 0.550509 +2025-05-16 16:35:10,159 - Epoch: [177][ 813/ 813] Overall Loss 0.407649 Objective Loss 0.407649 LR 0.000025 Time 0.549134 +2025-05-16 16:35:10,193 - --- validate (epoch=177)----------- +2025-05-16 16:35:10,194 - 3250 samples (16 per mini-batch) +2025-05-16 16:35:10,197 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 16:36:08,962 - Epoch: [177][ 100/ 204] Loss 0.729969 mAP 0.908677 +2025-05-16 16:37:07,568 - Epoch: [177][ 200/ 204] Loss 0.696489 mAP 0.908697 +2025-05-16 16:37:08,058 - Epoch: [177][ 204/ 204] Loss 0.693268 mAP 0.908664 +2025-05-16 16:37:08,085 - ==> mAP: 0.90866 Loss: 0.693 + +2025-05-16 16:37:08,090 - ==> Best [mAP: 0.909379 vloss: 0.690901 Params: 368352 on epoch: 174] +2025-05-16 16:37:08,090 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 16:37:08,119 - + +2025-05-16 16:37:08,119 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 16:38:09,248 - Epoch: [178][ 100/ 813] Overall Loss 0.347501 Objective Loss 0.347501 LR 0.000025 Time 0.611259 +2025-05-16 16:39:07,648 - Epoch: [178][ 200/ 813] 
Overall Loss 0.342553 Objective Loss 0.342553 LR 0.000025 Time 0.597616 +2025-05-16 16:40:05,974 - Epoch: [178][ 300/ 813] Overall Loss 0.356213 Objective Loss 0.356213 LR 0.000025 Time 0.592824 +2025-05-16 16:41:00,823 - Epoch: [178][ 400/ 813] Overall Loss 0.368382 Objective Loss 0.368382 LR 0.000025 Time 0.581735 +2025-05-16 16:41:59,746 - Epoch: [178][ 500/ 813] Overall Loss 0.375789 Objective Loss 0.375789 LR 0.000025 Time 0.583231 +2025-05-16 16:42:56,703 - Epoch: [178][ 600/ 813] Overall Loss 0.381894 Objective Loss 0.381894 LR 0.000025 Time 0.580951 +2025-05-16 16:43:53,978 - Epoch: [178][ 700/ 813] Overall Loss 0.385127 Objective Loss 0.385127 LR 0.000025 Time 0.579777 +2025-05-16 16:44:49,900 - Epoch: [178][ 800/ 813] Overall Loss 0.390041 Objective Loss 0.390041 LR 0.000025 Time 0.577205 +2025-05-16 16:44:55,494 - Epoch: [178][ 813/ 813] Overall Loss 0.391315 Objective Loss 0.391315 LR 0.000025 Time 0.574856 +2025-05-16 16:44:55,522 - --- validate (epoch=178)----------- +2025-05-16 16:44:55,525 - 3250 samples (16 per mini-batch) +2025-05-16 16:44:55,527 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 16:45:55,276 - Epoch: [178][ 100/ 204] Loss 0.693817 mAP 0.899206 +2025-05-16 16:46:52,103 - Epoch: [178][ 200/ 204] Loss 0.688430 mAP 0.899023 +2025-05-16 16:46:53,206 - Epoch: [178][ 204/ 204] Loss 0.691685 mAP 0.899008 +2025-05-16 16:46:53,240 - ==> mAP: 0.89901 Loss: 0.692 + +2025-05-16 16:46:53,246 - ==> Best [mAP: 0.909379 vloss: 0.690901 Params: 368352 on epoch: 174] +2025-05-16 16:46:53,246 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 16:46:53,284 - + +2025-05-16 16:46:53,285 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 16:47:53,937 - Epoch: [179][ 100/ 813] Overall Loss 0.368084 Objective Loss 0.368084 LR 0.000025 Time 0.606498 +2025-05-16 16:48:49,322 - Epoch: [179][ 
200/ 813] Overall Loss 0.357552 Objective Loss 0.357552 LR 0.000025 Time 0.580164 +2025-05-16 16:49:46,737 - Epoch: [179][ 300/ 813] Overall Loss 0.369544 Objective Loss 0.369544 LR 0.000025 Time 0.578153 +2025-05-16 16:50:43,204 - Epoch: [179][ 400/ 813] Overall Loss 0.382244 Objective Loss 0.382244 LR 0.000025 Time 0.574777 +2025-05-16 16:51:38,545 - Epoch: [179][ 500/ 813] Overall Loss 0.389319 Objective Loss 0.389319 LR 0.000025 Time 0.570500 +2025-05-16 16:52:36,766 - Epoch: [179][ 600/ 813] Overall Loss 0.391921 Objective Loss 0.391921 LR 0.000025 Time 0.572445 +2025-05-16 16:53:33,888 - Epoch: [179][ 700/ 813] Overall Loss 0.396285 Objective Loss 0.396285 LR 0.000025 Time 0.572268 +2025-05-16 16:54:30,651 - Epoch: [179][ 800/ 813] Overall Loss 0.400000 Objective Loss 0.400000 LR 0.000025 Time 0.571686 +2025-05-16 16:54:35,449 - Epoch: [179][ 813/ 813] Overall Loss 0.401291 Objective Loss 0.401291 LR 0.000025 Time 0.568446 +2025-05-16 16:54:35,481 - --- validate (epoch=179)----------- +2025-05-16 16:54:35,482 - 3250 samples (16 per mini-batch) +2025-05-16 16:54:35,484 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 16:55:34,042 - Epoch: [179][ 100/ 204] Loss 0.696749 mAP 0.889762 +2025-05-16 16:56:31,487 - Epoch: [179][ 200/ 204] Loss 0.684054 mAP 0.890034 +2025-05-16 16:56:32,348 - Epoch: [179][ 204/ 204] Loss 0.683164 mAP 0.890050 +2025-05-16 16:56:32,381 - ==> mAP: 0.89005 Loss: 0.683 + +2025-05-16 16:56:32,385 - ==> Best [mAP: 0.909379 vloss: 0.690901 Params: 368352 on epoch: 174] +2025-05-16 16:56:32,385 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 16:56:32,419 - + +2025-05-16 16:56:32,419 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 16:57:30,032 - Epoch: [180][ 100/ 813] Overall Loss 0.396705 Objective Loss 0.396705 LR 0.000025 Time 0.576097 +2025-05-16 16:58:27,334 - 
Epoch: [180][ 200/ 813] Overall Loss 0.380691 Objective Loss 0.380691 LR 0.000025 Time 0.574550 +2025-05-16 16:59:25,287 - Epoch: [180][ 300/ 813] Overall Loss 0.387313 Objective Loss 0.387313 LR 0.000025 Time 0.576202 +2025-05-16 17:00:20,664 - Epoch: [180][ 400/ 813] Overall Loss 0.390562 Objective Loss 0.390562 LR 0.000025 Time 0.570589 +2025-05-16 17:01:17,647 - Epoch: [180][ 500/ 813] Overall Loss 0.394563 Objective Loss 0.394563 LR 0.000025 Time 0.570433 +2025-05-16 17:02:13,676 - Epoch: [180][ 600/ 813] Overall Loss 0.396394 Objective Loss 0.396394 LR 0.000025 Time 0.568741 +2025-05-16 17:03:11,976 - Epoch: [180][ 700/ 813] Overall Loss 0.403840 Objective Loss 0.403840 LR 0.000025 Time 0.570775 +2025-05-16 17:04:08,842 - Epoch: [180][ 800/ 813] Overall Loss 0.405917 Objective Loss 0.405917 LR 0.000025 Time 0.570508 +2025-05-16 17:04:13,769 - Epoch: [180][ 813/ 813] Overall Loss 0.408787 Objective Loss 0.408787 LR 0.000025 Time 0.567445 +2025-05-16 17:04:13,798 - --- validate (epoch=180)----------- +2025-05-16 17:04:13,799 - 3250 samples (16 per mini-batch) +2025-05-16 17:04:13,801 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 17:05:12,129 - Epoch: [180][ 100/ 204] Loss 0.697948 mAP 0.899246 +2025-05-16 17:06:08,904 - Epoch: [180][ 200/ 204] Loss 0.714792 mAP 0.898722 +2025-05-16 17:06:09,874 - Epoch: [180][ 204/ 204] Loss 0.710281 mAP 0.898764 +2025-05-16 17:06:09,904 - ==> mAP: 0.89876 Loss: 0.710 + +2025-05-16 17:06:09,908 - ==> Best [mAP: 0.909379 vloss: 0.690901 Params: 368352 on epoch: 174] +2025-05-16 17:06:09,908 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 17:06:09,944 - + +2025-05-16 17:06:09,944 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 17:07:08,225 - Epoch: [181][ 100/ 813] Overall Loss 0.392192 Objective Loss 0.392192 LR 0.000025 Time 0.582786 +2025-05-16 
17:08:05,576 - Epoch: [181][ 200/ 813] Overall Loss 0.377657 Objective Loss 0.377657 LR 0.000025 Time 0.578137 +2025-05-16 17:09:01,858 - Epoch: [181][ 300/ 813] Overall Loss 0.383051 Objective Loss 0.383051 LR 0.000025 Time 0.573027 +2025-05-16 17:09:55,476 - Epoch: [181][ 400/ 813] Overall Loss 0.384876 Objective Loss 0.384876 LR 0.000025 Time 0.563809 +2025-05-16 17:10:50,959 - Epoch: [181][ 500/ 813] Overall Loss 0.387652 Objective Loss 0.387652 LR 0.000025 Time 0.562011 +2025-05-16 17:11:49,224 - Epoch: [181][ 600/ 813] Overall Loss 0.394482 Objective Loss 0.394482 LR 0.000025 Time 0.565448 +2025-05-16 17:12:43,842 - Epoch: [181][ 700/ 813] Overall Loss 0.403193 Objective Loss 0.403193 LR 0.000025 Time 0.562683 +2025-05-16 17:13:39,393 - Epoch: [181][ 800/ 813] Overall Loss 0.403738 Objective Loss 0.403738 LR 0.000025 Time 0.561784 +2025-05-16 17:13:45,877 - Epoch: [181][ 813/ 813] Overall Loss 0.405827 Objective Loss 0.405827 LR 0.000025 Time 0.560776 +2025-05-16 17:13:45,913 - --- validate (epoch=181)----------- +2025-05-16 17:13:45,914 - 3250 samples (16 per mini-batch) +2025-05-16 17:13:45,916 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 17:14:43,632 - Epoch: [181][ 100/ 204] Loss 0.668386 mAP 0.909870 +2025-05-16 17:15:39,778 - Epoch: [181][ 200/ 204] Loss 0.689852 mAP 0.908973 +2025-05-16 17:15:41,055 - Epoch: [181][ 204/ 204] Loss 0.708677 mAP 0.908919 +2025-05-16 17:15:41,086 - ==> mAP: 0.90892 Loss: 0.709 + +2025-05-16 17:15:41,090 - ==> Best [mAP: 0.909379 vloss: 0.690901 Params: 368352 on epoch: 174] +2025-05-16 17:15:41,090 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 17:15:41,126 - + +2025-05-16 17:15:41,126 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 17:16:40,573 - Epoch: [182][ 100/ 813] Overall Loss 0.353248 Objective Loss 0.353248 LR 0.000025 Time 0.594439 
+2025-05-16 17:17:36,502 - Epoch: [182][ 200/ 813] Overall Loss 0.380315 Objective Loss 0.380315 LR 0.000025 Time 0.576842 +2025-05-16 17:18:32,837 - Epoch: [182][ 300/ 813] Overall Loss 0.389711 Objective Loss 0.389711 LR 0.000025 Time 0.572341 +2025-05-16 17:19:26,150 - Epoch: [182][ 400/ 813] Overall Loss 0.399209 Objective Loss 0.399209 LR 0.000025 Time 0.562532 +2025-05-16 17:20:23,272 - Epoch: [182][ 500/ 813] Overall Loss 0.405250 Objective Loss 0.405250 LR 0.000025 Time 0.564266 +2025-05-16 17:21:19,147 - Epoch: [182][ 600/ 813] Overall Loss 0.406606 Objective Loss 0.406606 LR 0.000025 Time 0.563344 +2025-05-16 17:22:15,389 - Epoch: [182][ 700/ 813] Overall Loss 0.413508 Objective Loss 0.413508 LR 0.000025 Time 0.563209 +2025-05-16 17:23:09,930 - Epoch: [182][ 800/ 813] Overall Loss 0.416096 Objective Loss 0.416096 LR 0.000025 Time 0.560983 +2025-05-16 17:23:16,674 - Epoch: [182][ 813/ 813] Overall Loss 0.416275 Objective Loss 0.416275 LR 0.000025 Time 0.560306 +2025-05-16 17:23:16,698 - --- validate (epoch=182)----------- +2025-05-16 17:23:16,699 - 3250 samples (16 per mini-batch) +2025-05-16 17:23:16,701 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 17:24:14,535 - Epoch: [182][ 100/ 204] Loss 0.692153 mAP 0.909320 +2025-05-16 17:25:11,035 - Epoch: [182][ 200/ 204] Loss 0.681781 mAP 0.909695 +2025-05-16 17:25:12,092 - Epoch: [182][ 204/ 204] Loss 0.678735 mAP 0.909718 +2025-05-16 17:25:12,120 - ==> mAP: 0.90972 Loss: 0.679 + +2025-05-16 17:25:12,124 - ==> Best [mAP: 0.909718 vloss: 0.678735 Params: 368352 on epoch: 182] +2025-05-16 17:25:12,124 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 17:25:12,163 - + +2025-05-16 17:25:12,164 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 17:26:16,319 - Epoch: [183][ 100/ 813] Overall Loss 0.369485 Objective Loss 0.369485 LR 0.000025 Time 
0.641520 +2025-05-16 17:27:16,898 - Epoch: [183][ 200/ 813] Overall Loss 0.383119 Objective Loss 0.383119 LR 0.000025 Time 0.623647 +2025-05-16 17:28:18,570 - Epoch: [183][ 300/ 813] Overall Loss 0.383992 Objective Loss 0.383992 LR 0.000025 Time 0.621328 +2025-05-16 17:29:16,333 - Epoch: [183][ 400/ 813] Overall Loss 0.391440 Objective Loss 0.391440 LR 0.000025 Time 0.610399 +2025-05-16 17:30:16,473 - Epoch: [183][ 500/ 813] Overall Loss 0.393799 Objective Loss 0.393799 LR 0.000025 Time 0.608595 +2025-05-16 17:31:18,599 - Epoch: [183][ 600/ 813] Overall Loss 0.391712 Objective Loss 0.391712 LR 0.000025 Time 0.610701 +2025-05-16 17:32:17,778 - Epoch: [183][ 700/ 813] Overall Loss 0.396624 Objective Loss 0.396624 LR 0.000025 Time 0.607987 +2025-05-16 17:33:17,197 - Epoch: [183][ 800/ 813] Overall Loss 0.399094 Objective Loss 0.399094 LR 0.000025 Time 0.606259 +2025-05-16 17:33:23,888 - Epoch: [183][ 813/ 813] Overall Loss 0.399398 Objective Loss 0.399398 LR 0.000025 Time 0.604795 +2025-05-16 17:33:23,928 - --- validate (epoch=183)----------- +2025-05-16 17:33:23,930 - 3250 samples (16 per mini-batch) +2025-05-16 17:33:23,932 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 17:34:35,575 - Epoch: [183][ 100/ 204] Loss 0.643111 mAP 0.909518 +2025-05-16 17:35:44,868 - Epoch: [183][ 200/ 204] Loss 0.654729 mAP 0.909822 +2025-05-16 17:35:47,113 - Epoch: [183][ 204/ 204] Loss 0.652454 mAP 0.909834 +2025-05-16 17:35:47,142 - ==> mAP: 0.90983 Loss: 0.652 + +2025-05-16 17:35:47,205 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 17:35:47,205 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 17:35:47,245 - + +2025-05-16 17:35:47,245 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 17:36:50,822 - Epoch: [184][ 100/ 813] Overall Loss 0.333947 Objective Loss 0.333947 LR 
0.000025 Time 0.635734 +2025-05-16 17:37:50,172 - Epoch: [184][ 200/ 813] Overall Loss 0.349291 Objective Loss 0.349291 LR 0.000025 Time 0.614606 +2025-05-16 17:38:56,451 - Epoch: [184][ 300/ 813] Overall Loss 0.370972 Objective Loss 0.370972 LR 0.000025 Time 0.630659 +2025-05-16 17:39:57,364 - Epoch: [184][ 400/ 813] Overall Loss 0.383038 Objective Loss 0.383038 LR 0.000025 Time 0.625259 +2025-05-16 17:40:56,602 - Epoch: [184][ 500/ 813] Overall Loss 0.384746 Objective Loss 0.384746 LR 0.000025 Time 0.618578 +2025-05-16 17:41:57,304 - Epoch: [184][ 600/ 813] Overall Loss 0.389985 Objective Loss 0.389985 LR 0.000025 Time 0.616648 +2025-05-16 17:42:56,709 - Epoch: [184][ 700/ 813] Overall Loss 0.394358 Objective Loss 0.394358 LR 0.000025 Time 0.613416 +2025-05-16 17:43:57,451 - Epoch: [184][ 800/ 813] Overall Loss 0.395910 Objective Loss 0.395910 LR 0.000025 Time 0.612665 +2025-05-16 17:44:03,698 - Epoch: [184][ 813/ 813] Overall Loss 0.395481 Objective Loss 0.395481 LR 0.000025 Time 0.610528 +2025-05-16 17:44:03,732 - --- validate (epoch=184)----------- +2025-05-16 17:44:03,734 - 3250 samples (16 per mini-batch) +2025-05-16 17:44:03,736 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 17:45:16,967 - Epoch: [184][ 100/ 204] Loss 0.701607 mAP 0.898689 +2025-05-16 17:46:26,302 - Epoch: [184][ 200/ 204] Loss 0.693221 mAP 0.899151 +2025-05-16 17:46:28,511 - Epoch: [184][ 204/ 204] Loss 0.694984 mAP 0.899178 +2025-05-16 17:46:28,543 - ==> mAP: 0.89918 Loss: 0.695 + +2025-05-16 17:46:28,605 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 17:46:28,605 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 17:46:28,641 - + +2025-05-16 17:46:28,641 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 17:47:31,728 - Epoch: [185][ 100/ 813] Overall Loss 0.342452 Objective Loss 
0.342452 LR 0.000025 Time 0.630842 +2025-05-16 17:48:32,389 - Epoch: [185][ 200/ 813] Overall Loss 0.357598 Objective Loss 0.357598 LR 0.000025 Time 0.618712 +2025-05-16 17:49:31,881 - Epoch: [185][ 300/ 813] Overall Loss 0.372475 Objective Loss 0.372475 LR 0.000025 Time 0.610774 +2025-05-16 17:50:30,661 - Epoch: [185][ 400/ 813] Overall Loss 0.384262 Objective Loss 0.384262 LR 0.000025 Time 0.605026 +2025-05-16 17:51:31,762 - Epoch: [185][ 500/ 813] Overall Loss 0.393704 Objective Loss 0.393704 LR 0.000025 Time 0.606218 +2025-05-16 17:52:32,741 - Epoch: [185][ 600/ 813] Overall Loss 0.398623 Objective Loss 0.398623 LR 0.000025 Time 0.606811 +2025-05-16 17:53:34,560 - Epoch: [185][ 700/ 813] Overall Loss 0.408071 Objective Loss 0.408071 LR 0.000025 Time 0.608433 +2025-05-16 17:54:31,687 - Epoch: [185][ 800/ 813] Overall Loss 0.410397 Objective Loss 0.410397 LR 0.000025 Time 0.603785 +2025-05-16 17:54:37,939 - Epoch: [185][ 813/ 813] Overall Loss 0.410355 Objective Loss 0.410355 LR 0.000025 Time 0.601821 +2025-05-16 17:54:37,973 - --- validate (epoch=185)----------- +2025-05-16 17:54:37,974 - 3250 samples (16 per mini-batch) +2025-05-16 17:54:37,975 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 17:55:40,218 - Epoch: [185][ 100/ 204] Loss 0.743673 mAP 0.916085 +2025-05-16 17:57:03,246 - Epoch: [185][ 200/ 204] Loss 0.712850 mAP 0.907648 +2025-05-16 17:57:05,179 - Epoch: [185][ 204/ 204] Loss 0.709456 mAP 0.907610 +2025-05-16 17:57:05,245 - ==> mAP: 0.90761 Loss: 0.709 + +2025-05-16 17:57:05,251 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 17:57:05,251 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 17:57:05,289 - + +2025-05-16 17:57:05,290 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 17:58:39,532 - Epoch: [186][ 100/ 813] Overall Loss 0.342434 
Objective Loss 0.342434 LR 0.000025 Time 0.942370 +2025-05-16 17:59:41,616 - Epoch: [186][ 200/ 813] Overall Loss 0.329912 Objective Loss 0.329912 LR 0.000025 Time 0.781591 +2025-05-16 18:00:43,079 - Epoch: [186][ 300/ 813] Overall Loss 0.344364 Objective Loss 0.344364 LR 0.000025 Time 0.725924 +2025-05-16 18:01:39,248 - Epoch: [186][ 400/ 813] Overall Loss 0.358826 Objective Loss 0.358826 LR 0.000025 Time 0.684860 +2025-05-16 18:02:32,810 - Epoch: [186][ 500/ 813] Overall Loss 0.367947 Objective Loss 0.367947 LR 0.000025 Time 0.655009 +2025-05-16 18:03:33,951 - Epoch: [186][ 600/ 813] Overall Loss 0.374755 Objective Loss 0.374755 LR 0.000025 Time 0.647736 +2025-05-16 18:04:33,702 - Epoch: [186][ 700/ 813] Overall Loss 0.382166 Objective Loss 0.382166 LR 0.000025 Time 0.640558 +2025-05-16 18:05:33,301 - Epoch: [186][ 800/ 813] Overall Loss 0.386031 Objective Loss 0.386031 LR 0.000025 Time 0.634984 +2025-05-16 18:05:40,282 - Epoch: [186][ 813/ 813] Overall Loss 0.386918 Objective Loss 0.386918 LR 0.000025 Time 0.633416 +2025-05-16 18:05:40,317 - --- validate (epoch=186)----------- +2025-05-16 18:05:40,320 - 3250 samples (16 per mini-batch) +2025-05-16 18:05:40,322 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 18:06:42,470 - Epoch: [186][ 100/ 204] Loss 0.751563 mAP 0.917706 +2025-05-16 18:07:37,747 - Epoch: [186][ 200/ 204] Loss 0.723960 mAP 0.908163 +2025-05-16 18:07:38,972 - Epoch: [186][ 204/ 204] Loss 0.723587 mAP 0.908203 +2025-05-16 18:07:39,001 - ==> mAP: 0.90820 Loss: 0.724 + +2025-05-16 18:07:39,005 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 18:07:39,005 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 18:07:39,041 - + +2025-05-16 18:07:39,041 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 18:08:37,583 - Epoch: [187][ 100/ 813] Overall Loss 
0.335289 Objective Loss 0.335289 LR 0.000025 Time 0.585380 +2025-05-16 18:09:39,675 - Epoch: [187][ 200/ 813] Overall Loss 0.343420 Objective Loss 0.343420 LR 0.000025 Time 0.603075 +2025-05-16 18:10:39,432 - Epoch: [187][ 300/ 813] Overall Loss 0.367518 Objective Loss 0.367518 LR 0.000025 Time 0.601231 +2025-05-16 18:11:38,477 - Epoch: [187][ 400/ 813] Overall Loss 0.378957 Objective Loss 0.378957 LR 0.000025 Time 0.598529 +2025-05-16 18:12:34,199 - Epoch: [187][ 500/ 813] Overall Loss 0.383457 Objective Loss 0.383457 LR 0.000025 Time 0.590262 +2025-05-16 18:13:32,518 - Epoch: [187][ 600/ 813] Overall Loss 0.382558 Objective Loss 0.382558 LR 0.000025 Time 0.589080 +2025-05-16 18:14:31,733 - Epoch: [187][ 700/ 813] Overall Loss 0.383817 Objective Loss 0.383817 LR 0.000025 Time 0.589515 +2025-05-16 18:15:32,017 - Epoch: [187][ 800/ 813] Overall Loss 0.384909 Objective Loss 0.384909 LR 0.000025 Time 0.591177 +2025-05-16 18:15:38,581 - Epoch: [187][ 813/ 813] Overall Loss 0.385637 Objective Loss 0.385637 LR 0.000025 Time 0.589797 +2025-05-16 18:15:38,622 - --- validate (epoch=187)----------- +2025-05-16 18:15:38,623 - 3250 samples (16 per mini-batch) +2025-05-16 18:15:38,626 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 18:16:41,755 - Epoch: [187][ 100/ 204] Loss 0.700116 mAP 0.899849 +2025-05-16 18:17:39,271 - Epoch: [187][ 200/ 204] Loss 0.673120 mAP 0.899835 +2025-05-16 18:17:40,109 - Epoch: [187][ 204/ 204] Loss 0.676789 mAP 0.899724 +2025-05-16 18:17:40,141 - ==> mAP: 0.89972 Loss: 0.677 + +2025-05-16 18:17:40,196 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 18:17:40,197 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 18:17:40,232 - + +2025-05-16 18:17:40,232 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 18:18:38,444 - Epoch: [188][ 100/ 813] 
Overall Loss 0.356627 Objective Loss 0.356627 LR 0.000025 Time 0.582090 +2025-05-16 18:19:39,849 - Epoch: [188][ 200/ 813] Overall Loss 0.387992 Objective Loss 0.387992 LR 0.000025 Time 0.598055 +2025-05-16 18:20:39,863 - Epoch: [188][ 300/ 813] Overall Loss 0.398569 Objective Loss 0.398569 LR 0.000025 Time 0.598741 +2025-05-16 18:21:39,618 - Epoch: [188][ 400/ 813] Overall Loss 0.411958 Objective Loss 0.411958 LR 0.000025 Time 0.598427 +2025-05-16 18:22:37,172 - Epoch: [188][ 500/ 813] Overall Loss 0.406073 Objective Loss 0.406073 LR 0.000025 Time 0.593845 +2025-05-16 18:23:33,241 - Epoch: [188][ 600/ 813] Overall Loss 0.410157 Objective Loss 0.410157 LR 0.000025 Time 0.588316 +2025-05-16 18:24:34,682 - Epoch: [188][ 700/ 813] Overall Loss 0.415304 Objective Loss 0.415304 LR 0.000025 Time 0.592039 +2025-05-16 18:25:34,884 - Epoch: [188][ 800/ 813] Overall Loss 0.419598 Objective Loss 0.419598 LR 0.000025 Time 0.593284 +2025-05-16 18:25:40,479 - Epoch: [188][ 813/ 813] Overall Loss 0.419992 Objective Loss 0.419992 LR 0.000025 Time 0.590678 +2025-05-16 18:25:40,526 - --- validate (epoch=188)----------- +2025-05-16 18:25:40,528 - 3250 samples (16 per mini-batch) +2025-05-16 18:25:40,530 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 18:26:44,103 - Epoch: [188][ 100/ 204] Loss 0.649691 mAP 0.908648 +2025-05-16 18:27:43,247 - Epoch: [188][ 200/ 204] Loss 0.649222 mAP 0.908987 +2025-05-16 18:27:44,797 - Epoch: [188][ 204/ 204] Loss 0.646975 mAP 0.909026 +2025-05-16 18:27:44,829 - ==> mAP: 0.90903 Loss: 0.647 + +2025-05-16 18:27:44,838 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 18:27:44,838 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 18:27:44,867 - + +2025-05-16 18:27:44,867 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 18:28:42,390 - Epoch: [189][ 
100/ 813] Overall Loss 0.365910 Objective Loss 0.365910 LR 0.000025 Time 0.575202 +2025-05-16 18:29:43,271 - Epoch: [189][ 200/ 813] Overall Loss 0.366568 Objective Loss 0.366568 LR 0.000025 Time 0.591989 +2025-05-16 18:30:43,668 - Epoch: [189][ 300/ 813] Overall Loss 0.384209 Objective Loss 0.384209 LR 0.000025 Time 0.595974 +2025-05-16 18:31:40,601 - Epoch: [189][ 400/ 813] Overall Loss 0.393451 Objective Loss 0.393451 LR 0.000025 Time 0.589305 +2025-05-16 18:32:41,079 - Epoch: [189][ 500/ 813] Overall Loss 0.391119 Objective Loss 0.391119 LR 0.000025 Time 0.592395 +2025-05-16 18:33:36,451 - Epoch: [189][ 600/ 813] Overall Loss 0.394305 Objective Loss 0.394305 LR 0.000025 Time 0.585946 +2025-05-16 18:34:36,393 - Epoch: [189][ 700/ 813] Overall Loss 0.396715 Objective Loss 0.396715 LR 0.000025 Time 0.587867 +2025-05-16 18:35:37,008 - Epoch: [189][ 800/ 813] Overall Loss 0.397993 Objective Loss 0.397993 LR 0.000025 Time 0.590150 +2025-05-16 18:35:42,751 - Epoch: [189][ 813/ 813] Overall Loss 0.397416 Objective Loss 0.397416 LR 0.000025 Time 0.587776 +2025-05-16 18:35:42,793 - --- validate (epoch=189)----------- +2025-05-16 18:35:42,794 - 3250 samples (16 per mini-batch) +2025-05-16 18:35:42,796 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 18:36:46,226 - Epoch: [189][ 100/ 204] Loss 0.646000 mAP 0.909747 +2025-05-16 18:37:45,723 - Epoch: [189][ 200/ 204] Loss 0.681931 mAP 0.909674 +2025-05-16 18:37:46,977 - Epoch: [189][ 204/ 204] Loss 0.691106 mAP 0.909626 +2025-05-16 18:37:47,018 - ==> mAP: 0.90963 Loss: 0.691 + +2025-05-16 18:37:47,080 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 18:37:47,080 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 18:37:47,116 - + +2025-05-16 18:37:47,116 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 18:38:45,129 - 
Epoch: [190][ 100/ 813] Overall Loss 0.386181 Objective Loss 0.386181 LR 0.000025 Time 0.580092 +2025-05-16 18:39:43,628 - Epoch: [190][ 200/ 813] Overall Loss 0.375847 Objective Loss 0.375847 LR 0.000025 Time 0.582529 +2025-05-16 18:40:44,736 - Epoch: [190][ 300/ 813] Overall Loss 0.380107 Objective Loss 0.380107 LR 0.000025 Time 0.592035 +2025-05-16 18:41:42,114 - Epoch: [190][ 400/ 813] Overall Loss 0.377079 Objective Loss 0.377079 LR 0.000025 Time 0.587464 +2025-05-16 18:42:41,491 - Epoch: [190][ 500/ 813] Overall Loss 0.379720 Objective Loss 0.379720 LR 0.000025 Time 0.588720 +2025-05-16 18:43:40,120 - Epoch: [190][ 600/ 813] Overall Loss 0.387706 Objective Loss 0.387706 LR 0.000025 Time 0.588311 +2025-05-16 18:44:34,208 - Epoch: [190][ 700/ 813] Overall Loss 0.393692 Objective Loss 0.393692 LR 0.000025 Time 0.581532 +2025-05-16 18:45:34,557 - Epoch: [190][ 800/ 813] Overall Loss 0.394901 Objective Loss 0.394901 LR 0.000025 Time 0.584272 +2025-05-16 18:45:40,891 - Epoch: [190][ 813/ 813] Overall Loss 0.395223 Objective Loss 0.395223 LR 0.000025 Time 0.582720 +2025-05-16 18:45:40,933 - --- validate (epoch=190)----------- +2025-05-16 18:45:40,935 - 3250 samples (16 per mini-batch) +2025-05-16 18:45:40,937 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 18:46:43,438 - Epoch: [190][ 100/ 204] Loss 0.714566 mAP 0.899642 +2025-05-16 18:47:45,008 - Epoch: [190][ 200/ 204] Loss 0.687604 mAP 0.899421 +2025-05-16 18:47:46,171 - Epoch: [190][ 204/ 204] Loss 0.687092 mAP 0.899423 +2025-05-16 18:47:46,217 - ==> mAP: 0.89942 Loss: 0.687 + +2025-05-16 18:47:46,279 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 18:47:46,280 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 18:47:46,310 - + +2025-05-16 18:47:46,310 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 
18:48:46,153 - Epoch: [191][ 100/ 813] Overall Loss 0.354555 Objective Loss 0.354555 LR 0.000025 Time 0.598394 +2025-05-16 18:49:42,576 - Epoch: [191][ 200/ 813] Overall Loss 0.387533 Objective Loss 0.387533 LR 0.000025 Time 0.581299 +2025-05-16 18:50:42,413 - Epoch: [191][ 300/ 813] Overall Loss 0.383606 Objective Loss 0.383606 LR 0.000025 Time 0.586982 +2025-05-16 18:51:42,459 - Epoch: [191][ 400/ 813] Overall Loss 0.391499 Objective Loss 0.391499 LR 0.000025 Time 0.590340 +2025-05-16 18:52:42,471 - Epoch: [191][ 500/ 813] Overall Loss 0.394400 Objective Loss 0.394400 LR 0.000025 Time 0.592291 +2025-05-16 18:53:42,774 - Epoch: [191][ 600/ 813] Overall Loss 0.398810 Objective Loss 0.398810 LR 0.000025 Time 0.594077 +2025-05-16 18:54:38,581 - Epoch: [191][ 700/ 813] Overall Loss 0.402769 Objective Loss 0.402769 LR 0.000025 Time 0.588930 +2025-05-16 18:55:35,778 - Epoch: [191][ 800/ 813] Overall Loss 0.405020 Objective Loss 0.405020 LR 0.000025 Time 0.586807 +2025-05-16 18:55:42,433 - Epoch: [191][ 813/ 813] Overall Loss 0.405005 Objective Loss 0.405005 LR 0.000025 Time 0.585610 +2025-05-16 18:55:42,471 - --- validate (epoch=191)----------- +2025-05-16 18:55:42,472 - 3250 samples (16 per mini-batch) +2025-05-16 18:55:42,474 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 18:56:46,402 - Epoch: [191][ 100/ 204] Loss 0.629216 mAP 0.900307 +2025-05-16 18:57:46,160 - Epoch: [191][ 200/ 204] Loss 0.656349 mAP 0.899807 +2025-05-16 18:57:47,532 - Epoch: [191][ 204/ 204] Loss 0.660824 mAP 0.899685 +2025-05-16 18:57:47,574 - ==> mAP: 0.89968 Loss: 0.661 + +2025-05-16 18:57:47,604 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 18:57:47,604 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 18:57:47,632 - + +2025-05-16 18:57:47,632 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) 
+2025-05-16 18:58:49,387 - Epoch: [192][ 100/ 813] Overall Loss 0.354235 Objective Loss 0.354235 LR 0.000025 Time 0.617512 +2025-05-16 18:59:46,690 - Epoch: [192][ 200/ 813] Overall Loss 0.348842 Objective Loss 0.348842 LR 0.000025 Time 0.595043 +2025-05-16 19:00:43,049 - Epoch: [192][ 300/ 813] Overall Loss 0.361186 Objective Loss 0.361186 LR 0.000025 Time 0.584549 +2025-05-16 19:01:42,482 - Epoch: [192][ 400/ 813] Overall Loss 0.368373 Objective Loss 0.368373 LR 0.000025 Time 0.586988 +2025-05-16 19:02:41,175 - Epoch: [192][ 500/ 813] Overall Loss 0.374541 Objective Loss 0.374541 LR 0.000025 Time 0.586970 +2025-05-16 19:03:41,751 - Epoch: [192][ 600/ 813] Overall Loss 0.379224 Objective Loss 0.379224 LR 0.000025 Time 0.590096 +2025-05-16 19:04:40,213 - Epoch: [192][ 700/ 813] Overall Loss 0.388644 Objective Loss 0.388644 LR 0.000025 Time 0.589310 +2025-05-16 19:05:37,701 - Epoch: [192][ 800/ 813] Overall Loss 0.394631 Objective Loss 0.394631 LR 0.000025 Time 0.587489 +2025-05-16 19:05:43,286 - Epoch: [192][ 813/ 813] Overall Loss 0.395310 Objective Loss 0.395310 LR 0.000025 Time 0.584963 +2025-05-16 19:05:43,325 - --- validate (epoch=192)----------- +2025-05-16 19:05:43,326 - 3250 samples (16 per mini-batch) +2025-05-16 19:05:43,328 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 19:06:45,750 - Epoch: [192][ 100/ 204] Loss 0.679638 mAP 0.898450 +2025-05-16 19:07:45,296 - Epoch: [192][ 200/ 204] Loss 0.672824 mAP 0.898790 +2025-05-16 19:07:46,666 - Epoch: [192][ 204/ 204] Loss 0.673561 mAP 0.898700 +2025-05-16 19:07:46,706 - ==> mAP: 0.89870 Loss: 0.674 + +2025-05-16 19:07:46,737 - ==> Best [mAP: 0.909834 vloss: 0.652454 Params: 368352 on epoch: 183] +2025-05-16 19:07:46,737 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 19:07:46,773 - + +2025-05-16 19:07:46,773 - Training epoch: 13000 samples (16 per mini-batch, world 
size: 1) +2025-05-16 19:08:51,034 - Epoch: [193][ 100/ 813] Overall Loss 0.372068 Objective Loss 0.372068 LR 0.000025 Time 0.642567 +2025-05-16 19:09:49,344 - Epoch: [193][ 200/ 813] Overall Loss 0.372723 Objective Loss 0.372723 LR 0.000025 Time 0.612824 +2025-05-16 19:10:44,114 - Epoch: [193][ 300/ 813] Overall Loss 0.377538 Objective Loss 0.377538 LR 0.000025 Time 0.591107 +2025-05-16 19:11:41,718 - Epoch: [193][ 400/ 813] Overall Loss 0.375110 Objective Loss 0.375110 LR 0.000025 Time 0.587333 +2025-05-16 19:12:42,576 - Epoch: [193][ 500/ 813] Overall Loss 0.380921 Objective Loss 0.380921 LR 0.000025 Time 0.591575 +2025-05-16 19:13:43,396 - Epoch: [193][ 600/ 813] Overall Loss 0.386950 Objective Loss 0.386950 LR 0.000025 Time 0.594340 +2025-05-16 19:14:43,897 - Epoch: [193][ 700/ 813] Overall Loss 0.393042 Objective Loss 0.393042 LR 0.000025 Time 0.595853 +2025-05-16 19:15:37,151 - Epoch: [193][ 800/ 813] Overall Loss 0.396431 Objective Loss 0.396431 LR 0.000025 Time 0.587937 +2025-05-16 19:15:43,259 - Epoch: [193][ 813/ 813] Overall Loss 0.398351 Objective Loss 0.398351 LR 0.000025 Time 0.586044 +2025-05-16 19:15:43,288 - --- validate (epoch=193)----------- +2025-05-16 19:15:43,289 - 3250 samples (16 per mini-batch) +2025-05-16 19:15:43,291 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 19:16:45,533 - Epoch: [193][ 100/ 204] Loss 0.711932 mAP 0.910206 +2025-05-16 19:17:46,711 - Epoch: [193][ 200/ 204] Loss 0.686622 mAP 0.910189 +2025-05-16 19:17:47,973 - Epoch: [193][ 204/ 204] Loss 0.687082 mAP 0.910186 +2025-05-16 19:17:48,007 - ==> mAP: 0.91019 Loss: 0.687 + +2025-05-16 19:17:48,043 - ==> Best [mAP: 0.910186 vloss: 0.687082 Params: 368352 on epoch: 193] +2025-05-16 19:17:48,043 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 19:17:48,084 - + +2025-05-16 19:17:48,084 - Training epoch: 13000 samples (16 per 
mini-batch, world size: 1) +2025-05-16 19:18:52,278 - Epoch: [194][ 100/ 813] Overall Loss 0.358494 Objective Loss 0.358494 LR 0.000025 Time 0.641900 +2025-05-16 19:19:52,437 - Epoch: [194][ 200/ 813] Overall Loss 0.356580 Objective Loss 0.356580 LR 0.000025 Time 0.621712 +2025-05-16 19:20:47,753 - Epoch: [194][ 300/ 813] Overall Loss 0.364216 Objective Loss 0.364216 LR 0.000025 Time 0.598843 +2025-05-16 19:21:42,463 - Epoch: [194][ 400/ 813] Overall Loss 0.374539 Objective Loss 0.374539 LR 0.000025 Time 0.585901 +2025-05-16 19:22:41,555 - Epoch: [194][ 500/ 813] Overall Loss 0.375300 Objective Loss 0.375300 LR 0.000025 Time 0.586856 +2025-05-16 19:23:42,660 - Epoch: [194][ 600/ 813] Overall Loss 0.383884 Objective Loss 0.383884 LR 0.000025 Time 0.590883 +2025-05-16 19:24:43,244 - Epoch: [194][ 700/ 813] Overall Loss 0.387927 Objective Loss 0.387927 LR 0.000025 Time 0.593015 +2025-05-16 19:25:38,538 - Epoch: [194][ 800/ 813] Overall Loss 0.387868 Objective Loss 0.387868 LR 0.000025 Time 0.588004 +2025-05-16 19:25:45,569 - Epoch: [194][ 813/ 813] Overall Loss 0.389205 Objective Loss 0.389205 LR 0.000025 Time 0.587249 +2025-05-16 19:25:45,603 - --- validate (epoch=194)----------- +2025-05-16 19:25:45,605 - 3250 samples (16 per mini-batch) +2025-05-16 19:25:45,607 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 19:26:44,740 - Epoch: [194][ 100/ 204] Loss 0.694709 mAP 0.879771 +2025-05-16 19:27:43,775 - Epoch: [194][ 200/ 204] Loss 0.692847 mAP 0.889767 +2025-05-16 19:27:45,376 - Epoch: [194][ 204/ 204] Loss 0.689587 mAP 0.889774 +2025-05-16 19:27:45,412 - ==> mAP: 0.88977 Loss: 0.690 + +2025-05-16 19:27:45,431 - ==> Best [mAP: 0.910186 vloss: 0.687082 Params: 368352 on epoch: 193] +2025-05-16 19:27:45,431 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 19:27:45,467 - + +2025-05-16 19:27:45,467 - Training epoch: 13000 
samples (16 per mini-batch, world size: 1) +2025-05-16 19:28:49,231 - Epoch: [195][ 100/ 813] Overall Loss 0.378898 Objective Loss 0.378898 LR 0.000025 Time 0.637596 +2025-05-16 19:29:49,461 - Epoch: [195][ 200/ 813] Overall Loss 0.373060 Objective Loss 0.373060 LR 0.000025 Time 0.619937 +2025-05-16 19:30:48,778 - Epoch: [195][ 300/ 813] Overall Loss 0.395901 Objective Loss 0.395901 LR 0.000025 Time 0.611005 +2025-05-16 19:31:39,366 - Epoch: [195][ 400/ 813] Overall Loss 0.391159 Objective Loss 0.391159 LR 0.000025 Time 0.584714 +2025-05-16 19:32:39,234 - Epoch: [195][ 500/ 813] Overall Loss 0.395191 Objective Loss 0.395191 LR 0.000025 Time 0.587502 +2025-05-16 19:33:39,630 - Epoch: [195][ 600/ 813] Overall Loss 0.406510 Objective Loss 0.406510 LR 0.000025 Time 0.590240 +2025-05-16 19:34:42,270 - Epoch: [195][ 700/ 813] Overall Loss 0.405657 Objective Loss 0.405657 LR 0.000025 Time 0.595401 +2025-05-16 19:35:40,128 - Epoch: [195][ 800/ 813] Overall Loss 0.411102 Objective Loss 0.411102 LR 0.000025 Time 0.593295 +2025-05-16 19:35:45,552 - Epoch: [195][ 813/ 813] Overall Loss 0.411360 Objective Loss 0.411360 LR 0.000025 Time 0.590475 +2025-05-16 19:35:45,583 - --- validate (epoch=195)----------- +2025-05-16 19:35:45,584 - 3250 samples (16 per mini-batch) +2025-05-16 19:35:45,586 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 19:36:44,173 - Epoch: [195][ 100/ 204] Loss 0.662459 mAP 0.899947 +2025-05-16 19:37:43,499 - Epoch: [195][ 200/ 204] Loss 0.676723 mAP 0.899803 +2025-05-16 19:37:44,695 - Epoch: [195][ 204/ 204] Loss 0.675488 mAP 0.899813 +2025-05-16 19:37:44,728 - ==> mAP: 0.89981 Loss: 0.675 + +2025-05-16 19:37:44,790 - ==> Best [mAP: 0.910186 vloss: 0.687082 Params: 368352 on epoch: 193] +2025-05-16 19:37:44,790 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 19:37:44,828 - + +2025-05-16 19:37:44,828 - Training 
epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 19:38:47,784 - Epoch: [196][ 100/ 813] Overall Loss 0.360943 Objective Loss 0.360943 LR 0.000025 Time 0.629519 +2025-05-16 19:39:48,258 - Epoch: [196][ 200/ 813] Overall Loss 0.366222 Objective Loss 0.366222 LR 0.000025 Time 0.616934 +2025-05-16 19:40:48,196 - Epoch: [196][ 300/ 813] Overall Loss 0.378583 Objective Loss 0.378583 LR 0.000025 Time 0.611073 +2025-05-16 19:41:41,338 - Epoch: [196][ 400/ 813] Overall Loss 0.390670 Objective Loss 0.390670 LR 0.000025 Time 0.591153 +2025-05-16 19:42:38,068 - Epoch: [196][ 500/ 813] Overall Loss 0.395433 Objective Loss 0.395433 LR 0.000025 Time 0.586378 +2025-05-16 19:43:39,466 - Epoch: [196][ 600/ 813] Overall Loss 0.398357 Objective Loss 0.398357 LR 0.000025 Time 0.590975 +2025-05-16 19:44:39,039 - Epoch: [196][ 700/ 813] Overall Loss 0.408019 Objective Loss 0.408019 LR 0.000025 Time 0.591648 +2025-05-16 19:45:38,956 - Epoch: [196][ 800/ 813] Overall Loss 0.413469 Objective Loss 0.413469 LR 0.000025 Time 0.592586 +2025-05-16 19:45:45,044 - Epoch: [196][ 813/ 813] Overall Loss 0.414358 Objective Loss 0.414358 LR 0.000025 Time 0.590598 +2025-05-16 19:45:45,096 - --- validate (epoch=196)----------- +2025-05-16 19:45:45,098 - 3250 samples (16 per mini-batch) +2025-05-16 19:45:45,100 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 19:46:43,506 - Epoch: [196][ 100/ 204] Loss 0.665026 mAP 0.909586 +2025-05-16 19:47:40,218 - Epoch: [196][ 200/ 204] Loss 0.669717 mAP 0.899288 +2025-05-16 19:47:41,381 - Epoch: [196][ 204/ 204] Loss 0.672453 mAP 0.899231 +2025-05-16 19:47:41,414 - ==> mAP: 0.89923 Loss: 0.672 + +2025-05-16 19:47:41,461 - ==> Best [mAP: 0.910186 vloss: 0.687082 Params: 368352 on epoch: 193] +2025-05-16 19:47:41,461 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 19:47:41,498 - + +2025-05-16 19:47:41,498 - 
Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 19:48:44,686 - Epoch: [197][ 100/ 813] Overall Loss 0.353956 Objective Loss 0.353956 LR 0.000025 Time 0.631841 +2025-05-16 19:49:45,534 - Epoch: [197][ 200/ 813] Overall Loss 0.354979 Objective Loss 0.354979 LR 0.000025 Time 0.620145 +2025-05-16 19:50:46,769 - Epoch: [197][ 300/ 813] Overall Loss 0.369956 Objective Loss 0.369956 LR 0.000025 Time 0.617538 +2025-05-16 19:51:43,246 - Epoch: [197][ 400/ 813] Overall Loss 0.380780 Objective Loss 0.380780 LR 0.000025 Time 0.604339 +2025-05-16 19:52:37,415 - Epoch: [197][ 500/ 813] Overall Loss 0.385366 Objective Loss 0.385366 LR 0.000025 Time 0.591804 +2025-05-16 19:53:38,328 - Epoch: [197][ 600/ 813] Overall Loss 0.388169 Objective Loss 0.388169 LR 0.000025 Time 0.594687 +2025-05-16 19:54:38,728 - Epoch: [197][ 700/ 813] Overall Loss 0.392982 Objective Loss 0.392982 LR 0.000025 Time 0.596014 +2025-05-16 19:55:39,859 - Epoch: [197][ 800/ 813] Overall Loss 0.397198 Objective Loss 0.397198 LR 0.000025 Time 0.597923 +2025-05-16 19:55:45,265 - Epoch: [197][ 813/ 813] Overall Loss 0.398184 Objective Loss 0.398184 LR 0.000025 Time 0.595009 +2025-05-16 19:55:45,307 - --- validate (epoch=197)----------- +2025-05-16 19:55:45,308 - 3250 samples (16 per mini-batch) +2025-05-16 19:55:45,311 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 19:56:47,381 - Epoch: [197][ 100/ 204] Loss 0.628679 mAP 0.920020 +2025-05-16 19:57:43,732 - Epoch: [197][ 200/ 204] Loss 0.660600 mAP 0.909743 +2025-05-16 19:57:44,385 - Epoch: [197][ 204/ 204] Loss 0.666818 mAP 0.909752 +2025-05-16 19:57:44,418 - ==> mAP: 0.90975 Loss: 0.667 + +2025-05-16 19:57:44,427 - ==> Best [mAP: 0.910186 vloss: 0.687082 Params: 368352 on epoch: 193] +2025-05-16 19:57:44,427 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 19:57:44,463 - + +2025-05-16 
19:57:44,463 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 19:58:43,708 - Epoch: [198][ 100/ 813] Overall Loss 0.338464 Objective Loss 0.338464 LR 0.000025 Time 0.592417 +2025-05-16 19:59:46,443 - Epoch: [198][ 200/ 813] Overall Loss 0.349106 Objective Loss 0.349106 LR 0.000025 Time 0.609866 +2025-05-16 20:00:47,789 - Epoch: [198][ 300/ 813] Overall Loss 0.366624 Objective Loss 0.366624 LR 0.000025 Time 0.611053 +2025-05-16 20:01:45,735 - Epoch: [198][ 400/ 813] Overall Loss 0.380296 Objective Loss 0.380296 LR 0.000025 Time 0.603113 +2025-05-16 20:02:40,890 - Epoch: [198][ 500/ 813] Overall Loss 0.386316 Objective Loss 0.386316 LR 0.000025 Time 0.592789 +2025-05-16 20:03:38,800 - Epoch: [198][ 600/ 813] Overall Loss 0.389527 Objective Loss 0.389527 LR 0.000025 Time 0.590504 +2025-05-16 20:04:38,116 - Epoch: [198][ 700/ 813] Overall Loss 0.396000 Objective Loss 0.396000 LR 0.000025 Time 0.590878 +2025-05-16 20:05:38,165 - Epoch: [198][ 800/ 813] Overall Loss 0.399911 Objective Loss 0.399911 LR 0.000025 Time 0.592068 +2025-05-16 20:05:44,391 - Epoch: [198][ 813/ 813] Overall Loss 0.400611 Objective Loss 0.400611 LR 0.000025 Time 0.590258 +2025-05-16 20:05:44,435 - --- validate (epoch=198)----------- +2025-05-16 20:05:44,437 - 3250 samples (16 per mini-batch) +2025-05-16 20:05:44,439 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 20:06:46,819 - Epoch: [198][ 100/ 204] Loss 0.742897 mAP 0.890216 +2025-05-16 20:07:44,781 - Epoch: [198][ 200/ 204] Loss 0.714189 mAP 0.899404 +2025-05-16 20:07:45,534 - Epoch: [198][ 204/ 204] Loss 0.728435 mAP 0.899427 +2025-05-16 20:07:45,566 - ==> mAP: 0.89943 Loss: 0.728 + +2025-05-16 20:07:45,575 - ==> Best [mAP: 0.910186 vloss: 0.687082 Params: 368352 on epoch: 193] +2025-05-16 20:07:45,575 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 20:07:45,610 - + 
+2025-05-16 20:07:45,610 - Training epoch: 13000 samples (16 per mini-batch, world size: 1) +2025-05-16 20:08:44,786 - Epoch: [199][ 100/ 813] Overall Loss 0.398127 Objective Loss 0.398127 LR 0.000025 Time 0.591721 +2025-05-16 20:09:44,687 - Epoch: [199][ 200/ 813] Overall Loss 0.381305 Objective Loss 0.381305 LR 0.000025 Time 0.595353 +2025-05-16 20:10:45,498 - Epoch: [199][ 300/ 813] Overall Loss 0.388777 Objective Loss 0.388777 LR 0.000025 Time 0.599595 +2025-05-16 20:11:44,232 - Epoch: [199][ 400/ 813] Overall Loss 0.396400 Objective Loss 0.396400 LR 0.000025 Time 0.596523 +2025-05-16 20:12:41,977 - Epoch: [199][ 500/ 813] Overall Loss 0.397675 Objective Loss 0.397675 LR 0.000025 Time 0.592703 +2025-05-16 20:13:37,721 - Epoch: [199][ 600/ 813] Overall Loss 0.398364 Objective Loss 0.398364 LR 0.000025 Time 0.586824 +2025-05-16 20:14:39,086 - Epoch: [199][ 700/ 813] Overall Loss 0.403248 Objective Loss 0.403248 LR 0.000025 Time 0.590642 +2025-05-16 20:15:38,470 - Epoch: [199][ 800/ 813] Overall Loss 0.404020 Objective Loss 0.404020 LR 0.000025 Time 0.591038 +2025-05-16 20:15:45,017 - Epoch: [199][ 813/ 813] Overall Loss 0.404273 Objective Loss 0.404273 LR 0.000025 Time 0.589617 +2025-05-16 20:15:45,057 - --- validate (epoch=199)----------- +2025-05-16 20:15:45,060 - 3250 samples (16 per mini-batch) +2025-05-16 20:15:45,062 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 20:16:49,030 - Epoch: [199][ 100/ 204] Loss 0.649350 mAP 0.910064 +2025-05-16 20:17:48,978 - Epoch: [199][ 200/ 204] Loss 0.668451 mAP 0.909058 +2025-05-16 20:17:49,717 - Epoch: [199][ 204/ 204] Loss 0.664466 mAP 0.909078 +2025-05-16 20:17:49,745 - ==> mAP: 0.90908 Loss: 0.664 + +2025-05-16 20:17:49,760 - ==> Best [mAP: 0.910186 vloss: 0.687082 Params: 368352 on epoch: 193] +2025-05-16 20:17:49,760 - Saving checkpoint to: logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_checkpoint.pth.tar +2025-05-16 
20:17:49,788 - --- test (ckpt) --------------------- +2025-05-16 20:17:49,788 - 3250 samples (16 per mini-batch) +2025-05-16 20:17:49,790 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 20:18:47,521 - Test: [ 100/ 204] Loss 0.662841 mAP 0.908802 +2025-05-16 20:19:47,392 - Test: [ 200/ 204] Loss 0.692953 mAP 0.908746 +2025-05-16 20:19:49,371 - Test: [ 204/ 204] Loss 0.693674 mAP 0.908757 +2025-05-16 20:19:49,409 - ==> mAP: 0.90876 Loss: 0.694 + +2025-05-16 20:19:49,410 - --- test (best) --------------------- +2025-05-16 20:19:49,410 - => loading checkpoint logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_best.pth.tar +2025-05-16 20:19:49,426 - => Checkpoint contents: ++----------------------+-------------+-----------------+ +| Key | Type | Value | +|----------------------+-------------+-----------------| +| arch | str | ai85tinierssdqr | +| compression_sched | dict | | +| epoch | int | 193 | +| extras | dict | | +| optimizer_state_dict | dict | | +| optimizer_type | type | Adam | +| state_dict | OrderedDict | | ++----------------------+-------------+-----------------+ + +2025-05-16 20:19:49,426 - => Checkpoint['extras'] contents: ++--------------+--------+---------+ +| Key | Type | Value | +|--------------+--------+---------| +| best_epoch | int | 193 | +| best_mAP | Tensor | | +| best_top1 | int | 0 | +| current_mAP | Tensor | | +| current_top1 | int | 0 | ++--------------+--------+---------+ + +2025-05-16 20:19:49,427 - Loaded compression schedule from checkpoint (epoch 193) +2025-05-16 20:19:49,747 - => loaded 'state_dict' from checkpoint 'logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr_qat_best.pth.tar' +2025-05-16 20:19:49,748 - 3250 samples (16 per mini-batch) +2025-05-16 20:19:49,750 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.4, 'max_overlap': 0.1, 'top_k': 20}} +2025-05-16 20:20:53,284 - Test: [ 100/ 204] Loss 0.652667 mAP 
0.899319 +2025-05-16 20:21:54,191 - Test: [ 200/ 204] Loss 0.671411 mAP 0.899587 +2025-05-16 20:21:55,684 - Test: [ 204/ 204] Loss 0.676310 mAP 0.899602 +2025-05-16 20:21:55,737 - ==> mAP: 0.89960 Loss: 0.676 + +2025-05-16 20:21:55,742 - +2025-05-16 20:21:55,743 - Log file for this run: /home/asyaturhal/ai8x-training/logs/ai85tinierssdqr___2025.05.16-122716/ai85tinierssdqr___2025.05.16-122716.log diff --git a/trained/ai85-qrcode-tinierssd-kpts-qat8.pth.tar b/trained/ai85-qrcode-tinierssd-kpts-qat8.pth.tar index fb97e984..dedef178 100644 Binary files a/trained/ai85-qrcode-tinierssd-kpts-qat8.pth.tar and b/trained/ai85-qrcode-tinierssd-kpts-qat8.pth.tar differ diff --git a/trained/ai85-svhn-tinierssd-qat.log b/trained/ai85-svhn-tinierssd-qat.log new file mode 100644 index 00000000..3fe8ccf1 --- /dev/null +++ b/trained/ai85-svhn-tinierssd-qat.log @@ -0,0 +1,2361 @@ +2025-05-14 13:36:16,434 - Log file for this run: /home/asyaturhal/ai8x-training/logs/2025.05.14-133616/2025.05.14-133616.log +2025-05-14 13:36:16,434 - The open file limit is 1024. Please raise the limit (see documentation). +2025-05-14 13:36:16,435 - Configuring device: MAX78000, simulate=False. 
+2025-05-14 13:36:16,592 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:36:16,612 - Optimizer Type: +2025-05-14 13:36:16,612 - Optimizer Args: {'lr': 0.001, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0.0005, 'amsgrad': False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None} +2025-05-14 13:36:19,469 - torch.compile() not available, using "eager" mode +2025-05-14 13:36:19,469 - Use distributed training to enable torch.compile() with multiple GPUs +2025-05-14 13:36:19,469 - Dataset sizes: + training=28548 + validation=12251 + test=12251 +2025-05-14 13:36:19,469 - + +2025-05-14 13:36:19,469 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:36:26,272 - Epoch: [0][ 200/ 1785] Overall Loss 11.125986 Objective Loss 11.125986 LR 0.001000 Time 0.034001 +2025-05-14 13:36:32,485 - Epoch: [0][ 400/ 1785] Overall Loss 10.387053 Objective Loss 10.387053 LR 0.001000 Time 0.032530 +2025-05-14 13:36:38,571 - Epoch: [0][ 600/ 1785] Overall Loss 9.909978 Objective Loss 9.909978 LR 0.001000 Time 0.031828 +2025-05-14 13:36:44,637 - Epoch: [0][ 800/ 1785] Overall Loss 9.510768 Objective Loss 9.510768 LR 0.001000 Time 0.031451 +2025-05-14 13:36:50,687 - Epoch: [0][ 1000/ 1785] Overall Loss 9.134875 Objective Loss 9.134875 LR 0.001000 Time 0.031209 +2025-05-14 13:36:56,699 - Epoch: [0][ 1200/ 1785] Overall Loss 8.769884 Objective Loss 8.769884 LR 0.001000 Time 0.031016 +2025-05-14 13:37:02,652 - Epoch: [0][ 1400/ 1785] Overall Loss 8.412219 Objective Loss 8.412219 LR 0.001000 Time 0.030837 +2025-05-14 13:37:08,706 - Epoch: [0][ 1600/ 1785] Overall Loss 8.067360 Objective Loss 8.067360 LR 0.001000 Time 0.030765 +2025-05-14 13:37:14,259 - Epoch: [0][ 1785/ 1785] Overall Loss 7.761158 Objective Loss 7.761158 LR 0.001000 Time 0.030687 +2025-05-14 13:37:14,282 - --- validate (epoch=0)----------- +2025-05-14 13:37:14,283 - 
12251 samples (16 per mini-batch) +2025-05-14 13:37:14,285 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:37:29,286 - Epoch: [0][ 200/ 766] Loss 4.909212 mAP 0.747186 +2025-05-14 13:37:45,163 - Epoch: [0][ 400/ 766] Loss 4.905460 mAP 0.744405 +2025-05-14 13:38:02,216 - Epoch: [0][ 600/ 766] Loss 4.910158 mAP 0.741871 +2025-05-14 13:38:17,300 - Epoch: [0][ 766/ 766] Loss 4.907842 mAP 0.744880 +2025-05-14 13:38:17,326 - ==> mAP: 0.74488 Loss: 4.908 + +2025-05-14 13:38:17,330 - ==> Best [mAP: 0.744880 vloss: 4.907842 Params: 335520 on epoch: 0] +2025-05-14 13:38:17,330 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:38:17,364 - + +2025-05-14 13:38:17,364 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:38:23,769 - Epoch: [1][ 200/ 1785] Overall Loss 4.739375 Objective Loss 4.739375 LR 0.001000 Time 0.032013 +2025-05-14 13:38:29,926 - Epoch: [1][ 400/ 1785] Overall Loss 4.574121 Objective Loss 4.574121 LR 0.001000 Time 0.031395 +2025-05-14 13:38:36,009 - Epoch: [1][ 600/ 1785] Overall Loss 4.421693 Objective Loss 4.421693 LR 0.001000 Time 0.031066 +2025-05-14 13:38:42,247 - Epoch: [1][ 800/ 1785] Overall Loss 4.281557 Objective Loss 4.281557 LR 0.001000 Time 0.031095 +2025-05-14 13:38:48,424 - Epoch: [1][ 1000/ 1785] Overall Loss 4.159981 Objective Loss 4.159981 LR 0.001000 Time 0.031051 +2025-05-14 13:38:54,511 - Epoch: [1][ 1200/ 1785] Overall Loss 4.055738 Objective Loss 4.055738 LR 0.001000 Time 0.030947 +2025-05-14 13:39:00,576 - Epoch: [1][ 1400/ 1785] Overall Loss 3.962058 Objective Loss 3.962058 LR 0.001000 Time 0.030857 +2025-05-14 13:39:06,721 - Epoch: [1][ 1600/ 1785] Overall Loss 3.875898 Objective Loss 3.875898 LR 0.001000 Time 0.030840 +2025-05-14 13:39:12,397 - Epoch: [1][ 1785/ 1785] Overall Loss 3.806835 Objective Loss 3.806835 LR 0.001000 Time 0.030822 +2025-05-14 13:39:12,426 - --- validate 
(epoch=1)----------- +2025-05-14 13:39:12,427 - 12251 samples (16 per mini-batch) +2025-05-14 13:39:12,429 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:39:24,566 - Epoch: [1][ 200/ 766] Loss 3.200989 mAP 0.795585 +2025-05-14 13:39:37,434 - Epoch: [1][ 400/ 766] Loss 3.207952 mAP 0.793494 +2025-05-14 13:39:51,027 - Epoch: [1][ 600/ 766] Loss 3.207550 mAP 0.797036 +2025-05-14 13:40:03,480 - Epoch: [1][ 766/ 766] Loss 3.208756 mAP 0.796336 +2025-05-14 13:40:03,506 - ==> mAP: 0.79634 Loss: 3.209 + +2025-05-14 13:40:03,509 - ==> Best [mAP: 0.796336 vloss: 3.208756 Params: 335520 on epoch: 1] +2025-05-14 13:40:03,510 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:40:03,553 - + +2025-05-14 13:40:03,553 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:40:09,885 - Epoch: [2][ 200/ 1785] Overall Loss 3.105498 Objective Loss 3.105498 LR 0.001000 Time 0.031643 +2025-05-14 13:40:16,054 - Epoch: [2][ 400/ 1785] Overall Loss 3.076837 Objective Loss 3.076837 LR 0.001000 Time 0.031241 +2025-05-14 13:40:22,041 - Epoch: [2][ 600/ 1785] Overall Loss 3.058657 Objective Loss 3.058657 LR 0.001000 Time 0.030803 +2025-05-14 13:40:28,012 - Epoch: [2][ 800/ 1785] Overall Loss 3.015248 Objective Loss 3.015248 LR 0.001000 Time 0.030565 +2025-05-14 13:40:34,042 - Epoch: [2][ 1000/ 1785] Overall Loss 2.989008 Objective Loss 2.989008 LR 0.001000 Time 0.030480 +2025-05-14 13:40:39,901 - Epoch: [2][ 1200/ 1785] Overall Loss 2.965070 Objective Loss 2.965070 LR 0.001000 Time 0.030280 +2025-05-14 13:40:45,731 - Epoch: [2][ 1400/ 1785] Overall Loss 2.948834 Objective Loss 2.948834 LR 0.001000 Time 0.030118 +2025-05-14 13:40:51,668 - Epoch: [2][ 1600/ 1785] Overall Loss 2.935392 Objective Loss 2.935392 LR 0.001000 Time 0.030063 +2025-05-14 13:40:57,123 - Epoch: [2][ 1785/ 1785] Overall Loss 2.919909 Objective Loss 2.919909 LR 0.001000 Time 0.030002 
+2025-05-14 13:40:57,152 - --- validate (epoch=2)----------- +2025-05-14 13:40:57,153 - 12251 samples (16 per mini-batch) +2025-05-14 13:40:57,155 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:41:09,188 - Epoch: [2][ 200/ 766] Loss 2.850010 mAP 0.827988 +2025-05-14 13:41:21,856 - Epoch: [2][ 400/ 766] Loss 2.841100 mAP 0.821430 +2025-05-14 13:41:35,575 - Epoch: [2][ 600/ 766] Loss 2.836108 mAP 0.823212 +2025-05-14 13:41:47,973 - Epoch: [2][ 766/ 766] Loss 2.830431 mAP 0.824105 +2025-05-14 13:41:48,000 - ==> mAP: 0.82410 Loss: 2.830 + +2025-05-14 13:41:48,003 - ==> Best [mAP: 0.824105 vloss: 2.830431 Params: 335520 on epoch: 2] +2025-05-14 13:41:48,003 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:41:48,042 - + +2025-05-14 13:41:48,042 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:41:54,137 - Epoch: [3][ 200/ 1785] Overall Loss 2.696976 Objective Loss 2.696976 LR 0.001000 Time 0.030459 +2025-05-14 13:42:00,050 - Epoch: [3][ 400/ 1785] Overall Loss 2.703046 Objective Loss 2.703046 LR 0.001000 Time 0.030008 +2025-05-14 13:42:05,914 - Epoch: [3][ 600/ 1785] Overall Loss 2.700864 Objective Loss 2.700864 LR 0.001000 Time 0.029775 +2025-05-14 13:42:11,867 - Epoch: [3][ 800/ 1785] Overall Loss 2.690429 Objective Loss 2.690429 LR 0.001000 Time 0.029770 +2025-05-14 13:42:17,937 - Epoch: [3][ 1000/ 1785] Overall Loss 2.683471 Objective Loss 2.683471 LR 0.001000 Time 0.029884 +2025-05-14 13:42:24,034 - Epoch: [3][ 1200/ 1785] Overall Loss 2.677577 Objective Loss 2.677577 LR 0.001000 Time 0.029983 +2025-05-14 13:42:30,140 - Epoch: [3][ 1400/ 1785] Overall Loss 2.677457 Objective Loss 2.677457 LR 0.001000 Time 0.030060 +2025-05-14 13:42:36,264 - Epoch: [3][ 1600/ 1785] Overall Loss 2.672667 Objective Loss 2.672667 LR 0.001000 Time 0.030129 +2025-05-14 13:42:41,829 - Epoch: [3][ 1785/ 1785] Overall Loss 2.664895 Objective Loss 
2.664895 LR 0.001000 Time 0.030123 +2025-05-14 13:42:41,858 - --- validate (epoch=3)----------- +2025-05-14 13:42:41,859 - 12251 samples (16 per mini-batch) +2025-05-14 13:42:41,861 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:42:54,317 - Epoch: [3][ 200/ 766] Loss 2.682758 mAP 0.847121 +2025-05-14 13:43:07,407 - Epoch: [3][ 400/ 766] Loss 2.693087 mAP 0.842834 +2025-05-14 13:43:20,963 - Epoch: [3][ 600/ 766] Loss 2.686669 mAP 0.846231 +2025-05-14 13:43:33,396 - Epoch: [3][ 766/ 766] Loss 2.677536 mAP 0.847554 +2025-05-14 13:43:33,421 - ==> mAP: 0.84755 Loss: 2.678 + +2025-05-14 13:43:33,425 - ==> Best [mAP: 0.847554 vloss: 2.677536 Params: 335520 on epoch: 3] +2025-05-14 13:43:33,425 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:43:33,469 - + +2025-05-14 13:43:33,469 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:43:39,678 - Epoch: [4][ 200/ 1785] Overall Loss 2.556681 Objective Loss 2.556681 LR 0.001000 Time 0.031032 +2025-05-14 13:43:45,701 - Epoch: [4][ 400/ 1785] Overall Loss 2.576646 Objective Loss 2.576646 LR 0.001000 Time 0.030570 +2025-05-14 13:43:51,753 - Epoch: [4][ 600/ 1785] Overall Loss 2.569027 Objective Loss 2.569027 LR 0.001000 Time 0.030463 +2025-05-14 13:43:57,822 - Epoch: [4][ 800/ 1785] Overall Loss 2.573356 Objective Loss 2.573356 LR 0.001000 Time 0.030431 +2025-05-14 13:44:03,865 - Epoch: [4][ 1000/ 1785] Overall Loss 2.570186 Objective Loss 2.570186 LR 0.001000 Time 0.030387 +2025-05-14 13:44:09,916 - Epoch: [4][ 1200/ 1785] Overall Loss 2.563421 Objective Loss 2.563421 LR 0.001000 Time 0.030364 +2025-05-14 13:44:15,963 - Epoch: [4][ 1400/ 1785] Overall Loss 2.565180 Objective Loss 2.565180 LR 0.001000 Time 0.030344 +2025-05-14 13:44:22,056 - Epoch: [4][ 1600/ 1785] Overall Loss 2.560989 Objective Loss 2.560989 LR 0.001000 Time 0.030358 +2025-05-14 13:44:27,704 - Epoch: [4][ 1785/ 1785] 
Overall Loss 2.558620 Objective Loss 2.558620 LR 0.001000 Time 0.030375 +2025-05-14 13:44:27,729 - --- validate (epoch=4)----------- +2025-05-14 13:44:27,730 - 12251 samples (16 per mini-batch) +2025-05-14 13:44:27,732 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:44:39,822 - Epoch: [4][ 200/ 766] Loss 2.649904 mAP 0.848861 +2025-05-14 13:44:53,076 - Epoch: [4][ 400/ 766] Loss 2.670428 mAP 0.846770 +2025-05-14 13:45:07,122 - Epoch: [4][ 600/ 766] Loss 2.662839 mAP 0.846062 +2025-05-14 13:45:19,719 - Epoch: [4][ 766/ 766] Loss 2.664850 mAP 0.844885 +2025-05-14 13:45:19,745 - ==> mAP: 0.84489 Loss: 2.665 + +2025-05-14 13:45:19,749 - ==> Best [mAP: 0.847554 vloss: 2.677536 Params: 335520 on epoch: 3] +2025-05-14 13:45:19,749 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:45:19,790 - + +2025-05-14 13:45:19,790 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:45:26,094 - Epoch: [5][ 200/ 1785] Overall Loss 2.462869 Objective Loss 2.462869 LR 0.001000 Time 0.031507 +2025-05-14 13:45:32,270 - Epoch: [5][ 400/ 1785] Overall Loss 2.477209 Objective Loss 2.477209 LR 0.001000 Time 0.031189 +2025-05-14 13:45:38,447 - Epoch: [5][ 600/ 1785] Overall Loss 2.492714 Objective Loss 2.492714 LR 0.001000 Time 0.031085 +2025-05-14 13:45:44,626 - Epoch: [5][ 800/ 1785] Overall Loss 2.495833 Objective Loss 2.495833 LR 0.001000 Time 0.031035 +2025-05-14 13:45:50,926 - Epoch: [5][ 1000/ 1785] Overall Loss 2.498835 Objective Loss 2.498835 LR 0.001000 Time 0.031127 +2025-05-14 13:45:57,240 - Epoch: [5][ 1200/ 1785] Overall Loss 2.496728 Objective Loss 2.496728 LR 0.001000 Time 0.031199 +2025-05-14 13:46:03,448 - Epoch: [5][ 1400/ 1785] Overall Loss 2.494210 Objective Loss 2.494210 LR 0.001000 Time 0.031175 +2025-05-14 13:46:09,369 - Epoch: [5][ 1600/ 1785] Overall Loss 2.493776 Objective Loss 2.493776 LR 0.001000 Time 0.030978 +2025-05-14 
13:46:14,934 - Epoch: [5][ 1785/ 1785] Overall Loss 2.495340 Objective Loss 2.495340 LR 0.001000 Time 0.030884 +2025-05-14 13:46:14,959 - --- validate (epoch=5)----------- +2025-05-14 13:46:14,960 - 12251 samples (16 per mini-batch) +2025-05-14 13:46:14,962 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:46:27,304 - Epoch: [5][ 200/ 766] Loss 2.573691 mAP 0.858522 +2025-05-14 13:46:40,361 - Epoch: [5][ 400/ 766] Loss 2.579813 mAP 0.855113 +2025-05-14 13:46:54,449 - Epoch: [5][ 600/ 766] Loss 2.576885 mAP 0.852166 +2025-05-14 13:47:07,174 - Epoch: [5][ 766/ 766] Loss 2.580578 mAP 0.851619 +2025-05-14 13:47:07,201 - ==> mAP: 0.85162 Loss: 2.581 + +2025-05-14 13:47:07,205 - ==> Best [mAP: 0.851619 vloss: 2.580578 Params: 335520 on epoch: 5] +2025-05-14 13:47:07,205 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:47:07,250 - + +2025-05-14 13:47:07,250 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:47:13,656 - Epoch: [6][ 200/ 1785] Overall Loss 2.420717 Objective Loss 2.420717 LR 0.001000 Time 0.032016 +2025-05-14 13:47:19,937 - Epoch: [6][ 400/ 1785] Overall Loss 2.432896 Objective Loss 2.432896 LR 0.001000 Time 0.031706 +2025-05-14 13:47:25,946 - Epoch: [6][ 600/ 1785] Overall Loss 2.445166 Objective Loss 2.445166 LR 0.001000 Time 0.031150 +2025-05-14 13:47:31,839 - Epoch: [6][ 800/ 1785] Overall Loss 2.441812 Objective Loss 2.441812 LR 0.001000 Time 0.030726 +2025-05-14 13:47:37,762 - Epoch: [6][ 1000/ 1785] Overall Loss 2.436593 Objective Loss 2.436593 LR 0.001000 Time 0.030502 +2025-05-14 13:47:43,713 - Epoch: [6][ 1200/ 1785] Overall Loss 2.441001 Objective Loss 2.441001 LR 0.001000 Time 0.030376 +2025-05-14 13:47:49,784 - Epoch: [6][ 1400/ 1785] Overall Loss 2.449437 Objective Loss 2.449437 LR 0.001000 Time 0.030371 +2025-05-14 13:47:55,791 - Epoch: [6][ 1600/ 1785] Overall Loss 2.451051 Objective Loss 2.451051 LR 
0.001000 Time 0.030328 +2025-05-14 13:48:01,273 - Epoch: [6][ 1785/ 1785] Overall Loss 2.454925 Objective Loss 2.454925 LR 0.001000 Time 0.030255 +2025-05-14 13:48:01,303 - --- validate (epoch=6)----------- +2025-05-14 13:48:01,304 - 12251 samples (16 per mini-batch) +2025-05-14 13:48:01,305 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:48:13,739 - Epoch: [6][ 200/ 766] Loss 2.560397 mAP 0.874791 +2025-05-14 13:48:26,583 - Epoch: [6][ 400/ 766] Loss 2.521334 mAP 0.878835 +2025-05-14 13:48:40,328 - Epoch: [6][ 600/ 766] Loss 2.529241 mAP 0.876597 +2025-05-14 13:48:52,811 - Epoch: [6][ 766/ 766] Loss 2.539171 mAP 0.878598 +2025-05-14 13:48:52,841 - ==> mAP: 0.87860 Loss: 2.539 + +2025-05-14 13:48:52,845 - ==> Best [mAP: 0.878598 vloss: 2.539171 Params: 335520 on epoch: 6] +2025-05-14 13:48:52,845 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:48:52,889 - + +2025-05-14 13:48:52,889 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:48:58,913 - Epoch: [7][ 200/ 1785] Overall Loss 2.417228 Objective Loss 2.417228 LR 0.001000 Time 0.030103 +2025-05-14 13:49:04,809 - Epoch: [7][ 400/ 1785] Overall Loss 2.405908 Objective Loss 2.405908 LR 0.001000 Time 0.029786 +2025-05-14 13:49:10,717 - Epoch: [7][ 600/ 1785] Overall Loss 2.395678 Objective Loss 2.395678 LR 0.001000 Time 0.029701 +2025-05-14 13:49:16,644 - Epoch: [7][ 800/ 1785] Overall Loss 2.402967 Objective Loss 2.402967 LR 0.001000 Time 0.029682 +2025-05-14 13:49:22,642 - Epoch: [7][ 1000/ 1785] Overall Loss 2.407298 Objective Loss 2.407298 LR 0.001000 Time 0.029742 +2025-05-14 13:49:28,640 - Epoch: [7][ 1200/ 1785] Overall Loss 2.413162 Objective Loss 2.413162 LR 0.001000 Time 0.029781 +2025-05-14 13:49:34,576 - Epoch: [7][ 1400/ 1785] Overall Loss 2.413418 Objective Loss 2.413418 LR 0.001000 Time 0.029766 +2025-05-14 13:49:40,429 - Epoch: [7][ 1600/ 1785] Overall Loss 
2.418101 Objective Loss 2.418101 LR 0.001000 Time 0.029702 +2025-05-14 13:49:45,846 - Epoch: [7][ 1785/ 1785] Overall Loss 2.421577 Objective Loss 2.421577 LR 0.001000 Time 0.029657 +2025-05-14 13:49:45,882 - --- validate (epoch=7)----------- +2025-05-14 13:49:45,883 - 12251 samples (16 per mini-batch) +2025-05-14 13:49:45,885 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:49:58,100 - Epoch: [7][ 200/ 766] Loss 2.544135 mAP 0.854273 +2025-05-14 13:50:11,282 - Epoch: [7][ 400/ 766] Loss 2.535808 mAP 0.854163 +2025-05-14 13:50:24,917 - Epoch: [7][ 600/ 766] Loss 2.526996 mAP 0.860272 +2025-05-14 13:50:37,578 - Epoch: [7][ 766/ 766] Loss 2.517351 mAP 0.859721 +2025-05-14 13:50:37,606 - ==> mAP: 0.85972 Loss: 2.517 + +2025-05-14 13:50:37,611 - ==> Best [mAP: 0.878598 vloss: 2.539171 Params: 335520 on epoch: 6] +2025-05-14 13:50:37,611 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:50:37,645 - + +2025-05-14 13:50:37,645 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:50:43,840 - Epoch: [8][ 200/ 1785] Overall Loss 2.333156 Objective Loss 2.333156 LR 0.001000 Time 0.030963 +2025-05-14 13:50:49,956 - Epoch: [8][ 400/ 1785] Overall Loss 2.360875 Objective Loss 2.360875 LR 0.001000 Time 0.030767 +2025-05-14 13:50:55,930 - Epoch: [8][ 600/ 1785] Overall Loss 2.375653 Objective Loss 2.375653 LR 0.001000 Time 0.030465 +2025-05-14 13:51:01,805 - Epoch: [8][ 800/ 1785] Overall Loss 2.384336 Objective Loss 2.384336 LR 0.001000 Time 0.030191 +2025-05-14 13:51:07,721 - Epoch: [8][ 1000/ 1785] Overall Loss 2.383938 Objective Loss 2.383938 LR 0.001000 Time 0.030067 +2025-05-14 13:51:13,692 - Epoch: [8][ 1200/ 1785] Overall Loss 2.387290 Objective Loss 2.387290 LR 0.001000 Time 0.030029 +2025-05-14 13:51:19,678 - Epoch: [8][ 1400/ 1785] Overall Loss 2.387433 Objective Loss 2.387433 LR 0.001000 Time 0.030014 +2025-05-14 13:51:25,688 - 
Epoch: [8][ 1600/ 1785] Overall Loss 2.391804 Objective Loss 2.391804 LR 0.001000 Time 0.030018 +2025-05-14 13:51:31,395 - Epoch: [8][ 1785/ 1785] Overall Loss 2.396710 Objective Loss 2.396710 LR 0.001000 Time 0.030103 +2025-05-14 13:51:31,428 - --- validate (epoch=8)----------- +2025-05-14 13:51:31,428 - 12251 samples (16 per mini-batch) +2025-05-14 13:51:31,430 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:51:44,045 - Epoch: [8][ 200/ 766] Loss 2.492128 mAP 0.863448 +2025-05-14 13:51:56,726 - Epoch: [8][ 400/ 766] Loss 2.506809 mAP 0.863964 +2025-05-14 13:52:10,218 - Epoch: [8][ 600/ 766] Loss 2.496328 mAP 0.866676 +2025-05-14 13:52:22,654 - Epoch: [8][ 766/ 766] Loss 2.495676 mAP 0.866196 +2025-05-14 13:52:22,682 - ==> mAP: 0.86620 Loss: 2.496 + +2025-05-14 13:52:22,686 - ==> Best [mAP: 0.878598 vloss: 2.539171 Params: 335520 on epoch: 6] +2025-05-14 13:52:22,686 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:52:22,719 - + +2025-05-14 13:52:22,719 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:52:28,839 - Epoch: [9][ 200/ 1785] Overall Loss 2.336738 Objective Loss 2.336738 LR 0.001000 Time 0.030586 +2025-05-14 13:52:34,823 - Epoch: [9][ 400/ 1785] Overall Loss 2.351589 Objective Loss 2.351589 LR 0.001000 Time 0.030249 +2025-05-14 13:52:40,805 - Epoch: [9][ 600/ 1785] Overall Loss 2.352977 Objective Loss 2.352977 LR 0.001000 Time 0.030133 +2025-05-14 13:52:46,785 - Epoch: [9][ 800/ 1785] Overall Loss 2.362334 Objective Loss 2.362334 LR 0.001000 Time 0.030072 +2025-05-14 13:52:52,802 - Epoch: [9][ 1000/ 1785] Overall Loss 2.369105 Objective Loss 2.369105 LR 0.001000 Time 0.030073 +2025-05-14 13:52:58,785 - Epoch: [9][ 1200/ 1785] Overall Loss 2.372998 Objective Loss 2.372998 LR 0.001000 Time 0.030045 +2025-05-14 13:53:04,728 - Epoch: [9][ 1400/ 1785] Overall Loss 2.371611 Objective Loss 2.371611 LR 0.001000 Time 
0.029997 +2025-05-14 13:53:10,666 - Epoch: [9][ 1600/ 1785] Overall Loss 2.374227 Objective Loss 2.374227 LR 0.001000 Time 0.029957 +2025-05-14 13:53:16,175 - Epoch: [9][ 1785/ 1785] Overall Loss 2.375258 Objective Loss 2.375258 LR 0.001000 Time 0.029938 +2025-05-14 13:53:16,199 - --- validate (epoch=9)----------- +2025-05-14 13:53:16,200 - 12251 samples (16 per mini-batch) +2025-05-14 13:53:16,202 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:53:28,333 - Epoch: [9][ 200/ 766] Loss 2.505606 mAP 0.871990 +2025-05-14 13:53:41,149 - Epoch: [9][ 400/ 766] Loss 2.497047 mAP 0.872487 +2025-05-14 13:53:54,872 - Epoch: [9][ 600/ 766] Loss 2.495367 mAP 0.872606 +2025-05-14 13:54:07,533 - Epoch: [9][ 766/ 766] Loss 2.499160 mAP 0.872089 +2025-05-14 13:54:07,562 - ==> mAP: 0.87209 Loss: 2.499 + +2025-05-14 13:54:07,567 - ==> Best [mAP: 0.878598 vloss: 2.539171 Params: 335520 on epoch: 6] +2025-05-14 13:54:07,567 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:54:07,608 - + +2025-05-14 13:54:07,608 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:54:13,709 - Epoch: [10][ 200/ 1785] Overall Loss 2.340185 Objective Loss 2.340185 LR 0.001000 Time 0.030491 +2025-05-14 13:54:19,647 - Epoch: [10][ 400/ 1785] Overall Loss 2.332395 Objective Loss 2.332395 LR 0.001000 Time 0.030086 +2025-05-14 13:54:25,593 - Epoch: [10][ 600/ 1785] Overall Loss 2.328881 Objective Loss 2.328881 LR 0.001000 Time 0.029965 +2025-05-14 13:54:31,817 - Epoch: [10][ 800/ 1785] Overall Loss 2.338170 Objective Loss 2.338170 LR 0.001000 Time 0.030252 +2025-05-14 13:54:37,810 - Epoch: [10][ 1000/ 1785] Overall Loss 2.345833 Objective Loss 2.345833 LR 0.001000 Time 0.030192 +2025-05-14 13:54:43,753 - Epoch: [10][ 1200/ 1785] Overall Loss 2.348477 Objective Loss 2.348477 LR 0.001000 Time 0.030111 +2025-05-14 13:54:49,685 - Epoch: [10][ 1400/ 1785] Overall Loss 2.347592 
Objective Loss 2.347592 LR 0.001000 Time 0.030045 +2025-05-14 13:54:55,675 - Epoch: [10][ 1600/ 1785] Overall Loss 2.352510 Objective Loss 2.352510 LR 0.001000 Time 0.030032 +2025-05-14 13:55:01,132 - Epoch: [10][ 1785/ 1785] Overall Loss 2.358780 Objective Loss 2.358780 LR 0.001000 Time 0.029976 +2025-05-14 13:55:01,169 - --- validate (epoch=10)----------- +2025-05-14 13:55:01,170 - 12251 samples (16 per mini-batch) +2025-05-14 13:55:01,172 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:55:13,212 - Epoch: [10][ 200/ 766] Loss 2.507110 mAP 0.869957 +2025-05-14 13:55:26,179 - Epoch: [10][ 400/ 766] Loss 2.519023 mAP 0.867069 +2025-05-14 13:55:39,621 - Epoch: [10][ 600/ 766] Loss 2.518690 mAP 0.863877 +2025-05-14 13:55:52,040 - Epoch: [10][ 766/ 766] Loss 2.519738 mAP 0.866009 +2025-05-14 13:55:52,069 - ==> mAP: 0.86601 Loss: 2.520 + +2025-05-14 13:55:52,073 - ==> Best [mAP: 0.878598 vloss: 2.539171 Params: 335520 on epoch: 6] +2025-05-14 13:55:52,073 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:55:52,114 - + +2025-05-14 13:55:52,114 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:55:58,357 - Epoch: [11][ 200/ 1785] Overall Loss 2.314017 Objective Loss 2.314017 LR 0.001000 Time 0.031202 +2025-05-14 13:56:04,524 - Epoch: [11][ 400/ 1785] Overall Loss 2.320340 Objective Loss 2.320340 LR 0.001000 Time 0.031013 +2025-05-14 13:56:10,851 - Epoch: [11][ 600/ 1785] Overall Loss 2.322946 Objective Loss 2.322946 LR 0.001000 Time 0.031218 +2025-05-14 13:56:17,138 - Epoch: [11][ 800/ 1785] Overall Loss 2.330937 Objective Loss 2.330937 LR 0.001000 Time 0.031271 +2025-05-14 13:56:23,405 - Epoch: [11][ 1000/ 1785] Overall Loss 2.327463 Objective Loss 2.327463 LR 0.001000 Time 0.031282 +2025-05-14 13:56:29,644 - Epoch: [11][ 1200/ 1785] Overall Loss 2.331175 Objective Loss 2.331175 LR 0.001000 Time 0.031266 +2025-05-14 13:56:35,758 
- Epoch: [11][ 1400/ 1785] Overall Loss 2.336234 Objective Loss 2.336234 LR 0.001000 Time 0.031165 +2025-05-14 13:56:41,880 - Epoch: [11][ 1600/ 1785] Overall Loss 2.340349 Objective Loss 2.340349 LR 0.001000 Time 0.031095 +2025-05-14 13:56:47,561 - Epoch: [11][ 1785/ 1785] Overall Loss 2.344181 Objective Loss 2.344181 LR 0.001000 Time 0.031054 +2025-05-14 13:56:47,590 - --- validate (epoch=11)----------- +2025-05-14 13:56:47,590 - 12251 samples (16 per mini-batch) +2025-05-14 13:56:47,592 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 13:57:00,019 - Epoch: [11][ 200/ 766] Loss 2.506275 mAP 0.881917 +2025-05-14 13:57:13,157 - Epoch: [11][ 400/ 766] Loss 2.497536 mAP 0.879192 +2025-05-14 13:57:26,959 - Epoch: [11][ 600/ 766] Loss 2.481833 mAP 0.877636 +2025-05-14 13:57:39,770 - Epoch: [11][ 766/ 766] Loss 2.479204 mAP 0.879811 +2025-05-14 13:57:39,795 - ==> mAP: 0.87981 Loss: 2.479 + +2025-05-14 13:57:39,799 - ==> Best [mAP: 0.879811 vloss: 2.479204 Params: 335520 on epoch: 11] +2025-05-14 13:57:39,799 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar +2025-05-14 13:57:39,844 - + +2025-05-14 13:57:39,844 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 13:57:46,289 - Epoch: [12][ 200/ 1785] Overall Loss 2.352394 Objective Loss 2.352394 LR 0.001000 Time 0.032213 +2025-05-14 13:57:52,476 - Epoch: [12][ 400/ 1785] Overall Loss 2.328707 Objective Loss 2.328707 LR 0.001000 Time 0.031570 +2025-05-14 13:57:58,502 - Epoch: [12][ 600/ 1785] Overall Loss 2.332497 Objective Loss 2.332497 LR 0.001000 Time 0.031086 +2025-05-14 13:58:04,745 - Epoch: [12][ 800/ 1785] Overall Loss 2.332657 Objective Loss 2.332657 LR 0.001000 Time 0.031116 +2025-05-14 13:58:10,844 - Epoch: [12][ 1000/ 1785] Overall Loss 2.334134 Objective Loss 2.334134 LR 0.001000 Time 0.030991 +2025-05-14 13:58:16,902 - Epoch: [12][ 1200/ 1785] Overall Loss 2.329264 Objective Loss 2.329264 
LR 0.001000 Time 0.030872
+2025-05-14 13:58:22,974 - Epoch: [12][ 1400/ 1785] Overall Loss 2.328909 Objective Loss 2.328909 LR 0.001000 Time 0.030798
+2025-05-14 13:58:29,031 - Epoch: [12][ 1600/ 1785] Overall Loss 2.328726 Objective Loss 2.328726 LR 0.001000 Time 0.030733
+2025-05-14 13:58:34,500 - Epoch: [12][ 1785/ 1785] Overall Loss 2.333006 Objective Loss 2.333006 LR 0.001000 Time 0.030611
+2025-05-14 13:58:34,535 - --- validate (epoch=12)-----------
+2025-05-14 13:58:34,536 - 12251 samples (16 per mini-batch)
+2025-05-14 13:58:34,538 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 13:58:46,690 - Epoch: [12][ 200/ 766] Loss 2.495480 mAP 0.870753
+2025-05-14 13:58:59,691 - Epoch: [12][ 400/ 766] Loss 2.466553 mAP 0.875831
+2025-05-14 13:59:13,619 - Epoch: [12][ 600/ 766] Loss 2.463980 mAP 0.876835
+2025-05-14 13:59:26,369 - Epoch: [12][ 766/ 766] Loss 2.468918 mAP 0.878376
+2025-05-14 13:59:26,398 - ==> mAP: 0.87838 Loss: 2.469
+
+2025-05-14 13:59:26,402 - ==> Best [mAP: 0.879811 vloss: 2.479204 Params: 335520 on epoch: 11]
+2025-05-14 13:59:26,402 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 13:59:26,443 -
+
+2025-05-14 13:59:26,443 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 13:59:32,527 - Epoch: [13][ 200/ 1785] Overall Loss 2.241647 Objective Loss 2.241647 LR 0.001000 Time 0.030405
+2025-05-14 13:59:38,675 - Epoch: [13][ 400/ 1785] Overall Loss 2.280011 Objective Loss 2.280011 LR 0.001000 Time 0.030568
+2025-05-14 13:59:44,616 - Epoch: [13][ 600/ 1785] Overall Loss 2.292557 Objective Loss 2.292557 LR 0.001000 Time 0.030277
+2025-05-14 13:59:50,687 - Epoch: [13][ 800/ 1785] Overall Loss 2.300595 Objective Loss 2.300595 LR 0.001000 Time 0.030295
+2025-05-14 13:59:56,656 - Epoch: [13][ 1000/ 1785] Overall Loss 2.304383 Objective Loss 2.304383 LR 0.001000 Time 0.030203
+2025-05-14 14:00:02,669 - Epoch: [13][ 1200/ 1785] Overall Loss 2.306933 Objective Loss 2.306933 LR 0.001000 Time 0.030178
+2025-05-14 14:00:08,659 - Epoch: [13][ 1400/ 1785] Overall Loss 2.311019 Objective Loss 2.311019 LR 0.001000 Time 0.030144
+2025-05-14 14:00:14,622 - Epoch: [13][ 1600/ 1785] Overall Loss 2.312789 Objective Loss 2.312789 LR 0.001000 Time 0.030102
+2025-05-14 14:00:20,210 - Epoch: [13][ 1785/ 1785] Overall Loss 2.314676 Objective Loss 2.314676 LR 0.001000 Time 0.030112
+2025-05-14 14:00:20,248 - --- validate (epoch=13)-----------
+2025-05-14 14:00:20,249 - 12251 samples (16 per mini-batch)
+2025-05-14 14:00:20,250 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:00:32,435 - Epoch: [13][ 200/ 766] Loss 2.474540 mAP 0.881953
+2025-05-14 14:00:44,933 - Epoch: [13][ 400/ 766] Loss 2.491942 mAP 0.878177
+2025-05-14 14:00:58,592 - Epoch: [13][ 600/ 766] Loss 2.488352 mAP 0.877496
+2025-05-14 14:01:10,971 - Epoch: [13][ 766/ 766] Loss 2.493606 mAP 0.876826
+2025-05-14 14:01:10,998 - ==> mAP: 0.87683 Loss: 2.494
+
+2025-05-14 14:01:11,003 - ==> Best [mAP: 0.879811 vloss: 2.479204 Params: 335520 on epoch: 11]
+2025-05-14 14:01:11,003 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:01:11,043 -
+
+2025-05-14 14:01:11,043 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:01:17,470 - Epoch: [14][ 200/ 1785] Overall Loss 2.316276 Objective Loss 2.316276 LR 0.001000 Time 0.032123
+2025-05-14 14:01:23,550 - Epoch: [14][ 400/ 1785] Overall Loss 2.303160 Objective Loss 2.303160 LR 0.001000 Time 0.031256
+2025-05-14 14:01:29,621 - Epoch: [14][ 600/ 1785] Overall Loss 2.295821 Objective Loss 2.295821 LR 0.001000 Time 0.030953
+2025-05-14 14:01:35,734 - Epoch: [14][ 800/ 1785] Overall Loss 2.299368 Objective Loss 2.299368 LR 0.001000 Time 0.030855
+2025-05-14 14:01:41,817 - Epoch: [14][ 1000/ 1785] Overall Loss 2.302789 Objective Loss 2.302789 LR 0.001000 Time 0.030765
+2025-05-14 14:01:47,890 - Epoch: [14][ 1200/ 1785] Overall Loss 2.305557 Objective Loss 2.305557 LR 0.001000 Time 0.030697
+2025-05-14 14:01:53,960 - Epoch: [14][ 1400/ 1785] Overall Loss 2.302385 Objective Loss 2.302385 LR 0.001000 Time 0.030646
+2025-05-14 14:02:00,035 - Epoch: [14][ 1600/ 1785] Overall Loss 2.305810 Objective Loss 2.305810 LR 0.001000 Time 0.030611
+2025-05-14 14:02:05,682 - Epoch: [14][ 1785/ 1785] Overall Loss 2.309144 Objective Loss 2.309144 LR 0.001000 Time 0.030602
+2025-05-14 14:02:05,714 - --- validate (epoch=14)-----------
+2025-05-14 14:02:05,715 - 12251 samples (16 per mini-batch)
+2025-05-14 14:02:05,716 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:02:17,857 - Epoch: [14][ 200/ 766] Loss 2.470196 mAP 0.881726
+2025-05-14 14:02:30,410 - Epoch: [14][ 400/ 766] Loss 2.466848 mAP 0.881181
+2025-05-14 14:02:43,768 - Epoch: [14][ 600/ 766] Loss 2.472456 mAP 0.875527
+2025-05-14 14:02:56,073 - Epoch: [14][ 766/ 766] Loss 2.475643 mAP 0.874797
+2025-05-14 14:02:56,098 - ==> mAP: 0.87480 Loss: 2.476
+
+2025-05-14 14:02:56,102 - ==> Best [mAP: 0.879811 vloss: 2.479204 Params: 335520 on epoch: 11]
+2025-05-14 14:02:56,102 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:02:56,142 -
+
+2025-05-14 14:02:56,142 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:03:02,222 - Epoch: [15][ 200/ 1785] Overall Loss 2.220449 Objective Loss 2.220449 LR 0.001000 Time 0.030383
+2025-05-14 14:03:08,157 - Epoch: [15][ 400/ 1785] Overall Loss 2.246031 Objective Loss 2.246031 LR 0.001000 Time 0.030026
+2025-05-14 14:03:14,305 - Epoch: [15][ 600/ 1785] Overall Loss 2.268702 Objective Loss 2.268702 LR 0.001000 Time 0.030260
+2025-05-14 14:03:20,551 - Epoch: [15][ 800/ 1785] Overall Loss 2.277677 Objective Loss 2.277677 LR 0.001000 Time 0.030501
+2025-05-14 14:03:26,751 - Epoch: [15][ 1000/ 1785] Overall Loss 2.292019 Objective Loss 2.292019 LR 0.001000 Time 0.030599
+2025-05-14 14:03:32,962 - Epoch: [15][ 1200/ 1785] Overall Loss 2.300441 Objective Loss 2.300441 LR 0.001000 Time 0.030674
+2025-05-14 14:03:38,957 - Epoch: [15][ 1400/ 1785] Overall Loss 2.301401 Objective Loss 2.301401 LR 0.001000 Time 0.030573
+2025-05-14 14:03:45,049 - Epoch: [15][ 1600/ 1785] Overall Loss 2.298626 Objective Loss 2.298626 LR 0.001000 Time 0.030558
+2025-05-14 14:03:50,698 - Epoch: [15][ 1785/ 1785] Overall Loss 2.299256 Objective Loss 2.299256 LR 0.001000 Time 0.030555
+2025-05-14 14:03:50,728 - --- validate (epoch=15)-----------
+2025-05-14 14:03:50,729 - 12251 samples (16 per mini-batch)
+2025-05-14 14:03:50,731 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:04:02,773 - Epoch: [15][ 200/ 766] Loss 2.488701 mAP 0.858115
+2025-05-14 14:04:15,885 - Epoch: [15][ 400/ 766] Loss 2.482534 mAP 0.859357
+2025-05-14 14:04:29,412 - Epoch: [15][ 600/ 766] Loss 2.489817 mAP 0.860383
+2025-05-14 14:04:41,893 - Epoch: [15][ 766/ 766] Loss 2.484301 mAP 0.860135
+2025-05-14 14:04:41,918 - ==> mAP: 0.86013 Loss: 2.484
+
+2025-05-14 14:04:41,924 - ==> Best [mAP: 0.879811 vloss: 2.479204 Params: 335520 on epoch: 11]
+2025-05-14 14:04:41,924 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:04:41,964 -
+
+2025-05-14 14:04:41,964 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:04:48,025 - Epoch: [16][ 200/ 1785] Overall Loss 2.272985 Objective Loss 2.272985 LR 0.001000 Time 0.030288
+2025-05-14 14:04:54,118 - Epoch: [16][ 400/ 1785] Overall Loss 2.269165 Objective Loss 2.269165 LR 0.001000 Time 0.030372
+2025-05-14 14:05:00,224 - Epoch: [16][ 600/ 1785] Overall Loss 2.268889 Objective Loss 2.268889 LR 0.001000 Time 0.030423
+2025-05-14 14:05:06,333 - Epoch: [16][ 800/ 1785] Overall Loss 2.263682 Objective Loss 2.263682 LR 0.001000 Time 0.030451
+2025-05-14 14:05:12,438 - Epoch: [16][ 1000/ 1785] Overall Loss 2.271552 Objective Loss 2.271552 LR 0.001000 Time 0.030464
+2025-05-14 14:05:18,491 - Epoch: [16][ 1200/ 1785] Overall Loss 2.278264 Objective Loss 2.278264 LR 0.001000 Time 0.030430
+2025-05-14 14:05:24,548 - Epoch: [16][ 1400/ 1785] Overall Loss 2.285456 Objective Loss 2.285456 LR 0.001000 Time 0.030408
+2025-05-14 14:05:30,598 - Epoch: [16][ 1600/ 1785] Overall Loss 2.288731 Objective Loss 2.288731 LR 0.001000 Time 0.030387
+2025-05-14 14:05:36,243 - Epoch: [16][ 1785/ 1785] Overall Loss 2.290310 Objective Loss 2.290310 LR 0.001000 Time 0.030400
+2025-05-14 14:05:36,276 - --- validate (epoch=16)-----------
+2025-05-14 14:05:36,277 - 12251 samples (16 per mini-batch)
+2025-05-14 14:05:36,279 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:05:48,460 - Epoch: [16][ 200/ 766] Loss 2.467860 mAP 0.870973
+2025-05-14 14:06:01,086 - Epoch: [16][ 400/ 766] Loss 2.480058 mAP 0.866371
+2025-05-14 14:06:14,697 - Epoch: [16][ 600/ 766] Loss 2.478257 mAP 0.866228
+2025-05-14 14:06:27,325 - Epoch: [16][ 766/ 766] Loss 2.472631 mAP 0.866927
+2025-05-14 14:06:27,351 - ==> mAP: 0.86693 Loss: 2.473
+
+2025-05-14 14:06:27,355 - ==> Best [mAP: 0.879811 vloss: 2.479204 Params: 335520 on epoch: 11]
+2025-05-14 14:06:27,355 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:06:27,396 -
+
+2025-05-14 14:06:27,396 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:06:33,731 - Epoch: [17][ 200/ 1785] Overall Loss 2.261422 Objective Loss 2.261422 LR 0.001000 Time 0.031658
+2025-05-14 14:06:40,068 - Epoch: [17][ 400/ 1785] Overall Loss 2.272846 Objective Loss 2.272846 LR 0.001000 Time 0.031669
+2025-05-14 14:06:46,243 - Epoch: [17][ 600/ 1785] Overall Loss 2.278624 Objective Loss 2.278624 LR 0.001000 Time 0.031400
+2025-05-14 14:06:52,236 - Epoch: [17][ 800/ 1785] Overall Loss 2.270994 Objective Loss 2.270994 LR 0.001000 Time 0.031040
+2025-05-14 14:06:58,270 - Epoch: [17][ 1000/ 1785] Overall Loss 2.272180 Objective Loss 2.272180 LR 0.001000 Time 0.030865
+2025-05-14 14:07:04,419 - Epoch: [17][ 1200/ 1785] Overall Loss 2.274386 Objective Loss 2.274386 LR 0.001000 Time 0.030843
+2025-05-14 14:07:10,483 - Epoch: [17][ 1400/ 1785] Overall Loss 2.279298 Objective Loss 2.279298 LR 0.001000 Time 0.030767
+2025-05-14 14:07:16,484 - Epoch: [17][ 1600/ 1785] Overall Loss 2.281927 Objective Loss 2.281927 LR 0.001000 Time 0.030671
+2025-05-14 14:07:22,089 - Epoch: [17][ 1785/ 1785] Overall Loss 2.282575 Objective Loss 2.282575 LR 0.001000 Time 0.030631
+2025-05-14 14:07:22,123 - --- validate (epoch=17)-----------
+2025-05-14 14:07:22,124 - 12251 samples (16 per mini-batch)
+2025-05-14 14:07:22,126 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:07:34,435 - Epoch: [17][ 200/ 766] Loss 2.449513 mAP 0.884313
+2025-05-14 14:07:47,234 - Epoch: [17][ 400/ 766] Loss 2.439876 mAP 0.886736
+2025-05-14 14:08:00,758 - Epoch: [17][ 600/ 766] Loss 2.449580 mAP 0.881663
+2025-05-14 14:08:13,138 - Epoch: [17][ 766/ 766] Loss 2.443055 mAP 0.882007
+2025-05-14 14:08:13,164 - ==> mAP: 0.88201 Loss: 2.443
+
+2025-05-14 14:08:13,168 - ==> Best [mAP: 0.882007 vloss: 2.443055 Params: 335520 on epoch: 17]
+2025-05-14 14:08:13,168 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:08:13,212 -
+
+2025-05-14 14:08:13,212 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:08:19,295 - Epoch: [18][ 200/ 1785] Overall Loss 2.211338 Objective Loss 2.211338 LR 0.001000 Time 0.030398
+2025-05-14 14:08:25,245 - Epoch: [18][ 400/ 1785] Overall Loss 2.238711 Objective Loss 2.238711 LR 0.001000 Time 0.030069
+2025-05-14 14:08:31,386 - Epoch: [18][ 600/ 1785] Overall Loss 2.250886 Objective Loss 2.250886 LR 0.001000 Time 0.030279
+2025-05-14 14:08:37,505 - Epoch: [18][ 800/ 1785] Overall Loss 2.261208 Objective Loss 2.261208 LR 0.001000 Time 0.030356
+2025-05-14 14:08:43,584 - Epoch: [18][ 1000/ 1785] Overall Loss 2.269842 Objective Loss 2.269842 LR 0.001000 Time 0.030362
+2025-05-14 14:08:49,712 - Epoch: [18][ 1200/ 1785] Overall Loss 2.267938 Objective Loss 2.267938 LR 0.001000 Time 0.030407
+2025-05-14 14:08:55,837 - Epoch: [18][ 1400/ 1785] Overall Loss 2.274643 Objective Loss 2.274643 LR 0.001000 Time 0.030437
+2025-05-14 14:09:01,963 - Epoch: [18][ 1600/ 1785] Overall Loss 2.277919 Objective Loss 2.277919 LR 0.001000 Time 0.030460
+2025-05-14 14:09:07,606 - Epoch: [18][ 1785/ 1785] Overall Loss 2.278154 Objective Loss 2.278154 LR 0.001000 Time 0.030464
+2025-05-14 14:09:07,631 - --- validate (epoch=18)-----------
+2025-05-14 14:09:07,632 - 12251 samples (16 per mini-batch)
+2025-05-14 14:09:07,634 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:09:19,636 - Epoch: [18][ 200/ 766] Loss 2.469304 mAP 0.862348
+2025-05-14 14:09:32,293 - Epoch: [18][ 400/ 766] Loss 2.478455 mAP 0.865199
+2025-05-14 14:09:46,132 - Epoch: [18][ 600/ 766] Loss 2.478804 mAP 0.868670
+2025-05-14 14:09:58,869 - Epoch: [18][ 766/ 766] Loss 2.470960 mAP 0.870276
+2025-05-14 14:09:58,897 - ==> mAP: 0.87028 Loss: 2.471
+
+2025-05-14 14:09:58,902 - ==> Best [mAP: 0.882007 vloss: 2.443055 Params: 335520 on epoch: 17]
+2025-05-14 14:09:58,902 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:09:58,935 -
+
+2025-05-14 14:09:58,935 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:10:05,294 - Epoch: [19][ 200/ 1785] Overall Loss 2.247581 Objective Loss 2.247581 LR 0.001000 Time 0.031782
+2025-05-14 14:10:11,255 - Epoch: [19][ 400/ 1785] Overall Loss 2.261267 Objective Loss 2.261267 LR 0.001000 Time 0.030787
+2025-05-14 14:10:17,147 - Epoch: [19][ 600/ 1785] Overall Loss 2.241398 Objective Loss 2.241398 LR 0.001000 Time 0.030342
+2025-05-14 14:10:23,053 - Epoch: [19][ 800/ 1785] Overall Loss 2.254476 Objective Loss 2.254476 LR 0.001000 Time 0.030137
+2025-05-14 14:10:28,913 - Epoch: [19][ 1000/ 1785] Overall Loss 2.260330 Objective Loss 2.260330 LR 0.001000 Time 0.029968
+2025-05-14 14:10:34,886 - Epoch: [19][ 1200/ 1785] Overall Loss 2.265529 Objective Loss 2.265529 LR 0.001000 Time 0.029948
+2025-05-14 14:10:40,982 - Epoch: [19][ 1400/ 1785] Overall Loss 2.269921 Objective Loss 2.269921 LR 0.001000 Time 0.030023
+2025-05-14 14:10:46,912 - Epoch: [19][ 1600/ 1785] Overall Loss 2.269562 Objective Loss 2.269562 LR 0.001000 Time 0.029975
+2025-05-14 14:10:52,454 - Epoch: [19][ 1785/ 1785] Overall Loss 2.270621 Objective Loss 2.270621 LR 0.001000 Time 0.029972
+2025-05-14 14:10:52,495 - --- validate (epoch=19)-----------
+2025-05-14 14:10:52,496 - 12251 samples (16 per mini-batch)
+2025-05-14 14:10:52,497 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:11:04,695 - Epoch: [19][ 200/ 766] Loss 2.457019 mAP 0.876019
+2025-05-14 14:11:17,498 - Epoch: [19][ 400/ 766] Loss 2.450931 mAP 0.877674
+2025-05-14 14:11:31,111 - Epoch: [19][ 600/ 766] Loss 2.449189 mAP 0.875046
+2025-05-14 14:11:43,540 - Epoch: [19][ 766/ 766] Loss 2.448616 mAP 0.875796
+2025-05-14 14:11:43,568 - ==> mAP: 0.87580 Loss: 2.449
+
+2025-05-14 14:11:43,572 - ==> Best [mAP: 0.882007 vloss: 2.443055 Params: 335520 on epoch: 17]
+2025-05-14 14:11:43,572 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:11:43,613 -
+
+2025-05-14 14:11:43,614 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:11:49,974 - Epoch: [20][ 200/ 1785] Overall Loss 2.191682 Objective Loss 2.191682 LR 0.001000 Time 0.031790
+2025-05-14 14:11:56,280 - Epoch: [20][ 400/ 1785] Overall Loss 2.213828 Objective Loss 2.213828 LR 0.001000 Time 0.031654
+2025-05-14 14:12:02,519 - Epoch: [20][ 600/ 1785] Overall Loss 2.216349 Objective Loss 2.216349 LR 0.001000 Time 0.031499
+2025-05-14 14:12:08,739 - Epoch: [20][ 800/ 1785] Overall Loss 2.235487 Objective Loss 2.235487 LR 0.001000 Time 0.031398
+2025-05-14 14:12:14,926 - Epoch: [20][ 1000/ 1785] Overall Loss 2.246947 Objective Loss 2.246947 LR 0.001000 Time 0.031303
+2025-05-14 14:12:21,184 - Epoch: [20][ 1200/ 1785] Overall Loss 2.246264 Objective Loss 2.246264 LR 0.001000 Time 0.031300
+2025-05-14 14:12:27,352 - Epoch: [20][ 1400/ 1785] Overall Loss 2.255610 Objective Loss 2.255610 LR 0.001000 Time 0.031233
+2025-05-14 14:12:33,536 - Epoch: [20][ 1600/ 1785] Overall Loss 2.257458 Objective Loss 2.257458 LR 0.001000 Time 0.031193
+2025-05-14 14:12:39,245 - Epoch: [20][ 1785/ 1785] Overall Loss 2.258647 Objective Loss 2.258647 LR 0.001000 Time 0.031157
+2025-05-14 14:12:39,276 - --- validate (epoch=20)-----------
+2025-05-14 14:12:39,277 - 12251 samples (16 per mini-batch)
+2025-05-14 14:12:39,279 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:12:51,546 - Epoch: [20][ 200/ 766] Loss 2.469609 mAP 0.882754
+2025-05-14 14:13:04,808 - Epoch: [20][ 400/ 766] Loss 2.465600 mAP 0.886119
+2025-05-14 14:13:18,927 - Epoch: [20][ 600/ 766] Loss 2.453618 mAP 0.880939
+2025-05-14 14:13:31,862 - Epoch: [20][ 766/ 766] Loss 2.449632 mAP 0.880097
+2025-05-14 14:13:31,892 - ==> mAP: 0.88010 Loss: 2.450
+
+2025-05-14 14:13:31,897 - ==> Best [mAP: 0.882007 vloss: 2.443055 Params: 335520 on epoch: 17]
+2025-05-14 14:13:31,897 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:13:31,938 -
+
+2025-05-14 14:13:31,939 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:13:38,086 - Epoch: [21][ 200/ 1785] Overall Loss 2.222027 Objective Loss 2.222027 LR 0.001000 Time 0.030723
+2025-05-14 14:13:44,115 - Epoch: [21][ 400/ 1785] Overall Loss 2.232494 Objective Loss 2.232494 LR 0.001000 Time 0.030430
+2025-05-14 14:13:50,318 - Epoch: [21][ 600/ 1785] Overall Loss 2.235686 Objective Loss 2.235686 LR 0.001000 Time 0.030622
+2025-05-14 14:13:56,504 - Epoch: [21][ 800/ 1785] Overall Loss 2.243497 Objective Loss 2.243497 LR 0.001000 Time 0.030698
+2025-05-14 14:14:02,614 - Epoch: [21][ 1000/ 1785] Overall Loss 2.237937 Objective Loss 2.237937 LR 0.001000 Time 0.030666
+2025-05-14 14:14:08,807 - Epoch: [21][ 1200/ 1785] Overall Loss 2.241164 Objective Loss 2.241164 LR 0.001000 Time 0.030714
+2025-05-14 14:14:15,060 - Epoch: [21][ 1400/ 1785] Overall Loss 2.249333 Objective Loss 2.249333 LR 0.001000 Time 0.030792
+2025-05-14 14:14:21,350 - Epoch: [21][ 1600/ 1785] Overall Loss 2.253158 Objective Loss 2.253158 LR 0.001000 Time 0.030873
+2025-05-14 14:14:27,179 - Epoch: [21][ 1785/ 1785] Overall Loss 2.259369 Objective Loss 2.259369 LR 0.001000 Time 0.030938
+2025-05-14 14:14:27,217 - --- validate (epoch=21)-----------
+2025-05-14 14:14:27,218 - 12251 samples (16 per mini-batch)
+2025-05-14 14:14:27,219 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:14:39,378 - Epoch: [21][ 200/ 766] Loss 2.442198 mAP 0.884010
+2025-05-14 14:14:52,367 - Epoch: [21][ 400/ 766] Loss 2.426962 mAP 0.887252
+2025-05-14 14:15:06,382 - Epoch: [21][ 600/ 766] Loss 2.419712 mAP 0.885728
+2025-05-14 14:15:19,225 - Epoch: [21][ 766/ 766] Loss 2.424871 mAP 0.883873
+2025-05-14 14:15:19,255 - ==> mAP: 0.88387 Loss: 2.425
+
+2025-05-14 14:15:19,259 - ==> Best [mAP: 0.883873 vloss: 2.424871 Params: 335520 on epoch: 21]
+2025-05-14 14:15:19,259 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:15:19,305 -
+
+2025-05-14 14:15:19,305 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:15:25,734 - Epoch: [22][ 200/ 1785] Overall Loss 2.212725 Objective Loss 2.212725 LR 0.001000 Time 0.032135
+2025-05-14 14:15:32,017 - Epoch: [22][ 400/ 1785] Overall Loss 2.231379 Objective Loss 2.231379 LR 0.001000 Time 0.031770
+2025-05-14 14:15:38,276 - Epoch: [22][ 600/ 1785] Overall Loss 2.240567 Objective Loss 2.240567 LR 0.001000 Time 0.031609
+2025-05-14 14:15:44,549 - Epoch: [22][ 800/ 1785] Overall Loss 2.237147 Objective Loss 2.237147 LR 0.001000 Time 0.031546
+2025-05-14 14:15:50,876 - Epoch: [22][ 1000/ 1785] Overall Loss 2.238620 Objective Loss 2.238620 LR 0.001000 Time 0.031562
+2025-05-14 14:15:57,254 - Epoch: [22][ 1200/ 1785] Overall Loss 2.240933 Objective Loss 2.240933 LR 0.001000 Time 0.031616
+2025-05-14 14:16:03,148 - Epoch: [22][ 1400/ 1785] Overall Loss 2.249122 Objective Loss 2.249122 LR 0.001000 Time 0.031308
+2025-05-14 14:16:09,056 - Epoch: [22][ 1600/ 1785] Overall Loss 2.252711 Objective Loss 2.252711 LR 0.001000 Time 0.031085
+2025-05-14 14:16:14,566 - Epoch: [22][ 1785/ 1785] Overall Loss 2.255300 Objective Loss 2.255300 LR 0.001000 Time 0.030950
+2025-05-14 14:16:14,599 - --- validate (epoch=22)-----------
+2025-05-14 14:16:14,600 - 12251 samples (16 per mini-batch)
+2025-05-14 14:16:14,602 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:16:26,796 - Epoch: [22][ 200/ 766] Loss 2.458638 mAP 0.883862
+2025-05-14 14:16:39,467 - Epoch: [22][ 400/ 766] Loss 2.464943 mAP 0.881080
+2025-05-14 14:16:53,195 - Epoch: [22][ 600/ 766] Loss 2.457851 mAP 0.879679
+2025-05-14 14:17:05,884 - Epoch: [22][ 766/ 766] Loss 2.453405 mAP 0.882657
+2025-05-14 14:17:05,916 - ==> mAP: 0.88266 Loss: 2.453
+
+2025-05-14 14:17:05,920 - ==> Best [mAP: 0.883873 vloss: 2.424871 Params: 335520 on epoch: 21]
+2025-05-14 14:17:05,920 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:17:05,961 -
+
+2025-05-14 14:17:05,961 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:17:12,239 - Epoch: [23][ 200/ 1785] Overall Loss 2.210420 Objective Loss 2.210420 LR 0.001000 Time 0.031373
+2025-05-14 14:17:18,193 - Epoch: [23][ 400/ 1785] Overall Loss 2.201369 Objective Loss 2.201369 LR 0.001000 Time 0.030566
+2025-05-14 14:17:24,154 - Epoch: [23][ 600/ 1785] Overall Loss 2.214161 Objective Loss 2.214161 LR 0.001000 Time 0.030309
+2025-05-14 14:17:30,154 - Epoch: [23][ 800/ 1785] Overall Loss 2.225396 Objective Loss 2.225396 LR 0.001000 Time 0.030230
+2025-05-14 14:17:36,176 - Epoch: [23][ 1000/ 1785] Overall Loss 2.231919 Objective Loss 2.231919 LR 0.001000 Time 0.030204
+2025-05-14 14:17:42,117 - Epoch: [23][ 1200/ 1785] Overall Loss 2.239481 Objective Loss 2.239481 LR 0.001000 Time 0.030119
+2025-05-14 14:17:48,182 - Epoch: [23][ 1400/ 1785] Overall Loss 2.246129 Objective Loss 2.246129 LR 0.001000 Time 0.030147
+2025-05-14 14:17:54,384 - Epoch: [23][ 1600/ 1785] Overall Loss 2.247237 Objective Loss 2.247237 LR 0.001000 Time 0.030254
+2025-05-14 14:18:00,118 - Epoch: [23][ 1785/ 1785] Overall Loss 2.250397 Objective Loss 2.250397 LR 0.001000 Time 0.030330
+2025-05-14 14:18:00,157 - --- validate (epoch=23)-----------
+2025-05-14 14:18:00,158 - 12251 samples (16 per mini-batch)
+2025-05-14 14:18:00,160 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:18:12,554 - Epoch: [23][ 200/ 766] Loss 2.510300 mAP 0.863587
+2025-05-14 14:18:25,857 - Epoch: [23][ 400/ 766] Loss 2.497870 mAP 0.865335
+2025-05-14 14:18:39,690 - Epoch: [23][ 600/ 766] Loss 2.496823 mAP 0.864678
+2025-05-14 14:18:52,292 - Epoch: [23][ 766/ 766] Loss 2.487776 mAP 0.865888
+2025-05-14 14:18:52,325 - ==> mAP: 0.86589 Loss: 2.488
+
+2025-05-14 14:18:52,331 - ==> Best [mAP: 0.883873 vloss: 2.424871 Params: 335520 on epoch: 21]
+2025-05-14 14:18:52,331 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:18:52,371 -
+
+2025-05-14 14:18:52,371 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:18:58,919 - Epoch: [24][ 200/ 1785] Overall Loss 2.211676 Objective Loss 2.211676 LR 0.001000 Time 0.032726
+2025-05-14 14:19:04,890 - Epoch: [24][ 400/ 1785] Overall Loss 2.218563 Objective Loss 2.218563 LR 0.001000 Time 0.031287
+2025-05-14 14:19:10,829 - Epoch: [24][ 600/ 1785] Overall Loss 2.220265 Objective Loss 2.220265 LR 0.001000 Time 0.030753
+2025-05-14 14:19:16,716 - Epoch: [24][ 800/ 1785] Overall Loss 2.222087 Objective Loss 2.222087 LR 0.001000 Time 0.030421
+2025-05-14 14:19:22,687 - Epoch: [24][ 1000/ 1785] Overall Loss 2.234504 Objective Loss 2.234504 LR 0.001000 Time 0.030306
+2025-05-14 14:19:28,713 - Epoch: [24][ 1200/ 1785] Overall Loss 2.238474 Objective Loss 2.238474 LR 0.001000 Time 0.030274
+2025-05-14 14:19:34,671 - Epoch: [24][ 1400/ 1785] Overall Loss 2.243508 Objective Loss 2.243508 LR 0.001000 Time 0.030204
+2025-05-14 14:19:40,555 - Epoch: [24][ 1600/ 1785] Overall Loss 2.249164 Objective Loss 2.249164 LR 0.001000 Time 0.030105
+2025-05-14 14:19:45,973 - Epoch: [24][ 1785/ 1785] Overall Loss 2.248721 Objective Loss 2.248721 LR 0.001000 Time 0.030019
+2025-05-14 14:19:46,007 - --- validate (epoch=24)-----------
+2025-05-14 14:19:46,008 - 12251 samples (16 per mini-batch)
+2025-05-14 14:19:46,010 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:19:57,943 - Epoch: [24][ 200/ 766] Loss 2.426716 mAP 0.879599
+2025-05-14 14:20:10,532 - Epoch: [24][ 400/ 766] Loss 2.444538 mAP 0.873894
+2025-05-14 14:20:24,256 - Epoch: [24][ 600/ 766] Loss 2.436108 mAP 0.877689
+2025-05-14 14:20:36,877 - Epoch: [24][ 766/ 766] Loss 2.438151 mAP 0.874684
+2025-05-14 14:20:36,908 - ==> mAP: 0.87468 Loss: 2.438
+
+2025-05-14 14:20:36,912 - ==> Best [mAP: 0.883873 vloss: 2.424871 Params: 335520 on epoch: 21]
+2025-05-14 14:20:36,912 - Saving checkpoint to: logs/2025.05.14-133616/checkpoint.pth.tar
+2025-05-14 14:20:36,953 - Initiating quantization aware training (QAT)...
+2025-05-14 14:20:36,957 - Collecting statistics for quantization aware training (QAT)...
+2025-05-14 14:21:13,980 -
+
+2025-05-14 14:21:13,980 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:21:22,522 - Epoch: [25][ 200/ 1785] Overall Loss 3.073878 Objective Loss 3.073878 LR 0.001000 Time 0.042699
+2025-05-14 14:21:30,757 - Epoch: [25][ 400/ 1785] Overall Loss 2.704286 Objective Loss 2.704286 LR 0.001000 Time 0.041932
+2025-05-14 14:21:39,089 - Epoch: [25][ 600/ 1785] Overall Loss 2.564254 Objective Loss 2.564254 LR 0.001000 Time 0.041839
+2025-05-14 14:21:47,455 - Epoch: [25][ 800/ 1785] Overall Loss 2.495164 Objective Loss 2.495164 LR 0.001000 Time 0.041834
+2025-05-14 14:21:55,908 - Epoch: [25][ 1000/ 1785] Overall Loss 2.466467 Objective Loss 2.466467 LR 0.001000 Time 0.041919
+2025-05-14 14:22:04,287 - Epoch: [25][ 1200/ 1785] Overall Loss 2.430674 Objective Loss 2.430674 LR 0.001000 Time 0.041914
+2025-05-14 14:22:12,483 - Epoch: [25][ 1400/ 1785] Overall Loss 2.416244 Objective Loss 2.416244 LR 0.001000 Time 0.041779
+2025-05-14 14:22:20,941 - Epoch: [25][ 1600/ 1785] Overall Loss 2.399216 Objective Loss 2.399216 LR 0.001000 Time 0.041842
+2025-05-14 14:22:28,750 - Epoch: [25][ 1785/ 1785] Overall Loss 2.391182 Objective Loss 2.391182 LR 0.001000 Time 0.041879
+2025-05-14 14:22:28,782 - --- validate (epoch=25)-----------
+2025-05-14 14:22:28,783 - 12251 samples (16 per mini-batch)
+2025-05-14 14:22:28,785 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:22:42,282 - Epoch: [25][ 200/ 766] Loss 2.529417 mAP 0.856150
+2025-05-14 14:22:56,921 - Epoch: [25][ 400/ 766] Loss 2.541538 mAP 0.859319
+2025-05-14 14:23:11,784 - Epoch: [25][ 600/ 766] Loss 2.535932 mAP 0.856086
+2025-05-14 14:23:25,329 - Epoch: [25][ 766/ 766] Loss 2.543305 mAP 0.856122
+2025-05-14 14:23:25,364 - ==> mAP: 0.85612 Loss: 2.543
+
+2025-05-14 14:23:25,369 - ==> Best [mAP: 0.856122 vloss: 2.543305 Params: 318604 on epoch: 25]
+2025-05-14 14:23:25,369 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 14:23:25,395 -
+
+2025-05-14 14:23:25,395 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:23:34,153 - Epoch: [26][ 200/ 1785] Overall Loss 2.220502 Objective Loss 2.220502 LR 0.001000 Time 0.043777
+2025-05-14 14:23:42,653 - Epoch: [26][ 400/ 1785] Overall Loss 2.239989 Objective Loss 2.239989 LR 0.001000 Time 0.043134
+2025-05-14 14:23:51,121 - Epoch: [26][ 600/ 1785] Overall Loss 2.247594 Objective Loss 2.247594 LR 0.001000 Time 0.042868
+2025-05-14 14:23:59,589 - Epoch: [26][ 800/ 1785] Overall Loss 2.255213 Objective Loss 2.255213 LR 0.001000 Time 0.042734
+2025-05-14 14:24:07,999 - Epoch: [26][ 1000/ 1785] Overall Loss 2.262146 Objective Loss 2.262146 LR 0.001000 Time 0.042595
+2025-05-14 14:24:16,398 - Epoch: [26][ 1200/ 1785] Overall Loss 2.270698 Objective Loss 2.270698 LR 0.001000 Time 0.042494
+2025-05-14 14:24:24,802 - Epoch: [26][ 1400/ 1785] Overall Loss 2.268255 Objective Loss 2.268255 LR 0.001000 Time 0.042424
+2025-05-14 14:24:33,235 - Epoch: [26][ 1600/ 1785] Overall Loss 2.268312 Objective Loss 2.268312 LR 0.001000 Time 0.042391
+2025-05-14 14:24:41,004 - Epoch: [26][ 1785/ 1785] Overall Loss 2.271609 Objective Loss 2.271609 LR 0.001000 Time 0.042349
+2025-05-14 14:24:41,045 - --- validate (epoch=26)-----------
+2025-05-14 14:24:41,045 - 12251 samples (16 per mini-batch)
+2025-05-14 14:24:41,047 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:24:54,941 - Epoch: [26][ 200/ 766] Loss 2.426695 mAP 0.868179
+2025-05-14 14:25:09,220 - Epoch: [26][ 400/ 766] Loss 2.455373 mAP 0.866129
+2025-05-14 14:25:24,562 - Epoch: [26][ 600/ 766] Loss 2.457582 mAP 0.868823
+2025-05-14 14:25:38,507 - Epoch: [26][ 766/ 766] Loss 2.459413 mAP 0.868695
+2025-05-14 14:25:38,549 - ==> mAP: 0.86870 Loss: 2.459
+
+2025-05-14 14:25:38,553 - ==> Best [mAP: 0.868695 vloss: 2.459413 Params: 318604 on epoch: 26]
+2025-05-14 14:25:38,553 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 14:25:38,588 -
+
+2025-05-14 14:25:38,588 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:25:47,151 - Epoch: [27][ 200/ 1785] Overall Loss 2.199132 Objective Loss 2.199132 LR 0.001000 Time 0.042803
+2025-05-14 14:25:55,388 - Epoch: [27][ 400/ 1785] Overall Loss 2.229817 Objective Loss 2.229817 LR 0.001000 Time 0.041988
+2025-05-14 14:26:03,534 - Epoch: [27][ 600/ 1785] Overall Loss 2.236001 Objective Loss 2.236001 LR 0.001000 Time 0.041567
+2025-05-14 14:26:11,684 - Epoch: [27][ 800/ 1785] Overall Loss 2.237069 Objective Loss 2.237069 LR 0.001000 Time 0.041360
+2025-05-14 14:26:19,848 - Epoch: [27][ 1000/ 1785] Overall Loss 2.240647 Objective Loss 2.240647 LR 0.001000 Time 0.041251
+2025-05-14 14:26:28,067 - Epoch: [27][ 1200/ 1785] Overall Loss 2.238155 Objective Loss 2.238155 LR 0.001000 Time 0.041223
+2025-05-14 14:26:36,251 - Epoch: [27][ 1400/ 1785] Overall Loss 2.250734 Objective Loss 2.250734 LR 0.001000 Time 0.041179
+2025-05-14 14:26:44,380 - Epoch: [27][ 1600/ 1785] Overall Loss 2.254220 Objective Loss 2.254220 LR 0.001000 Time 0.041111
+2025-05-14 14:26:51,801 - Epoch: [27][ 1785/ 1785] Overall Loss 2.260385 Objective Loss 2.260385 LR 0.001000 Time 0.041007
+2025-05-14 14:26:51,835 - --- validate (epoch=27)-----------
+2025-05-14 14:26:51,836 - 12251 samples (16 per mini-batch)
+2025-05-14 14:26:51,837 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:27:05,407 - Epoch: [27][ 200/ 766] Loss 2.431847 mAP 0.881254
+2025-05-14 14:27:19,923 - Epoch: [27][ 400/ 766] Loss 2.413118 mAP 0.885412
+2025-05-14 14:27:35,385 - Epoch: [27][ 600/ 766] Loss 2.412108 mAP 0.883564
+2025-05-14 14:27:48,774 - Epoch: [27][ 766/ 766] Loss 2.411577 mAP 0.882755
+2025-05-14 14:27:48,807 - ==> mAP: 0.88275 Loss: 2.412
+
+2025-05-14 14:27:48,812 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27]
+2025-05-14 14:27:48,813 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 14:27:48,846 -
+
+2025-05-14 14:27:48,846 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:27:57,183 - Epoch: [28][ 200/ 1785] Overall Loss 2.250111 Objective Loss 2.250111 LR 0.001000 Time 0.041672
+2025-05-14 14:28:05,358 - Epoch: [28][ 400/ 1785] Overall Loss 2.238303 Objective Loss 2.238303 LR 0.001000 Time 0.041270
+2025-05-14 14:28:13,567 - Epoch: [28][ 600/ 1785] Overall Loss 2.229353 Objective Loss 2.229353 LR 0.001000 Time 0.041191
+2025-05-14 14:28:21,864 - Epoch: [28][ 800/ 1785] Overall Loss 2.229833 Objective Loss 2.229833 LR 0.001000 Time 0.041263
+2025-05-14 14:28:30,228 - Epoch: [28][ 1000/ 1785] Overall Loss 2.236889 Objective Loss 2.236889 LR 0.001000 Time 0.041373
+2025-05-14 14:28:38,648 - Epoch: [28][ 1200/ 1785] Overall Loss 2.233163 Objective Loss 2.233163 LR 0.001000 Time 0.041492
+2025-05-14 14:28:47,087 - Epoch: [28][ 1400/ 1785] Overall Loss 2.235785 Objective Loss 2.235785 LR 0.001000 Time 0.041592
+2025-05-14 14:28:55,513 - Epoch: [28][ 1600/ 1785] Overall Loss 2.241238 Objective Loss 2.241238 LR 0.001000 Time 0.041658
+2025-05-14 14:29:03,250 - Epoch: [28][ 1785/ 1785] Overall Loss 2.244519 Objective Loss 2.244519 LR 0.001000 Time 0.041674
+2025-05-14 14:29:03,284 - --- validate (epoch=28)-----------
+2025-05-14 14:29:03,285 - 12251 samples (16 per mini-batch)
+2025-05-14 14:29:03,287 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:29:17,144 - Epoch: [28][ 200/ 766] Loss 2.420981 mAP 0.881613
+2025-05-14 14:29:31,637 - Epoch: [28][ 400/ 766] Loss 2.411375 mAP 0.875482
+2025-05-14 14:29:47,294 - Epoch: [28][ 600/ 766] Loss 2.430594 mAP 0.873783
+2025-05-14 14:30:01,379 - Epoch: [28][ 766/ 766] Loss 2.427325 mAP 0.873577
+2025-05-14 14:30:01,417 - ==> mAP: 0.87358 Loss: 2.427
+
+2025-05-14 14:30:01,422 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27]
+2025-05-14 14:30:01,422 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 14:30:01,453 -
+
+2025-05-14 14:30:01,453 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:30:09,969 - Epoch: [29][ 200/ 1785] Overall Loss 2.224285 Objective Loss 2.224285 LR 0.001000 Time 0.042567
+2025-05-14 14:30:18,357 - Epoch: [29][ 400/ 1785] Overall Loss 2.226038 Objective Loss 2.226038 LR 0.001000 Time 0.042249
+2025-05-14 14:30:26,381 - Epoch: [29][ 600/ 1785] Overall Loss 2.229151 Objective Loss 2.229151 LR 0.001000 Time 0.041535
+2025-05-14 14:30:34,386 - Epoch: [29][ 800/ 1785] Overall Loss 2.223220 Objective Loss 2.223220 LR 0.001000 Time 0.041156
+2025-05-14 14:30:42,362 - Epoch: [29][ 1000/ 1785] Overall Loss 2.224801 Objective Loss 2.224801 LR 0.001000 Time 0.040899
+2025-05-14 14:30:50,299 - Epoch: [29][ 1200/ 1785] Overall Loss 2.227997 Objective Loss 2.227997 LR 0.001000 Time 0.040694
+2025-05-14 14:30:58,344 - Epoch: [29][ 1400/ 1785] Overall Loss 2.226846 Objective Loss 2.226846 LR 0.001000 Time 0.040626
+2025-05-14 14:31:06,360 - Epoch: [29][ 1600/ 1785] Overall Loss 2.227880 Objective Loss 2.227880 LR 0.001000 Time 0.040557
+2025-05-14 14:31:13,793 - Epoch: [29][ 1785/ 1785] Overall Loss 2.236666 Objective Loss 2.236666 LR 0.001000 Time 0.040516
+2025-05-14 14:31:13,833 - --- validate (epoch=29)-----------
+2025-05-14 14:31:13,834 - 12251 samples (16 per mini-batch)
+2025-05-14 14:31:13,835 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:31:27,250 - Epoch: [29][ 200/ 766] Loss 2.401116 mAP 0.875757
+2025-05-14 14:31:41,541 - Epoch: [29][ 400/ 766] Loss 2.397196 mAP 0.878899
+2025-05-14 14:31:56,617 - Epoch: [29][ 600/ 766] Loss 2.399697 mAP 0.876438
+2025-05-14 14:32:10,455 - Epoch: [29][ 766/ 766] Loss 2.406929 mAP 0.875536
+2025-05-14 14:32:10,494 - ==> mAP: 0.87554 Loss: 2.407
+
+2025-05-14 14:32:10,499 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27]
+2025-05-14 14:32:10,499 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 14:32:10,530 -
+
+2025-05-14 14:32:10,530 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:32:19,218 - Epoch: [30][ 200/ 1785] Overall Loss 2.222298 Objective Loss 2.222298 LR 0.001000 Time 0.043426
+2025-05-14 14:32:27,558 - Epoch: [30][ 400/ 1785] Overall Loss 2.205763 Objective Loss 2.205763 LR 0.001000 Time 0.042560
+2025-05-14 14:32:35,972 - Epoch: [30][ 600/ 1785] Overall Loss 2.208516 Objective Loss 2.208516 LR 0.001000 Time 0.042392
+2025-05-14 14:32:44,323 - Epoch: [30][ 800/ 1785] Overall Loss 2.214699 Objective Loss 2.214699 LR 0.001000 Time 0.042231
+2025-05-14 14:32:52,630 - Epoch: [30][ 1000/ 1785] Overall Loss 2.216606 Objective Loss 2.216606 LR 0.001000 Time 0.042091
+2025-05-14 14:33:00,999 - Epoch: [30][ 1200/ 1785] Overall Loss 2.218978 Objective Loss 2.218978 LR 0.001000 Time 0.042048
+2025-05-14 14:33:09,380 - Epoch: [30][ 1400/ 1785] Overall Loss 2.223732 Objective Loss 2.223732 LR 0.001000 Time 0.042027
+2025-05-14 14:33:17,757 - Epoch: [30][ 1600/ 1785] Overall Loss 2.227297 Objective Loss 2.227297 LR 0.001000 Time 0.042008
+2025-05-14 14:33:25,508 - Epoch: [30][ 1785/ 1785] Overall Loss 2.228240 Objective Loss 2.228240 LR 0.001000 Time 0.041995
+2025-05-14 14:33:25,545 - --- validate (epoch=30)-----------
+2025-05-14 14:33:25,546 - 12251 samples (16 per mini-batch)
+2025-05-14 14:33:25,547 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:33:39,455 - Epoch: [30][ 200/ 766] Loss 2.448810 mAP 0.872995
+2025-05-14 14:33:53,737 - Epoch: [30][ 400/ 766] Loss 2.445084 mAP 0.870453
+2025-05-14 14:34:08,932 - Epoch: [30][ 600/ 766] Loss 2.435974 mAP 0.868311
+2025-05-14 14:34:22,672 - Epoch: [30][ 766/ 766] Loss 2.446341 mAP 0.867694
+2025-05-14 14:34:22,709 - ==> mAP: 0.86769 Loss: 2.446
+
+2025-05-14 14:34:22,714 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27]
+2025-05-14 14:34:22,714 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 14:34:22,744 -
+
+2025-05-14 14:34:22,744 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:34:31,020 - Epoch: [31][ 200/ 1785] Overall Loss 2.151813 Objective Loss 2.151813 LR 0.001000 Time 0.041363
+2025-05-14 14:34:39,092 - Epoch: [31][ 400/ 1785] Overall Loss 2.165232 Objective Loss 2.165232 LR 0.001000 Time 0.040855
+2025-05-14 14:34:47,474 - Epoch: [31][ 600/ 1785] Overall Loss 2.180723 Objective Loss 2.180723 LR 0.001000 Time 0.041205
+2025-05-14 14:34:55,873 - Epoch: [31][ 800/ 1785] Overall Loss 2.197216 Objective Loss 2.197216 LR 0.001000 Time 0.041400
+2025-05-14 14:35:04,228 - Epoch: [31][ 1000/ 1785] Overall Loss 2.204215 Objective Loss 2.204215 LR 0.001000 Time 0.041473
+2025-05-14 14:35:12,655 - Epoch: [31][ 1200/ 1785] Overall Loss 2.208412 Objective Loss 2.208412 LR 0.001000 Time 0.041582
+2025-05-14 14:35:20,969 - Epoch: [31][ 1400/ 1785] Overall Loss 2.212270 Objective Loss 2.212270 LR 0.001000 Time 0.041580
+2025-05-14 14:35:29,399 - Epoch: [31][ 1600/ 1785] Overall Loss 2.214623 Objective Loss 2.214623 LR 0.001000 Time 0.041650
+2025-05-14 14:35:37,188 - Epoch: [31][ 1785/ 1785] Overall Loss 2.220587 Objective Loss 2.220587 LR 0.001000 Time 0.041696
+2025-05-14 14:35:37,227 - --- validate (epoch=31)-----------
+2025-05-14 14:35:37,228 - 12251 samples (16 per mini-batch)
+2025-05-14 14:35:37,229 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 14:35:51,069 - Epoch: [31][ 200/ 766] Loss 2.412547 mAP 0.872870
+2025-05-14 14:36:05,090 - Epoch: [31][ 400/ 766] Loss 2.427617 mAP 0.876025
+2025-05-14 14:36:20,235 - Epoch: [31][ 600/ 766] Loss 2.431128 mAP 0.874325
+2025-05-14 14:36:34,051 - Epoch: [31][ 766/ 766] Loss 2.420868 mAP 0.877531
+2025-05-14 14:36:34,088 - ==> mAP: 0.87753 Loss: 2.421
+
+2025-05-14 14:36:34,093 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27]
+2025-05-14 14:36:34,093 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 14:36:34,125 -
+
+2025-05-14 14:36:34,125 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 14:36:42,601 - Epoch: [32][ 200/ 1785] Overall Loss 2.156588 Objective Loss 2.156588 LR 0.001000 Time 0.042369
+2025-05-14 14:36:50,947 - Epoch: [32][ 400/ 1785] Overall Loss 2.163276 Objective Loss 2.163276 LR 0.001000 Time 0.042045
+2025-05-14 14:36:59,300 - Epoch: [32][ 600/ 1785] Overall Loss 2.176425 Objective Loss 2.176425 LR 0.001000 Time 0.041948
+2025-05-14 14:37:07,644 - Epoch: [32][ 800/ 1785] Overall Loss 2.184959 Objective Loss 2.184959 LR 0.001000 Time 0.041890
+2025-05-14 14:37:16,011 - Epoch: [32][ 1000/ 1785] Overall Loss 2.194745 Objective Loss 2.194745 LR 0.001000 Time 0.041877
+2025-05-14 14:37:24,297 - Epoch: [32][ 1200/ 1785] Overall Loss 2.203256 Objective Loss 2.203256 LR 0.001000 Time 0.041800
+2025-05-14 14:37:32,449 - Epoch: [32][ 1400/ 1785] Overall Loss 2.202731 Objective Loss 2.202731 LR 0.001000 Time 0.041651
+2025-05-14 14:37:40,591 - Epoch: [32][ 1600/ 1785] Overall Loss 2.207423 Objective Loss 2.207423 LR 0.001000 Time 0.041532
+2025-05-14 14:37:48,128 - Epoch: [32][ 1785/ 1785] Overall Loss 2.215617 Objective Loss 2.215617 LR 0.001000 Time 0.041450
+2025-05-14 14:37:48,167 - --- validate (epoch=32)-----------
+2025-05-14 14:37:48,167 - 12251 samples (16 per mini-batch)
+2025-05-14 14:37:48,169 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14
14:38:02,053 - Epoch: [32][ 200/ 766] Loss 2.425469 mAP 0.869717 +2025-05-14 14:38:16,387 - Epoch: [32][ 400/ 766] Loss 2.421045 mAP 0.869484 +2025-05-14 14:38:31,615 - Epoch: [32][ 600/ 766] Loss 2.413660 mAP 0.873718 +2025-05-14 14:38:45,483 - Epoch: [32][ 766/ 766] Loss 2.406576 mAP 0.874155 +2025-05-14 14:38:45,522 - ==> mAP: 0.87416 Loss: 2.407 + +2025-05-14 14:38:45,527 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:38:45,527 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:38:45,557 - + +2025-05-14 14:38:45,557 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:38:53,930 - Epoch: [33][ 200/ 1785] Overall Loss 2.158511 Objective Loss 2.158511 LR 0.001000 Time 0.041850 +2025-05-14 14:39:02,039 - Epoch: [33][ 400/ 1785] Overall Loss 2.166832 Objective Loss 2.166832 LR 0.001000 Time 0.041194 +2025-05-14 14:39:10,132 - Epoch: [33][ 600/ 1785] Overall Loss 2.179326 Objective Loss 2.179326 LR 0.001000 Time 0.040948 +2025-05-14 14:39:18,231 - Epoch: [33][ 800/ 1785] Overall Loss 2.184691 Objective Loss 2.184691 LR 0.001000 Time 0.040834 +2025-05-14 14:39:26,336 - Epoch: [33][ 1000/ 1785] Overall Loss 2.191933 Objective Loss 2.191933 LR 0.001000 Time 0.040770 +2025-05-14 14:39:34,449 - Epoch: [33][ 1200/ 1785] Overall Loss 2.199124 Objective Loss 2.199124 LR 0.001000 Time 0.040734 +2025-05-14 14:39:42,623 - Epoch: [33][ 1400/ 1785] Overall Loss 2.203108 Objective Loss 2.203108 LR 0.001000 Time 0.040752 +2025-05-14 14:39:50,837 - Epoch: [33][ 1600/ 1785] Overall Loss 2.205766 Objective Loss 2.205766 LR 0.001000 Time 0.040791 +2025-05-14 14:39:58,315 - Epoch: [33][ 1785/ 1785] Overall Loss 2.208079 Objective Loss 2.208079 LR 0.001000 Time 0.040752 +2025-05-14 14:39:58,356 - --- validate (epoch=33)----------- +2025-05-14 14:39:58,356 - 12251 samples (16 per mini-batch) +2025-05-14 14:39:58,358 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': 
{'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:40:12,337 - Epoch: [33][ 200/ 766] Loss 2.391741 mAP 0.865008 +2025-05-14 14:40:26,470 - Epoch: [33][ 400/ 766] Loss 2.407651 mAP 0.861311 +2025-05-14 14:40:41,777 - Epoch: [33][ 600/ 766] Loss 2.400723 mAP 0.863528 +2025-05-14 14:40:55,855 - Epoch: [33][ 766/ 766] Loss 2.405561 mAP 0.864877 +2025-05-14 14:40:55,889 - ==> mAP: 0.86488 Loss: 2.406 + +2025-05-14 14:40:55,895 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:40:55,895 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:40:55,925 - + +2025-05-14 14:40:55,926 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:41:04,154 - Epoch: [34][ 200/ 1785] Overall Loss 2.168611 Objective Loss 2.168611 LR 0.001000 Time 0.041128 +2025-05-14 14:41:12,266 - Epoch: [34][ 400/ 1785] Overall Loss 2.170213 Objective Loss 2.170213 LR 0.001000 Time 0.040841 +2025-05-14 14:41:20,427 - Epoch: [34][ 600/ 1785] Overall Loss 2.184260 Objective Loss 2.184260 LR 0.001000 Time 0.040826 +2025-05-14 14:41:28,569 - Epoch: [34][ 800/ 1785] Overall Loss 2.188873 Objective Loss 2.188873 LR 0.001000 Time 0.040795 +2025-05-14 14:41:36,645 - Epoch: [34][ 1000/ 1785] Overall Loss 2.189812 Objective Loss 2.189812 LR 0.001000 Time 0.040711 +2025-05-14 14:41:44,748 - Epoch: [34][ 1200/ 1785] Overall Loss 2.198992 Objective Loss 2.198992 LR 0.001000 Time 0.040676 +2025-05-14 14:41:52,848 - Epoch: [34][ 1400/ 1785] Overall Loss 2.199210 Objective Loss 2.199210 LR 0.001000 Time 0.040650 +2025-05-14 14:42:00,951 - Epoch: [34][ 1600/ 1785] Overall Loss 2.200411 Objective Loss 2.200411 LR 0.001000 Time 0.040632 +2025-05-14 14:42:08,471 - Epoch: [34][ 1785/ 1785] Overall Loss 2.201150 Objective Loss 2.201150 LR 0.001000 Time 0.040633 +2025-05-14 14:42:08,517 - --- validate (epoch=34)----------- +2025-05-14 14:42:08,518 - 12251 samples (16 per mini-batch) +2025-05-14 14:42:08,520 - 
{'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:42:22,001 - Epoch: [34][ 200/ 766] Loss 2.431746 mAP 0.863941 +2025-05-14 14:42:36,302 - Epoch: [34][ 400/ 766] Loss 2.444797 mAP 0.859379 +2025-05-14 14:42:51,800 - Epoch: [34][ 600/ 766] Loss 2.435853 mAP 0.860250 +2025-05-14 14:43:05,737 - Epoch: [34][ 766/ 766] Loss 2.433290 mAP 0.860785 +2025-05-14 14:43:05,774 - ==> mAP: 0.86079 Loss: 2.433 + +2025-05-14 14:43:05,779 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:43:05,779 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:43:05,810 - + +2025-05-14 14:43:05,810 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:43:14,401 - Epoch: [35][ 200/ 1785] Overall Loss 2.142035 Objective Loss 2.142035 LR 0.001000 Time 0.042940 +2025-05-14 14:43:22,838 - Epoch: [35][ 400/ 1785] Overall Loss 2.141049 Objective Loss 2.141049 LR 0.001000 Time 0.042560 +2025-05-14 14:43:31,296 - Epoch: [35][ 600/ 1785] Overall Loss 2.155195 Objective Loss 2.155195 LR 0.001000 Time 0.042466 +2025-05-14 14:43:39,777 - Epoch: [35][ 800/ 1785] Overall Loss 2.167630 Objective Loss 2.167630 LR 0.001000 Time 0.042449 +2025-05-14 14:43:48,265 - Epoch: [35][ 1000/ 1785] Overall Loss 2.180456 Objective Loss 2.180456 LR 0.001000 Time 0.042446 +2025-05-14 14:43:56,644 - Epoch: [35][ 1200/ 1785] Overall Loss 2.185995 Objective Loss 2.185995 LR 0.001000 Time 0.042352 +2025-05-14 14:44:05,015 - Epoch: [35][ 1400/ 1785] Overall Loss 2.183624 Objective Loss 2.183624 LR 0.001000 Time 0.042280 +2025-05-14 14:44:13,395 - Epoch: [35][ 1600/ 1785] Overall Loss 2.189818 Objective Loss 2.189818 LR 0.001000 Time 0.042231 +2025-05-14 14:44:20,923 - Epoch: [35][ 1785/ 1785] Overall Loss 2.194345 Objective Loss 2.194345 LR 0.001000 Time 0.042071 +2025-05-14 14:44:20,967 - --- validate (epoch=35)----------- +2025-05-14 14:44:20,967 - 12251 
samples (16 per mini-batch) +2025-05-14 14:44:20,969 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:44:34,670 - Epoch: [35][ 200/ 766] Loss 2.459607 mAP 0.862700 +2025-05-14 14:44:49,340 - Epoch: [35][ 400/ 766] Loss 2.471612 mAP 0.854429 +2025-05-14 14:45:04,673 - Epoch: [35][ 600/ 766] Loss 2.472782 mAP 0.852137 +2025-05-14 14:45:18,292 - Epoch: [35][ 766/ 766] Loss 2.478960 mAP 0.851110 +2025-05-14 14:45:18,328 - ==> mAP: 0.85111 Loss: 2.479 + +2025-05-14 14:45:18,334 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:45:18,334 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:45:18,364 - + +2025-05-14 14:45:18,364 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:45:26,890 - Epoch: [36][ 200/ 1785] Overall Loss 2.115314 Objective Loss 2.115314 LR 0.001000 Time 0.042617 +2025-05-14 14:45:35,253 - Epoch: [36][ 400/ 1785] Overall Loss 2.153471 Objective Loss 2.153471 LR 0.001000 Time 0.042211 +2025-05-14 14:45:43,569 - Epoch: [36][ 600/ 1785] Overall Loss 2.152550 Objective Loss 2.152550 LR 0.001000 Time 0.041998 +2025-05-14 14:45:51,880 - Epoch: [36][ 800/ 1785] Overall Loss 2.164595 Objective Loss 2.164595 LR 0.001000 Time 0.041885 +2025-05-14 14:46:00,203 - Epoch: [36][ 1000/ 1785] Overall Loss 2.167433 Objective Loss 2.167433 LR 0.001000 Time 0.041829 +2025-05-14 14:46:08,713 - Epoch: [36][ 1200/ 1785] Overall Loss 2.171750 Objective Loss 2.171750 LR 0.001000 Time 0.041948 +2025-05-14 14:46:17,160 - Epoch: [36][ 1400/ 1785] Overall Loss 2.180173 Objective Loss 2.180173 LR 0.001000 Time 0.041988 +2025-05-14 14:46:25,617 - Epoch: [36][ 1600/ 1785] Overall Loss 2.183480 Objective Loss 2.183480 LR 0.001000 Time 0.042024 +2025-05-14 14:46:33,355 - Epoch: [36][ 1785/ 1785] Overall Loss 2.186400 Objective Loss 2.186400 LR 0.001000 Time 0.042003 +2025-05-14 14:46:33,394 - --- validate 
(epoch=36)----------- +2025-05-14 14:46:33,395 - 12251 samples (16 per mini-batch) +2025-05-14 14:46:33,397 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:46:47,304 - Epoch: [36][ 200/ 766] Loss 2.445619 mAP 0.864646 +2025-05-14 14:47:01,482 - Epoch: [36][ 400/ 766] Loss 2.461735 mAP 0.862361 +2025-05-14 14:47:16,555 - Epoch: [36][ 600/ 766] Loss 2.471719 mAP 0.858661 +2025-05-14 14:47:30,276 - Epoch: [36][ 766/ 766] Loss 2.466429 mAP 0.860040 +2025-05-14 14:47:30,313 - ==> mAP: 0.86004 Loss: 2.466 + +2025-05-14 14:47:30,319 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:47:30,319 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:47:30,350 - + +2025-05-14 14:47:30,350 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:47:38,558 - Epoch: [37][ 200/ 1785] Overall Loss 2.145868 Objective Loss 2.145868 LR 0.001000 Time 0.041027 +2025-05-14 14:47:46,519 - Epoch: [37][ 400/ 1785] Overall Loss 2.154273 Objective Loss 2.154273 LR 0.001000 Time 0.040410 +2025-05-14 14:47:54,869 - Epoch: [37][ 600/ 1785] Overall Loss 2.155557 Objective Loss 2.155557 LR 0.001000 Time 0.040855 +2025-05-14 14:48:03,165 - Epoch: [37][ 800/ 1785] Overall Loss 2.155151 Objective Loss 2.155151 LR 0.001000 Time 0.041009 +2025-05-14 14:48:11,476 - Epoch: [37][ 1000/ 1785] Overall Loss 2.161772 Objective Loss 2.161772 LR 0.001000 Time 0.041116 +2025-05-14 14:48:19,848 - Epoch: [37][ 1200/ 1785] Overall Loss 2.172773 Objective Loss 2.172773 LR 0.001000 Time 0.041239 +2025-05-14 14:48:28,234 - Epoch: [37][ 1400/ 1785] Overall Loss 2.175455 Objective Loss 2.175455 LR 0.001000 Time 0.041336 +2025-05-14 14:48:36,496 - Epoch: [37][ 1600/ 1785] Overall Loss 2.179271 Objective Loss 2.179271 LR 0.001000 Time 0.041332 +2025-05-14 14:48:44,175 - Epoch: [37][ 1785/ 1785] Overall Loss 2.184483 Objective Loss 2.184483 LR 0.001000 
Time 0.041350 +2025-05-14 14:48:44,224 - --- validate (epoch=37)----------- +2025-05-14 14:48:44,224 - 12251 samples (16 per mini-batch) +2025-05-14 14:48:44,226 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:48:57,643 - Epoch: [37][ 200/ 766] Loss 2.420071 mAP 0.868240 +2025-05-14 14:49:11,937 - Epoch: [37][ 400/ 766] Loss 2.430969 mAP 0.869244 +2025-05-14 14:49:27,151 - Epoch: [37][ 600/ 766] Loss 2.420044 mAP 0.868754 +2025-05-14 14:49:40,841 - Epoch: [37][ 766/ 766] Loss 2.426243 mAP 0.866756 +2025-05-14 14:49:40,877 - ==> mAP: 0.86676 Loss: 2.426 + +2025-05-14 14:49:40,882 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:49:40,882 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:49:40,905 - + +2025-05-14 14:49:40,905 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:49:49,411 - Epoch: [38][ 200/ 1785] Overall Loss 2.137430 Objective Loss 2.137430 LR 0.001000 Time 0.042516 +2025-05-14 14:49:57,532 - Epoch: [38][ 400/ 1785] Overall Loss 2.145493 Objective Loss 2.145493 LR 0.001000 Time 0.041557 +2025-05-14 14:50:05,486 - Epoch: [38][ 600/ 1785] Overall Loss 2.146948 Objective Loss 2.146948 LR 0.001000 Time 0.040957 +2025-05-14 14:50:13,456 - Epoch: [38][ 800/ 1785] Overall Loss 2.145210 Objective Loss 2.145210 LR 0.001000 Time 0.040678 +2025-05-14 14:50:21,429 - Epoch: [38][ 1000/ 1785] Overall Loss 2.152642 Objective Loss 2.152642 LR 0.001000 Time 0.040513 +2025-05-14 14:50:29,381 - Epoch: [38][ 1200/ 1785] Overall Loss 2.164059 Objective Loss 2.164059 LR 0.001000 Time 0.040386 +2025-05-14 14:50:37,429 - Epoch: [38][ 1400/ 1785] Overall Loss 2.168999 Objective Loss 2.168999 LR 0.001000 Time 0.040363 +2025-05-14 14:50:45,458 - Epoch: [38][ 1600/ 1785] Overall Loss 2.172245 Objective Loss 2.172245 LR 0.001000 Time 0.040335 +2025-05-14 14:50:52,786 - Epoch: [38][ 1785/ 1785] 
Overall Loss 2.178288 Objective Loss 2.178288 LR 0.001000 Time 0.040259 +2025-05-14 14:50:52,826 - --- validate (epoch=38)----------- +2025-05-14 14:50:52,827 - 12251 samples (16 per mini-batch) +2025-05-14 14:50:52,829 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:51:06,597 - Epoch: [38][ 200/ 766] Loss 2.414206 mAP 0.876839 +2025-05-14 14:51:21,158 - Epoch: [38][ 400/ 766] Loss 2.398135 mAP 0.878672 +2025-05-14 14:51:36,218 - Epoch: [38][ 600/ 766] Loss 2.414574 mAP 0.878120 +2025-05-14 14:51:50,063 - Epoch: [38][ 766/ 766] Loss 2.413639 mAP 0.880955 +2025-05-14 14:51:50,106 - ==> mAP: 0.88095 Loss: 2.414 + +2025-05-14 14:51:50,111 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:51:50,111 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:51:50,142 - + +2025-05-14 14:51:50,142 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:51:58,788 - Epoch: [39][ 200/ 1785] Overall Loss 2.109945 Objective Loss 2.109945 LR 0.001000 Time 0.043214 +2025-05-14 14:52:07,229 - Epoch: [39][ 400/ 1785] Overall Loss 2.124574 Objective Loss 2.124574 LR 0.001000 Time 0.042705 +2025-05-14 14:52:15,621 - Epoch: [39][ 600/ 1785] Overall Loss 2.124551 Objective Loss 2.124551 LR 0.001000 Time 0.042455 +2025-05-14 14:52:23,925 - Epoch: [39][ 800/ 1785] Overall Loss 2.128885 Objective Loss 2.128885 LR 0.001000 Time 0.042219 +2025-05-14 14:52:32,259 - Epoch: [39][ 1000/ 1785] Overall Loss 2.140434 Objective Loss 2.140434 LR 0.001000 Time 0.042107 +2025-05-14 14:52:40,618 - Epoch: [39][ 1200/ 1785] Overall Loss 2.156736 Objective Loss 2.156736 LR 0.001000 Time 0.042053 +2025-05-14 14:52:48,972 - Epoch: [39][ 1400/ 1785] Overall Loss 2.165904 Objective Loss 2.165904 LR 0.001000 Time 0.042012 +2025-05-14 14:52:57,326 - Epoch: [39][ 1600/ 1785] Overall Loss 2.170406 Objective Loss 2.170406 LR 0.001000 Time 
0.041981 +2025-05-14 14:53:05,047 - Epoch: [39][ 1785/ 1785] Overall Loss 2.173427 Objective Loss 2.173427 LR 0.001000 Time 0.041955 +2025-05-14 14:53:05,094 - --- validate (epoch=39)----------- +2025-05-14 14:53:05,095 - 12251 samples (16 per mini-batch) +2025-05-14 14:53:05,096 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:53:18,695 - Epoch: [39][ 200/ 766] Loss 2.433387 mAP 0.882854 +2025-05-14 14:53:32,903 - Epoch: [39][ 400/ 766] Loss 2.431111 mAP 0.880964 +2025-05-14 14:53:48,042 - Epoch: [39][ 600/ 766] Loss 2.428982 mAP 0.881829 +2025-05-14 14:54:01,875 - Epoch: [39][ 766/ 766] Loss 2.436187 mAP 0.879417 +2025-05-14 14:54:01,922 - ==> mAP: 0.87942 Loss: 2.436 + +2025-05-14 14:54:01,927 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:54:01,927 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:54:01,958 - + +2025-05-14 14:54:01,958 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:54:10,470 - Epoch: [40][ 200/ 1785] Overall Loss 2.087709 Objective Loss 2.087709 LR 0.001000 Time 0.042547 +2025-05-14 14:54:18,639 - Epoch: [40][ 400/ 1785] Overall Loss 2.118054 Objective Loss 2.118054 LR 0.001000 Time 0.041692 +2025-05-14 14:54:26,801 - Epoch: [40][ 600/ 1785] Overall Loss 2.145593 Objective Loss 2.145593 LR 0.001000 Time 0.041396 +2025-05-14 14:54:34,897 - Epoch: [40][ 800/ 1785] Overall Loss 2.157647 Objective Loss 2.157647 LR 0.001000 Time 0.041164 +2025-05-14 14:54:42,838 - Epoch: [40][ 1000/ 1785] Overall Loss 2.158428 Objective Loss 2.158428 LR 0.001000 Time 0.040870 +2025-05-14 14:54:50,867 - Epoch: [40][ 1200/ 1785] Overall Loss 2.164916 Objective Loss 2.164916 LR 0.001000 Time 0.040748 +2025-05-14 14:54:59,004 - Epoch: [40][ 1400/ 1785] Overall Loss 2.164720 Objective Loss 2.164720 LR 0.001000 Time 0.040737 +2025-05-14 14:55:07,289 - Epoch: [40][ 1600/ 1785] Overall 
Loss 2.171166 Objective Loss 2.171166 LR 0.001000 Time 0.040822 +2025-05-14 14:55:15,030 - Epoch: [40][ 1785/ 1785] Overall Loss 2.171054 Objective Loss 2.171054 LR 0.001000 Time 0.040927 +2025-05-14 14:55:15,076 - --- validate (epoch=40)----------- +2025-05-14 14:55:15,077 - 12251 samples (16 per mini-batch) +2025-05-14 14:55:15,079 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:55:28,789 - Epoch: [40][ 200/ 766] Loss 2.423778 mAP 0.878476 +2025-05-14 14:55:43,228 - Epoch: [40][ 400/ 766] Loss 2.435862 mAP 0.873811 +2025-05-14 14:55:58,739 - Epoch: [40][ 600/ 766] Loss 2.434183 mAP 0.877564 +2025-05-14 14:56:12,785 - Epoch: [40][ 766/ 766] Loss 2.432174 mAP 0.877514 +2025-05-14 14:56:12,827 - ==> mAP: 0.87751 Loss: 2.432 + +2025-05-14 14:56:12,833 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:56:12,833 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:56:12,863 - + +2025-05-14 14:56:12,863 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:56:21,465 - Epoch: [41][ 200/ 1785] Overall Loss 2.129757 Objective Loss 2.129757 LR 0.001000 Time 0.042995 +2025-05-14 14:56:29,819 - Epoch: [41][ 400/ 1785] Overall Loss 2.123320 Objective Loss 2.123320 LR 0.001000 Time 0.042377 +2025-05-14 14:56:38,233 - Epoch: [41][ 600/ 1785] Overall Loss 2.139597 Objective Loss 2.139597 LR 0.001000 Time 0.042273 +2025-05-14 14:56:46,624 - Epoch: [41][ 800/ 1785] Overall Loss 2.130642 Objective Loss 2.130642 LR 0.001000 Time 0.042191 +2025-05-14 14:56:54,830 - Epoch: [41][ 1000/ 1785] Overall Loss 2.138754 Objective Loss 2.138754 LR 0.001000 Time 0.041957 +2025-05-14 14:57:02,826 - Epoch: [41][ 1200/ 1785] Overall Loss 2.142712 Objective Loss 2.142712 LR 0.001000 Time 0.041626 +2025-05-14 14:57:10,863 - Epoch: [41][ 1400/ 1785] Overall Loss 2.151801 Objective Loss 2.151801 LR 0.001000 Time 0.041418 
+2025-05-14 14:57:18,945 - Epoch: [41][ 1600/ 1785] Overall Loss 2.158969 Objective Loss 2.158969 LR 0.001000 Time 0.041291 +2025-05-14 14:57:26,697 - Epoch: [41][ 1785/ 1785] Overall Loss 2.162266 Objective Loss 2.162266 LR 0.001000 Time 0.041354 +2025-05-14 14:57:26,737 - --- validate (epoch=41)----------- +2025-05-14 14:57:26,738 - 12251 samples (16 per mini-batch) +2025-05-14 14:57:26,740 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:57:40,072 - Epoch: [41][ 200/ 766] Loss 2.444402 mAP 0.859963 +2025-05-14 14:57:54,141 - Epoch: [41][ 400/ 766] Loss 2.436055 mAP 0.862393 +2025-05-14 14:58:09,326 - Epoch: [41][ 600/ 766] Loss 2.438374 mAP 0.862584 +2025-05-14 14:58:23,262 - Epoch: [41][ 766/ 766] Loss 2.441182 mAP 0.860062 +2025-05-14 14:58:23,303 - ==> mAP: 0.86006 Loss: 2.441 + +2025-05-14 14:58:23,309 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 14:58:23,309 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 14:58:23,333 - + +2025-05-14 14:58:23,333 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 14:58:31,517 - Epoch: [42][ 200/ 1785] Overall Loss 2.117042 Objective Loss 2.117042 LR 0.001000 Time 0.040905 +2025-05-14 14:58:39,869 - Epoch: [42][ 400/ 1785] Overall Loss 2.134254 Objective Loss 2.134254 LR 0.001000 Time 0.041330 +2025-05-14 14:58:48,264 - Epoch: [42][ 600/ 1785] Overall Loss 2.141981 Objective Loss 2.141981 LR 0.001000 Time 0.041541 +2025-05-14 14:58:56,651 - Epoch: [42][ 800/ 1785] Overall Loss 2.147651 Objective Loss 2.147651 LR 0.001000 Time 0.041638 +2025-05-14 14:59:05,053 - Epoch: [42][ 1000/ 1785] Overall Loss 2.144808 Objective Loss 2.144808 LR 0.001000 Time 0.041711 +2025-05-14 14:59:13,402 - Epoch: [42][ 1200/ 1785] Overall Loss 2.151490 Objective Loss 2.151490 LR 0.001000 Time 0.041715 +2025-05-14 14:59:21,698 - Epoch: [42][ 1400/ 1785] Overall Loss 
2.153832 Objective Loss 2.153832 LR 0.001000 Time 0.041680 +2025-05-14 14:59:29,623 - Epoch: [42][ 1600/ 1785] Overall Loss 2.159163 Objective Loss 2.159163 LR 0.001000 Time 0.041422 +2025-05-14 14:59:36,948 - Epoch: [42][ 1785/ 1785] Overall Loss 2.163566 Objective Loss 2.163566 LR 0.001000 Time 0.041232 +2025-05-14 14:59:36,992 - --- validate (epoch=42)----------- +2025-05-14 14:59:36,993 - 12251 samples (16 per mini-batch) +2025-05-14 14:59:36,995 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 14:59:50,568 - Epoch: [42][ 200/ 766] Loss 2.425652 mAP 0.868092 +2025-05-14 15:00:04,957 - Epoch: [42][ 400/ 766] Loss 2.438704 mAP 0.865963 +2025-05-14 15:00:20,116 - Epoch: [42][ 600/ 766] Loss 2.438858 mAP 0.862616 +2025-05-14 15:00:33,880 - Epoch: [42][ 766/ 766] Loss 2.427697 mAP 0.862229 +2025-05-14 15:00:33,915 - ==> mAP: 0.86223 Loss: 2.428 + +2025-05-14 15:00:33,920 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:00:33,921 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:00:33,944 - + +2025-05-14 15:00:33,944 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:00:42,021 - Epoch: [43][ 200/ 1785] Overall Loss 2.114102 Objective Loss 2.114102 LR 0.001000 Time 0.040372 +2025-05-14 15:00:50,013 - Epoch: [43][ 400/ 1785] Overall Loss 2.114236 Objective Loss 2.114236 LR 0.001000 Time 0.040160 +2025-05-14 15:00:58,417 - Epoch: [43][ 600/ 1785] Overall Loss 2.129617 Objective Loss 2.129617 LR 0.001000 Time 0.040778 +2025-05-14 15:01:06,744 - Epoch: [43][ 800/ 1785] Overall Loss 2.132102 Objective Loss 2.132102 LR 0.001000 Time 0.040990 +2025-05-14 15:01:14,937 - Epoch: [43][ 1000/ 1785] Overall Loss 2.137861 Objective Loss 2.137861 LR 0.001000 Time 0.040983 +2025-05-14 15:01:23,127 - Epoch: [43][ 1200/ 1785] Overall Loss 2.149463 Objective Loss 2.149463 LR 0.001000 Time 0.040976 +2025-05-14 
15:01:31,318 - Epoch: [43][ 1400/ 1785] Overall Loss 2.155978 Objective Loss 2.155978 LR 0.001000 Time 0.040972 +2025-05-14 15:01:39,507 - Epoch: [43][ 1600/ 1785] Overall Loss 2.158530 Objective Loss 2.158530 LR 0.001000 Time 0.040968 +2025-05-14 15:01:47,097 - Epoch: [43][ 1785/ 1785] Overall Loss 2.159122 Objective Loss 2.159122 LR 0.001000 Time 0.040973 +2025-05-14 15:01:47,136 - --- validate (epoch=43)----------- +2025-05-14 15:01:47,137 - 12251 samples (16 per mini-batch) +2025-05-14 15:01:47,138 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:02:00,933 - Epoch: [43][ 200/ 766] Loss 2.408491 mAP 0.875327 +2025-05-14 15:02:14,995 - Epoch: [43][ 400/ 766] Loss 2.412373 mAP 0.876499 +2025-05-14 15:02:30,314 - Epoch: [43][ 600/ 766] Loss 2.402039 mAP 0.878690 +2025-05-14 15:02:44,189 - Epoch: [43][ 766/ 766] Loss 2.402754 mAP 0.878541 +2025-05-14 15:02:44,228 - ==> mAP: 0.87854 Loss: 2.403 + +2025-05-14 15:02:44,233 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:02:44,233 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:02:44,256 - + +2025-05-14 15:02:44,256 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:02:52,522 - Epoch: [44][ 200/ 1785] Overall Loss 2.109437 Objective Loss 2.109437 LR 0.001000 Time 0.041315 +2025-05-14 15:03:00,736 - Epoch: [44][ 400/ 1785] Overall Loss 2.118411 Objective Loss 2.118411 LR 0.001000 Time 0.041188 +2025-05-14 15:03:08,736 - Epoch: [44][ 600/ 1785] Overall Loss 2.132826 Objective Loss 2.132826 LR 0.001000 Time 0.040788 +2025-05-14 15:03:16,748 - Epoch: [44][ 800/ 1785] Overall Loss 2.141775 Objective Loss 2.141775 LR 0.001000 Time 0.040604 +2025-05-14 15:03:24,844 - Epoch: [44][ 1000/ 1785] Overall Loss 2.146718 Objective Loss 2.146718 LR 0.001000 Time 0.040577 +2025-05-14 15:03:32,997 - Epoch: [44][ 1200/ 1785] Overall Loss 2.152294 
Objective Loss 2.152294 LR 0.001000 Time 0.040607 +2025-05-14 15:03:41,454 - Epoch: [44][ 1400/ 1785] Overall Loss 2.154728 Objective Loss 2.154728 LR 0.001000 Time 0.040846 +2025-05-14 15:03:49,468 - Epoch: [44][ 1600/ 1785] Overall Loss 2.158037 Objective Loss 2.158037 LR 0.001000 Time 0.040747 +2025-05-14 15:03:57,018 - Epoch: [44][ 1785/ 1785] Overall Loss 2.159970 Objective Loss 2.159970 LR 0.001000 Time 0.040753 +2025-05-14 15:03:57,054 - --- validate (epoch=44)----------- +2025-05-14 15:03:57,055 - 12251 samples (16 per mini-batch) +2025-05-14 15:03:57,057 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:04:11,071 - Epoch: [44][ 200/ 766] Loss 2.436082 mAP 0.880654 +2025-05-14 15:04:25,720 - Epoch: [44][ 400/ 766] Loss 2.457788 mAP 0.877638 +2025-05-14 15:04:41,368 - Epoch: [44][ 600/ 766] Loss 2.444943 mAP 0.880159 +2025-05-14 15:04:55,262 - Epoch: [44][ 766/ 766] Loss 2.446069 mAP 0.881560 +2025-05-14 15:04:55,301 - ==> mAP: 0.88156 Loss: 2.446 + +2025-05-14 15:04:55,306 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:04:55,306 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:04:55,337 - + +2025-05-14 15:04:55,337 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:05:03,599 - Epoch: [45][ 200/ 1785] Overall Loss 2.101534 Objective Loss 2.101534 LR 0.001000 Time 0.041294 +2025-05-14 15:05:11,790 - Epoch: [45][ 400/ 1785] Overall Loss 2.105191 Objective Loss 2.105191 LR 0.001000 Time 0.041120 +2025-05-14 15:05:19,950 - Epoch: [45][ 600/ 1785] Overall Loss 2.111785 Objective Loss 2.111785 LR 0.001000 Time 0.041010 +2025-05-14 15:05:27,884 - Epoch: [45][ 800/ 1785] Overall Loss 2.116245 Objective Loss 2.116245 LR 0.001000 Time 0.040672 +2025-05-14 15:05:35,808 - Epoch: [45][ 1000/ 1785] Overall Loss 2.128025 Objective Loss 2.128025 LR 0.001000 Time 0.040459 +2025-05-14 
15:05:43,732 - Epoch: [45][ 1200/ 1785] Overall Loss 2.132179 Objective Loss 2.132179 LR 0.001000 Time 0.040317 +2025-05-14 15:05:51,832 - Epoch: [45][ 1400/ 1785] Overall Loss 2.139600 Objective Loss 2.139600 LR 0.001000 Time 0.040343 +2025-05-14 15:06:00,027 - Epoch: [45][ 1600/ 1785] Overall Loss 2.146262 Objective Loss 2.146262 LR 0.001000 Time 0.040421 +2025-05-14 15:06:07,614 - Epoch: [45][ 1785/ 1785] Overall Loss 2.152846 Objective Loss 2.152846 LR 0.001000 Time 0.040481 +2025-05-14 15:06:07,662 - --- validate (epoch=45)----------- +2025-05-14 15:06:07,663 - 12251 samples (16 per mini-batch) +2025-05-14 15:06:07,664 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:06:21,460 - Epoch: [45][ 200/ 766] Loss 2.402437 mAP 0.869594 +2025-05-14 15:06:35,777 - Epoch: [45][ 400/ 766] Loss 2.389649 mAP 0.870286 +2025-05-14 15:06:51,200 - Epoch: [45][ 600/ 766] Loss 2.387754 mAP 0.871664 +2025-05-14 15:07:04,902 - Epoch: [45][ 766/ 766] Loss 2.395684 mAP 0.870354 +2025-05-14 15:07:04,940 - ==> mAP: 0.87035 Loss: 2.396 + +2025-05-14 15:07:04,945 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:07:04,945 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:07:04,976 - + +2025-05-14 15:07:04,976 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:07:13,256 - Epoch: [46][ 200/ 1785] Overall Loss 2.076638 Objective Loss 2.076638 LR 0.001000 Time 0.041383 +2025-05-14 15:07:21,281 - Epoch: [46][ 400/ 1785] Overall Loss 2.109165 Objective Loss 2.109165 LR 0.001000 Time 0.040751 +2025-05-14 15:07:29,449 - Epoch: [46][ 600/ 1785] Overall Loss 2.121698 Objective Loss 2.121698 LR 0.001000 Time 0.040777 +2025-05-14 15:07:37,605 - Epoch: [46][ 800/ 1785] Overall Loss 2.127004 Objective Loss 2.127004 LR 0.001000 Time 0.040776 +2025-05-14 15:07:45,820 - Epoch: [46][ 1000/ 1785] Overall Loss 2.131785 
Objective Loss 2.131785 LR 0.001000 Time 0.040835 +2025-05-14 15:07:54,211 - Epoch: [46][ 1200/ 1785] Overall Loss 2.140478 Objective Loss 2.140478 LR 0.001000 Time 0.041020 +2025-05-14 15:08:02,612 - Epoch: [46][ 1400/ 1785] Overall Loss 2.144397 Objective Loss 2.144397 LR 0.001000 Time 0.041159 +2025-05-14 15:08:10,922 - Epoch: [46][ 1600/ 1785] Overall Loss 2.145388 Objective Loss 2.145388 LR 0.001000 Time 0.041207 +2025-05-14 15:08:18,693 - Epoch: [46][ 1785/ 1785] Overall Loss 2.149835 Objective Loss 2.149835 LR 0.001000 Time 0.041289 +2025-05-14 15:08:18,727 - --- validate (epoch=46)----------- +2025-05-14 15:08:18,728 - 12251 samples (16 per mini-batch) +2025-05-14 15:08:18,730 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:08:32,396 - Epoch: [46][ 200/ 766] Loss 2.401515 mAP 0.872353 +2025-05-14 15:08:46,725 - Epoch: [46][ 400/ 766] Loss 2.406087 mAP 0.868240 +2025-05-14 15:09:01,849 - Epoch: [46][ 600/ 766] Loss 2.418327 mAP 0.869049 +2025-05-14 15:09:15,876 - Epoch: [46][ 766/ 766] Loss 2.428540 mAP 0.868560 +2025-05-14 15:09:15,912 - ==> mAP: 0.86856 Loss: 2.429 + +2025-05-14 15:09:15,918 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:09:15,918 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:09:15,948 - + +2025-05-14 15:09:15,949 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:09:24,722 - Epoch: [47][ 200/ 1785] Overall Loss 2.115241 Objective Loss 2.115241 LR 0.001000 Time 0.043851 +2025-05-14 15:09:33,076 - Epoch: [47][ 400/ 1785] Overall Loss 2.119333 Objective Loss 2.119333 LR 0.001000 Time 0.042807 +2025-05-14 15:09:41,374 - Epoch: [47][ 600/ 1785] Overall Loss 2.118882 Objective Loss 2.118882 LR 0.001000 Time 0.042366 +2025-05-14 15:09:49,853 - Epoch: [47][ 800/ 1785] Overall Loss 2.127428 Objective Loss 2.127428 LR 0.001000 Time 0.042371 +2025-05-14 
15:09:58,237 - Epoch: [47][ 1000/ 1785] Overall Loss 2.135846 Objective Loss 2.135846 LR 0.001000 Time 0.042279 +2025-05-14 15:10:06,634 - Epoch: [47][ 1200/ 1785] Overall Loss 2.141037 Objective Loss 2.141037 LR 0.001000 Time 0.042228 +2025-05-14 15:10:14,700 - Epoch: [47][ 1400/ 1785] Overall Loss 2.139745 Objective Loss 2.139745 LR 0.001000 Time 0.041956 +2025-05-14 15:10:23,173 - Epoch: [47][ 1600/ 1785] Overall Loss 2.139810 Objective Loss 2.139810 LR 0.001000 Time 0.042006 +2025-05-14 15:10:31,006 - Epoch: [47][ 1785/ 1785] Overall Loss 2.142740 Objective Loss 2.142740 LR 0.001000 Time 0.042040 +2025-05-14 15:10:31,046 - --- validate (epoch=47)----------- +2025-05-14 15:10:31,046 - 12251 samples (16 per mini-batch) +2025-05-14 15:10:31,048 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:10:45,051 - Epoch: [47][ 200/ 766] Loss 2.379311 mAP 0.871034 +2025-05-14 15:10:59,703 - Epoch: [47][ 400/ 766] Loss 2.397757 mAP 0.865039 +2025-05-14 15:11:14,978 - Epoch: [47][ 600/ 766] Loss 2.396873 mAP 0.871266 +2025-05-14 15:11:28,811 - Epoch: [47][ 766/ 766] Loss 2.390989 mAP 0.874073 +2025-05-14 15:11:28,853 - ==> mAP: 0.87407 Loss: 2.391 + +2025-05-14 15:11:28,859 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:11:28,859 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:11:28,889 - + +2025-05-14 15:11:28,890 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:11:37,503 - Epoch: [48][ 200/ 1785] Overall Loss 2.088319 Objective Loss 2.088319 LR 0.001000 Time 0.043054 +2025-05-14 15:11:45,948 - Epoch: [48][ 400/ 1785] Overall Loss 2.098063 Objective Loss 2.098063 LR 0.001000 Time 0.042634 +2025-05-14 15:11:54,212 - Epoch: [48][ 600/ 1785] Overall Loss 2.113973 Objective Loss 2.113973 LR 0.001000 Time 0.042193 +2025-05-14 15:12:02,673 - Epoch: [48][ 800/ 1785] Overall Loss 2.121089 
Objective Loss 2.121089 LR 0.001000 Time 0.042220 +2025-05-14 15:12:10,978 - Epoch: [48][ 1000/ 1785] Overall Loss 2.122554 Objective Loss 2.122554 LR 0.001000 Time 0.042079 +2025-05-14 15:12:19,386 - Epoch: [48][ 1200/ 1785] Overall Loss 2.126358 Objective Loss 2.126358 LR 0.001000 Time 0.042071 +2025-05-14 15:12:27,745 - Epoch: [48][ 1400/ 1785] Overall Loss 2.136075 Objective Loss 2.136075 LR 0.001000 Time 0.042030 +2025-05-14 15:12:36,136 - Epoch: [48][ 1600/ 1785] Overall Loss 2.137928 Objective Loss 2.137928 LR 0.001000 Time 0.042020 +2025-05-14 15:12:43,849 - Epoch: [48][ 1785/ 1785] Overall Loss 2.142416 Objective Loss 2.142416 LR 0.001000 Time 0.041985 +2025-05-14 15:12:43,893 - --- validate (epoch=48)----------- +2025-05-14 15:12:43,894 - 12251 samples (16 per mini-batch) +2025-05-14 15:12:43,896 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:12:57,522 - Epoch: [48][ 200/ 766] Loss 2.380134 mAP 0.883845 +2025-05-14 15:13:12,010 - Epoch: [48][ 400/ 766] Loss 2.384719 mAP 0.883317 +2025-05-14 15:13:27,254 - Epoch: [48][ 600/ 766] Loss 2.382013 mAP 0.882263 +2025-05-14 15:13:40,955 - Epoch: [48][ 766/ 766] Loss 2.388430 mAP 0.881016 +2025-05-14 15:13:40,990 - ==> mAP: 0.88102 Loss: 2.388 + +2025-05-14 15:13:40,996 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:13:40,996 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:13:41,026 - + +2025-05-14 15:13:41,026 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:13:49,387 - Epoch: [49][ 200/ 1785] Overall Loss 2.082001 Objective Loss 2.082001 LR 0.001000 Time 0.041791 +2025-05-14 15:13:57,728 - Epoch: [49][ 400/ 1785] Overall Loss 2.089109 Objective Loss 2.089109 LR 0.001000 Time 0.041746 +2025-05-14 15:14:06,042 - Epoch: [49][ 600/ 1785] Overall Loss 2.103832 Objective Loss 2.103832 LR 0.001000 Time 0.041683 +2025-05-14 
15:14:14,316 - Epoch: [49][ 800/ 1785] Overall Loss 2.108089 Objective Loss 2.108089 LR 0.001000 Time 0.041603 +2025-05-14 15:14:22,439 - Epoch: [49][ 1000/ 1785] Overall Loss 2.117807 Objective Loss 2.117807 LR 0.001000 Time 0.041404 +2025-05-14 15:14:30,668 - Epoch: [49][ 1200/ 1785] Overall Loss 2.128672 Objective Loss 2.128672 LR 0.001000 Time 0.041359 +2025-05-14 15:14:38,898 - Epoch: [49][ 1400/ 1785] Overall Loss 2.132399 Objective Loss 2.132399 LR 0.001000 Time 0.041328 +2025-05-14 15:14:47,078 - Epoch: [49][ 1600/ 1785] Overall Loss 2.132632 Objective Loss 2.132632 LR 0.001000 Time 0.041273 +2025-05-14 15:14:54,562 - Epoch: [49][ 1785/ 1785] Overall Loss 2.133898 Objective Loss 2.133898 LR 0.001000 Time 0.041188 +2025-05-14 15:14:54,609 - --- validate (epoch=49)----------- +2025-05-14 15:14:54,609 - 12251 samples (16 per mini-batch) +2025-05-14 15:14:54,611 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:15:08,314 - Epoch: [49][ 200/ 766] Loss 2.457334 mAP 0.876561 +2025-05-14 15:15:22,579 - Epoch: [49][ 400/ 766] Loss 2.442987 mAP 0.876704 +2025-05-14 15:15:37,945 - Epoch: [49][ 600/ 766] Loss 2.433667 mAP 0.877387 +2025-05-14 15:15:51,570 - Epoch: [49][ 766/ 766] Loss 2.425865 mAP 0.878125 +2025-05-14 15:15:51,613 - ==> mAP: 0.87813 Loss: 2.426 + +2025-05-14 15:15:51,619 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:15:51,619 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:15:51,649 - + +2025-05-14 15:15:51,649 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:16:00,366 - Epoch: [50][ 200/ 1785] Overall Loss 2.100426 Objective Loss 2.100426 LR 0.001000 Time 0.043570 +2025-05-14 15:16:08,710 - Epoch: [50][ 400/ 1785] Overall Loss 2.116599 Objective Loss 2.116599 LR 0.001000 Time 0.042641 +2025-05-14 15:16:17,065 - Epoch: [50][ 600/ 1785] Overall Loss 2.116745 
Objective Loss 2.116745 LR 0.001000 Time 0.042349 +2025-05-14 15:16:25,305 - Epoch: [50][ 800/ 1785] Overall Loss 2.125801 Objective Loss 2.125801 LR 0.001000 Time 0.042060 +2025-05-14 15:16:33,663 - Epoch: [50][ 1000/ 1785] Overall Loss 2.126387 Objective Loss 2.126387 LR 0.001000 Time 0.042005 +2025-05-14 15:16:41,984 - Epoch: [50][ 1200/ 1785] Overall Loss 2.128588 Objective Loss 2.128588 LR 0.001000 Time 0.041936 +2025-05-14 15:16:50,371 - Epoch: [50][ 1400/ 1785] Overall Loss 2.134480 Objective Loss 2.134480 LR 0.001000 Time 0.041935 +2025-05-14 15:16:58,697 - Epoch: [50][ 1600/ 1785] Overall Loss 2.138607 Objective Loss 2.138607 LR 0.001000 Time 0.041896 +2025-05-14 15:17:06,417 - Epoch: [50][ 1785/ 1785] Overall Loss 2.139953 Objective Loss 2.139953 LR 0.001000 Time 0.041878 +2025-05-14 15:17:06,467 - --- validate (epoch=50)----------- +2025-05-14 15:17:06,468 - 12251 samples (16 per mini-batch) +2025-05-14 15:17:06,469 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:17:20,204 - Epoch: [50][ 200/ 766] Loss 2.405037 mAP 0.868139 +2025-05-14 15:17:34,428 - Epoch: [50][ 400/ 766] Loss 2.405499 mAP 0.867806 +2025-05-14 15:17:49,826 - Epoch: [50][ 600/ 766] Loss 2.410884 mAP 0.866456 +2025-05-14 15:18:03,826 - Epoch: [50][ 766/ 766] Loss 2.408105 mAP 0.868784 +2025-05-14 15:18:03,861 - ==> mAP: 0.86878 Loss: 2.408 + +2025-05-14 15:18:03,867 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:18:03,867 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:18:03,890 - + +2025-05-14 15:18:03,891 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:18:12,417 - Epoch: [51][ 200/ 1785] Overall Loss 2.063823 Objective Loss 2.063823 LR 0.001000 Time 0.042616 +2025-05-14 15:18:20,731 - Epoch: [51][ 400/ 1785] Overall Loss 2.076137 Objective Loss 2.076137 LR 0.001000 Time 0.042090 +2025-05-14 
15:18:29,086 - Epoch: [51][ 600/ 1785] Overall Loss 2.090390 Objective Loss 2.090390 LR 0.001000 Time 0.041983 +2025-05-14 15:18:37,402 - Epoch: [51][ 800/ 1785] Overall Loss 2.098495 Objective Loss 2.098495 LR 0.001000 Time 0.041879 +2025-05-14 15:18:45,702 - Epoch: [51][ 1000/ 1785] Overall Loss 2.105692 Objective Loss 2.105692 LR 0.001000 Time 0.041802 +2025-05-14 15:18:54,077 - Epoch: [51][ 1200/ 1785] Overall Loss 2.113178 Objective Loss 2.113178 LR 0.001000 Time 0.041813 +2025-05-14 15:19:02,529 - Epoch: [51][ 1400/ 1785] Overall Loss 2.123293 Objective Loss 2.123293 LR 0.001000 Time 0.041875 +2025-05-14 15:19:10,839 - Epoch: [51][ 1600/ 1785] Overall Loss 2.126282 Objective Loss 2.126282 LR 0.001000 Time 0.041834 +2025-05-14 15:19:18,357 - Epoch: [51][ 1785/ 1785] Overall Loss 2.133601 Objective Loss 2.133601 LR 0.001000 Time 0.041709 +2025-05-14 15:19:18,402 - --- validate (epoch=51)----------- +2025-05-14 15:19:18,402 - 12251 samples (16 per mini-batch) +2025-05-14 15:19:18,404 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:19:32,148 - Epoch: [51][ 200/ 766] Loss 2.399317 mAP 0.871580 +2025-05-14 15:19:46,274 - Epoch: [51][ 400/ 766] Loss 2.400576 mAP 0.868943 +2025-05-14 15:20:01,471 - Epoch: [51][ 600/ 766] Loss 2.393890 mAP 0.869347 +2025-05-14 15:20:15,143 - Epoch: [51][ 766/ 766] Loss 2.392577 mAP 0.873071 +2025-05-14 15:20:15,177 - ==> mAP: 0.87307 Loss: 2.393 + +2025-05-14 15:20:15,183 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:20:15,183 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:20:15,214 - + +2025-05-14 15:20:15,214 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:20:23,856 - Epoch: [52][ 200/ 1785] Overall Loss 2.122709 Objective Loss 2.122709 LR 0.001000 Time 0.043199 +2025-05-14 15:20:32,257 - Epoch: [52][ 400/ 1785] Overall Loss 2.111179 
Objective Loss 2.111179 LR 0.001000 Time 0.042597 +2025-05-14 15:20:40,651 - Epoch: [52][ 600/ 1785] Overall Loss 2.115989 Objective Loss 2.115989 LR 0.001000 Time 0.042385 +2025-05-14 15:20:49,101 - Epoch: [52][ 800/ 1785] Overall Loss 2.120416 Objective Loss 2.120416 LR 0.001000 Time 0.042349 +2025-05-14 15:20:57,477 - Epoch: [52][ 1000/ 1785] Overall Loss 2.123002 Objective Loss 2.123002 LR 0.001000 Time 0.042253 +2025-05-14 15:21:05,850 - Epoch: [52][ 1200/ 1785] Overall Loss 2.125038 Objective Loss 2.125038 LR 0.001000 Time 0.042187 +2025-05-14 15:21:14,226 - Epoch: [52][ 1400/ 1785] Overall Loss 2.131054 Objective Loss 2.131054 LR 0.001000 Time 0.042142 +2025-05-14 15:21:22,643 - Epoch: [52][ 1600/ 1785] Overall Loss 2.132122 Objective Loss 2.132122 LR 0.001000 Time 0.042134 +2025-05-14 15:21:30,422 - Epoch: [52][ 1785/ 1785] Overall Loss 2.133575 Objective Loss 2.133575 LR 0.001000 Time 0.042124 +2025-05-14 15:21:30,476 - --- validate (epoch=52)----------- +2025-05-14 15:21:30,477 - 12251 samples (16 per mini-batch) +2025-05-14 15:21:30,479 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:21:44,210 - Epoch: [52][ 200/ 766] Loss 2.431815 mAP 0.869565 +2025-05-14 15:21:58,939 - Epoch: [52][ 400/ 766] Loss 2.391209 mAP 0.875848 +2025-05-14 15:22:14,363 - Epoch: [52][ 600/ 766] Loss 2.391892 mAP 0.874689 +2025-05-14 15:22:28,238 - Epoch: [52][ 766/ 766] Loss 2.389184 mAP 0.875879 +2025-05-14 15:22:28,275 - ==> mAP: 0.87588 Loss: 2.389 + +2025-05-14 15:22:28,281 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:22:28,281 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:22:28,312 - + +2025-05-14 15:22:28,312 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:22:36,888 - Epoch: [53][ 200/ 1785] Overall Loss 2.084943 Objective Loss 2.084943 LR 0.001000 Time 0.042870 +2025-05-14 
15:22:45,291 - Epoch: [53][ 400/ 1785] Overall Loss 2.093427 Objective Loss 2.093427 LR 0.001000 Time 0.042438 +2025-05-14 15:22:53,673 - Epoch: [53][ 600/ 1785] Overall Loss 2.100680 Objective Loss 2.100680 LR 0.001000 Time 0.042259 +2025-05-14 15:23:02,078 - Epoch: [53][ 800/ 1785] Overall Loss 2.109220 Objective Loss 2.109220 LR 0.001000 Time 0.042197 +2025-05-14 15:23:10,557 - Epoch: [53][ 1000/ 1785] Overall Loss 2.115695 Objective Loss 2.115695 LR 0.001000 Time 0.042235 +2025-05-14 15:23:18,998 - Epoch: [53][ 1200/ 1785] Overall Loss 2.121064 Objective Loss 2.121064 LR 0.001000 Time 0.042228 +2025-05-14 15:23:27,401 - Epoch: [53][ 1400/ 1785] Overall Loss 2.124121 Objective Loss 2.124121 LR 0.001000 Time 0.042197 +2025-05-14 15:23:35,861 - Epoch: [53][ 1600/ 1785] Overall Loss 2.123577 Objective Loss 2.123577 LR 0.001000 Time 0.042208 +2025-05-14 15:23:43,700 - Epoch: [53][ 1785/ 1785] Overall Loss 2.128179 Objective Loss 2.128179 LR 0.001000 Time 0.042225 +2025-05-14 15:23:43,749 - --- validate (epoch=53)----------- +2025-05-14 15:23:43,750 - 12251 samples (16 per mini-batch) +2025-05-14 15:23:43,752 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:23:57,240 - Epoch: [53][ 200/ 766] Loss 2.422578 mAP 0.866833 +2025-05-14 15:24:11,587 - Epoch: [53][ 400/ 766] Loss 2.417078 mAP 0.870318 +2025-05-14 15:24:27,038 - Epoch: [53][ 600/ 766] Loss 2.422274 mAP 0.871041 +2025-05-14 15:24:40,917 - Epoch: [53][ 766/ 766] Loss 2.419697 mAP 0.871266 +2025-05-14 15:24:40,951 - ==> mAP: 0.87127 Loss: 2.420 + +2025-05-14 15:24:40,957 - ==> Best [mAP: 0.882755 vloss: 2.411577 Params: 318604 on epoch: 27] +2025-05-14 15:24:40,957 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:24:40,987 - + +2025-05-14 15:24:40,987 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:24:49,533 - Epoch: [54][ 200/ 1785] Overall Loss 2.073676 
Objective Loss 2.073676 LR 0.001000 Time 0.042713 +2025-05-14 15:24:57,902 - Epoch: [54][ 400/ 1785] Overall Loss 2.084418 Objective Loss 2.084418 LR 0.001000 Time 0.042277 +2025-05-14 15:25:06,298 - Epoch: [54][ 600/ 1785] Overall Loss 2.101136 Objective Loss 2.101136 LR 0.001000 Time 0.042174 +2025-05-14 15:25:14,694 - Epoch: [54][ 800/ 1785] Overall Loss 2.104609 Objective Loss 2.104609 LR 0.001000 Time 0.042123 +2025-05-14 15:25:22,891 - Epoch: [54][ 1000/ 1785] Overall Loss 2.109306 Objective Loss 2.109306 LR 0.001000 Time 0.041894 +2025-05-14 15:25:31,005 - Epoch: [54][ 1200/ 1785] Overall Loss 2.116144 Objective Loss 2.116144 LR 0.001000 Time 0.041672 +2025-05-14 15:25:39,097 - Epoch: [54][ 1400/ 1785] Overall Loss 2.122454 Objective Loss 2.122454 LR 0.001000 Time 0.041498 +2025-05-14 15:25:47,191 - Epoch: [54][ 1600/ 1785] Overall Loss 2.126327 Objective Loss 2.126327 LR 0.001000 Time 0.041368 +2025-05-14 15:25:54,691 - Epoch: [54][ 1785/ 1785] Overall Loss 2.130692 Objective Loss 2.130692 LR 0.001000 Time 0.041282 +2025-05-14 15:25:54,735 - --- validate (epoch=54)----------- +2025-05-14 15:25:54,736 - 12251 samples (16 per mini-batch) +2025-05-14 15:25:54,738 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:26:08,343 - Epoch: [54][ 200/ 766] Loss 2.359101 mAP 0.891774 +2025-05-14 15:26:22,850 - Epoch: [54][ 400/ 766] Loss 2.373144 mAP 0.886439 +2025-05-14 15:26:38,374 - Epoch: [54][ 600/ 766] Loss 2.378534 mAP 0.885527 +2025-05-14 15:26:52,497 - Epoch: [54][ 766/ 766] Loss 2.380652 mAP 0.883112 +2025-05-14 15:26:52,534 - ==> mAP: 0.88311 Loss: 2.381 + +2025-05-14 15:26:52,540 - ==> Best [mAP: 0.883112 vloss: 2.380652 Params: 318604 on epoch: 54] +2025-05-14 15:26:52,540 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:26:52,575 - + +2025-05-14 15:26:52,575 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 
15:27:00,679 - Epoch: [55][ 200/ 1785] Overall Loss 2.076289 Objective Loss 2.076289 LR 0.001000 Time 0.040508 +2025-05-14 15:27:08,834 - Epoch: [55][ 400/ 1785] Overall Loss 2.094622 Objective Loss 2.094622 LR 0.001000 Time 0.040636 +2025-05-14 15:27:16,834 - Epoch: [55][ 600/ 1785] Overall Loss 2.111583 Objective Loss 2.111583 LR 0.001000 Time 0.040421 +2025-05-14 15:27:24,831 - Epoch: [55][ 800/ 1785] Overall Loss 2.112545 Objective Loss 2.112545 LR 0.001000 Time 0.040310 +2025-05-14 15:27:32,937 - Epoch: [55][ 1000/ 1785] Overall Loss 2.113883 Objective Loss 2.113883 LR 0.001000 Time 0.040351 +2025-05-14 15:27:41,116 - Epoch: [55][ 1200/ 1785] Overall Loss 2.119139 Objective Loss 2.119139 LR 0.001000 Time 0.040440 +2025-05-14 15:27:49,498 - Epoch: [55][ 1400/ 1785] Overall Loss 2.121914 Objective Loss 2.121914 LR 0.001000 Time 0.040649 +2025-05-14 15:27:57,839 - Epoch: [55][ 1600/ 1785] Overall Loss 2.126444 Objective Loss 2.126444 LR 0.001000 Time 0.040780 +2025-05-14 15:28:05,498 - Epoch: [55][ 1785/ 1785] Overall Loss 2.131514 Objective Loss 2.131514 LR 0.001000 Time 0.040844 +2025-05-14 15:28:05,540 - --- validate (epoch=55)----------- +2025-05-14 15:28:05,541 - 12251 samples (16 per mini-batch) +2025-05-14 15:28:05,543 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:28:19,041 - Epoch: [55][ 200/ 766] Loss 2.396601 mAP 0.881302 +2025-05-14 15:28:33,277 - Epoch: [55][ 400/ 766] Loss 2.388979 mAP 0.880836 +2025-05-14 15:28:48,403 - Epoch: [55][ 600/ 766] Loss 2.390582 mAP 0.878050 +2025-05-14 15:29:02,462 - Epoch: [55][ 766/ 766] Loss 2.395564 mAP 0.877140 +2025-05-14 15:29:02,505 - ==> mAP: 0.87714 Loss: 2.396 + +2025-05-14 15:29:02,511 - ==> Best [mAP: 0.883112 vloss: 2.380652 Params: 318604 on epoch: 54] +2025-05-14 15:29:02,511 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:29:02,542 - + +2025-05-14 15:29:02,542 - Training epoch: 28548 
samples (16 per mini-batch, world size: 1) +2025-05-14 15:29:11,061 - Epoch: [56][ 200/ 1785] Overall Loss 2.078760 Objective Loss 2.078760 LR 0.001000 Time 0.042580 +2025-05-14 15:29:19,394 - Epoch: [56][ 400/ 1785] Overall Loss 2.074125 Objective Loss 2.074125 LR 0.001000 Time 0.042120 +2025-05-14 15:29:27,502 - Epoch: [56][ 600/ 1785] Overall Loss 2.090610 Objective Loss 2.090610 LR 0.001000 Time 0.041589 +2025-05-14 15:29:35,448 - Epoch: [56][ 800/ 1785] Overall Loss 2.098377 Objective Loss 2.098377 LR 0.001000 Time 0.041122 +2025-05-14 15:29:43,383 - Epoch: [56][ 1000/ 1785] Overall Loss 2.115007 Objective Loss 2.115007 LR 0.001000 Time 0.040830 +2025-05-14 15:29:51,382 - Epoch: [56][ 1200/ 1785] Overall Loss 2.118472 Objective Loss 2.118472 LR 0.001000 Time 0.040689 +2025-05-14 15:29:59,531 - Epoch: [56][ 1400/ 1785] Overall Loss 2.121602 Objective Loss 2.121602 LR 0.001000 Time 0.040696 +2025-05-14 15:30:07,735 - Epoch: [56][ 1600/ 1785] Overall Loss 2.120133 Objective Loss 2.120133 LR 0.001000 Time 0.040735 +2025-05-14 15:30:15,206 - Epoch: [56][ 1785/ 1785] Overall Loss 2.120255 Objective Loss 2.120255 LR 0.001000 Time 0.040698 +2025-05-14 15:30:15,242 - --- validate (epoch=56)----------- +2025-05-14 15:30:15,242 - 12251 samples (16 per mini-batch) +2025-05-14 15:30:15,244 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:30:29,149 - Epoch: [56][ 200/ 766] Loss 2.439312 mAP 0.860507 +2025-05-14 15:30:43,469 - Epoch: [56][ 400/ 766] Loss 2.447879 mAP 0.862534 +2025-05-14 15:30:58,618 - Epoch: [56][ 600/ 766] Loss 2.461509 mAP 0.857089 +2025-05-14 15:31:12,705 - Epoch: [56][ 766/ 766] Loss 2.457960 mAP 0.858153 +2025-05-14 15:31:12,741 - ==> mAP: 0.85815 Loss: 2.458 + +2025-05-14 15:31:12,747 - ==> Best [mAP: 0.883112 vloss: 2.380652 Params: 318604 on epoch: 54] +2025-05-14 15:31:12,748 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:31:12,772 
- + +2025-05-14 15:31:12,772 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:31:21,261 - Epoch: [57][ 200/ 1785] Overall Loss 2.084532 Objective Loss 2.084532 LR 0.001000 Time 0.042431 +2025-05-14 15:31:29,457 - Epoch: [57][ 400/ 1785] Overall Loss 2.097913 Objective Loss 2.097913 LR 0.001000 Time 0.041701 +2025-05-14 15:31:37,657 - Epoch: [57][ 600/ 1785] Overall Loss 2.084511 Objective Loss 2.084511 LR 0.001000 Time 0.041465 +2025-05-14 15:31:45,847 - Epoch: [57][ 800/ 1785] Overall Loss 2.093469 Objective Loss 2.093469 LR 0.001000 Time 0.041333 +2025-05-14 15:31:54,046 - Epoch: [57][ 1000/ 1785] Overall Loss 2.101271 Objective Loss 2.101271 LR 0.001000 Time 0.041264 +2025-05-14 15:32:02,257 - Epoch: [57][ 1200/ 1785] Overall Loss 2.106505 Objective Loss 2.106505 LR 0.001000 Time 0.041228 +2025-05-14 15:32:10,461 - Epoch: [57][ 1400/ 1785] Overall Loss 2.110545 Objective Loss 2.110545 LR 0.001000 Time 0.041198 +2025-05-14 15:32:18,669 - Epoch: [57][ 1600/ 1785] Overall Loss 2.115424 Objective Loss 2.115424 LR 0.001000 Time 0.041177 +2025-05-14 15:32:26,261 - Epoch: [57][ 1785/ 1785] Overall Loss 2.120358 Objective Loss 2.120358 LR 0.001000 Time 0.041162 +2025-05-14 15:32:26,311 - --- validate (epoch=57)----------- +2025-05-14 15:32:26,312 - 12251 samples (16 per mini-batch) +2025-05-14 15:32:26,313 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:32:39,832 - Epoch: [57][ 200/ 766] Loss 2.387803 mAP 0.883982 +2025-05-14 15:32:53,886 - Epoch: [57][ 400/ 766] Loss 2.375238 mAP 0.887348 +2025-05-14 15:33:08,893 - Epoch: [57][ 600/ 766] Loss 2.392812 mAP 0.880831 +2025-05-14 15:33:22,679 - Epoch: [57][ 766/ 766] Loss 2.388964 mAP 0.878857 +2025-05-14 15:33:22,719 - ==> mAP: 0.87886 Loss: 2.389 + +2025-05-14 15:33:22,725 - ==> Best [mAP: 0.883112 vloss: 2.380652 Params: 318604 on epoch: 54] +2025-05-14 15:33:22,725 - Saving checkpoint to: 
logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:33:22,756 - + +2025-05-14 15:33:22,756 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:33:31,083 - Epoch: [58][ 200/ 1785] Overall Loss 2.103846 Objective Loss 2.103846 LR 0.001000 Time 0.041622 +2025-05-14 15:33:39,486 - Epoch: [58][ 400/ 1785] Overall Loss 2.091390 Objective Loss 2.091390 LR 0.001000 Time 0.041815 +2025-05-14 15:33:47,790 - Epoch: [58][ 600/ 1785] Overall Loss 2.098403 Objective Loss 2.098403 LR 0.001000 Time 0.041714 +2025-05-14 15:33:55,689 - Epoch: [58][ 800/ 1785] Overall Loss 2.104583 Objective Loss 2.104583 LR 0.001000 Time 0.041156 +2025-05-14 15:34:03,783 - Epoch: [58][ 1000/ 1785] Overall Loss 2.103329 Objective Loss 2.103329 LR 0.001000 Time 0.041017 +2025-05-14 15:34:12,063 - Epoch: [58][ 1200/ 1785] Overall Loss 2.112687 Objective Loss 2.112687 LR 0.001000 Time 0.041079 +2025-05-14 15:34:20,421 - Epoch: [58][ 1400/ 1785] Overall Loss 2.117214 Objective Loss 2.117214 LR 0.001000 Time 0.041179 +2025-05-14 15:34:28,849 - Epoch: [58][ 1600/ 1785] Overall Loss 2.115334 Objective Loss 2.115334 LR 0.001000 Time 0.041299 +2025-05-14 15:34:36,571 - Epoch: [58][ 1785/ 1785] Overall Loss 2.117823 Objective Loss 2.117823 LR 0.001000 Time 0.041343 +2025-05-14 15:34:36,612 - --- validate (epoch=58)----------- +2025-05-14 15:34:36,613 - 12251 samples (16 per mini-batch) +2025-05-14 15:34:36,615 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:34:50,145 - Epoch: [58][ 200/ 766] Loss 2.374786 mAP 0.881028 +2025-05-14 15:35:04,371 - Epoch: [58][ 400/ 766] Loss 2.371010 mAP 0.880630 +2025-05-14 15:35:19,665 - Epoch: [58][ 600/ 766] Loss 2.366795 mAP 0.880482 +2025-05-14 15:35:33,321 - Epoch: [58][ 766/ 766] Loss 2.371437 mAP 0.883505 +2025-05-14 15:35:33,360 - ==> mAP: 0.88351 Loss: 2.371 + +2025-05-14 15:35:33,365 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 
58] +2025-05-14 15:35:33,366 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:35:33,400 - + +2025-05-14 15:35:33,401 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:35:42,201 - Epoch: [59][ 200/ 1785] Overall Loss 2.051534 Objective Loss 2.051534 LR 0.001000 Time 0.043991 +2025-05-14 15:35:50,629 - Epoch: [59][ 400/ 1785] Overall Loss 2.070222 Objective Loss 2.070222 LR 0.001000 Time 0.043061 +2025-05-14 15:35:58,769 - Epoch: [59][ 600/ 1785] Overall Loss 2.089748 Objective Loss 2.089748 LR 0.001000 Time 0.042271 +2025-05-14 15:36:06,967 - Epoch: [59][ 800/ 1785] Overall Loss 2.100594 Objective Loss 2.100594 LR 0.001000 Time 0.041948 +2025-05-14 15:36:15,152 - Epoch: [59][ 1000/ 1785] Overall Loss 2.105458 Objective Loss 2.105458 LR 0.001000 Time 0.041742 +2025-05-14 15:36:23,322 - Epoch: [59][ 1200/ 1785] Overall Loss 2.105721 Objective Loss 2.105721 LR 0.001000 Time 0.041592 +2025-05-14 15:36:31,464 - Epoch: [59][ 1400/ 1785] Overall Loss 2.112954 Objective Loss 2.112954 LR 0.001000 Time 0.041465 +2025-05-14 15:36:39,603 - Epoch: [59][ 1600/ 1785] Overall Loss 2.117542 Objective Loss 2.117542 LR 0.001000 Time 0.041368 +2025-05-14 15:36:47,090 - Epoch: [59][ 1785/ 1785] Overall Loss 2.119030 Objective Loss 2.119030 LR 0.001000 Time 0.041274 +2025-05-14 15:36:47,129 - --- validate (epoch=59)----------- +2025-05-14 15:36:47,129 - 12251 samples (16 per mini-batch) +2025-05-14 15:36:47,131 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:37:01,030 - Epoch: [59][ 200/ 766] Loss 2.397841 mAP 0.879645 +2025-05-14 15:37:15,207 - Epoch: [59][ 400/ 766] Loss 2.404739 mAP 0.876894 +2025-05-14 15:37:30,335 - Epoch: [59][ 600/ 766] Loss 2.398891 mAP 0.876270 +2025-05-14 15:37:44,340 - Epoch: [59][ 766/ 766] Loss 2.405919 mAP 0.874999 +2025-05-14 15:37:44,382 - ==> mAP: 0.87500 Loss: 2.406 + +2025-05-14 15:37:44,388 - ==> Best 
[mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:37:44,388 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:37:44,419 - + +2025-05-14 15:37:44,419 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:37:52,742 - Epoch: [60][ 200/ 1785] Overall Loss 2.091672 Objective Loss 2.091672 LR 0.001000 Time 0.041597 +2025-05-14 15:38:00,939 - Epoch: [60][ 400/ 1785] Overall Loss 2.091592 Objective Loss 2.091592 LR 0.001000 Time 0.041289 +2025-05-14 15:38:09,399 - Epoch: [60][ 600/ 1785] Overall Loss 2.085369 Objective Loss 2.085369 LR 0.001000 Time 0.041623 +2025-05-14 15:38:17,732 - Epoch: [60][ 800/ 1785] Overall Loss 2.092905 Objective Loss 2.092905 LR 0.001000 Time 0.041631 +2025-05-14 15:38:26,213 - Epoch: [60][ 1000/ 1785] Overall Loss 2.103092 Objective Loss 2.103092 LR 0.001000 Time 0.041784 +2025-05-14 15:38:34,684 - Epoch: [60][ 1200/ 1785] Overall Loss 2.105778 Objective Loss 2.105778 LR 0.001000 Time 0.041878 +2025-05-14 15:38:43,152 - Epoch: [60][ 1400/ 1785] Overall Loss 2.108571 Objective Loss 2.108571 LR 0.001000 Time 0.041943 +2025-05-14 15:38:51,560 - Epoch: [60][ 1600/ 1785] Overall Loss 2.112156 Objective Loss 2.112156 LR 0.001000 Time 0.041954 +2025-05-14 15:38:59,051 - Epoch: [60][ 1785/ 1785] Overall Loss 2.116557 Objective Loss 2.116557 LR 0.001000 Time 0.041801 +2025-05-14 15:38:59,091 - --- validate (epoch=60)----------- +2025-05-14 15:38:59,092 - 12251 samples (16 per mini-batch) +2025-05-14 15:38:59,094 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:39:13,032 - Epoch: [60][ 200/ 766] Loss 2.444946 mAP 0.868919 +2025-05-14 15:39:27,664 - Epoch: [60][ 400/ 766] Loss 2.418731 mAP 0.871000 +2025-05-14 15:39:43,050 - Epoch: [60][ 600/ 766] Loss 2.416968 mAP 0.871494 +2025-05-14 15:39:56,812 - Epoch: [60][ 766/ 766] Loss 2.413667 mAP 0.871109 +2025-05-14 15:39:56,846 - ==> mAP: 
0.87111 Loss: 2.414 + +2025-05-14 15:39:56,852 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:39:56,852 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:39:56,882 - + +2025-05-14 15:39:56,883 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:40:05,322 - Epoch: [61][ 200/ 1785] Overall Loss 2.060300 Objective Loss 2.060300 LR 0.001000 Time 0.042185 +2025-05-14 15:40:13,632 - Epoch: [61][ 400/ 1785] Overall Loss 2.061810 Objective Loss 2.061810 LR 0.001000 Time 0.041863 +2025-05-14 15:40:21,951 - Epoch: [61][ 600/ 1785] Overall Loss 2.068440 Objective Loss 2.068440 LR 0.001000 Time 0.041771 +2025-05-14 15:40:30,152 - Epoch: [61][ 800/ 1785] Overall Loss 2.074610 Objective Loss 2.074610 LR 0.001000 Time 0.041578 +2025-05-14 15:40:38,303 - Epoch: [61][ 1000/ 1785] Overall Loss 2.085820 Objective Loss 2.085820 LR 0.001000 Time 0.041412 +2025-05-14 15:40:46,494 - Epoch: [61][ 1200/ 1785] Overall Loss 2.097190 Objective Loss 2.097190 LR 0.001000 Time 0.041334 +2025-05-14 15:40:54,643 - Epoch: [61][ 1400/ 1785] Overall Loss 2.107128 Objective Loss 2.107128 LR 0.001000 Time 0.041248 +2025-05-14 15:41:02,784 - Epoch: [61][ 1600/ 1785] Overall Loss 2.109275 Objective Loss 2.109275 LR 0.001000 Time 0.041180 +2025-05-14 15:41:10,339 - Epoch: [61][ 1785/ 1785] Overall Loss 2.115174 Objective Loss 2.115174 LR 0.001000 Time 0.041143 +2025-05-14 15:41:10,386 - --- validate (epoch=61)----------- +2025-05-14 15:41:10,386 - 12251 samples (16 per mini-batch) +2025-05-14 15:41:10,388 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:41:24,227 - Epoch: [61][ 200/ 766] Loss 2.362333 mAP 0.884902 +2025-05-14 15:41:38,907 - Epoch: [61][ 400/ 766] Loss 2.372388 mAP 0.878774 +2025-05-14 15:41:54,546 - Epoch: [61][ 600/ 766] Loss 2.384497 mAP 0.877059 +2025-05-14 15:42:08,554 - Epoch: [61][ 766/ 766] Loss 
2.378625 mAP 0.879043 +2025-05-14 15:42:08,594 - ==> mAP: 0.87904 Loss: 2.379 + +2025-05-14 15:42:08,600 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:42:08,600 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:42:08,624 - + +2025-05-14 15:42:08,624 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:42:17,195 - Epoch: [62][ 200/ 1785] Overall Loss 2.030914 Objective Loss 2.030914 LR 0.001000 Time 0.042840 +2025-05-14 15:42:25,572 - Epoch: [62][ 400/ 1785] Overall Loss 2.054930 Objective Loss 2.054930 LR 0.001000 Time 0.042358 +2025-05-14 15:42:34,062 - Epoch: [62][ 600/ 1785] Overall Loss 2.064953 Objective Loss 2.064953 LR 0.001000 Time 0.042386 +2025-05-14 15:42:42,530 - Epoch: [62][ 800/ 1785] Overall Loss 2.087705 Objective Loss 2.087705 LR 0.001000 Time 0.042372 +2025-05-14 15:42:50,937 - Epoch: [62][ 1000/ 1785] Overall Loss 2.094084 Objective Loss 2.094084 LR 0.001000 Time 0.042303 +2025-05-14 15:42:59,345 - Epoch: [62][ 1200/ 1785] Overall Loss 2.100558 Objective Loss 2.100558 LR 0.001000 Time 0.042258 +2025-05-14 15:43:07,680 - Epoch: [62][ 1400/ 1785] Overall Loss 2.106200 Objective Loss 2.106200 LR 0.001000 Time 0.042173 +2025-05-14 15:43:15,982 - Epoch: [62][ 1600/ 1785] Overall Loss 2.111248 Objective Loss 2.111248 LR 0.001000 Time 0.042090 +2025-05-14 15:43:23,590 - Epoch: [62][ 1785/ 1785] Overall Loss 2.110864 Objective Loss 2.110864 LR 0.001000 Time 0.041988 +2025-05-14 15:43:23,630 - --- validate (epoch=62)----------- +2025-05-14 15:43:23,631 - 12251 samples (16 per mini-batch) +2025-05-14 15:43:23,633 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:43:37,166 - Epoch: [62][ 200/ 766] Loss 2.395319 mAP 0.875074 +2025-05-14 15:43:51,612 - Epoch: [62][ 400/ 766] Loss 2.395164 mAP 0.874367 +2025-05-14 15:44:06,667 - Epoch: [62][ 600/ 766] Loss 2.402399 mAP 0.870184 
+2025-05-14 15:44:20,684 - Epoch: [62][ 766/ 766] Loss 2.402434 mAP 0.872342 +2025-05-14 15:44:20,726 - ==> mAP: 0.87234 Loss: 2.402 + +2025-05-14 15:44:20,732 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:44:20,732 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:44:20,763 - + +2025-05-14 15:44:20,763 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:44:29,096 - Epoch: [63][ 200/ 1785] Overall Loss 2.078691 Objective Loss 2.078691 LR 0.001000 Time 0.041651 +2025-05-14 15:44:37,276 - Epoch: [63][ 400/ 1785] Overall Loss 2.072932 Objective Loss 2.072932 LR 0.001000 Time 0.041271 +2025-05-14 15:44:45,452 - Epoch: [63][ 600/ 1785] Overall Loss 2.085220 Objective Loss 2.085220 LR 0.001000 Time 0.041139 +2025-05-14 15:44:53,647 - Epoch: [63][ 800/ 1785] Overall Loss 2.086296 Objective Loss 2.086296 LR 0.001000 Time 0.041096 +2025-05-14 15:45:01,839 - Epoch: [63][ 1000/ 1785] Overall Loss 2.091637 Objective Loss 2.091637 LR 0.001000 Time 0.041066 +2025-05-14 15:45:10,099 - Epoch: [63][ 1200/ 1785] Overall Loss 2.098837 Objective Loss 2.098837 LR 0.001000 Time 0.041104 +2025-05-14 15:45:18,293 - Epoch: [63][ 1400/ 1785] Overall Loss 2.103432 Objective Loss 2.103432 LR 0.001000 Time 0.041084 +2025-05-14 15:45:26,492 - Epoch: [63][ 1600/ 1785] Overall Loss 2.109182 Objective Loss 2.109182 LR 0.001000 Time 0.041072 +2025-05-14 15:45:34,109 - Epoch: [63][ 1785/ 1785] Overall Loss 2.111272 Objective Loss 2.111272 LR 0.001000 Time 0.041082 +2025-05-14 15:45:34,144 - --- validate (epoch=63)----------- +2025-05-14 15:45:34,145 - 12251 samples (16 per mini-batch) +2025-05-14 15:45:34,147 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:45:48,343 - Epoch: [63][ 200/ 766] Loss 2.431779 mAP 0.867397 +2025-05-14 15:46:02,888 - Epoch: [63][ 400/ 766] Loss 2.421563 mAP 0.866976 +2025-05-14 
15:46:18,355 - Epoch: [63][ 600/ 766] Loss 2.425876 mAP 0.865734 +2025-05-14 15:46:31,991 - Epoch: [63][ 766/ 766] Loss 2.424019 mAP 0.867141 +2025-05-14 15:46:32,027 - ==> mAP: 0.86714 Loss: 2.424 + +2025-05-14 15:46:32,033 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:46:32,033 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:46:32,064 - + +2025-05-14 15:46:32,064 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:46:40,789 - Epoch: [64][ 200/ 1785] Overall Loss 2.050606 Objective Loss 2.050606 LR 0.001000 Time 0.043613 +2025-05-14 15:46:49,242 - Epoch: [64][ 400/ 1785] Overall Loss 2.052338 Objective Loss 2.052338 LR 0.001000 Time 0.042936 +2025-05-14 15:46:57,767 - Epoch: [64][ 600/ 1785] Overall Loss 2.059544 Objective Loss 2.059544 LR 0.001000 Time 0.042828 +2025-05-14 15:47:06,219 - Epoch: [64][ 800/ 1785] Overall Loss 2.076495 Objective Loss 2.076495 LR 0.001000 Time 0.042684 +2025-05-14 15:47:14,574 - Epoch: [64][ 1000/ 1785] Overall Loss 2.090224 Objective Loss 2.090224 LR 0.001000 Time 0.042501 +2025-05-14 15:47:23,031 - Epoch: [64][ 1200/ 1785] Overall Loss 2.094516 Objective Loss 2.094516 LR 0.001000 Time 0.042463 +2025-05-14 15:47:31,352 - Epoch: [64][ 1400/ 1785] Overall Loss 2.101351 Objective Loss 2.101351 LR 0.001000 Time 0.042339 +2025-05-14 15:47:39,262 - Epoch: [64][ 1600/ 1785] Overall Loss 2.103939 Objective Loss 2.103939 LR 0.001000 Time 0.041990 +2025-05-14 15:47:46,617 - Epoch: [64][ 1785/ 1785] Overall Loss 2.107849 Objective Loss 2.107849 LR 0.001000 Time 0.041757 +2025-05-14 15:47:46,657 - --- validate (epoch=64)----------- +2025-05-14 15:47:46,658 - 12251 samples (16 per mini-batch) +2025-05-14 15:47:46,660 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:48:00,095 - Epoch: [64][ 200/ 766] Loss 2.402166 mAP 0.880980 +2025-05-14 15:48:14,336 - Epoch: 
[64][ 400/ 766] Loss 2.402884 mAP 0.879766 +2025-05-14 15:48:29,627 - Epoch: [64][ 600/ 766] Loss 2.401663 mAP 0.878052 +2025-05-14 15:48:43,755 - Epoch: [64][ 766/ 766] Loss 2.398497 mAP 0.877645 +2025-05-14 15:48:43,793 - ==> mAP: 0.87765 Loss: 2.398 + +2025-05-14 15:48:43,799 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:48:43,799 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:48:43,831 - + +2025-05-14 15:48:43,831 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:48:52,124 - Epoch: [65][ 200/ 1785] Overall Loss 2.056509 Objective Loss 2.056509 LR 0.001000 Time 0.041450 +2025-05-14 15:49:00,348 - Epoch: [65][ 400/ 1785] Overall Loss 2.064389 Objective Loss 2.064389 LR 0.001000 Time 0.041282 +2025-05-14 15:49:08,524 - Epoch: [65][ 600/ 1785] Overall Loss 2.068922 Objective Loss 2.068922 LR 0.001000 Time 0.041145 +2025-05-14 15:49:16,658 - Epoch: [65][ 800/ 1785] Overall Loss 2.079379 Objective Loss 2.079379 LR 0.001000 Time 0.041024 +2025-05-14 15:49:25,039 - Epoch: [65][ 1000/ 1785] Overall Loss 2.087847 Objective Loss 2.087847 LR 0.001000 Time 0.041198 +2025-05-14 15:49:33,205 - Epoch: [65][ 1200/ 1785] Overall Loss 2.092507 Objective Loss 2.092507 LR 0.001000 Time 0.041136 +2025-05-14 15:49:41,360 - Epoch: [65][ 1400/ 1785] Overall Loss 2.096771 Objective Loss 2.096771 LR 0.001000 Time 0.041083 +2025-05-14 15:49:49,522 - Epoch: [65][ 1600/ 1785] Overall Loss 2.098821 Objective Loss 2.098821 LR 0.001000 Time 0.041048 +2025-05-14 15:49:57,110 - Epoch: [65][ 1785/ 1785] Overall Loss 2.104089 Objective Loss 2.104089 LR 0.001000 Time 0.041044 +2025-05-14 15:49:57,156 - --- validate (epoch=65)----------- +2025-05-14 15:49:57,157 - 12251 samples (16 per mini-batch) +2025-05-14 15:49:57,159 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:50:10,938 - Epoch: [65][ 200/ 766] Loss 
2.371207 mAP 0.885977 +2025-05-14 15:50:25,382 - Epoch: [65][ 400/ 766] Loss 2.367240 mAP 0.888511 +2025-05-14 15:50:40,751 - Epoch: [65][ 600/ 766] Loss 2.375435 mAP 0.886512 +2025-05-14 15:50:54,605 - Epoch: [65][ 766/ 766] Loss 2.384857 mAP 0.882329 +2025-05-14 15:50:54,641 - ==> mAP: 0.88233 Loss: 2.385 + +2025-05-14 15:50:54,646 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:50:54,646 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:50:54,677 - + +2025-05-14 15:50:54,677 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:51:02,773 - Epoch: [66][ 200/ 1785] Overall Loss 2.056738 Objective Loss 2.056738 LR 0.001000 Time 0.040467 +2025-05-14 15:51:10,759 - Epoch: [66][ 400/ 1785] Overall Loss 2.054923 Objective Loss 2.054923 LR 0.001000 Time 0.040192 +2025-05-14 15:51:19,076 - Epoch: [66][ 600/ 1785] Overall Loss 2.069098 Objective Loss 2.069098 LR 0.001000 Time 0.040653 +2025-05-14 15:51:27,394 - Epoch: [66][ 800/ 1785] Overall Loss 2.080546 Objective Loss 2.080546 LR 0.001000 Time 0.040885 +2025-05-14 15:51:35,735 - Epoch: [66][ 1000/ 1785] Overall Loss 2.096408 Objective Loss 2.096408 LR 0.001000 Time 0.041047 +2025-05-14 15:51:44,104 - Epoch: [66][ 1200/ 1785] Overall Loss 2.094446 Objective Loss 2.094446 LR 0.001000 Time 0.041179 +2025-05-14 15:51:52,437 - Epoch: [66][ 1400/ 1785] Overall Loss 2.097665 Objective Loss 2.097665 LR 0.001000 Time 0.041247 +2025-05-14 15:52:00,810 - Epoch: [66][ 1600/ 1785] Overall Loss 2.101117 Objective Loss 2.101117 LR 0.001000 Time 0.041324 +2025-05-14 15:52:08,494 - Epoch: [66][ 1785/ 1785] Overall Loss 2.101676 Objective Loss 2.101676 LR 0.001000 Time 0.041344 +2025-05-14 15:52:08,538 - --- validate (epoch=66)----------- +2025-05-14 15:52:08,539 - 12251 samples (16 per mini-batch) +2025-05-14 15:52:08,540 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} 
+2025-05-14 15:52:22,620 - Epoch: [66][ 200/ 766] Loss 2.351921 mAP 0.886829 +2025-05-14 15:52:37,186 - Epoch: [66][ 400/ 766] Loss 2.362502 mAP 0.879873 +2025-05-14 15:52:52,298 - Epoch: [66][ 600/ 766] Loss 2.360898 mAP 0.878047 +2025-05-14 15:53:06,195 - Epoch: [66][ 766/ 766] Loss 2.361970 mAP 0.879204 +2025-05-14 15:53:06,233 - ==> mAP: 0.87920 Loss: 2.362 + +2025-05-14 15:53:06,240 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:53:06,240 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:53:06,271 - + +2025-05-14 15:53:06,271 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:53:14,870 - Epoch: [67][ 200/ 1785] Overall Loss 2.069458 Objective Loss 2.069458 LR 0.001000 Time 0.042982 +2025-05-14 15:53:23,287 - Epoch: [67][ 400/ 1785] Overall Loss 2.073747 Objective Loss 2.073747 LR 0.001000 Time 0.042529 +2025-05-14 15:53:31,700 - Epoch: [67][ 600/ 1785] Overall Loss 2.068014 Objective Loss 2.068014 LR 0.001000 Time 0.042372 +2025-05-14 15:53:40,148 - Epoch: [67][ 800/ 1785] Overall Loss 2.079186 Objective Loss 2.079186 LR 0.001000 Time 0.042337 +2025-05-14 15:53:48,481 - Epoch: [67][ 1000/ 1785] Overall Loss 2.082532 Objective Loss 2.082532 LR 0.001000 Time 0.042201 +2025-05-14 15:53:56,730 - Epoch: [67][ 1200/ 1785] Overall Loss 2.088485 Objective Loss 2.088485 LR 0.001000 Time 0.042040 +2025-05-14 15:54:05,079 - Epoch: [67][ 1400/ 1785] Overall Loss 2.096539 Objective Loss 2.096539 LR 0.001000 Time 0.041997 +2025-05-14 15:54:13,425 - Epoch: [67][ 1600/ 1785] Overall Loss 2.099264 Objective Loss 2.099264 LR 0.001000 Time 0.041962 +2025-05-14 15:54:21,231 - Epoch: [67][ 1785/ 1785] Overall Loss 2.103815 Objective Loss 2.103815 LR 0.001000 Time 0.041985 +2025-05-14 15:54:21,276 - --- validate (epoch=67)----------- +2025-05-14 15:54:21,277 - 12251 samples (16 per mini-batch) +2025-05-14 15:54:21,278 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': 
{'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:54:34,693 - Epoch: [67][ 200/ 766] Loss 2.390060 mAP 0.871137 +2025-05-14 15:54:49,002 - Epoch: [67][ 400/ 766] Loss 2.382626 mAP 0.874574 +2025-05-14 15:55:03,986 - Epoch: [67][ 600/ 766] Loss 2.384568 mAP 0.873920 +2025-05-14 15:55:17,625 - Epoch: [67][ 766/ 766] Loss 2.393838 mAP 0.873279 +2025-05-14 15:55:17,663 - ==> mAP: 0.87328 Loss: 2.394 + +2025-05-14 15:55:17,669 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:55:17,669 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:55:17,692 - + +2025-05-14 15:55:17,692 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:55:26,152 - Epoch: [68][ 200/ 1785] Overall Loss 2.039193 Objective Loss 2.039193 LR 0.001000 Time 0.042288 +2025-05-14 15:55:34,458 - Epoch: [68][ 400/ 1785] Overall Loss 2.060959 Objective Loss 2.060959 LR 0.001000 Time 0.041906 +2025-05-14 15:55:42,800 - Epoch: [68][ 600/ 1785] Overall Loss 2.073995 Objective Loss 2.073995 LR 0.001000 Time 0.041836 +2025-05-14 15:55:51,249 - Epoch: [68][ 800/ 1785] Overall Loss 2.068738 Objective Loss 2.068738 LR 0.001000 Time 0.041936 +2025-05-14 15:55:59,650 - Epoch: [68][ 1000/ 1785] Overall Loss 2.078593 Objective Loss 2.078593 LR 0.001000 Time 0.041948 +2025-05-14 15:56:08,048 - Epoch: [68][ 1200/ 1785] Overall Loss 2.081760 Objective Loss 2.081760 LR 0.001000 Time 0.041954 +2025-05-14 15:56:16,400 - Epoch: [68][ 1400/ 1785] Overall Loss 2.089043 Objective Loss 2.089043 LR 0.001000 Time 0.041925 +2025-05-14 15:56:24,874 - Epoch: [68][ 1600/ 1785] Overall Loss 2.096615 Objective Loss 2.096615 LR 0.001000 Time 0.041980 +2025-05-14 15:56:32,656 - Epoch: [68][ 1785/ 1785] Overall Loss 2.097929 Objective Loss 2.097929 LR 0.001000 Time 0.041988 +2025-05-14 15:56:32,692 - --- validate (epoch=68)----------- +2025-05-14 15:56:32,693 - 12251 samples (16 per mini-batch) +2025-05-14 15:56:32,694 - 
{'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:56:46,358 - Epoch: [68][ 200/ 766] Loss 2.380633 mAP 0.880680 +2025-05-14 15:57:01,034 - Epoch: [68][ 400/ 766] Loss 2.399776 mAP 0.878218 +2025-05-14 15:57:16,714 - Epoch: [68][ 600/ 766] Loss 2.390174 mAP 0.879865 +2025-05-14 15:57:30,658 - Epoch: [68][ 766/ 766] Loss 2.389495 mAP 0.881931 +2025-05-14 15:57:30,696 - ==> mAP: 0.88193 Loss: 2.389 + +2025-05-14 15:57:30,703 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:57:30,703 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:57:30,734 - + +2025-05-14 15:57:30,734 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:57:39,185 - Epoch: [69][ 200/ 1785] Overall Loss 2.018363 Objective Loss 2.018363 LR 0.001000 Time 0.042244 +2025-05-14 15:57:47,472 - Epoch: [69][ 400/ 1785] Overall Loss 2.036095 Objective Loss 2.036095 LR 0.001000 Time 0.041834 +2025-05-14 15:57:55,748 - Epoch: [69][ 600/ 1785] Overall Loss 2.049544 Objective Loss 2.049544 LR 0.001000 Time 0.041680 +2025-05-14 15:58:04,025 - Epoch: [69][ 800/ 1785] Overall Loss 2.072383 Objective Loss 2.072383 LR 0.001000 Time 0.041605 +2025-05-14 15:58:12,296 - Epoch: [69][ 1000/ 1785] Overall Loss 2.077244 Objective Loss 2.077244 LR 0.001000 Time 0.041553 +2025-05-14 15:58:20,519 - Epoch: [69][ 1200/ 1785] Overall Loss 2.084020 Objective Loss 2.084020 LR 0.001000 Time 0.041479 +2025-05-14 15:58:28,646 - Epoch: [69][ 1400/ 1785] Overall Loss 2.090796 Objective Loss 2.090796 LR 0.001000 Time 0.041357 +2025-05-14 15:58:36,662 - Epoch: [69][ 1600/ 1785] Overall Loss 2.095409 Objective Loss 2.095409 LR 0.001000 Time 0.041196 +2025-05-14 15:58:44,144 - Epoch: [69][ 1785/ 1785] Overall Loss 2.099089 Objective Loss 2.099089 LR 0.001000 Time 0.041117 +2025-05-14 15:58:44,182 - --- validate (epoch=69)----------- +2025-05-14 15:58:44,182 - 12251 
samples (16 per mini-batch) +2025-05-14 15:58:44,184 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 15:58:58,045 - Epoch: [69][ 200/ 766] Loss 2.459947 mAP 0.865767 +2025-05-14 15:59:12,691 - Epoch: [69][ 400/ 766] Loss 2.434001 mAP 0.865623 +2025-05-14 15:59:28,061 - Epoch: [69][ 600/ 766] Loss 2.435577 mAP 0.864009 +2025-05-14 15:59:41,545 - Epoch: [69][ 766/ 766] Loss 2.432929 mAP 0.865967 +2025-05-14 15:59:41,580 - ==> mAP: 0.86597 Loss: 2.433 + +2025-05-14 15:59:41,586 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 15:59:41,586 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 15:59:41,616 - + +2025-05-14 15:59:41,616 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 15:59:49,980 - Epoch: [70][ 200/ 1785] Overall Loss 2.041838 Objective Loss 2.041838 LR 0.001000 Time 0.041807 +2025-05-14 15:59:58,201 - Epoch: [70][ 400/ 1785] Overall Loss 2.058658 Objective Loss 2.058658 LR 0.001000 Time 0.041451 +2025-05-14 16:00:06,380 - Epoch: [70][ 600/ 1785] Overall Loss 2.063349 Objective Loss 2.063349 LR 0.001000 Time 0.041263 +2025-05-14 16:00:14,623 - Epoch: [70][ 800/ 1785] Overall Loss 2.062616 Objective Loss 2.062616 LR 0.001000 Time 0.041249 +2025-05-14 16:00:22,817 - Epoch: [70][ 1000/ 1785] Overall Loss 2.071137 Objective Loss 2.071137 LR 0.001000 Time 0.041192 +2025-05-14 16:00:31,028 - Epoch: [70][ 1200/ 1785] Overall Loss 2.075347 Objective Loss 2.075347 LR 0.001000 Time 0.041168 +2025-05-14 16:00:39,204 - Epoch: [70][ 1400/ 1785] Overall Loss 2.084004 Objective Loss 2.084004 LR 0.001000 Time 0.041125 +2025-05-14 16:00:47,440 - Epoch: [70][ 1600/ 1785] Overall Loss 2.090512 Objective Loss 2.090512 LR 0.001000 Time 0.041131 +2025-05-14 16:00:55,053 - Epoch: [70][ 1785/ 1785] Overall Loss 2.094172 Objective Loss 2.094172 LR 0.001000 Time 0.041133 +2025-05-14 16:00:55,097 - --- validate 
(epoch=70)----------- +2025-05-14 16:00:55,097 - 12251 samples (16 per mini-batch) +2025-05-14 16:00:55,099 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 16:01:09,082 - Epoch: [70][ 200/ 766] Loss 2.410013 mAP 0.875887 +2025-05-14 16:01:23,521 - Epoch: [70][ 400/ 766] Loss 2.420383 mAP 0.869787 +2025-05-14 16:01:38,912 - Epoch: [70][ 600/ 766] Loss 2.416555 mAP 0.871625 +2025-05-14 16:01:53,071 - Epoch: [70][ 766/ 766] Loss 2.410399 mAP 0.870131 +2025-05-14 16:01:53,112 - ==> mAP: 0.87013 Loss: 2.410 + +2025-05-14 16:01:53,117 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 16:01:53,117 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 16:01:53,149 - + +2025-05-14 16:01:53,149 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 16:02:01,715 - Epoch: [71][ 200/ 1785] Overall Loss 2.041120 Objective Loss 2.041120 LR 0.001000 Time 0.042815 +2025-05-14 16:02:10,073 - Epoch: [71][ 400/ 1785] Overall Loss 2.049686 Objective Loss 2.049686 LR 0.001000 Time 0.042299 +2025-05-14 16:02:18,357 - Epoch: [71][ 600/ 1785] Overall Loss 2.051461 Objective Loss 2.051461 LR 0.001000 Time 0.042004 +2025-05-14 16:02:26,535 - Epoch: [71][ 800/ 1785] Overall Loss 2.061214 Objective Loss 2.061214 LR 0.001000 Time 0.041723 +2025-05-14 16:02:34,713 - Epoch: [71][ 1000/ 1785] Overall Loss 2.065275 Objective Loss 2.065275 LR 0.001000 Time 0.041555 +2025-05-14 16:02:42,884 - Epoch: [71][ 1200/ 1785] Overall Loss 2.071061 Objective Loss 2.071061 LR 0.001000 Time 0.041437 +2025-05-14 16:02:51,055 - Epoch: [71][ 1400/ 1785] Overall Loss 2.083259 Objective Loss 2.083259 LR 0.001000 Time 0.041353 +2025-05-14 16:02:59,170 - Epoch: [71][ 1600/ 1785] Overall Loss 2.090974 Objective Loss 2.090974 LR 0.001000 Time 0.041254 +2025-05-14 16:03:06,686 - Epoch: [71][ 1785/ 1785] Overall Loss 2.093205 Objective Loss 2.093205 LR 0.001000 
Time 0.041189 +2025-05-14 16:03:06,727 - --- validate (epoch=71)----------- +2025-05-14 16:03:06,728 - 12251 samples (16 per mini-batch) +2025-05-14 16:03:06,729 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 16:03:20,204 - Epoch: [71][ 200/ 766] Loss 2.386384 mAP 0.881597 +2025-05-14 16:03:34,519 - Epoch: [71][ 400/ 766] Loss 2.396185 mAP 0.876420 +2025-05-14 16:03:49,524 - Epoch: [71][ 600/ 766] Loss 2.374038 mAP 0.877342 +2025-05-14 16:04:03,551 - Epoch: [71][ 766/ 766] Loss 2.374379 mAP 0.876258 +2025-05-14 16:04:03,589 - ==> mAP: 0.87626 Loss: 2.374 + +2025-05-14 16:04:03,594 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 16:04:03,594 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 16:04:03,625 - + +2025-05-14 16:04:03,626 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 16:04:12,005 - Epoch: [72][ 200/ 1785] Overall Loss 2.043136 Objective Loss 2.043136 LR 0.001000 Time 0.041880 +2025-05-14 16:04:19,899 - Epoch: [72][ 400/ 1785] Overall Loss 2.059603 Objective Loss 2.059603 LR 0.001000 Time 0.040671 +2025-05-14 16:04:28,099 - Epoch: [72][ 600/ 1785] Overall Loss 2.062336 Objective Loss 2.062336 LR 0.001000 Time 0.040777 +2025-05-14 16:04:36,221 - Epoch: [72][ 800/ 1785] Overall Loss 2.073317 Objective Loss 2.073317 LR 0.001000 Time 0.040733 +2025-05-14 16:04:44,362 - Epoch: [72][ 1000/ 1785] Overall Loss 2.077715 Objective Loss 2.077715 LR 0.001000 Time 0.040725 +2025-05-14 16:04:52,552 - Epoch: [72][ 1200/ 1785] Overall Loss 2.080785 Objective Loss 2.080785 LR 0.001000 Time 0.040762 +2025-05-14 16:05:00,728 - Epoch: [72][ 1400/ 1785] Overall Loss 2.087010 Objective Loss 2.087010 LR 0.001000 Time 0.040778 +2025-05-14 16:05:08,933 - Epoch: [72][ 1600/ 1785] Overall Loss 2.091096 Objective Loss 2.091096 LR 0.001000 Time 0.040807 +2025-05-14 16:05:16,691 - Epoch: [72][ 1785/ 1785] 
Overall Loss 2.095304 Objective Loss 2.095304 LR 0.001000 Time 0.040923 +2025-05-14 16:05:16,738 - --- validate (epoch=72)----------- +2025-05-14 16:05:16,739 - 12251 samples (16 per mini-batch) +2025-05-14 16:05:16,741 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 16:05:30,698 - Epoch: [72][ 200/ 766] Loss 2.411402 mAP 0.877044 +2025-05-14 16:05:45,208 - Epoch: [72][ 400/ 766] Loss 2.414694 mAP 0.874743 +2025-05-14 16:06:00,694 - Epoch: [72][ 600/ 766] Loss 2.409267 mAP 0.875191 +2025-05-14 16:06:14,494 - Epoch: [72][ 766/ 766] Loss 2.413709 mAP 0.874445 +2025-05-14 16:06:14,540 - ==> mAP: 0.87445 Loss: 2.414 + +2025-05-14 16:06:14,546 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 16:06:14,546 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 16:06:14,577 - + +2025-05-14 16:06:14,577 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 16:06:22,788 - Epoch: [73][ 200/ 1785] Overall Loss 2.052144 Objective Loss 2.052144 LR 0.001000 Time 0.041037 +2025-05-14 16:06:30,846 - Epoch: [73][ 400/ 1785] Overall Loss 2.066029 Objective Loss 2.066029 LR 0.001000 Time 0.040658 +2025-05-14 16:06:38,903 - Epoch: [73][ 600/ 1785] Overall Loss 2.068222 Objective Loss 2.068222 LR 0.001000 Time 0.040531 +2025-05-14 16:06:46,984 - Epoch: [73][ 800/ 1785] Overall Loss 2.072516 Objective Loss 2.072516 LR 0.001000 Time 0.040498 +2025-05-14 16:06:55,263 - Epoch: [73][ 1000/ 1785] Overall Loss 2.076781 Objective Loss 2.076781 LR 0.001000 Time 0.040675 +2025-05-14 16:07:03,382 - Epoch: [73][ 1200/ 1785] Overall Loss 2.080979 Objective Loss 2.080979 LR 0.001000 Time 0.040660 +2025-05-14 16:07:11,540 - Epoch: [73][ 1400/ 1785] Overall Loss 2.083937 Objective Loss 2.083937 LR 0.001000 Time 0.040678 +2025-05-14 16:07:19,739 - Epoch: [73][ 1600/ 1785] Overall Loss 2.089660 Objective Loss 2.089660 LR 0.001000 Time 
0.040716 +2025-05-14 16:07:27,419 - Epoch: [73][ 1785/ 1785] Overall Loss 2.093487 Objective Loss 2.093487 LR 0.001000 Time 0.040798 +2025-05-14 16:07:27,473 - --- validate (epoch=73)----------- +2025-05-14 16:07:27,474 - 12251 samples (16 per mini-batch) +2025-05-14 16:07:27,476 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 16:07:41,537 - Epoch: [73][ 200/ 766] Loss 2.403582 mAP 0.870134 +2025-05-14 16:07:55,694 - Epoch: [73][ 400/ 766] Loss 2.397472 mAP 0.873044 +2025-05-14 16:08:11,073 - Epoch: [73][ 600/ 766] Loss 2.391993 mAP 0.873251 +2025-05-14 16:08:25,088 - Epoch: [73][ 766/ 766] Loss 2.386645 mAP 0.873031 +2025-05-14 16:08:25,130 - ==> mAP: 0.87303 Loss: 2.387 + +2025-05-14 16:08:25,136 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 16:08:25,137 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 16:08:25,167 - + +2025-05-14 16:08:25,167 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 16:08:33,775 - Epoch: [74][ 200/ 1785] Overall Loss 2.018075 Objective Loss 2.018075 LR 0.001000 Time 0.043025 +2025-05-14 16:08:42,215 - Epoch: [74][ 400/ 1785] Overall Loss 2.038164 Objective Loss 2.038164 LR 0.001000 Time 0.042607 +2025-05-14 16:08:50,387 - Epoch: [74][ 600/ 1785] Overall Loss 2.049268 Objective Loss 2.049268 LR 0.001000 Time 0.042023 +2025-05-14 16:08:58,592 - Epoch: [74][ 800/ 1785] Overall Loss 2.056900 Objective Loss 2.056900 LR 0.001000 Time 0.041771 +2025-05-14 16:09:06,771 - Epoch: [74][ 1000/ 1785] Overall Loss 2.067676 Objective Loss 2.067676 LR 0.001000 Time 0.041594 +2025-05-14 16:09:14,918 - Epoch: [74][ 1200/ 1785] Overall Loss 2.076801 Objective Loss 2.076801 LR 0.001000 Time 0.041449 +2025-05-14 16:09:22,905 - Epoch: [74][ 1400/ 1785] Overall Loss 2.081304 Objective Loss 2.081304 LR 0.001000 Time 0.041232 +2025-05-14 16:09:30,825 - Epoch: [74][ 1600/ 1785] Overall 
Loss 2.085878 Objective Loss 2.085878 LR 0.001000 Time 0.041027 +2025-05-14 16:09:38,178 - Epoch: [74][ 1785/ 1785] Overall Loss 2.093693 Objective Loss 2.093693 LR 0.001000 Time 0.040893 +2025-05-14 16:09:38,218 - --- validate (epoch=74)----------- +2025-05-14 16:09:38,219 - 12251 samples (16 per mini-batch) +2025-05-14 16:09:38,221 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 16:09:51,890 - Epoch: [74][ 200/ 766] Loss 2.398486 mAP 0.860018 +2025-05-14 16:10:06,138 - Epoch: [74][ 400/ 766] Loss 2.431267 mAP 0.862043 +2025-05-14 16:10:21,298 - Epoch: [74][ 600/ 766] Loss 2.430272 mAP 0.865784 +2025-05-14 16:10:34,828 - Epoch: [74][ 766/ 766] Loss 2.424205 mAP 0.865864 +2025-05-14 16:10:34,879 - ==> mAP: 0.86586 Loss: 2.424 + +2025-05-14 16:10:34,885 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 16:10:34,886 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 16:10:34,916 - + +2025-05-14 16:10:34,916 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 16:10:43,343 - Epoch: [75][ 200/ 1785] Overall Loss 2.048356 Objective Loss 2.048356 LR 0.001000 Time 0.042124 +2025-05-14 16:10:51,639 - Epoch: [75][ 400/ 1785] Overall Loss 2.052369 Objective Loss 2.052369 LR 0.001000 Time 0.041797 +2025-05-14 16:10:59,778 - Epoch: [75][ 600/ 1785] Overall Loss 2.062122 Objective Loss 2.062122 LR 0.001000 Time 0.041427 +2025-05-14 16:11:08,006 - Epoch: [75][ 800/ 1785] Overall Loss 2.065699 Objective Loss 2.065699 LR 0.001000 Time 0.041354 +2025-05-14 16:11:16,131 - Epoch: [75][ 1000/ 1785] Overall Loss 2.069946 Objective Loss 2.069946 LR 0.001000 Time 0.041206 +2025-05-14 16:11:24,297 - Epoch: [75][ 1200/ 1785] Overall Loss 2.079526 Objective Loss 2.079526 LR 0.001000 Time 0.041142 +2025-05-14 16:11:32,504 - Epoch: [75][ 1400/ 1785] Overall Loss 2.079074 Objective Loss 2.079074 LR 0.001000 Time 0.041125 
+2025-05-14 16:11:40,699 - Epoch: [75][ 1600/ 1785] Overall Loss 2.085926 Objective Loss 2.085926 LR 0.001000 Time 0.041105 +2025-05-14 16:11:48,469 - Epoch: [75][ 1785/ 1785] Overall Loss 2.093268 Objective Loss 2.093268 LR 0.001000 Time 0.041197 +2025-05-14 16:11:48,508 - --- validate (epoch=75)----------- +2025-05-14 16:11:48,509 - 12251 samples (16 per mini-batch) +2025-05-14 16:11:48,511 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 16:12:01,941 - Epoch: [75][ 200/ 766] Loss 2.410252 mAP 0.873566 +2025-05-14 16:12:16,331 - Epoch: [75][ 400/ 766] Loss 2.437150 mAP 0.867398 +2025-05-14 16:12:31,578 - Epoch: [75][ 600/ 766] Loss 2.442918 mAP 0.872041 +2025-05-14 16:12:45,490 - Epoch: [75][ 766/ 766] Loss 2.438527 mAP 0.872659 +2025-05-14 16:12:45,527 - ==> mAP: 0.87266 Loss: 2.439 + +2025-05-14 16:12:45,533 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 16:12:45,533 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 16:12:45,564 - + +2025-05-14 16:12:45,564 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 16:12:53,883 - Epoch: [76][ 200/ 1785] Overall Loss 2.007474 Objective Loss 2.007474 LR 0.001000 Time 0.041577 +2025-05-14 16:13:02,297 - Epoch: [76][ 400/ 1785] Overall Loss 2.027133 Objective Loss 2.027133 LR 0.001000 Time 0.041821 +2025-05-14 16:13:10,752 - Epoch: [76][ 600/ 1785] Overall Loss 2.043581 Objective Loss 2.043581 LR 0.001000 Time 0.041969 +2025-05-14 16:13:19,096 - Epoch: [76][ 800/ 1785] Overall Loss 2.053470 Objective Loss 2.053470 LR 0.001000 Time 0.041905 +2025-05-14 16:13:27,458 - Epoch: [76][ 1000/ 1785] Overall Loss 2.061943 Objective Loss 2.061943 LR 0.001000 Time 0.041884 +2025-05-14 16:13:35,923 - Epoch: [76][ 1200/ 1785] Overall Loss 2.072378 Objective Loss 2.072378 LR 0.001000 Time 0.041956 +2025-05-14 16:13:44,335 - Epoch: [76][ 1400/ 1785] Overall Loss 
2.081422 Objective Loss 2.081422 LR 0.001000 Time 0.041969 +2025-05-14 16:13:52,734 - Epoch: [76][ 1600/ 1785] Overall Loss 2.088834 Objective Loss 2.088834 LR 0.001000 Time 0.041972 +2025-05-14 16:14:00,430 - Epoch: [76][ 1785/ 1785] Overall Loss 2.089176 Objective Loss 2.089176 LR 0.001000 Time 0.041932 +2025-05-14 16:14:00,473 - --- validate (epoch=76)----------- +2025-05-14 16:14:00,474 - 12251 samples (16 per mini-batch) +2025-05-14 16:14:00,476 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 16:14:14,315 - Epoch: [76][ 200/ 766] Loss 2.408503 mAP 0.867289 +2025-05-14 16:14:28,354 - Epoch: [76][ 400/ 766] Loss 2.402256 mAP 0.866419 +2025-05-14 16:14:43,398 - Epoch: [76][ 600/ 766] Loss 2.394533 mAP 0.867361 +2025-05-14 16:14:57,113 - Epoch: [76][ 766/ 766] Loss 2.399576 mAP 0.864452 +2025-05-14 16:14:57,151 - ==> mAP: 0.86445 Loss: 2.400 + +2025-05-14 16:14:57,157 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 16:14:57,158 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 16:14:57,188 - + +2025-05-14 16:14:57,188 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 16:15:05,366 - Epoch: [77][ 200/ 1785] Overall Loss 2.066115 Objective Loss 2.066115 LR 0.001000 Time 0.040875 +2025-05-14 16:15:13,698 - Epoch: [77][ 400/ 1785] Overall Loss 2.055037 Objective Loss 2.055037 LR 0.001000 Time 0.041262 +2025-05-14 16:15:21,975 - Epoch: [77][ 600/ 1785] Overall Loss 2.050466 Objective Loss 2.050466 LR 0.001000 Time 0.041300 +2025-05-14 16:15:30,168 - Epoch: [77][ 800/ 1785] Overall Loss 2.072831 Objective Loss 2.072831 LR 0.001000 Time 0.041214 +2025-05-14 16:15:38,333 - Epoch: [77][ 1000/ 1785] Overall Loss 2.074802 Objective Loss 2.074802 LR 0.001000 Time 0.041135 +2025-05-14 16:15:46,505 - Epoch: [77][ 1200/ 1785] Overall Loss 2.077507 Objective Loss 2.077507 LR 0.001000 Time 0.041088 +2025-05-14 
16:15:54,707 - Epoch: [77][ 1400/ 1785] Overall Loss 2.083693 Objective Loss 2.083693 LR 0.001000 Time 0.041076 +2025-05-14 16:16:02,906 - Epoch: [77][ 1600/ 1785] Overall Loss 2.083976 Objective Loss 2.083976 LR 0.001000 Time 0.041064 +2025-05-14 16:16:10,474 - Epoch: [77][ 1785/ 1785] Overall Loss 2.085973 Objective Loss 2.085973 LR 0.001000 Time 0.041047 +2025-05-14 16:16:10,514 - --- validate (epoch=77)----------- +2025-05-14 16:16:10,515 - 12251 samples (16 per mini-batch) +2025-05-14 16:16:10,516 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 16:16:24,422 - Epoch: [77][ 200/ 766] Loss 2.374376 mAP 0.883502 +2025-05-14 16:16:38,885 - Epoch: [77][ 400/ 766] Loss 2.397093 mAP 0.879539 +2025-05-14 16:16:54,249 - Epoch: [77][ 600/ 766] Loss 2.406355 mAP 0.880247 +2025-05-14 16:17:08,412 - Epoch: [77][ 766/ 766] Loss 2.406676 mAP 0.879064 +2025-05-14 16:17:08,450 - ==> mAP: 0.87906 Loss: 2.407 + +2025-05-14 16:17:08,456 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58] +2025-05-14 16:17:08,456 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 16:17:08,487 - + +2025-05-14 16:17:08,487 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 16:17:16,792 - Epoch: [78][ 200/ 1785] Overall Loss 2.043597 Objective Loss 2.043597 LR 0.001000 Time 0.041511 +2025-05-14 16:17:24,948 - Epoch: [78][ 400/ 1785] Overall Loss 2.062628 Objective Loss 2.062628 LR 0.001000 Time 0.041141 +2025-05-14 16:17:33,253 - Epoch: [78][ 600/ 1785] Overall Loss 2.065793 Objective Loss 2.065793 LR 0.001000 Time 0.041265 +2025-05-14 16:17:41,565 - Epoch: [78][ 800/ 1785] Overall Loss 2.079926 Objective Loss 2.079926 LR 0.001000 Time 0.041337 +2025-05-14 16:17:49,841 - Epoch: [78][ 1000/ 1785] Overall Loss 2.077244 Objective Loss 2.077244 LR 0.001000 Time 0.041344 +2025-05-14 16:17:58,166 - Epoch: [78][ 1200/ 1785] Overall Loss 2.083019 
Objective Loss 2.083019 LR 0.001000 Time 0.041390
+2025-05-14 16:18:06,272 - Epoch: [78][ 1400/ 1785] Overall Loss 2.086895 Objective Loss 2.086895 LR 0.001000 Time 0.041266
+2025-05-14 16:18:14,269 - Epoch: [78][ 1600/ 1785] Overall Loss 2.089101 Objective Loss 2.089101 LR 0.001000 Time 0.041104
+2025-05-14 16:18:21,668 - Epoch: [78][ 1785/ 1785] Overall Loss 2.089293 Objective Loss 2.089293 LR 0.001000 Time 0.040988
+2025-05-14 16:18:21,708 - --- validate (epoch=78)-----------
+2025-05-14 16:18:21,709 - 12251 samples (16 per mini-batch)
+2025-05-14 16:18:21,711 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:18:35,151 - Epoch: [78][ 200/ 766] Loss 2.367333 mAP 0.878420
+2025-05-14 16:18:49,120 - Epoch: [78][ 400/ 766] Loss 2.382376 mAP 0.873837
+2025-05-14 16:19:04,585 - Epoch: [78][ 600/ 766] Loss 2.398330 mAP 0.873124
+2025-05-14 16:19:18,529 - Epoch: [78][ 766/ 766] Loss 2.389848 mAP 0.873563
+2025-05-14 16:19:18,568 - ==> mAP: 0.87356 Loss: 2.390
+
+2025-05-14 16:19:18,575 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:19:18,575 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:19:18,605 - 
+
+2025-05-14 16:19:18,605 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:19:27,086 - Epoch: [79][ 200/ 1785] Overall Loss 2.044156 Objective Loss 2.044156 LR 0.001000 Time 0.042389
+2025-05-14 16:19:35,393 - Epoch: [79][ 400/ 1785] Overall Loss 2.045520 Objective Loss 2.045520 LR 0.001000 Time 0.041958
+2025-05-14 16:19:43,861 - Epoch: [79][ 600/ 1785] Overall Loss 2.036626 Objective Loss 2.036626 LR 0.001000 Time 0.042082
+2025-05-14 16:19:52,229 - Epoch: [79][ 800/ 1785] Overall Loss 2.042607 Objective Loss 2.042607 LR 0.001000 Time 0.042020
+2025-05-14 16:20:00,573 - Epoch: [79][ 1000/ 1785] Overall Loss 2.052081 Objective Loss 2.052081 LR 0.001000 Time 0.041959
+2025-05-14 16:20:08,927 - Epoch: [79][ 1200/ 1785] Overall Loss 2.062658 Objective Loss 2.062658 LR 0.001000 Time 0.041926
+2025-05-14 16:20:17,290 - Epoch: [79][ 1400/ 1785] Overall Loss 2.070823 Objective Loss 2.070823 LR 0.001000 Time 0.041908
+2025-05-14 16:20:25,602 - Epoch: [79][ 1600/ 1785] Overall Loss 2.076154 Objective Loss 2.076154 LR 0.001000 Time 0.041864
+2025-05-14 16:20:33,125 - Epoch: [79][ 1785/ 1785] Overall Loss 2.082858 Objective Loss 2.082858 LR 0.001000 Time 0.041739
+2025-05-14 16:20:33,166 - --- validate (epoch=79)-----------
+2025-05-14 16:20:33,166 - 12251 samples (16 per mini-batch)
+2025-05-14 16:20:33,168 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:20:47,242 - Epoch: [79][ 200/ 766] Loss 2.391215 mAP 0.866069
+2025-05-14 16:21:01,501 - Epoch: [79][ 400/ 766] Loss 2.366493 mAP 0.873779
+2025-05-14 16:21:16,606 - Epoch: [79][ 600/ 766] Loss 2.382763 mAP 0.876273
+2025-05-14 16:21:30,367 - Epoch: [79][ 766/ 766] Loss 2.380069 mAP 0.876844
+2025-05-14 16:21:30,407 - ==> mAP: 0.87684 Loss: 2.380
+
+2025-05-14 16:21:30,414 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:21:30,414 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:21:30,445 - 
+
+2025-05-14 16:21:30,445 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:21:38,766 - Epoch: [80][ 200/ 1785] Overall Loss 2.040020 Objective Loss 2.040020 LR 0.001000 Time 0.041591
+2025-05-14 16:21:46,927 - Epoch: [80][ 400/ 1785] Overall Loss 2.047585 Objective Loss 2.047585 LR 0.001000 Time 0.041194
+2025-05-14 16:21:55,127 - Epoch: [80][ 600/ 1785] Overall Loss 2.063878 Objective Loss 2.063878 LR 0.001000 Time 0.041127
+2025-05-14 16:22:03,303 - Epoch: [80][ 800/ 1785] Overall Loss 2.064452 Objective Loss 2.064452 LR 0.001000 Time 0.041063
+2025-05-14 16:22:11,565 - Epoch: [80][ 1000/ 1785] Overall Loss 2.068636 Objective Loss 2.068636 LR 0.001000 Time 0.041111
+2025-05-14 16:22:19,810 - Epoch: [80][ 1200/ 1785] Overall Loss 2.071009 Objective Loss 2.071009 LR 0.001000 Time 0.041128
+2025-05-14 16:22:28,094 - Epoch: [80][ 1400/ 1785] Overall Loss 2.071952 Objective Loss 2.071952 LR 0.001000 Time 0.041169
+2025-05-14 16:22:36,338 - Epoch: [80][ 1600/ 1785] Overall Loss 2.079416 Objective Loss 2.079416 LR 0.001000 Time 0.041174
+2025-05-14 16:22:44,001 - Epoch: [80][ 1785/ 1785] Overall Loss 2.083025 Objective Loss 2.083025 LR 0.001000 Time 0.041199
+2025-05-14 16:22:44,048 - --- validate (epoch=80)-----------
+2025-05-14 16:22:44,049 - 12251 samples (16 per mini-batch)
+2025-05-14 16:22:44,050 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:22:57,931 - Epoch: [80][ 200/ 766] Loss 2.482517 mAP 0.853673
+2025-05-14 16:23:12,365 - Epoch: [80][ 400/ 766] Loss 2.478260 mAP 0.853800
+2025-05-14 16:23:27,993 - Epoch: [80][ 600/ 766] Loss 2.465473 mAP 0.851439
+2025-05-14 16:23:41,886 - Epoch: [80][ 766/ 766] Loss 2.467350 mAP 0.852076
+2025-05-14 16:23:41,927 - ==> mAP: 0.85208 Loss: 2.467
+
+2025-05-14 16:23:41,934 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:23:41,934 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:23:41,964 - 
+
+2025-05-14 16:23:41,964 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:23:50,324 - Epoch: [81][ 200/ 1785] Overall Loss 2.010587 Objective Loss 2.010587 LR 0.001000 Time 0.041784
+2025-05-14 16:23:58,473 - Epoch: [81][ 400/ 1785] Overall Loss 2.042461 Objective Loss 2.042461 LR 0.001000 Time 0.041260
+2025-05-14 16:24:06,730 - Epoch: [81][ 600/ 1785] Overall Loss 2.057377 Objective Loss 2.057377 LR 0.001000 Time 0.041266
+2025-05-14 16:24:14,905 - Epoch: [81][ 800/ 1785] Overall Loss 2.068158 Objective Loss 2.068158 LR 0.001000 Time 0.041166
+2025-05-14 16:24:23,134 - Epoch: [81][ 1000/ 1785] Overall Loss 2.069197 Objective Loss 2.069197 LR 0.001000 Time 0.041160
+2025-05-14 16:24:31,417 - Epoch: [81][ 1200/ 1785] Overall Loss 2.073530 Objective Loss 2.073530 LR 0.001000 Time 0.041201
+2025-05-14 16:24:39,647 - Epoch: [81][ 1400/ 1785] Overall Loss 2.078379 Objective Loss 2.078379 LR 0.001000 Time 0.041192
+2025-05-14 16:24:47,884 - Epoch: [81][ 1600/ 1785] Overall Loss 2.082224 Objective Loss 2.082224 LR 0.001000 Time 0.041190
+2025-05-14 16:24:55,578 - Epoch: [81][ 1785/ 1785] Overall Loss 2.088718 Objective Loss 2.088718 LR 0.001000 Time 0.041230
+2025-05-14 16:24:55,626 - --- validate (epoch=81)-----------
+2025-05-14 16:24:55,627 - 12251 samples (16 per mini-batch)
+2025-05-14 16:24:55,629 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:25:09,243 - Epoch: [81][ 200/ 766] Loss 2.371282 mAP 0.881437
+2025-05-14 16:25:23,831 - Epoch: [81][ 400/ 766] Loss 2.393470 mAP 0.873429
+2025-05-14 16:25:39,231 - Epoch: [81][ 600/ 766] Loss 2.409116 mAP 0.869965
+2025-05-14 16:25:53,334 - Epoch: [81][ 766/ 766] Loss 2.407110 mAP 0.870371
+2025-05-14 16:25:53,374 - ==> mAP: 0.87037 Loss: 2.407
+
+2025-05-14 16:25:53,381 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:25:53,381 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:25:53,412 - 
+
+2025-05-14 16:25:53,412 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:26:02,028 - Epoch: [82][ 200/ 1785] Overall Loss 2.037930 Objective Loss 2.037930 LR 0.001000 Time 0.043068
+2025-05-14 16:26:10,268 - Epoch: [82][ 400/ 1785] Overall Loss 2.055648 Objective Loss 2.055648 LR 0.001000 Time 0.042128
+2025-05-14 16:26:18,548 - Epoch: [82][ 600/ 1785] Overall Loss 2.053260 Objective Loss 2.053260 LR 0.001000 Time 0.041882
+2025-05-14 16:26:26,849 - Epoch: [82][ 800/ 1785] Overall Loss 2.055435 Objective Loss 2.055435 LR 0.001000 Time 0.041786
+2025-05-14 16:26:35,061 - Epoch: [82][ 1000/ 1785] Overall Loss 2.060874 Objective Loss 2.060874 LR 0.001000 Time 0.041638
+2025-05-14 16:26:43,263 - Epoch: [82][ 1200/ 1785] Overall Loss 2.065673 Objective Loss 2.065673 LR 0.001000 Time 0.041532
+2025-05-14 16:26:51,495 - Epoch: [82][ 1400/ 1785] Overall Loss 2.068875 Objective Loss 2.068875 LR 0.001000 Time 0.041477
+2025-05-14 16:26:59,681 - Epoch: [82][ 1600/ 1785] Overall Loss 2.075801 Objective Loss 2.075801 LR 0.001000 Time 0.041408
+2025-05-14 16:27:07,255 - Epoch: [82][ 1785/ 1785] Overall Loss 2.078919 Objective Loss 2.078919 LR 0.001000 Time 0.041359
+2025-05-14 16:27:07,296 - --- validate (epoch=82)-----------
+2025-05-14 16:27:07,297 - 12251 samples (16 per mini-batch)
+2025-05-14 16:27:07,298 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:27:20,992 - Epoch: [82][ 200/ 766] Loss 2.390140 mAP 0.871066
+2025-05-14 16:27:35,701 - Epoch: [82][ 400/ 766] Loss 2.415656 mAP 0.872864
+2025-05-14 16:27:51,218 - Epoch: [82][ 600/ 766] Loss 2.427499 mAP 0.869879
+2025-05-14 16:28:04,803 - Epoch: [82][ 766/ 766] Loss 2.426174 mAP 0.870505
+2025-05-14 16:28:04,844 - ==> mAP: 0.87050 Loss: 2.426
+
+2025-05-14 16:28:04,851 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:28:04,851 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:28:04,875 - 
+
+2025-05-14 16:28:04,876 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:28:13,408 - Epoch: [83][ 200/ 1785] Overall Loss 2.024290 Objective Loss 2.024290 LR 0.001000 Time 0.042650
+2025-05-14 16:28:21,937 - Epoch: [83][ 400/ 1785] Overall Loss 2.038561 Objective Loss 2.038561 LR 0.001000 Time 0.042642
+2025-05-14 16:28:30,466 - Epoch: [83][ 600/ 1785] Overall Loss 2.055444 Objective Loss 2.055444 LR 0.001000 Time 0.042640
+2025-05-14 16:28:38,948 - Epoch: [83][ 800/ 1785] Overall Loss 2.056529 Objective Loss 2.056529 LR 0.001000 Time 0.042580
+2025-05-14 16:28:47,216 - Epoch: [83][ 1000/ 1785] Overall Loss 2.060449 Objective Loss 2.060449 LR 0.001000 Time 0.042331
+2025-05-14 16:28:55,807 - Epoch: [83][ 1200/ 1785] Overall Loss 2.065223 Objective Loss 2.065223 LR 0.001000 Time 0.042433
+2025-05-14 16:29:04,326 - Epoch: [83][ 1400/ 1785] Overall Loss 2.070845 Objective Loss 2.070845 LR 0.001000 Time 0.042455
+2025-05-14 16:29:12,793 - Epoch: [83][ 1600/ 1785] Overall Loss 2.076230 Objective Loss 2.076230 LR 0.001000 Time 0.042439
+2025-05-14 16:29:20,670 - Epoch: [83][ 1785/ 1785] Overall Loss 2.080531 Objective Loss 2.080531 LR 0.001000 Time 0.042453
+2025-05-14 16:29:20,708 - --- validate (epoch=83)-----------
+2025-05-14 16:29:20,709 - 12251 samples (16 per mini-batch)
+2025-05-14 16:29:20,711 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:29:34,575 - Epoch: [83][ 200/ 766] Loss 2.438816 mAP 0.869624
+2025-05-14 16:29:49,034 - Epoch: [83][ 400/ 766] Loss 2.407887 mAP 0.869539
+2025-05-14 16:30:04,484 - Epoch: [83][ 600/ 766] Loss 2.406998 mAP 0.870758
+2025-05-14 16:30:18,309 - Epoch: [83][ 766/ 766] Loss 2.398077 mAP 0.871822
+2025-05-14 16:30:18,345 - ==> mAP: 0.87182 Loss: 2.398
+
+2025-05-14 16:30:18,351 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:30:18,352 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:30:18,382 - 
+
+2025-05-14 16:30:18,383 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:30:26,678 - Epoch: [84][ 200/ 1785] Overall Loss 2.033457 Objective Loss 2.033457 LR 0.001000 Time 0.041462
+2025-05-14 16:30:34,905 - Epoch: [84][ 400/ 1785] Overall Loss 2.031444 Objective Loss 2.031444 LR 0.001000 Time 0.041294
+2025-05-14 16:30:43,396 - Epoch: [84][ 600/ 1785] Overall Loss 2.042731 Objective Loss 2.042731 LR 0.001000 Time 0.041678
+2025-05-14 16:30:51,978 - Epoch: [84][ 800/ 1785] Overall Loss 2.054738 Objective Loss 2.054738 LR 0.001000 Time 0.041984
+2025-05-14 16:31:00,453 - Epoch: [84][ 1000/ 1785] Overall Loss 2.054464 Objective Loss 2.054464 LR 0.001000 Time 0.042061
+2025-05-14 16:31:08,910 - Epoch: [84][ 1200/ 1785] Overall Loss 2.059010 Objective Loss 2.059010 LR 0.001000 Time 0.042096
+2025-05-14 16:31:17,332 - Epoch: [84][ 1400/ 1785] Overall Loss 2.070114 Objective Loss 2.070114 LR 0.001000 Time 0.042097
+2025-05-14 16:31:25,756 - Epoch: [84][ 1600/ 1785] Overall Loss 2.074189 Objective Loss 2.074189 LR 0.001000 Time 0.042099
+2025-05-14 16:31:33,556 - Epoch: [84][ 1785/ 1785] Overall Loss 2.075999 Objective Loss 2.075999 LR 0.001000 Time 0.042105
+2025-05-14 16:31:33,590 - --- validate (epoch=84)-----------
+2025-05-14 16:31:33,591 - 12251 samples (16 per mini-batch)
+2025-05-14 16:31:33,593 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:31:47,145 - Epoch: [84][ 200/ 766] Loss 2.399489 mAP 0.883291
+2025-05-14 16:32:01,336 - Epoch: [84][ 400/ 766] Loss 2.405554 mAP 0.884889
+2025-05-14 16:32:16,678 - Epoch: [84][ 600/ 766] Loss 2.407487 mAP 0.880787
+2025-05-14 16:32:30,793 - Epoch: [84][ 766/ 766] Loss 2.406974 mAP 0.880995
+2025-05-14 16:32:30,833 - ==> mAP: 0.88099 Loss: 2.407
+
+2025-05-14 16:32:30,840 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:32:30,840 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:32:30,870 - 
+
+2025-05-14 16:32:30,871 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:32:39,179 - Epoch: [85][ 200/ 1785] Overall Loss 2.042014 Objective Loss 2.042014 LR 0.001000 Time 0.041529
+2025-05-14 16:32:47,366 - Epoch: [85][ 400/ 1785] Overall Loss 2.038623 Objective Loss 2.038623 LR 0.001000 Time 0.041229
+2025-05-14 16:32:55,478 - Epoch: [85][ 600/ 1785] Overall Loss 2.063714 Objective Loss 2.063714 LR 0.001000 Time 0.041003
+2025-05-14 16:33:03,593 - Epoch: [85][ 800/ 1785] Overall Loss 2.065163 Objective Loss 2.065163 LR 0.001000 Time 0.040894
+2025-05-14 16:33:11,831 - Epoch: [85][ 1000/ 1785] Overall Loss 2.063954 Objective Loss 2.063954 LR 0.001000 Time 0.040951
+2025-05-14 16:33:20,393 - Epoch: [85][ 1200/ 1785] Overall Loss 2.061352 Objective Loss 2.061352 LR 0.001000 Time 0.041260
+2025-05-14 16:33:28,872 - Epoch: [85][ 1400/ 1785] Overall Loss 2.065750 Objective Loss 2.065750 LR 0.001000 Time 0.041421
+2025-05-14 16:33:37,235 - Epoch: [85][ 1600/ 1785] Overall Loss 2.070007 Objective Loss 2.070007 LR 0.001000 Time 0.041469
+2025-05-14 16:33:44,953 - Epoch: [85][ 1785/ 1785] Overall Loss 2.075218 Objective Loss 2.075218 LR 0.001000 Time 0.041494
+2025-05-14 16:33:44,988 - --- validate (epoch=85)-----------
+2025-05-14 16:33:44,989 - 12251 samples (16 per mini-batch)
+2025-05-14 16:33:44,991 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:33:58,594 - Epoch: [85][ 200/ 766] Loss 2.409506 mAP 0.876248
+2025-05-14 16:34:12,742 - Epoch: [85][ 400/ 766] Loss 2.419280 mAP 0.875997
+2025-05-14 16:34:27,924 - Epoch: [85][ 600/ 766] Loss 2.423364 mAP 0.872797
+2025-05-14 16:34:41,880 - Epoch: [85][ 766/ 766] Loss 2.422583 mAP 0.873433
+2025-05-14 16:34:41,928 - ==> mAP: 0.87343 Loss: 2.423
+
+2025-05-14 16:34:41,935 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:34:41,935 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:34:41,966 - 
+
+2025-05-14 16:34:41,966 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:34:50,282 - Epoch: [86][ 200/ 1785] Overall Loss 2.017428 Objective Loss 2.017428 LR 0.001000 Time 0.041566
+2025-05-14 16:34:58,434 - Epoch: [86][ 400/ 1785] Overall Loss 2.033299 Objective Loss 2.033299 LR 0.001000 Time 0.041159
+2025-05-14 16:35:06,615 - Epoch: [86][ 600/ 1785] Overall Loss 2.040016 Objective Loss 2.040016 LR 0.001000 Time 0.041071
+2025-05-14 16:35:14,726 - Epoch: [86][ 800/ 1785] Overall Loss 2.032630 Objective Loss 2.032630 LR 0.001000 Time 0.040940
+2025-05-14 16:35:22,853 - Epoch: [86][ 1000/ 1785] Overall Loss 2.049222 Objective Loss 2.049222 LR 0.001000 Time 0.040878
+2025-05-14 16:35:31,186 - Epoch: [86][ 1200/ 1785] Overall Loss 2.060327 Objective Loss 2.060327 LR 0.001000 Time 0.041008
+2025-05-14 16:35:39,539 - Epoch: [86][ 1400/ 1785] Overall Loss 2.064236 Objective Loss 2.064236 LR 0.001000 Time 0.041114
+2025-05-14 16:35:47,906 - Epoch: [86][ 1600/ 1785] Overall Loss 2.075190 Objective Loss 2.075190 LR 0.001000 Time 0.041204
+2025-05-14 16:35:55,625 - Epoch: [86][ 1785/ 1785] Overall Loss 2.079749 Objective Loss 2.079749 LR 0.001000 Time 0.041257
+2025-05-14 16:35:55,666 - --- validate (epoch=86)-----------
+2025-05-14 16:35:55,667 - 12251 samples (16 per mini-batch)
+2025-05-14 16:35:55,669 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:36:09,709 - Epoch: [86][ 200/ 766] Loss 2.403949 mAP 0.876205
+2025-05-14 16:36:24,172 - Epoch: [86][ 400/ 766] Loss 2.387114 mAP 0.877394
+2025-05-14 16:36:39,670 - Epoch: [86][ 600/ 766] Loss 2.389994 mAP 0.876967
+2025-05-14 16:36:53,322 - Epoch: [86][ 766/ 766] Loss 2.397934 mAP 0.874138
+2025-05-14 16:36:53,354 - ==> mAP: 0.87414 Loss: 2.398
+
+2025-05-14 16:36:53,361 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:36:53,361 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:36:53,391 - 
+
+2025-05-14 16:36:53,391 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:37:01,719 - Epoch: [87][ 200/ 1785] Overall Loss 2.018556 Objective Loss 2.018556 LR 0.001000 Time 0.041623
+2025-05-14 16:37:09,723 - Epoch: [87][ 400/ 1785] Overall Loss 2.013054 Objective Loss 2.013054 LR 0.001000 Time 0.040818
+2025-05-14 16:37:17,696 - Epoch: [87][ 600/ 1785] Overall Loss 2.033685 Objective Loss 2.033685 LR 0.001000 Time 0.040496
+2025-05-14 16:37:25,696 - Epoch: [87][ 800/ 1785] Overall Loss 2.044353 Objective Loss 2.044353 LR 0.001000 Time 0.040370
+2025-05-14 16:37:33,735 - Epoch: [87][ 1000/ 1785] Overall Loss 2.049408 Objective Loss 2.049408 LR 0.001000 Time 0.040332
+2025-05-14 16:37:42,134 - Epoch: [87][ 1200/ 1785] Overall Loss 2.054160 Objective Loss 2.054160 LR 0.001000 Time 0.040608
+2025-05-14 16:37:50,549 - Epoch: [87][ 1400/ 1785] Overall Loss 2.064309 Objective Loss 2.064309 LR 0.001000 Time 0.040816
+2025-05-14 16:37:58,953 - Epoch: [87][ 1600/ 1785] Overall Loss 2.067457 Objective Loss 2.067457 LR 0.001000 Time 0.040966
+2025-05-14 16:38:06,821 - Epoch: [87][ 1785/ 1785] Overall Loss 2.070427 Objective Loss 2.070427 LR 0.001000 Time 0.041127
+2025-05-14 16:38:06,856 - --- validate (epoch=87)-----------
+2025-05-14 16:38:06,857 - 12251 samples (16 per mini-batch)
+2025-05-14 16:38:06,859 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:38:20,712 - Epoch: [87][ 200/ 766] Loss 2.466916 mAP 0.862802
+2025-05-14 16:38:35,294 - Epoch: [87][ 400/ 766] Loss 2.448623 mAP 0.862995
+2025-05-14 16:38:50,895 - Epoch: [87][ 600/ 766] Loss 2.449381 mAP 0.860719
+2025-05-14 16:39:04,729 - Epoch: [87][ 766/ 766] Loss 2.466372 mAP 0.856644
+2025-05-14 16:39:04,767 - ==> mAP: 0.85664 Loss: 2.466
+
+2025-05-14 16:39:04,774 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:39:04,774 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:39:04,804 - 
+
+2025-05-14 16:39:04,804 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:39:13,519 - Epoch: [88][ 200/ 1785] Overall Loss 2.026252 Objective Loss 2.026252 LR 0.001000 Time 0.043562
+2025-05-14 16:39:21,884 - Epoch: [88][ 400/ 1785] Overall Loss 2.033677 Objective Loss 2.033677 LR 0.001000 Time 0.042688
+2025-05-14 16:39:30,202 - Epoch: [88][ 600/ 1785] Overall Loss 2.032848 Objective Loss 2.032848 LR 0.001000 Time 0.042320
+2025-05-14 16:39:38,600 - Epoch: [88][ 800/ 1785] Overall Loss 2.040325 Objective Loss 2.040325 LR 0.001000 Time 0.042235
+2025-05-14 16:39:46,997 - Epoch: [88][ 1000/ 1785] Overall Loss 2.051317 Objective Loss 2.051317 LR 0.001000 Time 0.042183
+2025-05-14 16:39:55,299 - Epoch: [88][ 1200/ 1785] Overall Loss 2.057624 Objective Loss 2.057624 LR 0.001000 Time 0.042070
+2025-05-14 16:40:03,629 - Epoch: [88][ 1400/ 1785] Overall Loss 2.061381 Objective Loss 2.061381 LR 0.001000 Time 0.042009
+2025-05-14 16:40:11,870 - Epoch: [88][ 1600/ 1785] Overall Loss 2.065866 Objective Loss 2.065866 LR 0.001000 Time 0.041907
+2025-05-14 16:40:19,454 - Epoch: [88][ 1785/ 1785] Overall Loss 2.071705 Objective Loss 2.071705 LR 0.001000 Time 0.041812
+2025-05-14 16:40:19,492 - --- validate (epoch=88)-----------
+2025-05-14 16:40:19,493 - 12251 samples (16 per mini-batch)
+2025-05-14 16:40:19,494 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:40:33,129 - Epoch: [88][ 200/ 766] Loss 2.366407 mAP 0.880381
+2025-05-14 16:40:47,385 - Epoch: [88][ 400/ 766] Loss 2.374969 mAP 0.875064
+2025-05-14 16:41:02,421 - Epoch: [88][ 600/ 766] Loss 2.389586 mAP 0.870373
+2025-05-14 16:41:15,897 - Epoch: [88][ 766/ 766] Loss 2.386741 mAP 0.870652
+2025-05-14 16:41:15,940 - ==> mAP: 0.87065 Loss: 2.387
+
+2025-05-14 16:41:15,946 - ==> Best [mAP: 0.883505 vloss: 2.371437 Params: 318604 on epoch: 58]
+2025-05-14 16:41:15,946 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:41:15,977 - 
+
+2025-05-14 16:41:15,977 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:41:24,310 - Epoch: [89][ 200/ 1785] Overall Loss 2.024084 Objective Loss 2.024084 LR 0.001000 Time 0.041648
+2025-05-14 16:41:32,352 - Epoch: [89][ 400/ 1785] Overall Loss 2.018241 Objective Loss 2.018241 LR 0.001000 Time 0.040925
+2025-05-14 16:41:40,737 - Epoch: [89][ 600/ 1785] Overall Loss 2.036912 Objective Loss 2.036912 LR 0.001000 Time 0.041256
+2025-05-14 16:41:49,059 - Epoch: [89][ 800/ 1785] Overall Loss 2.043143 Objective Loss 2.043143 LR 0.001000 Time 0.041343
+2025-05-14 16:41:57,392 - Epoch: [89][ 1000/ 1785] Overall Loss 2.046341 Objective Loss 2.046341 LR 0.001000 Time 0.041405
+2025-05-14 16:42:05,774 - Epoch: [89][ 1200/ 1785] Overall Loss 2.054709 Objective Loss 2.054709 LR 0.001000 Time 0.041488
+2025-05-14 16:42:14,100 - Epoch: [89][ 1400/ 1785] Overall Loss 2.059636 Objective Loss 2.059636 LR 0.001000 Time 0.041507
+2025-05-14 16:42:22,481 - Epoch: [89][ 1600/ 1785] Overall Loss 2.066521 Objective Loss 2.066521 LR 0.001000 Time 0.041556
+2025-05-14 16:42:30,281 - Epoch: [89][ 1785/ 1785] Overall Loss 2.070088 Objective Loss 2.070088 LR 0.001000 Time 0.041618
+2025-05-14 16:42:30,316 - --- validate (epoch=89)-----------
+2025-05-14 16:42:30,317 - 12251 samples (16 per mini-batch)
+2025-05-14 16:42:30,318 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:42:44,481 - Epoch: [89][ 200/ 766] Loss 2.371216 mAP 0.887081
+2025-05-14 16:42:59,018 - Epoch: [89][ 400/ 766] Loss 2.379896 mAP 0.887048
+2025-05-14 16:43:14,397 - Epoch: [89][ 600/ 766] Loss 2.383873 mAP 0.883611
+2025-05-14 16:43:28,335 - Epoch: [89][ 766/ 766] Loss 2.377537 mAP 0.883855
+2025-05-14 16:43:28,375 - ==> mAP: 0.88385 Loss: 2.378
+
+2025-05-14 16:43:28,381 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89]
+2025-05-14 16:43:28,381 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:43:28,409 - 
+
+2025-05-14 16:43:28,409 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:43:36,718 - Epoch: [90][ 200/ 1785] Overall Loss 2.024741 Objective Loss 2.024741 LR 0.001000 Time 0.041529
+2025-05-14 16:43:44,892 - Epoch: [90][ 400/ 1785] Overall Loss 2.030467 Objective Loss 2.030467 LR 0.001000 Time 0.041196
+2025-05-14 16:43:53,103 - Epoch: [90][ 600/ 1785] Overall Loss 2.041543 Objective Loss 2.041543 LR 0.001000 Time 0.041146
+2025-05-14 16:44:01,314 - Epoch: [90][ 800/ 1785] Overall Loss 2.045755 Objective Loss 2.045755 LR 0.001000 Time 0.041121
+2025-05-14 16:44:09,520 - Epoch: [90][ 1000/ 1785] Overall Loss 2.051420 Objective Loss 2.051420 LR 0.001000 Time 0.041102
+2025-05-14 16:44:17,659 - Epoch: [90][ 1200/ 1785] Overall Loss 2.059114 Objective Loss 2.059114 LR 0.001000 Time 0.041032
+2025-05-14 16:44:25,866 - Epoch: [90][ 1400/ 1785] Overall Loss 2.058028 Objective Loss 2.058028 LR 0.001000 Time 0.041032
+2025-05-14 16:44:33,993 - Epoch: [90][ 1600/ 1785] Overall Loss 2.063883 Objective Loss 2.063883 LR 0.001000 Time 0.040981
+2025-05-14 16:44:41,533 - Epoch: [90][ 1785/ 1785] Overall Loss 2.069359 Objective Loss 2.069359 LR 0.001000 Time 0.040957
+2025-05-14 16:44:41,575 - --- validate (epoch=90)-----------
+2025-05-14 16:44:41,576 - 12251 samples (16 per mini-batch)
+2025-05-14 16:44:41,578 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:44:55,075 - Epoch: [90][ 200/ 766] Loss 2.377194 mAP 0.878614
+2025-05-14 16:45:09,138 - Epoch: [90][ 400/ 766] Loss 2.363244 mAP 0.882480
+2025-05-14 16:45:24,625 - Epoch: [90][ 600/ 766] Loss 2.345260 mAP 0.884127
+2025-05-14 16:45:38,756 - Epoch: [90][ 766/ 766] Loss 2.356664 mAP 0.880866
+2025-05-14 16:45:38,792 - ==> mAP: 0.88087 Loss: 2.357
+
+2025-05-14 16:45:38,798 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89]
+2025-05-14 16:45:38,798 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:45:38,822 - 
+
+2025-05-14 16:45:38,822 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:45:47,131 - Epoch: [91][ 200/ 1785] Overall Loss 2.029373 Objective Loss 2.029373 LR 0.001000 Time 0.041533
+2025-05-14 16:45:55,415 - Epoch: [91][ 400/ 1785] Overall Loss 2.023908 Objective Loss 2.023908 LR 0.001000 Time 0.041471
+2025-05-14 16:46:03,585 - Epoch: [91][ 600/ 1785] Overall Loss 2.025532 Objective Loss 2.025532 LR 0.001000 Time 0.041261
+2025-05-14 16:46:11,684 - Epoch: [91][ 800/ 1785] Overall Loss 2.040392 Objective Loss 2.040392 LR 0.001000 Time 0.041067
+2025-05-14 16:46:19,750 - Epoch: [91][ 1000/ 1785] Overall Loss 2.044686 Objective Loss 2.044686 LR 0.001000 Time 0.040918
+2025-05-14 16:46:27,795 - Epoch: [91][ 1200/ 1785] Overall Loss 2.054516 Objective Loss 2.054516 LR 0.001000 Time 0.040801
+2025-05-14 16:46:35,811 - Epoch: [91][ 1400/ 1785] Overall Loss 2.060571 Objective Loss 2.060571 LR 0.001000 Time 0.040696
+2025-05-14 16:46:43,792 - Epoch: [91][ 1600/ 1785] Overall Loss 2.067108 Objective Loss 2.067108 LR 0.001000 Time 0.040596
+2025-05-14 16:46:51,151 - Epoch: [91][ 1785/ 1785] Overall Loss 2.073223 Objective Loss 2.073223 LR 0.001000 Time 0.040510
+2025-05-14 16:46:51,199 - --- validate (epoch=91)-----------
+2025-05-14 16:46:51,200 - 12251 samples (16 per mini-batch)
+2025-05-14 16:46:51,202 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:47:04,726 - Epoch: [91][ 200/ 766] Loss 2.424787 mAP 0.874387
+2025-05-14 16:47:19,348 - Epoch: [91][ 400/ 766] Loss 2.448693 mAP 0.865834
+2025-05-14 16:47:34,326 - Epoch: [91][ 600/ 766] Loss 2.445662 mAP 0.868736
+2025-05-14 16:47:47,977 - Epoch: [91][ 766/ 766] Loss 2.442852 mAP 0.870694
+2025-05-14 16:47:48,013 - ==> mAP: 0.87069 Loss: 2.443
+
+2025-05-14 16:47:48,019 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89]
+2025-05-14 16:47:48,019 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:47:48,043 - 
+
+2025-05-14 16:47:48,043 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:47:56,416 - Epoch: [92][ 200/ 1785] Overall Loss 2.028915 Objective Loss 2.028915 LR 0.001000 Time 0.041851
+2025-05-14 16:48:04,597 - Epoch: [92][ 400/ 1785] Overall Loss 2.013999 Objective Loss 2.013999 LR 0.001000 Time 0.041376
+2025-05-14 16:48:12,860 - Epoch: [92][ 600/ 1785] Overall Loss 2.026956 Objective Loss 2.026956 LR 0.001000 Time 0.041352
+2025-05-14 16:48:20,988 - Epoch: [92][ 800/ 1785] Overall Loss 2.032897 Objective Loss 2.032897 LR 0.001000 Time 0.041172
+2025-05-14 16:48:29,131 - Epoch: [92][ 1000/ 1785] Overall Loss 2.050325 Objective Loss 2.050325 LR 0.001000 Time 0.041080
+2025-05-14 16:48:37,214 - Epoch: [92][ 1200/ 1785] Overall Loss 2.062448 Objective Loss 2.062448 LR 0.001000 Time 0.040967
+2025-05-14 16:48:45,322 - Epoch: [92][ 1400/ 1785] Overall Loss 2.072269 Objective Loss 2.072269 LR 0.001000 Time 0.040905
+2025-05-14 16:48:53,508 - Epoch: [92][ 1600/ 1785] Overall Loss 2.071098 Objective Loss 2.071098 LR 0.001000 Time 0.040907
+2025-05-14 16:49:01,259 - Epoch: [92][ 1785/ 1785] Overall Loss 2.071945 Objective Loss 2.071945 LR 0.001000 Time 0.041009
+2025-05-14 16:49:01,296 - --- validate (epoch=92)-----------
+2025-05-14 16:49:01,297 - 12251 samples (16 per mini-batch)
+2025-05-14 16:49:01,299 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:49:15,258 - Epoch: [92][ 200/ 766] Loss 2.476498 mAP 0.866872
+2025-05-14 16:49:29,442 - Epoch: [92][ 400/ 766] Loss 2.472580 mAP 0.869039
+2025-05-14 16:49:44,918 - Epoch: [92][ 600/ 766] Loss 2.471802 mAP 0.865491
+2025-05-14 16:49:58,675 - Epoch: [92][ 766/ 766] Loss 2.479483 mAP 0.866472
+2025-05-14 16:49:58,725 - ==> mAP: 0.86647 Loss: 2.479
+
+2025-05-14 16:49:58,731 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89]
+2025-05-14 16:49:58,731 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:49:58,754 - 
+
+2025-05-14 16:49:58,755 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:50:07,038 - Epoch: [93][ 200/ 1785] Overall Loss 2.044996 Objective Loss 2.044996 LR 0.001000 Time 0.041404
+2025-05-14 16:50:15,290 - Epoch: [93][ 400/ 1785] Overall Loss 2.065791 Objective Loss 2.065791 LR 0.001000 Time 0.041328
+2025-05-14 16:50:23,544 - Epoch: [93][ 600/ 1785] Overall Loss 2.056821 Objective Loss 2.056821 LR 0.001000 Time 0.041306
+2025-05-14 16:50:31,760 - Epoch: [93][ 800/ 1785] Overall Loss 2.056235 Objective Loss 2.056235 LR 0.001000 Time 0.041247
+2025-05-14 16:50:39,995 - Epoch: [93][ 1000/ 1785] Overall Loss 2.058881 Objective Loss 2.058881 LR 0.001000 Time 0.041231
+2025-05-14 16:50:48,228 - Epoch: [93][ 1200/ 1785] Overall Loss 2.063791 Objective Loss 2.063791 LR 0.001000 Time 0.041219
+2025-05-14 16:50:56,482 - Epoch: [93][ 1400/ 1785] Overall Loss 2.064113 Objective Loss 2.064113 LR 0.001000 Time 0.041225
+2025-05-14 16:51:04,723 - Epoch: [93][ 1600/ 1785] Overall Loss 2.070733 Objective Loss 2.070733 LR 0.001000 Time 0.041221
+2025-05-14 16:51:12,312 - Epoch: [93][ 1785/ 1785] Overall Loss 2.071911 Objective Loss 2.071911 LR 0.001000 Time 0.041200
+2025-05-14 16:51:12,357 - --- validate (epoch=93)-----------
+2025-05-14 16:51:12,358 - 12251 samples (16 per mini-batch)
+2025-05-14 16:51:12,360 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:51:25,896 - Epoch: [93][ 200/ 766] Loss 2.395838 mAP 0.881079
+2025-05-14 16:51:40,080 - Epoch: [93][ 400/ 766] Loss 2.411175 mAP 0.878203
+2025-05-14 16:51:55,797 - Epoch: [93][ 600/ 766] Loss 2.413360 mAP 0.874959
+2025-05-14 16:52:09,790 - Epoch: [93][ 766/ 766] Loss 2.411524 mAP 0.875374
+2025-05-14 16:52:09,828 - ==> mAP: 0.87537 Loss: 2.412
+
+2025-05-14 16:52:09,834 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89]
+2025-05-14 16:52:09,834 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:52:09,865 - 
+
+2025-05-14 16:52:09,865 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:52:18,222 - Epoch: [94][ 200/ 1785] Overall Loss 2.008532 Objective Loss 2.008532 LR 0.001000 Time 0.041772
+2025-05-14 16:52:26,404 - Epoch: [94][ 400/ 1785] Overall Loss 2.022839 Objective Loss 2.022839 LR 0.001000 Time 0.041335
+2025-05-14 16:52:34,599 - Epoch: [94][ 600/ 1785] Overall Loss 2.034189 Objective Loss 2.034189 LR 0.001000 Time 0.041213
+2025-05-14 16:52:42,763 - Epoch: [94][ 800/ 1785] Overall Loss 2.040019 Objective Loss 2.040019 LR 0.001000 Time 0.041113
+2025-05-14 16:52:50,843 - Epoch: [94][ 1000/ 1785] Overall Loss 2.047797 Objective Loss 2.047797 LR 0.001000 Time 0.040968
+2025-05-14 16:52:58,932 - Epoch: [94][ 1200/ 1785] Overall Loss 2.048816 Objective Loss 2.048816 LR 0.001000 Time 0.040880
+2025-05-14 16:53:07,265 - Epoch: [94][ 1400/ 1785] Overall Loss 2.053433 Objective Loss 2.053433 LR 0.001000 Time 0.040991
+2025-05-14 16:53:15,665 - Epoch: [94][ 1600/ 1785] Overall Loss 2.057393 Objective Loss 2.057393 LR 0.001000 Time 0.041116
+2025-05-14 16:53:23,452 - Epoch: [94][ 1785/ 1785] Overall Loss 2.063618 Objective Loss 2.063618 LR 0.001000 Time 0.041216
+2025-05-14 16:53:23,487 - --- validate (epoch=94)-----------
+2025-05-14 16:53:23,488 - 12251 samples (16 per mini-batch)
+2025-05-14 16:53:23,490 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:53:37,420 - Epoch: [94][ 200/ 766] Loss 2.482786 mAP 0.858657
+2025-05-14 16:53:52,000 - Epoch: [94][ 400/ 766] Loss 2.457277 mAP 0.858649
+2025-05-14 16:54:06,984 - Epoch: [94][ 600/ 766] Loss 2.456455 mAP 0.856216
+2025-05-14 16:54:21,031 - Epoch: [94][ 766/ 766] Loss 2.447197 mAP 0.857256
+2025-05-14 16:54:21,072 - ==> mAP: 0.85726 Loss: 2.447
+
+2025-05-14 16:54:21,078 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89]
+2025-05-14 16:54:21,078 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:54:21,102 - 
+
+2025-05-14 16:54:21,103 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:54:29,896 - Epoch: [95][ 200/ 1785] Overall Loss 2.019607 Objective Loss 2.019607 LR 0.001000 Time 0.043956
+2025-05-14 16:54:38,391 - Epoch: [95][ 400/ 1785] Overall Loss 2.029416 Objective Loss 2.029416 LR 0.001000 Time 0.043211
+2025-05-14 16:54:46,693 - Epoch: [95][ 600/ 1785] Overall Loss 2.033923 Objective Loss 2.033923 LR 0.001000 Time 0.042641
+2025-05-14 16:54:55,063 - Epoch: [95][ 800/ 1785] Overall Loss 2.048089 Objective Loss 2.048089 LR 0.001000 Time 0.042441
+2025-05-14 16:55:03,248 - Epoch: [95][ 1000/ 1785] Overall Loss 2.052504 Objective Loss 2.052504 LR 0.001000 Time 0.042136
+2025-05-14 16:55:11,435 - Epoch: [95][ 1200/ 1785] Overall Loss 2.046087 Objective Loss 2.046087 LR 0.001000 Time 0.041934
+2025-05-14 16:55:19,624 - Epoch: [95][ 1400/ 1785] Overall Loss 2.054566 Objective Loss 2.054566 LR 0.001000 Time 0.041792
+2025-05-14 16:55:27,817 - Epoch: [95][ 1600/ 1785] Overall Loss 2.061061 Objective Loss 2.061061 LR 0.001000 Time 0.041688
+2025-05-14 16:55:35,371 - Epoch: [95][ 1785/ 1785] Overall Loss 2.065867 Objective Loss 2.065867 LR 0.001000 Time 0.041598
+2025-05-14 16:55:35,415 - --- validate (epoch=95)-----------
+2025-05-14 16:55:35,416 - 12251 samples (16 per mini-batch)
+2025-05-14 16:55:35,417 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:55:48,843 - Epoch: [95][ 200/ 766] Loss 2.416030 mAP 0.878269
+2025-05-14 16:56:03,120 - Epoch: [95][ 400/ 766] Loss 2.383218 mAP 0.880084
+2025-05-14 16:56:18,500 - Epoch: [95][ 600/ 766] Loss 2.387092 mAP 0.879937
+2025-05-14 16:56:32,093 - Epoch: [95][ 766/ 766] Loss 2.382810 mAP 0.878142
+2025-05-14 16:56:32,132 - ==> mAP: 0.87814 Loss: 2.383
+
+2025-05-14 16:56:32,138 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89]
+2025-05-14 16:56:32,138 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:56:32,162 - 
+
+2025-05-14 16:56:32,162 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:56:40,757 - Epoch: [96][ 200/ 1785] Overall Loss 2.033032 Objective Loss 2.033032 LR 0.001000 Time 0.042958
+2025-05-14 16:56:49,152 - Epoch: [96][ 400/ 1785] Overall Loss 2.037143 Objective Loss 2.037143 LR 0.001000 Time 0.042463
+2025-05-14 16:56:57,521 - Epoch: [96][ 600/ 1785] Overall Loss 2.035074 Objective Loss 2.035074 LR 0.001000 Time 0.042254
+2025-05-14 16:57:06,015 - Epoch: [96][ 800/ 1785] Overall Loss 2.044308 Objective Loss 2.044308 LR 0.001000 Time 0.042306
+2025-05-14 16:57:14,492 - Epoch: [96][ 1000/ 1785] Overall Loss 2.047786 Objective Loss 2.047786 LR 0.001000 Time 0.042320
+2025-05-14 16:57:22,923 - Epoch: [96][ 1200/ 1785] Overall Loss 2.054341 Objective Loss 2.054341 LR 0.001000 Time 0.042291
+2025-05-14 16:57:30,943 - Epoch: [96][ 1400/ 1785] Overall Loss 2.057922 Objective Loss 2.057922 LR 0.001000 Time 0.041977
+2025-05-14 16:57:39,406 - Epoch: [96][ 1600/ 1785] Overall Loss 2.062499 Objective Loss 2.062499 LR 0.001000 Time 0.042018
+2025-05-14 16:57:47,264 - Epoch: [96][ 1785/ 1785] Overall Loss 2.067954 Objective Loss 2.067954 LR 0.001000 Time 0.042064
+2025-05-14 16:57:47,298 - --- validate (epoch=96)-----------
+2025-05-14 16:57:47,298 - 12251 samples (16 per mini-batch)
+2025-05-14 16:57:47,300 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 16:58:01,388 - Epoch: [96][ 200/ 766] Loss 2.442299 mAP 0.873350
+2025-05-14 16:58:16,164 - Epoch: [96][ 400/ 766] Loss 2.442821 mAP 0.868482
+2025-05-14 16:58:32,000 - Epoch: [96][ 600/ 766] Loss 2.431459 mAP 0.873711
+2025-05-14 16:58:46,264 - Epoch: [96][ 766/ 766] Loss 2.439342 mAP 0.870441
+2025-05-14 16:58:46,303 - ==> mAP: 0.87044 Loss: 2.439
+
+2025-05-14 16:58:46,309 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89]
+2025-05-14 16:58:46,309 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 16:58:46,340 - 
+
+2025-05-14 16:58:46,340 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 16:58:54,692 - Epoch: [97][ 200/ 1785] Overall Loss 2.013753 Objective Loss 2.013753 LR 0.001000 Time 0.041744
+2025-05-14 16:59:02,807 - Epoch: [97][ 400/ 1785] Overall Loss 2.021675 Objective Loss 2.021675 LR 0.001000 Time 0.041156
+2025-05-14 16:59:10,764 - Epoch: [97][ 600/ 1785] Overall Loss 2.032572 Objective Loss 2.032572 LR 0.001000 Time 0.040696
+2025-05-14 16:59:18,788 - Epoch: [97][ 800/ 1785] Overall Loss 2.039253 Objective Loss 2.039253 LR 0.001000 Time 0.040549
+2025-05-14 16:59:26,819 - Epoch: [97][ 1000/ 1785] Overall Loss 2.043367 Objective Loss 2.043367 LR 0.001000 Time 0.040468
+2025-05-14 16:59:34,824 - Epoch: [97][ 1200/ 1785] Overall Loss 2.052575 Objective Loss 2.052575 LR 0.001000 Time 0.040392
+2025-05-14 16:59:42,776 - Epoch: [97][ 1400/ 1785] Overall Loss 2.056881 Objective Loss 2.056881 LR 0.001000 Time 0.040301
+2025-05-14 16:59:50,731 - Epoch: [97][ 1600/ 1785] Overall Loss 2.059833 Objective Loss 2.059833 LR 0.001000 Time 0.040234
+2025-05-14 16:59:58,210 - Epoch: [97][ 1785/ 1785] Overall Loss 2.066151 Objective Loss 2.066151 LR 0.001000 Time 0.040253
+2025-05-14 16:59:58,259 - --- validate (epoch=97)-----------
+2025-05-14 16:59:58,260 - 12251 samples (16 per mini-batch)
+2025-05-14 16:59:58,262 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 17:00:12,159 - Epoch: [97][ 200/ 766] Loss 2.378707 mAP 0.876711
+2025-05-14 17:00:26,597 - Epoch: [97][ 400/ 766] Loss 2.379508 mAP 0.874751
+2025-05-14 17:00:41,828 - Epoch: [97][ 600/ 766] Loss 2.385875 mAP 0.875416
+2025-05-14 17:00:55,314 - Epoch: [97][ 766/ 766] Loss 2.385052 mAP 0.875224
+2025-05-14 17:00:55,355 - ==> mAP: 0.87522 Loss: 2.385
+
+2025-05-14 17:00:55,361 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89]
+2025-05-14 17:00:55,361 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar
+2025-05-14 17:00:55,391 - 
+
+2025-05-14 17:00:55,391 - Training epoch: 28548 samples (16 per mini-batch, world size: 1)
+2025-05-14 17:01:03,733 - Epoch: [98][ 200/ 1785] Overall Loss 2.013860 Objective Loss 2.013860 LR 0.001000 Time 0.041698
+2025-05-14 17:01:11,777 - Epoch: [98][ 400/ 1785] Overall Loss 2.029121 Objective Loss 2.029121 LR 0.001000 Time 0.040953
+2025-05-14 17:01:20,063 - Epoch: [98][ 600/ 1785] Overall Loss 2.031842 Objective Loss 2.031842 LR 0.001000 Time 0.041109
+2025-05-14 17:01:28,449 - Epoch: [98][ 800/ 1785] Overall Loss 2.042421 Objective Loss 2.042421 LR 0.001000 Time 0.041313
+2025-05-14 17:01:36,824 - Epoch: [98][ 1000/ 1785] Overall Loss 2.054373 Objective Loss 2.054373 LR 0.001000 Time 0.041423
+2025-05-14 17:01:45,118 - Epoch: [98][ 1200/ 1785] Overall Loss 2.058807 Objective Loss 2.058807 LR 0.001000 Time 0.041430
+2025-05-14 17:01:53,464 - Epoch: [98][ 1400/ 1785] Overall Loss 2.059811 Objective Loss 2.059811 LR 0.001000 Time 0.041471
+2025-05-14 17:02:01,829 - Epoch: [98][ 1600/ 1785] Overall Loss 2.062711 Objective Loss 2.062711 LR 0.001000 Time 0.041515
+2025-05-14 17:02:09,549 - Epoch: [98][ 1785/ 1785] Overall Loss 2.068580 Objective Loss 2.068580 LR 0.001000 Time 0.041536
+2025-05-14 17:02:09,601 - --- validate (epoch=98)-----------
+2025-05-14 17:02:09,602 - 12251 samples (16 per mini-batch)
+2025-05-14 17:02:09,604 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}}
+2025-05-14 17:02:23,325 - Epoch: [98][ 200/ 766] Loss 2.369797 mAP 0.884423
+2025-05-14 17:02:37,817 - Epoch:
[98][ 400/ 766] Loss 2.358428 mAP 0.881344 +2025-05-14 17:02:53,129 - Epoch: [98][ 600/ 766] Loss 2.351649 mAP 0.884488 +2025-05-14 17:03:06,925 - Epoch: [98][ 766/ 766] Loss 2.363336 mAP 0.881939 +2025-05-14 17:03:06,959 - ==> mAP: 0.88194 Loss: 2.363 + +2025-05-14 17:03:06,966 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89] +2025-05-14 17:03:06,966 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 17:03:06,997 - + +2025-05-14 17:03:06,997 - Training epoch: 28548 samples (16 per mini-batch, world size: 1) +2025-05-14 17:03:15,309 - Epoch: [99][ 200/ 1785] Overall Loss 2.005802 Objective Loss 2.005802 LR 0.001000 Time 0.041545 +2025-05-14 17:03:23,435 - Epoch: [99][ 400/ 1785] Overall Loss 2.049847 Objective Loss 2.049847 LR 0.001000 Time 0.041084 +2025-05-14 17:03:31,530 - Epoch: [99][ 600/ 1785] Overall Loss 2.040648 Objective Loss 2.040648 LR 0.001000 Time 0.040878 +2025-05-14 17:03:39,679 - Epoch: [99][ 800/ 1785] Overall Loss 2.043170 Objective Loss 2.043170 LR 0.001000 Time 0.040842 +2025-05-14 17:03:47,870 - Epoch: [99][ 1000/ 1785] Overall Loss 2.047675 Objective Loss 2.047675 LR 0.001000 Time 0.040864 +2025-05-14 17:03:56,184 - Epoch: [99][ 1200/ 1785] Overall Loss 2.055372 Objective Loss 2.055372 LR 0.001000 Time 0.040980 +2025-05-14 17:04:04,544 - Epoch: [99][ 1400/ 1785] Overall Loss 2.058850 Objective Loss 2.058850 LR 0.001000 Time 0.041096 +2025-05-14 17:04:12,894 - Epoch: [99][ 1600/ 1785] Overall Loss 2.058644 Objective Loss 2.058644 LR 0.001000 Time 0.041177 +2025-05-14 17:04:20,636 - Epoch: [99][ 1785/ 1785] Overall Loss 2.062273 Objective Loss 2.062273 LR 0.001000 Time 0.041245 +2025-05-14 17:04:20,682 - --- validate (epoch=99)----------- +2025-05-14 17:04:20,683 - 12251 samples (16 per mini-batch) +2025-05-14 17:04:20,685 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 17:04:34,544 - Epoch: [99][ 200/ 766] Loss 
2.368010 mAP 0.877419 +2025-05-14 17:04:48,911 - Epoch: [99][ 400/ 766] Loss 2.374826 mAP 0.874904 +2025-05-14 17:05:04,024 - Epoch: [99][ 600/ 766] Loss 2.379204 mAP 0.875675 +2025-05-14 17:05:17,683 - Epoch: [99][ 766/ 766] Loss 2.369671 mAP 0.877856 +2025-05-14 17:05:17,717 - ==> mAP: 0.87786 Loss: 2.370 + +2025-05-14 17:05:17,724 - ==> Best [mAP: 0.883855 vloss: 2.377537 Params: 318604 on epoch: 89] +2025-05-14 17:05:17,724 - Saving checkpoint to: logs/2025.05.14-133616/qat_checkpoint.pth.tar +2025-05-14 17:05:17,754 - --- test (ckpt) --------------------- +2025-05-14 17:05:17,755 - 12251 samples (16 per mini-batch) +2025-05-14 17:05:17,757 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 17:05:31,263 - Test: [ 200/ 766] Loss 2.345245 mAP 0.879387 +2025-05-14 17:05:45,957 - Test: [ 400/ 766] Loss 2.373689 mAP 0.876048 +2025-05-14 17:06:01,218 - Test: [ 600/ 766] Loss 2.366248 mAP 0.875822 +2025-05-14 17:06:14,734 - Test: [ 766/ 766] Loss 2.372800 mAP 0.875071 +2025-05-14 17:06:14,770 - ==> mAP: 0.87507 Loss: 2.373 + +2025-05-14 17:06:14,772 - --- test (best) --------------------- +2025-05-14 17:06:14,772 - => loading checkpoint logs/2025.05.14-133616/qat_best.pth.tar +2025-05-14 17:06:14,786 - => Checkpoint contents: ++----------------------+-------------+---------------+ +| Key | Type | Value | +|----------------------+-------------+---------------| +| arch | str | ai85tinierssd | +| compression_sched | dict | | +| epoch | int | 89 | +| extras | dict | | +| optimizer_state_dict | dict | | +| optimizer_type | type | Adam | +| state_dict | OrderedDict | | ++----------------------+-------------+---------------+ + +2025-05-14 17:06:14,786 - => Checkpoint['extras'] contents: ++--------------+--------+---------+ +| Key | Type | Value | +|--------------+--------+---------| +| best_epoch | int | 89 | +| best_mAP | Tensor | | +| best_top1 | int | 0 | +| current_mAP | Tensor | | +| 
current_top1 | int | 0 | ++--------------+--------+---------+ + +2025-05-14 17:06:14,787 - Loaded compression schedule from checkpoint (epoch 89) +2025-05-14 17:06:14,800 - => loaded 'state_dict' from checkpoint 'logs/2025.05.14-133616/qat_best.pth.tar' +2025-05-14 17:06:14,801 - 12251 samples (16 per mini-batch) +2025-05-14 17:06:14,802 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.2, 'max_overlap': 0.3, 'top_k': 20}} +2025-05-14 17:06:28,959 - Test: [ 200/ 766] Loss 2.360796 mAP 0.889725 +2025-05-14 17:06:43,420 - Test: [ 400/ 766] Loss 2.365856 mAP 0.884790 +2025-05-14 17:06:58,914 - Test: [ 600/ 766] Loss 2.373269 mAP 0.885296 +2025-05-14 17:07:12,860 - Test: [ 766/ 766] Loss 2.377396 mAP 0.882094 +2025-05-14 17:07:12,904 - ==> mAP: 0.88209 Loss: 2.377 + +2025-05-14 17:07:12,954 - +2025-05-14 17:07:12,955 - Log file for this run: /home/asyaturhal/ai8x-training/logs/2025.05.14-133616/2025.05.14-133616.log diff --git a/trained/ai85-svhn-tinierssd-qat.pth.tar b/trained/ai85-svhn-tinierssd-qat.pth.tar new file mode 100644 index 00000000..5aedde6f Binary files /dev/null and b/trained/ai85-svhn-tinierssd-qat.pth.tar differ diff --git a/trained/ai85-svhn-tinierssd-qat8-q.pth.tar b/trained/ai85-svhn-tinierssd-qat8-q.pth.tar index 97d6b798..0e24254e 100644 Binary files a/trained/ai85-svhn-tinierssd-qat8-q.pth.tar and b/trained/ai85-svhn-tinierssd-qat8-q.pth.tar differ diff --git a/trained/ai87-pascalvoc-fpndetector-qat8-q.pth.tar b/trained/ai87-pascalvoc-fpndetector-qat8-q.pth.tar index ec4b4ca7..aa6d8490 100644 Binary files a/trained/ai87-pascalvoc-fpndetector-qat8-q.pth.tar and b/trained/ai87-pascalvoc-fpndetector-qat8-q.pth.tar differ diff --git a/trained/ai87-pascalvoc-fpndetector-qat8.log b/trained/ai87-pascalvoc-fpndetector-qat8.log index 1273837a..423d0c53 100644 --- a/trained/ai87-pascalvoc-fpndetector-qat8.log +++ b/trained/ai87-pascalvoc-fpndetector-qat8.log @@ -1,9618 +1,8489 @@ -2023-04-26 18:43:48,348 - Log file for this run: 
/home/seldauyanik/Workspace/ai8x-training/logs/2023.04.26-184348/2023.04.26-184348.log -2023-04-26 18:43:49,930 - Optimizer Type: -2023-04-26 18:43:49,930 - Optimizer Args: {'lr': 0.002, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 1e-06, 'amsgrad': False} -2023-04-26 18:43:49,958 - Dataset sizes: +2025-05-16 17:57:07,977 - Log file for this run: /home/asyaturhal/ai8x-training/logs/fpndetector___2025.05.16-175707/fpndetector___2025.05.16-175707.log +2025-05-16 17:57:07,978 - The open file limit is 1024. Please raise the limit (see documentation). +2025-05-16 17:57:07,980 - Configuring device: MAX78002, simulate=False. +2025-05-16 17:57:08,595 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 17:57:08,735 - => loading checkpoint /home/asyaturhal/ai8x-training/logs/fpndetector___2025.05.16-110229/fpndetector_checkpoint.pth.tar +2025-05-16 17:57:08,819 - => Checkpoint contents: ++----------------------+-------------+-----------------+ +| Key | Type | Value | +|----------------------+-------------+-----------------| +| arch | str | ai87fpndetector | +| compression_sched | dict | | +| epoch | int | 63 | +| extras | dict | | +| optimizer_state_dict | dict | | +| optimizer_type | type | Adam | +| state_dict | OrderedDict | | ++----------------------+-------------+-----------------+ + +2025-05-16 17:57:08,820 - => Checkpoint['extras'] contents: ++--------------+--------+---------+ +| Key | Type | Value | +|--------------+--------+---------| +| best_epoch | int | 63 | +| best_mAP | Tensor | | +| best_top1 | int | 0 | +| current_mAP | Tensor | | +| current_top1 | int | 0 | ++--------------+--------+---------+ + +2025-05-16 17:57:08,823 - Loaded compression schedule from checkpoint (epoch 63) +2025-05-16 17:57:08,938 - Optimizer of type was loaded from checkpoint +2025-05-16 17:57:08,939 - Optimizer Args: {'lr': 0.00025, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 1e-06, 'amsgrad': 
False, 'maximize': False, 'foreach': None, 'capturable': False, 'differentiable': False, 'fused': None, 'initial_lr': 0.001} +2025-05-16 17:57:08,939 - => loaded checkpoint '/home/asyaturhal/ai8x-training/logs/fpndetector___2025.05.16-110229/fpndetector_checkpoint.pth.tar' (epoch 63) +2025-05-16 17:57:09,502 - Reading compression schedule from: policies/schedule-pascalvoc.yaml +2025-05-16 17:57:09,519 - torch.compile() not available, using "eager" mode +2025-05-16 17:57:09,519 - Use distributed training to enable torch.compile() with multiple GPUs +2025-05-16 17:57:09,519 - Dataset sizes: training=16551 validation=4952 test=4952 -2023-04-26 18:43:49,958 - Reading compression schedule from: policies/schedule-pascalvoc.yaml -2023-04-26 18:43:49,963 - - -2023-04-26 18:43:49,963 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 18:44:01,396 - Epoch: [0][ 50/ 518] Overall Loss 7.581112 Objective Loss 7.581112 LR 0.002000 Time 0.228623 -2023-04-26 18:44:11,478 - Epoch: [0][ 100/ 518] Overall Loss 6.786543 Objective Loss 6.786543 LR 0.002000 Time 0.215111 -2023-04-26 18:44:21,551 - Epoch: [0][ 150/ 518] Overall Loss 6.485756 Objective Loss 6.485756 LR 0.002000 Time 0.210545 -2023-04-26 18:44:31,671 - Epoch: [0][ 200/ 518] Overall Loss 6.324189 Objective Loss 6.324189 LR 0.002000 Time 0.208506 -2023-04-26 18:44:41,703 - Epoch: [0][ 250/ 518] Overall Loss 6.215689 Objective Loss 6.215689 LR 0.002000 Time 0.206925 -2023-04-26 18:44:51,848 - Epoch: [0][ 300/ 518] Overall Loss 6.137906 Objective Loss 6.137906 LR 0.002000 Time 0.206247 -2023-04-26 18:45:01,964 - Epoch: [0][ 350/ 518] Overall Loss 6.079170 Objective Loss 6.079170 LR 0.002000 Time 0.205681 -2023-04-26 18:45:12,098 - Epoch: [0][ 400/ 518] Overall Loss 6.035478 Objective Loss 6.035478 LR 0.002000 Time 0.205302 -2023-04-26 18:45:22,227 - Epoch: [0][ 450/ 518] Overall Loss 5.992094 Objective Loss 5.992094 LR 0.002000 Time 0.204996 -2023-04-26 18:45:32,251 - Epoch: [0][ 500/ 518] Overall Loss 5.954386 
Objective Loss 5.954386 LR 0.002000 Time 0.204542 -2023-04-26 18:45:35,743 - Epoch: [0][ 518/ 518] Overall Loss 5.943826 Objective Loss 5.943826 LR 0.002000 Time 0.204174 -2023-04-26 18:45:35,811 - --- validate (epoch=0)----------- -2023-04-26 18:45:35,812 - 4952 samples (32 per mini-batch) -2023-04-26 18:45:41,263 - Epoch: [0][ 50/ 155] Loss 5.680794 mAP 0.000000 -2023-04-26 18:45:46,387 - Epoch: [0][ 100/ 155] Loss 5.710366 mAP 0.000000 -2023-04-26 18:45:51,463 - Epoch: [0][ 150/ 155] Loss 5.700281 mAP 0.000000 -2023-04-26 18:45:51,917 - Epoch: [0][ 155/ 155] Loss 5.696242 mAP 0.000000 -2023-04-26 18:45:51,979 - ==> mAP: 0.00000 Loss: 5.696 - -2023-04-26 18:45:51,982 - ==> Best [mAP: 0.000000 vloss: 5.696242 Sparsity:0.00 Params: 2177088 on epoch: 0] -2023-04-26 18:45:51,982 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 18:45:52,022 - - -2023-04-26 18:45:52,022 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 18:46:02,849 - Epoch: [1][ 50/ 518] Overall Loss 5.506161 Objective Loss 5.506161 LR 0.002000 Time 0.216471 -2023-04-26 18:46:13,031 - Epoch: [1][ 100/ 518] Overall Loss 5.493601 Objective Loss 5.493601 LR 0.002000 Time 0.210041 -2023-04-26 18:46:23,132 - Epoch: [1][ 150/ 518] Overall Loss 5.511360 Objective Loss 5.511360 LR 0.002000 Time 0.207356 -2023-04-26 18:46:33,304 - Epoch: [1][ 200/ 518] Overall Loss 5.523178 Objective Loss 5.523178 LR 0.002000 Time 0.206367 -2023-04-26 18:46:43,506 - Epoch: [1][ 250/ 518] Overall Loss 5.508800 Objective Loss 5.508800 LR 0.002000 Time 0.205897 -2023-04-26 18:46:53,699 - Epoch: [1][ 300/ 518] Overall Loss 5.483558 Objective Loss 5.483558 LR 0.002000 Time 0.205552 -2023-04-26 18:47:03,787 - Epoch: [1][ 350/ 518] Overall Loss 5.475019 Objective Loss 5.475019 LR 0.002000 Time 0.205007 -2023-04-26 18:47:13,897 - Epoch: [1][ 400/ 518] Overall Loss 5.457925 Objective Loss 5.457925 LR 0.002000 Time 0.204652 -2023-04-26 18:47:24,001 - Epoch: [1][ 450/ 518] Overall Loss 5.445572 
Objective Loss 5.445572 LR 0.002000 Time 0.204361 -2023-04-26 18:47:34,164 - Epoch: [1][ 500/ 518] Overall Loss 5.431971 Objective Loss 5.431971 LR 0.002000 Time 0.204248 -2023-04-26 18:47:37,700 - Epoch: [1][ 518/ 518] Overall Loss 5.429640 Objective Loss 5.429640 LR 0.002000 Time 0.203977 -2023-04-26 18:47:37,772 - --- validate (epoch=1)----------- -2023-04-26 18:47:37,772 - 4952 samples (32 per mini-batch) -2023-04-26 18:47:43,291 - Epoch: [1][ 50/ 155] Loss 5.518327 mAP 0.000000 -2023-04-26 18:47:48,514 - Epoch: [1][ 100/ 155] Loss 5.516963 mAP 0.000000 -2023-04-26 18:47:53,745 - Epoch: [1][ 150/ 155] Loss 5.495558 mAP 0.000000 -2023-04-26 18:47:54,202 - Epoch: [1][ 155/ 155] Loss 5.498510 mAP 0.000000 -2023-04-26 18:47:54,270 - ==> mAP: 0.00000 Loss: 5.499 - -2023-04-26 18:47:54,274 - ==> Best [mAP: 0.000000 vloss: 5.498510 Sparsity:0.00 Params: 2177088 on epoch: 1] -2023-04-26 18:47:54,274 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 18:47:54,327 - - -2023-04-26 18:47:54,327 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 18:48:05,180 - Epoch: [2][ 50/ 518] Overall Loss 5.289046 Objective Loss 5.289046 LR 0.002000 Time 0.217002 -2023-04-26 18:48:15,349 - Epoch: [2][ 100/ 518] Overall Loss 5.266754 Objective Loss 5.266754 LR 0.002000 Time 0.210174 -2023-04-26 18:48:25,580 - Epoch: [2][ 150/ 518] Overall Loss 5.231322 Objective Loss 5.231322 LR 0.002000 Time 0.208311 -2023-04-26 18:48:35,801 - Epoch: [2][ 200/ 518] Overall Loss 5.226539 Objective Loss 5.226539 LR 0.002000 Time 0.207330 -2023-04-26 18:48:45,946 - Epoch: [2][ 250/ 518] Overall Loss 5.226818 Objective Loss 5.226818 LR 0.002000 Time 0.206436 -2023-04-26 18:48:56,108 - Epoch: [2][ 300/ 518] Overall Loss 5.207170 Objective Loss 5.207170 LR 0.002000 Time 0.205899 -2023-04-26 18:49:06,204 - Epoch: [2][ 350/ 518] Overall Loss 5.188172 Objective Loss 5.188172 LR 0.002000 Time 0.205327 -2023-04-26 18:49:16,402 - Epoch: [2][ 400/ 518] Overall Loss 5.175901 
Objective Loss 5.175901 LR 0.002000 Time 0.205151 -2023-04-26 18:49:26,611 - Epoch: [2][ 450/ 518] Overall Loss 5.165071 Objective Loss 5.165071 LR 0.002000 Time 0.205039 -2023-04-26 18:49:36,857 - Epoch: [2][ 500/ 518] Overall Loss 5.159868 Objective Loss 5.159868 LR 0.002000 Time 0.205025 -2023-04-26 18:49:40,430 - Epoch: [2][ 518/ 518] Overall Loss 5.160328 Objective Loss 5.160328 LR 0.002000 Time 0.204797 -2023-04-26 18:49:40,502 - --- validate (epoch=2)----------- -2023-04-26 18:49:40,502 - 4952 samples (32 per mini-batch) -2023-04-26 18:49:46,016 - Epoch: [2][ 50/ 155] Loss 5.206715 mAP 0.000170 -2023-04-26 18:49:51,206 - Epoch: [2][ 100/ 155] Loss 5.168171 mAP 0.000465 -2023-04-26 18:49:56,345 - Epoch: [2][ 150/ 155] Loss 5.182317 mAP 0.000478 -2023-04-26 18:49:56,803 - Epoch: [2][ 155/ 155] Loss 5.184851 mAP 0.000576 -2023-04-26 18:49:56,862 - ==> mAP: 0.00058 Loss: 5.185 - -2023-04-26 18:49:56,866 - ==> Best [mAP: 0.000576 vloss: 5.184851 Sparsity:0.00 Params: 2177088 on epoch: 2] -2023-04-26 18:49:56,866 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 18:49:56,920 - - -2023-04-26 18:49:56,920 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 18:50:07,801 - Epoch: [3][ 50/ 518] Overall Loss 5.062207 Objective Loss 5.062207 LR 0.002000 Time 0.217572 -2023-04-26 18:50:17,924 - Epoch: [3][ 100/ 518] Overall Loss 5.082694 Objective Loss 5.082694 LR 0.002000 Time 0.209997 -2023-04-26 18:50:28,038 - Epoch: [3][ 150/ 518] Overall Loss 5.053928 Objective Loss 5.053928 LR 0.002000 Time 0.207413 -2023-04-26 18:50:38,266 - Epoch: [3][ 200/ 518] Overall Loss 5.035621 Objective Loss 5.035621 LR 0.002000 Time 0.206692 -2023-04-26 18:50:48,414 - Epoch: [3][ 250/ 518] Overall Loss 5.007172 Objective Loss 5.007172 LR 0.002000 Time 0.205941 -2023-04-26 18:50:58,609 - Epoch: [3][ 300/ 518] Overall Loss 5.004353 Objective Loss 5.004353 LR 0.002000 Time 0.205594 -2023-04-26 18:51:08,750 - Epoch: [3][ 350/ 518] Overall Loss 4.996206 
Objective Loss 4.996206 LR 0.002000 Time 0.205194 -2023-04-26 18:51:18,936 - Epoch: [3][ 400/ 518] Overall Loss 4.991833 Objective Loss 4.991833 LR 0.002000 Time 0.205006 -2023-04-26 18:51:29,081 - Epoch: [3][ 450/ 518] Overall Loss 4.982553 Objective Loss 4.982553 LR 0.002000 Time 0.204769 -2023-04-26 18:51:39,208 - Epoch: [3][ 500/ 518] Overall Loss 4.978779 Objective Loss 4.978779 LR 0.002000 Time 0.204543 -2023-04-26 18:51:42,700 - Epoch: [3][ 518/ 518] Overall Loss 4.978215 Objective Loss 4.978215 LR 0.002000 Time 0.204176 -2023-04-26 18:51:42,772 - --- validate (epoch=3)----------- -2023-04-26 18:51:42,773 - 4952 samples (32 per mini-batch) -2023-04-26 18:51:48,405 - Epoch: [3][ 50/ 155] Loss 5.365351 mAP 0.006468 -2023-04-26 18:51:53,628 - Epoch: [3][ 100/ 155] Loss 5.345745 mAP 0.004709 -2023-04-26 18:51:58,791 - Epoch: [3][ 150/ 155] Loss 5.341064 mAP 0.005464 -2023-04-26 18:51:59,248 - Epoch: [3][ 155/ 155] Loss 5.341885 mAP 0.005324 -2023-04-26 18:51:59,313 - ==> mAP: 0.00532 Loss: 5.342 - -2023-04-26 18:51:59,317 - ==> Best [mAP: 0.005324 vloss: 5.341885 Sparsity:0.00 Params: 2177088 on epoch: 3] -2023-04-26 18:51:59,317 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 18:51:59,370 - - -2023-04-26 18:51:59,371 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 18:52:10,153 - Epoch: [4][ 50/ 518] Overall Loss 4.907613 Objective Loss 4.907613 LR 0.002000 Time 0.215580 -2023-04-26 18:52:20,347 - Epoch: [4][ 100/ 518] Overall Loss 4.901172 Objective Loss 4.901172 LR 0.002000 Time 0.209717 -2023-04-26 18:52:30,455 - Epoch: [4][ 150/ 518] Overall Loss 4.891361 Objective Loss 4.891361 LR 0.002000 Time 0.207190 -2023-04-26 18:52:40,556 - Epoch: [4][ 200/ 518] Overall Loss 4.875586 Objective Loss 4.875586 LR 0.002000 Time 0.205887 -2023-04-26 18:52:50,706 - Epoch: [4][ 250/ 518] Overall Loss 4.869341 Objective Loss 4.869341 LR 0.002000 Time 0.205305 -2023-04-26 18:53:00,853 - Epoch: [4][ 300/ 518] Overall Loss 4.862331 
Objective Loss 4.862331 LR 0.002000 Time 0.204904 -2023-04-26 18:53:10,972 - Epoch: [4][ 350/ 518] Overall Loss 4.855282 Objective Loss 4.855282 LR 0.002000 Time 0.204537 -2023-04-26 18:53:21,104 - Epoch: [4][ 400/ 518] Overall Loss 4.853889 Objective Loss 4.853889 LR 0.002000 Time 0.204297 -2023-04-26 18:53:31,220 - Epoch: [4][ 450/ 518] Overall Loss 4.844303 Objective Loss 4.844303 LR 0.002000 Time 0.204074 -2023-04-26 18:53:41,410 - Epoch: [4][ 500/ 518] Overall Loss 4.842758 Objective Loss 4.842758 LR 0.002000 Time 0.204044 -2023-04-26 18:53:44,957 - Epoch: [4][ 518/ 518] Overall Loss 4.841576 Objective Loss 4.841576 LR 0.002000 Time 0.203799 -2023-04-26 18:53:45,028 - --- validate (epoch=4)----------- -2023-04-26 18:53:45,029 - 4952 samples (32 per mini-batch) -2023-04-26 18:53:50,655 - Epoch: [4][ 50/ 155] Loss 5.044647 mAP 0.014012 -2023-04-26 18:53:55,863 - Epoch: [4][ 100/ 155] Loss 5.024860 mAP 0.013300 -2023-04-26 18:54:01,053 - Epoch: [4][ 150/ 155] Loss 5.035821 mAP 0.012227 -2023-04-26 18:54:01,516 - Epoch: [4][ 155/ 155] Loss 5.036174 mAP 0.012532 -2023-04-26 18:54:01,583 - ==> mAP: 0.01253 Loss: 5.036 - -2023-04-26 18:54:01,587 - ==> Best [mAP: 0.012532 vloss: 5.036174 Sparsity:0.00 Params: 2177088 on epoch: 4] -2023-04-26 18:54:01,587 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 18:54:01,640 - - -2023-04-26 18:54:01,640 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 18:54:12,674 - Epoch: [5][ 50/ 518] Overall Loss 4.813672 Objective Loss 4.813672 LR 0.002000 Time 0.220623 -2023-04-26 18:54:22,905 - Epoch: [5][ 100/ 518] Overall Loss 4.788561 Objective Loss 4.788561 LR 0.002000 Time 0.212601 -2023-04-26 18:54:32,990 - Epoch: [5][ 150/ 518] Overall Loss 4.775404 Objective Loss 4.775404 LR 0.002000 Time 0.208956 -2023-04-26 18:54:43,072 - Epoch: [5][ 200/ 518] Overall Loss 4.777132 Objective Loss 4.777132 LR 0.002000 Time 0.207118 -2023-04-26 18:54:53,251 - Epoch: [5][ 250/ 518] Overall Loss 4.772787 
Objective Loss 4.772787 LR 0.002000 Time 0.206405 -2023-04-26 18:55:03,379 - Epoch: [5][ 300/ 518] Overall Loss 4.763455 Objective Loss 4.763455 LR 0.002000 Time 0.205761 -2023-04-26 18:55:13,570 - Epoch: [5][ 350/ 518] Overall Loss 4.761734 Objective Loss 4.761734 LR 0.002000 Time 0.205479 -2023-04-26 18:55:23,677 - Epoch: [5][ 400/ 518] Overall Loss 4.754184 Objective Loss 4.754184 LR 0.002000 Time 0.205057 -2023-04-26 18:55:33,827 - Epoch: [5][ 450/ 518] Overall Loss 4.747047 Objective Loss 4.747047 LR 0.002000 Time 0.204825 -2023-04-26 18:55:43,947 - Epoch: [5][ 500/ 518] Overall Loss 4.745136 Objective Loss 4.745136 LR 0.002000 Time 0.204578 -2023-04-26 18:55:47,483 - Epoch: [5][ 518/ 518] Overall Loss 4.748110 Objective Loss 4.748110 LR 0.002000 Time 0.204296 -2023-04-26 18:55:47,554 - --- validate (epoch=5)----------- -2023-04-26 18:55:47,554 - 4952 samples (32 per mini-batch) -2023-04-26 18:55:53,243 - Epoch: [5][ 50/ 155] Loss 4.921851 mAP 0.029119 -2023-04-26 18:55:58,547 - Epoch: [5][ 100/ 155] Loss 4.927791 mAP 0.030656 -2023-04-26 18:56:03,804 - Epoch: [5][ 150/ 155] Loss 4.930954 mAP 0.033773 -2023-04-26 18:56:04,269 - Epoch: [5][ 155/ 155] Loss 4.933616 mAP 0.032943 -2023-04-26 18:56:04,351 - ==> mAP: 0.03294 Loss: 4.934 - -2023-04-26 18:56:04,355 - ==> Best [mAP: 0.032943 vloss: 4.933616 Sparsity:0.00 Params: 2177088 on epoch: 5] -2023-04-26 18:56:04,355 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 18:56:04,407 - - -2023-04-26 18:56:04,407 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 18:56:15,377 - Epoch: [6][ 50/ 518] Overall Loss 4.761142 Objective Loss 4.761142 LR 0.002000 Time 0.219336 -2023-04-26 18:56:25,530 - Epoch: [6][ 100/ 518] Overall Loss 4.713612 Objective Loss 4.713612 LR 0.002000 Time 0.211178 -2023-04-26 18:56:35,647 - Epoch: [6][ 150/ 518] Overall Loss 4.683662 Objective Loss 4.683662 LR 0.002000 Time 0.208225 -2023-04-26 18:56:45,908 - Epoch: [6][ 200/ 518] Overall Loss 4.681057 
Objective Loss 4.681057 LR 0.002000 Time 0.207467 -2023-04-26 18:56:56,087 - Epoch: [6][ 250/ 518] Overall Loss 4.673279 Objective Loss 4.673279 LR 0.002000 Time 0.206681 -2023-04-26 18:57:06,321 - Epoch: [6][ 300/ 518] Overall Loss 4.665199 Objective Loss 4.665199 LR 0.002000 Time 0.206341 -2023-04-26 18:57:16,459 - Epoch: [6][ 350/ 518] Overall Loss 4.665004 Objective Loss 4.665004 LR 0.002000 Time 0.205825 -2023-04-26 18:57:26,684 - Epoch: [6][ 400/ 518] Overall Loss 4.662627 Objective Loss 4.662627 LR 0.002000 Time 0.205655 -2023-04-26 18:57:36,828 - Epoch: [6][ 450/ 518] Overall Loss 4.649736 Objective Loss 4.649736 LR 0.002000 Time 0.205344 -2023-04-26 18:57:46,919 - Epoch: [6][ 500/ 518] Overall Loss 4.650286 Objective Loss 4.650286 LR 0.002000 Time 0.204989 -2023-04-26 18:57:50,486 - Epoch: [6][ 518/ 518] Overall Loss 4.654031 Objective Loss 4.654031 LR 0.002000 Time 0.204750 -2023-04-26 18:57:50,557 - --- validate (epoch=6)----------- -2023-04-26 18:57:50,558 - 4952 samples (32 per mini-batch) -2023-04-26 18:57:56,131 - Epoch: [6][ 50/ 155] Loss 4.963509 mAP 0.014467 -2023-04-26 18:58:01,369 - Epoch: [6][ 100/ 155] Loss 4.986738 mAP 0.016083 -2023-04-26 18:58:06,569 - Epoch: [6][ 150/ 155] Loss 5.008911 mAP 0.014756 -2023-04-26 18:58:07,031 - Epoch: [6][ 155/ 155] Loss 5.012071 mAP 0.014594 -2023-04-26 18:58:07,103 - ==> mAP: 0.01459 Loss: 5.012 - -2023-04-26 18:58:07,106 - ==> Best [mAP: 0.032943 vloss: 4.933616 Sparsity:0.00 Params: 2177088 on epoch: 5] -2023-04-26 18:58:07,106 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 18:58:07,144 - - -2023-04-26 18:58:07,144 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 18:58:17,953 - Epoch: [7][ 50/ 518] Overall Loss 4.582747 Objective Loss 4.582747 LR 0.002000 Time 0.216120 -2023-04-26 18:58:28,101 - Epoch: [7][ 100/ 518] Overall Loss 4.596874 Objective Loss 4.596874 LR 0.002000 Time 0.209524 -2023-04-26 18:58:38,312 - Epoch: [7][ 150/ 518] Overall Loss 4.591949 
Objective Loss 4.591949 LR 0.002000 Time 0.207750 -2023-04-26 18:58:48,444 - Epoch: [7][ 200/ 518] Overall Loss 4.569472 Objective Loss 4.569472 LR 0.002000 Time 0.206464 -2023-04-26 18:58:58,595 - Epoch: [7][ 250/ 518] Overall Loss 4.567704 Objective Loss 4.567704 LR 0.002000 Time 0.205766 -2023-04-26 18:59:08,735 - Epoch: [7][ 300/ 518] Overall Loss 4.563120 Objective Loss 4.563120 LR 0.002000 Time 0.205267 -2023-04-26 18:59:18,862 - Epoch: [7][ 350/ 518] Overall Loss 4.561089 Objective Loss 4.561089 LR 0.002000 Time 0.204875 -2023-04-26 18:59:28,995 - Epoch: [7][ 400/ 518] Overall Loss 4.561284 Objective Loss 4.561284 LR 0.002000 Time 0.204593 -2023-04-26 18:59:39,055 - Epoch: [7][ 450/ 518] Overall Loss 4.562529 Objective Loss 4.562529 LR 0.002000 Time 0.204212 -2023-04-26 18:59:49,213 - Epoch: [7][ 500/ 518] Overall Loss 4.566850 Objective Loss 4.566850 LR 0.002000 Time 0.204104 -2023-04-26 18:59:52,705 - Epoch: [7][ 518/ 518] Overall Loss 4.564621 Objective Loss 4.564621 LR 0.002000 Time 0.203752 -2023-04-26 18:59:52,774 - --- validate (epoch=7)----------- -2023-04-26 18:59:52,775 - 4952 samples (32 per mini-batch) -2023-04-26 18:59:58,560 - Epoch: [7][ 50/ 155] Loss 5.062050 mAP 0.034304 -2023-04-26 19:00:03,983 - Epoch: [7][ 100/ 155] Loss 5.053016 mAP 0.037425 -2023-04-26 19:00:09,375 - Epoch: [7][ 150/ 155] Loss 5.071941 mAP 0.037422 -2023-04-26 19:00:09,855 - Epoch: [7][ 155/ 155] Loss 5.073614 mAP 0.037261 -2023-04-26 19:00:09,915 - ==> mAP: 0.03726 Loss: 5.074 - -2023-04-26 19:00:09,918 - ==> Best [mAP: 0.037261 vloss: 5.073614 Sparsity:0.00 Params: 2177088 on epoch: 7] -2023-04-26 19:00:09,919 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:00:09,981 - - -2023-04-26 19:00:09,981 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:00:20,959 - Epoch: [8][ 50/ 518] Overall Loss 4.502025 Objective Loss 4.502025 LR 0.002000 Time 0.219507 -2023-04-26 19:00:31,062 - Epoch: [8][ 100/ 518] Overall Loss 4.501439 
Objective Loss 4.501439 LR 0.002000 Time 0.210761 -2023-04-26 19:00:41,230 - Epoch: [8][ 150/ 518] Overall Loss 4.510735 Objective Loss 4.510735 LR 0.002000 Time 0.208284 -2023-04-26 19:00:51,408 - Epoch: [8][ 200/ 518] Overall Loss 4.524754 Objective Loss 4.524754 LR 0.002000 Time 0.207094 -2023-04-26 19:01:01,587 - Epoch: [8][ 250/ 518] Overall Loss 4.506519 Objective Loss 4.506519 LR 0.002000 Time 0.206386 -2023-04-26 19:01:11,768 - Epoch: [8][ 300/ 518] Overall Loss 4.502182 Objective Loss 4.502182 LR 0.002000 Time 0.205918 -2023-04-26 19:01:21,860 - Epoch: [8][ 350/ 518] Overall Loss 4.504581 Objective Loss 4.504581 LR 0.002000 Time 0.205332 -2023-04-26 19:01:31,999 - Epoch: [8][ 400/ 518] Overall Loss 4.501232 Objective Loss 4.501232 LR 0.002000 Time 0.205010 -2023-04-26 19:01:42,044 - Epoch: [8][ 450/ 518] Overall Loss 4.495451 Objective Loss 4.495451 LR 0.002000 Time 0.204548 -2023-04-26 19:01:52,138 - Epoch: [8][ 500/ 518] Overall Loss 4.499646 Objective Loss 4.499646 LR 0.002000 Time 0.204278 -2023-04-26 19:01:55,685 - Epoch: [8][ 518/ 518] Overall Loss 4.495841 Objective Loss 4.495841 LR 0.002000 Time 0.204026 -2023-04-26 19:01:55,756 - --- validate (epoch=8)----------- -2023-04-26 19:01:55,756 - 4952 samples (32 per mini-batch) -2023-04-26 19:02:01,534 - Epoch: [8][ 50/ 155] Loss 4.721816 mAP 0.058889 -2023-04-26 19:02:07,038 - Epoch: [8][ 100/ 155] Loss 4.740076 mAP 0.057760 -2023-04-26 19:02:12,490 - Epoch: [8][ 150/ 155] Loss 4.747500 mAP 0.055603 -2023-04-26 19:02:12,968 - Epoch: [8][ 155/ 155] Loss 4.744856 mAP 0.055541 -2023-04-26 19:02:13,041 - ==> mAP: 0.05554 Loss: 4.745 - -2023-04-26 19:02:13,045 - ==> Best [mAP: 0.055541 vloss: 4.744856 Sparsity:0.00 Params: 2177088 on epoch: 8] -2023-04-26 19:02:13,045 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:02:13,098 - - -2023-04-26 19:02:13,098 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:02:24,017 - Epoch: [9][ 50/ 518] Overall Loss 4.460543 
Objective Loss 4.460543 LR 0.002000 Time 0.218337
-2023-04-26 19:02:34,170 - Epoch: [9][ 100/ 518] Overall Loss 4.445796 Objective Loss 4.445796 LR 0.002000 Time 0.210683
-2023-04-26 19:02:44,444 - Epoch: [9][ 150/ 518] Overall Loss 4.418630 Objective Loss 4.418630 LR 0.002000 Time 0.208936
-2023-04-26 19:02:54,556 - Epoch: [9][ 200/ 518] Overall Loss 4.430761 Objective Loss 4.430761 LR 0.002000 Time 0.207255
-2023-04-26 19:03:04,703 - Epoch: [9][ 250/ 518] Overall Loss 4.417511 Objective Loss 4.417511 LR 0.002000 Time 0.206383
-2023-04-26 19:03:14,999 - Epoch: [9][ 300/ 518] Overall Loss 4.421122 Objective Loss 4.421122 LR 0.002000 Time 0.206301
-2023-04-26 19:03:25,170 - Epoch: [9][ 350/ 518] Overall Loss 4.422536 Objective Loss 4.422536 LR 0.002000 Time 0.205885
-2023-04-26 19:03:35,336 - Epoch: [9][ 400/ 518] Overall Loss 4.423683 Objective Loss 4.423683 LR 0.002000 Time 0.205560
-2023-04-26 19:03:45,524 - Epoch: [9][ 450/ 518] Overall Loss 4.419972 Objective Loss 4.419972 LR 0.002000 Time 0.205356
-2023-04-26 19:03:55,698 - Epoch: [9][ 500/ 518] Overall Loss 4.418808 Objective Loss 4.418808 LR 0.002000 Time 0.205165
-2023-04-26 19:03:59,277 - Epoch: [9][ 518/ 518] Overall Loss 4.417766 Objective Loss 4.417766 LR 0.002000 Time 0.204945
-2023-04-26 19:03:59,348 - --- validate (epoch=9)-----------
-2023-04-26 19:03:59,348 - 4952 samples (32 per mini-batch)
-2023-04-26 19:04:05,375 - Epoch: [9][ 50/ 155] Loss 5.143822 mAP 0.064267
-2023-04-26 19:04:11,076 - Epoch: [9][ 100/ 155] Loss 5.134260 mAP 0.061424
-2023-04-26 19:04:16,802 - Epoch: [9][ 150/ 155] Loss 5.135877 mAP 0.060140
-2023-04-26 19:04:17,303 - Epoch: [9][ 155/ 155] Loss 5.135221 mAP 0.059388
-2023-04-26 19:04:17,372 - ==> mAP: 0.05939 Loss: 5.135
-
-2023-04-26 19:04:17,375 - ==> Best [mAP: 0.059388 vloss: 5.135221 Sparsity:0.00 Params: 2177088 on epoch: 9]
-2023-04-26 19:04:17,375 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:04:17,428 - 
-
-2023-04-26 19:04:17,428 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:04:28,420 - Epoch: [10][ 50/ 518] Overall Loss 4.393080 Objective Loss 4.393080 LR 0.002000 Time 0.219771
-2023-04-26 19:04:38,628 - Epoch: [10][ 100/ 518] Overall Loss 4.355729 Objective Loss 4.355729 LR 0.002000 Time 0.211955
-2023-04-26 19:04:48,910 - Epoch: [10][ 150/ 518] Overall Loss 4.363539 Objective Loss 4.363539 LR 0.002000 Time 0.209837
-2023-04-26 19:04:59,203 - Epoch: [10][ 200/ 518] Overall Loss 4.380264 Objective Loss 4.380264 LR 0.002000 Time 0.208834
-2023-04-26 19:05:09,538 - Epoch: [10][ 250/ 518] Overall Loss 4.374612 Objective Loss 4.374612 LR 0.002000 Time 0.208400
-2023-04-26 19:05:19,723 - Epoch: [10][ 300/ 518] Overall Loss 4.378651 Objective Loss 4.378651 LR 0.002000 Time 0.207613
-2023-04-26 19:05:29,977 - Epoch: [10][ 350/ 518] Overall Loss 4.369861 Objective Loss 4.369861 LR 0.002000 Time 0.207245
-2023-04-26 19:05:40,159 - Epoch: [10][ 400/ 518] Overall Loss 4.373089 Objective Loss 4.373089 LR 0.002000 Time 0.206792
-2023-04-26 19:05:50,408 - Epoch: [10][ 450/ 518] Overall Loss 4.370237 Objective Loss 4.370237 LR 0.002000 Time 0.206586
-2023-04-26 19:06:00,672 - Epoch: [10][ 500/ 518] Overall Loss 4.364309 Objective Loss 4.364309 LR 0.002000 Time 0.206451
-2023-04-26 19:06:04,229 - Epoch: [10][ 518/ 518] Overall Loss 4.361516 Objective Loss 4.361516 LR 0.002000 Time 0.206144
-2023-04-26 19:06:04,301 - --- validate (epoch=10)-----------
-2023-04-26 19:06:04,301 - 4952 samples (32 per mini-batch)
-2023-04-26 19:06:10,110 - Epoch: [10][ 50/ 155] Loss 4.825384 mAP 0.073129
-2023-04-26 19:06:15,544 - Epoch: [10][ 100/ 155] Loss 4.818518 mAP 0.066655
-2023-04-26 19:06:20,979 - Epoch: [10][ 150/ 155] Loss 4.822191 mAP 0.063338
-2023-04-26 19:06:21,466 - Epoch: [10][ 155/ 155] Loss 4.823679 mAP 0.063435
-2023-04-26 19:06:21,538 - ==> mAP: 0.06343 Loss: 4.824
-
-2023-04-26 19:06:21,541 - ==> Best [mAP: 0.063435 vloss: 4.823679 Sparsity:0.00 Params: 2177088 on epoch: 10]
-2023-04-26 19:06:21,541 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:06:21,595 - 
-
-2023-04-26 19:06:21,595 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:06:32,531 - Epoch: [11][ 50/ 518] Overall Loss 4.295898 Objective Loss 4.295898 LR 0.002000 Time 0.218666
-2023-04-26 19:06:42,717 - Epoch: [11][ 100/ 518] Overall Loss 4.285344 Objective Loss 4.285344 LR 0.002000 Time 0.211173
-2023-04-26 19:06:52,986 - Epoch: [11][ 150/ 518] Overall Loss 4.298929 Objective Loss 4.298929 LR 0.002000 Time 0.209234
-2023-04-26 19:07:03,262 - Epoch: [11][ 200/ 518] Overall Loss 4.311870 Objective Loss 4.311870 LR 0.002000 Time 0.208297
-2023-04-26 19:07:13,421 - Epoch: [11][ 250/ 518] Overall Loss 4.301560 Objective Loss 4.301560 LR 0.002000 Time 0.207265
-2023-04-26 19:07:23,703 - Epoch: [11][ 300/ 518] Overall Loss 4.315415 Objective Loss 4.315415 LR 0.002000 Time 0.206991
-2023-04-26 19:07:33,895 - Epoch: [11][ 350/ 518] Overall Loss 4.320717 Objective Loss 4.320717 LR 0.002000 Time 0.206536
-2023-04-26 19:07:44,040 - Epoch: [11][ 400/ 518] Overall Loss 4.315042 Objective Loss 4.315042 LR 0.002000 Time 0.206076
-2023-04-26 19:07:54,179 - Epoch: [11][ 450/ 518] Overall Loss 4.315157 Objective Loss 4.315157 LR 0.002000 Time 0.205708
-2023-04-26 19:08:04,418 - Epoch: [11][ 500/ 518] Overall Loss 4.300534 Objective Loss 4.300534 LR 0.002000 Time 0.205611
-2023-04-26 19:08:07,994 - Epoch: [11][ 518/ 518] Overall Loss 4.299196 Objective Loss 4.299196 LR 0.002000 Time 0.205368
-2023-04-26 19:08:08,064 - --- validate (epoch=11)-----------
-2023-04-26 19:08:08,065 - 4952 samples (32 per mini-batch)
-2023-04-26 19:08:13,950 - Epoch: [11][ 50/ 155] Loss 4.640636 mAP 0.045781
-2023-04-26 19:08:19,565 - Epoch: [11][ 100/ 155] Loss 4.613115 mAP 0.053575
-2023-04-26 19:08:25,072 - Epoch: [11][ 150/ 155] Loss 4.614694 mAP 0.053276
-2023-04-26 19:08:25,553 - Epoch: [11][ 155/ 155] Loss 4.612869 mAP 0.052856
-2023-04-26 19:08:25,624 - ==> mAP: 0.05286 Loss: 4.613
-
-2023-04-26 19:08:25,628 - ==> Best [mAP: 0.063435 vloss: 4.823679 Sparsity:0.00 Params: 2177088 on epoch: 10]
-2023-04-26 19:08:25,629 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:08:25,667 - 
-
-2023-04-26 19:08:25,668 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:08:36,846 - Epoch: [12][ 50/ 518] Overall Loss 4.303558 Objective Loss 4.303558 LR 0.002000 Time 0.223509
-2023-04-26 19:08:47,072 - Epoch: [12][ 100/ 518] Overall Loss 4.276266 Objective Loss 4.276266 LR 0.002000 Time 0.213995
-2023-04-26 19:08:57,233 - Epoch: [12][ 150/ 518] Overall Loss 4.275499 Objective Loss 4.275499 LR 0.002000 Time 0.210398
-2023-04-26 19:09:07,468 - Epoch: [12][ 200/ 518] Overall Loss 4.282149 Objective Loss 4.282149 LR 0.002000 Time 0.208963
-2023-04-26 19:09:17,658 - Epoch: [12][ 250/ 518] Overall Loss 4.287495 Objective Loss 4.287495 LR 0.002000 Time 0.207925
-2023-04-26 19:09:27,871 - Epoch: [12][ 300/ 518] Overall Loss 4.273264 Objective Loss 4.273264 LR 0.002000 Time 0.207309
-2023-04-26 19:09:38,017 - Epoch: [12][ 350/ 518] Overall Loss 4.271317 Objective Loss 4.271317 LR 0.002000 Time 0.206676
-2023-04-26 19:09:48,237 - Epoch: [12][ 400/ 518] Overall Loss 4.269400 Objective Loss 4.269400 LR 0.002000 Time 0.206388
-2023-04-26 19:09:58,482 - Epoch: [12][ 450/ 518] Overall Loss 4.264516 Objective Loss 4.264516 LR 0.002000 Time 0.206220
-2023-04-26 19:10:08,661 - Epoch: [12][ 500/ 518] Overall Loss 4.264997 Objective Loss 4.264997 LR 0.002000 Time 0.205951
-2023-04-26 19:10:12,171 - Epoch: [12][ 518/ 518] Overall Loss 4.266283 Objective Loss 4.266283 LR 0.002000 Time 0.205569
-2023-04-26 19:10:12,241 - --- validate (epoch=12)-----------
-2023-04-26 19:10:12,242 - 4952 samples (32 per mini-batch)
-2023-04-26 19:10:18,625 - Epoch: [12][ 50/ 155] Loss 4.800670 mAP 0.058946
-2023-04-26 19:10:24,522 - Epoch: [12][ 100/ 155] Loss 4.795041 mAP 0.056466
-2023-04-26 19:10:30,419 - Epoch: [12][ 150/ 155] Loss 4.792190 mAP 0.056867
-2023-04-26 19:10:30,939 - Epoch: [12][ 155/ 155] Loss 4.796340 mAP 0.056103
-2023-04-26 19:10:31,022 - ==> mAP: 0.05610 Loss: 4.796
-
-2023-04-26 19:10:31,026 - ==> Best [mAP: 0.063435 vloss: 4.823679 Sparsity:0.00 Params: 2177088 on epoch: 10]
-2023-04-26 19:10:31,026 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:10:31,064 - 
-
-2023-04-26 19:10:31,064 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:10:41,994 - Epoch: [13][ 50/ 518] Overall Loss 4.196735 Objective Loss 4.196735 LR 0.002000 Time 0.218536
-2023-04-26 19:10:52,200 - Epoch: [13][ 100/ 518] Overall Loss 4.183296 Objective Loss 4.183296 LR 0.002000 Time 0.211312
-2023-04-26 19:11:02,461 - Epoch: [13][ 150/ 518] Overall Loss 4.188301 Objective Loss 4.188301 LR 0.002000 Time 0.209273
-2023-04-26 19:11:12,737 - Epoch: [13][ 200/ 518] Overall Loss 4.173528 Objective Loss 4.173528 LR 0.002000 Time 0.208324
-2023-04-26 19:11:22,908 - Epoch: [13][ 250/ 518] Overall Loss 4.177827 Objective Loss 4.177827 LR 0.002000 Time 0.207338
-2023-04-26 19:11:33,158 - Epoch: [13][ 300/ 518] Overall Loss 4.175789 Objective Loss 4.175789 LR 0.002000 Time 0.206943
-2023-04-26 19:11:43,352 - Epoch: [13][ 350/ 518] Overall Loss 4.180161 Objective Loss 4.180161 LR 0.002000 Time 0.206500
-2023-04-26 19:11:53,587 - Epoch: [13][ 400/ 518] Overall Loss 4.175336 Objective Loss 4.175336 LR 0.002000 Time 0.206271
-2023-04-26 19:12:03,778 - Epoch: [13][ 450/ 518] Overall Loss 4.180783 Objective Loss 4.180783 LR 0.002000 Time 0.205994
-2023-04-26 19:12:14,016 - Epoch: [13][ 500/ 518] Overall Loss 4.185341 Objective Loss 4.185341 LR 0.002000 Time 0.205868
-2023-04-26 19:12:17,586 - Epoch: [13][ 518/ 518] Overall Loss 4.181640 Objective Loss 4.181640 LR 0.002000 Time 0.205605
-2023-04-26 19:12:17,658 - --- validate (epoch=13)-----------
-2023-04-26 19:12:17,658 - 4952 samples (32 per mini-batch)
-2023-04-26 19:12:23,607 - Epoch: [13][ 50/ 155] Loss 4.433584 mAP 0.114606
-2023-04-26 19:12:29,268 - Epoch: [13][ 100/ 155] Loss 4.417048 mAP 0.111954
-2023-04-26 19:12:34,956 - Epoch: [13][ 150/ 155] Loss 4.414147 mAP 0.116542
-2023-04-26 19:12:35,442 - Epoch: [13][ 155/ 155] Loss 4.415760 mAP 0.115995
-2023-04-26 19:12:35,512 - ==> mAP: 0.11600 Loss: 4.416
-
-2023-04-26 19:12:35,515 - ==> Best [mAP: 0.115995 vloss: 4.415760 Sparsity:0.00 Params: 2177088 on epoch: 13]
-2023-04-26 19:12:35,515 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:12:35,568 - 
-
-2023-04-26 19:12:35,568 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:12:46,537 - Epoch: [14][ 50/ 518] Overall Loss 4.242636 Objective Loss 4.242636 LR 0.002000 Time 0.219317
-2023-04-26 19:12:56,781 - Epoch: [14][ 100/ 518] Overall Loss 4.192554 Objective Loss 4.192554 LR 0.002000 Time 0.212080
-2023-04-26 19:13:07,000 - Epoch: [14][ 150/ 518] Overall Loss 4.167381 Objective Loss 4.167381 LR 0.002000 Time 0.209503
-2023-04-26 19:13:17,191 - Epoch: [14][ 200/ 518] Overall Loss 4.156668 Objective Loss 4.156668 LR 0.002000 Time 0.208077
-2023-04-26 19:13:27,445 - Epoch: [14][ 250/ 518] Overall Loss 4.155878 Objective Loss 4.155878 LR 0.002000 Time 0.207470
-2023-04-26 19:13:37,691 - Epoch: [14][ 300/ 518] Overall Loss 4.155114 Objective Loss 4.155114 LR 0.002000 Time 0.207039
-2023-04-26 19:13:47,958 - Epoch: [14][ 350/ 518] Overall Loss 4.153790 Objective Loss 4.153790 LR 0.002000 Time 0.206791
-2023-04-26 19:13:58,197 - Epoch: [14][ 400/ 518] Overall Loss 4.153019 Objective Loss 4.153019 LR 0.002000 Time 0.206536
-2023-04-26 19:14:08,425 - Epoch: [14][ 450/ 518] Overall Loss 4.150084 Objective Loss 4.150084 LR 0.002000 Time 0.206314
-2023-04-26 19:14:18,571 - Epoch: [14][ 500/ 518] Overall Loss 4.152941 Objective Loss 4.152941 LR 0.002000 Time 0.205970
-2023-04-26 19:14:22,103 - Epoch: [14][ 518/ 518] Overall Loss 4.153623 Objective Loss 4.153623 LR 0.002000 Time 0.205630
-2023-04-26 19:14:22,174 - --- validate (epoch=14)-----------
-2023-04-26 19:14:22,175 - 4952 samples (32 per mini-batch)
-2023-04-26 19:14:28,324 - Epoch: [14][ 50/ 155] Loss 4.813039 mAP 0.075604
-2023-04-26 19:14:34,110 - Epoch: [14][ 100/ 155] Loss 4.796694 mAP 0.079456
-2023-04-26 19:14:39,884 - Epoch: [14][ 150/ 155] Loss 4.789978 mAP 0.079511
-2023-04-26 19:14:40,394 - Epoch: [14][ 155/ 155] Loss 4.787882 mAP 0.079263
-2023-04-26 19:14:40,459 - ==> mAP: 0.07926 Loss: 4.788
-
-2023-04-26 19:14:40,463 - ==> Best [mAP: 0.115995 vloss: 4.415760 Sparsity:0.00 Params: 2177088 on epoch: 13]
-2023-04-26 19:14:40,463 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:14:40,500 - 
-
-2023-04-26 19:14:40,501 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:14:51,361 - Epoch: [15][ 50/ 518] Overall Loss 4.151980 Objective Loss 4.151980 LR 0.002000 Time 0.217146
-2023-04-26 19:15:01,622 - Epoch: [15][ 100/ 518] Overall Loss 4.142035 Objective Loss 4.142035 LR 0.002000 Time 0.211171
-2023-04-26 19:15:11,845 - Epoch: [15][ 150/ 518] Overall Loss 4.115489 Objective Loss 4.115489 LR 0.002000 Time 0.208922
-2023-04-26 19:15:22,099 - Epoch: [15][ 200/ 518] Overall Loss 4.119777 Objective Loss 4.119777 LR 0.002000 Time 0.207952
-2023-04-26 19:15:32,368 - Epoch: [15][ 250/ 518] Overall Loss 4.116136 Objective Loss 4.116136 LR 0.002000 Time 0.207433
-2023-04-26 19:15:42,528 - Epoch: [15][ 300/ 518] Overall Loss 4.126147 Objective Loss 4.126147 LR 0.002000 Time 0.206722
-2023-04-26 19:15:52,741 - Epoch: [15][ 350/ 518] Overall Loss 4.115392 Objective Loss 4.115392 LR 0.002000 Time 0.206364
-2023-04-26 19:16:02,923 - Epoch: [15][ 400/ 518] Overall Loss 4.107901 Objective Loss 4.107901 LR 0.002000 Time 0.206020
-2023-04-26 19:16:13,164 - Epoch: [15][ 450/ 518] Overall Loss 4.105802 Objective Loss 4.105802 LR 0.002000 Time 0.205882
-2023-04-26 19:16:23,424 - Epoch: [15][ 500/ 518] Overall Loss 4.108065 Objective Loss 4.108065 LR 0.002000 Time 0.205810
-2023-04-26 19:16:26,946 - Epoch: [15][ 518/ 518] Overall Loss 4.109577 Objective Loss 4.109577 LR 0.002000 Time 0.205458
-2023-04-26 19:16:27,018 - --- validate (epoch=15)-----------
-2023-04-26 19:16:27,018 - 4952 samples (32 per mini-batch)
-2023-04-26 19:16:33,000 - Epoch: [15][ 50/ 155] Loss 4.402553 mAP 0.109158
-2023-04-26 19:16:38,579 - Epoch: [15][ 100/ 155] Loss 4.399216 mAP 0.113464
-2023-04-26 19:16:44,207 - Epoch: [15][ 150/ 155] Loss 4.418735 mAP 0.108262
-2023-04-26 19:16:44,695 - Epoch: [15][ 155/ 155] Loss 4.420749 mAP 0.108318
-2023-04-26 19:16:44,769 - ==> mAP: 0.10832 Loss: 4.421
-
-2023-04-26 19:16:44,773 - ==> Best [mAP: 0.115995 vloss: 4.415760 Sparsity:0.00 Params: 2177088 on epoch: 13]
-2023-04-26 19:16:44,773 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:16:44,810 - 
-
-2023-04-26 19:16:44,810 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:16:55,808 - Epoch: [16][ 50/ 518] Overall Loss 4.116832 Objective Loss 4.116832 LR 0.002000 Time 0.219899
-2023-04-26 19:17:05,997 - Epoch: [16][ 100/ 518] Overall Loss 4.103914 Objective Loss 4.103914 LR 0.002000 Time 0.211826
-2023-04-26 19:17:16,247 - Epoch: [16][ 150/ 518] Overall Loss 4.097176 Objective Loss 4.097176 LR 0.002000 Time 0.209536
-2023-04-26 19:17:26,483 - Epoch: [16][ 200/ 518] Overall Loss 4.083888 Objective Loss 4.083888 LR 0.002000 Time 0.208327
-2023-04-26 19:17:36,661 - Epoch: [16][ 250/ 518] Overall Loss 4.079944 Objective Loss 4.079944 LR 0.002000 Time 0.207368
-2023-04-26 19:17:46,853 - Epoch: [16][ 300/ 518] Overall Loss 4.082135 Objective Loss 4.082135 LR 0.002000 Time 0.206773
-2023-04-26 19:17:57,055 - Epoch: [16][ 350/ 518] Overall Loss 4.078410 Objective Loss 4.078410 LR 0.002000 Time 0.206377
-2023-04-26 19:18:07,318 - Epoch: [16][ 400/ 518] Overall Loss 4.081991 Objective Loss 4.081991 LR 0.002000 Time 0.206235
-2023-04-26 19:18:17,581 - Epoch: [16][ 450/ 518] Overall Loss 4.076967 Objective Loss 4.076967 LR 0.002000 Time 0.206121
-2023-04-26 19:18:27,804 - Epoch: [16][ 500/ 518] Overall Loss 4.076922 Objective Loss 4.076922 LR 0.002000 Time 0.205953
-2023-04-26 19:18:31,351 - Epoch: [16][ 518/ 518] Overall Loss 4.075091 Objective Loss 4.075091 LR 0.002000 Time 0.205643
-2023-04-26 19:18:31,422 - --- validate (epoch=16)-----------
-2023-04-26 19:18:31,423 - 4952 samples (32 per mini-batch)
-2023-04-26 19:18:37,572 - Epoch: [16][ 50/ 155] Loss 4.420284 mAP 0.122161
-2023-04-26 19:18:43,284 - Epoch: [16][ 100/ 155] Loss 4.440816 mAP 0.117954
-2023-04-26 19:18:48,984 - Epoch: [16][ 150/ 155] Loss 4.458437 mAP 0.114768
-2023-04-26 19:18:49,489 - Epoch: [16][ 155/ 155] Loss 4.461949 mAP 0.114185
-2023-04-26 19:18:49,560 - ==> mAP: 0.11418 Loss: 4.462
-
-2023-04-26 19:18:49,564 - ==> Best [mAP: 0.115995 vloss: 4.415760 Sparsity:0.00 Params: 2177088 on epoch: 13]
-2023-04-26 19:18:49,564 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:18:49,602 - 
-
-2023-04-26 19:18:49,602 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:19:00,583 - Epoch: [17][ 50/ 518] Overall Loss 4.079230 Objective Loss 4.079230 LR 0.002000 Time 0.219566
-2023-04-26 19:19:10,803 - Epoch: [17][ 100/ 518] Overall Loss 4.053898 Objective Loss 4.053898 LR 0.002000 Time 0.211968
-2023-04-26 19:19:21,021 - Epoch: [17][ 150/ 518] Overall Loss 4.019556 Objective Loss 4.019556 LR 0.002000 Time 0.209422
-2023-04-26 19:19:31,364 - Epoch: [17][ 200/ 518] Overall Loss 4.028954 Objective Loss 4.028954 LR 0.002000 Time 0.208772
-2023-04-26 19:19:41,584 - Epoch: [17][ 250/ 518] Overall Loss 4.026170 Objective Loss 4.026170 LR 0.002000 Time 0.207892
-2023-04-26 19:19:51,910 - Epoch: [17][ 300/ 518] Overall Loss 4.039356 Objective Loss 4.039356 LR 0.002000 Time 0.207656
-2023-04-26 19:20:02,138 - Epoch: [17][ 350/ 518] Overall Loss 4.039866 Objective Loss 4.039866 LR 0.002000 Time 0.207209
-2023-04-26 19:20:12,363 - Epoch: [17][ 400/ 518] Overall Loss 4.037899 Objective Loss 4.037899 LR 0.002000 Time 0.206868
-2023-04-26 19:20:22,612 - Epoch: [17][ 450/ 518] Overall Loss 4.039425 Objective Loss 4.039425 LR 0.002000 Time 0.206655
-2023-04-26 19:20:32,860 - Epoch: [17][ 500/ 518] Overall Loss 4.033516 Objective Loss 4.033516 LR 0.002000 Time 0.206481
-2023-04-26 19:20:36,355 - Epoch: [17][ 518/ 518] Overall Loss 4.035337 Objective Loss 4.035337 LR 0.002000 Time 0.206053
-2023-04-26 19:20:36,428 - --- validate (epoch=17)-----------
-2023-04-26 19:20:36,428 - 4952 samples (32 per mini-batch)
-2023-04-26 19:20:42,902 - Epoch: [17][ 50/ 155] Loss 4.762335 mAP 0.131815
-2023-04-26 19:20:48,997 - Epoch: [17][ 100/ 155] Loss 4.806852 mAP 0.128137
-2023-04-26 19:20:54,997 - Epoch: [17][ 150/ 155] Loss 4.802011 mAP 0.121530
-2023-04-26 19:20:55,535 - Epoch: [17][ 155/ 155] Loss 4.799436 mAP 0.120673
-2023-04-26 19:20:55,606 - ==> mAP: 0.12067 Loss: 4.799
-
-2023-04-26 19:20:55,610 - ==> Best [mAP: 0.120673 vloss: 4.799436 Sparsity:0.00 Params: 2177088 on epoch: 17]
-2023-04-26 19:20:55,610 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:20:55,662 - 
-
-2023-04-26 19:20:55,662 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:21:06,614 - Epoch: [18][ 50/ 518] Overall Loss 3.970579 Objective Loss 3.970579 LR 0.002000 Time 0.218986
-2023-04-26 19:21:16,837 - Epoch: [18][ 100/ 518] Overall Loss 3.947567 Objective Loss 3.947567 LR 0.002000 Time 0.211701
-2023-04-26 19:21:26,989 - Epoch: [18][ 150/ 518] Overall Loss 3.976780 Objective Loss 3.976780 LR 0.002000 Time 0.208804
-2023-04-26 19:21:37,229 - Epoch: [18][ 200/ 518] Overall Loss 3.974091 Objective Loss 3.974091 LR 0.002000 Time 0.207797
-2023-04-26 19:21:47,383 - Epoch: [18][ 250/ 518] Overall Loss 3.988732 Objective Loss 3.988732 LR 0.002000 Time 0.206846
-2023-04-26 19:21:57,566 - Epoch: [18][ 300/ 518] Overall Loss 3.992228 Objective Loss 3.992228 LR 0.002000 Time 0.206308
-2023-04-26 19:22:07,873 - Epoch: [18][ 350/ 518] Overall Loss 3.987963 Objective Loss 3.987963 LR 0.002000 Time 0.206280
-2023-04-26 19:22:18,115 - Epoch: [18][ 400/ 518] Overall Loss 3.991340 Objective Loss 3.991340 LR 0.002000 Time 0.206095
-2023-04-26 19:22:28,420 - Epoch: [18][ 450/ 518] Overall Loss 3.984131 Objective Loss 3.984131 LR 0.002000 Time 0.206092
-2023-04-26 19:22:38,603 - Epoch: [18][ 500/ 518] Overall Loss 3.989479 Objective Loss 3.989479 LR 0.002000 Time 0.205846
-2023-04-26 19:22:42,174 - Epoch: [18][ 518/ 518] Overall Loss 3.988706 Objective Loss 3.988706 LR 0.002000 Time 0.205585
-2023-04-26 19:22:42,245 - --- validate (epoch=18)-----------
-2023-04-26 19:22:42,246 - 4952 samples (32 per mini-batch)
-2023-04-26 19:22:48,313 - Epoch: [18][ 50/ 155] Loss 4.251801 mAP 0.173031
-2023-04-26 19:22:53,995 - Epoch: [18][ 100/ 155] Loss 4.231337 mAP 0.170363
-2023-04-26 19:22:59,676 - Epoch: [18][ 150/ 155] Loss 4.226679 mAP 0.168388
-2023-04-26 19:23:00,180 - Epoch: [18][ 155/ 155] Loss 4.226456 mAP 0.167918
-2023-04-26 19:23:00,261 - ==> mAP: 0.16792 Loss: 4.226
-
-2023-04-26 19:23:00,265 - ==> Best [mAP: 0.167918 vloss: 4.226456 Sparsity:0.00 Params: 2177088 on epoch: 18]
-2023-04-26 19:23:00,265 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:23:00,318 - 
-
-2023-04-26 19:23:00,318 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:23:11,477 - Epoch: [19][ 50/ 518] Overall Loss 4.001551 Objective Loss 4.001551 LR 0.002000 Time 0.223133
-2023-04-26 19:23:21,660 - Epoch: [19][ 100/ 518] Overall Loss 3.995545 Objective Loss 3.995545 LR 0.002000 Time 0.213374
-2023-04-26 19:23:31,883 - Epoch: [19][ 150/ 518] Overall Loss 3.996790 Objective Loss 3.996790 LR 0.002000 Time 0.210393
-2023-04-26 19:23:42,025 - Epoch: [19][ 200/ 518] Overall Loss 3.989041 Objective Loss 3.989041 LR 0.002000 Time 0.208497
-2023-04-26 19:23:52,285 - Epoch: [19][ 250/ 518] Overall Loss 3.978226 Objective Loss 3.978226 LR 0.002000 Time 0.207829
-2023-04-26 19:24:02,494 - Epoch: [19][ 300/ 518] Overall Loss 3.972049 Objective Loss 3.972049 LR 0.002000 Time 0.207216
-2023-04-26 19:24:12,716 - Epoch: [19][ 350/ 518] Overall Loss 3.960278 Objective Loss 3.960278 LR 0.002000 Time 0.206814
-2023-04-26 19:24:22,973 - Epoch: [19][ 400/ 518] Overall Loss 3.957989 Objective Loss 3.957989 LR 0.002000 Time 0.206602
-2023-04-26 19:24:33,323 - Epoch: [19][ 450/ 518] Overall Loss 3.961067 Objective Loss 3.961067 LR 0.002000 Time 0.206641
-2023-04-26 19:24:43,597 - Epoch: [19][ 500/ 518] Overall Loss 3.961536 Objective Loss 3.961536 LR 0.002000 Time 0.206523
-2023-04-26 19:24:47,134 - Epoch: [19][ 518/ 518] Overall Loss 3.956841 Objective Loss 3.956841 LR 0.002000 Time 0.206172
-2023-04-26 19:24:47,206 - --- validate (epoch=19)-----------
-2023-04-26 19:24:47,207 - 4952 samples (32 per mini-batch)
-2023-04-26 19:24:53,441 - Epoch: [19][ 50/ 155] Loss 4.269452 mAP 0.142271
-2023-04-26 19:24:59,339 - Epoch: [19][ 100/ 155] Loss 4.236129 mAP 0.147522
-2023-04-26 19:25:05,207 - Epoch: [19][ 150/ 155] Loss 4.235864 mAP 0.149067
-2023-04-26 19:25:05,733 - Epoch: [19][ 155/ 155] Loss 4.231082 mAP 0.150857
-2023-04-26 19:25:05,799 - ==> mAP: 0.15086 Loss: 4.231
-
-2023-04-26 19:25:05,803 - ==> Best [mAP: 0.167918 vloss: 4.226456 Sparsity:0.00 Params: 2177088 on epoch: 18]
-2023-04-26 19:25:05,803 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:25:05,908 - 
-
-2023-04-26 19:25:05,909 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:25:16,935 - Epoch: [20][ 50/ 518] Overall Loss 3.930722 Objective Loss 3.930722 LR 0.002000 Time 0.220481
-2023-04-26 19:25:27,190 - Epoch: [20][ 100/ 518] Overall Loss 3.935600 Objective Loss 3.935600 LR 0.002000 Time 0.212764
-2023-04-26 19:25:37,405 - Epoch: [20][ 150/ 518] Overall Loss 3.936906 Objective Loss 3.936906 LR 0.002000 Time 0.209932
-2023-04-26 19:25:47,658 - Epoch: [20][ 200/ 518] Overall Loss 3.939340 Objective Loss 3.939340 LR 0.002000 Time 0.208709
-2023-04-26 19:25:58,039 - Epoch: [20][ 250/ 518] Overall Loss 3.955782 Objective Loss 3.955782 LR 0.002000 Time 0.208485
-2023-04-26 19:26:08,279 - Epoch: [20][ 300/ 518] Overall Loss 3.947473 Objective Loss 3.947473 LR 0.002000 Time 0.207866
-2023-04-26 19:26:18,508 - Epoch: [20][ 350/ 518] Overall Loss 3.938997 Objective Loss 3.938997 LR 0.002000 Time 0.207391
-2023-04-26 19:26:28,810 - Epoch: [20][ 400/ 518] Overall Loss 3.946688 Objective Loss 3.946688 LR 0.002000 Time 0.207219
-2023-04-26 19:26:39,090 - Epoch: [20][ 450/ 518] Overall Loss 3.944617 Objective Loss 3.944617 LR 0.002000 Time 0.207034
-2023-04-26 19:26:49,271 - Epoch: [20][ 500/ 518] Overall Loss 3.939596 Objective Loss 3.939596 LR 0.002000 Time 0.206690
-2023-04-26 19:26:52,776 - Epoch: [20][ 518/ 518] Overall Loss 3.938067 Objective Loss 3.938067 LR 0.002000 Time 0.206273
-2023-04-26 19:26:52,847 - --- validate (epoch=20)-----------
-2023-04-26 19:26:52,847 - 4952 samples (32 per mini-batch)
-2023-04-26 19:26:59,200 - Epoch: [20][ 50/ 155] Loss 4.565711 mAP 0.131542
-2023-04-26 19:27:05,091 - Epoch: [20][ 100/ 155] Loss 4.575539 mAP 0.126785
-2023-04-26 19:27:11,005 - Epoch: [20][ 150/ 155] Loss 4.588142 mAP 0.129081
-2023-04-26 19:27:11,537 - Epoch: [20][ 155/ 155] Loss 4.592676 mAP 0.129046
-2023-04-26 19:27:11,610 - ==> mAP: 0.12905 Loss: 4.593
-
-2023-04-26 19:27:11,613 - ==> Best [mAP: 0.167918 vloss: 4.226456 Sparsity:0.00 Params: 2177088 on epoch: 18]
-2023-04-26 19:27:11,614 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:27:11,652 - 
-
-2023-04-26 19:27:11,652 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:27:22,726 - Epoch: [21][ 50/ 518] Overall Loss 3.908323 Objective Loss 3.908323 LR 0.002000 Time 0.221428
-2023-04-26 19:27:32,958 - Epoch: [21][ 100/ 518] Overall Loss 3.895153 Objective Loss 3.895153 LR 0.002000 Time 0.213009
-2023-04-26 19:27:43,101 - Epoch: [21][ 150/ 518] Overall Loss 3.894633 Objective Loss 3.894633 LR 0.002000 Time 0.209615
-2023-04-26 19:27:53,304 - Epoch: [21][ 200/ 518] Overall Loss 3.908060 Objective Loss 3.908060 LR 0.002000 Time 0.208217
-2023-04-26 19:28:03,479 - Epoch: [21][ 250/ 518] Overall Loss 3.896410 Objective Loss 3.896410 LR 0.002000 Time 0.207270
-2023-04-26 19:28:13,677 - Epoch: [21][ 300/ 518] Overall Loss 3.905670 Objective Loss 3.905670 LR 0.002000 Time 0.206713
-2023-04-26 19:28:23,929 - Epoch: [21][ 350/ 518] Overall Loss 3.911942 Objective Loss 3.911942 LR 0.002000 Time 0.206468
-2023-04-26 19:28:34,039 - Epoch: [21][ 400/ 518] Overall Loss 3.902162 Objective Loss 3.902162 LR 0.002000 Time 0.205932
-2023-04-26 19:28:44,256 - Epoch: [21][ 450/ 518] Overall Loss 3.898106 Objective Loss 3.898106 LR 0.002000 Time 0.205751
-2023-04-26 19:28:54,388 - Epoch: [21][ 500/ 518] Overall Loss 3.899317 Objective Loss 3.899317 LR 0.002000 Time 0.205437
-2023-04-26 19:28:57,941 - Epoch: [21][ 518/ 518] Overall Loss 3.903420 Objective Loss 3.903420 LR 0.002000 Time 0.205155
-2023-04-26 19:28:58,013 - --- validate (epoch=21)-----------
-2023-04-26 19:28:58,013 - 4952 samples (32 per mini-batch)
-2023-04-26 19:29:04,353 - Epoch: [21][ 50/ 155] Loss 4.045565 mAP 0.202458
-2023-04-26 19:29:10,206 - Epoch: [21][ 100/ 155] Loss 4.073936 mAP 0.203286
-2023-04-26 19:29:16,051 - Epoch: [21][ 150/ 155] Loss 4.073577 mAP 0.202198
-2023-04-26 19:29:16,582 - Epoch: [21][ 155/ 155] Loss 4.073908 mAP 0.202613
-2023-04-26 19:29:16,646 - ==> mAP: 0.20261 Loss: 4.074
-
-2023-04-26 19:29:16,650 - ==> Best [mAP: 0.202613 vloss: 4.073908 Sparsity:0.00 Params: 2177088 on epoch: 21]
-2023-04-26 19:29:16,650 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:29:16,703 - 
-
-2023-04-26 19:29:16,703 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:29:27,746 - Epoch: [22][ 50/ 518] Overall Loss 3.835769 Objective Loss 3.835769 LR 0.002000 Time 0.220792
-2023-04-26 19:29:37,971 - Epoch: [22][ 100/ 518] Overall Loss 3.836642 Objective Loss 3.836642 LR 0.002000 Time 0.212632
-2023-04-26 19:29:48,193 - Epoch: [22][ 150/ 518] Overall Loss 3.847628 Objective Loss 3.847628 LR 0.002000 Time 0.209888
-2023-04-26 19:29:58,414 - Epoch: [22][ 200/ 518] Overall Loss 3.865865 Objective Loss 3.865865 LR 0.002000 Time 0.208514
-2023-04-26 19:30:08,678 - Epoch: [22][ 250/ 518] Overall Loss 3.863067 Objective Loss 3.863067 LR 0.002000 Time 0.207863
-2023-04-26 19:30:18,907 - Epoch: [22][ 300/ 518] Overall Loss 3.847202 Objective Loss 3.847202 LR 0.002000 Time 0.207308
-2023-04-26 19:30:29,219 - Epoch: [22][ 350/ 518] Overall Loss 3.847502 Objective Loss 3.847502 LR 0.002000 Time 0.207152
-2023-04-26 19:30:39,424 - Epoch: [22][ 400/ 518] Overall Loss 3.855860 Objective Loss 3.855860 LR 0.002000 Time 0.206765
-2023-04-26 19:30:49,684 - Epoch: [22][ 450/ 518] Overall Loss 3.853795 Objective Loss 3.853795 LR 0.002000 Time 0.206589
-2023-04-26 19:30:59,862 - Epoch: [22][ 500/ 518] Overall Loss 3.857798 Objective Loss 3.857798 LR 0.002000 Time 0.206282
-2023-04-26 19:31:03,446 - Epoch: [22][ 518/ 518] Overall Loss 3.858813 Objective Loss 3.858813 LR 0.002000 Time 0.206032
-2023-04-26 19:31:03,519 - --- validate (epoch=22)-----------
-2023-04-26 19:31:03,519 - 4952 samples (32 per mini-batch)
-2023-04-26 19:31:09,988 - Epoch: [22][ 50/ 155] Loss 4.445878 mAP 0.189565
-2023-04-26 19:31:16,048 - Epoch: [22][ 100/ 155] Loss 4.429628 mAP 0.191483
-2023-04-26 19:31:22,134 - Epoch: [22][ 150/ 155] Loss 4.408893 mAP 0.195891
-2023-04-26 19:31:22,663 - Epoch: [22][ 155/ 155] Loss 4.409934 mAP 0.194164
-2023-04-26 19:31:22,734 - ==> mAP: 0.19416 Loss: 4.410
-
-2023-04-26 19:31:22,738 - ==> Best [mAP: 0.202613 vloss: 4.073908 Sparsity:0.00 Params: 2177088 on epoch: 21]
-2023-04-26 19:31:22,738 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:31:22,775 - 
-
-2023-04-26 19:31:22,776 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:31:33,838 - Epoch: [23][ 50/ 518] Overall Loss 3.869186 Objective Loss 3.869186 LR 0.002000 Time 0.221199
-2023-04-26 19:31:44,135 - Epoch: [23][ 100/ 518] Overall Loss 3.829768 Objective Loss 3.829768 LR 0.002000 Time 0.213554
-2023-04-26 19:31:54,432 - Epoch: [23][ 150/ 518] Overall Loss 3.855845 Objective Loss 3.855845 LR 0.002000 Time 0.210999
-2023-04-26 19:32:04,702 - Epoch: [23][ 200/ 518] Overall Loss 3.846239 Objective Loss 3.846239 LR 0.002000 Time 0.209593
-2023-04-26 19:32:14,868 - Epoch: [23][ 250/ 518] Overall Loss 3.844984 Objective Loss 3.844984 LR 0.002000 Time 0.208333
-2023-04-26 19:32:25,154 - Epoch: [23][ 300/ 518] Overall Loss 3.843301 Objective Loss 3.843301 LR 0.002000 Time 0.207890
-2023-04-26 19:32:35,430 - Epoch: [23][ 350/ 518] Overall Loss 3.839943 Objective Loss 3.839943 LR 0.002000 Time 0.207547
-2023-04-26 19:32:45,761 - Epoch: [23][ 400/ 518] Overall Loss 3.837681 Objective Loss 3.837681 LR 0.002000 Time 0.207428
-2023-04-26 19:32:55,936 - Epoch: [23][ 450/ 518] Overall Loss 3.835670 Objective Loss 3.835670 LR 0.002000 Time 0.206989
-2023-04-26 19:33:06,195 - Epoch: [23][ 500/ 518] Overall Loss 3.843143 Objective Loss 3.843143 LR 0.002000 Time 0.206803
-2023-04-26 19:33:09,758 - Epoch: [23][ 518/ 518] Overall Loss 3.840426 Objective Loss 3.840426 LR 0.002000 Time 0.206494
-2023-04-26 19:33:09,830 - --- validate (epoch=23)-----------
-2023-04-26 19:33:09,830 - 4952 samples (32 per mini-batch)
-2023-04-26 19:33:15,956 - Epoch: [23][ 50/ 155] Loss 4.180120 mAP 0.185807
-2023-04-26 19:33:21,736 - Epoch: [23][ 100/ 155] Loss 4.199617 mAP 0.195664
-2023-04-26 19:33:27,486 - Epoch: [23][ 150/ 155] Loss 4.189676 mAP 0.194113
-2023-04-26 19:33:27,998 - Epoch: [23][ 155/ 155] Loss 4.187840 mAP 0.194759
-2023-04-26 19:33:28,072 - ==> mAP: 0.19476 Loss: 4.188
-
-2023-04-26 19:33:28,076 - ==> Best [mAP: 0.202613 vloss: 4.073908 Sparsity:0.00 Params: 2177088 on epoch: 21]
-2023-04-26 19:33:28,076 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:33:28,114 - 
-
-2023-04-26 19:33:28,114 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:33:39,165 - Epoch: [24][ 50/ 518] Overall Loss 3.852586 Objective Loss 3.852586 LR 0.002000 Time 0.220965
-2023-04-26 19:33:49,492 - Epoch: [24][ 100/ 518] Overall Loss 3.816194 Objective Loss 3.816194 LR 0.002000 Time 0.213735
-2023-04-26 19:33:59,723 - Epoch: [24][ 150/ 518] Overall Loss 3.791940 Objective Loss 3.791940 LR 0.002000 Time 0.210685
-2023-04-26 19:34:10,057 - Epoch: [24][ 200/ 518] Overall Loss 3.796997 Objective Loss 3.796997 LR 0.002000 Time 0.209679
-2023-04-26 19:34:20,331 - Epoch: [24][ 250/ 518] Overall Loss 3.805953 Objective Loss 3.805953 LR 0.002000 Time 0.208831
-2023-04-26 19:34:30,521 - Epoch: [24][ 300/ 518] Overall Loss 3.812328 Objective Loss 3.812328 LR 0.002000 Time 0.207986
-2023-04-26 19:34:40,742 - Epoch: [24][ 350/ 518] Overall Loss 3.820318 Objective Loss 3.820318 LR 0.002000 Time 0.207474
-2023-04-26 19:34:50,961 - Epoch: [24][ 400/ 518] Overall Loss 3.820000 Objective Loss 3.820000 LR 0.002000 Time 0.207081
-2023-04-26 19:35:01,131 - Epoch: [24][ 450/ 518] Overall Loss 3.816098 Objective Loss 3.816098 LR 0.002000 Time 0.206669
-2023-04-26 19:35:11,448 - Epoch: [24][ 500/ 518] Overall Loss 3.819255 Objective Loss 3.819255 LR 0.002000 Time 0.206633
-2023-04-26 19:35:15,041 - Epoch: [24][ 518/ 518] Overall Loss 3.821565 Objective Loss 3.821565 LR 0.002000 Time 0.206387
-2023-04-26 19:35:15,112 - --- validate (epoch=24)-----------
-2023-04-26 19:35:15,113 - 4952 samples (32 per mini-batch)
-2023-04-26 19:35:21,318 - Epoch: [24][ 50/ 155] Loss 4.332407 mAP 0.180267
-2023-04-26 19:35:27,294 - Epoch: [24][ 100/ 155] Loss 4.284126 mAP 0.179955
-2023-04-26 19:35:33,172 - Epoch: [24][ 150/ 155] Loss 4.283788 mAP 0.177347
-2023-04-26 19:35:33,691 - Epoch: [24][ 155/ 155] Loss 4.284801 mAP 0.177002
-2023-04-26 19:35:33,761 - ==> mAP: 0.17700 Loss: 4.285
-
-2023-04-26 19:35:33,765 - ==> Best [mAP: 0.202613 vloss: 4.073908 Sparsity:0.00 Params: 2177088 on epoch: 21]
-2023-04-26 19:35:33,765 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:35:33,802 - 
-
-2023-04-26 19:35:33,802 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:35:44,748 - Epoch: [25][ 50/ 518] Overall Loss 3.780812 Objective Loss 3.780812 LR 0.002000 Time 0.218849
-2023-04-26 19:35:55,128 - Epoch: [25][ 100/ 518] Overall Loss 3.791213 Objective Loss 3.791213 LR 0.002000 Time 0.213211
-2023-04-26 19:36:05,359 - Epoch: [25][ 150/ 518] Overall Loss 3.790964 Objective Loss 3.790964 LR 0.002000 Time 0.210336
-2023-04-26 19:36:15,618 - Epoch: [25][ 200/ 518] Overall Loss 3.788943 Objective Loss 3.788943 LR 0.002000 Time 0.209041
-2023-04-26 19:36:25,868 - Epoch: [25][ 250/ 518] Overall Loss 3.784437 Objective Loss 3.784437 LR 0.002000 Time 0.208225
-2023-04-26 19:36:36,185 - Epoch: [25][ 300/ 518] Overall Loss 3.796403 Objective Loss 3.796403 LR 0.002000 Time 0.207907
-2023-04-26 19:36:46,434 - Epoch: [25][ 350/ 518] Overall Loss 3.788934 Objective Loss 3.788934 LR 0.002000 Time 0.207484
-2023-04-26 19:36:56,635 - Epoch: [25][ 400/ 518] Overall Loss 3.792496 Objective Loss 3.792496 LR 0.002000 Time 0.207046
-2023-04-26 19:37:06,790 - Epoch: [25][ 450/ 518] Overall Loss 3.789324 Objective Loss 3.789324 LR 0.002000 Time 0.206604
-2023-04-26 19:37:17,052 - Epoch: [25][ 500/ 518] Overall Loss 3.786370 Objective Loss 3.786370 LR 0.002000 Time 0.206463
-2023-04-26 19:37:20,621 - Epoch: [25][ 518/ 518] Overall Loss 3.785088 Objective Loss 3.785088 LR 0.002000 Time 0.206178
-2023-04-26 19:37:20,694 - --- validate (epoch=25)-----------
-2023-04-26 19:37:20,694 - 4952 samples (32 per mini-batch)
-2023-04-26 19:37:26,885 - Epoch: [25][ 50/ 155] Loss 4.178306 mAP 0.182515
-2023-04-26 19:37:32,692 - Epoch: [25][ 100/ 155] Loss 4.171574 mAP 0.186471
-2023-04-26 19:37:38,488 - Epoch: [25][ 150/ 155] Loss 4.170928 mAP 0.193311
-2023-04-26 19:37:38,997 - Epoch: [25][ 155/ 155] Loss 4.164609 mAP 0.192613
-2023-04-26 19:37:39,059 - ==> mAP: 0.19261 Loss: 4.165
-
-2023-04-26 19:37:39,063 - ==> Best [mAP: 0.202613 vloss: 4.073908 Sparsity:0.00 Params: 2177088 on epoch: 21]
-2023-04-26 19:37:39,063 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:37:39,100 - 
-
-2023-04-26 19:37:39,100 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:37:49,978 - Epoch: [26][ 50/ 518] Overall Loss 3.813190 Objective Loss 3.813190 LR 0.002000 Time 0.217500
-2023-04-26 19:38:00,245 - Epoch: [26][ 100/ 518] Overall Loss 3.786629 Objective Loss 3.786629 LR 0.002000 Time 0.211400
-2023-04-26 19:38:10,494 - Epoch: [26][ 150/ 518] Overall Loss 3.792274 Objective Loss 3.792274 LR 0.002000 Time 0.209252
-2023-04-26 19:38:20,641 - Epoch: [26][ 200/ 518] Overall Loss 3.768918 Objective Loss 3.768918 LR 0.002000 Time 0.207668
-2023-04-26 19:38:30,838 - Epoch: [26][ 250/ 518] Overall Loss 3.761646 Objective Loss 3.761646 LR 0.002000 Time 0.206914
-2023-04-26 19:38:40,989 - Epoch: [26][ 300/ 518] Overall Loss 3.766099 Objective Loss 3.766099 LR 0.002000 Time 0.206259
-2023-04-26 19:38:51,226 - Epoch: [26][ 350/ 518] Overall Loss 3.769137 Objective Loss 3.769137 LR 0.002000 Time 0.206037
-2023-04-26 19:39:01,464 - Epoch: [26][ 400/ 518] Overall Loss 3.776168 Objective Loss 3.776168 LR 0.002000 Time 0.205874
-2023-04-26 19:39:11,613 - Epoch: [26][ 450/ 518] Overall Loss 3.776124 Objective Loss 3.776124 LR 0.002000 Time 0.205548
-2023-04-26 19:39:21,990 - Epoch: [26][ 500/ 518] Overall Loss 3.775858 Objective Loss 3.775858 LR 0.002000 Time 0.205745
-2023-04-26 19:39:25,569 - Epoch: [26][ 518/ 518] Overall Loss 3.776813 Objective Loss 3.776813 LR 0.002000 Time 0.205504
-2023-04-26 19:39:25,641 - --- validate (epoch=26)-----------
-2023-04-26 19:39:25,642 - 4952 samples (32 per mini-batch)
-2023-04-26 19:39:31,821 - Epoch: [26][ 50/ 155] Loss 4.097317 mAP 0.206061
-2023-04-26 19:39:37,665 - Epoch: [26][ 100/ 155] Loss 4.058297 mAP 0.214264
-2023-04-26 19:39:43,521 - Epoch: [26][ 150/ 155] Loss 4.050810 mAP 0.214144
-2023-04-26 19:39:44,034 - Epoch: [26][ 155/ 155] Loss 4.050778 mAP 0.214986
-2023-04-26 19:39:44,104 - ==> mAP: 0.21499 Loss: 4.051
-
-2023-04-26 19:39:44,107 - ==> Best [mAP: 0.214986 vloss: 4.050778 Sparsity:0.00 Params: 2177088 on epoch: 26]
-2023-04-26 19:39:44,107 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 19:39:44,160 - 
-
-2023-04-26 19:39:44,160 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 19:39:55,109 - Epoch: [27][ 50/ 518] Overall Loss 3.735010 Objective Loss 3.735010 LR 0.002000 Time 0.218922
-2023-04-26 19:40:05,505 - Epoch: [27][ 100/ 518] Overall Loss 3.758400 Objective Loss 3.758400 LR 0.002000 Time 0.213408
-2023-04-26 19:40:15,619 - Epoch: [27][ 150/ 518] Overall Loss 3.750613 Objective Loss 3.750613 LR 0.002000 Time 0.209686
-2023-04-26 19:40:25,843 - Epoch: [27][ 200/ 518] Overall Loss 3.746376 Objective Loss 3.746376 LR 0.002000 Time 0.208375
-2023-04-26 19:40:36,062 - Epoch: [27][ 250/ 518] Overall Loss 3.750954 Objective Loss 3.750954 LR 0.002000 Time 0.207572
-2023-04-26 19:40:46,399 - Epoch: [27][ 300/ 518] Overall Loss 3.754447 Objective Loss 3.754447 LR 0.002000 Time 0.207425
-2023-04-26 19:40:56,675 - Epoch: [27][ 350/ 518] Overall Loss 3.744555 Objective Loss 3.744555 LR 0.002000 Time 0.207150
-2023-04-26 19:41:06,903 - Epoch: [27][ 400/ 518] Overall Loss 3.739720 Objective Loss 3.739720 LR 0.002000 Time 0.206820
-2023-04-26 19:41:17,153 - Epoch: [27][ 450/ 518] Overall Loss 3.742962 Objective Loss 3.742962 LR 0.002000 Time 0.206616
-2023-04-26 19:41:27,391 - Epoch: [27][ 500/ 518] Overall Loss 3.740664 Objective Loss 3.740664 LR 0.002000 Time 0.206426
-2023-04-26 19:41:31,020 - Epoch: [27][ 518/ 518] Overall Loss 3.741702 Objective Loss 3.741702 LR 0.002000 Time 0.206258
-2023-04-26 19:41:31,091 - --- validate (epoch=27)-----------
-2023-04-26 19:41:31,091 - 4952 samples (32 per mini-batch)
-2023-04-26 19:41:37,275 - Epoch: [27][ 50/ 155] Loss 4.034730 mAP 0.212614
-2023-04-26
19:41:43,111 - Epoch: [27][ 100/ 155] Loss 4.050146 mAP 0.217550 -2023-04-26 19:41:48,960 - Epoch: [27][ 150/ 155] Loss 4.054650 mAP 0.222874 -2023-04-26 19:41:49,484 - Epoch: [27][ 155/ 155] Loss 4.051917 mAP 0.224382 -2023-04-26 19:41:49,555 - ==> mAP: 0.22438 Loss: 4.052 - -2023-04-26 19:41:49,559 - ==> Best [mAP: 0.224382 vloss: 4.051917 Sparsity:0.00 Params: 2177088 on epoch: 27] -2023-04-26 19:41:49,559 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:41:49,611 - - -2023-04-26 19:41:49,611 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:42:00,579 - Epoch: [28][ 50/ 518] Overall Loss 3.793618 Objective Loss 3.793618 LR 0.002000 Time 0.219293 -2023-04-26 19:42:10,759 - Epoch: [28][ 100/ 518] Overall Loss 3.783070 Objective Loss 3.783070 LR 0.002000 Time 0.211428 -2023-04-26 19:42:20,873 - Epoch: [28][ 150/ 518] Overall Loss 3.757351 Objective Loss 3.757351 LR 0.002000 Time 0.208373 -2023-04-26 19:42:31,207 - Epoch: [28][ 200/ 518] Overall Loss 3.750747 Objective Loss 3.750747 LR 0.002000 Time 0.207940 -2023-04-26 19:42:41,443 - Epoch: [28][ 250/ 518] Overall Loss 3.743433 Objective Loss 3.743433 LR 0.002000 Time 0.207290 -2023-04-26 19:42:51,638 - Epoch: [28][ 300/ 518] Overall Loss 3.739596 Objective Loss 3.739596 LR 0.002000 Time 0.206717 -2023-04-26 19:43:01,807 - Epoch: [28][ 350/ 518] Overall Loss 3.738357 Objective Loss 3.738357 LR 0.002000 Time 0.206237 -2023-04-26 19:43:12,021 - Epoch: [28][ 400/ 518] Overall Loss 3.733618 Objective Loss 3.733618 LR 0.002000 Time 0.205987 -2023-04-26 19:43:22,272 - Epoch: [28][ 450/ 518] Overall Loss 3.731523 Objective Loss 3.731523 LR 0.002000 Time 0.205877 -2023-04-26 19:43:32,520 - Epoch: [28][ 500/ 518] Overall Loss 3.728978 Objective Loss 3.728978 LR 0.002000 Time 0.205782 -2023-04-26 19:43:36,097 - Epoch: [28][ 518/ 518] Overall Loss 3.729820 Objective Loss 3.729820 LR 0.002000 Time 0.205536 -2023-04-26 19:43:36,168 - --- validate (epoch=28)----------- -2023-04-26 
19:43:36,169 - 4952 samples (32 per mini-batch) -2023-04-26 19:43:42,674 - Epoch: [28][ 50/ 155] Loss 3.996921 mAP 0.238958 -2023-04-26 19:43:48,796 - Epoch: [28][ 100/ 155] Loss 3.998303 mAP 0.241040 -2023-04-26 19:43:54,975 - Epoch: [28][ 150/ 155] Loss 3.959077 mAP 0.244770 -2023-04-26 19:43:55,539 - Epoch: [28][ 155/ 155] Loss 3.956869 mAP 0.244790 -2023-04-26 19:43:55,609 - ==> mAP: 0.24479 Loss: 3.957 - -2023-04-26 19:43:55,613 - ==> Best [mAP: 0.244790 vloss: 3.956869 Sparsity:0.00 Params: 2177088 on epoch: 28] -2023-04-26 19:43:55,613 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:43:55,664 - - -2023-04-26 19:43:55,664 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:44:06,598 - Epoch: [29][ 50/ 518] Overall Loss 3.756586 Objective Loss 3.756586 LR 0.002000 Time 0.218626 -2023-04-26 19:44:16,818 - Epoch: [29][ 100/ 518] Overall Loss 3.722885 Objective Loss 3.722885 LR 0.002000 Time 0.211493 -2023-04-26 19:44:27,104 - Epoch: [29][ 150/ 518] Overall Loss 3.723105 Objective Loss 3.723105 LR 0.002000 Time 0.209556 -2023-04-26 19:44:37,311 - Epoch: [29][ 200/ 518] Overall Loss 3.705128 Objective Loss 3.705128 LR 0.002000 Time 0.208198 -2023-04-26 19:44:47,618 - Epoch: [29][ 250/ 518] Overall Loss 3.718329 Objective Loss 3.718329 LR 0.002000 Time 0.207777 -2023-04-26 19:44:57,744 - Epoch: [29][ 300/ 518] Overall Loss 3.704645 Objective Loss 3.704645 LR 0.002000 Time 0.206897 -2023-04-26 19:45:08,048 - Epoch: [29][ 350/ 518] Overall Loss 3.705383 Objective Loss 3.705383 LR 0.002000 Time 0.206776 -2023-04-26 19:45:18,358 - Epoch: [29][ 400/ 518] Overall Loss 3.709987 Objective Loss 3.709987 LR 0.002000 Time 0.206699 -2023-04-26 19:45:28,623 - Epoch: [29][ 450/ 518] Overall Loss 3.708574 Objective Loss 3.708574 LR 0.002000 Time 0.206541 -2023-04-26 19:45:39,012 - Epoch: [29][ 500/ 518] Overall Loss 3.704994 Objective Loss 3.704994 LR 0.002000 Time 0.206660 -2023-04-26 19:45:42,624 - Epoch: [29][ 518/ 518] Overall 
Loss 3.705613 Objective Loss 3.705613 LR 0.002000 Time 0.206450 -2023-04-26 19:45:42,695 - --- validate (epoch=29)----------- -2023-04-26 19:45:42,695 - 4952 samples (32 per mini-batch) -2023-04-26 19:45:49,021 - Epoch: [29][ 50/ 155] Loss 4.020824 mAP 0.251332 -2023-04-26 19:45:54,915 - Epoch: [29][ 100/ 155] Loss 3.995273 mAP 0.247810 -2023-04-26 19:46:00,817 - Epoch: [29][ 150/ 155] Loss 3.990509 mAP 0.236123 -2023-04-26 19:46:01,343 - Epoch: [29][ 155/ 155] Loss 3.993245 mAP 0.234295 -2023-04-26 19:46:01,405 - ==> mAP: 0.23429 Loss: 3.993 - -2023-04-26 19:46:01,409 - ==> Best [mAP: 0.244790 vloss: 3.956869 Sparsity:0.00 Params: 2177088 on epoch: 28] -2023-04-26 19:46:01,409 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:46:01,471 - - -2023-04-26 19:46:01,471 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:46:12,471 - Epoch: [30][ 50/ 518] Overall Loss 3.675683 Objective Loss 3.675683 LR 0.002000 Time 0.219910 -2023-04-26 19:46:22,648 - Epoch: [30][ 100/ 518] Overall Loss 3.712482 Objective Loss 3.712482 LR 0.002000 Time 0.211708 -2023-04-26 19:46:32,913 - Epoch: [30][ 150/ 518] Overall Loss 3.693862 Objective Loss 3.693862 LR 0.002000 Time 0.209560 -2023-04-26 19:46:43,155 - Epoch: [30][ 200/ 518] Overall Loss 3.696428 Objective Loss 3.696428 LR 0.002000 Time 0.208372 -2023-04-26 19:46:53,516 - Epoch: [30][ 250/ 518] Overall Loss 3.707046 Objective Loss 3.707046 LR 0.002000 Time 0.208133 -2023-04-26 19:47:03,734 - Epoch: [30][ 300/ 518] Overall Loss 3.708627 Objective Loss 3.708627 LR 0.002000 Time 0.207500 -2023-04-26 19:47:13,998 - Epoch: [30][ 350/ 518] Overall Loss 3.699401 Objective Loss 3.699401 LR 0.002000 Time 0.207179 -2023-04-26 19:47:24,261 - Epoch: [30][ 400/ 518] Overall Loss 3.692252 Objective Loss 3.692252 LR 0.002000 Time 0.206933 -2023-04-26 19:47:34,462 - Epoch: [30][ 450/ 518] Overall Loss 3.699528 Objective Loss 3.699528 LR 0.002000 Time 0.206606 -2023-04-26 19:47:44,675 - Epoch: [30][ 
500/ 518] Overall Loss 3.699541 Objective Loss 3.699541 LR 0.002000 Time 0.206369 -2023-04-26 19:47:48,209 - Epoch: [30][ 518/ 518] Overall Loss 3.700829 Objective Loss 3.700829 LR 0.002000 Time 0.206019 -2023-04-26 19:47:48,281 - --- validate (epoch=30)----------- -2023-04-26 19:47:48,282 - 4952 samples (32 per mini-batch) -2023-04-26 19:47:54,683 - Epoch: [30][ 50/ 155] Loss 4.046191 mAP 0.231719 -2023-04-26 19:48:00,684 - Epoch: [30][ 100/ 155] Loss 3.991807 mAP 0.229803 -2023-04-26 19:48:06,686 - Epoch: [30][ 150/ 155] Loss 4.003231 mAP 0.227844 -2023-04-26 19:48:07,222 - Epoch: [30][ 155/ 155] Loss 4.001645 mAP 0.227649 -2023-04-26 19:48:07,291 - ==> mAP: 0.22765 Loss: 4.002 - -2023-04-26 19:48:07,295 - ==> Best [mAP: 0.244790 vloss: 3.956869 Sparsity:0.00 Params: 2177088 on epoch: 28] -2023-04-26 19:48:07,295 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:48:07,332 - - -2023-04-26 19:48:07,332 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:48:18,320 - Epoch: [31][ 50/ 518] Overall Loss 3.676557 Objective Loss 3.676557 LR 0.002000 Time 0.219686 -2023-04-26 19:48:28,566 - Epoch: [31][ 100/ 518] Overall Loss 3.667549 Objective Loss 3.667549 LR 0.002000 Time 0.212293 -2023-04-26 19:48:38,806 - Epoch: [31][ 150/ 518] Overall Loss 3.662500 Objective Loss 3.662500 LR 0.002000 Time 0.209786 -2023-04-26 19:48:48,946 - Epoch: [31][ 200/ 518] Overall Loss 3.668876 Objective Loss 3.668876 LR 0.002000 Time 0.208030 -2023-04-26 19:48:59,267 - Epoch: [31][ 250/ 518] Overall Loss 3.679939 Objective Loss 3.679939 LR 0.002000 Time 0.207699 -2023-04-26 19:49:09,418 - Epoch: [31][ 300/ 518] Overall Loss 3.678177 Objective Loss 3.678177 LR 0.002000 Time 0.206915 -2023-04-26 19:49:19,649 - Epoch: [31][ 350/ 518] Overall Loss 3.679737 Objective Loss 3.679737 LR 0.002000 Time 0.206584 -2023-04-26 19:49:29,768 - Epoch: [31][ 400/ 518] Overall Loss 3.685045 Objective Loss 3.685045 LR 0.002000 Time 0.206054 -2023-04-26 19:49:40,023 
- Epoch: [31][ 450/ 518] Overall Loss 3.688929 Objective Loss 3.688929 LR 0.002000 Time 0.205943 -2023-04-26 19:49:50,162 - Epoch: [31][ 500/ 518] Overall Loss 3.682723 Objective Loss 3.682723 LR 0.002000 Time 0.205623 -2023-04-26 19:49:53,736 - Epoch: [31][ 518/ 518] Overall Loss 3.686206 Objective Loss 3.686206 LR 0.002000 Time 0.205377 -2023-04-26 19:49:53,808 - --- validate (epoch=31)----------- -2023-04-26 19:49:53,808 - 4952 samples (32 per mini-batch) -2023-04-26 19:50:00,279 - Epoch: [31][ 50/ 155] Loss 4.071385 mAP 0.216501 -2023-04-26 19:50:06,482 - Epoch: [31][ 100/ 155] Loss 4.040853 mAP 0.226701 -2023-04-26 19:50:12,577 - Epoch: [31][ 150/ 155] Loss 4.045206 mAP 0.221586 -2023-04-26 19:50:13,108 - Epoch: [31][ 155/ 155] Loss 4.039146 mAP 0.223440 -2023-04-26 19:50:13,181 - ==> mAP: 0.22344 Loss: 4.039 - -2023-04-26 19:50:13,184 - ==> Best [mAP: 0.244790 vloss: 3.956869 Sparsity:0.00 Params: 2177088 on epoch: 28] -2023-04-26 19:50:13,185 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:50:13,222 - - -2023-04-26 19:50:13,222 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:50:24,256 - Epoch: [32][ 50/ 518] Overall Loss 3.622508 Objective Loss 3.622508 LR 0.002000 Time 0.220619 -2023-04-26 19:50:34,459 - Epoch: [32][ 100/ 518] Overall Loss 3.621469 Objective Loss 3.621469 LR 0.002000 Time 0.212327 -2023-04-26 19:50:44,654 - Epoch: [32][ 150/ 518] Overall Loss 3.639146 Objective Loss 3.639146 LR 0.002000 Time 0.209505 -2023-04-26 19:50:54,893 - Epoch: [32][ 200/ 518] Overall Loss 3.640208 Objective Loss 3.640208 LR 0.002000 Time 0.208316 -2023-04-26 19:51:05,123 - Epoch: [32][ 250/ 518] Overall Loss 3.658617 Objective Loss 3.658617 LR 0.002000 Time 0.207565 -2023-04-26 19:51:15,376 - Epoch: [32][ 300/ 518] Overall Loss 3.663944 Objective Loss 3.663944 LR 0.002000 Time 0.207144 -2023-04-26 19:51:25,527 - Epoch: [32][ 350/ 518] Overall Loss 3.656344 Objective Loss 3.656344 LR 0.002000 Time 0.206550 
-2023-04-26 19:51:35,726 - Epoch: [32][ 400/ 518] Overall Loss 3.657006 Objective Loss 3.657006 LR 0.002000 Time 0.206224 -2023-04-26 19:51:45,963 - Epoch: [32][ 450/ 518] Overall Loss 3.648701 Objective Loss 3.648701 LR 0.002000 Time 0.206056 -2023-04-26 19:51:56,114 - Epoch: [32][ 500/ 518] Overall Loss 3.650677 Objective Loss 3.650677 LR 0.002000 Time 0.205749 -2023-04-26 19:51:59,632 - Epoch: [32][ 518/ 518] Overall Loss 3.656804 Objective Loss 3.656804 LR 0.002000 Time 0.205389 -2023-04-26 19:51:59,704 - --- validate (epoch=32)----------- -2023-04-26 19:51:59,705 - 4952 samples (32 per mini-batch) -2023-04-26 19:52:06,251 - Epoch: [32][ 50/ 155] Loss 4.161220 mAP 0.220729 -2023-04-26 19:52:12,371 - Epoch: [32][ 100/ 155] Loss 4.140896 mAP 0.217663 -2023-04-26 19:52:18,491 - Epoch: [32][ 150/ 155] Loss 4.170464 mAP 0.213553 -2023-04-26 19:52:19,034 - Epoch: [32][ 155/ 155] Loss 4.173628 mAP 0.213481 -2023-04-26 19:52:19,100 - ==> mAP: 0.21348 Loss: 4.174 - -2023-04-26 19:52:19,103 - ==> Best [mAP: 0.244790 vloss: 3.956869 Sparsity:0.00 Params: 2177088 on epoch: 28] -2023-04-26 19:52:19,103 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:52:19,141 - - -2023-04-26 19:52:19,141 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:52:30,156 - Epoch: [33][ 50/ 518] Overall Loss 3.814897 Objective Loss 3.814897 LR 0.002000 Time 0.220241 -2023-04-26 19:52:40,403 - Epoch: [33][ 100/ 518] Overall Loss 3.734574 Objective Loss 3.734574 LR 0.002000 Time 0.212570 -2023-04-26 19:52:50,604 - Epoch: [33][ 150/ 518] Overall Loss 3.715674 Objective Loss 3.715674 LR 0.002000 Time 0.209715 -2023-04-26 19:53:00,811 - Epoch: [33][ 200/ 518] Overall Loss 3.700233 Objective Loss 3.700233 LR 0.002000 Time 0.208310 -2023-04-26 19:53:11,065 - Epoch: [33][ 250/ 518] Overall Loss 3.689503 Objective Loss 3.689503 LR 0.002000 Time 0.207658 -2023-04-26 19:53:21,311 - Epoch: [33][ 300/ 518] Overall Loss 3.679951 Objective Loss 3.679951 LR 0.002000 
Time 0.207198 -2023-04-26 19:53:31,528 - Epoch: [33][ 350/ 518] Overall Loss 3.682826 Objective Loss 3.682826 LR 0.002000 Time 0.206785 -2023-04-26 19:53:41,788 - Epoch: [33][ 400/ 518] Overall Loss 3.686880 Objective Loss 3.686880 LR 0.002000 Time 0.206583 -2023-04-26 19:53:52,090 - Epoch: [33][ 450/ 518] Overall Loss 3.685774 Objective Loss 3.685774 LR 0.002000 Time 0.206517 -2023-04-26 19:54:02,387 - Epoch: [33][ 500/ 518] Overall Loss 3.678560 Objective Loss 3.678560 LR 0.002000 Time 0.206457 -2023-04-26 19:54:05,962 - Epoch: [33][ 518/ 518] Overall Loss 3.681652 Objective Loss 3.681652 LR 0.002000 Time 0.206182 -2023-04-26 19:54:06,033 - --- validate (epoch=33)----------- -2023-04-26 19:54:06,033 - 4952 samples (32 per mini-batch) -2023-04-26 19:54:12,477 - Epoch: [33][ 50/ 155] Loss 3.864882 mAP 0.261107 -2023-04-26 19:54:18,567 - Epoch: [33][ 100/ 155] Loss 3.880147 mAP 0.255303 -2023-04-26 19:54:24,607 - Epoch: [33][ 150/ 155] Loss 3.882949 mAP 0.254966 -2023-04-26 19:54:25,137 - Epoch: [33][ 155/ 155] Loss 3.877640 mAP 0.253832 -2023-04-26 19:54:25,207 - ==> mAP: 0.25383 Loss: 3.878 - -2023-04-26 19:54:25,211 - ==> Best [mAP: 0.253832 vloss: 3.877640 Sparsity:0.00 Params: 2177088 on epoch: 33] -2023-04-26 19:54:25,211 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:54:25,263 - - -2023-04-26 19:54:25,263 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:54:36,254 - Epoch: [34][ 50/ 518] Overall Loss 3.577139 Objective Loss 3.577139 LR 0.002000 Time 0.219770 -2023-04-26 19:54:46,513 - Epoch: [34][ 100/ 518] Overall Loss 3.557859 Objective Loss 3.557859 LR 0.002000 Time 0.212453 -2023-04-26 19:54:56,729 - Epoch: [34][ 150/ 518] Overall Loss 3.567130 Objective Loss 3.567130 LR 0.002000 Time 0.209733 -2023-04-26 19:55:07,037 - Epoch: [34][ 200/ 518] Overall Loss 3.570533 Objective Loss 3.570533 LR 0.002000 Time 0.208830 -2023-04-26 19:55:17,237 - Epoch: [34][ 250/ 518] Overall Loss 3.578371 Objective Loss 
3.578371 LR 0.002000 Time 0.207858 -2023-04-26 19:55:27,418 - Epoch: [34][ 300/ 518] Overall Loss 3.585870 Objective Loss 3.585870 LR 0.002000 Time 0.207147 -2023-04-26 19:55:37,671 - Epoch: [34][ 350/ 518] Overall Loss 3.599762 Objective Loss 3.599762 LR 0.002000 Time 0.206844 -2023-04-26 19:55:47,814 - Epoch: [34][ 400/ 518] Overall Loss 3.605205 Objective Loss 3.605205 LR 0.002000 Time 0.206343 -2023-04-26 19:55:58,088 - Epoch: [34][ 450/ 518] Overall Loss 3.608041 Objective Loss 3.608041 LR 0.002000 Time 0.206242 -2023-04-26 19:56:08,298 - Epoch: [34][ 500/ 518] Overall Loss 3.608538 Objective Loss 3.608538 LR 0.002000 Time 0.206034 -2023-04-26 19:56:11,846 - Epoch: [34][ 518/ 518] Overall Loss 3.609947 Objective Loss 3.609947 LR 0.002000 Time 0.205725 -2023-04-26 19:56:11,917 - --- validate (epoch=34)----------- -2023-04-26 19:56:11,917 - 4952 samples (32 per mini-batch) -2023-04-26 19:56:18,336 - Epoch: [34][ 50/ 155] Loss 3.938545 mAP 0.276593 -2023-04-26 19:56:24,441 - Epoch: [34][ 100/ 155] Loss 3.924729 mAP 0.274265 -2023-04-26 19:56:30,524 - Epoch: [34][ 150/ 155] Loss 3.902601 mAP 0.281820 -2023-04-26 19:56:31,057 - Epoch: [34][ 155/ 155] Loss 3.908049 mAP 0.282017 -2023-04-26 19:56:31,127 - ==> mAP: 0.28202 Loss: 3.908 - -2023-04-26 19:56:31,130 - ==> Best [mAP: 0.282017 vloss: 3.908049 Sparsity:0.00 Params: 2177088 on epoch: 34] -2023-04-26 19:56:31,131 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:56:31,181 - - -2023-04-26 19:56:31,182 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:56:42,172 - Epoch: [35][ 50/ 518] Overall Loss 3.674274 Objective Loss 3.674274 LR 0.002000 Time 0.219753 -2023-04-26 19:56:52,397 - Epoch: [35][ 100/ 518] Overall Loss 3.631758 Objective Loss 3.631758 LR 0.002000 Time 0.212114 -2023-04-26 19:57:02,686 - Epoch: [35][ 150/ 518] Overall Loss 3.635628 Objective Loss 3.635628 LR 0.002000 Time 0.209989 -2023-04-26 19:57:12,911 - Epoch: [35][ 200/ 518] Overall Loss 3.622842 
Objective Loss 3.622842 LR 0.002000 Time 0.208610 -2023-04-26 19:57:23,105 - Epoch: [35][ 250/ 518] Overall Loss 3.622560 Objective Loss 3.622560 LR 0.002000 Time 0.207656 -2023-04-26 19:57:33,356 - Epoch: [35][ 300/ 518] Overall Loss 3.613039 Objective Loss 3.613039 LR 0.002000 Time 0.207212 -2023-04-26 19:57:43,449 - Epoch: [35][ 350/ 518] Overall Loss 3.611793 Objective Loss 3.611793 LR 0.002000 Time 0.206443 -2023-04-26 19:57:53,746 - Epoch: [35][ 400/ 518] Overall Loss 3.612367 Objective Loss 3.612367 LR 0.002000 Time 0.206375 -2023-04-26 19:58:04,008 - Epoch: [35][ 450/ 518] Overall Loss 3.605167 Objective Loss 3.605167 LR 0.002000 Time 0.206246 -2023-04-26 19:58:14,237 - Epoch: [35][ 500/ 518] Overall Loss 3.607668 Objective Loss 3.607668 LR 0.002000 Time 0.206075 -2023-04-26 19:58:17,811 - Epoch: [35][ 518/ 518] Overall Loss 3.613131 Objective Loss 3.613131 LR 0.002000 Time 0.205813 -2023-04-26 19:58:17,882 - --- validate (epoch=35)----------- -2023-04-26 19:58:17,883 - 4952 samples (32 per mini-batch) -2023-04-26 19:58:24,325 - Epoch: [35][ 50/ 155] Loss 3.782484 mAP 0.274633 -2023-04-26 19:58:30,437 - Epoch: [35][ 100/ 155] Loss 3.781611 mAP 0.274225 -2023-04-26 19:58:36,540 - Epoch: [35][ 150/ 155] Loss 3.762132 mAP 0.282572 -2023-04-26 19:58:37,088 - Epoch: [35][ 155/ 155] Loss 3.764001 mAP 0.282334 -2023-04-26 19:58:37,163 - ==> mAP: 0.28233 Loss: 3.764 - -2023-04-26 19:58:37,167 - ==> Best [mAP: 0.282334 vloss: 3.764001 Sparsity:0.00 Params: 2177088 on epoch: 35] -2023-04-26 19:58:37,167 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 19:58:37,217 - - -2023-04-26 19:58:37,217 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 19:58:48,325 - Epoch: [36][ 50/ 518] Overall Loss 3.571954 Objective Loss 3.571954 LR 0.002000 Time 0.222087 -2023-04-26 19:58:58,542 - Epoch: [36][ 100/ 518] Overall Loss 3.575150 Objective Loss 3.575150 LR 0.002000 Time 0.213202 -2023-04-26 19:59:08,706 - Epoch: [36][ 150/ 518] Overall 
Loss 3.573098 Objective Loss 3.573098 LR 0.002000 Time 0.209883 -2023-04-26 19:59:18,994 - Epoch: [36][ 200/ 518] Overall Loss 3.579652 Objective Loss 3.579652 LR 0.002000 Time 0.208842 -2023-04-26 19:59:29,234 - Epoch: [36][ 250/ 518] Overall Loss 3.573026 Objective Loss 3.573026 LR 0.002000 Time 0.208030 -2023-04-26 19:59:39,467 - Epoch: [36][ 300/ 518] Overall Loss 3.577472 Objective Loss 3.577472 LR 0.002000 Time 0.207461 -2023-04-26 19:59:49,737 - Epoch: [36][ 350/ 518] Overall Loss 3.582137 Objective Loss 3.582137 LR 0.002000 Time 0.207163 -2023-04-26 20:00:00,026 - Epoch: [36][ 400/ 518] Overall Loss 3.583557 Objective Loss 3.583557 LR 0.002000 Time 0.206985 -2023-04-26 20:00:10,316 - Epoch: [36][ 450/ 518] Overall Loss 3.583362 Objective Loss 3.583362 LR 0.002000 Time 0.206849 -2023-04-26 20:00:20,453 - Epoch: [36][ 500/ 518] Overall Loss 3.587749 Objective Loss 3.587749 LR 0.002000 Time 0.206437 -2023-04-26 20:00:23,983 - Epoch: [36][ 518/ 518] Overall Loss 3.587257 Objective Loss 3.587257 LR 0.002000 Time 0.206076 -2023-04-26 20:00:24,054 - --- validate (epoch=36)----------- -2023-04-26 20:00:24,055 - 4952 samples (32 per mini-batch) -2023-04-26 20:00:30,249 - Epoch: [36][ 50/ 155] Loss 4.117269 mAP 0.209553 -2023-04-26 20:00:36,140 - Epoch: [36][ 100/ 155] Loss 4.103442 mAP 0.210367 -2023-04-26 20:00:42,001 - Epoch: [36][ 150/ 155] Loss 4.103325 mAP 0.214652 -2023-04-26 20:00:42,520 - Epoch: [36][ 155/ 155] Loss 4.108443 mAP 0.212777 -2023-04-26 20:00:42,583 - ==> mAP: 0.21278 Loss: 4.108 - -2023-04-26 20:00:42,586 - ==> Best [mAP: 0.282334 vloss: 3.764001 Sparsity:0.00 Params: 2177088 on epoch: 35] -2023-04-26 20:00:42,586 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:00:42,625 - - -2023-04-26 20:00:42,625 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:00:53,644 - Epoch: [37][ 50/ 518] Overall Loss 3.544465 Objective Loss 3.544465 LR 0.002000 Time 0.220334 -2023-04-26 20:01:03,921 - Epoch: [37][ 
100/ 518] Overall Loss 3.548942 Objective Loss 3.548942 LR 0.002000 Time 0.212915 -2023-04-26 20:01:14,036 - Epoch: [37][ 150/ 518] Overall Loss 3.549602 Objective Loss 3.549602 LR 0.002000 Time 0.209363 -2023-04-26 20:01:24,213 - Epoch: [37][ 200/ 518] Overall Loss 3.548648 Objective Loss 3.548648 LR 0.002000 Time 0.207899 -2023-04-26 20:01:34,359 - Epoch: [37][ 250/ 518] Overall Loss 3.555897 Objective Loss 3.555897 LR 0.002000 Time 0.206897 -2023-04-26 20:01:44,536 - Epoch: [37][ 300/ 518] Overall Loss 3.559496 Objective Loss 3.559496 LR 0.002000 Time 0.206332 -2023-04-26 20:01:54,676 - Epoch: [37][ 350/ 518] Overall Loss 3.560734 Objective Loss 3.560734 LR 0.002000 Time 0.205823 -2023-04-26 20:02:04,934 - Epoch: [37][ 400/ 518] Overall Loss 3.568569 Objective Loss 3.568569 LR 0.002000 Time 0.205737 -2023-04-26 20:02:15,263 - Epoch: [37][ 450/ 518] Overall Loss 3.570110 Objective Loss 3.570110 LR 0.002000 Time 0.205826 -2023-04-26 20:02:25,474 - Epoch: [37][ 500/ 518] Overall Loss 3.567891 Objective Loss 3.567891 LR 0.002000 Time 0.205663 -2023-04-26 20:02:28,983 - Epoch: [37][ 518/ 518] Overall Loss 3.570881 Objective Loss 3.570881 LR 0.002000 Time 0.205289 -2023-04-26 20:02:29,071 - --- validate (epoch=37)----------- -2023-04-26 20:02:29,072 - 4952 samples (32 per mini-batch) -2023-04-26 20:02:35,452 - Epoch: [37][ 50/ 155] Loss 3.817792 mAP 0.276327 -2023-04-26 20:02:41,446 - Epoch: [37][ 100/ 155] Loss 3.836885 mAP 0.277180 -2023-04-26 20:02:47,470 - Epoch: [37][ 150/ 155] Loss 3.820092 mAP 0.274652 -2023-04-26 20:02:48,007 - Epoch: [37][ 155/ 155] Loss 3.823139 mAP 0.274591 -2023-04-26 20:02:48,076 - ==> mAP: 0.27459 Loss: 3.823 - -2023-04-26 20:02:48,081 - ==> Best [mAP: 0.282334 vloss: 3.764001 Sparsity:0.00 Params: 2177088 on epoch: 35] -2023-04-26 20:02:48,081 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:02:48,119 - - -2023-04-26 20:02:48,119 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:02:59,151 
- Epoch: [38][ 50/ 518] Overall Loss 3.623772 Objective Loss 3.623772 LR 0.002000 Time 0.220576 -2023-04-26 20:03:09,379 - Epoch: [38][ 100/ 518] Overall Loss 3.624112 Objective Loss 3.624112 LR 0.002000 Time 0.212554 -2023-04-26 20:03:19,675 - Epoch: [38][ 150/ 518] Overall Loss 3.618546 Objective Loss 3.618546 LR 0.002000 Time 0.210327 -2023-04-26 20:03:29,868 - Epoch: [38][ 200/ 518] Overall Loss 3.591136 Objective Loss 3.591136 LR 0.002000 Time 0.208703 -2023-04-26 20:03:40,107 - Epoch: [38][ 250/ 518] Overall Loss 3.575868 Objective Loss 3.575868 LR 0.002000 Time 0.207911 -2023-04-26 20:03:50,328 - Epoch: [38][ 300/ 518] Overall Loss 3.580700 Objective Loss 3.580700 LR 0.002000 Time 0.207325 -2023-04-26 20:04:00,588 - Epoch: [38][ 350/ 518] Overall Loss 3.571287 Objective Loss 3.571287 LR 0.002000 Time 0.207017 -2023-04-26 20:04:10,882 - Epoch: [38][ 400/ 518] Overall Loss 3.568911 Objective Loss 3.568911 LR 0.002000 Time 0.206871 -2023-04-26 20:04:21,148 - Epoch: [38][ 450/ 518] Overall Loss 3.567397 Objective Loss 3.567397 LR 0.002000 Time 0.206694 -2023-04-26 20:04:31,389 - Epoch: [38][ 500/ 518] Overall Loss 3.560225 Objective Loss 3.560225 LR 0.002000 Time 0.206503 -2023-04-26 20:04:34,941 - Epoch: [38][ 518/ 518] Overall Loss 3.559654 Objective Loss 3.559654 LR 0.002000 Time 0.206184 -2023-04-26 20:04:35,015 - --- validate (epoch=38)----------- -2023-04-26 20:04:35,016 - 4952 samples (32 per mini-batch) -2023-04-26 20:04:41,442 - Epoch: [38][ 50/ 155] Loss 3.865955 mAP 0.248079 -2023-04-26 20:04:47,535 - Epoch: [38][ 100/ 155] Loss 3.887044 mAP 0.250255 -2023-04-26 20:04:53,512 - Epoch: [38][ 150/ 155] Loss 3.880935 mAP 0.250191 -2023-04-26 20:04:54,078 - Epoch: [38][ 155/ 155] Loss 3.877326 mAP 0.251635 -2023-04-26 20:04:54,147 - ==> mAP: 0.25164 Loss: 3.877 - -2023-04-26 20:04:54,151 - ==> Best [mAP: 0.282334 vloss: 3.764001 Sparsity:0.00 Params: 2177088 on epoch: 35] -2023-04-26 20:04:54,151 - Saving checkpoint to: 
logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:04:54,189 - - -2023-04-26 20:04:54,189 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:05:05,071 - Epoch: [39][ 50/ 518] Overall Loss 3.602041 Objective Loss 3.602041 LR 0.002000 Time 0.217582 -2023-04-26 20:05:15,331 - Epoch: [39][ 100/ 518] Overall Loss 3.553775 Objective Loss 3.553775 LR 0.002000 Time 0.211373 -2023-04-26 20:05:25,532 - Epoch: [39][ 150/ 518] Overall Loss 3.551157 Objective Loss 3.551157 LR 0.002000 Time 0.208913 -2023-04-26 20:05:35,715 - Epoch: [39][ 200/ 518] Overall Loss 3.542589 Objective Loss 3.542589 LR 0.002000 Time 0.207592 -2023-04-26 20:05:45,903 - Epoch: [39][ 250/ 518] Overall Loss 3.538071 Objective Loss 3.538071 LR 0.002000 Time 0.206820 -2023-04-26 20:05:56,130 - Epoch: [39][ 300/ 518] Overall Loss 3.533322 Objective Loss 3.533322 LR 0.002000 Time 0.206434 -2023-04-26 20:06:06,365 - Epoch: [39][ 350/ 518] Overall Loss 3.536428 Objective Loss 3.536428 LR 0.002000 Time 0.206181 -2023-04-26 20:06:16,541 - Epoch: [39][ 400/ 518] Overall Loss 3.544190 Objective Loss 3.544190 LR 0.002000 Time 0.205845 -2023-04-26 20:06:26,810 - Epoch: [39][ 450/ 518] Overall Loss 3.542994 Objective Loss 3.542994 LR 0.002000 Time 0.205789 -2023-04-26 20:06:37,095 - Epoch: [39][ 500/ 518] Overall Loss 3.548683 Objective Loss 3.548683 LR 0.002000 Time 0.205778 -2023-04-26 20:06:40,597 - Epoch: [39][ 518/ 518] Overall Loss 3.550863 Objective Loss 3.550863 LR 0.002000 Time 0.205386 -2023-04-26 20:06:40,667 - --- validate (epoch=39)----------- -2023-04-26 20:06:40,668 - 4952 samples (32 per mini-batch) -2023-04-26 20:06:47,182 - Epoch: [39][ 50/ 155] Loss 3.681846 mAP 0.303396 -2023-04-26 20:06:53,307 - Epoch: [39][ 100/ 155] Loss 3.693414 mAP 0.291396 -2023-04-26 20:06:59,354 - Epoch: [39][ 150/ 155] Loss 3.716486 mAP 0.289095 -2023-04-26 20:06:59,914 - Epoch: [39][ 155/ 155] Loss 3.715739 mAP 0.289792 -2023-04-26 20:06:59,985 - ==> mAP: 0.28979 Loss: 3.716 - -2023-04-26 
20:06:59,989 - ==> Best [mAP: 0.289792 vloss: 3.715739 Sparsity:0.00 Params: 2177088 on epoch: 39] -2023-04-26 20:06:59,990 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:07:00,042 - - -2023-04-26 20:07:00,043 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:07:11,169 - Epoch: [40][ 50/ 518] Overall Loss 3.568132 Objective Loss 3.568132 LR 0.002000 Time 0.222481 -2023-04-26 20:07:21,398 - Epoch: [40][ 100/ 518] Overall Loss 3.570336 Objective Loss 3.570336 LR 0.002000 Time 0.213505 -2023-04-26 20:07:31,657 - Epoch: [40][ 150/ 518] Overall Loss 3.545196 Objective Loss 3.545196 LR 0.002000 Time 0.210721 -2023-04-26 20:07:41,873 - Epoch: [40][ 200/ 518] Overall Loss 3.558941 Objective Loss 3.558941 LR 0.002000 Time 0.209111 -2023-04-26 20:07:52,137 - Epoch: [40][ 250/ 518] Overall Loss 3.557941 Objective Loss 3.557941 LR 0.002000 Time 0.208339 -2023-04-26 20:08:02,319 - Epoch: [40][ 300/ 518] Overall Loss 3.551657 Objective Loss 3.551657 LR 0.002000 Time 0.207551 -2023-04-26 20:08:12,605 - Epoch: [40][ 350/ 518] Overall Loss 3.545977 Objective Loss 3.545977 LR 0.002000 Time 0.207286 -2023-04-26 20:08:22,837 - Epoch: [40][ 400/ 518] Overall Loss 3.547651 Objective Loss 3.547651 LR 0.002000 Time 0.206950 -2023-04-26 20:08:33,143 - Epoch: [40][ 450/ 518] Overall Loss 3.543006 Objective Loss 3.543006 LR 0.002000 Time 0.206854 -2023-04-26 20:08:43,328 - Epoch: [40][ 500/ 518] Overall Loss 3.541811 Objective Loss 3.541811 LR 0.002000 Time 0.206536 -2023-04-26 20:08:46,883 - Epoch: [40][ 518/ 518] Overall Loss 3.542927 Objective Loss 3.542927 LR 0.002000 Time 0.206220 -2023-04-26 20:08:46,956 - --- validate (epoch=40)----------- -2023-04-26 20:08:46,956 - 4952 samples (32 per mini-batch) -2023-04-26 20:08:53,431 - Epoch: [40][ 50/ 155] Loss 3.847755 mAP 0.307521 -2023-04-26 20:08:59,598 - Epoch: [40][ 100/ 155] Loss 3.808445 mAP 0.297535 -2023-04-26 20:09:05,724 - Epoch: [40][ 150/ 155] Loss 3.809166 mAP 0.298108 
-2023-04-26 20:09:06,261 - Epoch: [40][ 155/ 155] Loss 3.812876 mAP 0.298829 -2023-04-26 20:09:06,330 - ==> mAP: 0.29883 Loss: 3.813 - -2023-04-26 20:09:06,334 - ==> Best [mAP: 0.298829 vloss: 3.812876 Sparsity:0.00 Params: 2177088 on epoch: 40] -2023-04-26 20:09:06,334 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:09:06,385 - - -2023-04-26 20:09:06,385 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:09:17,252 - Epoch: [41][ 50/ 518] Overall Loss 3.489459 Objective Loss 3.489459 LR 0.002000 Time 0.217276 -2023-04-26 20:09:27,488 - Epoch: [41][ 100/ 518] Overall Loss 3.512971 Objective Loss 3.512971 LR 0.002000 Time 0.210982 -2023-04-26 20:09:37,725 - Epoch: [41][ 150/ 518] Overall Loss 3.523250 Objective Loss 3.523250 LR 0.002000 Time 0.208886 -2023-04-26 20:09:47,947 - Epoch: [41][ 200/ 518] Overall Loss 3.521917 Objective Loss 3.521917 LR 0.002000 Time 0.207767 -2023-04-26 20:09:58,154 - Epoch: [41][ 250/ 518] Overall Loss 3.522287 Objective Loss 3.522287 LR 0.002000 Time 0.207037 -2023-04-26 20:10:08,397 - Epoch: [41][ 300/ 518] Overall Loss 3.521559 Objective Loss 3.521559 LR 0.002000 Time 0.206667 -2023-04-26 20:10:18,565 - Epoch: [41][ 350/ 518] Overall Loss 3.527976 Objective Loss 3.527976 LR 0.002000 Time 0.206189 -2023-04-26 20:10:28,760 - Epoch: [41][ 400/ 518] Overall Loss 3.535820 Objective Loss 3.535820 LR 0.002000 Time 0.205899 -2023-04-26 20:10:39,010 - Epoch: [41][ 450/ 518] Overall Loss 3.534219 Objective Loss 3.534219 LR 0.002000 Time 0.205795 -2023-04-26 20:10:49,275 - Epoch: [41][ 500/ 518] Overall Loss 3.537574 Objective Loss 3.537574 LR 0.002000 Time 0.205742 -2023-04-26 20:10:52,793 - Epoch: [41][ 518/ 518] Overall Loss 3.538935 Objective Loss 3.538935 LR 0.002000 Time 0.205384 -2023-04-26 20:10:52,865 - --- validate (epoch=41)----------- -2023-04-26 20:10:52,866 - 4952 samples (32 per mini-batch) -2023-04-26 20:10:59,310 - Epoch: [41][ 50/ 155] Loss 3.768814 mAP 0.299403 -2023-04-26 
20:11:05,432 - Epoch: [41][ 100/ 155] Loss 3.785556 mAP 0.293750 -2023-04-26 20:11:11,482 - Epoch: [41][ 150/ 155] Loss 3.781280 mAP 0.297985 -2023-04-26 20:11:12,025 - Epoch: [41][ 155/ 155] Loss 3.780781 mAP 0.296725 -2023-04-26 20:11:12,087 - ==> mAP: 0.29672 Loss: 3.781 - -2023-04-26 20:11:12,090 - ==> Best [mAP: 0.298829 vloss: 3.812876 Sparsity:0.00 Params: 2177088 on epoch: 40] -2023-04-26 20:11:12,090 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:11:12,128 - - -2023-04-26 20:11:12,128 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:11:23,234 - Epoch: [42][ 50/ 518] Overall Loss 3.510524 Objective Loss 3.510524 LR 0.002000 Time 0.222050 -2023-04-26 20:11:33,507 - Epoch: [42][ 100/ 518] Overall Loss 3.504875 Objective Loss 3.504875 LR 0.002000 Time 0.213745 -2023-04-26 20:11:43,792 - Epoch: [42][ 150/ 518] Overall Loss 3.505233 Objective Loss 3.505233 LR 0.002000 Time 0.211048 -2023-04-26 20:11:54,016 - Epoch: [42][ 200/ 518] Overall Loss 3.490507 Objective Loss 3.490507 LR 0.002000 Time 0.209401 -2023-04-26 20:12:04,262 - Epoch: [42][ 250/ 518] Overall Loss 3.495338 Objective Loss 3.495338 LR 0.002000 Time 0.208499 -2023-04-26 20:12:14,489 - Epoch: [42][ 300/ 518] Overall Loss 3.509426 Objective Loss 3.509426 LR 0.002000 Time 0.207833 -2023-04-26 20:12:24,731 - Epoch: [42][ 350/ 518] Overall Loss 3.520971 Objective Loss 3.520971 LR 0.002000 Time 0.207399 -2023-04-26 20:12:34,881 - Epoch: [42][ 400/ 518] Overall Loss 3.523832 Objective Loss 3.523832 LR 0.002000 Time 0.206847 -2023-04-26 20:12:45,138 - Epoch: [42][ 450/ 518] Overall Loss 3.520359 Objective Loss 3.520359 LR 0.002000 Time 0.206654 -2023-04-26 20:12:55,417 - Epoch: [42][ 500/ 518] Overall Loss 3.511340 Objective Loss 3.511340 LR 0.002000 Time 0.206541 -2023-04-26 20:12:58,976 - Epoch: [42][ 518/ 518] Overall Loss 3.514424 Objective Loss 3.514424 LR 0.002000 Time 0.206235 -2023-04-26 20:12:59,049 - --- validate (epoch=42)----------- -2023-04-26 
20:12:59,050 - 4952 samples (32 per mini-batch) -2023-04-26 20:13:05,382 - Epoch: [42][ 50/ 155] Loss 3.928010 mAP 0.282286 -2023-04-26 20:13:11,410 - Epoch: [42][ 100/ 155] Loss 3.966494 mAP 0.271312 -2023-04-26 20:13:17,360 - Epoch: [42][ 150/ 155] Loss 3.947045 mAP 0.272814 -2023-04-26 20:13:17,894 - Epoch: [42][ 155/ 155] Loss 3.949347 mAP 0.273366 -2023-04-26 20:13:17,963 - ==> mAP: 0.27337 Loss: 3.949 - -2023-04-26 20:13:17,966 - ==> Best [mAP: 0.298829 vloss: 3.812876 Sparsity:0.00 Params: 2177088 on epoch: 40] -2023-04-26 20:13:17,966 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:13:18,004 - - -2023-04-26 20:13:18,004 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:13:28,861 - Epoch: [43][ 50/ 518] Overall Loss 3.521436 Objective Loss 3.521436 LR 0.002000 Time 0.217075 -2023-04-26 20:13:39,042 - Epoch: [43][ 100/ 518] Overall Loss 3.510122 Objective Loss 3.510122 LR 0.002000 Time 0.210336 -2023-04-26 20:13:49,260 - Epoch: [43][ 150/ 518] Overall Loss 3.498484 Objective Loss 3.498484 LR 0.002000 Time 0.208334 -2023-04-26 20:13:59,468 - Epoch: [43][ 200/ 518] Overall Loss 3.504267 Objective Loss 3.504267 LR 0.002000 Time 0.207282 -2023-04-26 20:14:09,815 - Epoch: [43][ 250/ 518] Overall Loss 3.498007 Objective Loss 3.498007 LR 0.002000 Time 0.207207 -2023-04-26 20:14:19,996 - Epoch: [43][ 300/ 518] Overall Loss 3.497244 Objective Loss 3.497244 LR 0.002000 Time 0.206603 -2023-04-26 20:14:30,250 - Epoch: [43][ 350/ 518] Overall Loss 3.503654 Objective Loss 3.503654 LR 0.002000 Time 0.206381 -2023-04-26 20:14:40,535 - Epoch: [43][ 400/ 518] Overall Loss 3.503583 Objective Loss 3.503583 LR 0.002000 Time 0.206291 -2023-04-26 20:14:50,737 - Epoch: [43][ 450/ 518] Overall Loss 3.508895 Objective Loss 3.508895 LR 0.002000 Time 0.206037 -2023-04-26 20:15:00,901 - Epoch: [43][ 500/ 518] Overall Loss 3.502664 Objective Loss 3.502664 LR 0.002000 Time 0.205758 -2023-04-26 20:15:04,428 - Epoch: [43][ 518/ 518] Overall 
Loss 3.504489 Objective Loss 3.504489 LR 0.002000 Time 0.205416 -2023-04-26 20:15:04,499 - --- validate (epoch=43)----------- -2023-04-26 20:15:04,499 - 4952 samples (32 per mini-batch) -2023-04-26 20:15:11,050 - Epoch: [43][ 50/ 155] Loss 3.761623 mAP 0.316665 -2023-04-26 20:15:17,273 - Epoch: [43][ 100/ 155] Loss 3.750147 mAP 0.309042 -2023-04-26 20:15:23,498 - Epoch: [43][ 150/ 155] Loss 3.773217 mAP 0.306203 -2023-04-26 20:15:24,060 - Epoch: [43][ 155/ 155] Loss 3.775010 mAP 0.305906 -2023-04-26 20:15:24,131 - ==> mAP: 0.30591 Loss: 3.775 - -2023-04-26 20:15:24,135 - ==> Best [mAP: 0.305906 vloss: 3.775010 Sparsity:0.00 Params: 2177088 on epoch: 43] -2023-04-26 20:15:24,135 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:15:24,187 - - -2023-04-26 20:15:24,187 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:15:35,109 - Epoch: [44][ 50/ 518] Overall Loss 3.431258 Objective Loss 3.431258 LR 0.002000 Time 0.218376 -2023-04-26 20:15:45,373 - Epoch: [44][ 100/ 518] Overall Loss 3.480227 Objective Loss 3.480227 LR 0.002000 Time 0.211814 -2023-04-26 20:15:55,631 - Epoch: [44][ 150/ 518] Overall Loss 3.474218 Objective Loss 3.474218 LR 0.002000 Time 0.209582 -2023-04-26 20:16:05,899 - Epoch: [44][ 200/ 518] Overall Loss 3.457116 Objective Loss 3.457116 LR 0.002000 Time 0.208520 -2023-04-26 20:16:16,104 - Epoch: [44][ 250/ 518] Overall Loss 3.448930 Objective Loss 3.448930 LR 0.002000 Time 0.207629 -2023-04-26 20:16:26,303 - Epoch: [44][ 300/ 518] Overall Loss 3.461723 Objective Loss 3.461723 LR 0.002000 Time 0.207014 -2023-04-26 20:16:36,522 - Epoch: [44][ 350/ 518] Overall Loss 3.473674 Objective Loss 3.473674 LR 0.002000 Time 0.206635 -2023-04-26 20:16:46,788 - Epoch: [44][ 400/ 518] Overall Loss 3.474594 Objective Loss 3.474594 LR 0.002000 Time 0.206467 -2023-04-26 20:16:57,072 - Epoch: [44][ 450/ 518] Overall Loss 3.472349 Objective Loss 3.472349 LR 0.002000 Time 0.206376 -2023-04-26 20:17:07,213 - Epoch: [44][ 
500/ 518] Overall Loss 3.473933 Objective Loss 3.473933 LR 0.002000 Time 0.206017 -2023-04-26 20:17:10,789 - Epoch: [44][ 518/ 518] Overall Loss 3.477551 Objective Loss 3.477551 LR 0.002000 Time 0.205760 -2023-04-26 20:17:10,861 - --- validate (epoch=44)----------- -2023-04-26 20:17:10,862 - 4952 samples (32 per mini-batch) -2023-04-26 20:17:17,273 - Epoch: [44][ 50/ 155] Loss 3.816290 mAP 0.268487 -2023-04-26 20:17:23,306 - Epoch: [44][ 100/ 155] Loss 3.770340 mAP 0.279987 -2023-04-26 20:17:29,309 - Epoch: [44][ 150/ 155] Loss 3.768426 mAP 0.282681 -2023-04-26 20:17:29,839 - Epoch: [44][ 155/ 155] Loss 3.770551 mAP 0.282786 -2023-04-26 20:17:29,903 - ==> mAP: 0.28279 Loss: 3.771 - -2023-04-26 20:17:29,907 - ==> Best [mAP: 0.305906 vloss: 3.775010 Sparsity:0.00 Params: 2177088 on epoch: 43] -2023-04-26 20:17:29,907 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:17:29,944 - - -2023-04-26 20:17:29,944 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:17:40,993 - Epoch: [45][ 50/ 518] Overall Loss 3.490834 Objective Loss 3.490834 LR 0.002000 Time 0.220927 -2023-04-26 20:17:51,214 - Epoch: [45][ 100/ 518] Overall Loss 3.460399 Objective Loss 3.460399 LR 0.002000 Time 0.212657 -2023-04-26 20:18:01,351 - Epoch: [45][ 150/ 518] Overall Loss 3.476447 Objective Loss 3.476447 LR 0.002000 Time 0.209344 -2023-04-26 20:18:11,553 - Epoch: [45][ 200/ 518] Overall Loss 3.481978 Objective Loss 3.481978 LR 0.002000 Time 0.208010 -2023-04-26 20:18:21,826 - Epoch: [45][ 250/ 518] Overall Loss 3.481621 Objective Loss 3.481621 LR 0.002000 Time 0.207491 -2023-04-26 20:18:32,013 - Epoch: [45][ 300/ 518] Overall Loss 3.476190 Objective Loss 3.476190 LR 0.002000 Time 0.206861 -2023-04-26 20:18:42,380 - Epoch: [45][ 350/ 518] Overall Loss 3.481439 Objective Loss 3.481439 LR 0.002000 Time 0.206925 -2023-04-26 20:18:52,624 - Epoch: [45][ 400/ 518] Overall Loss 3.480459 Objective Loss 3.480459 LR 0.002000 Time 0.206664 -2023-04-26 20:19:02,797 
- Epoch: [45][ 450/ 518] Overall Loss 3.475061 Objective Loss 3.475061 LR 0.002000 Time 0.206305 -2023-04-26 20:19:13,050 - Epoch: [45][ 500/ 518] Overall Loss 3.470173 Objective Loss 3.470173 LR 0.002000 Time 0.206178 -2023-04-26 20:19:16,584 - Epoch: [45][ 518/ 518] Overall Loss 3.465247 Objective Loss 3.465247 LR 0.002000 Time 0.205834 -2023-04-26 20:19:16,655 - --- validate (epoch=45)----------- -2023-04-26 20:19:16,656 - 4952 samples (32 per mini-batch) -2023-04-26 20:19:23,257 - Epoch: [45][ 50/ 155] Loss 3.766533 mAP 0.317185 -2023-04-26 20:19:29,549 - Epoch: [45][ 100/ 155] Loss 3.765398 mAP 0.311178 -2023-04-26 20:19:35,744 - Epoch: [45][ 150/ 155] Loss 3.769913 mAP 0.309027 -2023-04-26 20:19:36,290 - Epoch: [45][ 155/ 155] Loss 3.761277 mAP 0.309682 -2023-04-26 20:19:36,359 - ==> mAP: 0.30968 Loss: 3.761 - -2023-04-26 20:19:36,363 - ==> Best [mAP: 0.309682 vloss: 3.761277 Sparsity:0.00 Params: 2177088 on epoch: 45] -2023-04-26 20:19:36,363 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:19:36,415 - - -2023-04-26 20:19:36,416 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:19:47,499 - Epoch: [46][ 50/ 518] Overall Loss 3.497721 Objective Loss 3.497721 LR 0.002000 Time 0.221604 -2023-04-26 20:19:57,742 - Epoch: [46][ 100/ 518] Overall Loss 3.454764 Objective Loss 3.454764 LR 0.002000 Time 0.213218 -2023-04-26 20:20:08,023 - Epoch: [46][ 150/ 518] Overall Loss 3.443213 Objective Loss 3.443213 LR 0.002000 Time 0.210672 -2023-04-26 20:20:18,355 - Epoch: [46][ 200/ 518] Overall Loss 3.448681 Objective Loss 3.448681 LR 0.002000 Time 0.209656 -2023-04-26 20:20:28,622 - Epoch: [46][ 250/ 518] Overall Loss 3.460678 Objective Loss 3.460678 LR 0.002000 Time 0.208788 -2023-04-26 20:20:38,884 - Epoch: [46][ 300/ 518] Overall Loss 3.465977 Objective Loss 3.465977 LR 0.002000 Time 0.208190 -2023-04-26 20:20:49,161 - Epoch: [46][ 350/ 518] Overall Loss 3.467732 Objective Loss 3.467732 LR 0.002000 Time 0.207808 
-2023-04-26 20:20:59,328 - Epoch: [46][ 400/ 518] Overall Loss 3.463650 Objective Loss 3.463650 LR 0.002000 Time 0.207244 -2023-04-26 20:21:09,517 - Epoch: [46][ 450/ 518] Overall Loss 3.461741 Objective Loss 3.461741 LR 0.002000 Time 0.206857 -2023-04-26 20:21:19,755 - Epoch: [46][ 500/ 518] Overall Loss 3.463875 Objective Loss 3.463875 LR 0.002000 Time 0.206643 -2023-04-26 20:21:23,293 - Epoch: [46][ 518/ 518] Overall Loss 3.466009 Objective Loss 3.466009 LR 0.002000 Time 0.206292 -2023-04-26 20:21:23,365 - --- validate (epoch=46)----------- -2023-04-26 20:21:23,365 - 4952 samples (32 per mini-batch) -2023-04-26 20:21:29,832 - Epoch: [46][ 50/ 155] Loss 3.892489 mAP 0.285853 -2023-04-26 20:21:36,012 - Epoch: [46][ 100/ 155] Loss 3.827351 mAP 0.299977 -2023-04-26 20:21:42,125 - Epoch: [46][ 150/ 155] Loss 3.802919 mAP 0.304641 -2023-04-26 20:21:42,674 - Epoch: [46][ 155/ 155] Loss 3.801894 mAP 0.304101 -2023-04-26 20:21:42,745 - ==> mAP: 0.30410 Loss: 3.802 - -2023-04-26 20:21:42,748 - ==> Best [mAP: 0.309682 vloss: 3.761277 Sparsity:0.00 Params: 2177088 on epoch: 45] -2023-04-26 20:21:42,748 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:21:42,786 - - -2023-04-26 20:21:42,786 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:21:53,675 - Epoch: [47][ 50/ 518] Overall Loss 3.447303 Objective Loss 3.447303 LR 0.002000 Time 0.217715 -2023-04-26 20:22:03,857 - Epoch: [47][ 100/ 518] Overall Loss 3.440827 Objective Loss 3.440827 LR 0.002000 Time 0.210665 -2023-04-26 20:22:14,112 - Epoch: [47][ 150/ 518] Overall Loss 3.444653 Objective Loss 3.444653 LR 0.002000 Time 0.208799 -2023-04-26 20:22:24,447 - Epoch: [47][ 200/ 518] Overall Loss 3.450487 Objective Loss 3.450487 LR 0.002000 Time 0.208267 -2023-04-26 20:22:34,745 - Epoch: [47][ 250/ 518] Overall Loss 3.452310 Objective Loss 3.452310 LR 0.002000 Time 0.207796 -2023-04-26 20:22:44,929 - Epoch: [47][ 300/ 518] Overall Loss 3.443788 Objective Loss 3.443788 LR 0.002000 
Time 0.207105 -2023-04-26 20:22:55,160 - Epoch: [47][ 350/ 518] Overall Loss 3.443355 Objective Loss 3.443355 LR 0.002000 Time 0.206746 -2023-04-26 20:23:05,341 - Epoch: [47][ 400/ 518] Overall Loss 3.447309 Objective Loss 3.447309 LR 0.002000 Time 0.206352 -2023-04-26 20:23:15,627 - Epoch: [47][ 450/ 518] Overall Loss 3.441183 Objective Loss 3.441183 LR 0.002000 Time 0.206278 -2023-04-26 20:23:25,812 - Epoch: [47][ 500/ 518] Overall Loss 3.447577 Objective Loss 3.447577 LR 0.002000 Time 0.206017 -2023-04-26 20:23:29,347 - Epoch: [47][ 518/ 518] Overall Loss 3.445605 Objective Loss 3.445605 LR 0.002000 Time 0.205681 -2023-04-26 20:23:29,419 - --- validate (epoch=47)----------- -2023-04-26 20:23:29,420 - 4952 samples (32 per mini-batch) -2023-04-26 20:23:36,022 - Epoch: [47][ 50/ 155] Loss 3.842947 mAP 0.282188 -2023-04-26 20:23:42,286 - Epoch: [47][ 100/ 155] Loss 3.792546 mAP 0.291131 -2023-04-26 20:23:48,489 - Epoch: [47][ 150/ 155] Loss 3.792626 mAP 0.286176 -2023-04-26 20:23:49,043 - Epoch: [47][ 155/ 155] Loss 3.788381 mAP 0.285970 -2023-04-26 20:23:49,115 - ==> mAP: 0.28597 Loss: 3.788 - -2023-04-26 20:23:49,118 - ==> Best [mAP: 0.309682 vloss: 3.761277 Sparsity:0.00 Params: 2177088 on epoch: 45] -2023-04-26 20:23:49,118 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:23:49,156 - - -2023-04-26 20:23:49,156 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:24:00,045 - Epoch: [48][ 50/ 518] Overall Loss 3.406116 Objective Loss 3.406116 LR 0.002000 Time 0.217721 -2023-04-26 20:24:10,258 - Epoch: [48][ 100/ 518] Overall Loss 3.402988 Objective Loss 3.402988 LR 0.002000 Time 0.210974 -2023-04-26 20:24:20,458 - Epoch: [48][ 150/ 518] Overall Loss 3.420803 Objective Loss 3.420803 LR 0.002000 Time 0.208638 -2023-04-26 20:24:30,759 - Epoch: [48][ 200/ 518] Overall Loss 3.427712 Objective Loss 3.427712 LR 0.002000 Time 0.207977 -2023-04-26 20:24:40,956 - Epoch: [48][ 250/ 518] Overall Loss 3.437820 Objective Loss 
3.437820 LR 0.002000 Time 0.207164 -2023-04-26 20:24:51,190 - Epoch: [48][ 300/ 518] Overall Loss 3.442871 Objective Loss 3.442871 LR 0.002000 Time 0.206743 -2023-04-26 20:25:01,497 - Epoch: [48][ 350/ 518] Overall Loss 3.442638 Objective Loss 3.442638 LR 0.002000 Time 0.206652 -2023-04-26 20:25:11,768 - Epoch: [48][ 400/ 518] Overall Loss 3.441927 Objective Loss 3.441927 LR 0.002000 Time 0.206495 -2023-04-26 20:25:21,991 - Epoch: [48][ 450/ 518] Overall Loss 3.440971 Objective Loss 3.440971 LR 0.002000 Time 0.206265 -2023-04-26 20:25:32,243 - Epoch: [48][ 500/ 518] Overall Loss 3.446664 Objective Loss 3.446664 LR 0.002000 Time 0.206139 -2023-04-26 20:25:35,805 - Epoch: [48][ 518/ 518] Overall Loss 3.442537 Objective Loss 3.442537 LR 0.002000 Time 0.205850 -2023-04-26 20:25:35,876 - --- validate (epoch=48)----------- -2023-04-26 20:25:35,876 - 4952 samples (32 per mini-batch) -2023-04-26 20:25:42,510 - Epoch: [48][ 50/ 155] Loss 3.723890 mAP 0.327350 -2023-04-26 20:25:48,804 - Epoch: [48][ 100/ 155] Loss 3.714814 mAP 0.337494 -2023-04-26 20:25:55,052 - Epoch: [48][ 150/ 155] Loss 3.731914 mAP 0.337095 -2023-04-26 20:25:55,621 - Epoch: [48][ 155/ 155] Loss 3.731215 mAP 0.338533 -2023-04-26 20:25:55,687 - ==> mAP: 0.33853 Loss: 3.731 - -2023-04-26 20:25:55,691 - ==> Best [mAP: 0.338533 vloss: 3.731215 Sparsity:0.00 Params: 2177088 on epoch: 48] -2023-04-26 20:25:55,691 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:25:55,743 - - -2023-04-26 20:25:55,743 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:26:06,872 - Epoch: [49][ 50/ 518] Overall Loss 3.428500 Objective Loss 3.428500 LR 0.002000 Time 0.222512 -2023-04-26 20:26:17,113 - Epoch: [49][ 100/ 518] Overall Loss 3.433432 Objective Loss 3.433432 LR 0.002000 Time 0.213654 -2023-04-26 20:26:27,375 - Epoch: [49][ 150/ 518] Overall Loss 3.443077 Objective Loss 3.443077 LR 0.002000 Time 0.210836 -2023-04-26 20:26:37,571 - Epoch: [49][ 200/ 518] Overall Loss 3.437415 
Objective Loss 3.437415 LR 0.002000 Time 0.209102 -2023-04-26 20:26:47,893 - Epoch: [49][ 250/ 518] Overall Loss 3.419378 Objective Loss 3.419378 LR 0.002000 Time 0.208560 -2023-04-26 20:26:58,072 - Epoch: [49][ 300/ 518] Overall Loss 3.419453 Objective Loss 3.419453 LR 0.002000 Time 0.207726 -2023-04-26 20:27:08,288 - Epoch: [49][ 350/ 518] Overall Loss 3.417985 Objective Loss 3.417985 LR 0.002000 Time 0.207235 -2023-04-26 20:27:18,613 - Epoch: [49][ 400/ 518] Overall Loss 3.420480 Objective Loss 3.420480 LR 0.002000 Time 0.207138 -2023-04-26 20:27:28,771 - Epoch: [49][ 450/ 518] Overall Loss 3.419378 Objective Loss 3.419378 LR 0.002000 Time 0.206692 -2023-04-26 20:27:39,069 - Epoch: [49][ 500/ 518] Overall Loss 3.418248 Objective Loss 3.418248 LR 0.002000 Time 0.206616 -2023-04-26 20:27:42,565 - Epoch: [49][ 518/ 518] Overall Loss 3.415481 Objective Loss 3.415481 LR 0.002000 Time 0.206184 -2023-04-26 20:27:42,637 - --- validate (epoch=49)----------- -2023-04-26 20:27:42,637 - 4952 samples (32 per mini-batch) -2023-04-26 20:27:49,378 - Epoch: [49][ 50/ 155] Loss 3.651460 mAP 0.334100 -2023-04-26 20:27:55,699 - Epoch: [49][ 100/ 155] Loss 3.653357 mAP 0.331194 -2023-04-26 20:28:02,014 - Epoch: [49][ 150/ 155] Loss 3.679575 mAP 0.330264 -2023-04-26 20:28:02,583 - Epoch: [49][ 155/ 155] Loss 3.681447 mAP 0.327632 -2023-04-26 20:28:02,655 - ==> mAP: 0.32763 Loss: 3.681 - -2023-04-26 20:28:02,659 - ==> Best [mAP: 0.338533 vloss: 3.731215 Sparsity:0.00 Params: 2177088 on epoch: 48] -2023-04-26 20:28:02,659 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:28:02,697 - - -2023-04-26 20:28:02,697 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:28:13,754 - Epoch: [50][ 50/ 518] Overall Loss 3.304751 Objective Loss 3.304751 LR 0.000500 Time 0.221086 -2023-04-26 20:28:24,010 - Epoch: [50][ 100/ 518] Overall Loss 3.283023 Objective Loss 3.283023 LR 0.000500 Time 0.213084 -2023-04-26 20:28:34,163 - Epoch: [50][ 150/ 518] Overall 
Loss 3.300935 Objective Loss 3.300935 LR 0.000500 Time 0.209735 -2023-04-26 20:28:44,348 - Epoch: [50][ 200/ 518] Overall Loss 3.296223 Objective Loss 3.296223 LR 0.000500 Time 0.208217 -2023-04-26 20:28:54,569 - Epoch: [50][ 250/ 518] Overall Loss 3.297551 Objective Loss 3.297551 LR 0.000500 Time 0.207449 -2023-04-26 20:29:04,825 - Epoch: [50][ 300/ 518] Overall Loss 3.295640 Objective Loss 3.295640 LR 0.000500 Time 0.207056 -2023-04-26 20:29:15,038 - Epoch: [50][ 350/ 518] Overall Loss 3.293723 Objective Loss 3.293723 LR 0.000500 Time 0.206652 -2023-04-26 20:29:25,232 - Epoch: [50][ 400/ 518] Overall Loss 3.286819 Objective Loss 3.286819 LR 0.000500 Time 0.206301 -2023-04-26 20:29:35,529 - Epoch: [50][ 450/ 518] Overall Loss 3.288484 Objective Loss 3.288484 LR 0.000500 Time 0.206259 -2023-04-26 20:29:45,824 - Epoch: [50][ 500/ 518] Overall Loss 3.277300 Objective Loss 3.277300 LR 0.000500 Time 0.206219 -2023-04-26 20:29:49,398 - Epoch: [50][ 518/ 518] Overall Loss 3.274491 Objective Loss 3.274491 LR 0.000500 Time 0.205951 -2023-04-26 20:29:49,470 - --- validate (epoch=50)----------- -2023-04-26 20:29:49,470 - 4952 samples (32 per mini-batch) -2023-04-26 20:29:56,153 - Epoch: [50][ 50/ 155] Loss 3.472088 mAP 0.384031 -2023-04-26 20:30:02,513 - Epoch: [50][ 100/ 155] Loss 3.452169 mAP 0.376159 -2023-04-26 20:30:08,808 - Epoch: [50][ 150/ 155] Loss 3.447472 mAP 0.380398 -2023-04-26 20:30:09,363 - Epoch: [50][ 155/ 155] Loss 3.447414 mAP 0.379632 -2023-04-26 20:30:09,447 - ==> mAP: 0.37963 Loss: 3.447 - -2023-04-26 20:30:09,451 - ==> Best [mAP: 0.379632 vloss: 3.447414 Sparsity:0.00 Params: 2177088 on epoch: 50] -2023-04-26 20:30:09,451 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:30:09,503 - - -2023-04-26 20:30:09,504 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:30:20,431 - Epoch: [51][ 50/ 518] Overall Loss 3.284614 Objective Loss 3.284614 LR 0.000500 Time 0.218490 -2023-04-26 20:30:30,585 - Epoch: [51][ 
100/ 518] Overall Loss 3.248077 Objective Loss 3.248077 LR 0.000500 Time 0.210761 -2023-04-26 20:30:40,815 - Epoch: [51][ 150/ 518] Overall Loss 3.261739 Objective Loss 3.261739 LR 0.000500 Time 0.208700 -2023-04-26 20:30:51,049 - Epoch: [51][ 200/ 518] Overall Loss 3.242826 Objective Loss 3.242826 LR 0.000500 Time 0.207689 -2023-04-26 20:31:01,179 - Epoch: [51][ 250/ 518] Overall Loss 3.241424 Objective Loss 3.241424 LR 0.000500 Time 0.206662 -2023-04-26 20:31:11,518 - Epoch: [51][ 300/ 518] Overall Loss 3.249414 Objective Loss 3.249414 LR 0.000500 Time 0.206679 -2023-04-26 20:31:21,752 - Epoch: [51][ 350/ 518] Overall Loss 3.244378 Objective Loss 3.244378 LR 0.000500 Time 0.206385 -2023-04-26 20:31:31,971 - Epoch: [51][ 400/ 518] Overall Loss 3.242994 Objective Loss 3.242994 LR 0.000500 Time 0.206133 -2023-04-26 20:31:42,196 - Epoch: [51][ 450/ 518] Overall Loss 3.237360 Objective Loss 3.237360 LR 0.000500 Time 0.205948 -2023-04-26 20:31:52,429 - Epoch: [51][ 500/ 518] Overall Loss 3.236309 Objective Loss 3.236309 LR 0.000500 Time 0.205816 -2023-04-26 20:31:55,954 - Epoch: [51][ 518/ 518] Overall Loss 3.232775 Objective Loss 3.232775 LR 0.000500 Time 0.205467 -2023-04-26 20:31:56,025 - --- validate (epoch=51)----------- -2023-04-26 20:31:56,025 - 4952 samples (32 per mini-batch) -2023-04-26 20:32:02,771 - Epoch: [51][ 50/ 155] Loss 3.435218 mAP 0.369394 -2023-04-26 20:32:09,049 - Epoch: [51][ 100/ 155] Loss 3.441681 mAP 0.366511 -2023-04-26 20:32:15,370 - Epoch: [51][ 150/ 155] Loss 3.419752 mAP 0.377991 -2023-04-26 20:32:15,924 - Epoch: [51][ 155/ 155] Loss 3.421496 mAP 0.375533 -2023-04-26 20:32:15,998 - ==> mAP: 0.37553 Loss: 3.421 - -2023-04-26 20:32:16,002 - ==> Best [mAP: 0.379632 vloss: 3.447414 Sparsity:0.00 Params: 2177088 on epoch: 50] -2023-04-26 20:32:16,002 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:32:16,040 - - -2023-04-26 20:32:16,040 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:32:26,972 
- Epoch: [52][ 50/ 518] Overall Loss 3.207502 Objective Loss 3.207502 LR 0.000500 Time 0.218588 -2023-04-26 20:32:37,079 - Epoch: [52][ 100/ 518] Overall Loss 3.224763 Objective Loss 3.224763 LR 0.000500 Time 0.210344 -2023-04-26 20:32:47,250 - Epoch: [52][ 150/ 518] Overall Loss 3.219599 Objective Loss 3.219599 LR 0.000500 Time 0.208025 -2023-04-26 20:32:57,425 - Epoch: [52][ 200/ 518] Overall Loss 3.211481 Objective Loss 3.211481 LR 0.000500 Time 0.206889 -2023-04-26 20:33:07,736 - Epoch: [52][ 250/ 518] Overall Loss 3.209591 Objective Loss 3.209591 LR 0.000500 Time 0.206748 -2023-04-26 20:33:18,020 - Epoch: [52][ 300/ 518] Overall Loss 3.212403 Objective Loss 3.212403 LR 0.000500 Time 0.206565 -2023-04-26 20:33:28,246 - Epoch: [52][ 350/ 518] Overall Loss 3.210032 Objective Loss 3.210032 LR 0.000500 Time 0.206267 -2023-04-26 20:33:38,573 - Epoch: [52][ 400/ 518] Overall Loss 3.211082 Objective Loss 3.211082 LR 0.000500 Time 0.206297 -2023-04-26 20:33:48,769 - Epoch: [52][ 450/ 518] Overall Loss 3.209333 Objective Loss 3.209333 LR 0.000500 Time 0.206029 -2023-04-26 20:33:58,967 - Epoch: [52][ 500/ 518] Overall Loss 3.212315 Objective Loss 3.212315 LR 0.000500 Time 0.205819 -2023-04-26 20:34:02,482 - Epoch: [52][ 518/ 518] Overall Loss 3.210992 Objective Loss 3.210992 LR 0.000500 Time 0.205453 -2023-04-26 20:34:02,554 - --- validate (epoch=52)----------- -2023-04-26 20:34:02,555 - 4952 samples (32 per mini-batch) -2023-04-26 20:34:09,233 - Epoch: [52][ 50/ 155] Loss 3.405044 mAP 0.362687 -2023-04-26 20:34:15,538 - Epoch: [52][ 100/ 155] Loss 3.433267 mAP 0.372485 -2023-04-26 20:34:21,876 - Epoch: [52][ 150/ 155] Loss 3.429350 mAP 0.377551 -2023-04-26 20:34:22,442 - Epoch: [52][ 155/ 155] Loss 3.431594 mAP 0.375890 -2023-04-26 20:34:22,522 - ==> mAP: 0.37589 Loss: 3.432 - -2023-04-26 20:34:22,526 - ==> Best [mAP: 0.379632 vloss: 3.447414 Sparsity:0.00 Params: 2177088 on epoch: 50] -2023-04-26 20:34:22,526 - Saving checkpoint to: 
logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:34:22,563 - - -2023-04-26 20:34:22,563 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:34:33,571 - Epoch: [53][ 50/ 518] Overall Loss 3.189725 Objective Loss 3.189725 LR 0.000500 Time 0.220099 -2023-04-26 20:34:43,776 - Epoch: [53][ 100/ 518] Overall Loss 3.189058 Objective Loss 3.189058 LR 0.000500 Time 0.212086 -2023-04-26 20:34:53,998 - Epoch: [53][ 150/ 518] Overall Loss 3.190356 Objective Loss 3.190356 LR 0.000500 Time 0.209527 -2023-04-26 20:35:04,327 - Epoch: [53][ 200/ 518] Overall Loss 3.195662 Objective Loss 3.195662 LR 0.000500 Time 0.208781 -2023-04-26 20:35:14,550 - Epoch: [53][ 250/ 518] Overall Loss 3.199295 Objective Loss 3.199295 LR 0.000500 Time 0.207912 -2023-04-26 20:35:24,850 - Epoch: [53][ 300/ 518] Overall Loss 3.202164 Objective Loss 3.202164 LR 0.000500 Time 0.207585 -2023-04-26 20:35:35,087 - Epoch: [53][ 350/ 518] Overall Loss 3.206032 Objective Loss 3.206032 LR 0.000500 Time 0.207174 -2023-04-26 20:35:45,327 - Epoch: [53][ 400/ 518] Overall Loss 3.204481 Objective Loss 3.204481 LR 0.000500 Time 0.206874 -2023-04-26 20:35:55,575 - Epoch: [53][ 450/ 518] Overall Loss 3.207273 Objective Loss 3.207273 LR 0.000500 Time 0.206656 -2023-04-26 20:36:05,798 - Epoch: [53][ 500/ 518] Overall Loss 3.206602 Objective Loss 3.206602 LR 0.000500 Time 0.206434 -2023-04-26 20:36:09,354 - Epoch: [53][ 518/ 518] Overall Loss 3.204893 Objective Loss 3.204893 LR 0.000500 Time 0.206125 -2023-04-26 20:36:09,428 - --- validate (epoch=53)----------- -2023-04-26 20:36:09,429 - 4952 samples (32 per mini-batch) -2023-04-26 20:36:16,006 - Epoch: [53][ 50/ 155] Loss 3.398236 mAP 0.371203 -2023-04-26 20:36:22,273 - Epoch: [53][ 100/ 155] Loss 3.433710 mAP 0.376964 -2023-04-26 20:36:28,528 - Epoch: [53][ 150/ 155] Loss 3.424578 mAP 0.377383 -2023-04-26 20:36:29,086 - Epoch: [53][ 155/ 155] Loss 3.426139 mAP 0.376568 -2023-04-26 20:36:29,155 - ==> mAP: 0.37657 Loss: 3.426 - -2023-04-26 
20:36:29,158 - ==> Best [mAP: 0.379632 vloss: 3.447414 Sparsity:0.00 Params: 2177088 on epoch: 50] -2023-04-26 20:36:29,159 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:36:29,221 - - -2023-04-26 20:36:29,221 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:36:40,285 - Epoch: [54][ 50/ 518] Overall Loss 3.208712 Objective Loss 3.208712 LR 0.000500 Time 0.221189 -2023-04-26 20:36:50,436 - Epoch: [54][ 100/ 518] Overall Loss 3.179120 Objective Loss 3.179120 LR 0.000500 Time 0.212090 -2023-04-26 20:37:00,661 - Epoch: [54][ 150/ 518] Overall Loss 3.192209 Objective Loss 3.192209 LR 0.000500 Time 0.209548 -2023-04-26 20:37:10,963 - Epoch: [54][ 200/ 518] Overall Loss 3.189396 Objective Loss 3.189396 LR 0.000500 Time 0.208666 -2023-04-26 20:37:21,233 - Epoch: [54][ 250/ 518] Overall Loss 3.195766 Objective Loss 3.195766 LR 0.000500 Time 0.208005 -2023-04-26 20:37:31,460 - Epoch: [54][ 300/ 518] Overall Loss 3.199664 Objective Loss 3.199664 LR 0.000500 Time 0.207423 -2023-04-26 20:37:41,617 - Epoch: [54][ 350/ 518] Overall Loss 3.210045 Objective Loss 3.210045 LR 0.000500 Time 0.206807 -2023-04-26 20:37:51,798 - Epoch: [54][ 400/ 518] Overall Loss 3.209609 Objective Loss 3.209609 LR 0.000500 Time 0.206404 -2023-04-26 20:38:02,033 - Epoch: [54][ 450/ 518] Overall Loss 3.206645 Objective Loss 3.206645 LR 0.000500 Time 0.206210 -2023-04-26 20:38:12,215 - Epoch: [54][ 500/ 518] Overall Loss 3.204838 Objective Loss 3.204838 LR 0.000500 Time 0.205951 -2023-04-26 20:38:15,849 - Epoch: [54][ 518/ 518] Overall Loss 3.204085 Objective Loss 3.204085 LR 0.000500 Time 0.205808 -2023-04-26 20:38:15,919 - --- validate (epoch=54)----------- -2023-04-26 20:38:15,919 - 4952 samples (32 per mini-batch) -2023-04-26 20:38:22,594 - Epoch: [54][ 50/ 155] Loss 3.421279 mAP 0.384380 -2023-04-26 20:38:28,939 - Epoch: [54][ 100/ 155] Loss 3.401631 mAP 0.380982 -2023-04-26 20:38:35,337 - Epoch: [54][ 150/ 155] Loss 3.394708 mAP 0.388394 
-2023-04-26 20:38:35,896 - Epoch: [54][ 155/ 155] Loss 3.397305 mAP 0.387379 -2023-04-26 20:38:35,967 - ==> mAP: 0.38738 Loss: 3.397 - -2023-04-26 20:38:35,970 - ==> Best [mAP: 0.387379 vloss: 3.397305 Sparsity:0.00 Params: 2177088 on epoch: 54] -2023-04-26 20:38:35,971 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:38:36,022 - - -2023-04-26 20:38:36,022 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:38:47,186 - Epoch: [55][ 50/ 518] Overall Loss 3.132390 Objective Loss 3.132390 LR 0.000500 Time 0.223222 -2023-04-26 20:38:57,372 - Epoch: [55][ 100/ 518] Overall Loss 3.163234 Objective Loss 3.163234 LR 0.000500 Time 0.213447 -2023-04-26 20:39:07,553 - Epoch: [55][ 150/ 518] Overall Loss 3.163134 Objective Loss 3.163134 LR 0.000500 Time 0.210161 -2023-04-26 20:39:17,785 - Epoch: [55][ 200/ 518] Overall Loss 3.160218 Objective Loss 3.160218 LR 0.000500 Time 0.208772 -2023-04-26 20:39:28,135 - Epoch: [55][ 250/ 518] Overall Loss 3.176275 Objective Loss 3.176275 LR 0.000500 Time 0.208413 -2023-04-26 20:39:38,338 - Epoch: [55][ 300/ 518] Overall Loss 3.178024 Objective Loss 3.178024 LR 0.000500 Time 0.207681 -2023-04-26 20:39:48,642 - Epoch: [55][ 350/ 518] Overall Loss 3.173631 Objective Loss 3.173631 LR 0.000500 Time 0.207449 -2023-04-26 20:39:58,787 - Epoch: [55][ 400/ 518] Overall Loss 3.177462 Objective Loss 3.177462 LR 0.000500 Time 0.206875 -2023-04-26 20:40:08,888 - Epoch: [55][ 450/ 518] Overall Loss 3.176026 Objective Loss 3.176026 LR 0.000500 Time 0.206331 -2023-04-26 20:40:19,097 - Epoch: [55][ 500/ 518] Overall Loss 3.180188 Objective Loss 3.180188 LR 0.000500 Time 0.206113 -2023-04-26 20:40:22,632 - Epoch: [55][ 518/ 518] Overall Loss 3.181978 Objective Loss 3.181978 LR 0.000500 Time 0.205775 -2023-04-26 20:40:22,703 - --- validate (epoch=55)----------- -2023-04-26 20:40:22,703 - 4952 samples (32 per mini-batch) -2023-04-26 20:40:29,454 - Epoch: [55][ 50/ 155] Loss 3.415345 mAP 0.407783 -2023-04-26 
20:40:35,883 - Epoch: [55][ 100/ 155] Loss 3.409329 mAP 0.402358 -2023-04-26 20:40:42,282 - Epoch: [55][ 150/ 155] Loss 3.442645 mAP 0.395382 -2023-04-26 20:40:42,833 - Epoch: [55][ 155/ 155] Loss 3.442725 mAP 0.394178 -2023-04-26 20:40:42,905 - ==> mAP: 0.39418 Loss: 3.443 - -2023-04-26 20:40:42,909 - ==> Best [mAP: 0.394178 vloss: 3.442725 Sparsity:0.00 Params: 2177088 on epoch: 55] -2023-04-26 20:40:42,909 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:40:42,963 - - -2023-04-26 20:40:42,963 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:40:54,018 - Epoch: [56][ 50/ 518] Overall Loss 3.205225 Objective Loss 3.205225 LR 0.000500 Time 0.221057 -2023-04-26 20:41:04,186 - Epoch: [56][ 100/ 518] Overall Loss 3.199788 Objective Loss 3.199788 LR 0.000500 Time 0.212189 -2023-04-26 20:41:14,445 - Epoch: [56][ 150/ 518] Overall Loss 3.213804 Objective Loss 3.213804 LR 0.000500 Time 0.209839 -2023-04-26 20:41:24,709 - Epoch: [56][ 200/ 518] Overall Loss 3.196884 Objective Loss 3.196884 LR 0.000500 Time 0.208693 -2023-04-26 20:41:35,052 - Epoch: [56][ 250/ 518] Overall Loss 3.194875 Objective Loss 3.194875 LR 0.000500 Time 0.208318 -2023-04-26 20:41:45,209 - Epoch: [56][ 300/ 518] Overall Loss 3.191811 Objective Loss 3.191811 LR 0.000500 Time 0.207451 -2023-04-26 20:41:55,367 - Epoch: [56][ 350/ 518] Overall Loss 3.185650 Objective Loss 3.185650 LR 0.000500 Time 0.206833 -2023-04-26 20:42:05,497 - Epoch: [56][ 400/ 518] Overall Loss 3.180263 Objective Loss 3.180263 LR 0.000500 Time 0.206299 -2023-04-26 20:42:15,638 - Epoch: [56][ 450/ 518] Overall Loss 3.185979 Objective Loss 3.185979 LR 0.000500 Time 0.205910 -2023-04-26 20:42:25,832 - Epoch: [56][ 500/ 518] Overall Loss 3.178927 Objective Loss 3.178927 LR 0.000500 Time 0.205703 -2023-04-26 20:42:29,316 - Epoch: [56][ 518/ 518] Overall Loss 3.176396 Objective Loss 3.176396 LR 0.000500 Time 0.205280 -2023-04-26 20:42:29,388 - --- validate (epoch=56)----------- -2023-04-26 
20:42:29,388 - 4952 samples (32 per mini-batch) -2023-04-26 20:42:36,101 - Epoch: [56][ 50/ 155] Loss 3.378481 mAP 0.411486 -2023-04-26 20:42:42,392 - Epoch: [56][ 100/ 155] Loss 3.393863 mAP 0.397495 -2023-04-26 20:42:48,709 - Epoch: [56][ 150/ 155] Loss 3.396721 mAP 0.392102 -2023-04-26 20:42:49,275 - Epoch: [56][ 155/ 155] Loss 3.396443 mAP 0.393101 -2023-04-26 20:42:49,346 - ==> mAP: 0.39310 Loss: 3.396 - -2023-04-26 20:42:49,349 - ==> Best [mAP: 0.394178 vloss: 3.442725 Sparsity:0.00 Params: 2177088 on epoch: 55] -2023-04-26 20:42:49,349 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:42:49,387 - - -2023-04-26 20:42:49,387 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:43:00,289 - Epoch: [57][ 50/ 518] Overall Loss 3.154996 Objective Loss 3.154996 LR 0.000500 Time 0.217969 -2023-04-26 20:43:10,512 - Epoch: [57][ 100/ 518] Overall Loss 3.160534 Objective Loss 3.160534 LR 0.000500 Time 0.211203 -2023-04-26 20:43:20,731 - Epoch: [57][ 150/ 518] Overall Loss 3.160064 Objective Loss 3.160064 LR 0.000500 Time 0.208917 -2023-04-26 20:43:30,996 - Epoch: [57][ 200/ 518] Overall Loss 3.153480 Objective Loss 3.153480 LR 0.000500 Time 0.208006 -2023-04-26 20:43:41,238 - Epoch: [57][ 250/ 518] Overall Loss 3.156463 Objective Loss 3.156463 LR 0.000500 Time 0.207365 -2023-04-26 20:43:51,482 - Epoch: [57][ 300/ 518] Overall Loss 3.153686 Objective Loss 3.153686 LR 0.000500 Time 0.206944 -2023-04-26 20:44:01,746 - Epoch: [57][ 350/ 518] Overall Loss 3.159447 Objective Loss 3.159447 LR 0.000500 Time 0.206702 -2023-04-26 20:44:11,965 - Epoch: [57][ 400/ 518] Overall Loss 3.163791 Objective Loss 3.163791 LR 0.000500 Time 0.206408 -2023-04-26 20:44:22,205 - Epoch: [57][ 450/ 518] Overall Loss 3.166946 Objective Loss 3.166946 LR 0.000500 Time 0.206227 -2023-04-26 20:44:32,331 - Epoch: [57][ 500/ 518] Overall Loss 3.172811 Objective Loss 3.172811 LR 0.000500 Time 0.205853 -2023-04-26 20:44:35,896 - Epoch: [57][ 518/ 518] Overall 
Loss 3.174809 Objective Loss 3.174809 LR 0.000500 Time 0.205580 -2023-04-26 20:44:35,967 - --- validate (epoch=57)----------- -2023-04-26 20:44:35,967 - 4952 samples (32 per mini-batch) -2023-04-26 20:44:42,747 - Epoch: [57][ 50/ 155] Loss 3.465960 mAP 0.395502 -2023-04-26 20:44:49,189 - Epoch: [57][ 100/ 155] Loss 3.441273 mAP 0.392408 -2023-04-26 20:44:55,541 - Epoch: [57][ 150/ 155] Loss 3.433553 mAP 0.388557 -2023-04-26 20:44:56,107 - Epoch: [57][ 155/ 155] Loss 3.433074 mAP 0.388324 -2023-04-26 20:44:56,175 - ==> mAP: 0.38832 Loss: 3.433 - -2023-04-26 20:44:56,178 - ==> Best [mAP: 0.394178 vloss: 3.442725 Sparsity:0.00 Params: 2177088 on epoch: 55] -2023-04-26 20:44:56,178 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:44:56,216 - - -2023-04-26 20:44:56,216 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:45:07,263 - Epoch: [58][ 50/ 518] Overall Loss 3.179770 Objective Loss 3.179770 LR 0.000500 Time 0.220884 -2023-04-26 20:45:17,640 - Epoch: [58][ 100/ 518] Overall Loss 3.159803 Objective Loss 3.159803 LR 0.000500 Time 0.214197 -2023-04-26 20:45:27,815 - Epoch: [58][ 150/ 518] Overall Loss 3.156748 Objective Loss 3.156748 LR 0.000500 Time 0.210620 -2023-04-26 20:45:38,021 - Epoch: [58][ 200/ 518] Overall Loss 3.170234 Objective Loss 3.170234 LR 0.000500 Time 0.208986 -2023-04-26 20:45:48,282 - Epoch: [58][ 250/ 518] Overall Loss 3.170640 Objective Loss 3.170640 LR 0.000500 Time 0.208226 -2023-04-26 20:45:58,452 - Epoch: [58][ 300/ 518] Overall Loss 3.169076 Objective Loss 3.169076 LR 0.000500 Time 0.207415 -2023-04-26 20:46:08,755 - Epoch: [58][ 350/ 518] Overall Loss 3.169254 Objective Loss 3.169254 LR 0.000500 Time 0.207217 -2023-04-26 20:46:18,985 - Epoch: [58][ 400/ 518] Overall Loss 3.167323 Objective Loss 3.167323 LR 0.000500 Time 0.206887 -2023-04-26 20:46:29,197 - Epoch: [58][ 450/ 518] Overall Loss 3.170618 Objective Loss 3.170618 LR 0.000500 Time 0.206588 -2023-04-26 20:46:39,463 - Epoch: [58][ 
500/ 518] Overall Loss 3.164101 Objective Loss 3.164101 LR 0.000500 Time 0.206458 -2023-04-26 20:46:42,986 - Epoch: [58][ 518/ 518] Overall Loss 3.164473 Objective Loss 3.164473 LR 0.000500 Time 0.206084 -2023-04-26 20:46:43,056 - --- validate (epoch=58)----------- -2023-04-26 20:46:43,056 - 4952 samples (32 per mini-batch) -2023-04-26 20:46:49,727 - Epoch: [58][ 50/ 155] Loss 3.348744 mAP 0.412764 -2023-04-26 20:46:56,056 - Epoch: [58][ 100/ 155] Loss 3.366947 mAP 0.397784 -2023-04-26 20:47:02,431 - Epoch: [58][ 150/ 155] Loss 3.373718 mAP 0.395006 -2023-04-26 20:47:02,987 - Epoch: [58][ 155/ 155] Loss 3.374902 mAP 0.394528 -2023-04-26 20:47:03,065 - ==> mAP: 0.39453 Loss: 3.375 - -2023-04-26 20:47:03,069 - ==> Best [mAP: 0.394528 vloss: 3.374902 Sparsity:0.00 Params: 2177088 on epoch: 58] -2023-04-26 20:47:03,069 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:47:03,123 - - -2023-04-26 20:47:03,123 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:47:14,067 - Epoch: [59][ 50/ 518] Overall Loss 3.193687 Objective Loss 3.193687 LR 0.000500 Time 0.218812 -2023-04-26 20:47:24,309 - Epoch: [59][ 100/ 518] Overall Loss 3.179797 Objective Loss 3.179797 LR 0.000500 Time 0.211814 -2023-04-26 20:47:34,546 - Epoch: [59][ 150/ 518] Overall Loss 3.162212 Objective Loss 3.162212 LR 0.000500 Time 0.209444 -2023-04-26 20:47:44,766 - Epoch: [59][ 200/ 518] Overall Loss 3.145455 Objective Loss 3.145455 LR 0.000500 Time 0.208175 -2023-04-26 20:47:55,048 - Epoch: [59][ 250/ 518] Overall Loss 3.147992 Objective Loss 3.147992 LR 0.000500 Time 0.207664 -2023-04-26 20:48:05,281 - Epoch: [59][ 300/ 518] Overall Loss 3.142017 Objective Loss 3.142017 LR 0.000500 Time 0.207156 -2023-04-26 20:48:15,541 - Epoch: [59][ 350/ 518] Overall Loss 3.148018 Objective Loss 3.148018 LR 0.000500 Time 0.206872 -2023-04-26 20:48:25,851 - Epoch: [59][ 400/ 518] Overall Loss 3.149484 Objective Loss 3.149484 LR 0.000500 Time 0.206785 -2023-04-26 20:48:36,059 
- Epoch: [59][ 450/ 518] Overall Loss 3.149642 Objective Loss 3.149642 LR 0.000500 Time 0.206490 -2023-04-26 20:48:46,305 - Epoch: [59][ 500/ 518] Overall Loss 3.155940 Objective Loss 3.155940 LR 0.000500 Time 0.206328 -2023-04-26 20:48:49,864 - Epoch: [59][ 518/ 518] Overall Loss 3.154491 Objective Loss 3.154491 LR 0.000500 Time 0.206029 -2023-04-26 20:48:49,935 - --- validate (epoch=59)----------- -2023-04-26 20:48:49,936 - 4952 samples (32 per mini-batch) -2023-04-26 20:48:56,581 - Epoch: [59][ 50/ 155] Loss 3.383633 mAP 0.383294 -2023-04-26 20:49:02,922 - Epoch: [59][ 100/ 155] Loss 3.383028 mAP 0.389660 -2023-04-26 20:49:09,209 - Epoch: [59][ 150/ 155] Loss 3.397379 mAP 0.385795 -2023-04-26 20:49:09,793 - Epoch: [59][ 155/ 155] Loss 3.397242 mAP 0.386290 -2023-04-26 20:49:09,866 - ==> mAP: 0.38629 Loss: 3.397 - -2023-04-26 20:49:09,870 - ==> Best [mAP: 0.394528 vloss: 3.374902 Sparsity:0.00 Params: 2177088 on epoch: 58] -2023-04-26 20:49:09,870 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:49:09,907 - - -2023-04-26 20:49:09,907 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:49:20,816 - Epoch: [60][ 50/ 518] Overall Loss 3.107423 Objective Loss 3.107423 LR 0.000500 Time 0.218125 -2023-04-26 20:49:31,014 - Epoch: [60][ 100/ 518] Overall Loss 3.136068 Objective Loss 3.136068 LR 0.000500 Time 0.211020 -2023-04-26 20:49:41,220 - Epoch: [60][ 150/ 518] Overall Loss 3.141047 Objective Loss 3.141047 LR 0.000500 Time 0.208709 -2023-04-26 20:49:51,425 - Epoch: [60][ 200/ 518] Overall Loss 3.129932 Objective Loss 3.129932 LR 0.000500 Time 0.207550 -2023-04-26 20:50:01,643 - Epoch: [60][ 250/ 518] Overall Loss 3.140829 Objective Loss 3.140829 LR 0.000500 Time 0.206906 -2023-04-26 20:50:11,902 - Epoch: [60][ 300/ 518] Overall Loss 3.148227 Objective Loss 3.148227 LR 0.000500 Time 0.206611 -2023-04-26 20:50:22,153 - Epoch: [60][ 350/ 518] Overall Loss 3.146969 Objective Loss 3.146969 LR 0.000500 Time 0.206379 
-2023-04-26 20:50:32,325 - Epoch: [60][ 400/ 518] Overall Loss 3.143864 Objective Loss 3.143864 LR 0.000500 Time 0.206009 -2023-04-26 20:50:42,567 - Epoch: [60][ 450/ 518] Overall Loss 3.149313 Objective Loss 3.149313 LR 0.000500 Time 0.205875 -2023-04-26 20:50:52,760 - Epoch: [60][ 500/ 518] Overall Loss 3.148199 Objective Loss 3.148199 LR 0.000500 Time 0.205671 -2023-04-26 20:50:56,317 - Epoch: [60][ 518/ 518] Overall Loss 3.147255 Objective Loss 3.147255 LR 0.000500 Time 0.205389 -2023-04-26 20:50:56,387 - --- validate (epoch=60)----------- -2023-04-26 20:50:56,388 - 4952 samples (32 per mini-batch) -2023-04-26 20:51:03,153 - Epoch: [60][ 50/ 155] Loss 3.387491 mAP 0.391434 -2023-04-26 20:51:09,481 - Epoch: [60][ 100/ 155] Loss 3.390962 mAP 0.389449 -2023-04-26 20:51:15,787 - Epoch: [60][ 150/ 155] Loss 3.390970 mAP 0.394047 -2023-04-26 20:51:16,355 - Epoch: [60][ 155/ 155] Loss 3.388148 mAP 0.392963 -2023-04-26 20:51:16,421 - ==> mAP: 0.39296 Loss: 3.388 - -2023-04-26 20:51:16,425 - ==> Best [mAP: 0.394528 vloss: 3.374902 Sparsity:0.00 Params: 2177088 on epoch: 58] -2023-04-26 20:51:16,425 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:51:16,462 - - -2023-04-26 20:51:16,462 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:51:27,301 - Epoch: [61][ 50/ 518] Overall Loss 3.117575 Objective Loss 3.117575 LR 0.000500 Time 0.216720 -2023-04-26 20:51:37,540 - Epoch: [61][ 100/ 518] Overall Loss 3.125677 Objective Loss 3.125677 LR 0.000500 Time 0.210734 -2023-04-26 20:51:47,846 - Epoch: [61][ 150/ 518] Overall Loss 3.143474 Objective Loss 3.143474 LR 0.000500 Time 0.209185 -2023-04-26 20:51:58,184 - Epoch: [61][ 200/ 518] Overall Loss 3.136144 Objective Loss 3.136144 LR 0.000500 Time 0.208570 -2023-04-26 20:52:08,411 - Epoch: [61][ 250/ 518] Overall Loss 3.136594 Objective Loss 3.136594 LR 0.000500 Time 0.207759 -2023-04-26 20:52:18,672 - Epoch: [61][ 300/ 518] Overall Loss 3.132877 Objective Loss 3.132877 LR 0.000500 
Time 0.207330 -2023-04-26 20:52:28,856 - Epoch: [61][ 350/ 518] Overall Loss 3.130455 Objective Loss 3.130455 LR 0.000500 Time 0.206803 -2023-04-26 20:52:39,082 - Epoch: [61][ 400/ 518] Overall Loss 3.140891 Objective Loss 3.140891 LR 0.000500 Time 0.206513 -2023-04-26 20:52:49,281 - Epoch: [61][ 450/ 518] Overall Loss 3.143083 Objective Loss 3.143083 LR 0.000500 Time 0.206227 -2023-04-26 20:52:59,508 - Epoch: [61][ 500/ 518] Overall Loss 3.147775 Objective Loss 3.147775 LR 0.000500 Time 0.206057 -2023-04-26 20:53:03,089 - Epoch: [61][ 518/ 518] Overall Loss 3.150098 Objective Loss 3.150098 LR 0.000500 Time 0.205807 -2023-04-26 20:53:03,162 - --- validate (epoch=61)----------- -2023-04-26 20:53:03,162 - 4952 samples (32 per mini-batch) -2023-04-26 20:53:09,851 - Epoch: [61][ 50/ 155] Loss 3.350057 mAP 0.389158 -2023-04-26 20:53:16,167 - Epoch: [61][ 100/ 155] Loss 3.388041 mAP 0.383509 -2023-04-26 20:53:22,444 - Epoch: [61][ 150/ 155] Loss 3.363264 mAP 0.390989 -2023-04-26 20:53:23,008 - Epoch: [61][ 155/ 155] Loss 3.363287 mAP 0.390737 -2023-04-26 20:53:23,079 - ==> mAP: 0.39074 Loss: 3.363 - -2023-04-26 20:53:23,083 - ==> Best [mAP: 0.394528 vloss: 3.374902 Sparsity:0.00 Params: 2177088 on epoch: 58] -2023-04-26 20:53:23,083 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:53:23,120 - - -2023-04-26 20:53:23,120 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:53:34,334 - Epoch: [62][ 50/ 518] Overall Loss 3.140290 Objective Loss 3.140290 LR 0.000500 Time 0.224216 -2023-04-26 20:53:44,613 - Epoch: [62][ 100/ 518] Overall Loss 3.134499 Objective Loss 3.134499 LR 0.000500 Time 0.214884 -2023-04-26 20:53:54,846 - Epoch: [62][ 150/ 518] Overall Loss 3.131603 Objective Loss 3.131603 LR 0.000500 Time 0.211467 -2023-04-26 20:54:05,124 - Epoch: [62][ 200/ 518] Overall Loss 3.149677 Objective Loss 3.149677 LR 0.000500 Time 0.209980 -2023-04-26 20:54:15,263 - Epoch: [62][ 250/ 518] Overall Loss 3.143783 Objective Loss 
3.143783 LR 0.000500 Time 0.208533 -2023-04-26 20:54:25,534 - Epoch: [62][ 300/ 518] Overall Loss 3.142626 Objective Loss 3.142626 LR 0.000500 Time 0.208008 -2023-04-26 20:54:35,753 - Epoch: [62][ 350/ 518] Overall Loss 3.147249 Objective Loss 3.147249 LR 0.000500 Time 0.207487 -2023-04-26 20:54:46,005 - Epoch: [62][ 400/ 518] Overall Loss 3.146436 Objective Loss 3.146436 LR 0.000500 Time 0.207176 -2023-04-26 20:54:56,241 - Epoch: [62][ 450/ 518] Overall Loss 3.153558 Objective Loss 3.153558 LR 0.000500 Time 0.206900 -2023-04-26 20:55:06,501 - Epoch: [62][ 500/ 518] Overall Loss 3.156211 Objective Loss 3.156211 LR 0.000500 Time 0.206726 -2023-04-26 20:55:10,065 - Epoch: [62][ 518/ 518] Overall Loss 3.156476 Objective Loss 3.156476 LR 0.000500 Time 0.206422 -2023-04-26 20:55:10,136 - --- validate (epoch=62)----------- -2023-04-26 20:55:10,137 - 4952 samples (32 per mini-batch) -2023-04-26 20:55:16,917 - Epoch: [62][ 50/ 155] Loss 3.339289 mAP 0.399530 -2023-04-26 20:55:23,297 - Epoch: [62][ 100/ 155] Loss 3.349719 mAP 0.397809 -2023-04-26 20:55:29,649 - Epoch: [62][ 150/ 155] Loss 3.362013 mAP 0.395738 -2023-04-26 20:55:30,203 - Epoch: [62][ 155/ 155] Loss 3.359395 mAP 0.394084 -2023-04-26 20:55:30,277 - ==> mAP: 0.39408 Loss: 3.359 - -2023-04-26 20:55:30,281 - ==> Best [mAP: 0.394528 vloss: 3.374902 Sparsity:0.00 Params: 2177088 on epoch: 58] -2023-04-26 20:55:30,281 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:55:30,318 - - -2023-04-26 20:55:30,318 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:55:41,305 - Epoch: [63][ 50/ 518] Overall Loss 3.122867 Objective Loss 3.122867 LR 0.000500 Time 0.219674 -2023-04-26 20:55:51,537 - Epoch: [63][ 100/ 518] Overall Loss 3.119196 Objective Loss 3.119196 LR 0.000500 Time 0.212148 -2023-04-26 20:56:01,813 - Epoch: [63][ 150/ 518] Overall Loss 3.112403 Objective Loss 3.112403 LR 0.000500 Time 0.209923 -2023-04-26 20:56:12,049 - Epoch: [63][ 200/ 518] Overall Loss 3.127661 
Objective Loss 3.127661 LR 0.000500 Time 0.208617 -2023-04-26 20:56:22,358 - Epoch: [63][ 250/ 518] Overall Loss 3.141238 Objective Loss 3.141238 LR 0.000500 Time 0.208122 -2023-04-26 20:56:32,528 - Epoch: [63][ 300/ 518] Overall Loss 3.143976 Objective Loss 3.143976 LR 0.000500 Time 0.207328 -2023-04-26 20:56:42,706 - Epoch: [63][ 350/ 518] Overall Loss 3.142531 Objective Loss 3.142531 LR 0.000500 Time 0.206788 -2023-04-26 20:56:52,897 - Epoch: [63][ 400/ 518] Overall Loss 3.136535 Objective Loss 3.136535 LR 0.000500 Time 0.206411 -2023-04-26 20:57:03,105 - Epoch: [63][ 450/ 518] Overall Loss 3.137899 Objective Loss 3.137899 LR 0.000500 Time 0.206158 -2023-04-26 20:57:13,352 - Epoch: [63][ 500/ 518] Overall Loss 3.135020 Objective Loss 3.135020 LR 0.000500 Time 0.206032 -2023-04-26 20:57:16,858 - Epoch: [63][ 518/ 518] Overall Loss 3.135891 Objective Loss 3.135891 LR 0.000500 Time 0.205640 -2023-04-26 20:57:16,929 - --- validate (epoch=63)----------- -2023-04-26 20:57:16,929 - 4952 samples (32 per mini-batch) -2023-04-26 20:57:23,783 - Epoch: [63][ 50/ 155] Loss 3.321821 mAP 0.395785 -2023-04-26 20:57:30,108 - Epoch: [63][ 100/ 155] Loss 3.351927 mAP 0.384170 -2023-04-26 20:57:36,518 - Epoch: [63][ 150/ 155] Loss 3.347704 mAP 0.396698 -2023-04-26 20:57:37,117 - Epoch: [63][ 155/ 155] Loss 3.351152 mAP 0.397624 -2023-04-26 20:57:37,180 - ==> mAP: 0.39762 Loss: 3.351 - -2023-04-26 20:57:37,183 - ==> Best [mAP: 0.397624 vloss: 3.351152 Sparsity:0.00 Params: 2177088 on epoch: 63] -2023-04-26 20:57:37,183 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:57:37,234 - - -2023-04-26 20:57:37,235 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:57:48,089 - Epoch: [64][ 50/ 518] Overall Loss 3.154796 Objective Loss 3.154796 LR 0.000500 Time 0.217036 -2023-04-26 20:57:58,219 - Epoch: [64][ 100/ 518] Overall Loss 3.143420 Objective Loss 3.143420 LR 0.000500 Time 0.209799 -2023-04-26 20:58:08,491 - Epoch: [64][ 150/ 518] Overall 
Loss 3.137184 Objective Loss 3.137184 LR 0.000500 Time 0.208331 -2023-04-26 20:58:18,713 - Epoch: [64][ 200/ 518] Overall Loss 3.130531 Objective Loss 3.130531 LR 0.000500 Time 0.207351 -2023-04-26 20:58:28,932 - Epoch: [64][ 250/ 518] Overall Loss 3.119172 Objective Loss 3.119172 LR 0.000500 Time 0.206752 -2023-04-26 20:58:39,167 - Epoch: [64][ 300/ 518] Overall Loss 3.129767 Objective Loss 3.129767 LR 0.000500 Time 0.206406 -2023-04-26 20:58:49,441 - Epoch: [64][ 350/ 518] Overall Loss 3.130196 Objective Loss 3.130196 LR 0.000500 Time 0.206268 -2023-04-26 20:58:59,655 - Epoch: [64][ 400/ 518] Overall Loss 3.129256 Objective Loss 3.129256 LR 0.000500 Time 0.206016 -2023-04-26 20:59:09,910 - Epoch: [64][ 450/ 518] Overall Loss 3.131034 Objective Loss 3.131034 LR 0.000500 Time 0.205909 -2023-04-26 20:59:20,156 - Epoch: [64][ 500/ 518] Overall Loss 3.133008 Objective Loss 3.133008 LR 0.000500 Time 0.205808 -2023-04-26 20:59:23,746 - Epoch: [64][ 518/ 518] Overall Loss 3.133499 Objective Loss 3.133499 LR 0.000500 Time 0.205585 -2023-04-26 20:59:23,818 - --- validate (epoch=64)----------- -2023-04-26 20:59:23,818 - 4952 samples (32 per mini-batch) -2023-04-26 20:59:30,488 - Epoch: [64][ 50/ 155] Loss 3.375057 mAP 0.409565 -2023-04-26 20:59:36,911 - Epoch: [64][ 100/ 155] Loss 3.360678 mAP 0.403221 -2023-04-26 20:59:43,244 - Epoch: [64][ 150/ 155] Loss 3.378996 mAP 0.398151 -2023-04-26 20:59:43,809 - Epoch: [64][ 155/ 155] Loss 3.377506 mAP 0.398895 -2023-04-26 20:59:43,874 - ==> mAP: 0.39890 Loss: 3.378 - -2023-04-26 20:59:43,878 - ==> Best [mAP: 0.398895 vloss: 3.377506 Sparsity:0.00 Params: 2177088 on epoch: 64] -2023-04-26 20:59:43,878 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 20:59:43,931 - - -2023-04-26 20:59:43,931 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 20:59:54,989 - Epoch: [65][ 50/ 518] Overall Loss 3.138288 Objective Loss 3.138288 LR 0.000500 Time 0.221105 -2023-04-26 21:00:05,298 - Epoch: [65][ 
100/ 518] Overall Loss 3.143166 Objective Loss 3.143166 LR 0.000500 Time 0.213626 -2023-04-26 21:00:15,480 - Epoch: [65][ 150/ 518] Overall Loss 3.113511 Objective Loss 3.113511 LR 0.000500 Time 0.210286 -2023-04-26 21:00:25,796 - Epoch: [65][ 200/ 518] Overall Loss 3.115924 Objective Loss 3.115924 LR 0.000500 Time 0.209286 -2023-04-26 21:00:36,024 - Epoch: [65][ 250/ 518] Overall Loss 3.117818 Objective Loss 3.117818 LR 0.000500 Time 0.208335 -2023-04-26 21:00:46,192 - Epoch: [65][ 300/ 518] Overall Loss 3.132138 Objective Loss 3.132138 LR 0.000500 Time 0.207497 -2023-04-26 21:00:56,495 - Epoch: [65][ 350/ 518] Overall Loss 3.122980 Objective Loss 3.122980 LR 0.000500 Time 0.207288 -2023-04-26 21:01:06,764 - Epoch: [65][ 400/ 518] Overall Loss 3.124111 Objective Loss 3.124111 LR 0.000500 Time 0.207047 -2023-04-26 21:01:17,000 - Epoch: [65][ 450/ 518] Overall Loss 3.125051 Objective Loss 3.125051 LR 0.000500 Time 0.206783 -2023-04-26 21:01:27,271 - Epoch: [65][ 500/ 518] Overall Loss 3.125917 Objective Loss 3.125917 LR 0.000500 Time 0.206643 -2023-04-26 21:01:30,835 - Epoch: [65][ 518/ 518] Overall Loss 3.126202 Objective Loss 3.126202 LR 0.000500 Time 0.206342 -2023-04-26 21:01:30,909 - --- validate (epoch=65)----------- -2023-04-26 21:01:30,909 - 4952 samples (32 per mini-batch) -2023-04-26 21:01:37,728 - Epoch: [65][ 50/ 155] Loss 3.350670 mAP 0.399979 -2023-04-26 21:01:44,118 - Epoch: [65][ 100/ 155] Loss 3.336187 mAP 0.402599 -2023-04-26 21:01:50,452 - Epoch: [65][ 150/ 155] Loss 3.338086 mAP 0.404383 -2023-04-26 21:01:51,027 - Epoch: [65][ 155/ 155] Loss 3.340965 mAP 0.404782 -2023-04-26 21:01:51,097 - ==> mAP: 0.40478 Loss: 3.341 - -2023-04-26 21:01:51,100 - ==> Best [mAP: 0.404782 vloss: 3.340965 Sparsity:0.00 Params: 2177088 on epoch: 65] -2023-04-26 21:01:51,100 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 21:01:51,153 - - -2023-04-26 21:01:51,153 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 21:02:02,118 
- Epoch: [66][ 50/ 518] Overall Loss 3.098808 Objective Loss 3.098808 LR 0.000500 Time 0.219233 -2023-04-26 21:02:12,346 - Epoch: [66][ 100/ 518] Overall Loss 3.106847 Objective Loss 3.106847 LR 0.000500 Time 0.211874 -2023-04-26 21:02:22,610 - Epoch: [66][ 150/ 518] Overall Loss 3.112074 Objective Loss 3.112074 LR 0.000500 Time 0.209668 -2023-04-26 21:02:32,848 - Epoch: [66][ 200/ 518] Overall Loss 3.104472 Objective Loss 3.104472 LR 0.000500 Time 0.208431 -2023-04-26 21:02:43,127 - Epoch: [66][ 250/ 518] Overall Loss 3.118187 Objective Loss 3.118187 LR 0.000500 Time 0.207854 -2023-04-26 21:02:53,419 - Epoch: [66][ 300/ 518] Overall Loss 3.126643 Objective Loss 3.126643 LR 0.000500 Time 0.207514 -2023-04-26 21:03:03,616 - Epoch: [66][ 350/ 518] Overall Loss 3.123784 Objective Loss 3.123784 LR 0.000500 Time 0.207000 -2023-04-26 21:03:13,806 - Epoch: [66][ 400/ 518] Overall Loss 3.124651 Objective Loss 3.124651 LR 0.000500 Time 0.206595 -2023-04-26 21:03:24,049 - Epoch: [66][ 450/ 518] Overall Loss 3.122429 Objective Loss 3.122429 LR 0.000500 Time 0.206399 -2023-04-26 21:03:34,355 - Epoch: [66][ 500/ 518] Overall Loss 3.116381 Objective Loss 3.116381 LR 0.000500 Time 0.206368 -2023-04-26 21:03:37,889 - Epoch: [66][ 518/ 518] Overall Loss 3.112319 Objective Loss 3.112319 LR 0.000500 Time 0.206017 -2023-04-26 21:03:37,960 - --- validate (epoch=66)----------- -2023-04-26 21:03:37,960 - 4952 samples (32 per mini-batch) -2023-04-26 21:03:44,612 - Epoch: [66][ 50/ 155] Loss 3.364036 mAP 0.399598 -2023-04-26 21:03:50,982 - Epoch: [66][ 100/ 155] Loss 3.346426 mAP 0.401934 -2023-04-26 21:03:57,235 - Epoch: [66][ 150/ 155] Loss 3.354209 mAP 0.397558 -2023-04-26 21:03:57,784 - Epoch: [66][ 155/ 155] Loss 3.352599 mAP 0.396820 -2023-04-26 21:03:57,856 - ==> mAP: 0.39682 Loss: 3.353 - -2023-04-26 21:03:57,860 - ==> Best [mAP: 0.404782 vloss: 3.340965 Sparsity:0.00 Params: 2177088 on epoch: 65] -2023-04-26 21:03:57,860 - Saving checkpoint to: 
logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 21:03:57,898 - - -2023-04-26 21:03:57,898 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 21:04:08,893 - Epoch: [67][ 50/ 518] Overall Loss 3.059060 Objective Loss 3.059060 LR 0.000500 Time 0.219845 -2023-04-26 21:04:19,143 - Epoch: [67][ 100/ 518] Overall Loss 3.103236 Objective Loss 3.103236 LR 0.000500 Time 0.212409 -2023-04-26 21:04:29,382 - Epoch: [67][ 150/ 518] Overall Loss 3.104021 Objective Loss 3.104021 LR 0.000500 Time 0.209856 -2023-04-26 21:04:39,612 - Epoch: [67][ 200/ 518] Overall Loss 3.108865 Objective Loss 3.108865 LR 0.000500 Time 0.208533 -2023-04-26 21:04:49,853 - Epoch: [67][ 250/ 518] Overall Loss 3.101721 Objective Loss 3.101721 LR 0.000500 Time 0.207782 -2023-04-26 21:05:00,132 - Epoch: [67][ 300/ 518] Overall Loss 3.105551 Objective Loss 3.105551 LR 0.000500 Time 0.207410 -2023-04-26 21:05:10,382 - Epoch: [67][ 350/ 518] Overall Loss 3.114393 Objective Loss 3.114393 LR 0.000500 Time 0.207062 -2023-04-26 21:05:20,613 - Epoch: [67][ 400/ 518] Overall Loss 3.118361 Objective Loss 3.118361 LR 0.000500 Time 0.206751 -2023-04-26 21:05:30,922 - Epoch: [67][ 450/ 518] Overall Loss 3.116233 Objective Loss 3.116233 LR 0.000500 Time 0.206686 -2023-04-26 21:05:41,218 - Epoch: [67][ 500/ 518] Overall Loss 3.114102 Objective Loss 3.114102 LR 0.000500 Time 0.206604 -2023-04-26 21:05:44,754 - Epoch: [67][ 518/ 518] Overall Loss 3.112769 Objective Loss 3.112769 LR 0.000500 Time 0.206252 -2023-04-26 21:05:44,827 - --- validate (epoch=67)----------- -2023-04-26 21:05:44,827 - 4952 samples (32 per mini-batch) -2023-04-26 21:05:51,579 - Epoch: [67][ 50/ 155] Loss 3.360903 mAP 0.392447 -2023-04-26 21:05:57,961 - Epoch: [67][ 100/ 155] Loss 3.378534 mAP 0.398061 -2023-04-26 21:06:04,293 - Epoch: [67][ 150/ 155] Loss 3.333520 mAP 0.403852 -2023-04-26 21:06:04,868 - Epoch: [67][ 155/ 155] Loss 3.333734 mAP 0.407427 -2023-04-26 21:06:04,937 - ==> mAP: 0.40743 Loss: 3.334 - -2023-04-26 
21:06:04,941 - ==> Best [mAP: 0.407427 vloss: 3.333734 Sparsity:0.00 Params: 2177088 on epoch: 67] -2023-04-26 21:06:04,941 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 21:06:04,993 - - -2023-04-26 21:06:04,994 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 21:06:15,912 - Epoch: [68][ 50/ 518] Overall Loss 3.108588 Objective Loss 3.108588 LR 0.000500 Time 0.218310 -2023-04-26 21:06:26,090 - Epoch: [68][ 100/ 518] Overall Loss 3.102938 Objective Loss 3.102938 LR 0.000500 Time 0.210918 -2023-04-26 21:06:36,399 - Epoch: [68][ 150/ 518] Overall Loss 3.096751 Objective Loss 3.096751 LR 0.000500 Time 0.209330 -2023-04-26 21:06:46,671 - Epoch: [68][ 200/ 518] Overall Loss 3.094460 Objective Loss 3.094460 LR 0.000500 Time 0.208346 -2023-04-26 21:06:56,925 - Epoch: [68][ 250/ 518] Overall Loss 3.096084 Objective Loss 3.096084 LR 0.000500 Time 0.207689 -2023-04-26 21:07:07,072 - Epoch: [68][ 300/ 518] Overall Loss 3.098550 Objective Loss 3.098550 LR 0.000500 Time 0.206893 -2023-04-26 21:07:17,299 - Epoch: [68][ 350/ 518] Overall Loss 3.099842 Objective Loss 3.099842 LR 0.000500 Time 0.206550 -2023-04-26 21:07:27,475 - Epoch: [68][ 400/ 518] Overall Loss 3.106958 Objective Loss 3.106958 LR 0.000500 Time 0.206167 -2023-04-26 21:07:37,725 - Epoch: [68][ 450/ 518] Overall Loss 3.104868 Objective Loss 3.104868 LR 0.000500 Time 0.206035 -2023-04-26 21:07:47,905 - Epoch: [68][ 500/ 518] Overall Loss 3.106008 Objective Loss 3.106008 LR 0.000500 Time 0.205789 -2023-04-26 21:07:51,482 - Epoch: [68][ 518/ 518] Overall Loss 3.107878 Objective Loss 3.107878 LR 0.000500 Time 0.205541 -2023-04-26 21:07:51,553 - --- validate (epoch=68)----------- -2023-04-26 21:07:51,553 - 4952 samples (32 per mini-batch) -2023-04-26 21:07:58,282 - Epoch: [68][ 50/ 155] Loss 3.365922 mAP 0.390155 -2023-04-26 21:08:04,608 - Epoch: [68][ 100/ 155] Loss 3.358883 mAP 0.393159 -2023-04-26 21:08:10,923 - Epoch: [68][ 150/ 155] Loss 3.350316 mAP 0.394180 
-2023-04-26 21:08:11,495 - Epoch: [68][ 155/ 155] Loss 3.343957 mAP 0.398163 -2023-04-26 21:08:11,567 - ==> mAP: 0.39816 Loss: 3.344 - -2023-04-26 21:08:11,571 - ==> Best [mAP: 0.407427 vloss: 3.333734 Sparsity:0.00 Params: 2177088 on epoch: 67] -2023-04-26 21:08:11,571 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 21:08:11,608 - - -2023-04-26 21:08:11,608 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 21:08:22,809 - Epoch: [69][ 50/ 518] Overall Loss 3.021820 Objective Loss 3.021820 LR 0.000500 Time 0.223946 -2023-04-26 21:08:33,039 - Epoch: [69][ 100/ 518] Overall Loss 3.078581 Objective Loss 3.078581 LR 0.000500 Time 0.214261 -2023-04-26 21:08:43,308 - Epoch: [69][ 150/ 518] Overall Loss 3.079610 Objective Loss 3.079610 LR 0.000500 Time 0.211287 -2023-04-26 21:08:53,546 - Epoch: [69][ 200/ 518] Overall Loss 3.074705 Objective Loss 3.074705 LR 0.000500 Time 0.209648 -2023-04-26 21:09:03,883 - Epoch: [69][ 250/ 518] Overall Loss 3.087258 Objective Loss 3.087258 LR 0.000500 Time 0.209060 -2023-04-26 21:09:14,023 - Epoch: [69][ 300/ 518] Overall Loss 3.089662 Objective Loss 3.089662 LR 0.000500 Time 0.208012 -2023-04-26 21:09:24,323 - Epoch: [69][ 350/ 518] Overall Loss 3.089213 Objective Loss 3.089213 LR 0.000500 Time 0.207720 -2023-04-26 21:09:34,501 - Epoch: [69][ 400/ 518] Overall Loss 3.098154 Objective Loss 3.098154 LR 0.000500 Time 0.207196 -2023-04-26 21:09:44,666 - Epoch: [69][ 450/ 518] Overall Loss 3.102012 Objective Loss 3.102012 LR 0.000500 Time 0.206760 -2023-04-26 21:09:54,976 - Epoch: [69][ 500/ 518] Overall Loss 3.101376 Objective Loss 3.101376 LR 0.000500 Time 0.206700 -2023-04-26 21:09:58,540 - Epoch: [69][ 518/ 518] Overall Loss 3.100698 Objective Loss 3.100698 LR 0.000500 Time 0.206396 -2023-04-26 21:09:58,612 - --- validate (epoch=69)----------- -2023-04-26 21:09:58,612 - 4952 samples (32 per mini-batch) -2023-04-26 21:10:05,431 - Epoch: [69][ 50/ 155] Loss 3.405737 mAP 0.407957 -2023-04-26 
21:10:11,779 - Epoch: [69][ 100/ 155] Loss 3.369537 mAP 0.390592 -2023-04-26 21:10:18,254 - Epoch: [69][ 150/ 155] Loss 3.356313 mAP 0.398457 -2023-04-26 21:10:18,826 - Epoch: [69][ 155/ 155] Loss 3.354645 mAP 0.398755 -2023-04-26 21:10:18,901 - ==> mAP: 0.39876 Loss: 3.355 - -2023-04-26 21:10:18,906 - ==> Best [mAP: 0.407427 vloss: 3.333734 Sparsity:0.00 Params: 2177088 on epoch: 67] -2023-04-26 21:10:18,906 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 21:10:18,943 - - -2023-04-26 21:10:18,943 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 21:10:29,939 - Epoch: [70][ 50/ 518] Overall Loss 3.064300 Objective Loss 3.064300 LR 0.000500 Time 0.219855 -2023-04-26 21:10:40,211 - Epoch: [70][ 100/ 518] Overall Loss 3.095997 Objective Loss 3.095997 LR 0.000500 Time 0.212634 -2023-04-26 21:10:50,470 - Epoch: [70][ 150/ 518] Overall Loss 3.105374 Objective Loss 3.105374 LR 0.000500 Time 0.210140 -2023-04-26 21:11:00,721 - Epoch: [70][ 200/ 518] Overall Loss 3.105431 Objective Loss 3.105431 LR 0.000500 Time 0.208849 -2023-04-26 21:11:10,938 - Epoch: [70][ 250/ 518] Overall Loss 3.096610 Objective Loss 3.096610 LR 0.000500 Time 0.207941 -2023-04-26 21:11:21,213 - Epoch: [70][ 300/ 518] Overall Loss 3.095262 Objective Loss 3.095262 LR 0.000500 Time 0.207530 -2023-04-26 21:11:31,504 - Epoch: [70][ 350/ 518] Overall Loss 3.096416 Objective Loss 3.096416 LR 0.000500 Time 0.207282 -2023-04-26 21:11:41,705 - Epoch: [70][ 400/ 518] Overall Loss 3.097657 Objective Loss 3.097657 LR 0.000500 Time 0.206868 -2023-04-26 21:11:52,022 - Epoch: [70][ 450/ 518] Overall Loss 3.102195 Objective Loss 3.102195 LR 0.000500 Time 0.206807 -2023-04-26 21:12:02,217 - Epoch: [70][ 500/ 518] Overall Loss 3.104421 Objective Loss 3.104421 LR 0.000500 Time 0.206513 -2023-04-26 21:12:05,767 - Epoch: [70][ 518/ 518] Overall Loss 3.109503 Objective Loss 3.109503 LR 0.000500 Time 0.206188 -2023-04-26 21:12:05,838 - --- validate (epoch=70)----------- -2023-04-26 
21:12:05,838 - 4952 samples (32 per mini-batch) -2023-04-26 21:12:12,608 - Epoch: [70][ 50/ 155] Loss 3.408647 mAP 0.406164 -2023-04-26 21:12:18,965 - Epoch: [70][ 100/ 155] Loss 3.386827 mAP 0.406378 -2023-04-26 21:12:25,332 - Epoch: [70][ 150/ 155] Loss 3.378024 mAP 0.409147 -2023-04-26 21:12:25,905 - Epoch: [70][ 155/ 155] Loss 3.376234 mAP 0.410750 -2023-04-26 21:12:25,977 - ==> mAP: 0.41075 Loss: 3.376 - -2023-04-26 21:12:25,980 - ==> Best [mAP: 0.410750 vloss: 3.376234 Sparsity:0.00 Params: 2177088 on epoch: 70] -2023-04-26 21:12:25,981 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 21:12:26,032 - - -2023-04-26 21:12:26,032 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 21:12:37,026 - Epoch: [71][ 50/ 518] Overall Loss 3.093215 Objective Loss 3.093215 LR 0.000500 Time 0.219815 -2023-04-26 21:12:47,177 - Epoch: [71][ 100/ 518] Overall Loss 3.113340 Objective Loss 3.113340 LR 0.000500 Time 0.211401 -2023-04-26 21:12:57,440 - Epoch: [71][ 150/ 518] Overall Loss 3.109846 Objective Loss 3.109846 LR 0.000500 Time 0.209348 -2023-04-26 21:13:07,661 - Epoch: [71][ 200/ 518] Overall Loss 3.098533 Objective Loss 3.098533 LR 0.000500 Time 0.208106 -2023-04-26 21:13:17,927 - Epoch: [71][ 250/ 518] Overall Loss 3.110691 Objective Loss 3.110691 LR 0.000500 Time 0.207540 -2023-04-26 21:13:28,074 - Epoch: [71][ 300/ 518] Overall Loss 3.121218 Objective Loss 3.121218 LR 0.000500 Time 0.206769 -2023-04-26 21:13:38,371 - Epoch: [71][ 350/ 518] Overall Loss 3.114849 Objective Loss 3.114849 LR 0.000500 Time 0.206646 -2023-04-26 21:13:48,561 - Epoch: [71][ 400/ 518] Overall Loss 3.116040 Objective Loss 3.116040 LR 0.000500 Time 0.206287 -2023-04-26 21:13:58,762 - Epoch: [71][ 450/ 518] Overall Loss 3.117352 Objective Loss 3.117352 LR 0.000500 Time 0.206031 -2023-04-26 21:14:08,885 - Epoch: [71][ 500/ 518] Overall Loss 3.111811 Objective Loss 3.111811 LR 0.000500 Time 0.205671 -2023-04-26 21:14:12,427 - Epoch: [71][ 518/ 518] Overall 
Loss 3.108717 Objective Loss 3.108717 LR 0.000500 Time 0.205360 -2023-04-26 21:14:12,499 - --- validate (epoch=71)----------- -2023-04-26 21:14:12,499 - 4952 samples (32 per mini-batch) -2023-04-26 21:14:19,303 - Epoch: [71][ 50/ 155] Loss 3.339583 mAP 0.415197 -2023-04-26 21:14:25,684 - Epoch: [71][ 100/ 155] Loss 3.331181 mAP 0.414131 -2023-04-26 21:14:32,027 - Epoch: [71][ 150/ 155] Loss 3.308379 mAP 0.411753 -2023-04-26 21:14:32,586 - Epoch: [71][ 155/ 155] Loss 3.310280 mAP 0.411791 -2023-04-26 21:14:32,652 - ==> mAP: 0.41179 Loss: 3.310 - -2023-04-26 21:14:32,656 - ==> Best [mAP: 0.411791 vloss: 3.310280 Sparsity:0.00 Params: 2177088 on epoch: 71] -2023-04-26 21:14:32,656 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 21:14:32,710 - - -2023-04-26 21:14:32,710 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 21:14:43,701 - Epoch: [72][ 50/ 518] Overall Loss 3.097554 Objective Loss 3.097554 LR 0.000500 Time 0.219748 -2023-04-26 21:14:53,847 - Epoch: [72][ 100/ 518] Overall Loss 3.090832 Objective Loss 3.090832 LR 0.000500 Time 0.211324 -2023-04-26 21:15:04,101 - Epoch: [72][ 150/ 518] Overall Loss 3.076565 Objective Loss 3.076565 LR 0.000500 Time 0.209228 -2023-04-26 21:15:14,307 - Epoch: [72][ 200/ 518] Overall Loss 3.082248 Objective Loss 3.082248 LR 0.000500 Time 0.207943 -2023-04-26 21:15:24,597 - Epoch: [72][ 250/ 518] Overall Loss 3.081627 Objective Loss 3.081627 LR 0.000500 Time 0.207507 -2023-04-26 21:15:34,812 - Epoch: [72][ 300/ 518] Overall Loss 3.081565 Objective Loss 3.081565 LR 0.000500 Time 0.206969 -2023-04-26 21:15:45,087 - Epoch: [72][ 350/ 518] Overall Loss 3.088581 Objective Loss 3.088581 LR 0.000500 Time 0.206753 -2023-04-26 21:15:55,310 - Epoch: [72][ 400/ 518] Overall Loss 3.077697 Objective Loss 3.077697 LR 0.000500 Time 0.206464 -2023-04-26 21:16:05,592 - Epoch: [72][ 450/ 518] Overall Loss 3.081504 Objective Loss 3.081504 LR 0.000500 Time 0.206367 -2023-04-26 21:16:15,847 - Epoch: [72][ 
500/ 518] Overall Loss 3.080206 Objective Loss 3.080206 LR 0.000500 Time 0.206238
-2023-04-26 21:16:19,327 - Epoch: [72][ 518/ 518] Overall Loss 3.083708 Objective Loss 3.083708 LR 0.000500 Time 0.205788
-2023-04-26 21:16:19,398 - --- validate (epoch=72)-----------
-2023-04-26 21:16:19,399 - 4952 samples (32 per mini-batch)
-2023-04-26 21:16:26,153 - Epoch: [72][ 50/ 155] Loss 3.361813 mAP 0.412380
-2023-04-26 21:16:32,556 - Epoch: [72][ 100/ 155] Loss 3.369219 mAP 0.409617
-2023-04-26 21:16:38,935 - Epoch: [72][ 150/ 155] Loss 3.371567 mAP 0.408592
-2023-04-26 21:16:39,499 - Epoch: [72][ 155/ 155] Loss 3.373593 mAP 0.408687
-2023-04-26 21:16:39,567 - ==> mAP: 0.40869 Loss: 3.374
-
-2023-04-26 21:16:39,571 - ==> Best [mAP: 0.411791 vloss: 3.310280 Sparsity:0.00 Params: 2177088 on epoch: 71]
-2023-04-26 21:16:39,571 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:16:39,608 - 
-
-2023-04-26 21:16:39,608 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:16:50,732 - Epoch: [73][ 50/ 518] Overall Loss 3.102592 Objective Loss 3.102592 LR 0.000500 Time 0.222429
-2023-04-26 21:17:00,978 - Epoch: [73][ 100/ 518] Overall Loss 3.106973 Objective Loss 3.106973 LR 0.000500 Time 0.213661
-2023-04-26 21:17:11,208 - Epoch: [73][ 150/ 518] Overall Loss 3.110413 Objective Loss 3.110413 LR 0.000500 Time 0.210627
-2023-04-26 21:17:21,371 - Epoch: [73][ 200/ 518] Overall Loss 3.100662 Objective Loss 3.100662 LR 0.000500 Time 0.208779
-2023-04-26 21:17:31,645 - Epoch: [73][ 250/ 518] Overall Loss 3.094244 Objective Loss 3.094244 LR 0.000500 Time 0.208110
-2023-04-26 21:17:41,902 - Epoch: [73][ 300/ 518] Overall Loss 3.096750 Objective Loss 3.096750 LR 0.000500 Time 0.207609
-2023-04-26 21:17:52,144 - Epoch: [73][ 350/ 518] Overall Loss 3.100564 Objective Loss 3.100564 LR 0.000500 Time 0.207211
-2023-04-26 21:18:02,333 - Epoch: [73][ 400/ 518] Overall Loss 3.097194 Objective Loss 3.097194 LR 0.000500 Time 0.206777
-2023-04-26 21:18:12,498 - Epoch: [73][ 450/ 518] Overall Loss 3.096925 Objective Loss 3.096925 LR 0.000500 Time 0.206387
-2023-04-26 21:18:22,710 - Epoch: [73][ 500/ 518] Overall Loss 3.091646 Objective Loss 3.091646 LR 0.000500 Time 0.206168
-2023-04-26 21:18:26,194 - Epoch: [73][ 518/ 518] Overall Loss 3.092107 Objective Loss 3.092107 LR 0.000500 Time 0.205730
-2023-04-26 21:18:26,265 - --- validate (epoch=73)-----------
-2023-04-26 21:18:26,266 - 4952 samples (32 per mini-batch)
-2023-04-26 21:18:32,965 - Epoch: [73][ 50/ 155] Loss 3.332391 mAP 0.407473
-2023-04-26 21:18:39,301 - Epoch: [73][ 100/ 155] Loss 3.350255 mAP 0.395479
-2023-04-26 21:18:45,591 - Epoch: [73][ 150/ 155] Loss 3.361046 mAP 0.392722
-2023-04-26 21:18:46,150 - Epoch: [73][ 155/ 155] Loss 3.354191 mAP 0.392870
-2023-04-26 21:18:46,222 - ==> mAP: 0.39287 Loss: 3.354
-
-2023-04-26 21:18:46,226 - ==> Best [mAP: 0.411791 vloss: 3.310280 Sparsity:0.00 Params: 2177088 on epoch: 71]
-2023-04-26 21:18:46,226 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:18:46,262 - 
-
-2023-04-26 21:18:46,263 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:18:57,344 - Epoch: [74][ 50/ 518] Overall Loss 3.040304 Objective Loss 3.040304 LR 0.000500 Time 0.221574
-2023-04-26 21:19:07,586 - Epoch: [74][ 100/ 518] Overall Loss 3.068176 Objective Loss 3.068176 LR 0.000500 Time 0.213192
-2023-04-26 21:19:17,760 - Epoch: [74][ 150/ 518] Overall Loss 3.059938 Objective Loss 3.059938 LR 0.000500 Time 0.209941
-2023-04-26 21:19:28,022 - Epoch: [74][ 200/ 518] Overall Loss 3.070662 Objective Loss 3.070662 LR 0.000500 Time 0.208757
-2023-04-26 21:19:38,314 - Epoch: [74][ 250/ 518] Overall Loss 3.073180 Objective Loss 3.073180 LR 0.000500 Time 0.208167
-2023-04-26 21:19:48,513 - Epoch: [74][ 300/ 518] Overall Loss 3.071122 Objective Loss 3.071122 LR 0.000500 Time 0.207465
-2023-04-26 21:19:58,735 - Epoch: [74][ 350/ 518] Overall Loss 3.080750 Objective Loss 3.080750 LR 0.000500 Time 0.207027
-2023-04-26 21:20:08,956 - Epoch: [74][ 400/ 518] Overall Loss 3.073174 Objective Loss 3.073174 LR 0.000500 Time 0.206699
-2023-04-26 21:20:19,216 - Epoch: [74][ 450/ 518] Overall Loss 3.076395 Objective Loss 3.076395 LR 0.000500 Time 0.206528
-2023-04-26 21:20:29,539 - Epoch: [74][ 500/ 518] Overall Loss 3.084885 Objective Loss 3.084885 LR 0.000500 Time 0.206517
-2023-04-26 21:20:33,126 - Epoch: [74][ 518/ 518] Overall Loss 3.084402 Objective Loss 3.084402 LR 0.000500 Time 0.206265
-2023-04-26 21:20:33,194 - --- validate (epoch=74)-----------
-2023-04-26 21:20:33,195 - 4952 samples (32 per mini-batch)
-2023-04-26 21:20:39,798 - Epoch: [74][ 50/ 155] Loss 3.317198 mAP 0.390484
-2023-04-26 21:20:46,161 - Epoch: [74][ 100/ 155] Loss 3.308566 mAP 0.400155
-2023-04-26 21:20:52,468 - Epoch: [74][ 150/ 155] Loss 3.321292 mAP 0.404982
-2023-04-26 21:20:53,034 - Epoch: [74][ 155/ 155] Loss 3.320980 mAP 0.405497
-2023-04-26 21:20:53,108 - ==> mAP: 0.40550 Loss: 3.321
-
-2023-04-26 21:20:53,111 - ==> Best [mAP: 0.411791 vloss: 3.310280 Sparsity:0.00 Params: 2177088 on epoch: 71]
-2023-04-26 21:20:53,111 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:20:53,149 - 
-
-2023-04-26 21:20:53,149 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:21:04,172 - Epoch: [75][ 50/ 518] Overall Loss 3.012438 Objective Loss 3.012438 LR 0.000500 Time 0.220409
-2023-04-26 21:21:14,450 - Epoch: [75][ 100/ 518] Overall Loss 3.057305 Objective Loss 3.057305 LR 0.000500 Time 0.212963
-2023-04-26 21:21:24,680 - Epoch: [75][ 150/ 518] Overall Loss 3.075071 Objective Loss 3.075071 LR 0.000500 Time 0.210164
-2023-04-26 21:21:34,797 - Epoch: [75][ 200/ 518] Overall Loss 3.089745 Objective Loss 3.089745 LR 0.000500 Time 0.208201
-2023-04-26 21:21:45,034 - Epoch: [75][ 250/ 518] Overall Loss 3.088229 Objective Loss 3.088229 LR 0.000500 Time 0.207505
-2023-04-26 21:21:55,299 - Epoch: [75][ 300/ 518] Overall Loss 3.097151 Objective Loss 3.097151 LR 0.000500 Time 0.207128
-2023-04-26 21:22:05,490 - Epoch: [75][ 350/ 518] Overall Loss 3.093266 Objective Loss 3.093266 LR 0.000500 Time 0.206654
-2023-04-26 21:22:15,660 - Epoch: [75][ 400/ 518] Overall Loss 3.097098 Objective Loss 3.097098 LR 0.000500 Time 0.206241
-2023-04-26 21:22:25,988 - Epoch: [75][ 450/ 518] Overall Loss 3.091849 Objective Loss 3.091849 LR 0.000500 Time 0.206274
-2023-04-26 21:22:36,232 - Epoch: [75][ 500/ 518] Overall Loss 3.084035 Objective Loss 3.084035 LR 0.000500 Time 0.206131
-2023-04-26 21:22:39,757 - Epoch: [75][ 518/ 518] Overall Loss 3.086071 Objective Loss 3.086071 LR 0.000500 Time 0.205771
-2023-04-26 21:22:39,827 - --- validate (epoch=75)-----------
-2023-04-26 21:22:39,827 - 4952 samples (32 per mini-batch)
-2023-04-26 21:22:46,539 - Epoch: [75][ 50/ 155] Loss 3.290184 mAP 0.422552
-2023-04-26 21:22:53,004 - Epoch: [75][ 100/ 155] Loss 3.283868 mAP 0.427047
-2023-04-26 21:22:59,374 - Epoch: [75][ 150/ 155] Loss 3.306265 mAP 0.419466
-2023-04-26 21:22:59,937 - Epoch: [75][ 155/ 155] Loss 3.311546 mAP 0.417850
-2023-04-26 21:23:00,010 - ==> mAP: 0.41785 Loss: 3.312
-
-2023-04-26 21:23:00,013 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:23:00,013 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:23:00,067 - 
-
-2023-04-26 21:23:00,067 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:23:11,031 - Epoch: [76][ 50/ 518] Overall Loss 3.096419 Objective Loss 3.096419 LR 0.000500 Time 0.219223
-2023-04-26 21:23:21,267 - Epoch: [76][ 100/ 518] Overall Loss 3.059841 Objective Loss 3.059841 LR 0.000500 Time 0.211945
-2023-04-26 21:23:31,517 - Epoch: [76][ 150/ 518] Overall Loss 3.087335 Objective Loss 3.087335 LR 0.000500 Time 0.209620
-2023-04-26 21:23:41,773 - Epoch: [76][ 200/ 518] Overall Loss 3.089018 Objective Loss 3.089018 LR 0.000500 Time 0.208490
-2023-04-26 21:23:51,988 - Epoch: [76][ 250/ 518] Overall Loss 3.092815 Objective Loss 3.092815 LR 0.000500 Time 0.207645
-2023-04-26 21:24:02,264 - Epoch: [76][ 300/ 518] Overall Loss 3.089096 Objective Loss 3.089096 LR 0.000500 Time 0.207284
-2023-04-26 21:24:12,404 - Epoch: [76][ 350/ 518] Overall Loss 3.080043 Objective Loss 3.080043 LR 0.000500 Time 0.206638
-2023-04-26 21:24:22,612 - Epoch: [76][ 400/ 518] Overall Loss 3.084861 Objective Loss 3.084861 LR 0.000500 Time 0.206325
-2023-04-26 21:24:32,855 - Epoch: [76][ 450/ 518] Overall Loss 3.081436 Objective Loss 3.081436 LR 0.000500 Time 0.206157
-2023-04-26 21:24:42,994 - Epoch: [76][ 500/ 518] Overall Loss 3.079363 Objective Loss 3.079363 LR 0.000500 Time 0.205818
-2023-04-26 21:24:46,541 - Epoch: [76][ 518/ 518] Overall Loss 3.083487 Objective Loss 3.083487 LR 0.000500 Time 0.205512
-2023-04-26 21:24:46,612 - --- validate (epoch=76)-----------
-2023-04-26 21:24:46,612 - 4952 samples (32 per mini-batch)
-2023-04-26 21:24:53,433 - Epoch: [76][ 50/ 155] Loss 3.391189 mAP 0.393523
-2023-04-26 21:24:59,829 - Epoch: [76][ 100/ 155] Loss 3.395500 mAP 0.402602
-2023-04-26 21:25:06,246 - Epoch: [76][ 150/ 155] Loss 3.387967 mAP 0.398013
-2023-04-26 21:25:06,818 - Epoch: [76][ 155/ 155] Loss 3.390070 mAP 0.398583
-2023-04-26 21:25:06,889 - ==> mAP: 0.39858 Loss: 3.390
-
-2023-04-26 21:25:06,893 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:25:06,893 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:25:07,025 - 
-
-2023-04-26 21:25:07,025 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:25:17,969 - Epoch: [77][ 50/ 518] Overall Loss 3.099658 Objective Loss 3.099658 LR 0.000500 Time 0.218801
-2023-04-26 21:25:28,267 - Epoch: [77][ 100/ 518] Overall Loss 3.076410 Objective Loss 3.076410 LR 0.000500 Time 0.212360
-2023-04-26 21:25:38,486 - Epoch: [77][ 150/ 518] Overall Loss 3.081601 Objective Loss 3.081601 LR 0.000500 Time 0.209688
-2023-04-26 21:25:48,641 - Epoch: [77][ 200/ 518] Overall Loss 3.063080 Objective Loss 3.063080 LR 0.000500 Time 0.208033
-2023-04-26 21:25:58,923 - Epoch: [77][ 250/ 518] Overall Loss 3.066031 Objective Loss 3.066031 LR 0.000500 Time 0.207549
-2023-04-26 21:26:09,180 - Epoch: [77][ 300/ 518] Overall Loss 3.069831 Objective Loss 3.069831 LR 0.000500 Time 0.207141
-2023-04-26 21:26:19,371 - Epoch: [77][ 350/ 518] Overall Loss 3.074110 Objective Loss 3.074110 LR 0.000500 Time 0.206661
-2023-04-26 21:26:29,491 - Epoch: [77][ 400/ 518] Overall Loss 3.068216 Objective Loss 3.068216 LR 0.000500 Time 0.206125
-2023-04-26 21:26:39,798 - Epoch: [77][ 450/ 518] Overall Loss 3.073999 Objective Loss 3.073999 LR 0.000500 Time 0.206122
-2023-04-26 21:26:50,011 - Epoch: [77][ 500/ 518] Overall Loss 3.075660 Objective Loss 3.075660 LR 0.000500 Time 0.205933
-2023-04-26 21:26:53,531 - Epoch: [77][ 518/ 518] Overall Loss 3.081390 Objective Loss 3.081390 LR 0.000500 Time 0.205572
-2023-04-26 21:26:53,603 - --- validate (epoch=77)-----------
-2023-04-26 21:26:53,603 - 4952 samples (32 per mini-batch)
-2023-04-26 21:27:00,362 - Epoch: [77][ 50/ 155] Loss 3.280719 mAP 0.422440
-2023-04-26 21:27:06,716 - Epoch: [77][ 100/ 155] Loss 3.292329 mAP 0.406911
-2023-04-26 21:27:13,024 - Epoch: [77][ 150/ 155] Loss 3.303425 mAP 0.405631
-2023-04-26 21:27:13,584 - Epoch: [77][ 155/ 155] Loss 3.299249 mAP 0.406583
-2023-04-26 21:27:13,655 - ==> mAP: 0.40658 Loss: 3.299
-
-2023-04-26 21:27:13,658 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:27:13,659 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:27:13,696 - 
-
-2023-04-26 21:27:13,696 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:27:24,616 - Epoch: [78][ 50/ 518] Overall Loss 3.084650 Objective Loss 3.084650 LR 0.000500 Time 0.218338
-2023-04-26 21:27:34,705 - Epoch: [78][ 100/ 518] Overall Loss 3.058554 Objective Loss 3.058554 LR 0.000500 Time 0.210041
-2023-04-26 21:27:44,974 - Epoch: [78][ 150/ 518] Overall Loss 3.059546 Objective Loss 3.059546 LR 0.000500 Time 0.208480
-2023-04-26 21:27:55,312 - Epoch: [78][ 200/ 518] Overall Loss 3.054120 Objective Loss 3.054120 LR 0.000500 Time 0.208041
-2023-04-26 21:28:05,563 - Epoch: [78][ 250/ 518] Overall Loss 3.058584 Objective Loss 3.058584 LR 0.000500 Time 0.207429
-2023-04-26 21:28:15,725 - Epoch: [78][ 300/ 518] Overall Loss 3.067017 Objective Loss 3.067017 LR 0.000500 Time 0.206726
-2023-04-26 21:28:26,010 - Epoch: [78][ 350/ 518] Overall Loss 3.068140 Objective Loss 3.068140 LR 0.000500 Time 0.206576
-2023-04-26 21:28:36,254 - Epoch: [78][ 400/ 518] Overall Loss 3.068320 Objective Loss 3.068320 LR 0.000500 Time 0.206359
-2023-04-26 21:28:46,466 - Epoch: [78][ 450/ 518] Overall Loss 3.064058 Objective Loss 3.064058 LR 0.000500 Time 0.206119
-2023-04-26 21:28:56,761 - Epoch: [78][ 500/ 518] Overall Loss 3.062928 Objective Loss 3.062928 LR 0.000500 Time 0.206094
-2023-04-26 21:29:00,321 - Epoch: [78][ 518/ 518] Overall Loss 3.060802 Objective Loss 3.060802 LR 0.000500 Time 0.205804
-2023-04-26 21:29:00,392 - --- validate (epoch=78)-----------
-2023-04-26 21:29:00,393 - 4952 samples (32 per mini-batch)
-2023-04-26 21:29:07,195 - Epoch: [78][ 50/ 155] Loss 3.275664 mAP 0.421346
-2023-04-26 21:29:13,627 - Epoch: [78][ 100/ 155] Loss 3.309077 mAP 0.413644
-2023-04-26 21:29:20,037 - Epoch: [78][ 150/ 155] Loss 3.302759 mAP 0.419599
-2023-04-26 21:29:20,606 - Epoch: [78][ 155/ 155] Loss 3.305679 mAP 0.416877
-2023-04-26 21:29:20,678 - ==> mAP: 0.41688 Loss: 3.306
-
-2023-04-26 21:29:20,682 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:29:20,682 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:29:20,719 - 
-
-2023-04-26 21:29:20,719 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:29:31,703 - Epoch: [79][ 50/ 518] Overall Loss 3.057693 Objective Loss 3.057693 LR 0.000500 Time 0.219620
-2023-04-26 21:29:41,866 - Epoch: [79][ 100/ 518] Overall Loss 3.073515 Objective Loss 3.073515 LR 0.000500 Time 0.211423
-2023-04-26 21:29:52,127 - Epoch: [79][ 150/ 518] Overall Loss 3.090057 Objective Loss 3.090057 LR 0.000500 Time 0.209340
-2023-04-26 21:30:02,387 - Epoch: [79][ 200/ 518] Overall Loss 3.084578 Objective Loss 3.084578 LR 0.000500 Time 0.208301
-2023-04-26 21:30:12,582 - Epoch: [79][ 250/ 518] Overall Loss 3.083250 Objective Loss 3.083250 LR 0.000500 Time 0.207412
-2023-04-26 21:30:22,781 - Epoch: [79][ 300/ 518] Overall Loss 3.073419 Objective Loss 3.073419 LR 0.000500 Time 0.206836
-2023-04-26 21:30:32,916 - Epoch: [79][ 350/ 518] Overall Loss 3.073965 Objective Loss 3.073965 LR 0.000500 Time 0.206241
-2023-04-26 21:30:43,135 - Epoch: [79][ 400/ 518] Overall Loss 3.064530 Objective Loss 3.064530 LR 0.000500 Time 0.206003
-2023-04-26 21:30:53,313 - Epoch: [79][ 450/ 518] Overall Loss 3.069326 Objective Loss 3.069326 LR 0.000500 Time 0.205728
-2023-04-26 21:31:03,517 - Epoch: [79][ 500/ 518] Overall Loss 3.069837 Objective Loss 3.069837 LR 0.000500 Time 0.205559
-2023-04-26 21:31:07,085 - Epoch: [79][ 518/ 518] Overall Loss 3.072813 Objective Loss 3.072813 LR 0.000500 Time 0.205303
-2023-04-26 21:31:07,158 - --- validate (epoch=79)-----------
-2023-04-26 21:31:07,158 - 4952 samples (32 per mini-batch)
-2023-04-26 21:31:13,847 - Epoch: [79][ 50/ 155] Loss 3.295736 mAP 0.394403
-2023-04-26 21:31:20,194 - Epoch: [79][ 100/ 155] Loss 3.316887 mAP 0.406956
-2023-04-26 21:31:26,518 - Epoch: [79][ 150/ 155] Loss 3.304909 mAP 0.410395
-2023-04-26 21:31:27,089 - Epoch: [79][ 155/ 155] Loss 3.310065 mAP 0.410770
-2023-04-26 21:31:27,157 - ==> mAP: 0.41077 Loss: 3.310
-
-2023-04-26 21:31:27,161 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:31:27,161 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:31:27,198 - 
-
-2023-04-26 21:31:27,198 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:31:38,255 - Epoch: [80][ 50/ 518] Overall Loss 3.059836 Objective Loss 3.059836 LR 0.000500 Time 0.221091
-2023-04-26 21:31:48,449 - Epoch: [80][ 100/ 518] Overall Loss 3.046711 Objective Loss 3.046711 LR 0.000500 Time 0.212466
-2023-04-26 21:31:58,648 - Epoch: [80][ 150/ 518] Overall Loss 3.053408 Objective Loss 3.053408 LR 0.000500 Time 0.209626
-2023-04-26 21:32:08,805 - Epoch: [80][ 200/ 518] Overall Loss 3.065897 Objective Loss 3.065897 LR 0.000500 Time 0.207993
-2023-04-26 21:32:19,017 - Epoch: [80][ 250/ 518] Overall Loss 3.065111 Objective Loss 3.065111 LR 0.000500 Time 0.207237
-2023-04-26 21:32:29,293 - Epoch: [80][ 300/ 518] Overall Loss 3.074103 Objective Loss 3.074103 LR 0.000500 Time 0.206946
-2023-04-26 21:32:39,459 - Epoch: [80][ 350/ 518] Overall Loss 3.065981 Objective Loss 3.065981 LR 0.000500 Time 0.206424
-2023-04-26 21:32:49,827 - Epoch: [80][ 400/ 518] Overall Loss 3.078238 Objective Loss 3.078238 LR 0.000500 Time 0.206537
-2023-04-26 21:32:59,982 - Epoch: [80][ 450/ 518] Overall Loss 3.076032 Objective Loss 3.076032 LR 0.000500 Time 0.206150
-2023-04-26 21:33:10,198 - Epoch: [80][ 500/ 518] Overall Loss 3.074339 Objective Loss 3.074339 LR 0.000500 Time 0.205965
-2023-04-26 21:33:13,769 - Epoch: [80][ 518/ 518] Overall Loss 3.072048 Objective Loss 3.072048 LR 0.000500 Time 0.205701
-2023-04-26 21:33:13,842 - --- validate (epoch=80)-----------
-2023-04-26 21:33:13,842 - 4952 samples (32 per mini-batch)
-2023-04-26 21:33:20,538 - Epoch: [80][ 50/ 155] Loss 3.300232 mAP 0.415883
-2023-04-26 21:33:26,942 - Epoch: [80][ 100/ 155] Loss 3.284832 mAP 0.414844
-2023-04-26 21:33:33,327 - Epoch: [80][ 150/ 155] Loss 3.276248 mAP 0.415130
-2023-04-26 21:33:33,901 - Epoch: [80][ 155/ 155] Loss 3.276076 mAP 0.413850
-2023-04-26 21:33:33,971 - ==> mAP: 0.41385 Loss: 3.276
-
-2023-04-26 21:33:33,975 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:33:33,975 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:33:34,013 - 
-
-2023-04-26 21:33:34,013 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:33:45,035 - Epoch: [81][ 50/ 518] Overall Loss 3.103088 Objective Loss 3.103088 LR 0.000500 Time 0.220377
-2023-04-26 21:33:55,257 - Epoch: [81][ 100/ 518] Overall Loss 3.067943 Objective Loss 3.067943 LR 0.000500 Time 0.212393
-2023-04-26 21:34:05,454 - Epoch: [81][ 150/ 518] Overall Loss 3.071235 Objective Loss 3.071235 LR 0.000500 Time 0.209568
-2023-04-26 21:34:15,606 - Epoch: [81][ 200/ 518] Overall Loss 3.070802 Objective Loss 3.070802 LR 0.000500 Time 0.207927
-2023-04-26 21:34:25,843 - Epoch: [81][ 250/ 518] Overall Loss 3.066253 Objective Loss 3.066253 LR 0.000500 Time 0.207282
-2023-04-26 21:34:36,040 - Epoch: [81][ 300/ 518] Overall Loss 3.065178 Objective Loss 3.065178 LR 0.000500 Time 0.206720
-2023-04-26 21:34:46,243 - Epoch: [81][ 350/ 518] Overall Loss 3.068817 Objective Loss 3.068817 LR 0.000500 Time 0.206335
-2023-04-26 21:34:56,401 - Epoch: [81][ 400/ 518] Overall Loss 3.070474 Objective Loss 3.070474 LR 0.000500 Time 0.205935
-2023-04-26 21:35:06,658 - Epoch: [81][ 450/ 518] Overall Loss 3.068171 Objective Loss 3.068171 LR 0.000500 Time 0.205842
-2023-04-26 21:35:16,734 - Epoch: [81][ 500/ 518] Overall Loss 3.070249 Objective Loss 3.070249 LR 0.000500 Time 0.205406
-2023-04-26 21:35:20,281 - Epoch: [81][ 518/ 518] Overall Loss 3.069608 Objective Loss 3.069608 LR 0.000500 Time 0.205115
-2023-04-26 21:35:20,352 - --- validate (epoch=81)-----------
-2023-04-26 21:35:20,353 - 4952 samples (32 per mini-batch)
-2023-04-26 21:35:27,198 - Epoch: [81][ 50/ 155] Loss 3.235519 mAP 0.436340
-2023-04-26 21:35:33,552 - Epoch: [81][ 100/ 155] Loss 3.263569 mAP 0.408835
-2023-04-26 21:35:39,893 - Epoch: [81][ 150/ 155] Loss 3.267164 mAP 0.411733
-2023-04-26 21:35:40,466 - Epoch: [81][ 155/ 155] Loss 3.276542 mAP 0.410717
-2023-04-26 21:35:40,536 - ==> mAP: 0.41072 Loss: 3.277
-
-2023-04-26 21:35:40,539 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:35:40,540 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:35:40,576 - 
-
-2023-04-26 21:35:40,576 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:35:51,460 - Epoch: [82][ 50/ 518] Overall Loss 3.033417 Objective Loss 3.033417 LR 0.000500 Time 0.217629
-2023-04-26 21:36:01,651 - Epoch: [82][ 100/ 518] Overall Loss 3.048772 Objective Loss 3.048772 LR 0.000500 Time 0.210702
-2023-04-26 21:36:11,910 - Epoch: [82][ 150/ 518] Overall Loss 3.043770 Objective Loss 3.043770 LR 0.000500 Time 0.208850
-2023-04-26 21:36:22,193 - Epoch: [82][ 200/ 518] Overall Loss 3.058602 Objective Loss 3.058602 LR 0.000500 Time 0.208043
-2023-04-26 21:36:32,359 - Epoch: [82][ 250/ 518] Overall Loss 3.055667 Objective Loss 3.055667 LR 0.000500 Time 0.207093
-2023-04-26 21:36:42,632 - Epoch: [82][ 300/ 518] Overall Loss 3.063942 Objective Loss 3.063942 LR 0.000500 Time 0.206815
-2023-04-26 21:36:52,929 - Epoch: [82][ 350/ 518] Overall Loss 3.065962 Objective Loss 3.065962 LR 0.000500 Time 0.206685
-2023-04-26 21:37:03,207 - Epoch: [82][ 400/ 518] Overall Loss 3.057030 Objective Loss 3.057030 LR 0.000500 Time 0.206541
-2023-04-26 21:37:13,468 - Epoch: [82][ 450/ 518] Overall Loss 3.047783 Objective Loss 3.047783 LR 0.000500 Time 0.206390
-2023-04-26 21:37:23,658 - Epoch: [82][ 500/ 518] Overall Loss 3.052674 Objective Loss 3.052674 LR 0.000500 Time 0.206128
-2023-04-26 21:37:27,178 - Epoch: [82][ 518/ 518] Overall Loss 3.053133 Objective Loss 3.053133 LR 0.000500 Time 0.205759
-2023-04-26 21:37:27,249 - --- validate (epoch=82)-----------
-2023-04-26 21:37:27,250 - 4952 samples (32 per mini-batch)
-2023-04-26 21:37:34,028 - Epoch: [82][ 50/ 155] Loss 3.277569 mAP 0.423571
-2023-04-26 21:37:40,301 - Epoch: [82][ 100/ 155] Loss 3.283835 mAP 0.415468
-2023-04-26 21:37:46,582 - Epoch: [82][ 150/ 155] Loss 3.305333 mAP 0.407193
-2023-04-26 21:37:47,150 - Epoch: [82][ 155/ 155] Loss 3.308270 mAP 0.406630
-2023-04-26 21:37:47,224 - ==> mAP: 0.40663 Loss: 3.308
-
-2023-04-26 21:37:47,228 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:37:47,228 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:37:47,265 - 
-
-2023-04-26 21:37:47,265 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:37:58,211 - Epoch: [83][ 50/ 518] Overall Loss 3.053473 Objective Loss 3.053473 LR 0.000500 Time 0.218862
-2023-04-26 21:38:08,500 - Epoch: [83][ 100/ 518] Overall Loss 3.063033 Objective Loss 3.063033 LR 0.000500 Time 0.212302
-2023-04-26 21:38:18,740 - Epoch: [83][ 150/ 518] Overall Loss 3.050981 Objective Loss 3.050981 LR 0.000500 Time 0.209792
-2023-04-26 21:38:28,959 - Epoch: [83][ 200/ 518] Overall Loss 3.047372 Objective Loss 3.047372 LR 0.000500 Time 0.208430
-2023-04-26 21:38:39,216 - Epoch: [83][ 250/ 518] Overall Loss 3.052759 Objective Loss 3.052759 LR 0.000500 Time 0.207762
-2023-04-26 21:38:49,365 - Epoch: [83][ 300/ 518] Overall Loss 3.053307 Objective Loss 3.053307 LR 0.000500 Time 0.206961
-2023-04-26 21:38:59,627 - Epoch: [83][ 350/ 518] Overall Loss 3.062226 Objective Loss 3.062226 LR 0.000500 Time 0.206710
-2023-04-26 21:39:09,907 - Epoch: [83][ 400/ 518] Overall Loss 3.061813 Objective Loss 3.061813 LR 0.000500 Time 0.206569
-2023-04-26 21:39:20,172 - Epoch: [83][ 450/ 518] Overall Loss 3.064379 Objective Loss 3.064379 LR 0.000500 Time 0.206423
-2023-04-26 21:39:30,453 - Epoch: [83][ 500/ 518] Overall Loss 3.065424 Objective Loss 3.065424 LR 0.000500 Time 0.206339
-2023-04-26 21:39:33,953 - Epoch: [83][ 518/ 518] Overall Loss 3.067587 Objective Loss 3.067587 LR 0.000500 Time 0.205926
-2023-04-26 21:39:34,027 - --- validate (epoch=83)-----------
-2023-04-26 21:39:34,027 - 4952 samples (32 per mini-batch)
-2023-04-26 21:39:40,752 - Epoch: [83][ 50/ 155] Loss 3.306866 mAP 0.412398
-2023-04-26 21:39:47,192 - Epoch: [83][ 100/ 155] Loss 3.297473 mAP 0.415307
-2023-04-26 21:39:53,488 - Epoch: [83][ 150/ 155] Loss 3.307598 mAP 0.411356
-2023-04-26 21:39:54,050 - Epoch: [83][ 155/ 155] Loss 3.310135 mAP 0.410024
-2023-04-26 21:39:54,120 - ==> mAP: 0.41002 Loss: 3.310
-
-2023-04-26 21:39:54,123 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:39:54,123 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:39:54,160 - 
-
-2023-04-26 21:39:54,160 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:40:05,229 - Epoch: [84][ 50/ 518] Overall Loss 3.032611 Objective Loss 3.032611 LR 0.000500 Time 0.221321
-2023-04-26 21:40:15,495 - Epoch: [84][ 100/ 518] Overall Loss 3.054785 Objective Loss 3.054785 LR 0.000500 Time 0.213305
-2023-04-26 21:40:25,711 - Epoch: [84][ 150/ 518] Overall Loss 3.058475 Objective Loss 3.058475 LR 0.000500 Time 0.210294
-2023-04-26 21:40:35,939 - Epoch: [84][ 200/ 518] Overall Loss 3.047546 Objective Loss 3.047546 LR 0.000500 Time 0.208856
-2023-04-26 21:40:46,125 - Epoch: [84][ 250/ 518] Overall Loss 3.050016 Objective Loss 3.050016 LR 0.000500 Time 0.207822
-2023-04-26 21:40:56,238 - Epoch: [84][ 300/ 518] Overall Loss 3.042057 Objective Loss 3.042057 LR 0.000500 Time 0.206889
-2023-04-26 21:41:06,552 - Epoch: [84][ 350/ 518] Overall Loss 3.041763 Objective Loss 3.041763 LR 0.000500 Time 0.206798
-2023-04-26 21:41:16,673 - Epoch: [84][ 400/ 518] Overall Loss 3.044325 Objective Loss 3.044325 LR 0.000500 Time 0.206247
-2023-04-26 21:41:26,920 - Epoch: [84][ 450/ 518] Overall Loss 3.048487 Objective Loss 3.048487 LR 0.000500 Time 0.206098
-2023-04-26 21:41:37,098 - Epoch: [84][ 500/ 518] Overall Loss 3.045993 Objective Loss 3.045993 LR 0.000500 Time 0.205840
-2023-04-26 21:41:40,692 - Epoch: [84][ 518/ 518] Overall Loss 3.048839 Objective Loss 3.048839 LR 0.000500 Time 0.205626
-2023-04-26 21:41:40,763 - --- validate (epoch=84)-----------
-2023-04-26 21:41:40,763 - 4952 samples (32 per mini-batch)
-2023-04-26 21:41:47,438 - Epoch: [84][ 50/ 155] Loss 3.277342 mAP 0.430459
-2023-04-26 21:41:53,789 - Epoch: [84][ 100/ 155] Loss 3.284109 mAP 0.420685
-2023-04-26 21:42:00,189 - Epoch: [84][ 150/ 155] Loss 3.293465 mAP 0.416290
-2023-04-26 21:42:00,744 - Epoch: [84][ 155/ 155] Loss 3.295723 mAP 0.415400
-2023-04-26 21:42:00,815 - ==> mAP: 0.41540 Loss: 3.296
-
-2023-04-26 21:42:00,819 - ==> Best [mAP: 0.417850 vloss: 3.311546 Sparsity:0.00 Params: 2177088 on epoch: 75]
-2023-04-26 21:42:00,819 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:42:00,856 - 
-
-2023-04-26 21:42:00,856 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:42:11,822 - Epoch: [85][ 50/ 518] Overall Loss 3.077004 Objective Loss 3.077004 LR 0.000500 Time 0.219266
-2023-04-26 21:42:22,014 - Epoch: [85][ 100/ 518] Overall Loss 3.075713 Objective Loss 3.075713 LR 0.000500 Time 0.211531
-2023-04-26 21:42:32,241 - Epoch: [85][ 150/ 518] Overall Loss 3.059461 Objective Loss 3.059461 LR 0.000500 Time 0.209189
-2023-04-26 21:42:42,571 - Epoch: [85][ 200/ 518] Overall Loss 3.054213 Objective Loss 3.054213 LR 0.000500 Time 0.208536
-2023-04-26 21:42:52,839 - Epoch: [85][ 250/ 518] Overall Loss 3.052558 Objective Loss 3.052558 LR 0.000500 Time 0.207894
-2023-04-26 21:43:03,120 - Epoch: [85][ 300/ 518] Overall Loss 3.052492 Objective Loss 3.052492 LR 0.000500 Time 0.207508
-2023-04-26 21:43:13,435 - Epoch: [85][ 350/ 518] Overall Loss 3.060990 Objective Loss 3.060990 LR 0.000500 Time 0.207332
-2023-04-26 21:43:23,623 - Epoch: [85][ 400/ 518] Overall Loss 3.058946 Objective Loss 3.058946 LR 0.000500 Time 0.206881
-2023-04-26 21:43:33,877 - Epoch: [85][ 450/ 518] Overall Loss 3.051452 Objective Loss 3.051452 LR 0.000500 Time 0.206678
-2023-04-26 21:43:44,027 - Epoch: [85][ 500/ 518] Overall Loss 3.051834 Objective Loss 3.051834 LR 0.000500 Time 0.206306
-2023-04-26 21:43:47,548 - Epoch: [85][ 518/ 518] Overall Loss 3.055367 Objective Loss 3.055367 LR 0.000500 Time 0.205933
-2023-04-26 21:43:47,619 - --- validate (epoch=85)-----------
-2023-04-26 21:43:47,619 - 4952 samples (32 per mini-batch)
-2023-04-26 21:43:54,367 - Epoch: [85][ 50/ 155] Loss 3.247985 mAP 0.440270
-2023-04-26 21:44:00,817 - Epoch: [85][ 100/ 155] Loss 3.278212 mAP 0.428884
-2023-04-26 21:44:07,326 - Epoch: [85][ 150/ 155] Loss 3.287357 mAP 0.425913
-2023-04-26 21:44:07,912 - Epoch: [85][ 155/ 155] Loss 3.290909 mAP 0.426456
-2023-04-26 21:44:07,985 - ==> mAP: 0.42646 Loss: 3.291
-
-2023-04-26 21:44:07,989 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85]
-2023-04-26 21:44:07,989 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:44:08,044 - 
-
-2023-04-26 21:44:08,044 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:44:18,925 - Epoch: [86][ 50/ 518] Overall Loss 3.026055 Objective Loss 3.026055 LR 0.000500 Time 0.217558
-2023-04-26 21:44:29,262 - Epoch: [86][ 100/ 518] Overall Loss 3.057616 Objective Loss 3.057616 LR 0.000500 Time 0.212134
-2023-04-26 21:44:39,486 - Epoch: [86][ 150/ 518] Overall Loss 3.062893 Objective Loss 3.062893 LR 0.000500 Time 0.209574
-2023-04-26 21:44:49,685 - Epoch: [86][ 200/ 518] Overall Loss 3.055646 Objective Loss 3.055646 LR 0.000500 Time 0.208165
-2023-04-26 21:44:59,939 - Epoch: [86][ 250/ 518] Overall Loss 3.042679 Objective Loss 3.042679 LR 0.000500 Time 0.207543
-2023-04-26 21:45:10,203 - Epoch: [86][ 300/ 518] Overall Loss 3.040970 Objective Loss 3.040970 LR 0.000500 Time 0.207161
-2023-04-26 21:45:20,381 - Epoch: [86][ 350/ 518] Overall Loss 3.049172 Objective Loss 3.049172 LR 0.000500 Time 0.206640
-2023-04-26 21:45:30,671 - Epoch: [86][ 400/ 518] Overall Loss 3.044085 Objective Loss 3.044085 LR 0.000500 Time 0.206533
-2023-04-26 21:45:41,009 - Epoch: [86][ 450/ 518] Overall Loss 3.039991 Objective Loss 3.039991 LR 0.000500 Time 0.206553
-2023-04-26 21:45:51,241 - Epoch: [86][ 500/ 518] Overall Loss 3.039003 Objective Loss 3.039003 LR 0.000500 Time 0.206359
-2023-04-26 21:45:54,765 - Epoch: [86][ 518/ 518] Overall Loss 3.040573 Objective Loss 3.040573 LR 0.000500 Time 0.205990
-2023-04-26 21:45:54,835 - --- validate (epoch=86)-----------
-2023-04-26 21:45:54,836 - 4952 samples (32 per mini-batch)
-2023-04-26 21:46:01,551 - Epoch: [86][ 50/ 155] Loss 3.286394 mAP 0.410940
-2023-04-26 21:46:07,927 - Epoch: [86][ 100/ 155] Loss 3.286257 mAP 0.422095
-2023-04-26 21:46:14,295 - Epoch: [86][ 150/ 155] Loss 3.269299 mAP 0.422120
-2023-04-26 21:46:14,863 - Epoch: [86][ 155/ 155] Loss 3.269087 mAP 0.420386
-2023-04-26 21:46:14,932 - ==> mAP: 0.42039 Loss: 3.269
-
-2023-04-26 21:46:14,936 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85]
-2023-04-26 21:46:14,936 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:46:14,973 - 
-
-2023-04-26 21:46:14,973 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:46:26,005 - Epoch: [87][ 50/ 518] Overall Loss 3.072611 Objective Loss 3.072611 LR 0.000500 Time 0.220567
-2023-04-26 21:46:36,189 - Epoch: [87][ 100/ 518] Overall Loss 3.057235 Objective Loss 3.057235 LR 0.000500 Time 0.212107
-2023-04-26 21:46:46,345 - Epoch: [87][ 150/ 518] Overall Loss 3.042274 Objective Loss 3.042274 LR 0.000500 Time 0.209100
-2023-04-26 21:46:56,628 - Epoch: [87][ 200/ 518] Overall Loss 3.042593 Objective Loss 3.042593 LR 0.000500 Time 0.208234
-2023-04-26 21:47:06,873 - Epoch: [87][ 250/ 518] Overall Loss 3.047879 Objective Loss 3.047879 LR 0.000500 Time 0.207560
-2023-04-26 21:47:17,051 - Epoch: [87][ 300/ 518] Overall Loss 3.042576 Objective Loss 3.042576 LR 0.000500 Time 0.206888
-2023-04-26 21:47:27,251 - Epoch: [87][ 350/ 518] Overall Loss 3.037624 Objective Loss 3.037624 LR 0.000500 Time 0.206472
-2023-04-26 21:47:37,466 - Epoch: [87][ 400/ 518] Overall Loss 3.040583 Objective Loss 3.040583 LR 0.000500 Time 0.206196
-2023-04-26 21:47:47,639 - Epoch: [87][ 450/ 518] Overall Loss 3.037068 Objective Loss 3.037068 LR 0.000500 Time 0.205888
-2023-04-26 21:47:57,880 - Epoch: [87][ 500/ 518] Overall Loss 3.029901 Objective Loss 3.029901 LR 0.000500 Time 0.205778
-2023-04-26 21:48:01,393 - Epoch: [87][ 518/ 518] Overall Loss 3.029914 Objective Loss 3.029914 LR 0.000500 Time 0.205408
-2023-04-26 21:48:01,464 - --- validate (epoch=87)-----------
-2023-04-26 21:48:01,465 - 4952 samples (32 per mini-batch)
-2023-04-26 21:48:08,273 - Epoch: [87][ 50/ 155] Loss 3.274660 mAP 0.428634
-2023-04-26 21:48:14,693 - Epoch: [87][ 100/ 155] Loss 3.293355 mAP 0.423144
-2023-04-26 21:48:21,080 - Epoch: [87][ 150/ 155] Loss 3.287229 mAP 0.423779
-2023-04-26 21:48:21,645 - Epoch: [87][ 155/ 155] Loss 3.290364 mAP 0.422433
-2023-04-26 21:48:21,712 - ==> mAP: 0.42243 Loss: 3.290
-
-2023-04-26 21:48:21,716 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85]
-2023-04-26 21:48:21,716 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:48:21,754 - 
-
-2023-04-26 21:48:21,754 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:48:32,817 - Epoch: [88][ 50/ 518] Overall Loss 2.999429 Objective Loss 2.999429 LR 0.000500 Time 0.221204
-2023-04-26 21:48:43,073 - Epoch: [88][ 100/ 518] Overall Loss 3.030605 Objective Loss 3.030605 LR 0.000500 Time 0.213141
-2023-04-26 21:48:53,328 - Epoch: [88][ 150/ 518] Overall Loss 3.046888 Objective Loss 3.046888 LR 0.000500 Time 0.210452
-2023-04-26 21:49:03,515 - Epoch: [88][ 200/ 518] Overall Loss 3.045787 Objective Loss 3.045787 LR 0.000500 Time 0.208766
-2023-04-26 21:49:13,824 - Epoch: [88][ 250/ 518] Overall Loss 3.034466 Objective Loss 3.034466 LR 0.000500 Time 0.208243
-2023-04-26 21:49:24,069 - Epoch: [88][ 300/ 518] Overall Loss 3.039331 Objective Loss 3.039331 LR 0.000500 Time 0.207681
-2023-04-26 21:49:34,335 - Epoch: [88][ 350/ 518] Overall Loss 3.031304 Objective Loss 3.031304 LR 0.000500 Time 0.207338
-2023-04-26 21:49:44,561 - Epoch: [88][ 400/ 518] Overall Loss 3.032047 Objective Loss 3.032047 LR 0.000500 Time 0.206981
-2023-04-26 21:49:54,775 - Epoch: [88][ 450/ 518] Overall Loss 3.033507 Objective Loss 3.033507 LR 0.000500 Time 0.206678
-2023-04-26 21:50:05,023 - Epoch: [88][ 500/ 518] Overall Loss 3.034987 Objective Loss 3.034987 LR 0.000500 Time 0.206503
-2023-04-26 21:50:08,544 - Epoch: [88][ 518/ 518] Overall Loss 3.032736 Objective Loss 3.032736 LR 0.000500 Time 0.206123
-2023-04-26 21:50:08,616 - --- validate (epoch=88)-----------
-2023-04-26 21:50:08,617 - 4952 samples (32 per mini-batch)
-2023-04-26 21:50:15,458 - Epoch: [88][ 50/ 155] Loss 3.337094 mAP 0.422556
-2023-04-26 21:50:21,855 - Epoch: [88][ 100/ 155] Loss 3.325244 mAP 0.418824
-2023-04-26 21:50:28,280 - Epoch: [88][ 150/ 155] Loss 3.315395 mAP 0.418259
-2023-04-26 21:50:28,843 - Epoch: [88][ 155/ 155] Loss 3.315596 mAP 0.418799
-2023-04-26 21:50:28,911 - ==> mAP: 0.41880 Loss: 3.316
-
-2023-04-26 21:50:28,914 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85]
-2023-04-26 21:50:28,915 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:50:28,952 - 
-
-2023-04-26 21:50:28,952 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:50:39,925 - Epoch: [89][ 50/ 518] Overall Loss 3.012969 Objective Loss 3.012969 LR 0.000500 Time 0.219413
-2023-04-26 21:50:50,247 - Epoch: [89][ 100/ 518] Overall Loss 3.025885 Objective Loss 3.025885 LR 0.000500 Time 0.212908
-2023-04-26 21:51:00,508 - Epoch: [89][ 150/ 518] Overall Loss 3.029334 Objective Loss 3.029334 LR 0.000500 Time 0.210332
-2023-04-26 21:51:10,701 - Epoch: [89][ 200/ 518] Overall Loss 3.028606 Objective Loss 3.028606 LR 0.000500 Time 0.208705
-2023-04-26 21:51:20,928 - Epoch: [89][ 250/ 518] Overall Loss 3.039650 Objective Loss 3.039650 LR 0.000500 Time 0.207868
-2023-04-26 21:51:31,247 - Epoch: [89][ 300/ 518] Overall Loss 3.037880 Objective Loss 3.037880 LR 0.000500 Time 0.207614
-2023-04-26 21:51:41,557 - Epoch: [89][ 350/ 518] Overall Loss 3.029558 Objective Loss 3.029558 LR 0.000500 Time 0.207407
-2023-04-26 21:51:51,723 - Epoch: [89][ 400/ 518] Overall Loss 3.033403 Objective Loss 3.033403 LR 0.000500 Time 0.206893
-2023-04-26 21:52:01,935 - Epoch: [89][ 450/ 518] Overall Loss 3.033591 Objective Loss 3.033591 LR 0.000500 Time 0.206593
-2023-04-26 21:52:12,103 - Epoch: [89][ 500/ 518] Overall Loss 3.030968 Objective Loss 3.030968 LR 0.000500 Time 0.206266
-2023-04-26 21:52:15,604 - Epoch: [89][ 518/ 518] Overall Loss 3.028423 Objective Loss 3.028423 LR 0.000500 Time 0.205856
-2023-04-26 21:52:15,675 - --- validate (epoch=89)-----------
-2023-04-26 21:52:15,676 - 4952 samples (32 per mini-batch)
-2023-04-26 21:52:22,427 - Epoch: [89][ 50/ 155] Loss 3.316568 mAP 0.427851
-2023-04-26 21:52:28,802 - Epoch: [89][ 100/ 155] Loss 3.301697 mAP 0.420551
-2023-04-26 21:52:35,076 - Epoch: [89][ 150/ 155] Loss 3.312096 mAP 0.415417
-2023-04-26 21:52:35,635 - Epoch: [89][ 155/ 155] Loss 3.310998 mAP 0.414813
-2023-04-26 21:52:35,705 - ==> mAP: 0.41481 Loss: 3.311
-
-2023-04-26 21:52:35,709 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85]
-2023-04-26 21:52:35,709 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:52:35,749 - 
-
-2023-04-26 21:52:35,749 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:52:46,698 - Epoch: [90][ 50/ 518] Overall Loss 3.054113 Objective Loss 3.054113 LR 0.000500 Time 0.218922
-2023-04-26 21:52:56,987 - Epoch: [90][ 100/ 518] Overall Loss 3.045349 Objective Loss 3.045349 LR 0.000500 Time 0.212340
-2023-04-26 21:53:07,313 - Epoch: [90][ 150/ 518] Overall Loss 3.045084 Objective Loss 3.045084 LR 0.000500 Time 0.210389
-2023-04-26 21:53:17,534 - Epoch: [90][ 200/ 518] Overall Loss 3.039974 Objective Loss 3.039974 LR 0.000500 Time 0.208889
-2023-04-26 21:53:27,783 - Epoch: [90][ 250/ 518] Overall Loss 3.032842 Objective Loss 3.032842 LR 0.000500 Time 0.208100
-2023-04-26 21:53:38,007 - Epoch: [90][ 300/ 518] Overall Loss 3.039038 Objective Loss 3.039038 LR 0.000500 Time 0.207491
-2023-04-26 21:53:48,332 - Epoch: [90][ 350/ 518] Overall Loss 3.036297 Objective Loss 3.036297 LR 0.000500 Time 0.207344
-2023-04-26 21:53:58,492 - Epoch: [90][ 400/ 518] Overall Loss 3.037657 Objective Loss 3.037657 LR 0.000500 Time 0.206821
-2023-04-26 21:54:08,713 - Epoch: [90][ 450/ 518] Overall Loss 3.041057 Objective Loss 3.041057 LR 0.000500 Time 0.206552
-2023-04-26 21:54:18,922 - Epoch: [90][ 500/ 518] Overall Loss 3.043335 Objective Loss 3.043335 LR 0.000500 Time 0.206312
-2023-04-26 21:54:22,514 - Epoch: [90][ 518/ 518] Overall Loss 3.040555 Objective Loss 3.040555 LR 0.000500 Time 0.206074
-2023-04-26 21:54:22,585 - --- validate (epoch=90)-----------
-2023-04-26 21:54:22,585 - 4952 samples (32 per mini-batch)
-2023-04-26 21:54:29,270 - Epoch: [90][ 50/ 155] Loss 3.252711 mAP 0.399350
-2023-04-26 21:54:35,602 - Epoch: [90][ 100/ 155] Loss 3.290069 mAP 0.406615
-2023-04-26 21:54:41,899 - Epoch: [90][ 150/ 155] Loss 3.296188 mAP 0.403927
-2023-04-26 21:54:42,471 - Epoch: [90][ 155/ 155] Loss 3.304329 mAP 0.403906
-2023-04-26 21:54:42,545 - ==> mAP: 0.40391 Loss: 3.304
-
-2023-04-26 21:54:42,549 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85]
-2023-04-26 21:54:42,549 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 21:54:42,586 - 
-
-2023-04-26 21:54:42,586 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 21:54:53,470 - Epoch: [91][ 50/ 518] Overall Loss 3.032710 Objective Loss 3.032710 LR 0.000500 Time 0.217621
-2023-04-26 21:55:03,745 - Epoch: [91][ 100/ 518] Overall Loss 3.019805 Objective Loss 3.019805 LR 0.000500 Time 0.211538
-2023-04-26 21:55:13,910 - Epoch: [91][ 150/ 518] Overall Loss 3.039624 Objective Loss 3.039624 LR 0.000500 Time 0.208783
-2023-04-26 21:55:24,080 - Epoch: [91][ 200/ 518] Overall Loss 3.039293
Objective Loss 3.039293 LR 0.000500 Time 0.207428 -2023-04-26 21:55:34,360 - Epoch: [91][ 250/ 518] Overall Loss 3.047570 Objective Loss 3.047570 LR 0.000500 Time 0.207056 -2023-04-26 21:55:44,667 - Epoch: [91][ 300/ 518] Overall Loss 3.037238 Objective Loss 3.037238 LR 0.000500 Time 0.206898 -2023-04-26 21:55:54,912 - Epoch: [91][ 350/ 518] Overall Loss 3.038592 Objective Loss 3.038592 LR 0.000500 Time 0.206607 -2023-04-26 21:56:05,103 - Epoch: [91][ 400/ 518] Overall Loss 3.036638 Objective Loss 3.036638 LR 0.000500 Time 0.206256 -2023-04-26 21:56:15,431 - Epoch: [91][ 450/ 518] Overall Loss 3.038631 Objective Loss 3.038631 LR 0.000500 Time 0.206285 -2023-04-26 21:56:25,683 - Epoch: [91][ 500/ 518] Overall Loss 3.037446 Objective Loss 3.037446 LR 0.000500 Time 0.206157 -2023-04-26 21:56:29,185 - Epoch: [91][ 518/ 518] Overall Loss 3.037625 Objective Loss 3.037625 LR 0.000500 Time 0.205754 -2023-04-26 21:56:29,258 - --- validate (epoch=91)----------- -2023-04-26 21:56:29,258 - 4952 samples (32 per mini-batch) -2023-04-26 21:56:36,122 - Epoch: [91][ 50/ 155] Loss 3.258290 mAP 0.420625 -2023-04-26 21:56:42,517 - Epoch: [91][ 100/ 155] Loss 3.258435 mAP 0.421614 -2023-04-26 21:56:48,888 - Epoch: [91][ 150/ 155] Loss 3.282185 mAP 0.423554 -2023-04-26 21:56:49,457 - Epoch: [91][ 155/ 155] Loss 3.283382 mAP 0.424389 -2023-04-26 21:56:49,529 - ==> mAP: 0.42439 Loss: 3.283 - -2023-04-26 21:56:49,533 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85] -2023-04-26 21:56:49,533 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 21:56:49,570 - - -2023-04-26 21:56:49,570 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 21:57:00,552 - Epoch: [92][ 50/ 518] Overall Loss 3.080411 Objective Loss 3.080411 LR 0.000500 Time 0.219577 -2023-04-26 21:57:10,695 - Epoch: [92][ 100/ 518] Overall Loss 3.013047 Objective Loss 3.013047 LR 0.000500 Time 0.211200 -2023-04-26 21:57:20,960 - Epoch: [92][ 150/ 518] Overall 
Loss 3.013692 Objective Loss 3.013692 LR 0.000500 Time 0.209225 -2023-04-26 21:57:31,212 - Epoch: [92][ 200/ 518] Overall Loss 3.018153 Objective Loss 3.018153 LR 0.000500 Time 0.208171 -2023-04-26 21:57:41,523 - Epoch: [92][ 250/ 518] Overall Loss 3.024871 Objective Loss 3.024871 LR 0.000500 Time 0.207772 -2023-04-26 21:57:51,841 - Epoch: [92][ 300/ 518] Overall Loss 3.031545 Objective Loss 3.031545 LR 0.000500 Time 0.207533 -2023-04-26 21:58:02,128 - Epoch: [92][ 350/ 518] Overall Loss 3.035972 Objective Loss 3.035972 LR 0.000500 Time 0.207271 -2023-04-26 21:58:12,382 - Epoch: [92][ 400/ 518] Overall Loss 3.034929 Objective Loss 3.034929 LR 0.000500 Time 0.206994 -2023-04-26 21:58:22,632 - Epoch: [92][ 450/ 518] Overall Loss 3.037618 Objective Loss 3.037618 LR 0.000500 Time 0.206767 -2023-04-26 21:58:32,909 - Epoch: [92][ 500/ 518] Overall Loss 3.034327 Objective Loss 3.034327 LR 0.000500 Time 0.206642 -2023-04-26 21:58:36,511 - Epoch: [92][ 518/ 518] Overall Loss 3.033455 Objective Loss 3.033455 LR 0.000500 Time 0.206415 -2023-04-26 21:58:36,585 - --- validate (epoch=92)----------- -2023-04-26 21:58:36,585 - 4952 samples (32 per mini-batch) -2023-04-26 21:58:43,273 - Epoch: [92][ 50/ 155] Loss 3.284948 mAP 0.420904 -2023-04-26 21:58:49,660 - Epoch: [92][ 100/ 155] Loss 3.294987 mAP 0.416202 -2023-04-26 21:58:56,006 - Epoch: [92][ 150/ 155] Loss 3.275779 mAP 0.418826 -2023-04-26 21:58:56,588 - Epoch: [92][ 155/ 155] Loss 3.275896 mAP 0.418425 -2023-04-26 21:58:56,660 - ==> mAP: 0.41843 Loss: 3.276 - -2023-04-26 21:58:56,664 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85] -2023-04-26 21:58:56,664 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 21:58:56,700 - - -2023-04-26 21:58:56,700 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 21:59:07,693 - Epoch: [93][ 50/ 518] Overall Loss 3.009648 Objective Loss 3.009648 LR 0.000500 Time 0.219807 -2023-04-26 21:59:17,877 - Epoch: [93][ 
100/ 518] Overall Loss 3.014570 Objective Loss 3.014570 LR 0.000500 Time 0.211718 -2023-04-26 21:59:28,092 - Epoch: [93][ 150/ 518] Overall Loss 3.025954 Objective Loss 3.025954 LR 0.000500 Time 0.209239 -2023-04-26 21:59:38,279 - Epoch: [93][ 200/ 518] Overall Loss 3.019347 Objective Loss 3.019347 LR 0.000500 Time 0.207856 -2023-04-26 21:59:48,588 - Epoch: [93][ 250/ 518] Overall Loss 3.018910 Objective Loss 3.018910 LR 0.000500 Time 0.207511 -2023-04-26 21:59:58,861 - Epoch: [93][ 300/ 518] Overall Loss 3.013264 Objective Loss 3.013264 LR 0.000500 Time 0.207166 -2023-04-26 22:00:09,031 - Epoch: [93][ 350/ 518] Overall Loss 3.010237 Objective Loss 3.010237 LR 0.000500 Time 0.206621 -2023-04-26 22:00:19,304 - Epoch: [93][ 400/ 518] Overall Loss 3.021139 Objective Loss 3.021139 LR 0.000500 Time 0.206474 -2023-04-26 22:00:29,582 - Epoch: [93][ 450/ 518] Overall Loss 3.019903 Objective Loss 3.019903 LR 0.000500 Time 0.206368 -2023-04-26 22:00:39,829 - Epoch: [93][ 500/ 518] Overall Loss 3.016315 Objective Loss 3.016315 LR 0.000500 Time 0.206223 -2023-04-26 22:00:43,405 - Epoch: [93][ 518/ 518] Overall Loss 3.016078 Objective Loss 3.016078 LR 0.000500 Time 0.205958 -2023-04-26 22:00:43,478 - --- validate (epoch=93)----------- -2023-04-26 22:00:43,479 - 4952 samples (32 per mini-batch) -2023-04-26 22:00:50,215 - Epoch: [93][ 50/ 155] Loss 3.248965 mAP 0.410667 -2023-04-26 22:00:56,554 - Epoch: [93][ 100/ 155] Loss 3.257108 mAP 0.412584 -2023-04-26 22:01:02,955 - Epoch: [93][ 150/ 155] Loss 3.272343 mAP 0.415088 -2023-04-26 22:01:03,536 - Epoch: [93][ 155/ 155] Loss 3.272244 mAP 0.414757 -2023-04-26 22:01:03,603 - ==> mAP: 0.41476 Loss: 3.272 - -2023-04-26 22:01:03,607 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85] -2023-04-26 22:01:03,607 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:01:03,645 - - -2023-04-26 22:01:03,645 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:01:14,613 
- Epoch: [94][ 50/ 518] Overall Loss 3.087485 Objective Loss 3.087485 LR 0.000500 Time 0.219300 -2023-04-26 22:01:24,855 - Epoch: [94][ 100/ 518] Overall Loss 3.041743 Objective Loss 3.041743 LR 0.000500 Time 0.212056 -2023-04-26 22:01:35,071 - Epoch: [94][ 150/ 518] Overall Loss 3.035968 Objective Loss 3.035968 LR 0.000500 Time 0.209467 -2023-04-26 22:01:45,283 - Epoch: [94][ 200/ 518] Overall Loss 3.024518 Objective Loss 3.024518 LR 0.000500 Time 0.208150 -2023-04-26 22:01:55,603 - Epoch: [94][ 250/ 518] Overall Loss 3.032809 Objective Loss 3.032809 LR 0.000500 Time 0.207795 -2023-04-26 22:02:05,825 - Epoch: [94][ 300/ 518] Overall Loss 3.030361 Objective Loss 3.030361 LR 0.000500 Time 0.207232 -2023-04-26 22:02:16,048 - Epoch: [94][ 350/ 518] Overall Loss 3.033276 Objective Loss 3.033276 LR 0.000500 Time 0.206829 -2023-04-26 22:02:26,279 - Epoch: [94][ 400/ 518] Overall Loss 3.031147 Objective Loss 3.031147 LR 0.000500 Time 0.206549 -2023-04-26 22:02:36,523 - Epoch: [94][ 450/ 518] Overall Loss 3.030109 Objective Loss 3.030109 LR 0.000500 Time 0.206360 -2023-04-26 22:02:46,701 - Epoch: [94][ 500/ 518] Overall Loss 3.026862 Objective Loss 3.026862 LR 0.000500 Time 0.206077 -2023-04-26 22:02:50,238 - Epoch: [94][ 518/ 518] Overall Loss 3.026007 Objective Loss 3.026007 LR 0.000500 Time 0.205744 -2023-04-26 22:02:50,311 - --- validate (epoch=94)----------- -2023-04-26 22:02:50,311 - 4952 samples (32 per mini-batch) -2023-04-26 22:02:57,060 - Epoch: [94][ 50/ 155] Loss 3.270680 mAP 0.418623 -2023-04-26 22:03:03,478 - Epoch: [94][ 100/ 155] Loss 3.298549 mAP 0.422189 -2023-04-26 22:03:09,915 - Epoch: [94][ 150/ 155] Loss 3.286012 mAP 0.424944 -2023-04-26 22:03:10,476 - Epoch: [94][ 155/ 155] Loss 3.288764 mAP 0.424591 -2023-04-26 22:03:10,547 - ==> mAP: 0.42459 Loss: 3.289 - -2023-04-26 22:03:10,551 - ==> Best [mAP: 0.426456 vloss: 3.290909 Sparsity:0.00 Params: 2177088 on epoch: 85] -2023-04-26 22:03:10,551 - Saving checkpoint to: 
logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:03:10,587 - - -2023-04-26 22:03:10,587 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:03:21,627 - Epoch: [95][ 50/ 518] Overall Loss 3.022106 Objective Loss 3.022106 LR 0.000500 Time 0.220731 -2023-04-26 22:03:31,868 - Epoch: [95][ 100/ 518] Overall Loss 3.003707 Objective Loss 3.003707 LR 0.000500 Time 0.212758 -2023-04-26 22:03:42,074 - Epoch: [95][ 150/ 518] Overall Loss 3.012070 Objective Loss 3.012070 LR 0.000500 Time 0.209870 -2023-04-26 22:03:52,348 - Epoch: [95][ 200/ 518] Overall Loss 3.001260 Objective Loss 3.001260 LR 0.000500 Time 0.208764 -2023-04-26 22:04:02,505 - Epoch: [95][ 250/ 518] Overall Loss 3.006484 Objective Loss 3.006484 LR 0.000500 Time 0.207634 -2023-04-26 22:04:12,783 - Epoch: [95][ 300/ 518] Overall Loss 3.004067 Objective Loss 3.004067 LR 0.000500 Time 0.207283 -2023-04-26 22:04:23,014 - Epoch: [95][ 350/ 518] Overall Loss 3.007525 Objective Loss 3.007525 LR 0.000500 Time 0.206897 -2023-04-26 22:04:33,255 - Epoch: [95][ 400/ 518] Overall Loss 3.012224 Objective Loss 3.012224 LR 0.000500 Time 0.206633 -2023-04-26 22:04:43,589 - Epoch: [95][ 450/ 518] Overall Loss 3.014515 Objective Loss 3.014515 LR 0.000500 Time 0.206636 -2023-04-26 22:04:53,850 - Epoch: [95][ 500/ 518] Overall Loss 3.012069 Objective Loss 3.012069 LR 0.000500 Time 0.206489 -2023-04-26 22:04:57,380 - Epoch: [95][ 518/ 518] Overall Loss 3.013200 Objective Loss 3.013200 LR 0.000500 Time 0.206127 -2023-04-26 22:04:57,451 - --- validate (epoch=95)----------- -2023-04-26 22:04:57,451 - 4952 samples (32 per mini-batch) -2023-04-26 22:05:04,378 - Epoch: [95][ 50/ 155] Loss 3.269525 mAP 0.417954 -2023-04-26 22:05:10,906 - Epoch: [95][ 100/ 155] Loss 3.265765 mAP 0.430830 -2023-04-26 22:05:17,277 - Epoch: [95][ 150/ 155] Loss 3.278527 mAP 0.430861 -2023-04-26 22:05:17,852 - Epoch: [95][ 155/ 155] Loss 3.276514 mAP 0.430350 -2023-04-26 22:05:17,923 - ==> mAP: 0.43035 Loss: 3.277 - -2023-04-26 
22:05:17,927 - ==> Best [mAP: 0.430350 vloss: 3.276514 Sparsity:0.00 Params: 2177088 on epoch: 95] -2023-04-26 22:05:17,927 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:05:17,979 - - -2023-04-26 22:05:17,979 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:05:28,889 - Epoch: [96][ 50/ 518] Overall Loss 3.048438 Objective Loss 3.048438 LR 0.000500 Time 0.218142 -2023-04-26 22:05:39,124 - Epoch: [96][ 100/ 518] Overall Loss 3.022108 Objective Loss 3.022108 LR 0.000500 Time 0.211404 -2023-04-26 22:05:49,338 - Epoch: [96][ 150/ 518] Overall Loss 3.010093 Objective Loss 3.010093 LR 0.000500 Time 0.209019 -2023-04-26 22:05:59,521 - Epoch: [96][ 200/ 518] Overall Loss 3.014911 Objective Loss 3.014911 LR 0.000500 Time 0.207669 -2023-04-26 22:06:09,800 - Epoch: [96][ 250/ 518] Overall Loss 3.017873 Objective Loss 3.017873 LR 0.000500 Time 0.207246 -2023-04-26 22:06:20,035 - Epoch: [96][ 300/ 518] Overall Loss 3.010125 Objective Loss 3.010125 LR 0.000500 Time 0.206814 -2023-04-26 22:06:30,307 - Epoch: [96][ 350/ 518] Overall Loss 3.014559 Objective Loss 3.014559 LR 0.000500 Time 0.206614 -2023-04-26 22:06:40,514 - Epoch: [96][ 400/ 518] Overall Loss 3.014699 Objective Loss 3.014699 LR 0.000500 Time 0.206301 -2023-04-26 22:06:50,655 - Epoch: [96][ 450/ 518] Overall Loss 3.014652 Objective Loss 3.014652 LR 0.000500 Time 0.205910 -2023-04-26 22:07:00,974 - Epoch: [96][ 500/ 518] Overall Loss 3.012018 Objective Loss 3.012018 LR 0.000500 Time 0.205953 -2023-04-26 22:07:04,509 - Epoch: [96][ 518/ 518] Overall Loss 3.012184 Objective Loss 3.012184 LR 0.000500 Time 0.205620 -2023-04-26 22:07:04,581 - --- validate (epoch=96)----------- -2023-04-26 22:07:04,581 - 4952 samples (32 per mini-batch) -2023-04-26 22:07:11,257 - Epoch: [96][ 50/ 155] Loss 3.253489 mAP 0.408331 -2023-04-26 22:07:17,567 - Epoch: [96][ 100/ 155] Loss 3.247173 mAP 0.423004 -2023-04-26 22:07:23,865 - Epoch: [96][ 150/ 155] Loss 3.263947 mAP 0.418510 
-2023-04-26 22:07:24,428 - Epoch: [96][ 155/ 155] Loss 3.262890 mAP 0.417728 -2023-04-26 22:07:24,497 - ==> mAP: 0.41773 Loss: 3.263 - -2023-04-26 22:07:24,500 - ==> Best [mAP: 0.430350 vloss: 3.276514 Sparsity:0.00 Params: 2177088 on epoch: 95] -2023-04-26 22:07:24,500 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:07:24,538 - - -2023-04-26 22:07:24,538 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:07:35,421 - Epoch: [97][ 50/ 518] Overall Loss 3.023703 Objective Loss 3.023703 LR 0.000500 Time 0.217613 -2023-04-26 22:07:45,674 - Epoch: [97][ 100/ 518] Overall Loss 3.019751 Objective Loss 3.019751 LR 0.000500 Time 0.211318 -2023-04-26 22:07:55,871 - Epoch: [97][ 150/ 518] Overall Loss 3.017469 Objective Loss 3.017469 LR 0.000500 Time 0.208850 -2023-04-26 22:08:05,991 - Epoch: [97][ 200/ 518] Overall Loss 3.014952 Objective Loss 3.014952 LR 0.000500 Time 0.207229 -2023-04-26 22:08:16,147 - Epoch: [97][ 250/ 518] Overall Loss 3.012114 Objective Loss 3.012114 LR 0.000500 Time 0.206398 -2023-04-26 22:08:26,381 - Epoch: [97][ 300/ 518] Overall Loss 3.013289 Objective Loss 3.013289 LR 0.000500 Time 0.206107 -2023-04-26 22:08:36,604 - Epoch: [97][ 350/ 518] Overall Loss 3.015412 Objective Loss 3.015412 LR 0.000500 Time 0.205868 -2023-04-26 22:08:46,870 - Epoch: [97][ 400/ 518] Overall Loss 3.011962 Objective Loss 3.011962 LR 0.000500 Time 0.205794 -2023-04-26 22:08:57,099 - Epoch: [97][ 450/ 518] Overall Loss 3.009770 Objective Loss 3.009770 LR 0.000500 Time 0.205655 -2023-04-26 22:09:07,283 - Epoch: [97][ 500/ 518] Overall Loss 3.008502 Objective Loss 3.008502 LR 0.000500 Time 0.205455 -2023-04-26 22:09:10,819 - Epoch: [97][ 518/ 518] Overall Loss 3.008551 Objective Loss 3.008551 LR 0.000500 Time 0.205140 -2023-04-26 22:09:10,891 - --- validate (epoch=97)----------- -2023-04-26 22:09:10,892 - 4952 samples (32 per mini-batch) -2023-04-26 22:09:17,614 - Epoch: [97][ 50/ 155] Loss 3.278458 mAP 0.413386 -2023-04-26 
22:09:23,960 - Epoch: [97][ 100/ 155] Loss 3.279165 mAP 0.419138 -2023-04-26 22:09:30,304 - Epoch: [97][ 150/ 155] Loss 3.271825 mAP 0.418983 -2023-04-26 22:09:30,864 - Epoch: [97][ 155/ 155] Loss 3.272474 mAP 0.417774 -2023-04-26 22:09:30,924 - ==> mAP: 0.41777 Loss: 3.272 - -2023-04-26 22:09:30,928 - ==> Best [mAP: 0.430350 vloss: 3.276514 Sparsity:0.00 Params: 2177088 on epoch: 95] -2023-04-26 22:09:30,928 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:09:30,965 - - -2023-04-26 22:09:30,966 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:09:42,002 - Epoch: [98][ 50/ 518] Overall Loss 3.001288 Objective Loss 3.001288 LR 0.000500 Time 0.220679 -2023-04-26 22:09:52,289 - Epoch: [98][ 100/ 518] Overall Loss 3.035055 Objective Loss 3.035055 LR 0.000500 Time 0.213193 -2023-04-26 22:10:02,469 - Epoch: [98][ 150/ 518] Overall Loss 3.039442 Objective Loss 3.039442 LR 0.000500 Time 0.209982 -2023-04-26 22:10:12,741 - Epoch: [98][ 200/ 518] Overall Loss 3.038451 Objective Loss 3.038451 LR 0.000500 Time 0.208840 -2023-04-26 22:10:22,905 - Epoch: [98][ 250/ 518] Overall Loss 3.025258 Objective Loss 3.025258 LR 0.000500 Time 0.207719 -2023-04-26 22:10:33,055 - Epoch: [98][ 300/ 518] Overall Loss 3.020610 Objective Loss 3.020610 LR 0.000500 Time 0.206927 -2023-04-26 22:10:43,279 - Epoch: [98][ 350/ 518] Overall Loss 3.011921 Objective Loss 3.011921 LR 0.000500 Time 0.206573 -2023-04-26 22:10:53,536 - Epoch: [98][ 400/ 518] Overall Loss 3.004684 Objective Loss 3.004684 LR 0.000500 Time 0.206389 -2023-04-26 22:11:03,858 - Epoch: [98][ 450/ 518] Overall Loss 3.003689 Objective Loss 3.003689 LR 0.000500 Time 0.206392 -2023-04-26 22:11:14,133 - Epoch: [98][ 500/ 518] Overall Loss 3.006689 Objective Loss 3.006689 LR 0.000500 Time 0.206299 -2023-04-26 22:11:17,667 - Epoch: [98][ 518/ 518] Overall Loss 3.009358 Objective Loss 3.009358 LR 0.000500 Time 0.205951 -2023-04-26 22:11:17,738 - --- validate (epoch=98)----------- -2023-04-26 
22:11:17,739 - 4952 samples (32 per mini-batch) -2023-04-26 22:11:24,465 - Epoch: [98][ 50/ 155] Loss 3.284740 mAP 0.403158 -2023-04-26 22:11:30,833 - Epoch: [98][ 100/ 155] Loss 3.259959 mAP 0.410114 -2023-04-26 22:11:37,209 - Epoch: [98][ 150/ 155] Loss 3.265185 mAP 0.405435 -2023-04-26 22:11:37,773 - Epoch: [98][ 155/ 155] Loss 3.269737 mAP 0.405368 -2023-04-26 22:11:37,840 - ==> mAP: 0.40537 Loss: 3.270 - -2023-04-26 22:11:37,843 - ==> Best [mAP: 0.430350 vloss: 3.276514 Sparsity:0.00 Params: 2177088 on epoch: 95] -2023-04-26 22:11:37,844 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:11:37,881 - - -2023-04-26 22:11:37,881 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:11:48,787 - Epoch: [99][ 50/ 518] Overall Loss 3.030478 Objective Loss 3.030478 LR 0.000500 Time 0.218076 -2023-04-26 22:11:59,001 - Epoch: [99][ 100/ 518] Overall Loss 3.029376 Objective Loss 3.029376 LR 0.000500 Time 0.211154 -2023-04-26 22:12:09,263 - Epoch: [99][ 150/ 518] Overall Loss 3.015442 Objective Loss 3.015442 LR 0.000500 Time 0.209171 -2023-04-26 22:12:19,493 - Epoch: [99][ 200/ 518] Overall Loss 3.016862 Objective Loss 3.016862 LR 0.000500 Time 0.208024 -2023-04-26 22:12:29,619 - Epoch: [99][ 250/ 518] Overall Loss 3.021740 Objective Loss 3.021740 LR 0.000500 Time 0.206916 -2023-04-26 22:12:39,885 - Epoch: [99][ 300/ 518] Overall Loss 3.015623 Objective Loss 3.015623 LR 0.000500 Time 0.206645 -2023-04-26 22:12:50,117 - Epoch: [99][ 350/ 518] Overall Loss 3.018100 Objective Loss 3.018100 LR 0.000500 Time 0.206353 -2023-04-26 22:13:00,312 - Epoch: [99][ 400/ 518] Overall Loss 3.013573 Objective Loss 3.013573 LR 0.000500 Time 0.206043 -2023-04-26 22:13:10,613 - Epoch: [99][ 450/ 518] Overall Loss 3.018059 Objective Loss 3.018059 LR 0.000500 Time 0.206036 -2023-04-26 22:13:20,792 - Epoch: [99][ 500/ 518] Overall Loss 3.013358 Objective Loss 3.013358 LR 0.000500 Time 0.205787 -2023-04-26 22:13:24,308 - Epoch: [99][ 518/ 518] Overall 
Loss 3.014085 Objective Loss 3.014085 LR 0.000500 Time 0.205423 -2023-04-26 22:13:24,380 - --- validate (epoch=99)----------- -2023-04-26 22:13:24,380 - 4952 samples (32 per mini-batch) -2023-04-26 22:13:31,110 - Epoch: [99][ 50/ 155] Loss 3.243137 mAP 0.425408 -2023-04-26 22:13:37,499 - Epoch: [99][ 100/ 155] Loss 3.259413 mAP 0.425161 -2023-04-26 22:13:43,878 - Epoch: [99][ 150/ 155] Loss 3.257038 mAP 0.425482 -2023-04-26 22:13:44,449 - Epoch: [99][ 155/ 155] Loss 3.250323 mAP 0.426379 -2023-04-26 22:13:44,517 - ==> mAP: 0.42638 Loss: 3.250 - -2023-04-26 22:13:44,521 - ==> Best [mAP: 0.430350 vloss: 3.276514 Sparsity:0.00 Params: 2177088 on epoch: 95] -2023-04-26 22:13:44,521 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:13:44,559 - - -2023-04-26 22:13:44,559 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:13:55,520 - Epoch: [100][ 50/ 518] Overall Loss 3.010117 Objective Loss 3.010117 LR 0.000125 Time 0.219152 -2023-04-26 22:14:05,717 - Epoch: [100][ 100/ 518] Overall Loss 3.006555 Objective Loss 3.006555 LR 0.000125 Time 0.211535 -2023-04-26 22:14:15,968 - Epoch: [100][ 150/ 518] Overall Loss 2.984581 Objective Loss 2.984581 LR 0.000125 Time 0.209351 -2023-04-26 22:14:26,225 - Epoch: [100][ 200/ 518] Overall Loss 2.977675 Objective Loss 2.977675 LR 0.000125 Time 0.208288 -2023-04-26 22:14:36,438 - Epoch: [100][ 250/ 518] Overall Loss 2.967617 Objective Loss 2.967617 LR 0.000125 Time 0.207479 -2023-04-26 22:14:46,722 - Epoch: [100][ 300/ 518] Overall Loss 2.969173 Objective Loss 2.969173 LR 0.000125 Time 0.207171 -2023-04-26 22:14:57,030 - Epoch: [100][ 350/ 518] Overall Loss 2.974907 Objective Loss 2.974907 LR 0.000125 Time 0.207023 -2023-04-26 22:15:07,323 - Epoch: [100][ 400/ 518] Overall Loss 2.965321 Objective Loss 2.965321 LR 0.000125 Time 0.206874 -2023-04-26 22:15:17,508 - Epoch: [100][ 450/ 518] Overall Loss 2.959190 Objective Loss 2.959190 LR 0.000125 Time 0.206517 -2023-04-26 22:15:27,751 - Epoch: 
[100][ 500/ 518] Overall Loss 2.959965 Objective Loss 2.959965 LR 0.000125 Time 0.206349 -2023-04-26 22:15:31,297 - Epoch: [100][ 518/ 518] Overall Loss 2.957971 Objective Loss 2.957971 LR 0.000125 Time 0.206022 -2023-04-26 22:15:31,368 - --- validate (epoch=100)----------- -2023-04-26 22:15:31,368 - 4952 samples (32 per mini-batch) -2023-04-26 22:15:38,146 - Epoch: [100][ 50/ 155] Loss 3.235764 mAP 0.456442 -2023-04-26 22:15:44,659 - Epoch: [100][ 100/ 155] Loss 3.214877 mAP 0.453457 -2023-04-26 22:15:51,057 - Epoch: [100][ 150/ 155] Loss 3.215314 mAP 0.441249 -2023-04-26 22:15:51,636 - Epoch: [100][ 155/ 155] Loss 3.217630 mAP 0.438558 -2023-04-26 22:15:51,710 - ==> mAP: 0.43856 Loss: 3.218 - -2023-04-26 22:15:51,714 - ==> Best [mAP: 0.438558 vloss: 3.217630 Sparsity:0.00 Params: 2177088 on epoch: 100] -2023-04-26 22:15:51,714 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:15:51,768 - - -2023-04-26 22:15:51,768 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:16:02,643 - Epoch: [101][ 50/ 518] Overall Loss 2.888214 Objective Loss 2.888214 LR 0.000125 Time 0.217453 -2023-04-26 22:16:12,799 - Epoch: [101][ 100/ 518] Overall Loss 2.917097 Objective Loss 2.917097 LR 0.000125 Time 0.210263 -2023-04-26 22:16:23,062 - Epoch: [101][ 150/ 518] Overall Loss 2.911463 Objective Loss 2.911463 LR 0.000125 Time 0.208588 -2023-04-26 22:16:33,252 - Epoch: [101][ 200/ 518] Overall Loss 2.924071 Objective Loss 2.924071 LR 0.000125 Time 0.207379 -2023-04-26 22:16:43,485 - Epoch: [101][ 250/ 518] Overall Loss 2.925205 Objective Loss 2.925205 LR 0.000125 Time 0.206830 -2023-04-26 22:16:53,623 - Epoch: [101][ 300/ 518] Overall Loss 2.931723 Objective Loss 2.931723 LR 0.000125 Time 0.206148 -2023-04-26 22:17:03,861 - Epoch: [101][ 350/ 518] Overall Loss 2.935371 Objective Loss 2.935371 LR 0.000125 Time 0.205945 -2023-04-26 22:17:14,038 - Epoch: [101][ 400/ 518] Overall Loss 2.947538 Objective Loss 2.947538 LR 0.000125 Time 0.205639 
-2023-04-26 22:17:24,150 - Epoch: [101][ 450/ 518] Overall Loss 2.943304 Objective Loss 2.943304 LR 0.000125 Time 0.205259 -2023-04-26 22:17:34,356 - Epoch: [101][ 500/ 518] Overall Loss 2.948479 Objective Loss 2.948479 LR 0.000125 Time 0.205142 -2023-04-26 22:17:37,868 - Epoch: [101][ 518/ 518] Overall Loss 2.955450 Objective Loss 2.955450 LR 0.000125 Time 0.204792 -2023-04-26 22:17:37,941 - --- validate (epoch=101)----------- -2023-04-26 22:17:37,941 - 4952 samples (32 per mini-batch) -2023-04-26 22:17:44,731 - Epoch: [101][ 50/ 155] Loss 3.211107 mAP 0.465825 -2023-04-26 22:17:51,162 - Epoch: [101][ 100/ 155] Loss 3.214548 mAP 0.445600 -2023-04-26 22:17:57,570 - Epoch: [101][ 150/ 155] Loss 3.207691 mAP 0.441602 -2023-04-26 22:17:58,146 - Epoch: [101][ 155/ 155] Loss 3.211414 mAP 0.441647 -2023-04-26 22:17:58,226 - ==> mAP: 0.44165 Loss: 3.211 - -2023-04-26 22:17:58,230 - ==> Best [mAP: 0.441647 vloss: 3.211414 Sparsity:0.00 Params: 2177088 on epoch: 101] -2023-04-26 22:17:58,230 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:17:58,281 - - -2023-04-26 22:17:58,282 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:18:09,182 - Epoch: [102][ 50/ 518] Overall Loss 2.958252 Objective Loss 2.958252 LR 0.000125 Time 0.217954 -2023-04-26 22:18:19,528 - Epoch: [102][ 100/ 518] Overall Loss 2.932383 Objective Loss 2.932383 LR 0.000125 Time 0.212418 -2023-04-26 22:18:29,718 - Epoch: [102][ 150/ 518] Overall Loss 2.928066 Objective Loss 2.928066 LR 0.000125 Time 0.209532 -2023-04-26 22:18:40,002 - Epoch: [102][ 200/ 518] Overall Loss 2.938462 Objective Loss 2.938462 LR 0.000125 Time 0.208563 -2023-04-26 22:18:50,304 - Epoch: [102][ 250/ 518] Overall Loss 2.956857 Objective Loss 2.956857 LR 0.000125 Time 0.208050 -2023-04-26 22:19:00,557 - Epoch: [102][ 300/ 518] Overall Loss 2.962911 Objective Loss 2.962911 LR 0.000125 Time 0.207546 -2023-04-26 22:19:10,793 - Epoch: [102][ 350/ 518] Overall Loss 2.956825 Objective Loss 
2.956825 LR 0.000125 Time 0.207140 -2023-04-26 22:19:21,117 - Epoch: [102][ 400/ 518] Overall Loss 2.955795 Objective Loss 2.955795 LR 0.000125 Time 0.207051 -2023-04-26 22:19:31,366 - Epoch: [102][ 450/ 518] Overall Loss 2.956739 Objective Loss 2.956739 LR 0.000125 Time 0.206819 -2023-04-26 22:19:41,495 - Epoch: [102][ 500/ 518] Overall Loss 2.955577 Objective Loss 2.955577 LR 0.000125 Time 0.206392 -2023-04-26 22:19:45,013 - Epoch: [102][ 518/ 518] Overall Loss 2.957342 Objective Loss 2.957342 LR 0.000125 Time 0.206009 -2023-04-26 22:19:45,084 - --- validate (epoch=102)----------- -2023-04-26 22:19:45,084 - 4952 samples (32 per mini-batch) -2023-04-26 22:19:51,851 - Epoch: [102][ 50/ 155] Loss 3.204228 mAP 0.444952 -2023-04-26 22:19:58,271 - Epoch: [102][ 100/ 155] Loss 3.203989 mAP 0.434340 -2023-04-26 22:20:04,678 - Epoch: [102][ 150/ 155] Loss 3.195629 mAP 0.426086 -2023-04-26 22:20:05,248 - Epoch: [102][ 155/ 155] Loss 3.197855 mAP 0.426476 -2023-04-26 22:20:05,317 - ==> mAP: 0.42648 Loss: 3.198 - -2023-04-26 22:20:05,322 - ==> Best [mAP: 0.441647 vloss: 3.211414 Sparsity:0.00 Params: 2177088 on epoch: 101] -2023-04-26 22:20:05,322 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 22:20:05,359 - - -2023-04-26 22:20:05,359 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 22:20:16,387 - Epoch: [103][ 50/ 518] Overall Loss 2.939318 Objective Loss 2.939318 LR 0.000125 Time 0.220508 -2023-04-26 22:20:26,564 - Epoch: [103][ 100/ 518] Overall Loss 2.924776 Objective Loss 2.924776 LR 0.000125 Time 0.212008 -2023-04-26 22:20:36,896 - Epoch: [103][ 150/ 518] Overall Loss 2.928625 Objective Loss 2.928625 LR 0.000125 Time 0.210206 -2023-04-26 22:20:47,213 - Epoch: [103][ 200/ 518] Overall Loss 2.927562 Objective Loss 2.927562 LR 0.000125 Time 0.209233 -2023-04-26 22:20:57,443 - Epoch: [103][ 250/ 518] Overall Loss 2.937682 Objective Loss 2.937682 LR 0.000125 Time 0.208300 -2023-04-26 22:21:07,734 - Epoch: [103][ 300/ 518] 
Overall Loss 2.942553 Objective Loss 2.942553 LR 0.000125 Time 0.207879
-2023-04-26 22:21:17,973 - Epoch: [103][ 350/ 518] Overall Loss 2.946335 Objective Loss 2.946335 LR 0.000125 Time 0.207433
-2023-04-26 22:21:28,193 - Epoch: [103][ 400/ 518] Overall Loss 2.940493 Objective Loss 2.940493 LR 0.000125 Time 0.207048
-2023-04-26 22:21:38,377 - Epoch: [103][ 450/ 518] Overall Loss 2.942142 Objective Loss 2.942142 LR 0.000125 Time 0.206670
-2023-04-26 22:21:48,658 - Epoch: [103][ 500/ 518] Overall Loss 2.941153 Objective Loss 2.941153 LR 0.000125 Time 0.206562
-2023-04-26 22:21:52,247 - Epoch: [103][ 518/ 518] Overall Loss 2.942286 Objective Loss 2.942286 LR 0.000125 Time 0.206312
-2023-04-26 22:21:52,320 - --- validate (epoch=103)-----------
-2023-04-26 22:21:52,320 - 4952 samples (32 per mini-batch)
-2023-04-26 22:21:59,094 - Epoch: [103][ 50/ 155] Loss 3.176561 mAP 0.438429
-2023-04-26 22:22:05,528 - Epoch: [103][ 100/ 155] Loss 3.213005 mAP 0.437919
-2023-04-26 22:22:11,899 - Epoch: [103][ 150/ 155] Loss 3.201451 mAP 0.435913
-2023-04-26 22:22:12,470 - Epoch: [103][ 155/ 155] Loss 3.206505 mAP 0.434697
-2023-04-26 22:22:12,539 - ==> mAP: 0.43470 Loss: 3.207
-
-2023-04-26 22:22:12,543 - ==> Best [mAP: 0.441647 vloss: 3.211414 Sparsity:0.00 Params: 2177088 on epoch: 101]
-2023-04-26 22:22:12,543 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:22:12,581 - 
-
-2023-04-26 22:22:12,581 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:22:23,650 - Epoch: [104][ 50/ 518] Overall Loss 3.037198 Objective Loss 3.037198 LR 0.000125 Time 0.221335
-2023-04-26 22:22:34,009 - Epoch: [104][ 100/ 518] Overall Loss 2.981746 Objective Loss 2.981746 LR 0.000125 Time 0.214237
-2023-04-26 22:22:44,217 - Epoch: [104][ 150/ 518] Overall Loss 2.962621 Objective Loss 2.962621 LR 0.000125 Time 0.210866
-2023-04-26 22:22:54,444 - Epoch: [104][ 200/ 518] Overall Loss 2.948980 Objective Loss 2.948980 LR 0.000125 Time 0.209276
-2023-04-26 22:23:04,619 - Epoch: [104][ 250/ 518] Overall Loss 2.937249 Objective Loss 2.937249 LR 0.000125 Time 0.208114
-2023-04-26 22:23:14,866 - Epoch: [104][ 300/ 518] Overall Loss 2.934969 Objective Loss 2.934969 LR 0.000125 Time 0.207581
-2023-04-26 22:23:25,050 - Epoch: [104][ 350/ 518] Overall Loss 2.930888 Objective Loss 2.930888 LR 0.000125 Time 0.207019
-2023-04-26 22:23:35,369 - Epoch: [104][ 400/ 518] Overall Loss 2.937261 Objective Loss 2.937261 LR 0.000125 Time 0.206934
-2023-04-26 22:23:45,641 - Epoch: [104][ 450/ 518] Overall Loss 2.935385 Objective Loss 2.935385 LR 0.000125 Time 0.206765
-2023-04-26 22:23:55,904 - Epoch: [104][ 500/ 518] Overall Loss 2.934194 Objective Loss 2.934194 LR 0.000125 Time 0.206611
-2023-04-26 22:23:59,491 - Epoch: [104][ 518/ 518] Overall Loss 2.934127 Objective Loss 2.934127 LR 0.000125 Time 0.206355
-2023-04-26 22:23:59,564 - --- validate (epoch=104)-----------
-2023-04-26 22:23:59,564 - 4952 samples (32 per mini-batch)
-2023-04-26 22:24:06,414 - Epoch: [104][ 50/ 155] Loss 3.187119 mAP 0.433820
-2023-04-26 22:24:12,758 - Epoch: [104][ 100/ 155] Loss 3.200280 mAP 0.428840
-2023-04-26 22:24:19,133 - Epoch: [104][ 150/ 155] Loss 3.204257 mAP 0.430522
-2023-04-26 22:24:19,705 - Epoch: [104][ 155/ 155] Loss 3.209947 mAP 0.427980
-2023-04-26 22:24:19,771 - ==> mAP: 0.42798 Loss: 3.210
-
-2023-04-26 22:24:19,775 - ==> Best [mAP: 0.441647 vloss: 3.211414 Sparsity:0.00 Params: 2177088 on epoch: 101]
-2023-04-26 22:24:19,775 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:24:19,833 - 
-
-2023-04-26 22:24:19,833 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:24:30,884 - Epoch: [105][ 50/ 518] Overall Loss 2.917454 Objective Loss 2.917454 LR 0.000125 Time 0.220950
-2023-04-26 22:24:41,087 - Epoch: [105][ 100/ 518] Overall Loss 2.934195 Objective Loss 2.934195 LR 0.000125 Time 0.212488
-2023-04-26 22:24:51,261 - Epoch: [105][ 150/ 518] Overall Loss 2.932761 Objective Loss 2.932761 LR 0.000125 Time 0.209473
-2023-04-26 22:25:01,487 - Epoch: [105][ 200/ 518] Overall Loss 2.944829 Objective Loss 2.944829 LR 0.000125 Time 0.208228
-2023-04-26 22:25:11,611 - Epoch: [105][ 250/ 518] Overall Loss 2.941132 Objective Loss 2.941132 LR 0.000125 Time 0.207073
-2023-04-26 22:25:21,840 - Epoch: [105][ 300/ 518] Overall Loss 2.946927 Objective Loss 2.946927 LR 0.000125 Time 0.206652
-2023-04-26 22:25:32,077 - Epoch: [105][ 350/ 518] Overall Loss 2.937404 Objective Loss 2.937404 LR 0.000125 Time 0.206373
-2023-04-26 22:25:42,232 - Epoch: [105][ 400/ 518] Overall Loss 2.934014 Objective Loss 2.934014 LR 0.000125 Time 0.205961
-2023-04-26 22:25:52,436 - Epoch: [105][ 450/ 518] Overall Loss 2.937216 Objective Loss 2.937216 LR 0.000125 Time 0.205748
-2023-04-26 22:26:02,692 - Epoch: [105][ 500/ 518] Overall Loss 2.936232 Objective Loss 2.936232 LR 0.000125 Time 0.205682
-2023-04-26 22:26:06,173 - Epoch: [105][ 518/ 518] Overall Loss 2.936125 Objective Loss 2.936125 LR 0.000125 Time 0.205253
-2023-04-26 22:26:06,243 - --- validate (epoch=105)-----------
-2023-04-26 22:26:06,243 - 4952 samples (32 per mini-batch)
-2023-04-26 22:26:13,046 - Epoch: [105][ 50/ 155] Loss 3.236850 mAP 0.425872
-2023-04-26 22:26:19,463 - Epoch: [105][ 100/ 155] Loss 3.198560 mAP 0.430105
-2023-04-26 22:26:25,889 - Epoch: [105][ 150/ 155] Loss 3.199312 mAP 0.432550
-2023-04-26 22:26:26,471 - Epoch: [105][ 155/ 155] Loss 3.199384 mAP 0.432444
-2023-04-26 22:26:26,539 - ==> mAP: 0.43244 Loss: 3.199
-
-2023-04-26 22:26:26,543 - ==> Best [mAP: 0.441647 vloss: 3.211414 Sparsity:0.00 Params: 2177088 on epoch: 101]
-2023-04-26 22:26:26,543 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:26:26,579 - 
-
-2023-04-26 22:26:26,579 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:26:37,507 - Epoch: [106][ 50/ 518] Overall Loss 2.923096 Objective Loss 2.923096 LR 0.000125 Time 0.218498
-2023-04-26 22:26:47,741 - Epoch: [106][ 100/ 518] Overall Loss 2.950824 Objective Loss 2.950824 LR 0.000125 Time 0.211575
-2023-04-26 22:26:57,951 - Epoch: [106][ 150/ 518] Overall Loss 2.932395 Objective Loss 2.932395 LR 0.000125 Time 0.209102
-2023-04-26 22:27:08,177 - Epoch: [106][ 200/ 518] Overall Loss 2.933541 Objective Loss 2.933541 LR 0.000125 Time 0.207951
-2023-04-26 22:27:18,468 - Epoch: [106][ 250/ 518] Overall Loss 2.927925 Objective Loss 2.927925 LR 0.000125 Time 0.207517
-2023-04-26 22:27:28,726 - Epoch: [106][ 300/ 518] Overall Loss 2.935308 Objective Loss 2.935308 LR 0.000125 Time 0.207119
-2023-04-26 22:27:38,945 - Epoch: [106][ 350/ 518] Overall Loss 2.936474 Objective Loss 2.936474 LR 0.000125 Time 0.206722
-2023-04-26 22:27:49,122 - Epoch: [106][ 400/ 518] Overall Loss 2.942451 Objective Loss 2.942451 LR 0.000125 Time 0.206322
-2023-04-26 22:27:59,442 - Epoch: [106][ 450/ 518] Overall Loss 2.939291 Objective Loss 2.939291 LR 0.000125 Time 0.206327
-2023-04-26 22:28:09,761 - Epoch: [106][ 500/ 518] Overall Loss 2.937535 Objective Loss 2.937535 LR 0.000125 Time 0.206328
-2023-04-26 22:28:13,320 - Epoch: [106][ 518/ 518] Overall Loss 2.939695 Objective Loss 2.939695 LR 0.000125 Time 0.206029
-2023-04-26 22:28:13,392 - --- validate (epoch=106)-----------
-2023-04-26 22:28:13,392 - 4952 samples (32 per mini-batch)
-2023-04-26 22:28:20,176 - Epoch: [106][ 50/ 155] Loss 3.170670 mAP 0.444965
-2023-04-26 22:28:26,601 - Epoch: [106][ 100/ 155] Loss 3.191686 mAP 0.443813
-2023-04-26 22:28:33,117 - Epoch: [106][ 150/ 155] Loss 3.210862 mAP 0.435500
-2023-04-26 22:28:33,679 - Epoch: [106][ 155/ 155] Loss 3.211373 mAP 0.435682
-2023-04-26 22:28:33,751 - ==> mAP: 0.43568 Loss: 3.211
-
-2023-04-26 22:28:33,755 - ==> Best [mAP: 0.441647 vloss: 3.211414 Sparsity:0.00 Params: 2177088 on epoch: 101]
-2023-04-26 22:28:33,755 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:28:33,792 - 
-
-2023-04-26 22:28:33,792 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:28:44,891 - Epoch: [107][ 50/ 518] Overall Loss 2.944396 Objective Loss 2.944396 LR 0.000125 Time 0.221930
-2023-04-26 22:28:55,127 - Epoch: [107][ 100/ 518] Overall Loss 2.949427 Objective Loss 2.949427 LR 0.000125 Time 0.213305
-2023-04-26 22:29:05,312 - Epoch: [107][ 150/ 518] Overall Loss 2.958699 Objective Loss 2.958699 LR 0.000125 Time 0.210092
-2023-04-26 22:29:15,529 - Epoch: [107][ 200/ 518] Overall Loss 2.955864 Objective Loss 2.955864 LR 0.000125 Time 0.208646
-2023-04-26 22:29:25,804 - Epoch: [107][ 250/ 518] Overall Loss 2.952245 Objective Loss 2.952245 LR 0.000125 Time 0.208010
-2023-04-26 22:29:36,083 - Epoch: [107][ 300/ 518] Overall Loss 2.941340 Objective Loss 2.941340 LR 0.000125 Time 0.207601
-2023-04-26 22:29:46,301 - Epoch: [107][ 350/ 518] Overall Loss 2.937743 Objective Loss 2.937743 LR 0.000125 Time 0.207131
-2023-04-26 22:29:56,573 - Epoch: [107][ 400/ 518] Overall Loss 2.944926 Objective Loss 2.944926 LR 0.000125 Time 0.206916
-2023-04-26 22:30:06,809 - Epoch: [107][ 450/ 518] Overall Loss 2.945342 Objective Loss 2.945342 LR 0.000125 Time 0.206668
-2023-04-26 22:30:17,111 - Epoch: [107][ 500/ 518] Overall Loss 2.948695 Objective Loss 2.948695 LR 0.000125 Time 0.206602
-2023-04-26 22:30:20,676 - Epoch: [107][ 518/ 518] Overall Loss 2.945399 Objective Loss 2.945399 LR 0.000125 Time 0.206304
-2023-04-26 22:30:20,748 - --- validate (epoch=107)-----------
-2023-04-26 22:30:20,748 - 4952 samples (32 per mini-batch)
-2023-04-26 22:30:27,486 - Epoch: [107][ 50/ 155] Loss 3.260833 mAP 0.426824
-2023-04-26 22:30:33,838 - Epoch: [107][ 100/ 155] Loss 3.209349 mAP 0.427305
-2023-04-26 22:30:40,282 - Epoch: [107][ 150/ 155] Loss 3.210357 mAP 0.429039
-2023-04-26 22:30:40,852 - Epoch: [107][ 155/ 155] Loss 3.204226 mAP 0.431480
-2023-04-26 22:30:40,923 - ==> mAP: 0.43148 Loss: 3.204
-
-2023-04-26 22:30:40,927 - ==> Best [mAP: 0.441647 vloss: 3.211414 Sparsity:0.00 Params: 2177088 on epoch: 101]
-2023-04-26 22:30:40,927 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:30:40,964 - 
-
-2023-04-26 22:30:40,965 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:30:52,001 - Epoch: [108][ 50/ 518] Overall Loss 2.932591 Objective Loss 2.932591 LR 0.000125 Time 0.220667
-2023-04-26 22:31:02,223 - Epoch: [108][ 100/ 518] Overall Loss 2.924586 Objective Loss 2.924586 LR 0.000125 Time 0.212538
-2023-04-26 22:31:12,553 - Epoch: [108][ 150/ 518] Overall Loss 2.929116 Objective Loss 2.929116 LR 0.000125 Time 0.210550
-2023-04-26 22:31:22,816 - Epoch: [108][ 200/ 518] Overall Loss 2.911084 Objective Loss 2.911084 LR 0.000125 Time 0.209221
-2023-04-26 22:31:33,109 - Epoch: [108][ 250/ 518] Overall Loss 2.923109 Objective Loss 2.923109 LR 0.000125 Time 0.208541
-2023-04-26 22:31:43,474 - Epoch: [108][ 300/ 518] Overall Loss 2.928231 Objective Loss 2.928231 LR 0.000125 Time 0.208326
-2023-04-26 22:31:53,724 - Epoch: [108][ 350/ 518] Overall Loss 2.933241 Objective Loss 2.933241 LR 0.000125 Time 0.207848
-2023-04-26 22:32:03,990 - Epoch: [108][ 400/ 518] Overall Loss 2.936750 Objective Loss 2.936750 LR 0.000125 Time 0.207526
-2023-04-26 22:32:14,154 - Epoch: [108][ 450/ 518] Overall Loss 2.929525 Objective Loss 2.929525 LR 0.000125 Time 0.207052
-2023-04-26 22:32:24,428 - Epoch: [108][ 500/ 518] Overall Loss 2.930096 Objective Loss 2.930096 LR 0.000125 Time 0.206892
-2023-04-26 22:32:27,961 - Epoch: [108][ 518/ 518] Overall Loss 2.932063 Objective Loss 2.932063 LR 0.000125 Time 0.206522
-2023-04-26 22:32:28,033 - --- validate (epoch=108)-----------
-2023-04-26 22:32:28,034 - 4952 samples (32 per mini-batch)
-2023-04-26 22:32:34,846 - Epoch: [108][ 50/ 155] Loss 3.199441 mAP 0.442386
-2023-04-26 22:32:41,267 - Epoch: [108][ 100/ 155] Loss 3.193845 mAP 0.448900
-2023-04-26 22:32:47,654 - Epoch: [108][ 150/ 155] Loss 3.195694 mAP 0.445026
-2023-04-26 22:32:48,260 - Epoch: [108][ 155/ 155] Loss 3.197665 mAP 0.443686
-2023-04-26 22:32:48,333 - ==> mAP: 0.44369 Loss: 3.198
-
-2023-04-26 22:32:48,337 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:32:48,337 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:32:48,389 - 
-
-2023-04-26 22:32:48,389 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:32:59,165 - Epoch: [109][ 50/ 518] Overall Loss 2.887553 Objective Loss 2.887553 LR 0.000125 Time 0.215457
-2023-04-26 22:33:09,462 - Epoch: [109][ 100/ 518] Overall Loss 2.910058 Objective Loss 2.910058 LR 0.000125 Time 0.210684
-2023-04-26 22:33:19,759 - Epoch: [109][ 150/ 518] Overall Loss 2.920642 Objective Loss 2.920642 LR 0.000125 Time 0.209087
-2023-04-26 22:33:29,879 - Epoch: [109][ 200/ 518] Overall Loss 2.935632 Objective Loss 2.935632 LR 0.000125 Time 0.207411
-2023-04-26 22:33:39,974 - Epoch: [109][ 250/ 518] Overall Loss 2.940065 Objective Loss 2.940065 LR 0.000125 Time 0.206301
-2023-04-26 22:33:50,213 - Epoch: [109][ 300/ 518] Overall Loss 2.934056 Objective Loss 2.934056 LR 0.000125 Time 0.206041
-2023-04-26 22:34:00,492 - Epoch: [109][ 350/ 518] Overall Loss 2.935914 Objective Loss 2.935914 LR 0.000125 Time 0.205972
-2023-04-26 22:34:10,697 - Epoch: [109][ 400/ 518] Overall Loss 2.939600 Objective Loss 2.939600 LR 0.000125 Time 0.205732
-2023-04-26 22:34:21,018 - Epoch: [109][ 450/ 518] Overall Loss 2.933231 Objective Loss 2.933231 LR 0.000125 Time 0.205806
-2023-04-26 22:34:31,140 - Epoch: [109][ 500/ 518] Overall Loss 2.934898 Objective Loss 2.934898 LR 0.000125 Time 0.205465
-2023-04-26 22:34:34,702 - Epoch: [109][ 518/ 518] Overall Loss 2.934411 Objective Loss 2.934411 LR 0.000125 Time 0.205201
-2023-04-26 22:34:34,774 - --- validate (epoch=109)-----------
-2023-04-26 22:34:34,774 - 4952 samples (32 per mini-batch)
-2023-04-26 22:34:41,507 - Epoch: [109][ 50/ 155] Loss 3.197840 mAP 0.440234
-2023-04-26 22:34:47,938 - Epoch: [109][ 100/ 155] Loss 3.206288 mAP 0.436109
-2023-04-26 22:34:54,292 - Epoch: [109][ 150/ 155] Loss 3.203765 mAP 0.437864
-2023-04-26 22:34:54,849 - Epoch: [109][ 155/ 155] Loss 3.199960 mAP 0.438550
-2023-04-26 22:34:54,920 - ==> mAP: 0.43855 Loss: 3.200
-
-2023-04-26 22:34:54,924 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:34:54,924 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:34:54,961 - 
-
-2023-04-26 22:34:54,961 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:35:05,940 - Epoch: [110][ 50/ 518] Overall Loss 2.900683 Objective Loss 2.900683 LR 0.000125 Time 0.219530
-2023-04-26 22:35:16,232 - Epoch: [110][ 100/ 518] Overall Loss 2.900330 Objective Loss 2.900330 LR 0.000125 Time 0.212671
-2023-04-26 22:35:26,535 - Epoch: [110][ 150/ 518] Overall Loss 2.915986 Objective Loss 2.915986 LR 0.000125 Time 0.210452
-2023-04-26 22:35:36,788 - Epoch: [110][ 200/ 518] Overall Loss 2.912271 Objective Loss 2.912271 LR 0.000125 Time 0.209095
-2023-04-26 22:35:47,087 - Epoch: [110][ 250/ 518] Overall Loss 2.924583 Objective Loss 2.924583 LR 0.000125 Time 0.208468
-2023-04-26 22:35:57,278 - Epoch: [110][ 300/ 518] Overall Loss 2.913065 Objective Loss 2.913065 LR 0.000125 Time 0.207688
-2023-04-26 22:36:07,642 - Epoch: [110][ 350/ 518] Overall Loss 2.920000 Objective Loss 2.920000 LR 0.000125 Time 0.207623
-2023-04-26 22:36:17,815 - Epoch: [110][ 400/ 518] Overall Loss 2.917571 Objective Loss 2.917571 LR 0.000125 Time 0.207099
-2023-04-26 22:36:27,993 - Epoch: [110][ 450/ 518] Overall Loss 2.914683 Objective Loss 2.914683 LR 0.000125 Time 0.206702
-2023-04-26 22:36:38,241 - Epoch: [110][ 500/ 518] Overall Loss 2.913195 Objective Loss 2.913195 LR 0.000125 Time 0.206524
-2023-04-26 22:36:41,833 - Epoch: [110][ 518/ 518] Overall Loss 2.915191 Objective Loss 2.915191 LR 0.000125 Time 0.206283
-2023-04-26 22:36:41,906 - --- validate (epoch=110)-----------
-2023-04-26 22:36:41,907 - 4952 samples (32 per mini-batch)
-2023-04-26 22:36:48,740 - Epoch: [110][ 50/ 155] Loss 3.192275 mAP 0.445541
-2023-04-26 22:36:55,212 - Epoch: [110][ 100/ 155] Loss 3.208073 mAP 0.439444
-2023-04-26 22:37:01,591 - Epoch: [110][ 150/ 155] Loss 3.198112 mAP 0.434879
-2023-04-26 22:37:02,166 - Epoch: [110][ 155/ 155] Loss 3.201234 mAP 0.432920
-2023-04-26 22:37:02,236 - ==> mAP: 0.43292 Loss: 3.201
-
-2023-04-26 22:37:02,239 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:37:02,239 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:37:02,278 - 
-
-2023-04-26 22:37:02,278 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:37:13,298 - Epoch: [111][ 50/ 518] Overall Loss 2.937232 Objective Loss 2.937232 LR 0.000125 Time 0.220345
-2023-04-26 22:37:23,562 - Epoch: [111][ 100/ 518] Overall Loss 2.949875 Objective Loss 2.949875 LR 0.000125 Time 0.212795
-2023-04-26 22:37:33,827 - Epoch: [111][ 150/ 518] Overall Loss 2.933331 Objective Loss 2.933331 LR 0.000125 Time 0.210285
-2023-04-26 22:37:44,099 - Epoch: [111][ 200/ 518] Overall Loss 2.930911 Objective Loss 2.930911 LR 0.000125 Time 0.209067
-2023-04-26 22:37:54,393 - Epoch: [111][ 250/ 518] Overall Loss 2.924427 Objective Loss 2.924427 LR 0.000125 Time 0.208424
-2023-04-26 22:38:04,581 - Epoch: [111][ 300/ 518] Overall Loss 2.932305 Objective Loss 2.932305 LR 0.000125 Time 0.207641
-2023-04-26 22:38:14,766 - Epoch: [111][ 350/ 518] Overall Loss 2.928144 Objective Loss 2.928144 LR 0.000125 Time 0.207071
-2023-04-26 22:38:24,996 - Epoch: [111][ 400/ 518] Overall Loss 2.921384 Objective Loss 2.921384 LR 0.000125 Time 0.206760
-2023-04-26 22:38:35,188 - Epoch: [111][ 450/ 518] Overall Loss 2.926482 Objective Loss 2.926482 LR 0.000125 Time 0.206432
-2023-04-26 22:38:45,426 - Epoch: [111][ 500/ 518] Overall Loss 2.922987 Objective Loss 2.922987 LR 0.000125 Time 0.206261
-2023-04-26 22:38:48,953 - Epoch: [111][ 518/ 518] Overall Loss 2.923567 Objective Loss 2.923567 LR 0.000125 Time 0.205902
-2023-04-26 22:38:49,026 - --- validate (epoch=111)-----------
-2023-04-26 22:38:49,026 - 4952 samples (32 per mini-batch)
-2023-04-26 22:38:55,793 - Epoch: [111][ 50/ 155] Loss 3.213983 mAP 0.443900
-2023-04-26 22:39:02,295 - Epoch: [111][ 100/ 155] Loss 3.208088 mAP 0.442705
-2023-04-26 22:39:08,679 - Epoch: [111][ 150/ 155] Loss 3.193134 mAP 0.441128
-2023-04-26 22:39:09,262 - Epoch: [111][ 155/ 155] Loss 3.192463 mAP 0.443365
-2023-04-26 22:39:09,331 - ==> mAP: 0.44336 Loss: 3.192
-
-2023-04-26 22:39:09,335 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:39:09,335 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:39:09,372 - 
-
-2023-04-26 22:39:09,372 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:39:20,169 - Epoch: [112][ 50/ 518] Overall Loss 2.869364 Objective Loss 2.869364 LR 0.000125 Time 0.215871
-2023-04-26 22:39:30,350 - Epoch: [112][ 100/ 518] Overall Loss 2.889222 Objective Loss 2.889222 LR 0.000125 Time 0.209731
-2023-04-26 22:39:40,547 - Epoch: [112][ 150/ 518] Overall Loss 2.902128 Objective Loss 2.902128 LR 0.000125 Time 0.207788
-2023-04-26 22:39:50,795 - Epoch: [112][ 200/ 518] Overall Loss 2.894141 Objective Loss 2.894141 LR 0.000125 Time 0.207072
-2023-04-26 22:40:01,012 - Epoch: [112][ 250/ 518] Overall Loss 2.898469 Objective Loss 2.898469 LR 0.000125 Time 0.206519
-2023-04-26 22:40:11,153 - Epoch: [112][ 300/ 518] Overall Loss 2.902567 Objective Loss 2.902567 LR 0.000125 Time 0.205897
-2023-04-26 22:40:21,282 - Epoch: [112][ 350/ 518] Overall Loss 2.908702 Objective Loss 2.908702 LR 0.000125 Time 0.205419
-2023-04-26 22:40:31,497 - Epoch: [112][ 400/ 518] Overall Loss 2.911730 Objective Loss 2.911730 LR 0.000125 Time 0.205274
-2023-04-26 22:40:41,756 - Epoch: [112][ 450/ 518] Overall Loss 2.918423 Objective Loss 2.918423 LR 0.000125 Time 0.205260
-2023-04-26 22:40:51,951 - Epoch: [112][ 500/ 518] Overall Loss 2.922526 Objective Loss 2.922526 LR 0.000125 Time 0.205122
-2023-04-26 22:40:55,524 - Epoch: [112][ 518/ 518] Overall Loss 2.926335 Objective Loss 2.926335 LR 0.000125 Time 0.204891
-2023-04-26 22:40:55,595 - --- validate (epoch=112)-----------
-2023-04-26 22:40:55,596 - 4952 samples (32 per mini-batch)
-2023-04-26 22:41:02,411 - Epoch: [112][ 50/ 155] Loss 3.176458 mAP 0.434286
-2023-04-26 22:41:08,859 - Epoch: [112][ 100/ 155] Loss 3.199114 mAP 0.433602
-2023-04-26 22:41:15,252 - Epoch: [112][ 150/ 155] Loss 3.190080 mAP 0.442437
-2023-04-26 22:41:15,822 - Epoch: [112][ 155/ 155] Loss 3.191066 mAP 0.442297
-2023-04-26 22:41:15,893 - ==> mAP: 0.44230 Loss: 3.191
-
-2023-04-26 22:41:15,897 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:41:15,897 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:41:15,935 - 
-
-2023-04-26 22:41:15,935 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:41:26,868 - Epoch: [113][ 50/ 518] Overall Loss 2.903749 Objective Loss 2.903749 LR 0.000125 Time 0.218602
-2023-04-26 22:41:37,075 - Epoch: [113][ 100/ 518] Overall Loss 2.880212 Objective Loss 2.880212 LR 0.000125 Time 0.211354
-2023-04-26 22:41:47,194 - Epoch: [113][ 150/ 518] Overall Loss 2.891344 Objective Loss 2.891344 LR 0.000125 Time 0.208352
-2023-04-26 22:41:57,350 - Epoch: [113][ 200/ 518] Overall Loss 2.894628 Objective Loss 2.894628 LR 0.000125 Time 0.207035
-2023-04-26 22:42:07,650 - Epoch: [113][ 250/ 518] Overall Loss 2.912572 Objective Loss 2.912572 LR 0.000125 Time 0.206822
-2023-04-26 22:42:17,842 - Epoch: [113][ 300/ 518] Overall Loss 2.919297 Objective Loss 2.919297 LR 0.000125 Time 0.206321
-2023-04-26 22:42:28,073 - Epoch: [113][ 350/ 518] Overall Loss 2.923986 Objective Loss 2.923986 LR 0.000125 Time 0.206074
-2023-04-26 22:42:38,328 - Epoch: [113][ 400/ 518] Overall Loss 2.925886 Objective Loss 2.925886 LR 0.000125 Time 0.205946
-2023-04-26 22:42:48,561 - Epoch: [113][ 450/ 518] Overall Loss 2.925356 Objective Loss 2.925356 LR 0.000125 Time 0.205800
-2023-04-26 22:42:58,703 - Epoch: [113][ 500/ 518] Overall Loss 2.921685 Objective Loss 2.921685 LR 0.000125 Time 0.205501
-2023-04-26 22:43:02,269 - Epoch: [113][ 518/ 518] Overall Loss 2.921095 Objective Loss 2.921095 LR 0.000125 Time 0.205244
-2023-04-26 22:43:02,342 - --- validate (epoch=113)-----------
-2023-04-26 22:43:02,343 - 4952 samples (32 per mini-batch)
-2023-04-26 22:43:09,224 - Epoch: [113][ 50/ 155] Loss 3.193860 mAP 0.413256
-2023-04-26 22:43:15,622 - Epoch: [113][ 100/ 155] Loss 3.182669 mAP 0.426424
-2023-04-26 22:43:22,047 - Epoch: [113][ 150/ 155] Loss 3.183294 mAP 0.429468
-2023-04-26 22:43:22,612 - Epoch: [113][ 155/ 155] Loss 3.186067 mAP 0.429833
-2023-04-26 22:43:22,684 - ==> mAP: 0.42983 Loss: 3.186
-
-2023-04-26 22:43:22,687 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:43:22,688 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:43:22,726 - 
-
-2023-04-26 22:43:22,726 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:43:33,608 - Epoch: [114][ 50/ 518] Overall Loss 2.836483 Objective Loss 2.836483 LR 0.000125 Time 0.217575
-2023-04-26 22:43:43,899 - Epoch: [114][ 100/ 518] Overall Loss 2.871926 Objective Loss 2.871926 LR 0.000125 Time 0.211686
-2023-04-26 22:43:54,122 - Epoch: [114][ 150/ 518] Overall Loss 2.899273 Objective Loss 2.899273 LR 0.000125 Time 0.209266
-2023-04-26 22:44:04,339 - Epoch: [114][ 200/ 518] Overall Loss 2.897280 Objective Loss 2.897280 LR 0.000125 Time 0.208025
-2023-04-26 22:44:14,539 - Epoch: [114][ 250/ 518] Overall Loss 2.903058 Objective Loss 2.903058 LR 0.000125 Time 0.207211
-2023-04-26 22:44:24,752 - Epoch: [114][ 300/ 518] Overall Loss 2.901948 Objective Loss 2.901948 LR 0.000125 Time 0.206715
-2023-04-26 22:44:35,019 - Epoch: [114][ 350/ 518] Overall Loss 2.896801 Objective Loss 2.896801 LR 0.000125 Time 0.206514
-2023-04-26 22:44:45,189 - Epoch: [114][ 400/ 518] Overall Loss 2.901325 Objective Loss 2.901325 LR 0.000125 Time 0.206120
-2023-04-26 22:44:55,430 - Epoch: [114][ 450/ 518] Overall Loss 2.904894 Objective Loss 2.904894 LR 0.000125 Time 0.205972
-2023-04-26 22:45:05,751 - Epoch: [114][ 500/ 518] Overall Loss 2.907291 Objective Loss 2.907291 LR 0.000125 Time 0.206014
-2023-04-26 22:45:09,284 - Epoch: [114][ 518/ 518] Overall Loss 2.907482 Objective Loss 2.907482 LR 0.000125 Time 0.205675
-2023-04-26 22:45:09,356 - --- validate (epoch=114)-----------
-2023-04-26 22:45:09,356 - 4952 samples (32 per mini-batch)
-2023-04-26 22:45:16,055 - Epoch: [114][ 50/ 155] Loss 3.199652 mAP 0.432874
-2023-04-26 22:45:22,514 - Epoch: [114][ 100/ 155] Loss 3.195378 mAP 0.431717
-2023-04-26 22:45:28,891 - Epoch: [114][ 150/ 155] Loss 3.188928 mAP 0.438720
-2023-04-26 22:45:29,458 - Epoch: [114][ 155/ 155] Loss 3.189205 mAP 0.438739
-2023-04-26 22:45:29,529 - ==> mAP: 0.43874 Loss: 3.189
-
-2023-04-26 22:45:29,533 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:45:29,533 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:45:29,570 - 
-
-2023-04-26 22:45:29,570 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:45:40,450 - Epoch: [115][ 50/ 518] Overall Loss 2.917484 Objective Loss 2.917484 LR 0.000125 Time 0.217531
-2023-04-26 22:45:50,690 - Epoch: [115][ 100/ 518] Overall Loss 2.921559 Objective Loss 2.921559 LR 0.000125 Time 0.211150
-2023-04-26 22:46:00,980 - Epoch: [115][ 150/ 518] Overall Loss 2.924151 Objective Loss 2.924151 LR 0.000125 Time 0.209357
-2023-04-26 22:46:11,158 - Epoch: [115][ 200/ 518] Overall Loss 2.916861 Objective Loss 2.916861 LR 0.000125 Time 0.207903
-2023-04-26 22:46:21,452 - Epoch: [115][ 250/ 518] Overall Loss 2.923786 Objective Loss 2.923786 LR 0.000125 Time 0.207488
-2023-04-26 22:46:31,694 - Epoch: [115][ 300/ 518] Overall Loss 2.907690 Objective Loss 2.907690 LR 0.000125 Time 0.207042
-2023-04-26 22:46:41,910 - Epoch: [115][ 350/ 518] Overall Loss 2.915019 Objective Loss 2.915019 LR 0.000125 Time 0.206650
-2023-04-26 22:46:52,154 - Epoch: [115][ 400/ 518] Overall Loss 2.913875 Objective Loss 2.913875 LR 0.000125 Time 0.206425
-2023-04-26 22:47:02,355 - Epoch: [115][ 450/ 518] Overall Loss 2.917450 Objective Loss 2.917450 LR 0.000125 Time 0.206153
-2023-04-26 22:47:12,620 - Epoch: [115][ 500/ 518] Overall Loss 2.918872 Objective Loss 2.918872 LR 0.000125 Time 0.206065
-2023-04-26 22:47:16,196 - Epoch: [115][ 518/ 518] Overall Loss 2.919340 Objective Loss 2.919340 LR 0.000125 Time 0.205806
-2023-04-26 22:47:16,268 - --- validate (epoch=115)-----------
-2023-04-26 22:47:16,268 - 4952 samples (32 per mini-batch)
-2023-04-26 22:47:23,050 - Epoch: [115][ 50/ 155] Loss 3.189922 mAP 0.424767
-2023-04-26 22:47:29,478 - Epoch: [115][ 100/ 155] Loss 3.196640 mAP 0.431650
-2023-04-26 22:47:35,896 - Epoch: [115][ 150/ 155] Loss 3.194007 mAP 0.432363
-2023-04-26 22:47:36,474 - Epoch: [115][ 155/ 155] Loss 3.194570 mAP 0.432961
-2023-04-26 22:47:36,545 - ==> mAP: 0.43296 Loss: 3.195
-
-2023-04-26 22:47:36,549 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:47:36,550 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:47:36,588 - 
-
-2023-04-26 22:47:36,588 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:47:47,582 - Epoch: [116][ 50/ 518] Overall Loss 2.941806 Objective Loss 2.941806 LR 0.000125 Time 0.219819
-2023-04-26 22:47:57,677 - Epoch: [116][ 100/ 518] Overall Loss 2.904355 Objective Loss 2.904355 LR 0.000125 Time 0.210846
-2023-04-26 22:48:07,964 - Epoch: [116][ 150/ 518] Overall Loss 2.890671 Objective Loss 2.890671 LR 0.000125 Time 0.209136
-2023-04-26 22:48:18,124 - Epoch: [116][ 200/ 518] Overall Loss 2.907977 Objective Loss 2.907977 LR 0.000125 Time 0.207643
-2023-04-26 22:48:28,336 - Epoch: [116][ 250/ 518] Overall Loss 2.925239 Objective Loss 2.925239 LR 0.000125 Time 0.206955
-2023-04-26 22:48:38,516 - Epoch: [116][ 300/ 518] Overall Loss 2.925776 Objective Loss 2.925776 LR 0.000125 Time 0.206390
-2023-04-26 22:48:48,768 - Epoch: [116][ 350/ 518] Overall Loss 2.923010 Objective Loss 2.923010 LR 0.000125 Time 0.206192
-2023-04-26 22:48:58,984 - Epoch: [116][ 400/ 518] Overall Loss 2.925184 Objective Loss 2.925184 LR 0.000125 Time 0.205954
-2023-04-26 22:49:09,199 - Epoch: [116][ 450/ 518] Overall Loss 2.917055 Objective Loss 2.917055 LR 0.000125 Time 0.205767
-2023-04-26 22:49:19,342 - Epoch: [116][ 500/ 518] Overall Loss 2.919737 Objective Loss 2.919737 LR 0.000125 Time 0.205472
-2023-04-26 22:49:22,903 - Epoch: [116][ 518/ 518] Overall Loss 2.920775 Objective Loss 2.920775 LR 0.000125 Time 0.205206
-2023-04-26 22:49:22,975 - --- validate (epoch=116)-----------
-2023-04-26 22:49:22,975 - 4952 samples (32 per mini-batch)
-2023-04-26 22:49:29,725 - Epoch: [116][ 50/ 155] Loss 3.211632 mAP 0.434540
-2023-04-26 22:49:36,114 - Epoch: [116][ 100/ 155] Loss 3.199567 mAP 0.432340
-2023-04-26 22:49:42,463 - Epoch: [116][ 150/ 155] Loss 3.200918 mAP 0.428761
-2023-04-26 22:49:43,038 - Epoch: [116][ 155/ 155] Loss 3.193116 mAP 0.429912
-2023-04-26 22:49:43,106 - ==> mAP: 0.42991 Loss: 3.193
-
-2023-04-26 22:49:43,109 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:49:43,110 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:49:43,147 - 
-
-2023-04-26 22:49:43,147 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:49:54,208 - Epoch: [117][ 50/ 518] Overall Loss 2.881834 Objective Loss 2.881834 LR 0.000125 Time 0.221161
-2023-04-26 22:50:04,483 - Epoch: [117][ 100/ 518] Overall Loss 2.886334 Objective Loss 2.886334 LR 0.000125 Time 0.213318
-2023-04-26 22:50:14,699 - Epoch: [117][ 150/ 518] Overall Loss 2.893793 Objective Loss 2.893793 LR 0.000125 Time 0.210306
-2023-04-26 22:50:24,980 - Epoch: [117][ 200/ 518] Overall Loss 2.906813 Objective Loss 2.906813 LR 0.000125 Time 0.209125
-2023-04-26 22:50:35,150 - Epoch: [117][ 250/ 518] Overall Loss 2.927121 Objective Loss 2.927121 LR 0.000125 Time 0.207974
-2023-04-26 22:50:45,313 - Epoch: [117][ 300/ 518] Overall Loss 2.924688 Objective Loss 2.924688 LR 0.000125 Time 0.207184
-2023-04-26 22:50:55,433 - Epoch: [117][ 350/ 518] Overall Loss 2.919182 Objective Loss 2.919182 LR 0.000125 Time 0.206496
-2023-04-26 22:51:05,622 - Epoch: [117][ 400/ 518] Overall Loss 2.916924 Objective Loss 2.916924 LR 0.000125 Time 0.206153
-2023-04-26 22:51:15,808 - Epoch: [117][ 450/ 518] Overall Loss 2.912598 Objective Loss 2.912598 LR 0.000125 Time 0.205877
-2023-04-26 22:51:25,880 - Epoch: [117][ 500/ 518] Overall Loss 2.916221 Objective Loss 2.916221 LR 0.000125 Time 0.205431
-2023-04-26 22:51:29,447 - Epoch: [117][ 518/ 518] Overall Loss 2.917868 Objective Loss 2.917868 LR 0.000125 Time 0.205178
-2023-04-26 22:51:29,519 - --- validate (epoch=117)-----------
-2023-04-26 22:51:29,520 - 4952 samples (32 per mini-batch)
-2023-04-26 22:51:36,238 - Epoch: [117][ 50/ 155] Loss 3.168992 mAP 0.438299
-2023-04-26 22:51:42,631 - Epoch: [117][ 100/ 155] Loss 3.159211 mAP 0.435356
-2023-04-26 22:51:48,999 - Epoch: [117][ 150/ 155] Loss 3.178312 mAP 0.437114
-2023-04-26 22:51:49,563 - Epoch: [117][ 155/ 155] Loss 3.178656 mAP 0.437934
-2023-04-26 22:51:49,635 - ==> mAP: 0.43793 Loss: 3.179
-
-2023-04-26 22:51:49,639 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:51:49,639 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:51:49,676 - 
-
-2023-04-26 22:51:49,676 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:52:00,544 - Epoch: [118][ 50/ 518] Overall Loss 2.919194 Objective Loss 2.919194 LR 0.000125 Time 0.217297
-2023-04-26 22:52:10,656 - Epoch: [118][ 100/ 518] Overall Loss 2.920198 Objective Loss 2.920198 LR 0.000125 Time 0.209754
-2023-04-26 22:52:20,713 - Epoch: [118][ 150/ 518] Overall Loss 2.910351 Objective Loss 2.910351 LR 0.000125 Time 0.206872
-2023-04-26 22:52:30,905 - Epoch: [118][ 200/ 518] Overall Loss 2.922631 Objective Loss 2.922631 LR 0.000125 Time 0.206104
-2023-04-26 22:52:40,985 - Epoch: [118][ 250/ 518] Overall Loss 2.917984 Objective Loss 2.917984 LR 0.000125 Time 0.205197
-2023-04-26 22:52:51,159 - Epoch: [118][ 300/ 518] Overall Loss 2.922881 Objective Loss 2.922881 LR 0.000125 Time 0.204907
-2023-04-26 22:53:01,342 - Epoch: [118][ 350/ 518] Overall Loss 2.926382 Objective Loss 2.926382 LR 0.000125 Time 0.204723
-2023-04-26 22:53:11,415 - Epoch: [118][ 400/ 518] Overall Loss 2.923335 Objective Loss 2.923335 LR 0.000125 Time 0.204312
-2023-04-26 22:53:21,530 - Epoch: [118][ 450/ 518] Overall Loss 2.918106 Objective Loss 2.918106 LR 0.000125 Time 0.204085
-2023-04-26 22:53:31,633 - Epoch: [118][ 500/ 518] Overall Loss 2.912914 Objective Loss 2.912914 LR 0.000125 Time 0.203879
-2023-04-26 22:53:35,158 - Epoch: [118][ 518/ 518] Overall Loss 2.914147 Objective Loss 2.914147 LR 0.000125 Time 0.203598
-2023-04-26 22:53:35,229 - --- validate (epoch=118)-----------
-2023-04-26 22:53:35,229 - 4952 samples (32 per mini-batch)
-2023-04-26 22:53:42,067 - Epoch: [118][ 50/ 155] Loss 3.181161 mAP 0.422856
-2023-04-26 22:53:48,432 - Epoch: [118][ 100/ 155] Loss 3.186637 mAP 0.432238
-2023-04-26 22:53:54,785 - Epoch: [118][ 150/ 155] Loss 3.186519 mAP 0.435314
-2023-04-26 22:53:55,366 - Epoch: [118][ 155/ 155] Loss 3.185973 mAP 0.435061
-2023-04-26 22:53:55,439 - ==> mAP: 0.43506 Loss: 3.186
-
-2023-04-26 22:53:55,442 - ==> Best [mAP: 0.443686 vloss: 3.197665 Sparsity:0.00 Params: 2177088 on epoch: 108]
-2023-04-26 22:53:55,442 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:53:55,479 - 
-
-2023-04-26 22:53:55,479 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:54:06,250 - Epoch: [119][ 50/ 518] Overall Loss 2.896351 Objective Loss 2.896351 LR 0.000125 Time 0.215358
-2023-04-26 22:54:16,437 - Epoch: [119][ 100/ 518] Overall Loss 2.896053 Objective Loss 2.896053 LR 0.000125 Time 0.209530
-2023-04-26 22:54:26,625 - Epoch: [119][ 150/ 518] Overall Loss 2.913593 Objective Loss 2.913593 LR 0.000125 Time 0.207598
-2023-04-26 22:54:36,712 - Epoch: [119][ 200/ 518] Overall Loss 2.896698 Objective Loss 2.896698 LR 0.000125 Time 0.206123
-2023-04-26 22:54:46,876 - Epoch: [119][ 250/ 518] Overall Loss 2.902839 Objective Loss 2.902839 LR 0.000125 Time 0.205548
-2023-04-26 22:54:57,099 - Epoch: [119][ 300/ 518] Overall Loss 2.906199 Objective Loss 2.906199 LR 0.000125 Time 0.205361
-2023-04-26 22:55:07,302 - Epoch: [119][ 350/ 518] Overall Loss 2.907656 Objective Loss 2.907656 LR 0.000125 Time 0.205171
-2023-04-26 22:55:17,486 - Epoch: [119][ 400/ 518] Overall Loss 2.910213 Objective Loss 2.910213 LR 0.000125 Time 0.204982
-2023-04-26 22:55:27,624 - Epoch: [119][ 450/ 518] Overall Loss 2.906425 Objective Loss 2.906425 LR 0.000125 Time 0.204731
-2023-04-26 22:55:37,787 - Epoch: [119][ 500/ 518] Overall Loss 2.907846 Objective Loss 2.907846 LR 0.000125 Time 0.204581
-2023-04-26 22:55:41,360 - Epoch: [119][ 518/ 518] Overall Loss 2.907140 Objective Loss 2.907140 LR 0.000125 Time 0.204369
-2023-04-26 22:55:41,431 - --- validate (epoch=119)-----------
-2023-04-26 22:55:41,431 - 4952 samples (32 per mini-batch)
-2023-04-26 22:55:48,170 - Epoch: [119][ 50/ 155] Loss 3.168309 mAP 0.430602
-2023-04-26 22:55:54,663 - Epoch: [119][ 100/ 155] Loss 3.184838 mAP 0.444549
-2023-04-26 22:56:01,043 - Epoch: [119][ 150/ 155] Loss 3.182031 mAP 0.444019
-2023-04-26 22:56:01,629 - Epoch: [119][ 155/ 155] Loss 3.185144 mAP 0.444519
-2023-04-26 22:56:01,706 - ==> mAP: 0.44452 Loss: 3.185
-
-2023-04-26 22:56:01,710 - ==> Best [mAP: 0.444519 vloss: 3.185144 Sparsity:0.00 Params: 2177088 on epoch: 119]
-2023-04-26 22:56:01,710 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:56:01,762 - 
-
-2023-04-26 22:56:01,762 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:56:12,543 - Epoch: [120][ 50/ 518] Overall Loss 2.943367 Objective Loss 2.943367 LR 0.000125 Time 0.215576
-2023-04-26 22:56:22,684 - Epoch: [120][ 100/ 518] Overall Loss 2.926299 Objective Loss 2.926299 LR 0.000125 Time 0.209182
-2023-04-26 22:56:32,799 - Epoch: [120][ 150/ 518] Overall Loss 2.933093 Objective Loss 2.933093 LR 0.000125 Time 0.206876
-2023-04-26 22:56:43,009 - Epoch: [120][ 200/ 518] Overall Loss 2.938256 Objective Loss 2.938256 LR 0.000125 Time 0.206197
-2023-04-26 22:56:53,229 - Epoch: [120][ 250/ 518] Overall Loss 2.932413 Objective Loss 2.932413 LR 0.000125 Time 0.205830
-2023-04-26 22:57:03,395 - Epoch: [120][ 300/ 518] Overall Loss 2.926886 Objective Loss 2.926886 LR 0.000125 Time 0.205407
-2023-04-26 22:57:13,585 - Epoch: [120][ 350/ 518] Overall Loss 2.923822 Objective Loss 2.923822 LR 0.000125 Time 0.205173
-2023-04-26 22:57:23,704 - Epoch: [120][ 400/ 518] Overall Loss 2.917111 Objective Loss 2.917111 LR 0.000125 Time 0.204820
-2023-04-26 22:57:33,876 - Epoch: [120][ 450/ 518] Overall Loss 2.911975 Objective Loss 2.911975 LR 0.000125 Time 0.204663
-2023-04-26 22:57:43,992 - Epoch: [120][ 500/ 518] Overall Loss 2.908717 Objective Loss 2.908717 LR 0.000125 Time 0.204426
-2023-04-26 22:57:47,556 - Epoch: [120][ 518/ 518] Overall Loss 2.909626 Objective Loss 2.909626 LR 0.000125 Time 0.204200
-2023-04-26 22:57:47,627 - --- validate (epoch=120)-----------
-2023-04-26 22:57:47,627 - 4952 samples (32 per mini-batch)
-2023-04-26 22:57:54,411 - Epoch: [120][ 50/ 155] Loss 3.197746 mAP 0.442143
-2023-04-26 22:58:00,810 - Epoch: [120][ 100/ 155] Loss 3.182623 mAP 0.440350
-2023-04-26 22:58:07,189 - Epoch: [120][ 150/ 155] Loss 3.175535 mAP 0.446550
-2023-04-26 22:58:07,762 - Epoch: [120][ 155/ 155] Loss 3.177059 mAP 0.444792
-2023-04-26 22:58:07,825 - ==> mAP: 0.44479 Loss: 3.177
-
-2023-04-26 22:58:07,829 - ==> Best [mAP: 0.444792 vloss: 3.177059 Sparsity:0.00 Params: 2177088 on epoch: 120]
-2023-04-26 22:58:07,829 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 22:58:07,883 - 
-
-2023-04-26 22:58:07,883 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 22:58:18,752 - Epoch: [121][ 50/ 518] Overall Loss 2.967805 Objective Loss 2.967805 LR 0.000125 Time 0.217315
-2023-04-26 22:58:28,897 - Epoch: [121][ 100/ 518] Overall Loss 2.929491 Objective Loss 2.929491 LR 0.000125 Time 0.210098
-2023-04-26 22:58:39,043 - Epoch: [121][ 150/ 518] Overall Loss 2.927062 Objective Loss 2.927062 LR 0.000125 Time 0.207690
-2023-04-26 22:58:49,140 - Epoch: [121][ 200/ 518] Overall Loss 2.926414 Objective Loss 2.926414 LR 0.000125 Time 0.206244
-2023-04-26 22:58:59,236 - Epoch: [121][ 250/ 518] Overall Loss 2.926153 Objective Loss 2.926153 LR 0.000125 Time 0.205375
-2023-04-26 22:59:09,409 - Epoch: [121][ 300/ 518] Overall Loss 2.930051 Objective Loss 2.930051 LR 0.000125 Time 0.205051
-2023-04-26 22:59:19,507 - Epoch: [121][ 350/ 518] Overall Loss 2.923620 Objective Loss 2.923620 LR 0.000125 Time 0.204604
-2023-04-26 22:59:29,636 - Epoch: [121][ 400/ 518] Overall Loss 2.920346 Objective Loss 2.920346 LR 0.000125 Time 0.204346
-2023-04-26 22:59:39,748 - Epoch: [121][ 450/ 518] Overall Loss 2.921177 Objective Loss 2.921177 LR 0.000125 Time 0.204108
-2023-04-26 22:59:49,864 - Epoch: [121][ 500/ 518] Overall Loss 2.918115 Objective Loss 2.918115 LR 0.000125 Time 0.203927
-2023-04-26 22:59:53,476 - Epoch: [121][ 518/ 518] Overall Loss 2.916028 Objective Loss 2.916028 LR 0.000125 Time 0.203811
-2023-04-26 22:59:53,548 - --- validate (epoch=121)-----------
-2023-04-26 22:59:53,549 - 4952 samples (32 per mini-batch)
-2023-04-26 23:00:00,245 - Epoch: [121][ 50/ 155] Loss 3.251105 mAP 0.427340
-2023-04-26 23:00:06,618 - Epoch: [121][ 100/ 155] Loss 3.211445 mAP 0.441664
-2023-04-26 23:00:12,945 - Epoch: [121][ 150/ 155] Loss 3.198006 mAP 0.443231
-2023-04-26 23:00:13,510 - Epoch: [121][ 155/ 155] Loss 3.199776 mAP 0.443764
-2023-04-26 23:00:13,580 - ==> mAP: 0.44376 Loss: 3.200
-
-2023-04-26 23:00:13,583 - ==> Best [mAP: 0.444792 vloss: 3.177059 Sparsity:0.00 Params: 2177088 on epoch: 120]
-2023-04-26 23:00:13,583 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-26 23:00:13,621 - 
-
-2023-04-26 23:00:13,621 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-26 23:00:24,465 - Epoch: [122][ 50/ 518] Overall Loss 2.853038 Objective Loss 2.853038 LR 0.000125 Time 0.216814
-2023-04-26 23:00:34,612 - Epoch: [122][ 100/ 518] Overall Loss 2.841857 Objective Loss 2.841857 LR 0.000125 Time 0.209867
-2023-04-26 23:00:44,736 - Epoch: [122][ 150/ 518] Overall Loss 2.865407 Objective Loss 2.865407 LR 0.000125 Time 0.207389
-2023-04-26 23:00:54,865 - Epoch: [122][ 200/ 518] Overall Loss 2.878950 Objective Loss 2.878950 LR 0.000125 Time 0.206179
-2023-04-26 23:01:05,068 - Epoch: [122][ 250/ 518] Overall Loss 2.891779 Objective Loss 2.891779 LR 0.000125 Time 0.205751
-2023-04-26 23:01:15,261 - Epoch: [122][ 300/ 518] Overall Loss 2.898091 Objective Loss 2.898091 LR 0.000125 Time 0.205430
-2023-04-26 23:01:25,576 - Epoch: [122][ 350/ 518] Overall Loss 2.904451 Objective Loss 2.904451 LR 0.000125 Time 0.205549
-2023-04-26 23:01:35,735 - Epoch: [122][ 400/ 518] Overall Loss 2.899193 Objective Loss 2.899193 LR 0.000125 Time 0.205249
-2023-04-26 23:01:45,945 - Epoch: [122][ 450/ 518] Overall Loss 2.898732 Objective Loss 2.898732 LR 0.000125 Time 0.205129
-2023-04-26 23:01:56,091 - Epoch: [122][ 500/ 518] Overall Loss 2.901927 Objective Loss 2.901927 LR 0.000125 Time 0.204904
-2023-04-26 23:01:59,638 - Epoch: [122][ 518/ 518] Overall Loss 2.906528 Objective Loss 2.906528 LR 0.000125 Time 0.204631
-2023-04-26 23:01:59,710 - --- validate (epoch=122)-----------
-2023-04-26 23:01:59,711 - 4952 samples (32 per mini-batch)
-2023-04-26 23:02:06,541 - Epoch: [122][ 50/ 155] Loss 3.195539 mAP 0.455466
-2023-04-26 23:02:12,990 - Epoch: [122][ 100/ 155] Loss 3.202402 mAP 0.446390
-2023-04-26
23:02:19,430 - Epoch: [122][ 150/ 155] Loss 3.193195 mAP 0.445236 -2023-04-26 23:02:19,991 - Epoch: [122][ 155/ 155] Loss 3.191894 mAP 0.442439 -2023-04-26 23:02:20,060 - ==> mAP: 0.44244 Loss: 3.192 - -2023-04-26 23:02:20,064 - ==> Best [mAP: 0.444792 vloss: 3.177059 Sparsity:0.00 Params: 2177088 on epoch: 120] -2023-04-26 23:02:20,064 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:02:20,103 - - -2023-04-26 23:02:20,103 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:02:30,950 - Epoch: [123][ 50/ 518] Overall Loss 2.908392 Objective Loss 2.908392 LR 0.000125 Time 0.216881 -2023-04-26 23:02:41,100 - Epoch: [123][ 100/ 518] Overall Loss 2.915244 Objective Loss 2.915244 LR 0.000125 Time 0.209919 -2023-04-26 23:02:51,226 - Epoch: [123][ 150/ 518] Overall Loss 2.915848 Objective Loss 2.915848 LR 0.000125 Time 0.207444 -2023-04-26 23:03:01,401 - Epoch: [123][ 200/ 518] Overall Loss 2.913296 Objective Loss 2.913296 LR 0.000125 Time 0.206450 -2023-04-26 23:03:11,577 - Epoch: [123][ 250/ 518] Overall Loss 2.920180 Objective Loss 2.920180 LR 0.000125 Time 0.205856 -2023-04-26 23:03:21,807 - Epoch: [123][ 300/ 518] Overall Loss 2.926891 Objective Loss 2.926891 LR 0.000125 Time 0.205642 -2023-04-26 23:03:31,996 - Epoch: [123][ 350/ 518] Overall Loss 2.922864 Objective Loss 2.922864 LR 0.000125 Time 0.205372 -2023-04-26 23:03:42,201 - Epoch: [123][ 400/ 518] Overall Loss 2.921913 Objective Loss 2.921913 LR 0.000125 Time 0.205208 -2023-04-26 23:03:52,340 - Epoch: [123][ 450/ 518] Overall Loss 2.919783 Objective Loss 2.919783 LR 0.000125 Time 0.204936 -2023-04-26 23:04:02,522 - Epoch: [123][ 500/ 518] Overall Loss 2.912766 Objective Loss 2.912766 LR 0.000125 Time 0.204803 -2023-04-26 23:04:06,053 - Epoch: [123][ 518/ 518] Overall Loss 2.908665 Objective Loss 2.908665 LR 0.000125 Time 0.204501 -2023-04-26 23:04:06,124 - --- validate (epoch=123)----------- -2023-04-26 23:04:06,124 - 4952 samples (32 per mini-batch) -2023-04-26 
23:04:12,812 - Epoch: [123][ 50/ 155] Loss 3.213200 mAP 0.406803 -2023-04-26 23:04:19,155 - Epoch: [123][ 100/ 155] Loss 3.175614 mAP 0.436899 -2023-04-26 23:04:25,545 - Epoch: [123][ 150/ 155] Loss 3.183179 mAP 0.439113 -2023-04-26 23:04:26,111 - Epoch: [123][ 155/ 155] Loss 3.179718 mAP 0.438296 -2023-04-26 23:04:26,193 - ==> mAP: 0.43830 Loss: 3.180 - -2023-04-26 23:04:26,197 - ==> Best [mAP: 0.444792 vloss: 3.177059 Sparsity:0.00 Params: 2177088 on epoch: 120] -2023-04-26 23:04:26,197 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:04:26,232 - - -2023-04-26 23:04:26,233 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:04:37,100 - Epoch: [124][ 50/ 518] Overall Loss 2.897357 Objective Loss 2.897357 LR 0.000125 Time 0.217294 -2023-04-26 23:04:47,161 - Epoch: [124][ 100/ 518] Overall Loss 2.927078 Objective Loss 2.927078 LR 0.000125 Time 0.209241 -2023-04-26 23:04:57,364 - Epoch: [124][ 150/ 518] Overall Loss 2.938392 Objective Loss 2.938392 LR 0.000125 Time 0.207502 -2023-04-26 23:05:07,558 - Epoch: [124][ 200/ 518] Overall Loss 2.926473 Objective Loss 2.926473 LR 0.000125 Time 0.206586 -2023-04-26 23:05:17,775 - Epoch: [124][ 250/ 518] Overall Loss 2.923227 Objective Loss 2.923227 LR 0.000125 Time 0.206130 -2023-04-26 23:05:27,959 - Epoch: [124][ 300/ 518] Overall Loss 2.918449 Objective Loss 2.918449 LR 0.000125 Time 0.205716 -2023-04-26 23:05:38,110 - Epoch: [124][ 350/ 518] Overall Loss 2.907833 Objective Loss 2.907833 LR 0.000125 Time 0.205328 -2023-04-26 23:05:48,277 - Epoch: [124][ 400/ 518] Overall Loss 2.909160 Objective Loss 2.909160 LR 0.000125 Time 0.205074 -2023-04-26 23:05:58,383 - Epoch: [124][ 450/ 518] Overall Loss 2.904970 Objective Loss 2.904970 LR 0.000125 Time 0.204744 -2023-04-26 23:06:08,568 - Epoch: [124][ 500/ 518] Overall Loss 2.901965 Objective Loss 2.901965 LR 0.000125 Time 0.204636 -2023-04-26 23:06:12,133 - Epoch: [124][ 518/ 518] Overall Loss 2.902696 Objective Loss 2.902696 LR 
0.000125 Time 0.204406 -2023-04-26 23:06:12,205 - --- validate (epoch=124)----------- -2023-04-26 23:06:12,205 - 4952 samples (32 per mini-batch) -2023-04-26 23:06:18,925 - Epoch: [124][ 50/ 155] Loss 3.200543 mAP 0.433967 -2023-04-26 23:06:25,339 - Epoch: [124][ 100/ 155] Loss 3.211464 mAP 0.438403 -2023-04-26 23:06:31,691 - Epoch: [124][ 150/ 155] Loss 3.193217 mAP 0.438088 -2023-04-26 23:06:32,257 - Epoch: [124][ 155/ 155] Loss 3.191534 mAP 0.435759 -2023-04-26 23:06:32,329 - ==> mAP: 0.43576 Loss: 3.192 - -2023-04-26 23:06:32,333 - ==> Best [mAP: 0.444792 vloss: 3.177059 Sparsity:0.00 Params: 2177088 on epoch: 120] -2023-04-26 23:06:32,333 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:06:32,371 - - -2023-04-26 23:06:32,371 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:06:43,400 - Epoch: [125][ 50/ 518] Overall Loss 2.947678 Objective Loss 2.947678 LR 0.000125 Time 0.220528 -2023-04-26 23:06:53,497 - Epoch: [125][ 100/ 518] Overall Loss 2.890054 Objective Loss 2.890054 LR 0.000125 Time 0.211220 -2023-04-26 23:07:03,687 - Epoch: [125][ 150/ 518] Overall Loss 2.907004 Objective Loss 2.907004 LR 0.000125 Time 0.208733 -2023-04-26 23:07:13,971 - Epoch: [125][ 200/ 518] Overall Loss 2.914566 Objective Loss 2.914566 LR 0.000125 Time 0.207959 -2023-04-26 23:07:24,205 - Epoch: [125][ 250/ 518] Overall Loss 2.907095 Objective Loss 2.907095 LR 0.000125 Time 0.207296 -2023-04-26 23:07:34,359 - Epoch: [125][ 300/ 518] Overall Loss 2.906013 Objective Loss 2.906013 LR 0.000125 Time 0.206588 -2023-04-26 23:07:44,557 - Epoch: [125][ 350/ 518] Overall Loss 2.904027 Objective Loss 2.904027 LR 0.000125 Time 0.206210 -2023-04-26 23:07:54,742 - Epoch: [125][ 400/ 518] Overall Loss 2.904905 Objective Loss 2.904905 LR 0.000125 Time 0.205891 -2023-04-26 23:08:04,861 - Epoch: [125][ 450/ 518] Overall Loss 2.909675 Objective Loss 2.909675 LR 0.000125 Time 0.205498 -2023-04-26 23:08:15,024 - Epoch: [125][ 500/ 518] Overall Loss 
2.907513 Objective Loss 2.907513 LR 0.000125 Time 0.205271 -2023-04-26 23:08:18,516 - Epoch: [125][ 518/ 518] Overall Loss 2.907920 Objective Loss 2.907920 LR 0.000125 Time 0.204877 -2023-04-26 23:08:18,587 - --- validate (epoch=125)----------- -2023-04-26 23:08:18,587 - 4952 samples (32 per mini-batch) -2023-04-26 23:08:25,299 - Epoch: [125][ 50/ 155] Loss 3.149050 mAP 0.450414 -2023-04-26 23:08:31,677 - Epoch: [125][ 100/ 155] Loss 3.160620 mAP 0.446773 -2023-04-26 23:08:37,967 - Epoch: [125][ 150/ 155] Loss 3.187466 mAP 0.445971 -2023-04-26 23:08:38,534 - Epoch: [125][ 155/ 155] Loss 3.188677 mAP 0.446227 -2023-04-26 23:08:38,599 - ==> mAP: 0.44623 Loss: 3.189 - -2023-04-26 23:08:38,602 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:08:38,603 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:08:38,655 - - -2023-04-26 23:08:38,656 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:08:49,768 - Epoch: [126][ 50/ 518] Overall Loss 2.917473 Objective Loss 2.917473 LR 0.000125 Time 0.222198 -2023-04-26 23:08:59,988 - Epoch: [126][ 100/ 518] Overall Loss 2.908843 Objective Loss 2.908843 LR 0.000125 Time 0.213279 -2023-04-26 23:09:10,158 - Epoch: [126][ 150/ 518] Overall Loss 2.913200 Objective Loss 2.913200 LR 0.000125 Time 0.209976 -2023-04-26 23:09:20,356 - Epoch: [126][ 200/ 518] Overall Loss 2.905576 Objective Loss 2.905576 LR 0.000125 Time 0.208465 -2023-04-26 23:09:30,468 - Epoch: [126][ 250/ 518] Overall Loss 2.901701 Objective Loss 2.901701 LR 0.000125 Time 0.207212 -2023-04-26 23:09:40,732 - Epoch: [126][ 300/ 518] Overall Loss 2.898020 Objective Loss 2.898020 LR 0.000125 Time 0.206884 -2023-04-26 23:09:50,941 - Epoch: [126][ 350/ 518] Overall Loss 2.910145 Objective Loss 2.910145 LR 0.000125 Time 0.206494 -2023-04-26 23:10:01,062 - Epoch: [126][ 400/ 518] Overall Loss 2.903188 Objective Loss 2.903188 LR 0.000125 Time 0.205981 -2023-04-26 23:10:11,196 - Epoch: 
[126][ 450/ 518] Overall Loss 2.908347 Objective Loss 2.908347 LR 0.000125 Time 0.205610 -2023-04-26 23:10:21,374 - Epoch: [126][ 500/ 518] Overall Loss 2.905211 Objective Loss 2.905211 LR 0.000125 Time 0.205400 -2023-04-26 23:10:24,904 - Epoch: [126][ 518/ 518] Overall Loss 2.906404 Objective Loss 2.906404 LR 0.000125 Time 0.205077 -2023-04-26 23:10:24,975 - --- validate (epoch=126)----------- -2023-04-26 23:10:24,976 - 4952 samples (32 per mini-batch) -2023-04-26 23:10:31,692 - Epoch: [126][ 50/ 155] Loss 3.192915 mAP 0.442077 -2023-04-26 23:10:38,072 - Epoch: [126][ 100/ 155] Loss 3.174701 mAP 0.446622 -2023-04-26 23:10:44,431 - Epoch: [126][ 150/ 155] Loss 3.187233 mAP 0.440059 -2023-04-26 23:10:45,014 - Epoch: [126][ 155/ 155] Loss 3.188544 mAP 0.439979 -2023-04-26 23:10:45,100 - ==> mAP: 0.43998 Loss: 3.189 - -2023-04-26 23:10:45,103 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:10:45,103 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:10:45,141 - - -2023-04-26 23:10:45,141 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:10:55,944 - Epoch: [127][ 50/ 518] Overall Loss 2.882536 Objective Loss 2.882536 LR 0.000125 Time 0.216003 -2023-04-26 23:11:06,114 - Epoch: [127][ 100/ 518] Overall Loss 2.889875 Objective Loss 2.889875 LR 0.000125 Time 0.209684 -2023-04-26 23:11:16,267 - Epoch: [127][ 150/ 518] Overall Loss 2.869519 Objective Loss 2.869519 LR 0.000125 Time 0.207467 -2023-04-26 23:11:26,421 - Epoch: [127][ 200/ 518] Overall Loss 2.875803 Objective Loss 2.875803 LR 0.000125 Time 0.206361 -2023-04-26 23:11:36,461 - Epoch: [127][ 250/ 518] Overall Loss 2.886981 Objective Loss 2.886981 LR 0.000125 Time 0.205242 -2023-04-26 23:11:46,607 - Epoch: [127][ 300/ 518] Overall Loss 2.891038 Objective Loss 2.891038 LR 0.000125 Time 0.204850 -2023-04-26 23:11:56,779 - Epoch: [127][ 350/ 518] Overall Loss 2.887155 Objective Loss 2.887155 LR 0.000125 Time 0.204645 
-2023-04-26 23:12:06,895 - Epoch: [127][ 400/ 518] Overall Loss 2.894924 Objective Loss 2.894924 LR 0.000125 Time 0.204349 -2023-04-26 23:12:16,969 - Epoch: [127][ 450/ 518] Overall Loss 2.894949 Objective Loss 2.894949 LR 0.000125 Time 0.204026 -2023-04-26 23:12:27,100 - Epoch: [127][ 500/ 518] Overall Loss 2.891978 Objective Loss 2.891978 LR 0.000125 Time 0.203883 -2023-04-26 23:12:30,601 - Epoch: [127][ 518/ 518] Overall Loss 2.892776 Objective Loss 2.892776 LR 0.000125 Time 0.203557 -2023-04-26 23:12:30,675 - --- validate (epoch=127)----------- -2023-04-26 23:12:30,675 - 4952 samples (32 per mini-batch) -2023-04-26 23:12:37,387 - Epoch: [127][ 50/ 155] Loss 3.223478 mAP 0.449058 -2023-04-26 23:12:43,743 - Epoch: [127][ 100/ 155] Loss 3.191005 mAP 0.439838 -2023-04-26 23:12:50,111 - Epoch: [127][ 150/ 155] Loss 3.187441 mAP 0.443035 -2023-04-26 23:12:50,680 - Epoch: [127][ 155/ 155] Loss 3.190509 mAP 0.441940 -2023-04-26 23:12:50,739 - ==> mAP: 0.44194 Loss: 3.191 - -2023-04-26 23:12:50,743 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:12:50,743 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:12:50,780 - - -2023-04-26 23:12:50,780 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:13:01,669 - Epoch: [128][ 50/ 518] Overall Loss 2.948812 Objective Loss 2.948812 LR 0.000125 Time 0.217722 -2023-04-26 23:13:11,781 - Epoch: [128][ 100/ 518] Overall Loss 2.932231 Objective Loss 2.932231 LR 0.000125 Time 0.209961 -2023-04-26 23:13:21,974 - Epoch: [128][ 150/ 518] Overall Loss 2.931930 Objective Loss 2.931930 LR 0.000125 Time 0.207917 -2023-04-26 23:13:32,098 - Epoch: [128][ 200/ 518] Overall Loss 2.929752 Objective Loss 2.929752 LR 0.000125 Time 0.206548 -2023-04-26 23:13:42,224 - Epoch: [128][ 250/ 518] Overall Loss 2.927081 Objective Loss 2.927081 LR 0.000125 Time 0.205736 -2023-04-26 23:13:52,358 - Epoch: [128][ 300/ 518] Overall Loss 2.910337 Objective Loss 
2.910337 LR 0.000125 Time 0.205221 -2023-04-26 23:14:02,519 - Epoch: [128][ 350/ 518] Overall Loss 2.904491 Objective Loss 2.904491 LR 0.000125 Time 0.204933 -2023-04-26 23:14:12,653 - Epoch: [128][ 400/ 518] Overall Loss 2.900435 Objective Loss 2.900435 LR 0.000125 Time 0.204645 -2023-04-26 23:14:22,807 - Epoch: [128][ 450/ 518] Overall Loss 2.901876 Objective Loss 2.901876 LR 0.000125 Time 0.204467 -2023-04-26 23:14:32,904 - Epoch: [128][ 500/ 518] Overall Loss 2.904005 Objective Loss 2.904005 LR 0.000125 Time 0.204213 -2023-04-26 23:14:36,490 - Epoch: [128][ 518/ 518] Overall Loss 2.905557 Objective Loss 2.905557 LR 0.000125 Time 0.204038 -2023-04-26 23:14:36,559 - --- validate (epoch=128)----------- -2023-04-26 23:14:36,560 - 4952 samples (32 per mini-batch) -2023-04-26 23:14:43,228 - Epoch: [128][ 50/ 155] Loss 3.161318 mAP 0.440770 -2023-04-26 23:14:49,576 - Epoch: [128][ 100/ 155] Loss 3.189616 mAP 0.439280 -2023-04-26 23:14:55,951 - Epoch: [128][ 150/ 155] Loss 3.177463 mAP 0.441827 -2023-04-26 23:14:56,518 - Epoch: [128][ 155/ 155] Loss 3.181931 mAP 0.439098 -2023-04-26 23:14:56,589 - ==> mAP: 0.43910 Loss: 3.182 - -2023-04-26 23:14:56,593 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:14:56,593 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:14:56,630 - - -2023-04-26 23:14:56,630 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:15:07,510 - Epoch: [129][ 50/ 518] Overall Loss 2.874962 Objective Loss 2.874962 LR 0.000125 Time 0.217541 -2023-04-26 23:15:17,646 - Epoch: [129][ 100/ 518] Overall Loss 2.880596 Objective Loss 2.880596 LR 0.000125 Time 0.210108 -2023-04-26 23:15:27,827 - Epoch: [129][ 150/ 518] Overall Loss 2.889922 Objective Loss 2.889922 LR 0.000125 Time 0.207939 -2023-04-26 23:15:37,988 - Epoch: [129][ 200/ 518] Overall Loss 2.892614 Objective Loss 2.892614 LR 0.000125 Time 0.206750 -2023-04-26 23:15:48,169 - Epoch: [129][ 250/ 518] 
Overall Loss 2.900635 Objective Loss 2.900635 LR 0.000125 Time 0.206116 -2023-04-26 23:15:58,239 - Epoch: [129][ 300/ 518] Overall Loss 2.908663 Objective Loss 2.908663 LR 0.000125 Time 0.205326 -2023-04-26 23:16:08,424 - Epoch: [129][ 350/ 518] Overall Loss 2.909969 Objective Loss 2.909969 LR 0.000125 Time 0.205090 -2023-04-26 23:16:18,595 - Epoch: [129][ 400/ 518] Overall Loss 2.912397 Objective Loss 2.912397 LR 0.000125 Time 0.204877 -2023-04-26 23:16:28,785 - Epoch: [129][ 450/ 518] Overall Loss 2.908418 Objective Loss 2.908418 LR 0.000125 Time 0.204753 -2023-04-26 23:16:38,958 - Epoch: [129][ 500/ 518] Overall Loss 2.909119 Objective Loss 2.909119 LR 0.000125 Time 0.204620 -2023-04-26 23:16:42,454 - Epoch: [129][ 518/ 518] Overall Loss 2.904011 Objective Loss 2.904011 LR 0.000125 Time 0.204257 -2023-04-26 23:16:42,525 - --- validate (epoch=129)----------- -2023-04-26 23:16:42,525 - 4952 samples (32 per mini-batch) -2023-04-26 23:16:49,231 - Epoch: [129][ 50/ 155] Loss 3.139637 mAP 0.436292 -2023-04-26 23:16:55,670 - Epoch: [129][ 100/ 155] Loss 3.193122 mAP 0.438963 -2023-04-26 23:17:01,993 - Epoch: [129][ 150/ 155] Loss 3.187237 mAP 0.440782 -2023-04-26 23:17:02,553 - Epoch: [129][ 155/ 155] Loss 3.186359 mAP 0.438577 -2023-04-26 23:17:02,628 - ==> mAP: 0.43858 Loss: 3.186 - -2023-04-26 23:17:02,632 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:17:02,632 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:17:02,670 - - -2023-04-26 23:17:02,671 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:17:13,703 - Epoch: [130][ 50/ 518] Overall Loss 2.861448 Objective Loss 2.861448 LR 0.000125 Time 0.220602 -2023-04-26 23:17:23,892 - Epoch: [130][ 100/ 518] Overall Loss 2.875857 Objective Loss 2.875857 LR 0.000125 Time 0.212168 -2023-04-26 23:17:33,946 - Epoch: [130][ 150/ 518] Overall Loss 2.882995 Objective Loss 2.882995 LR 0.000125 Time 0.208464 -2023-04-26 
23:17:44,079 - Epoch: [130][ 200/ 518] Overall Loss 2.881760 Objective Loss 2.881760 LR 0.000125 Time 0.207001 -2023-04-26 23:17:54,231 - Epoch: [130][ 250/ 518] Overall Loss 2.881900 Objective Loss 2.881900 LR 0.000125 Time 0.206203 -2023-04-26 23:18:04,381 - Epoch: [130][ 300/ 518] Overall Loss 2.883692 Objective Loss 2.883692 LR 0.000125 Time 0.205664 -2023-04-26 23:18:14,524 - Epoch: [130][ 350/ 518] Overall Loss 2.894724 Objective Loss 2.894724 LR 0.000125 Time 0.205260 -2023-04-26 23:18:24,808 - Epoch: [130][ 400/ 518] Overall Loss 2.886840 Objective Loss 2.886840 LR 0.000125 Time 0.205309 -2023-04-26 23:18:34,946 - Epoch: [130][ 450/ 518] Overall Loss 2.895490 Objective Loss 2.895490 LR 0.000125 Time 0.205021 -2023-04-26 23:18:45,125 - Epoch: [130][ 500/ 518] Overall Loss 2.891382 Objective Loss 2.891382 LR 0.000125 Time 0.204873 -2023-04-26 23:18:48,636 - Epoch: [130][ 518/ 518] Overall Loss 2.894802 Objective Loss 2.894802 LR 0.000125 Time 0.204532 -2023-04-26 23:18:48,707 - --- validate (epoch=130)----------- -2023-04-26 23:18:48,708 - 4952 samples (32 per mini-batch) -2023-04-26 23:18:55,374 - Epoch: [130][ 50/ 155] Loss 3.220281 mAP 0.435835 -2023-04-26 23:19:01,733 - Epoch: [130][ 100/ 155] Loss 3.187851 mAP 0.445150 -2023-04-26 23:19:07,997 - Epoch: [130][ 150/ 155] Loss 3.181334 mAP 0.443139 -2023-04-26 23:19:08,562 - Epoch: [130][ 155/ 155] Loss 3.178568 mAP 0.444932 -2023-04-26 23:19:08,624 - ==> mAP: 0.44493 Loss: 3.179 - -2023-04-26 23:19:08,628 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:19:08,628 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:19:08,665 - - -2023-04-26 23:19:08,665 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:19:19,674 - Epoch: [131][ 50/ 518] Overall Loss 2.966268 Objective Loss 2.966268 LR 0.000125 Time 0.220107 -2023-04-26 23:19:29,686 - Epoch: [131][ 100/ 518] Overall Loss 2.902331 Objective Loss 2.902331 LR 
0.000125 Time 0.210158 -2023-04-26 23:19:39,831 - Epoch: [131][ 150/ 518] Overall Loss 2.894120 Objective Loss 2.894120 LR 0.000125 Time 0.207729 -2023-04-26 23:19:50,010 - Epoch: [131][ 200/ 518] Overall Loss 2.903982 Objective Loss 2.903982 LR 0.000125 Time 0.206684 -2023-04-26 23:20:00,088 - Epoch: [131][ 250/ 518] Overall Loss 2.896000 Objective Loss 2.896000 LR 0.000125 Time 0.205651 -2023-04-26 23:20:10,241 - Epoch: [131][ 300/ 518] Overall Loss 2.897446 Objective Loss 2.897446 LR 0.000125 Time 0.205215 -2023-04-26 23:20:20,434 - Epoch: [131][ 350/ 518] Overall Loss 2.898201 Objective Loss 2.898201 LR 0.000125 Time 0.205017 -2023-04-26 23:20:30,627 - Epoch: [131][ 400/ 518] Overall Loss 2.895005 Objective Loss 2.895005 LR 0.000125 Time 0.204869 -2023-04-26 23:20:40,813 - Epoch: [131][ 450/ 518] Overall Loss 2.892279 Objective Loss 2.892279 LR 0.000125 Time 0.204737 -2023-04-26 23:20:50,923 - Epoch: [131][ 500/ 518] Overall Loss 2.902919 Objective Loss 2.902919 LR 0.000125 Time 0.204479 -2023-04-26 23:20:54,461 - Epoch: [131][ 518/ 518] Overall Loss 2.904172 Objective Loss 2.904172 LR 0.000125 Time 0.204203 -2023-04-26 23:20:54,533 - --- validate (epoch=131)----------- -2023-04-26 23:20:54,533 - 4952 samples (32 per mini-batch) -2023-04-26 23:21:01,260 - Epoch: [131][ 50/ 155] Loss 3.180971 mAP 0.437716 -2023-04-26 23:21:07,571 - Epoch: [131][ 100/ 155] Loss 3.194589 mAP 0.432402 -2023-04-26 23:21:13,913 - Epoch: [131][ 150/ 155] Loss 3.193535 mAP 0.433209 -2023-04-26 23:21:14,484 - Epoch: [131][ 155/ 155] Loss 3.191293 mAP 0.433944 -2023-04-26 23:21:14,557 - ==> mAP: 0.43394 Loss: 3.191 - -2023-04-26 23:21:14,561 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:21:14,561 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:21:14,599 - - -2023-04-26 23:21:14,599 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:21:25,589 - Epoch: [132][ 50/ 518] Overall Loss 
2.922586 Objective Loss 2.922586 LR 0.000125 Time 0.219749 -2023-04-26 23:21:35,820 - Epoch: [132][ 100/ 518] Overall Loss 2.925658 Objective Loss 2.925658 LR 0.000125 Time 0.212164 -2023-04-26 23:21:45,976 - Epoch: [132][ 150/ 518] Overall Loss 2.909697 Objective Loss 2.909697 LR 0.000125 Time 0.209142 -2023-04-26 23:21:56,121 - Epoch: [132][ 200/ 518] Overall Loss 2.909289 Objective Loss 2.909289 LR 0.000125 Time 0.207572 -2023-04-26 23:22:06,300 - Epoch: [132][ 250/ 518] Overall Loss 2.905048 Objective Loss 2.905048 LR 0.000125 Time 0.206768 -2023-04-26 23:22:16,470 - Epoch: [132][ 300/ 518] Overall Loss 2.897088 Objective Loss 2.897088 LR 0.000125 Time 0.206199 -2023-04-26 23:22:26,705 - Epoch: [132][ 350/ 518] Overall Loss 2.894005 Objective Loss 2.894005 LR 0.000125 Time 0.205980 -2023-04-26 23:22:36,826 - Epoch: [132][ 400/ 518] Overall Loss 2.893536 Objective Loss 2.893536 LR 0.000125 Time 0.205532 -2023-04-26 23:22:46,968 - Epoch: [132][ 450/ 518] Overall Loss 2.898477 Objective Loss 2.898477 LR 0.000125 Time 0.205229 -2023-04-26 23:22:57,102 - Epoch: [132][ 500/ 518] Overall Loss 2.896883 Objective Loss 2.896883 LR 0.000125 Time 0.204971 -2023-04-26 23:23:00,697 - Epoch: [132][ 518/ 518] Overall Loss 2.899410 Objective Loss 2.899410 LR 0.000125 Time 0.204788 -2023-04-26 23:23:00,772 - --- validate (epoch=132)----------- -2023-04-26 23:23:00,773 - 4952 samples (32 per mini-batch) -2023-04-26 23:23:07,472 - Epoch: [132][ 50/ 155] Loss 3.159480 mAP 0.441577 -2023-04-26 23:23:13,844 - Epoch: [132][ 100/ 155] Loss 3.142394 mAP 0.441092 -2023-04-26 23:23:20,178 - Epoch: [132][ 150/ 155] Loss 3.171609 mAP 0.440602 -2023-04-26 23:23:20,738 - Epoch: [132][ 155/ 155] Loss 3.179739 mAP 0.438030 -2023-04-26 23:23:20,804 - ==> mAP: 0.43803 Loss: 3.180 - -2023-04-26 23:23:20,808 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:23:20,808 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 
23:23:20,845 - - -2023-04-26 23:23:20,845 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:23:31,640 - Epoch: [133][ 50/ 518] Overall Loss 2.850403 Objective Loss 2.850403 LR 0.000125 Time 0.215834 -2023-04-26 23:23:41,797 - Epoch: [133][ 100/ 518] Overall Loss 2.882865 Objective Loss 2.882865 LR 0.000125 Time 0.209473 -2023-04-26 23:23:51,923 - Epoch: [133][ 150/ 518] Overall Loss 2.901139 Objective Loss 2.901139 LR 0.000125 Time 0.207146 -2023-04-26 23:24:02,024 - Epoch: [133][ 200/ 518] Overall Loss 2.891388 Objective Loss 2.891388 LR 0.000125 Time 0.205853 -2023-04-26 23:24:12,149 - Epoch: [133][ 250/ 518] Overall Loss 2.884736 Objective Loss 2.884736 LR 0.000125 Time 0.205180 -2023-04-26 23:24:22,304 - Epoch: [133][ 300/ 518] Overall Loss 2.877716 Objective Loss 2.877716 LR 0.000125 Time 0.204827 -2023-04-26 23:24:32,391 - Epoch: [133][ 350/ 518] Overall Loss 2.884575 Objective Loss 2.884575 LR 0.000125 Time 0.204381 -2023-04-26 23:24:42,526 - Epoch: [133][ 400/ 518] Overall Loss 2.884642 Objective Loss 2.884642 LR 0.000125 Time 0.204165 -2023-04-26 23:24:52,621 - Epoch: [133][ 450/ 518] Overall Loss 2.883995 Objective Loss 2.883995 LR 0.000125 Time 0.203911 -2023-04-26 23:25:02,744 - Epoch: [133][ 500/ 518] Overall Loss 2.887995 Objective Loss 2.887995 LR 0.000125 Time 0.203763 -2023-04-26 23:25:06,278 - Epoch: [133][ 518/ 518] Overall Loss 2.887635 Objective Loss 2.887635 LR 0.000125 Time 0.203504 -2023-04-26 23:25:06,350 - --- validate (epoch=133)----------- -2023-04-26 23:25:06,350 - 4952 samples (32 per mini-batch) -2023-04-26 23:25:13,106 - Epoch: [133][ 50/ 155] Loss 3.193475 mAP 0.438755 -2023-04-26 23:25:19,511 - Epoch: [133][ 100/ 155] Loss 3.192928 mAP 0.442787 -2023-04-26 23:25:25,882 - Epoch: [133][ 150/ 155] Loss 3.171250 mAP 0.442434 -2023-04-26 23:25:26,456 - Epoch: [133][ 155/ 155] Loss 3.172496 mAP 0.442463 -2023-04-26 23:25:26,527 - ==> mAP: 0.44246 Loss: 3.172 - -2023-04-26 23:25:26,599 - ==> Best [mAP: 0.446227 vloss: 
3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:25:26,599 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:25:26,657 - - -2023-04-26 23:25:26,657 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:25:37,464 - Epoch: [134][ 50/ 518] Overall Loss 2.904332 Objective Loss 2.904332 LR 0.000125 Time 0.216064 -2023-04-26 23:25:47,680 - Epoch: [134][ 100/ 518] Overall Loss 2.901034 Objective Loss 2.901034 LR 0.000125 Time 0.210177 -2023-04-26 23:25:57,769 - Epoch: [134][ 150/ 518] Overall Loss 2.919642 Objective Loss 2.919642 LR 0.000125 Time 0.207365 -2023-04-26 23:26:07,915 - Epoch: [134][ 200/ 518] Overall Loss 2.910755 Objective Loss 2.910755 LR 0.000125 Time 0.206244 -2023-04-26 23:26:18,077 - Epoch: [134][ 250/ 518] Overall Loss 2.902385 Objective Loss 2.902385 LR 0.000125 Time 0.205638 -2023-04-26 23:26:28,274 - Epoch: [134][ 300/ 518] Overall Loss 2.903022 Objective Loss 2.903022 LR 0.000125 Time 0.205351 -2023-04-26 23:26:38,336 - Epoch: [134][ 350/ 518] Overall Loss 2.908430 Objective Loss 2.908430 LR 0.000125 Time 0.204758 -2023-04-26 23:26:48,452 - Epoch: [134][ 400/ 518] Overall Loss 2.908565 Objective Loss 2.908565 LR 0.000125 Time 0.204450 -2023-04-26 23:26:58,584 - Epoch: [134][ 450/ 518] Overall Loss 2.905697 Objective Loss 2.905697 LR 0.000125 Time 0.204244 -2023-04-26 23:27:08,832 - Epoch: [134][ 500/ 518] Overall Loss 2.900057 Objective Loss 2.900057 LR 0.000125 Time 0.204312 -2023-04-26 23:27:12,367 - Epoch: [134][ 518/ 518] Overall Loss 2.903662 Objective Loss 2.903662 LR 0.000125 Time 0.204036 -2023-04-26 23:27:12,437 - --- validate (epoch=134)----------- -2023-04-26 23:27:12,438 - 4952 samples (32 per mini-batch) -2023-04-26 23:27:19,159 - Epoch: [134][ 50/ 155] Loss 3.188677 mAP 0.425764 -2023-04-26 23:27:25,501 - Epoch: [134][ 100/ 155] Loss 3.183038 mAP 0.431069 -2023-04-26 23:27:31,870 - Epoch: [134][ 150/ 155] Loss 3.187964 mAP 0.432074 -2023-04-26 23:27:32,449 - Epoch: [134][ 
155/ 155] Loss 3.185733 mAP 0.433301 -2023-04-26 23:27:32,513 - ==> mAP: 0.43330 Loss: 3.186 - -2023-04-26 23:27:32,517 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:27:32,517 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:27:32,555 - - -2023-04-26 23:27:32,555 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:27:43,606 - Epoch: [135][ 50/ 518] Overall Loss 2.876271 Objective Loss 2.876271 LR 0.000125 Time 0.220971 -2023-04-26 23:27:53,812 - Epoch: [135][ 100/ 518] Overall Loss 2.846703 Objective Loss 2.846703 LR 0.000125 Time 0.212522 -2023-04-26 23:28:04,004 - Epoch: [135][ 150/ 518] Overall Loss 2.854338 Objective Loss 2.854338 LR 0.000125 Time 0.209619 -2023-04-26 23:28:14,129 - Epoch: [135][ 200/ 518] Overall Loss 2.860962 Objective Loss 2.860962 LR 0.000125 Time 0.207830 -2023-04-26 23:28:24,308 - Epoch: [135][ 250/ 518] Overall Loss 2.860566 Objective Loss 2.860566 LR 0.000125 Time 0.206974 -2023-04-26 23:28:34,388 - Epoch: [135][ 300/ 518] Overall Loss 2.867928 Objective Loss 2.867928 LR 0.000125 Time 0.206075 -2023-04-26 23:28:44,518 - Epoch: [135][ 350/ 518] Overall Loss 2.876480 Objective Loss 2.876480 LR 0.000125 Time 0.205573 -2023-04-26 23:28:54,635 - Epoch: [135][ 400/ 518] Overall Loss 2.878730 Objective Loss 2.878730 LR 0.000125 Time 0.205164 -2023-04-26 23:29:04,821 - Epoch: [135][ 450/ 518] Overall Loss 2.881674 Objective Loss 2.881674 LR 0.000125 Time 0.205001 -2023-04-26 23:29:14,984 - Epoch: [135][ 500/ 518] Overall Loss 2.884367 Objective Loss 2.884367 LR 0.000125 Time 0.204822 -2023-04-26 23:29:18,480 - Epoch: [135][ 518/ 518] Overall Loss 2.885386 Objective Loss 2.885386 LR 0.000125 Time 0.204454 -2023-04-26 23:29:18,552 - --- validate (epoch=135)----------- -2023-04-26 23:29:18,552 - 4952 samples (32 per mini-batch) -2023-04-26 23:29:25,313 - Epoch: [135][ 50/ 155] Loss 3.178278 mAP 0.445913 -2023-04-26 23:29:31,669 - Epoch: [135][ 
100/ 155] Loss 3.183741 mAP 0.441370 -2023-04-26 23:29:38,048 - Epoch: [135][ 150/ 155] Loss 3.189325 mAP 0.436737 -2023-04-26 23:29:38,618 - Epoch: [135][ 155/ 155] Loss 3.187994 mAP 0.437677 -2023-04-26 23:29:38,685 - ==> mAP: 0.43768 Loss: 3.188 - -2023-04-26 23:29:38,689 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:29:38,689 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:29:38,727 - - -2023-04-26 23:29:38,727 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:29:49,662 - Epoch: [136][ 50/ 518] Overall Loss 2.937126 Objective Loss 2.937126 LR 0.000125 Time 0.218634 -2023-04-26 23:30:00,036 - Epoch: [136][ 100/ 518] Overall Loss 2.898272 Objective Loss 2.898272 LR 0.000125 Time 0.213041 -2023-04-26 23:30:10,137 - Epoch: [136][ 150/ 518] Overall Loss 2.901033 Objective Loss 2.901033 LR 0.000125 Time 0.209356 -2023-04-26 23:30:20,298 - Epoch: [136][ 200/ 518] Overall Loss 2.897521 Objective Loss 2.897521 LR 0.000125 Time 0.207815 -2023-04-26 23:30:30,511 - Epoch: [136][ 250/ 518] Overall Loss 2.890426 Objective Loss 2.890426 LR 0.000125 Time 0.207097 -2023-04-26 23:30:40,601 - Epoch: [136][ 300/ 518] Overall Loss 2.894054 Objective Loss 2.894054 LR 0.000125 Time 0.206209 -2023-04-26 23:30:50,806 - Epoch: [136][ 350/ 518] Overall Loss 2.893476 Objective Loss 2.893476 LR 0.000125 Time 0.205904 -2023-04-26 23:31:00,922 - Epoch: [136][ 400/ 518] Overall Loss 2.897902 Objective Loss 2.897902 LR 0.000125 Time 0.205450 -2023-04-26 23:31:11,139 - Epoch: [136][ 450/ 518] Overall Loss 2.900246 Objective Loss 2.900246 LR 0.000125 Time 0.205323 -2023-04-26 23:31:21,278 - Epoch: [136][ 500/ 518] Overall Loss 2.901921 Objective Loss 2.901921 LR 0.000125 Time 0.205067 -2023-04-26 23:31:24,797 - Epoch: [136][ 518/ 518] Overall Loss 2.903626 Objective Loss 2.903626 LR 0.000125 Time 0.204732 -2023-04-26 23:31:24,867 - --- validate (epoch=136)----------- -2023-04-26 23:31:24,867 
- 4952 samples (32 per mini-batch) -2023-04-26 23:31:31,591 - Epoch: [136][ 50/ 155] Loss 3.200618 mAP 0.444785 -2023-04-26 23:31:38,002 - Epoch: [136][ 100/ 155] Loss 3.182733 mAP 0.443873 -2023-04-26 23:31:44,356 - Epoch: [136][ 150/ 155] Loss 3.182879 mAP 0.441096 -2023-04-26 23:31:44,934 - Epoch: [136][ 155/ 155] Loss 3.180840 mAP 0.441017 -2023-04-26 23:31:44,997 - ==> mAP: 0.44102 Loss: 3.181 - -2023-04-26 23:31:45,002 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:31:45,002 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:31:45,039 - - -2023-04-26 23:31:45,039 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:31:55,888 - Epoch: [137][ 50/ 518] Overall Loss 2.817964 Objective Loss 2.817964 LR 0.000125 Time 0.216922 -2023-04-26 23:32:06,127 - Epoch: [137][ 100/ 518] Overall Loss 2.843213 Objective Loss 2.843213 LR 0.000125 Time 0.210832 -2023-04-26 23:32:16,413 - Epoch: [137][ 150/ 518] Overall Loss 2.847887 Objective Loss 2.847887 LR 0.000125 Time 0.209115 -2023-04-26 23:32:26,582 - Epoch: [137][ 200/ 518] Overall Loss 2.850504 Objective Loss 2.850504 LR 0.000125 Time 0.207675 -2023-04-26 23:32:36,633 - Epoch: [137][ 250/ 518] Overall Loss 2.863676 Objective Loss 2.863676 LR 0.000125 Time 0.206338 -2023-04-26 23:32:46,764 - Epoch: [137][ 300/ 518] Overall Loss 2.861106 Objective Loss 2.861106 LR 0.000125 Time 0.205713 -2023-04-26 23:32:56,947 - Epoch: [137][ 350/ 518] Overall Loss 2.863598 Objective Loss 2.863598 LR 0.000125 Time 0.205415 -2023-04-26 23:33:07,108 - Epoch: [137][ 400/ 518] Overall Loss 2.863338 Objective Loss 2.863338 LR 0.000125 Time 0.205136 -2023-04-26 23:33:17,257 - Epoch: [137][ 450/ 518] Overall Loss 2.868683 Objective Loss 2.868683 LR 0.000125 Time 0.204893 -2023-04-26 23:33:27,382 - Epoch: [137][ 500/ 518] Overall Loss 2.871645 Objective Loss 2.871645 LR 0.000125 Time 0.204651 -2023-04-26 23:33:30,888 - Epoch: [137][ 518/ 518] 
Overall Loss 2.872958 Objective Loss 2.872958 LR 0.000125 Time 0.204305 -2023-04-26 23:33:30,960 - --- validate (epoch=137)----------- -2023-04-26 23:33:30,960 - 4952 samples (32 per mini-batch) -2023-04-26 23:33:37,618 - Epoch: [137][ 50/ 155] Loss 3.209897 mAP 0.429714 -2023-04-26 23:33:43,996 - Epoch: [137][ 100/ 155] Loss 3.201362 mAP 0.433744 -2023-04-26 23:33:50,321 - Epoch: [137][ 150/ 155] Loss 3.181252 mAP 0.439718 -2023-04-26 23:33:50,865 - Epoch: [137][ 155/ 155] Loss 3.177871 mAP 0.438239 -2023-04-26 23:33:50,929 - ==> mAP: 0.43824 Loss: 3.178 - -2023-04-26 23:33:50,933 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:33:50,933 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:33:50,969 - - -2023-04-26 23:33:50,970 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:34:01,775 - Epoch: [138][ 50/ 518] Overall Loss 2.881354 Objective Loss 2.881354 LR 0.000125 Time 0.216054 -2023-04-26 23:34:11,877 - Epoch: [138][ 100/ 518] Overall Loss 2.865926 Objective Loss 2.865926 LR 0.000125 Time 0.209033 -2023-04-26 23:34:22,008 - Epoch: [138][ 150/ 518] Overall Loss 2.879884 Objective Loss 2.879884 LR 0.000125 Time 0.206880 -2023-04-26 23:34:32,172 - Epoch: [138][ 200/ 518] Overall Loss 2.885522 Objective Loss 2.885522 LR 0.000125 Time 0.205975 -2023-04-26 23:34:42,455 - Epoch: [138][ 250/ 518] Overall Loss 2.873540 Objective Loss 2.873540 LR 0.000125 Time 0.205903 -2023-04-26 23:34:52,585 - Epoch: [138][ 300/ 518] Overall Loss 2.879638 Objective Loss 2.879638 LR 0.000125 Time 0.205349 -2023-04-26 23:35:02,737 - Epoch: [138][ 350/ 518] Overall Loss 2.884532 Objective Loss 2.884532 LR 0.000125 Time 0.205012 -2023-04-26 23:35:12,914 - Epoch: [138][ 400/ 518] Overall Loss 2.881629 Objective Loss 2.881629 LR 0.000125 Time 0.204825 -2023-04-26 23:35:23,047 - Epoch: [138][ 450/ 518] Overall Loss 2.879006 Objective Loss 2.879006 LR 0.000125 Time 0.204580 -2023-04-26 
23:35:33,217 - Epoch: [138][ 500/ 518] Overall Loss 2.876914 Objective Loss 2.876914 LR 0.000125 Time 0.204460 -2023-04-26 23:35:36,769 - Epoch: [138][ 518/ 518] Overall Loss 2.879732 Objective Loss 2.879732 LR 0.000125 Time 0.204210 -2023-04-26 23:35:36,840 - --- validate (epoch=138)----------- -2023-04-26 23:35:36,840 - 4952 samples (32 per mini-batch) -2023-04-26 23:35:43,526 - Epoch: [138][ 50/ 155] Loss 3.136394 mAP 0.445216 -2023-04-26 23:35:49,863 - Epoch: [138][ 100/ 155] Loss 3.187799 mAP 0.436535 -2023-04-26 23:35:56,121 - Epoch: [138][ 150/ 155] Loss 3.177046 mAP 0.429041 -2023-04-26 23:35:56,685 - Epoch: [138][ 155/ 155] Loss 3.179765 mAP 0.428293 -2023-04-26 23:35:56,756 - ==> mAP: 0.42829 Loss: 3.180 - -2023-04-26 23:35:56,760 - ==> Best [mAP: 0.446227 vloss: 3.188677 Sparsity:0.00 Params: 2177088 on epoch: 125] -2023-04-26 23:35:56,760 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:35:56,798 - - -2023-04-26 23:35:56,798 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:36:07,696 - Epoch: [139][ 50/ 518] Overall Loss 2.841489 Objective Loss 2.841489 LR 0.000125 Time 0.217906 -2023-04-26 23:36:17,802 - Epoch: [139][ 100/ 518] Overall Loss 2.888974 Objective Loss 2.888974 LR 0.000125 Time 0.209994 -2023-04-26 23:36:27,978 - Epoch: [139][ 150/ 518] Overall Loss 2.879251 Objective Loss 2.879251 LR 0.000125 Time 0.207824 -2023-04-26 23:36:38,131 - Epoch: [139][ 200/ 518] Overall Loss 2.890135 Objective Loss 2.890135 LR 0.000125 Time 0.206626 -2023-04-26 23:36:48,202 - Epoch: [139][ 250/ 518] Overall Loss 2.876262 Objective Loss 2.876262 LR 0.000125 Time 0.205580 -2023-04-26 23:36:58,375 - Epoch: [139][ 300/ 518] Overall Loss 2.883139 Objective Loss 2.883139 LR 0.000125 Time 0.205221 -2023-04-26 23:37:08,469 - Epoch: [139][ 350/ 518] Overall Loss 2.878188 Objective Loss 2.878188 LR 0.000125 Time 0.204737 -2023-04-26 23:37:18,616 - Epoch: [139][ 400/ 518] Overall Loss 2.883284 Objective Loss 2.883284 LR 
0.000125 Time 0.204508 -2023-04-26 23:37:28,854 - Epoch: [139][ 450/ 518] Overall Loss 2.881315 Objective Loss 2.881315 LR 0.000125 Time 0.204534 -2023-04-26 23:37:39,009 - Epoch: [139][ 500/ 518] Overall Loss 2.887564 Objective Loss 2.887564 LR 0.000125 Time 0.204386 -2023-04-26 23:37:42,545 - Epoch: [139][ 518/ 518] Overall Loss 2.888706 Objective Loss 2.888706 LR 0.000125 Time 0.204110 -2023-04-26 23:37:42,616 - --- validate (epoch=139)----------- -2023-04-26 23:37:42,616 - 4952 samples (32 per mini-batch) -2023-04-26 23:37:49,432 - Epoch: [139][ 50/ 155] Loss 3.203988 mAP 0.450257 -2023-04-26 23:37:55,797 - Epoch: [139][ 100/ 155] Loss 3.198155 mAP 0.451481 -2023-04-26 23:38:02,136 - Epoch: [139][ 150/ 155] Loss 3.184437 mAP 0.449628 -2023-04-26 23:38:02,704 - Epoch: [139][ 155/ 155] Loss 3.186491 mAP 0.448273 -2023-04-26 23:38:02,775 - ==> mAP: 0.44827 Loss: 3.186 - -2023-04-26 23:38:02,779 - ==> Best [mAP: 0.448273 vloss: 3.186491 Sparsity:0.00 Params: 2177088 on epoch: 139] -2023-04-26 23:38:02,779 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:38:02,832 - - -2023-04-26 23:38:02,832 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:38:13,751 - Epoch: [140][ 50/ 518] Overall Loss 2.886107 Objective Loss 2.886107 LR 0.000125 Time 0.218318 -2023-04-26 23:38:23,925 - Epoch: [140][ 100/ 518] Overall Loss 2.895902 Objective Loss 2.895902 LR 0.000125 Time 0.210891 -2023-04-26 23:38:34,077 - Epoch: [140][ 150/ 518] Overall Loss 2.907755 Objective Loss 2.907755 LR 0.000125 Time 0.208259 -2023-04-26 23:38:44,216 - Epoch: [140][ 200/ 518] Overall Loss 2.907743 Objective Loss 2.907743 LR 0.000125 Time 0.206881 -2023-04-26 23:38:54,378 - Epoch: [140][ 250/ 518] Overall Loss 2.898124 Objective Loss 2.898124 LR 0.000125 Time 0.206145 -2023-04-26 23:39:04,524 - Epoch: [140][ 300/ 518] Overall Loss 2.907145 Objective Loss 2.907145 LR 0.000125 Time 0.205603 -2023-04-26 23:39:14,672 - Epoch: [140][ 350/ 518] Overall Loss 
2.903951 Objective Loss 2.903951 LR 0.000125 Time 0.205220 -2023-04-26 23:39:24,870 - Epoch: [140][ 400/ 518] Overall Loss 2.892586 Objective Loss 2.892586 LR 0.000125 Time 0.205060 -2023-04-26 23:39:34,976 - Epoch: [140][ 450/ 518] Overall Loss 2.893525 Objective Loss 2.893525 LR 0.000125 Time 0.204728 -2023-04-26 23:39:45,162 - Epoch: [140][ 500/ 518] Overall Loss 2.895128 Objective Loss 2.895128 LR 0.000125 Time 0.204624 -2023-04-26 23:39:48,686 - Epoch: [140][ 518/ 518] Overall Loss 2.895633 Objective Loss 2.895633 LR 0.000125 Time 0.204317 -2023-04-26 23:39:48,759 - --- validate (epoch=140)----------- -2023-04-26 23:39:48,760 - 4952 samples (32 per mini-batch) -2023-04-26 23:39:55,571 - Epoch: [140][ 50/ 155] Loss 3.174632 mAP 0.450027 -2023-04-26 23:40:01,959 - Epoch: [140][ 100/ 155] Loss 3.174361 mAP 0.454595 -2023-04-26 23:40:08,363 - Epoch: [140][ 150/ 155] Loss 3.181934 mAP 0.448097 -2023-04-26 23:40:08,936 - Epoch: [140][ 155/ 155] Loss 3.181567 mAP 0.446846 -2023-04-26 23:40:09,006 - ==> mAP: 0.44685 Loss: 3.182 - -2023-04-26 23:40:09,009 - ==> Best [mAP: 0.448273 vloss: 3.186491 Sparsity:0.00 Params: 2177088 on epoch: 139] -2023-04-26 23:40:09,009 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:40:09,047 - - -2023-04-26 23:40:09,047 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:40:19,944 - Epoch: [141][ 50/ 518] Overall Loss 2.905253 Objective Loss 2.905253 LR 0.000125 Time 0.217874 -2023-04-26 23:40:30,096 - Epoch: [141][ 100/ 518] Overall Loss 2.907708 Objective Loss 2.907708 LR 0.000125 Time 0.210442 -2023-04-26 23:40:40,281 - Epoch: [141][ 150/ 518] Overall Loss 2.890114 Objective Loss 2.890114 LR 0.000125 Time 0.208187 -2023-04-26 23:40:50,436 - Epoch: [141][ 200/ 518] Overall Loss 2.898481 Objective Loss 2.898481 LR 0.000125 Time 0.206903 -2023-04-26 23:41:00,575 - Epoch: [141][ 250/ 518] Overall Loss 2.887159 Objective Loss 2.887159 LR 0.000125 Time 0.206072 -2023-04-26 23:41:10,731 - Epoch: 
[141][ 300/ 518] Overall Loss 2.885791 Objective Loss 2.885791 LR 0.000125 Time 0.205577 -2023-04-26 23:41:20,915 - Epoch: [141][ 350/ 518] Overall Loss 2.885747 Objective Loss 2.885747 LR 0.000125 Time 0.205301 -2023-04-26 23:41:31,073 - Epoch: [141][ 400/ 518] Overall Loss 2.883521 Objective Loss 2.883521 LR 0.000125 Time 0.205028 -2023-04-26 23:41:41,227 - Epoch: [141][ 450/ 518] Overall Loss 2.884306 Objective Loss 2.884306 LR 0.000125 Time 0.204808 -2023-04-26 23:41:51,310 - Epoch: [141][ 500/ 518] Overall Loss 2.883433 Objective Loss 2.883433 LR 0.000125 Time 0.204491 -2023-04-26 23:41:54,844 - Epoch: [141][ 518/ 518] Overall Loss 2.885886 Objective Loss 2.885886 LR 0.000125 Time 0.204205 -2023-04-26 23:41:54,915 - --- validate (epoch=141)----------- -2023-04-26 23:41:54,915 - 4952 samples (32 per mini-batch) -2023-04-26 23:42:01,581 - Epoch: [141][ 50/ 155] Loss 3.126947 mAP 0.434433 -2023-04-26 23:42:07,984 - Epoch: [141][ 100/ 155] Loss 3.141725 mAP 0.448234 -2023-04-26 23:42:14,333 - Epoch: [141][ 150/ 155] Loss 3.163740 mAP 0.444503 -2023-04-26 23:42:14,899 - Epoch: [141][ 155/ 155] Loss 3.161330 mAP 0.446125 -2023-04-26 23:42:14,970 - ==> mAP: 0.44612 Loss: 3.161 - -2023-04-26 23:42:14,973 - ==> Best [mAP: 0.448273 vloss: 3.186491 Sparsity:0.00 Params: 2177088 on epoch: 139] -2023-04-26 23:42:14,973 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:42:15,011 - - -2023-04-26 23:42:15,011 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:42:25,931 - Epoch: [142][ 50/ 518] Overall Loss 2.891305 Objective Loss 2.891305 LR 0.000125 Time 0.218349 -2023-04-26 23:42:36,081 - Epoch: [142][ 100/ 518] Overall Loss 2.891355 Objective Loss 2.891355 LR 0.000125 Time 0.210652 -2023-04-26 23:42:46,199 - Epoch: [142][ 150/ 518] Overall Loss 2.880251 Objective Loss 2.880251 LR 0.000125 Time 0.207880 -2023-04-26 23:42:56,343 - Epoch: [142][ 200/ 518] Overall Loss 2.878873 Objective Loss 2.878873 LR 0.000125 Time 0.206621 
-2023-04-26 23:43:06,528 - Epoch: [142][ 250/ 518] Overall Loss 2.882732 Objective Loss 2.882732 LR 0.000125 Time 0.206032 -2023-04-26 23:43:16,641 - Epoch: [142][ 300/ 518] Overall Loss 2.888642 Objective Loss 2.888642 LR 0.000125 Time 0.205397 -2023-04-26 23:43:26,774 - Epoch: [142][ 350/ 518] Overall Loss 2.889424 Objective Loss 2.889424 LR 0.000125 Time 0.205002 -2023-04-26 23:43:36,919 - Epoch: [142][ 400/ 518] Overall Loss 2.885389 Objective Loss 2.885389 LR 0.000125 Time 0.204734 -2023-04-26 23:43:47,031 - Epoch: [142][ 450/ 518] Overall Loss 2.887114 Objective Loss 2.887114 LR 0.000125 Time 0.204454 -2023-04-26 23:43:57,221 - Epoch: [142][ 500/ 518] Overall Loss 2.888162 Objective Loss 2.888162 LR 0.000125 Time 0.204386 -2023-04-26 23:44:00,803 - Epoch: [142][ 518/ 518] Overall Loss 2.889634 Objective Loss 2.889634 LR 0.000125 Time 0.204198 -2023-04-26 23:44:00,875 - --- validate (epoch=142)----------- -2023-04-26 23:44:00,875 - 4952 samples (32 per mini-batch) -2023-04-26 23:44:07,597 - Epoch: [142][ 50/ 155] Loss 3.206886 mAP 0.438774 -2023-04-26 23:44:13,932 - Epoch: [142][ 100/ 155] Loss 3.178444 mAP 0.439797 -2023-04-26 23:44:20,258 - Epoch: [142][ 150/ 155] Loss 3.175899 mAP 0.438917 -2023-04-26 23:44:20,826 - Epoch: [142][ 155/ 155] Loss 3.175253 mAP 0.438402 -2023-04-26 23:44:20,899 - ==> mAP: 0.43840 Loss: 3.175 - -2023-04-26 23:44:20,903 - ==> Best [mAP: 0.448273 vloss: 3.186491 Sparsity:0.00 Params: 2177088 on epoch: 139] -2023-04-26 23:44:20,903 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:44:20,939 - - -2023-04-26 23:44:20,939 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:44:31,757 - Epoch: [143][ 50/ 518] Overall Loss 2.863049 Objective Loss 2.863049 LR 0.000125 Time 0.216297 -2023-04-26 23:44:41,886 - Epoch: [143][ 100/ 518] Overall Loss 2.872220 Objective Loss 2.872220 LR 0.000125 Time 0.209422 -2023-04-26 23:44:52,166 - Epoch: [143][ 150/ 518] Overall Loss 2.874588 Objective Loss 
2.874588 LR 0.000125 Time 0.208138 -2023-04-26 23:45:02,245 - Epoch: [143][ 200/ 518] Overall Loss 2.888555 Objective Loss 2.888555 LR 0.000125 Time 0.206488 -2023-04-26 23:45:12,468 - Epoch: [143][ 250/ 518] Overall Loss 2.904762 Objective Loss 2.904762 LR 0.000125 Time 0.206080 -2023-04-26 23:45:22,581 - Epoch: [143][ 300/ 518] Overall Loss 2.900860 Objective Loss 2.900860 LR 0.000125 Time 0.205438 -2023-04-26 23:45:32,720 - Epoch: [143][ 350/ 518] Overall Loss 2.894714 Objective Loss 2.894714 LR 0.000125 Time 0.205053 -2023-04-26 23:45:42,823 - Epoch: [143][ 400/ 518] Overall Loss 2.895617 Objective Loss 2.895617 LR 0.000125 Time 0.204673 -2023-04-26 23:45:52,975 - Epoch: [143][ 450/ 518] Overall Loss 2.893229 Objective Loss 2.893229 LR 0.000125 Time 0.204488 -2023-04-26 23:46:03,268 - Epoch: [143][ 500/ 518] Overall Loss 2.892966 Objective Loss 2.892966 LR 0.000125 Time 0.204623 -2023-04-26 23:46:06,786 - Epoch: [143][ 518/ 518] Overall Loss 2.889831 Objective Loss 2.889831 LR 0.000125 Time 0.204304 -2023-04-26 23:46:06,857 - --- validate (epoch=143)----------- -2023-04-26 23:46:06,857 - 4952 samples (32 per mini-batch) -2023-04-26 23:46:13,600 - Epoch: [143][ 50/ 155] Loss 3.163149 mAP 0.444924 -2023-04-26 23:46:19,949 - Epoch: [143][ 100/ 155] Loss 3.182330 mAP 0.434648 -2023-04-26 23:46:26,203 - Epoch: [143][ 150/ 155] Loss 3.173671 mAP 0.437509 -2023-04-26 23:46:26,764 - Epoch: [143][ 155/ 155] Loss 3.169328 mAP 0.438246 -2023-04-26 23:46:26,833 - ==> mAP: 0.43825 Loss: 3.169 - -2023-04-26 23:46:26,838 - ==> Best [mAP: 0.448273 vloss: 3.186491 Sparsity:0.00 Params: 2177088 on epoch: 139] -2023-04-26 23:46:26,838 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:46:26,875 - - -2023-04-26 23:46:26,875 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:46:37,816 - Epoch: [144][ 50/ 518] Overall Loss 2.908705 Objective Loss 2.908705 LR 0.000125 Time 0.218754 -2023-04-26 23:46:47,989 - Epoch: [144][ 100/ 518] 
Overall Loss 2.889222 Objective Loss 2.889222 LR 0.000125 Time 0.211089 -2023-04-26 23:46:58,082 - Epoch: [144][ 150/ 518] Overall Loss 2.897286 Objective Loss 2.897286 LR 0.000125 Time 0.208001 -2023-04-26 23:47:08,232 - Epoch: [144][ 200/ 518] Overall Loss 2.887360 Objective Loss 2.887360 LR 0.000125 Time 0.206747 -2023-04-26 23:47:18,302 - Epoch: [144][ 250/ 518] Overall Loss 2.888758 Objective Loss 2.888758 LR 0.000125 Time 0.205671 -2023-04-26 23:47:28,505 - Epoch: [144][ 300/ 518] Overall Loss 2.888178 Objective Loss 2.888178 LR 0.000125 Time 0.205397 -2023-04-26 23:47:38,680 - Epoch: [144][ 350/ 518] Overall Loss 2.884738 Objective Loss 2.884738 LR 0.000125 Time 0.205119 -2023-04-26 23:47:48,831 - Epoch: [144][ 400/ 518] Overall Loss 2.881456 Objective Loss 2.881456 LR 0.000125 Time 0.204853 -2023-04-26 23:47:58,976 - Epoch: [144][ 450/ 518] Overall Loss 2.877843 Objective Loss 2.877843 LR 0.000125 Time 0.204632 -2023-04-26 23:48:09,111 - Epoch: [144][ 500/ 518] Overall Loss 2.880208 Objective Loss 2.880208 LR 0.000125 Time 0.204436 -2023-04-26 23:48:12,640 - Epoch: [144][ 518/ 518] Overall Loss 2.881438 Objective Loss 2.881438 LR 0.000125 Time 0.204144 -2023-04-26 23:48:12,710 - --- validate (epoch=144)----------- -2023-04-26 23:48:12,710 - 4952 samples (32 per mini-batch) -2023-04-26 23:48:19,326 - Epoch: [144][ 50/ 155] Loss 3.167701 mAP 0.422944 -2023-04-26 23:48:25,666 - Epoch: [144][ 100/ 155] Loss 3.168999 mAP 0.427461 -2023-04-26 23:48:32,040 - Epoch: [144][ 150/ 155] Loss 3.172585 mAP 0.435169 -2023-04-26 23:48:32,625 - Epoch: [144][ 155/ 155] Loss 3.170249 mAP 0.437575 -2023-04-26 23:48:32,692 - ==> mAP: 0.43757 Loss: 3.170 - -2023-04-26 23:48:32,697 - ==> Best [mAP: 0.448273 vloss: 3.186491 Sparsity:0.00 Params: 2177088 on epoch: 139] -2023-04-26 23:48:32,697 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:48:32,734 - - -2023-04-26 23:48:32,734 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 
23:48:43,524 - Epoch: [145][ 50/ 518] Overall Loss 2.896976 Objective Loss 2.896976 LR 0.000125 Time 0.215734 -2023-04-26 23:48:53,751 - Epoch: [145][ 100/ 518] Overall Loss 2.890204 Objective Loss 2.890204 LR 0.000125 Time 0.210124 -2023-04-26 23:49:03,990 - Epoch: [145][ 150/ 518] Overall Loss 2.883764 Objective Loss 2.883764 LR 0.000125 Time 0.208335 -2023-04-26 23:49:14,113 - Epoch: [145][ 200/ 518] Overall Loss 2.882266 Objective Loss 2.882266 LR 0.000125 Time 0.206857 -2023-04-26 23:49:24,338 - Epoch: [145][ 250/ 518] Overall Loss 2.889557 Objective Loss 2.889557 LR 0.000125 Time 0.206378 -2023-04-26 23:49:34,506 - Epoch: [145][ 300/ 518] Overall Loss 2.888468 Objective Loss 2.888468 LR 0.000125 Time 0.205870 -2023-04-26 23:49:44,741 - Epoch: [145][ 350/ 518] Overall Loss 2.891097 Objective Loss 2.891097 LR 0.000125 Time 0.205698 -2023-04-26 23:49:54,888 - Epoch: [145][ 400/ 518] Overall Loss 2.895299 Objective Loss 2.895299 LR 0.000125 Time 0.205349 -2023-04-26 23:50:05,041 - Epoch: [145][ 450/ 518] Overall Loss 2.899003 Objective Loss 2.899003 LR 0.000125 Time 0.205090 -2023-04-26 23:50:15,238 - Epoch: [145][ 500/ 518] Overall Loss 2.897335 Objective Loss 2.897335 LR 0.000125 Time 0.204972 -2023-04-26 23:50:18,784 - Epoch: [145][ 518/ 518] Overall Loss 2.897996 Objective Loss 2.897996 LR 0.000125 Time 0.204695 -2023-04-26 23:50:18,857 - --- validate (epoch=145)----------- -2023-04-26 23:50:18,857 - 4952 samples (32 per mini-batch) -2023-04-26 23:50:25,560 - Epoch: [145][ 50/ 155] Loss 3.201118 mAP 0.440703 -2023-04-26 23:50:31,971 - Epoch: [145][ 100/ 155] Loss 3.190007 mAP 0.437428 -2023-04-26 23:50:38,297 - Epoch: [145][ 150/ 155] Loss 3.180871 mAP 0.436075 -2023-04-26 23:50:38,877 - Epoch: [145][ 155/ 155] Loss 3.179963 mAP 0.437717 -2023-04-26 23:50:38,936 - ==> mAP: 0.43772 Loss: 3.180 - -2023-04-26 23:50:38,940 - ==> Best [mAP: 0.448273 vloss: 3.186491 Sparsity:0.00 Params: 2177088 on epoch: 139] -2023-04-26 23:50:38,940 - Saving checkpoint to: 
logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:50:38,978 - - -2023-04-26 23:50:38,978 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:50:49,940 - Epoch: [146][ 50/ 518] Overall Loss 2.905931 Objective Loss 2.905931 LR 0.000125 Time 0.219179 -2023-04-26 23:51:00,086 - Epoch: [146][ 100/ 518] Overall Loss 2.887328 Objective Loss 2.887328 LR 0.000125 Time 0.211037 -2023-04-26 23:51:10,222 - Epoch: [146][ 150/ 518] Overall Loss 2.866246 Objective Loss 2.866246 LR 0.000125 Time 0.208253 -2023-04-26 23:51:20,488 - Epoch: [146][ 200/ 518] Overall Loss 2.867901 Objective Loss 2.867901 LR 0.000125 Time 0.207512 -2023-04-26 23:51:30,574 - Epoch: [146][ 250/ 518] Overall Loss 2.872727 Objective Loss 2.872727 LR 0.000125 Time 0.206348 -2023-04-26 23:51:40,752 - Epoch: [146][ 300/ 518] Overall Loss 2.880870 Objective Loss 2.880870 LR 0.000125 Time 0.205877 -2023-04-26 23:51:50,893 - Epoch: [146][ 350/ 518] Overall Loss 2.878139 Objective Loss 2.878139 LR 0.000125 Time 0.205436 -2023-04-26 23:52:01,070 - Epoch: [146][ 400/ 518] Overall Loss 2.877339 Objective Loss 2.877339 LR 0.000125 Time 0.205194 -2023-04-26 23:52:11,260 - Epoch: [146][ 450/ 518] Overall Loss 2.881913 Objective Loss 2.881913 LR 0.000125 Time 0.205035 -2023-04-26 23:52:21,406 - Epoch: [146][ 500/ 518] Overall Loss 2.884553 Objective Loss 2.884553 LR 0.000125 Time 0.204821 -2023-04-26 23:52:24,946 - Epoch: [146][ 518/ 518] Overall Loss 2.882600 Objective Loss 2.882600 LR 0.000125 Time 0.204536 -2023-04-26 23:52:25,017 - --- validate (epoch=146)----------- -2023-04-26 23:52:25,018 - 4952 samples (32 per mini-batch) -2023-04-26 23:52:31,777 - Epoch: [146][ 50/ 155] Loss 3.176014 mAP 0.440752 -2023-04-26 23:52:38,183 - Epoch: [146][ 100/ 155] Loss 3.170115 mAP 0.440259 -2023-04-26 23:52:44,583 - Epoch: [146][ 150/ 155] Loss 3.163430 mAP 0.443460 -2023-04-26 23:52:45,159 - Epoch: [146][ 155/ 155] Loss 3.166209 mAP 0.443818 -2023-04-26 23:52:45,225 - ==> mAP: 0.44382 Loss: 3.166 - 
-2023-04-26 23:52:45,229 - ==> Best [mAP: 0.448273 vloss: 3.186491 Sparsity:0.00 Params: 2177088 on epoch: 139] -2023-04-26 23:52:45,229 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:52:45,267 - - -2023-04-26 23:52:45,267 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:52:56,181 - Epoch: [147][ 50/ 518] Overall Loss 2.864511 Objective Loss 2.864511 LR 0.000125 Time 0.218226 -2023-04-26 23:53:06,238 - Epoch: [147][ 100/ 518] Overall Loss 2.867423 Objective Loss 2.867423 LR 0.000125 Time 0.209657 -2023-04-26 23:53:16,372 - Epoch: [147][ 150/ 518] Overall Loss 2.869349 Objective Loss 2.869349 LR 0.000125 Time 0.207326 -2023-04-26 23:53:26,565 - Epoch: [147][ 200/ 518] Overall Loss 2.879709 Objective Loss 2.879709 LR 0.000125 Time 0.206452 -2023-04-26 23:53:36,733 - Epoch: [147][ 250/ 518] Overall Loss 2.885629 Objective Loss 2.885629 LR 0.000125 Time 0.205824 -2023-04-26 23:53:46,876 - Epoch: [147][ 300/ 518] Overall Loss 2.877354 Objective Loss 2.877354 LR 0.000125 Time 0.205325 -2023-04-26 23:53:57,123 - Epoch: [147][ 350/ 518] Overall Loss 2.877451 Objective Loss 2.877451 LR 0.000125 Time 0.205265 -2023-04-26 23:54:07,319 - Epoch: [147][ 400/ 518] Overall Loss 2.881916 Objective Loss 2.881916 LR 0.000125 Time 0.205092 -2023-04-26 23:54:17,619 - Epoch: [147][ 450/ 518] Overall Loss 2.883229 Objective Loss 2.883229 LR 0.000125 Time 0.205190 -2023-04-26 23:54:27,752 - Epoch: [147][ 500/ 518] Overall Loss 2.882732 Objective Loss 2.882732 LR 0.000125 Time 0.204934 -2023-04-26 23:54:31,325 - Epoch: [147][ 518/ 518] Overall Loss 2.884835 Objective Loss 2.884835 LR 0.000125 Time 0.204709 -2023-04-26 23:54:31,398 - --- validate (epoch=147)----------- -2023-04-26 23:54:31,398 - 4952 samples (32 per mini-batch) -2023-04-26 23:54:38,144 - Epoch: [147][ 50/ 155] Loss 3.207219 mAP 0.459640 -2023-04-26 23:54:44,557 - Epoch: [147][ 100/ 155] Loss 3.189772 mAP 0.451736 -2023-04-26 23:54:50,919 - Epoch: [147][ 150/ 155] Loss 
3.171729 mAP 0.451290 -2023-04-26 23:54:51,505 - Epoch: [147][ 155/ 155] Loss 3.166752 mAP 0.452634 -2023-04-26 23:54:51,572 - ==> mAP: 0.45263 Loss: 3.167 - -2023-04-26 23:54:51,575 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-26 23:54:51,575 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:54:51,625 - - -2023-04-26 23:54:51,626 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:55:02,529 - Epoch: [148][ 50/ 518] Overall Loss 2.901927 Objective Loss 2.901927 LR 0.000125 Time 0.218020 -2023-04-26 23:55:12,729 - Epoch: [148][ 100/ 518] Overall Loss 2.883188 Objective Loss 2.883188 LR 0.000125 Time 0.210993 -2023-04-26 23:55:22,886 - Epoch: [148][ 150/ 518] Overall Loss 2.878624 Objective Loss 2.878624 LR 0.000125 Time 0.208365 -2023-04-26 23:55:33,079 - Epoch: [148][ 200/ 518] Overall Loss 2.880564 Objective Loss 2.880564 LR 0.000125 Time 0.207230 -2023-04-26 23:55:43,188 - Epoch: [148][ 250/ 518] Overall Loss 2.883250 Objective Loss 2.883250 LR 0.000125 Time 0.206212 -2023-04-26 23:55:53,469 - Epoch: [148][ 300/ 518] Overall Loss 2.879393 Objective Loss 2.879393 LR 0.000125 Time 0.206110 -2023-04-26 23:56:03,568 - Epoch: [148][ 350/ 518] Overall Loss 2.880160 Objective Loss 2.880160 LR 0.000125 Time 0.205513 -2023-04-26 23:56:13,804 - Epoch: [148][ 400/ 518] Overall Loss 2.882914 Objective Loss 2.882914 LR 0.000125 Time 0.205411 -2023-04-26 23:56:23,912 - Epoch: [148][ 450/ 518] Overall Loss 2.883456 Objective Loss 2.883456 LR 0.000125 Time 0.205046 -2023-04-26 23:56:34,053 - Epoch: [148][ 500/ 518] Overall Loss 2.883821 Objective Loss 2.883821 LR 0.000125 Time 0.204820 -2023-04-26 23:56:37,594 - Epoch: [148][ 518/ 518] Overall Loss 2.883758 Objective Loss 2.883758 LR 0.000125 Time 0.204538 -2023-04-26 23:56:37,666 - --- validate (epoch=148)----------- -2023-04-26 23:56:37,667 - 4952 samples (32 per mini-batch) -2023-04-26 23:56:44,426 - Epoch: [148][ 50/ 155] Loss 
3.179925 mAP 0.443432 -2023-04-26 23:56:50,840 - Epoch: [148][ 100/ 155] Loss 3.186947 mAP 0.447412 -2023-04-26 23:56:57,173 - Epoch: [148][ 150/ 155] Loss 3.177256 mAP 0.444664 -2023-04-26 23:56:57,742 - Epoch: [148][ 155/ 155] Loss 3.175674 mAP 0.445015 -2023-04-26 23:56:57,813 - ==> mAP: 0.44502 Loss: 3.176 - -2023-04-26 23:56:57,816 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-26 23:56:57,817 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:56:57,854 - - -2023-04-26 23:56:57,854 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:57:08,956 - Epoch: [149][ 50/ 518] Overall Loss 2.913762 Objective Loss 2.913762 LR 0.000125 Time 0.221979 -2023-04-26 23:57:19,175 - Epoch: [149][ 100/ 518] Overall Loss 2.899700 Objective Loss 2.899700 LR 0.000125 Time 0.213163 -2023-04-26 23:57:29,316 - Epoch: [149][ 150/ 518] Overall Loss 2.887263 Objective Loss 2.887263 LR 0.000125 Time 0.209709 -2023-04-26 23:57:39,481 - Epoch: [149][ 200/ 518] Overall Loss 2.892838 Objective Loss 2.892838 LR 0.000125 Time 0.208099 -2023-04-26 23:57:49,676 - Epoch: [149][ 250/ 518] Overall Loss 2.890130 Objective Loss 2.890130 LR 0.000125 Time 0.207253 -2023-04-26 23:57:59,819 - Epoch: [149][ 300/ 518] Overall Loss 2.886694 Objective Loss 2.886694 LR 0.000125 Time 0.206514 -2023-04-26 23:58:09,963 - Epoch: [149][ 350/ 518] Overall Loss 2.883478 Objective Loss 2.883478 LR 0.000125 Time 0.205991 -2023-04-26 23:58:20,080 - Epoch: [149][ 400/ 518] Overall Loss 2.881678 Objective Loss 2.881678 LR 0.000125 Time 0.205531 -2023-04-26 23:58:30,199 - Epoch: [149][ 450/ 518] Overall Loss 2.884703 Objective Loss 2.884703 LR 0.000125 Time 0.205176 -2023-04-26 23:58:40,391 - Epoch: [149][ 500/ 518] Overall Loss 2.885953 Objective Loss 2.885953 LR 0.000125 Time 0.205040 -2023-04-26 23:58:43,943 - Epoch: [149][ 518/ 518] Overall Loss 2.891217 Objective Loss 2.891217 LR 0.000125 Time 0.204771 -2023-04-26 23:58:44,015 
- --- validate (epoch=149)----------- -2023-04-26 23:58:44,015 - 4952 samples (32 per mini-batch) -2023-04-26 23:58:50,734 - Epoch: [149][ 50/ 155] Loss 3.155173 mAP 0.459914 -2023-04-26 23:58:57,115 - Epoch: [149][ 100/ 155] Loss 3.147307 mAP 0.456810 -2023-04-26 23:59:03,493 - Epoch: [149][ 150/ 155] Loss 3.159855 mAP 0.450226 -2023-04-26 23:59:04,063 - Epoch: [149][ 155/ 155] Loss 3.160437 mAP 0.448896 -2023-04-26 23:59:04,142 - ==> mAP: 0.44890 Loss: 3.160 - -2023-04-26 23:59:04,145 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-26 23:59:04,146 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-26 23:59:04,183 - - -2023-04-26 23:59:04,183 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-26 23:59:15,008 - Epoch: [150][ 50/ 518] Overall Loss 2.862238 Objective Loss 2.862238 LR 0.000031 Time 0.216426 -2023-04-26 23:59:25,126 - Epoch: [150][ 100/ 518] Overall Loss 2.855522 Objective Loss 2.855522 LR 0.000031 Time 0.209382 -2023-04-26 23:59:35,372 - Epoch: [150][ 150/ 518] Overall Loss 2.847380 Objective Loss 2.847380 LR 0.000031 Time 0.207881 -2023-04-26 23:59:45,492 - Epoch: [150][ 200/ 518] Overall Loss 2.867577 Objective Loss 2.867577 LR 0.000031 Time 0.206504 -2023-04-26 23:59:55,552 - Epoch: [150][ 250/ 518] Overall Loss 2.869510 Objective Loss 2.869510 LR 0.000031 Time 0.205438 -2023-04-27 00:00:05,769 - Epoch: [150][ 300/ 518] Overall Loss 2.865114 Objective Loss 2.865114 LR 0.000031 Time 0.205248 -2023-04-27 00:00:15,902 - Epoch: [150][ 350/ 518] Overall Loss 2.860736 Objective Loss 2.860736 LR 0.000031 Time 0.204873 -2023-04-27 00:00:26,018 - Epoch: [150][ 400/ 518] Overall Loss 2.863382 Objective Loss 2.863382 LR 0.000031 Time 0.204549 -2023-04-27 00:00:36,165 - Epoch: [150][ 450/ 518] Overall Loss 2.866192 Objective Loss 2.866192 LR 0.000031 Time 0.204368 -2023-04-27 00:00:46,332 - Epoch: [150][ 500/ 518] Overall Loss 2.865029 Objective Loss 2.865029 LR 0.000031 Time 
0.204261 -2023-04-27 00:00:49,822 - Epoch: [150][ 518/ 518] Overall Loss 2.868395 Objective Loss 2.868395 LR 0.000031 Time 0.203901 -2023-04-27 00:00:49,895 - --- validate (epoch=150)----------- -2023-04-27 00:00:49,895 - 4952 samples (32 per mini-batch) -2023-04-27 00:00:56,567 - Epoch: [150][ 50/ 155] Loss 3.145302 mAP 0.436955 -2023-04-27 00:01:02,935 - Epoch: [150][ 100/ 155] Loss 3.153357 mAP 0.450051 -2023-04-27 00:01:09,349 - Epoch: [150][ 150/ 155] Loss 3.151659 mAP 0.452531 -2023-04-27 00:01:09,911 - Epoch: [150][ 155/ 155] Loss 3.158624 mAP 0.450083 -2023-04-27 00:01:09,976 - ==> mAP: 0.45008 Loss: 3.159 - -2023-04-27 00:01:09,980 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-27 00:01:09,980 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:01:10,018 - - -2023-04-27 00:01:10,018 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:01:20,856 - Epoch: [151][ 50/ 518] Overall Loss 2.905828 Objective Loss 2.905828 LR 0.000031 Time 0.216704 -2023-04-27 00:01:31,042 - Epoch: [151][ 100/ 518] Overall Loss 2.895686 Objective Loss 2.895686 LR 0.000031 Time 0.210198 -2023-04-27 00:01:41,245 - Epoch: [151][ 150/ 518] Overall Loss 2.873868 Objective Loss 2.873868 LR 0.000031 Time 0.208138 -2023-04-27 00:01:51,415 - Epoch: [151][ 200/ 518] Overall Loss 2.871029 Objective Loss 2.871029 LR 0.000031 Time 0.206949 -2023-04-27 00:02:01,585 - Epoch: [151][ 250/ 518] Overall Loss 2.864720 Objective Loss 2.864720 LR 0.000031 Time 0.206233 -2023-04-27 00:02:11,723 - Epoch: [151][ 300/ 518] Overall Loss 2.855685 Objective Loss 2.855685 LR 0.000031 Time 0.205648 -2023-04-27 00:02:21,860 - Epoch: [151][ 350/ 518] Overall Loss 2.854722 Objective Loss 2.854722 LR 0.000031 Time 0.205228 -2023-04-27 00:02:32,080 - Epoch: [151][ 400/ 518] Overall Loss 2.853272 Objective Loss 2.853272 LR 0.000031 Time 0.205119 -2023-04-27 00:02:42,375 - Epoch: [151][ 450/ 518] Overall Loss 2.853580 Objective 
Loss 2.853580 LR 0.000031 Time 0.205203 -2023-04-27 00:02:52,525 - Epoch: [151][ 500/ 518] Overall Loss 2.857822 Objective Loss 2.857822 LR 0.000031 Time 0.204980 -2023-04-27 00:02:56,100 - Epoch: [151][ 518/ 518] Overall Loss 2.854862 Objective Loss 2.854862 LR 0.000031 Time 0.204757 -2023-04-27 00:02:56,170 - --- validate (epoch=151)----------- -2023-04-27 00:02:56,171 - 4952 samples (32 per mini-batch) -2023-04-27 00:03:02,918 - Epoch: [151][ 50/ 155] Loss 3.155741 mAP 0.453053 -2023-04-27 00:03:09,393 - Epoch: [151][ 100/ 155] Loss 3.157797 mAP 0.450122 -2023-04-27 00:03:15,786 - Epoch: [151][ 150/ 155] Loss 3.154181 mAP 0.446455 -2023-04-27 00:03:16,356 - Epoch: [151][ 155/ 155] Loss 3.155173 mAP 0.445101 -2023-04-27 00:03:16,439 - ==> mAP: 0.44510 Loss: 3.155 - -2023-04-27 00:03:16,443 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-27 00:03:16,443 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:03:16,480 - - -2023-04-27 00:03:16,480 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:03:27,552 - Epoch: [152][ 50/ 518] Overall Loss 2.865492 Objective Loss 2.865492 LR 0.000031 Time 0.221384 -2023-04-27 00:03:37,673 - Epoch: [152][ 100/ 518] Overall Loss 2.872873 Objective Loss 2.872873 LR 0.000031 Time 0.211884 -2023-04-27 00:03:47,864 - Epoch: [152][ 150/ 518] Overall Loss 2.856630 Objective Loss 2.856630 LR 0.000031 Time 0.209186 -2023-04-27 00:03:57,988 - Epoch: [152][ 200/ 518] Overall Loss 2.850954 Objective Loss 2.850954 LR 0.000031 Time 0.207497 -2023-04-27 00:04:08,194 - Epoch: [152][ 250/ 518] Overall Loss 2.854195 Objective Loss 2.854195 LR 0.000031 Time 0.206815 -2023-04-27 00:04:18,314 - Epoch: [152][ 300/ 518] Overall Loss 2.857736 Objective Loss 2.857736 LR 0.000031 Time 0.206074 -2023-04-27 00:04:28,445 - Epoch: [152][ 350/ 518] Overall Loss 2.856356 Objective Loss 2.856356 LR 0.000031 Time 0.205577 -2023-04-27 00:04:38,627 - Epoch: [152][ 400/ 518] 
Overall Loss 2.859385 Objective Loss 2.859385 LR 0.000031 Time 0.205331 -2023-04-27 00:04:48,810 - Epoch: [152][ 450/ 518] Overall Loss 2.861490 Objective Loss 2.861490 LR 0.000031 Time 0.205142 -2023-04-27 00:04:58,914 - Epoch: [152][ 500/ 518] Overall Loss 2.866743 Objective Loss 2.866743 LR 0.000031 Time 0.204832 -2023-04-27 00:05:02,490 - Epoch: [152][ 518/ 518] Overall Loss 2.866530 Objective Loss 2.866530 LR 0.000031 Time 0.204617 -2023-04-27 00:05:02,561 - --- validate (epoch=152)----------- -2023-04-27 00:05:02,562 - 4952 samples (32 per mini-batch) -2023-04-27 00:05:09,299 - Epoch: [152][ 50/ 155] Loss 3.160888 mAP 0.441504 -2023-04-27 00:05:15,737 - Epoch: [152][ 100/ 155] Loss 3.147758 mAP 0.449812 -2023-04-27 00:05:22,072 - Epoch: [152][ 150/ 155] Loss 3.156457 mAP 0.449706 -2023-04-27 00:05:22,645 - Epoch: [152][ 155/ 155] Loss 3.155320 mAP 0.450545 -2023-04-27 00:05:22,711 - ==> mAP: 0.45054 Loss: 3.155 - -2023-04-27 00:05:22,715 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-27 00:05:22,715 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:05:22,752 - - -2023-04-27 00:05:22,752 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:05:33,675 - Epoch: [153][ 50/ 518] Overall Loss 2.750504 Objective Loss 2.750504 LR 0.000031 Time 0.218391 -2023-04-27 00:05:43,793 - Epoch: [153][ 100/ 518] Overall Loss 2.839751 Objective Loss 2.839751 LR 0.000031 Time 0.210360 -2023-04-27 00:05:53,988 - Epoch: [153][ 150/ 518] Overall Loss 2.843288 Objective Loss 2.843288 LR 0.000031 Time 0.208195 -2023-04-27 00:06:04,176 - Epoch: [153][ 200/ 518] Overall Loss 2.851549 Objective Loss 2.851549 LR 0.000031 Time 0.207080 -2023-04-27 00:06:14,308 - Epoch: [153][ 250/ 518] Overall Loss 2.858437 Objective Loss 2.858437 LR 0.000031 Time 0.206186 -2023-04-27 00:06:24,552 - Epoch: [153][ 300/ 518] Overall Loss 2.859086 Objective Loss 2.859086 LR 0.000031 Time 0.205961 -2023-04-27 
00:06:34,716 - Epoch: [153][ 350/ 518] Overall Loss 2.861413 Objective Loss 2.861413 LR 0.000031 Time 0.205574 -2023-04-27 00:06:44,912 - Epoch: [153][ 400/ 518] Overall Loss 2.863841 Objective Loss 2.863841 LR 0.000031 Time 0.205363 -2023-04-27 00:06:55,085 - Epoch: [153][ 450/ 518] Overall Loss 2.862602 Objective Loss 2.862602 LR 0.000031 Time 0.205148 -2023-04-27 00:07:05,284 - Epoch: [153][ 500/ 518] Overall Loss 2.862408 Objective Loss 2.862408 LR 0.000031 Time 0.205028 -2023-04-27 00:07:08,836 - Epoch: [153][ 518/ 518] Overall Loss 2.864821 Objective Loss 2.864821 LR 0.000031 Time 0.204759 -2023-04-27 00:07:08,908 - --- validate (epoch=153)----------- -2023-04-27 00:07:08,908 - 4952 samples (32 per mini-batch) -2023-04-27 00:07:15,640 - Epoch: [153][ 50/ 155] Loss 3.155023 mAP 0.448841 -2023-04-27 00:07:22,031 - Epoch: [153][ 100/ 155] Loss 3.165727 mAP 0.448886 -2023-04-27 00:07:28,371 - Epoch: [153][ 150/ 155] Loss 3.159222 mAP 0.449860 -2023-04-27 00:07:28,943 - Epoch: [153][ 155/ 155] Loss 3.154548 mAP 0.451284 -2023-04-27 00:07:29,007 - ==> mAP: 0.45128 Loss: 3.155 - -2023-04-27 00:07:29,011 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-27 00:07:29,011 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:07:29,048 - - -2023-04-27 00:07:29,048 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:07:40,006 - Epoch: [154][ 50/ 518] Overall Loss 2.827449 Objective Loss 2.827449 LR 0.000031 Time 0.219094 -2023-04-27 00:07:50,167 - Epoch: [154][ 100/ 518] Overall Loss 2.847826 Objective Loss 2.847826 LR 0.000031 Time 0.211139 -2023-04-27 00:08:00,262 - Epoch: [154][ 150/ 518] Overall Loss 2.834072 Objective Loss 2.834072 LR 0.000031 Time 0.208051 -2023-04-27 00:08:10,476 - Epoch: [154][ 200/ 518] Overall Loss 2.843022 Objective Loss 2.843022 LR 0.000031 Time 0.207101 -2023-04-27 00:08:20,662 - Epoch: [154][ 250/ 518] Overall Loss 2.849187 Objective Loss 2.849187 LR 
0.000031 Time 0.206416 -2023-04-27 00:08:30,839 - Epoch: [154][ 300/ 518] Overall Loss 2.845864 Objective Loss 2.845864 LR 0.000031 Time 0.205932 -2023-04-27 00:08:40,965 - Epoch: [154][ 350/ 518] Overall Loss 2.845608 Objective Loss 2.845608 LR 0.000031 Time 0.205439 -2023-04-27 00:08:51,157 - Epoch: [154][ 400/ 518] Overall Loss 2.847182 Objective Loss 2.847182 LR 0.000031 Time 0.205235 -2023-04-27 00:09:01,384 - Epoch: [154][ 450/ 518] Overall Loss 2.845374 Objective Loss 2.845374 LR 0.000031 Time 0.205155 -2023-04-27 00:09:11,482 - Epoch: [154][ 500/ 518] Overall Loss 2.849932 Objective Loss 2.849932 LR 0.000031 Time 0.204831 -2023-04-27 00:09:14,981 - Epoch: [154][ 518/ 518] Overall Loss 2.847845 Objective Loss 2.847845 LR 0.000031 Time 0.204467 -2023-04-27 00:09:15,052 - --- validate (epoch=154)----------- -2023-04-27 00:09:15,052 - 4952 samples (32 per mini-batch) -2023-04-27 00:09:21,798 - Epoch: [154][ 50/ 155] Loss 3.132762 mAP 0.449348 -2023-04-27 00:09:28,144 - Epoch: [154][ 100/ 155] Loss 3.136802 mAP 0.444984 -2023-04-27 00:09:34,479 - Epoch: [154][ 150/ 155] Loss 3.154044 mAP 0.447157 -2023-04-27 00:09:35,051 - Epoch: [154][ 155/ 155] Loss 3.159106 mAP 0.447774 -2023-04-27 00:09:35,123 - ==> mAP: 0.44777 Loss: 3.159 - -2023-04-27 00:09:35,127 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-27 00:09:35,127 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:09:35,165 - - -2023-04-27 00:09:35,165 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:09:46,060 - Epoch: [155][ 50/ 518] Overall Loss 2.832081 Objective Loss 2.832081 LR 0.000031 Time 0.217845 -2023-04-27 00:09:56,283 - Epoch: [155][ 100/ 518] Overall Loss 2.857264 Objective Loss 2.857264 LR 0.000031 Time 0.211142 -2023-04-27 00:10:06,464 - Epoch: [155][ 150/ 518] Overall Loss 2.858422 Objective Loss 2.858422 LR 0.000031 Time 0.208620 -2023-04-27 00:10:16,615 - Epoch: [155][ 200/ 518] Overall Loss 
2.859262 Objective Loss 2.859262 LR 0.000031 Time 0.207213 -2023-04-27 00:10:26,783 - Epoch: [155][ 250/ 518] Overall Loss 2.865163 Objective Loss 2.865163 LR 0.000031 Time 0.206437 -2023-04-27 00:10:36,940 - Epoch: [155][ 300/ 518] Overall Loss 2.865458 Objective Loss 2.865458 LR 0.000031 Time 0.205881 -2023-04-27 00:10:47,021 - Epoch: [155][ 350/ 518] Overall Loss 2.858682 Objective Loss 2.858682 LR 0.000031 Time 0.205269 -2023-04-27 00:10:57,219 - Epoch: [155][ 400/ 518] Overall Loss 2.865378 Objective Loss 2.865378 LR 0.000031 Time 0.205100 -2023-04-27 00:11:07,392 - Epoch: [155][ 450/ 518] Overall Loss 2.861244 Objective Loss 2.861244 LR 0.000031 Time 0.204913 -2023-04-27 00:11:17,559 - Epoch: [155][ 500/ 518] Overall Loss 2.861629 Objective Loss 2.861629 LR 0.000031 Time 0.204753 -2023-04-27 00:11:21,079 - Epoch: [155][ 518/ 518] Overall Loss 2.864626 Objective Loss 2.864626 LR 0.000031 Time 0.204432 -2023-04-27 00:11:21,151 - --- validate (epoch=155)----------- -2023-04-27 00:11:21,152 - 4952 samples (32 per mini-batch) -2023-04-27 00:11:27,907 - Epoch: [155][ 50/ 155] Loss 3.136369 mAP 0.449966 -2023-04-27 00:11:34,245 - Epoch: [155][ 100/ 155] Loss 3.160944 mAP 0.446453 -2023-04-27 00:11:40,635 - Epoch: [155][ 150/ 155] Loss 3.165235 mAP 0.444553 -2023-04-27 00:11:41,196 - Epoch: [155][ 155/ 155] Loss 3.161665 mAP 0.444145 -2023-04-27 00:11:41,266 - ==> mAP: 0.44414 Loss: 3.162 - -2023-04-27 00:11:41,270 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-27 00:11:41,270 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:11:41,306 - - -2023-04-27 00:11:41,307 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:11:52,191 - Epoch: [156][ 50/ 518] Overall Loss 2.839193 Objective Loss 2.839193 LR 0.000031 Time 0.217636 -2023-04-27 00:12:02,411 - Epoch: [156][ 100/ 518] Overall Loss 2.856150 Objective Loss 2.856150 LR 0.000031 Time 0.210999 -2023-04-27 00:12:12,637 - Epoch: 
[156][ 150/ 518] Overall Loss 2.842504 Objective Loss 2.842504 LR 0.000031 Time 0.208828 -2023-04-27 00:12:22,798 - Epoch: [156][ 200/ 518] Overall Loss 2.853985 Objective Loss 2.853985 LR 0.000031 Time 0.207417 -2023-04-27 00:12:32,956 - Epoch: [156][ 250/ 518] Overall Loss 2.852790 Objective Loss 2.852790 LR 0.000031 Time 0.206560 -2023-04-27 00:12:43,115 - Epoch: [156][ 300/ 518] Overall Loss 2.855700 Objective Loss 2.855700 LR 0.000031 Time 0.205991 -2023-04-27 00:12:53,276 - Epoch: [156][ 350/ 518] Overall Loss 2.856010 Objective Loss 2.856010 LR 0.000031 Time 0.205591 -2023-04-27 00:13:03,523 - Epoch: [156][ 400/ 518] Overall Loss 2.855968 Objective Loss 2.855968 LR 0.000031 Time 0.205506 -2023-04-27 00:13:13,660 - Epoch: [156][ 450/ 518] Overall Loss 2.855650 Objective Loss 2.855650 LR 0.000031 Time 0.205195 -2023-04-27 00:13:23,808 - Epoch: [156][ 500/ 518] Overall Loss 2.853758 Objective Loss 2.853758 LR 0.000031 Time 0.204968 -2023-04-27 00:13:27,323 - Epoch: [156][ 518/ 518] Overall Loss 2.852859 Objective Loss 2.852859 LR 0.000031 Time 0.204630 -2023-04-27 00:13:27,395 - --- validate (epoch=156)----------- -2023-04-27 00:13:27,396 - 4952 samples (32 per mini-batch) -2023-04-27 00:13:34,160 - Epoch: [156][ 50/ 155] Loss 3.191339 mAP 0.433896 -2023-04-27 00:13:40,552 - Epoch: [156][ 100/ 155] Loss 3.169294 mAP 0.435512 -2023-04-27 00:13:46,911 - Epoch: [156][ 150/ 155] Loss 3.161826 mAP 0.446374 -2023-04-27 00:13:47,487 - Epoch: [156][ 155/ 155] Loss 3.156768 mAP 0.447201 -2023-04-27 00:13:47,550 - ==> mAP: 0.44720 Loss: 3.157 - -2023-04-27 00:13:47,553 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-27 00:13:47,553 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:13:47,590 - - -2023-04-27 00:13:47,590 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:13:58,503 - Epoch: [157][ 50/ 518] Overall Loss 2.878328 Objective Loss 2.878328 LR 0.000031 Time 0.218209 
-2023-04-27 00:14:08,617 - Epoch: [157][ 100/ 518] Overall Loss 2.849456 Objective Loss 2.849456 LR 0.000031 Time 0.210221 -2023-04-27 00:14:18,741 - Epoch: [157][ 150/ 518] Overall Loss 2.836358 Objective Loss 2.836358 LR 0.000031 Time 0.207635 -2023-04-27 00:14:28,932 - Epoch: [157][ 200/ 518] Overall Loss 2.849585 Objective Loss 2.849585 LR 0.000031 Time 0.206669 -2023-04-27 00:14:39,119 - Epoch: [157][ 250/ 518] Overall Loss 2.855950 Objective Loss 2.855950 LR 0.000031 Time 0.206077 -2023-04-27 00:14:49,276 - Epoch: [157][ 300/ 518] Overall Loss 2.854037 Objective Loss 2.854037 LR 0.000031 Time 0.205582 -2023-04-27 00:14:59,417 - Epoch: [157][ 350/ 518] Overall Loss 2.851163 Objective Loss 2.851163 LR 0.000031 Time 0.205183 -2023-04-27 00:15:09,518 - Epoch: [157][ 400/ 518] Overall Loss 2.858830 Objective Loss 2.858830 LR 0.000031 Time 0.204785 -2023-04-27 00:15:19,662 - Epoch: [157][ 450/ 518] Overall Loss 2.852975 Objective Loss 2.852975 LR 0.000031 Time 0.204568 -2023-04-27 00:15:29,822 - Epoch: [157][ 500/ 518] Overall Loss 2.849682 Objective Loss 2.849682 LR 0.000031 Time 0.204430 -2023-04-27 00:15:33,338 - Epoch: [157][ 518/ 518] Overall Loss 2.850612 Objective Loss 2.850612 LR 0.000031 Time 0.204112 -2023-04-27 00:15:33,411 - --- validate (epoch=157)----------- -2023-04-27 00:15:33,411 - 4952 samples (32 per mini-batch) -2023-04-27 00:15:40,192 - Epoch: [157][ 50/ 155] Loss 3.152102 mAP 0.448333 -2023-04-27 00:15:46,563 - Epoch: [157][ 100/ 155] Loss 3.144684 mAP 0.453296 -2023-04-27 00:15:52,956 - Epoch: [157][ 150/ 155] Loss 3.162286 mAP 0.444936 -2023-04-27 00:15:53,515 - Epoch: [157][ 155/ 155] Loss 3.158339 mAP 0.445789 -2023-04-27 00:15:53,584 - ==> mAP: 0.44579 Loss: 3.158 - -2023-04-27 00:15:53,587 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-27 00:15:53,588 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:15:53,625 - - -2023-04-27 00:15:53,625 - Training epoch: 
16551 samples (32 per mini-batch) -2023-04-27 00:16:04,561 - Epoch: [158][ 50/ 518] Overall Loss 2.860401 Objective Loss 2.860401 LR 0.000031 Time 0.218667 -2023-04-27 00:16:14,688 - Epoch: [158][ 100/ 518] Overall Loss 2.861722 Objective Loss 2.861722 LR 0.000031 Time 0.210583 -2023-04-27 00:16:24,829 - Epoch: [158][ 150/ 518] Overall Loss 2.877528 Objective Loss 2.877528 LR 0.000031 Time 0.207984 -2023-04-27 00:16:34,984 - Epoch: [158][ 200/ 518] Overall Loss 2.865730 Objective Loss 2.865730 LR 0.000031 Time 0.206756 -2023-04-27 00:16:45,101 - Epoch: [158][ 250/ 518] Overall Loss 2.860225 Objective Loss 2.860225 LR 0.000031 Time 0.205866 -2023-04-27 00:16:55,227 - Epoch: [158][ 300/ 518] Overall Loss 2.857134 Objective Loss 2.857134 LR 0.000031 Time 0.205304 -2023-04-27 00:17:05,311 - Epoch: [158][ 350/ 518] Overall Loss 2.857141 Objective Loss 2.857141 LR 0.000031 Time 0.204780 -2023-04-27 00:17:15,544 - Epoch: [158][ 400/ 518] Overall Loss 2.852238 Objective Loss 2.852238 LR 0.000031 Time 0.204762 -2023-04-27 00:17:25,693 - Epoch: [158][ 450/ 518] Overall Loss 2.852865 Objective Loss 2.852865 LR 0.000031 Time 0.204560 -2023-04-27 00:17:35,805 - Epoch: [158][ 500/ 518] Overall Loss 2.853772 Objective Loss 2.853772 LR 0.000031 Time 0.204324 -2023-04-27 00:17:39,317 - Epoch: [158][ 518/ 518] Overall Loss 2.856359 Objective Loss 2.856359 LR 0.000031 Time 0.204003 -2023-04-27 00:17:39,389 - --- validate (epoch=158)----------- -2023-04-27 00:17:39,389 - 4952 samples (32 per mini-batch) -2023-04-27 00:17:46,150 - Epoch: [158][ 50/ 155] Loss 3.177163 mAP 0.441906 -2023-04-27 00:17:52,522 - Epoch: [158][ 100/ 155] Loss 3.150256 mAP 0.443979 -2023-04-27 00:17:58,906 - Epoch: [158][ 150/ 155] Loss 3.148855 mAP 0.443603 -2023-04-27 00:17:59,472 - Epoch: [158][ 155/ 155] Loss 3.153740 mAP 0.443867 -2023-04-27 00:17:59,543 - ==> mAP: 0.44387 Loss: 3.154 - -2023-04-27 00:17:59,547 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] 
-2023-04-27 00:17:59,547 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:17:59,584 - - -2023-04-27 00:17:59,584 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:18:10,554 - Epoch: [159][ 50/ 518] Overall Loss 2.879464 Objective Loss 2.879464 LR 0.000031 Time 0.219341 -2023-04-27 00:18:20,639 - Epoch: [159][ 100/ 518] Overall Loss 2.876903 Objective Loss 2.876903 LR 0.000031 Time 0.210505 -2023-04-27 00:18:30,789 - Epoch: [159][ 150/ 518] Overall Loss 2.858082 Objective Loss 2.858082 LR 0.000031 Time 0.207990 -2023-04-27 00:18:40,942 - Epoch: [159][ 200/ 518] Overall Loss 2.849643 Objective Loss 2.849643 LR 0.000031 Time 0.206752 -2023-04-27 00:18:51,060 - Epoch: [159][ 250/ 518] Overall Loss 2.862305 Objective Loss 2.862305 LR 0.000031 Time 0.205867 -2023-04-27 00:19:01,242 - Epoch: [159][ 300/ 518] Overall Loss 2.865535 Objective Loss 2.865535 LR 0.000031 Time 0.205489 -2023-04-27 00:19:11,446 - Epoch: [159][ 350/ 518] Overall Loss 2.867448 Objective Loss 2.867448 LR 0.000031 Time 0.205283 -2023-04-27 00:19:21,563 - Epoch: [159][ 400/ 518] Overall Loss 2.869771 Objective Loss 2.869771 LR 0.000031 Time 0.204910 -2023-04-27 00:19:31,745 - Epoch: [159][ 450/ 518] Overall Loss 2.873826 Objective Loss 2.873826 LR 0.000031 Time 0.204767 -2023-04-27 00:19:41,851 - Epoch: [159][ 500/ 518] Overall Loss 2.872849 Objective Loss 2.872849 LR 0.000031 Time 0.204499 -2023-04-27 00:19:45,376 - Epoch: [159][ 518/ 518] Overall Loss 2.871776 Objective Loss 2.871776 LR 0.000031 Time 0.204196 -2023-04-27 00:19:45,447 - --- validate (epoch=159)----------- -2023-04-27 00:19:45,447 - 4952 samples (32 per mini-batch) -2023-04-27 00:19:52,141 - Epoch: [159][ 50/ 155] Loss 3.191313 mAP 0.435922 -2023-04-27 00:19:58,610 - Epoch: [159][ 100/ 155] Loss 3.166448 mAP 0.446666 -2023-04-27 00:20:04,980 - Epoch: [159][ 150/ 155] Loss 3.163580 mAP 0.451067 -2023-04-27 00:20:05,554 - Epoch: [159][ 155/ 155] Loss 3.161166 mAP 0.451449 -2023-04-27 
00:20:05,637 - ==> mAP: 0.45145 Loss: 3.161 - -2023-04-27 00:20:05,641 - ==> Best [mAP: 0.452634 vloss: 3.166752 Sparsity:0.00 Params: 2177088 on epoch: 147] -2023-04-27 00:20:05,641 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:20:05,679 - - -2023-04-27 00:20:05,679 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:20:16,620 - Epoch: [160][ 50/ 518] Overall Loss 2.911208 Objective Loss 2.911208 LR 0.000031 Time 0.218777 -2023-04-27 00:20:26,843 - Epoch: [160][ 100/ 518] Overall Loss 2.867972 Objective Loss 2.867972 LR 0.000031 Time 0.211595 -2023-04-27 00:20:36,990 - Epoch: [160][ 150/ 518] Overall Loss 2.884520 Objective Loss 2.884520 LR 0.000031 Time 0.208698 -2023-04-27 00:20:47,123 - Epoch: [160][ 200/ 518] Overall Loss 2.867161 Objective Loss 2.867161 LR 0.000031 Time 0.207180 -2023-04-27 00:20:57,257 - Epoch: [160][ 250/ 518] Overall Loss 2.874411 Objective Loss 2.874411 LR 0.000031 Time 0.206275 -2023-04-27 00:21:07,437 - Epoch: [160][ 300/ 518] Overall Loss 2.869192 Objective Loss 2.869192 LR 0.000031 Time 0.205825 -2023-04-27 00:21:17,478 - Epoch: [160][ 350/ 518] Overall Loss 2.874329 Objective Loss 2.874329 LR 0.000031 Time 0.205104 -2023-04-27 00:21:27,583 - Epoch: [160][ 400/ 518] Overall Loss 2.873660 Objective Loss 2.873660 LR 0.000031 Time 0.204726 -2023-04-27 00:21:37,759 - Epoch: [160][ 450/ 518] Overall Loss 2.875457 Objective Loss 2.875457 LR 0.000031 Time 0.204587 -2023-04-27 00:21:47,946 - Epoch: [160][ 500/ 518] Overall Loss 2.877673 Objective Loss 2.877673 LR 0.000031 Time 0.204500 -2023-04-27 00:21:51,478 - Epoch: [160][ 518/ 518] Overall Loss 2.876922 Objective Loss 2.876922 LR 0.000031 Time 0.204212 -2023-04-27 00:21:51,549 - --- validate (epoch=160)----------- -2023-04-27 00:21:51,550 - 4952 samples (32 per mini-batch) -2023-04-27 00:21:58,291 - Epoch: [160][ 50/ 155] Loss 3.115938 mAP 0.446150 -2023-04-27 00:22:04,679 - Epoch: [160][ 100/ 155] Loss 3.142149 mAP 0.447514 -2023-04-27 
00:22:11,047 - Epoch: [160][ 150/ 155] Loss 3.154637 mAP 0.452682 -2023-04-27 00:22:11,633 - Epoch: [160][ 155/ 155] Loss 3.159736 mAP 0.454181 -2023-04-27 00:22:11,703 - ==> mAP: 0.45418 Loss: 3.160 - -2023-04-27 00:22:11,706 - ==> Best [mAP: 0.454181 vloss: 3.159736 Sparsity:0.00 Params: 2177088 on epoch: 160] -2023-04-27 00:22:11,706 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:22:11,760 - - -2023-04-27 00:22:11,760 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:22:22,775 - Epoch: [161][ 50/ 518] Overall Loss 2.850727 Objective Loss 2.850727 LR 0.000031 Time 0.220239 -2023-04-27 00:22:32,925 - Epoch: [161][ 100/ 518] Overall Loss 2.843908 Objective Loss 2.843908 LR 0.000031 Time 0.211610 -2023-04-27 00:22:43,100 - Epoch: [161][ 150/ 518] Overall Loss 2.846468 Objective Loss 2.846468 LR 0.000031 Time 0.208893 -2023-04-27 00:22:53,267 - Epoch: [161][ 200/ 518] Overall Loss 2.866637 Objective Loss 2.866637 LR 0.000031 Time 0.207498 -2023-04-27 00:23:03,428 - Epoch: [161][ 250/ 518] Overall Loss 2.860275 Objective Loss 2.860275 LR 0.000031 Time 0.206635 -2023-04-27 00:23:13,623 - Epoch: [161][ 300/ 518] Overall Loss 2.859653 Objective Loss 2.859653 LR 0.000031 Time 0.206174 -2023-04-27 00:23:23,757 - Epoch: [161][ 350/ 518] Overall Loss 2.867411 Objective Loss 2.867411 LR 0.000031 Time 0.205671 -2023-04-27 00:23:33,970 - Epoch: [161][ 400/ 518] Overall Loss 2.858389 Objective Loss 2.858389 LR 0.000031 Time 0.205491 -2023-04-27 00:23:44,144 - Epoch: [161][ 450/ 518] Overall Loss 2.862072 Objective Loss 2.862072 LR 0.000031 Time 0.205264 -2023-04-27 00:23:54,291 - Epoch: [161][ 500/ 518] Overall Loss 2.865480 Objective Loss 2.865480 LR 0.000031 Time 0.205027 -2023-04-27 00:23:57,826 - Epoch: [161][ 518/ 518] Overall Loss 2.867138 Objective Loss 2.867138 LR 0.000031 Time 0.204726 -2023-04-27 00:23:57,898 - --- validate (epoch=161)----------- -2023-04-27 00:23:57,899 - 4952 samples (32 per mini-batch) -2023-04-27 
00:24:04,659 - Epoch: [161][ 50/ 155] Loss 3.144797 mAP 0.450954 -2023-04-27 00:24:10,990 - Epoch: [161][ 100/ 155] Loss 3.164495 mAP 0.450318 -2023-04-27 00:24:17,344 - Epoch: [161][ 150/ 155] Loss 3.162160 mAP 0.449719 -2023-04-27 00:24:17,924 - Epoch: [161][ 155/ 155] Loss 3.160033 mAP 0.449768 -2023-04-27 00:24:17,999 - ==> mAP: 0.44977 Loss: 3.160 - -2023-04-27 00:24:18,003 - ==> Best [mAP: 0.454181 vloss: 3.159736 Sparsity:0.00 Params: 2177088 on epoch: 160] -2023-04-27 00:24:18,003 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:24:18,040 - - -2023-04-27 00:24:18,040 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:24:28,870 - Epoch: [162][ 50/ 518] Overall Loss 2.841349 Objective Loss 2.841349 LR 0.000031 Time 0.216547 -2023-04-27 00:24:38,948 - Epoch: [162][ 100/ 518] Overall Loss 2.850787 Objective Loss 2.850787 LR 0.000031 Time 0.209033 -2023-04-27 00:24:49,118 - Epoch: [162][ 150/ 518] Overall Loss 2.858706 Objective Loss 2.858706 LR 0.000031 Time 0.207150 -2023-04-27 00:24:59,322 - Epoch: [162][ 200/ 518] Overall Loss 2.843096 Objective Loss 2.843096 LR 0.000031 Time 0.206371 -2023-04-27 00:25:09,502 - Epoch: [162][ 250/ 518] Overall Loss 2.848480 Objective Loss 2.848480 LR 0.000031 Time 0.205812 -2023-04-27 00:25:19,585 - Epoch: [162][ 300/ 518] Overall Loss 2.847302 Objective Loss 2.847302 LR 0.000031 Time 0.205115 -2023-04-27 00:25:29,720 - Epoch: [162][ 350/ 518] Overall Loss 2.852264 Objective Loss 2.852264 LR 0.000031 Time 0.204764 -2023-04-27 00:25:39,885 - Epoch: [162][ 400/ 518] Overall Loss 2.851433 Objective Loss 2.851433 LR 0.000031 Time 0.204577 -2023-04-27 00:25:50,024 - Epoch: [162][ 450/ 518] Overall Loss 2.846246 Objective Loss 2.846246 LR 0.000031 Time 0.204374 -2023-04-27 00:26:00,118 - Epoch: [162][ 500/ 518] Overall Loss 2.854426 Objective Loss 2.854426 LR 0.000031 Time 0.204120 -2023-04-27 00:26:03,653 - Epoch: [162][ 518/ 518] Overall Loss 2.855377 Objective Loss 2.855377 LR 
0.000031 Time 0.203851 -2023-04-27 00:26:03,724 - --- validate (epoch=162)----------- -2023-04-27 00:26:03,724 - 4952 samples (32 per mini-batch) -2023-04-27 00:26:10,530 - Epoch: [162][ 50/ 155] Loss 3.156674 mAP 0.449069 -2023-04-27 00:26:17,014 - Epoch: [162][ 100/ 155] Loss 3.161960 mAP 0.455262 -2023-04-27 00:26:23,370 - Epoch: [162][ 150/ 155] Loss 3.152652 mAP 0.451041 -2023-04-27 00:26:23,951 - Epoch: [162][ 155/ 155] Loss 3.154368 mAP 0.452228 -2023-04-27 00:26:24,023 - ==> mAP: 0.45223 Loss: 3.154 - -2023-04-27 00:26:24,027 - ==> Best [mAP: 0.454181 vloss: 3.159736 Sparsity:0.00 Params: 2177088 on epoch: 160] -2023-04-27 00:26:24,027 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:26:24,089 - - -2023-04-27 00:26:24,089 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:26:34,956 - Epoch: [163][ 50/ 518] Overall Loss 2.858901 Objective Loss 2.858901 LR 0.000031 Time 0.217251 -2023-04-27 00:26:45,071 - Epoch: [163][ 100/ 518] Overall Loss 2.813803 Objective Loss 2.813803 LR 0.000031 Time 0.209761 -2023-04-27 00:26:55,290 - Epoch: [163][ 150/ 518] Overall Loss 2.828465 Objective Loss 2.828465 LR 0.000031 Time 0.207954 -2023-04-27 00:27:05,437 - Epoch: [163][ 200/ 518] Overall Loss 2.830300 Objective Loss 2.830300 LR 0.000031 Time 0.206692 -2023-04-27 00:27:15,552 - Epoch: [163][ 250/ 518] Overall Loss 2.840843 Objective Loss 2.840843 LR 0.000031 Time 0.205807 -2023-04-27 00:27:25,768 - Epoch: [163][ 300/ 518] Overall Loss 2.849568 Objective Loss 2.849568 LR 0.000031 Time 0.205553 -2023-04-27 00:27:35,895 - Epoch: [163][ 350/ 518] Overall Loss 2.853397 Objective Loss 2.853397 LR 0.000031 Time 0.205120 -2023-04-27 00:27:45,968 - Epoch: [163][ 400/ 518] Overall Loss 2.850322 Objective Loss 2.850322 LR 0.000031 Time 0.204659 -2023-04-27 00:27:56,127 - Epoch: [163][ 450/ 518] Overall Loss 2.850313 Objective Loss 2.850313 LR 0.000031 Time 0.204490 -2023-04-27 00:28:06,300 - Epoch: [163][ 500/ 518] Overall Loss 
2.853050 Objective Loss 2.853050 LR 0.000031 Time 0.204384 -2023-04-27 00:28:09,817 - Epoch: [163][ 518/ 518] Overall Loss 2.859321 Objective Loss 2.859321 LR 0.000031 Time 0.204070 -2023-04-27 00:28:09,888 - --- validate (epoch=163)----------- -2023-04-27 00:28:09,888 - 4952 samples (32 per mini-batch) -2023-04-27 00:28:16,645 - Epoch: [163][ 50/ 155] Loss 3.161769 mAP 0.441542 -2023-04-27 00:28:23,008 - Epoch: [163][ 100/ 155] Loss 3.141011 mAP 0.448379 -2023-04-27 00:28:29,373 - Epoch: [163][ 150/ 155] Loss 3.154760 mAP 0.446650 -2023-04-27 00:28:29,942 - Epoch: [163][ 155/ 155] Loss 3.157784 mAP 0.446149 -2023-04-27 00:28:30,013 - ==> mAP: 0.44615 Loss: 3.158 - -2023-04-27 00:28:30,017 - ==> Best [mAP: 0.454181 vloss: 3.159736 Sparsity:0.00 Params: 2177088 on epoch: 160] -2023-04-27 00:28:30,017 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:28:30,054 - - -2023-04-27 00:28:30,054 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:28:41,148 - Epoch: [164][ 50/ 518] Overall Loss 2.835639 Objective Loss 2.835639 LR 0.000031 Time 0.221816 -2023-04-27 00:28:51,361 - Epoch: [164][ 100/ 518] Overall Loss 2.831179 Objective Loss 2.831179 LR 0.000031 Time 0.213022 -2023-04-27 00:29:01,489 - Epoch: [164][ 150/ 518] Overall Loss 2.834430 Objective Loss 2.834430 LR 0.000031 Time 0.209523 -2023-04-27 00:29:11,707 - Epoch: [164][ 200/ 518] Overall Loss 2.852728 Objective Loss 2.852728 LR 0.000031 Time 0.208227 -2023-04-27 00:29:21,862 - Epoch: [164][ 250/ 518] Overall Loss 2.860053 Objective Loss 2.860053 LR 0.000031 Time 0.207195 -2023-04-27 00:29:32,098 - Epoch: [164][ 300/ 518] Overall Loss 2.867550 Objective Loss 2.867550 LR 0.000031 Time 0.206776 -2023-04-27 00:29:42,254 - Epoch: [164][ 350/ 518] Overall Loss 2.858886 Objective Loss 2.858886 LR 0.000031 Time 0.206249 -2023-04-27 00:29:52,445 - Epoch: [164][ 400/ 518] Overall Loss 2.860879 Objective Loss 2.860879 LR 0.000031 Time 0.205940 -2023-04-27 00:30:02,577 - Epoch: 
[164][ 450/ 518] Overall Loss 2.857075 Objective Loss 2.857075 LR 0.000031 Time 0.205571 -2023-04-27 00:30:12,677 - Epoch: [164][ 500/ 518] Overall Loss 2.859253 Objective Loss 2.859253 LR 0.000031 Time 0.205211 -2023-04-27 00:30:16,177 - Epoch: [164][ 518/ 518] Overall Loss 2.857679 Objective Loss 2.857679 LR 0.000031 Time 0.204836 -2023-04-27 00:30:16,247 - --- validate (epoch=164)----------- -2023-04-27 00:30:16,247 - 4952 samples (32 per mini-batch) -2023-04-27 00:30:22,912 - Epoch: [164][ 50/ 155] Loss 3.130228 mAP 0.449059 -2023-04-27 00:30:29,312 - Epoch: [164][ 100/ 155] Loss 3.150129 mAP 0.452025 -2023-04-27 00:30:35,605 - Epoch: [164][ 150/ 155] Loss 3.151751 mAP 0.451099 -2023-04-27 00:30:36,166 - Epoch: [164][ 155/ 155] Loss 3.154516 mAP 0.450116 -2023-04-27 00:30:36,237 - ==> mAP: 0.45012 Loss: 3.155 - -2023-04-27 00:30:36,241 - ==> Best [mAP: 0.454181 vloss: 3.159736 Sparsity:0.00 Params: 2177088 on epoch: 160] -2023-04-27 00:30:36,241 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:30:36,278 - - -2023-04-27 00:30:36,278 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:30:47,200 - Epoch: [165][ 50/ 518] Overall Loss 2.857006 Objective Loss 2.857006 LR 0.000031 Time 0.218375 -2023-04-27 00:30:57,340 - Epoch: [165][ 100/ 518] Overall Loss 2.840210 Objective Loss 2.840210 LR 0.000031 Time 0.210573 -2023-04-27 00:31:07,616 - Epoch: [165][ 150/ 518] Overall Loss 2.842085 Objective Loss 2.842085 LR 0.000031 Time 0.208878 -2023-04-27 00:31:17,790 - Epoch: [165][ 200/ 518] Overall Loss 2.860172 Objective Loss 2.860172 LR 0.000031 Time 0.207523 -2023-04-27 00:31:27,909 - Epoch: [165][ 250/ 518] Overall Loss 2.865786 Objective Loss 2.865786 LR 0.000031 Time 0.206487 -2023-04-27 00:31:38,053 - Epoch: [165][ 300/ 518] Overall Loss 2.871700 Objective Loss 2.871700 LR 0.000031 Time 0.205880 -2023-04-27 00:31:48,203 - Epoch: [165][ 350/ 518] Overall Loss 2.870087 Objective Loss 2.870087 LR 0.000031 Time 0.205463 
-2023-04-27 00:31:58,335 - Epoch: [165][ 400/ 518] Overall Loss 2.863076 Objective Loss 2.863076 LR 0.000031 Time 0.205106 -2023-04-27 00:32:08,437 - Epoch: [165][ 450/ 518] Overall Loss 2.859500 Objective Loss 2.859500 LR 0.000031 Time 0.204762 -2023-04-27 00:32:18,544 - Epoch: [165][ 500/ 518] Overall Loss 2.863149 Objective Loss 2.863149 LR 0.000031 Time 0.204497 -2023-04-27 00:32:22,069 - Epoch: [165][ 518/ 518] Overall Loss 2.858591 Objective Loss 2.858591 LR 0.000031 Time 0.204195 -2023-04-27 00:32:22,143 - --- validate (epoch=165)----------- -2023-04-27 00:32:22,143 - 4952 samples (32 per mini-batch) -2023-04-27 00:32:28,895 - Epoch: [165][ 50/ 155] Loss 3.141234 mAP 0.448034 -2023-04-27 00:32:35,267 - Epoch: [165][ 100/ 155] Loss 3.143262 mAP 0.443395 -2023-04-27 00:32:41,640 - Epoch: [165][ 150/ 155] Loss 3.157058 mAP 0.447408 -2023-04-27 00:32:42,209 - Epoch: [165][ 155/ 155] Loss 3.155883 mAP 0.446977 -2023-04-27 00:32:42,276 - ==> mAP: 0.44698 Loss: 3.156 - -2023-04-27 00:32:42,280 - ==> Best [mAP: 0.454181 vloss: 3.159736 Sparsity:0.00 Params: 2177088 on epoch: 160] -2023-04-27 00:32:42,280 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 00:32:42,317 - - -2023-04-27 00:32:42,317 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 00:32:53,173 - Epoch: [166][ 50/ 518] Overall Loss 2.807568 Objective Loss 2.807568 LR 0.000031 Time 0.217063 -2023-04-27 00:33:03,404 - Epoch: [166][ 100/ 518] Overall Loss 2.835082 Objective Loss 2.835082 LR 0.000031 Time 0.210821 -2023-04-27 00:33:13,559 - Epoch: [166][ 150/ 518] Overall Loss 2.826795 Objective Loss 2.826795 LR 0.000031 Time 0.208237 -2023-04-27 00:33:23,694 - Epoch: [166][ 200/ 518] Overall Loss 2.838625 Objective Loss 2.838625 LR 0.000031 Time 0.206844 -2023-04-27 00:33:33,782 - Epoch: [166][ 250/ 518] Overall Loss 2.841549 Objective Loss 2.841549 LR 0.000031 Time 0.205822 -2023-04-27 00:33:44,054 - Epoch: [166][ 300/ 518] Overall Loss 2.842067 Objective Loss 
2.842067 LR 0.000031 Time 0.205751
-2023-04-27 00:33:54,166 - Epoch: [166][ 350/ 518] Overall Loss 2.845806 Objective Loss 2.845806 LR 0.000031 Time 0.205247
-2023-04-27 00:34:04,234 - Epoch: [166][ 400/ 518] Overall Loss 2.853028 Objective Loss 2.853028 LR 0.000031 Time 0.204756
-2023-04-27 00:34:14,412 - Epoch: [166][ 450/ 518] Overall Loss 2.855978 Objective Loss 2.855978 LR 0.000031 Time 0.204620
-2023-04-27 00:34:24,588 - Epoch: [166][ 500/ 518] Overall Loss 2.860554 Objective Loss 2.860554 LR 0.000031 Time 0.204507
-2023-04-27 00:34:28,100 - Epoch: [166][ 518/ 518] Overall Loss 2.858554 Objective Loss 2.858554 LR 0.000031 Time 0.204178
-2023-04-27 00:34:28,171 - --- validate (epoch=166)-----------
-2023-04-27 00:34:28,171 - 4952 samples (32 per mini-batch)
-2023-04-27 00:34:34,908 - Epoch: [166][ 50/ 155] Loss 3.138660 mAP 0.441576
-2023-04-27 00:34:41,250 - Epoch: [166][ 100/ 155] Loss 3.144442 mAP 0.447110
-2023-04-27 00:34:47,625 - Epoch: [166][ 150/ 155] Loss 3.155461 mAP 0.445329
-2023-04-27 00:34:48,209 - Epoch: [166][ 155/ 155] Loss 3.155332 mAP 0.445581
-2023-04-27 00:34:48,279 - ==> mAP: 0.44558 Loss: 3.155
-
-2023-04-27 00:34:48,283 - ==> Best [mAP: 0.454181 vloss: 3.159736 Sparsity:0.00 Params: 2177088 on epoch: 160]
-2023-04-27 00:34:48,283 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:34:48,320 - 
-
-2023-04-27 00:34:48,320 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:34:59,219 - Epoch: [167][ 50/ 518] Overall Loss 2.803489 Objective Loss 2.803489 LR 0.000031 Time 0.217916
-2023-04-27 00:35:09,395 - Epoch: [167][ 100/ 518] Overall Loss 2.847794 Objective Loss 2.847794 LR 0.000031 Time 0.210703
-2023-04-27 00:35:19,606 - Epoch: [167][ 150/ 518] Overall Loss 2.856434 Objective Loss 2.856434 LR 0.000031 Time 0.208534
-2023-04-27 00:35:29,792 - Epoch: [167][ 200/ 518] Overall Loss 2.852460 Objective Loss 2.852460 LR 0.000031 Time 0.207319
-2023-04-27 00:35:39,899 - Epoch: [167][ 250/ 518] Overall Loss 2.853619 Objective Loss 2.853619 LR 0.000031 Time 0.206278
-2023-04-27 00:35:50,077 - Epoch: [167][ 300/ 518] Overall Loss 2.857180 Objective Loss 2.857180 LR 0.000031 Time 0.205820
-2023-04-27 00:36:00,196 - Epoch: [167][ 350/ 518] Overall Loss 2.856976 Objective Loss 2.856976 LR 0.000031 Time 0.205323
-2023-04-27 00:36:10,362 - Epoch: [167][ 400/ 518] Overall Loss 2.857799 Objective Loss 2.857799 LR 0.000031 Time 0.205070
-2023-04-27 00:36:20,535 - Epoch: [167][ 450/ 518] Overall Loss 2.855267 Objective Loss 2.855267 LR 0.000031 Time 0.204886
-2023-04-27 00:36:30,678 - Epoch: [167][ 500/ 518] Overall Loss 2.861540 Objective Loss 2.861540 LR 0.000031 Time 0.204680
-2023-04-27 00:36:34,213 - Epoch: [167][ 518/ 518] Overall Loss 2.859988 Objective Loss 2.859988 LR 0.000031 Time 0.204391
-2023-04-27 00:36:34,285 - --- validate (epoch=167)-----------
-2023-04-27 00:36:34,285 - 4952 samples (32 per mini-batch)
-2023-04-27 00:36:40,963 - Epoch: [167][ 50/ 155] Loss 3.165576 mAP 0.441557
-2023-04-27 00:36:47,373 - Epoch: [167][ 100/ 155] Loss 3.166564 mAP 0.454865
-2023-04-27 00:36:53,702 - Epoch: [167][ 150/ 155] Loss 3.151835 mAP 0.454654
-2023-04-27 00:36:54,277 - Epoch: [167][ 155/ 155] Loss 3.152132 mAP 0.452565
-2023-04-27 00:36:54,347 - ==> mAP: 0.45257 Loss: 3.152
-
-2023-04-27 00:36:54,351 - ==> Best [mAP: 0.454181 vloss: 3.159736 Sparsity:0.00 Params: 2177088 on epoch: 160]
-2023-04-27 00:36:54,351 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:36:54,388 - 
-
-2023-04-27 00:36:54,388 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:37:05,315 - Epoch: [168][ 50/ 518] Overall Loss 2.845920 Objective Loss 2.845920 LR 0.000031 Time 0.218487
-2023-04-27 00:37:15,451 - Epoch: [168][ 100/ 518] Overall Loss 2.851537 Objective Loss 2.851537 LR 0.000031 Time 0.210585
-2023-04-27 00:37:25,601 - Epoch: [168][ 150/ 518] Overall Loss 2.849907 Objective Loss 2.849907 LR 0.000031 Time 0.208044
-2023-04-27 00:37:35,770 - Epoch: [168][ 200/ 518] Overall Loss 2.857140 Objective Loss 2.857140 LR 0.000031 Time 0.206871
-2023-04-27 00:37:45,953 - Epoch: [168][ 250/ 518] Overall Loss 2.858097 Objective Loss 2.858097 LR 0.000031 Time 0.206221
-2023-04-27 00:37:56,040 - Epoch: [168][ 300/ 518] Overall Loss 2.857136 Objective Loss 2.857136 LR 0.000031 Time 0.205470
-2023-04-27 00:38:06,223 - Epoch: [168][ 350/ 518] Overall Loss 2.858030 Objective Loss 2.858030 LR 0.000031 Time 0.205207
-2023-04-27 00:38:16,332 - Epoch: [168][ 400/ 518] Overall Loss 2.858022 Objective Loss 2.858022 LR 0.000031 Time 0.204825
-2023-04-27 00:38:26,477 - Epoch: [168][ 450/ 518] Overall Loss 2.856679 Objective Loss 2.856679 LR 0.000031 Time 0.204607
-2023-04-27 00:38:36,592 - Epoch: [168][ 500/ 518] Overall Loss 2.855530 Objective Loss 2.855530 LR 0.000031 Time 0.204374
-2023-04-27 00:38:40,133 - Epoch: [168][ 518/ 518] Overall Loss 2.857787 Objective Loss 2.857787 LR 0.000031 Time 0.204106
-2023-04-27 00:38:40,203 - --- validate (epoch=168)-----------
-2023-04-27 00:38:40,204 - 4952 samples (32 per mini-batch)
-2023-04-27 00:38:46,939 - Epoch: [168][ 50/ 155] Loss 3.164496 mAP 0.457708
-2023-04-27 00:38:53,342 - Epoch: [168][ 100/ 155] Loss 3.156144 mAP 0.453287
-2023-04-27 00:38:59,703 - Epoch: [168][ 150/ 155] Loss 3.151699 mAP 0.457004
-2023-04-27 00:39:00,280 - Epoch: [168][ 155/ 155] Loss 3.151359 mAP 0.457598
-2023-04-27 00:39:00,351 - ==> mAP: 0.45760 Loss: 3.151
-
-2023-04-27 00:39:00,355 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:39:00,355 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:39:00,408 - 
-
-2023-04-27 00:39:00,408 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:39:11,385 - Epoch: [169][ 50/ 518] Overall Loss 2.888846 Objective Loss 2.888846 LR 0.000031 Time 0.219489
-2023-04-27 00:39:21,555 - Epoch: [169][ 100/ 518] Overall Loss 2.864906 Objective Loss 2.864906 LR 0.000031 Time 0.211424
-2023-04-27 00:39:31,694 - Epoch: [169][ 150/ 518] Overall Loss 2.867120 Objective Loss 2.867120 LR 0.000031 Time 0.208533
-2023-04-27 00:39:41,803 - Epoch: [169][ 200/ 518] Overall Loss 2.875372 Objective Loss 2.875372 LR 0.000031 Time 0.206935
-2023-04-27 00:39:52,052 - Epoch: [169][ 250/ 518] Overall Loss 2.869170 Objective Loss 2.869170 LR 0.000031 Time 0.206539
-2023-04-27 00:40:02,165 - Epoch: [169][ 300/ 518] Overall Loss 2.872670 Objective Loss 2.872670 LR 0.000031 Time 0.205821
-2023-04-27 00:40:12,375 - Epoch: [169][ 350/ 518] Overall Loss 2.870152 Objective Loss 2.870152 LR 0.000031 Time 0.205585
-2023-04-27 00:40:22,455 - Epoch: [169][ 400/ 518] Overall Loss 2.868705 Objective Loss 2.868705 LR 0.000031 Time 0.205082
-2023-04-27 00:40:32,609 - Epoch: [169][ 450/ 518] Overall Loss 2.854357 Objective Loss 2.854357 LR 0.000031 Time 0.204856
-2023-04-27 00:40:42,684 - Epoch: [169][ 500/ 518] Overall Loss 2.850419 Objective Loss 2.850419 LR 0.000031 Time 0.204518
-2023-04-27 00:40:46,214 - Epoch: [169][ 518/ 518] Overall Loss 2.852384 Objective Loss 2.852384 LR 0.000031 Time 0.204224
-2023-04-27 00:40:46,286 - --- validate (epoch=169)-----------
-2023-04-27 00:40:46,287 - 4952 samples (32 per mini-batch)
-2023-04-27 00:40:52,994 - Epoch: [169][ 50/ 155] Loss 3.144399 mAP 0.444134
-2023-04-27 00:40:59,323 - Epoch: [169][ 100/ 155] Loss 3.137222 mAP 0.444477
-2023-04-27 00:41:05,665 - Epoch: [169][ 150/ 155] Loss 3.157119 mAP 0.440898
-2023-04-27 00:41:06,238 - Epoch: [169][ 155/ 155] Loss 3.152611 mAP 0.440601
-2023-04-27 00:41:06,301 - ==> mAP: 0.44060 Loss: 3.153
-
-2023-04-27 00:41:06,306 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:41:06,306 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:41:06,343 - 
-
-2023-04-27 00:41:06,343 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:41:17,201 - Epoch: [170][ 50/ 518] Overall Loss 2.839039 Objective Loss 2.839039 LR 0.000031 Time 0.217108
-2023-04-27 00:41:27,336 - Epoch: [170][ 100/ 518] Overall Loss 2.846259 Objective Loss 2.846259 LR 0.000031 Time 0.209886
-2023-04-27 00:41:37,485 - Epoch: [170][ 150/ 518] Overall Loss 2.840455 Objective Loss 2.840455 LR 0.000031 Time 0.207572
-2023-04-27 00:41:47,630 - Epoch: [170][ 200/ 518] Overall Loss 2.851489 Objective Loss 2.851489 LR 0.000031 Time 0.206397
-2023-04-27 00:41:57,711 - Epoch: [170][ 250/ 518] Overall Loss 2.849784 Objective Loss 2.849784 LR 0.000031 Time 0.205432
-2023-04-27 00:42:07,875 - Epoch: [170][ 300/ 518] Overall Loss 2.848639 Objective Loss 2.848639 LR 0.000031 Time 0.205068
-2023-04-27 00:42:17,947 - Epoch: [170][ 350/ 518] Overall Loss 2.844181 Objective Loss 2.844181 LR 0.000031 Time 0.204546
-2023-04-27 00:42:28,075 - Epoch: [170][ 400/ 518] Overall Loss 2.844152 Objective Loss 2.844152 LR 0.000031 Time 0.204294
-2023-04-27 00:42:38,225 - Epoch: [170][ 450/ 518] Overall Loss 2.839913 Objective Loss 2.839913 LR 0.000031 Time 0.204147
-2023-04-27 00:42:48,463 - Epoch: [170][ 500/ 518] Overall Loss 2.843605 Objective Loss 2.843605 LR 0.000031 Time 0.204204
-2023-04-27 00:42:52,008 - Epoch: [170][ 518/ 518] Overall Loss 2.843284 Objective Loss 2.843284 LR 0.000031 Time 0.203952
-2023-04-27 00:42:52,079 - --- validate (epoch=170)-----------
-2023-04-27 00:42:52,079 - 4952 samples (32 per mini-batch)
-2023-04-27 00:42:58,763 - Epoch: [170][ 50/ 155] Loss 3.144961 mAP 0.436328
-2023-04-27 00:43:05,168 - Epoch: [170][ 100/ 155] Loss 3.161740 mAP 0.441800
-2023-04-27 00:43:11,534 - Epoch: [170][ 150/ 155] Loss 3.160455 mAP 0.437091
-2023-04-27 00:43:12,107 - Epoch: [170][ 155/ 155] Loss 3.157406 mAP 0.438111
-2023-04-27 00:43:12,175 - ==> mAP: 0.43811 Loss: 3.157
-
-2023-04-27 00:43:12,179 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:43:12,179 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:43:12,216 - 
-
-2023-04-27 00:43:12,217 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:43:23,085 - Epoch: [171][ 50/ 518] Overall Loss 2.862185 Objective Loss 2.862185 LR 0.000031 Time 0.217312
-2023-04-27 00:43:33,178 - Epoch: [171][ 100/ 518] Overall Loss 2.839396 Objective Loss 2.839396 LR 0.000031 Time 0.209564
-2023-04-27 00:43:43,303 - Epoch: [171][ 150/ 518] Overall Loss 2.837212 Objective Loss 2.837212 LR 0.000031 Time 0.207201
-2023-04-27 00:43:53,470 - Epoch: [171][ 200/ 518] Overall Loss 2.850501 Objective Loss 2.850501 LR 0.000031 Time 0.206228
-2023-04-27 00:44:03,624 - Epoch: [171][ 250/ 518] Overall Loss 2.855030 Objective Loss 2.855030 LR 0.000031 Time 0.205591
-2023-04-27 00:44:13,796 - Epoch: [171][ 300/ 518] Overall Loss 2.853232 Objective Loss 2.853232 LR 0.000031 Time 0.205229
-2023-04-27 00:44:23,996 - Epoch: [171][ 350/ 518] Overall Loss 2.850696 Objective Loss 2.850696 LR 0.000031 Time 0.205047
-2023-04-27 00:44:34,180 - Epoch: [171][ 400/ 518] Overall Loss 2.855420 Objective Loss 2.855420 LR 0.000031 Time 0.204873
-2023-04-27 00:44:44,345 - Epoch: [171][ 450/ 518] Overall Loss 2.848515 Objective Loss 2.848515 LR 0.000031 Time 0.204693
-2023-04-27 00:44:54,491 - Epoch: [171][ 500/ 518] Overall Loss 2.853095 Objective Loss 2.853095 LR 0.000031 Time 0.204513
-2023-04-27 00:44:58,038 - Epoch: [171][ 518/ 518] Overall Loss 2.852798 Objective Loss 2.852798 LR 0.000031 Time 0.204252
-2023-04-27 00:44:58,109 - --- validate (epoch=171)-----------
-2023-04-27 00:44:58,109 - 4952 samples (32 per mini-batch)
-2023-04-27 00:45:04,886 - Epoch: [171][ 50/ 155] Loss 3.144199 mAP 0.462704
-2023-04-27 00:45:11,291 - Epoch: [171][ 100/ 155] Loss 3.155053 mAP 0.453770
-2023-04-27 00:45:17,687 - Epoch: [171][ 150/ 155] Loss 3.156220 mAP 0.451157
-2023-04-27 00:45:18,255 - Epoch: [171][ 155/ 155] Loss 3.153890 mAP 0.451720
-2023-04-27 00:45:18,325 - ==> mAP: 0.45172 Loss: 3.154
-
-2023-04-27 00:45:18,329 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:45:18,329 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:45:18,366 - 
-
-2023-04-27 00:45:18,366 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:45:29,254 - Epoch: [172][ 50/ 518] Overall Loss 2.808546 Objective Loss 2.808546 LR 0.000031 Time 0.217712
-2023-04-27 00:45:39,393 - Epoch: [172][ 100/ 518] Overall Loss 2.828036 Objective Loss 2.828036 LR 0.000031 Time 0.210226
-2023-04-27 00:45:49,536 - Epoch: [172][ 150/ 518] Overall Loss 2.841280 Objective Loss 2.841280 LR 0.000031 Time 0.207762
-2023-04-27 00:45:59,810 - Epoch: [172][ 200/ 518] Overall Loss 2.850216 Objective Loss 2.850216 LR 0.000031 Time 0.207182
-2023-04-27 00:46:09,930 - Epoch: [172][ 250/ 518] Overall Loss 2.861730 Objective Loss 2.861730 LR 0.000031 Time 0.206220
-2023-04-27 00:46:20,068 - Epoch: [172][ 300/ 518] Overall Loss 2.844378 Objective Loss 2.844378 LR 0.000031 Time 0.205636
-2023-04-27 00:46:30,178 - Epoch: [172][ 350/ 518] Overall Loss 2.846242 Objective Loss 2.846242 LR 0.000031 Time 0.205141
-2023-04-27 00:46:40,295 - Epoch: [172][ 400/ 518] Overall Loss 2.850922 Objective Loss 2.850922 LR 0.000031 Time 0.204786
-2023-04-27 00:46:50,420 - Epoch: [172][ 450/ 518] Overall Loss 2.855720 Objective Loss 2.855720 LR 0.000031 Time 0.204529
-2023-04-27 00:47:00,612 - Epoch: [172][ 500/ 518] Overall Loss 2.852985 Objective Loss 2.852985 LR 0.000031 Time 0.204457
-2023-04-27 00:47:04,168 - Epoch: [172][ 518/ 518] Overall Loss 2.850571 Objective Loss 2.850571 LR 0.000031 Time 0.204216
-2023-04-27 00:47:04,239 - --- validate (epoch=172)-----------
-2023-04-27 00:47:04,240 - 4952 samples (32 per mini-batch)
-2023-04-27 00:47:10,998 - Epoch: [172][ 50/ 155] Loss 3.165273 mAP 0.454323
-2023-04-27 00:47:17,361 - Epoch: [172][ 100/ 155] Loss 3.163832 mAP 0.445922
-2023-04-27 00:47:23,751 - Epoch: [172][ 150/ 155] Loss 3.162536 mAP 0.450693
-2023-04-27 00:47:24,321 - Epoch: [172][ 155/ 155] Loss 3.155610 mAP 0.451582
-2023-04-27 00:47:24,391 - ==> mAP: 0.45158 Loss: 3.156
-
-2023-04-27 00:47:24,395 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:47:24,395 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:47:24,432 - 
-
-2023-04-27 00:47:24,432 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:47:35,381 - Epoch: [173][ 50/ 518] Overall Loss 2.837142 Objective Loss 2.837142 LR 0.000031 Time 0.218914
-2023-04-27 00:47:45,491 - Epoch: [173][ 100/ 518] Overall Loss 2.857301 Objective Loss 2.857301 LR 0.000031 Time 0.210545
-2023-04-27 00:47:55,603 - Epoch: [173][ 150/ 518] Overall Loss 2.873301 Objective Loss 2.873301 LR 0.000031 Time 0.207765
-2023-04-27 00:48:05,781 - Epoch: [173][ 200/ 518] Overall Loss 2.851107 Objective Loss 2.851107 LR 0.000031 Time 0.206703
-2023-04-27 00:48:15,947 - Epoch: [173][ 250/ 518] Overall Loss 2.839771 Objective Loss 2.839771 LR 0.000031 Time 0.206020
-2023-04-27 00:48:26,118 - Epoch: [173][ 300/ 518] Overall Loss 2.836157 Objective Loss 2.836157 LR 0.000031 Time 0.205584
-2023-04-27 00:48:36,281 - Epoch: [173][ 350/ 518] Overall Loss 2.833630 Objective Loss 2.833630 LR 0.000031 Time 0.205245
-2023-04-27 00:48:46,453 - Epoch: [173][ 400/ 518] Overall Loss 2.842963 Objective Loss 2.842963 LR 0.000031 Time 0.205016
-2023-04-27 00:48:56,553 - Epoch: [173][ 450/ 518] Overall Loss 2.846643 Objective Loss 2.846643 LR 0.000031 Time 0.204677
-2023-04-27 00:49:06,761 - Epoch: [173][ 500/ 518] Overall Loss 2.849285 Objective Loss 2.849285 LR 0.000031 Time 0.204623
-2023-04-27 00:49:10,292 - Epoch: [173][ 518/ 518] Overall Loss 2.853034 Objective Loss 2.853034 LR 0.000031 Time 0.204328
-2023-04-27 00:49:10,364 - --- validate (epoch=173)-----------
-2023-04-27 00:49:10,364 - 4952 samples (32 per mini-batch)
-2023-04-27 00:49:17,127 - Epoch: [173][ 50/ 155] Loss 3.155101 mAP 0.451752
-2023-04-27 00:49:23,467 - Epoch: [173][ 100/ 155] Loss 3.158489 mAP 0.445015
-2023-04-27 00:49:29,839 - Epoch: [173][ 150/ 155] Loss 3.146680 mAP 0.444556
-2023-04-27 00:49:30,408 - Epoch: [173][ 155/ 155] Loss 3.151280 mAP 0.444441
-2023-04-27 00:49:30,479 - ==> mAP: 0.44444 Loss: 3.151
-
-2023-04-27 00:49:30,483 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:49:30,483 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:49:30,521 - 
-
-2023-04-27 00:49:30,521 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:49:41,336 - Epoch: [174][ 50/ 518] Overall Loss 2.858481 Objective Loss 2.858481 LR 0.000031 Time 0.216239
-2023-04-27 00:49:51,547 - Epoch: [174][ 100/ 518] Overall Loss 2.812947 Objective Loss 2.812947 LR 0.000031 Time 0.210219
-2023-04-27 00:50:01,713 - Epoch: [174][ 150/ 518] Overall Loss 2.821116 Objective Loss 2.821116 LR 0.000031 Time 0.207909
-2023-04-27 00:50:11,889 - Epoch: [174][ 200/ 518] Overall Loss 2.832866 Objective Loss 2.832866 LR 0.000031 Time 0.206803
-2023-04-27 00:50:22,072 - Epoch: [174][ 250/ 518] Overall Loss 2.833703 Objective Loss 2.833703 LR 0.000031 Time 0.206169
-2023-04-27 00:50:32,189 - Epoch: [174][ 300/ 518] Overall Loss 2.840062 Objective Loss 2.840062 LR 0.000031 Time 0.205525
-2023-04-27 00:50:42,314 - Epoch: [174][ 350/ 518] Overall Loss 2.839347 Objective Loss 2.839347 LR 0.000031 Time 0.205086
-2023-04-27 00:50:52,434 - Epoch: [174][ 400/ 518] Overall Loss 2.842846 Objective Loss 2.842846 LR 0.000031 Time 0.204746
-2023-04-27 00:51:02,571 - Epoch: [174][ 450/ 518] Overall Loss 2.846188 Objective Loss 2.846188 LR 0.000031 Time 0.204521
-2023-04-27 00:51:12,670 - Epoch: [174][ 500/ 518] Overall Loss 2.845268 Objective Loss 2.845268 LR 0.000031 Time 0.204264
-2023-04-27 00:51:16,181 - Epoch: [174][ 518/ 518] Overall Loss 2.847802 Objective Loss 2.847802 LR 0.000031 Time 0.203942
-2023-04-27 00:51:16,252 - --- validate (epoch=174)-----------
-2023-04-27 00:51:16,253 - 4952 samples (32 per mini-batch)
-2023-04-27 00:51:22,938 - Epoch: [174][ 50/ 155] Loss 3.117370 mAP 0.436337
-2023-04-27 00:51:29,328 - Epoch: [174][ 100/ 155] Loss 3.144142 mAP 0.438554
-2023-04-27 00:51:35,727 - Epoch: [174][ 150/ 155] Loss 3.160601 mAP 0.438156
-2023-04-27 00:51:36,285 - Epoch: [174][ 155/ 155] Loss 3.156397 mAP 0.436472
-2023-04-27 00:51:36,365 - ==> mAP: 0.43647 Loss: 3.156
-
-2023-04-27 00:51:36,369 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:51:36,369 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:51:36,406 - 
-
-2023-04-27 00:51:36,406 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:51:47,246 - Epoch: [175][ 50/ 518] Overall Loss 2.839689 Objective Loss 2.839689 LR 0.000031 Time 0.216739
-2023-04-27 00:51:57,428 - Epoch: [175][ 100/ 518] Overall Loss 2.854767 Objective Loss 2.854767 LR 0.000031 Time 0.210172
-2023-04-27 00:52:07,446 - Epoch: [175][ 150/ 518] Overall Loss 2.848810 Objective Loss 2.848810 LR 0.000031 Time 0.206892
-2023-04-27 00:52:17,540 - Epoch: [175][ 200/ 518] Overall Loss 2.861578 Objective Loss 2.861578 LR 0.000031 Time 0.205631
-2023-04-27 00:52:27,781 - Epoch: [175][ 250/ 518] Overall Loss 2.860024 Objective Loss 2.860024 LR 0.000031 Time 0.205463
-2023-04-27 00:52:37,893 - Epoch: [175][ 300/ 518] Overall Loss 2.856373 Objective Loss 2.856373 LR 0.000031 Time 0.204919
-2023-04-27 00:52:48,076 - Epoch: [175][ 350/ 518] Overall Loss 2.860245 Objective Loss 2.860245 LR 0.000031 Time 0.204734
-2023-04-27 00:52:58,269 - Epoch: [175][ 400/ 518] Overall Loss 2.852913 Objective Loss 2.852913 LR 0.000031 Time 0.204620
-2023-04-27 00:53:08,418 - Epoch: [175][ 450/ 518] Overall Loss 2.852823 Objective Loss 2.852823 LR 0.000031 Time 0.204436
-2023-04-27 00:53:18,504 - Epoch: [175][ 500/ 518] Overall Loss 2.850238 Objective Loss 2.850238 LR 0.000031 Time 0.204160
-2023-04-27 00:53:22,034 - Epoch: [175][ 518/ 518] Overall Loss 2.850722 Objective Loss 2.850722 LR 0.000031 Time 0.203879
-2023-04-27 00:53:22,107 - --- validate (epoch=175)-----------
-2023-04-27 00:53:22,108 - 4952 samples (32 per mini-batch)
-2023-04-27 00:53:28,821 - Epoch: [175][ 50/ 155] Loss 3.177828 mAP 0.448367
-2023-04-27 00:53:35,191 - Epoch: [175][ 100/ 155] Loss 3.177447 mAP 0.446042
-2023-04-27 00:53:41,492 - Epoch: [175][ 150/ 155] Loss 3.152630 mAP 0.451076
-2023-04-27 00:53:42,064 - Epoch: [175][ 155/ 155] Loss 3.154214 mAP 0.451349
-2023-04-27 00:53:42,135 - ==> mAP: 0.45135 Loss: 3.154
-
-2023-04-27 00:53:42,139 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:53:42,139 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:53:42,176 - 
-
-2023-04-27 00:53:42,176 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:53:52,989 - Epoch: [176][ 50/ 518] Overall Loss 2.875472 Objective Loss 2.875472 LR 0.000031 Time 0.216200
-2023-04-27 00:54:03,097 - Epoch: [176][ 100/ 518] Overall Loss 2.874252 Objective Loss 2.874252 LR 0.000031 Time 0.209162
-2023-04-27 00:54:13,203 - Epoch: [176][ 150/ 518] Overall Loss 2.857343 Objective Loss 2.857343 LR 0.000031 Time 0.206804
-2023-04-27 00:54:23,408 - Epoch: [176][ 200/ 518] Overall Loss 2.861214 Objective Loss 2.861214 LR 0.000031 Time 0.206118
-2023-04-27 00:54:33,566 - Epoch: [176][ 250/ 518] Overall Loss 2.865233 Objective Loss 2.865233 LR 0.000031 Time 0.205518
-2023-04-27 00:54:43,656 - Epoch: [176][ 300/ 518] Overall Loss 2.864304 Objective Loss 2.864304 LR 0.000031 Time 0.204892
-2023-04-27 00:54:53,858 - Epoch: [176][ 350/ 518] Overall Loss 2.852848 Objective Loss 2.852848 LR 0.000031 Time 0.204768
-2023-04-27 00:55:03,967 - Epoch: [176][ 400/ 518] Overall Loss 2.841926 Objective Loss 2.841926 LR 0.000031 Time 0.204440
-2023-04-27 00:55:14,116 - Epoch: [176][ 450/ 518] Overall Loss 2.843482 Objective Loss 2.843482 LR 0.000031 Time 0.204274
-2023-04-27 00:55:24,319 - Epoch: [176][ 500/ 518] Overall Loss 2.847808 Objective Loss 2.847808 LR 0.000031 Time 0.204249
-2023-04-27 00:55:27,827 - Epoch: [176][ 518/ 518] Overall Loss 2.849151 Objective Loss 2.849151 LR 0.000031 Time 0.203922
-2023-04-27 00:55:27,898 - --- validate (epoch=176)-----------
-2023-04-27 00:55:27,898 - 4952 samples (32 per mini-batch)
-2023-04-27 00:55:34,694 - Epoch: [176][ 50/ 155] Loss 3.139897 mAP 0.441269
-2023-04-27 00:55:41,140 - Epoch: [176][ 100/ 155] Loss 3.154784 mAP 0.436179
-2023-04-27 00:55:47,547 - Epoch: [176][ 150/ 155] Loss 3.154133 mAP 0.443637
-2023-04-27 00:55:48,116 - Epoch: [176][ 155/ 155] Loss 3.153621 mAP 0.444955
-2023-04-27 00:55:48,195 - ==> mAP: 0.44496 Loss: 3.154
-
-2023-04-27 00:55:48,199 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:55:48,199 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:55:48,236 - 
-
-2023-04-27 00:55:48,236 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:55:59,119 - Epoch: [177][ 50/ 518] Overall Loss 2.892882 Objective Loss 2.892882 LR 0.000031 Time 0.217589
-2023-04-27 00:56:09,307 - Epoch: [177][ 100/ 518] Overall Loss 2.881696 Objective Loss 2.881696 LR 0.000031 Time 0.210663
-2023-04-27 00:56:19,451 - Epoch: [177][ 150/ 518] Overall Loss 2.856772 Objective Loss 2.856772 LR 0.000031 Time 0.208055
-2023-04-27 00:56:29,600 - Epoch: [177][ 200/ 518] Overall Loss 2.848390 Objective Loss 2.848390 LR 0.000031 Time 0.206782
-2023-04-27 00:56:39,711 - Epoch: [177][ 250/ 518] Overall Loss 2.850863 Objective Loss 2.850863 LR 0.000031 Time 0.205863
-2023-04-27 00:56:49,773 - Epoch: [177][ 300/ 518] Overall Loss 2.852361 Objective Loss 2.852361 LR 0.000031 Time 0.205085
-2023-04-27 00:56:59,916 - Epoch: [177][ 350/ 518] Overall Loss 2.853101 Objective Loss 2.853101 LR 0.000031 Time 0.204761
-2023-04-27 00:57:10,097 - Epoch: [177][ 400/ 518] Overall Loss 2.849402 Objective Loss 2.849402 LR 0.000031 Time 0.204617
-2023-04-27 00:57:20,260 - Epoch: [177][ 450/ 518] Overall Loss 2.848345 Objective Loss 2.848345 LR 0.000031 Time 0.204462
-2023-04-27 00:57:30,514 - Epoch: [177][ 500/ 518] Overall Loss 2.854337 Objective Loss 2.854337 LR 0.000031 Time 0.204520
-2023-04-27 00:57:34,028 - Epoch: [177][ 518/ 518] Overall Loss 2.856589 Objective Loss 2.856589 LR 0.000031 Time 0.204196
-2023-04-27 00:57:34,100 - --- validate (epoch=177)-----------
-2023-04-27 00:57:34,100 - 4952 samples (32 per mini-batch)
-2023-04-27 00:57:40,894 - Epoch: [177][ 50/ 155] Loss 3.177972 mAP 0.457768
-2023-04-27 00:57:47,295 - Epoch: [177][ 100/ 155] Loss 3.154313 mAP 0.459177
-2023-04-27 00:57:53,622 - Epoch: [177][ 150/ 155] Loss 3.159434 mAP 0.451917
-2023-04-27 00:57:54,195 - Epoch: [177][ 155/ 155] Loss 3.154004 mAP 0.452733
-2023-04-27 00:57:54,274 - ==> mAP: 0.45273 Loss: 3.154
-
-2023-04-27 00:57:54,278 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 00:57:54,278 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 00:57:54,316 - 
-
-2023-04-27 00:57:54,316 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 00:58:05,120 - Epoch: [178][ 50/ 518] Overall Loss 2.839772 Objective Loss 2.839772 LR 0.000031 Time 0.216025
-2023-04-27 00:58:15,311 - Epoch: [178][ 100/ 518] Overall Loss 2.829271 Objective Loss 2.829271 LR 0.000031 Time 0.209910
-2023-04-27 00:58:25,501 - Epoch: [178][ 150/ 518] Overall Loss 2.837304 Objective Loss 2.837304 LR 0.000031 Time 0.207859
-2023-04-27 00:58:35,644 - Epoch: [178][ 200/ 518] Overall Loss 2.848324 Objective Loss 2.848324 LR 0.000031 Time 0.206601
-2023-04-27 00:58:45,789 - Epoch: [178][ 250/ 518] Overall Loss 2.843283 Objective Loss 2.843283 LR 0.000031 Time 0.205855
-2023-04-27 00:58:55,996 - Epoch: [178][ 300/ 518] Overall Loss 2.844621 Objective Loss 2.844621 LR 0.000031 Time 0.205562
-2023-04-27 00:59:06,183 - Epoch: [178][ 350/ 518] Overall Loss 2.845546 Objective Loss 2.845546 LR 0.000031 Time 0.205298
-2023-04-27 00:59:16,283 - Epoch: [178][ 400/ 518] Overall Loss 2.851094 Objective Loss 2.851094 LR 0.000031 Time 0.204881
-2023-04-27 00:59:26,462 - Epoch: [178][ 450/ 518] Overall Loss 2.846385 Objective Loss 2.846385 LR 0.000031 Time 0.204733
-2023-04-27 00:59:36,710 - Epoch: [178][ 500/ 518] Overall Loss 2.842960 Objective Loss 2.842960 LR 0.000031 Time 0.204753
-2023-04-27 00:59:40,270 - Epoch: [178][ 518/ 518] Overall Loss 2.846643 Objective Loss 2.846643 LR 0.000031 Time 0.204509
-2023-04-27 00:59:40,343 - --- validate (epoch=178)-----------
-2023-04-27 00:59:40,344 - 4952 samples (32 per mini-batch)
-2023-04-27 00:59:47,117 - Epoch: [178][ 50/ 155] Loss 3.147992 mAP 0.449395
-2023-04-27 00:59:53,545 - Epoch: [178][ 100/ 155] Loss 3.154691 mAP 0.456374
-2023-04-27 00:59:59,953 - Epoch: [178][ 150/ 155] Loss 3.157242 mAP 0.455650
-2023-04-27 01:00:00,531 - Epoch: [178][ 155/ 155] Loss 3.158572 mAP 0.455131
-2023-04-27 01:00:00,591 - ==> mAP: 0.45513 Loss: 3.159
-
-2023-04-27 01:00:00,595 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 01:00:00,595 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:00:00,632 - 
-
-2023-04-27 01:00:00,632 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:00:11,692 - Epoch: [179][ 50/ 518] Overall Loss 2.891975 Objective Loss 2.891975 LR 0.000031 Time 0.221142
-2023-04-27 01:00:21,819 - Epoch: [179][ 100/ 518] Overall Loss 2.859409 Objective Loss 2.859409 LR 0.000031 Time 0.211820
-2023-04-27 01:00:31,923 - Epoch: [179][ 150/ 518] Overall Loss 2.852555 Objective Loss 2.852555 LR 0.000031 Time 0.208567
-2023-04-27 01:00:42,115 - Epoch: [179][ 200/ 518] Overall Loss 2.842469 Objective Loss 2.842469 LR 0.000031 Time 0.207375
-2023-04-27 01:00:52,231 - Epoch: [179][ 250/ 518] Overall Loss 2.849682 Objective Loss 2.849682 LR 0.000031 Time 0.206355
-2023-04-27 01:01:02,433 - Epoch: [179][ 300/ 518] Overall Loss 2.840946 Objective Loss 2.840946 LR 0.000031 Time 0.205966
-2023-04-27 01:01:12,604 - Epoch: [179][ 350/ 518] Overall Loss 2.840138 Objective Loss 2.840138 LR 0.000031 Time 0.205598
-2023-04-27 01:01:22,748 - Epoch: [179][ 400/ 518] Overall Loss 2.847532 Objective Loss 2.847532 LR 0.000031 Time 0.205253
-2023-04-27 01:01:32,841 - Epoch: [179][ 450/ 518] Overall Loss 2.850035 Objective Loss 2.850035 LR 0.000031 Time 0.204872
-2023-04-27 01:01:42,935 - Epoch: [179][ 500/ 518] Overall Loss 2.846841 Objective Loss 2.846841 LR 0.000031 Time 0.204570
-2023-04-27 01:01:46,470 - Epoch: [179][ 518/ 518] Overall Loss 2.847783 Objective Loss 2.847783 LR 0.000031 Time 0.204285
-2023-04-27 01:01:46,541 - --- validate (epoch=179)-----------
-2023-04-27 01:01:46,542 - 4952 samples (32 per mini-batch)
-2023-04-27 01:01:53,297 - Epoch: [179][ 50/ 155] Loss 3.136510 mAP 0.456279
-2023-04-27 01:01:59,677 - Epoch: [179][ 100/ 155] Loss 3.144357 mAP 0.450723
-2023-04-27 01:02:06,049 - Epoch: [179][ 150/ 155] Loss 3.156402 mAP 0.446476
-2023-04-27 01:02:06,603 - Epoch: [179][ 155/ 155] Loss 3.154249 mAP 0.447559
-2023-04-27 01:02:06,670 - ==> mAP: 0.44756 Loss: 3.154
-
-2023-04-27 01:02:06,675 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 01:02:06,675 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:02:06,713 - 
-
-2023-04-27 01:02:06,713 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:02:17,727 - Epoch: [180][ 50/ 518] Overall Loss 2.878138 Objective Loss 2.878138 LR 0.000031 Time 0.220227
-2023-04-27 01:02:27,862 - Epoch: [180][ 100/ 518] Overall Loss 2.855433 Objective Loss 2.855433 LR 0.000031 Time 0.211444
-2023-04-27 01:02:38,006 - Epoch: [180][ 150/ 518] Overall Loss 2.861258 Objective Loss 2.861258 LR 0.000031 Time 0.208579
-2023-04-27 01:02:48,141 - Epoch: [180][ 200/ 518] Overall Loss 2.872487 Objective Loss 2.872487 LR 0.000031 Time 0.207104
-2023-04-27 01:02:58,384 - Epoch: [180][ 250/ 518] Overall Loss 2.860555 Objective Loss 2.860555 LR 0.000031 Time 0.206648
-2023-04-27 01:03:08,528 - Epoch: [180][ 300/ 518] Overall Loss 2.858964 Objective Loss 2.858964 LR 0.000031 Time 0.206015
-2023-04-27 01:03:18,670 - Epoch: [180][ 350/ 518] Overall Loss 2.859067 Objective Loss 2.859067 LR 0.000031 Time 0.205555
-2023-04-27 01:03:28,856 - Epoch: [180][ 400/ 518] Overall Loss 2.853989 Objective Loss 2.853989 LR 0.000031 Time 0.205321
-2023-04-27 01:03:39,004 - Epoch: [180][ 450/ 518] Overall Loss 2.851173 Objective Loss 2.851173 LR 0.000031 Time 0.205056
-2023-04-27 01:03:49,133 - Epoch: [180][ 500/ 518] Overall Loss 2.847819 Objective Loss 2.847819 LR 0.000031 Time 0.204804
-2023-04-27 01:03:52,653 - Epoch: [180][ 518/ 518] Overall Loss 2.847417 Objective Loss 2.847417 LR 0.000031 Time 0.204482
-2023-04-27 01:03:52,723 - --- validate (epoch=180)-----------
-2023-04-27 01:03:52,724 - 4952 samples (32 per mini-batch)
-2023-04-27 01:03:59,504 - Epoch: [180][ 50/ 155] Loss 3.153533 mAP 0.449393
-2023-04-27 01:04:05,843 - Epoch: [180][ 100/ 155] Loss 3.151324 mAP 0.438318
-2023-04-27 01:04:12,207 - Epoch: [180][ 150/ 155] Loss 3.153269 mAP 0.443711
-2023-04-27 01:04:12,779 - Epoch: [180][ 155/ 155] Loss 3.155839 mAP 0.442074
-2023-04-27 01:04:12,847 - ==> mAP: 0.44207 Loss: 3.156
-
-2023-04-27 01:04:12,851 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 01:04:12,851 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:04:12,888 - 
-
-2023-04-27 01:04:12,888 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:04:23,714 - Epoch: [181][ 50/ 518] Overall Loss 2.869825 Objective Loss 2.869825 LR 0.000031 Time 0.216457
-2023-04-27 01:04:33,824 - Epoch: [181][ 100/ 518] Overall Loss 2.852747 Objective Loss 2.852747 LR 0.000031 Time 0.209311
-2023-04-27 01:04:44,007 - Epoch: [181][ 150/ 518] Overall Loss 2.850897 Objective Loss 2.850897 LR 0.000031 Time 0.207416
-2023-04-27 01:04:54,238 - Epoch: [181][ 200/ 518] Overall Loss 2.851266 Objective Loss 2.851266 LR 0.000031 Time 0.206713
-2023-04-27 01:05:04,460 - Epoch: [181][ 250/ 518] Overall Loss 2.858597 Objective Loss 2.858597 LR 0.000031 Time 0.206248
-2023-04-27 01:05:14,715 - Epoch: [181][ 300/ 518] Overall Loss 2.860817 Objective Loss 2.860817 LR 0.000031 Time 0.206053
-2023-04-27 01:05:24,907 - Epoch: [181][ 350/ 518] Overall Loss 2.861054 Objective Loss 2.861054 LR 0.000031 Time 0.205732
-2023-04-27 01:05:35,093 - Epoch: [181][ 400/ 518] Overall Loss 2.860628 Objective Loss 2.860628 LR 0.000031 Time 0.205477
-2023-04-27 01:05:45,232 - Epoch: [181][ 450/ 518] Overall Loss 2.854948 Objective Loss 2.854948 LR 0.000031 Time 0.205173
-2023-04-27 01:05:55,404 - Epoch: [181][ 500/ 518] Overall Loss 2.855661 Objective Loss 2.855661 LR 0.000031 Time 0.204996
-2023-04-27 01:05:58,898 - Epoch: [181][ 518/ 518] Overall Loss 2.857159 Objective Loss 2.857159 LR 0.000031 Time 0.204617
-2023-04-27 01:05:58,968 - --- validate (epoch=181)-----------
-2023-04-27 01:05:58,969 - 4952 samples (32 per mini-batch)
-2023-04-27 01:06:05,708 - Epoch: [181][ 50/ 155] Loss 3.195268 mAP 0.442645
-2023-04-27 01:06:12,134 - Epoch: [181][ 100/ 155] Loss 3.154019 mAP 0.456952
-2023-04-27 01:06:18,519 - Epoch: [181][ 150/ 155] Loss 3.163407 mAP 0.453724
-2023-04-27 01:06:19,090 - Epoch: [181][ 155/ 155] Loss 3.160857 mAP 0.454575
-2023-04-27 01:06:19,166 - ==> mAP: 0.45457 Loss: 3.161
-
-2023-04-27 01:06:19,170 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 01:06:19,170 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:06:19,207 - 
-
-2023-04-27 01:06:19,207 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:06:30,155 - Epoch: [182][ 50/ 518] Overall Loss 2.871212 Objective Loss 2.871212 LR 0.000031 Time 0.218883
-2023-04-27 01:06:40,299 - Epoch: [182][ 100/ 518] Overall Loss 2.885053 Objective Loss 2.885053 LR 0.000031 Time 0.210871
-2023-04-27 01:06:50,578 - Epoch: [182][ 150/ 518] Overall Loss 2.871093 Objective Loss 2.871093 LR 0.000031 Time 0.209096
-2023-04-27 01:07:00,709 - Epoch: [182][ 200/ 518] Overall Loss 2.846563 Objective Loss 2.846563 LR 0.000031 Time 0.207469
-2023-04-27 01:07:10,875 - Epoch: [182][ 250/ 518] Overall Loss 2.851370 Objective Loss 2.851370 LR 0.000031 Time 0.206633
-2023-04-27 01:07:20,983 - Epoch: [182][ 300/ 518] Overall Loss 2.846879 Objective Loss 2.846879 LR 0.000031 Time 0.205882
-2023-04-27 01:07:31,101 - Epoch: [182][ 350/ 518] Overall Loss 2.849107 Objective Loss 2.849107 LR 0.000031 Time 0.205374
-2023-04-27 01:07:41,243 - Epoch: [182][ 400/ 518] Overall Loss 2.848077 Objective Loss 2.848077 LR 0.000031 Time 0.205051
-2023-04-27 01:07:51,356 - Epoch: [182][ 450/ 518] Overall Loss 2.852545 Objective Loss 2.852545 LR 0.000031 Time 0.204740
-2023-04-27 01:08:01,471 - Epoch: [182][ 500/ 518] Overall Loss 2.854116 Objective Loss 2.854116 LR 0.000031 Time 0.204491
-2023-04-27 01:08:04,988 - Epoch: [182][ 518/ 518] Overall Loss 2.854065 Objective Loss 2.854065 LR 0.000031 Time 0.204174
-2023-04-27 01:08:05,058 - --- validate (epoch=182)-----------
-2023-04-27 01:08:05,058 - 4952 samples (32 per mini-batch)
-2023-04-27 01:08:11,810 - Epoch: [182][ 50/ 155] Loss 3.152971 mAP 0.451391
-2023-04-27 01:08:18,182 - Epoch: [182][ 100/ 155] Loss 3.154191 mAP 0.450487
-2023-04-27 01:08:24,549 - Epoch: [182][ 150/ 155] Loss 3.153627 mAP 0.447665
-2023-04-27 01:08:25,110 - Epoch: [182][ 155/ 155] Loss 3.149786 mAP 0.446610
-2023-04-27 01:08:25,177 - ==> mAP: 0.44661 Loss: 3.150
-
-2023-04-27 01:08:25,181 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 01:08:25,181 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:08:25,219 - 
-
-2023-04-27 01:08:25,219 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:08:36,190 - Epoch: [183][ 50/ 518] Overall Loss 2.846082 Objective Loss 2.846082 LR 0.000031 Time 0.219352
-2023-04-27 01:08:46,341 - Epoch: [183][ 100/ 518] Overall Loss 2.830887 Objective Loss 2.830887 LR 0.000031 Time 0.211172
-2023-04-27 01:08:56,476 - Epoch: [183][ 150/ 518] Overall Loss 2.826195 Objective Loss 2.826195 LR 0.000031 Time 0.208339
-2023-04-27 01:09:06,633 - Epoch: [183][ 200/ 518] Overall Loss 2.842986 Objective Loss 2.842986 LR 0.000031 Time 0.207029
-2023-04-27 01:09:16,823 - Epoch: [183][ 250/ 518] Overall Loss 2.836924 Objective Loss 2.836924 LR 0.000031 Time 0.206377
-2023-04-27 01:09:27,035 - Epoch: [183][ 300/ 518] Overall Loss 2.837913 Objective Loss 2.837913 LR 0.000031 Time 0.206015
-2023-04-27 01:09:37,175 - Epoch: [183][ 350/ 518] Overall Loss 2.840553 Objective Loss 2.840553 LR 0.000031 Time 0.205551
-2023-04-27 01:09:47,358 - Epoch: [183][ 400/ 518] Overall Loss 2.843695 Objective Loss 2.843695 LR 0.000031 Time 0.205310
-2023-04-27 01:09:57,480 - Epoch: [183][ 450/ 518] Overall Loss 2.848939 Objective Loss 2.848939 LR 0.000031 Time 0.204989
-2023-04-27 01:10:07,757 - Epoch: [183][ 500/ 518] Overall Loss 2.849630 Objective Loss 2.849630 LR 0.000031 Time 0.205040
-2023-04-27 01:10:11,289 - Epoch: [183][ 518/ 518] Overall Loss 2.849810 Objective Loss 2.849810 LR 0.000031 Time 0.204732
-2023-04-27 01:10:11,360 - --- validate (epoch=183)-----------
-2023-04-27 01:10:11,361 - 4952 samples (32 per mini-batch)
-2023-04-27 01:10:18,124 - Epoch: [183][ 50/ 155] Loss 3.201459 mAP 0.440809
-2023-04-27 01:10:24,484 - Epoch: [183][ 100/ 155] Loss 3.172445 mAP 0.451911
-2023-04-27 01:10:30,830 - Epoch: [183][ 150/ 155] Loss 3.151166 mAP 0.453323
-2023-04-27 01:10:31,410 - Epoch: [183][ 155/ 155] Loss 3.152169 mAP 0.452763
-2023-04-27 01:10:31,491 - ==> mAP: 0.45276 Loss: 3.152
-
-2023-04-27 01:10:31,495 - ==> Best [mAP: 0.457598 vloss: 3.151359 Sparsity:0.00 Params: 2177088 on epoch: 168]
-2023-04-27 01:10:31,495 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:10:31,532 - 
-
-2023-04-27 01:10:31,532 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:10:42,384 - Epoch: [184][ 50/ 518] Overall Loss 2.893280 Objective Loss 2.893280 LR 0.000031 Time 0.216984
-2023-04-27 01:10:52,553 - Epoch: [184][ 100/ 518] Overall Loss 2.853276 Objective Loss 2.853276 LR 0.000031 Time 0.210164
-2023-04-27 01:11:02,713 - Epoch: [184][ 150/ 518] Overall Loss 2.863886 Objective Loss 2.863886 LR 0.000031 Time 0.207831
-2023-04-27 01:11:12,885 - Epoch: [184][ 200/ 518] Overall Loss 2.854734 Objective Loss 2.854734 LR 0.000031 Time 0.206729
-2023-04-27 01:11:23,037 - Epoch: [184][ 250/ 518] Overall Loss 2.855534 Objective Loss 2.855534 LR 0.000031 Time 0.205984
-2023-04-27 01:11:33,173 - Epoch: [184][ 300/ 518] Overall Loss 2.859732 Objective Loss 2.859732 LR 0.000031 Time 0.205433
-2023-04-27 01:11:43,391 - Epoch: [184][ 350/ 518] Overall Loss 2.857136 Objective Loss 2.857136 LR 0.000031 Time 0.205276
-2023-04-27 01:11:53,536 - Epoch: [184][ 400/ 518] Overall Loss 2.853813 Objective Loss 2.853813 LR 0.000031 Time 0.204975
-2023-04-27 01:12:03,659 - Epoch: [184][ 450/ 518] Overall Loss 2.854137 Objective Loss 2.854137 LR 0.000031 Time 0.204692
-2023-04-27 01:12:13,822 - Epoch: [184][ 500/ 518] Overall Loss 2.853566 Objective Loss 2.853566 LR 0.000031 Time 0.204545
-2023-04-27 01:12:17,367 - Epoch: [184][ 518/ 518] Overall Loss 2.855378 Objective Loss 2.855378 LR 0.000031 Time 0.204279
-2023-04-27 01:12:17,438 - --- validate (epoch=184)-----------
-2023-04-27 01:12:17,438 - 4952 samples (32 per mini-batch)
-2023-04-27 01:12:24,220 - Epoch: [184][ 50/ 155] Loss 3.151207 mAP 0.450805
-2023-04-27 01:12:30,674 - Epoch: [184][ 100/ 155] Loss 3.152210 mAP 0.460296
-2023-04-27 01:12:37,064 - Epoch: [184][ 150/ 155] Loss 3.154473 mAP 0.461911
-2023-04-27 01:12:37,633 - Epoch: [184][ 155/ 155] Loss 3.158213 mAP 0.461909
-2023-04-27 01:12:37,704 - ==> mAP: 0.46191 Loss: 3.158
-
-2023-04-27 01:12:37,708 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184]
-2023-04-27 01:12:37,708 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:12:37,761 - 
-
-2023-04-27 01:12:37,761 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:12:48,631 - Epoch: [185][ 50/ 518] Overall Loss 2.837221 Objective Loss 2.837221 LR 0.000031 Time 0.217330
-2023-04-27 01:12:58,797 - Epoch: [185][ 100/ 518] Overall Loss 2.851356 Objective Loss 2.851356 LR 0.000031 Time 0.210311
-2023-04-27 01:13:09,094 - Epoch: [185][ 150/ 518] Overall Loss 2.851809 Objective Loss 2.851809 LR 0.000031 Time 0.208845
-2023-04-27 01:13:19,194 - Epoch: [185][ 200/ 518] Overall Loss 2.841139 Objective Loss 2.841139 LR 0.000031 Time 0.207124
-2023-04-27 01:13:29,453 - Epoch: [185][ 250/ 518] Overall Loss 2.848882 Objective Loss 2.848882 LR 0.000031 Time 0.206727
-2023-04-27 01:13:39,715 - Epoch: [185][ 300/ 518] Overall Loss 2.851980 Objective Loss 2.851980 LR 0.000031 Time 0.206475
-2023-04-27 01:13:49,794 - Epoch: [185][ 350/ 518] Overall Loss 2.852715 Objective Loss 2.852715 LR 0.000031 Time 0.205772
-2023-04-27 01:14:00,023 - Epoch: [185][ 400/ 518] Overall Loss 2.860233 Objective Loss 2.860233 LR 0.000031 Time 0.205619
-2023-04-27 01:14:10,194 - Epoch: [185][ 450/ 518] Overall Loss 2.856591 Objective Loss 2.856591 LR 0.000031 Time 0.205371
-2023-04-27 01:14:20,395 - Epoch: [185][ 500/ 518] Overall Loss 2.854191 Objective Loss 2.854191 LR 0.000031 Time 0.205233
-2023-04-27 01:14:23,970 - Epoch: [185][ 518/ 518] Overall Loss 2.856522 Objective Loss 2.856522 LR 0.000031 Time 0.205001
-2023-04-27 01:14:24,044 - --- validate (epoch=185)-----------
-2023-04-27 01:14:24,044 - 4952 samples (32 per mini-batch)
-2023-04-27 01:14:30,840 - Epoch: [185][ 50/ 155] Loss 3.111425 mAP 0.466269
-2023-04-27 01:14:37,241 - Epoch: [185][ 100/ 155] Loss 3.133747 mAP 0.453389
-2023-04-27 01:14:43,569 - Epoch: [185][ 150/ 155] Loss 
3.146134 mAP 0.453663 -2023-04-27 01:14:44,139 - Epoch: [185][ 155/ 155] Loss 3.144814 mAP 0.452907 -2023-04-27 01:14:44,217 - ==> mAP: 0.45291 Loss: 3.145 - -2023-04-27 01:14:44,221 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:14:44,221 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:14:44,259 - - -2023-04-27 01:14:44,260 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:14:55,299 - Epoch: [186][ 50/ 518] Overall Loss 2.886301 Objective Loss 2.886301 LR 0.000031 Time 0.220734 -2023-04-27 01:15:05,491 - Epoch: [186][ 100/ 518] Overall Loss 2.876928 Objective Loss 2.876928 LR 0.000031 Time 0.212265 -2023-04-27 01:15:15,739 - Epoch: [186][ 150/ 518] Overall Loss 2.867832 Objective Loss 2.867832 LR 0.000031 Time 0.209825 -2023-04-27 01:15:25,950 - Epoch: [186][ 200/ 518] Overall Loss 2.860819 Objective Loss 2.860819 LR 0.000031 Time 0.208414 -2023-04-27 01:15:36,162 - Epoch: [186][ 250/ 518] Overall Loss 2.863076 Objective Loss 2.863076 LR 0.000031 Time 0.207571 -2023-04-27 01:15:46,268 - Epoch: [186][ 300/ 518] Overall Loss 2.862186 Objective Loss 2.862186 LR 0.000031 Time 0.206658 -2023-04-27 01:15:56,404 - Epoch: [186][ 350/ 518] Overall Loss 2.864558 Objective Loss 2.864558 LR 0.000031 Time 0.206091 -2023-04-27 01:16:06,559 - Epoch: [186][ 400/ 518] Overall Loss 2.864644 Objective Loss 2.864644 LR 0.000031 Time 0.205712 -2023-04-27 01:16:16,707 - Epoch: [186][ 450/ 518] Overall Loss 2.855868 Objective Loss 2.855868 LR 0.000031 Time 0.205404 -2023-04-27 01:16:26,877 - Epoch: [186][ 500/ 518] Overall Loss 2.859231 Objective Loss 2.859231 LR 0.000031 Time 0.205200 -2023-04-27 01:16:30,410 - Epoch: [186][ 518/ 518] Overall Loss 2.857032 Objective Loss 2.857032 LR 0.000031 Time 0.204889 -2023-04-27 01:16:30,483 - --- validate (epoch=186)----------- -2023-04-27 01:16:30,483 - 4952 samples (32 per mini-batch) -2023-04-27 01:16:37,284 - Epoch: [186][ 50/ 155] Loss 
3.131826 mAP 0.473306 -2023-04-27 01:16:43,673 - Epoch: [186][ 100/ 155] Loss 3.141429 mAP 0.463318 -2023-04-27 01:16:50,029 - Epoch: [186][ 150/ 155] Loss 3.151530 mAP 0.457005 -2023-04-27 01:16:50,600 - Epoch: [186][ 155/ 155] Loss 3.146144 mAP 0.457250 -2023-04-27 01:16:50,669 - ==> mAP: 0.45725 Loss: 3.146 - -2023-04-27 01:16:50,672 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:16:50,672 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:16:50,710 - - -2023-04-27 01:16:50,710 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:17:01,712 - Epoch: [187][ 50/ 518] Overall Loss 2.844389 Objective Loss 2.844389 LR 0.000031 Time 0.219975 -2023-04-27 01:17:11,857 - Epoch: [187][ 100/ 518] Overall Loss 2.846278 Objective Loss 2.846278 LR 0.000031 Time 0.211429 -2023-04-27 01:17:22,040 - Epoch: [187][ 150/ 518] Overall Loss 2.852587 Objective Loss 2.852587 LR 0.000031 Time 0.208824 -2023-04-27 01:17:32,159 - Epoch: [187][ 200/ 518] Overall Loss 2.840308 Objective Loss 2.840308 LR 0.000031 Time 0.207207 -2023-04-27 01:17:42,371 - Epoch: [187][ 250/ 518] Overall Loss 2.848249 Objective Loss 2.848249 LR 0.000031 Time 0.206606 -2023-04-27 01:17:52,560 - Epoch: [187][ 300/ 518] Overall Loss 2.844085 Objective Loss 2.844085 LR 0.000031 Time 0.206131 -2023-04-27 01:18:02,780 - Epoch: [187][ 350/ 518] Overall Loss 2.849883 Objective Loss 2.849883 LR 0.000031 Time 0.205879 -2023-04-27 01:18:12,951 - Epoch: [187][ 400/ 518] Overall Loss 2.847591 Objective Loss 2.847591 LR 0.000031 Time 0.205567 -2023-04-27 01:18:23,041 - Epoch: [187][ 450/ 518] Overall Loss 2.845680 Objective Loss 2.845680 LR 0.000031 Time 0.205144 -2023-04-27 01:18:33,221 - Epoch: [187][ 500/ 518] Overall Loss 2.851157 Objective Loss 2.851157 LR 0.000031 Time 0.204987 -2023-04-27 01:18:36,753 - Epoch: [187][ 518/ 518] Overall Loss 2.850489 Objective Loss 2.850489 LR 0.000031 Time 0.204682 -2023-04-27 01:18:36,825 
- --- validate (epoch=187)----------- -2023-04-27 01:18:36,826 - 4952 samples (32 per mini-batch) -2023-04-27 01:18:43,526 - Epoch: [187][ 50/ 155] Loss 3.153069 mAP 0.462421 -2023-04-27 01:18:49,905 - Epoch: [187][ 100/ 155] Loss 3.144052 mAP 0.454471 -2023-04-27 01:18:56,224 - Epoch: [187][ 150/ 155] Loss 3.151966 mAP 0.451016 -2023-04-27 01:18:56,793 - Epoch: [187][ 155/ 155] Loss 3.152565 mAP 0.450277 -2023-04-27 01:18:56,855 - ==> mAP: 0.45028 Loss: 3.153 - -2023-04-27 01:18:56,859 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:18:56,859 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:18:56,897 - - -2023-04-27 01:18:56,897 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:19:07,742 - Epoch: [188][ 50/ 518] Overall Loss 2.931894 Objective Loss 2.931894 LR 0.000031 Time 0.216848 -2023-04-27 01:19:17,869 - Epoch: [188][ 100/ 518] Overall Loss 2.892878 Objective Loss 2.892878 LR 0.000031 Time 0.209679 -2023-04-27 01:19:27,986 - Epoch: [188][ 150/ 518] Overall Loss 2.862235 Objective Loss 2.862235 LR 0.000031 Time 0.207217 -2023-04-27 01:19:38,135 - Epoch: [188][ 200/ 518] Overall Loss 2.865723 Objective Loss 2.865723 LR 0.000031 Time 0.206153 -2023-04-27 01:19:48,367 - Epoch: [188][ 250/ 518] Overall Loss 2.866684 Objective Loss 2.866684 LR 0.000031 Time 0.205843 -2023-04-27 01:19:58,477 - Epoch: [188][ 300/ 518] Overall Loss 2.866082 Objective Loss 2.866082 LR 0.000031 Time 0.205231 -2023-04-27 01:20:08,650 - Epoch: [188][ 350/ 518] Overall Loss 2.854587 Objective Loss 2.854587 LR 0.000031 Time 0.204973 -2023-04-27 01:20:18,720 - Epoch: [188][ 400/ 518] Overall Loss 2.853529 Objective Loss 2.853529 LR 0.000031 Time 0.204523 -2023-04-27 01:20:28,844 - Epoch: [188][ 450/ 518] Overall Loss 2.850120 Objective Loss 2.850120 LR 0.000031 Time 0.204291 -2023-04-27 01:20:38,978 - Epoch: [188][ 500/ 518] Overall Loss 2.850032 Objective Loss 2.850032 LR 0.000031 Time 
0.204127 -2023-04-27 01:20:42,531 - Epoch: [188][ 518/ 518] Overall Loss 2.851702 Objective Loss 2.851702 LR 0.000031 Time 0.203891 -2023-04-27 01:20:42,602 - --- validate (epoch=188)----------- -2023-04-27 01:20:42,602 - 4952 samples (32 per mini-batch) -2023-04-27 01:20:49,295 - Epoch: [188][ 50/ 155] Loss 3.172165 mAP 0.446324 -2023-04-27 01:20:55,646 - Epoch: [188][ 100/ 155] Loss 3.172619 mAP 0.452452 -2023-04-27 01:21:02,011 - Epoch: [188][ 150/ 155] Loss 3.160474 mAP 0.453511 -2023-04-27 01:21:02,595 - Epoch: [188][ 155/ 155] Loss 3.152344 mAP 0.455549 -2023-04-27 01:21:02,664 - ==> mAP: 0.45555 Loss: 3.152 - -2023-04-27 01:21:02,667 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:21:02,668 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:21:02,705 - - -2023-04-27 01:21:02,706 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:21:13,657 - Epoch: [189][ 50/ 518] Overall Loss 2.804240 Objective Loss 2.804240 LR 0.000031 Time 0.218979 -2023-04-27 01:21:23,860 - Epoch: [189][ 100/ 518] Overall Loss 2.837194 Objective Loss 2.837194 LR 0.000031 Time 0.211494 -2023-04-27 01:21:34,011 - Epoch: [189][ 150/ 518] Overall Loss 2.838094 Objective Loss 2.838094 LR 0.000031 Time 0.208663 -2023-04-27 01:21:44,162 - Epoch: [189][ 200/ 518] Overall Loss 2.834400 Objective Loss 2.834400 LR 0.000031 Time 0.207245 -2023-04-27 01:21:54,333 - Epoch: [189][ 250/ 518] Overall Loss 2.837773 Objective Loss 2.837773 LR 0.000031 Time 0.206473 -2023-04-27 01:22:04,495 - Epoch: [189][ 300/ 518] Overall Loss 2.840652 Objective Loss 2.840652 LR 0.000031 Time 0.205929 -2023-04-27 01:22:14,693 - Epoch: [189][ 350/ 518] Overall Loss 2.845474 Objective Loss 2.845474 LR 0.000031 Time 0.205642 -2023-04-27 01:22:24,826 - Epoch: [189][ 400/ 518] Overall Loss 2.834355 Objective Loss 2.834355 LR 0.000031 Time 0.205264 -2023-04-27 01:22:34,977 - Epoch: [189][ 450/ 518] Overall Loss 2.833199 Objective 
Loss 2.833199 LR 0.000031 Time 0.205013 -2023-04-27 01:22:45,140 - Epoch: [189][ 500/ 518] Overall Loss 2.838412 Objective Loss 2.838412 LR 0.000031 Time 0.204834 -2023-04-27 01:22:48,689 - Epoch: [189][ 518/ 518] Overall Loss 2.841608 Objective Loss 2.841608 LR 0.000031 Time 0.204565 -2023-04-27 01:22:48,761 - --- validate (epoch=189)----------- -2023-04-27 01:22:48,762 - 4952 samples (32 per mini-batch) -2023-04-27 01:22:55,457 - Epoch: [189][ 50/ 155] Loss 3.166775 mAP 0.436071 -2023-04-27 01:23:01,887 - Epoch: [189][ 100/ 155] Loss 3.168454 mAP 0.444101 -2023-04-27 01:23:08,235 - Epoch: [189][ 150/ 155] Loss 3.155470 mAP 0.444436 -2023-04-27 01:23:08,801 - Epoch: [189][ 155/ 155] Loss 3.151289 mAP 0.444036 -2023-04-27 01:23:08,892 - ==> mAP: 0.44404 Loss: 3.151 - -2023-04-27 01:23:08,896 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:23:08,896 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:23:08,934 - - -2023-04-27 01:23:08,934 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:23:19,919 - Epoch: [190][ 50/ 518] Overall Loss 2.895856 Objective Loss 2.895856 LR 0.000031 Time 0.219659 -2023-04-27 01:23:30,113 - Epoch: [190][ 100/ 518] Overall Loss 2.871577 Objective Loss 2.871577 LR 0.000031 Time 0.211747 -2023-04-27 01:23:40,212 - Epoch: [190][ 150/ 518] Overall Loss 2.842779 Objective Loss 2.842779 LR 0.000031 Time 0.208479 -2023-04-27 01:23:50,397 - Epoch: [190][ 200/ 518] Overall Loss 2.842301 Objective Loss 2.842301 LR 0.000031 Time 0.207275 -2023-04-27 01:24:00,603 - Epoch: [190][ 250/ 518] Overall Loss 2.851153 Objective Loss 2.851153 LR 0.000031 Time 0.206639 -2023-04-27 01:24:10,751 - Epoch: [190][ 300/ 518] Overall Loss 2.855142 Objective Loss 2.855142 LR 0.000031 Time 0.206020 -2023-04-27 01:24:20,936 - Epoch: [190][ 350/ 518] Overall Loss 2.846511 Objective Loss 2.846511 LR 0.000031 Time 0.205683 -2023-04-27 01:24:31,199 - Epoch: [190][ 400/ 518] 
Overall Loss 2.851191 Objective Loss 2.851191 LR 0.000031 Time 0.205626 -2023-04-27 01:24:41,369 - Epoch: [190][ 450/ 518] Overall Loss 2.851951 Objective Loss 2.851951 LR 0.000031 Time 0.205376 -2023-04-27 01:24:51,483 - Epoch: [190][ 500/ 518] Overall Loss 2.849957 Objective Loss 2.849957 LR 0.000031 Time 0.205062 -2023-04-27 01:24:55,004 - Epoch: [190][ 518/ 518] Overall Loss 2.846914 Objective Loss 2.846914 LR 0.000031 Time 0.204734 -2023-04-27 01:24:55,076 - --- validate (epoch=190)----------- -2023-04-27 01:24:55,077 - 4952 samples (32 per mini-batch) -2023-04-27 01:25:01,798 - Epoch: [190][ 50/ 155] Loss 3.179151 mAP 0.438920 -2023-04-27 01:25:08,157 - Epoch: [190][ 100/ 155] Loss 3.151781 mAP 0.439905 -2023-04-27 01:25:14,509 - Epoch: [190][ 150/ 155] Loss 3.158272 mAP 0.444665 -2023-04-27 01:25:15,075 - Epoch: [190][ 155/ 155] Loss 3.152659 mAP 0.448002 -2023-04-27 01:25:15,141 - ==> mAP: 0.44800 Loss: 3.153 - -2023-04-27 01:25:15,145 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:25:15,145 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:25:15,182 - - -2023-04-27 01:25:15,182 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:25:26,143 - Epoch: [191][ 50/ 518] Overall Loss 2.922661 Objective Loss 2.922661 LR 0.000031 Time 0.219162 -2023-04-27 01:25:36,297 - Epoch: [191][ 100/ 518] Overall Loss 2.879362 Objective Loss 2.879362 LR 0.000031 Time 0.211099 -2023-04-27 01:25:46,356 - Epoch: [191][ 150/ 518] Overall Loss 2.875701 Objective Loss 2.875701 LR 0.000031 Time 0.207779 -2023-04-27 01:25:56,496 - Epoch: [191][ 200/ 518] Overall Loss 2.880409 Objective Loss 2.880409 LR 0.000031 Time 0.206531 -2023-04-27 01:26:06,701 - Epoch: [191][ 250/ 518] Overall Loss 2.878463 Objective Loss 2.878463 LR 0.000031 Time 0.206036 -2023-04-27 01:26:16,906 - Epoch: [191][ 300/ 518] Overall Loss 2.879858 Objective Loss 2.879858 LR 0.000031 Time 0.205706 -2023-04-27 
01:26:27,136 - Epoch: [191][ 350/ 518] Overall Loss 2.875452 Objective Loss 2.875452 LR 0.000031 Time 0.205544 -2023-04-27 01:26:37,342 - Epoch: [191][ 400/ 518] Overall Loss 2.870660 Objective Loss 2.870660 LR 0.000031 Time 0.205363 -2023-04-27 01:26:47,467 - Epoch: [191][ 450/ 518] Overall Loss 2.865997 Objective Loss 2.865997 LR 0.000031 Time 0.205041 -2023-04-27 01:26:57,658 - Epoch: [191][ 500/ 518] Overall Loss 2.862210 Objective Loss 2.862210 LR 0.000031 Time 0.204916 -2023-04-27 01:27:01,187 - Epoch: [191][ 518/ 518] Overall Loss 2.858150 Objective Loss 2.858150 LR 0.000031 Time 0.204603 -2023-04-27 01:27:01,259 - --- validate (epoch=191)----------- -2023-04-27 01:27:01,259 - 4952 samples (32 per mini-batch) -2023-04-27 01:27:07,988 - Epoch: [191][ 50/ 155] Loss 3.133891 mAP 0.455387 -2023-04-27 01:27:14,304 - Epoch: [191][ 100/ 155] Loss 3.132436 mAP 0.449379 -2023-04-27 01:27:20,671 - Epoch: [191][ 150/ 155] Loss 3.152535 mAP 0.451503 -2023-04-27 01:27:21,237 - Epoch: [191][ 155/ 155] Loss 3.156425 mAP 0.450695 -2023-04-27 01:27:21,306 - ==> mAP: 0.45069 Loss: 3.156 - -2023-04-27 01:27:21,310 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:27:21,310 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:27:21,346 - - -2023-04-27 01:27:21,346 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:27:32,163 - Epoch: [192][ 50/ 518] Overall Loss 2.851368 Objective Loss 2.851368 LR 0.000031 Time 0.216270 -2023-04-27 01:27:42,222 - Epoch: [192][ 100/ 518] Overall Loss 2.830917 Objective Loss 2.830917 LR 0.000031 Time 0.208713 -2023-04-27 01:27:52,368 - Epoch: [192][ 150/ 518] Overall Loss 2.844201 Objective Loss 2.844201 LR 0.000031 Time 0.206769 -2023-04-27 01:28:02,529 - Epoch: [192][ 200/ 518] Overall Loss 2.835183 Objective Loss 2.835183 LR 0.000031 Time 0.205873 -2023-04-27 01:28:12,660 - Epoch: [192][ 250/ 518] Overall Loss 2.833196 Objective Loss 2.833196 LR 
0.000031 Time 0.205215 -2023-04-27 01:28:22,815 - Epoch: [192][ 300/ 518] Overall Loss 2.840903 Objective Loss 2.840903 LR 0.000031 Time 0.204857 -2023-04-27 01:28:32,970 - Epoch: [192][ 350/ 518] Overall Loss 2.838638 Objective Loss 2.838638 LR 0.000031 Time 0.204602 -2023-04-27 01:28:43,225 - Epoch: [192][ 400/ 518] Overall Loss 2.846859 Objective Loss 2.846859 LR 0.000031 Time 0.204660 -2023-04-27 01:28:53,417 - Epoch: [192][ 450/ 518] Overall Loss 2.847085 Objective Loss 2.847085 LR 0.000031 Time 0.204565 -2023-04-27 01:29:03,617 - Epoch: [192][ 500/ 518] Overall Loss 2.846199 Objective Loss 2.846199 LR 0.000031 Time 0.204506 -2023-04-27 01:29:07,144 - Epoch: [192][ 518/ 518] Overall Loss 2.845869 Objective Loss 2.845869 LR 0.000031 Time 0.204206 -2023-04-27 01:29:07,219 - --- validate (epoch=192)----------- -2023-04-27 01:29:07,219 - 4952 samples (32 per mini-batch) -2023-04-27 01:29:13,951 - Epoch: [192][ 50/ 155] Loss 3.145129 mAP 0.436973 -2023-04-27 01:29:20,367 - Epoch: [192][ 100/ 155] Loss 3.171665 mAP 0.448441 -2023-04-27 01:29:26,685 - Epoch: [192][ 150/ 155] Loss 3.154648 mAP 0.444636 -2023-04-27 01:29:27,264 - Epoch: [192][ 155/ 155] Loss 3.155434 mAP 0.444159 -2023-04-27 01:29:27,334 - ==> mAP: 0.44416 Loss: 3.155 - -2023-04-27 01:29:27,338 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:29:27,338 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:29:27,376 - - -2023-04-27 01:29:27,376 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:29:38,409 - Epoch: [193][ 50/ 518] Overall Loss 2.850758 Objective Loss 2.850758 LR 0.000031 Time 0.220608 -2023-04-27 01:29:48,508 - Epoch: [193][ 100/ 518] Overall Loss 2.847089 Objective Loss 2.847089 LR 0.000031 Time 0.211281 -2023-04-27 01:29:58,749 - Epoch: [193][ 150/ 518] Overall Loss 2.846376 Objective Loss 2.846376 LR 0.000031 Time 0.209116 -2023-04-27 01:30:08,830 - Epoch: [193][ 200/ 518] Overall Loss 
2.853427 Objective Loss 2.853427 LR 0.000031 Time 0.207234 -2023-04-27 01:30:19,011 - Epoch: [193][ 250/ 518] Overall Loss 2.845374 Objective Loss 2.845374 LR 0.000031 Time 0.206505 -2023-04-27 01:30:29,101 - Epoch: [193][ 300/ 518] Overall Loss 2.852757 Objective Loss 2.852757 LR 0.000031 Time 0.205713 -2023-04-27 01:30:39,273 - Epoch: [193][ 350/ 518] Overall Loss 2.845641 Objective Loss 2.845641 LR 0.000031 Time 0.205383 -2023-04-27 01:30:49,348 - Epoch: [193][ 400/ 518] Overall Loss 2.849585 Objective Loss 2.849585 LR 0.000031 Time 0.204894 -2023-04-27 01:30:59,486 - Epoch: [193][ 450/ 518] Overall Loss 2.846900 Objective Loss 2.846900 LR 0.000031 Time 0.204653 -2023-04-27 01:31:09,647 - Epoch: [193][ 500/ 518] Overall Loss 2.847202 Objective Loss 2.847202 LR 0.000031 Time 0.204508 -2023-04-27 01:31:13,166 - Epoch: [193][ 518/ 518] Overall Loss 2.849824 Objective Loss 2.849824 LR 0.000031 Time 0.204193 -2023-04-27 01:31:13,237 - --- validate (epoch=193)----------- -2023-04-27 01:31:13,237 - 4952 samples (32 per mini-batch) -2023-04-27 01:31:19,945 - Epoch: [193][ 50/ 155] Loss 3.174259 mAP 0.432159 -2023-04-27 01:31:26,358 - Epoch: [193][ 100/ 155] Loss 3.143492 mAP 0.448132 -2023-04-27 01:31:32,719 - Epoch: [193][ 150/ 155] Loss 3.152541 mAP 0.453739 -2023-04-27 01:31:33,292 - Epoch: [193][ 155/ 155] Loss 3.152665 mAP 0.453489 -2023-04-27 01:31:33,361 - ==> mAP: 0.45349 Loss: 3.153 - -2023-04-27 01:31:33,365 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:31:33,365 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:31:33,403 - - -2023-04-27 01:31:33,403 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:31:44,283 - Epoch: [194][ 50/ 518] Overall Loss 2.846080 Objective Loss 2.846080 LR 0.000031 Time 0.217545 -2023-04-27 01:31:54,519 - Epoch: [194][ 100/ 518] Overall Loss 2.846407 Objective Loss 2.846407 LR 0.000031 Time 0.211120 -2023-04-27 01:32:04,743 - Epoch: 
[194][ 150/ 518] Overall Loss 2.833758 Objective Loss 2.833758 LR 0.000031 Time 0.208891 -2023-04-27 01:32:14,941 - Epoch: [194][ 200/ 518] Overall Loss 2.820361 Objective Loss 2.820361 LR 0.000031 Time 0.207649 -2023-04-27 01:32:25,160 - Epoch: [194][ 250/ 518] Overall Loss 2.832167 Objective Loss 2.832167 LR 0.000031 Time 0.206992 -2023-04-27 01:32:35,327 - Epoch: [194][ 300/ 518] Overall Loss 2.831883 Objective Loss 2.831883 LR 0.000031 Time 0.206376 -2023-04-27 01:32:45,465 - Epoch: [194][ 350/ 518] Overall Loss 2.837512 Objective Loss 2.837512 LR 0.000031 Time 0.205854 -2023-04-27 01:32:55,595 - Epoch: [194][ 400/ 518] Overall Loss 2.837655 Objective Loss 2.837655 LR 0.000031 Time 0.205445 -2023-04-27 01:33:05,724 - Epoch: [194][ 450/ 518] Overall Loss 2.842819 Objective Loss 2.842819 LR 0.000031 Time 0.205123 -2023-04-27 01:33:15,851 - Epoch: [194][ 500/ 518] Overall Loss 2.841317 Objective Loss 2.841317 LR 0.000031 Time 0.204861 -2023-04-27 01:33:19,389 - Epoch: [194][ 518/ 518] Overall Loss 2.845710 Objective Loss 2.845710 LR 0.000031 Time 0.204571 -2023-04-27 01:33:19,460 - --- validate (epoch=194)----------- -2023-04-27 01:33:19,460 - 4952 samples (32 per mini-batch) -2023-04-27 01:33:26,194 - Epoch: [194][ 50/ 155] Loss 3.137425 mAP 0.459755 -2023-04-27 01:33:32,619 - Epoch: [194][ 100/ 155] Loss 3.125061 mAP 0.470568 -2023-04-27 01:33:39,076 - Epoch: [194][ 150/ 155] Loss 3.152531 mAP 0.458668 -2023-04-27 01:33:39,646 - Epoch: [194][ 155/ 155] Loss 3.154230 mAP 0.458363 -2023-04-27 01:33:39,724 - ==> mAP: 0.45836 Loss: 3.154 - -2023-04-27 01:33:39,727 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:33:39,727 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:33:39,765 - - -2023-04-27 01:33:39,765 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:33:50,830 - Epoch: [195][ 50/ 518] Overall Loss 2.826105 Objective Loss 2.826105 LR 0.000031 Time 0.221246 
-2023-04-27 01:34:01,033 - Epoch: [195][ 100/ 518] Overall Loss 2.843848 Objective Loss 2.843848 LR 0.000031 Time 0.212633 -2023-04-27 01:34:11,222 - Epoch: [195][ 150/ 518] Overall Loss 2.841005 Objective Loss 2.841005 LR 0.000031 Time 0.209668 -2023-04-27 01:34:21,405 - Epoch: [195][ 200/ 518] Overall Loss 2.843097 Objective Loss 2.843097 LR 0.000031 Time 0.208161 -2023-04-27 01:34:31,484 - Epoch: [195][ 250/ 518] Overall Loss 2.839411 Objective Loss 2.839411 LR 0.000031 Time 0.206838 -2023-04-27 01:34:41,717 - Epoch: [195][ 300/ 518] Overall Loss 2.840822 Objective Loss 2.840822 LR 0.000031 Time 0.206469 -2023-04-27 01:34:51,886 - Epoch: [195][ 350/ 518] Overall Loss 2.838113 Objective Loss 2.838113 LR 0.000031 Time 0.206023 -2023-04-27 01:35:02,064 - Epoch: [195][ 400/ 518] Overall Loss 2.844344 Objective Loss 2.844344 LR 0.000031 Time 0.205712 -2023-04-27 01:35:12,176 - Epoch: [195][ 450/ 518] Overall Loss 2.848754 Objective Loss 2.848754 LR 0.000031 Time 0.205321 -2023-04-27 01:35:22,322 - Epoch: [195][ 500/ 518] Overall Loss 2.844441 Objective Loss 2.844441 LR 0.000031 Time 0.205077 -2023-04-27 01:35:25,845 - Epoch: [195][ 518/ 518] Overall Loss 2.850044 Objective Loss 2.850044 LR 0.000031 Time 0.204752 -2023-04-27 01:35:25,917 - --- validate (epoch=195)----------- -2023-04-27 01:35:25,917 - 4952 samples (32 per mini-batch) -2023-04-27 01:35:32,661 - Epoch: [195][ 50/ 155] Loss 3.128942 mAP 0.442595 -2023-04-27 01:35:39,028 - Epoch: [195][ 100/ 155] Loss 3.127459 mAP 0.443198 -2023-04-27 01:35:45,404 - Epoch: [195][ 150/ 155] Loss 3.140874 mAP 0.444937 -2023-04-27 01:35:45,965 - Epoch: [195][ 155/ 155] Loss 3.142307 mAP 0.443640 -2023-04-27 01:35:46,054 - ==> mAP: 0.44364 Loss: 3.142 - -2023-04-27 01:35:46,058 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] -2023-04-27 01:35:46,058 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:35:46,095 - - -2023-04-27 01:35:46,095 - Training epoch: 
16551 samples (32 per mini-batch) -2023-04-27 01:35:56,917 - Epoch: [196][ 50/ 518] Overall Loss 2.922650 Objective Loss 2.922650 LR 0.000031 Time 0.216383 -2023-04-27 01:36:07,055 - Epoch: [196][ 100/ 518] Overall Loss 2.865078 Objective Loss 2.865078 LR 0.000031 Time 0.209556 -2023-04-27 01:36:17,149 - Epoch: [196][ 150/ 518] Overall Loss 2.828801 Objective Loss 2.828801 LR 0.000031 Time 0.206985 -2023-04-27 01:36:27,342 - Epoch: [196][ 200/ 518] Overall Loss 2.826317 Objective Loss 2.826317 LR 0.000031 Time 0.206196 -2023-04-27 01:36:37,469 - Epoch: [196][ 250/ 518] Overall Loss 2.831596 Objective Loss 2.831596 LR 0.000031 Time 0.205456 -2023-04-27 01:36:47,747 - Epoch: [196][ 300/ 518] Overall Loss 2.837360 Objective Loss 2.837360 LR 0.000031 Time 0.205469 -2023-04-27 01:36:57,894 - Epoch: [196][ 350/ 518] Overall Loss 2.837445 Objective Loss 2.837445 LR 0.000031 Time 0.205103 -2023-04-27 01:37:07,983 - Epoch: [196][ 400/ 518] Overall Loss 2.838191 Objective Loss 2.838191 LR 0.000031 Time 0.204682 -2023-04-27 01:37:18,163 - Epoch: [196][ 450/ 518] Overall Loss 2.834695 Objective Loss 2.834695 LR 0.000031 Time 0.204559 -2023-04-27 01:37:28,254 - Epoch: [196][ 500/ 518] Overall Loss 2.837245 Objective Loss 2.837245 LR 0.000031 Time 0.204283 -2023-04-27 01:37:31,777 - Epoch: [196][ 518/ 518] Overall Loss 2.840966 Objective Loss 2.840966 LR 0.000031 Time 0.203984 -2023-04-27 01:37:31,848 - --- validate (epoch=196)----------- -2023-04-27 01:37:31,848 - 4952 samples (32 per mini-batch) -2023-04-27 01:37:38,600 - Epoch: [196][ 50/ 155] Loss 3.144156 mAP 0.455038 -2023-04-27 01:37:44,997 - Epoch: [196][ 100/ 155] Loss 3.165642 mAP 0.458116 -2023-04-27 01:37:51,385 - Epoch: [196][ 150/ 155] Loss 3.149983 mAP 0.455706 -2023-04-27 01:37:51,972 - Epoch: [196][ 155/ 155] Loss 3.148419 mAP 0.455973 -2023-04-27 01:37:52,048 - ==> mAP: 0.45597 Loss: 3.148 - -2023-04-27 01:37:52,051 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184] 
-2023-04-27 01:37:52,051 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 01:37:52,089 - - -2023-04-27 01:37:52,089 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 01:38:03,036 - Epoch: [197][ 50/ 518] Overall Loss 2.855136 Objective Loss 2.855136 LR 0.000031 Time 0.218869 -2023-04-27 01:38:13,125 - Epoch: [197][ 100/ 518] Overall Loss 2.848036 Objective Loss 2.848036 LR 0.000031 Time 0.210314 -2023-04-27 01:38:23,335 - Epoch: [197][ 150/ 518] Overall Loss 2.855982 Objective Loss 2.855982 LR 0.000031 Time 0.208261 -2023-04-27 01:38:33,504 - Epoch: [197][ 200/ 518] Overall Loss 2.862925 Objective Loss 2.862925 LR 0.000031 Time 0.207036 -2023-04-27 01:38:43,580 - Epoch: [197][ 250/ 518] Overall Loss 2.855966 Objective Loss 2.855966 LR 0.000031 Time 0.205925 -2023-04-27 01:38:53,759 - Epoch: [197][ 300/ 518] Overall Loss 2.857722 Objective Loss 2.857722 LR 0.000031 Time 0.205528 -2023-04-27 01:39:03,929 - Epoch: [197][ 350/ 518] Overall Loss 2.866644 Objective Loss 2.866644 LR 0.000031 Time 0.205219 -2023-04-27 01:39:14,092 - Epoch: [197][ 400/ 518] Overall Loss 2.856798 Objective Loss 2.856798 LR 0.000031 Time 0.204972 -2023-04-27 01:39:24,268 - Epoch: [197][ 450/ 518] Overall Loss 2.852562 Objective Loss 2.852562 LR 0.000031 Time 0.204807 -2023-04-27 01:39:34,440 - Epoch: [197][ 500/ 518] Overall Loss 2.854808 Objective Loss 2.854808 LR 0.000031 Time 0.204665 -2023-04-27 01:39:37,995 - Epoch: [197][ 518/ 518] Overall Loss 2.855284 Objective Loss 2.855284 LR 0.000031 Time 0.204416 -2023-04-27 01:39:38,069 - --- validate (epoch=197)----------- -2023-04-27 01:39:38,069 - 4952 samples (32 per mini-batch) -2023-04-27 01:39:44,806 - Epoch: [197][ 50/ 155] Loss 3.135218 mAP 0.448708 -2023-04-27 01:39:51,234 - Epoch: [197][ 100/ 155] Loss 3.151671 mAP 0.455679 -2023-04-27 01:39:57,572 - Epoch: [197][ 150/ 155] Loss 3.149193 mAP 0.448841 -2023-04-27 01:39:58,143 - Epoch: [197][ 155/ 155] Loss 3.145818 mAP 0.449170 -2023-04-27 
01:39:58,215 - ==> mAP: 0.44917 Loss: 3.146
-
-2023-04-27 01:39:58,219 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184]
-2023-04-27 01:39:58,219 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:39:58,256 - 
-
-2023-04-27 01:39:58,256 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:40:09,167 - Epoch: [198][ 50/ 518] Overall Loss 2.852484 Objective Loss 2.852484 LR 0.000031 Time 0.218155
-2023-04-27 01:40:19,326 - Epoch: [198][ 100/ 518] Overall Loss 2.859505 Objective Loss 2.859505 LR 0.000031 Time 0.210654
-2023-04-27 01:40:29,452 - Epoch: [198][ 150/ 518] Overall Loss 2.851443 Objective Loss 2.851443 LR 0.000031 Time 0.207929
-2023-04-27 01:40:39,670 - Epoch: [198][ 200/ 518] Overall Loss 2.849354 Objective Loss 2.849354 LR 0.000031 Time 0.207030
-2023-04-27 01:40:49,841 - Epoch: [198][ 250/ 518] Overall Loss 2.850264 Objective Loss 2.850264 LR 0.000031 Time 0.206299
-2023-04-27 01:41:00,063 - Epoch: [198][ 300/ 518] Overall Loss 2.852455 Objective Loss 2.852455 LR 0.000031 Time 0.205983
-2023-04-27 01:41:10,230 - Epoch: [198][ 350/ 518] Overall Loss 2.859647 Objective Loss 2.859647 LR 0.000031 Time 0.205603
-2023-04-27 01:41:20,407 - Epoch: [198][ 400/ 518] Overall Loss 2.854912 Objective Loss 2.854912 LR 0.000031 Time 0.205339
-2023-04-27 01:41:30,591 - Epoch: [198][ 450/ 518] Overall Loss 2.855713 Objective Loss 2.855713 LR 0.000031 Time 0.205152
-2023-04-27 01:41:40,667 - Epoch: [198][ 500/ 518] Overall Loss 2.849669 Objective Loss 2.849669 LR 0.000031 Time 0.204786
-2023-04-27 01:41:44,182 - Epoch: [198][ 518/ 518] Overall Loss 2.850225 Objective Loss 2.850225 LR 0.000031 Time 0.204454
-2023-04-27 01:41:44,252 - --- validate (epoch=198)-----------
-2023-04-27 01:41:44,253 - 4952 samples (32 per mini-batch)
-2023-04-27 01:41:51,053 - Epoch: [198][ 50/ 155] Loss 3.122188 mAP 0.472561
-2023-04-27 01:41:57,474 - Epoch: [198][ 100/ 155] Loss 3.117730 mAP 0.465418
-2023-04-27 01:42:03,856 - Epoch: [198][ 150/ 155] Loss 3.146209 mAP 0.454948
-2023-04-27 01:42:04,423 - Epoch: [198][ 155/ 155] Loss 3.147852 mAP 0.453342
-2023-04-27 01:42:04,503 - ==> mAP: 0.45334 Loss: 3.148
-
-2023-04-27 01:42:04,507 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184]
-2023-04-27 01:42:04,507 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:42:04,545 - 
-
-2023-04-27 01:42:04,545 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:42:15,447 - Epoch: [199][ 50/ 518] Overall Loss 2.861491 Objective Loss 2.861491 LR 0.000031 Time 0.217981
-2023-04-27 01:42:25,606 - Epoch: [199][ 100/ 518] Overall Loss 2.855142 Objective Loss 2.855142 LR 0.000031 Time 0.210568
-2023-04-27 01:42:35,718 - Epoch: [199][ 150/ 518] Overall Loss 2.853309 Objective Loss 2.853309 LR 0.000031 Time 0.207780
-2023-04-27 01:42:45,912 - Epoch: [199][ 200/ 518] Overall Loss 2.852752 Objective Loss 2.852752 LR 0.000031 Time 0.206796
-2023-04-27 01:42:56,094 - Epoch: [199][ 250/ 518] Overall Loss 2.860316 Objective Loss 2.860316 LR 0.000031 Time 0.206158
-2023-04-27 01:43:06,223 - Epoch: [199][ 300/ 518] Overall Loss 2.858533 Objective Loss 2.858533 LR 0.000031 Time 0.205557
-2023-04-27 01:43:16,366 - Epoch: [199][ 350/ 518] Overall Loss 2.862983 Objective Loss 2.862983 LR 0.000031 Time 0.205167
-2023-04-27 01:43:26,522 - Epoch: [199][ 400/ 518] Overall Loss 2.858039 Objective Loss 2.858039 LR 0.000031 Time 0.204907
-2023-04-27 01:43:36,672 - Epoch: [199][ 450/ 518] Overall Loss 2.865517 Objective Loss 2.865517 LR 0.000031 Time 0.204691
-2023-04-27 01:43:46,804 - Epoch: [199][ 500/ 518] Overall Loss 2.863693 Objective Loss 2.863693 LR 0.000031 Time 0.204484
-2023-04-27 01:43:50,322 - Epoch: [199][ 518/ 518] Overall Loss 2.862876 Objective Loss 2.862876 LR 0.000031 Time 0.204169
-2023-04-27 01:43:50,393 - --- validate (epoch=199)-----------
-2023-04-27 01:43:50,393 - 4952 samples (32 per mini-batch)
-2023-04-27 01:43:57,097 - Epoch: [199][ 50/ 155] Loss 3.185084 mAP 0.448203
-2023-04-27 01:44:03,468 - Epoch: [199][ 100/ 155] Loss 3.147566 mAP 0.458632
-2023-04-27 01:44:09,857 - Epoch: [199][ 150/ 155] Loss 3.143502 mAP 0.452454
-2023-04-27 01:44:10,438 - Epoch: [199][ 155/ 155] Loss 3.144061 mAP 0.453641
-2023-04-27 01:44:10,509 - ==> mAP: 0.45364 Loss: 3.144
-
-2023-04-27 01:44:10,513 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184]
-2023-04-27 01:44:10,513 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:44:10,551 - 
-
-2023-04-27 01:44:10,551 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:44:21,509 - Epoch: [200][ 50/ 518] Overall Loss 2.832907 Objective Loss 2.832907 LR 0.000008 Time 0.219112
-2023-04-27 01:44:31,636 - Epoch: [200][ 100/ 518] Overall Loss 2.838178 Objective Loss 2.838178 LR 0.000008 Time 0.210805
-2023-04-27 01:44:41,797 - Epoch: [200][ 150/ 518] Overall Loss 2.831309 Objective Loss 2.831309 LR 0.000008 Time 0.208265
-2023-04-27 01:44:51,978 - Epoch: [200][ 200/ 518] Overall Loss 2.833170 Objective Loss 2.833170 LR 0.000008 Time 0.207097
-2023-04-27 01:45:02,204 - Epoch: [200][ 250/ 518] Overall Loss 2.835221 Objective Loss 2.835221 LR 0.000008 Time 0.206576
-2023-04-27 01:45:12,377 - Epoch: [200][ 300/ 518] Overall Loss 2.837161 Objective Loss 2.837161 LR 0.000008 Time 0.206052
-2023-04-27 01:45:22,539 - Epoch: [200][ 350/ 518] Overall Loss 2.837229 Objective Loss 2.837229 LR 0.000008 Time 0.205645
-2023-04-27 01:45:32,631 - Epoch: [200][ 400/ 518] Overall Loss 2.835586 Objective Loss 2.835586 LR 0.000008 Time 0.205164
-2023-04-27 01:45:42,762 - Epoch: [200][ 450/ 518] Overall Loss 2.841851 Objective Loss 2.841851 LR 0.000008 Time 0.204879
-2023-04-27 01:45:52,967 - Epoch: [200][ 500/ 518] Overall Loss 2.851637 Objective Loss 2.851637 LR 0.000008 Time 0.204796
-2023-04-27 01:45:56,525 - Epoch: [200][ 518/ 518] Overall Loss 2.852527 Objective Loss 2.852527 LR 0.000008 Time 0.204549
-2023-04-27 01:45:56,598 - --- validate (epoch=200)-----------
-2023-04-27 01:45:56,598 - 4952 samples (32 per mini-batch)
-2023-04-27 01:46:03,390 - Epoch: [200][ 50/ 155] Loss 3.187804 mAP 0.444355
-2023-04-27 01:46:09,772 - Epoch: [200][ 100/ 155] Loss 3.157387 mAP 0.448424
-2023-04-27 01:46:16,100 - Epoch: [200][ 150/ 155] Loss 3.149844 mAP 0.444953
-2023-04-27 01:46:16,668 - Epoch: [200][ 155/ 155] Loss 3.148346 mAP 0.444812
-2023-04-27 01:46:16,737 - ==> mAP: 0.44481 Loss: 3.148
-
-2023-04-27 01:46:16,740 - ==> Best [mAP: 0.461909 vloss: 3.158213 Sparsity:0.00 Params: 2177088 on epoch: 184]
-2023-04-27 01:46:16,740 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:46:16,778 - 
-
-2023-04-27 01:46:16,778 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:46:27,712 - Epoch: [201][ 50/ 518] Overall Loss 2.906920 Objective Loss 2.906920 LR 0.000008 Time 0.218622
-2023-04-27 01:46:37,862 - Epoch: [201][ 100/ 518] Overall Loss 2.891210 Objective Loss 2.891210 LR 0.000008 Time 0.210798
-2023-04-27 01:46:48,040 - Epoch: [201][ 150/ 518] Overall Loss 2.863199 Objective Loss 2.863199 LR 0.000008 Time 0.208369
-2023-04-27 01:46:58,118 - Epoch: [201][ 200/ 518] Overall Loss 2.857947 Objective Loss 2.857947 LR 0.000008 Time 0.206660
-2023-04-27 01:47:08,256 - Epoch: [201][ 250/ 518] Overall Loss 2.856573 Objective Loss 2.856573 LR 0.000008 Time 0.205875
-2023-04-27 01:47:18,362 - Epoch: [201][ 300/ 518] Overall Loss 2.862379 Objective Loss 2.862379 LR 0.000008 Time 0.205243
-2023-04-27 01:47:28,535 - Epoch: [201][ 350/ 518] Overall Loss 2.856918 Objective Loss 2.856918 LR 0.000008 Time 0.204984
-2023-04-27 01:47:38,779 - Epoch: [201][ 400/ 518] Overall Loss 2.857995 Objective Loss 2.857995 LR 0.000008 Time 0.204968
-2023-04-27 01:47:48,955 - Epoch: [201][ 450/ 518] Overall Loss 2.855076 Objective Loss 2.855076 LR 0.000008 Time 0.204803
-2023-04-27 01:47:59,162 - Epoch: [201][ 500/ 518] Overall Loss 2.846404 Objective Loss 2.846404 LR 0.000008 Time 0.204734
-2023-04-27 01:48:02,706 - Epoch: [201][ 518/ 518] Overall Loss 2.850152 Objective Loss 2.850152 LR 0.000008 Time 0.204460
-2023-04-27 01:48:02,778 - --- validate (epoch=201)-----------
-2023-04-27 01:48:02,778 - 4952 samples (32 per mini-batch)
-2023-04-27 01:48:09,557 - Epoch: [201][ 50/ 155] Loss 3.203125 mAP 0.466786
-2023-04-27 01:48:16,000 - Epoch: [201][ 100/ 155] Loss 3.175466 mAP 0.462138
-2023-04-27 01:48:22,398 - Epoch: [201][ 150/ 155] Loss 3.142282 mAP 0.460868
-2023-04-27 01:48:22,971 - Epoch: [201][ 155/ 155] Loss 3.144180 mAP 0.462889
-2023-04-27 01:48:23,054 - ==> mAP: 0.46289 Loss: 3.144
-
-2023-04-27 01:48:23,058 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 01:48:23,058 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:48:23,111 - 
-
-2023-04-27 01:48:23,112 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:48:33,965 - Epoch: [202][ 50/ 518] Overall Loss 2.787363 Objective Loss 2.787363 LR 0.000008 Time 0.217018
-2023-04-27 01:48:44,191 - Epoch: [202][ 100/ 518] Overall Loss 2.819730 Objective Loss 2.819730 LR 0.000008 Time 0.210753
-2023-04-27 01:48:54,351 - Epoch: [202][ 150/ 518] Overall Loss 2.846302 Objective Loss 2.846302 LR 0.000008 Time 0.208221
-2023-04-27 01:49:04,510 - Epoch: [202][ 200/ 518] Overall Loss 2.837304 Objective Loss 2.837304 LR 0.000008 Time 0.206953
-2023-04-27 01:49:14,660 - Epoch: [202][ 250/ 518] Overall Loss 2.836076 Objective Loss 2.836076 LR 0.000008 Time 0.206155
-2023-04-27 01:49:24,805 - Epoch: [202][ 300/ 518] Overall Loss 2.834939 Objective Loss 2.834939 LR 0.000008 Time 0.205608
-2023-04-27 01:49:34,989 - Epoch: [202][ 350/ 518] Overall Loss 2.841616 Objective Loss 2.841616 LR 0.000008 Time 0.205327
-2023-04-27 01:49:45,140 - Epoch: [202][ 400/ 518] Overall Loss 2.845711 Objective Loss 2.845711 LR 0.000008 Time 0.205035
-2023-04-27 01:49:55,280 - Epoch: [202][ 450/ 518] Overall Loss 2.844847 Objective Loss 2.844847 LR 0.000008 Time 0.204784
-2023-04-27 01:50:05,475 - Epoch: [202][ 500/ 518] Overall Loss 2.844469 Objective Loss 2.844469 LR 0.000008 Time 0.204692
-2023-04-27 01:50:09,013 - Epoch: [202][ 518/ 518] Overall Loss 2.845097 Objective Loss 2.845097 LR 0.000008 Time 0.204409
-2023-04-27 01:50:09,084 - --- validate (epoch=202)-----------
-2023-04-27 01:50:09,085 - 4952 samples (32 per mini-batch)
-2023-04-27 01:50:15,816 - Epoch: [202][ 50/ 155] Loss 3.134071 mAP 0.466268
-2023-04-27 01:50:22,202 - Epoch: [202][ 100/ 155] Loss 3.153222 mAP 0.446024
-2023-04-27 01:50:28,581 - Epoch: [202][ 150/ 155] Loss 3.158046 mAP 0.447742
-2023-04-27 01:50:29,133 - Epoch: [202][ 155/ 155] Loss 3.154738 mAP 0.445502
-2023-04-27 01:50:29,204 - ==> mAP: 0.44550 Loss: 3.155
-
-2023-04-27 01:50:29,209 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 01:50:29,209 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:50:29,247 - 
-
-2023-04-27 01:50:29,247 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:50:40,140 - Epoch: [203][ 50/ 518] Overall Loss 2.882840 Objective Loss 2.882840 LR 0.000008 Time 0.217799
-2023-04-27 01:50:50,240 - Epoch: [203][ 100/ 518] Overall Loss 2.886663 Objective Loss 2.886663 LR 0.000008 Time 0.209883
-2023-04-27 01:51:00,429 - Epoch: [203][ 150/ 518] Overall Loss 2.900961 Objective Loss 2.900961 LR 0.000008 Time 0.207841
-2023-04-27 01:51:10,628 - Epoch: [203][ 200/ 518] Overall Loss 2.875661 Objective Loss 2.875661 LR 0.000008 Time 0.206864
-2023-04-27 01:51:20,780 - Epoch: [203][ 250/ 518] Overall Loss 2.856198 Objective Loss 2.856198 LR 0.000008 Time 0.206094
-2023-04-27 01:51:30,897 - Epoch: [203][ 300/ 518] Overall Loss 2.855655 Objective Loss 2.855655 LR 0.000008 Time 0.205462
-2023-04-27 01:51:41,091 - Epoch: [203][ 350/ 518] Overall Loss 2.856877 Objective Loss 2.856877 LR 0.000008 Time 0.205234
-2023-04-27 01:51:51,215 - Epoch: [203][ 400/ 518] Overall Loss 2.852828 Objective Loss 2.852828 LR 0.000008 Time 0.204885
-2023-04-27 01:52:01,277 - Epoch: [203][ 450/ 518] Overall Loss 2.851454 Objective Loss 2.851454 LR 0.000008 Time 0.204475
-2023-04-27 01:52:11,428 - Epoch: [203][ 500/ 518] Overall Loss 2.849040 Objective Loss 2.849040 LR 0.000008 Time 0.204327
-2023-04-27 01:52:15,008 - Epoch: [203][ 518/ 518] Overall Loss 2.853260 Objective Loss 2.853260 LR 0.000008 Time 0.204136
-2023-04-27 01:52:15,079 - --- validate (epoch=203)-----------
-2023-04-27 01:52:15,079 - 4952 samples (32 per mini-batch)
-2023-04-27 01:52:21,822 - Epoch: [203][ 50/ 155] Loss 3.158198 mAP 0.444907
-2023-04-27 01:52:28,248 - Epoch: [203][ 100/ 155] Loss 3.159702 mAP 0.450578
-2023-04-27 01:52:34,607 - Epoch: [203][ 150/ 155] Loss 3.148838 mAP 0.454994
-2023-04-27 01:52:35,194 - Epoch: [203][ 155/ 155] Loss 3.149908 mAP 0.454410
-2023-04-27 01:52:35,257 - ==> mAP: 0.45441 Loss: 3.150
-
-2023-04-27 01:52:35,261 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 01:52:35,261 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:52:35,298 - 
-
-2023-04-27 01:52:35,299 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:52:46,327 - Epoch: [204][ 50/ 518] Overall Loss 2.806552 Objective Loss 2.806552 LR 0.000008 Time 0.220509
-2023-04-27 01:52:56,539 - Epoch: [204][ 100/ 518] Overall Loss 2.801955 Objective Loss 2.801955 LR 0.000008 Time 0.212356
-2023-04-27 01:53:06,724 - Epoch: [204][ 150/ 518] Overall Loss 2.816842 Objective Loss 2.816842 LR 0.000008 Time 0.209459
-2023-04-27 01:53:16,939 - Epoch: [204][ 200/ 518] Overall Loss 2.828980 Objective Loss 2.828980 LR 0.000008 Time 0.208164
-2023-04-27 01:53:27,122 - Epoch: [204][ 250/ 518] Overall Loss 2.839316 Objective Loss 2.839316 LR 0.000008 Time 0.207256
-2023-04-27 01:53:37,238 - Epoch: [204][ 300/ 518] Overall Loss 2.843983 Objective Loss 2.843983 LR 0.000008 Time 0.206428
-2023-04-27 01:53:47,401 - Epoch: [204][ 350/ 518] Overall Loss 2.835111 Objective Loss 2.835111 LR 0.000008 Time 0.205971
-2023-04-27 01:53:57,596 - Epoch: [204][ 400/ 518] Overall Loss 2.837476 Objective Loss 2.837476 LR 0.000008 Time 0.205708
-2023-04-27 01:54:07,797 - Epoch: [204][ 450/ 518] Overall Loss 2.840676 Objective Loss 2.840676 LR 0.000008 Time 0.205517
-2023-04-27 01:54:17,945 - Epoch: [204][ 500/ 518] Overall Loss 2.839648 Objective Loss 2.839648 LR 0.000008 Time 0.205257
-2023-04-27 01:54:21,472 - Epoch: [204][ 518/ 518] Overall Loss 2.843657 Objective Loss 2.843657 LR 0.000008 Time 0.204932
-2023-04-27 01:54:21,543 - --- validate (epoch=204)-----------
-2023-04-27 01:54:21,543 - 4952 samples (32 per mini-batch)
-2023-04-27 01:54:28,303 - Epoch: [204][ 50/ 155] Loss 3.208688 mAP 0.431542
-2023-04-27 01:54:34,725 - Epoch: [204][ 100/ 155] Loss 3.154551 mAP 0.439614
-2023-04-27 01:54:41,084 - Epoch: [204][ 150/ 155] Loss 3.143855 mAP 0.440210
-2023-04-27 01:54:41,655 - Epoch: [204][ 155/ 155] Loss 3.145772 mAP 0.438684
-2023-04-27 01:54:41,734 - ==> mAP: 0.43868 Loss: 3.146
-
-2023-04-27 01:54:41,738 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 01:54:41,738 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:54:41,776 - 
-
-2023-04-27 01:54:41,776 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:54:52,645 - Epoch: [205][ 50/ 518] Overall Loss 2.861848 Objective Loss 2.861848 LR 0.000008 Time 0.217313
-2023-04-27 01:55:02,802 - Epoch: [205][ 100/ 518] Overall Loss 2.860854 Objective Loss 2.860854 LR 0.000008 Time 0.210214
-2023-04-27 01:55:12,937 - Epoch: [205][ 150/ 518] Overall Loss 2.850033 Objective Loss 2.850033 LR 0.000008 Time 0.207697
-2023-04-27 01:55:23,039 - Epoch: [205][ 200/ 518] Overall Loss 2.842458 Objective Loss 2.842458 LR 0.000008 Time 0.206272
-2023-04-27 01:55:33,233 - Epoch: [205][ 250/ 518] Overall Loss 2.836829 Objective Loss 2.836829 LR 0.000008 Time 0.205790
-2023-04-27 01:55:43,359 - Epoch: [205][ 300/ 518] Overall Loss 2.843336 Objective Loss 2.843336 LR 0.000008 Time 0.205239
-2023-04-27 01:55:53,454 - Epoch: [205][ 350/ 518] Overall Loss 2.836511 Objective Loss 2.836511 LR 0.000008 Time 0.204758
-2023-04-27 01:56:03,582 - Epoch: [205][ 400/ 518] Overall Loss 2.835854 Objective Loss 2.835854 LR 0.000008 Time 0.204478
-2023-04-27 01:56:13,716 - Epoch: [205][ 450/ 518] Overall Loss 2.831625 Objective Loss 2.831625 LR 0.000008 Time 0.204275
-2023-04-27 01:56:23,897 - Epoch: [205][ 500/ 518] Overall Loss 2.830622 Objective Loss 2.830622 LR 0.000008 Time 0.204207
-2023-04-27 01:56:27,431 - Epoch: [205][ 518/ 518] Overall Loss 2.834043 Objective Loss 2.834043 LR 0.000008 Time 0.203932
-2023-04-27 01:56:27,501 - --- validate (epoch=205)-----------
-2023-04-27 01:56:27,502 - 4952 samples (32 per mini-batch)
-2023-04-27 01:56:34,256 - Epoch: [205][ 50/ 155] Loss 3.191742 mAP 0.431427
-2023-04-27 01:56:40,738 - Epoch: [205][ 100/ 155] Loss 3.144393 mAP 0.452077
-2023-04-27 01:56:47,114 - Epoch: [205][ 150/ 155] Loss 3.141941 mAP 0.455001
-2023-04-27 01:56:47,683 - Epoch: [205][ 155/ 155] Loss 3.142018 mAP 0.453725
-2023-04-27 01:56:47,753 - ==> mAP: 0.45373 Loss: 3.142
-
-2023-04-27 01:56:47,758 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 01:56:47,758 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:56:47,796 - 
-
-2023-04-27 01:56:47,796 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:56:58,502 - Epoch: [206][ 50/ 518] Overall Loss 2.899592 Objective Loss 2.899592 LR 0.000008 Time 0.214060
-2023-04-27 01:57:08,775 - Epoch: [206][ 100/ 518] Overall Loss 2.864573 Objective Loss 2.864573 LR 0.000008 Time 0.209741
-2023-04-27 01:57:18,899 - Epoch: [206][ 150/ 518] Overall Loss 2.853285 Objective Loss 2.853285 LR 0.000008 Time 0.207309
-2023-04-27 01:57:28,955 - Epoch: [206][ 200/ 518] Overall Loss 2.855956 Objective Loss 2.855956 LR 0.000008 Time 0.205755
-2023-04-27 01:57:39,174 - Epoch: [206][ 250/ 518] Overall Loss 2.856143 Objective Loss 2.856143 LR 0.000008 Time 0.205476
-2023-04-27 01:57:49,333 - Epoch: [206][ 300/ 518] Overall Loss 2.853209 Objective Loss 2.853209 LR 0.000008 Time 0.205085
-2023-04-27 01:57:59,550 - Epoch: [206][ 350/ 518] Overall Loss 2.845665 Objective Loss 2.845665 LR 0.000008 Time 0.204975
-2023-04-27 01:58:09,695 - Epoch: [206][ 400/ 518] Overall Loss 2.835114 Objective Loss 2.835114 LR 0.000008 Time 0.204711
-2023-04-27 01:58:19,938 - Epoch: [206][ 450/ 518] Overall Loss 2.838851 Objective Loss 2.838851 LR 0.000008 Time 0.204724
-2023-04-27 01:58:30,057 - Epoch: [206][ 500/ 518] Overall Loss 2.835141 Objective Loss 2.835141 LR 0.000008 Time 0.204486
-2023-04-27 01:58:33,579 - Epoch: [206][ 518/ 518] Overall Loss 2.834190 Objective Loss 2.834190 LR 0.000008 Time 0.204179
-2023-04-27 01:58:33,650 - --- validate (epoch=206)-----------
-2023-04-27 01:58:33,650 - 4952 samples (32 per mini-batch)
-2023-04-27 01:58:40,401 - Epoch: [206][ 50/ 155] Loss 3.214658 mAP 0.423781
-2023-04-27 01:58:46,778 - Epoch: [206][ 100/ 155] Loss 3.162507 mAP 0.449039
-2023-04-27 01:58:53,034 - Epoch: [206][ 150/ 155] Loss 3.151341 mAP 0.443933
-2023-04-27 01:58:53,607 - Epoch: [206][ 155/ 155] Loss 3.150310 mAP 0.444036
-2023-04-27 01:58:53,676 - ==> mAP: 0.44404 Loss: 3.150
-
-2023-04-27 01:58:53,681 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 01:58:53,681 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 01:58:53,718 - 
-
-2023-04-27 01:58:53,718 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 01:59:04,677 - Epoch: [207][ 50/ 518] Overall Loss 2.866719 Objective Loss 2.866719 LR 0.000008 Time 0.219128
-2023-04-27 01:59:14,854 - Epoch: [207][ 100/ 518] Overall Loss 2.832479 Objective Loss 2.832479 LR 0.000008 Time 0.211313
-2023-04-27 01:59:25,023 - Epoch: [207][ 150/ 518] Overall Loss 2.823128 Objective Loss 2.823128 LR 0.000008 Time 0.208656
-2023-04-27 01:59:35,244 - Epoch: [207][ 200/ 518] Overall Loss 2.837521 Objective Loss 2.837521 LR 0.000008 Time 0.207589
-2023-04-27 01:59:45,364 - Epoch: [207][ 250/ 518] Overall Loss 2.846883 Objective Loss 2.846883 LR 0.000008 Time 0.206544
-2023-04-27 01:59:55,530 - Epoch: [207][ 300/ 518] Overall Loss 2.852238 Objective Loss 2.852238 LR 0.000008 Time 0.206003
-2023-04-27 02:00:05,714 - Epoch: [207][ 350/ 518] Overall Loss 2.858479 Objective Loss 2.858479 LR 0.000008 Time 0.205667
-2023-04-27 02:00:15,812 - Epoch: [207][ 400/ 518] Overall Loss 2.853312 Objective Loss 2.853312 LR 0.000008 Time 0.205198
-2023-04-27 02:00:25,935 - Epoch: [207][ 450/ 518] Overall Loss 2.852032 Objective Loss 2.852032 LR 0.000008 Time 0.204892
-2023-04-27 02:00:36,044 - Epoch: [207][ 500/ 518] Overall Loss 2.850500 Objective Loss 2.850500 LR 0.000008 Time 0.204617
-2023-04-27 02:00:39,614 - Epoch: [207][ 518/ 518] Overall Loss 2.847104 Objective Loss 2.847104 LR 0.000008 Time 0.204397
-2023-04-27 02:00:39,686 - --- validate (epoch=207)-----------
-2023-04-27 02:00:39,687 - 4952 samples (32 per mini-batch)
-2023-04-27 02:00:46,482 - Epoch: [207][ 50/ 155] Loss 3.124932 mAP 0.466031
-2023-04-27 02:00:52,902 - Epoch: [207][ 100/ 155] Loss 3.148475 mAP 0.457380
-2023-04-27 02:00:59,217 - Epoch: [207][ 150/ 155] Loss 3.140854 mAP 0.449591
-2023-04-27 02:00:59,783 - Epoch: [207][ 155/ 155] Loss 3.145049 mAP 0.448747
-2023-04-27 02:00:59,854 - ==> mAP: 0.44875 Loss: 3.145
-
-2023-04-27 02:00:59,858 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:00:59,858 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:00:59,895 - 
-
-2023-04-27 02:00:59,895 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:01:10,951 - Epoch: [208][ 50/ 518] Overall Loss 2.843577 Objective Loss 2.843577 LR 0.000008 Time 0.221052
-2023-04-27 02:01:21,086 - Epoch: [208][ 100/ 518] Overall Loss 2.866989 Objective Loss 2.866989 LR 0.000008 Time 0.211862
-2023-04-27 02:01:31,254 - Epoch: [208][ 150/ 518] Overall Loss 2.838379 Objective Loss 2.838379 LR 0.000008 Time 0.209019
-2023-04-27 02:01:41,425 - Epoch: [208][ 200/ 518] Overall Loss 2.852985 Objective Loss 2.852985 LR 0.000008 Time 0.207611
-2023-04-27 02:01:51,503 - Epoch: [208][ 250/ 518] Overall Loss 2.855428 Objective Loss 2.855428 LR 0.000008 Time 0.206394
-2023-04-27 02:02:01,664 - Epoch: [208][ 300/ 518] Overall Loss 2.852739 Objective Loss 2.852739 LR 0.000008 Time 0.205860
-2023-04-27 02:02:11,899 - Epoch: [208][ 350/ 518] Overall Loss 2.848797 Objective Loss 2.848797 LR 0.000008 Time 0.205687
-2023-04-27 02:02:22,044 - Epoch: [208][ 400/ 518] Overall Loss 2.842083 Objective Loss 2.842083 LR 0.000008 Time 0.205337
-2023-04-27 02:02:32,175 - Epoch: [208][ 450/ 518] Overall Loss 2.845232 Objective Loss 2.845232 LR 0.000008 Time 0.205030
-2023-04-27 02:02:42,323 - Epoch: [208][ 500/ 518] Overall Loss 2.847248 Objective Loss 2.847248 LR 0.000008 Time 0.204819
-2023-04-27 02:02:45,818 - Epoch: [208][ 518/ 518] Overall Loss 2.843784 Objective Loss 2.843784 LR 0.000008 Time 0.204449
-2023-04-27 02:02:45,888 - --- validate (epoch=208)-----------
-2023-04-27 02:02:45,888 - 4952 samples (32 per mini-batch)
-2023-04-27 02:02:52,679 - Epoch: [208][ 50/ 155] Loss 3.164061 mAP 0.448384
-2023-04-27 02:02:59,051 - Epoch: [208][ 100/ 155] Loss 3.151726 mAP 0.451904
-2023-04-27 02:03:05,432 - Epoch: [208][ 150/ 155] Loss 3.152316 mAP 0.449900
-2023-04-27 02:03:05,998 - Epoch: [208][ 155/ 155] Loss 3.146979 mAP 0.451829
-2023-04-27 02:03:06,057 - ==> mAP: 0.45183 Loss: 3.147
-
-2023-04-27 02:03:06,062 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:03:06,062 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:03:06,099 - 
-
-2023-04-27 02:03:06,099 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:03:16,930 - Epoch: [209][ 50/ 518] Overall Loss 2.849534 Objective Loss 2.849534 LR 0.000008 Time 0.216552
-2023-04-27 02:03:27,146 - Epoch: [209][ 100/ 518] Overall Loss 2.881888 Objective Loss 2.881888 LR 0.000008 Time 0.210427
-2023-04-27 02:03:37,303 - Epoch: [209][ 150/ 518] Overall Loss 2.869578 Objective Loss 2.869578 LR 0.000008 Time 0.207985
-2023-04-27 02:03:47,445 - Epoch: [209][ 200/ 518] Overall Loss 2.857097 Objective Loss 2.857097 LR 0.000008 Time 0.206690
-2023-04-27 02:03:57,504 - Epoch: [209][ 250/ 518] Overall Loss 2.846677 Objective Loss 2.846677 LR 0.000008 Time 0.205581
-2023-04-27 02:04:07,638 - Epoch: [209][ 300/ 518] Overall Loss 2.842274 Objective Loss 2.842274 LR 0.000008 Time 0.205091
-2023-04-27 02:04:17,694 - Epoch: [209][ 350/ 518] Overall Loss 2.839896 Objective Loss 2.839896 LR 0.000008 Time 0.204521
-2023-04-27 02:04:27,854 - Epoch: [209][ 400/ 518] Overall Loss 2.838154 Objective Loss 2.838154 LR 0.000008 Time 0.204351
-2023-04-27 02:04:37,983 - Epoch: [209][ 450/ 518] Overall Loss 2.842082 Objective Loss 2.842082 LR 0.000008 Time 0.204150
-2023-04-27 02:04:48,048 - Epoch: [209][ 500/ 518] Overall Loss 2.846956 Objective Loss 2.846956 LR 0.000008 Time 0.203862
-2023-04-27 02:04:51,615 - Epoch: [209][ 518/ 518] Overall Loss 2.844704 Objective Loss 2.844704 LR 0.000008 Time 0.203663
-2023-04-27 02:04:51,684 - --- validate (epoch=209)-----------
-2023-04-27 02:04:51,685 - 4952 samples (32 per mini-batch)
-2023-04-27 02:04:58,421 - Epoch: [209][ 50/ 155] Loss 3.171391 mAP 0.458560
-2023-04-27 02:05:04,828 - Epoch: [209][ 100/ 155] Loss 3.163077 mAP 0.457324
-2023-04-27 02:05:11,165 - Epoch: [209][ 150/ 155] Loss 3.155708 mAP 0.458696
-2023-04-27 02:05:11,732 - Epoch: [209][ 155/ 155] Loss 3.151107 mAP 0.457557
-2023-04-27 02:05:11,798 - ==> mAP: 0.45756 Loss: 3.151
-
-2023-04-27 02:05:11,802 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:05:11,802 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:05:11,840 - 
-
-2023-04-27 02:05:11,840 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:05:22,680 - Epoch: [210][ 50/ 518] Overall Loss 2.838443 Objective Loss 2.838443 LR 0.000008 Time 0.216737
-2023-04-27 02:05:32,858 - Epoch: [210][ 100/ 518] Overall Loss 2.833992 Objective Loss 2.833992 LR 0.000008 Time 0.210136
-2023-04-27 02:05:42,980 - Epoch: [210][ 150/ 518] Overall Loss 2.818161 Objective Loss 2.818161 LR 0.000008 Time 0.207558
-2023-04-27 02:05:53,147 - Epoch: [210][ 200/ 518] Overall Loss 2.832158 Objective Loss 2.832158 LR 0.000008 Time 0.206496
-2023-04-27 02:06:03,232 - Epoch: [210][ 250/ 518] Overall Loss 2.832250 Objective Loss 2.832250 LR 0.000008 Time 0.205530
-2023-04-27 02:06:13,375 - Epoch: [210][ 300/ 518] Overall Loss 2.832318 Objective Loss 2.832318 LR 0.000008 Time 0.205079
-2023-04-27 02:06:23,483 - Epoch: [210][ 350/ 518] Overall Loss 2.829410 Objective Loss 2.829410 LR 0.000008 Time 0.204656
-2023-04-27 02:06:33,692 - Epoch: [210][ 400/ 518] Overall Loss 2.833859 Objective Loss 2.833859 LR 0.000008 Time 0.204594
-2023-04-27 02:06:43,875 - Epoch: [210][ 450/ 518] Overall Loss 2.834790 Objective Loss 2.834790 LR 0.000008 Time 0.204485
-2023-04-27 02:06:54,065 - Epoch: [210][ 500/ 518] Overall Loss 2.837574 Objective Loss 2.837574 LR 0.000008 Time 0.204415
-2023-04-27 02:06:57,614 - Epoch: [210][ 518/ 518] Overall Loss 2.836361 Objective Loss 2.836361 LR 0.000008 Time 0.204162
-2023-04-27 02:06:57,687 - --- validate (epoch=210)-----------
-2023-04-27 02:06:57,687 - 4952 samples (32 per mini-batch)
-2023-04-27 02:07:04,412 - Epoch: [210][ 50/ 155] Loss 3.126536 mAP 0.454473
-2023-04-27 02:07:11,122 - Epoch: [210][ 100/ 155] Loss 3.144298 mAP 0.453569
-2023-04-27 02:07:17,450 - Epoch: [210][ 150/ 155] Loss 3.150127 mAP 0.445546
-2023-04-27 02:07:18,015 - Epoch: [210][ 155/ 155] Loss 3.147970 mAP 0.447558
-2023-04-27 02:07:18,089 - ==> mAP: 0.44756 Loss: 3.148
-
-2023-04-27 02:07:18,093 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:07:18,093 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:07:18,130 - 
-
-2023-04-27 02:07:18,130 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:07:28,963 - Epoch: [211][ 50/ 518] Overall Loss 2.823180 Objective Loss 2.823180 LR 0.000008 Time 0.216605
-2023-04-27 02:07:39,202 - Epoch: [211][ 100/ 518] Overall Loss 2.841527 Objective Loss 2.841527 LR 0.000008 Time 0.210671
-2023-04-27 02:07:49,381 - Epoch: [211][ 150/ 518] Overall Loss 2.832451 Objective Loss 2.832451 LR 0.000008 Time 0.208301
-2023-04-27 02:07:59,522 - Epoch: [211][ 200/ 518] Overall Loss 2.830016 Objective Loss 2.830016 LR 0.000008 Time 0.206922
-2023-04-27 02:08:09,692 - Epoch: [211][ 250/ 518] Overall Loss 2.840034 Objective Loss 2.840034 LR 0.000008 Time 0.206210
-2023-04-27 02:08:19,971 - Epoch: [211][ 300/ 518] Overall Loss 2.845732 Objective Loss 2.845732 LR 0.000008 Time 0.206100
-2023-04-27 02:08:30,082 - Epoch: [211][ 350/ 518] Overall Loss 2.842037 Objective Loss 2.842037 LR 0.000008 Time 0.205540
-2023-04-27 02:08:40,308 - Epoch: [211][ 400/ 518] Overall Loss 2.833241 Objective Loss 2.833241 LR 0.000008 Time 0.205408
-2023-04-27 02:08:50,478 - Epoch: [211][ 450/ 518] Overall Loss 2.826757 Objective Loss 2.826757 LR 0.000008 Time 0.205181
-2023-04-27 02:09:00,676 - Epoch: [211][ 500/ 518] Overall Loss 2.832119 Objective Loss 2.832119 LR 0.000008 Time 0.205057
-2023-04-27 02:09:04,235 - Epoch: [211][ 518/ 518] Overall Loss 2.830947 Objective Loss 2.830947 LR 0.000008 Time 0.204800
-2023-04-27 02:09:04,311 - --- validate (epoch=211)-----------
-2023-04-27 02:09:04,312 - 4952 samples (32 per mini-batch)
-2023-04-27 02:09:11,056 - Epoch: [211][ 50/ 155] Loss 3.176895 mAP 0.445190
-2023-04-27 02:09:17,426 - Epoch: [211][ 100/ 155] Loss 3.153623 mAP 0.451836
-2023-04-27 02:09:23,782 - Epoch: [211][ 150/ 155] Loss 3.154288 mAP 0.452671
-2023-04-27 02:09:24,345 - Epoch: [211][ 155/ 155] Loss 3.149285 mAP 0.453059
-2023-04-27 02:09:24,421 - ==> mAP: 0.45306 Loss: 3.149
-
-2023-04-27 02:09:24,425 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:09:24,425 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:09:24,462 - 
-
-2023-04-27 02:09:24,462 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:09:35,302 - Epoch: [212][ 50/ 518] Overall Loss 2.845255 Objective Loss 2.845255 LR 0.000008 Time 0.216735
-2023-04-27 02:09:45,454 - Epoch: [212][ 100/ 518] Overall Loss 2.827825 Objective Loss 2.827825 LR 0.000008 Time 0.209872
-2023-04-27 02:09:55,643 - Epoch: [212][ 150/ 518] Overall Loss 2.834259 Objective Loss 2.834259 LR 0.000008 Time 0.207835
-2023-04-27 02:10:05,810 - Epoch: [212][ 200/ 518] Overall Loss 2.847568 Objective Loss 2.847568 LR 0.000008 Time 0.206698
-2023-04-27 02:10:15,994 - Epoch: [212][ 250/ 518] Overall Loss 2.845861 Objective Loss 2.845861 LR 0.000008 Time 0.206092
-2023-04-27 02:10:26,110 - Epoch: [212][ 300/ 518] Overall Loss 2.841164 Objective Loss 2.841164 LR 0.000008 Time 0.205458
-2023-04-27 02:10:36,198 - Epoch: [212][ 350/ 518] Overall Loss 2.845795 Objective Loss 2.845795 LR 0.000008 Time 0.204923
-2023-04-27 02:10:46,402 - Epoch: [212][ 400/ 518] Overall Loss 2.849638 Objective Loss 2.849638 LR 0.000008 Time 0.204815
-2023-04-27 02:10:56,689 - Epoch: [212][ 450/ 518] Overall Loss 2.849271 Objective Loss 2.849271 LR 0.000008 Time 0.204914
-2023-04-27 02:11:06,893 - Epoch: [212][ 500/ 518] Overall Loss 2.852103 Objective Loss 2.852103 LR 0.000008 Time 0.204826
-2023-04-27 02:11:10,425 - Epoch: [212][ 518/ 518] Overall Loss 2.851388 Objective Loss 2.851388 LR 0.000008 Time 0.204526
-2023-04-27 02:11:10,501 - --- validate (epoch=212)-----------
-2023-04-27 02:11:10,501 - 4952 samples (32 per mini-batch)
-2023-04-27 02:11:17,319 - Epoch: [212][ 50/ 155] Loss 3.186546 mAP 0.442362
-2023-04-27 02:11:23,731 - Epoch: [212][ 100/ 155] Loss 3.137448 mAP 0.447438
-2023-04-27 02:11:30,118 - Epoch: [212][ 150/ 155] Loss 3.144988 mAP 0.453704
-2023-04-27 02:11:30,704 - Epoch: [212][ 155/ 155] Loss 3.145661 mAP 0.455622
-2023-04-27 02:11:30,780 - ==> mAP: 0.45562 Loss: 3.146
-
-2023-04-27 02:11:30,783 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:11:30,783 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:11:30,821 - 
-
-2023-04-27 02:11:30,821 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:11:41,871 - Epoch: [213][ 50/ 518] Overall Loss 2.839166 Objective Loss 2.839166 LR 0.000008 Time 0.220957
-2023-04-27 02:11:52,038 - Epoch: [213][ 100/ 518] Overall Loss 2.845738 Objective Loss 2.845738 LR 0.000008 Time 0.212129
-2023-04-27 02:12:02,258 - Epoch: [213][ 150/ 518] Overall Loss 2.856722 Objective Loss 2.856722 LR 0.000008 Time 0.209537
-2023-04-27 02:12:12,411 - Epoch: [213][ 200/ 518] Overall Loss 2.854780 Objective Loss 2.854780 LR 0.000008 Time 0.207912
-2023-04-27 02:12:22,608 - Epoch: [213][ 250/ 518] Overall Loss 2.854706 Objective Loss 2.854706 LR 0.000008 Time 0.207110
-2023-04-27 02:12:32,780 - Epoch: [213][ 300/ 518] Overall Loss 2.848863 Objective Loss 2.848863 LR 0.000008 Time 0.206494
-2023-04-27 02:12:42,965 - Epoch: [213][ 350/ 518] Overall Loss 2.849405 Objective Loss 2.849405 LR 0.000008 Time 0.206089
-2023-04-27 02:12:53,098 - Epoch: [213][ 400/ 518] Overall Loss 2.844046 Objective Loss 2.844046 LR 0.000008 Time 0.205658
-2023-04-27 02:13:03,268 - Epoch: [213][ 450/ 518] Overall Loss 2.847025 Objective Loss 2.847025 LR 0.000008 Time 0.205402
-2023-04-27 02:13:13,405 - Epoch: [213][ 500/ 518] Overall Loss 2.848637 Objective Loss 2.848637 LR 0.000008 Time 0.205133
-2023-04-27 02:13:16,900 - Epoch: [213][ 518/ 518] Overall Loss 2.849587 Objective Loss 2.849587 LR 0.000008 Time 0.204751
-2023-04-27 02:13:16,976 - --- validate (epoch=213)-----------
-2023-04-27 02:13:16,976 - 4952 samples (32 per mini-batch)
-2023-04-27 02:13:23,718 - Epoch: [213][ 50/ 155] Loss 3.116566 mAP 0.438327
-2023-04-27 02:13:30,073 - Epoch: [213][ 100/ 155] Loss 3.129347 mAP 0.439854
-2023-04-27 02:13:36,434 - Epoch: [213][ 150/ 155] Loss 3.144361 mAP 0.448404
-2023-04-27 02:13:37,002 - Epoch: [213][ 155/ 155] Loss 3.144092 mAP 0.448927
-2023-04-27 02:13:37,071 - ==> mAP: 0.44893 Loss: 3.144
-
-2023-04-27 02:13:37,075 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:13:37,075 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:13:37,112 - 
-
-2023-04-27 02:13:37,112 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:13:48,063 - Epoch: [214][ 50/ 518] Overall Loss 2.873508 Objective Loss 2.873508 LR 0.000008 Time 0.218961
-2023-04-27 02:13:58,233 - Epoch: [214][ 100/ 518] Overall Loss 2.881475 Objective Loss 2.881475 LR 0.000008 Time 0.211167
-2023-04-27 02:14:08,445 - Epoch: [214][ 150/ 518] Overall Loss 2.884083 Objective Loss 2.884083 LR 0.000008 Time 0.208848
-2023-04-27 02:14:18,541 - Epoch: [214][ 200/ 518] Overall Loss 2.874653 Objective Loss 2.874653 LR 0.000008 Time 0.207108
-2023-04-27 02:14:28,770 - Epoch: [214][ 250/ 518] Overall Loss 2.870767 Objective Loss 2.870767 LR 0.000008 Time 0.206594
-2023-04-27 02:14:38,972 - Epoch: [214][ 300/ 518] Overall Loss 2.863197 Objective Loss 2.863197 LR 0.000008 Time 0.206163
-2023-04-27 02:14:49,126 - Epoch: [214][ 350/ 518] Overall Loss 2.863962 Objective Loss 2.863962 LR 0.000008 Time 0.205718
-2023-04-27 02:14:59,311 - Epoch: [214][ 400/ 518] Overall Loss 2.860906 Objective Loss 2.860906 LR 0.000008 Time 0.205462
-2023-04-27 02:15:09,488 - Epoch: [214][ 450/ 518] Overall Loss 2.855031 Objective Loss 2.855031 LR 0.000008 Time 0.205245
-2023-04-27 02:15:19,594 - Epoch: [214][ 500/ 518] Overall Loss 2.851344 Objective Loss 2.851344 LR 0.000008 Time 0.204929
-2023-04-27 02:15:23,132 - Epoch: [214][ 518/ 518] Overall Loss 2.852693 Objective Loss 2.852693 LR 0.000008 Time 0.204636
-2023-04-27 02:15:23,209 - --- validate (epoch=214)-----------
-2023-04-27 02:15:23,209 - 4952 samples (32 per mini-batch)
-2023-04-27 02:15:29,982 - Epoch: [214][ 50/ 155] Loss 3.136370 mAP 0.447498
-2023-04-27 02:15:36,390 - Epoch: [214][ 100/ 155] Loss 3.141205 mAP 0.457661
-2023-04-27 02:15:42,762 - Epoch: [214][ 150/ 155] Loss 3.142873 mAP 0.455366
-2023-04-27 02:15:43,334 - Epoch: [214][ 155/ 155] Loss 3.144023 mAP 0.454042
-2023-04-27 02:15:43,409 - ==> mAP: 0.45404 Loss: 3.144
-
-2023-04-27 02:15:43,413 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:15:43,413 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:15:43,449 - 
-
-2023-04-27 02:15:43,449 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:15:54,217 - Epoch: [215][ 50/ 518] Overall Loss 2.778113 Objective Loss 2.778113 LR 0.000008 Time 0.215290
-2023-04-27 02:16:04,403 - Epoch: [215][ 100/ 518] Overall Loss 2.800045 Objective Loss 2.800045 LR 0.000008 Time 0.209495
-2023-04-27 02:16:14,490 - Epoch: [215][ 150/ 518] Overall Loss 2.823687 Objective Loss 2.823687 LR 0.000008 Time 0.206894
-2023-04-27 02:16:24,719 - Epoch: [215][ 200/ 518] Overall Loss 2.820686 Objective Loss 2.820686 LR 0.000008 Time 0.206310
-2023-04-27 02:16:34,864 - Epoch: [215][ 250/ 518] Overall Loss 2.821546 Objective Loss 2.821546 LR 0.000008 Time 0.205623
-2023-04-27 02:16:45,038 - Epoch: [215][ 300/ 518] Overall Loss 2.830602 Objective Loss 2.830602 LR 0.000008 Time 0.205258
-2023-04-27 02:16:55,217 - Epoch: [215][ 350/ 518] Overall Loss 2.833941 Objective Loss 2.833941 LR 0.000008 Time 0.205015
-2023-04-27 02:17:05,336 - Epoch: [215][ 400/ 518] Overall Loss 2.836289 Objective Loss 2.836289 LR 0.000008 Time 0.204681
-2023-04-27 02:17:15,477 - Epoch: [215][ 450/ 518] Overall Loss 2.830951 Objective Loss 2.830951 LR 0.000008 Time 0.204471
-2023-04-27 02:17:25,596 - Epoch: [215][ 500/ 518] Overall Loss 2.832878 Objective Loss 2.832878 LR 0.000008 Time 0.204259
-2023-04-27 02:17:29,142 - Epoch: [215][ 518/ 518] Overall Loss 2.831842 Objective Loss 2.831842 LR 0.000008 Time 0.204005
-2023-04-27 02:17:29,218 - --- validate (epoch=215)-----------
-2023-04-27 02:17:29,218 - 4952 samples (32 per mini-batch)
-2023-04-27 02:17:35,951 - Epoch: [215][ 50/ 155] Loss 3.112050 mAP 0.462622
-2023-04-27 02:17:42,379 - Epoch: [215][ 100/ 155] Loss 3.156629 mAP 0.459010
-2023-04-27 02:17:48,757 - Epoch: [215][ 150/ 155] Loss 3.143667 mAP 0.459783
-2023-04-27 02:17:49,329 - Epoch: [215][ 155/ 155] Loss 3.144633 mAP 0.458593
-2023-04-27 02:17:49,403 - ==> mAP: 0.45859 Loss: 3.145
-
-2023-04-27 02:17:49,407 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:17:49,407 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:17:49,444 - 
-
-2023-04-27 02:17:49,444 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:18:00,300 - Epoch: [216][ 50/ 518] Overall Loss 2.865622 Objective Loss 2.865622 LR 0.000008 Time 0.217078
-2023-04-27 02:18:10,461 - Epoch: [216][ 100/ 518] Overall Loss 2.876797 Objective Loss 2.876797 LR 0.000008 Time 0.210131
-2023-04-27 02:18:20,600 - Epoch: [216][ 150/ 518] Overall Loss 2.853204 Objective Loss 2.853204 LR 0.000008 Time 0.207667
-2023-04-27 02:18:30,683 - Epoch: [216][ 200/ 518] Overall Loss 2.860055 Objective Loss 2.860055 LR 0.000008 Time 0.206160
-2023-04-27 02:18:40,787 - Epoch: [216][ 250/ 518] Overall Loss 2.853359 Objective Loss 2.853359 LR 0.000008 Time 0.205334
-2023-04-27 02:18:50,978 - Epoch: [216][ 300/ 518] Overall Loss 2.850698 Objective Loss 2.850698 LR 0.000008 Time 0.205078
-2023-04-27 02:19:01,180 - Epoch: [216][ 350/ 518] Overall Loss 2.849385 Objective Loss 2.849385 LR 0.000008 Time 0.204925
-2023-04-27 02:19:11,372 - Epoch: [216][ 400/ 518] Overall Loss 2.851756 Objective Loss 2.851756 LR 0.000008 Time 0.204785
-2023-04-27 02:19:21,559 - Epoch: [216][ 450/ 518] Overall Loss 2.850067 Objective Loss 2.850067 LR 0.000008 Time 0.204666
-2023-04-27 02:19:31,708 - Epoch: [216][ 500/ 518] Overall Loss 2.847753 Objective Loss 2.847753 LR 0.000008 Time 0.204494
-2023-04-27 02:19:35,255 - Epoch: [216][ 518/ 518] Overall Loss 2.842260 Objective Loss 2.842260 LR 0.000008 Time 0.204234
-2023-04-27 02:19:35,334 - --- validate (epoch=216)-----------
-2023-04-27 02:19:35,334 - 4952 samples (32 per mini-batch)
-2023-04-27 02:19:42,221 - Epoch: [216][ 50/ 155] Loss 3.166013 mAP 0.456266
-2023-04-27 02:19:48,641 - Epoch: [216][ 100/ 155] Loss 3.132877 mAP 0.450977
-2023-04-27 02:19:55,028 - Epoch: [216][ 150/ 155] Loss 3.141799 mAP 0.452493
-2023-04-27 02:19:55,593 - Epoch: [216][ 155/ 155] Loss 3.138929 mAP 0.451681
-2023-04-27 02:19:55,660 - ==> mAP: 0.45168 Loss: 3.139
-
-2023-04-27 02:19:55,664 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:19:55,665 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:19:55,701 - 
-
-2023-04-27 02:19:55,701 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:20:06,579 - Epoch: [217][ 50/ 518] Overall Loss 2.843319 Objective Loss 2.843319 LR 0.000008 Time 0.217493
-2023-04-27 02:20:16,734 - Epoch: [217][ 100/ 518] Overall Loss 2.835470 Objective Loss 2.835470 LR 0.000008 Time 0.210283
-2023-04-27 02:20:26,886 - Epoch: [217][ 150/ 518] Overall Loss 2.828563 Objective Loss 2.828563 LR 0.000008 Time 0.207858
-2023-04-27 02:20:37,099 - Epoch: [217][ 200/ 518] Overall Loss 2.834932 Objective Loss 2.834932 LR 0.000008 Time 0.206948
-2023-04-27 02:20:47,261 - Epoch: [217][ 250/ 518] Overall Loss 2.825416 Objective Loss 2.825416 LR 0.000008 Time 0.206200
-2023-04-27 02:20:57,363 - Epoch: 
[217][ 300/ 518] Overall Loss 2.829735 Objective Loss 2.829735 LR 0.000008 Time 0.205502 -2023-04-27 02:21:07,577 - Epoch: [217][ 350/ 518] Overall Loss 2.828849 Objective Loss 2.828849 LR 0.000008 Time 0.205322 -2023-04-27 02:21:17,701 - Epoch: [217][ 400/ 518] Overall Loss 2.826988 Objective Loss 2.826988 LR 0.000008 Time 0.204964 -2023-04-27 02:21:27,822 - Epoch: [217][ 450/ 518] Overall Loss 2.824717 Objective Loss 2.824717 LR 0.000008 Time 0.204678 -2023-04-27 02:21:38,048 - Epoch: [217][ 500/ 518] Overall Loss 2.827791 Objective Loss 2.827791 LR 0.000008 Time 0.204658 -2023-04-27 02:21:41,580 - Epoch: [217][ 518/ 518] Overall Loss 2.829156 Objective Loss 2.829156 LR 0.000008 Time 0.204364 -2023-04-27 02:21:41,658 - --- validate (epoch=217)----------- -2023-04-27 02:21:41,658 - 4952 samples (32 per mini-batch) -2023-04-27 02:21:48,424 - Epoch: [217][ 50/ 155] Loss 3.080807 mAP 0.446506 -2023-04-27 02:21:54,844 - Epoch: [217][ 100/ 155] Loss 3.113131 mAP 0.455380 -2023-04-27 02:22:01,268 - Epoch: [217][ 150/ 155] Loss 3.145501 mAP 0.449260 -2023-04-27 02:22:01,845 - Epoch: [217][ 155/ 155] Loss 3.143686 mAP 0.447863 -2023-04-27 02:22:01,913 - ==> mAP: 0.44786 Loss: 3.144 - -2023-04-27 02:22:01,917 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:22:01,917 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:22:01,954 - - -2023-04-27 02:22:01,954 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:22:12,954 - Epoch: [218][ 50/ 518] Overall Loss 2.804906 Objective Loss 2.804906 LR 0.000008 Time 0.219933 -2023-04-27 02:22:23,083 - Epoch: [218][ 100/ 518] Overall Loss 2.799984 Objective Loss 2.799984 LR 0.000008 Time 0.211241 -2023-04-27 02:22:33,297 - Epoch: [218][ 150/ 518] Overall Loss 2.827940 Objective Loss 2.827940 LR 0.000008 Time 0.208913 -2023-04-27 02:22:43,419 - Epoch: [218][ 200/ 518] Overall Loss 2.823601 Objective Loss 2.823601 LR 0.000008 Time 0.207284 
-2023-04-27 02:22:53,682 - Epoch: [218][ 250/ 518] Overall Loss 2.829039 Objective Loss 2.829039 LR 0.000008 Time 0.206876 -2023-04-27 02:23:03,783 - Epoch: [218][ 300/ 518] Overall Loss 2.830125 Objective Loss 2.830125 LR 0.000008 Time 0.206060 -2023-04-27 02:23:13,879 - Epoch: [218][ 350/ 518] Overall Loss 2.837434 Objective Loss 2.837434 LR 0.000008 Time 0.205463 -2023-04-27 02:23:23,956 - Epoch: [218][ 400/ 518] Overall Loss 2.838470 Objective Loss 2.838470 LR 0.000008 Time 0.204969 -2023-04-27 02:23:34,125 - Epoch: [218][ 450/ 518] Overall Loss 2.832557 Objective Loss 2.832557 LR 0.000008 Time 0.204788 -2023-04-27 02:23:44,343 - Epoch: [218][ 500/ 518] Overall Loss 2.829257 Objective Loss 2.829257 LR 0.000008 Time 0.204743 -2023-04-27 02:23:47,862 - Epoch: [218][ 518/ 518] Overall Loss 2.830546 Objective Loss 2.830546 LR 0.000008 Time 0.204421 -2023-04-27 02:23:47,938 - --- validate (epoch=218)----------- -2023-04-27 02:23:47,939 - 4952 samples (32 per mini-batch) -2023-04-27 02:23:54,685 - Epoch: [218][ 50/ 155] Loss 3.130098 mAP 0.445228 -2023-04-27 02:24:01,124 - Epoch: [218][ 100/ 155] Loss 3.139987 mAP 0.437544 -2023-04-27 02:24:07,463 - Epoch: [218][ 150/ 155] Loss 3.149896 mAP 0.441051 -2023-04-27 02:24:08,032 - Epoch: [218][ 155/ 155] Loss 3.145768 mAP 0.442957 -2023-04-27 02:24:08,102 - ==> mAP: 0.44296 Loss: 3.146 - -2023-04-27 02:24:08,106 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:24:08,106 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:24:08,144 - - -2023-04-27 02:24:08,144 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:24:19,152 - Epoch: [219][ 50/ 518] Overall Loss 2.814052 Objective Loss 2.814052 LR 0.000008 Time 0.220107 -2023-04-27 02:24:29,210 - Epoch: [219][ 100/ 518] Overall Loss 2.838779 Objective Loss 2.838779 LR 0.000008 Time 0.210611 -2023-04-27 02:24:39,312 - Epoch: [219][ 150/ 518] Overall Loss 2.829193 Objective Loss 
2.829193 LR 0.000008 Time 0.207745 -2023-04-27 02:24:49,478 - Epoch: [219][ 200/ 518] Overall Loss 2.825765 Objective Loss 2.825765 LR 0.000008 Time 0.206629 -2023-04-27 02:24:59,661 - Epoch: [219][ 250/ 518] Overall Loss 2.828378 Objective Loss 2.828378 LR 0.000008 Time 0.206031 -2023-04-27 02:25:09,831 - Epoch: [219][ 300/ 518] Overall Loss 2.835557 Objective Loss 2.835557 LR 0.000008 Time 0.205585 -2023-04-27 02:25:19,958 - Epoch: [219][ 350/ 518] Overall Loss 2.835673 Objective Loss 2.835673 LR 0.000008 Time 0.205146 -2023-04-27 02:25:30,093 - Epoch: [219][ 400/ 518] Overall Loss 2.836970 Objective Loss 2.836970 LR 0.000008 Time 0.204837 -2023-04-27 02:25:40,210 - Epoch: [219][ 450/ 518] Overall Loss 2.841905 Objective Loss 2.841905 LR 0.000008 Time 0.204555 -2023-04-27 02:25:50,386 - Epoch: [219][ 500/ 518] Overall Loss 2.836743 Objective Loss 2.836743 LR 0.000008 Time 0.204450 -2023-04-27 02:25:53,926 - Epoch: [219][ 518/ 518] Overall Loss 2.835635 Objective Loss 2.835635 LR 0.000008 Time 0.204178 -2023-04-27 02:25:54,002 - --- validate (epoch=219)----------- -2023-04-27 02:25:54,002 - 4952 samples (32 per mini-batch) -2023-04-27 02:26:00,806 - Epoch: [219][ 50/ 155] Loss 3.171045 mAP 0.450750 -2023-04-27 02:26:07,204 - Epoch: [219][ 100/ 155] Loss 3.142162 mAP 0.450939 -2023-04-27 02:26:13,570 - Epoch: [219][ 150/ 155] Loss 3.146542 mAP 0.449511 -2023-04-27 02:26:14,141 - Epoch: [219][ 155/ 155] Loss 3.148029 mAP 0.451298 -2023-04-27 02:26:14,208 - ==> mAP: 0.45130 Loss: 3.148 - -2023-04-27 02:26:14,212 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:26:14,212 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:26:14,249 - - -2023-04-27 02:26:14,249 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:26:25,050 - Epoch: [220][ 50/ 518] Overall Loss 2.873363 Objective Loss 2.873363 LR 0.000008 Time 0.215966 -2023-04-27 02:26:35,319 - Epoch: [220][ 100/ 518] 
Overall Loss 2.845147 Objective Loss 2.845147 LR 0.000008 Time 0.210657 -2023-04-27 02:26:45,403 - Epoch: [220][ 150/ 518] Overall Loss 2.824270 Objective Loss 2.824270 LR 0.000008 Time 0.207654 -2023-04-27 02:26:55,514 - Epoch: [220][ 200/ 518] Overall Loss 2.835854 Objective Loss 2.835854 LR 0.000008 Time 0.206285 -2023-04-27 02:27:05,697 - Epoch: [220][ 250/ 518] Overall Loss 2.841618 Objective Loss 2.841618 LR 0.000008 Time 0.205754 -2023-04-27 02:27:15,889 - Epoch: [220][ 300/ 518] Overall Loss 2.838614 Objective Loss 2.838614 LR 0.000008 Time 0.205432 -2023-04-27 02:27:26,008 - Epoch: [220][ 350/ 518] Overall Loss 2.836298 Objective Loss 2.836298 LR 0.000008 Time 0.204991 -2023-04-27 02:27:36,250 - Epoch: [220][ 400/ 518] Overall Loss 2.828784 Objective Loss 2.828784 LR 0.000008 Time 0.204969 -2023-04-27 02:27:46,439 - Epoch: [220][ 450/ 518] Overall Loss 2.829132 Objective Loss 2.829132 LR 0.000008 Time 0.204832 -2023-04-27 02:27:56,651 - Epoch: [220][ 500/ 518] Overall Loss 2.831787 Objective Loss 2.831787 LR 0.000008 Time 0.204770 -2023-04-27 02:28:00,197 - Epoch: [220][ 518/ 518] Overall Loss 2.831778 Objective Loss 2.831778 LR 0.000008 Time 0.204500 -2023-04-27 02:28:00,275 - --- validate (epoch=220)----------- -2023-04-27 02:28:00,275 - 4952 samples (32 per mini-batch) -2023-04-27 02:28:07,023 - Epoch: [220][ 50/ 155] Loss 3.146492 mAP 0.436486 -2023-04-27 02:28:13,413 - Epoch: [220][ 100/ 155] Loss 3.142854 mAP 0.440813 -2023-04-27 02:28:19,832 - Epoch: [220][ 150/ 155] Loss 3.152317 mAP 0.439966 -2023-04-27 02:28:20,398 - Epoch: [220][ 155/ 155] Loss 3.145767 mAP 0.442764 -2023-04-27 02:28:20,472 - ==> mAP: 0.44276 Loss: 3.146 - -2023-04-27 02:28:20,476 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:28:20,476 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:28:20,512 - - -2023-04-27 02:28:20,512 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 
02:28:31,420 - Epoch: [221][ 50/ 518] Overall Loss 2.824726 Objective Loss 2.824726 LR 0.000008 Time 0.218093 -2023-04-27 02:28:41,563 - Epoch: [221][ 100/ 518] Overall Loss 2.840026 Objective Loss 2.840026 LR 0.000008 Time 0.210463 -2023-04-27 02:28:51,760 - Epoch: [221][ 150/ 518] Overall Loss 2.831188 Objective Loss 2.831188 LR 0.000008 Time 0.208273 -2023-04-27 02:29:01,975 - Epoch: [221][ 200/ 518] Overall Loss 2.828506 Objective Loss 2.828506 LR 0.000008 Time 0.207272 -2023-04-27 02:29:12,179 - Epoch: [221][ 250/ 518] Overall Loss 2.826878 Objective Loss 2.826878 LR 0.000008 Time 0.206630 -2023-04-27 02:29:22,310 - Epoch: [221][ 300/ 518] Overall Loss 2.832307 Objective Loss 2.832307 LR 0.000008 Time 0.205955 -2023-04-27 02:29:32,438 - Epoch: [221][ 350/ 518] Overall Loss 2.839205 Objective Loss 2.839205 LR 0.000008 Time 0.205466 -2023-04-27 02:29:42,603 - Epoch: [221][ 400/ 518] Overall Loss 2.841577 Objective Loss 2.841577 LR 0.000008 Time 0.205192 -2023-04-27 02:29:52,804 - Epoch: [221][ 450/ 518] Overall Loss 2.843710 Objective Loss 2.843710 LR 0.000008 Time 0.205057 -2023-04-27 02:30:02,941 - Epoch: [221][ 500/ 518] Overall Loss 2.840918 Objective Loss 2.840918 LR 0.000008 Time 0.204822 -2023-04-27 02:30:06,520 - Epoch: [221][ 518/ 518] Overall Loss 2.841006 Objective Loss 2.841006 LR 0.000008 Time 0.204612 -2023-04-27 02:30:06,597 - --- validate (epoch=221)----------- -2023-04-27 02:30:06,597 - 4952 samples (32 per mini-batch) -2023-04-27 02:30:13,329 - Epoch: [221][ 50/ 155] Loss 3.121237 mAP 0.470564 -2023-04-27 02:30:19,688 - Epoch: [221][ 100/ 155] Loss 3.159309 mAP 0.455097 -2023-04-27 02:30:26,042 - Epoch: [221][ 150/ 155] Loss 3.149493 mAP 0.454794 -2023-04-27 02:30:26,623 - Epoch: [221][ 155/ 155] Loss 3.149910 mAP 0.454811 -2023-04-27 02:30:26,700 - ==> mAP: 0.45481 Loss: 3.150 - -2023-04-27 02:30:26,704 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:30:26,704 - Saving checkpoint to: 
logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:30:26,740 - - -2023-04-27 02:30:26,740 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:30:37,563 - Epoch: [222][ 50/ 518] Overall Loss 2.884980 Objective Loss 2.884980 LR 0.000008 Time 0.216398 -2023-04-27 02:30:47,714 - Epoch: [222][ 100/ 518] Overall Loss 2.854767 Objective Loss 2.854767 LR 0.000008 Time 0.209699 -2023-04-27 02:30:57,841 - Epoch: [222][ 150/ 518] Overall Loss 2.837080 Objective Loss 2.837080 LR 0.000008 Time 0.207302 -2023-04-27 02:31:07,959 - Epoch: [222][ 200/ 518] Overall Loss 2.839298 Objective Loss 2.839298 LR 0.000008 Time 0.206060 -2023-04-27 02:31:18,040 - Epoch: [222][ 250/ 518] Overall Loss 2.837534 Objective Loss 2.837534 LR 0.000008 Time 0.205165 -2023-04-27 02:31:28,142 - Epoch: [222][ 300/ 518] Overall Loss 2.833342 Objective Loss 2.833342 LR 0.000008 Time 0.204637 -2023-04-27 02:31:38,313 - Epoch: [222][ 350/ 518] Overall Loss 2.835090 Objective Loss 2.835090 LR 0.000008 Time 0.204458 -2023-04-27 02:31:48,557 - Epoch: [222][ 400/ 518] Overall Loss 2.835406 Objective Loss 2.835406 LR 0.000008 Time 0.204507 -2023-04-27 02:31:58,719 - Epoch: [222][ 450/ 518] Overall Loss 2.835215 Objective Loss 2.835215 LR 0.000008 Time 0.204364 -2023-04-27 02:32:08,819 - Epoch: [222][ 500/ 518] Overall Loss 2.829819 Objective Loss 2.829819 LR 0.000008 Time 0.204124 -2023-04-27 02:32:12,340 - Epoch: [222][ 518/ 518] Overall Loss 2.834090 Objective Loss 2.834090 LR 0.000008 Time 0.203827 -2023-04-27 02:32:12,418 - --- validate (epoch=222)----------- -2023-04-27 02:32:12,418 - 4952 samples (32 per mini-batch) -2023-04-27 02:32:19,250 - Epoch: [222][ 50/ 155] Loss 3.128718 mAP 0.449431 -2023-04-27 02:32:25,683 - Epoch: [222][ 100/ 155] Loss 3.132788 mAP 0.462313 -2023-04-27 02:32:32,031 - Epoch: [222][ 150/ 155] Loss 3.150789 mAP 0.455601 -2023-04-27 02:32:32,596 - Epoch: [222][ 155/ 155] Loss 3.145550 mAP 0.455863 -2023-04-27 02:32:32,670 - ==> mAP: 0.45586 Loss: 3.146 - 
-2023-04-27 02:32:32,674 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:32:32,674 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:32:32,711 - - -2023-04-27 02:32:32,711 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:32:43,623 - Epoch: [223][ 50/ 518] Overall Loss 2.888458 Objective Loss 2.888458 LR 0.000008 Time 0.218198 -2023-04-27 02:32:53,710 - Epoch: [223][ 100/ 518] Overall Loss 2.830577 Objective Loss 2.830577 LR 0.000008 Time 0.209951 -2023-04-27 02:33:03,880 - Epoch: [223][ 150/ 518] Overall Loss 2.836017 Objective Loss 2.836017 LR 0.000008 Time 0.207756 -2023-04-27 02:33:14,046 - Epoch: [223][ 200/ 518] Overall Loss 2.833491 Objective Loss 2.833491 LR 0.000008 Time 0.206637 -2023-04-27 02:33:24,146 - Epoch: [223][ 250/ 518] Overall Loss 2.830746 Objective Loss 2.830746 LR 0.000008 Time 0.205703 -2023-04-27 02:33:34,221 - Epoch: [223][ 300/ 518] Overall Loss 2.839087 Objective Loss 2.839087 LR 0.000008 Time 0.204998 -2023-04-27 02:33:44,338 - Epoch: [223][ 350/ 518] Overall Loss 2.832340 Objective Loss 2.832340 LR 0.000008 Time 0.204612 -2023-04-27 02:33:54,555 - Epoch: [223][ 400/ 518] Overall Loss 2.838476 Objective Loss 2.838476 LR 0.000008 Time 0.204575 -2023-04-27 02:34:04,808 - Epoch: [223][ 450/ 518] Overall Loss 2.837566 Objective Loss 2.837566 LR 0.000008 Time 0.204625 -2023-04-27 02:34:14,963 - Epoch: [223][ 500/ 518] Overall Loss 2.841154 Objective Loss 2.841154 LR 0.000008 Time 0.204470 -2023-04-27 02:34:18,520 - Epoch: [223][ 518/ 518] Overall Loss 2.842962 Objective Loss 2.842962 LR 0.000008 Time 0.204231 -2023-04-27 02:34:18,597 - --- validate (epoch=223)----------- -2023-04-27 02:34:18,597 - 4952 samples (32 per mini-batch) -2023-04-27 02:34:25,365 - Epoch: [223][ 50/ 155] Loss 3.163072 mAP 0.453233 -2023-04-27 02:34:31,793 - Epoch: [223][ 100/ 155] Loss 3.155975 mAP 0.455115 -2023-04-27 02:34:38,184 - Epoch: [223][ 150/ 155] Loss 
3.151138 mAP 0.456696 -2023-04-27 02:34:38,748 - Epoch: [223][ 155/ 155] Loss 3.147809 mAP 0.457403 -2023-04-27 02:34:38,823 - ==> mAP: 0.45740 Loss: 3.148 - -2023-04-27 02:34:38,827 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:34:38,827 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:34:38,864 - - -2023-04-27 02:34:38,864 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:34:49,757 - Epoch: [224][ 50/ 518] Overall Loss 2.832989 Objective Loss 2.832989 LR 0.000008 Time 0.217810 -2023-04-27 02:34:59,906 - Epoch: [224][ 100/ 518] Overall Loss 2.835877 Objective Loss 2.835877 LR 0.000008 Time 0.210383 -2023-04-27 02:35:09,986 - Epoch: [224][ 150/ 518] Overall Loss 2.830155 Objective Loss 2.830155 LR 0.000008 Time 0.207440 -2023-04-27 02:35:20,160 - Epoch: [224][ 200/ 518] Overall Loss 2.816419 Objective Loss 2.816419 LR 0.000008 Time 0.206441 -2023-04-27 02:35:30,274 - Epoch: [224][ 250/ 518] Overall Loss 2.816431 Objective Loss 2.816431 LR 0.000008 Time 0.205604 -2023-04-27 02:35:40,401 - Epoch: [224][ 300/ 518] Overall Loss 2.814538 Objective Loss 2.814538 LR 0.000008 Time 0.205087 -2023-04-27 02:35:50,516 - Epoch: [224][ 350/ 518] Overall Loss 2.821973 Objective Loss 2.821973 LR 0.000008 Time 0.204684 -2023-04-27 02:36:00,718 - Epoch: [224][ 400/ 518] Overall Loss 2.824792 Objective Loss 2.824792 LR 0.000008 Time 0.204601 -2023-04-27 02:36:10,960 - Epoch: [224][ 450/ 518] Overall Loss 2.828365 Objective Loss 2.828365 LR 0.000008 Time 0.204624 -2023-04-27 02:36:21,096 - Epoch: [224][ 500/ 518] Overall Loss 2.829100 Objective Loss 2.829100 LR 0.000008 Time 0.204429 -2023-04-27 02:36:24,626 - Epoch: [224][ 518/ 518] Overall Loss 2.830941 Objective Loss 2.830941 LR 0.000008 Time 0.204141 -2023-04-27 02:36:24,703 - --- validate (epoch=224)----------- -2023-04-27 02:36:24,704 - 4952 samples (32 per mini-batch) -2023-04-27 02:36:31,459 - Epoch: [224][ 50/ 155] Loss 
3.172251 mAP 0.443992 -2023-04-27 02:36:37,825 - Epoch: [224][ 100/ 155] Loss 3.153487 mAP 0.444766 -2023-04-27 02:36:44,182 - Epoch: [224][ 150/ 155] Loss 3.156451 mAP 0.438891 -2023-04-27 02:36:44,759 - Epoch: [224][ 155/ 155] Loss 3.149283 mAP 0.439560 -2023-04-27 02:36:44,831 - ==> mAP: 0.43956 Loss: 3.149 - -2023-04-27 02:36:44,835 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:36:44,835 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:36:44,872 - - -2023-04-27 02:36:44,872 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:36:55,817 - Epoch: [225][ 50/ 518] Overall Loss 2.857843 Objective Loss 2.857843 LR 0.000008 Time 0.218856 -2023-04-27 02:37:05,885 - Epoch: [225][ 100/ 518] Overall Loss 2.859336 Objective Loss 2.859336 LR 0.000008 Time 0.210085 -2023-04-27 02:37:16,049 - Epoch: [225][ 150/ 518] Overall Loss 2.843288 Objective Loss 2.843288 LR 0.000008 Time 0.207811 -2023-04-27 02:37:26,221 - Epoch: [225][ 200/ 518] Overall Loss 2.843984 Objective Loss 2.843984 LR 0.000008 Time 0.206707 -2023-04-27 02:37:36,378 - Epoch: [225][ 250/ 518] Overall Loss 2.857188 Objective Loss 2.857188 LR 0.000008 Time 0.205987 -2023-04-27 02:37:46,589 - Epoch: [225][ 300/ 518] Overall Loss 2.843422 Objective Loss 2.843422 LR 0.000008 Time 0.205689 -2023-04-27 02:37:56,804 - Epoch: [225][ 350/ 518] Overall Loss 2.848327 Objective Loss 2.848327 LR 0.000008 Time 0.205486 -2023-04-27 02:38:06,986 - Epoch: [225][ 400/ 518] Overall Loss 2.843583 Objective Loss 2.843583 LR 0.000008 Time 0.205250 -2023-04-27 02:38:17,200 - Epoch: [225][ 450/ 518] Overall Loss 2.844764 Objective Loss 2.844764 LR 0.000008 Time 0.205138 -2023-04-27 02:38:27,397 - Epoch: [225][ 500/ 518] Overall Loss 2.850501 Objective Loss 2.850501 LR 0.000008 Time 0.205015 -2023-04-27 02:38:30,921 - Epoch: [225][ 518/ 518] Overall Loss 2.851611 Objective Loss 2.851611 LR 0.000008 Time 0.204694 -2023-04-27 02:38:30,999 
- --- validate (epoch=225)----------- -2023-04-27 02:38:31,000 - 4952 samples (32 per mini-batch) -2023-04-27 02:38:37,723 - Epoch: [225][ 50/ 155] Loss 3.126545 mAP 0.449003 -2023-04-27 02:38:44,169 - Epoch: [225][ 100/ 155] Loss 3.145622 mAP 0.449000 -2023-04-27 02:38:50,557 - Epoch: [225][ 150/ 155] Loss 3.147282 mAP 0.450089 -2023-04-27 02:38:51,137 - Epoch: [225][ 155/ 155] Loss 3.148065 mAP 0.449709 -2023-04-27 02:38:51,211 - ==> mAP: 0.44971 Loss: 3.148 - -2023-04-27 02:38:51,215 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:38:51,215 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:38:51,251 - - -2023-04-27 02:38:51,251 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:39:02,033 - Epoch: [226][ 50/ 518] Overall Loss 2.831670 Objective Loss 2.831670 LR 0.000008 Time 0.215584 -2023-04-27 02:39:12,135 - Epoch: [226][ 100/ 518] Overall Loss 2.838696 Objective Loss 2.838696 LR 0.000008 Time 0.208797 -2023-04-27 02:39:22,314 - Epoch: [226][ 150/ 518] Overall Loss 2.844421 Objective Loss 2.844421 LR 0.000008 Time 0.207048 -2023-04-27 02:39:32,562 - Epoch: [226][ 200/ 518] Overall Loss 2.836894 Objective Loss 2.836894 LR 0.000008 Time 0.206516 -2023-04-27 02:39:42,779 - Epoch: [226][ 250/ 518] Overall Loss 2.838951 Objective Loss 2.838951 LR 0.000008 Time 0.206075 -2023-04-27 02:39:52,863 - Epoch: [226][ 300/ 518] Overall Loss 2.838262 Objective Loss 2.838262 LR 0.000008 Time 0.205336 -2023-04-27 02:40:03,020 - Epoch: [226][ 350/ 518] Overall Loss 2.832724 Objective Loss 2.832724 LR 0.000008 Time 0.205016 -2023-04-27 02:40:13,149 - Epoch: [226][ 400/ 518] Overall Loss 2.832034 Objective Loss 2.832034 LR 0.000008 Time 0.204710 -2023-04-27 02:40:23,281 - Epoch: [226][ 450/ 518] Overall Loss 2.834972 Objective Loss 2.834972 LR 0.000008 Time 0.204476 -2023-04-27 02:40:33,421 - Epoch: [226][ 500/ 518] Overall Loss 2.825047 Objective Loss 2.825047 LR 0.000008 Time 
0.204305 -2023-04-27 02:40:36,930 - Epoch: [226][ 518/ 518] Overall Loss 2.827463 Objective Loss 2.827463 LR 0.000008 Time 0.203979 -2023-04-27 02:40:37,007 - --- validate (epoch=226)----------- -2023-04-27 02:40:37,007 - 4952 samples (32 per mini-batch) -2023-04-27 02:40:43,786 - Epoch: [226][ 50/ 155] Loss 3.124058 mAP 0.443947 -2023-04-27 02:40:50,222 - Epoch: [226][ 100/ 155] Loss 3.144361 mAP 0.452168 -2023-04-27 02:40:56,598 - Epoch: [226][ 150/ 155] Loss 3.149067 mAP 0.452027 -2023-04-27 02:40:57,164 - Epoch: [226][ 155/ 155] Loss 3.148602 mAP 0.453183 -2023-04-27 02:40:57,247 - ==> mAP: 0.45318 Loss: 3.149 - -2023-04-27 02:40:57,252 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:40:57,252 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:40:57,289 - - -2023-04-27 02:40:57,289 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:41:08,209 - Epoch: [227][ 50/ 518] Overall Loss 2.835131 Objective Loss 2.835131 LR 0.000008 Time 0.218333 -2023-04-27 02:41:18,328 - Epoch: [227][ 100/ 518] Overall Loss 2.834554 Objective Loss 2.834554 LR 0.000008 Time 0.210339 -2023-04-27 02:41:28,485 - Epoch: [227][ 150/ 518] Overall Loss 2.840147 Objective Loss 2.840147 LR 0.000008 Time 0.207928 -2023-04-27 02:41:38,589 - Epoch: [227][ 200/ 518] Overall Loss 2.848695 Objective Loss 2.848695 LR 0.000008 Time 0.206461 -2023-04-27 02:41:48,754 - Epoch: [227][ 250/ 518] Overall Loss 2.844235 Objective Loss 2.844235 LR 0.000008 Time 0.205820 -2023-04-27 02:41:59,037 - Epoch: [227][ 300/ 518] Overall Loss 2.833315 Objective Loss 2.833315 LR 0.000008 Time 0.205789 -2023-04-27 02:42:09,161 - Epoch: [227][ 350/ 518] Overall Loss 2.835304 Objective Loss 2.835304 LR 0.000008 Time 0.205312 -2023-04-27 02:42:19,200 - Epoch: [227][ 400/ 518] Overall Loss 2.837429 Objective Loss 2.837429 LR 0.000008 Time 0.204742 -2023-04-27 02:42:29,261 - Epoch: [227][ 450/ 518] Overall Loss 2.834536 Objective 
Loss 2.834536 LR 0.000008 Time 0.204347 -2023-04-27 02:42:39,392 - Epoch: [227][ 500/ 518] Overall Loss 2.833662 Objective Loss 2.833662 LR 0.000008 Time 0.204171 -2023-04-27 02:42:42,917 - Epoch: [227][ 518/ 518] Overall Loss 2.834841 Objective Loss 2.834841 LR 0.000008 Time 0.203880 -2023-04-27 02:42:42,992 - --- validate (epoch=227)----------- -2023-04-27 02:42:42,992 - 4952 samples (32 per mini-batch) -2023-04-27 02:42:49,832 - Epoch: [227][ 50/ 155] Loss 3.080485 mAP 0.462817 -2023-04-27 02:42:56,262 - Epoch: [227][ 100/ 155] Loss 3.112923 mAP 0.457571 -2023-04-27 02:43:02,634 - Epoch: [227][ 150/ 155] Loss 3.136484 mAP 0.451921 -2023-04-27 02:43:03,204 - Epoch: [227][ 155/ 155] Loss 3.138703 mAP 0.451077 -2023-04-27 02:43:03,281 - ==> mAP: 0.45108 Loss: 3.139 - -2023-04-27 02:43:03,285 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:43:03,285 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:43:03,321 - - -2023-04-27 02:43:03,321 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:43:14,218 - Epoch: [228][ 50/ 518] Overall Loss 2.817256 Objective Loss 2.817256 LR 0.000008 Time 0.217884 -2023-04-27 02:43:24,367 - Epoch: [228][ 100/ 518] Overall Loss 2.809601 Objective Loss 2.809601 LR 0.000008 Time 0.210407 -2023-04-27 02:43:34,609 - Epoch: [228][ 150/ 518] Overall Loss 2.809390 Objective Loss 2.809390 LR 0.000008 Time 0.208546 -2023-04-27 02:43:44,835 - Epoch: [228][ 200/ 518] Overall Loss 2.806565 Objective Loss 2.806565 LR 0.000008 Time 0.207528 -2023-04-27 02:43:55,034 - Epoch: [228][ 250/ 518] Overall Loss 2.817969 Objective Loss 2.817969 LR 0.000008 Time 0.206815 -2023-04-27 02:44:05,144 - Epoch: [228][ 300/ 518] Overall Loss 2.828194 Objective Loss 2.828194 LR 0.000008 Time 0.206040 -2023-04-27 02:44:15,210 - Epoch: [228][ 350/ 518] Overall Loss 2.825078 Objective Loss 2.825078 LR 0.000008 Time 0.205361 -2023-04-27 02:44:25,362 - Epoch: [228][ 400/ 518] 
Overall Loss 2.831661 Objective Loss 2.831661 LR 0.000008 Time 0.205066 -2023-04-27 02:44:35,496 - Epoch: [228][ 450/ 518] Overall Loss 2.831269 Objective Loss 2.831269 LR 0.000008 Time 0.204799 -2023-04-27 02:44:45,603 - Epoch: [228][ 500/ 518] Overall Loss 2.834374 Objective Loss 2.834374 LR 0.000008 Time 0.204528 -2023-04-27 02:44:49,124 - Epoch: [228][ 518/ 518] Overall Loss 2.833511 Objective Loss 2.833511 LR 0.000008 Time 0.204218 -2023-04-27 02:44:49,204 - --- validate (epoch=228)----------- -2023-04-27 02:44:49,204 - 4952 samples (32 per mini-batch) -2023-04-27 02:44:55,996 - Epoch: [228][ 50/ 155] Loss 3.156313 mAP 0.467803 -2023-04-27 02:45:02,446 - Epoch: [228][ 100/ 155] Loss 3.154921 mAP 0.457131 -2023-04-27 02:45:08,901 - Epoch: [228][ 150/ 155] Loss 3.138079 mAP 0.458745 -2023-04-27 02:45:09,474 - Epoch: [228][ 155/ 155] Loss 3.141108 mAP 0.456878 -2023-04-27 02:45:09,547 - ==> mAP: 0.45688 Loss: 3.141 - -2023-04-27 02:45:09,551 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 02:45:09,551 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 02:45:09,587 - - -2023-04-27 02:45:09,588 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 02:45:20,583 - Epoch: [229][ 50/ 518] Overall Loss 2.840580 Objective Loss 2.840580 LR 0.000008 Time 0.219854 -2023-04-27 02:45:30,686 - Epoch: [229][ 100/ 518] Overall Loss 2.835651 Objective Loss 2.835651 LR 0.000008 Time 0.210937 -2023-04-27 02:45:40,816 - Epoch: [229][ 150/ 518] Overall Loss 2.838428 Objective Loss 2.838428 LR 0.000008 Time 0.208150 -2023-04-27 02:45:50,987 - Epoch: [229][ 200/ 518] Overall Loss 2.833170 Objective Loss 2.833170 LR 0.000008 Time 0.206957 -2023-04-27 02:46:01,088 - Epoch: [229][ 250/ 518] Overall Loss 2.824636 Objective Loss 2.824636 LR 0.000008 Time 0.205964 -2023-04-27 02:46:11,267 - Epoch: [229][ 300/ 518] Overall Loss 2.829443 Objective Loss 2.829443 LR 0.000008 Time 0.205561 -2023-04-27 
02:46:21,478 - Epoch: [229][ 350/ 518] Overall Loss 2.828202 Objective Loss 2.828202 LR 0.000008 Time 0.205367
-2023-04-27 02:46:31,663 - Epoch: [229][ 400/ 518] Overall Loss 2.824323 Objective Loss 2.824323 LR 0.000008 Time 0.205154
-2023-04-27 02:46:41,796 - Epoch: [229][ 450/ 518] Overall Loss 2.826068 Objective Loss 2.826068 LR 0.000008 Time 0.204872
-2023-04-27 02:46:51,915 - Epoch: [229][ 500/ 518] Overall Loss 2.833326 Objective Loss 2.833326 LR 0.000008 Time 0.204621
-2023-04-27 02:46:55,417 - Epoch: [229][ 518/ 518] Overall Loss 2.834260 Objective Loss 2.834260 LR 0.000008 Time 0.204270
-2023-04-27 02:46:55,495 - --- validate (epoch=229)-----------
-2023-04-27 02:46:55,495 - 4952 samples (32 per mini-batch)
-2023-04-27 02:47:02,296 - Epoch: [229][ 50/ 155] Loss 3.147352 mAP 0.468036
-2023-04-27 02:47:08,687 - Epoch: [229][ 100/ 155] Loss 3.153649 mAP 0.452870
-2023-04-27 02:47:15,100 - Epoch: [229][ 150/ 155] Loss 3.141868 mAP 0.456235
-2023-04-27 02:47:15,671 - Epoch: [229][ 155/ 155] Loss 3.142247 mAP 0.456460
-2023-04-27 02:47:15,742 - ==> mAP: 0.45646 Loss: 3.142
-
-2023-04-27 02:47:15,745 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:47:15,745 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:47:15,782 - 
-
-2023-04-27 02:47:15,782 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:47:26,801 - Epoch: [230][ 50/ 518] Overall Loss 2.889240 Objective Loss 2.889240 LR 0.000008 Time 0.220329
-2023-04-27 02:47:36,867 - Epoch: [230][ 100/ 518] Overall Loss 2.848097 Objective Loss 2.848097 LR 0.000008 Time 0.210799
-2023-04-27 02:47:46,994 - Epoch: [230][ 150/ 518] Overall Loss 2.843613 Objective Loss 2.843613 LR 0.000008 Time 0.208035
-2023-04-27 02:47:57,257 - Epoch: [230][ 200/ 518] Overall Loss 2.842756 Objective Loss 2.842756 LR 0.000008 Time 0.207333
-2023-04-27 02:48:07,403 - Epoch: [230][ 250/ 518] Overall Loss 2.829936 Objective Loss 2.829936 LR 0.000008 Time 0.206445
-2023-04-27 02:48:17,571 - Epoch: [230][ 300/ 518] Overall Loss 2.830174 Objective Loss 2.830174 LR 0.000008 Time 0.205925
-2023-04-27 02:48:27,633 - Epoch: [230][ 350/ 518] Overall Loss 2.826558 Objective Loss 2.826558 LR 0.000008 Time 0.205251
-2023-04-27 02:48:37,795 - Epoch: [230][ 400/ 518] Overall Loss 2.825451 Objective Loss 2.825451 LR 0.000008 Time 0.204997
-2023-04-27 02:48:47,921 - Epoch: [230][ 450/ 518] Overall Loss 2.829304 Objective Loss 2.829304 LR 0.000008 Time 0.204718
-2023-04-27 02:48:58,087 - Epoch: [230][ 500/ 518] Overall Loss 2.829891 Objective Loss 2.829891 LR 0.000008 Time 0.204575
-2023-04-27 02:49:01,601 - Epoch: [230][ 518/ 518] Overall Loss 2.827652 Objective Loss 2.827652 LR 0.000008 Time 0.204249
-2023-04-27 02:49:01,678 - --- validate (epoch=230)-----------
-2023-04-27 02:49:01,679 - 4952 samples (32 per mini-batch)
-2023-04-27 02:49:08,399 - Epoch: [230][ 50/ 155] Loss 3.141372 mAP 0.446296
-2023-04-27 02:49:14,777 - Epoch: [230][ 100/ 155] Loss 3.133139 mAP 0.460085
-2023-04-27 02:49:21,133 - Epoch: [230][ 150/ 155] Loss 3.140823 mAP 0.453906
-2023-04-27 02:49:21,709 - Epoch: [230][ 155/ 155] Loss 3.144497 mAP 0.452072
-2023-04-27 02:49:21,777 - ==> mAP: 0.45207 Loss: 3.144
-
-2023-04-27 02:49:21,781 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:49:21,781 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:49:21,817 - 
-
-2023-04-27 02:49:21,817 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:49:32,748 - Epoch: [231][ 50/ 518] Overall Loss 2.826600 Objective Loss 2.826600 LR 0.000008 Time 0.218559
-2023-04-27 02:49:42,931 - Epoch: [231][ 100/ 518] Overall Loss 2.810909 Objective Loss 2.810909 LR 0.000008 Time 0.211094
-2023-04-27 02:49:53,073 - Epoch: [231][ 150/ 518] Overall Loss 2.816656 Objective Loss 2.816656 LR 0.000008 Time 0.208332
-2023-04-27 02:50:03,166 - Epoch: [231][ 200/ 518] Overall Loss 2.815688 Objective Loss 2.815688 LR 0.000008 Time 0.206705
-2023-04-27 02:50:13,310 - Epoch: [231][ 250/ 518] Overall Loss 2.828615 Objective Loss 2.828615 LR 0.000008 Time 0.205934
-2023-04-27 02:50:23,493 - Epoch: [231][ 300/ 518] Overall Loss 2.819677 Objective Loss 2.819677 LR 0.000008 Time 0.205550
-2023-04-27 02:50:33,699 - Epoch: [231][ 350/ 518] Overall Loss 2.823725 Objective Loss 2.823725 LR 0.000008 Time 0.205341
-2023-04-27 02:50:43,791 - Epoch: [231][ 400/ 518] Overall Loss 2.823719 Objective Loss 2.823719 LR 0.000008 Time 0.204899
-2023-04-27 02:50:53,979 - Epoch: [231][ 450/ 518] Overall Loss 2.825193 Objective Loss 2.825193 LR 0.000008 Time 0.204768
-2023-04-27 02:51:04,076 - Epoch: [231][ 500/ 518] Overall Loss 2.828167 Objective Loss 2.828167 LR 0.000008 Time 0.204483
-2023-04-27 02:51:07,604 - Epoch: [231][ 518/ 518] Overall Loss 2.833202 Objective Loss 2.833202 LR 0.000008 Time 0.204187
-2023-04-27 02:51:07,682 - --- validate (epoch=231)-----------
-2023-04-27 02:51:07,682 - 4952 samples (32 per mini-batch)
-2023-04-27 02:51:14,487 - Epoch: [231][ 50/ 155] Loss 3.223501 mAP 0.454163
-2023-04-27 02:51:20,878 - Epoch: [231][ 100/ 155] Loss 3.174030 mAP 0.454638
-2023-04-27 02:51:27,290 - Epoch: [231][ 150/ 155] Loss 3.158317 mAP 0.456411
-2023-04-27 02:51:27,867 - Epoch: [231][ 155/ 155] Loss 3.154404 mAP 0.456618
-2023-04-27 02:51:27,941 - ==> mAP: 0.45662 Loss: 3.154
-
-2023-04-27 02:51:27,945 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:51:27,945 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:51:27,982 - 
-
-2023-04-27 02:51:27,982 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:51:38,946 - Epoch: [232][ 50/ 518] Overall Loss 2.815822 Objective Loss 2.815822 LR 0.000008 Time 0.219237
-2023-04-27 02:51:49,109 - Epoch: [232][ 100/ 518] Overall Loss 2.805901 Objective Loss 2.805901 LR 0.000008 Time 0.211230
-2023-04-27 02:51:59,344 - Epoch: [232][ 150/ 518] Overall Loss 2.826777 Objective Loss 2.826777 LR 0.000008 Time 0.209041
-2023-04-27 02:52:09,486 - Epoch: [232][ 200/ 518] Overall Loss 2.826660 Objective Loss 2.826660 LR 0.000008 Time 0.207481
-2023-04-27 02:52:19,603 - Epoch: [232][ 250/ 518] Overall Loss 2.821612 Objective Loss 2.821612 LR 0.000008 Time 0.206447
-2023-04-27 02:52:29,759 - Epoch: [232][ 300/ 518] Overall Loss 2.825684 Objective Loss 2.825684 LR 0.000008 Time 0.205887
-2023-04-27 02:52:40,004 - Epoch: [232][ 350/ 518] Overall Loss 2.825294 Objective Loss 2.825294 LR 0.000008 Time 0.205742
-2023-04-27 02:52:50,111 - Epoch: [232][ 400/ 518] Overall Loss 2.825437 Objective Loss 2.825437 LR 0.000008 Time 0.205287
-2023-04-27 02:53:00,338 - Epoch: [232][ 450/ 518] Overall Loss 2.828188 Objective Loss 2.828188 LR 0.000008 Time 0.205201
-2023-04-27 02:53:10,448 - Epoch: [232][ 500/ 518] Overall Loss 2.830263 Objective Loss 2.830263 LR 0.000008 Time 0.204898
-2023-04-27 02:53:13,969 - Epoch: [232][ 518/ 518] Overall Loss 2.830057 Objective Loss 2.830057 LR 0.000008 Time 0.204574
-2023-04-27 02:53:14,046 - --- validate (epoch=232)-----------
-2023-04-27 02:53:14,046 - 4952 samples (32 per mini-batch)
-2023-04-27 02:53:20,804 - Epoch: [232][ 50/ 155] Loss 3.177887 mAP 0.449662
-2023-04-27 02:53:27,202 - Epoch: [232][ 100/ 155] Loss 3.139085 mAP 0.450522
-2023-04-27 02:53:33,576 - Epoch: [232][ 150/ 155] Loss 3.142621 mAP 0.451115
-2023-04-27 02:53:34,160 - Epoch: [232][ 155/ 155] Loss 3.140389 mAP 0.450372
-2023-04-27 02:53:34,234 - ==> mAP: 0.45037 Loss: 3.140
-
-2023-04-27 02:53:34,239 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:53:34,239 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:53:34,275 - 
-
-2023-04-27 02:53:34,275 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:53:45,312 - Epoch: [233][ 50/ 518] Overall Loss 2.842486 Objective Loss 2.842486 LR 0.000008 Time 0.220676
-2023-04-27 02:53:55,443 - Epoch: [233][ 100/ 518] Overall Loss 2.871154 Objective Loss 2.871154 LR 0.000008 Time 0.211638
-2023-04-27 02:54:05,609 - Epoch: [233][ 150/ 518] Overall Loss 2.860183 Objective Loss 2.860183 LR 0.000008 Time 0.208850
-2023-04-27 02:54:15,877 - Epoch: [233][ 200/ 518] Overall Loss 2.850721 Objective Loss 2.850721 LR 0.000008 Time 0.207968
-2023-04-27 02:54:26,031 - Epoch: [233][ 250/ 518] Overall Loss 2.859871 Objective Loss 2.859871 LR 0.000008 Time 0.206984
-2023-04-27 02:54:36,217 - Epoch: [233][ 300/ 518] Overall Loss 2.848508 Objective Loss 2.848508 LR 0.000008 Time 0.206435
-2023-04-27 02:54:46,376 - Epoch: [233][ 350/ 518] Overall Loss 2.847276 Objective Loss 2.847276 LR 0.000008 Time 0.205968
-2023-04-27 02:54:56,605 - Epoch: [233][ 400/ 518] Overall Loss 2.840191 Objective Loss 2.840191 LR 0.000008 Time 0.205789
-2023-04-27 02:55:06,750 - Epoch: [233][ 450/ 518] Overall Loss 2.835313 Objective Loss 2.835313 LR 0.000008 Time 0.205465
-2023-04-27 02:55:16,980 - Epoch: [233][ 500/ 518] Overall Loss 2.837867 Objective Loss 2.837867 LR 0.000008 Time 0.205374
-2023-04-27 02:55:20,516 - Epoch: [233][ 518/ 518] Overall Loss 2.836987 Objective Loss 2.836987 LR 0.000008 Time 0.205063
-2023-04-27 02:55:20,591 - --- validate (epoch=233)-----------
-2023-04-27 02:55:20,592 - 4952 samples (32 per mini-batch)
-2023-04-27 02:55:27,412 - Epoch: [233][ 50/ 155] Loss 3.140055 mAP 0.463631
-2023-04-27 02:55:33,887 - Epoch: [233][ 100/ 155] Loss 3.142752 mAP 0.465154
-2023-04-27 02:55:40,392 - Epoch: [233][ 150/ 155] Loss 3.140647 mAP 0.465994
-2023-04-27 02:55:40,952 - Epoch: [233][ 155/ 155] Loss 3.140474 mAP 0.462149
-2023-04-27 02:55:41,027 - ==> mAP: 0.46215 Loss: 3.140
-
-2023-04-27 02:55:41,031 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:55:41,031 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:55:41,067 - 
-
-2023-04-27 02:55:41,067 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:55:52,161 - Epoch: [234][ 50/ 518] Overall Loss 2.810145 Objective Loss 2.810145 LR 0.000008 Time 0.221818
-2023-04-27 02:56:02,328 - Epoch: [234][ 100/ 518] Overall Loss 2.827560 Objective Loss 2.827560 LR 0.000008 Time 0.212562
-2023-04-27 02:56:12,469 - Epoch: [234][ 150/ 518] Overall Loss 2.830668 Objective Loss 2.830668 LR 0.000008 Time 0.209303
-2023-04-27 02:56:22,623 - Epoch: [234][ 200/ 518] Overall Loss 2.833720 Objective Loss 2.833720 LR 0.000008 Time 0.207742
-2023-04-27 02:56:32,769 - Epoch: [234][ 250/ 518] Overall Loss 2.833551 Objective Loss 2.833551 LR 0.000008 Time 0.206769
-2023-04-27 02:56:42,855 - Epoch: [234][ 300/ 518] Overall Loss 2.840598 Objective Loss 2.840598 LR 0.000008 Time 0.205923
-2023-04-27 02:56:53,047 - Epoch: [234][ 350/ 518] Overall Loss 2.839892 Objective Loss 2.839892 LR 0.000008 Time 0.205622
-2023-04-27 02:57:03,250 - Epoch: [234][ 400/ 518] Overall Loss 2.845123 Objective Loss 2.845123 LR 0.000008 Time 0.205421
-2023-04-27 02:57:13,346 - Epoch: [234][ 450/ 518] Overall Loss 2.837178 Objective Loss 2.837178 LR 0.000008 Time 0.205028
-2023-04-27 02:57:23,557 - Epoch: [234][ 500/ 518] Overall Loss 2.837996 Objective Loss 2.837996 LR 0.000008 Time 0.204944
-2023-04-27 02:57:27,116 - Epoch: [234][ 518/ 518] Overall Loss 2.837473 Objective Loss 2.837473 LR 0.000008 Time 0.204692
-2023-04-27 02:57:27,193 - --- validate (epoch=234)-----------
-2023-04-27 02:57:27,193 - 4952 samples (32 per mini-batch)
-2023-04-27 02:57:34,010 - Epoch: [234][ 50/ 155] Loss 3.096852 mAP 0.437541
-2023-04-27 02:57:40,423 - Epoch: [234][ 100/ 155] Loss 3.127265 mAP 0.445690
-2023-04-27 02:57:46,837 - Epoch: [234][ 150/ 155] Loss 3.142490 mAP 0.448448
-2023-04-27 02:57:47,420 - Epoch: [234][ 155/ 155] Loss 3.144693 mAP 0.448564
-2023-04-27 02:57:47,494 - ==> mAP: 0.44856 Loss: 3.145
-
-2023-04-27 02:57:47,498 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:57:47,498 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:57:47,535 - 
-
-2023-04-27 02:57:47,535 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 02:57:58,436 - Epoch: [235][ 50/ 518] Overall Loss 2.903288 Objective Loss 2.903288 LR 0.000008 Time 0.217963
-2023-04-27 02:58:08,577 - Epoch: [235][ 100/ 518] Overall Loss 2.869203 Objective Loss 2.869203 LR 0.000008 Time 0.210366
-2023-04-27 02:58:18,721 - Epoch: [235][ 150/ 518] Overall Loss 2.851342 Objective Loss 2.851342 LR 0.000008 Time 0.207865
-2023-04-27 02:58:28,833 - Epoch: [235][ 200/ 518] Overall Loss 2.841876 Objective Loss 2.841876 LR 0.000008 Time 0.206451
-2023-04-27 02:58:38,963 - Epoch: [235][ 250/ 518] Overall Loss 2.841126 Objective Loss 2.841126 LR 0.000008 Time 0.205673
-2023-04-27 02:58:49,142 - Epoch: [235][ 300/ 518] Overall Loss 2.853104 Objective Loss 2.853104 LR 0.000008 Time 0.205320
-2023-04-27 02:58:59,202 - Epoch: [235][ 350/ 518] Overall Loss 2.852172 Objective Loss 2.852172 LR 0.000008 Time 0.204725
-2023-04-27 02:59:09,462 - Epoch: [235][ 400/ 518] Overall Loss 2.847677 Objective Loss 2.847677 LR 0.000008 Time 0.204782
-2023-04-27 02:59:19,687 - Epoch: [235][ 450/ 518] Overall Loss 2.843317 Objective Loss 2.843317 LR 0.000008 Time 0.204746
-2023-04-27 02:59:29,802 - Epoch: [235][ 500/ 518] Overall Loss 2.851370 Objective Loss 2.851370 LR 0.000008 Time 0.204499
-2023-04-27 02:59:33,309 - Epoch: [235][ 518/ 518] Overall Loss 2.850990 Objective Loss 2.850990 LR 0.000008 Time 0.204162
-2023-04-27 02:59:33,385 - --- validate (epoch=235)-----------
-2023-04-27 02:59:33,386 - 4952 samples (32 per mini-batch)
-2023-04-27 02:59:40,252 - Epoch: [235][ 50/ 155] Loss 3.165168 mAP 0.449129
-2023-04-27 02:59:46,700 - Epoch: [235][ 100/ 155] Loss 3.162542 mAP 0.450145
-2023-04-27 02:59:53,052 - Epoch: [235][ 150/ 155] Loss 3.148512 mAP 0.449439
-2023-04-27 02:59:53,633 - Epoch: [235][ 155/ 155] Loss 3.144292 mAP 0.450877
-2023-04-27 02:59:53,707 - ==> mAP: 0.45088 Loss: 3.144
-
-2023-04-27 02:59:53,711 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 02:59:53,711 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 02:59:53,747 - 
-
-2023-04-27 02:59:53,747 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:00:04,803 - Epoch: [236][ 50/ 518] Overall Loss 2.860995 Objective Loss 2.860995 LR 0.000008 Time 0.221067
-2023-04-27 03:00:14,911 - Epoch: [236][ 100/ 518] Overall Loss 2.849574 Objective Loss 2.849574 LR 0.000008 Time 0.211594
-2023-04-27 03:00:25,115 - Epoch: [236][ 150/ 518] Overall Loss 2.836780 Objective Loss 2.836780 LR 0.000008 Time 0.209082
-2023-04-27 03:00:35,309 - Epoch: [236][ 200/ 518] Overall Loss 2.830969 Objective Loss 2.830969 LR 0.000008 Time 0.207770
-2023-04-27 03:00:45,475 - Epoch: [236][ 250/ 518] Overall Loss 2.837388 Objective Loss 2.837388 LR 0.000008 Time 0.206873
-2023-04-27 03:00:55,616 - Epoch: [236][ 300/ 518] Overall Loss 2.839230 Objective Loss 2.839230 LR 0.000008 Time 0.206195
-2023-04-27 03:01:05,805 - Epoch: [236][ 350/ 518] Overall Loss 2.834942 Objective Loss 2.834942 LR 0.000008 Time 0.205844
-2023-04-27 03:01:15,831 - Epoch: [236][ 400/ 518] Overall Loss 2.835928 Objective Loss 2.835928 LR 0.000008 Time 0.205174
-2023-04-27 03:01:25,953 - Epoch: [236][ 450/ 518] Overall Loss 2.833137 Objective Loss 2.833137 LR 0.000008 Time 0.204866
-2023-04-27 03:01:36,037 - Epoch: [236][ 500/ 518] Overall Loss 2.833344 Objective Loss 2.833344 LR 0.000008 Time 0.204546
-2023-04-27 03:01:39,594 - Epoch: [236][ 518/ 518] Overall Loss 2.830463 Objective Loss 2.830463 LR 0.000008 Time 0.204303
-2023-04-27 03:01:39,674 - --- validate (epoch=236)-----------
-2023-04-27 03:01:39,674 - 4952 samples (32 per mini-batch)
-2023-04-27 03:01:46,404 - Epoch: [236][ 50/ 155] Loss 3.139010 mAP 0.460043
-2023-04-27 03:01:52,795 - Epoch: [236][ 100/ 155] Loss 3.151171 mAP 0.444363
-2023-04-27 03:01:59,141 - Epoch: [236][ 150/ 155] Loss 3.147443 mAP 0.443054
-2023-04-27 03:01:59,714 - Epoch: [236][ 155/ 155] Loss 3.143583 mAP 0.443783
-2023-04-27 03:01:59,780 - ==> mAP: 0.44378 Loss: 3.144
-
-2023-04-27 03:01:59,784 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:01:59,784 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:01:59,821 - 
-
-2023-04-27 03:01:59,821 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:02:10,680 - Epoch: [237][ 50/ 518] Overall Loss 2.832814 Objective Loss 2.832814 LR 0.000008 Time 0.217129
-2023-04-27 03:02:20,866 - Epoch: [237][ 100/ 518] Overall Loss 2.825315 Objective Loss 2.825315 LR 0.000008 Time 0.210403
-2023-04-27 03:02:31,031 - Epoch: [237][ 150/ 518] Overall Loss 2.833361 Objective Loss 2.833361 LR 0.000008 Time 0.208026
-2023-04-27 03:02:41,245 - Epoch: [237][ 200/ 518] Overall Loss 2.837767 Objective Loss 2.837767 LR 0.000008 Time 0.207083
-2023-04-27 03:02:51,468 - Epoch: [237][ 250/ 518] Overall Loss 2.831306 Objective Loss 2.831306 LR 0.000008 Time 0.206553
-2023-04-27 03:03:01,569 - Epoch: [237][ 300/ 518] Overall Loss 2.832573 Objective Loss 2.832573 LR 0.000008 Time 0.205792
-2023-04-27 03:03:11,753 - Epoch: [237][ 350/ 518] Overall Loss 2.837596 Objective Loss 2.837596 LR 0.000008 Time 0.205484
-2023-04-27 03:03:21,883 - Epoch: [237][ 400/ 518] Overall Loss 2.836323 Objective Loss 2.836323 LR 0.000008 Time 0.205120
-2023-04-27 03:03:32,026 - Epoch: [237][ 450/ 518] Overall Loss 2.845728 Objective Loss 2.845728 LR 0.000008 Time 0.204864
-2023-04-27 03:03:42,275 - Epoch: [237][ 500/ 518] Overall Loss 2.841143 Objective Loss 2.841143 LR 0.000008 Time 0.204875
-2023-04-27 03:03:45,808 - Epoch: [237][ 518/ 518] Overall Loss 2.838485 Objective Loss 2.838485 LR 0.000008 Time 0.204575
-2023-04-27 03:03:45,885 - --- validate (epoch=237)-----------
-2023-04-27 03:03:45,885 - 4952 samples (32 per mini-batch)
-2023-04-27 03:03:52,765 - Epoch: [237][ 50/ 155] Loss 3.119597 mAP 0.452916
-2023-04-27 03:03:59,131 - Epoch: [237][ 100/ 155] Loss 3.135302 mAP 0.448452
-2023-04-27 03:04:05,504 - Epoch: [237][ 150/ 155] Loss 3.141364 mAP 0.453901
-2023-04-27 03:04:06,067 - Epoch: [237][ 155/ 155] Loss 3.147882 mAP 0.450161
-2023-04-27 03:04:06,136 - ==> mAP: 0.45016 Loss: 3.148
-
-2023-04-27 03:04:06,139 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:04:06,139 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:04:06,176 - 
-
-2023-04-27 03:04:06,176 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:04:16,938 - Epoch: [238][ 50/ 518] Overall Loss 2.824409 Objective Loss 2.824409 LR 0.000008 Time 0.215197
-2023-04-27 03:04:27,151 - Epoch: [238][ 100/ 518] Overall Loss 2.850423 Objective Loss 2.850423 LR 0.000008 Time 0.209710
-2023-04-27 03:04:37,314 - Epoch: [238][ 150/ 518] Overall Loss 2.821306 Objective Loss 2.821306 LR 0.000008 Time 0.207549
-2023-04-27 03:04:47,435 - Epoch: [238][ 200/ 518] Overall Loss 2.834234 Objective Loss 2.834234 LR 0.000008 Time 0.206257
-2023-04-27 03:04:57,608 - Epoch: [238][ 250/ 518] Overall Loss 2.841553 Objective Loss 2.841553 LR 0.000008 Time 0.205692
-2023-04-27 03:05:07,723 - Epoch: [238][ 300/ 518] Overall Loss 2.838626 Objective Loss 2.838626 LR 0.000008 Time 0.205121
-2023-04-27 03:05:17,899 - Epoch: [238][ 350/ 518] Overall Loss 2.831570 Objective Loss 2.831570 LR 0.000008 Time 0.204888
-2023-04-27 03:05:28,056 - Epoch: [238][ 400/ 518] Overall Loss 2.835717 Objective Loss 2.835717 LR 0.000008 Time 0.204665
-2023-04-27 03:05:38,159 - Epoch: [238][ 450/ 518] Overall Loss 2.834643 Objective Loss 2.834643 LR 0.000008 Time 0.204372
-2023-04-27 03:05:48,427 - Epoch: [238][ 500/ 518] Overall Loss 2.837105 Objective Loss 2.837105 LR 0.000008 Time 0.204468
-2023-04-27 03:05:51,930 - Epoch: [238][ 518/ 518] Overall Loss 2.835297 Objective Loss 2.835297 LR 0.000008 Time 0.204125
-2023-04-27 03:05:52,007 - --- validate (epoch=238)-----------
-2023-04-27 03:05:52,007 - 4952 samples (32 per mini-batch)
-2023-04-27 03:05:58,812 - Epoch: [238][ 50/ 155] Loss 3.168899 mAP 0.430778
-2023-04-27 03:06:05,202 - Epoch: [238][ 100/ 155] Loss 3.147241 mAP 0.442345
-2023-04-27 03:06:11,546 - Epoch: [238][ 150/ 155] Loss 3.152410 mAP 0.441866
-2023-04-27 03:06:12,113 - Epoch: [238][ 155/ 155] Loss 3.150306 mAP 0.439960
-2023-04-27 03:06:12,190 - ==> mAP: 0.43996 Loss: 3.150
-
-2023-04-27 03:06:12,194 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:06:12,194 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:06:12,230 - 
-
-2023-04-27 03:06:12,230 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:06:23,215 - Epoch: [239][ 50/ 518] Overall Loss 2.819662 Objective Loss 2.819662 LR 0.000008 Time 0.219633
-2023-04-27 03:06:33,355 - Epoch: [239][ 100/ 518] Overall Loss 2.822502 Objective Loss 2.822502 LR 0.000008 Time 0.211202
-2023-04-27 03:06:43,494 - Epoch: [239][ 150/ 518] Overall Loss 2.840246 Objective Loss 2.840246 LR 0.000008 Time 0.208384
-2023-04-27 03:06:53,605 - Epoch: [239][ 200/ 518] Overall Loss 2.833193 Objective Loss 2.833193 LR 0.000008 Time 0.206835
-2023-04-27 03:07:03,759 - Epoch: [239][ 250/ 518] Overall Loss 2.827265 Objective Loss 2.827265 LR 0.000008 Time 0.206079
-2023-04-27 03:07:13,944 - Epoch: [239][ 300/ 518] Overall Loss 2.840620 Objective Loss 2.840620 LR 0.000008 Time 0.205675
-2023-04-27 03:07:24,126 - Epoch: [239][ 350/ 518] Overall Loss 2.831584 Objective Loss 2.831584 LR 0.000008 Time 0.205382
-2023-04-27 03:07:34,244 - Epoch: [239][ 400/ 518] Overall Loss 2.837621 Objective Loss 2.837621 LR 0.000008 Time 0.205000
-2023-04-27 03:07:44,415 - Epoch: [239][ 450/ 518] Overall Loss 2.833852 Objective Loss 2.833852 LR 0.000008 Time 0.204821
-2023-04-27 03:07:54,537 - Epoch: [239][ 500/ 518] Overall Loss 2.831509 Objective Loss 2.831509 LR 0.000008 Time 0.204578
-2023-04-27 03:07:58,062 - Epoch: [239][ 518/ 518] Overall Loss 2.834797 Objective Loss 2.834797 LR 0.000008 Time 0.204274
-2023-04-27 03:07:58,139 - --- validate (epoch=239)-----------
-2023-04-27 03:07:58,140 - 4952 samples (32 per mini-batch)
-2023-04-27 03:08:04,987 - Epoch: [239][ 50/ 155] Loss 3.156967 mAP 0.463699
-2023-04-27 03:08:11,360 - Epoch: [239][ 100/ 155] Loss 3.132672 mAP 0.457244
-2023-04-27 03:08:17,740 - Epoch: [239][ 150/ 155] Loss 3.145529 mAP 0.456999
-2023-04-27 03:08:18,309 - Epoch: [239][ 155/ 155] Loss 3.143130 mAP 0.457234
-2023-04-27 03:08:18,380 - ==> mAP: 0.45723 Loss: 3.143
-
-2023-04-27 03:08:18,384 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:08:18,385 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:08:18,421 - 
-
-2023-04-27 03:08:18,421 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:08:29,388 - Epoch: [240][ 50/ 518] Overall Loss 2.887000 Objective Loss 2.887000 LR 0.000008 Time 0.219280
-2023-04-27 03:08:39,642 - Epoch: [240][ 100/ 518] Overall Loss 2.856424 Objective Loss 2.856424 LR 0.000008 Time 0.212158
-2023-04-27 03:08:49,822 - Epoch: [240][ 150/ 518] Overall Loss 2.853056 Objective Loss 2.853056 LR 0.000008 Time 0.209300
-2023-04-27 03:08:59,924 - Epoch: [240][ 200/ 518] Overall Loss 2.847096 Objective Loss 2.847096 LR 0.000008 Time 0.207476
-2023-04-27 03:09:10,087 - Epoch: [240][ 250/ 518] Overall Loss 2.849038 Objective Loss 2.849038 LR 0.000008 Time 0.206626
-2023-04-27 03:09:20,218 - Epoch: [240][ 300/ 518] Overall Loss 2.837657 Objective Loss 2.837657 LR 0.000008 Time 0.205954
-2023-04-27 03:09:30,396 - Epoch: [240][ 350/ 518] Overall Loss 2.840751 Objective Loss 2.840751 LR 0.000008 Time 0.205608
-2023-04-27 03:09:40,537 - Epoch: [240][ 400/ 518] Overall Loss 2.839837 Objective Loss 2.839837 LR 0.000008 Time 0.205255
-2023-04-27 03:09:50,640 - Epoch: [240][ 450/ 518] Overall Loss 2.841903 Objective Loss 2.841903 LR 0.000008 Time 0.204896
-2023-04-27 03:10:00,808 - Epoch: [240][ 500/ 518] Overall Loss 2.839752 Objective Loss 2.839752 LR 0.000008 Time 0.204738
-2023-04-27 03:10:04,340 - Epoch: [240][ 518/ 518] Overall Loss 2.839877 Objective Loss 2.839877 LR 0.000008 Time 0.204442
-2023-04-27 03:10:04,419 - --- validate (epoch=240)-----------
-2023-04-27 03:10:04,419 - 4952 samples (32 per mini-batch)
-2023-04-27 03:10:11,220 - Epoch: [240][ 50/ 155] Loss 3.168273 mAP 0.456738
-2023-04-27 03:10:17,644 - Epoch: [240][ 100/ 155] Loss 3.158077 mAP 0.458359
-2023-04-27 03:10:24,064 - Epoch: [240][ 150/ 155] Loss 3.146985 mAP 0.458955
-2023-04-27 03:10:24,644 - Epoch: [240][ 155/ 155] Loss 3.150241 mAP 0.458012
-2023-04-27 03:10:24,708 - ==> mAP: 0.45801 Loss: 3.150
-
-2023-04-27 03:10:24,711 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:10:24,711 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:10:24,748 - 
-
-2023-04-27 03:10:24,748 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:10:35,629 - Epoch: [241][ 50/ 518] Overall Loss 2.800355 Objective Loss 2.800355 LR 0.000008 Time 0.217555
-2023-04-27 03:10:45,745 - Epoch: [241][ 100/ 518] Overall Loss 2.851935 Objective Loss 2.851935 LR 0.000008 Time 0.209924
-2023-04-27 03:10:55,850 - Epoch: [241][ 150/ 518] Overall Loss 2.861797 Objective Loss 2.861797 LR 0.000008 Time 0.207307
-2023-04-27 03:11:05,974 - Epoch: [241][ 200/ 518] Overall Loss 2.850461 Objective Loss 2.850461 LR 0.000008 Time 0.206091
-2023-04-27 03:11:16,157 - Epoch: [241][ 250/ 518] Overall Loss 2.846491 Objective Loss 2.846491 LR 0.000008 Time 0.205597
-2023-04-27 03:11:26,312 - Epoch: [241][ 300/ 518] Overall Loss 2.847881 Objective Loss 2.847881 LR 0.000008 Time 0.205174
-2023-04-27 03:11:36,476 - Epoch: [241][ 350/ 518] Overall Loss 2.842124 Objective Loss 2.842124 LR 0.000008 Time 0.204901
-2023-04-27 03:11:46,608 - Epoch: [241][ 400/ 518] Overall Loss 2.842397 Objective Loss 2.842397 LR 0.000008 Time 0.204614
-2023-04-27 03:11:56,715 - Epoch: [241][ 450/ 518] Overall Loss 2.841824 Objective Loss 2.841824 LR 0.000008 Time 0.204335
-2023-04-27 03:12:06,883 - Epoch: [241][ 500/ 518] Overall Loss 2.841957 Objective Loss 2.841957 LR 0.000008 Time 0.204235
-2023-04-27 03:12:10,394 - Epoch: [241][ 518/ 518] Overall Loss 2.844105 Objective Loss 2.844105 LR 0.000008 Time 0.203914
-2023-04-27 03:12:10,471 - --- validate (epoch=241)-----------
-2023-04-27 03:12:10,471 - 4952 samples (32 per mini-batch)
-2023-04-27 03:12:17,347 - Epoch: [241][ 50/ 155] Loss 3.117961 mAP 0.458574
-2023-04-27 03:12:23,778 - Epoch: [241][ 100/ 155] Loss 3.133963 mAP 0.451674
-2023-04-27 03:12:30,179 - Epoch: [241][ 150/ 155] Loss 3.148740 mAP 0.451599
-2023-04-27 03:12:30,739 - Epoch: [241][ 155/ 155] Loss 3.141148 mAP 0.452220
-2023-04-27 03:12:30,811 - ==> mAP: 0.45222 Loss: 3.141
-
-2023-04-27 03:12:30,815 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:12:30,815 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:12:30,851 - 
-
-2023-04-27 03:12:30,851 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:12:41,727 - Epoch: [242][ 50/ 518] Overall Loss 2.849431 Objective Loss 2.849431 LR 0.000008 Time 0.217448
-2023-04-27 03:12:51,892 - Epoch: [242][ 100/ 518] Overall Loss 2.853696 Objective Loss 2.853696 LR 0.000008 Time 0.210362
-2023-04-27 03:13:01,999 - Epoch: [242][ 150/ 518] Overall Loss 2.852114 Objective Loss 2.852114 LR 0.000008 Time 0.207610
-2023-04-27 03:13:12,117 - Epoch: [242][ 200/ 518] Overall Loss 2.845313 Objective Loss 2.845313 LR 0.000008 Time 0.206291
-2023-04-27 03:13:22,257 - Epoch: [242][ 250/ 518] Overall Loss 2.848270 Objective Loss 2.848270 LR 0.000008 Time 0.205586
-2023-04-27 03:13:32,461 - Epoch: [242][ 300/ 518] Overall Loss 2.854972 Objective Loss 2.854972 LR 0.000008 Time 0.205329
-2023-04-27 03:13:42,651 - Epoch: [242][ 350/ 518] Overall Loss 2.847225 Objective Loss 2.847225 LR 0.000008 Time 0.205107
-2023-04-27 03:13:52,877 - Epoch: [242][ 400/ 518] Overall Loss 2.841083 Objective Loss 2.841083 LR 0.000008 Time 0.205030
-2023-04-27 03:14:03,027 - Epoch: [242][ 450/ 518] Overall Loss 2.832297 Objective Loss 2.832297 LR 0.000008 Time 0.204800
-2023-04-27 03:14:13,247 - Epoch: [242][ 500/ 518] Overall Loss 2.834879 Objective Loss 2.834879 LR 0.000008 Time 0.204758
-2023-04-27 03:14:16,784 - Epoch: [242][ 518/ 518] Overall Loss 2.836618 Objective Loss 2.836618 LR 0.000008 Time 0.204468
-2023-04-27 03:14:16,858 - --- validate (epoch=242)-----------
-2023-04-27 03:14:16,858 - 4952 samples (32 per mini-batch)
-2023-04-27 03:14:23,639 - Epoch: [242][ 50/ 155] Loss 3.157299 mAP 0.443588
-2023-04-27 03:14:30,021 - Epoch: [242][ 100/ 155] Loss 3.153577 mAP 0.447783
-2023-04-27 03:14:36,343 - Epoch: [242][ 150/ 155] Loss 3.141548 mAP 0.450491
-2023-04-27 03:14:36,935 - Epoch: [242][ 155/ 155] Loss 3.141176 mAP 0.449897
-2023-04-27 03:14:37,023 - ==> mAP: 0.44990 Loss: 3.141
-
-2023-04-27 03:14:37,027 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:14:37,027 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:14:37,063 - 
-
-2023-04-27 03:14:37,063 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:14:47,967 - Epoch: [243][ 50/ 518] Overall Loss 2.870643 Objective Loss 2.870643 LR 0.000008 Time 0.218011
-2023-04-27 03:14:58,084 - Epoch: [243][ 100/ 518] Overall Loss 2.865286 Objective Loss 2.865286 LR 0.000008 Time 0.210157
-2023-04-27 03:15:08,209 - Epoch: [243][ 150/ 518] Overall Loss 2.878898 Objective Loss 2.878898 LR 0.000008 Time 0.207599
-2023-04-27 03:15:18,337 - Epoch: [243][ 200/ 518] Overall Loss 2.871688 Objective Loss 2.871688 LR 0.000008 Time 0.206327
-2023-04-27 03:15:28,517 - Epoch: [243][ 250/ 518] Overall Loss 2.864661 Objective Loss 2.864661 LR 0.000008 Time 0.205779
-2023-04-27 03:15:38,698 - Epoch: [243][ 300/ 518] Overall Loss 2.856239 Objective Loss 2.856239 LR 0.000008 Time 0.205412
-2023-04-27 03:15:48,887 - Epoch: [243][ 350/ 518] Overall Loss 2.849897 Objective Loss 2.849897 LR 0.000008 Time 0.205174
-2023-04-27 03:15:59,016 - Epoch: [243][ 400/ 518] Overall Loss 2.853410 Objective Loss 2.853410 LR 0.000008 Time 0.204847
-2023-04-27 03:16:09,115 - Epoch: [243][ 450/ 518] Overall Loss 2.845115 Objective Loss 2.845115 LR 0.000008 Time 0.204524
-2023-04-27 03:16:19,301 - Epoch: [243][ 500/ 518] Overall Loss 2.850060 Objective Loss 2.850060 LR 0.000008 Time 0.204440
-2023-04-27 03:16:22,824 - Epoch: [243][ 518/ 518] Overall Loss 2.850139 Objective Loss 2.850139 LR 0.000008 Time 0.204136
-2023-04-27 03:16:22,900 - --- validate (epoch=243)-----------
-2023-04-27 03:16:22,901 - 4952 samples (32 per mini-batch)
-2023-04-27 03:16:29,715 - Epoch: [243][ 50/ 155] Loss 3.140612 mAP 0.443695
-2023-04-27 03:16:36,122 - Epoch: [243][ 100/ 155] Loss 3.136742 mAP 0.454521
-2023-04-27 03:16:42,596 - Epoch: [243][ 150/ 155] Loss 3.148346 mAP 0.456518
-2023-04-27 03:16:43,150 - Epoch: [243][ 155/ 155] Loss 3.153287 mAP 0.454238
-2023-04-27 03:16:43,217 - ==> mAP: 0.45424 Loss: 3.153
-
-2023-04-27 03:16:43,221 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:16:43,221 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:16:43,258 - 
-
-2023-04-27 03:16:43,258 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:16:54,221 - Epoch: [244][ 50/ 518] Overall Loss 2.761588 Objective Loss 2.761588 LR 0.000008 Time 0.219211
-2023-04-27 03:17:04,381 - Epoch: [244][ 100/ 518] Overall Loss 2.795673 Objective Loss 2.795673 LR 0.000008 Time 0.211185
-2023-04-27 03:17:14,520 - Epoch: [244][ 150/ 518] Overall Loss 2.804243 Objective Loss 2.804243 LR 0.000008 Time 0.208375
-2023-04-27 03:17:24,755 - Epoch: [244][ 200/ 518] Overall Loss 2.816166 Objective Loss 2.816166 LR 0.000008 Time 0.207446
-2023-04-27 03:17:34,980 - Epoch: [244][ 250/ 518] Overall Loss 2.823339 Objective Loss 2.823339 LR 0.000008 Time 0.206851
-2023-04-27 03:17:45,131 - Epoch: [244][ 300/ 518] Overall Loss 2.831828 Objective Loss 2.831828 LR 0.000008 Time 0.206208
-2023-04-27 03:17:55,259 - Epoch: [244][ 350/ 518] Overall Loss 2.838586 Objective Loss 2.838586 LR 0.000008 Time 0.205682
-2023-04-27 03:18:05,384 - Epoch: [244][ 400/ 518] Overall Loss 2.838281 Objective Loss 2.838281 LR 0.000008 Time 0.205280
-2023-04-27 03:18:15,587 - Epoch: [244][ 450/ 518] Overall Loss 2.838302 Objective Loss 2.838302 LR 0.000008 Time 0.205140
-2023-04-27 03:18:25,814 - Epoch: [244][ 500/ 518] Overall Loss 2.835812 Objective Loss 2.835812 LR 0.000008 Time 0.205078
-2023-04-27 03:18:29,392 - Epoch: [244][ 518/ 518] Overall Loss 2.835433 Objective Loss 2.835433 LR 0.000008 Time 0.204857
-2023-04-27 03:18:29,469 - --- validate (epoch=244)-----------
-2023-04-27 03:18:29,469 - 4952 samples (32 per mini-batch)
-2023-04-27 03:18:36,252 - Epoch: [244][ 50/ 155] Loss 3.169090 mAP 0.459490
-2023-04-27 03:18:42,663 - Epoch: [244][ 100/ 155] Loss 3.148612 mAP 0.462304
-2023-04-27 03:18:49,025 - Epoch: [244][ 150/ 155] Loss 3.143157 mAP 0.457734
-2023-04-27 03:18:49,590 - Epoch: [244][ 155/ 155] Loss 3.143421 mAP 0.457843
-2023-04-27 03:18:49,671 - ==> mAP: 0.45784 Loss: 3.143
-
-2023-04-27 03:18:49,675 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:18:49,675 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:18:49,712 - 
-
-2023-04-27 03:18:49,712 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:19:00,669 - Epoch: [245][ 50/ 518] Overall Loss 2.872949 Objective Loss 2.872949 LR 0.000008 Time 0.219091
-2023-04-27 03:19:10,922 - Epoch: [245][ 100/ 518] Overall Loss 2.870876 Objective Loss 2.870876 LR 0.000008 Time 0.212061
-2023-04-27 03:19:21,034 - Epoch: [245][ 150/ 518] Overall Loss 2.869929 Objective Loss 2.869929 LR 0.000008 Time 0.208775
-2023-04-27 03:19:31,248 - Epoch: [245][ 200/ 518] Overall Loss 2.850621 Objective Loss 2.850621 LR 0.000008 Time 0.207642
-2023-04-27 03:19:41,444 - Epoch: [245][ 250/ 518] Overall Loss 2.841840 Objective Loss 2.841840 LR 0.000008 Time 0.206890
-2023-04-27 03:19:51,637 - Epoch: [245][ 300/ 518] Overall Loss 2.828718 Objective Loss 2.828718 LR 0.000008 Time 0.206379
-2023-04-27 03:20:01,784 - Epoch: [245][ 350/ 518] Overall Loss 2.826111 Objective Loss 2.826111 LR 0.000008 Time 0.205883
-2023-04-27 03:20:11,853 - Epoch: [245][ 400/ 518] Overall Loss 2.825982 Objective Loss 2.825982 LR 0.000008 Time 0.205316
-2023-04-27 03:20:22,049 - Epoch: [245][ 450/ 518] Overall Loss 2.826327 Objective Loss 2.826327 LR 0.000008 Time 0.205158
-2023-04-27 03:20:32,195 - Epoch: [245][ 500/ 518] Overall Loss 2.830322 Objective Loss 2.830322 LR 0.000008 Time 0.204932
-2023-04-27 03:20:35,707 - Epoch: [245][ 518/ 518] Overall Loss 2.832335 Objective Loss 2.832335 LR 0.000008 Time 0.204588
-2023-04-27 03:20:35,784 - --- validate (epoch=245)-----------
-2023-04-27 03:20:35,784 - 4952 samples (32 per mini-batch)
-2023-04-27 03:20:42,522 - Epoch: [245][ 50/ 155] Loss 3.124952 mAP 0.437278
-2023-04-27 03:20:48,914 - Epoch: [245][ 100/ 155] Loss 3.127481 mAP 0.445795
-2023-04-27 03:20:55,342 - Epoch: [245][ 150/ 155] Loss 3.139831 mAP 0.446873
-2023-04-27 03:20:55,910 - Epoch: [245][ 155/ 155] Loss 3.143261 mAP 0.445066
-2023-04-27 03:20:55,994 - ==> mAP: 0.44507 Loss: 3.143
-
-2023-04-27 03:20:55,998 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:20:55,998 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:20:56,034 - 
-
-2023-04-27 03:20:56,035 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:21:07,032 - Epoch: [246][ 50/ 518] Overall Loss 2.860270 Objective Loss 2.860270 LR 0.000008 Time 0.219885
-2023-04-27 03:21:17,216 - Epoch: [246][ 100/ 518] Overall Loss 2.880617 Objective Loss 2.880617 LR 0.000008 Time 0.211775
-2023-04-27 03:21:27,348 - Epoch: [246][ 150/ 518] Overall Loss 2.855224 Objective Loss 2.855224 LR 0.000008 Time 0.208714
-2023-04-27 03:21:37,461 - Epoch: [246][ 200/ 518] Overall Loss 2.851640 Objective Loss 2.851640 LR 0.000008 Time 0.207094
-2023-04-27 03:21:47,578 - Epoch: [246][ 250/ 518] Overall Loss 2.850439 Objective Loss 2.850439 LR 0.000008 Time 0.206137
-2023-04-27 03:21:57,735 - Epoch: [246][ 300/ 518] Overall Loss 2.849799 Objective Loss 2.849799 LR 0.000008 Time 0.205634
-2023-04-27 03:22:07,940 - Epoch: [246][ 350/ 518] Overall Loss 2.850857 Objective Loss 2.850857 LR 0.000008 Time 0.205407
-2023-04-27 03:22:18,101 - Epoch: [246][ 400/ 518] Overall Loss 2.850094 Objective Loss 2.850094 LR 0.000008 Time 0.205132
-2023-04-27 03:22:28,226 - Epoch: [246][ 450/ 518] Overall Loss 2.841234 Objective Loss 2.841234 LR 0.000008 Time 0.204835
-2023-04-27 03:22:38,405 - Epoch: [246][ 500/ 518] Overall Loss 2.842936 Objective Loss 2.842936 LR 0.000008 Time 0.204707
-2023-04-27 03:22:42,003 - Epoch: [246][ 518/ 518] Overall Loss 2.842401 Objective Loss 2.842401 LR 0.000008 Time 0.204538
-2023-04-27 03:22:42,082 - --- validate (epoch=246)-----------
-2023-04-27 03:22:42,082 - 4952 samples (32 per mini-batch)
-2023-04-27 03:22:48,848 - Epoch: [246][ 50/ 155] Loss 3.146606 mAP 0.429978
-2023-04-27 03:22:55,253 - Epoch: [246][ 100/ 155] Loss 3.145628 mAP 0.444043
-2023-04-27 03:23:01,658 - Epoch: [246][ 150/ 155] Loss 3.144306 mAP 0.441878
-2023-04-27 03:23:02,234 - Epoch: [246][ 155/ 155] Loss 3.144701 mAP 0.443237
-2023-04-27 03:23:02,307 - ==> mAP: 0.44324 Loss: 3.145
-
-2023-04-27 03:23:02,311 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:23:02,311 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:23:02,347 - 
-
-2023-04-27 03:23:02,347 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:23:13,239 - Epoch: [247][ 50/ 518] Overall Loss 2.860594 Objective Loss 2.860594 LR 0.000008 Time 0.217794
-2023-04-27 03:23:23,410 - Epoch: [247][ 100/ 518] Overall Loss 2.863541 Objective Loss 2.863541 LR 0.000008 Time 0.210590
-2023-04-27 03:23:33,528 - Epoch: [247][ 150/ 518] Overall Loss 2.871121 Objective Loss 2.871121 LR 0.000008 Time 0.207831
-2023-04-27 03:23:43,670 - Epoch: [247][ 200/ 518] Overall Loss 2.849465 Objective Loss 2.849465 LR 0.000008 Time 0.206576
-2023-04-27 03:23:53,848 - Epoch: [247][ 250/ 518] Overall Loss 2.843417 Objective Loss 2.843417 LR 0.000008 Time 0.205968
-2023-04-27 03:24:03,926 - Epoch: [247][ 300/ 518] Overall Loss 2.846619 Objective Loss 2.846619 LR 0.000008 Time 0.205228
-2023-04-27 03:24:14,069 - Epoch: [247][ 350/ 518] Overall Loss 2.848366 Objective Loss 2.848366 LR 0.000008 Time 0.204885
-2023-04-27 03:24:24,226 - Epoch: [247][ 400/ 518] Overall Loss 2.845663 Objective Loss 2.845663 LR 0.000008 Time 0.204663
-2023-04-27 03:24:34,396 - Epoch: [247][ 450/ 518] Overall Loss 2.841443 Objective Loss 2.841443 LR 0.000008 Time 0.204519
-2023-04-27 03:24:44,603 - Epoch: [247][ 500/ 518] Overall Loss 2.841172 Objective Loss 2.841172 LR 0.000008 Time 0.204477
-2023-04-27 03:24:48,151 - Epoch: [247][ 518/ 518] Overall Loss 2.845963 Objective Loss 2.845963 LR 0.000008 Time 0.204221
-2023-04-27 03:24:48,227 - --- validate (epoch=247)-----------
-2023-04-27 03:24:48,227 - 4952 samples (32 per mini-batch)
-2023-04-27 03:24:55,046 - Epoch: [247][ 50/ 155] Loss 3.181366 mAP 0.456877
-2023-04-27 03:25:01,455 - Epoch: [247][ 100/ 155] Loss 3.152826 mAP 0.455065
-2023-04-27 03:25:07,825 - Epoch: [247][ 150/ 155] Loss 3.136389 mAP 0.455786
-2023-04-27 03:25:08,392 - Epoch: [247][ 155/ 155] Loss 3.143326 mAP 0.454374
-2023-04-27 03:25:08,471 - ==> mAP: 0.45437 Loss: 3.143
-
-2023-04-27 03:25:08,475 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201]
-2023-04-27 03:25:08,475 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar
-2023-04-27 03:25:08,511 - 
-
-2023-04-27 03:25:08,512 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:25:19,361 - Epoch: [248][ 50/ 518] Overall Loss 2.823946 Objective Loss 2.823946 LR 0.000008 Time 0.216933
-2023-04-27 03:25:29,516 - Epoch: [248][ 100/ 518] Overall Loss 2.837164 Objective Loss 2.837164 LR 0.000008 Time 0.209998
-2023-04-27 03:25:39,641 - Epoch: [248][ 150/ 518] Overall Loss 2.829524 Objective Loss 2.829524 LR 0.000008 Time 0.207489
-2023-04-27 03:25:49,859 - Epoch: [248][ 200/ 518] Overall Loss 2.828369 Objective Loss 2.828369 LR 0.000008 Time 0.206698
-2023-04-27 03:26:00,057 - Epoch: [248][ 250/ 518] Overall Loss 2.829400 Objective Loss 2.829400 LR 0.000008 Time 0.206143
-2023-04-27 03:26:10,196 - Epoch: [248][ 300/ 518] Overall Loss 2.837136 Objective Loss 2.837136 LR 0.000008 Time 0.205578
-2023-04-27 03:26:20,351 - Epoch: [248][ 350/ 518] Overall Loss 2.837948 Objective Loss 2.837948 LR 0.000008 Time 0.205220
-2023-04-27 03:26:30,492 - Epoch: [248][ 400/ 518] Overall Loss 2.841136 Objective Loss 2.841136 LR 0.000008 Time 0.204917
-2023-04-27 03:26:40,661 - Epoch: [248][ 450/ 518] Overall Loss 2.843777 Objective Loss 2.843777 LR 0.000008 Time 0.204741
-2023-04-27 03:26:50,780 - Epoch: [248][ 500/ 518] Overall Loss 2.839210 Objective Loss 2.839210 LR 0.000008 Time 0.204503
-2023-04-27 03:26:54,301 - Epoch: [248][ 518/ 518] Overall Loss 2.840259 Objective Loss 2.840259 LR 0.000008 Time 0.204193
-2023-04-27 03:26:54,375 - --- validate (epoch=248)-----------
-2023-04-27 03:26:54,376 - 4952 samples (32 per mini-batch)
-2023-04-27 03:27:01,119 - Epoch: [248][ 50/ 155] Loss 3.120208 mAP 0.466587
-2023-04-27 03:27:07,502 - Epoch: [248][ 100/ 155] Loss 3.121465 mAP 0.457607
-2023-04-27 03:27:13,867 - Epoch: [248][ 150/ 155] Loss 3.137175 mAP 0.446078
-2023-04-27 03:27:14,432 - Epoch: [248][ 
155/ 155] Loss 3.141208 mAP 0.445244 -2023-04-27 03:27:14,503 - ==> mAP: 0.44524 Loss: 3.141 - -2023-04-27 03:27:14,507 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 03:27:14,507 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 03:27:14,545 - - -2023-04-27 03:27:14,545 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:27:25,602 - Epoch: [249][ 50/ 518] Overall Loss 2.819794 Objective Loss 2.819794 LR 0.000008 Time 0.221070 -2023-04-27 03:27:35,749 - Epoch: [249][ 100/ 518] Overall Loss 2.863397 Objective Loss 2.863397 LR 0.000008 Time 0.211990 -2023-04-27 03:27:45,922 - Epoch: [249][ 150/ 518] Overall Loss 2.858910 Objective Loss 2.858910 LR 0.000008 Time 0.209135 -2023-04-27 03:27:56,067 - Epoch: [249][ 200/ 518] Overall Loss 2.861012 Objective Loss 2.861012 LR 0.000008 Time 0.207570 -2023-04-27 03:28:06,198 - Epoch: [249][ 250/ 518] Overall Loss 2.862865 Objective Loss 2.862865 LR 0.000008 Time 0.206575 -2023-04-27 03:28:16,412 - Epoch: [249][ 300/ 518] Overall Loss 2.852235 Objective Loss 2.852235 LR 0.000008 Time 0.206184 -2023-04-27 03:28:26,540 - Epoch: [249][ 350/ 518] Overall Loss 2.853227 Objective Loss 2.853227 LR 0.000008 Time 0.205664 -2023-04-27 03:28:36,780 - Epoch: [249][ 400/ 518] Overall Loss 2.844112 Objective Loss 2.844112 LR 0.000008 Time 0.205551 -2023-04-27 03:28:46,927 - Epoch: [249][ 450/ 518] Overall Loss 2.844496 Objective Loss 2.844496 LR 0.000008 Time 0.205258 -2023-04-27 03:28:57,065 - Epoch: [249][ 500/ 518] Overall Loss 2.846503 Objective Loss 2.846503 LR 0.000008 Time 0.205004 -2023-04-27 03:29:00,606 - Epoch: [249][ 518/ 518] Overall Loss 2.848741 Objective Loss 2.848741 LR 0.000008 Time 0.204716 -2023-04-27 03:29:00,682 - --- validate (epoch=249)----------- -2023-04-27 03:29:00,682 - 4952 samples (32 per mini-batch) -2023-04-27 03:29:07,521 - Epoch: [249][ 50/ 155] Loss 3.127368 mAP 0.466355 -2023-04-27 03:29:13,927 - Epoch: [249][ 
100/ 155] Loss 3.144789 mAP 0.460856 -2023-04-27 03:29:20,306 - Epoch: [249][ 150/ 155] Loss 3.147786 mAP 0.457912 -2023-04-27 03:29:20,876 - Epoch: [249][ 155/ 155] Loss 3.144298 mAP 0.457769 -2023-04-27 03:29:20,950 - ==> mAP: 0.45777 Loss: 3.144 - -2023-04-27 03:29:20,954 - ==> Best [mAP: 0.462889 vloss: 3.144180 Sparsity:0.00 Params: 2177087 on epoch: 201] -2023-04-27 03:29:20,954 - Saving checkpoint to: logs/2023.04.26-184348/checkpoint.pth.tar -2023-04-27 03:29:21,036 - - -2023-04-27 03:29:21,036 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:29:32,619 - Epoch: [250][ 50/ 518] Overall Loss 4.548745 Objective Loss 4.548745 LR 0.000008 Time 0.231603 -2023-04-27 03:29:43,410 - Epoch: [250][ 100/ 518] Overall Loss 4.060120 Objective Loss 4.060120 LR 0.000008 Time 0.223702 -2023-04-27 03:29:54,271 - Epoch: [250][ 150/ 518] Overall Loss 3.837740 Objective Loss 3.837740 LR 0.000008 Time 0.221526 -2023-04-27 03:30:05,026 - Epoch: [250][ 200/ 518] Overall Loss 3.698043 Objective Loss 3.698043 LR 0.000008 Time 0.219912 -2023-04-27 03:30:15,965 - Epoch: [250][ 250/ 518] Overall Loss 3.618128 Objective Loss 3.618128 LR 0.000008 Time 0.219682 -2023-04-27 03:30:26,808 - Epoch: [250][ 300/ 518] Overall Loss 3.558953 Objective Loss 3.558953 LR 0.000008 Time 0.219206 -2023-04-27 03:30:37,615 - Epoch: [250][ 350/ 518] Overall Loss 3.511840 Objective Loss 3.511840 LR 0.000008 Time 0.218763 -2023-04-27 03:30:48,440 - Epoch: [250][ 400/ 518] Overall Loss 3.477347 Objective Loss 3.477347 LR 0.000008 Time 0.218476 -2023-04-27 03:30:59,357 - Epoch: [250][ 450/ 518] Overall Loss 3.442102 Objective Loss 3.442102 LR 0.000008 Time 0.218458 -2023-04-27 03:31:10,175 - Epoch: [250][ 500/ 518] Overall Loss 3.420557 Objective Loss 3.420557 LR 0.000008 Time 0.218244 -2023-04-27 03:31:13,917 - Epoch: [250][ 518/ 518] Overall Loss 3.412381 Objective Loss 3.412381 LR 0.000008 Time 0.217884 -2023-04-27 03:31:13,994 - --- validate (epoch=250)----------- -2023-04-27 03:31:13,994 
- 4952 samples (32 per mini-batch) -2023-04-27 03:31:22,173 - Epoch: [250][ 50/ 155] Loss 3.287830 mAP 0.406113 -2023-04-27 03:31:29,971 - Epoch: [250][ 100/ 155] Loss 3.285118 mAP 0.400154 -2023-04-27 03:31:37,770 - Epoch: [250][ 150/ 155] Loss 3.276130 mAP 0.401186 -2023-04-27 03:31:38,478 - Epoch: [250][ 155/ 155] Loss 3.276710 mAP 0.401332 -2023-04-27 03:31:38,553 - ==> mAP: 0.40133 Loss: 3.277 - -2023-04-27 03:31:38,557 - ==> Best [mAP: 0.401332 vloss: 3.276710 Sparsity:0.00 Params: 2177087 on epoch: 250] -2023-04-27 03:31:38,557 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:31:38,594 - - -2023-04-27 03:31:38,594 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:31:50,254 - Epoch: [251][ 50/ 518] Overall Loss 3.195521 Objective Loss 3.195521 LR 0.000008 Time 0.233147 -2023-04-27 03:32:01,091 - Epoch: [251][ 100/ 518] Overall Loss 3.191063 Objective Loss 3.191063 LR 0.000008 Time 0.224925 -2023-04-27 03:32:11,873 - Epoch: [251][ 150/ 518] Overall Loss 3.175939 Objective Loss 3.175939 LR 0.000008 Time 0.221820 -2023-04-27 03:32:22,608 - Epoch: [251][ 200/ 518] Overall Loss 3.164506 Objective Loss 3.164506 LR 0.000008 Time 0.220033 -2023-04-27 03:32:33,446 - Epoch: [251][ 250/ 518] Overall Loss 3.167774 Objective Loss 3.167774 LR 0.000008 Time 0.219374 -2023-04-27 03:32:44,307 - Epoch: [251][ 300/ 518] Overall Loss 3.154498 Objective Loss 3.154498 LR 0.000008 Time 0.219008 -2023-04-27 03:32:55,132 - Epoch: [251][ 350/ 518] Overall Loss 3.142627 Objective Loss 3.142627 LR 0.000008 Time 0.218644 -2023-04-27 03:33:05,986 - Epoch: [251][ 400/ 518] Overall Loss 3.138122 Objective Loss 3.138122 LR 0.000008 Time 0.218445 -2023-04-27 03:33:16,765 - Epoch: [251][ 450/ 518] Overall Loss 3.137090 Objective Loss 3.137090 LR 0.000008 Time 0.218125 -2023-04-27 03:33:27,578 - Epoch: [251][ 500/ 518] Overall Loss 3.130152 Objective Loss 3.130152 LR 0.000008 Time 0.217936 -2023-04-27 03:33:31,344 - Epoch: [251][ 518/ 518] 
Overall Loss 3.130237 Objective Loss 3.130237 LR 0.000008 Time 0.217631 -2023-04-27 03:33:31,419 - --- validate (epoch=251)----------- -2023-04-27 03:33:31,420 - 4952 samples (32 per mini-batch) -2023-04-27 03:33:39,637 - Epoch: [251][ 50/ 155] Loss 3.194700 mAP 0.423771 -2023-04-27 03:33:47,437 - Epoch: [251][ 100/ 155] Loss 3.205864 mAP 0.417054 -2023-04-27 03:33:55,265 - Epoch: [251][ 150/ 155] Loss 3.212300 mAP 0.411294 -2023-04-27 03:33:55,969 - Epoch: [251][ 155/ 155] Loss 3.205666 mAP 0.410577 -2023-04-27 03:33:56,038 - ==> mAP: 0.41058 Loss: 3.206 - -2023-04-27 03:33:56,042 - ==> Best [mAP: 0.410577 vloss: 3.205666 Sparsity:0.00 Params: 2177087 on epoch: 251] -2023-04-27 03:33:56,042 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:33:56,092 - - -2023-04-27 03:33:56,092 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:34:07,710 - Epoch: [252][ 50/ 518] Overall Loss 3.089833 Objective Loss 3.089833 LR 0.000008 Time 0.232303 -2023-04-27 03:34:18,580 - Epoch: [252][ 100/ 518] Overall Loss 3.113138 Objective Loss 3.113138 LR 0.000008 Time 0.224837 -2023-04-27 03:34:29,331 - Epoch: [252][ 150/ 518] Overall Loss 3.099819 Objective Loss 3.099819 LR 0.000008 Time 0.221553 -2023-04-27 03:34:40,112 - Epoch: [252][ 200/ 518] Overall Loss 3.084473 Objective Loss 3.084473 LR 0.000008 Time 0.220061 -2023-04-27 03:34:50,989 - Epoch: [252][ 250/ 518] Overall Loss 3.086754 Objective Loss 3.086754 LR 0.000008 Time 0.219550 -2023-04-27 03:35:01,827 - Epoch: [252][ 300/ 518] Overall Loss 3.083670 Objective Loss 3.083670 LR 0.000008 Time 0.219081 -2023-04-27 03:35:12,593 - Epoch: [252][ 350/ 518] Overall Loss 3.092436 Objective Loss 3.092436 LR 0.000008 Time 0.218540 -2023-04-27 03:35:23,399 - Epoch: [252][ 400/ 518] Overall Loss 3.088105 Objective Loss 3.088105 LR 0.000008 Time 0.218234 -2023-04-27 03:35:34,187 - Epoch: [252][ 450/ 518] Overall Loss 3.087804 Objective Loss 3.087804 LR 0.000008 Time 0.217955 -2023-04-27 
03:35:45,018 - Epoch: [252][ 500/ 518] Overall Loss 3.086814 Objective Loss 3.086814 LR 0.000008 Time 0.217817 -2023-04-27 03:35:48,761 - Epoch: [252][ 518/ 518] Overall Loss 3.084454 Objective Loss 3.084454 LR 0.000008 Time 0.217474 -2023-04-27 03:35:48,838 - --- validate (epoch=252)----------- -2023-04-27 03:35:48,838 - 4952 samples (32 per mini-batch) -2023-04-27 03:35:57,009 - Epoch: [252][ 50/ 155] Loss 3.187899 mAP 0.417022 -2023-04-27 03:36:04,824 - Epoch: [252][ 100/ 155] Loss 3.193881 mAP 0.408778 -2023-04-27 03:36:12,649 - Epoch: [252][ 150/ 155] Loss 3.186190 mAP 0.417699 -2023-04-27 03:36:13,367 - Epoch: [252][ 155/ 155] Loss 3.182253 mAP 0.419383 -2023-04-27 03:36:13,442 - ==> mAP: 0.41938 Loss: 3.182 - -2023-04-27 03:36:13,445 - ==> Best [mAP: 0.419383 vloss: 3.182253 Sparsity:0.00 Params: 2177087 on epoch: 252] -2023-04-27 03:36:13,445 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:36:13,495 - - -2023-04-27 03:36:13,495 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:36:25,223 - Epoch: [253][ 50/ 518] Overall Loss 3.079634 Objective Loss 3.079634 LR 0.000008 Time 0.234519 -2023-04-27 03:36:36,099 - Epoch: [253][ 100/ 518] Overall Loss 3.077322 Objective Loss 3.077322 LR 0.000008 Time 0.226003 -2023-04-27 03:36:46,908 - Epoch: [253][ 150/ 518] Overall Loss 3.086872 Objective Loss 3.086872 LR 0.000008 Time 0.222714 -2023-04-27 03:36:57,736 - Epoch: [253][ 200/ 518] Overall Loss 3.065894 Objective Loss 3.065894 LR 0.000008 Time 0.221168 -2023-04-27 03:37:08,517 - Epoch: [253][ 250/ 518] Overall Loss 3.065806 Objective Loss 3.065806 LR 0.000008 Time 0.220053 -2023-04-27 03:37:19,295 - Epoch: [253][ 300/ 518] Overall Loss 3.066572 Objective Loss 3.066572 LR 0.000008 Time 0.219298 -2023-04-27 03:37:30,092 - Epoch: [253][ 350/ 518] Overall Loss 3.063381 Objective Loss 3.063381 LR 0.000008 Time 0.218814 -2023-04-27 03:37:40,898 - Epoch: [253][ 400/ 518] Overall Loss 3.066760 Objective Loss 3.066760 LR 
0.000008 Time 0.218473 -2023-04-27 03:37:51,711 - Epoch: [253][ 450/ 518] Overall Loss 3.068889 Objective Loss 3.068889 LR 0.000008 Time 0.218224 -2023-04-27 03:38:02,551 - Epoch: [253][ 500/ 518] Overall Loss 3.064633 Objective Loss 3.064633 LR 0.000008 Time 0.218078 -2023-04-27 03:38:06,336 - Epoch: [253][ 518/ 518] Overall Loss 3.065516 Objective Loss 3.065516 LR 0.000008 Time 0.217807 -2023-04-27 03:38:06,414 - --- validate (epoch=253)----------- -2023-04-27 03:38:06,414 - 4952 samples (32 per mini-batch) -2023-04-27 03:38:14,656 - Epoch: [253][ 50/ 155] Loss 3.181040 mAP 0.425296 -2023-04-27 03:38:22,510 - Epoch: [253][ 100/ 155] Loss 3.159753 mAP 0.429916 -2023-04-27 03:38:30,291 - Epoch: [253][ 150/ 155] Loss 3.160320 mAP 0.417846 -2023-04-27 03:38:31,004 - Epoch: [253][ 155/ 155] Loss 3.158065 mAP 0.419727 -2023-04-27 03:38:31,088 - ==> mAP: 0.41973 Loss: 3.158 - -2023-04-27 03:38:31,093 - ==> Best [mAP: 0.419727 vloss: 3.158065 Sparsity:0.00 Params: 2177087 on epoch: 253] -2023-04-27 03:38:31,093 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:38:31,143 - - -2023-04-27 03:38:31,143 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:38:42,761 - Epoch: [254][ 50/ 518] Overall Loss 3.042650 Objective Loss 3.042650 LR 0.000008 Time 0.232300 -2023-04-27 03:38:53,548 - Epoch: [254][ 100/ 518] Overall Loss 3.045320 Objective Loss 3.045320 LR 0.000008 Time 0.224004 -2023-04-27 03:39:04,320 - Epoch: [254][ 150/ 518] Overall Loss 3.034513 Objective Loss 3.034513 LR 0.000008 Time 0.221141 -2023-04-27 03:39:15,190 - Epoch: [254][ 200/ 518] Overall Loss 3.026439 Objective Loss 3.026439 LR 0.000008 Time 0.220198 -2023-04-27 03:39:26,077 - Epoch: [254][ 250/ 518] Overall Loss 3.040811 Objective Loss 3.040811 LR 0.000008 Time 0.219700 -2023-04-27 03:39:36,955 - Epoch: [254][ 300/ 518] Overall Loss 3.041100 Objective Loss 3.041100 LR 0.000008 Time 0.219338 -2023-04-27 03:39:47,930 - Epoch: [254][ 350/ 518] Overall Loss 
3.045341 Objective Loss 3.045341 LR 0.000008 Time 0.219358 -2023-04-27 03:39:58,679 - Epoch: [254][ 400/ 518] Overall Loss 3.040291 Objective Loss 3.040291 LR 0.000008 Time 0.218805 -2023-04-27 03:40:09,519 - Epoch: [254][ 450/ 518] Overall Loss 3.035574 Objective Loss 3.035574 LR 0.000008 Time 0.218578 -2023-04-27 03:40:20,324 - Epoch: [254][ 500/ 518] Overall Loss 3.033338 Objective Loss 3.033338 LR 0.000008 Time 0.218328 -2023-04-27 03:40:24,055 - Epoch: [254][ 518/ 518] Overall Loss 3.032259 Objective Loss 3.032259 LR 0.000008 Time 0.217944 -2023-04-27 03:40:24,132 - --- validate (epoch=254)----------- -2023-04-27 03:40:24,132 - 4952 samples (32 per mini-batch) -2023-04-27 03:40:32,471 - Epoch: [254][ 50/ 155] Loss 3.166718 mAP 0.429130 -2023-04-27 03:40:40,333 - Epoch: [254][ 100/ 155] Loss 3.171566 mAP 0.432213 -2023-04-27 03:40:48,133 - Epoch: [254][ 150/ 155] Loss 3.154675 mAP 0.431558 -2023-04-27 03:40:48,852 - Epoch: [254][ 155/ 155] Loss 3.152628 mAP 0.430818 -2023-04-27 03:40:48,929 - ==> mAP: 0.43082 Loss: 3.153 - -2023-04-27 03:40:48,933 - ==> Best [mAP: 0.430818 vloss: 3.152628 Sparsity:0.00 Params: 2177087 on epoch: 254] -2023-04-27 03:40:48,933 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:40:48,982 - - -2023-04-27 03:40:48,982 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:41:00,532 - Epoch: [255][ 50/ 518] Overall Loss 3.033332 Objective Loss 3.033332 LR 0.000008 Time 0.230943 -2023-04-27 03:41:11,351 - Epoch: [255][ 100/ 518] Overall Loss 3.042704 Objective Loss 3.042704 LR 0.000008 Time 0.223650 -2023-04-27 03:41:22,198 - Epoch: [255][ 150/ 518] Overall Loss 3.043596 Objective Loss 3.043596 LR 0.000008 Time 0.221403 -2023-04-27 03:41:32,949 - Epoch: [255][ 200/ 518] Overall Loss 3.039157 Objective Loss 3.039157 LR 0.000008 Time 0.219798 -2023-04-27 03:41:43,785 - Epoch: [255][ 250/ 518] Overall Loss 3.038485 Objective Loss 3.038485 LR 0.000008 Time 0.219176 -2023-04-27 03:41:54,700 - 
Epoch: [255][ 300/ 518] Overall Loss 3.032716 Objective Loss 3.032716 LR 0.000008 Time 0.219024 -2023-04-27 03:42:05,543 - Epoch: [255][ 350/ 518] Overall Loss 3.043618 Objective Loss 3.043618 LR 0.000008 Time 0.218712 -2023-04-27 03:42:16,362 - Epoch: [255][ 400/ 518] Overall Loss 3.040685 Objective Loss 3.040685 LR 0.000008 Time 0.218416 -2023-04-27 03:42:27,150 - Epoch: [255][ 450/ 518] Overall Loss 3.039949 Objective Loss 3.039949 LR 0.000008 Time 0.218118 -2023-04-27 03:42:37,996 - Epoch: [255][ 500/ 518] Overall Loss 3.034383 Objective Loss 3.034383 LR 0.000008 Time 0.217993 -2023-04-27 03:42:41,744 - Epoch: [255][ 518/ 518] Overall Loss 3.033733 Objective Loss 3.033733 LR 0.000008 Time 0.217654 -2023-04-27 03:42:41,821 - --- validate (epoch=255)----------- -2023-04-27 03:42:41,822 - 4952 samples (32 per mini-batch) -2023-04-27 03:42:50,043 - Epoch: [255][ 50/ 155] Loss 3.114162 mAP 0.433121 -2023-04-27 03:42:57,858 - Epoch: [255][ 100/ 155] Loss 3.152177 mAP 0.427948 -2023-04-27 03:43:05,675 - Epoch: [255][ 150/ 155] Loss 3.143248 mAP 0.428432 -2023-04-27 03:43:06,385 - Epoch: [255][ 155/ 155] Loss 3.144234 mAP 0.427871 -2023-04-27 03:43:06,458 - ==> mAP: 0.42787 Loss: 3.144 - -2023-04-27 03:43:06,462 - ==> Best [mAP: 0.430818 vloss: 3.152628 Sparsity:0.00 Params: 2177087 on epoch: 254] -2023-04-27 03:43:06,462 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:43:06,496 - - -2023-04-27 03:43:06,496 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:43:18,178 - Epoch: [256][ 50/ 518] Overall Loss 3.041977 Objective Loss 3.041977 LR 0.000008 Time 0.233590 -2023-04-27 03:43:29,096 - Epoch: [256][ 100/ 518] Overall Loss 3.059568 Objective Loss 3.059568 LR 0.000008 Time 0.225956 -2023-04-27 03:43:39,928 - Epoch: [256][ 150/ 518] Overall Loss 3.041158 Objective Loss 3.041158 LR 0.000008 Time 0.222840 -2023-04-27 03:43:50,761 - Epoch: [256][ 200/ 518] Overall Loss 3.037275 Objective Loss 3.037275 LR 0.000008 Time 
0.221289 -2023-04-27 03:44:01,598 - Epoch: [256][ 250/ 518] Overall Loss 3.034150 Objective Loss 3.034150 LR 0.000008 Time 0.220373 -2023-04-27 03:44:12,387 - Epoch: [256][ 300/ 518] Overall Loss 3.020145 Objective Loss 3.020145 LR 0.000008 Time 0.219602 -2023-04-27 03:44:23,141 - Epoch: [256][ 350/ 518] Overall Loss 3.015872 Objective Loss 3.015872 LR 0.000008 Time 0.218952 -2023-04-27 03:44:33,888 - Epoch: [256][ 400/ 518] Overall Loss 3.015427 Objective Loss 3.015427 LR 0.000008 Time 0.218446 -2023-04-27 03:44:44,681 - Epoch: [256][ 450/ 518] Overall Loss 3.017475 Objective Loss 3.017475 LR 0.000008 Time 0.218154 -2023-04-27 03:44:55,418 - Epoch: [256][ 500/ 518] Overall Loss 3.017320 Objective Loss 3.017320 LR 0.000008 Time 0.217810 -2023-04-27 03:44:59,144 - Epoch: [256][ 518/ 518] Overall Loss 3.016991 Objective Loss 3.016991 LR 0.000008 Time 0.217433 -2023-04-27 03:44:59,221 - --- validate (epoch=256)----------- -2023-04-27 03:44:59,221 - 4952 samples (32 per mini-batch) -2023-04-27 03:45:07,539 - Epoch: [256][ 50/ 155] Loss 3.125432 mAP 0.436287 -2023-04-27 03:45:15,402 - Epoch: [256][ 100/ 155] Loss 3.156298 mAP 0.420897 -2023-04-27 03:45:23,246 - Epoch: [256][ 150/ 155] Loss 3.154693 mAP 0.426640 -2023-04-27 03:45:23,973 - Epoch: [256][ 155/ 155] Loss 3.152663 mAP 0.429709 -2023-04-27 03:45:24,046 - ==> mAP: 0.42971 Loss: 3.153 - -2023-04-27 03:45:24,050 - ==> Best [mAP: 0.430818 vloss: 3.152628 Sparsity:0.00 Params: 2177087 on epoch: 254] -2023-04-27 03:45:24,050 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:45:24,084 - - -2023-04-27 03:45:24,084 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:45:35,675 - Epoch: [257][ 50/ 518] Overall Loss 3.033524 Objective Loss 3.033524 LR 0.000008 Time 0.231771 -2023-04-27 03:45:46,484 - Epoch: [257][ 100/ 518] Overall Loss 2.991298 Objective Loss 2.991298 LR 0.000008 Time 0.223957 -2023-04-27 03:45:57,292 - Epoch: [257][ 150/ 518] Overall Loss 2.998900 
Objective Loss 2.998900 LR 0.000008 Time 0.221348 -2023-04-27 03:46:08,127 - Epoch: [257][ 200/ 518] Overall Loss 2.992074 Objective Loss 2.992074 LR 0.000008 Time 0.220180 -2023-04-27 03:46:18,971 - Epoch: [257][ 250/ 518] Overall Loss 2.998997 Objective Loss 2.998997 LR 0.000008 Time 0.219511 -2023-04-27 03:46:29,761 - Epoch: [257][ 300/ 518] Overall Loss 3.005716 Objective Loss 3.005716 LR 0.000008 Time 0.218888 -2023-04-27 03:46:40,538 - Epoch: [257][ 350/ 518] Overall Loss 3.001889 Objective Loss 3.001889 LR 0.000008 Time 0.218405 -2023-04-27 03:46:51,357 - Epoch: [257][ 400/ 518] Overall Loss 3.002834 Objective Loss 3.002834 LR 0.000008 Time 0.218149 -2023-04-27 03:47:02,223 - Epoch: [257][ 450/ 518] Overall Loss 3.001299 Objective Loss 3.001299 LR 0.000008 Time 0.218053 -2023-04-27 03:47:13,148 - Epoch: [257][ 500/ 518] Overall Loss 3.003319 Objective Loss 3.003319 LR 0.000008 Time 0.218094 -2023-04-27 03:47:16,898 - Epoch: [257][ 518/ 518] Overall Loss 3.000654 Objective Loss 3.000654 LR 0.000008 Time 0.217755 -2023-04-27 03:47:16,975 - --- validate (epoch=257)----------- -2023-04-27 03:47:16,975 - 4952 samples (32 per mini-batch) -2023-04-27 03:47:25,144 - Epoch: [257][ 50/ 155] Loss 3.161425 mAP 0.423738 -2023-04-27 03:47:32,991 - Epoch: [257][ 100/ 155] Loss 3.155525 mAP 0.421179 -2023-04-27 03:47:40,845 - Epoch: [257][ 150/ 155] Loss 3.137262 mAP 0.423640 -2023-04-27 03:47:41,559 - Epoch: [257][ 155/ 155] Loss 3.138539 mAP 0.422382 -2023-04-27 03:47:41,633 - ==> mAP: 0.42238 Loss: 3.139 - -2023-04-27 03:47:41,636 - ==> Best [mAP: 0.430818 vloss: 3.152628 Sparsity:0.00 Params: 2177087 on epoch: 254] -2023-04-27 03:47:41,637 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:47:41,671 - - -2023-04-27 03:47:41,671 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:47:53,174 - Epoch: [258][ 50/ 518] Overall Loss 3.017400 Objective Loss 3.017400 LR 0.000008 Time 0.230004 -2023-04-27 03:48:04,099 - Epoch: 
[258][ 100/ 518] Overall Loss 2.990669 Objective Loss 2.990669 LR 0.000008 Time 0.224237 -2023-04-27 03:48:14,967 - Epoch: [258][ 150/ 518] Overall Loss 3.000493 Objective Loss 3.000493 LR 0.000008 Time 0.221933 -2023-04-27 03:48:25,795 - Epoch: [258][ 200/ 518] Overall Loss 3.005089 Objective Loss 3.005089 LR 0.000008 Time 0.220581 -2023-04-27 03:48:36,656 - Epoch: [258][ 250/ 518] Overall Loss 3.011870 Objective Loss 3.011870 LR 0.000008 Time 0.219903 -2023-04-27 03:48:47,500 - Epoch: [258][ 300/ 518] Overall Loss 3.015900 Objective Loss 3.015900 LR 0.000008 Time 0.219395 -2023-04-27 03:48:58,412 - Epoch: [258][ 350/ 518] Overall Loss 3.012027 Objective Loss 3.012027 LR 0.000008 Time 0.219226 -2023-04-27 03:49:09,165 - Epoch: [258][ 400/ 518] Overall Loss 3.009724 Objective Loss 3.009724 LR 0.000008 Time 0.218701 -2023-04-27 03:49:19,952 - Epoch: [258][ 450/ 518] Overall Loss 3.005817 Objective Loss 3.005817 LR 0.000008 Time 0.218369 -2023-04-27 03:49:30,796 - Epoch: [258][ 500/ 518] Overall Loss 3.008661 Objective Loss 3.008661 LR 0.000008 Time 0.218217 -2023-04-27 03:49:34,616 - Epoch: [258][ 518/ 518] Overall Loss 3.009099 Objective Loss 3.009099 LR 0.000008 Time 0.218008 -2023-04-27 03:49:34,691 - --- validate (epoch=258)----------- -2023-04-27 03:49:34,692 - 4952 samples (32 per mini-batch) -2023-04-27 03:49:42,905 - Epoch: [258][ 50/ 155] Loss 3.108758 mAP 0.438205 -2023-04-27 03:49:50,765 - Epoch: [258][ 100/ 155] Loss 3.120084 mAP 0.434161 -2023-04-27 03:49:58,617 - Epoch: [258][ 150/ 155] Loss 3.127042 mAP 0.433657 -2023-04-27 03:49:59,328 - Epoch: [258][ 155/ 155] Loss 3.123749 mAP 0.433398 -2023-04-27 03:49:59,404 - ==> mAP: 0.43340 Loss: 3.124 - -2023-04-27 03:49:59,408 - ==> Best [mAP: 0.433398 vloss: 3.123749 Sparsity:0.00 Params: 2177087 on epoch: 258] -2023-04-27 03:49:59,408 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:49:59,457 - - -2023-04-27 03:49:59,457 - Training epoch: 16551 samples (32 per 
mini-batch) -2023-04-27 03:50:10,919 - Epoch: [259][ 50/ 518] Overall Loss 2.977145 Objective Loss 2.977145 LR 0.000008 Time 0.229187 -2023-04-27 03:50:21,743 - Epoch: [259][ 100/ 518] Overall Loss 3.006082 Objective Loss 3.006082 LR 0.000008 Time 0.222818 -2023-04-27 03:50:32,541 - Epoch: [259][ 150/ 518] Overall Loss 3.012120 Objective Loss 3.012120 LR 0.000008 Time 0.220517 -2023-04-27 03:50:43,311 - Epoch: [259][ 200/ 518] Overall Loss 3.010245 Objective Loss 3.010245 LR 0.000008 Time 0.219229 -2023-04-27 03:50:54,216 - Epoch: [259][ 250/ 518] Overall Loss 3.007414 Objective Loss 3.007414 LR 0.000008 Time 0.218997 -2023-04-27 03:51:04,987 - Epoch: [259][ 300/ 518] Overall Loss 3.005342 Objective Loss 3.005342 LR 0.000008 Time 0.218399 -2023-04-27 03:51:15,755 - Epoch: [259][ 350/ 518] Overall Loss 2.998101 Objective Loss 2.998101 LR 0.000008 Time 0.217960 -2023-04-27 03:51:26,558 - Epoch: [259][ 400/ 518] Overall Loss 2.999981 Objective Loss 2.999981 LR 0.000008 Time 0.217718 -2023-04-27 03:51:37,317 - Epoch: [259][ 450/ 518] Overall Loss 2.996935 Objective Loss 2.996935 LR 0.000008 Time 0.217433 -2023-04-27 03:51:48,147 - Epoch: [259][ 500/ 518] Overall Loss 2.992858 Objective Loss 2.992858 LR 0.000008 Time 0.217345 -2023-04-27 03:51:51,880 - Epoch: [259][ 518/ 518] Overall Loss 2.993708 Objective Loss 2.993708 LR 0.000008 Time 0.216998 -2023-04-27 03:51:51,955 - --- validate (epoch=259)----------- -2023-04-27 03:51:51,956 - 4952 samples (32 per mini-batch) -2023-04-27 03:52:00,233 - Epoch: [259][ 50/ 155] Loss 3.110187 mAP 0.432729 -2023-04-27 03:52:08,138 - Epoch: [259][ 100/ 155] Loss 3.118631 mAP 0.425356 -2023-04-27 03:52:15,997 - Epoch: [259][ 150/ 155] Loss 3.123641 mAP 0.426056 -2023-04-27 03:52:16,713 - Epoch: [259][ 155/ 155] Loss 3.126384 mAP 0.423183 -2023-04-27 03:52:16,797 - ==> mAP: 0.42318 Loss: 3.126 - -2023-04-27 03:52:16,801 - ==> Best [mAP: 0.433398 vloss: 3.123749 Sparsity:0.00 Params: 2177087 on epoch: 258] -2023-04-27 03:52:16,801 - 
Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 03:52:16,837 - - -2023-04-27 03:52:16,837 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 03:52:28,384 - Epoch: [260][ 50/ 518] Overall Loss 3.036504 Objective Loss 3.036504 LR 0.000008 Time 0.230896 -2023-04-27 03:52:39,154 - Epoch: [260][ 100/ 518] Overall Loss 3.029900 Objective Loss 3.029900 LR 0.000008 Time 0.223127 -2023-04-27 03:52:50,047 - Epoch: [260][ 150/ 518] Overall Loss 3.021124 Objective Loss 3.021124 LR 0.000008 Time 0.221360 -2023-04-27 03:53:00,846 - Epoch: [260][ 200/ 518] Overall Loss 3.024080 Objective Loss 3.024080 LR 0.000008 Time 0.220008 -2023-04-27 03:53:11,737 - Epoch: [260][ 250/ 518] Overall Loss 3.014920 Objective Loss 3.014920 LR 0.000008 Time 0.219566 -2023-04-27 03:53:22,549 - Epoch: [260][ 300/ 518] Overall Loss 3.001418 Objective Loss 3.001418 LR 0.000008 Time 0.219006 -2023-04-27 03:53:33,327 - Epoch: [260][ 350/ 518] Overall Loss 2.997989 Objective Loss 2.997989 LR 0.000008 Time 0.218508 -2023-04-27 03:53:44,144 - Epoch: [260][ 400/ 518] Overall Loss 2.996089 Objective Loss 2.996089 LR 0.000008 Time 0.218235 -2023-04-27 03:53:54,960 - Epoch: [260][ 450/ 518] Overall Loss 2.994810 Objective Loss 2.994810 LR 0.000008 Time 0.218019 -2023-04-27 03:54:05,797 - Epoch: [260][ 500/ 518] Overall Loss 2.995519 Objective Loss 2.995519 LR 0.000008 Time 0.217887 -2023-04-27 03:54:09,573 - Epoch: [260][ 518/ 518] Overall Loss 2.992879 Objective Loss 2.992879 LR 0.000008 Time 0.217604 -2023-04-27 03:54:09,649 - --- validate (epoch=260)----------- -2023-04-27 03:54:09,650 - 4952 samples (32 per mini-batch) -2023-04-27 03:54:17,914 - Epoch: [260][ 50/ 155] Loss 3.131004 mAP 0.445847 -2023-04-27 03:54:25,810 - Epoch: [260][ 100/ 155] Loss 3.113933 mAP 0.443389 -2023-04-27 03:54:33,717 - Epoch: [260][ 150/ 155] Loss 3.111556 mAP 0.437782 -2023-04-27 03:54:34,438 - Epoch: [260][ 155/ 155] Loss 3.108645 mAP 0.437520 -2023-04-27 03:54:34,511 - ==> mAP: 
0.43752 Loss: 3.109
-
-2023-04-27 03:54:34,515 - ==> Best [mAP: 0.437520 vloss: 3.108645 Sparsity:0.00 Params: 2177087 on epoch: 260]
-2023-04-27 03:54:34,515 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 03:54:34,564 - 
-
-2023-04-27 03:54:34,564 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:54:46,131 - Epoch: [261][ 50/ 518] Overall Loss 2.976004 Objective Loss 2.976004 LR 0.000008 Time 0.231286
-2023-04-27 03:54:56,960 - Epoch: [261][ 100/ 518] Overall Loss 2.989624 Objective Loss 2.989624 LR 0.000008 Time 0.223919
-2023-04-27 03:55:07,785 - Epoch: [261][ 150/ 518] Overall Loss 2.987740 Objective Loss 2.987740 LR 0.000008 Time 0.221436
-2023-04-27 03:55:18,593 - Epoch: [261][ 200/ 518] Overall Loss 2.986831 Objective Loss 2.986831 LR 0.000008 Time 0.220108
-2023-04-27 03:55:29,448 - Epoch: [261][ 250/ 518] Overall Loss 3.002772 Objective Loss 3.002772 LR 0.000008 Time 0.219502
-2023-04-27 03:55:40,204 - Epoch: [261][ 300/ 518] Overall Loss 2.995075 Objective Loss 2.995075 LR 0.000008 Time 0.218765
-2023-04-27 03:55:51,018 - Epoch: [261][ 350/ 518] Overall Loss 2.993761 Objective Loss 2.993761 LR 0.000008 Time 0.218406
-2023-04-27 03:56:01,983 - Epoch: [261][ 400/ 518] Overall Loss 2.994682 Objective Loss 2.994682 LR 0.000008 Time 0.218514
-2023-04-27 03:56:12,773 - Epoch: [261][ 450/ 518] Overall Loss 2.991048 Objective Loss 2.991048 LR 0.000008 Time 0.218208
-2023-04-27 03:56:23,538 - Epoch: [261][ 500/ 518] Overall Loss 2.994679 Objective Loss 2.994679 LR 0.000008 Time 0.217916
-2023-04-27 03:56:27,266 - Epoch: [261][ 518/ 518] Overall Loss 2.994769 Objective Loss 2.994769 LR 0.000008 Time 0.217539
-2023-04-27 03:56:27,343 - --- validate (epoch=261)-----------
-2023-04-27 03:56:27,343 - 4952 samples (32 per mini-batch)
-2023-04-27 03:56:35,615 - Epoch: [261][ 50/ 155] Loss 3.079841 mAP 0.439834
-2023-04-27 03:56:43,474 - Epoch: [261][ 100/ 155] Loss 3.110404 mAP 0.434605
-2023-04-27 03:56:51,365 - Epoch: [261][ 150/ 155] Loss 3.104727 mAP 0.437321
-2023-04-27 03:56:52,082 - Epoch: [261][ 155/ 155] Loss 3.104831 mAP 0.437064
-2023-04-27 03:56:52,157 - ==> mAP: 0.43706 Loss: 3.105
-
-2023-04-27 03:56:52,161 - ==> Best [mAP: 0.437520 vloss: 3.108645 Sparsity:0.00 Params: 2177087 on epoch: 260]
-2023-04-27 03:56:52,161 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 03:56:52,195 - 
-
-2023-04-27 03:56:52,195 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:57:03,874 - Epoch: [262][ 50/ 518] Overall Loss 2.966476 Objective Loss 2.966476 LR 0.000008 Time 0.233516
-2023-04-27 03:57:14,697 - Epoch: [262][ 100/ 518] Overall Loss 2.994615 Objective Loss 2.994615 LR 0.000008 Time 0.224977
-2023-04-27 03:57:25,450 - Epoch: [262][ 150/ 518] Overall Loss 3.000324 Objective Loss 3.000324 LR 0.000008 Time 0.221659
-2023-04-27 03:57:36,243 - Epoch: [262][ 200/ 518] Overall Loss 2.988159 Objective Loss 2.988159 LR 0.000008 Time 0.220202
-2023-04-27 03:57:47,037 - Epoch: [262][ 250/ 518] Overall Loss 2.981325 Objective Loss 2.981325 LR 0.000008 Time 0.219329
-2023-04-27 03:57:57,951 - Epoch: [262][ 300/ 518] Overall Loss 2.984138 Objective Loss 2.984138 LR 0.000008 Time 0.219151
-2023-04-27 03:58:08,710 - Epoch: [262][ 350/ 518] Overall Loss 2.983599 Objective Loss 2.983599 LR 0.000008 Time 0.218580
-2023-04-27 03:58:19,497 - Epoch: [262][ 400/ 518] Overall Loss 2.985411 Objective Loss 2.985411 LR 0.000008 Time 0.218221
-2023-04-27 03:58:30,299 - Epoch: [262][ 450/ 518] Overall Loss 2.979877 Objective Loss 2.979877 LR 0.000008 Time 0.217974
-2023-04-27 03:58:41,095 - Epoch: [262][ 500/ 518] Overall Loss 2.984228 Objective Loss 2.984228 LR 0.000008 Time 0.217765
-2023-04-27 03:58:44,843 - Epoch: [262][ 518/ 518] Overall Loss 2.982605 Objective Loss 2.982605 LR 0.000008 Time 0.217433
-2023-04-27 03:58:44,921 - --- validate (epoch=262)-----------
-2023-04-27 03:58:44,921 - 4952 samples (32 per mini-batch)
-2023-04-27 03:58:53,163 - Epoch: [262][ 50/ 155] Loss 3.123451 mAP 0.412307
-2023-04-27 03:59:01,029 - Epoch: [262][ 100/ 155] Loss 3.141107 mAP 0.420080
-2023-04-27 03:59:08,900 - Epoch: [262][ 150/ 155] Loss 3.137512 mAP 0.426609
-2023-04-27 03:59:09,618 - Epoch: [262][ 155/ 155] Loss 3.136288 mAP 0.427992
-2023-04-27 03:59:09,683 - ==> mAP: 0.42799 Loss: 3.136
-
-2023-04-27 03:59:09,687 - ==> Best [mAP: 0.437520 vloss: 3.108645 Sparsity:0.00 Params: 2177087 on epoch: 260]
-2023-04-27 03:59:09,687 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 03:59:09,721 - 
-
-2023-04-27 03:59:09,721 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 03:59:21,388 - Epoch: [263][ 50/ 518] Overall Loss 2.962949 Objective Loss 2.962949 LR 0.000008 Time 0.233279
-2023-04-27 03:59:32,111 - Epoch: [263][ 100/ 518] Overall Loss 2.992859 Objective Loss 2.992859 LR 0.000008 Time 0.223856
-2023-04-27 03:59:42,976 - Epoch: [263][ 150/ 518] Overall Loss 2.993453 Objective Loss 2.993453 LR 0.000008 Time 0.221660
-2023-04-27 03:59:53,785 - Epoch: [263][ 200/ 518] Overall Loss 2.996130 Objective Loss 2.996130 LR 0.000008 Time 0.220285
-2023-04-27 04:00:04,600 - Epoch: [263][ 250/ 518] Overall Loss 2.997093 Objective Loss 2.997093 LR 0.000008 Time 0.219479
-2023-04-27 04:00:15,418 - Epoch: [263][ 300/ 518] Overall Loss 2.993174 Objective Loss 2.993174 LR 0.000008 Time 0.218954
-2023-04-27 04:00:26,273 - Epoch: [263][ 350/ 518] Overall Loss 2.990055 Objective Loss 2.990055 LR 0.000008 Time 0.218684
-2023-04-27 04:00:37,180 - Epoch: [263][ 400/ 518] Overall Loss 2.992574 Objective Loss 2.992574 LR 0.000008 Time 0.218614
-2023-04-27 04:00:47,998 - Epoch: [263][ 450/ 518] Overall Loss 2.989300 Objective Loss 2.989300 LR 0.000008 Time 0.218360
-2023-04-27 04:00:58,846 - Epoch: [263][ 500/ 518] Overall Loss 2.989268 Objective Loss 2.989268 LR 0.000008 Time 0.218217
-2023-04-27 04:01:02,565 - Epoch: [263][ 518/ 518] Overall Loss 2.987842 Objective Loss 2.987842 LR 0.000008 Time 0.217813
-2023-04-27 04:01:02,642 - --- validate (epoch=263)-----------
-2023-04-27 04:01:02,642 - 4952 samples (32 per mini-batch)
-2023-04-27 04:01:10,854 - Epoch: [263][ 50/ 155] Loss 3.111518 mAP 0.436154
-2023-04-27 04:01:18,727 - Epoch: [263][ 100/ 155] Loss 3.099971 mAP 0.439544
-2023-04-27 04:01:26,553 - Epoch: [263][ 150/ 155] Loss 3.103498 mAP 0.432834
-2023-04-27 04:01:27,267 - Epoch: [263][ 155/ 155] Loss 3.103355 mAP 0.433379
-2023-04-27 04:01:27,342 - ==> mAP: 0.43338 Loss: 3.103
-
-2023-04-27 04:01:27,346 - ==> Best [mAP: 0.437520 vloss: 3.108645 Sparsity:0.00 Params: 2177087 on epoch: 260]
-2023-04-27 04:01:27,346 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:01:27,380 - 
-
-2023-04-27 04:01:27,380 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:01:39,160 - Epoch: [264][ 50/ 518] Overall Loss 2.986884 Objective Loss 2.986884 LR 0.000008 Time 0.235539
-2023-04-27 04:01:49,991 - Epoch: [264][ 100/ 518] Overall Loss 2.981842 Objective Loss 2.981842 LR 0.000008 Time 0.226068
-2023-04-27 04:02:00,934 - Epoch: [264][ 150/ 518] Overall Loss 2.961959 Objective Loss 2.961959 LR 0.000008 Time 0.223654
-2023-04-27 04:02:11,800 - Epoch: [264][ 200/ 518] Overall Loss 2.967916 Objective Loss 2.967916 LR 0.000008 Time 0.222063
-2023-04-27 04:02:22,605 - Epoch: [264][ 250/ 518] Overall Loss 2.968542 Objective Loss 2.968542 LR 0.000008 Time 0.220863
-2023-04-27 04:02:33,426 - Epoch: [264][ 300/ 518] Overall Loss 2.964595 Objective Loss 2.964595 LR 0.000008 Time 0.220119
-2023-04-27 04:02:44,182 - Epoch: [264][ 350/ 518] Overall Loss 2.963838 Objective Loss 2.963838 LR 0.000008 Time 0.219398
-2023-04-27 04:02:55,013 - Epoch: [264][ 400/ 518] Overall Loss 2.974209 Objective Loss 2.974209 LR 0.000008 Time 0.219048
-2023-04-27 04:03:05,749 - Epoch: [264][ 450/ 518] Overall Loss 2.972455 Objective Loss 2.972455 LR 0.000008 Time 0.218564
-2023-04-27 04:03:16,527 - Epoch: [264][ 500/ 518] Overall Loss 2.972489 Objective Loss 2.972489 LR 0.000008 Time 0.218261
-2023-04-27 04:03:20,260 - Epoch: [264][ 518/ 518] Overall Loss 2.970721 Objective Loss 2.970721 LR 0.000008 Time 0.217881
-2023-04-27 04:03:20,336 - --- validate (epoch=264)-----------
-2023-04-27 04:03:20,336 - 4952 samples (32 per mini-batch)
-2023-04-27 04:03:28,575 - Epoch: [264][ 50/ 155] Loss 3.101590 mAP 0.437617
-2023-04-27 04:03:36,433 - Epoch: [264][ 100/ 155] Loss 3.099237 mAP 0.441297
-2023-04-27 04:03:44,277 - Epoch: [264][ 150/ 155] Loss 3.095793 mAP 0.441372
-2023-04-27 04:03:44,988 - Epoch: [264][ 155/ 155] Loss 3.099365 mAP 0.441580
-2023-04-27 04:03:45,069 - ==> mAP: 0.44158 Loss: 3.099
-
-2023-04-27 04:03:45,073 - ==> Best [mAP: 0.441580 vloss: 3.099365 Sparsity:0.00 Params: 2177087 on epoch: 264]
-2023-04-27 04:03:45,073 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:03:45,122 - 
-
-2023-04-27 04:03:45,122 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:03:56,877 - Epoch: [265][ 50/ 518] Overall Loss 2.936734 Objective Loss 2.936734 LR 0.000008 Time 0.235046
-2023-04-27 04:04:07,667 - Epoch: [265][ 100/ 518] Overall Loss 2.953271 Objective Loss 2.953271 LR 0.000008 Time 0.225401
-2023-04-27 04:04:18,477 - Epoch: [265][ 150/ 518] Overall Loss 2.961389 Objective Loss 2.961389 LR 0.000008 Time 0.222329
-2023-04-27 04:04:29,259 - Epoch: [265][ 200/ 518] Overall Loss 2.949948 Objective Loss 2.949948 LR 0.000008 Time 0.220646
-2023-04-27 04:04:40,050 - Epoch: [265][ 250/ 518] Overall Loss 2.963297 Objective Loss 2.963297 LR 0.000008 Time 0.219677
-2023-04-27 04:04:50,931 - Epoch: [265][ 300/ 518] Overall Loss 2.965835 Objective Loss 2.965835 LR 0.000008 Time 0.219326
-2023-04-27 04:05:01,815 - Epoch: [265][ 350/ 518] Overall Loss 2.965457 Objective Loss 2.965457 LR 0.000008 Time 0.219087
-2023-04-27 04:05:12,597 - Epoch: [265][ 400/ 518] Overall Loss 2.964274 Objective Loss 2.964274 LR 0.000008 Time 0.218653
-2023-04-27 04:05:23,365 - Epoch: [265][ 450/ 518] Overall Loss 2.965862 Objective Loss 2.965862 LR 0.000008 Time 0.218283
-2023-04-27 04:05:34,162 - Epoch: [265][ 500/ 518] Overall Loss 2.970218 Objective Loss 2.970218 LR 0.000008 Time 0.218046
-2023-04-27 04:05:37,927 - Epoch: [265][ 518/ 518] Overall Loss 2.969797 Objective Loss 2.969797 LR 0.000008 Time 0.217736
-2023-04-27 04:05:38,001 - --- validate (epoch=265)-----------
-2023-04-27 04:05:38,001 - 4952 samples (32 per mini-batch)
-2023-04-27 04:05:46,261 - Epoch: [265][ 50/ 155] Loss 3.137069 mAP 0.418654
-2023-04-27 04:05:54,158 - Epoch: [265][ 100/ 155] Loss 3.098488 mAP 0.425617
-2023-04-27 04:06:02,027 - Epoch: [265][ 150/ 155] Loss 3.105071 mAP 0.426756
-2023-04-27 04:06:02,755 - Epoch: [265][ 155/ 155] Loss 3.105330 mAP 0.428440
-2023-04-27 04:06:02,829 - ==> mAP: 0.42844 Loss: 3.105
-
-2023-04-27 04:06:02,833 - ==> Best [mAP: 0.441580 vloss: 3.099365 Sparsity:0.00 Params: 2177087 on epoch: 264]
-2023-04-27 04:06:02,833 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:06:02,867 - 
-
-2023-04-27 04:06:02,867 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:06:14,420 - Epoch: [266][ 50/ 518] Overall Loss 2.992824 Objective Loss 2.992824 LR 0.000008 Time 0.231006
-2023-04-27 04:06:25,224 - Epoch: [266][ 100/ 518] Overall Loss 2.953759 Objective Loss 2.953759 LR 0.000008 Time 0.223532
-2023-04-27 04:06:36,058 - Epoch: [266][ 150/ 518] Overall Loss 2.964682 Objective Loss 2.964682 LR 0.000008 Time 0.221237
-2023-04-27 04:06:46,853 - Epoch: [266][ 200/ 518] Overall Loss 2.956539 Objective Loss 2.956539 LR 0.000008 Time 0.219896
-2023-04-27 04:06:57,680 - Epoch: [266][ 250/ 518] Overall Loss 2.967604 Objective Loss 2.967604 LR 0.000008 Time 0.219217
-2023-04-27 04:07:08,473 - Epoch: [266][ 300/ 518] Overall Loss 2.972245 Objective Loss 2.972245 LR 0.000008 Time 0.218651
-2023-04-27 04:07:19,324 - Epoch: [266][ 350/ 518] Overall Loss 2.970165 Objective Loss 2.970165 LR 0.000008 Time 0.218414
-2023-04-27 04:07:30,139 - Epoch: [266][ 400/ 518] Overall Loss 2.971205 Objective Loss 2.971205 LR 0.000008 Time 0.218146
-2023-04-27 04:07:40,879 - Epoch: [266][ 450/ 518] Overall Loss 2.968785 Objective Loss 2.968785 LR 0.000008 Time 0.217771
-2023-04-27 04:07:51,679 - Epoch: [266][ 500/ 518] Overall Loss 2.965673 Objective Loss 2.965673 LR 0.000008 Time 0.217590
-2023-04-27 04:07:55,385 - Epoch: [266][ 518/ 518] Overall Loss 2.967023 Objective Loss 2.967023 LR 0.000008 Time 0.217184
-2023-04-27 04:07:55,459 - --- validate (epoch=266)-----------
-2023-04-27 04:07:55,460 - 4952 samples (32 per mini-batch)
-2023-04-27 04:08:03,663 - Epoch: [266][ 50/ 155] Loss 3.095660 mAP 0.447301
-2023-04-27 04:08:11,508 - Epoch: [266][ 100/ 155] Loss 3.114401 mAP 0.428971
-2023-04-27 04:08:19,301 - Epoch: [266][ 150/ 155] Loss 3.114974 mAP 0.424620
-2023-04-27 04:08:20,013 - Epoch: [266][ 155/ 155] Loss 3.117396 mAP 0.425426
-2023-04-27 04:08:20,086 - ==> mAP: 0.42543 Loss: 3.117
-
-2023-04-27 04:08:20,089 - ==> Best [mAP: 0.441580 vloss: 3.099365 Sparsity:0.00 Params: 2177087 on epoch: 264]
-2023-04-27 04:08:20,089 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:08:20,124 - 
-
-2023-04-27 04:08:20,124 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:08:31,700 - Epoch: [267][ 50/ 518] Overall Loss 2.955357 Objective Loss 2.955357 LR 0.000008 Time 0.231475
-2023-04-27 04:08:42,488 - Epoch: [267][ 100/ 518] Overall Loss 2.962126 Objective Loss 2.962126 LR 0.000008 Time 0.223593
-2023-04-27 04:08:53,233 - Epoch: [267][ 150/ 518] Overall Loss 2.966127 Objective Loss 2.966127 LR 0.000008 Time 0.220691
-2023-04-27 04:09:04,070 - Epoch: [267][ 200/ 518] Overall Loss 2.972083 Objective Loss 2.972083 LR 0.000008 Time 0.219692
-2023-04-27 04:09:14,873 - Epoch: [267][ 250/ 518] Overall Loss 2.971427 Objective Loss 2.971427 LR 0.000008 Time 0.218959
-2023-04-27 04:09:25,728 - Epoch: [267][ 300/ 518] Overall Loss 2.973632 Objective Loss 2.973632 LR 0.000008 Time 0.218645
-2023-04-27 04:09:36,518 - Epoch: [267][ 350/ 518] Overall Loss 2.974892 Objective Loss 2.974892 LR 0.000008 Time 0.218234
-2023-04-27 04:09:47,414 - Epoch: [267][ 400/ 518] Overall Loss 2.971805 Objective Loss 2.971805 LR 0.000008 Time 0.218192
-2023-04-27 04:09:58,179 - Epoch: [267][ 450/ 518] Overall Loss 2.968771 Objective Loss 2.968771 LR 0.000008 Time 0.217866
-2023-04-27 04:10:08,967 - Epoch: [267][ 500/ 518] Overall Loss 2.970430 Objective Loss 2.970430 LR 0.000008 Time 0.217652
-2023-04-27 04:10:12,708 - Epoch: [267][ 518/ 518] Overall Loss 2.972870 Objective Loss 2.972870 LR 0.000008 Time 0.217311
-2023-04-27 04:10:12,787 - --- validate (epoch=267)-----------
-2023-04-27 04:10:12,788 - 4952 samples (32 per mini-batch)
-2023-04-27 04:10:21,050 - Epoch: [267][ 50/ 155] Loss 3.100581 mAP 0.433832
-2023-04-27 04:10:28,921 - Epoch: [267][ 100/ 155] Loss 3.125120 mAP 0.431062
-2023-04-27 04:10:36,788 - Epoch: [267][ 150/ 155] Loss 3.117895 mAP 0.429705
-2023-04-27 04:10:37,497 - Epoch: [267][ 155/ 155] Loss 3.119131 mAP 0.428177
-2023-04-27 04:10:37,571 - ==> mAP: 0.42818 Loss: 3.119
-
-2023-04-27 04:10:37,575 - ==> Best [mAP: 0.441580 vloss: 3.099365 Sparsity:0.00 Params: 2177087 on epoch: 264]
-2023-04-27 04:10:37,575 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:10:37,609 - 
-
-2023-04-27 04:10:37,609 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:10:49,254 - Epoch: [268][ 50/ 518] Overall Loss 2.969801 Objective Loss 2.969801 LR 0.000008 Time 0.232838
-2023-04-27 04:11:00,056 - Epoch: [268][ 100/ 518] Overall Loss 2.979739 Objective Loss 2.979739 LR 0.000008 Time 0.224426
-2023-04-27 04:11:10,798 - Epoch: [268][ 150/ 518] Overall Loss 2.971685 Objective Loss 2.971685 LR 0.000008 Time 0.221219
-2023-04-27 04:11:21,583 - Epoch: [268][ 200/ 518] Overall Loss 2.964974 Objective Loss 2.964974 LR 0.000008 Time 0.219830
-2023-04-27 04:11:32,358 - Epoch: [268][ 250/ 518] Overall Loss 2.975782 Objective Loss 2.975782 LR 0.000008 Time 0.218960
-2023-04-27 04:11:43,174 - Epoch: [268][ 300/ 518] Overall Loss 2.974181 Objective Loss 2.974181 LR 0.000008 Time 0.218514
-2023-04-27 04:11:54,087 - Epoch: [268][ 350/ 518] Overall Loss 2.975762 Objective Loss 2.975762 LR 0.000008 Time 0.218473
-2023-04-27 04:12:04,939 - Epoch: [268][ 400/ 518] Overall Loss 2.979975 Objective Loss 2.979975 LR 0.000008 Time 0.218289
-2023-04-27 04:12:15,736 - Epoch: [268][ 450/ 518] Overall Loss 2.978045 Objective Loss 2.978045 LR 0.000008 Time 0.218024
-2023-04-27 04:12:26,582 - Epoch: [268][ 500/ 518] Overall Loss 2.971995 Objective Loss 2.971995 LR 0.000008 Time 0.217912
-2023-04-27 04:12:30,337 - Epoch: [268][ 518/ 518] Overall Loss 2.969809 Objective Loss 2.969809 LR 0.000008 Time 0.217587
-2023-04-27 04:12:30,414 - --- validate (epoch=268)-----------
-2023-04-27 04:12:30,414 - 4952 samples (32 per mini-batch)
-2023-04-27 04:12:38,674 - Epoch: [268][ 50/ 155] Loss 3.104807 mAP 0.440002
-2023-04-27 04:12:46,579 - Epoch: [268][ 100/ 155] Loss 3.107998 mAP 0.445402
-2023-04-27 04:12:54,421 - Epoch: [268][ 150/ 155] Loss 3.110720 mAP 0.434012
-2023-04-27 04:12:55,139 - Epoch: [268][ 155/ 155] Loss 3.105723 mAP 0.434941
-2023-04-27 04:12:55,215 - ==> mAP: 0.43494 Loss: 3.106
-
-2023-04-27 04:12:55,219 - ==> Best [mAP: 0.441580 vloss: 3.099365 Sparsity:0.00 Params: 2177087 on epoch: 264]
-2023-04-27 04:12:55,219 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:12:55,253 - 
-
-2023-04-27 04:12:55,253 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:13:06,850 - Epoch: [269][ 50/ 518] Overall Loss 2.961357 Objective Loss 2.961357 LR 0.000008 Time 0.231887
-2023-04-27 04:13:17,661 - Epoch: [269][ 100/ 518] Overall Loss 2.963585 Objective Loss 2.963585 LR 0.000008 Time 0.224029
-2023-04-27 04:13:28,484 - Epoch: [269][ 150/ 518] Overall Loss 2.965968 Objective Loss 2.965968 LR 0.000008 Time 0.221499
-2023-04-27 04:13:39,263 - Epoch: [269][ 200/ 518] Overall Loss 2.966466 Objective Loss 2.966466 LR 0.000008 Time 0.220012
-2023-04-27 04:13:50,046 - Epoch: [269][ 250/ 518] Overall Loss 2.962672 Objective Loss 2.962672 LR 0.000008 Time 0.219134
-2023-04-27 04:14:00,813 - Epoch: [269][ 300/ 518] Overall Loss 2.959192 Objective Loss 2.959192 LR 0.000008 Time 0.218496
-2023-04-27 04:14:11,603 - Epoch: [269][ 350/ 518] Overall Loss 2.949789 Objective Loss 2.949789 LR 0.000008 Time 0.218108
-2023-04-27 04:14:22,436 - Epoch: [269][ 400/ 518] Overall Loss 2.951523 Objective Loss 2.951523 LR 0.000008 Time 0.217924
-2023-04-27 04:14:33,197 - Epoch: [269][ 450/ 518] Overall Loss 2.960219 Objective Loss 2.960219 LR 0.000008 Time 0.217619
-2023-04-27 04:14:44,127 - Epoch: [269][ 500/ 518] Overall Loss 2.965894 Objective Loss 2.965894 LR 0.000008 Time 0.217714
-2023-04-27 04:14:47,898 - Epoch: [269][ 518/ 518] Overall Loss 2.962807 Objective Loss 2.962807 LR 0.000008 Time 0.217428
-2023-04-27 04:14:47,975 - --- validate (epoch=269)-----------
-2023-04-27 04:14:47,975 - 4952 samples (32 per mini-batch)
-2023-04-27 04:14:56,261 - Epoch: [269][ 50/ 155] Loss 3.101738 mAP 0.439422
-2023-04-27 04:15:04,155 - Epoch: [269][ 100/ 155] Loss 3.097216 mAP 0.442862
-2023-04-27 04:15:11,996 - Epoch: [269][ 150/ 155] Loss 3.101330 mAP 0.443494
-2023-04-27 04:15:12,720 - Epoch: [269][ 155/ 155] Loss 3.098334 mAP 0.444663
-2023-04-27 04:15:12,796 - ==> mAP: 0.44466 Loss: 3.098
-
-2023-04-27 04:15:12,800 - ==> Best [mAP: 0.444663 vloss: 3.098334 Sparsity:0.00 Params: 2177087 on epoch: 269]
-2023-04-27 04:15:12,800 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:15:12,847 - 
-
-2023-04-27 04:15:12,847 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:15:24,353 - Epoch: [270][ 50/ 518] Overall Loss 2.990140 Objective Loss 2.990140 LR 0.000008 Time 0.230063
-2023-04-27 04:15:35,280 - Epoch: [270][ 100/ 518] Overall Loss 2.961487 Objective Loss 2.961487 LR 0.000008 Time 0.224290
-2023-04-27 04:15:46,149 - Epoch: [270][ 150/ 518] Overall Loss 2.965554 Objective Loss 2.965554 LR 0.000008 Time 0.221972
-2023-04-27 04:15:57,047 - Epoch: [270][ 200/ 518] Overall Loss 2.968105 Objective Loss 2.968105 LR 0.000008 Time 0.220964
-2023-04-27 04:16:07,794 - Epoch: [270][ 250/ 518] Overall Loss 2.953144 Objective Loss 2.953144 LR 0.000008 Time 0.219751
-2023-04-27 04:16:18,589 - Epoch: [270][ 300/ 518] Overall Loss 2.955148 Objective Loss 2.955148 LR 0.000008 Time 0.219106
-2023-04-27 04:16:29,309 - Epoch: [270][ 350/ 518] Overall Loss 2.953630 Objective Loss 2.953630 LR 0.000008 Time 0.218428
-2023-04-27 04:16:40,142 - Epoch: [270][ 400/ 518] Overall Loss 2.955674 Objective Loss 2.955674 LR 0.000008 Time 0.218204
-2023-04-27 04:16:50,992 - Epoch: [270][ 450/ 518] Overall Loss 2.952844 Objective Loss 2.952844 LR 0.000008 Time 0.218067
-2023-04-27 04:17:01,787 - Epoch: [270][ 500/ 518] Overall Loss 2.955289 Objective Loss 2.955289 LR 0.000008 Time 0.217846
-2023-04-27 04:17:05,498 - Epoch: [270][ 518/ 518] Overall Loss 2.955501 Objective Loss 2.955501 LR 0.000008 Time 0.217439
-2023-04-27 04:17:05,573 - --- validate (epoch=270)-----------
-2023-04-27 04:17:05,574 - 4952 samples (32 per mini-batch)
-2023-04-27 04:17:13,862 - Epoch: [270][ 50/ 155] Loss 3.114160 mAP 0.453228
-2023-04-27 04:17:21,783 - Epoch: [270][ 100/ 155] Loss 3.083950 mAP 0.443441
-2023-04-27 04:17:29,664 - Epoch: [270][ 150/ 155] Loss 3.081920 mAP 0.445769
-2023-04-27 04:17:30,391 - Epoch: [270][ 155/ 155] Loss 3.081809 mAP 0.446835
-2023-04-27 04:17:30,465 - ==> mAP: 0.44684 Loss: 3.082
-
-2023-04-27 04:17:30,469 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:17:30,469 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:17:30,520 - 
-
-2023-04-27 04:17:30,520 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:17:42,156 - Epoch: [271][ 50/ 518] Overall Loss 2.946502 Objective Loss 2.946502 LR 0.000008 Time 0.232658
-2023-04-27 04:17:52,974 - Epoch: [271][ 100/ 518] Overall Loss 2.959829 Objective Loss 2.959829 LR 0.000008 Time 0.224489
-2023-04-27 04:18:03,747 - Epoch: [271][ 150/ 518] Overall Loss 2.966087 Objective Loss 2.966087 LR 0.000008 Time 0.221470
-2023-04-27 04:18:14,543 - Epoch: [271][ 200/ 518] Overall Loss 2.961211 Objective Loss 2.961211 LR 0.000008 Time 0.220073
-2023-04-27 04:18:25,365 - Epoch: [271][ 250/ 518] Overall Loss 2.963180 Objective Loss 2.963180 LR 0.000008 Time 0.219342
-2023-04-27 04:18:36,205 - Epoch: [271][ 300/ 518] Overall Loss 2.963341 Objective Loss 2.963341 LR 0.000008 Time 0.218912
-2023-04-27 04:18:47,002 - Epoch: [271][ 350/ 518] Overall Loss 2.960858 Objective Loss 2.960858 LR 0.000008 Time 0.218485
-2023-04-27 04:18:57,773 - Epoch: [271][ 400/ 518] Overall Loss 2.957729 Objective Loss 2.957729 LR 0.000008 Time 0.218097
-2023-04-27 04:19:08,547 - Epoch: [271][ 450/ 518] Overall Loss 2.963586 Objective Loss 2.963586 LR 0.000008 Time 0.217802
-2023-04-27 04:19:19,414 - Epoch: [271][ 500/ 518] Overall Loss 2.958697 Objective Loss 2.958697 LR 0.000008 Time 0.217753
-2023-04-27 04:19:23,201 - Epoch: [271][ 518/ 518] Overall Loss 2.958937 Objective Loss 2.958937 LR 0.000008 Time 0.217497
-2023-04-27 04:19:23,276 - --- validate (epoch=271)-----------
-2023-04-27 04:19:23,276 - 4952 samples (32 per mini-batch)
-2023-04-27 04:19:31,520 - Epoch: [271][ 50/ 155] Loss 3.080792 mAP 0.446138
-2023-04-27 04:19:39,411 - Epoch: [271][ 100/ 155] Loss 3.076481 mAP 0.443767
-2023-04-27 04:19:47,311 - Epoch: [271][ 150/ 155] Loss 3.095284 mAP 0.441196
-2023-04-27 04:19:48,029 - Epoch: [271][ 155/ 155] Loss 3.095184 mAP 0.440172
-2023-04-27 04:19:48,103 - ==> mAP: 0.44017 Loss: 3.095
-
-2023-04-27 04:19:48,107 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:19:48,107 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:19:48,161 - 
-
-2023-04-27 04:19:48,161 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:19:59,815 - Epoch: [272][ 50/ 518] Overall Loss 2.935930 Objective Loss 2.935930 LR 0.000008 Time 0.233014
-2023-04-27 04:20:10,641 - Epoch: [272][ 100/ 518] Overall Loss 2.951069 Objective Loss 2.951069 LR 0.000008 Time 0.224742
-2023-04-27 04:20:21,452 - Epoch: [272][ 150/ 518] Overall Loss 2.938881 Objective Loss 2.938881 LR 0.000008 Time 0.221893
-2023-04-27 04:20:32,245 - Epoch: [272][ 200/ 518] Overall Loss 2.936373 Objective Loss 2.936373 LR 0.000008 Time 0.220380
-2023-04-27 04:20:43,135 - Epoch: [272][ 250/ 518] Overall Loss 2.938552 Objective Loss 2.938552 LR 0.000008 Time 0.219858
-2023-04-27 04:20:54,007 - Epoch: [272][ 300/ 518] Overall Loss 2.942840 Objective Loss 2.942840 LR 0.000008 Time 0.219447
-2023-04-27 04:21:04,859 - Epoch: [272][ 350/ 518] Overall Loss 2.944756 Objective Loss 2.944756 LR 0.000008 Time 0.219101
-2023-04-27 04:21:15,667 - Epoch: [272][ 400/ 518] Overall Loss 2.940727 Objective Loss 2.940727 LR 0.000008 Time 0.218729
-2023-04-27 04:21:26,497 - Epoch: [272][ 450/ 518] Overall Loss 2.941674 Objective Loss 2.941674 LR 0.000008 Time 0.218490
-2023-04-27 04:21:37,318 - Epoch: [272][ 500/ 518] Overall Loss 2.937368 Objective Loss 2.937368 LR 0.000008 Time 0.218278
-2023-04-27 04:21:41,058 - Epoch: [272][ 518/ 518] Overall Loss 2.942120 Objective Loss 2.942120 LR 0.000008 Time 0.217912
-2023-04-27 04:21:41,131 - --- validate (epoch=272)-----------
-2023-04-27 04:21:41,131 - 4952 samples (32 per mini-batch)
-2023-04-27 04:21:49,423 - Epoch: [272][ 50/ 155] Loss 3.101016 mAP 0.427001
-2023-04-27 04:21:57,322 - Epoch: [272][ 100/ 155] Loss 3.093959 mAP 0.432191
-2023-04-27 04:22:05,174 - Epoch: [272][ 150/ 155] Loss 3.098715 mAP 0.434778
-2023-04-27 04:22:05,884 - Epoch: [272][ 155/ 155] Loss 3.101460 mAP 0.432179
-2023-04-27 04:22:05,961 - ==> mAP: 0.43218 Loss: 3.101
-
-2023-04-27 04:22:05,965 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:22:05,965 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:22:05,999 - 
-
-2023-04-27 04:22:05,999 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:22:17,673 - Epoch: [273][ 50/ 518] Overall Loss 2.967865 Objective Loss 2.967865 LR 0.000008 Time 0.233419
-2023-04-27 04:22:28,503 - Epoch: [273][ 100/ 518] Overall Loss 2.952416 Objective Loss 2.952416 LR 0.000008 Time 0.224994
-2023-04-27 04:22:39,381 - Epoch: [273][ 150/ 518] Overall Loss 2.955802 Objective Loss 2.955802 LR 0.000008 Time 0.222505
-2023-04-27 04:22:50,232 - Epoch: [273][ 200/ 518] Overall Loss 2.948194 Objective Loss 2.948194 LR 0.000008 Time 0.221126
-2023-04-27 04:23:01,065 - Epoch: [273][ 250/ 518] Overall Loss 2.951914 Objective Loss 2.951914 LR 0.000008 Time 0.220227
-2023-04-27 04:23:11,945 - Epoch: [273][ 300/ 518] Overall Loss 2.947430 Objective Loss 2.947430 LR 0.000008 Time 0.219783
-2023-04-27 04:23:22,787 - Epoch: [273][ 350/ 518] Overall Loss 2.938222 Objective Loss 2.938222 LR 0.000008 Time 0.219359
-2023-04-27 04:23:33,643 - Epoch: [273][ 400/ 518] Overall Loss 2.943493 Objective Loss 2.943493 LR 0.000008 Time 0.219075
-2023-04-27 04:23:44,433 - Epoch: [273][ 450/ 518] Overall Loss 2.940493 Objective Loss 2.940493 LR 0.000008 Time 0.218709
-2023-04-27 04:23:55,343 - Epoch: [273][ 500/ 518] Overall Loss 2.947603 Objective Loss 2.947603 LR 0.000008 Time 0.218655
-2023-04-27 04:23:59,116 - Epoch: [273][ 518/ 518] Overall Loss 2.947886 Objective Loss 2.947886 LR 0.000008 Time 0.218339
-2023-04-27 04:23:59,192 - --- validate (epoch=273)-----------
-2023-04-27 04:23:59,193 - 4952 samples (32 per mini-batch)
-2023-04-27 04:24:07,466 - Epoch: [273][ 50/ 155] Loss 3.102883 mAP 0.442735
-2023-04-27 04:24:15,356 - Epoch: [273][ 100/ 155] Loss 3.085895 mAP 0.443532
-2023-04-27 04:24:23,219 - Epoch: [273][ 150/ 155] Loss 3.088370 mAP 0.434602
-2023-04-27 04:24:23,928 - Epoch: [273][ 155/ 155] Loss 3.088096 mAP 0.432939
-2023-04-27 04:24:23,997 - ==> mAP: 0.43294 Loss: 3.088
-
-2023-04-27 04:24:24,001 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:24:24,001 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:24:24,036 - 
-
-2023-04-27 04:24:24,036 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:24:35,545 - Epoch: [274][ 50/ 518] Overall Loss 2.923915 Objective Loss 2.923915 LR 0.000008 Time 0.230126
-2023-04-27 04:24:46,388 - Epoch: [274][ 100/ 518] Overall Loss 2.909212 Objective Loss 2.909212 LR 0.000008 Time 0.223472
-2023-04-27 04:24:57,230 - Epoch: [274][ 150/ 518] Overall Loss 2.925743 Objective Loss 2.925743 LR 0.000008 Time 0.221252
-2023-04-27 04:25:08,032 - Epoch: [274][ 200/ 518] Overall Loss 2.938922 Objective Loss 2.938922 LR 0.000008 Time 0.219939
-2023-04-27 04:25:18,831 - Epoch: [274][ 250/ 518] Overall Loss 2.935996 Objective Loss 2.935996 LR 0.000008 Time 0.219142
-2023-04-27 04:25:29,475 - Epoch: [274][ 300/ 518] Overall Loss 2.940027 Objective Loss 2.940027 LR 0.000008 Time 0.218093
-2023-04-27 04:25:40,291 - Epoch: [274][ 350/ 518] Overall Loss 2.945195 Objective Loss 2.945195 LR 0.000008 Time 0.217835
-2023-04-27 04:25:51,102 - Epoch: [274][ 400/ 518] Overall Loss 2.950226 Objective Loss 2.950226 LR 0.000008 Time 0.217630
-2023-04-27 04:26:01,887 - Epoch: [274][ 450/ 518] Overall Loss 2.949610 Objective Loss 2.949610 LR 0.000008 Time 0.217411
-2023-04-27 04:26:12,709 - Epoch: [274][ 500/ 518] Overall Loss 2.952144 Objective Loss 2.952144 LR 0.000008 Time 0.217311
-2023-04-27 04:26:16,489 - Epoch: [274][ 518/ 518] Overall Loss 2.952191 Objective Loss 2.952191 LR 0.000008 Time 0.217057
-2023-04-27 04:26:16,566 - --- validate (epoch=274)-----------
-2023-04-27 04:26:16,567 - 4952 samples (32 per mini-batch)
-2023-04-27 04:26:24,833 - Epoch: [274][ 50/ 155] Loss 3.152308 mAP 0.435724
-2023-04-27 04:26:32,761 - Epoch: [274][ 100/ 155] Loss 3.114090 mAP 0.443574
-2023-04-27 04:26:40,577 - Epoch: [274][ 150/ 155] Loss 3.089632 mAP 0.438103
-2023-04-27 04:26:41,293 - Epoch: [274][ 155/ 155] Loss 3.090330 mAP 0.435754
-2023-04-27 04:26:41,374 - ==> mAP: 0.43575 Loss: 3.090
-
-2023-04-27 04:26:41,378 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:26:41,378 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:26:41,412 - 
-
-2023-04-27 04:26:41,412 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:26:53,172 - Epoch: [275][ 50/ 518] Overall Loss 2.932323 Objective Loss 2.932323 LR 0.000008 Time 0.235141
-2023-04-27 04:27:03,966 - Epoch: [275][ 100/ 518] Overall Loss 2.927982 Objective Loss 2.927982 LR 0.000008 Time 0.225495
-2023-04-27 04:27:14,710 - Epoch: [275][ 150/ 518] Overall Loss 2.935759 Objective Loss 2.935759 LR 0.000008 Time 0.221950
-2023-04-27 04:27:25,510 - Epoch: [275][ 200/ 518] Overall Loss 2.938367 Objective Loss 2.938367 LR 0.000008 Time 0.220453
-2023-04-27 04:27:36,278 - Epoch: [275][ 250/ 518] Overall Loss 2.939211 Objective Loss 2.939211 LR 0.000008 Time 0.219427
-2023-04-27 04:27:46,991 - Epoch: [275][ 300/ 518] Overall Loss 2.942019 Objective Loss 2.942019 LR 0.000008 Time 0.218561
-2023-04-27 04:27:57,878 - Epoch: [275][ 350/ 518] Overall Loss 2.942978 Objective Loss 2.942978 LR 0.000008 Time 0.218439
-2023-04-27 04:28:08,735 - Epoch: [275][ 400/ 518] Overall Loss 2.944065 Objective Loss 2.944065 LR 0.000008 Time 0.218273
-2023-04-27 04:28:19,642 - Epoch: [275][ 450/ 518] Overall Loss 2.946423 Objective Loss 2.946423 LR 0.000008 Time 0.218256
-2023-04-27 04:28:30,486 - Epoch: [275][ 500/ 518] Overall Loss 2.940299 Objective Loss 2.940299 LR 0.000008 Time 0.218115
-2023-04-27 04:28:34,277 - Epoch: [275][ 518/ 518] Overall Loss 2.938989 Objective Loss 2.938989 LR 0.000008 Time 0.217852
-2023-04-27 04:28:34,353 - --- validate (epoch=275)-----------
-2023-04-27 04:28:34,354 - 4952 samples (32 per mini-batch)
-2023-04-27 04:28:42,610 - Epoch: [275][ 50/ 155] Loss 3.086319 mAP 0.447668
-2023-04-27 04:28:50,512 - Epoch: [275][ 100/ 155] Loss 3.098762 mAP 0.444151
-2023-04-27 04:28:58,373 - Epoch: [275][ 150/ 155] Loss 3.081419 mAP 0.441319
-2023-04-27 04:28:59,080 - Epoch: [275][ 155/ 155] Loss 3.083455 mAP 0.438655
-2023-04-27 04:28:59,153 - ==> mAP: 0.43866 Loss: 3.083
-
-2023-04-27 04:28:59,157 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:28:59,157 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:28:59,191 - 
-
-2023-04-27 04:28:59,191 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:29:10,729 - Epoch: [276][ 50/ 518] Overall Loss 2.955948 Objective Loss 2.955948 LR 0.000008 Time 0.230700
-2023-04-27 04:29:21,461 - Epoch: [276][ 100/ 518] Overall Loss 2.934062 Objective Loss 2.934062 LR 0.000008 Time 0.222649
-2023-04-27 04:29:32,260 - Epoch: [276][ 150/ 518] Overall Loss 2.936394 Objective Loss 2.936394 LR 0.000008 Time 0.220416
-2023-04-27 04:29:43,084 - Epoch: [276][ 200/ 518] Overall Loss 2.936130 Objective Loss 2.936130 LR 0.000008 Time 0.219426
-2023-04-27 04:29:53,998 - Epoch: [276][ 250/ 518] Overall Loss 2.943391 Objective Loss 2.943391 LR 0.000008 Time 0.219189
-2023-04-27 04:30:04,725 - Epoch: [276][ 300/ 518] Overall Loss 2.932665 Objective Loss 2.932665 LR 0.000008 Time 0.218410
-2023-04-27 04:30:15,598 - Epoch: [276][ 350/ 518] Overall Loss 2.931324 Objective Loss 2.931324 LR 0.000008 Time 0.218269
-2023-04-27 04:30:26,503 - Epoch: [276][ 400/ 518] Overall Loss 2.929658 Objective Loss 2.929658 LR 0.000008 Time 0.218245
-2023-04-27 04:30:37,360 - Epoch: [276][ 450/ 518] Overall Loss 2.930414 Objective Loss 2.930414 LR 0.000008 Time 0.218117
-2023-04-27 04:30:48,133 - Epoch: [276][ 500/ 518] Overall Loss 2.933673 Objective Loss 2.933673 LR 0.000008 Time 0.217849
-2023-04-27 04:30:51,892 - Epoch: [276][ 518/ 518] Overall Loss 2.934845 Objective Loss 2.934845 LR 0.000008 Time 0.217535
-2023-04-27 04:30:51,967 - --- validate (epoch=276)-----------
-2023-04-27 04:30:51,967 - 4952 samples (32 per mini-batch)
-2023-04-27 04:31:00,264 - Epoch: [276][ 50/ 155] Loss 3.067852 mAP 0.435315
-2023-04-27 04:31:08,178 - Epoch: [276][ 100/ 155] Loss 3.099052 mAP 0.441556
-2023-04-27 04:31:16,054 - Epoch: [276][ 150/ 155] Loss 3.081428 mAP 0.435440
-2023-04-27 04:31:16,782 - Epoch: [276][ 155/ 155] Loss 3.084396 mAP 0.435081
-2023-04-27 04:31:16,855 - ==> mAP: 0.43508 Loss: 3.084
-
-2023-04-27 04:31:16,859 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:31:16,859 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:31:16,893 - 
-
-2023-04-27 04:31:16,894 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:31:28,648 - Epoch: [277][ 50/ 518] Overall Loss 2.981578 Objective Loss 2.981578 LR 0.000008 Time 0.235037
-2023-04-27 04:31:39,411 - Epoch: [277][ 100/ 518] Overall Loss 2.937921 Objective Loss 2.937921 LR 0.000008 Time 0.225127
-2023-04-27 04:31:50,361 - Epoch: [277][ 150/ 518] Overall Loss 2.953810 Objective Loss 2.953810 LR 0.000008 Time 0.223076
-2023-04-27 04:32:01,184 - Epoch: [277][ 200/ 518] Overall Loss 2.970160 Objective Loss 2.970160 LR 0.000008 Time 0.221417
-2023-04-27 04:32:11,960 - Epoch: [277][ 250/ 518] Overall Loss 2.968933 Objective Loss 2.968933 LR 0.000008 Time 0.220229
-2023-04-27 04:32:22,766 - Epoch: [277][ 300/ 518] Overall Loss 2.962673 Objective Loss 2.962673 LR 0.000008 Time 0.219541
-2023-04-27 04:32:33,596 - Epoch: [277][ 350/ 518] Overall Loss 2.960302 Objective Loss 2.960302 LR 0.000008 Time 0.219115
-2023-04-27 04:32:44,508 - Epoch: [277][ 400/ 518] Overall Loss 2.957348 Objective Loss 2.957348 LR 0.000008 Time 0.219002
-2023-04-27 04:32:55,313 - Epoch: [277][ 450/ 518] Overall Loss 2.959697 Objective Loss 2.959697 LR 0.000008 Time 0.218676
-2023-04-27 04:33:06,232 - Epoch: [277][ 500/ 518] Overall Loss 2.958548 Objective Loss 2.958548 LR 0.000008 Time 0.218643
-2023-04-27 04:33:09,974 - Epoch: [277][ 518/ 518] Overall Loss 2.956048 Objective Loss 2.956048 LR 0.000008 Time 0.218268
-2023-04-27 04:33:10,051 - --- validate (epoch=277)-----------
-2023-04-27 04:33:10,051 - 4952 samples (32 per mini-batch)
-2023-04-27 04:33:18,287 - Epoch: [277][ 50/ 155] Loss 3.097670 mAP 0.422080
-2023-04-27 04:33:26,164 - Epoch: [277][ 100/ 155] Loss 3.098254 mAP 0.426799
-2023-04-27 04:33:34,019 - Epoch: [277][ 150/ 155] Loss 3.101232 mAP 0.432692
-2023-04-27 04:33:34,729 - Epoch: [277][ 155/ 155] Loss 3.096425 mAP 0.432949
-2023-04-27 04:33:34,803 - ==> mAP: 0.43295 Loss: 3.096
-
-2023-04-27 04:33:34,807 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:33:34,807 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:33:34,841 - 
-
-2023-04-27 04:33:34,841 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:33:46,528 - Epoch: [278][ 50/ 518] Overall Loss 2.953580 Objective Loss 2.953580 LR 0.000008 Time 0.233685
-2023-04-27 04:33:57,386 - Epoch: [278][ 100/ 518] Overall Loss 2.930538 Objective Loss 2.930538 LR 0.000008 Time 0.225407
-2023-04-27 04:34:08,259 - Epoch: [278][ 150/ 518] Overall Loss 2.928814 Objective Loss 2.928814 LR 0.000008 Time 0.222747
-2023-04-27 04:34:19,087 - Epoch: [278][ 200/ 518] Overall Loss 2.917202 Objective Loss 2.917202 LR 0.000008 Time 0.221192
-2023-04-27 04:34:29,885 - Epoch: [278][ 250/ 518] Overall Loss 2.922639 Objective Loss 2.922639 LR 0.000008 Time 0.220138
-2023-04-27 04:34:40,619 - Epoch: [278][ 300/ 518] Overall Loss 2.923455 Objective Loss 2.923455 LR 0.000008 Time 0.219224
-2023-04-27 04:34:51,418 - Epoch: [278][ 350/ 518] Overall Loss 2.928992 Objective Loss 2.928992 LR 0.000008 Time 0.218755
-2023-04-27 04:35:02,326 - Epoch: [278][ 400/ 518] Overall Loss 2.930675 Objective Loss 2.930675 LR 0.000008 Time 0.218678
-2023-04-27 04:35:13,121 - Epoch: [278][ 450/ 518] Overall Loss 2.936264 Objective Loss 2.936264 LR 0.000008 Time 0.218366
-2023-04-27 04:35:23,976 - Epoch: [278][ 500/ 518] Overall Loss 2.935388 Objective Loss 2.935388 LR 0.000008 Time 0.218236
-2023-04-27 04:35:27,716 - Epoch: [278][ 518/ 518] Overall Loss 2.932371 Objective Loss 2.932371 LR 0.000008 Time 0.217870
-2023-04-27 04:35:27,792 - --- validate (epoch=278)-----------
-2023-04-27 04:35:27,793 - 4952 samples (32 per mini-batch)
-2023-04-27 04:35:36,060 - Epoch: [278][ 50/ 155] Loss 3.042210 mAP 0.449255
-2023-04-27 04:35:43,941 - Epoch: [278][ 100/ 155] Loss 3.092301 mAP 0.437435
-2023-04-27 04:35:51,813 - Epoch: [278][ 150/ 155] Loss 3.085030 mAP 0.435662
-2023-04-27 04:35:52,532 - Epoch: [278][ 155/ 155] Loss 3.086845 mAP 0.436464
-2023-04-27 04:35:52,605 - ==> mAP: 0.43646 Loss: 3.087
-
-2023-04-27 04:35:52,609 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:35:52,609 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:35:52,643 - 
-
-2023-04-27 04:35:52,643 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:36:04,225 - Epoch: [279][ 50/ 518] Overall Loss 2.901108 Objective Loss 2.901108 LR 0.000008 Time 0.231584
-2023-04-27 04:36:15,084 - Epoch: [279][ 100/ 518] Overall Loss 2.909437 Objective Loss 2.909437 LR 0.000008 Time 0.224364
-2023-04-27 04:36:25,840 - Epoch: [279][ 150/ 518] Overall Loss 2.926170 Objective Loss 2.926170 LR 0.000008 Time 0.221273
-2023-04-27 04:36:36,625 - Epoch: [279][ 200/ 518] Overall Loss 2.942182 Objective Loss 2.942182 LR 0.000008 Time 0.219874
-2023-04-27 04:36:47,402 - Epoch: [279][ 250/ 518] Overall Loss 2.954367 Objective Loss 2.954367 LR 0.000008 Time 0.219001
-2023-04-27 04:36:58,191 - Epoch: [279][ 300/ 518] Overall Loss 2.948124 Objective Loss 2.948124 LR 0.000008 Time 0.218458
-2023-04-27 04:37:09,073 - Epoch: [279][ 350/ 518] Overall Loss 2.943151 Objective Loss 2.943151 LR 0.000008 Time 0.218336
-2023-04-27 04:37:19,831 - Epoch: [279][ 400/ 518] Overall Loss 2.946438 Objective Loss 2.946438 LR 0.000008 Time 0.217935
-2023-04-27 04:37:30,632 - Epoch: [279][ 450/ 518] Overall Loss 2.948950 Objective Loss 2.948950 LR 0.000008 Time 0.217718
-2023-04-27 04:37:41,365 - Epoch: [279][ 500/ 518] Overall Loss 2.944790 Objective Loss 2.944790 LR 0.000008 Time 0.217409
-2023-04-27 04:37:45,119 - Epoch: [279][ 518/ 518] Overall Loss 2.943813 Objective Loss 2.943813 LR 0.000008 Time 0.217102
-2023-04-27 04:37:45,195 - --- validate (epoch=279)-----------
-2023-04-27 04:37:45,195 - 4952 samples (32 per mini-batch)
-2023-04-27 04:37:53,416 - Epoch: [279][ 50/ 155] Loss 3.076701 mAP 0.429662
-2023-04-27 04:38:01,269 - Epoch: [279][ 100/ 155] Loss 3.102287 mAP 0.424192
-2023-04-27 04:38:09,137 - Epoch: [279][ 150/ 155] Loss 3.097395 mAP 0.427430
-2023-04-27 04:38:09,847 - Epoch: [279][ 155/ 155] Loss 3.096759 mAP 0.426679
-2023-04-27 04:38:09,923 - ==> mAP: 0.42668 Loss: 3.097
-
-2023-04-27 04:38:09,926 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270]
-2023-04-27 04:38:09,927 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 04:38:09,961 - 
-
-2023-04-27 04:38:09,961 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 04:38:21,485 - Epoch: [280][ 50/ 518] Overall Loss 2.952488 Objective Loss 2.952488 LR 0.000008 Time 0.230423
-2023-04-27 04:38:32,364 - Epoch: [280][ 100/ 518] Overall Loss 2.961361 Objective Loss 2.961361 LR 0.000008 Time 0.223986
-2023-04-27 04:38:43,258 - Epoch: [280][ 150/ 518] Overall Loss 2.950216 Objective Loss 2.950216 LR 0.000008 Time 0.221944
-2023-04-27 04:38:53,990 - Epoch: [280][ 200/ 518] Overall Loss 2.947070 Objective Loss 2.947070 LR 0.000008 Time 0.220108
-2023-04-27 04:39:04,828 - Epoch: [280][ 250/ 518] Overall Loss 2.945748 Objective Loss 2.945748 LR 0.000008 Time 0.219430
-2023-04-27 04:39:15,603 - Epoch: [280][ 300/ 518] Overall Loss 2.945397 Objective Loss 2.945397 LR 0.000008 Time 0.218771 -2023-04-27 04:39:26,477 - Epoch: [280][ 350/ 518] Overall Loss 2.943687 Objective Loss 2.943687 LR 0.000008 Time 0.218581 -2023-04-27 04:39:37,167 - Epoch: [280][ 400/ 518] Overall Loss 2.941532 Objective Loss 2.941532 LR 0.000008 Time 0.217980 -2023-04-27 04:39:47,923 - Epoch: [280][ 450/ 518] Overall Loss 2.937614 Objective Loss 2.937614 LR 0.000008 Time 0.217658 -2023-04-27 04:39:58,789 - Epoch: [280][ 500/ 518] Overall Loss 2.936059 Objective Loss 2.936059 LR 0.000008 Time 0.217622 -2023-04-27 04:40:02,585 - Epoch: [280][ 518/ 518] Overall Loss 2.932289 Objective Loss 2.932289 LR 0.000008 Time 0.217387 -2023-04-27 04:40:02,662 - --- validate (epoch=280)----------- -2023-04-27 04:40:02,663 - 4952 samples (32 per mini-batch) -2023-04-27 04:40:10,998 - Epoch: [280][ 50/ 155] Loss 3.094279 mAP 0.443137 -2023-04-27 04:40:18,859 - Epoch: [280][ 100/ 155] Loss 3.100650 mAP 0.430133 -2023-04-27 04:40:26,721 - Epoch: [280][ 150/ 155] Loss 3.098188 mAP 0.436077 -2023-04-27 04:40:27,437 - Epoch: [280][ 155/ 155] Loss 3.097129 mAP 0.435325 -2023-04-27 04:40:27,510 - ==> mAP: 0.43532 Loss: 3.097 - -2023-04-27 04:40:27,514 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 04:40:27,514 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 04:40:27,550 - - -2023-04-27 04:40:27,550 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 04:40:39,055 - Epoch: [281][ 50/ 518] Overall Loss 2.937514 Objective Loss 2.937514 LR 0.000008 Time 0.230054 -2023-04-27 04:40:49,865 - Epoch: [281][ 100/ 518] Overall Loss 2.940364 Objective Loss 2.940364 LR 0.000008 Time 0.223111 -2023-04-27 04:41:00,670 - Epoch: [281][ 150/ 518] Overall Loss 2.937374 Objective Loss 2.937374 LR 0.000008 Time 0.220763 -2023-04-27 04:41:11,491 - Epoch: [281][ 200/ 518] Overall Loss 2.927056 Objective Loss 
2.927056 LR 0.000008 Time 0.219669 -2023-04-27 04:41:22,299 - Epoch: [281][ 250/ 518] Overall Loss 2.935883 Objective Loss 2.935883 LR 0.000008 Time 0.218962 -2023-04-27 04:41:33,282 - Epoch: [281][ 300/ 518] Overall Loss 2.937159 Objective Loss 2.937159 LR 0.000008 Time 0.219073 -2023-04-27 04:41:44,159 - Epoch: [281][ 350/ 518] Overall Loss 2.928402 Objective Loss 2.928402 LR 0.000008 Time 0.218849 -2023-04-27 04:41:54,979 - Epoch: [281][ 400/ 518] Overall Loss 2.930081 Objective Loss 2.930081 LR 0.000008 Time 0.218538 -2023-04-27 04:42:05,769 - Epoch: [281][ 450/ 518] Overall Loss 2.935111 Objective Loss 2.935111 LR 0.000008 Time 0.218232 -2023-04-27 04:42:16,587 - Epoch: [281][ 500/ 518] Overall Loss 2.932980 Objective Loss 2.932980 LR 0.000008 Time 0.218042 -2023-04-27 04:42:20,367 - Epoch: [281][ 518/ 518] Overall Loss 2.936174 Objective Loss 2.936174 LR 0.000008 Time 0.217761 -2023-04-27 04:42:20,442 - --- validate (epoch=281)----------- -2023-04-27 04:42:20,443 - 4952 samples (32 per mini-batch) -2023-04-27 04:42:28,691 - Epoch: [281][ 50/ 155] Loss 3.078466 mAP 0.415061 -2023-04-27 04:42:36,607 - Epoch: [281][ 100/ 155] Loss 3.094941 mAP 0.425258 -2023-04-27 04:42:44,505 - Epoch: [281][ 150/ 155] Loss 3.092210 mAP 0.423823 -2023-04-27 04:42:45,225 - Epoch: [281][ 155/ 155] Loss 3.093438 mAP 0.423386 -2023-04-27 04:42:45,302 - ==> mAP: 0.42339 Loss: 3.093 - -2023-04-27 04:42:45,306 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 04:42:45,306 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 04:42:45,340 - - -2023-04-27 04:42:45,340 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 04:42:57,002 - Epoch: [282][ 50/ 518] Overall Loss 3.016117 Objective Loss 3.016117 LR 0.000008 Time 0.233191 -2023-04-27 04:43:07,711 - Epoch: [282][ 100/ 518] Overall Loss 2.980734 Objective Loss 2.980734 LR 0.000008 Time 0.223667 -2023-04-27 04:43:18,501 - Epoch: [282][ 150/ 518] 
Overall Loss 2.972292 Objective Loss 2.972292 LR 0.000008 Time 0.221036 -2023-04-27 04:43:29,313 - Epoch: [282][ 200/ 518] Overall Loss 2.946373 Objective Loss 2.946373 LR 0.000008 Time 0.219826 -2023-04-27 04:43:40,103 - Epoch: [282][ 250/ 518] Overall Loss 2.963075 Objective Loss 2.963075 LR 0.000008 Time 0.219017 -2023-04-27 04:43:50,835 - Epoch: [282][ 300/ 518] Overall Loss 2.951786 Objective Loss 2.951786 LR 0.000008 Time 0.218282 -2023-04-27 04:44:01,652 - Epoch: [282][ 350/ 518] Overall Loss 2.948403 Objective Loss 2.948403 LR 0.000008 Time 0.217998 -2023-04-27 04:44:12,601 - Epoch: [282][ 400/ 518] Overall Loss 2.941562 Objective Loss 2.941562 LR 0.000008 Time 0.218117 -2023-04-27 04:44:23,380 - Epoch: [282][ 450/ 518] Overall Loss 2.941648 Objective Loss 2.941648 LR 0.000008 Time 0.217833 -2023-04-27 04:44:34,214 - Epoch: [282][ 500/ 518] Overall Loss 2.940409 Objective Loss 2.940409 LR 0.000008 Time 0.217715 -2023-04-27 04:44:37,968 - Epoch: [282][ 518/ 518] Overall Loss 2.943657 Objective Loss 2.943657 LR 0.000008 Time 0.217394 -2023-04-27 04:44:38,044 - --- validate (epoch=282)----------- -2023-04-27 04:44:38,044 - 4952 samples (32 per mini-batch) -2023-04-27 04:44:46,313 - Epoch: [282][ 50/ 155] Loss 3.079115 mAP 0.459365 -2023-04-27 04:44:54,199 - Epoch: [282][ 100/ 155] Loss 3.076551 mAP 0.447932 -2023-04-27 04:45:02,041 - Epoch: [282][ 150/ 155] Loss 3.083859 mAP 0.438823 -2023-04-27 04:45:02,750 - Epoch: [282][ 155/ 155] Loss 3.086026 mAP 0.437888 -2023-04-27 04:45:02,816 - ==> mAP: 0.43789 Loss: 3.086 - -2023-04-27 04:45:02,819 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 04:45:02,819 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 04:45:02,853 - - -2023-04-27 04:45:02,853 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 04:45:14,516 - Epoch: [283][ 50/ 518] Overall Loss 2.912525 Objective Loss 2.912525 LR 0.000008 Time 0.233192 -2023-04-27 
04:45:25,369 - Epoch: [283][ 100/ 518] Overall Loss 2.923019 Objective Loss 2.923019 LR 0.000008 Time 0.225117 -2023-04-27 04:45:36,185 - Epoch: [283][ 150/ 518] Overall Loss 2.908248 Objective Loss 2.908248 LR 0.000008 Time 0.222174 -2023-04-27 04:45:47,008 - Epoch: [283][ 200/ 518] Overall Loss 2.921184 Objective Loss 2.921184 LR 0.000008 Time 0.220734 -2023-04-27 04:45:57,848 - Epoch: [283][ 250/ 518] Overall Loss 2.933621 Objective Loss 2.933621 LR 0.000008 Time 0.219943 -2023-04-27 04:46:08,712 - Epoch: [283][ 300/ 518] Overall Loss 2.928550 Objective Loss 2.928550 LR 0.000008 Time 0.219493 -2023-04-27 04:46:19,599 - Epoch: [283][ 350/ 518] Overall Loss 2.929033 Objective Loss 2.929033 LR 0.000008 Time 0.219238 -2023-04-27 04:46:30,332 - Epoch: [283][ 400/ 518] Overall Loss 2.930128 Objective Loss 2.930128 LR 0.000008 Time 0.218663 -2023-04-27 04:46:41,102 - Epoch: [283][ 450/ 518] Overall Loss 2.929946 Objective Loss 2.929946 LR 0.000008 Time 0.218296 -2023-04-27 04:46:51,852 - Epoch: [283][ 500/ 518] Overall Loss 2.925415 Objective Loss 2.925415 LR 0.000008 Time 0.217964 -2023-04-27 04:46:55,578 - Epoch: [283][ 518/ 518] Overall Loss 2.922500 Objective Loss 2.922500 LR 0.000008 Time 0.217582 -2023-04-27 04:46:55,654 - --- validate (epoch=283)----------- -2023-04-27 04:46:55,654 - 4952 samples (32 per mini-batch) -2023-04-27 04:47:03,938 - Epoch: [283][ 50/ 155] Loss 3.148921 mAP 0.429614 -2023-04-27 04:47:11,860 - Epoch: [283][ 100/ 155] Loss 3.105006 mAP 0.436788 -2023-04-27 04:47:19,775 - Epoch: [283][ 150/ 155] Loss 3.093129 mAP 0.437121 -2023-04-27 04:47:20,493 - Epoch: [283][ 155/ 155] Loss 3.094233 mAP 0.437217 -2023-04-27 04:47:20,568 - ==> mAP: 0.43722 Loss: 3.094 - -2023-04-27 04:47:20,571 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 04:47:20,571 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 04:47:20,605 - - -2023-04-27 04:47:20,605 - Training epoch: 16551 
samples (32 per mini-batch) -2023-04-27 04:47:32,180 - Epoch: [284][ 50/ 518] Overall Loss 2.936448 Objective Loss 2.936448 LR 0.000008 Time 0.231432 -2023-04-27 04:47:42,990 - Epoch: [284][ 100/ 518] Overall Loss 2.973280 Objective Loss 2.973280 LR 0.000008 Time 0.223799 -2023-04-27 04:47:53,786 - Epoch: [284][ 150/ 518] Overall Loss 2.954745 Objective Loss 2.954745 LR 0.000008 Time 0.221164 -2023-04-27 04:48:04,611 - Epoch: [284][ 200/ 518] Overall Loss 2.951104 Objective Loss 2.951104 LR 0.000008 Time 0.219989 -2023-04-27 04:48:15,517 - Epoch: [284][ 250/ 518] Overall Loss 2.948218 Objective Loss 2.948218 LR 0.000008 Time 0.219610 -2023-04-27 04:48:26,325 - Epoch: [284][ 300/ 518] Overall Loss 2.941096 Objective Loss 2.941096 LR 0.000008 Time 0.219029 -2023-04-27 04:48:37,108 - Epoch: [284][ 350/ 518] Overall Loss 2.946830 Objective Loss 2.946830 LR 0.000008 Time 0.218545 -2023-04-27 04:48:47,911 - Epoch: [284][ 400/ 518] Overall Loss 2.939171 Objective Loss 2.939171 LR 0.000008 Time 0.218229 -2023-04-27 04:48:58,714 - Epoch: [284][ 450/ 518] Overall Loss 2.935497 Objective Loss 2.935497 LR 0.000008 Time 0.217985 -2023-04-27 04:49:09,467 - Epoch: [284][ 500/ 518] Overall Loss 2.933925 Objective Loss 2.933925 LR 0.000008 Time 0.217688 -2023-04-27 04:49:13,278 - Epoch: [284][ 518/ 518] Overall Loss 2.931564 Objective Loss 2.931564 LR 0.000008 Time 0.217481 -2023-04-27 04:49:13,354 - --- validate (epoch=284)----------- -2023-04-27 04:49:13,354 - 4952 samples (32 per mini-batch) -2023-04-27 04:49:21,648 - Epoch: [284][ 50/ 155] Loss 3.101132 mAP 0.418248 -2023-04-27 04:49:29,586 - Epoch: [284][ 100/ 155] Loss 3.102800 mAP 0.428656 -2023-04-27 04:49:37,475 - Epoch: [284][ 150/ 155] Loss 3.094617 mAP 0.440318 -2023-04-27 04:49:38,194 - Epoch: [284][ 155/ 155] Loss 3.093642 mAP 0.440682 -2023-04-27 04:49:38,267 - ==> mAP: 0.44068 Loss: 3.094 - -2023-04-27 04:49:38,271 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 
04:49:38,271 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 04:49:38,305 - - -2023-04-27 04:49:38,305 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 04:49:49,916 - Epoch: [285][ 50/ 518] Overall Loss 2.917513 Objective Loss 2.917513 LR 0.000008 Time 0.232165 -2023-04-27 04:50:00,679 - Epoch: [285][ 100/ 518] Overall Loss 2.907937 Objective Loss 2.907937 LR 0.000008 Time 0.223702 -2023-04-27 04:50:11,560 - Epoch: [285][ 150/ 518] Overall Loss 2.903786 Objective Loss 2.903786 LR 0.000008 Time 0.221659 -2023-04-27 04:50:22,427 - Epoch: [285][ 200/ 518] Overall Loss 2.908856 Objective Loss 2.908856 LR 0.000008 Time 0.220571 -2023-04-27 04:50:33,288 - Epoch: [285][ 250/ 518] Overall Loss 2.906793 Objective Loss 2.906793 LR 0.000008 Time 0.219895 -2023-04-27 04:50:44,077 - Epoch: [285][ 300/ 518] Overall Loss 2.915454 Objective Loss 2.915454 LR 0.000008 Time 0.219205 -2023-04-27 04:50:54,905 - Epoch: [285][ 350/ 518] Overall Loss 2.914192 Objective Loss 2.914192 LR 0.000008 Time 0.218825 -2023-04-27 04:51:05,733 - Epoch: [285][ 400/ 518] Overall Loss 2.918480 Objective Loss 2.918480 LR 0.000008 Time 0.218535 -2023-04-27 04:51:16,592 - Epoch: [285][ 450/ 518] Overall Loss 2.918362 Objective Loss 2.918362 LR 0.000008 Time 0.218383 -2023-04-27 04:51:27,448 - Epoch: [285][ 500/ 518] Overall Loss 2.918047 Objective Loss 2.918047 LR 0.000008 Time 0.218252 -2023-04-27 04:51:31,145 - Epoch: [285][ 518/ 518] Overall Loss 2.919790 Objective Loss 2.919790 LR 0.000008 Time 0.217805 -2023-04-27 04:51:31,221 - --- validate (epoch=285)----------- -2023-04-27 04:51:31,222 - 4952 samples (32 per mini-batch) -2023-04-27 04:51:39,518 - Epoch: [285][ 50/ 155] Loss 3.100294 mAP 0.426286 -2023-04-27 04:51:47,403 - Epoch: [285][ 100/ 155] Loss 3.082893 mAP 0.434670 -2023-04-27 04:51:55,262 - Epoch: [285][ 150/ 155] Loss 3.091257 mAP 0.434119 -2023-04-27 04:51:55,975 - Epoch: [285][ 155/ 155] Loss 3.093361 mAP 0.432579 -2023-04-27 04:51:56,041 
- ==> mAP: 0.43258 Loss: 3.093 - -2023-04-27 04:51:56,045 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 04:51:56,045 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 04:51:56,080 - - -2023-04-27 04:51:56,080 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 04:52:07,746 - Epoch: [286][ 50/ 518] Overall Loss 2.966886 Objective Loss 2.966886 LR 0.000008 Time 0.233262 -2023-04-27 04:52:18,648 - Epoch: [286][ 100/ 518] Overall Loss 2.948885 Objective Loss 2.948885 LR 0.000008 Time 0.225633 -2023-04-27 04:52:29,443 - Epoch: [286][ 150/ 518] Overall Loss 2.943470 Objective Loss 2.943470 LR 0.000008 Time 0.222379 -2023-04-27 04:52:40,169 - Epoch: [286][ 200/ 518] Overall Loss 2.953281 Objective Loss 2.953281 LR 0.000008 Time 0.220408 -2023-04-27 04:52:50,960 - Epoch: [286][ 250/ 518] Overall Loss 2.945837 Objective Loss 2.945837 LR 0.000008 Time 0.219484 -2023-04-27 04:53:01,645 - Epoch: [286][ 300/ 518] Overall Loss 2.933130 Objective Loss 2.933130 LR 0.000008 Time 0.218516 -2023-04-27 04:53:12,474 - Epoch: [286][ 350/ 518] Overall Loss 2.935137 Objective Loss 2.935137 LR 0.000008 Time 0.218233 -2023-04-27 04:53:23,293 - Epoch: [286][ 400/ 518] Overall Loss 2.936116 Objective Loss 2.936116 LR 0.000008 Time 0.217997 -2023-04-27 04:53:34,153 - Epoch: [286][ 450/ 518] Overall Loss 2.936821 Objective Loss 2.936821 LR 0.000008 Time 0.217905 -2023-04-27 04:53:44,961 - Epoch: [286][ 500/ 518] Overall Loss 2.935347 Objective Loss 2.935347 LR 0.000008 Time 0.217729 -2023-04-27 04:53:48,703 - Epoch: [286][ 518/ 518] Overall Loss 2.935225 Objective Loss 2.935225 LR 0.000008 Time 0.217385 -2023-04-27 04:53:48,781 - --- validate (epoch=286)----------- -2023-04-27 04:53:48,781 - 4952 samples (32 per mini-batch) -2023-04-27 04:53:57,061 - Epoch: [286][ 50/ 155] Loss 3.047871 mAP 0.435610 -2023-04-27 04:54:04,905 - Epoch: [286][ 100/ 155] Loss 3.049760 mAP 0.426265 -2023-04-27 
04:54:12,767 - Epoch: [286][ 150/ 155] Loss 3.068598 mAP 0.424897 -2023-04-27 04:54:13,484 - Epoch: [286][ 155/ 155] Loss 3.069925 mAP 0.424460 -2023-04-27 04:54:13,570 - ==> mAP: 0.42446 Loss: 3.070 - -2023-04-27 04:54:13,574 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 04:54:13,574 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 04:54:13,608 - - -2023-04-27 04:54:13,608 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 04:54:25,330 - Epoch: [287][ 50/ 518] Overall Loss 2.891757 Objective Loss 2.891757 LR 0.000008 Time 0.234380 -2023-04-27 04:54:36,188 - Epoch: [287][ 100/ 518] Overall Loss 2.925907 Objective Loss 2.925907 LR 0.000008 Time 0.225753 -2023-04-27 04:54:46,977 - Epoch: [287][ 150/ 518] Overall Loss 2.925918 Objective Loss 2.925918 LR 0.000008 Time 0.222420 -2023-04-27 04:54:57,784 - Epoch: [287][ 200/ 518] Overall Loss 2.927241 Objective Loss 2.927241 LR 0.000008 Time 0.220843 -2023-04-27 04:55:08,620 - Epoch: [287][ 250/ 518] Overall Loss 2.939288 Objective Loss 2.939288 LR 0.000008 Time 0.220012 -2023-04-27 04:55:19,481 - Epoch: [287][ 300/ 518] Overall Loss 2.937170 Objective Loss 2.937170 LR 0.000008 Time 0.219539 -2023-04-27 04:55:30,281 - Epoch: [287][ 350/ 518] Overall Loss 2.935620 Objective Loss 2.935620 LR 0.000008 Time 0.219031 -2023-04-27 04:55:40,995 - Epoch: [287][ 400/ 518] Overall Loss 2.933362 Objective Loss 2.933362 LR 0.000008 Time 0.218432 -2023-04-27 04:55:51,813 - Epoch: [287][ 450/ 518] Overall Loss 2.926508 Objective Loss 2.926508 LR 0.000008 Time 0.218199 -2023-04-27 04:56:02,661 - Epoch: [287][ 500/ 518] Overall Loss 2.929760 Objective Loss 2.929760 LR 0.000008 Time 0.218071 -2023-04-27 04:56:06,424 - Epoch: [287][ 518/ 518] Overall Loss 2.930230 Objective Loss 2.930230 LR 0.000008 Time 0.217756 -2023-04-27 04:56:06,498 - --- validate (epoch=287)----------- -2023-04-27 04:56:06,499 - 4952 samples (32 per mini-batch) 
-2023-04-27 04:56:14,733 - Epoch: [287][ 50/ 155] Loss 3.094244 mAP 0.426316 -2023-04-27 04:56:22,625 - Epoch: [287][ 100/ 155] Loss 3.080599 mAP 0.436101 -2023-04-27 04:56:30,494 - Epoch: [287][ 150/ 155] Loss 3.091570 mAP 0.438268 -2023-04-27 04:56:31,203 - Epoch: [287][ 155/ 155] Loss 3.091305 mAP 0.438559 -2023-04-27 04:56:31,270 - ==> mAP: 0.43856 Loss: 3.091 - -2023-04-27 04:56:31,274 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 04:56:31,274 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 04:56:31,308 - - -2023-04-27 04:56:31,308 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 04:56:42,849 - Epoch: [288][ 50/ 518] Overall Loss 2.936641 Objective Loss 2.936641 LR 0.000008 Time 0.230762 -2023-04-27 04:56:53,654 - Epoch: [288][ 100/ 518] Overall Loss 2.908985 Objective Loss 2.908985 LR 0.000008 Time 0.223412 -2023-04-27 04:57:04,498 - Epoch: [288][ 150/ 518] Overall Loss 2.901962 Objective Loss 2.901962 LR 0.000008 Time 0.221229 -2023-04-27 04:57:15,253 - Epoch: [288][ 200/ 518] Overall Loss 2.896586 Objective Loss 2.896586 LR 0.000008 Time 0.219685 -2023-04-27 04:57:26,161 - Epoch: [288][ 250/ 518] Overall Loss 2.910661 Objective Loss 2.910661 LR 0.000008 Time 0.219377 -2023-04-27 04:57:37,082 - Epoch: [288][ 300/ 518] Overall Loss 2.911461 Objective Loss 2.911461 LR 0.000008 Time 0.219211 -2023-04-27 04:57:47,905 - Epoch: [288][ 350/ 518] Overall Loss 2.916080 Objective Loss 2.916080 LR 0.000008 Time 0.218814 -2023-04-27 04:57:58,719 - Epoch: [288][ 400/ 518] Overall Loss 2.910488 Objective Loss 2.910488 LR 0.000008 Time 0.218492 -2023-04-27 04:58:09,578 - Epoch: [288][ 450/ 518] Overall Loss 2.913205 Objective Loss 2.913205 LR 0.000008 Time 0.218345 -2023-04-27 04:58:20,414 - Epoch: [288][ 500/ 518] Overall Loss 2.921768 Objective Loss 2.921768 LR 0.000008 Time 0.218178 -2023-04-27 04:58:24,156 - Epoch: [288][ 518/ 518] Overall Loss 2.922844 Objective Loss 
2.922844 LR 0.000008 Time 0.217820 -2023-04-27 04:58:24,231 - --- validate (epoch=288)----------- -2023-04-27 04:58:24,231 - 4952 samples (32 per mini-batch) -2023-04-27 04:58:32,507 - Epoch: [288][ 50/ 155] Loss 3.090659 mAP 0.447918 -2023-04-27 04:58:40,418 - Epoch: [288][ 100/ 155] Loss 3.108958 mAP 0.438976 -2023-04-27 04:58:48,326 - Epoch: [288][ 150/ 155] Loss 3.078560 mAP 0.442699 -2023-04-27 04:58:49,033 - Epoch: [288][ 155/ 155] Loss 3.073228 mAP 0.441630 -2023-04-27 04:58:49,114 - ==> mAP: 0.44163 Loss: 3.073 - -2023-04-27 04:58:49,118 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 04:58:49,118 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 04:58:49,152 - - -2023-04-27 04:58:49,153 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 04:59:00,706 - Epoch: [289][ 50/ 518] Overall Loss 2.914016 Objective Loss 2.914016 LR 0.000008 Time 0.231008 -2023-04-27 04:59:11,483 - Epoch: [289][ 100/ 518] Overall Loss 2.912539 Objective Loss 2.912539 LR 0.000008 Time 0.223257 -2023-04-27 04:59:22,322 - Epoch: [289][ 150/ 518] Overall Loss 2.924027 Objective Loss 2.924027 LR 0.000008 Time 0.221090 -2023-04-27 04:59:33,153 - Epoch: [289][ 200/ 518] Overall Loss 2.927276 Objective Loss 2.927276 LR 0.000008 Time 0.219967 -2023-04-27 04:59:43,964 - Epoch: [289][ 250/ 518] Overall Loss 2.930019 Objective Loss 2.930019 LR 0.000008 Time 0.219210 -2023-04-27 04:59:54,737 - Epoch: [289][ 300/ 518] Overall Loss 2.937924 Objective Loss 2.937924 LR 0.000008 Time 0.218581 -2023-04-27 05:00:05,513 - Epoch: [289][ 350/ 518] Overall Loss 2.937023 Objective Loss 2.937023 LR 0.000008 Time 0.218139 -2023-04-27 05:00:16,326 - Epoch: [289][ 400/ 518] Overall Loss 2.931278 Objective Loss 2.931278 LR 0.000008 Time 0.217900 -2023-04-27 05:00:27,176 - Epoch: [289][ 450/ 518] Overall Loss 2.927935 Objective Loss 2.927935 LR 0.000008 Time 0.217796 -2023-04-27 05:00:38,028 - Epoch: [289][ 500/ 518] 
Overall Loss 2.925402 Objective Loss 2.925402 LR 0.000008 Time 0.217718 -2023-04-27 05:00:41,775 - Epoch: [289][ 518/ 518] Overall Loss 2.922943 Objective Loss 2.922943 LR 0.000008 Time 0.217384 -2023-04-27 05:00:41,850 - --- validate (epoch=289)----------- -2023-04-27 05:00:41,850 - 4952 samples (32 per mini-batch) -2023-04-27 05:00:50,134 - Epoch: [289][ 50/ 155] Loss 3.065512 mAP 0.430465 -2023-04-27 05:00:58,048 - Epoch: [289][ 100/ 155] Loss 3.079609 mAP 0.439569 -2023-04-27 05:01:05,937 - Epoch: [289][ 150/ 155] Loss 3.072478 mAP 0.431406 -2023-04-27 05:01:06,653 - Epoch: [289][ 155/ 155] Loss 3.069013 mAP 0.431815 -2023-04-27 05:01:06,725 - ==> mAP: 0.43182 Loss: 3.069 - -2023-04-27 05:01:06,729 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 05:01:06,729 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:01:06,763 - - -2023-04-27 05:01:06,763 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:01:18,510 - Epoch: [290][ 50/ 518] Overall Loss 2.919580 Objective Loss 2.919580 LR 0.000008 Time 0.234893 -2023-04-27 05:01:29,383 - Epoch: [290][ 100/ 518] Overall Loss 2.920591 Objective Loss 2.920591 LR 0.000008 Time 0.226156 -2023-04-27 05:01:40,124 - Epoch: [290][ 150/ 518] Overall Loss 2.913312 Objective Loss 2.913312 LR 0.000008 Time 0.222364 -2023-04-27 05:01:50,990 - Epoch: [290][ 200/ 518] Overall Loss 2.918421 Objective Loss 2.918421 LR 0.000008 Time 0.221100 -2023-04-27 05:02:01,833 - Epoch: [290][ 250/ 518] Overall Loss 2.929135 Objective Loss 2.929135 LR 0.000008 Time 0.220245 -2023-04-27 05:02:12,769 - Epoch: [290][ 300/ 518] Overall Loss 2.928417 Objective Loss 2.928417 LR 0.000008 Time 0.219986 -2023-04-27 05:02:23,659 - Epoch: [290][ 350/ 518] Overall Loss 2.928940 Objective Loss 2.928940 LR 0.000008 Time 0.219669 -2023-04-27 05:02:34,572 - Epoch: [290][ 400/ 518] Overall Loss 2.932021 Objective Loss 2.932021 LR 0.000008 Time 0.219489 -2023-04-27 
05:02:45,407 - Epoch: [290][ 450/ 518] Overall Loss 2.927776 Objective Loss 2.927776 LR 0.000008 Time 0.219176 -2023-04-27 05:02:56,193 - Epoch: [290][ 500/ 518] Overall Loss 2.921373 Objective Loss 2.921373 LR 0.000008 Time 0.218827 -2023-04-27 05:02:59,937 - Epoch: [290][ 518/ 518] Overall Loss 2.925155 Objective Loss 2.925155 LR 0.000008 Time 0.218449 -2023-04-27 05:03:00,013 - --- validate (epoch=290)----------- -2023-04-27 05:03:00,014 - 4952 samples (32 per mini-batch) -2023-04-27 05:03:08,321 - Epoch: [290][ 50/ 155] Loss 3.077751 mAP 0.430518 -2023-04-27 05:03:16,271 - Epoch: [290][ 100/ 155] Loss 3.071047 mAP 0.441891 -2023-04-27 05:03:24,116 - Epoch: [290][ 150/ 155] Loss 3.068410 mAP 0.445280 -2023-04-27 05:03:24,842 - Epoch: [290][ 155/ 155] Loss 3.070588 mAP 0.444036 -2023-04-27 05:03:24,908 - ==> mAP: 0.44404 Loss: 3.071 - -2023-04-27 05:03:24,911 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 05:03:24,911 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:03:24,946 - - -2023-04-27 05:03:24,946 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:03:36,513 - Epoch: [291][ 50/ 518] Overall Loss 2.908736 Objective Loss 2.908736 LR 0.000008 Time 0.231285 -2023-04-27 05:03:47,354 - Epoch: [291][ 100/ 518] Overall Loss 2.928966 Objective Loss 2.928966 LR 0.000008 Time 0.224038 -2023-04-27 05:03:58,258 - Epoch: [291][ 150/ 518] Overall Loss 2.932445 Objective Loss 2.932445 LR 0.000008 Time 0.222042 -2023-04-27 05:04:09,090 - Epoch: [291][ 200/ 518] Overall Loss 2.925612 Objective Loss 2.925612 LR 0.000008 Time 0.220684 -2023-04-27 05:04:19,952 - Epoch: [291][ 250/ 518] Overall Loss 2.917948 Objective Loss 2.917948 LR 0.000008 Time 0.219988 -2023-04-27 05:04:30,781 - Epoch: [291][ 300/ 518] Overall Loss 2.920167 Objective Loss 2.920167 LR 0.000008 Time 0.219415 -2023-04-27 05:04:41,584 - Epoch: [291][ 350/ 518] Overall Loss 2.923842 Objective Loss 2.923842 LR 
0.000008 Time 0.218932 -2023-04-27 05:04:52,383 - Epoch: [291][ 400/ 518] Overall Loss 2.921154 Objective Loss 2.921154 LR 0.000008 Time 0.218559 -2023-04-27 05:05:03,143 - Epoch: [291][ 450/ 518] Overall Loss 2.919595 Objective Loss 2.919595 LR 0.000008 Time 0.218181 -2023-04-27 05:05:14,035 - Epoch: [291][ 500/ 518] Overall Loss 2.915669 Objective Loss 2.915669 LR 0.000008 Time 0.218145 -2023-04-27 05:05:17,843 - Epoch: [291][ 518/ 518] Overall Loss 2.917876 Objective Loss 2.917876 LR 0.000008 Time 0.217915 -2023-04-27 05:05:17,920 - --- validate (epoch=291)----------- -2023-04-27 05:05:17,920 - 4952 samples (32 per mini-batch) -2023-04-27 05:05:26,175 - Epoch: [291][ 50/ 155] Loss 3.061823 mAP 0.443451 -2023-04-27 05:05:34,117 - Epoch: [291][ 100/ 155] Loss 3.075443 mAP 0.438833 -2023-04-27 05:05:41,975 - Epoch: [291][ 150/ 155] Loss 3.069740 mAP 0.437865 -2023-04-27 05:05:42,689 - Epoch: [291][ 155/ 155] Loss 3.070671 mAP 0.438013 -2023-04-27 05:05:42,762 - ==> mAP: 0.43801 Loss: 3.071 - -2023-04-27 05:05:42,766 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 05:05:42,766 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:05:42,801 - - -2023-04-27 05:05:42,802 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:05:54,403 - Epoch: [292][ 50/ 518] Overall Loss 2.919714 Objective Loss 2.919714 LR 0.000008 Time 0.231981 -2023-04-27 05:06:05,313 - Epoch: [292][ 100/ 518] Overall Loss 2.950927 Objective Loss 2.950927 LR 0.000008 Time 0.225072 -2023-04-27 05:06:16,156 - Epoch: [292][ 150/ 518] Overall Loss 2.929389 Objective Loss 2.929389 LR 0.000008 Time 0.222322 -2023-04-27 05:06:26,871 - Epoch: [292][ 200/ 518] Overall Loss 2.920661 Objective Loss 2.920661 LR 0.000008 Time 0.220312 -2023-04-27 05:06:37,697 - Epoch: [292][ 250/ 518] Overall Loss 2.922494 Objective Loss 2.922494 LR 0.000008 Time 0.219545 -2023-04-27 05:06:48,490 - Epoch: [292][ 300/ 518] Overall Loss 
2.924583 Objective Loss 2.924583 LR 0.000008 Time 0.218927 -2023-04-27 05:06:59,372 - Epoch: [292][ 350/ 518] Overall Loss 2.923850 Objective Loss 2.923850 LR 0.000008 Time 0.218737 -2023-04-27 05:07:10,201 - Epoch: [292][ 400/ 518] Overall Loss 2.921860 Objective Loss 2.921860 LR 0.000008 Time 0.218466 -2023-04-27 05:07:21,044 - Epoch: [292][ 450/ 518] Overall Loss 2.915925 Objective Loss 2.915925 LR 0.000008 Time 0.218283 -2023-04-27 05:07:31,878 - Epoch: [292][ 500/ 518] Overall Loss 2.918442 Objective Loss 2.918442 LR 0.000008 Time 0.218120 -2023-04-27 05:07:35,637 - Epoch: [292][ 518/ 518] Overall Loss 2.917647 Objective Loss 2.917647 LR 0.000008 Time 0.217795 -2023-04-27 05:07:35,712 - --- validate (epoch=292)----------- -2023-04-27 05:07:35,712 - 4952 samples (32 per mini-batch) -2023-04-27 05:07:44,007 - Epoch: [292][ 50/ 155] Loss 3.026883 mAP 0.445070 -2023-04-27 05:07:51,945 - Epoch: [292][ 100/ 155] Loss 3.055471 mAP 0.438649 -2023-04-27 05:07:59,871 - Epoch: [292][ 150/ 155] Loss 3.063287 mAP 0.444224 -2023-04-27 05:08:00,600 - Epoch: [292][ 155/ 155] Loss 3.063216 mAP 0.443696 -2023-04-27 05:08:00,673 - ==> mAP: 0.44370 Loss: 3.063 - -2023-04-27 05:08:00,676 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 05:08:00,677 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:08:00,711 - - -2023-04-27 05:08:00,711 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:08:12,261 - Epoch: [293][ 50/ 518] Overall Loss 2.915838 Objective Loss 2.915838 LR 0.000008 Time 0.230941 -2023-04-27 05:08:23,142 - Epoch: [293][ 100/ 518] Overall Loss 2.877692 Objective Loss 2.877692 LR 0.000008 Time 0.224264 -2023-04-27 05:08:33,962 - Epoch: [293][ 150/ 518] Overall Loss 2.891919 Objective Loss 2.891919 LR 0.000008 Time 0.221634 -2023-04-27 05:08:44,664 - Epoch: [293][ 200/ 518] Overall Loss 2.900026 Objective Loss 2.900026 LR 0.000008 Time 0.219730 -2023-04-27 05:08:55,539 - 
Epoch: [293][ 250/ 518] Overall Loss 2.904639 Objective Loss 2.904639 LR 0.000008 Time 0.219278 -2023-04-27 05:09:06,368 - Epoch: [293][ 300/ 518] Overall Loss 2.911720 Objective Loss 2.911720 LR 0.000008 Time 0.218820 -2023-04-27 05:09:17,259 - Epoch: [293][ 350/ 518] Overall Loss 2.911722 Objective Loss 2.911722 LR 0.000008 Time 0.218675 -2023-04-27 05:09:27,999 - Epoch: [293][ 400/ 518] Overall Loss 2.920950 Objective Loss 2.920950 LR 0.000008 Time 0.218187 -2023-04-27 05:09:38,856 - Epoch: [293][ 450/ 518] Overall Loss 2.921030 Objective Loss 2.921030 LR 0.000008 Time 0.218065 -2023-04-27 05:09:49,710 - Epoch: [293][ 500/ 518] Overall Loss 2.918610 Objective Loss 2.918610 LR 0.000008 Time 0.217965 -2023-04-27 05:09:53,452 - Epoch: [293][ 518/ 518] Overall Loss 2.918124 Objective Loss 2.918124 LR 0.000008 Time 0.217613 -2023-04-27 05:09:53,529 - --- validate (epoch=293)----------- -2023-04-27 05:09:53,529 - 4952 samples (32 per mini-batch) -2023-04-27 05:10:01,861 - Epoch: [293][ 50/ 155] Loss 3.066046 mAP 0.452367 -2023-04-27 05:10:09,773 - Epoch: [293][ 100/ 155] Loss 3.091214 mAP 0.439415 -2023-04-27 05:10:17,677 - Epoch: [293][ 150/ 155] Loss 3.070189 mAP 0.442671 -2023-04-27 05:10:18,392 - Epoch: [293][ 155/ 155] Loss 3.072362 mAP 0.440900 -2023-04-27 05:10:18,469 - ==> mAP: 0.44090 Loss: 3.072 - -2023-04-27 05:10:18,473 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 05:10:18,474 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:10:18,508 - - -2023-04-27 05:10:18,508 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:10:30,070 - Epoch: [294][ 50/ 518] Overall Loss 2.924362 Objective Loss 2.924362 LR 0.000008 Time 0.231177 -2023-04-27 05:10:40,889 - Epoch: [294][ 100/ 518] Overall Loss 2.921170 Objective Loss 2.921170 LR 0.000008 Time 0.223763 -2023-04-27 05:10:51,712 - Epoch: [294][ 150/ 518] Overall Loss 2.927209 Objective Loss 2.927209 LR 0.000008 Time 
0.221321 -2023-04-27 05:11:02,587 - Epoch: [294][ 200/ 518] Overall Loss 2.929932 Objective Loss 2.929932 LR 0.000008 Time 0.220356 -2023-04-27 05:11:13,427 - Epoch: [294][ 250/ 518] Overall Loss 2.938957 Objective Loss 2.938957 LR 0.000008 Time 0.219641 -2023-04-27 05:11:24,273 - Epoch: [294][ 300/ 518] Overall Loss 2.938496 Objective Loss 2.938496 LR 0.000008 Time 0.219181 -2023-04-27 05:11:35,031 - Epoch: [294][ 350/ 518] Overall Loss 2.935979 Objective Loss 2.935979 LR 0.000008 Time 0.218602 -2023-04-27 05:11:45,859 - Epoch: [294][ 400/ 518] Overall Loss 2.941382 Objective Loss 2.941382 LR 0.000008 Time 0.218342 -2023-04-27 05:11:56,672 - Epoch: [294][ 450/ 518] Overall Loss 2.936699 Objective Loss 2.936699 LR 0.000008 Time 0.218108 -2023-04-27 05:12:07,528 - Epoch: [294][ 500/ 518] Overall Loss 2.932593 Objective Loss 2.932593 LR 0.000008 Time 0.218006 -2023-04-27 05:12:11,244 - Epoch: [294][ 518/ 518] Overall Loss 2.931456 Objective Loss 2.931456 LR 0.000008 Time 0.217604 -2023-04-27 05:12:11,319 - --- validate (epoch=294)----------- -2023-04-27 05:12:11,320 - 4952 samples (32 per mini-batch) -2023-04-27 05:12:19,590 - Epoch: [294][ 50/ 155] Loss 3.123250 mAP 0.443126 -2023-04-27 05:12:27,441 - Epoch: [294][ 100/ 155] Loss 3.093875 mAP 0.440656 -2023-04-27 05:12:35,307 - Epoch: [294][ 150/ 155] Loss 3.073903 mAP 0.442217 -2023-04-27 05:12:36,034 - Epoch: [294][ 155/ 155] Loss 3.075480 mAP 0.440835 -2023-04-27 05:12:36,107 - ==> mAP: 0.44084 Loss: 3.075 - -2023-04-27 05:12:36,110 - ==> Best [mAP: 0.446835 vloss: 3.081809 Sparsity:0.00 Params: 2177087 on epoch: 270] -2023-04-27 05:12:36,111 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:12:36,145 - - -2023-04-27 05:12:36,145 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:12:47,661 - Epoch: [295][ 50/ 518] Overall Loss 2.909150 Objective Loss 2.909150 LR 0.000008 Time 0.230276 -2023-04-27 05:12:58,469 - Epoch: [295][ 100/ 518] Overall Loss 2.893798 
Objective Loss 2.893798 LR 0.000008 Time 0.223201 -2023-04-27 05:13:09,293 - Epoch: [295][ 150/ 518] Overall Loss 2.905301 Objective Loss 2.905301 LR 0.000008 Time 0.220944 -2023-04-27 05:13:20,078 - Epoch: [295][ 200/ 518] Overall Loss 2.901378 Objective Loss 2.901378 LR 0.000008 Time 0.219630 -2023-04-27 05:13:30,860 - Epoch: [295][ 250/ 518] Overall Loss 2.907015 Objective Loss 2.907015 LR 0.000008 Time 0.218825 -2023-04-27 05:13:41,679 - Epoch: [295][ 300/ 518] Overall Loss 2.906953 Objective Loss 2.906953 LR 0.000008 Time 0.218413 -2023-04-27 05:13:52,465 - Epoch: [295][ 350/ 518] Overall Loss 2.911744 Objective Loss 2.911744 LR 0.000008 Time 0.218023 -2023-04-27 05:14:03,296 - Epoch: [295][ 400/ 518] Overall Loss 2.912571 Objective Loss 2.912571 LR 0.000008 Time 0.217843 -2023-04-27 05:14:14,080 - Epoch: [295][ 450/ 518] Overall Loss 2.910491 Objective Loss 2.910491 LR 0.000008 Time 0.217599 -2023-04-27 05:14:24,883 - Epoch: [295][ 500/ 518] Overall Loss 2.910395 Objective Loss 2.910395 LR 0.000008 Time 0.217442 -2023-04-27 05:14:28,583 - Epoch: [295][ 518/ 518] Overall Loss 2.913088 Objective Loss 2.913088 LR 0.000008 Time 0.217028 -2023-04-27 05:14:28,658 - --- validate (epoch=295)----------- -2023-04-27 05:14:28,659 - 4952 samples (32 per mini-batch) -2023-04-27 05:14:36,938 - Epoch: [295][ 50/ 155] Loss 3.052971 mAP 0.465437 -2023-04-27 05:14:44,849 - Epoch: [295][ 100/ 155] Loss 3.062688 mAP 0.442845 -2023-04-27 05:14:52,792 - Epoch: [295][ 150/ 155] Loss 3.055143 mAP 0.448548 -2023-04-27 05:14:53,509 - Epoch: [295][ 155/ 155] Loss 3.058122 mAP 0.447032 -2023-04-27 05:14:53,578 - ==> mAP: 0.44703 Loss: 3.058 - -2023-04-27 05:14:53,581 - ==> Best [mAP: 0.447032 vloss: 3.058122 Sparsity:0.00 Params: 2177087 on epoch: 295] -2023-04-27 05:14:53,582 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:14:53,631 - - -2023-04-27 05:14:53,631 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:15:05,264 - Epoch: 
[296][ 50/ 518] Overall Loss 2.911000 Objective Loss 2.911000 LR 0.000008 Time 0.232604 -2023-04-27 05:15:16,008 - Epoch: [296][ 100/ 518] Overall Loss 2.912129 Objective Loss 2.912129 LR 0.000008 Time 0.223720 -2023-04-27 05:15:26,854 - Epoch: [296][ 150/ 518] Overall Loss 2.913533 Objective Loss 2.913533 LR 0.000008 Time 0.221448 -2023-04-27 05:15:37,769 - Epoch: [296][ 200/ 518] Overall Loss 2.912528 Objective Loss 2.912528 LR 0.000008 Time 0.220651 -2023-04-27 05:15:48,605 - Epoch: [296][ 250/ 518] Overall Loss 2.922077 Objective Loss 2.922077 LR 0.000008 Time 0.219857 -2023-04-27 05:15:59,476 - Epoch: [296][ 300/ 518] Overall Loss 2.924272 Objective Loss 2.924272 LR 0.000008 Time 0.219446 -2023-04-27 05:16:10,320 - Epoch: [296][ 350/ 518] Overall Loss 2.916394 Objective Loss 2.916394 LR 0.000008 Time 0.219075 -2023-04-27 05:16:21,149 - Epoch: [296][ 400/ 518] Overall Loss 2.917968 Objective Loss 2.917968 LR 0.000008 Time 0.218760 -2023-04-27 05:16:31,981 - Epoch: [296][ 450/ 518] Overall Loss 2.916689 Objective Loss 2.916689 LR 0.000008 Time 0.218520 -2023-04-27 05:16:42,757 - Epoch: [296][ 500/ 518] Overall Loss 2.918798 Objective Loss 2.918798 LR 0.000008 Time 0.218217 -2023-04-27 05:16:46,514 - Epoch: [296][ 518/ 518] Overall Loss 2.920246 Objective Loss 2.920246 LR 0.000008 Time 0.217887 -2023-04-27 05:16:46,589 - --- validate (epoch=296)----------- -2023-04-27 05:16:46,589 - 4952 samples (32 per mini-batch) -2023-04-27 05:16:54,882 - Epoch: [296][ 50/ 155] Loss 3.028394 mAP 0.462881 -2023-04-27 05:17:02,800 - Epoch: [296][ 100/ 155] Loss 3.036356 mAP 0.449288 -2023-04-27 05:17:10,667 - Epoch: [296][ 150/ 155] Loss 3.055166 mAP 0.443091 -2023-04-27 05:17:11,391 - Epoch: [296][ 155/ 155] Loss 3.057581 mAP 0.443178 -2023-04-27 05:17:11,462 - ==> mAP: 0.44318 Loss: 3.058 - -2023-04-27 05:17:11,466 - ==> Best [mAP: 0.447032 vloss: 3.058122 Sparsity:0.00 Params: 2177087 on epoch: 295] -2023-04-27 05:17:11,466 - Saving checkpoint to: 
logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:17:11,500 - - -2023-04-27 05:17:11,500 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:17:23,145 - Epoch: [297][ 50/ 518] Overall Loss 2.885653 Objective Loss 2.885653 LR 0.000008 Time 0.232835 -2023-04-27 05:17:33,863 - Epoch: [297][ 100/ 518] Overall Loss 2.904046 Objective Loss 2.904046 LR 0.000008 Time 0.223587 -2023-04-27 05:17:44,665 - Epoch: [297][ 150/ 518] Overall Loss 2.912024 Objective Loss 2.912024 LR 0.000008 Time 0.221060 -2023-04-27 05:17:55,549 - Epoch: [297][ 200/ 518] Overall Loss 2.912725 Objective Loss 2.912725 LR 0.000008 Time 0.220208 -2023-04-27 05:18:06,320 - Epoch: [297][ 250/ 518] Overall Loss 2.913028 Objective Loss 2.913028 LR 0.000008 Time 0.219245 -2023-04-27 05:18:17,119 - Epoch: [297][ 300/ 518] Overall Loss 2.917759 Objective Loss 2.917759 LR 0.000008 Time 0.218694 -2023-04-27 05:18:27,999 - Epoch: [297][ 350/ 518] Overall Loss 2.907387 Objective Loss 2.907387 LR 0.000008 Time 0.218535 -2023-04-27 05:18:38,864 - Epoch: [297][ 400/ 518] Overall Loss 2.906146 Objective Loss 2.906146 LR 0.000008 Time 0.218376 -2023-04-27 05:18:49,603 - Epoch: [297][ 450/ 518] Overall Loss 2.908022 Objective Loss 2.908022 LR 0.000008 Time 0.217972 -2023-04-27 05:19:00,409 - Epoch: [297][ 500/ 518] Overall Loss 2.909624 Objective Loss 2.909624 LR 0.000008 Time 0.217784 -2023-04-27 05:19:04,134 - Epoch: [297][ 518/ 518] Overall Loss 2.910147 Objective Loss 2.910147 LR 0.000008 Time 0.217406 -2023-04-27 05:19:04,212 - --- validate (epoch=297)----------- -2023-04-27 05:19:04,212 - 4952 samples (32 per mini-batch) -2023-04-27 05:19:12,548 - Epoch: [297][ 50/ 155] Loss 3.104408 mAP 0.433900 -2023-04-27 05:19:20,498 - Epoch: [297][ 100/ 155] Loss 3.092896 mAP 0.442911 -2023-04-27 05:19:28,455 - Epoch: [297][ 150/ 155] Loss 3.070721 mAP 0.453656 -2023-04-27 05:19:29,171 - Epoch: [297][ 155/ 155] Loss 3.073783 mAP 0.451723 -2023-04-27 05:19:29,247 - ==> mAP: 0.45172 Loss: 3.074 - 
-2023-04-27 05:19:29,251 - ==> Best [mAP: 0.451723 vloss: 3.073783 Sparsity:0.00 Params: 2177087 on epoch: 297] -2023-04-27 05:19:29,251 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:19:29,300 - - -2023-04-27 05:19:29,300 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:19:40,861 - Epoch: [298][ 50/ 518] Overall Loss 2.877476 Objective Loss 2.877476 LR 0.000008 Time 0.231174 -2023-04-27 05:19:51,667 - Epoch: [298][ 100/ 518] Overall Loss 2.892009 Objective Loss 2.892009 LR 0.000008 Time 0.223627 -2023-04-27 05:20:02,353 - Epoch: [298][ 150/ 518] Overall Loss 2.878752 Objective Loss 2.878752 LR 0.000008 Time 0.220318 -2023-04-27 05:20:13,172 - Epoch: [298][ 200/ 518] Overall Loss 2.893607 Objective Loss 2.893607 LR 0.000008 Time 0.219323 -2023-04-27 05:20:23,933 - Epoch: [298][ 250/ 518] Overall Loss 2.904146 Objective Loss 2.904146 LR 0.000008 Time 0.218495 -2023-04-27 05:20:34,707 - Epoch: [298][ 300/ 518] Overall Loss 2.906283 Objective Loss 2.906283 LR 0.000008 Time 0.217989 -2023-04-27 05:20:45,512 - Epoch: [298][ 350/ 518] Overall Loss 2.901674 Objective Loss 2.901674 LR 0.000008 Time 0.217713 -2023-04-27 05:20:56,283 - Epoch: [298][ 400/ 518] Overall Loss 2.905595 Objective Loss 2.905595 LR 0.000008 Time 0.217423 -2023-04-27 05:21:07,115 - Epoch: [298][ 450/ 518] Overall Loss 2.908722 Objective Loss 2.908722 LR 0.000008 Time 0.217333 -2023-04-27 05:21:17,998 - Epoch: [298][ 500/ 518] Overall Loss 2.909473 Objective Loss 2.909473 LR 0.000008 Time 0.217362 -2023-04-27 05:21:21,692 - Epoch: [298][ 518/ 518] Overall Loss 2.910258 Objective Loss 2.910258 LR 0.000008 Time 0.216941 -2023-04-27 05:21:21,767 - --- validate (epoch=298)----------- -2023-04-27 05:21:21,768 - 4952 samples (32 per mini-batch) -2023-04-27 05:21:30,044 - Epoch: [298][ 50/ 155] Loss 3.078304 mAP 0.445496 -2023-04-27 05:21:37,950 - Epoch: [298][ 100/ 155] Loss 3.068417 mAP 0.445819 -2023-04-27 05:21:45,843 - Epoch: [298][ 150/ 155] Loss 
3.051445 mAP 0.448288 -2023-04-27 05:21:46,568 - Epoch: [298][ 155/ 155] Loss 3.058153 mAP 0.447612 -2023-04-27 05:21:46,648 - ==> mAP: 0.44761 Loss: 3.058 - -2023-04-27 05:21:46,652 - ==> Best [mAP: 0.451723 vloss: 3.073783 Sparsity:0.00 Params: 2177087 on epoch: 297] -2023-04-27 05:21:46,652 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:21:46,686 - - -2023-04-27 05:21:46,686 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:21:58,227 - Epoch: [299][ 50/ 518] Overall Loss 2.936718 Objective Loss 2.936718 LR 0.000008 Time 0.230767 -2023-04-27 05:22:09,043 - Epoch: [299][ 100/ 518] Overall Loss 2.937289 Objective Loss 2.937289 LR 0.000008 Time 0.223527 -2023-04-27 05:22:19,922 - Epoch: [299][ 150/ 518] Overall Loss 2.939359 Objective Loss 2.939359 LR 0.000008 Time 0.221536 -2023-04-27 05:22:30,851 - Epoch: [299][ 200/ 518] Overall Loss 2.942282 Objective Loss 2.942282 LR 0.000008 Time 0.220787 -2023-04-27 05:22:41,619 - Epoch: [299][ 250/ 518] Overall Loss 2.936919 Objective Loss 2.936919 LR 0.000008 Time 0.219694 -2023-04-27 05:22:52,478 - Epoch: [299][ 300/ 518] Overall Loss 2.925446 Objective Loss 2.925446 LR 0.000008 Time 0.219272 -2023-04-27 05:23:03,270 - Epoch: [299][ 350/ 518] Overall Loss 2.920917 Objective Loss 2.920917 LR 0.000008 Time 0.218776 -2023-04-27 05:23:14,106 - Epoch: [299][ 400/ 518] Overall Loss 2.916930 Objective Loss 2.916930 LR 0.000008 Time 0.218516 -2023-04-27 05:23:24,899 - Epoch: [299][ 450/ 518] Overall Loss 2.923722 Objective Loss 2.923722 LR 0.000008 Time 0.218217 -2023-04-27 05:23:35,765 - Epoch: [299][ 500/ 518] Overall Loss 2.920645 Objective Loss 2.920645 LR 0.000008 Time 0.218125 -2023-04-27 05:23:39,539 - Epoch: [299][ 518/ 518] Overall Loss 2.921241 Objective Loss 2.921241 LR 0.000008 Time 0.217830 -2023-04-27 05:23:39,613 - --- validate (epoch=299)----------- -2023-04-27 05:23:39,613 - 4952 samples (32 per mini-batch) -2023-04-27 05:23:47,867 - Epoch: [299][ 50/ 155] 
Loss 3.016052 mAP 0.438493 -2023-04-27 05:23:55,759 - Epoch: [299][ 100/ 155] Loss 3.050409 mAP 0.435008 -2023-04-27 05:24:03,640 - Epoch: [299][ 150/ 155] Loss 3.064417 mAP 0.436216 -2023-04-27 05:24:04,354 - Epoch: [299][ 155/ 155] Loss 3.066608 mAP 0.437309 -2023-04-27 05:24:04,427 - ==> mAP: 0.43731 Loss: 3.067 - -2023-04-27 05:24:04,431 - ==> Best [mAP: 0.451723 vloss: 3.073783 Sparsity:0.00 Params: 2177087 on epoch: 297] -2023-04-27 05:24:04,431 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:24:04,466 - - -2023-04-27 05:24:04,466 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:24:16,034 - Epoch: [300][ 50/ 518] Overall Loss 2.896384 Objective Loss 2.896384 LR 0.000008 Time 0.231303 -2023-04-27 05:24:26,807 - Epoch: [300][ 100/ 518] Overall Loss 2.883464 Objective Loss 2.883464 LR 0.000008 Time 0.223366 -2023-04-27 05:24:37,737 - Epoch: [300][ 150/ 518] Overall Loss 2.877581 Objective Loss 2.877581 LR 0.000008 Time 0.221768 -2023-04-27 05:24:48,593 - Epoch: [300][ 200/ 518] Overall Loss 2.890315 Objective Loss 2.890315 LR 0.000008 Time 0.220598 -2023-04-27 05:24:59,403 - Epoch: [300][ 250/ 518] Overall Loss 2.890854 Objective Loss 2.890854 LR 0.000008 Time 0.219709 -2023-04-27 05:25:10,196 - Epoch: [300][ 300/ 518] Overall Loss 2.890381 Objective Loss 2.890381 LR 0.000008 Time 0.219066 -2023-04-27 05:25:21,024 - Epoch: [300][ 350/ 518] Overall Loss 2.891592 Objective Loss 2.891592 LR 0.000008 Time 0.218701 -2023-04-27 05:25:31,776 - Epoch: [300][ 400/ 518] Overall Loss 2.902399 Objective Loss 2.902399 LR 0.000008 Time 0.218239 -2023-04-27 05:25:42,564 - Epoch: [300][ 450/ 518] Overall Loss 2.907935 Objective Loss 2.907935 LR 0.000008 Time 0.217960 -2023-04-27 05:25:53,439 - Epoch: [300][ 500/ 518] Overall Loss 2.913492 Objective Loss 2.913492 LR 0.000008 Time 0.217911 -2023-04-27 05:25:57,240 - Epoch: [300][ 518/ 518] Overall Loss 2.916183 Objective Loss 2.916183 LR 0.000008 Time 0.217676 -2023-04-27 
05:25:57,318 - --- validate (epoch=300)----------- -2023-04-27 05:25:57,319 - 4952 samples (32 per mini-batch) -2023-04-27 05:26:05,644 - Epoch: [300][ 50/ 155] Loss 3.062009 mAP 0.442371 -2023-04-27 05:26:13,541 - Epoch: [300][ 100/ 155] Loss 3.071778 mAP 0.438962 -2023-04-27 05:26:21,441 - Epoch: [300][ 150/ 155] Loss 3.084255 mAP 0.441421 -2023-04-27 05:26:22,167 - Epoch: [300][ 155/ 155] Loss 3.079451 mAP 0.443098 -2023-04-27 05:26:22,243 - ==> mAP: 0.44310 Loss: 3.079 - -2023-04-27 05:26:22,247 - ==> Best [mAP: 0.451723 vloss: 3.073783 Sparsity:0.00 Params: 2177087 on epoch: 297] -2023-04-27 05:26:22,247 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:26:22,281 - - -2023-04-27 05:26:22,281 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:26:33,780 - Epoch: [301][ 50/ 518] Overall Loss 2.922221 Objective Loss 2.922221 LR 0.000008 Time 0.229929 -2023-04-27 05:26:44,566 - Epoch: [301][ 100/ 518] Overall Loss 2.892802 Objective Loss 2.892802 LR 0.000008 Time 0.222810 -2023-04-27 05:26:55,398 - Epoch: [301][ 150/ 518] Overall Loss 2.899877 Objective Loss 2.899877 LR 0.000008 Time 0.220740 -2023-04-27 05:27:06,363 - Epoch: [301][ 200/ 518] Overall Loss 2.907066 Objective Loss 2.907066 LR 0.000008 Time 0.220375 -2023-04-27 05:27:17,115 - Epoch: [301][ 250/ 518] Overall Loss 2.905802 Objective Loss 2.905802 LR 0.000008 Time 0.219302 -2023-04-27 05:27:27,922 - Epoch: [301][ 300/ 518] Overall Loss 2.905023 Objective Loss 2.905023 LR 0.000008 Time 0.218768 -2023-04-27 05:27:38,655 - Epoch: [301][ 350/ 518] Overall Loss 2.898447 Objective Loss 2.898447 LR 0.000008 Time 0.218177 -2023-04-27 05:27:49,507 - Epoch: [301][ 400/ 518] Overall Loss 2.895623 Objective Loss 2.895623 LR 0.000008 Time 0.218030 -2023-04-27 05:28:00,302 - Epoch: [301][ 450/ 518] Overall Loss 2.896194 Objective Loss 2.896194 LR 0.000008 Time 0.217790 -2023-04-27 05:28:11,130 - Epoch: [301][ 500/ 518] Overall Loss 2.894379 Objective Loss 2.894379 LR 
0.000008 Time 0.217664 -2023-04-27 05:28:14,892 - Epoch: [301][ 518/ 518] Overall Loss 2.894908 Objective Loss 2.894908 LR 0.000008 Time 0.217362 -2023-04-27 05:28:14,969 - --- validate (epoch=301)----------- -2023-04-27 05:28:14,969 - 4952 samples (32 per mini-batch) -2023-04-27 05:28:23,272 - Epoch: [301][ 50/ 155] Loss 3.057773 mAP 0.427640 -2023-04-27 05:28:31,217 - Epoch: [301][ 100/ 155] Loss 3.064718 mAP 0.430749 -2023-04-27 05:28:39,176 - Epoch: [301][ 150/ 155] Loss 3.065453 mAP 0.434767 -2023-04-27 05:28:39,899 - Epoch: [301][ 155/ 155] Loss 3.071042 mAP 0.434975 -2023-04-27 05:28:39,963 - ==> mAP: 0.43497 Loss: 3.071 - -2023-04-27 05:28:39,967 - ==> Best [mAP: 0.451723 vloss: 3.073783 Sparsity:0.00 Params: 2177087 on epoch: 297] -2023-04-27 05:28:39,967 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:28:40,000 - - -2023-04-27 05:28:40,000 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:28:51,690 - Epoch: [302][ 50/ 518] Overall Loss 2.931648 Objective Loss 2.931648 LR 0.000008 Time 0.233739 -2023-04-27 05:29:02,523 - Epoch: [302][ 100/ 518] Overall Loss 2.939891 Objective Loss 2.939891 LR 0.000008 Time 0.225182 -2023-04-27 05:29:13,305 - Epoch: [302][ 150/ 518] Overall Loss 2.922225 Objective Loss 2.922225 LR 0.000008 Time 0.221988 -2023-04-27 05:29:24,177 - Epoch: [302][ 200/ 518] Overall Loss 2.920268 Objective Loss 2.920268 LR 0.000008 Time 0.220843 -2023-04-27 05:29:35,096 - Epoch: [302][ 250/ 518] Overall Loss 2.906661 Objective Loss 2.906661 LR 0.000008 Time 0.220345 -2023-04-27 05:29:45,844 - Epoch: [302][ 300/ 518] Overall Loss 2.908289 Objective Loss 2.908289 LR 0.000008 Time 0.219444 -2023-04-27 05:29:56,582 - Epoch: [302][ 350/ 518] Overall Loss 2.909442 Objective Loss 2.909442 LR 0.000008 Time 0.218769 -2023-04-27 05:30:07,403 - Epoch: [302][ 400/ 518] Overall Loss 2.910627 Objective Loss 2.910627 LR 0.000008 Time 0.218473 -2023-04-27 05:30:18,254 - Epoch: [302][ 450/ 518] Overall Loss 
2.908017 Objective Loss 2.908017 LR 0.000008 Time 0.218308 -2023-04-27 05:30:29,081 - Epoch: [302][ 500/ 518] Overall Loss 2.909381 Objective Loss 2.909381 LR 0.000008 Time 0.218126 -2023-04-27 05:30:32,824 - Epoch: [302][ 518/ 518] Overall Loss 2.907117 Objective Loss 2.907117 LR 0.000008 Time 0.217773 -2023-04-27 05:30:32,901 - --- validate (epoch=302)----------- -2023-04-27 05:30:32,901 - 4952 samples (32 per mini-batch) -2023-04-27 05:30:41,189 - Epoch: [302][ 50/ 155] Loss 3.067212 mAP 0.446631 -2023-04-27 05:30:49,106 - Epoch: [302][ 100/ 155] Loss 3.048969 mAP 0.446873 -2023-04-27 05:30:57,004 - Epoch: [302][ 150/ 155] Loss 3.045356 mAP 0.452013 -2023-04-27 05:30:57,727 - Epoch: [302][ 155/ 155] Loss 3.047499 mAP 0.452178 -2023-04-27 05:30:57,806 - ==> mAP: 0.45218 Loss: 3.047 - -2023-04-27 05:30:57,810 - ==> Best [mAP: 0.452178 vloss: 3.047499 Sparsity:0.00 Params: 2177087 on epoch: 302] -2023-04-27 05:30:57,810 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:30:57,891 - - -2023-04-27 05:30:57,891 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:31:09,506 - Epoch: [303][ 50/ 518] Overall Loss 2.907420 Objective Loss 2.907420 LR 0.000008 Time 0.232246 -2023-04-27 05:31:20,289 - Epoch: [303][ 100/ 518] Overall Loss 2.923934 Objective Loss 2.923934 LR 0.000008 Time 0.223935 -2023-04-27 05:31:31,040 - Epoch: [303][ 150/ 518] Overall Loss 2.905343 Objective Loss 2.905343 LR 0.000008 Time 0.220955 -2023-04-27 05:31:41,856 - Epoch: [303][ 200/ 518] Overall Loss 2.894368 Objective Loss 2.894368 LR 0.000008 Time 0.219789 -2023-04-27 05:31:52,601 - Epoch: [303][ 250/ 518] Overall Loss 2.900185 Objective Loss 2.900185 LR 0.000008 Time 0.218807 -2023-04-27 05:32:03,373 - Epoch: [303][ 300/ 518] Overall Loss 2.909748 Objective Loss 2.909748 LR 0.000008 Time 0.218239 -2023-04-27 05:32:14,168 - Epoch: [303][ 350/ 518] Overall Loss 2.911017 Objective Loss 2.911017 LR 0.000008 Time 0.217901 -2023-04-27 05:32:24,968 - 
Epoch: [303][ 400/ 518] Overall Loss 2.914704 Objective Loss 2.914704 LR 0.000008 Time 0.217660 -2023-04-27 05:32:35,782 - Epoch: [303][ 450/ 518] Overall Loss 2.911229 Objective Loss 2.911229 LR 0.000008 Time 0.217502 -2023-04-27 05:32:46,611 - Epoch: [303][ 500/ 518] Overall Loss 2.914993 Objective Loss 2.914993 LR 0.000008 Time 0.217406 -2023-04-27 05:32:50,390 - Epoch: [303][ 518/ 518] Overall Loss 2.915034 Objective Loss 2.915034 LR 0.000008 Time 0.217146 -2023-04-27 05:32:50,465 - --- validate (epoch=303)----------- -2023-04-27 05:32:50,465 - 4952 samples (32 per mini-batch) -2023-04-27 05:32:58,752 - Epoch: [303][ 50/ 155] Loss 3.083050 mAP 0.445000 -2023-04-27 05:33:06,647 - Epoch: [303][ 100/ 155] Loss 3.071810 mAP 0.440054 -2023-04-27 05:33:14,529 - Epoch: [303][ 150/ 155] Loss 3.061679 mAP 0.445450 -2023-04-27 05:33:15,251 - Epoch: [303][ 155/ 155] Loss 3.059380 mAP 0.447288 -2023-04-27 05:33:15,325 - ==> mAP: 0.44729 Loss: 3.059 - -2023-04-27 05:33:15,329 - ==> Best [mAP: 0.452178 vloss: 3.047499 Sparsity:0.00 Params: 2177087 on epoch: 302] -2023-04-27 05:33:15,330 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:33:15,365 - - -2023-04-27 05:33:15,365 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:33:27,040 - Epoch: [304][ 50/ 518] Overall Loss 2.879444 Objective Loss 2.879444 LR 0.000008 Time 0.233453 -2023-04-27 05:33:37,873 - Epoch: [304][ 100/ 518] Overall Loss 2.887042 Objective Loss 2.887042 LR 0.000008 Time 0.225043 -2023-04-27 05:33:48,725 - Epoch: [304][ 150/ 518] Overall Loss 2.879815 Objective Loss 2.879815 LR 0.000008 Time 0.222360 -2023-04-27 05:33:59,545 - Epoch: [304][ 200/ 518] Overall Loss 2.874711 Objective Loss 2.874711 LR 0.000008 Time 0.220863 -2023-04-27 05:34:10,268 - Epoch: [304][ 250/ 518] Overall Loss 2.887234 Objective Loss 2.887234 LR 0.000008 Time 0.219578 -2023-04-27 05:34:21,109 - Epoch: [304][ 300/ 518] Overall Loss 2.885824 Objective Loss 2.885824 LR 0.000008 Time 
0.219113 -2023-04-27 05:34:31,953 - Epoch: [304][ 350/ 518] Overall Loss 2.887334 Objective Loss 2.887334 LR 0.000008 Time 0.218787 -2023-04-27 05:34:42,775 - Epoch: [304][ 400/ 518] Overall Loss 2.889546 Objective Loss 2.889546 LR 0.000008 Time 0.218490 -2023-04-27 05:34:53,682 - Epoch: [304][ 450/ 518] Overall Loss 2.895151 Objective Loss 2.895151 LR 0.000008 Time 0.218449 -2023-04-27 05:35:04,437 - Epoch: [304][ 500/ 518] Overall Loss 2.902143 Objective Loss 2.902143 LR 0.000008 Time 0.218112 -2023-04-27 05:35:08,162 - Epoch: [304][ 518/ 518] Overall Loss 2.900563 Objective Loss 2.900563 LR 0.000008 Time 0.217721 -2023-04-27 05:35:08,238 - --- validate (epoch=304)----------- -2023-04-27 05:35:08,238 - 4952 samples (32 per mini-batch) -2023-04-27 05:35:16,559 - Epoch: [304][ 50/ 155] Loss 3.106924 mAP 0.421302 -2023-04-27 05:35:24,481 - Epoch: [304][ 100/ 155] Loss 3.066500 mAP 0.444959 -2023-04-27 05:35:32,366 - Epoch: [304][ 150/ 155] Loss 3.070094 mAP 0.445770 -2023-04-27 05:35:33,091 - Epoch: [304][ 155/ 155] Loss 3.075796 mAP 0.444039 -2023-04-27 05:35:33,164 - ==> mAP: 0.44404 Loss: 3.076 - -2023-04-27 05:35:33,168 - ==> Best [mAP: 0.452178 vloss: 3.047499 Sparsity:0.00 Params: 2177087 on epoch: 302] -2023-04-27 05:35:33,168 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:35:33,203 - - -2023-04-27 05:35:33,203 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:35:44,840 - Epoch: [305][ 50/ 518] Overall Loss 2.850876 Objective Loss 2.850876 LR 0.000008 Time 0.232698 -2023-04-27 05:35:55,643 - Epoch: [305][ 100/ 518] Overall Loss 2.880279 Objective Loss 2.880279 LR 0.000008 Time 0.224355 -2023-04-27 05:36:06,482 - Epoch: [305][ 150/ 518] Overall Loss 2.896883 Objective Loss 2.896883 LR 0.000008 Time 0.221825 -2023-04-27 05:36:17,285 - Epoch: [305][ 200/ 518] Overall Loss 2.905490 Objective Loss 2.905490 LR 0.000008 Time 0.220376 -2023-04-27 05:36:28,120 - Epoch: [305][ 250/ 518] Overall Loss 2.911508 
Objective Loss 2.911508 LR 0.000008 Time 0.219631 -2023-04-27 05:36:39,015 - Epoch: [305][ 300/ 518] Overall Loss 2.912235 Objective Loss 2.912235 LR 0.000008 Time 0.219338 -2023-04-27 05:36:49,866 - Epoch: [305][ 350/ 518] Overall Loss 2.912565 Objective Loss 2.912565 LR 0.000008 Time 0.219004 -2023-04-27 05:37:00,696 - Epoch: [305][ 400/ 518] Overall Loss 2.905274 Objective Loss 2.905274 LR 0.000008 Time 0.218700 -2023-04-27 05:37:11,515 - Epoch: [305][ 450/ 518] Overall Loss 2.902083 Objective Loss 2.902083 LR 0.000008 Time 0.218438 -2023-04-27 05:37:22,363 - Epoch: [305][ 500/ 518] Overall Loss 2.907966 Objective Loss 2.907966 LR 0.000008 Time 0.218287 -2023-04-27 05:37:26,131 - Epoch: [305][ 518/ 518] Overall Loss 2.910261 Objective Loss 2.910261 LR 0.000008 Time 0.217975 -2023-04-27 05:37:26,208 - --- validate (epoch=305)----------- -2023-04-27 05:37:26,209 - 4952 samples (32 per mini-batch) -2023-04-27 05:37:34,491 - Epoch: [305][ 50/ 155] Loss 3.096755 mAP 0.438945 -2023-04-27 05:37:42,465 - Epoch: [305][ 100/ 155] Loss 3.075440 mAP 0.447436 -2023-04-27 05:37:50,338 - Epoch: [305][ 150/ 155] Loss 3.071261 mAP 0.444458 -2023-04-27 05:37:51,058 - Epoch: [305][ 155/ 155] Loss 3.073961 mAP 0.443614 -2023-04-27 05:37:51,126 - ==> mAP: 0.44361 Loss: 3.074 - -2023-04-27 05:37:51,130 - ==> Best [mAP: 0.452178 vloss: 3.047499 Sparsity:0.00 Params: 2177087 on epoch: 302] -2023-04-27 05:37:51,130 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:37:51,164 - - -2023-04-27 05:37:51,164 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:38:02,769 - Epoch: [306][ 50/ 518] Overall Loss 2.907435 Objective Loss 2.907435 LR 0.000008 Time 0.232047 -2023-04-27 05:38:13,480 - Epoch: [306][ 100/ 518] Overall Loss 2.904561 Objective Loss 2.904561 LR 0.000008 Time 0.223114 -2023-04-27 05:38:24,316 - Epoch: [306][ 150/ 518] Overall Loss 2.913193 Objective Loss 2.913193 LR 0.000008 Time 0.220976 -2023-04-27 05:38:35,140 - Epoch: 
[306][ 200/ 518] Overall Loss 2.918202 Objective Loss 2.918202 LR 0.000008 Time 0.219841 -2023-04-27 05:38:45,896 - Epoch: [306][ 250/ 518] Overall Loss 2.920171 Objective Loss 2.920171 LR 0.000008 Time 0.218892 -2023-04-27 05:38:56,752 - Epoch: [306][ 300/ 518] Overall Loss 2.919023 Objective Loss 2.919023 LR 0.000008 Time 0.218593 -2023-04-27 05:39:07,600 - Epoch: [306][ 350/ 518] Overall Loss 2.927298 Objective Loss 2.927298 LR 0.000008 Time 0.218353 -2023-04-27 05:39:18,457 - Epoch: [306][ 400/ 518] Overall Loss 2.925261 Objective Loss 2.925261 LR 0.000008 Time 0.218199 -2023-04-27 05:39:29,258 - Epoch: [306][ 450/ 518] Overall Loss 2.924377 Objective Loss 2.924377 LR 0.000008 Time 0.217952 -2023-04-27 05:39:40,000 - Epoch: [306][ 500/ 518] Overall Loss 2.921497 Objective Loss 2.921497 LR 0.000008 Time 0.217638 -2023-04-27 05:39:43,718 - Epoch: [306][ 518/ 518] Overall Loss 2.921960 Objective Loss 2.921960 LR 0.000008 Time 0.217253 -2023-04-27 05:39:43,794 - --- validate (epoch=306)----------- -2023-04-27 05:39:43,795 - 4952 samples (32 per mini-batch) -2023-04-27 05:39:52,143 - Epoch: [306][ 50/ 155] Loss 3.052648 mAP 0.456124 -2023-04-27 05:40:00,069 - Epoch: [306][ 100/ 155] Loss 3.063056 mAP 0.458237 -2023-04-27 05:40:07,982 - Epoch: [306][ 150/ 155] Loss 3.055885 mAP 0.454517 -2023-04-27 05:40:08,694 - Epoch: [306][ 155/ 155] Loss 3.054171 mAP 0.452556 -2023-04-27 05:40:08,758 - ==> mAP: 0.45256 Loss: 3.054 - -2023-04-27 05:40:08,762 - ==> Best [mAP: 0.452556 vloss: 3.054171 Sparsity:0.00 Params: 2177087 on epoch: 306] -2023-04-27 05:40:08,762 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:40:08,811 - - -2023-04-27 05:40:08,811 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:40:20,415 - Epoch: [307][ 50/ 518] Overall Loss 2.912392 Objective Loss 2.912392 LR 0.000008 Time 0.232021 -2023-04-27 05:40:31,244 - Epoch: [307][ 100/ 518] Overall Loss 2.887520 Objective Loss 2.887520 LR 0.000008 Time 0.224290 
-2023-04-27 05:40:41,976 - Epoch: [307][ 150/ 518] Overall Loss 2.882502 Objective Loss 2.882502 LR 0.000008 Time 0.221063 -2023-04-27 05:40:52,772 - Epoch: [307][ 200/ 518] Overall Loss 2.884478 Objective Loss 2.884478 LR 0.000008 Time 0.219769 -2023-04-27 05:41:03,549 - Epoch: [307][ 250/ 518] Overall Loss 2.900130 Objective Loss 2.900130 LR 0.000008 Time 0.218917 -2023-04-27 05:41:14,314 - Epoch: [307][ 300/ 518] Overall Loss 2.900743 Objective Loss 2.900743 LR 0.000008 Time 0.218306 -2023-04-27 05:41:25,072 - Epoch: [307][ 350/ 518] Overall Loss 2.903699 Objective Loss 2.903699 LR 0.000008 Time 0.217854 -2023-04-27 05:41:35,873 - Epoch: [307][ 400/ 518] Overall Loss 2.901865 Objective Loss 2.901865 LR 0.000008 Time 0.217619 -2023-04-27 05:41:46,628 - Epoch: [307][ 450/ 518] Overall Loss 2.901388 Objective Loss 2.901388 LR 0.000008 Time 0.217336 -2023-04-27 05:41:57,439 - Epoch: [307][ 500/ 518] Overall Loss 2.902333 Objective Loss 2.902333 LR 0.000008 Time 0.217223 -2023-04-27 05:42:01,192 - Epoch: [307][ 518/ 518] Overall Loss 2.904080 Objective Loss 2.904080 LR 0.000008 Time 0.216919 -2023-04-27 05:42:01,271 - --- validate (epoch=307)----------- -2023-04-27 05:42:01,271 - 4952 samples (32 per mini-batch) -2023-04-27 05:42:09,538 - Epoch: [307][ 50/ 155] Loss 3.084528 mAP 0.447841 -2023-04-27 05:42:17,501 - Epoch: [307][ 100/ 155] Loss 3.082506 mAP 0.448938 -2023-04-27 05:42:25,405 - Epoch: [307][ 150/ 155] Loss 3.065253 mAP 0.454253 -2023-04-27 05:42:26,131 - Epoch: [307][ 155/ 155] Loss 3.062401 mAP 0.454234 -2023-04-27 05:42:26,203 - ==> mAP: 0.45423 Loss: 3.062 - -2023-04-27 05:42:26,206 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 05:42:26,206 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 05:42:26,256 - - -2023-04-27 05:42:26,256 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 05:42:37,804 - Epoch: [308][ 50/ 518] Overall Loss 2.884972 Objective Loss 
2.884972 LR 0.000008 Time 0.230896
-2023-04-27 05:42:48,656 - Epoch: [308][ 100/ 518] Overall Loss 2.905083 Objective Loss 2.905083 LR 0.000008 Time 0.223961
-2023-04-27 05:42:59,437 - Epoch: [308][ 150/ 518] Overall Loss 2.899229 Objective Loss 2.899229 LR 0.000008 Time 0.221170
-2023-04-27 05:43:10,273 - Epoch: [308][ 200/ 518] Overall Loss 2.893310 Objective Loss 2.893310 LR 0.000008 Time 0.220050
-2023-04-27 05:43:21,071 - Epoch: [308][ 250/ 518] Overall Loss 2.898815 Objective Loss 2.898815 LR 0.000008 Time 0.219226
-2023-04-27 05:43:31,889 - Epoch: [308][ 300/ 518] Overall Loss 2.899628 Objective Loss 2.899628 LR 0.000008 Time 0.218743
-2023-04-27 05:43:42,744 - Epoch: [308][ 350/ 518] Overall Loss 2.897010 Objective Loss 2.897010 LR 0.000008 Time 0.218504
-2023-04-27 05:43:53,632 - Epoch: [308][ 400/ 518] Overall Loss 2.896687 Objective Loss 2.896687 LR 0.000008 Time 0.218405
-2023-04-27 05:44:04,405 - Epoch: [308][ 450/ 518] Overall Loss 2.894929 Objective Loss 2.894929 LR 0.000008 Time 0.218075
-2023-04-27 05:44:15,144 - Epoch: [308][ 500/ 518] Overall Loss 2.890560 Objective Loss 2.890560 LR 0.000008 Time 0.217743
-2023-04-27 05:44:18,888 - Epoch: [308][ 518/ 518] Overall Loss 2.890594 Objective Loss 2.890594 LR 0.000008 Time 0.217404
-2023-04-27 05:44:18,964 - --- validate (epoch=308)-----------
-2023-04-27 05:44:18,964 - 4952 samples (32 per mini-batch)
-2023-04-27 05:44:27,297 - Epoch: [308][ 50/ 155] Loss 3.076963 mAP 0.436861
-2023-04-27 05:44:35,251 - Epoch: [308][ 100/ 155] Loss 3.031208 mAP 0.451594
-2023-04-27 05:44:43,155 - Epoch: [308][ 150/ 155] Loss 3.037694 mAP 0.447079
-2023-04-27 05:44:43,874 - Epoch: [308][ 155/ 155] Loss 3.041896 mAP 0.446099
-2023-04-27 05:44:43,957 - ==> mAP: 0.44610 Loss: 3.042
-
-2023-04-27 05:44:43,962 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 05:44:43,962 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 05:44:43,997 - 
-
-2023-04-27 05:44:43,997 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 05:44:55,747 - Epoch: [309][ 50/ 518] Overall Loss 2.965294 Objective Loss 2.965294 LR 0.000008 Time 0.234957
-2023-04-27 05:45:06,643 - Epoch: [309][ 100/ 518] Overall Loss 2.973806 Objective Loss 2.973806 LR 0.000008 Time 0.226419
-2023-04-27 05:45:17,406 - Epoch: [309][ 150/ 518] Overall Loss 2.945564 Objective Loss 2.945564 LR 0.000008 Time 0.222686
-2023-04-27 05:45:28,195 - Epoch: [309][ 200/ 518] Overall Loss 2.937545 Objective Loss 2.937545 LR 0.000008 Time 0.220952
-2023-04-27 05:45:39,001 - Epoch: [309][ 250/ 518] Overall Loss 2.922190 Objective Loss 2.922190 LR 0.000008 Time 0.219979
-2023-04-27 05:45:49,810 - Epoch: [309][ 300/ 518] Overall Loss 2.925093 Objective Loss 2.925093 LR 0.000008 Time 0.219342
-2023-04-27 05:46:00,554 - Epoch: [309][ 350/ 518] Overall Loss 2.914035 Objective Loss 2.914035 LR 0.000008 Time 0.218700
-2023-04-27 05:46:11,358 - Epoch: [309][ 400/ 518] Overall Loss 2.915409 Objective Loss 2.915409 LR 0.000008 Time 0.218369
-2023-04-27 05:46:22,194 - Epoch: [309][ 450/ 518] Overall Loss 2.915892 Objective Loss 2.915892 LR 0.000008 Time 0.218183
-2023-04-27 05:46:33,048 - Epoch: [309][ 500/ 518] Overall Loss 2.915344 Objective Loss 2.915344 LR 0.000008 Time 0.218069
-2023-04-27 05:46:36,819 - Epoch: [309][ 518/ 518] Overall Loss 2.914107 Objective Loss 2.914107 LR 0.000008 Time 0.217770
-2023-04-27 05:46:36,897 - --- validate (epoch=309)-----------
-2023-04-27 05:46:36,897 - 4952 samples (32 per mini-batch)
-2023-04-27 05:46:45,175 - Epoch: [309][ 50/ 155] Loss 3.065429 mAP 0.425488
-2023-04-27 05:46:53,079 - Epoch: [309][ 100/ 155] Loss 3.084452 mAP 0.439458
-2023-04-27 05:47:00,941 - Epoch: [309][ 150/ 155] Loss 3.062605 mAP 0.442352
-2023-04-27 05:47:01,652 - Epoch: [309][ 155/ 155] Loss 3.065904 mAP 0.442887
-2023-04-27 05:47:01,731 - ==> mAP: 0.44289 Loss: 3.066
-
-2023-04-27 05:47:01,734 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 05:47:01,734 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 05:47:01,768 - 
-
-2023-04-27 05:47:01,769 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 05:47:13,338 - Epoch: [310][ 50/ 518] Overall Loss 2.905827 Objective Loss 2.905827 LR 0.000008 Time 0.231329
-2023-04-27 05:47:24,178 - Epoch: [310][ 100/ 518] Overall Loss 2.908496 Objective Loss 2.908496 LR 0.000008 Time 0.224049
-2023-04-27 05:47:35,075 - Epoch: [310][ 150/ 518] Overall Loss 2.883667 Objective Loss 2.883667 LR 0.000008 Time 0.222004
-2023-04-27 05:47:45,898 - Epoch: [310][ 200/ 518] Overall Loss 2.892712 Objective Loss 2.892712 LR 0.000008 Time 0.220607
-2023-04-27 05:47:56,729 - Epoch: [310][ 250/ 518] Overall Loss 2.894882 Objective Loss 2.894882 LR 0.000008 Time 0.219807
-2023-04-27 05:48:07,556 - Epoch: [310][ 300/ 518] Overall Loss 2.891861 Objective Loss 2.891861 LR 0.000008 Time 0.219257
-2023-04-27 05:48:18,351 - Epoch: [310][ 350/ 518] Overall Loss 2.900937 Objective Loss 2.900937 LR 0.000008 Time 0.218772
-2023-04-27 05:48:29,133 - Epoch: [310][ 400/ 518] Overall Loss 2.898735 Objective Loss 2.898735 LR 0.000008 Time 0.218376
-2023-04-27 05:48:39,911 - Epoch: [310][ 450/ 518] Overall Loss 2.900074 Objective Loss 2.900074 LR 0.000008 Time 0.218061
-2023-04-27 05:48:50,713 - Epoch: [310][ 500/ 518] Overall Loss 2.900069 Objective Loss 2.900069 LR 0.000008 Time 0.217856
-2023-04-27 05:48:54,451 - Epoch: [310][ 518/ 518] Overall Loss 2.899907 Objective Loss 2.899907 LR 0.000008 Time 0.217501
-2023-04-27 05:48:54,526 - --- validate (epoch=310)-----------
-2023-04-27 05:48:54,526 - 4952 samples (32 per mini-batch)
-2023-04-27 05:49:02,801 - Epoch: [310][ 50/ 155] Loss 3.056844 mAP 0.433517
-2023-04-27 05:49:10,719 - Epoch: [310][ 100/ 155] Loss 3.057278 mAP 0.434470
-2023-04-27 05:49:18,585 - Epoch: [310][ 150/ 155] Loss 3.057944 mAP 0.434180
-2023-04-27 05:49:19,302 - Epoch: [310][ 155/ 155] Loss 3.062165 mAP 0.433084
-2023-04-27 05:49:19,369 - ==> mAP: 0.43308 Loss: 3.062
-
-2023-04-27 05:49:19,373 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 05:49:19,373 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 05:49:19,408 - 
-
-2023-04-27 05:49:19,408 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 05:49:31,131 - Epoch: [311][ 50/ 518] Overall Loss 2.864396 Objective Loss 2.864396 LR 0.000008 Time 0.234407
-2023-04-27 05:49:41,947 - Epoch: [311][ 100/ 518] Overall Loss 2.880030 Objective Loss 2.880030 LR 0.000008 Time 0.225350
-2023-04-27 05:49:52,728 - Epoch: [311][ 150/ 518] Overall Loss 2.887256 Objective Loss 2.887256 LR 0.000008 Time 0.222094
-2023-04-27 05:50:03,569 - Epoch: [311][ 200/ 518] Overall Loss 2.898363 Objective Loss 2.898363 LR 0.000008 Time 0.220767
-2023-04-27 05:50:14,354 - Epoch: [311][ 250/ 518] Overall Loss 2.907645 Objective Loss 2.907645 LR 0.000008 Time 0.219750
-2023-04-27 05:50:25,134 - Epoch: [311][ 300/ 518] Overall Loss 2.911398 Objective Loss 2.911398 LR 0.000008 Time 0.219053
-2023-04-27 05:50:35,917 - Epoch: [311][ 350/ 518] Overall Loss 2.903514 Objective Loss 2.903514 LR 0.000008 Time 0.218562
-2023-04-27 05:50:46,774 - Epoch: [311][ 400/ 518] Overall Loss 2.904510 Objective Loss 2.904510 LR 0.000008 Time 0.218382
-2023-04-27 05:50:57,606 - Epoch: [311][ 450/ 518] Overall Loss 2.906296 Objective Loss 2.906296 LR 0.000008 Time 0.218185
-2023-04-27 05:51:08,529 - Epoch: [311][ 500/ 518] Overall Loss 2.902685 Objective Loss 2.902685 LR 0.000008 Time 0.218208
-2023-04-27 05:51:12,286 - Epoch: [311][ 518/ 518] Overall Loss 2.901888 Objective Loss 2.901888 LR 0.000008 Time 0.217877
-2023-04-27 05:51:12,363 - --- validate (epoch=311)-----------
-2023-04-27 05:51:12,363 - 4952 samples (32 per mini-batch)
-2023-04-27 05:51:20,742 - Epoch: [311][ 50/ 155] Loss 3.072769 mAP 0.438171
-2023-04-27 05:51:28,701 - Epoch: [311][ 100/ 155] Loss 3.066971 mAP 0.443615
-2023-04-27 05:51:36,586 - Epoch: [311][ 150/ 155] Loss 3.069183 mAP 0.444519
-2023-04-27 05:51:37,304 - Epoch: [311][ 155/ 155] Loss 3.064678 mAP 0.443439
-2023-04-27 05:51:37,378 - ==> mAP: 0.44344 Loss: 3.065
-
-2023-04-27 05:51:37,382 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 05:51:37,382 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 05:51:37,417 - 
-
-2023-04-27 05:51:37,417 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 05:51:49,114 - Epoch: [312][ 50/ 518] Overall Loss 2.938758 Objective Loss 2.938758 LR 0.000008 Time 0.233891
-2023-04-27 05:51:59,951 - Epoch: [312][ 100/ 518] Overall Loss 2.918911 Objective Loss 2.918911 LR 0.000008 Time 0.225299
-2023-04-27 05:52:10,769 - Epoch: [312][ 150/ 518] Overall Loss 2.911491 Objective Loss 2.911491 LR 0.000008 Time 0.222313
-2023-04-27 05:52:21,597 - Epoch: [312][ 200/ 518] Overall Loss 2.910321 Objective Loss 2.910321 LR 0.000008 Time 0.220863
-2023-04-27 05:52:32,462 - Epoch: [312][ 250/ 518] Overall Loss 2.897864 Objective Loss 2.897864 LR 0.000008 Time 0.220145
-2023-04-27 05:52:43,392 - Epoch: [312][ 300/ 518] Overall Loss 2.901333 Objective Loss 2.901333 LR 0.000008 Time 0.219883
-2023-04-27 05:52:54,217 - Epoch: [312][ 350/ 518] Overall Loss 2.904451 Objective Loss 2.904451 LR 0.000008 Time 0.219394
-2023-04-27 05:53:05,075 - Epoch: [312][ 400/ 518] Overall Loss 2.906437 Objective Loss 2.906437 LR 0.000008 Time 0.219111
-2023-04-27 05:53:15,988 - Epoch: [312][ 450/ 518] Overall Loss 2.909021 Objective Loss 2.909021 LR 0.000008 Time 0.219014
-2023-04-27 05:53:26,870 - Epoch: [312][ 500/ 518] Overall Loss 2.908665 Objective Loss 2.908665 LR 0.000008 Time 0.218872
-2023-04-27 05:53:30,586 - Epoch: [312][ 518/ 518] Overall Loss 2.908173 Objective Loss 2.908173 LR 0.000008 Time 0.218440
-2023-04-27 05:53:30,663 - --- validate (epoch=312)-----------
-2023-04-27 05:53:30,663 - 4952 samples (32 per mini-batch)
-2023-04-27 05:53:39,000 - Epoch: [312][ 50/ 155] Loss 3.086608 mAP 0.437746
-2023-04-27 05:53:46,953 - Epoch: [312][ 100/ 155] Loss 3.061403 mAP 0.441407
-2023-04-27 05:53:54,850 - Epoch: [312][ 150/ 155] Loss 3.065628 mAP 0.443886
-2023-04-27 05:53:55,579 - Epoch: [312][ 155/ 155] Loss 3.067960 mAP 0.444896
-2023-04-27 05:53:55,655 - ==> mAP: 0.44490 Loss: 3.068
-
-2023-04-27 05:53:55,659 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 05:53:55,659 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 05:53:55,693 - 
-
-2023-04-27 05:53:55,693 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 05:54:07,435 - Epoch: [313][ 50/ 518] Overall Loss 2.907078 Objective Loss 2.907078 LR 0.000008 Time 0.234793
-2023-04-27 05:54:18,288 - Epoch: [313][ 100/ 518] Overall Loss 2.911513 Objective Loss 2.911513 LR 0.000008 Time 0.225909
-2023-04-27 05:54:29,108 - Epoch: [313][ 150/ 518] Overall Loss 2.924622 Objective Loss 2.924622 LR 0.000008 Time 0.222729
-2023-04-27 05:54:39,912 - Epoch: [313][ 200/ 518] Overall Loss 2.914248 Objective Loss 2.914248 LR 0.000008 Time 0.221056
-2023-04-27 05:54:50,720 - Epoch: [313][ 250/ 518] Overall Loss 2.911086 Objective Loss 2.911086 LR 0.000008 Time 0.220070
-2023-04-27 05:55:01,522 - Epoch: [313][ 300/ 518] Overall Loss 2.907525 Objective Loss 2.907525 LR 0.000008 Time 0.219395
-2023-04-27 05:55:12,332 - Epoch: [313][ 350/ 518] Overall Loss 2.912121 Objective Loss 2.912121 LR 0.000008 Time 0.218934
-2023-04-27 05:55:23,090 - Epoch: [313][ 400/ 518] Overall Loss 2.905280 Objective Loss 2.905280 LR 0.000008 Time 0.218458
-2023-04-27 05:55:33,900 - Epoch: [313][ 450/ 518] Overall Loss 2.896428 Objective Loss 2.896428 LR 0.000008 Time 0.218203
-2023-04-27 05:55:44,687 - Epoch: [313][ 500/ 518] Overall Loss 2.895912 Objective Loss 2.895912 LR 0.000008 Time 0.217954
-2023-04-27 05:55:48,489 - Epoch: [313][ 518/ 518] Overall Loss 2.895742 Objective Loss 2.895742 LR 0.000008 Time 0.217720
-2023-04-27 05:55:48,564 - --- validate (epoch=313)-----------
-2023-04-27 05:55:48,564 - 4952 samples (32 per mini-batch)
-2023-04-27 05:55:56,888 - Epoch: [313][ 50/ 155] Loss 3.143976 mAP 0.436817
-2023-04-27 05:56:04,816 - Epoch: [313][ 100/ 155] Loss 3.106638 mAP 0.438189
-2023-04-27 05:56:12,696 - Epoch: [313][ 150/ 155] Loss 3.105042 mAP 0.440790
-2023-04-27 05:56:13,414 - Epoch: [313][ 155/ 155] Loss 3.106186 mAP 0.440568
-2023-04-27 05:56:13,492 - ==> mAP: 0.44057 Loss: 3.106
-
-2023-04-27 05:56:13,497 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 05:56:13,497 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 05:56:13,532 - 
-
-2023-04-27 05:56:13,532 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 05:56:25,102 - Epoch: [314][ 50/ 518] Overall Loss 2.896385 Objective Loss 2.896385 LR 0.000008 Time 0.231341
-2023-04-27 05:56:35,887 - Epoch: [314][ 100/ 518] Overall Loss 2.900779 Objective Loss 2.900779 LR 0.000008 Time 0.223508
-2023-04-27 05:56:46,672 - Epoch: [314][ 150/ 518] Overall Loss 2.915138 Objective Loss 2.915138 LR 0.000008 Time 0.220896
-2023-04-27 05:56:57,533 - Epoch: [314][ 200/ 518] Overall Loss 2.923622 Objective Loss 2.923622 LR 0.000008 Time 0.219969
-2023-04-27 05:57:08,319 - Epoch: [314][ 250/ 518] Overall Loss 2.904862 Objective Loss 2.904862 LR 0.000008 Time 0.219113
-2023-04-27 05:57:19,090 - Epoch: [314][ 300/ 518] Overall Loss 2.900474 Objective Loss 2.900474 LR 0.000008 Time 0.218491
-2023-04-27 05:57:29,888 - Epoch: [314][ 350/ 518] Overall Loss 2.899119 Objective Loss 2.899119 LR 0.000008 Time 0.218126
-2023-04-27 05:57:40,673 - Epoch: [314][ 400/ 518] Overall Loss 2.902065 Objective Loss 2.902065 LR 0.000008 Time 0.217817
-2023-04-27 05:57:51,451 - Epoch: [314][ 450/ 518] Overall Loss 2.899959 Objective Loss 2.899959 LR 0.000008 Time 0.217563
-2023-04-27 05:58:02,241 - Epoch: [314][ 500/ 518] Overall Loss 2.899359 Objective Loss 2.899359 LR 0.000008 Time 0.217385
-2023-04-27 05:58:06,008 - Epoch: [314][ 518/ 518] Overall Loss 2.897729 Objective Loss 2.897729 LR 0.000008 Time 0.217101
-2023-04-27 05:58:06,083 - --- validate (epoch=314)-----------
-2023-04-27 05:58:06,084 - 4952 samples (32 per mini-batch)
-2023-04-27 05:58:14,372 - Epoch: [314][ 50/ 155] Loss 3.076362 mAP 0.436317
-2023-04-27 05:58:22,240 - Epoch: [314][ 100/ 155] Loss 3.067671 mAP 0.439035
-2023-04-27 05:58:30,120 - Epoch: [314][ 150/ 155] Loss 3.066983 mAP 0.436857
-2023-04-27 05:58:30,837 - Epoch: [314][ 155/ 155] Loss 3.062856 mAP 0.438348
-2023-04-27 05:58:30,920 - ==> mAP: 0.43835 Loss: 3.063
-
-2023-04-27 05:58:30,924 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 05:58:30,924 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 05:58:30,960 - 
-
-2023-04-27 05:58:30,960 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 05:58:42,723 - Epoch: [315][ 50/ 518] Overall Loss 2.922623 Objective Loss 2.922623 LR 0.000008 Time 0.235202
-2023-04-27 05:58:53,595 - Epoch: [315][ 100/ 518] Overall Loss 2.904281 Objective Loss 2.904281 LR 0.000008 Time 0.226307
-2023-04-27 05:59:04,361 - Epoch: [315][ 150/ 518] Overall Loss 2.886336 Objective Loss 2.886336 LR 0.000008 Time 0.222637
-2023-04-27 05:59:15,148 - Epoch: [315][ 200/ 518] Overall Loss 2.883618 Objective Loss 2.883618 LR 0.000008 Time 0.220901
-2023-04-27 05:59:25,998 - Epoch: [315][ 250/ 518] Overall Loss 2.887791 Objective Loss 2.887791 LR 0.000008 Time 0.220117
-2023-04-27 05:59:36,767 - Epoch: [315][ 300/ 518] Overall Loss 2.890666 Objective Loss 2.890666 LR 0.000008 Time 0.219322
-2023-04-27 05:59:47,516 - Epoch: [315][ 350/ 518] Overall Loss 2.893714 Objective Loss 2.893714 LR 0.000008 Time 0.218696
-2023-04-27 05:59:58,311 - Epoch: [315][ 400/ 518] Overall Loss 2.894133 Objective Loss 2.894133 LR 0.000008 Time 0.218344
-2023-04-27 06:00:09,073 - Epoch: [315][ 450/ 518] Overall Loss 2.891237 Objective Loss 2.891237 LR 0.000008 Time 0.217995
-2023-04-27 06:00:19,886 - Epoch: [315][ 500/ 518] Overall Loss 2.896078 Objective Loss 2.896078 LR 0.000008 Time 0.217818
-2023-04-27 06:00:23,680 - Epoch: [315][ 518/ 518] Overall Loss 2.897545 Objective Loss 2.897545 LR 0.000008 Time 0.217573
-2023-04-27 06:00:23,756 - --- validate (epoch=315)-----------
-2023-04-27 06:00:23,756 - 4952 samples (32 per mini-batch)
-2023-04-27 06:00:32,091 - Epoch: [315][ 50/ 155] Loss 3.036800 mAP 0.469541
-2023-04-27 06:00:40,025 - Epoch: [315][ 100/ 155] Loss 3.033217 mAP 0.454582
-2023-04-27 06:00:47,906 - Epoch: [315][ 150/ 155] Loss 3.047841 mAP 0.453246
-2023-04-27 06:00:48,629 - Epoch: [315][ 155/ 155] Loss 3.051960 mAP 0.453113
-2023-04-27 06:00:48,701 - ==> mAP: 0.45311 Loss: 3.052
-
-2023-04-27 06:00:48,705 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:00:48,705 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:00:48,739 - 
-
-2023-04-27 06:00:48,739 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:01:00,314 - Epoch: [316][ 50/ 518] Overall Loss 2.851189 Objective Loss 2.851189 LR 0.000008 Time 0.231446
-2023-04-27 06:01:11,131 - Epoch: [316][ 100/ 518] Overall Loss 2.842888 Objective Loss 2.842888 LR 0.000008 Time 0.223880
-2023-04-27 06:01:21,967 - Epoch: [316][ 150/ 518] Overall Loss 2.843306 Objective Loss 2.843306 LR 0.000008 Time 0.221482
-2023-04-27 06:01:32,734 - Epoch: [316][ 200/ 518] Overall Loss 2.843614 Objective Loss 2.843614 LR 0.000008 Time 0.219935
-2023-04-27 06:01:43,537 - Epoch: [316][ 250/ 518] Overall Loss 2.851169 Objective Loss 2.851169 LR 0.000008 Time 0.219156
-2023-04-27 06:01:54,403 - Epoch: [316][ 300/ 518] Overall Loss 2.861848 Objective Loss 2.861848 LR 0.000008 Time 0.218843
-2023-04-27 06:02:05,233 - Epoch: [316][ 350/ 518] Overall Loss 2.865010 Objective Loss 2.865010 LR 0.000008 Time 0.218520
-2023-04-27 06:02:16,040 - Epoch: [316][ 400/ 518] Overall Loss 2.867172 Objective Loss 2.867172 LR 0.000008 Time 0.218219
-2023-04-27 06:02:26,835 - Epoch: [316][ 450/ 518] Overall Loss 2.871870 Objective Loss 2.871870 LR 0.000008 Time 0.217957
-2023-04-27 06:02:37,625 - Epoch: [316][ 500/ 518] Overall Loss 2.874390 Objective Loss 2.874390 LR 0.000008 Time 0.217738
-2023-04-27 06:02:41,356 - Epoch: [316][ 518/ 518] Overall Loss 2.874196 Objective Loss 2.874196 LR 0.000008 Time 0.217373
-2023-04-27 06:02:41,433 - --- validate (epoch=316)-----------
-2023-04-27 06:02:41,433 - 4952 samples (32 per mini-batch)
-2023-04-27 06:02:49,687 - Epoch: [316][ 50/ 155] Loss 3.034567 mAP 0.446315
-2023-04-27 06:02:57,600 - Epoch: [316][ 100/ 155] Loss 3.061110 mAP 0.449510
-2023-04-27 06:03:05,491 - Epoch: [316][ 150/ 155] Loss 3.052220 mAP 0.448789
-2023-04-27 06:03:06,214 - Epoch: [316][ 155/ 155] Loss 3.049893 mAP 0.450656
-2023-04-27 06:03:06,289 - ==> mAP: 0.45066 Loss: 3.050
-
-2023-04-27 06:03:06,293 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:03:06,293 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:03:06,327 - 
-
-2023-04-27 06:03:06,327 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:03:18,034 - Epoch: [317][ 50/ 518] Overall Loss 2.920475 Objective Loss 2.920475 LR 0.000008 Time 0.234076
-2023-04-27 06:03:28,811 - Epoch: [317][ 100/ 518] Overall Loss 2.918660 Objective Loss 2.918660 LR 0.000008 Time 0.224790
-2023-04-27 06:03:39,552 - Epoch: [317][ 150/ 518] Overall Loss 2.910671 Objective Loss 2.910671 LR 0.000008 Time 0.221462
-2023-04-27 06:03:50,293 - Epoch: [317][ 200/ 518] Overall Loss 2.911461 Objective Loss 2.911461 LR 0.000008 Time 0.219791
-2023-04-27 06:04:01,103 - Epoch: [317][ 250/ 518] Overall Loss 2.911880 Objective Loss 2.911880 LR 0.000008 Time 0.219065
-2023-04-27 06:04:11,899 - Epoch: [317][ 300/ 518] Overall Loss 2.908811 Objective Loss 2.908811 LR 0.000008 Time 0.218538
-2023-04-27 06:04:22,691 - Epoch: [317][ 350/ 518] Overall Loss 2.905454 Objective Loss 2.905454 LR 0.000008 Time 0.218149
-2023-04-27 06:04:33,519 - Epoch: [317][ 400/ 518] Overall Loss 2.906999 Objective Loss 2.906999 LR 0.000008 Time 0.217945
-2023-04-27 06:04:44,398 - Epoch: [317][ 450/ 518] Overall Loss 2.899917 Objective Loss 2.899917 LR 0.000008 Time 0.217902
-2023-04-27 06:04:55,221 - Epoch: [317][ 500/ 518] Overall Loss 2.905916 Objective Loss 2.905916 LR 0.000008 Time 0.217754
-2023-04-27 06:04:58,978 - Epoch: [317][ 518/ 518] Overall Loss 2.904122 Objective Loss 2.904122 LR 0.000008 Time 0.217440
-2023-04-27 06:04:59,054 - --- validate (epoch=317)-----------
-2023-04-27 06:04:59,055 - 4952 samples (32 per mini-batch)
-2023-04-27 06:05:07,390 - Epoch: [317][ 50/ 155] Loss 3.117379 mAP 0.441125
-2023-04-27 06:05:15,340 - Epoch: [317][ 100/ 155] Loss 3.064856 mAP 0.445234
-2023-04-27 06:05:23,246 - Epoch: [317][ 150/ 155] Loss 3.048498 mAP 0.446569
-2023-04-27 06:05:23,972 - Epoch: [317][ 155/ 155] Loss 3.048645 mAP 0.447270
-2023-04-27 06:05:24,045 - ==> mAP: 0.44727 Loss: 3.049
-
-2023-04-27 06:05:24,049 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:05:24,049 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:05:24,083 - 
-
-2023-04-27 06:05:24,083 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:05:35,560 - Epoch: [318][ 50/ 518] Overall Loss 2.797829 Objective Loss 2.797829 LR 0.000008 Time 0.229475
-2023-04-27 06:05:46,356 - Epoch: [318][ 100/ 518] Overall Loss 2.812802 Objective Loss 2.812802 LR 0.000008 Time 0.222679
-2023-04-27 06:05:57,213 - Epoch: [318][ 150/ 518] Overall Loss 2.845383 Objective Loss 2.845383 LR 0.000008 Time 0.220826
-2023-04-27 06:06:08,086 - Epoch: [318][ 200/ 518] Overall Loss 2.868431 Objective Loss 2.868431 LR 0.000008 Time 0.219979
-2023-04-27 06:06:18,885 - Epoch: [318][ 250/ 518] Overall Loss 2.875000 Objective Loss 2.875000 LR 0.000008 Time 0.219169
-2023-04-27 06:06:29,762 - Epoch: [318][ 300/ 518] Overall Loss 2.887547 Objective Loss 2.887547 LR 0.000008 Time 0.218894
-2023-04-27 06:06:40,604 - Epoch: [318][ 350/ 518] Overall Loss 2.890318 Objective Loss 2.890318 LR 0.000008 Time 0.218596
-2023-04-27 06:06:51,479 - Epoch: [318][ 400/ 518] Overall Loss 2.898320 Objective Loss 2.898320 LR 0.000008 Time 0.218454
-2023-04-27 06:07:02,338 - Epoch: [318][ 450/ 518] Overall Loss 2.900876 Objective Loss 2.900876 LR 0.000008 Time 0.218311
-2023-04-27 06:07:13,100 - Epoch: [318][ 500/ 518] Overall Loss 2.900091 Objective Loss 2.900091 LR 0.000008 Time 0.218000
-2023-04-27 06:07:16,828 - Epoch: [318][ 518/ 518] Overall Loss 2.900031 Objective Loss 2.900031 LR 0.000008 Time 0.217621
-2023-04-27 06:07:16,908 - --- validate (epoch=318)-----------
-2023-04-27 06:07:16,908 - 4952 samples (32 per mini-batch)
-2023-04-27 06:07:25,235 - Epoch: [318][ 50/ 155] Loss 3.069604 mAP 0.446033
-2023-04-27 06:07:33,120 - Epoch: [318][ 100/ 155] Loss 3.064371 mAP 0.439774
-2023-04-27 06:07:41,032 - Epoch: [318][ 150/ 155] Loss 3.055180 mAP 0.443760
-2023-04-27 06:07:41,747 - Epoch: [318][ 155/ 155] Loss 3.054714 mAP 0.441363
-2023-04-27 06:07:41,824 - ==> mAP: 0.44136 Loss: 3.055
-
-2023-04-27 06:07:41,827 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:07:41,827 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:07:41,861 - 
-
-2023-04-27 06:07:41,861 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:07:53,495 - Epoch: [319][ 50/ 518] Overall Loss 2.908601 Objective Loss 2.908601 LR 0.000008 Time 0.232632
-2023-04-27 06:08:04,234 - Epoch: [319][ 100/ 518] Overall Loss 2.925925 Objective Loss 2.925925 LR 0.000008 Time 0.223689
-2023-04-27 06:08:15,043 - Epoch: [319][ 150/ 518] Overall Loss 2.916265 Objective Loss 2.916265 LR 0.000008 Time 0.221175
-2023-04-27 06:08:25,757 - Epoch: [319][ 200/ 518] Overall Loss 2.907994 Objective Loss 2.907994 LR 0.000008 Time 0.219441
-2023-04-27 06:08:36,515 - Epoch: [319][ 250/ 518] Overall Loss 2.907755 Objective Loss 2.907755 LR 0.000008 Time 0.218578
-2023-04-27 06:08:47,234 - Epoch: [319][ 300/ 518] Overall Loss 2.900710 Objective Loss 2.900710 LR 0.000008 Time 0.217873
-2023-04-27 06:08:58,021 - Epoch: [319][ 350/ 518] Overall Loss 2.901714 Objective Loss 2.901714 LR 0.000008 Time 0.217564
-2023-04-27 06:09:08,834 - Epoch: [319][ 400/ 518] Overall Loss 2.901387 Objective Loss 2.901387 LR 0.000008 Time 0.217396
-2023-04-27 06:09:19,552 - Epoch: [319][ 450/ 518] Overall Loss 2.903243 Objective Loss 2.903243 LR 0.000008 Time 0.217056
-2023-04-27 06:09:30,372 - Epoch: [319][ 500/ 518] Overall Loss 2.899665 Objective Loss 2.899665 LR 0.000008 Time 0.216988
-2023-04-27 06:09:34,089 - Epoch: [319][ 518/ 518] Overall Loss 2.900413 Objective Loss 2.900413 LR 0.000008 Time 0.216623
-2023-04-27 06:09:34,165 - --- validate (epoch=319)-----------
-2023-04-27 06:09:34,166 - 4952 samples (32 per mini-batch)
-2023-04-27 06:09:42,479 - Epoch: [319][ 50/ 155] Loss 3.047009 mAP 0.435343
-2023-04-27 06:09:50,424 - Epoch: [319][ 100/ 155] Loss 3.060467 mAP 0.434605
-2023-04-27 06:09:58,326 - Epoch: [319][ 150/ 155] Loss 3.062634 mAP 0.438950
-2023-04-27 06:09:59,041 - Epoch: [319][ 155/ 155] Loss 3.064031 mAP 0.437389
-2023-04-27 06:09:59,113 - ==> mAP: 0.43739 Loss: 3.064
-
-2023-04-27 06:09:59,117 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:09:59,117 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:09:59,152 - 
-
-2023-04-27 06:09:59,152 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:10:10,920 - Epoch: [320][ 50/ 518] Overall Loss 2.858288 Objective Loss 2.858288 LR 0.000008 Time 0.235298
-2023-04-27 06:10:21,673 - Epoch: [320][ 100/ 518] Overall Loss 2.864087 Objective Loss 2.864087 LR 0.000008 Time 0.225162
-2023-04-27 06:10:32,619 - Epoch: [320][ 150/ 518] Overall Loss 2.896284 Objective Loss 2.896284 LR 0.000008 Time 0.223072
-2023-04-27 06:10:43,442 - Epoch: [320][ 200/ 518] Overall Loss 2.888097 Objective Loss 2.888097 LR 0.000008 Time 0.221410
-2023-04-27 06:10:54,265 - Epoch: [320][ 250/ 518] Overall Loss 2.887899 Objective Loss 2.887899 LR 0.000008 Time 0.220414
-2023-04-27 06:11:05,144 - Epoch: [320][ 300/ 518] Overall Loss 2.892971 Objective Loss 2.892971 LR 0.000008 Time 0.219936
-2023-04-27 06:11:15,912 - Epoch: [320][ 350/ 518] Overall Loss 2.888485 Objective Loss 2.888485 LR 0.000008 Time 0.219279
-2023-04-27 06:11:26,726 - Epoch: [320][ 400/ 518] Overall Loss 2.893549 Objective Loss 2.893549 LR 0.000008 Time 0.218898
-2023-04-27 06:11:37,535 - Epoch: [320][ 450/ 518] Overall Loss 2.898373 Objective Loss 2.898373 LR 0.000008 Time 0.218595
-2023-04-27 06:11:48,378 - Epoch: [320][ 500/ 518] Overall Loss 2.897671 Objective Loss 2.897671 LR 0.000008 Time 0.218418
-2023-04-27 06:11:52,176 - Epoch: [320][ 518/ 518] Overall Loss 2.899182 Objective Loss 2.899182 LR 0.000008 Time 0.218159
-2023-04-27 06:11:52,252 - --- validate (epoch=320)-----------
-2023-04-27 06:11:52,252 - 4952 samples (32 per mini-batch)
-2023-04-27 06:12:00,528 - Epoch: [320][ 50/ 155] Loss 3.092623 mAP 0.438224
-2023-04-27 06:12:08,421 - Epoch: [320][ 100/ 155] Loss 3.069668 mAP 0.444822
-2023-04-27 06:12:16,296 - Epoch: [320][ 150/ 155] Loss 3.077012 mAP 0.441841
-2023-04-27 06:12:17,009 - Epoch: [320][ 155/ 155] Loss 3.077138 mAP 0.440452
-2023-04-27 06:12:17,075 - ==> mAP: 0.44045 Loss: 3.077
-
-2023-04-27 06:12:17,078 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:12:17,079 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:12:17,113 - 
-
-2023-04-27 06:12:17,113 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:12:28,680 - Epoch: [321][ 50/ 518] Overall Loss 2.887299 Objective Loss 2.887299 LR 0.000008 Time 0.231291
-2023-04-27 06:12:39,404 - Epoch: [321][ 100/ 518] Overall Loss 2.906451 Objective Loss 2.906451 LR 0.000008 Time 0.222871
-2023-04-27 06:12:50,220 - Epoch: [321][ 150/ 518] Overall Loss 2.890821 Objective Loss 2.890821 LR 0.000008 Time 0.220677
-2023-04-27 06:13:01,075 - Epoch: [321][ 200/ 518] Overall Loss 2.892819 Objective Loss 2.892819 LR 0.000008 Time 0.219774
-2023-04-27 06:13:11,858 - Epoch: [321][ 250/ 518] Overall Loss 2.889718 Objective Loss 2.889718 LR 0.000008 Time 0.218945
-2023-04-27 06:13:22,637 - Epoch: [321][ 300/ 518] Overall Loss 2.885065 Objective Loss 2.885065 LR 0.000008 Time 0.218378
-2023-04-27 06:13:33,503 - Epoch: [321][ 350/ 518] Overall Loss 2.880246 Objective Loss 2.880246 LR 0.000008 Time 0.218224
-2023-04-27 06:13:44,342 - Epoch: [321][ 400/ 518] Overall Loss 2.877831 Objective Loss 2.877831 LR 0.000008 Time 0.218039
-2023-04-27 06:13:55,164 - Epoch: [321][ 450/ 518] Overall Loss 2.886921 Objective Loss 2.886921 LR 0.000008 Time 0.217858
-2023-04-27 06:14:06,023 - Epoch: [321][ 500/ 518] Overall Loss 2.888799 Objective Loss 2.888799 LR 0.000008 Time 0.217787
-2023-04-27 06:14:09,752 - Epoch: [321][ 518/ 518] Overall Loss 2.888622 Objective Loss 2.888622 LR 0.000008 Time 0.217416
-2023-04-27 06:14:09,830 - --- validate (epoch=321)-----------
-2023-04-27 06:14:09,830 - 4952 samples (32 per mini-batch)
-2023-04-27 06:14:18,119 - Epoch: [321][ 50/ 155] Loss 3.046041 mAP 0.446910
-2023-04-27 06:14:26,072 - Epoch: [321][ 100/ 155] Loss 3.050211 mAP 0.444533
-2023-04-27 06:14:33,980 - Epoch: [321][ 150/ 155] Loss 3.041197 mAP 0.446277
-2023-04-27 06:14:34,712 - Epoch: [321][ 155/ 155] Loss 3.040495 mAP 0.447139
-2023-04-27 06:14:34,785 - ==> mAP: 0.44714 Loss: 3.040
-
-2023-04-27 06:14:34,789 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:14:34,789 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:14:34,823 - 
-
-2023-04-27 06:14:34,824 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:14:46,516 - Epoch: [322][ 50/ 518] Overall Loss 2.934841 Objective Loss 2.934841 LR 0.000008 Time 0.233795
-2023-04-27 06:14:57,303 - Epoch: [322][ 100/ 518] Overall Loss 2.897563 Objective Loss 2.897563 LR 0.000008 Time 0.224749
-2023-04-27 06:15:08,170 - Epoch: [322][ 150/ 518] Overall Loss 2.874655 Objective Loss 2.874655 LR 0.000008 Time 0.222271
-2023-04-27 06:15:19,059 - Epoch: [322][ 200/ 518] Overall Loss 2.871823 Objective Loss 2.871823 LR 0.000008 Time 0.221140
-2023-04-27 06:15:29,954 - Epoch: [322][ 250/ 518] Overall Loss 2.878381 Objective Loss 2.878381 LR 0.000008 Time 0.220487
-2023-04-27 06:15:40,799 - Epoch: [322][ 300/ 518] Overall Loss 2.878026 Objective Loss 2.878026 LR 0.000008 Time 0.219884
-2023-04-27 06:15:51,628 - Epoch: [322][ 350/ 518] Overall Loss 2.876753 Objective Loss 2.876753 LR 0.000008 Time 0.219407
-2023-04-27 06:16:02,425 - Epoch: [322][ 400/ 518] Overall Loss 2.876608 Objective Loss 2.876608 LR 0.000008 Time 0.218970
-2023-04-27 06:16:13,289 - Epoch: [322][ 450/ 518] Overall Loss 2.880017 Objective Loss 2.880017 LR 0.000008 Time 0.218778
-2023-04-27 06:16:24,111 - Epoch: [322][ 500/ 518] Overall Loss 2.880202 Objective Loss 2.880202 LR 0.000008 Time 0.218540
-2023-04-27 06:16:27,813 - Epoch: [322][ 518/ 518] Overall Loss 2.878380 Objective Loss 2.878380 LR 0.000008 Time 0.218092
-2023-04-27 06:16:27,890 - --- validate (epoch=322)-----------
-2023-04-27 06:16:27,890 - 4952 samples (32 per mini-batch)
-2023-04-27 06:16:36,182 - Epoch: [322][ 50/ 155] Loss 3.040937 mAP 0.450597
-2023-04-27 06:16:44,072 - Epoch: [322][ 100/ 155] Loss 3.053674 mAP 0.443531
-2023-04-27 06:16:51,968 - Epoch: [322][ 150/ 155] Loss 3.054861 mAP 0.442060
-2023-04-27 06:16:52,680 - Epoch: [322][ 155/ 155] Loss 3.054744 mAP 0.441421
-2023-04-27 06:16:52,755 - ==> mAP: 0.44142 Loss: 3.055
-
-2023-04-27 06:16:52,759 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:16:52,759 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:16:52,793 - 
-
-2023-04-27 06:16:52,793 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:17:04,376 - Epoch: [323][ 50/ 518] Overall Loss 2.936197 Objective Loss 2.936197 LR 0.000008 Time 0.231617
-2023-04-27 06:17:15,272 - Epoch: [323][ 100/ 518] Overall Loss 2.923195 Objective Loss 2.923195 LR 0.000008 Time 0.224753
-2023-04-27 06:17:26,006 - Epoch: [323][ 150/ 518] Overall Loss 2.897659 Objective Loss 2.897659 LR 0.000008 Time 0.221383
-2023-04-27 06:17:36,762 - Epoch: [323][ 200/ 518] Overall Loss 2.893783 Objective Loss 2.893783 LR 0.000008 Time 0.219808
-2023-04-27 06:17:47,542 - Epoch: [323][ 250/ 518] Overall Loss 2.895339 Objective Loss 2.895339 LR 0.000008 Time 0.218959
-2023-04-27 06:17:58,290 - Epoch: [323][ 300/ 518] Overall Loss 2.900201 Objective Loss 2.900201 LR 0.000008 Time 0.218289
-2023-04-27 06:18:09,091 - Epoch: [323][ 350/ 518] Overall Loss 2.900814 Objective Loss 2.900814 LR 0.000008 Time 0.217962
-2023-04-27 06:18:19,867 - Epoch: [323][ 400/ 518] Overall Loss 2.903522 Objective Loss 2.903522 LR 0.000008 Time 0.217650
-2023-04-27 06:18:30,688 - Epoch: [323][ 450/ 518] Overall Loss 2.903875 Objective Loss 2.903875 LR 0.000008 Time 0.217511
-2023-04-27 06:18:41,542 - Epoch: [323][ 500/ 518] Overall Loss 2.906333 Objective Loss 2.906333 LR 0.000008 Time 0.217464
-2023-04-27 06:18:45,294 - Epoch: [323][ 518/ 518] Overall Loss 2.907537 Objective Loss 2.907537 LR 0.000008 Time 0.217150
-2023-04-27 06:18:45,370 - --- validate (epoch=323)-----------
-2023-04-27 06:18:45,371 - 4952 samples (32 per mini-batch)
-2023-04-27 06:18:53,655 - Epoch: [323][ 50/ 155] Loss 3.060260 mAP 0.449633
-2023-04-27 06:19:01,537 - Epoch: [323][ 100/ 155] Loss 3.061981 mAP 0.439486
-2023-04-27 06:19:09,409 - Epoch: [323][ 150/ 155] Loss 3.059260 mAP 0.436930
-2023-04-27 06:19:10,129 - Epoch: [323][ 155/ 155] Loss 3.056501 mAP 0.438413
-2023-04-27 06:19:10,204 - ==> mAP: 0.43841 Loss: 3.057
-
-2023-04-27 06:19:10,207 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:19:10,207 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:19:10,241 - 
-
-2023-04-27 06:19:10,242 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:19:21,798 - Epoch: [324][ 50/ 518] Overall Loss 2.932219 Objective Loss 2.932219 LR 0.000008 Time 0.231077
-2023-04-27 06:19:32,703 - Epoch: [324][ 100/ 518] Overall Loss 2.911842 Objective Loss 2.911842 LR 0.000008 Time 0.224576
-2023-04-27 06:19:43,534 - Epoch: [324][ 150/ 518] Overall Loss 2.913623 Objective Loss 2.913623 LR 0.000008 Time 0.221914
-2023-04-27 06:19:54,400 - Epoch: [324][ 200/ 518] Overall Loss 2.893222 Objective Loss 2.893222 LR 0.000008 Time 0.220758
-2023-04-27 06:20:05,207 - Epoch: [324][ 250/ 518] Overall Loss 2.899088 Objective Loss 2.899088 LR 0.000008 Time 0.219826
-2023-04-27 06:20:16,096 - Epoch: [324][ 300/ 518] Overall Loss 2.896315 Objective Loss 2.896315 LR 0.000008 Time 0.219482
-2023-04-27 06:20:26,879 - Epoch: [324][ 350/ 518] Overall Loss 2.888630 Objective Loss 2.888630 LR 0.000008 Time 0.218930
-2023-04-27 06:20:37,714 - Epoch: [324][ 400/ 518] Overall Loss 2.889126 Objective Loss 2.889126 LR 0.000008 Time 0.218647
-2023-04-27 06:20:48,498 - Epoch: [324][ 450/ 518] Overall Loss 2.892778 Objective Loss 2.892778 LR 0.000008 Time 0.218314
-2023-04-27 06:20:59,240 - Epoch: [324][ 500/ 518] Overall Loss 2.892709 Objective Loss 2.892709 LR 0.000008 Time 0.217964
-2023-04-27 06:21:03,027 - Epoch: [324][ 518/ 518] Overall Loss 2.890051 Objective Loss 2.890051 LR 0.000008 Time 0.217700
-2023-04-27 06:21:03,104 - --- validate (epoch=324)-----------
-2023-04-27 06:21:03,104 - 4952 samples (32 per mini-batch)
-2023-04-27 06:21:11,408 - Epoch: [324][ 50/ 155] Loss 3.094703 mAP 0.455757
-2023-04-27 06:21:19,288 - Epoch: [324][ 100/ 155] Loss 3.077866 mAP 0.451941
-2023-04-27 06:21:27,149 - Epoch: [324][ 150/ 155] Loss 3.063720 mAP 0.446753
-2023-04-27 06:21:27,871 - Epoch: [324][ 155/ 155] Loss 3.061556 mAP 0.444977
-2023-04-27 06:21:27,944 - ==> mAP: 0.44498 Loss: 3.062
-
-2023-04-27 06:21:27,949 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:21:27,949 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:21:27,983 - 
-
-2023-04-27 06:21:27,983 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:21:39,547 - Epoch: [325][ 50/ 518] Overall Loss 2.911008 Objective Loss 2.911008 LR 0.000008 Time 0.231212
-2023-04-27 06:21:50,332 - Epoch: [325][ 100/ 518] Overall Loss 2.901441 Objective Loss 2.901441 LR 0.000008 Time 0.223447
-2023-04-27 06:22:01,142 - Epoch: [325][ 150/ 518] Overall Loss 2.874001 Objective Loss 2.874001 LR 0.000008 Time 0.221017
-2023-04-27 06:22:12,005 - Epoch: [325][ 200/ 518] Overall Loss 2.874964 Objective Loss 2.874964 LR 0.000008 Time 0.220072
-2023-04-27 06:22:22,809 - Epoch: [325][ 250/ 518] Overall Loss 2.881794 Objective Loss 2.881794 LR 0.000008 Time 0.219267
-2023-04-27 06:22:33,586 - Epoch: [325][ 300/ 518] Overall Loss 2.875110 Objective Loss 2.875110 LR 0.000008 Time 0.218642
-2023-04-27 06:22:44,506 - Epoch: [325][ 350/ 518] Overall Loss 2.877564 Objective Loss 2.877564 LR 0.000008 Time 0.218601
-2023-04-27 06:22:55,280 - Epoch: [325][ 400/ 518] Overall Loss 2.879941 Objective Loss 2.879941 LR 0.000008 Time 0.218207
-2023-04-27 06:23:06,156 - Epoch: [325][ 450/ 518] Overall Loss 2.878470 Objective Loss 2.878470 LR 0.000008 Time 0.218128
-2023-04-27 06:23:16,919 - Epoch: [325][ 500/ 518] Overall Loss 2.879633 Objective Loss 2.879633 LR 0.000008 Time 0.217837
-2023-04-27 06:23:20,711 - Epoch: [325][ 518/ 518] Overall Loss 2.878834 Objective Loss 2.878834 LR 0.000008 Time 0.217587
-2023-04-27 06:23:20,787 - --- validate (epoch=325)-----------
-2023-04-27 06:23:20,788 - 4952 samples (32 per mini-batch)
-2023-04-27 06:23:29,061 - Epoch: [325][ 50/ 155] Loss 3.074802 mAP 0.436184
-2023-04-27 06:23:36,980 - Epoch: [325][ 100/ 155] Loss 3.030940 mAP 0.452566
-2023-04-27 06:23:44,882 - Epoch: [325][ 150/ 155] Loss 3.055082 mAP 0.442528
-2023-04-27 06:23:45,596 - Epoch: [325][ 155/ 155] Loss 3.051174 mAP 0.444043
-2023-04-27 06:23:45,669 - ==> mAP: 0.44404 Loss: 3.051
-
-2023-04-27 06:23:45,672 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:23:45,672 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:23:45,707 - 
-
-2023-04-27 06:23:45,707 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:23:57,303 - Epoch: [326][ 50/ 518] Overall Loss 2.908570 Objective Loss 2.908570 LR 0.000008 Time 0.231863
-2023-04-27 06:24:08,159 - Epoch: [326][ 100/ 518] Overall Loss 2.898901 Objective Loss 2.898901 LR 0.000008 Time 0.224483
-2023-04-27 06:24:18,995 - Epoch: [326][ 150/ 518] Overall Loss 2.880191 Objective Loss 2.880191 LR 0.000008 Time 0.221885
-2023-04-27 06:24:29,764 - Epoch: [326][ 200/ 518] Overall Loss 2.874177 Objective Loss 2.874177 LR 0.000008 Time 0.220248
-2023-04-27 06:24:40,617 - Epoch: [326][ 250/ 518] Overall Loss 2.879562 Objective Loss 2.879562 LR 0.000008 Time 0.219603
-2023-04-27 06:24:51,386 - Epoch: [326][ 300/ 518] Overall Loss 2.886982 Objective Loss 2.886982 LR 0.000008 Time 0.218895
-2023-04-27 06:25:02,217 - Epoch: [326][ 350/ 518] Overall Loss 2.889449 Objective Loss 2.889449 LR 0.000008 Time 0.218566
-2023-04-27 06:25:13,092 - Epoch: [326][ 400/ 518] Overall Loss 2.895499 Objective Loss 2.895499 LR 0.000008 Time 0.218430
-2023-04-27 06:25:23,956 - Epoch: [326][ 450/ 518] Overall Loss 2.888790 Objective Loss 2.888790 LR 0.000008 Time 0.218298
-2023-04-27 06:25:34,803 - Epoch: [326][ 500/ 518] Overall Loss 2.885354 Objective Loss 2.885354 LR 0.000008 Time 0.218159
-2023-04-27 06:25:38,557 - Epoch: [326][ 518/ 518] Overall Loss 2.881311 Objective Loss 2.881311 LR 0.000008 Time 0.217825
-2023-04-27 06:25:38,634 - --- validate (epoch=326)-----------
-2023-04-27 06:25:38,634 - 4952 samples (32 per mini-batch)
-2023-04-27 06:25:46,929 - Epoch: [326][ 50/ 155] Loss 3.056035 mAP 0.437483
-2023-04-27 06:25:54,848 - Epoch: [326][ 100/ 155] Loss 3.063077 mAP 0.443356
-2023-04-27 06:26:02,821 - Epoch: [326][ 150/ 155] Loss 3.068978 mAP 0.445289
-2023-04-27 06:26:03,540 - Epoch: [326][ 155/ 155] Loss 3.067191 mAP 0.444444
-2023-04-27 06:26:03,610 - ==> mAP: 0.44444 Loss: 3.067
-
-2023-04-27 06:26:03,614 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:26:03,614 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:26:03,648 - 
-
-2023-04-27 06:26:03,648 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:26:15,169 - Epoch: [327][ 50/ 518] Overall Loss 2.898067 Objective Loss 2.898067 LR 0.000008 Time 0.230360
-2023-04-27 06:26:25,888 - Epoch: [327][ 100/ 518] Overall Loss 2.870731 Objective Loss 2.870731 LR 0.000008 Time 0.222351
-2023-04-27 06:26:36,755 - Epoch: [327][ 150/ 518] Overall Loss 2.894087 Objective Loss 2.894087 LR 0.000008 Time 0.220670
-2023-04-27 06:26:47,556 - Epoch: [327][ 200/ 518] Overall Loss 2.891628 Objective Loss 2.891628 LR 0.000008 Time 0.219498
-2023-04-27 06:26:58,379 - Epoch: [327][ 250/ 518] Overall Loss 2.884486 Objective Loss 2.884486 LR 0.000008 Time 0.218885
-2023-04-27 06:27:09,151 - Epoch: [327][ 300/ 518] Overall Loss 2.888028 Objective Loss 2.888028 LR 0.000008 Time 0.218307
-2023-04-27 06:27:19,980 - Epoch: [327][ 350/ 518] Overall Loss 2.883897 Objective Loss 2.883897 LR 0.000008 Time 0.218055
-2023-04-27 06:27:30,763 - Epoch: [327][ 400/ 518] Overall Loss 2.884500 Objective Loss 2.884500 LR 0.000008 Time 0.217752
-2023-04-27 06:27:41,598 - Epoch:
[327][ 450/ 518] Overall Loss 2.883573 Objective Loss 2.883573 LR 0.000008 Time 0.217632 -2023-04-27 06:27:52,368 - Epoch: [327][ 500/ 518] Overall Loss 2.887208 Objective Loss 2.887208 LR 0.000008 Time 0.217405 -2023-04-27 06:27:56,086 - Epoch: [327][ 518/ 518] Overall Loss 2.891060 Objective Loss 2.891060 LR 0.000008 Time 0.217028 -2023-04-27 06:27:56,162 - --- validate (epoch=327)----------- -2023-04-27 06:27:56,162 - 4952 samples (32 per mini-batch) -2023-04-27 06:28:04,447 - Epoch: [327][ 50/ 155] Loss 3.056293 mAP 0.450105 -2023-04-27 06:28:12,355 - Epoch: [327][ 100/ 155] Loss 3.066155 mAP 0.438426 -2023-04-27 06:28:20,227 - Epoch: [327][ 150/ 155] Loss 3.070656 mAP 0.435238 -2023-04-27 06:28:20,942 - Epoch: [327][ 155/ 155] Loss 3.072287 mAP 0.436123 -2023-04-27 06:28:21,012 - ==> mAP: 0.43612 Loss: 3.072 - -2023-04-27 06:28:21,017 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:28:21,017 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:28:21,052 - - -2023-04-27 06:28:21,052 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:28:32,583 - Epoch: [328][ 50/ 518] Overall Loss 2.900690 Objective Loss 2.900690 LR 0.000008 Time 0.230551 -2023-04-27 06:28:43,372 - Epoch: [328][ 100/ 518] Overall Loss 2.869431 Objective Loss 2.869431 LR 0.000008 Time 0.223148 -2023-04-27 06:28:54,201 - Epoch: [328][ 150/ 518] Overall Loss 2.879019 Objective Loss 2.879019 LR 0.000008 Time 0.220950 -2023-04-27 06:29:05,040 - Epoch: [328][ 200/ 518] Overall Loss 2.870048 Objective Loss 2.870048 LR 0.000008 Time 0.219903 -2023-04-27 06:29:15,806 - Epoch: [328][ 250/ 518] Overall Loss 2.879479 Objective Loss 2.879479 LR 0.000008 Time 0.218978 -2023-04-27 06:29:26,609 - Epoch: [328][ 300/ 518] Overall Loss 2.880988 Objective Loss 2.880988 LR 0.000008 Time 0.218486 -2023-04-27 06:29:37,465 - Epoch: [328][ 350/ 518] Overall Loss 2.882757 Objective Loss 2.882757 LR 0.000008 Time 0.218287 
-2023-04-27 06:29:48,351 - Epoch: [328][ 400/ 518] Overall Loss 2.889510 Objective Loss 2.889510 LR 0.000008 Time 0.218211 -2023-04-27 06:29:59,166 - Epoch: [328][ 450/ 518] Overall Loss 2.887727 Objective Loss 2.887727 LR 0.000008 Time 0.217995 -2023-04-27 06:30:09,963 - Epoch: [328][ 500/ 518] Overall Loss 2.885643 Objective Loss 2.885643 LR 0.000008 Time 0.217788 -2023-04-27 06:30:13,652 - Epoch: [328][ 518/ 518] Overall Loss 2.885454 Objective Loss 2.885454 LR 0.000008 Time 0.217340 -2023-04-27 06:30:13,728 - --- validate (epoch=328)----------- -2023-04-27 06:30:13,728 - 4952 samples (32 per mini-batch) -2023-04-27 06:30:22,051 - Epoch: [328][ 50/ 155] Loss 3.044973 mAP 0.452938 -2023-04-27 06:30:29,938 - Epoch: [328][ 100/ 155] Loss 3.073371 mAP 0.445014 -2023-04-27 06:30:37,866 - Epoch: [328][ 150/ 155] Loss 3.049382 mAP 0.446826 -2023-04-27 06:30:38,592 - Epoch: [328][ 155/ 155] Loss 3.047892 mAP 0.447708 -2023-04-27 06:30:38,674 - ==> mAP: 0.44771 Loss: 3.048 - -2023-04-27 06:30:38,678 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:30:38,679 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:30:38,714 - - -2023-04-27 06:30:38,714 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:30:50,378 - Epoch: [329][ 50/ 518] Overall Loss 2.834687 Objective Loss 2.834687 LR 0.000008 Time 0.233223 -2023-04-27 06:31:01,193 - Epoch: [329][ 100/ 518] Overall Loss 2.859768 Objective Loss 2.859768 LR 0.000008 Time 0.224749 -2023-04-27 06:31:11,979 - Epoch: [329][ 150/ 518] Overall Loss 2.881079 Objective Loss 2.881079 LR 0.000008 Time 0.221726 -2023-04-27 06:31:22,724 - Epoch: [329][ 200/ 518] Overall Loss 2.883617 Objective Loss 2.883617 LR 0.000008 Time 0.220014 -2023-04-27 06:31:33,575 - Epoch: [329][ 250/ 518] Overall Loss 2.889886 Objective Loss 2.889886 LR 0.000008 Time 0.219408 -2023-04-27 06:31:44,391 - Epoch: [329][ 300/ 518] Overall Loss 2.886845 Objective Loss 
2.886845 LR 0.000008 Time 0.218887 -2023-04-27 06:31:55,275 - Epoch: [329][ 350/ 518] Overall Loss 2.885857 Objective Loss 2.885857 LR 0.000008 Time 0.218710 -2023-04-27 06:32:06,130 - Epoch: [329][ 400/ 518] Overall Loss 2.886947 Objective Loss 2.886947 LR 0.000008 Time 0.218505 -2023-04-27 06:32:16,987 - Epoch: [329][ 450/ 518] Overall Loss 2.888617 Objective Loss 2.888617 LR 0.000008 Time 0.218351 -2023-04-27 06:32:27,730 - Epoch: [329][ 500/ 518] Overall Loss 2.884868 Objective Loss 2.884868 LR 0.000008 Time 0.217998 -2023-04-27 06:32:31,498 - Epoch: [329][ 518/ 518] Overall Loss 2.885200 Objective Loss 2.885200 LR 0.000008 Time 0.217697 -2023-04-27 06:32:31,577 - --- validate (epoch=329)----------- -2023-04-27 06:32:31,577 - 4952 samples (32 per mini-batch) -2023-04-27 06:32:39,910 - Epoch: [329][ 50/ 155] Loss 3.069027 mAP 0.450778 -2023-04-27 06:32:47,812 - Epoch: [329][ 100/ 155] Loss 3.081711 mAP 0.438957 -2023-04-27 06:32:55,732 - Epoch: [329][ 150/ 155] Loss 3.078926 mAP 0.437345 -2023-04-27 06:32:56,449 - Epoch: [329][ 155/ 155] Loss 3.075851 mAP 0.437486 -2023-04-27 06:32:56,521 - ==> mAP: 0.43749 Loss: 3.076 - -2023-04-27 06:32:56,525 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:32:56,525 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:32:56,560 - - -2023-04-27 06:32:56,560 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:33:08,192 - Epoch: [330][ 50/ 518] Overall Loss 2.931801 Objective Loss 2.931801 LR 0.000008 Time 0.232576 -2023-04-27 06:33:18,966 - Epoch: [330][ 100/ 518] Overall Loss 2.925898 Objective Loss 2.925898 LR 0.000008 Time 0.224018 -2023-04-27 06:33:29,801 - Epoch: [330][ 150/ 518] Overall Loss 2.920256 Objective Loss 2.920256 LR 0.000008 Time 0.221567 -2023-04-27 06:33:40,653 - Epoch: [330][ 200/ 518] Overall Loss 2.918358 Objective Loss 2.918358 LR 0.000008 Time 0.220427 -2023-04-27 06:33:51,403 - Epoch: [330][ 250/ 518] 
Overall Loss 2.907304 Objective Loss 2.907304 LR 0.000008 Time 0.219336 -2023-04-27 06:34:02,229 - Epoch: [330][ 300/ 518] Overall Loss 2.899963 Objective Loss 2.899963 LR 0.000008 Time 0.218859 -2023-04-27 06:34:13,066 - Epoch: [330][ 350/ 518] Overall Loss 2.903187 Objective Loss 2.903187 LR 0.000008 Time 0.218552 -2023-04-27 06:34:23,931 - Epoch: [330][ 400/ 518] Overall Loss 2.899241 Objective Loss 2.899241 LR 0.000008 Time 0.218392 -2023-04-27 06:34:34,742 - Epoch: [330][ 450/ 518] Overall Loss 2.892030 Objective Loss 2.892030 LR 0.000008 Time 0.218147 -2023-04-27 06:34:45,585 - Epoch: [330][ 500/ 518] Overall Loss 2.889948 Objective Loss 2.889948 LR 0.000008 Time 0.218016 -2023-04-27 06:34:49,384 - Epoch: [330][ 518/ 518] Overall Loss 2.892552 Objective Loss 2.892552 LR 0.000008 Time 0.217773 -2023-04-27 06:34:49,459 - --- validate (epoch=330)----------- -2023-04-27 06:34:49,459 - 4952 samples (32 per mini-batch) -2023-04-27 06:34:57,766 - Epoch: [330][ 50/ 155] Loss 3.082450 mAP 0.427523 -2023-04-27 06:35:05,669 - Epoch: [330][ 100/ 155] Loss 3.069876 mAP 0.430367 -2023-04-27 06:35:13,528 - Epoch: [330][ 150/ 155] Loss 3.045054 mAP 0.436674 -2023-04-27 06:35:14,257 - Epoch: [330][ 155/ 155] Loss 3.044736 mAP 0.439067 -2023-04-27 06:35:14,337 - ==> mAP: 0.43907 Loss: 3.045 - -2023-04-27 06:35:14,340 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:35:14,340 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:35:14,444 - - -2023-04-27 06:35:14,444 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:35:26,037 - Epoch: [331][ 50/ 518] Overall Loss 2.936068 Objective Loss 2.936068 LR 0.000008 Time 0.231801 -2023-04-27 06:35:36,777 - Epoch: [331][ 100/ 518] Overall Loss 2.906689 Objective Loss 2.906689 LR 0.000008 Time 0.223293 -2023-04-27 06:35:47,564 - Epoch: [331][ 150/ 518] Overall Loss 2.886034 Objective Loss 2.886034 LR 0.000008 Time 0.220762 -2023-04-27 
06:35:58,373 - Epoch: [331][ 200/ 518] Overall Loss 2.898304 Objective Loss 2.898304 LR 0.000008 Time 0.219610 -2023-04-27 06:36:09,223 - Epoch: [331][ 250/ 518] Overall Loss 2.891306 Objective Loss 2.891306 LR 0.000008 Time 0.219079 -2023-04-27 06:36:20,082 - Epoch: [331][ 300/ 518] Overall Loss 2.877816 Objective Loss 2.877816 LR 0.000008 Time 0.218760 -2023-04-27 06:36:30,886 - Epoch: [331][ 350/ 518] Overall Loss 2.873017 Objective Loss 2.873017 LR 0.000008 Time 0.218372 -2023-04-27 06:36:41,751 - Epoch: [331][ 400/ 518] Overall Loss 2.874977 Objective Loss 2.874977 LR 0.000008 Time 0.218233 -2023-04-27 06:36:52,525 - Epoch: [331][ 450/ 518] Overall Loss 2.872947 Objective Loss 2.872947 LR 0.000008 Time 0.217924 -2023-04-27 06:37:03,251 - Epoch: [331][ 500/ 518] Overall Loss 2.874404 Objective Loss 2.874404 LR 0.000008 Time 0.217582 -2023-04-27 06:37:06,989 - Epoch: [331][ 518/ 518] Overall Loss 2.873884 Objective Loss 2.873884 LR 0.000008 Time 0.217235 -2023-04-27 06:37:07,064 - --- validate (epoch=331)----------- -2023-04-27 06:37:07,064 - 4952 samples (32 per mini-batch) -2023-04-27 06:37:15,321 - Epoch: [331][ 50/ 155] Loss 3.044048 mAP 0.443784 -2023-04-27 06:37:23,207 - Epoch: [331][ 100/ 155] Loss 3.038157 mAP 0.444719 -2023-04-27 06:37:31,150 - Epoch: [331][ 150/ 155] Loss 3.043248 mAP 0.448525 -2023-04-27 06:37:31,871 - Epoch: [331][ 155/ 155] Loss 3.041405 mAP 0.448534 -2023-04-27 06:37:31,947 - ==> mAP: 0.44853 Loss: 3.041 - -2023-04-27 06:37:31,951 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:37:31,951 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:37:31,985 - - -2023-04-27 06:37:31,985 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:37:43,431 - Epoch: [332][ 50/ 518] Overall Loss 2.950423 Objective Loss 2.950423 LR 0.000008 Time 0.228867 -2023-04-27 06:37:54,249 - Epoch: [332][ 100/ 518] Overall Loss 2.909618 Objective Loss 2.909618 LR 
0.000008 Time 0.222598 -2023-04-27 06:38:05,054 - Epoch: [332][ 150/ 518] Overall Loss 2.907474 Objective Loss 2.907474 LR 0.000008 Time 0.220427 -2023-04-27 06:38:15,829 - Epoch: [332][ 200/ 518] Overall Loss 2.898411 Objective Loss 2.898411 LR 0.000008 Time 0.219182 -2023-04-27 06:38:26,597 - Epoch: [332][ 250/ 518] Overall Loss 2.890587 Objective Loss 2.890587 LR 0.000008 Time 0.218414 -2023-04-27 06:38:37,527 - Epoch: [332][ 300/ 518] Overall Loss 2.898944 Objective Loss 2.898944 LR 0.000008 Time 0.218439 -2023-04-27 06:38:48,337 - Epoch: [332][ 350/ 518] Overall Loss 2.898592 Objective Loss 2.898592 LR 0.000008 Time 0.218115 -2023-04-27 06:38:59,137 - Epoch: [332][ 400/ 518] Overall Loss 2.896275 Objective Loss 2.896275 LR 0.000008 Time 0.217847 -2023-04-27 06:39:09,983 - Epoch: [332][ 450/ 518] Overall Loss 2.890017 Objective Loss 2.890017 LR 0.000008 Time 0.217741 -2023-04-27 06:39:20,825 - Epoch: [332][ 500/ 518] Overall Loss 2.885572 Objective Loss 2.885572 LR 0.000008 Time 0.217647 -2023-04-27 06:39:24,572 - Epoch: [332][ 518/ 518] Overall Loss 2.884428 Objective Loss 2.884428 LR 0.000008 Time 0.217316 -2023-04-27 06:39:24,648 - --- validate (epoch=332)----------- -2023-04-27 06:39:24,649 - 4952 samples (32 per mini-batch) -2023-04-27 06:39:32,967 - Epoch: [332][ 50/ 155] Loss 3.066585 mAP 0.446004 -2023-04-27 06:39:40,874 - Epoch: [332][ 100/ 155] Loss 3.056042 mAP 0.449794 -2023-04-27 06:39:48,782 - Epoch: [332][ 150/ 155] Loss 3.066761 mAP 0.448393 -2023-04-27 06:39:49,499 - Epoch: [332][ 155/ 155] Loss 3.066857 mAP 0.446726 -2023-04-27 06:39:49,572 - ==> mAP: 0.44673 Loss: 3.067 - -2023-04-27 06:39:49,576 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:39:49,576 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:39:49,625 - - -2023-04-27 06:39:49,625 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:40:01,400 - Epoch: [333][ 50/ 518] Overall Loss 
2.936023 Objective Loss 2.936023 LR 0.000008 Time 0.235430 -2023-04-27 06:40:12,166 - Epoch: [333][ 100/ 518] Overall Loss 2.869060 Objective Loss 2.869060 LR 0.000008 Time 0.225354 -2023-04-27 06:40:22,960 - Epoch: [333][ 150/ 518] Overall Loss 2.873143 Objective Loss 2.873143 LR 0.000008 Time 0.222187 -2023-04-27 06:40:33,807 - Epoch: [333][ 200/ 518] Overall Loss 2.864825 Objective Loss 2.864825 LR 0.000008 Time 0.220864 -2023-04-27 06:40:44,588 - Epoch: [333][ 250/ 518] Overall Loss 2.873664 Objective Loss 2.873664 LR 0.000008 Time 0.219812 -2023-04-27 06:40:55,486 - Epoch: [333][ 300/ 518] Overall Loss 2.874126 Objective Loss 2.874126 LR 0.000008 Time 0.219498 -2023-04-27 06:41:06,305 - Epoch: [333][ 350/ 518] Overall Loss 2.877582 Objective Loss 2.877582 LR 0.000008 Time 0.219048 -2023-04-27 06:41:17,152 - Epoch: [333][ 400/ 518] Overall Loss 2.881041 Objective Loss 2.881041 LR 0.000008 Time 0.218779 -2023-04-27 06:41:28,066 - Epoch: [333][ 450/ 518] Overall Loss 2.885089 Objective Loss 2.885089 LR 0.000008 Time 0.218721 -2023-04-27 06:41:38,884 - Epoch: [333][ 500/ 518] Overall Loss 2.888064 Objective Loss 2.888064 LR 0.000008 Time 0.218482 -2023-04-27 06:41:42,626 - Epoch: [333][ 518/ 518] Overall Loss 2.887068 Objective Loss 2.887068 LR 0.000008 Time 0.218112 -2023-04-27 06:41:42,702 - --- validate (epoch=333)----------- -2023-04-27 06:41:42,702 - 4952 samples (32 per mini-batch) -2023-04-27 06:41:51,056 - Epoch: [333][ 50/ 155] Loss 3.037465 mAP 0.455866 -2023-04-27 06:41:59,018 - Epoch: [333][ 100/ 155] Loss 3.063801 mAP 0.446767 -2023-04-27 06:42:06,952 - Epoch: [333][ 150/ 155] Loss 3.062277 mAP 0.449134 -2023-04-27 06:42:07,663 - Epoch: [333][ 155/ 155] Loss 3.063741 mAP 0.447446 -2023-04-27 06:42:07,730 - ==> mAP: 0.44745 Loss: 3.064 - -2023-04-27 06:42:07,734 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:42:07,734 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 
06:42:07,768 - - -2023-04-27 06:42:07,768 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:42:19,420 - Epoch: [334][ 50/ 518] Overall Loss 2.922745 Objective Loss 2.922745 LR 0.000008 Time 0.232982 -2023-04-27 06:42:30,258 - Epoch: [334][ 100/ 518] Overall Loss 2.905347 Objective Loss 2.905347 LR 0.000008 Time 0.224859 -2023-04-27 06:42:41,035 - Epoch: [334][ 150/ 518] Overall Loss 2.906397 Objective Loss 2.906397 LR 0.000008 Time 0.221741 -2023-04-27 06:42:51,817 - Epoch: [334][ 200/ 518] Overall Loss 2.884563 Objective Loss 2.884563 LR 0.000008 Time 0.220206 -2023-04-27 06:43:02,686 - Epoch: [334][ 250/ 518] Overall Loss 2.873199 Objective Loss 2.873199 LR 0.000008 Time 0.219636 -2023-04-27 06:43:13,498 - Epoch: [334][ 300/ 518] Overall Loss 2.879659 Objective Loss 2.879659 LR 0.000008 Time 0.219066 -2023-04-27 06:43:24,350 - Epoch: [334][ 350/ 518] Overall Loss 2.884138 Objective Loss 2.884138 LR 0.000008 Time 0.218770 -2023-04-27 06:43:35,138 - Epoch: [334][ 400/ 518] Overall Loss 2.886528 Objective Loss 2.886528 LR 0.000008 Time 0.218390 -2023-04-27 06:43:45,986 - Epoch: [334][ 450/ 518] Overall Loss 2.888450 Objective Loss 2.888450 LR 0.000008 Time 0.218228 -2023-04-27 06:43:56,908 - Epoch: [334][ 500/ 518] Overall Loss 2.884173 Objective Loss 2.884173 LR 0.000008 Time 0.218246 -2023-04-27 06:44:00,668 - Epoch: [334][ 518/ 518] Overall Loss 2.882601 Objective Loss 2.882601 LR 0.000008 Time 0.217920 -2023-04-27 06:44:00,745 - --- validate (epoch=334)----------- -2023-04-27 06:44:00,745 - 4952 samples (32 per mini-batch) -2023-04-27 06:44:09,072 - Epoch: [334][ 50/ 155] Loss 3.009885 mAP 0.460090 -2023-04-27 06:44:17,015 - Epoch: [334][ 100/ 155] Loss 3.013375 mAP 0.454209 -2023-04-27 06:44:24,899 - Epoch: [334][ 150/ 155] Loss 3.043151 mAP 0.449772 -2023-04-27 06:44:25,615 - Epoch: [334][ 155/ 155] Loss 3.044924 mAP 0.448501 -2023-04-27 06:44:25,691 - ==> mAP: 0.44850 Loss: 3.045 - -2023-04-27 06:44:25,695 - ==> Best [mAP: 0.454234 vloss: 
3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:44:25,695 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:44:25,730 - - -2023-04-27 06:44:25,730 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:44:37,289 - Epoch: [335][ 50/ 518] Overall Loss 2.873582 Objective Loss 2.873582 LR 0.000008 Time 0.231128 -2023-04-27 06:44:48,081 - Epoch: [335][ 100/ 518] Overall Loss 2.873824 Objective Loss 2.873824 LR 0.000008 Time 0.223465 -2023-04-27 06:44:58,894 - Epoch: [335][ 150/ 518] Overall Loss 2.893780 Objective Loss 2.893780 LR 0.000008 Time 0.221054 -2023-04-27 06:45:09,728 - Epoch: [335][ 200/ 518] Overall Loss 2.895770 Objective Loss 2.895770 LR 0.000008 Time 0.219953 -2023-04-27 06:45:20,483 - Epoch: [335][ 250/ 518] Overall Loss 2.893931 Objective Loss 2.893931 LR 0.000008 Time 0.218976 -2023-04-27 06:45:31,328 - Epoch: [335][ 300/ 518] Overall Loss 2.882284 Objective Loss 2.882284 LR 0.000008 Time 0.218625 -2023-04-27 06:45:42,121 - Epoch: [335][ 350/ 518] Overall Loss 2.881195 Objective Loss 2.881195 LR 0.000008 Time 0.218227 -2023-04-27 06:45:52,868 - Epoch: [335][ 400/ 518] Overall Loss 2.875284 Objective Loss 2.875284 LR 0.000008 Time 0.217811 -2023-04-27 06:46:03,680 - Epoch: [335][ 450/ 518] Overall Loss 2.875264 Objective Loss 2.875264 LR 0.000008 Time 0.217632 -2023-04-27 06:46:14,490 - Epoch: [335][ 500/ 518] Overall Loss 2.876649 Objective Loss 2.876649 LR 0.000008 Time 0.217486 -2023-04-27 06:46:18,253 - Epoch: [335][ 518/ 518] Overall Loss 2.878970 Objective Loss 2.878970 LR 0.000008 Time 0.217193 -2023-04-27 06:46:18,331 - --- validate (epoch=335)----------- -2023-04-27 06:46:18,331 - 4952 samples (32 per mini-batch) -2023-04-27 06:46:26,628 - Epoch: [335][ 50/ 155] Loss 3.027470 mAP 0.437844 -2023-04-27 06:46:34,546 - Epoch: [335][ 100/ 155] Loss 3.036690 mAP 0.441274 -2023-04-27 06:46:42,467 - Epoch: [335][ 150/ 155] Loss 3.052467 mAP 0.446885 -2023-04-27 06:46:43,188 - Epoch: 
[335][ 155/ 155] Loss 3.050040 mAP 0.448861 -2023-04-27 06:46:43,260 - ==> mAP: 0.44886 Loss: 3.050 - -2023-04-27 06:46:43,264 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:46:43,265 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:46:43,300 - - -2023-04-27 06:46:43,300 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:46:54,782 - Epoch: [336][ 50/ 518] Overall Loss 2.857091 Objective Loss 2.857091 LR 0.000008 Time 0.229583 -2023-04-27 06:47:05,626 - Epoch: [336][ 100/ 518] Overall Loss 2.874416 Objective Loss 2.874416 LR 0.000008 Time 0.223215 -2023-04-27 06:47:16,521 - Epoch: [336][ 150/ 518] Overall Loss 2.871755 Objective Loss 2.871755 LR 0.000008 Time 0.221437 -2023-04-27 06:47:27,348 - Epoch: [336][ 200/ 518] Overall Loss 2.878941 Objective Loss 2.878941 LR 0.000008 Time 0.220203 -2023-04-27 06:47:38,122 - Epoch: [336][ 250/ 518] Overall Loss 2.883217 Objective Loss 2.883217 LR 0.000008 Time 0.219251 -2023-04-27 06:47:48,890 - Epoch: [336][ 300/ 518] Overall Loss 2.879619 Objective Loss 2.879619 LR 0.000008 Time 0.218597 -2023-04-27 06:47:59,674 - Epoch: [336][ 350/ 518] Overall Loss 2.882386 Objective Loss 2.882386 LR 0.000008 Time 0.218177 -2023-04-27 06:48:10,482 - Epoch: [336][ 400/ 518] Overall Loss 2.883788 Objective Loss 2.883788 LR 0.000008 Time 0.217921 -2023-04-27 06:48:21,204 - Epoch: [336][ 450/ 518] Overall Loss 2.883968 Objective Loss 2.883968 LR 0.000008 Time 0.217530 -2023-04-27 06:48:31,994 - Epoch: [336][ 500/ 518] Overall Loss 2.887979 Objective Loss 2.887979 LR 0.000008 Time 0.217355 -2023-04-27 06:48:35,748 - Epoch: [336][ 518/ 518] Overall Loss 2.890151 Objective Loss 2.890151 LR 0.000008 Time 0.217047 -2023-04-27 06:48:35,824 - --- validate (epoch=336)----------- -2023-04-27 06:48:35,824 - 4952 samples (32 per mini-batch) -2023-04-27 06:48:44,088 - Epoch: [336][ 50/ 155] Loss 3.025618 mAP 0.463452 -2023-04-27 06:48:52,026 - 
Epoch: [336][ 100/ 155] Loss 3.045393 mAP 0.453762 -2023-04-27 06:48:59,883 - Epoch: [336][ 150/ 155] Loss 3.040149 mAP 0.449073 -2023-04-27 06:49:00,608 - Epoch: [336][ 155/ 155] Loss 3.039450 mAP 0.448359 -2023-04-27 06:49:00,687 - ==> mAP: 0.44836 Loss: 3.039 - -2023-04-27 06:49:00,690 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:49:00,690 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:49:00,725 - - -2023-04-27 06:49:00,725 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:49:12,271 - Epoch: [337][ 50/ 518] Overall Loss 2.874777 Objective Loss 2.874777 LR 0.000008 Time 0.230862 -2023-04-27 06:49:23,063 - Epoch: [337][ 100/ 518] Overall Loss 2.871591 Objective Loss 2.871591 LR 0.000008 Time 0.223332 -2023-04-27 06:49:33,840 - Epoch: [337][ 150/ 518] Overall Loss 2.886036 Objective Loss 2.886036 LR 0.000008 Time 0.220728 -2023-04-27 06:49:44,657 - Epoch: [337][ 200/ 518] Overall Loss 2.877313 Objective Loss 2.877313 LR 0.000008 Time 0.219621 -2023-04-27 06:49:55,479 - Epoch: [337][ 250/ 518] Overall Loss 2.887858 Objective Loss 2.887858 LR 0.000008 Time 0.218981 -2023-04-27 06:50:06,299 - Epoch: [337][ 300/ 518] Overall Loss 2.887338 Objective Loss 2.887338 LR 0.000008 Time 0.218545 -2023-04-27 06:50:17,162 - Epoch: [337][ 350/ 518] Overall Loss 2.881819 Objective Loss 2.881819 LR 0.000008 Time 0.218358 -2023-04-27 06:50:27,912 - Epoch: [337][ 400/ 518] Overall Loss 2.883407 Objective Loss 2.883407 LR 0.000008 Time 0.217933 -2023-04-27 06:50:38,684 - Epoch: [337][ 450/ 518] Overall Loss 2.887523 Objective Loss 2.887523 LR 0.000008 Time 0.217653 -2023-04-27 06:50:49,427 - Epoch: [337][ 500/ 518] Overall Loss 2.883038 Objective Loss 2.883038 LR 0.000008 Time 0.217370 -2023-04-27 06:50:53,189 - Epoch: [337][ 518/ 518] Overall Loss 2.883113 Objective Loss 2.883113 LR 0.000008 Time 0.217078 -2023-04-27 06:50:53,266 - --- validate (epoch=337)----------- 
-2023-04-27 06:50:53,266 - 4952 samples (32 per mini-batch) -2023-04-27 06:51:01,490 - Epoch: [337][ 50/ 155] Loss 3.050352 mAP 0.436861 -2023-04-27 06:51:09,406 - Epoch: [337][ 100/ 155] Loss 3.073283 mAP 0.439461 -2023-04-27 06:51:17,242 - Epoch: [337][ 150/ 155] Loss 3.078938 mAP 0.436939 -2023-04-27 06:51:17,960 - Epoch: [337][ 155/ 155] Loss 3.077194 mAP 0.436935 -2023-04-27 06:51:18,039 - ==> mAP: 0.43694 Loss: 3.077 - -2023-04-27 06:51:18,043 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:51:18,043 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:51:18,077 - - -2023-04-27 06:51:18,077 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:51:29,720 - Epoch: [338][ 50/ 518] Overall Loss 2.860741 Objective Loss 2.860741 LR 0.000008 Time 0.232811 -2023-04-27 06:51:40,514 - Epoch: [338][ 100/ 518] Overall Loss 2.873591 Objective Loss 2.873591 LR 0.000008 Time 0.224331 -2023-04-27 06:51:51,330 - Epoch: [338][ 150/ 518] Overall Loss 2.887936 Objective Loss 2.887936 LR 0.000008 Time 0.221645 -2023-04-27 06:52:02,106 - Epoch: [338][ 200/ 518] Overall Loss 2.891978 Objective Loss 2.891978 LR 0.000008 Time 0.220108 -2023-04-27 06:52:12,883 - Epoch: [338][ 250/ 518] Overall Loss 2.898248 Objective Loss 2.898248 LR 0.000008 Time 0.219188 -2023-04-27 06:52:23,805 - Epoch: [338][ 300/ 518] Overall Loss 2.887280 Objective Loss 2.887280 LR 0.000008 Time 0.219057 -2023-04-27 06:52:34,621 - Epoch: [338][ 350/ 518] Overall Loss 2.887538 Objective Loss 2.887538 LR 0.000008 Time 0.218662 -2023-04-27 06:52:45,466 - Epoch: [338][ 400/ 518] Overall Loss 2.886502 Objective Loss 2.886502 LR 0.000008 Time 0.218439 -2023-04-27 06:52:56,267 - Epoch: [338][ 450/ 518] Overall Loss 2.882063 Objective Loss 2.882063 LR 0.000008 Time 0.218165 -2023-04-27 06:53:07,075 - Epoch: [338][ 500/ 518] Overall Loss 2.874043 Objective Loss 2.874043 LR 0.000008 Time 0.217962 -2023-04-27 06:53:10,791 - 
Epoch: [338][ 518/ 518] Overall Loss 2.873502 Objective Loss 2.873502 LR 0.000008 Time 0.217562 -2023-04-27 06:53:10,870 - --- validate (epoch=338)----------- -2023-04-27 06:53:10,871 - 4952 samples (32 per mini-batch) -2023-04-27 06:53:19,205 - Epoch: [338][ 50/ 155] Loss 3.048899 mAP 0.463645 -2023-04-27 06:53:27,155 - Epoch: [338][ 100/ 155] Loss 3.056503 mAP 0.453047 -2023-04-27 06:53:35,044 - Epoch: [338][ 150/ 155] Loss 3.074269 mAP 0.440264 -2023-04-27 06:53:35,755 - Epoch: [338][ 155/ 155] Loss 3.073122 mAP 0.440723 -2023-04-27 06:53:35,828 - ==> mAP: 0.44072 Loss: 3.073 - -2023-04-27 06:53:35,832 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 06:53:35,832 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 06:53:35,866 - - -2023-04-27 06:53:35,866 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 06:53:47,386 - Epoch: [339][ 50/ 518] Overall Loss 2.900375 Objective Loss 2.900375 LR 0.000008 Time 0.230339 -2023-04-27 06:53:58,164 - Epoch: [339][ 100/ 518] Overall Loss 2.882674 Objective Loss 2.882674 LR 0.000008 Time 0.222933 -2023-04-27 06:54:09,014 - Epoch: [339][ 150/ 518] Overall Loss 2.867812 Objective Loss 2.867812 LR 0.000008 Time 0.220946 -2023-04-27 06:54:19,863 - Epoch: [339][ 200/ 518] Overall Loss 2.873546 Objective Loss 2.873546 LR 0.000008 Time 0.219947 -2023-04-27 06:54:30,674 - Epoch: [339][ 250/ 518] Overall Loss 2.862341 Objective Loss 2.862341 LR 0.000008 Time 0.219193 -2023-04-27 06:54:41,553 - Epoch: [339][ 300/ 518] Overall Loss 2.869350 Objective Loss 2.869350 LR 0.000008 Time 0.218919 -2023-04-27 06:54:52,425 - Epoch: [339][ 350/ 518] Overall Loss 2.872786 Objective Loss 2.872786 LR 0.000008 Time 0.218705 -2023-04-27 06:55:03,228 - Epoch: [339][ 400/ 518] Overall Loss 2.877231 Objective Loss 2.877231 LR 0.000008 Time 0.218369 -2023-04-27 06:55:13,996 - Epoch: [339][ 450/ 518] Overall Loss 2.876158 Objective Loss 2.876158 LR 0.000008 Time 
0.218031
-2023-04-27 06:55:24,793 - Epoch: [339][ 500/ 518] Overall Loss 2.876865 Objective Loss 2.876865 LR 0.000008 Time 0.217819
-2023-04-27 06:55:28,519 - Epoch: [339][ 518/ 518] Overall Loss 2.874464 Objective Loss 2.874464 LR 0.000008 Time 0.217442
-2023-04-27 06:55:28,596 - --- validate (epoch=339)-----------
-2023-04-27 06:55:28,596 - 4952 samples (32 per mini-batch)
-2023-04-27 06:55:36,851 - Epoch: [339][ 50/ 155] Loss 3.066096 mAP 0.446432
-2023-04-27 06:55:44,758 - Epoch: [339][ 100/ 155] Loss 3.068812 mAP 0.449686
-2023-04-27 06:55:52,644 - Epoch: [339][ 150/ 155] Loss 3.060674 mAP 0.446864
-2023-04-27 06:55:53,370 - Epoch: [339][ 155/ 155] Loss 3.060034 mAP 0.445968
-2023-04-27 06:55:53,445 - ==> mAP: 0.44597 Loss: 3.060
-
-2023-04-27 06:55:53,449 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:55:53,449 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:55:53,484 - 
-
-2023-04-27 06:55:53,484 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:56:05,115 - Epoch: [340][ 50/ 518] Overall Loss 2.897422 Objective Loss 2.897422 LR 0.000008 Time 0.232563
-2023-04-27 06:56:15,919 - Epoch: [340][ 100/ 518] Overall Loss 2.858873 Objective Loss 2.858873 LR 0.000008 Time 0.224304
-2023-04-27 06:56:26,752 - Epoch: [340][ 150/ 518] Overall Loss 2.873354 Objective Loss 2.873354 LR 0.000008 Time 0.221745
-2023-04-27 06:56:37,543 - Epoch: [340][ 200/ 518] Overall Loss 2.865587 Objective Loss 2.865587 LR 0.000008 Time 0.220256
-2023-04-27 06:56:48,336 - Epoch: [340][ 250/ 518] Overall Loss 2.868376 Objective Loss 2.868376 LR 0.000008 Time 0.219370
-2023-04-27 06:56:59,117 - Epoch: [340][ 300/ 518] Overall Loss 2.872872 Objective Loss 2.872872 LR 0.000008 Time 0.218740
-2023-04-27 06:57:09,942 - Epoch: [340][ 350/ 518] Overall Loss 2.873179 Objective Loss 2.873179 LR 0.000008 Time 0.218415
-2023-04-27 06:57:20,808 - Epoch: [340][ 400/ 518] Overall Loss 2.871900 Objective Loss 2.871900 LR 0.000008 Time 0.218276
-2023-04-27 06:57:31,566 - Epoch: [340][ 450/ 518] Overall Loss 2.870087 Objective Loss 2.870087 LR 0.000008 Time 0.217926
-2023-04-27 06:57:42,469 - Epoch: [340][ 500/ 518] Overall Loss 2.874363 Objective Loss 2.874363 LR 0.000008 Time 0.217936
-2023-04-27 06:57:46,174 - Epoch: [340][ 518/ 518] Overall Loss 2.874930 Objective Loss 2.874930 LR 0.000008 Time 0.217515
-2023-04-27 06:57:46,249 - --- validate (epoch=340)-----------
-2023-04-27 06:57:46,250 - 4952 samples (32 per mini-batch)
-2023-04-27 06:57:54,485 - Epoch: [340][ 50/ 155] Loss 3.017373 mAP 0.444699
-2023-04-27 06:58:02,367 - Epoch: [340][ 100/ 155] Loss 3.028053 mAP 0.444389
-2023-04-27 06:58:10,217 - Epoch: [340][ 150/ 155] Loss 3.040823 mAP 0.438844
-2023-04-27 06:58:10,941 - Epoch: [340][ 155/ 155] Loss 3.036915 mAP 0.438743
-2023-04-27 06:58:11,010 - ==> mAP: 0.43874 Loss: 3.037
-
-2023-04-27 06:58:11,014 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 06:58:11,014 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 06:58:11,049 - 
-
-2023-04-27 06:58:11,049 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 06:58:22,733 - Epoch: [341][ 50/ 518] Overall Loss 2.904711 Objective Loss 2.904711 LR 0.000008 Time 0.233623
-2023-04-27 06:58:33,490 - Epoch: [341][ 100/ 518] Overall Loss 2.864845 Objective Loss 2.864845 LR 0.000008 Time 0.224362
-2023-04-27 06:58:44,294 - Epoch: [341][ 150/ 518] Overall Loss 2.877123 Objective Loss 2.877123 LR 0.000008 Time 0.221588
-2023-04-27 06:58:55,115 - Epoch: [341][ 200/ 518] Overall Loss 2.875604 Objective Loss 2.875604 LR 0.000008 Time 0.220292
-2023-04-27 06:59:06,007 - Epoch: [341][ 250/ 518] Overall Loss 2.870365 Objective Loss 2.870365 LR 0.000008 Time 0.219794
-2023-04-27 06:59:16,678 - Epoch: [341][ 300/ 518] Overall Loss 2.869766 Objective Loss 2.869766 LR 0.000008 Time 0.218727
-2023-04-27 06:59:27,493 - Epoch: [341][ 350/ 518] Overall Loss 2.874940 Objective Loss 2.874940 LR 0.000008 Time 0.218375
-2023-04-27 06:59:38,234 - Epoch: [341][ 400/ 518] Overall Loss 2.878467 Objective Loss 2.878467 LR 0.000008 Time 0.217927
-2023-04-27 06:59:49,063 - Epoch: [341][ 450/ 518] Overall Loss 2.876464 Objective Loss 2.876464 LR 0.000008 Time 0.217774
-2023-04-27 06:59:59,875 - Epoch: [341][ 500/ 518] Overall Loss 2.882241 Objective Loss 2.882241 LR 0.000008 Time 0.217618
-2023-04-27 07:00:03,619 - Epoch: [341][ 518/ 518] Overall Loss 2.879654 Objective Loss 2.879654 LR 0.000008 Time 0.217282
-2023-04-27 07:00:03,695 - --- validate (epoch=341)-----------
-2023-04-27 07:00:03,696 - 4952 samples (32 per mini-batch)
-2023-04-27 07:00:11,994 - Epoch: [341][ 50/ 155] Loss 3.097212 mAP 0.422983
-2023-04-27 07:00:19,891 - Epoch: [341][ 100/ 155] Loss 3.063150 mAP 0.437400
-2023-04-27 07:00:27,826 - Epoch: [341][ 150/ 155] Loss 3.049876 mAP 0.447277
-2023-04-27 07:00:28,552 - Epoch: [341][ 155/ 155] Loss 3.049946 mAP 0.447914
-2023-04-27 07:00:28,626 - ==> mAP: 0.44791 Loss: 3.050
-
-2023-04-27 07:00:28,630 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:00:28,630 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:00:28,664 - 
-
-2023-04-27 07:00:28,664 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:00:40,130 - Epoch: [342][ 50/ 518] Overall Loss 2.904572 Objective Loss 2.904572 LR 0.000008 Time 0.229267
-2023-04-27 07:00:50,909 - Epoch: [342][ 100/ 518] Overall Loss 2.896784 Objective Loss 2.896784 LR 0.000008 Time 0.222400
-2023-04-27 07:01:01,710 - Epoch: [342][ 150/ 518] Overall Loss 2.890759 Objective Loss 2.890759 LR 0.000008 Time 0.220267
-2023-04-27 07:01:12,529 - Epoch: [342][ 200/ 518] Overall Loss 2.881507 Objective Loss 2.881507 LR 0.000008 Time 0.219288
-2023-04-27 07:01:23,373 - Epoch: [342][ 250/ 518] Overall Loss 2.877651 Objective Loss 2.877651 LR 0.000008 Time 0.218799
-2023-04-27 07:01:34,179 - Epoch: [342][ 300/ 518] Overall Loss 2.876120 Objective Loss 2.876120 LR 0.000008 Time 0.218346
-2023-04-27 07:01:45,034 - Epoch: [342][ 350/ 518] Overall Loss 2.879352 Objective Loss 2.879352 LR 0.000008 Time 0.218165
-2023-04-27 07:01:55,835 - Epoch: [342][ 400/ 518] Overall Loss 2.886389 Objective Loss 2.886389 LR 0.000008 Time 0.217893
-2023-04-27 07:02:06,664 - Epoch: [342][ 450/ 518] Overall Loss 2.888313 Objective Loss 2.888313 LR 0.000008 Time 0.217743
-2023-04-27 07:02:17,539 - Epoch: [342][ 500/ 518] Overall Loss 2.883380 Objective Loss 2.883380 LR 0.000008 Time 0.217715
-2023-04-27 07:02:21,315 - Epoch: [342][ 518/ 518] Overall Loss 2.886003 Objective Loss 2.886003 LR 0.000008 Time 0.217438
-2023-04-27 07:02:21,391 - --- validate (epoch=342)-----------
-2023-04-27 07:02:21,391 - 4952 samples (32 per mini-batch)
-2023-04-27 07:02:29,671 - Epoch: [342][ 50/ 155] Loss 3.074111 mAP 0.447663
-2023-04-27 07:02:37,592 - Epoch: [342][ 100/ 155] Loss 3.059137 mAP 0.442576
-2023-04-27 07:02:45,529 - Epoch: [342][ 150/ 155] Loss 3.045459 mAP 0.444354
-2023-04-27 07:02:46,243 - Epoch: [342][ 155/ 155] Loss 3.044636 mAP 0.443570
-2023-04-27 07:02:46,318 - ==> mAP: 0.44357 Loss: 3.045
-
-2023-04-27 07:02:46,322 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:02:46,322 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:02:46,357 - 
-
-2023-04-27 07:02:46,357 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:02:57,906 - Epoch: [343][ 50/ 518] Overall Loss 2.889369 Objective Loss 2.889369 LR 0.000008 Time 0.230926
-2023-04-27 07:03:08,637 - Epoch: [343][ 100/ 518] Overall Loss 2.885267 Objective Loss 2.885267 LR 0.000008 Time 0.222759
-2023-04-27 07:03:19,485 - Epoch: [343][ 150/ 518] Overall Loss 2.880531 Objective Loss 2.880531 LR 0.000008 Time 0.220817
-2023-04-27 07:03:30,401 - Epoch: [343][ 200/ 518] Overall Loss 2.884717 Objective Loss 2.884717 LR 0.000008 Time 0.220186
-2023-04-27 07:03:41,251 - Epoch: [343][ 250/ 518] Overall Loss 2.878600 Objective Loss 2.878600 LR 0.000008 Time 0.219541
-2023-04-27 07:03:52,052 - Epoch: [343][ 300/ 518] Overall Loss 2.876696 Objective Loss 2.876696 LR 0.000008 Time 0.218948
-2023-04-27 07:04:02,924 - Epoch: [343][ 350/ 518] Overall Loss 2.876729 Objective Loss 2.876729 LR 0.000008 Time 0.218728
-2023-04-27 07:04:13,695 - Epoch: [343][ 400/ 518] Overall Loss 2.875162 Objective Loss 2.875162 LR 0.000008 Time 0.218310
-2023-04-27 07:04:24,518 - Epoch: [343][ 450/ 518] Overall Loss 2.877057 Objective Loss 2.877057 LR 0.000008 Time 0.218102
-2023-04-27 07:04:35,296 - Epoch: [343][ 500/ 518] Overall Loss 2.875917 Objective Loss 2.875917 LR 0.000008 Time 0.217845
-2023-04-27 07:04:39,080 - Epoch: [343][ 518/ 518] Overall Loss 2.877268 Objective Loss 2.877268 LR 0.000008 Time 0.217579
-2023-04-27 07:04:39,153 - --- validate (epoch=343)-----------
-2023-04-27 07:04:39,153 - 4952 samples (32 per mini-batch)
-2023-04-27 07:04:47,404 - Epoch: [343][ 50/ 155] Loss 3.064848 mAP 0.427854
-2023-04-27 07:04:55,316 - Epoch: [343][ 100/ 155] Loss 3.060547 mAP 0.434964
-2023-04-27 07:05:03,204 - Epoch: [343][ 150/ 155] Loss 3.087715 mAP 0.436240
-2023-04-27 07:05:03,929 - Epoch: [343][ 155/ 155] Loss 3.088476 mAP 0.435941
-2023-04-27 07:05:04,005 - ==> mAP: 0.43594 Loss: 3.088
-
-2023-04-27 07:05:04,009 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:05:04,009 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:05:04,044 - 
-
-2023-04-27 07:05:04,044 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:05:15,768 - Epoch: [344][ 50/ 518] Overall Loss 2.867732 Objective Loss 2.867732 LR 0.000008 Time 0.234423
-2023-04-27 07:05:26,626 - Epoch: [344][ 100/ 518] Overall Loss 2.875758 Objective Loss 2.875758 LR 0.000008 Time 0.225779
-2023-04-27 07:05:37,528 - Epoch: [344][ 150/ 518] Overall Loss 2.881706 Objective Loss 2.881706 LR 0.000008 Time 0.223187
-2023-04-27 07:05:48,364 - Epoch: [344][ 200/ 518] Overall Loss 2.858119 Objective Loss 2.858119 LR 0.000008 Time 0.221564
-2023-04-27 07:05:59,223 - Epoch: [344][ 250/ 518] Overall Loss 2.871202 Objective Loss 2.871202 LR 0.000008 Time 0.220682
-2023-04-27 07:06:10,092 - Epoch: [344][ 300/ 518] Overall Loss 2.871163 Objective Loss 2.871163 LR 0.000008 Time 0.220125
-2023-04-27 07:06:20,943 - Epoch: [344][ 350/ 518] Overall Loss 2.871981 Objective Loss 2.871981 LR 0.000008 Time 0.219678
-2023-04-27 07:06:31,691 - Epoch: [344][ 400/ 518] Overall Loss 2.869458 Objective Loss 2.869458 LR 0.000008 Time 0.219082
-2023-04-27 07:06:42,522 - Epoch: [344][ 450/ 518] Overall Loss 2.872439 Objective Loss 2.872439 LR 0.000008 Time 0.218807
-2023-04-27 07:06:53,410 - Epoch: [344][ 500/ 518] Overall Loss 2.870876 Objective Loss 2.870876 LR 0.000008 Time 0.218698
-2023-04-27 07:06:57,137 - Epoch: [344][ 518/ 518] Overall Loss 2.871602 Objective Loss 2.871602 LR 0.000008 Time 0.218294
-2023-04-27 07:06:57,214 - --- validate (epoch=344)-----------
-2023-04-27 07:06:57,214 - 4952 samples (32 per mini-batch)
-2023-04-27 07:07:05,485 - Epoch: [344][ 50/ 155] Loss 3.051304 mAP 0.449028
-2023-04-27 07:07:13,367 - Epoch: [344][ 100/ 155] Loss 3.033700 mAP 0.450079
-2023-04-27 07:07:21,258 - Epoch: [344][ 150/ 155] Loss 3.047952 mAP 0.445180
-2023-04-27 07:07:21,977 - Epoch: [344][ 155/ 155] Loss 3.045243 mAP 0.446159
-2023-04-27 07:07:22,051 - ==> mAP: 0.44616 Loss: 3.045
-
-2023-04-27 07:07:22,055 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:07:22,055 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:07:22,089 - 
-
-2023-04-27 07:07:22,089 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:07:33,801 - Epoch: [345][ 50/ 518] Overall Loss 2.867552 Objective Loss 2.867552 LR 0.000008 Time 0.234188
-2023-04-27 07:07:44,529 - Epoch: [345][ 100/ 518] Overall Loss 2.911345 Objective Loss 2.911345 LR 0.000008 Time 0.224359
-2023-04-27 07:07:55,403 - Epoch: [345][ 150/ 518] Overall Loss 2.885362 Objective Loss 2.885362 LR 0.000008 Time 0.222051
-2023-04-27 07:08:06,200 - Epoch: [345][ 200/ 518] Overall Loss 2.872225 Objective Loss 2.872225 LR 0.000008 Time 0.220516
-2023-04-27 07:08:17,036 - Epoch: [345][ 250/ 518] Overall Loss 2.863496 Objective Loss 2.863496 LR 0.000008 Time 0.219750
-2023-04-27 07:08:27,857 - Epoch: [345][ 300/ 518] Overall Loss 2.868553 Objective Loss 2.868553 LR 0.000008 Time 0.219190
-2023-04-27 07:08:38,692 - Epoch: [345][ 350/ 518] Overall Loss 2.865623 Objective Loss 2.865623 LR 0.000008 Time 0.218831
-2023-04-27 07:08:49,486 - Epoch: [345][ 400/ 518] Overall Loss 2.863118 Objective Loss 2.863118 LR 0.000008 Time 0.218457
-2023-04-27 07:09:00,395 - Epoch: [345][ 450/ 518] Overall Loss 2.864808 Objective Loss 2.864808 LR 0.000008 Time 0.218423
-2023-04-27 07:09:11,181 - Epoch: [345][ 500/ 518] Overall Loss 2.862175 Objective Loss 2.862175 LR 0.000008 Time 0.218150
-2023-04-27 07:09:14,968 - Epoch: [345][ 518/ 518] Overall Loss 2.860688 Objective Loss 2.860688 LR 0.000008 Time 0.217879
-2023-04-27 07:09:15,043 - --- validate (epoch=345)-----------
-2023-04-27 07:09:15,044 - 4952 samples (32 per mini-batch)
-2023-04-27 07:09:23,367 - Epoch: [345][ 50/ 155] Loss 3.066918 mAP 0.459970
-2023-04-27 07:09:31,301 - Epoch: [345][ 100/ 155] Loss 3.066694 mAP 0.441291
-2023-04-27 07:09:39,190 - Epoch: [345][ 150/ 155] Loss 3.061279 mAP 0.439354
-2023-04-27 07:09:39,918 - Epoch: [345][ 155/ 155] Loss 3.064993 mAP 0.439172
-2023-04-27 07:09:39,985 - ==> mAP: 0.43917 Loss: 3.065
-
-2023-04-27 07:09:39,989 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:09:39,989 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:09:40,024 - 
-
-2023-04-27 07:09:40,024 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:09:51,698 - Epoch: [346][ 50/ 518] Overall Loss 2.871158 Objective Loss 2.871158 LR 0.000008 Time 0.233417
-2023-04-27 07:10:02,484 - Epoch: [346][ 100/ 518] Overall Loss 2.863588 Objective Loss 2.863588 LR 0.000008 Time 0.224561
-2023-04-27 07:10:13,260 - Epoch: [346][ 150/ 518] Overall Loss 2.880691 Objective Loss 2.880691 LR 0.000008 Time 0.221532
-2023-04-27 07:10:24,188 - Epoch: [346][ 200/ 518] Overall Loss 2.865971 Objective Loss 2.865971 LR 0.000008 Time 0.220781
-2023-04-27 07:10:34,953 - Epoch: [346][ 250/ 518] Overall Loss 2.880043 Objective Loss 2.880043 LR 0.000008 Time 0.219678
-2023-04-27 07:10:45,784 - Epoch: [346][ 300/ 518] Overall Loss 2.876956 Objective Loss 2.876956 LR 0.000008 Time 0.219165
-2023-04-27 07:10:56,637 - Epoch: [346][ 350/ 518] Overall Loss 2.874386 Objective Loss 2.874386 LR 0.000008 Time 0.218860
-2023-04-27 07:11:07,418 - Epoch: [346][ 400/ 518] Overall Loss 2.875233 Objective Loss 2.875233 LR 0.000008 Time 0.218452
-2023-04-27 07:11:18,225 - Epoch: [346][ 450/ 518] Overall Loss 2.874625 Objective Loss 2.874625 LR 0.000008 Time 0.218191
-2023-04-27 07:11:29,110 - Epoch: [346][ 500/ 518] Overall Loss 2.879257 Objective Loss 2.879257 LR 0.000008 Time 0.218139
-2023-04-27 07:11:32,869 - Epoch: [346][ 518/ 518] Overall Loss 2.881322 Objective Loss 2.881322 LR 0.000008 Time 0.217814
-2023-04-27 07:11:32,946 - --- validate (epoch=346)-----------
-2023-04-27 07:11:32,947 - 4952 samples (32 per mini-batch)
-2023-04-27 07:11:41,238 - Epoch: [346][ 50/ 155] Loss 3.016428 mAP 0.448626
-2023-04-27 07:11:49,143 - Epoch: [346][ 100/ 155] Loss 3.044954 mAP 0.447872
-2023-04-27 07:11:57,062 - Epoch: [346][ 150/ 155] Loss 3.045678 mAP 0.449858
-2023-04-27 07:11:57,787 - Epoch: [346][ 155/ 155] Loss 3.046975 mAP 0.450352
-2023-04-27 07:11:57,874 - ==> mAP: 0.45035 Loss: 3.047
-
-2023-04-27 07:11:57,879 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:11:57,879 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:11:57,913 - 
-
-2023-04-27 07:11:57,913 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:12:09,469 - Epoch: [347][ 50/ 518] Overall Loss 2.847637 Objective Loss 2.847637 LR 0.000008 Time 0.231063
-2023-04-27 07:12:20,250 - Epoch: [347][ 100/ 518] Overall Loss 2.882830 Objective Loss 2.882830 LR 0.000008 Time 0.223326
-2023-04-27 07:12:31,050 - Epoch: [347][ 150/ 518] Overall Loss 2.878398 Objective Loss 2.878398 LR 0.000008 Time 0.220877
-2023-04-27 07:12:41,865 - Epoch: [347][ 200/ 518] Overall Loss 2.870274 Objective Loss 2.870274 LR 0.000008 Time 0.219724
-2023-04-27 07:12:52,728 - Epoch: [347][ 250/ 518] Overall Loss 2.876742 Objective Loss 2.876742 LR 0.000008 Time 0.219226
-2023-04-27 07:13:03,488 - Epoch: [347][ 300/ 518] Overall Loss 2.880631 Objective Loss 2.880631 LR 0.000008 Time 0.218549
-2023-04-27 07:13:14,271 - Epoch: [347][ 350/ 518] Overall Loss 2.879674 Objective Loss 2.879674 LR 0.000008 Time 0.218132
-2023-04-27 07:13:25,180 - Epoch: [347][ 400/ 518] Overall Loss 2.880789 Objective Loss 2.880789 LR 0.000008 Time 0.218134
-2023-04-27 07:13:35,956 - Epoch: [347][ 450/ 518] Overall Loss 2.879857 Objective Loss 2.879857 LR 0.000008 Time 0.217838
-2023-04-27 07:13:46,705 - Epoch: [347][ 500/ 518] Overall Loss 2.881827 Objective Loss 2.881827 LR 0.000008 Time 0.217550
-2023-04-27 07:13:50,455 - Epoch: [347][ 518/ 518] Overall Loss 2.882271 Objective Loss 2.882271 LR 0.000008 Time 0.217230
-2023-04-27 07:13:50,533 - --- validate (epoch=347)-----------
-2023-04-27 07:13:50,534 - 4952 samples (32 per mini-batch)
-2023-04-27 07:13:58,849 - Epoch: [347][ 50/ 155] Loss 3.047943 mAP 0.453664
-2023-04-27 07:14:06,810 - Epoch: [347][ 100/ 155] Loss 3.042837 mAP 0.450900
-2023-04-27 07:14:14,727 - Epoch: [347][ 150/ 155] Loss 3.046798 mAP 0.454195
-2023-04-27 07:14:15,437 - Epoch: [347][ 155/ 155] Loss 3.050053 mAP 0.453323
-2023-04-27 07:14:15,513 - ==> mAP: 0.45332 Loss: 3.050
-
-2023-04-27 07:14:15,517 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:14:15,517 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:14:15,551 - 
-
-2023-04-27 07:14:15,551 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:14:27,088 - Epoch: [348][ 50/ 518] Overall Loss 2.815218 Objective Loss 2.815218 LR 0.000008 Time 0.230685
-2023-04-27 07:14:37,947 - Epoch: [348][ 100/ 518] Overall Loss 2.829849 Objective Loss 2.829849 LR 0.000008 Time 0.223918
-2023-04-27 07:14:48,648 - Epoch: [348][ 150/ 518] Overall Loss 2.857381 Objective Loss 2.857381 LR 0.000008 Time 0.220607
-2023-04-27 07:14:59,498 - Epoch: [348][ 200/ 518] Overall Loss 2.867654 Objective Loss 2.867654 LR 0.000008 Time 0.219697
-2023-04-27 07:15:10,254 - Epoch: [348][ 250/ 518] Overall Loss 2.870157 Objective Loss 2.870157 LR 0.000008 Time 0.218777
-2023-04-27 07:15:20,982 - Epoch: [348][ 300/ 518] Overall Loss 2.860486 Objective Loss 2.860486 LR 0.000008 Time 0.218066
-2023-04-27 07:15:31,723 - Epoch: [348][ 350/ 518] Overall Loss 2.862100 Objective Loss 2.862100 LR 0.000008 Time 0.217599
-2023-04-27 07:15:42,503 - Epoch: [348][ 400/ 518] Overall Loss 2.863459 Objective Loss 2.863459 LR 0.000008 Time 0.217346
-2023-04-27 07:15:53,319 - Epoch: [348][ 450/ 518] Overall Loss 2.865505 Objective Loss 2.865505 LR 0.000008 Time 0.217228
-2023-04-27 07:16:04,116 - Epoch: [348][ 500/ 518] Overall Loss 2.867318 Objective Loss 2.867318 LR 0.000008 Time 0.217097
-2023-04-27 07:16:07,875 - Epoch: [348][ 518/ 518] Overall Loss 2.870964 Objective Loss 2.870964 LR 0.000008 Time 0.216809
-2023-04-27 07:16:07,953 - --- validate (epoch=348)-----------
-2023-04-27 07:16:07,953 - 4952 samples (32 per mini-batch)
-2023-04-27 07:16:16,251 - Epoch: [348][ 50/ 155] Loss 3.025388 mAP 0.429854
-2023-04-27 07:16:24,175 - Epoch: [348][ 100/ 155] Loss 2.994480 mAP 0.444592
-2023-04-27 07:16:32,054 - Epoch: [348][ 150/ 155] Loss 3.026665 mAP 0.440033
-2023-04-27 07:16:32,777 - Epoch: [348][ 155/ 155] Loss 3.031569 mAP 0.440664
-2023-04-27 07:16:32,851 - ==> mAP: 0.44066 Loss: 3.032
-
-2023-04-27 07:16:32,855 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:16:32,855 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:16:32,889 - 
-
-2023-04-27 07:16:32,889 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:16:44,452 - Epoch: [349][ 50/ 518] Overall Loss 2.864333 Objective Loss 2.864333 LR 0.000008 Time 0.231215
-2023-04-27 07:16:55,239 - Epoch: [349][ 100/ 518] Overall Loss 2.892486 Objective Loss 2.892486 LR 0.000008 Time 0.223457
-2023-04-27 07:17:06,032 - Epoch: [349][ 150/ 518] Overall Loss 2.881909 Objective Loss 2.881909 LR 0.000008 Time 0.220912
-2023-04-27 07:17:16,945 - Epoch: [349][ 200/ 518] Overall Loss 2.891037 Objective Loss 2.891037 LR 0.000008 Time 0.220244
-2023-04-27 07:17:27,812 - Epoch: [349][ 250/ 518] Overall Loss 2.893314 Objective Loss 2.893314 LR 0.000008 Time 0.219657
-2023-04-27 07:17:38,705 - Epoch: [349][ 300/ 518] Overall Loss 2.886468 Objective Loss 2.886468 LR 0.000008 Time 0.219351
-2023-04-27 07:17:49,484 - Epoch: [349][ 350/ 518] Overall Loss 2.888508 Objective Loss 2.888508 LR 0.000008 Time 0.218809
-2023-04-27 07:18:00,267 - Epoch: [349][ 400/ 518] Overall Loss 2.885002 Objective Loss 2.885002 LR 0.000008 Time 0.218410
-2023-04-27 07:18:11,040 - Epoch: [349][ 450/ 518] Overall Loss 2.885557 Objective Loss 2.885557 LR 0.000008 Time 0.218080
-2023-04-27 07:18:21,935 - Epoch: [349][ 500/ 518] Overall Loss 2.883538 Objective Loss 2.883538 LR 0.000008 Time 0.218058
-2023-04-27 07:18:25,715 - Epoch: [349][ 518/ 518] Overall Loss 2.884990 Objective Loss 2.884990 LR 0.000008 Time 0.217778
-2023-04-27 07:18:25,792 - --- validate (epoch=349)-----------
-2023-04-27 07:18:25,792 - 4952 samples (32 per mini-batch)
-2023-04-27 07:18:34,053 - Epoch: [349][ 50/ 155] Loss 3.057458 mAP 0.440886
-2023-04-27 07:18:41,971 - Epoch: [349][ 100/ 155] Loss 3.049870 mAP 0.448305
-2023-04-27 07:18:49,896 - Epoch: [349][ 150/ 155] Loss 3.057298 mAP 0.442635
-2023-04-27 07:18:50,613 - Epoch: [349][ 155/ 155] Loss 3.058305 mAP 0.443109
-2023-04-27 07:18:50,686 - ==> mAP: 0.44311 Loss: 3.058
-
-2023-04-27 07:18:50,690 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:18:50,690 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:18:50,725 - 
-
-2023-04-27 07:18:50,725 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:19:02,253 - Epoch: [350][ 50/ 518] Overall Loss 2.835191 Objective Loss 2.835191 LR 0.000008 Time 0.230510
-2023-04-27 07:19:13,119 - Epoch: [350][ 100/ 518] Overall Loss 2.835441 Objective Loss 2.835441 LR 0.000008 Time 0.223893
-2023-04-27 07:19:24,037 - Epoch: [350][ 150/ 518] Overall Loss 2.853099 Objective Loss 2.853099 LR 0.000008 Time 0.222037
-2023-04-27 07:19:34,884 - Epoch: [350][ 200/ 518] Overall Loss 2.870616 Objective Loss 2.870616 LR 0.000008 Time 0.220755
-2023-04-27 07:19:45,670 - Epoch: [350][ 250/ 518] Overall Loss 2.875299 Objective Loss 2.875299 LR 0.000008 Time 0.219744
-2023-04-27 07:19:56,484 - Epoch: [350][ 300/ 518] Overall Loss 2.879387 Objective Loss 2.879387 LR 0.000008 Time 0.219160
-2023-04-27 07:20:07,330 - Epoch: [350][ 350/ 518] Overall Loss 2.880851 Objective Loss 2.880851 LR 0.000008 Time 0.218835
-2023-04-27 07:20:18,058 - Epoch: [350][ 400/ 518] Overall Loss 2.880776 Objective Loss 2.880776 LR 0.000008 Time 0.218299
-2023-04-27 07:20:28,965 - Epoch: [350][ 450/ 518] Overall Loss 2.884960 Objective Loss 2.884960 LR 0.000008 Time 0.218276
-2023-04-27 07:20:39,751 - Epoch: [350][ 500/ 518] Overall Loss 2.884202 Objective Loss 2.884202 LR 0.000008 Time 0.218018
-2023-04-27 07:20:43,531 - Epoch: [350][ 518/ 518] Overall Loss 2.886432 Objective Loss 2.886432 LR 0.000008 Time 0.217737
-2023-04-27 07:20:43,607 - --- validate (epoch=350)-----------
-2023-04-27 07:20:43,607 - 4952 samples (32 per mini-batch)
-2023-04-27 07:20:51,933 - Epoch: [350][ 50/ 155] Loss 3.050104 mAP 0.457325
-2023-04-27 07:20:59,861 - Epoch: [350][ 100/ 155] Loss 3.086197 mAP 0.445514
-2023-04-27 07:21:07,757 - Epoch: [350][ 150/ 155] Loss 3.072384 mAP 0.444124
-2023-04-27 07:21:08,468 - Epoch: [350][ 155/ 155] Loss 3.070628 mAP 0.443363
-2023-04-27 07:21:08,539 - ==> mAP: 0.44336 Loss: 3.071
-
-2023-04-27 07:21:08,543 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:21:08,543 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:21:08,577 - 
-
-2023-04-27 07:21:08,577 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:21:20,088 - Epoch: [351][ 50/ 518] Overall Loss 2.876071 Objective Loss 2.876071 LR 0.000008 Time 0.230169
-2023-04-27 07:21:30,994 - Epoch: [351][ 100/ 518] Overall Loss 2.867492 Objective Loss 2.867492 LR 0.000008 Time 0.224131
-2023-04-27 07:21:41,821 - Epoch: [351][ 150/ 518] Overall Loss 2.872653 Objective Loss 2.872653 LR 0.000008 Time 0.221591
-2023-04-27 07:21:52,582 - Epoch: [351][ 200/ 518] Overall Loss 2.864846 Objective Loss 2.864846 LR 0.000008 Time 0.219990
-2023-04-27 07:22:03,425 - Epoch: [351][ 250/ 518] Overall Loss 2.880226 Objective Loss 2.880226 LR 0.000008 Time 0.219355
-2023-04-27 07:22:14,245 - Epoch: [351][ 300/ 518] Overall Loss 2.882585 Objective Loss 2.882585 LR 0.000008 Time 0.218858
-2023-04-27 07:22:25,015 - Epoch: [351][ 350/ 518] Overall Loss 2.877009 Objective Loss 2.877009 LR 0.000008 Time 0.218360
-2023-04-27 07:22:35,771 - Epoch: [351][ 400/ 518] Overall Loss 2.870629 Objective Loss 2.870629 LR 0.000008 Time 0.217952
-2023-04-27 07:22:46,539 - Epoch: [351][ 450/ 518] Overall Loss 2.875464 Objective Loss 2.875464 LR 0.000008 Time 0.217660
-2023-04-27 07:22:57,414 - Epoch: [351][ 500/ 518] Overall Loss 2.875971 Objective Loss 2.875971 LR 0.000008 Time 0.217641
-2023-04-27 07:23:01,130 - Epoch: [351][ 518/ 518] Overall Loss 2.876088 Objective Loss 2.876088 LR 0.000008 Time 0.217251
-2023-04-27 07:23:01,206 - --- validate (epoch=351)-----------
-2023-04-27 07:23:01,207 - 4952 samples (32 per mini-batch)
-2023-04-27 07:23:09,512 - Epoch: [351][ 50/ 155] Loss 3.077772 mAP 0.449345
-2023-04-27 07:23:17,444 - Epoch: [351][ 100/ 155] Loss 3.063352 mAP 0.445561
-2023-04-27 07:23:25,359 - Epoch: [351][ 150/ 155] Loss 3.059749 mAP 0.445674
-2023-04-27 07:23:26,085 - Epoch: [351][ 155/ 155] Loss 3.056420 mAP 0.447450
-2023-04-27 07:23:26,159 - ==> mAP: 0.44745 Loss: 3.056
-
-2023-04-27 07:23:26,163 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:23:26,163 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:23:26,197 - 
-
-2023-04-27 07:23:26,197 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:23:37,640 - Epoch: [352][ 50/ 518] Overall Loss 2.848664 Objective Loss 2.848664 LR 0.000008 Time 0.228799
-2023-04-27 07:23:48,404 - Epoch: [352][ 100/ 518] Overall Loss 2.867807 Objective Loss 2.867807 LR 0.000008 Time 0.222025
-2023-04-27 07:23:59,258 - Epoch: [352][ 150/ 518] Overall Loss 2.866106 Objective Loss 2.866106 LR 0.000008 Time 0.220369
-2023-04-27 07:24:10,075 - Epoch: [352][ 200/ 518] Overall Loss 2.857677 Objective Loss 2.857677 LR 0.000008 Time 0.219352
-2023-04-27 07:24:20,889 - Epoch: [352][ 250/ 518] Overall Loss 2.861891 Objective Loss 2.861891 LR 0.000008 Time 0.218731
-2023-04-27 07:24:31,738 - Epoch: [352][ 300/ 518] Overall Loss 2.868159 Objective Loss 2.868159 LR 0.000008 Time 0.218434
-2023-04-27 07:24:42,570 - Epoch: [352][ 350/ 518] Overall Loss 2.874491 Objective Loss 2.874491 LR 0.000008 Time 0.218174
-2023-04-27 07:24:53,364 - Epoch: [352][ 400/ 518] Overall Loss 2.869848 Objective Loss 2.869848 LR 0.000008 Time 0.217883
-2023-04-27 07:25:04,163 - Epoch: [352][ 450/ 518] Overall Loss 2.869536 Objective Loss 2.869536 LR 0.000008 Time 0.217667
-2023-04-27 07:25:14,985 - Epoch: [352][ 500/ 518] Overall Loss 2.874912 Objective Loss 2.874912 LR 0.000008 Time 0.217541
-2023-04-27 07:25:18,712 - Epoch: [352][ 518/ 518] Overall Loss 2.873294 Objective Loss 2.873294 LR 0.000008 Time 0.217176
-2023-04-27 07:25:18,789 - --- validate (epoch=352)-----------
-2023-04-27 07:25:18,790 - 4952 samples (32 per mini-batch)
-2023-04-27 07:25:27,079 - Epoch: [352][ 50/ 155] Loss 3.047522 mAP 0.455830
-2023-04-27 07:25:34,973 - Epoch: [352][ 100/ 155] Loss 3.073195 mAP 0.451507
-2023-04-27 07:25:42,836 - Epoch: [352][ 150/ 155] Loss 3.047048 mAP 0.450894
-2023-04-27 07:25:43,556 - Epoch: [352][ 155/ 155] Loss 3.042547 mAP 0.452577
-2023-04-27 07:25:43,630 - ==> mAP: 0.45258 Loss: 3.043
-
-2023-04-27 07:25:43,634 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:25:43,634 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:25:43,669 - 
-
-2023-04-27 07:25:43,669 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:25:55,487 - Epoch: [353][ 50/ 518] Overall Loss 2.873402 Objective Loss 2.873402 LR 0.000008 Time 0.236310
-2023-04-27 07:26:06,311 - Epoch: [353][ 100/ 518] Overall Loss 2.864401 Objective Loss 2.864401 LR 0.000008 Time 0.226377
-2023-04-27 07:26:17,083 - Epoch: [353][ 150/ 518] Overall Loss 2.865162 Objective Loss 2.865162 LR 0.000008 Time 0.222723
-2023-04-27 07:26:27,838 - Epoch: [353][ 200/ 518] Overall Loss 2.856608 Objective Loss 2.856608 LR 0.000008 Time 0.220811
-2023-04-27 07:26:38,589 - Epoch: [353][ 250/ 518] Overall Loss 2.863410 Objective Loss 2.863410 LR 0.000008 Time 0.219646
-2023-04-27 07:26:49,397 - Epoch: [353][ 300/ 518] Overall Loss 2.866467 Objective Loss 2.866467 LR 0.000008 Time 0.219058
-2023-04-27 07:27:00,252 - Epoch: [353][ 350/ 518] Overall Loss 2.863632 Objective Loss 2.863632 LR 0.000008 Time 0.218775
-2023-04-27 07:27:11,078 - Epoch: [353][ 400/ 518] Overall Loss 2.862027 Objective Loss 2.862027 LR 0.000008 Time 0.218489
-2023-04-27 07:27:21,954 - Epoch: [353][ 450/ 518] Overall Loss 2.867089 Objective Loss 2.867089 LR 0.000008 Time 0.218379
-2023-04-27 07:27:32,799 - Epoch: [353][ 500/ 518] Overall Loss 2.865671 Objective Loss 2.865671 LR 0.000008 Time 0.218227
-2023-04-27 07:27:36,553 - Epoch: [353][ 518/ 518] Overall Loss 2.867142 Objective Loss 2.867142 LR 0.000008 Time 0.217891
-2023-04-27 07:27:36,631 - --- validate (epoch=353)-----------
-2023-04-27 07:27:36,632 - 4952 samples (32 per mini-batch)
-2023-04-27 07:27:44,932 - Epoch: [353][ 50/ 155] Loss 3.032321 mAP 0.461192
-2023-04-27 07:27:52,856 - Epoch: [353][ 100/ 155] Loss 3.030413 mAP 0.448500
-2023-04-27 07:28:00,745 - Epoch: [353][ 150/ 155] Loss 3.038340 mAP 0.454131
-2023-04-27 07:28:01,468 - Epoch: [353][ 155/ 155] Loss 3.034919 mAP 0.451147
-2023-04-27 07:28:01,535 - ==> mAP: 0.45115 Loss: 3.035
-
-2023-04-27 07:28:01,539 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:28:01,539 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:28:01,574 - 
-
-2023-04-27 07:28:01,574 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:28:13,215 - Epoch: [354][ 50/ 518] Overall Loss 2.838772 Objective Loss 2.838772 LR 0.000008 Time 0.232755
-2023-04-27 07:28:24,001 - Epoch: [354][ 100/ 518] Overall Loss 2.857679 Objective Loss 2.857679 LR 0.000008 Time 0.224219
-2023-04-27 07:28:34,803 - Epoch: [354][ 150/ 518] Overall Loss 2.840003 Objective Loss 2.840003 LR 0.000008 Time 0.221485
-2023-04-27 07:28:45,632 - Epoch: [354][ 200/ 518] Overall Loss 2.858749 Objective Loss 2.858749 LR 0.000008 Time 0.220253
-2023-04-27 07:28:56,398 - Epoch: [354][ 250/ 518] Overall Loss 2.852633 Objective Loss 2.852633 LR 0.000008 Time 0.219257
-2023-04-27 07:29:07,357 - Epoch: [354][ 300/ 518] Overall Loss 2.855336 Objective Loss 2.855336 LR 0.000008 Time 0.219239
-2023-04-27 07:29:18,235 - Epoch: [354][ 350/ 518] Overall Loss 2.857849 Objective Loss 2.857849 LR 0.000008 Time 0.218996
-2023-04-27 07:29:29,057 - Epoch: [354][ 400/ 518] Overall Loss 2.862726 Objective Loss 2.862726 LR 0.000008 Time 0.218672
-2023-04-27 07:29:39,850 - Epoch: [354][ 450/ 518] Overall Loss 2.856046 Objective Loss 2.856046 LR 0.000008 Time 0.218357
-2023-04-27 07:29:50,662 - Epoch: [354][ 500/ 518] Overall Loss 2.866139 Objective Loss 2.866139 LR 0.000008 Time 0.218142
-2023-04-27 07:29:54,446 - Epoch: [354][ 518/ 518] Overall Loss 2.868409 Objective Loss 2.868409 LR 0.000008 Time 0.217866
-2023-04-27 07:29:54,523 - --- validate (epoch=354)-----------
-2023-04-27 07:29:54,524 - 4952 samples (32 per mini-batch)
-2023-04-27 07:30:02,805 - Epoch: [354][ 50/ 155] Loss 3.046067 mAP 0.450547
-2023-04-27 07:30:10,745 - Epoch: [354][ 100/ 155] Loss 3.029114 mAP 0.457531
-2023-04-27 07:30:18,649 - Epoch: [354][ 150/ 155] Loss 3.041274 mAP 0.451567
-2023-04-27 07:30:19,366 - Epoch: [354][ 155/ 155] Loss 3.046839 mAP 0.449515
-2023-04-27 07:30:19,436 - ==> mAP: 0.44952 Loss: 3.047
-
-2023-04-27 07:30:19,440 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:30:19,440 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:30:19,475 - 
-
-2023-04-27 07:30:19,475 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:30:31,074 - Epoch: [355][ 50/ 518] Overall Loss 2.891342 Objective Loss 2.891342 LR 0.000008 Time 0.231917
-2023-04-27 07:30:41,907 - Epoch: [355][ 100/ 518] Overall Loss 2.892797 Objective Loss 2.892797 LR 0.000008 Time 0.224272
-2023-04-27 07:30:52,739 - Epoch: [355][ 150/ 518] Overall Loss 2.911606 Objective Loss 2.911606 LR 0.000008 Time 0.221723
-2023-04-27 07:31:03,494 - Epoch: [355][ 200/ 518] Overall Loss 2.889074 Objective Loss 2.889074 LR 0.000008 Time 0.220058
-2023-04-27 07:31:14,303 - Epoch: [355][ 250/ 518] Overall Loss 2.889300 Objective Loss 2.889300 LR 0.000008 Time 0.219277
-2023-04-27 07:31:25,094 - Epoch: [355][ 300/ 518] Overall Loss 2.885582 Objective Loss 2.885582 LR 0.000008 Time 0.218694
-2023-04-27 07:31:35,894 - Epoch: [355][ 350/ 518] Overall Loss 2.888794 Objective Loss 2.888794 LR 0.000008 Time 0.218306
-2023-04-27 07:31:46,632 - Epoch: [355][ 400/ 518] Overall Loss 2.888444 Objective Loss 2.888444 LR 0.000008 Time 0.217857
-2023-04-27 07:31:57,402 - Epoch: [355][ 450/ 518] Overall Loss 2.887773 Objective Loss 2.887773 LR 0.000008 Time 0.217581
-2023-04-27 07:32:08,231 - Epoch: [355][ 500/ 518] Overall Loss 2.886995 Objective Loss 2.886995 LR 0.000008 Time 0.217478
-2023-04-27 07:32:11,988 - Epoch: [355][ 518/ 518] Overall Loss 2.886553 Objective Loss 2.886553 LR 0.000008 Time 0.217173
-2023-04-27 07:32:12,064 - --- validate (epoch=355)-----------
-2023-04-27 07:32:12,064 - 4952 samples (32 per mini-batch)
-2023-04-27 07:32:20,365 - Epoch: [355][ 50/ 155] Loss 3.053714 mAP 0.446714
-2023-04-27 07:32:28,272 - Epoch: [355][ 100/ 155] Loss 3.065095 mAP 0.445573
-2023-04-27 07:32:36,106 - Epoch: [355][ 150/ 155] Loss 3.054408 mAP 0.449589
-2023-04-27 07:32:36,824 - Epoch: [355][ 155/ 155] Loss 3.051858 mAP 0.448444
-2023-04-27 07:32:36,899 - ==> mAP: 0.44844 Loss: 3.052
-
-2023-04-27 07:32:36,903 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:32:36,903 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:32:36,936 - 
-
-2023-04-27 07:32:36,937 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:32:48,644 - Epoch: [356][ 50/ 518] Overall Loss 2.921563 Objective Loss 2.921563 LR 0.000008 Time 0.234092
-2023-04-27 07:32:59,466 - Epoch: [356][ 100/ 518] Overall Loss 2.883686 Objective Loss 2.883686 LR 0.000008 Time 0.225252
-2023-04-27 07:33:10,239 - Epoch: [356][ 150/ 518] Overall Loss 2.882355 Objective Loss 2.882355 LR 0.000008 Time 0.221974
-2023-04-27 07:33:21,056 - Epoch: [356][ 200/ 518] Overall Loss 2.871842 Objective Loss 2.871842 LR 0.000008 Time 0.220561
-2023-04-27 07:33:31,842 - Epoch: [356][ 250/ 518] Overall Loss 2.870132 Objective Loss 2.870132 LR 0.000008 Time 0.219584
-2023-04-27 07:33:42,596 - Epoch: [356][ 300/ 518] Overall Loss 2.867799 Objective Loss 2.867799 LR 0.000008 Time 0.218829
-2023-04-27 07:33:53,420 - Epoch: [356][ 350/ 518] Overall Loss 2.862716 Objective Loss 2.862716 LR 0.000008 Time 0.218490
-2023-04-27 07:34:04,234 - Epoch: [356][ 400/ 518] Overall Loss 2.869990 Objective Loss 2.869990 LR 0.000008 Time 0.218209
-2023-04-27 07:34:15,042 - Epoch: [356][ 450/ 518] Overall Loss 2.868876 Objective Loss 2.868876 LR 0.000008 Time 0.217977
-2023-04-27 07:34:25,789 - Epoch: [356][ 500/ 518] Overall Loss 2.867582 Objective Loss 2.867582 LR 0.000008 Time 0.217671
-2023-04-27 07:34:29,563 - Epoch: [356][ 518/ 518] Overall Loss 2.866574 Objective Loss 2.866574 LR 0.000008 Time 0.217392
-2023-04-27 07:34:29,640 - --- validate (epoch=356)-----------
-2023-04-27 07:34:29,641 - 4952 samples (32 per mini-batch)
-2023-04-27 07:34:37,912 - Epoch: [356][ 50/ 155] Loss 3.096265 mAP 0.440022
-2023-04-27 07:34:45,798 - Epoch: [356][ 100/ 155] Loss 3.081795 mAP 0.439278
-2023-04-27 07:34:53,687 - Epoch: [356][ 150/ 155] Loss 3.073272 mAP 0.442976
-2023-04-27 07:34:54,407 - Epoch: [356][ 155/ 155] Loss 3.073087 mAP 0.442790
-2023-04-27 07:34:54,481 - ==> mAP: 0.44279 Loss: 3.073
-
-2023-04-27 07:34:54,485 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:34:54,486 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:34:54,520 - 
-
-2023-04-27 07:34:54,520 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:35:06,121 - Epoch: [357][ 50/ 518] Overall Loss 2.909468 Objective Loss 2.909468 LR 0.000008 Time 0.231954
-2023-04-27 07:35:17,073 - Epoch: [357][ 100/ 518] Overall Loss 2.885037 Objective Loss 2.885037 LR 0.000008 Time 0.225489
-2023-04-27 07:35:27,841 - Epoch: [357][ 150/ 518] Overall Loss 2.873085 Objective Loss 2.873085 LR 0.000008 Time 0.222099
-2023-04-27 07:35:38,680 - Epoch: [357][ 200/ 518] Overall Loss 2.873158 Objective Loss 2.873158 LR 0.000008 Time 0.220762
-2023-04-27 07:35:49,497 - Epoch: [357][ 250/ 518] Overall Loss 2.877445 Objective Loss 2.877445 LR 0.000008 Time 0.219871
-2023-04-27 07:36:00,266 - Epoch: [357][ 300/ 518] Overall Loss 2.876632 Objective Loss 2.876632 LR 0.000008 Time 0.219119
-2023-04-27 07:36:11,056 - Epoch: [357][ 350/ 518] Overall Loss 2.869187 Objective Loss 2.869187 LR 0.000008 Time 0.218638
-2023-04-27 07:36:21,781 - Epoch: [357][ 400/ 518] Overall Loss 2.866136 Objective Loss 2.866136 LR 0.000008 Time 0.218119
-2023-04-27 07:36:32,600 - Epoch: [357][ 450/ 518] Overall Loss 2.868380 Objective Loss 2.868380 LR 0.000008 Time 0.217920
-2023-04-27 07:36:43,451 - Epoch: [357][ 500/ 518] Overall Loss 2.866762 Objective Loss 2.866762 LR 0.000008 Time 0.217828
-2023-04-27 07:36:47,284 - Epoch: [357][ 518/ 518] Overall Loss 2.867535 Objective Loss 2.867535 LR 0.000008 Time 0.217657
-2023-04-27 07:36:47,360 - --- validate (epoch=357)-----------
-2023-04-27 07:36:47,361 - 4952 samples (32 per mini-batch)
-2023-04-27 07:36:55,605 - Epoch: [357][ 50/ 155] Loss 3.036389 mAP 0.443501
-2023-04-27 07:37:03,522 - Epoch: [357][ 100/ 155] Loss 3.047261 mAP 0.448576
-2023-04-27 07:37:11,377 - Epoch: [357][ 150/ 155] Loss 3.043453 mAP 0.445354
-2023-04-27 07:37:12,095 - Epoch: [357][ 155/ 155] Loss 3.043023 mAP 0.445454
-2023-04-27 07:37:12,169 - ==> mAP: 0.44545 Loss: 3.043
-
-2023-04-27 07:37:12,173 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:37:12,173 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:37:12,207 - 
-
-2023-04-27 07:37:12,208 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:37:23,792 - Epoch: [358][ 50/ 518] Overall Loss 2.924230 Objective Loss 2.924230 LR 0.000008 Time 0.231629
-2023-04-27 07:37:34,521 - Epoch: [358][ 100/ 518] Overall Loss 2.909454 Objective Loss 2.909454 LR 0.000008 Time 0.223094
-2023-04-27 07:37:45,326 - Epoch: [358][ 150/ 518] Overall Loss 2.886820 Objective Loss 2.886820 LR 0.000008 Time 0.220748
-2023-04-27 07:37:56,158 - Epoch: [358][ 200/ 518] Overall Loss 2.871516 Objective Loss 2.871516 LR 0.000008 Time 0.219713
-2023-04-27 07:38:06,879 - Epoch: [358][ 250/ 518] Overall Loss 2.869687 Objective Loss 2.869687 LR 0.000008 Time 0.218648
-2023-04-27 07:38:17,647 - Epoch: [358][ 300/ 518] Overall Loss 2.856580 Objective Loss 2.856580 LR 0.000008 Time 0.218096
-2023-04-27 07:38:28,432 - Epoch: [358][ 350/ 518] Overall Loss 2.857348 Objective Loss 2.857348 LR 0.000008 Time 0.217750
-2023-04-27 07:38:39,276 - Epoch: [358][ 400/ 518] Overall Loss 2.855574 Objective Loss 2.855574 LR 0.000008 Time 0.217637
-2023-04-27 07:38:50,113 - Epoch: [358][ 450/ 518] Overall Loss 2.859818 Objective Loss 2.859818 LR 0.000008 Time 0.217533
-2023-04-27 07:39:00,907 - Epoch: [358][ 500/ 518] Overall Loss 2.856839 Objective Loss 2.856839 LR 0.000008 Time 0.217364
-2023-04-27 07:39:04,643 - Epoch: [358][ 518/ 518] Overall Loss 2.854998 Objective Loss 2.854998 LR 0.000008 Time 0.217023
-2023-04-27 07:39:04,718 - --- validate (epoch=358)-----------
-2023-04-27 07:39:04,718 - 4952 samples (32 per mini-batch)
-2023-04-27 07:39:13,007 - Epoch: [358][ 50/ 155] Loss 3.050135 mAP 0.457006
-2023-04-27 07:39:20,952 - Epoch: [358][ 100/ 155] Loss 3.043688 mAP 0.453332
-2023-04-27 07:39:28,868 - Epoch: [358][ 150/ 155] Loss 3.043982 mAP 0.453813
-2023-04-27 07:39:29,595 - Epoch: [358][ 155/ 155] Loss 3.046887 mAP 0.454120
-2023-04-27 07:39:29,677 - ==> mAP: 0.45412 Loss: 3.047
-
-2023-04-27 07:39:29,681 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:39:29,681 - Saving checkpoint to: 
2.886432 LR 0.000008 Time 0.217737 -2023-04-27 07:20:43,607 - --- validate (epoch=350)----------- -2023-04-27 07:20:43,607 - 4952 samples (32 per mini-batch) -2023-04-27 07:20:51,933 - Epoch: [350][ 50/ 155] Loss 3.050104 mAP 0.457325 -2023-04-27 07:20:59,861 - Epoch: [350][ 100/ 155] Loss 3.086197 mAP 0.445514 -2023-04-27 07:21:07,757 - Epoch: [350][ 150/ 155] Loss 3.072384 mAP 0.444124 -2023-04-27 07:21:08,468 - Epoch: [350][ 155/ 155] Loss 3.070628 mAP 0.443363 -2023-04-27 07:21:08,539 - ==> mAP: 0.44336 Loss: 3.071 - -2023-04-27 07:21:08,543 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 07:21:08,543 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 07:21:08,577 - - -2023-04-27 07:21:08,577 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 07:21:20,088 - Epoch: [351][ 50/ 518] Overall Loss 2.876071 Objective Loss 2.876071 LR 0.000008 Time 0.230169 -2023-04-27 07:21:30,994 - Epoch: [351][ 100/ 518] Overall Loss 2.867492 Objective Loss 2.867492 LR 0.000008 Time 0.224131 -2023-04-27 07:21:41,821 - Epoch: [351][ 150/ 518] Overall Loss 2.872653 Objective Loss 2.872653 LR 0.000008 Time 0.221591 -2023-04-27 07:21:52,582 - Epoch: [351][ 200/ 518] Overall Loss 2.864846 Objective Loss 2.864846 LR 0.000008 Time 0.219990 -2023-04-27 07:22:03,425 - Epoch: [351][ 250/ 518] Overall Loss 2.880226 Objective Loss 2.880226 LR 0.000008 Time 0.219355 -2023-04-27 07:22:14,245 - Epoch: [351][ 300/ 518] Overall Loss 2.882585 Objective Loss 2.882585 LR 0.000008 Time 0.218858 -2023-04-27 07:22:25,015 - Epoch: [351][ 350/ 518] Overall Loss 2.877009 Objective Loss 2.877009 LR 0.000008 Time 0.218360 -2023-04-27 07:22:35,771 - Epoch: [351][ 400/ 518] Overall Loss 2.870629 Objective Loss 2.870629 LR 0.000008 Time 0.217952 -2023-04-27 07:22:46,539 - Epoch: [351][ 450/ 518] Overall Loss 2.875464 Objective Loss 2.875464 LR 0.000008 Time 0.217660 -2023-04-27 07:22:57,414 - Epoch: [351][ 500/ 518] 
Overall Loss 2.875971 Objective Loss 2.875971 LR 0.000008 Time 0.217641 -2023-04-27 07:23:01,130 - Epoch: [351][ 518/ 518] Overall Loss 2.876088 Objective Loss 2.876088 LR 0.000008 Time 0.217251 -2023-04-27 07:23:01,206 - --- validate (epoch=351)----------- -2023-04-27 07:23:01,207 - 4952 samples (32 per mini-batch) -2023-04-27 07:23:09,512 - Epoch: [351][ 50/ 155] Loss 3.077772 mAP 0.449345 -2023-04-27 07:23:17,444 - Epoch: [351][ 100/ 155] Loss 3.063352 mAP 0.445561 -2023-04-27 07:23:25,359 - Epoch: [351][ 150/ 155] Loss 3.059749 mAP 0.445674 -2023-04-27 07:23:26,085 - Epoch: [351][ 155/ 155] Loss 3.056420 mAP 0.447450 -2023-04-27 07:23:26,159 - ==> mAP: 0.44745 Loss: 3.056 - -2023-04-27 07:23:26,163 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 07:23:26,163 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 07:23:26,197 - - -2023-04-27 07:23:26,197 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 07:23:37,640 - Epoch: [352][ 50/ 518] Overall Loss 2.848664 Objective Loss 2.848664 LR 0.000008 Time 0.228799 -2023-04-27 07:23:48,404 - Epoch: [352][ 100/ 518] Overall Loss 2.867807 Objective Loss 2.867807 LR 0.000008 Time 0.222025 -2023-04-27 07:23:59,258 - Epoch: [352][ 150/ 518] Overall Loss 2.866106 Objective Loss 2.866106 LR 0.000008 Time 0.220369 -2023-04-27 07:24:10,075 - Epoch: [352][ 200/ 518] Overall Loss 2.857677 Objective Loss 2.857677 LR 0.000008 Time 0.219352 -2023-04-27 07:24:20,889 - Epoch: [352][ 250/ 518] Overall Loss 2.861891 Objective Loss 2.861891 LR 0.000008 Time 0.218731 -2023-04-27 07:24:31,738 - Epoch: [352][ 300/ 518] Overall Loss 2.868159 Objective Loss 2.868159 LR 0.000008 Time 0.218434 -2023-04-27 07:24:42,570 - Epoch: [352][ 350/ 518] Overall Loss 2.874491 Objective Loss 2.874491 LR 0.000008 Time 0.218174 -2023-04-27 07:24:53,364 - Epoch: [352][ 400/ 518] Overall Loss 2.869848 Objective Loss 2.869848 LR 0.000008 Time 0.217883 -2023-04-27 
07:25:04,163 - Epoch: [352][ 450/ 518] Overall Loss 2.869536 Objective Loss 2.869536 LR 0.000008 Time 0.217667 -2023-04-27 07:25:14,985 - Epoch: [352][ 500/ 518] Overall Loss 2.874912 Objective Loss 2.874912 LR 0.000008 Time 0.217541 -2023-04-27 07:25:18,712 - Epoch: [352][ 518/ 518] Overall Loss 2.873294 Objective Loss 2.873294 LR 0.000008 Time 0.217176 -2023-04-27 07:25:18,789 - --- validate (epoch=352)----------- -2023-04-27 07:25:18,790 - 4952 samples (32 per mini-batch) -2023-04-27 07:25:27,079 - Epoch: [352][ 50/ 155] Loss 3.047522 mAP 0.455830 -2023-04-27 07:25:34,973 - Epoch: [352][ 100/ 155] Loss 3.073195 mAP 0.451507 -2023-04-27 07:25:42,836 - Epoch: [352][ 150/ 155] Loss 3.047048 mAP 0.450894 -2023-04-27 07:25:43,556 - Epoch: [352][ 155/ 155] Loss 3.042547 mAP 0.452577 -2023-04-27 07:25:43,630 - ==> mAP: 0.45258 Loss: 3.043 - -2023-04-27 07:25:43,634 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 07:25:43,634 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 07:25:43,669 - - -2023-04-27 07:25:43,669 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 07:25:55,487 - Epoch: [353][ 50/ 518] Overall Loss 2.873402 Objective Loss 2.873402 LR 0.000008 Time 0.236310 -2023-04-27 07:26:06,311 - Epoch: [353][ 100/ 518] Overall Loss 2.864401 Objective Loss 2.864401 LR 0.000008 Time 0.226377 -2023-04-27 07:26:17,083 - Epoch: [353][ 150/ 518] Overall Loss 2.865162 Objective Loss 2.865162 LR 0.000008 Time 0.222723 -2023-04-27 07:26:27,838 - Epoch: [353][ 200/ 518] Overall Loss 2.856608 Objective Loss 2.856608 LR 0.000008 Time 0.220811 -2023-04-27 07:26:38,589 - Epoch: [353][ 250/ 518] Overall Loss 2.863410 Objective Loss 2.863410 LR 0.000008 Time 0.219646 -2023-04-27 07:26:49,397 - Epoch: [353][ 300/ 518] Overall Loss 2.866467 Objective Loss 2.866467 LR 0.000008 Time 0.219058 -2023-04-27 07:27:00,252 - Epoch: [353][ 350/ 518] Overall Loss 2.863632 Objective Loss 2.863632 LR 
0.000008 Time 0.218775 -2023-04-27 07:27:11,078 - Epoch: [353][ 400/ 518] Overall Loss 2.862027 Objective Loss 2.862027 LR 0.000008 Time 0.218489 -2023-04-27 07:27:21,954 - Epoch: [353][ 450/ 518] Overall Loss 2.867089 Objective Loss 2.867089 LR 0.000008 Time 0.218379 -2023-04-27 07:27:32,799 - Epoch: [353][ 500/ 518] Overall Loss 2.865671 Objective Loss 2.865671 LR 0.000008 Time 0.218227 -2023-04-27 07:27:36,553 - Epoch: [353][ 518/ 518] Overall Loss 2.867142 Objective Loss 2.867142 LR 0.000008 Time 0.217891 -2023-04-27 07:27:36,631 - --- validate (epoch=353)----------- -2023-04-27 07:27:36,632 - 4952 samples (32 per mini-batch) -2023-04-27 07:27:44,932 - Epoch: [353][ 50/ 155] Loss 3.032321 mAP 0.461192 -2023-04-27 07:27:52,856 - Epoch: [353][ 100/ 155] Loss 3.030413 mAP 0.448500 -2023-04-27 07:28:00,745 - Epoch: [353][ 150/ 155] Loss 3.038340 mAP 0.454131 -2023-04-27 07:28:01,468 - Epoch: [353][ 155/ 155] Loss 3.034919 mAP 0.451147 -2023-04-27 07:28:01,535 - ==> mAP: 0.45115 Loss: 3.035 - -2023-04-27 07:28:01,539 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 07:28:01,539 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 07:28:01,574 - - -2023-04-27 07:28:01,574 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 07:28:13,215 - Epoch: [354][ 50/ 518] Overall Loss 2.838772 Objective Loss 2.838772 LR 0.000008 Time 0.232755 -2023-04-27 07:28:24,001 - Epoch: [354][ 100/ 518] Overall Loss 2.857679 Objective Loss 2.857679 LR 0.000008 Time 0.224219 -2023-04-27 07:28:34,803 - Epoch: [354][ 150/ 518] Overall Loss 2.840003 Objective Loss 2.840003 LR 0.000008 Time 0.221485 -2023-04-27 07:28:45,632 - Epoch: [354][ 200/ 518] Overall Loss 2.858749 Objective Loss 2.858749 LR 0.000008 Time 0.220253 -2023-04-27 07:28:56,398 - Epoch: [354][ 250/ 518] Overall Loss 2.852633 Objective Loss 2.852633 LR 0.000008 Time 0.219257 -2023-04-27 07:29:07,357 - Epoch: [354][ 300/ 518] Overall Loss 
2.855336 Objective Loss 2.855336 LR 0.000008 Time 0.219239 -2023-04-27 07:29:18,235 - Epoch: [354][ 350/ 518] Overall Loss 2.857849 Objective Loss 2.857849 LR 0.000008 Time 0.218996 -2023-04-27 07:29:29,057 - Epoch: [354][ 400/ 518] Overall Loss 2.862726 Objective Loss 2.862726 LR 0.000008 Time 0.218672 -2023-04-27 07:29:39,850 - Epoch: [354][ 450/ 518] Overall Loss 2.856046 Objective Loss 2.856046 LR 0.000008 Time 0.218357 -2023-04-27 07:29:50,662 - Epoch: [354][ 500/ 518] Overall Loss 2.866139 Objective Loss 2.866139 LR 0.000008 Time 0.218142 -2023-04-27 07:29:54,446 - Epoch: [354][ 518/ 518] Overall Loss 2.868409 Objective Loss 2.868409 LR 0.000008 Time 0.217866 -2023-04-27 07:29:54,523 - --- validate (epoch=354)----------- -2023-04-27 07:29:54,524 - 4952 samples (32 per mini-batch) -2023-04-27 07:30:02,805 - Epoch: [354][ 50/ 155] Loss 3.046067 mAP 0.450547 -2023-04-27 07:30:10,745 - Epoch: [354][ 100/ 155] Loss 3.029114 mAP 0.457531 -2023-04-27 07:30:18,649 - Epoch: [354][ 150/ 155] Loss 3.041274 mAP 0.451567 -2023-04-27 07:30:19,366 - Epoch: [354][ 155/ 155] Loss 3.046839 mAP 0.449515 -2023-04-27 07:30:19,436 - ==> mAP: 0.44952 Loss: 3.047 - -2023-04-27 07:30:19,440 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307] -2023-04-27 07:30:19,440 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 07:30:19,475 - - -2023-04-27 07:30:19,475 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 07:30:31,074 - Epoch: [355][ 50/ 518] Overall Loss 2.891342 Objective Loss 2.891342 LR 0.000008 Time 0.231917 -2023-04-27 07:30:41,907 - Epoch: [355][ 100/ 518] Overall Loss 2.892797 Objective Loss 2.892797 LR 0.000008 Time 0.224272 -2023-04-27 07:30:52,739 - Epoch: [355][ 150/ 518] Overall Loss 2.911606 Objective Loss 2.911606 LR 0.000008 Time 0.221723 -2023-04-27 07:31:03,494 - Epoch: [355][ 200/ 518] Overall Loss 2.889074 Objective Loss 2.889074 LR 0.000008 Time 0.220058 -2023-04-27 07:31:14,303 - 
Epoch: [355][ 250/ 518] Overall Loss 2.889300 Objective Loss 2.889300 LR 0.000008 Time 0.219277
-2023-04-27 07:31:25,094 - Epoch: [355][ 300/ 518] Overall Loss 2.885582 Objective Loss 2.885582 LR 0.000008 Time 0.218694
-2023-04-27 07:31:35,894 - Epoch: [355][ 350/ 518] Overall Loss 2.888794 Objective Loss 2.888794 LR 0.000008 Time 0.218306
-2023-04-27 07:31:46,632 - Epoch: [355][ 400/ 518] Overall Loss 2.888444 Objective Loss 2.888444 LR 0.000008 Time 0.217857
-2023-04-27 07:31:57,402 - Epoch: [355][ 450/ 518] Overall Loss 2.887773 Objective Loss 2.887773 LR 0.000008 Time 0.217581
-2023-04-27 07:32:08,231 - Epoch: [355][ 500/ 518] Overall Loss 2.886995 Objective Loss 2.886995 LR 0.000008 Time 0.217478
-2023-04-27 07:32:11,988 - Epoch: [355][ 518/ 518] Overall Loss 2.886553 Objective Loss 2.886553 LR 0.000008 Time 0.217173
-2023-04-27 07:32:12,064 - --- validate (epoch=355)-----------
-2023-04-27 07:32:12,064 - 4952 samples (32 per mini-batch)
-2023-04-27 07:32:20,365 - Epoch: [355][ 50/ 155] Loss 3.053714 mAP 0.446714
-2023-04-27 07:32:28,272 - Epoch: [355][ 100/ 155] Loss 3.065095 mAP 0.445573
-2023-04-27 07:32:36,106 - Epoch: [355][ 150/ 155] Loss 3.054408 mAP 0.449589
-2023-04-27 07:32:36,824 - Epoch: [355][ 155/ 155] Loss 3.051858 mAP 0.448444
-2023-04-27 07:32:36,899 - ==> mAP: 0.44844 Loss: 3.052
-
-2023-04-27 07:32:36,903 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:32:36,903 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:32:36,936 -
-
-2023-04-27 07:32:36,937 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:32:48,644 - Epoch: [356][ 50/ 518] Overall Loss 2.921563 Objective Loss 2.921563 LR 0.000008 Time 0.234092
-2023-04-27 07:32:59,466 - Epoch: [356][ 100/ 518] Overall Loss 2.883686 Objective Loss 2.883686 LR 0.000008 Time 0.225252
-2023-04-27 07:33:10,239 - Epoch: [356][ 150/ 518] Overall Loss 2.882355 Objective Loss 2.882355 LR 0.000008 Time 0.221974
-2023-04-27 07:33:21,056 - Epoch: [356][ 200/ 518] Overall Loss 2.871842 Objective Loss 2.871842 LR 0.000008 Time 0.220561
-2023-04-27 07:33:31,842 - Epoch: [356][ 250/ 518] Overall Loss 2.870132 Objective Loss 2.870132 LR 0.000008 Time 0.219584
-2023-04-27 07:33:42,596 - Epoch: [356][ 300/ 518] Overall Loss 2.867799 Objective Loss 2.867799 LR 0.000008 Time 0.218829
-2023-04-27 07:33:53,420 - Epoch: [356][ 350/ 518] Overall Loss 2.862716 Objective Loss 2.862716 LR 0.000008 Time 0.218490
-2023-04-27 07:34:04,234 - Epoch: [356][ 400/ 518] Overall Loss 2.869990 Objective Loss 2.869990 LR 0.000008 Time 0.218209
-2023-04-27 07:34:15,042 - Epoch: [356][ 450/ 518] Overall Loss 2.868876 Objective Loss 2.868876 LR 0.000008 Time 0.217977
-2023-04-27 07:34:25,789 - Epoch: [356][ 500/ 518] Overall Loss 2.867582 Objective Loss 2.867582 LR 0.000008 Time 0.217671
-2023-04-27 07:34:29,563 - Epoch: [356][ 518/ 518] Overall Loss 2.866574 Objective Loss 2.866574 LR 0.000008 Time 0.217392
-2023-04-27 07:34:29,640 - --- validate (epoch=356)-----------
-2023-04-27 07:34:29,641 - 4952 samples (32 per mini-batch)
-2023-04-27 07:34:37,912 - Epoch: [356][ 50/ 155] Loss 3.096265 mAP 0.440022
-2023-04-27 07:34:45,798 - Epoch: [356][ 100/ 155] Loss 3.081795 mAP 0.439278
-2023-04-27 07:34:53,687 - Epoch: [356][ 150/ 155] Loss 3.073272 mAP 0.442976
-2023-04-27 07:34:54,407 - Epoch: [356][ 155/ 155] Loss 3.073087 mAP 0.442790
-2023-04-27 07:34:54,481 - ==> mAP: 0.44279 Loss: 3.073
-
-2023-04-27 07:34:54,485 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:34:54,486 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:34:54,520 -
-
-2023-04-27 07:34:54,520 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:35:06,121 - Epoch: [357][ 50/ 518] Overall Loss 2.909468 Objective Loss 2.909468 LR 0.000008 Time 0.231954
-2023-04-27 07:35:17,073 - Epoch: [357][ 100/ 518] Overall Loss 2.885037 Objective Loss 2.885037 LR 0.000008 Time 0.225489
-2023-04-27 07:35:27,841 - Epoch: [357][ 150/ 518] Overall Loss 2.873085 Objective Loss 2.873085 LR 0.000008 Time 0.222099
-2023-04-27 07:35:38,680 - Epoch: [357][ 200/ 518] Overall Loss 2.873158 Objective Loss 2.873158 LR 0.000008 Time 0.220762
-2023-04-27 07:35:49,497 - Epoch: [357][ 250/ 518] Overall Loss 2.877445 Objective Loss 2.877445 LR 0.000008 Time 0.219871
-2023-04-27 07:36:00,266 - Epoch: [357][ 300/ 518] Overall Loss 2.876632 Objective Loss 2.876632 LR 0.000008 Time 0.219119
-2023-04-27 07:36:11,056 - Epoch: [357][ 350/ 518] Overall Loss 2.869187 Objective Loss 2.869187 LR 0.000008 Time 0.218638
-2023-04-27 07:36:21,781 - Epoch: [357][ 400/ 518] Overall Loss 2.866136 Objective Loss 2.866136 LR 0.000008 Time 0.218119
-2023-04-27 07:36:32,600 - Epoch: [357][ 450/ 518] Overall Loss 2.868380 Objective Loss 2.868380 LR 0.000008 Time 0.217920
-2023-04-27 07:36:43,451 - Epoch: [357][ 500/ 518] Overall Loss 2.866762 Objective Loss 2.866762 LR 0.000008 Time 0.217828
-2023-04-27 07:36:47,284 - Epoch: [357][ 518/ 518] Overall Loss 2.867535 Objective Loss 2.867535 LR 0.000008 Time 0.217657
-2023-04-27 07:36:47,360 - --- validate (epoch=357)-----------
-2023-04-27 07:36:47,361 - 4952 samples (32 per mini-batch)
-2023-04-27 07:36:55,605 - Epoch: [357][ 50/ 155] Loss 3.036389 mAP 0.443501
-2023-04-27 07:37:03,522 - Epoch: [357][ 100/ 155] Loss 3.047261 mAP 0.448576
-2023-04-27 07:37:11,377 - Epoch: [357][ 150/ 155] Loss 3.043453 mAP 0.445354
-2023-04-27 07:37:12,095 - Epoch: [357][ 155/ 155] Loss 3.043023 mAP 0.445454
-2023-04-27 07:37:12,169 - ==> mAP: 0.44545 Loss: 3.043
-
-2023-04-27 07:37:12,173 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:37:12,173 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:37:12,207 -
-
-2023-04-27 07:37:12,208 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:37:23,792 - Epoch: [358][ 50/ 518] Overall Loss 2.924230 Objective Loss 2.924230 LR 0.000008 Time 0.231629
-2023-04-27 07:37:34,521 - Epoch: [358][ 100/ 518] Overall Loss 2.909454 Objective Loss 2.909454 LR 0.000008 Time 0.223094
-2023-04-27 07:37:45,326 - Epoch: [358][ 150/ 518] Overall Loss 2.886820 Objective Loss 2.886820 LR 0.000008 Time 0.220748
-2023-04-27 07:37:56,158 - Epoch: [358][ 200/ 518] Overall Loss 2.871516 Objective Loss 2.871516 LR 0.000008 Time 0.219713
-2023-04-27 07:38:06,879 - Epoch: [358][ 250/ 518] Overall Loss 2.869687 Objective Loss 2.869687 LR 0.000008 Time 0.218648
-2023-04-27 07:38:17,647 - Epoch: [358][ 300/ 518] Overall Loss 2.856580 Objective Loss 2.856580 LR 0.000008 Time 0.218096
-2023-04-27 07:38:28,432 - Epoch: [358][ 350/ 518] Overall Loss 2.857348 Objective Loss 2.857348 LR 0.000008 Time 0.217750
-2023-04-27 07:38:39,276 - Epoch: [358][ 400/ 518] Overall Loss 2.855574 Objective Loss 2.855574 LR 0.000008 Time 0.217637
-2023-04-27 07:38:50,113 - Epoch: [358][ 450/ 518] Overall Loss 2.859818 Objective Loss 2.859818 LR 0.000008 Time 0.217533
-2023-04-27 07:39:00,907 - Epoch: [358][ 500/ 518] Overall Loss 2.856839 Objective Loss 2.856839 LR 0.000008 Time 0.217364
-2023-04-27 07:39:04,643 - Epoch: [358][ 518/ 518] Overall Loss 2.854998 Objective Loss 2.854998 LR 0.000008 Time 0.217023
-2023-04-27 07:39:04,718 - --- validate (epoch=358)-----------
-2023-04-27 07:39:04,718 - 4952 samples (32 per mini-batch)
-2023-04-27 07:39:13,007 - Epoch: [358][ 50/ 155] Loss 3.050135 mAP 0.457006
-2023-04-27 07:39:20,952 - Epoch: [358][ 100/ 155] Loss 3.043688 mAP 0.453332
-2023-04-27 07:39:28,868 - Epoch: [358][ 150/ 155] Loss 3.043982 mAP 0.453813
-2023-04-27 07:39:29,595 - Epoch: [358][ 155/ 155] Loss 3.046887 mAP 0.454120
-2023-04-27 07:39:29,677 - ==> mAP: 0.45412 Loss: 3.047
-
-2023-04-27 07:39:29,681 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:39:29,681 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:39:29,716 -
-
-2023-04-27 07:39:29,716 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:39:41,395 - Epoch: [359][ 50/ 518] Overall Loss 2.888346 Objective Loss 2.888346 LR 0.000008 Time 0.233524
-2023-04-27 07:39:52,167 - Epoch: [359][ 100/ 518] Overall Loss 2.907492 Objective Loss 2.907492 LR 0.000008 Time 0.224462
-2023-04-27 07:40:02,936 - Epoch: [359][ 150/ 518] Overall Loss 2.894209 Objective Loss 2.894209 LR 0.000008 Time 0.221425
-2023-04-27 07:40:13,692 - Epoch: [359][ 200/ 518] Overall Loss 2.888851 Objective Loss 2.888851 LR 0.000008 Time 0.219841
-2023-04-27 07:40:24,603 - Epoch: [359][ 250/ 518] Overall Loss 2.877645 Objective Loss 2.877645 LR 0.000008 Time 0.219509
-2023-04-27 07:40:35,364 - Epoch: [359][ 300/ 518] Overall Loss 2.867877 Objective Loss 2.867877 LR 0.000008 Time 0.218790
-2023-04-27 07:40:46,138 - Epoch: [359][ 350/ 518] Overall Loss 2.868874 Objective Loss 2.868874 LR 0.000008 Time 0.218313
-2023-04-27 07:40:56,966 - Epoch: [359][ 400/ 518] Overall Loss 2.875049 Objective Loss 2.875049 LR 0.000008 Time 0.218091
-2023-04-27 07:41:07,873 - Epoch: [359][ 450/ 518] Overall Loss 2.877105 Objective Loss 2.877105 LR 0.000008 Time 0.218092
-2023-04-27 07:41:18,643 - Epoch: [359][ 500/ 518] Overall Loss 2.878914 Objective Loss 2.878914 LR 0.000008 Time 0.217821
-2023-04-27 07:41:22,368 - Epoch: [359][ 518/ 518] Overall Loss 2.879952 Objective Loss 2.879952 LR 0.000008 Time 0.217441
-2023-04-27 07:41:22,446 - --- validate (epoch=359)-----------
-2023-04-27 07:41:22,446 - 4952 samples (32 per mini-batch)
-2023-04-27 07:41:30,723 - Epoch: [359][ 50/ 155] Loss 3.014148 mAP 0.437963
-2023-04-27 07:41:38,652 - Epoch: [359][ 100/ 155] Loss 3.025238 mAP 0.444137
-2023-04-27 07:41:46,566 - Epoch: [359][ 150/ 155] Loss 3.036414 mAP 0.445018
-2023-04-27 07:41:47,284 - Epoch: [359][ 155/ 155] Loss 3.039960 mAP 0.444522
-2023-04-27 07:41:47,356 - ==> mAP: 0.44452 Loss: 3.040
-
-2023-04-27 07:41:47,360 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:41:47,361 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:41:47,395 -
-
-2023-04-27 07:41:47,395 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:41:58,929 - Epoch: [360][ 50/ 518] Overall Loss 2.882190 Objective Loss 2.882190 LR 0.000008 Time 0.230624
-2023-04-27 07:42:09,734 - Epoch: [360][ 100/ 518] Overall Loss 2.904937 Objective Loss 2.904937 LR 0.000008 Time 0.223340
-2023-04-27 07:42:20,562 - Epoch: [360][ 150/ 518] Overall Loss 2.887813 Objective Loss 2.887813 LR 0.000008 Time 0.221074
-2023-04-27 07:42:31,441 - Epoch: [360][ 200/ 518] Overall Loss 2.881513 Objective Loss 2.881513 LR 0.000008 Time 0.220191
-2023-04-27 07:42:42,236 - Epoch: [360][ 250/ 518] Overall Loss 2.885918 Objective Loss 2.885918 LR 0.000008 Time 0.219327
-2023-04-27 07:42:53,034 - Epoch: [360][ 300/ 518] Overall Loss 2.882595 Objective Loss 2.882595 LR 0.000008 Time 0.218761
-2023-04-27 07:43:03,918 - Epoch: [360][ 350/ 518] Overall Loss 2.874089 Objective Loss 2.874089 LR 0.000008 Time 0.218603
-2023-04-27 07:43:14,733 - Epoch: [360][ 400/ 518] Overall Loss 2.866949 Objective Loss 2.866949 LR 0.000008 Time 0.218309
-2023-04-27 07:43:25,488 - Epoch: [360][ 450/ 518] Overall Loss 2.863040 Objective Loss 2.863040 LR 0.000008 Time 0.217950
-2023-04-27 07:43:36,322 - Epoch: [360][ 500/ 518] Overall Loss 2.863137 Objective Loss 2.863137 LR 0.000008 Time 0.217820
-2023-04-27 07:43:40,041 - Epoch: [360][ 518/ 518] Overall Loss 2.865807 Objective Loss 2.865807 LR 0.000008 Time 0.217430
-2023-04-27 07:43:40,116 - --- validate (epoch=360)-----------
-2023-04-27 07:43:40,116 - 4952 samples (32 per mini-batch)
-2023-04-27 07:43:48,430 - Epoch: [360][ 50/ 155] Loss 3.038988 mAP 0.435284
-2023-04-27 07:43:56,343 - Epoch: [360][ 100/ 155] Loss 3.036872 mAP 0.441390
-2023-04-27 07:44:04,240 - Epoch: [360][ 150/ 155] Loss 3.039063 mAP 0.436976
-2023-04-27 07:44:04,948 - Epoch: [360][ 155/ 155] Loss 3.043096 mAP 0.433940
-2023-04-27 07:44:05,022 - ==> mAP: 0.43394 Loss: 3.043
-
-2023-04-27 07:44:05,026 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:44:05,026 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:44:05,060 -
-
-2023-04-27 07:44:05,060 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:44:16,728 - Epoch: [361][ 50/ 518] Overall Loss 2.885562 Objective Loss 2.885562 LR 0.000008 Time 0.233300
-2023-04-27 07:44:27,523 - Epoch: [361][ 100/ 518] Overall Loss 2.850843 Objective Loss 2.850843 LR 0.000008 Time 0.224578
-2023-04-27 07:44:38,424 - Epoch: [361][ 150/ 518] Overall Loss 2.872980 Objective Loss 2.872980 LR 0.000008 Time 0.222382
-2023-04-27 07:44:49,245 - Epoch: [361][ 200/ 518] Overall Loss 2.873719 Objective Loss 2.873719 LR 0.000008 Time 0.220887
-2023-04-27 07:44:59,934 - Epoch: [361][ 250/ 518] Overall Loss 2.861974 Objective Loss 2.861974 LR 0.000008 Time 0.219460
-2023-04-27 07:45:10,714 - Epoch: [361][ 300/ 518] Overall Loss 2.871653 Objective Loss 2.871653 LR 0.000008 Time 0.218810
-2023-04-27 07:45:21,517 - Epoch: [361][ 350/ 518] Overall Loss 2.862063 Objective Loss 2.862063 LR 0.000008 Time 0.218412
-2023-04-27 07:45:32,267 - Epoch: [361][ 400/ 518] Overall Loss 2.857878 Objective Loss 2.857878 LR 0.000008 Time 0.217983
-2023-04-27 07:45:43,102 - Epoch: [361][ 450/ 518] Overall Loss 2.860287 Objective Loss 2.860287 LR 0.000008 Time 0.217838
-2023-04-27 07:45:53,940 - Epoch: [361][ 500/ 518] Overall Loss 2.863406 Objective Loss 2.863406 LR 0.000008 Time 0.217726
-2023-04-27 07:45:57,672 - Epoch: [361][ 518/ 518] Overall Loss 2.860667 Objective Loss 2.860667 LR 0.000008 Time 0.217364
-2023-04-27 07:45:57,748 - --- validate (epoch=361)-----------
-2023-04-27 07:45:57,748 - 4952 samples (32 per mini-batch)
-2023-04-27 07:46:06,041 - Epoch: [361][ 50/ 155] Loss 3.052515 mAP 0.441011
-2023-04-27 07:46:13,968 - Epoch: [361][ 100/ 155] Loss 3.037367 mAP 0.452613
-2023-04-27 07:46:21,912 - Epoch: [361][ 150/ 155] Loss 3.052406 mAP 0.443899
-2023-04-27 07:46:22,634 - Epoch: [361][ 155/ 155] Loss 3.057212 mAP 0.440872
-2023-04-27 07:46:22,703 - ==> mAP: 0.44087 Loss: 3.057
-
-2023-04-27 07:46:22,707 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:46:22,707 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:46:22,741 -
-
-2023-04-27 07:46:22,741 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:46:34,240 - Epoch: [362][ 50/ 518] Overall Loss 2.854148 Objective Loss 2.854148 LR 0.000008 Time 0.229924
-2023-04-27 07:46:45,054 - Epoch: [362][ 100/ 518] Overall Loss 2.851800 Objective Loss 2.851800 LR 0.000008 Time 0.223088
-2023-04-27 07:46:55,801 - Epoch: [362][ 150/ 518] Overall Loss 2.854388 Objective Loss 2.854388 LR 0.000008 Time 0.220358
-2023-04-27 07:47:06,603 - Epoch: [362][ 200/ 518] Overall Loss 2.841833 Objective Loss 2.841833 LR 0.000008 Time 0.219269
-2023-04-27 07:47:17,332 - Epoch: [362][ 250/ 518] Overall Loss 2.844002 Objective Loss 2.844002 LR 0.000008 Time 0.218327
-2023-04-27 07:47:28,163 - Epoch: [362][ 300/ 518] Overall Loss 2.847365 Objective Loss 2.847365 LR 0.000008 Time 0.218035
-2023-04-27 07:47:38,994 - Epoch: [362][ 350/ 518] Overall Loss 2.855511 Objective Loss 2.855511 LR 0.000008 Time 0.217830
-2023-04-27 07:47:49,829 - Epoch: [362][ 400/ 518] Overall Loss 2.861466 Objective Loss 2.861466 LR 0.000008 Time 0.217684
-2023-04-27 07:48:00,735 - Epoch: [362][ 450/ 518] Overall Loss 2.868857 Objective Loss 2.868857 LR 0.000008 Time 0.217729
-2023-04-27 07:48:11,633 - Epoch: [362][ 500/ 518] Overall Loss 2.865861 Objective Loss 2.865861 LR 0.000008 Time 0.217750
-2023-04-27 07:48:15,383 - Epoch: [362][ 518/ 518] Overall Loss 2.864492 Objective Loss 2.864492 LR 0.000008 Time 0.217421
-2023-04-27 07:48:15,456 - --- validate (epoch=362)-----------
-2023-04-27 07:48:15,457 - 4952 samples (32 per mini-batch)
-2023-04-27 07:48:23,781 - Epoch: [362][ 50/ 155] Loss 3.081503 mAP 0.442692
-2023-04-27 07:48:31,689 - Epoch: [362][ 100/ 155] Loss 3.042356 mAP 0.446073
-2023-04-27 07:48:39,603 - Epoch: [362][ 150/ 155] Loss 3.045703 mAP 0.451255
-2023-04-27 07:48:40,321 - Epoch: [362][ 155/ 155] Loss 3.043720 mAP 0.452512
-2023-04-27 07:48:40,387 - ==> mAP: 0.45251 Loss: 3.044
-
-2023-04-27 07:48:40,391 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:48:40,391 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:48:40,425 -
-
-2023-04-27 07:48:40,425 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:48:52,120 - Epoch: [363][ 50/ 518] Overall Loss 2.864332 Objective Loss 2.864332 LR 0.000008 Time 0.233830
-2023-04-27 07:49:02,998 - Epoch: [363][ 100/ 518] Overall Loss 2.867456 Objective Loss 2.867456 LR 0.000008 Time 0.225682
-2023-04-27 07:49:13,774 - Epoch: [363][ 150/ 518] Overall Loss 2.871798 Objective Loss 2.871798 LR 0.000008 Time 0.222283
-2023-04-27 07:49:24,589 - Epoch: [363][ 200/ 518] Overall Loss 2.867149 Objective Loss 2.867149 LR 0.000008 Time 0.220780
-2023-04-27 07:49:35,420 - Epoch: [363][ 250/ 518] Overall Loss 2.866912 Objective Loss 2.866912 LR 0.000008 Time 0.219943
-2023-04-27 07:49:46,174 - Epoch: [363][ 300/ 518] Overall Loss 2.860939 Objective Loss 2.860939 LR 0.000008 Time 0.219128
-2023-04-27 07:49:57,015 - Epoch: [363][ 350/ 518] Overall Loss 2.860400 Objective Loss 2.860400 LR 0.000008 Time 0.218793
-2023-04-27 07:50:07,856 - Epoch: [363][ 400/ 518] Overall Loss 2.854304 Objective Loss 2.854304 LR 0.000008 Time 0.218543
-2023-04-27 07:50:18,613 - Epoch: [363][ 450/ 518] Overall Loss 2.852139 Objective Loss 2.852139 LR 0.000008 Time 0.218160
-2023-04-27 07:50:29,427 - Epoch: [363][ 500/ 518] Overall Loss 2.857841 Objective Loss 2.857841 LR 0.000008 Time 0.217970
-2023-04-27 07:50:33,194 - Epoch: [363][ 518/ 518] Overall Loss 2.857082 Objective Loss 2.857082 LR 0.000008 Time 0.217666
-2023-04-27 07:50:33,271 - --- validate (epoch=363)-----------
-2023-04-27 07:50:33,271 - 4952 samples (32 per mini-batch)
-2023-04-27 07:50:41,529 - Epoch: [363][ 50/ 155] Loss 3.069273 mAP 0.432568
-2023-04-27 07:50:49,435 - Epoch: [363][ 100/ 155] Loss 3.086864 mAP 0.424311
-2023-04-27 07:50:57,340 - Epoch: [363][ 150/ 155] Loss 3.059684 mAP 0.441130
-2023-04-27 07:50:58,064 - Epoch: [363][ 155/ 155] Loss 3.056121 mAP 0.441215
-2023-04-27 07:50:58,141 - ==> mAP: 0.44122 Loss: 3.056
-
-2023-04-27 07:50:58,145 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:50:58,146 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:50:58,181 -
-
-2023-04-27 07:50:58,181 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:51:09,810 - Epoch: [364][ 50/ 518] Overall Loss 2.810275 Objective Loss 2.810275 LR 0.000008 Time 0.232530
-2023-04-27 07:51:20,593 - Epoch: [364][ 100/ 518] Overall Loss 2.828994 Objective Loss 2.828994 LR 0.000008 Time 0.224079
-2023-04-27 07:51:31,430 - Epoch: [364][ 150/ 518] Overall Loss 2.831079 Objective Loss 2.831079 LR 0.000008 Time 0.221621
-2023-04-27 07:51:42,310 - Epoch: [364][ 200/ 518] Overall Loss 2.846604 Objective Loss 2.846604 LR 0.000008 Time 0.220609
-2023-04-27 07:51:53,103 - Epoch: [364][ 250/ 518] Overall Loss 2.850290 Objective Loss 2.850290 LR 0.000008 Time 0.219654
-2023-04-27 07:52:03,913 - Epoch: [364][ 300/ 518] Overall Loss 2.856504 Objective Loss 2.856504 LR 0.000008 Time 0.219071
-2023-04-27 07:52:14,681 - Epoch: [364][ 350/ 518] Overall Loss 2.856975 Objective Loss 2.856975 LR 0.000008 Time 0.218537
-2023-04-27 07:52:25,536 - Epoch: [364][ 400/ 518] Overall Loss 2.865476 Objective Loss 2.865476 LR 0.000008 Time 0.218354
-2023-04-27 07:52:36,288 - Epoch: [364][ 450/ 518] Overall Loss 2.868669 Objective Loss 2.868669 LR 0.000008 Time 0.217984
-2023-04-27 07:52:47,096 - Epoch: [364][ 500/ 518] Overall Loss 2.871048 Objective Loss 2.871048 LR 0.000008 Time 0.217797
-2023-04-27 07:52:50,825 - Epoch: [364][ 518/ 518] Overall Loss 2.868494 Objective Loss 2.868494 LR 0.000008 Time 0.217427
-2023-04-27 07:52:50,901 - --- validate (epoch=364)-----------
-2023-04-27 07:52:50,902 - 4952 samples (32 per mini-batch)
-2023-04-27 07:52:59,219 - Epoch: [364][ 50/ 155] Loss 3.007589 mAP 0.466977
-2023-04-27 07:53:07,167 - Epoch: [364][ 100/ 155] Loss 3.043014 mAP 0.454582
-2023-04-27 07:53:15,092 - Epoch: [364][ 150/ 155] Loss 3.038746 mAP 0.453186
-2023-04-27 07:53:15,804 - Epoch: [364][ 155/ 155] Loss 3.039212 mAP 0.452225
-2023-04-27 07:53:15,878 - ==> mAP: 0.45223 Loss: 3.039
-
-2023-04-27 07:53:15,882 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:53:15,882 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:53:15,917 -
-
-2023-04-27 07:53:15,917 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:53:27,491 - Epoch: [365][ 50/ 518] Overall Loss 2.924481 Objective Loss 2.924481 LR 0.000008 Time 0.231434
-2023-04-27 07:53:38,360 - Epoch: [365][ 100/ 518] Overall Loss 2.905822 Objective Loss 2.905822 LR 0.000008 Time 0.224391
-2023-04-27 07:53:49,280 - Epoch: [365][ 150/ 518] Overall Loss 2.875816 Objective Loss 2.875816 LR 0.000008 Time 0.222382
-2023-04-27 07:54:00,084 - Epoch: [365][ 200/ 518] Overall Loss 2.871380 Objective Loss 2.871380 LR 0.000008 Time 0.220799
-2023-04-27 07:54:10,844 - Epoch: [365][ 250/ 518] Overall Loss 2.870580 Objective Loss 2.870580 LR 0.000008 Time 0.219674
-2023-04-27 07:54:21,673 - Epoch: [365][ 300/ 518] Overall Loss 2.868904 Objective Loss 2.868904 LR 0.000008 Time 0.219153
-2023-04-27 07:54:32,391 - Epoch: [365][ 350/ 518] Overall Loss 2.867579 Objective Loss 2.867579 LR 0.000008 Time 0.218464
-2023-04-27 07:54:43,306 - Epoch: [365][ 400/ 518] Overall Loss 2.871074 Objective Loss 2.871074 LR 0.000008 Time 0.218439
-2023-04-27 07:54:54,144 - Epoch: [365][ 450/ 518] Overall Loss 2.868886 Objective Loss 2.868886 LR 0.000008 Time 0.218249
-2023-04-27 07:55:04,933 - Epoch: [365][ 500/ 518] Overall Loss 2.867646 Objective Loss 2.867646 LR 0.000008 Time 0.218000
-2023-04-27 07:55:08,630 - Epoch: [365][ 518/ 518] Overall Loss 2.869330 Objective Loss 2.869330 LR 0.000008 Time 0.217559
-2023-04-27 07:55:08,706 - --- validate (epoch=365)-----------
-2023-04-27 07:55:08,707 - 4952 samples (32 per mini-batch)
-2023-04-27 07:55:16,966 - Epoch: [365][ 50/ 155] Loss 3.140777 mAP 0.441244
-2023-04-27 07:55:24,878 - Epoch: [365][ 100/ 155] Loss 3.094540 mAP 0.448591
-2023-04-27 07:55:32,783 - Epoch: [365][ 150/ 155] Loss 3.083510 mAP 0.447867
-2023-04-27 07:55:33,509 - Epoch: [365][ 155/ 155] Loss 3.084567 mAP 0.447994
-2023-04-27 07:55:33,587 - ==> mAP: 0.44799 Loss: 3.085
-
-2023-04-27 07:55:33,591 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:55:33,591 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:55:33,625 -
-
-2023-04-27 07:55:33,625 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:55:45,235 - Epoch: [366][ 50/ 518] Overall Loss 2.934736 Objective Loss 2.934736 LR 0.000008 Time 0.232137
-2023-04-27 07:55:55,993 - Epoch: [366][ 100/ 518] Overall Loss 2.896028 Objective Loss 2.896028 LR 0.000008 Time 0.223631
-2023-04-27 07:56:06,824 - Epoch: [366][ 150/ 518] Overall Loss 2.883364 Objective Loss 2.883364 LR 0.000008 Time 0.221287
-2023-04-27 07:56:17,665 - Epoch: [366][ 200/ 518] Overall Loss 2.874910 Objective Loss 2.874910 LR 0.000008 Time 0.220163
-2023-04-27 07:56:28,424 - Epoch: [366][ 250/ 518] Overall Loss 2.871224 Objective Loss 2.871224 LR 0.000008 Time 0.219160
-2023-04-27 07:56:39,248 - Epoch: [366][ 300/ 518] Overall Loss 2.881342 Objective Loss 2.881342 LR 0.000008 Time 0.218708
-2023-04-27 07:56:50,029 - Epoch: [366][ 350/ 518] Overall Loss 2.883573 Objective Loss 2.883573 LR 0.000008 Time 0.218262
-2023-04-27 07:57:00,847 - Epoch: [366][ 400/ 518] Overall Loss 2.876901 Objective Loss 2.876901 LR 0.000008 Time 0.218021
-2023-04-27 07:57:11,684 - Epoch: [366][ 450/ 518] Overall Loss 2.873009 Objective Loss 2.873009 LR 0.000008 Time 0.217876
-2023-04-27 07:57:22,545 - Epoch: [366][ 500/ 518] Overall Loss 2.876213 Objective Loss 2.876213 LR 0.000008 Time 0.217807
-2023-04-27 07:57:26,259 - Epoch: [366][ 518/ 518] Overall Loss 2.874427 Objective Loss 2.874427 LR 0.000008 Time 0.217406
-2023-04-27 07:57:26,335 - --- validate (epoch=366)-----------
-2023-04-27 07:57:26,336 - 4952 samples (32 per mini-batch)
-2023-04-27 07:57:34,631 - Epoch: [366][ 50/ 155] Loss 3.033650 mAP 0.439872
-2023-04-27 07:57:42,545 - Epoch: [366][ 100/ 155] Loss 3.048375 mAP 0.444336
-2023-04-27 07:57:50,443 - Epoch: [366][ 150/ 155] Loss 3.030183 mAP 0.451137
-2023-04-27 07:57:51,163 - Epoch: [366][ 155/ 155] Loss 3.025752 mAP 0.451660
-2023-04-27 07:57:51,236 - ==> mAP: 0.45166 Loss: 3.026
-
-2023-04-27 07:57:51,241 - ==> Best [mAP: 0.454234 vloss: 3.062401 Sparsity:0.00 Params: 2177087 on epoch: 307]
-2023-04-27 07:57:51,241 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 07:57:51,277 -
-
-2023-04-27 07:57:51,277 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 07:58:02,761 - Epoch: [367][ 50/ 518] Overall Loss 2.903397 Objective Loss 2.903397 LR 0.000008 Time 0.229620
-2023-04-27 07:58:13,567 - Epoch: [367][ 100/ 518] Overall Loss 2.898923 Objective Loss 2.898923 LR 0.000008 Time 0.222854
-2023-04-27 07:58:24,374 - Epoch: [367][ 150/ 518] Overall Loss 2.867365 Objective Loss 2.867365 LR 0.000008 Time 0.220612
-2023-04-27 07:58:35,126 - Epoch: [367][ 200/ 518] Overall Loss 2.872621 Objective Loss 2.872621 LR 0.000008 Time 0.219211
-2023-04-27 07:58:45,998 - Epoch: [367][ 250/ 518] Overall Loss 2.876251 Objective Loss 2.876251 LR 0.000008 Time 0.218848
-2023-04-27 07:58:56,764 - Epoch: [367][ 300/ 518] Overall Loss 2.882129 Objective Loss 2.882129 LR 0.000008 Time 0.218255
-2023-04-27 07:59:07,564 - Epoch: [367][ 350/ 518] Overall Loss 2.880567 Objective Loss 2.880567 LR 0.000008 Time 0.217928
-2023-04-27 07:59:18,391 - Epoch: [367][ 400/ 518] Overall Loss 2.874665 Objective Loss 2.874665 LR 0.000008 Time 0.217752
-2023-04-27 07:59:29,273 - Epoch: [367][ 450/ 518] Overall Loss 2.876258 Objective Loss 2.876258 LR 0.000008 Time 0.217736
-2023-04-27 07:59:40,115 - Epoch: [367][ 500/ 518] Overall Loss 2.876681 Objective Loss 2.876681 LR 0.000008 Time 0.217643
-2023-04-27 07:59:43,851 - Epoch: [367][ 518/ 518] Overall Loss 2.877396 Objective Loss 2.877396 LR 0.000008 Time 0.217290
-2023-04-27 07:59:43,928 - --- validate (epoch=367)-----------
-2023-04-27 07:59:43,928 - 4952 samples (32 per mini-batch)
-2023-04-27 07:59:52,270 - Epoch: [367][ 50/ 155] Loss 3.042438 mAP 0.468007
-2023-04-27 08:00:00,207 - Epoch: [367][ 100/ 155] Loss 3.050080 mAP 0.459082
-2023-04-27 08:00:08,113 - Epoch: [367][ 150/ 155] Loss 3.038764 mAP 0.455918
-2023-04-27 08:00:08,845 - Epoch: [367][ 155/ 155] Loss 3.032273 mAP 0.458592
-2023-04-27 08:00:08,917 - ==> mAP: 0.45859 Loss: 3.032
-
-2023-04-27 08:00:08,921 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367]
-2023-04-27 08:00:08,921 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 08:00:08,971 -
-
-2023-04-27 08:00:08,972 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 08:00:20,611 - Epoch: [368][ 50/ 518] Overall Loss 2.857022 Objective Loss 2.857022 LR 0.000008 Time 0.232741
-2023-04-27 08:00:31,426 - Epoch: [368][ 100/ 518] Overall Loss 2.850707 Objective Loss 2.850707 LR 0.000008 Time 0.224499
-2023-04-27 08:00:42,259 - Epoch: [368][ 150/ 518] Overall Loss 2.851337 Objective Loss 2.851337 LR 0.000008 Time 0.221878
-2023-04-27 08:00:53,090 - Epoch: [368][ 200/ 518] Overall Loss 2.850680 Objective Loss 2.850680 LR 0.000008 Time 0.220557
-2023-04-27 08:01:03,860 - Epoch: [368][ 250/ 518] Overall Loss 2.855631 Objective Loss 2.855631 LR 0.000008 Time 0.219516
-2023-04-27 08:01:14,603 - Epoch: [368][ 300/ 518] Overall Loss 2.861280 Objective Loss 2.861280 LR 0.000008 Time 0.218737
-2023-04-27 08:01:25,442 - Epoch: [368][ 350/ 518] Overall Loss 2.856710 Objective Loss 2.856710 LR 0.000008 Time 0.218451
-2023-04-27 08:01:36,232 - Epoch: [368][ 400/ 518] Overall Loss 2.862302 Objective Loss 2.862302 LR 0.000008 Time 0.218116
-2023-04-27 08:01:47,032 - Epoch: [368][ 450/ 518] Overall Loss 2.863540 Objective Loss 2.863540 LR 0.000008 Time 0.217879
-2023-04-27 08:01:57,958 - Epoch: [368][ 500/ 518] Overall Loss 2.865830 Objective Loss 2.865830 LR 0.000008 Time 0.217939
-2023-04-27 08:02:01,710 - Epoch: [368][ 518/ 518] Overall Loss 2.863271 Objective Loss 2.863271 LR 0.000008 Time 0.217608
-2023-04-27 08:02:01,786 - --- validate (epoch=368)-----------
-2023-04-27 08:02:01,786 - 4952 samples (32 per mini-batch)
-2023-04-27 08:02:10,081 - Epoch: [368][ 50/ 155] Loss 3.048494 mAP 0.449850
-2023-04-27 08:02:18,028 - Epoch: [368][ 100/ 155] Loss 3.045954 mAP 0.451383
-2023-04-27 08:02:25,921 - Epoch: [368][ 150/ 155] Loss 3.045110 mAP 0.453191
-2023-04-27 08:02:26,634 - Epoch: [368][ 155/ 155] Loss 3.047396 mAP 0.451256
-2023-04-27 08:02:26,708 - ==> mAP: 0.45126 Loss: 3.047
-
-2023-04-27 08:02:26,711 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367]
-2023-04-27 08:02:26,711 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 08:02:26,746 -
-
-2023-04-27 08:02:26,746 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 08:02:38,410 - Epoch: [369][ 50/ 518] Overall Loss 2.814911 Objective Loss 2.814911 LR 0.000008 Time 0.233219
-2023-04-27 08:02:49,231 - Epoch: [369][ 100/ 518] Overall Loss 2.818468 Objective Loss 2.818468 LR 0.000008 Time 0.224809
-2023-04-27 08:03:00,132 - Epoch: [369][ 150/ 518] Overall Loss 2.822437 Objective Loss 2.822437 LR 0.000008 Time 0.222535
-2023-04-27 08:03:10,910 - Epoch: [369][ 200/ 518] Overall Loss 2.839170 Objective Loss 2.839170 LR 0.000008 Time 0.220781
-2023-04-27 08:03:21,755 - Epoch: [369][ 250/ 518] Overall Loss 2.849044 Objective Loss 2.849044 LR 0.000008 Time 0.219999
-2023-04-27 08:03:32,557 - Epoch: [369][ 300/ 518] Overall Loss 2.844597 Objective Loss 2.844597 LR 0.000008 Time 0.219336
-2023-04-27 08:03:43,343 - Epoch: [369][ 350/ 518] Overall Loss 2.855642 Objective Loss 2.855642 LR 0.000008 Time 0.218814
-2023-04-27 08:03:54,177 - Epoch: [369][ 400/ 518] Overall Loss 2.856524 Objective Loss 2.856524 LR 0.000008 Time 0.218542
-2023-04-27 08:04:04,920 - Epoch: [369][ 450/ 518] Overall Loss 2.860280 Objective Loss 2.860280 LR 0.000008 Time 0.218130
-2023-04-27 08:04:15,733 - Epoch: [369][ 500/ 518] Overall Loss 2.858625 Objective Loss 2.858625 LR 0.000008 Time 0.217940
-2023-04-27 08:04:19,490 - Epoch: [369][ 518/ 518] Overall Loss 2.861826 Objective Loss 2.861826 LR 0.000008 Time 0.217619
-2023-04-27 08:04:19,566 - --- validate (epoch=369)-----------
-2023-04-27 08:04:19,566 - 4952 samples (32 per mini-batch)
-2023-04-27 08:04:27,882 - Epoch: [369][ 50/ 155] Loss 3.024224 mAP 0.467254
-2023-04-27 08:04:35,818 - Epoch: [369][ 100/ 155] Loss 3.005604 mAP 0.459859
-2023-04-27 08:04:43,704 - Epoch: [369][ 150/ 155] Loss 3.017929 mAP 0.452334
-2023-04-27 08:04:44,418 - Epoch: [369][ 155/ 155] Loss 3.012405 mAP 0.453111
-2023-04-27 08:04:44,494 - ==> mAP: 0.45311 Loss: 3.012
-
-2023-04-27 08:04:44,498 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367]
-2023-04-27 08:04:44,498 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 08:04:44,532 -
-
-2023-04-27 08:04:44,533 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 08:04:56,105 - Epoch: [370][ 50/ 518] Overall Loss 2.873385 Objective Loss 2.873385 LR 0.000008 Time 0.231399
-2023-04-27 08:05:06,937 - Epoch: [370][ 100/ 518] Overall Loss 2.867400 Objective Loss 2.867400 LR 0.000008 Time 0.224003
-2023-04-27 08:05:17,759 - Epoch: [370][ 150/ 518] Overall Loss 2.862311 Objective Loss 2.862311 LR 0.000008 Time 0.221470
-2023-04-27 08:05:28,587 - Epoch: [370][ 200/ 518] Overall Loss 2.856382 Objective Loss 2.856382 LR 0.000008 Time 0.220235
-2023-04-27 08:05:39,397 - Epoch: [370][ 250/ 518] Overall Loss 2.864513 Objective Loss 2.864513 LR 0.000008 Time 0.219421
-2023-04-27 08:05:50,140 - Epoch: [370][ 300/ 518] Overall Loss 2.867024 Objective Loss 2.867024 LR 0.000008 Time 0.218655
-2023-04-27 08:06:00,955 - Epoch: [370][ 350/ 518] Overall Loss 2.861971 Objective Loss 2.861971 LR 0.000008 Time 0.218314
-2023-04-27 08:06:11,782 - Epoch: [370][ 400/ 518] Overall Loss 2.862488 Objective Loss 2.862488 LR 0.000008 Time 0.218090
-2023-04-27 08:06:22,636 - Epoch: [370][ 450/ 518] Overall Loss 2.865446 Objective Loss 2.865446 LR 0.000008 Time 0.217973
-2023-04-27 08:06:33,463 - Epoch: [370][ 500/ 518] Overall Loss 2.863748 Objective Loss 2.863748 LR 0.000008 Time 0.217827
-2023-04-27 08:06:37,188 - Epoch: [370][ 518/ 518] Overall Loss 2.863527 Objective Loss 2.863527 LR 0.000008 Time 0.217448
-2023-04-27 08:06:37,264 - --- validate (epoch=370)-----------
-2023-04-27 08:06:37,264 - 4952 samples (32 per mini-batch)
-2023-04-27 08:06:45,601 - Epoch: [370][ 50/ 155] Loss 3.066857 mAP 0.441622
-2023-04-27 08:06:53,555 - Epoch: [370][ 100/ 155] Loss 3.049439 mAP 0.447764
-2023-04-27 08:07:01,490 - Epoch: [370][ 150/ 155] Loss 3.041316 mAP 0.452087
-2023-04-27 08:07:02,206 - Epoch: [370][ 155/ 155] Loss 3.040429 mAP 0.452405
-2023-04-27 08:07:02,283 - ==> mAP: 0.45241 Loss: 3.040
-
-2023-04-27 08:07:02,287 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367]
-2023-04-27 08:07:02,287 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 08:07:02,321 -
-
-2023-04-27 08:07:02,321 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 08:07:13,803 - Epoch: [371][ 50/ 518] Overall Loss 2.863983 Objective Loss 2.863983 LR 0.000008 Time 0.229573
-2023-04-27 08:07:24,694 - Epoch: [371][ 100/ 518] Overall Loss 2.852425 Objective Loss 2.852425 LR 0.000008 Time 0.223685
-2023-04-27 08:07:35,527 - Epoch: [371][ 150/ 518] Overall Loss 2.856879 Objective Loss 2.856879 LR 0.000008 Time 0.221329
-2023-04-27 08:07:46,343 - Epoch: [371][ 200/ 518] Overall Loss 2.861101 Objective Loss 2.861101 LR 0.000008 Time 0.220069
-2023-04-27 08:07:57,243 - Epoch: [371][ 250/ 518] Overall Loss 2.855805 Objective Loss 2.855805 LR 0.000008 Time 0.219651
-2023-04-27 08:08:08,092 - Epoch: [371][ 300/ 518] Overall Loss 2.861710 Objective Loss 2.861710 LR 0.000008 Time 0.219200
-2023-04-27 08:08:18,931 - Epoch: [371][ 350/ 518] Overall Loss 2.857888 Objective Loss 2.857888 LR 0.000008 Time 0.218849
-2023-04-27 08:08:29,871 - Epoch: [371][ 400/ 518] Overall Loss 2.864811 Objective Loss 2.864811 LR 0.000008 Time 0.218839
-2023-04-27 08:08:40,634 - Epoch: [371][ 450/ 518] Overall Loss 2.862521 Objective Loss 2.862521 LR 0.000008 Time 0.218439
-2023-04-27 08:08:51,480 - Epoch: [371][ 500/ 518] Overall Loss 2.860879 Objective Loss 2.860879 LR 0.000008 Time 0.218284
-2023-04-27 08:08:55,190 - Epoch: [371][ 518/ 518] Overall Loss 2.854553 Objective Loss 2.854553 LR 0.000008 Time 0.217859
-2023-04-27 08:08:55,264 - --- validate (epoch=371)-----------
-2023-04-27 08:08:55,265 - 4952 samples (32 per mini-batch)
-2023-04-27 08:09:03,589 - Epoch: [371][ 50/ 155] Loss 3.036824 mAP 0.455412
-2023-04-27 08:09:11,515 - Epoch: [371][ 100/ 155] Loss 3.051070 mAP 0.449773
-2023-04-27 08:09:19,398 - Epoch: [371][ 150/ 155] Loss 3.043839 mAP 0.450462
-2023-04-27 08:09:20,124 - Epoch: [371][ 155/ 155] Loss 3.042390 mAP 0.451275
-2023-04-27 08:09:20,201 - ==> mAP: 0.45128 Loss: 3.042
-
-2023-04-27 08:09:20,205 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367]
-2023-04-27 08:09:20,206 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 08:09:20,239 -
-
-2023-04-27 08:09:20,239 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 08:09:31,812 - Epoch: [372][ 50/ 518] Overall Loss 2.869314 Objective Loss 2.869314 LR 0.000008 Time 0.231404
-2023-04-27 08:09:42,650 - Epoch: [372][ 100/ 518] Overall Loss 2.874960 Objective Loss 2.874960 LR 0.000008 Time 0.224065
-2023-04-27 08:09:53,461 - Epoch: [372][ 150/ 518] Overall Loss 2.869102 Objective Loss 2.869102 LR 0.000008 Time 0.221438
-2023-04-27 08:10:04,236 - Epoch: [372][ 200/ 518] Overall Loss 2.869131 Objective Loss 2.869131 LR 0.000008 Time 0.219948
-2023-04-27 08:10:15,069 - Epoch: [372][ 250/ 518] Overall Loss 2.870465 Objective Loss 2.870465 LR 0.000008 Time 0.219284
-2023-04-27 08:10:25,925 - Epoch: [372][ 300/ 518] Overall Loss 2.869332 Objective Loss 2.869332 LR 0.000008 Time 0.218916
-2023-04-27 08:10:36,722 - Epoch: [372][ 350/ 518] Overall Loss 2.871924 Objective Loss 2.871924 LR 0.000008 Time 0.218488
-2023-04-27 08:10:47,547 - Epoch: [372][ 400/ 518] Overall Loss 2.868544 Objective Loss 2.868544 LR 0.000008 Time 0.218237
-2023-04-27 08:10:58,395 - Epoch: [372][ 450/ 518] Overall Loss 2.871948 Objective Loss 2.871948 LR 0.000008 Time 0.218089
-2023-04-27 08:11:09,228 - Epoch: [372][ 500/ 518] Overall Loss 2.868265 Objective Loss 2.868265 LR 0.000008 Time 0.217943
-2023-04-27 08:11:12,968 - Epoch: [372][ 518/ 518] Overall Loss 2.866628 Objective Loss 2.866628 LR 0.000008 Time 0.217590
-2023-04-27 08:11:13,046 - --- validate (epoch=372)-----------
-2023-04-27 08:11:13,046 - 4952 samples (32 per mini-batch)
-2023-04-27 08:11:21,407 - Epoch: [372][ 50/ 155] Loss 3.072641 mAP 0.442055
-2023-04-27 08:11:29,395 - Epoch: [372][ 100/ 155] Loss 3.028341 mAP 0.456699
-2023-04-27 08:11:37,326 - Epoch: [372][ 150/ 155] Loss 3.035696 mAP 0.452122
-2023-04-27 08:11:38,050 - Epoch: [372][ 155/ 155] Loss 3.040617 mAP 0.452249
-2023-04-27 08:11:38,115 - ==> mAP: 0.45225 Loss: 3.041
-
-2023-04-27 08:11:38,119 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367]
-2023-04-27 08:11:38,119 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 08:11:38,154 -
-
-2023-04-27 08:11:38,154 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 08:11:49,882 - Epoch: [373][ 50/ 518] Overall Loss 2.857847 Objective Loss 2.857847 LR 0.000008 Time 0.234503
-2023-04-27 08:12:00,691 - Epoch: [373][ 100/ 518] Overall Loss 2.823592 Objective Loss 2.823592 LR 0.000008 Time 0.225322
-2023-04-27 08:12:11,578 - Epoch: [373][ 150/ 518] Overall Loss 2.836571 Objective Loss 2.836571 LR 0.000008 Time 0.222787
-2023-04-27 08:12:22,437 - Epoch: [373][ 200/ 518] Overall Loss 2.838915 Objective Loss 2.838915 LR 0.000008 Time 0.221377
-2023-04-27 08:12:33,304 - Epoch: [373][ 250/ 518] Overall Loss 2.842713 Objective Loss 2.842713 LR 0.000008 Time 0.220563
-2023-04-27 08:12:44,134 - Epoch: [373][ 300/ 518] Overall Loss 2.851865 Objective Loss 2.851865 LR 0.000008 Time 0.219897
-2023-04-27 08:12:54,891 - Epoch: [373][ 350/ 518] Overall Loss 2.852738 Objective Loss 2.852738 LR 0.000008 Time 0.219212
-2023-04-27 08:13:05,762 - Epoch: [373][ 400/ 518] Overall Loss 2.854312 Objective Loss 2.854312 LR 0.000008 Time 0.218984
-2023-04-27 08:13:16,585 - Epoch: [373][ 450/ 518] Overall Loss 2.858607 Objective Loss 2.858607 LR 0.000008 Time 0.218702
-2023-04-27 08:13:27,405 - Epoch: [373][ 500/ 518] Overall Loss 2.864924 Objective Loss 2.864924 LR 0.000008 Time 0.218467
-2023-04-27 08:13:31,136 - Epoch: [373][ 518/ 518] Overall Loss 2.864545 Objective Loss 2.864545 LR 0.000008 Time 0.218078
-2023-04-27 08:13:31,213 - --- validate (epoch=373)-----------
-2023-04-27 08:13:31,214 - 4952 samples (32 per mini-batch)
-2023-04-27 08:13:39,475 - Epoch: [373][ 50/ 155] Loss 3.061898 mAP 0.446175
-2023-04-27 08:13:47,369 - Epoch: [373][ 100/ 155] Loss 3.053523 mAP 0.433636
-2023-04-27 08:13:55,254 - Epoch: [373][ 150/ 155] Loss 3.037055 mAP 0.442666
-2023-04-27 08:13:55,968 - Epoch: [373][ 155/ 155] Loss 3.035556 mAP 0.444424
-2023-04-27 08:13:56,049 - ==> mAP: 0.44442 Loss: 3.036
-
-2023-04-27 08:13:56,053 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367]
-2023-04-27 08:13:56,053 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar
-2023-04-27 08:13:56,089 -
-
-2023-04-27 08:13:56,089 - Training epoch: 16551 samples (32 per mini-batch)
-2023-04-27 08:14:07,772 - Epoch: [374][ 50/ 518] Overall Loss 2.862606 Objective Loss 2.862606 LR 0.000008 Time 0.233597
-2023-04-27 08:14:18,534 - Epoch: [374][ 100/ 518] Overall Loss 2.871711 Objective Loss 2.871711 LR 0.000008 Time 0.224411
-2023-04-27 08:14:29,370 - Epoch: [374][ 150/ 518] Overall Loss 2.873492 Objective Loss 2.873492 LR 0.000008 Time 0.221831
-2023-04-27 08:14:40,179 - Epoch: [374][ 200/ 518] Overall Loss 2.860279 Objective Loss 2.860279 LR 0.000008 Time 0.220413
-2023-04-27 08:14:50,903 - Epoch: [374][ 250/ 518] Overall Loss 2.854906 Objective Loss 2.854906 LR 0.000008 Time 0.219221
-2023-04-27 08:15:01,740 - Epoch: [374][ 300/ 518] Overall Loss 2.846307 Objective Loss 2.846307 LR 0.000008 Time 0.218802
-2023-04-27 08:15:12,603 - Epoch: [374][ 350/ 518] Overall Loss 2.847711 Objective Loss 2.847711 LR 0.000008 Time 0.218576
-2023-04-27 08:15:23,396 - Epoch: [374][ 400/ 518] Overall Loss 2.851909 Objective Loss 2.851909 LR 0.000008 Time 0.218233
-2023-04-27 08:15:34,191 - Epoch: [374][ 450/ 518] Overall Loss 2.852942 Objective Loss 2.852942 LR 0.000008 Time 0.217971
-2023-04-27 08:15:45,026 - Epoch: [374][ 500/ 518] Overall Loss 2.852988 Objective Loss 2.852988 LR 0.000008 Time 0.217839
-2023-04-27 08:15:48,753 - Epoch: [374][ 518/ 518] Overall Loss 2.851952 Objective Loss 2.851952 LR 0.000008 Time 0.217464
-2023-04-27 08:15:48,830 - --- validate (epoch=374)-----------
-2023-04-27 
08:15:48,830 - 4952 samples (32 per mini-batch) -2023-04-27 08:15:57,158 - Epoch: [374][ 50/ 155] Loss 3.056324 mAP 0.452514 -2023-04-27 08:16:05,079 - Epoch: [374][ 100/ 155] Loss 3.056189 mAP 0.451890 -2023-04-27 08:16:12,952 - Epoch: [374][ 150/ 155] Loss 3.037203 mAP 0.448005 -2023-04-27 08:16:13,666 - Epoch: [374][ 155/ 155] Loss 3.037995 mAP 0.447683 -2023-04-27 08:16:13,735 - ==> mAP: 0.44768 Loss: 3.038 - -2023-04-27 08:16:13,739 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:16:13,739 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:16:13,773 - - -2023-04-27 08:16:13,774 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:16:25,496 - Epoch: [375][ 50/ 518] Overall Loss 2.870206 Objective Loss 2.870206 LR 0.000008 Time 0.234399 -2023-04-27 08:16:36,319 - Epoch: [375][ 100/ 518] Overall Loss 2.837856 Objective Loss 2.837856 LR 0.000008 Time 0.225415 -2023-04-27 08:16:47,216 - Epoch: [375][ 150/ 518] Overall Loss 2.850794 Objective Loss 2.850794 LR 0.000008 Time 0.222911 -2023-04-27 08:16:58,054 - Epoch: [375][ 200/ 518] Overall Loss 2.850472 Objective Loss 2.850472 LR 0.000008 Time 0.221367 -2023-04-27 08:17:08,835 - Epoch: [375][ 250/ 518] Overall Loss 2.850345 Objective Loss 2.850345 LR 0.000008 Time 0.220210 -2023-04-27 08:17:19,620 - Epoch: [375][ 300/ 518] Overall Loss 2.846423 Objective Loss 2.846423 LR 0.000008 Time 0.219453 -2023-04-27 08:17:30,450 - Epoch: [375][ 350/ 518] Overall Loss 2.850095 Objective Loss 2.850095 LR 0.000008 Time 0.219041 -2023-04-27 08:17:41,283 - Epoch: [375][ 400/ 518] Overall Loss 2.851993 Objective Loss 2.851993 LR 0.000008 Time 0.218741 -2023-04-27 08:17:52,174 - Epoch: [375][ 450/ 518] Overall Loss 2.850109 Objective Loss 2.850109 LR 0.000008 Time 0.218634 -2023-04-27 08:18:03,025 - Epoch: [375][ 500/ 518] Overall Loss 2.852404 Objective Loss 2.852404 LR 0.000008 Time 0.218469 -2023-04-27 08:18:06,769 - Epoch: [375][ 
518/ 518] Overall Loss 2.852170 Objective Loss 2.852170 LR 0.000008 Time 0.218105 -2023-04-27 08:18:06,848 - --- validate (epoch=375)----------- -2023-04-27 08:18:06,848 - 4952 samples (32 per mini-batch) -2023-04-27 08:18:15,217 - Epoch: [375][ 50/ 155] Loss 3.030808 mAP 0.459773 -2023-04-27 08:18:23,207 - Epoch: [375][ 100/ 155] Loss 3.044668 mAP 0.450688 -2023-04-27 08:18:31,173 - Epoch: [375][ 150/ 155] Loss 3.033739 mAP 0.450817 -2023-04-27 08:18:31,883 - Epoch: [375][ 155/ 155] Loss 3.033724 mAP 0.449076 -2023-04-27 08:18:31,963 - ==> mAP: 0.44908 Loss: 3.034 - -2023-04-27 08:18:31,966 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:18:31,967 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:18:32,001 - - -2023-04-27 08:18:32,001 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:18:43,680 - Epoch: [376][ 50/ 518] Overall Loss 2.890545 Objective Loss 2.890545 LR 0.000008 Time 0.233522 -2023-04-27 08:18:54,528 - Epoch: [376][ 100/ 518] Overall Loss 2.879744 Objective Loss 2.879744 LR 0.000008 Time 0.225228 -2023-04-27 08:19:05,305 - Epoch: [376][ 150/ 518] Overall Loss 2.864941 Objective Loss 2.864941 LR 0.000008 Time 0.221985 -2023-04-27 08:19:16,085 - Epoch: [376][ 200/ 518] Overall Loss 2.860265 Objective Loss 2.860265 LR 0.000008 Time 0.220383 -2023-04-27 08:19:26,810 - Epoch: [376][ 250/ 518] Overall Loss 2.855235 Objective Loss 2.855235 LR 0.000008 Time 0.219199 -2023-04-27 08:19:37,609 - Epoch: [376][ 300/ 518] Overall Loss 2.852829 Objective Loss 2.852829 LR 0.000008 Time 0.218657 -2023-04-27 08:19:48,410 - Epoch: [376][ 350/ 518] Overall Loss 2.854520 Objective Loss 2.854520 LR 0.000008 Time 0.218277 -2023-04-27 08:19:59,258 - Epoch: [376][ 400/ 518] Overall Loss 2.854647 Objective Loss 2.854647 LR 0.000008 Time 0.218108 -2023-04-27 08:20:09,989 - Epoch: [376][ 450/ 518] Overall Loss 2.856969 Objective Loss 2.856969 LR 0.000008 Time 0.217717 
-2023-04-27 08:20:20,791 - Epoch: [376][ 500/ 518] Overall Loss 2.861218 Objective Loss 2.861218 LR 0.000008 Time 0.217546 -2023-04-27 08:20:24,488 - Epoch: [376][ 518/ 518] Overall Loss 2.862852 Objective Loss 2.862852 LR 0.000008 Time 0.217122 -2023-04-27 08:20:24,566 - --- validate (epoch=376)----------- -2023-04-27 08:20:24,566 - 4952 samples (32 per mini-batch) -2023-04-27 08:20:32,875 - Epoch: [376][ 50/ 155] Loss 3.076894 mAP 0.455674 -2023-04-27 08:20:40,807 - Epoch: [376][ 100/ 155] Loss 3.048409 mAP 0.449770 -2023-04-27 08:20:48,695 - Epoch: [376][ 150/ 155] Loss 3.049951 mAP 0.447178 -2023-04-27 08:20:49,420 - Epoch: [376][ 155/ 155] Loss 3.047767 mAP 0.448709 -2023-04-27 08:20:49,492 - ==> mAP: 0.44871 Loss: 3.048 - -2023-04-27 08:20:49,496 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:20:49,497 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:20:49,531 - - -2023-04-27 08:20:49,531 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:21:01,222 - Epoch: [377][ 50/ 518] Overall Loss 2.861189 Objective Loss 2.861189 LR 0.000008 Time 0.233760 -2023-04-27 08:21:12,046 - Epoch: [377][ 100/ 518] Overall Loss 2.861075 Objective Loss 2.861075 LR 0.000008 Time 0.225109 -2023-04-27 08:21:22,767 - Epoch: [377][ 150/ 518] Overall Loss 2.868728 Objective Loss 2.868728 LR 0.000008 Time 0.221533 -2023-04-27 08:21:33,576 - Epoch: [377][ 200/ 518] Overall Loss 2.857944 Objective Loss 2.857944 LR 0.000008 Time 0.220186 -2023-04-27 08:21:44,373 - Epoch: [377][ 250/ 518] Overall Loss 2.864840 Objective Loss 2.864840 LR 0.000008 Time 0.219331 -2023-04-27 08:21:55,197 - Epoch: [377][ 300/ 518] Overall Loss 2.867250 Objective Loss 2.867250 LR 0.000008 Time 0.218850 -2023-04-27 08:22:06,019 - Epoch: [377][ 350/ 518] Overall Loss 2.867279 Objective Loss 2.867279 LR 0.000008 Time 0.218502 -2023-04-27 08:22:16,797 - Epoch: [377][ 400/ 518] Overall Loss 2.869935 Objective Loss 
2.869935 LR 0.000008 Time 0.218131 -2023-04-27 08:22:27,597 - Epoch: [377][ 450/ 518] Overall Loss 2.865551 Objective Loss 2.865551 LR 0.000008 Time 0.217891 -2023-04-27 08:22:38,473 - Epoch: [377][ 500/ 518] Overall Loss 2.867141 Objective Loss 2.867141 LR 0.000008 Time 0.217850 -2023-04-27 08:22:42,214 - Epoch: [377][ 518/ 518] Overall Loss 2.866565 Objective Loss 2.866565 LR 0.000008 Time 0.217501 -2023-04-27 08:22:42,291 - --- validate (epoch=377)----------- -2023-04-27 08:22:42,291 - 4952 samples (32 per mini-batch) -2023-04-27 08:22:50,571 - Epoch: [377][ 50/ 155] Loss 3.055832 mAP 0.448093 -2023-04-27 08:22:58,498 - Epoch: [377][ 100/ 155] Loss 3.071528 mAP 0.440057 -2023-04-27 08:23:06,393 - Epoch: [377][ 150/ 155] Loss 3.045951 mAP 0.443899 -2023-04-27 08:23:07,113 - Epoch: [377][ 155/ 155] Loss 3.046380 mAP 0.444904 -2023-04-27 08:23:07,182 - ==> mAP: 0.44490 Loss: 3.046 - -2023-04-27 08:23:07,186 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:23:07,186 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:23:07,221 - - -2023-04-27 08:23:07,221 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:23:18,778 - Epoch: [378][ 50/ 518] Overall Loss 2.832137 Objective Loss 2.832137 LR 0.000008 Time 0.231099 -2023-04-27 08:23:29,620 - Epoch: [378][ 100/ 518] Overall Loss 2.832259 Objective Loss 2.832259 LR 0.000008 Time 0.223953 -2023-04-27 08:23:40,449 - Epoch: [378][ 150/ 518] Overall Loss 2.831868 Objective Loss 2.831868 LR 0.000008 Time 0.221481 -2023-04-27 08:23:51,241 - Epoch: [378][ 200/ 518] Overall Loss 2.849772 Objective Loss 2.849772 LR 0.000008 Time 0.220061 -2023-04-27 08:24:02,009 - Epoch: [378][ 250/ 518] Overall Loss 2.850465 Objective Loss 2.850465 LR 0.000008 Time 0.219119 -2023-04-27 08:24:12,827 - Epoch: [378][ 300/ 518] Overall Loss 2.850694 Objective Loss 2.850694 LR 0.000008 Time 0.218651 -2023-04-27 08:24:23,688 - Epoch: [378][ 350/ 518] 
Overall Loss 2.853793 Objective Loss 2.853793 LR 0.000008 Time 0.218444 -2023-04-27 08:24:34,533 - Epoch: [378][ 400/ 518] Overall Loss 2.850773 Objective Loss 2.850773 LR 0.000008 Time 0.218247 -2023-04-27 08:24:45,415 - Epoch: [378][ 450/ 518] Overall Loss 2.850322 Objective Loss 2.850322 LR 0.000008 Time 0.218177 -2023-04-27 08:24:56,257 - Epoch: [378][ 500/ 518] Overall Loss 2.856117 Objective Loss 2.856117 LR 0.000008 Time 0.218038 -2023-04-27 08:24:59,993 - Epoch: [378][ 518/ 518] Overall Loss 2.861394 Objective Loss 2.861394 LR 0.000008 Time 0.217674 -2023-04-27 08:25:00,066 - --- validate (epoch=378)----------- -2023-04-27 08:25:00,066 - 4952 samples (32 per mini-batch) -2023-04-27 08:25:08,377 - Epoch: [378][ 50/ 155] Loss 3.066599 mAP 0.442920 -2023-04-27 08:25:16,313 - Epoch: [378][ 100/ 155] Loss 3.061739 mAP 0.446090 -2023-04-27 08:25:24,204 - Epoch: [378][ 150/ 155] Loss 3.039024 mAP 0.452784 -2023-04-27 08:25:24,927 - Epoch: [378][ 155/ 155] Loss 3.038256 mAP 0.452016 -2023-04-27 08:25:25,004 - ==> mAP: 0.45202 Loss: 3.038 - -2023-04-27 08:25:25,007 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:25:25,007 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:25:25,041 - - -2023-04-27 08:25:25,042 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:25:36,710 - Epoch: [379][ 50/ 518] Overall Loss 2.834904 Objective Loss 2.834904 LR 0.000008 Time 0.233314 -2023-04-27 08:25:47,520 - Epoch: [379][ 100/ 518] Overall Loss 2.848456 Objective Loss 2.848456 LR 0.000008 Time 0.224738 -2023-04-27 08:25:58,389 - Epoch: [379][ 150/ 518] Overall Loss 2.847865 Objective Loss 2.847865 LR 0.000008 Time 0.222279 -2023-04-27 08:26:09,204 - Epoch: [379][ 200/ 518] Overall Loss 2.859520 Objective Loss 2.859520 LR 0.000008 Time 0.220777 -2023-04-27 08:26:20,012 - Epoch: [379][ 250/ 518] Overall Loss 2.850339 Objective Loss 2.850339 LR 0.000008 Time 0.219848 -2023-04-27 
08:26:30,823 - Epoch: [379][ 300/ 518] Overall Loss 2.855197 Objective Loss 2.855197 LR 0.000008 Time 0.219235 -2023-04-27 08:26:41,692 - Epoch: [379][ 350/ 518] Overall Loss 2.855622 Objective Loss 2.855622 LR 0.000008 Time 0.218967 -2023-04-27 08:26:52,448 - Epoch: [379][ 400/ 518] Overall Loss 2.855319 Objective Loss 2.855319 LR 0.000008 Time 0.218481 -2023-04-27 08:27:03,160 - Epoch: [379][ 450/ 518] Overall Loss 2.856193 Objective Loss 2.856193 LR 0.000008 Time 0.218008 -2023-04-27 08:27:13,997 - Epoch: [379][ 500/ 518] Overall Loss 2.862223 Objective Loss 2.862223 LR 0.000008 Time 0.217878 -2023-04-27 08:27:17,766 - Epoch: [379][ 518/ 518] Overall Loss 2.862782 Objective Loss 2.862782 LR 0.000008 Time 0.217582 -2023-04-27 08:27:17,843 - --- validate (epoch=379)----------- -2023-04-27 08:27:17,843 - 4952 samples (32 per mini-batch) -2023-04-27 08:27:26,193 - Epoch: [379][ 50/ 155] Loss 3.073034 mAP 0.467737 -2023-04-27 08:27:34,116 - Epoch: [379][ 100/ 155] Loss 3.030493 mAP 0.460526 -2023-04-27 08:27:41,983 - Epoch: [379][ 150/ 155] Loss 3.051125 mAP 0.456107 -2023-04-27 08:27:42,690 - Epoch: [379][ 155/ 155] Loss 3.052549 mAP 0.456555 -2023-04-27 08:27:42,765 - ==> mAP: 0.45656 Loss: 3.053 - -2023-04-27 08:27:42,770 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:27:42,770 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:27:42,806 - - -2023-04-27 08:27:42,806 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:27:54,477 - Epoch: [380][ 50/ 518] Overall Loss 2.824218 Objective Loss 2.824218 LR 0.000008 Time 0.233372 -2023-04-27 08:28:05,293 - Epoch: [380][ 100/ 518] Overall Loss 2.814120 Objective Loss 2.814120 LR 0.000008 Time 0.224823 -2023-04-27 08:28:15,992 - Epoch: [380][ 150/ 518] Overall Loss 2.831629 Objective Loss 2.831629 LR 0.000008 Time 0.221203 -2023-04-27 08:28:26,786 - Epoch: [380][ 200/ 518] Overall Loss 2.838413 Objective Loss 2.838413 LR 
0.000008 Time 0.219861 -2023-04-27 08:28:37,592 - Epoch: [380][ 250/ 518] Overall Loss 2.831339 Objective Loss 2.831339 LR 0.000008 Time 0.219109 -2023-04-27 08:28:48,430 - Epoch: [380][ 300/ 518] Overall Loss 2.839796 Objective Loss 2.839796 LR 0.000008 Time 0.218711 -2023-04-27 08:28:59,221 - Epoch: [380][ 350/ 518] Overall Loss 2.836672 Objective Loss 2.836672 LR 0.000008 Time 0.218294 -2023-04-27 08:29:10,091 - Epoch: [380][ 400/ 518] Overall Loss 2.844403 Objective Loss 2.844403 LR 0.000008 Time 0.218177 -2023-04-27 08:29:20,860 - Epoch: [380][ 450/ 518] Overall Loss 2.847865 Objective Loss 2.847865 LR 0.000008 Time 0.217864 -2023-04-27 08:29:31,709 - Epoch: [380][ 500/ 518] Overall Loss 2.842854 Objective Loss 2.842854 LR 0.000008 Time 0.217772 -2023-04-27 08:29:35,497 - Epoch: [380][ 518/ 518] Overall Loss 2.845024 Objective Loss 2.845024 LR 0.000008 Time 0.217517 -2023-04-27 08:29:35,575 - --- validate (epoch=380)----------- -2023-04-27 08:29:35,576 - 4952 samples (32 per mini-batch) -2023-04-27 08:29:43,884 - Epoch: [380][ 50/ 155] Loss 3.113311 mAP 0.459118 -2023-04-27 08:29:51,835 - Epoch: [380][ 100/ 155] Loss 3.065300 mAP 0.449593 -2023-04-27 08:29:59,778 - Epoch: [380][ 150/ 155] Loss 3.055984 mAP 0.443354 -2023-04-27 08:30:00,499 - Epoch: [380][ 155/ 155] Loss 3.050506 mAP 0.443435 -2023-04-27 08:30:00,568 - ==> mAP: 0.44343 Loss: 3.051 - -2023-04-27 08:30:00,571 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:30:00,571 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:30:00,605 - - -2023-04-27 08:30:00,605 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:30:12,174 - Epoch: [381][ 50/ 518] Overall Loss 2.899429 Objective Loss 2.899429 LR 0.000008 Time 0.231323 -2023-04-27 08:30:23,082 - Epoch: [381][ 100/ 518] Overall Loss 2.870579 Objective Loss 2.870579 LR 0.000008 Time 0.224726 -2023-04-27 08:30:33,870 - Epoch: [381][ 150/ 518] Overall Loss 
2.860263 Objective Loss 2.860263 LR 0.000008 Time 0.221722 -2023-04-27 08:30:44,681 - Epoch: [381][ 200/ 518] Overall Loss 2.848496 Objective Loss 2.848496 LR 0.000008 Time 0.220342 -2023-04-27 08:30:55,527 - Epoch: [381][ 250/ 518] Overall Loss 2.843503 Objective Loss 2.843503 LR 0.000008 Time 0.219649 -2023-04-27 08:31:06,283 - Epoch: [381][ 300/ 518] Overall Loss 2.854643 Objective Loss 2.854643 LR 0.000008 Time 0.218892 -2023-04-27 08:31:17,098 - Epoch: [381][ 350/ 518] Overall Loss 2.851685 Objective Loss 2.851685 LR 0.000008 Time 0.218517 -2023-04-27 08:31:27,878 - Epoch: [381][ 400/ 518] Overall Loss 2.845929 Objective Loss 2.845929 LR 0.000008 Time 0.218148 -2023-04-27 08:31:38,694 - Epoch: [381][ 450/ 518] Overall Loss 2.850191 Objective Loss 2.850191 LR 0.000008 Time 0.217941 -2023-04-27 08:31:49,589 - Epoch: [381][ 500/ 518] Overall Loss 2.851203 Objective Loss 2.851203 LR 0.000008 Time 0.217935 -2023-04-27 08:31:53,333 - Epoch: [381][ 518/ 518] Overall Loss 2.850890 Objective Loss 2.850890 LR 0.000008 Time 0.217587 -2023-04-27 08:31:53,410 - --- validate (epoch=381)----------- -2023-04-27 08:31:53,410 - 4952 samples (32 per mini-batch) -2023-04-27 08:32:01,743 - Epoch: [381][ 50/ 155] Loss 3.023347 mAP 0.465214 -2023-04-27 08:32:09,721 - Epoch: [381][ 100/ 155] Loss 3.032367 mAP 0.455093 -2023-04-27 08:32:17,632 - Epoch: [381][ 150/ 155] Loss 3.024319 mAP 0.457929 -2023-04-27 08:32:18,353 - Epoch: [381][ 155/ 155] Loss 3.024346 mAP 0.457336 -2023-04-27 08:32:18,425 - ==> mAP: 0.45734 Loss: 3.024 - -2023-04-27 08:32:18,429 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:32:18,430 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:32:18,465 - - -2023-04-27 08:32:18,465 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:32:30,073 - Epoch: [382][ 50/ 518] Overall Loss 2.880411 Objective Loss 2.880411 LR 0.000008 Time 0.232107 -2023-04-27 08:32:40,915 - 
Epoch: [382][ 100/ 518] Overall Loss 2.891887 Objective Loss 2.891887 LR 0.000008 Time 0.224459 -2023-04-27 08:32:51,800 - Epoch: [382][ 150/ 518] Overall Loss 2.865306 Objective Loss 2.865306 LR 0.000008 Time 0.222199 -2023-04-27 08:33:02,617 - Epoch: [382][ 200/ 518] Overall Loss 2.850410 Objective Loss 2.850410 LR 0.000008 Time 0.220726 -2023-04-27 08:33:13,527 - Epoch: [382][ 250/ 518] Overall Loss 2.843497 Objective Loss 2.843497 LR 0.000008 Time 0.220213 -2023-04-27 08:33:24,332 - Epoch: [382][ 300/ 518] Overall Loss 2.850586 Objective Loss 2.850586 LR 0.000008 Time 0.219524 -2023-04-27 08:33:35,221 - Epoch: [382][ 350/ 518] Overall Loss 2.852019 Objective Loss 2.852019 LR 0.000008 Time 0.219268 -2023-04-27 08:33:46,016 - Epoch: [382][ 400/ 518] Overall Loss 2.851629 Objective Loss 2.851629 LR 0.000008 Time 0.218845 -2023-04-27 08:33:56,834 - Epoch: [382][ 450/ 518] Overall Loss 2.848664 Objective Loss 2.848664 LR 0.000008 Time 0.218564 -2023-04-27 08:34:07,768 - Epoch: [382][ 500/ 518] Overall Loss 2.846262 Objective Loss 2.846262 LR 0.000008 Time 0.218573 -2023-04-27 08:34:11,514 - Epoch: [382][ 518/ 518] Overall Loss 2.846528 Objective Loss 2.846528 LR 0.000008 Time 0.218209 -2023-04-27 08:34:11,592 - --- validate (epoch=382)----------- -2023-04-27 08:34:11,592 - 4952 samples (32 per mini-batch) -2023-04-27 08:34:19,919 - Epoch: [382][ 50/ 155] Loss 3.023836 mAP 0.477010 -2023-04-27 08:34:27,867 - Epoch: [382][ 100/ 155] Loss 3.027355 mAP 0.453395 -2023-04-27 08:34:35,782 - Epoch: [382][ 150/ 155] Loss 3.025541 mAP 0.449964 -2023-04-27 08:34:36,497 - Epoch: [382][ 155/ 155] Loss 3.023584 mAP 0.450474 -2023-04-27 08:34:36,575 - ==> mAP: 0.45047 Loss: 3.024 - -2023-04-27 08:34:36,578 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:34:36,579 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:34:36,613 - - -2023-04-27 08:34:36,613 - Training epoch: 16551 samples (32 per 
mini-batch) -2023-04-27 08:34:48,137 - Epoch: [383][ 50/ 518] Overall Loss 2.888702 Objective Loss 2.888702 LR 0.000008 Time 0.230433 -2023-04-27 08:34:58,924 - Epoch: [383][ 100/ 518] Overall Loss 2.836354 Objective Loss 2.836354 LR 0.000008 Time 0.223068 -2023-04-27 08:35:09,745 - Epoch: [383][ 150/ 518] Overall Loss 2.832917 Objective Loss 2.832917 LR 0.000008 Time 0.220842 -2023-04-27 08:35:20,507 - Epoch: [383][ 200/ 518] Overall Loss 2.831314 Objective Loss 2.831314 LR 0.000008 Time 0.219433 -2023-04-27 08:35:31,301 - Epoch: [383][ 250/ 518] Overall Loss 2.844469 Objective Loss 2.844469 LR 0.000008 Time 0.218715 -2023-04-27 08:35:42,134 - Epoch: [383][ 300/ 518] Overall Loss 2.855438 Objective Loss 2.855438 LR 0.000008 Time 0.218368 -2023-04-27 08:35:53,012 - Epoch: [383][ 350/ 518] Overall Loss 2.852349 Objective Loss 2.852349 LR 0.000008 Time 0.218248 -2023-04-27 08:36:03,850 - Epoch: [383][ 400/ 518] Overall Loss 2.858935 Objective Loss 2.858935 LR 0.000008 Time 0.218058 -2023-04-27 08:36:14,661 - Epoch: [383][ 450/ 518] Overall Loss 2.853074 Objective Loss 2.853074 LR 0.000008 Time 0.217851 -2023-04-27 08:36:25,522 - Epoch: [383][ 500/ 518] Overall Loss 2.856169 Objective Loss 2.856169 LR 0.000008 Time 0.217784 -2023-04-27 08:36:29,243 - Epoch: [383][ 518/ 518] Overall Loss 2.854858 Objective Loss 2.854858 LR 0.000008 Time 0.217398 -2023-04-27 08:36:29,318 - --- validate (epoch=383)----------- -2023-04-27 08:36:29,319 - 4952 samples (32 per mini-batch) -2023-04-27 08:36:37,615 - Epoch: [383][ 50/ 155] Loss 3.022514 mAP 0.465446 -2023-04-27 08:36:45,541 - Epoch: [383][ 100/ 155] Loss 3.023125 mAP 0.454769 -2023-04-27 08:36:53,435 - Epoch: [383][ 150/ 155] Loss 3.032220 mAP 0.445225 -2023-04-27 08:36:54,163 - Epoch: [383][ 155/ 155] Loss 3.029943 mAP 0.447040 -2023-04-27 08:36:54,237 - ==> mAP: 0.44704 Loss: 3.030 - -2023-04-27 08:36:54,241 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:36:54,241 - 
Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:36:54,277 - - -2023-04-27 08:36:54,277 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:37:05,985 - Epoch: [384][ 50/ 518] Overall Loss 2.835695 Objective Loss 2.835695 LR 0.000008 Time 0.234116 -2023-04-27 08:37:16,849 - Epoch: [384][ 100/ 518] Overall Loss 2.830107 Objective Loss 2.830107 LR 0.000008 Time 0.225675 -2023-04-27 08:37:27,538 - Epoch: [384][ 150/ 518] Overall Loss 2.840334 Objective Loss 2.840334 LR 0.000008 Time 0.221701 -2023-04-27 08:37:38,320 - Epoch: [384][ 200/ 518] Overall Loss 2.831458 Objective Loss 2.831458 LR 0.000008 Time 0.220180 -2023-04-27 08:37:49,186 - Epoch: [384][ 250/ 518] Overall Loss 2.836290 Objective Loss 2.836290 LR 0.000008 Time 0.219601 -2023-04-27 08:38:00,052 - Epoch: [384][ 300/ 518] Overall Loss 2.845097 Objective Loss 2.845097 LR 0.000008 Time 0.219214 -2023-04-27 08:38:10,861 - Epoch: [384][ 350/ 518] Overall Loss 2.849744 Objective Loss 2.849744 LR 0.000008 Time 0.218777 -2023-04-27 08:38:21,727 - Epoch: [384][ 400/ 518] Overall Loss 2.848560 Objective Loss 2.848560 LR 0.000008 Time 0.218590 -2023-04-27 08:38:32,595 - Epoch: [384][ 450/ 518] Overall Loss 2.846828 Objective Loss 2.846828 LR 0.000008 Time 0.218450 -2023-04-27 08:38:43,450 - Epoch: [384][ 500/ 518] Overall Loss 2.849306 Objective Loss 2.849306 LR 0.000008 Time 0.218311 -2023-04-27 08:38:47,204 - Epoch: [384][ 518/ 518] Overall Loss 2.849997 Objective Loss 2.849997 LR 0.000008 Time 0.217973 -2023-04-27 08:38:47,281 - --- validate (epoch=384)----------- -2023-04-27 08:38:47,281 - 4952 samples (32 per mini-batch) -2023-04-27 08:38:55,627 - Epoch: [384][ 50/ 155] Loss 3.088969 mAP 0.445020 -2023-04-27 08:39:03,508 - Epoch: [384][ 100/ 155] Loss 3.055703 mAP 0.440921 -2023-04-27 08:39:11,396 - Epoch: [384][ 150/ 155] Loss 3.040675 mAP 0.448600 -2023-04-27 08:39:12,113 - Epoch: [384][ 155/ 155] Loss 3.036911 mAP 0.449585 -2023-04-27 08:39:12,189 - ==> mAP: 
0.44959 Loss: 3.037 - -2023-04-27 08:39:12,193 - ==> Best [mAP: 0.458592 vloss: 3.032273 Sparsity:0.00 Params: 2177087 on epoch: 367] -2023-04-27 08:39:12,193 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:39:12,227 - - -2023-04-27 08:39:12,227 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:39:23,794 - Epoch: [385][ 50/ 518] Overall Loss 2.887228 Objective Loss 2.887228 LR 0.000008 Time 0.231285 -2023-04-27 08:39:34,597 - Epoch: [385][ 100/ 518] Overall Loss 2.861545 Objective Loss 2.861545 LR 0.000008 Time 0.223656 -2023-04-27 08:39:45,428 - Epoch: [385][ 150/ 518] Overall Loss 2.851685 Objective Loss 2.851685 LR 0.000008 Time 0.221297 -2023-04-27 08:39:56,240 - Epoch: [385][ 200/ 518] Overall Loss 2.857823 Objective Loss 2.857823 LR 0.000008 Time 0.220024 -2023-04-27 08:40:07,224 - Epoch: [385][ 250/ 518] Overall Loss 2.844182 Objective Loss 2.844182 LR 0.000008 Time 0.219952 -2023-04-27 08:40:17,941 - Epoch: [385][ 300/ 518] Overall Loss 2.853734 Objective Loss 2.853734 LR 0.000008 Time 0.219010 -2023-04-27 08:40:28,709 - Epoch: [385][ 350/ 518] Overall Loss 2.855496 Objective Loss 2.855496 LR 0.000008 Time 0.218484 -2023-04-27 08:40:39,515 - Epoch: [385][ 400/ 518] Overall Loss 2.859394 Objective Loss 2.859394 LR 0.000008 Time 0.218184 -2023-04-27 08:40:50,372 - Epoch: [385][ 450/ 518] Overall Loss 2.857065 Objective Loss 2.857065 LR 0.000008 Time 0.218065 -2023-04-27 08:41:01,228 - Epoch: [385][ 500/ 518] Overall Loss 2.856774 Objective Loss 2.856774 LR 0.000008 Time 0.217968 -2023-04-27 08:41:04,964 - Epoch: [385][ 518/ 518] Overall Loss 2.856774 Objective Loss 2.856774 LR 0.000008 Time 0.217604 -2023-04-27 08:41:05,041 - --- validate (epoch=385)----------- -2023-04-27 08:41:05,041 - 4952 samples (32 per mini-batch) -2023-04-27 08:41:13,387 - Epoch: [385][ 50/ 155] Loss 3.007619 mAP 0.468534 -2023-04-27 08:41:21,389 - Epoch: [385][ 100/ 155] Loss 3.030757 mAP 0.465943 -2023-04-27 08:41:29,339 - Epoch: 
[385][ 150/ 155] Loss 3.033971 mAP 0.462239 -2023-04-27 08:41:30,070 - Epoch: [385][ 155/ 155] Loss 3.029455 mAP 0.461116 -2023-04-27 08:41:30,145 - ==> mAP: 0.46112 Loss: 3.029 - -2023-04-27 08:41:30,149 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 08:41:30,149 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:41:30,198 - - -2023-04-27 08:41:30,199 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:41:41,928 - Epoch: [386][ 50/ 518] Overall Loss 2.862542 Objective Loss 2.862542 LR 0.000008 Time 0.234530 -2023-04-27 08:41:52,724 - Epoch: [386][ 100/ 518] Overall Loss 2.838003 Objective Loss 2.838003 LR 0.000008 Time 0.225215 -2023-04-27 08:42:03,541 - Epoch: [386][ 150/ 518] Overall Loss 2.830419 Objective Loss 2.830419 LR 0.000008 Time 0.222244 -2023-04-27 08:42:14,362 - Epoch: [386][ 200/ 518] Overall Loss 2.822268 Objective Loss 2.822268 LR 0.000008 Time 0.220778 -2023-04-27 08:42:25,107 - Epoch: [386][ 250/ 518] Overall Loss 2.837538 Objective Loss 2.837538 LR 0.000008 Time 0.219598 -2023-04-27 08:42:35,936 - Epoch: [386][ 300/ 518] Overall Loss 2.834996 Objective Loss 2.834996 LR 0.000008 Time 0.219088 -2023-04-27 08:42:46,719 - Epoch: [386][ 350/ 518] Overall Loss 2.835530 Objective Loss 2.835530 LR 0.000008 Time 0.218594 -2023-04-27 08:42:57,579 - Epoch: [386][ 400/ 518] Overall Loss 2.831335 Objective Loss 2.831335 LR 0.000008 Time 0.218418 -2023-04-27 08:43:08,367 - Epoch: [386][ 450/ 518] Overall Loss 2.833142 Objective Loss 2.833142 LR 0.000008 Time 0.218118 -2023-04-27 08:43:19,216 - Epoch: [386][ 500/ 518] Overall Loss 2.837355 Objective Loss 2.837355 LR 0.000008 Time 0.218002 -2023-04-27 08:43:22,942 - Epoch: [386][ 518/ 518] Overall Loss 2.836474 Objective Loss 2.836474 LR 0.000008 Time 0.217618 -2023-04-27 08:43:23,019 - --- validate (epoch=386)----------- -2023-04-27 08:43:23,019 - 4952 samples (32 per mini-batch) -2023-04-27 08:43:31,317 - 
Epoch: [386][ 50/ 155] Loss 3.023105 mAP 0.467699 -2023-04-27 08:43:39,211 - Epoch: [386][ 100/ 155] Loss 3.019413 mAP 0.455058 -2023-04-27 08:43:47,073 - Epoch: [386][ 150/ 155] Loss 3.061056 mAP 0.445481 -2023-04-27 08:43:47,789 - Epoch: [386][ 155/ 155] Loss 3.062415 mAP 0.444434 -2023-04-27 08:43:47,866 - ==> mAP: 0.44443 Loss: 3.062 - -2023-04-27 08:43:47,870 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 08:43:47,870 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:43:47,904 - - -2023-04-27 08:43:47,905 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:43:59,456 - Epoch: [387][ 50/ 518] Overall Loss 2.768243 Objective Loss 2.768243 LR 0.000008 Time 0.230969 -2023-04-27 08:44:10,173 - Epoch: [387][ 100/ 518] Overall Loss 2.816206 Objective Loss 2.816206 LR 0.000008 Time 0.222646 -2023-04-27 08:44:20,939 - Epoch: [387][ 150/ 518] Overall Loss 2.821538 Objective Loss 2.821538 LR 0.000008 Time 0.220190 -2023-04-27 08:44:31,702 - Epoch: [387][ 200/ 518] Overall Loss 2.841096 Objective Loss 2.841096 LR 0.000008 Time 0.218949 -2023-04-27 08:44:42,549 - Epoch: [387][ 250/ 518] Overall Loss 2.843651 Objective Loss 2.843651 LR 0.000008 Time 0.218541 -2023-04-27 08:44:53,363 - Epoch: [387][ 300/ 518] Overall Loss 2.848453 Objective Loss 2.848453 LR 0.000008 Time 0.218159 -2023-04-27 08:45:04,125 - Epoch: [387][ 350/ 518] Overall Loss 2.848273 Objective Loss 2.848273 LR 0.000008 Time 0.217738 -2023-04-27 08:45:14,999 - Epoch: [387][ 400/ 518] Overall Loss 2.850458 Objective Loss 2.850458 LR 0.000008 Time 0.217703 -2023-04-27 08:45:25,781 - Epoch: [387][ 450/ 518] Overall Loss 2.845865 Objective Loss 2.845865 LR 0.000008 Time 0.217470 -2023-04-27 08:45:36,657 - Epoch: [387][ 500/ 518] Overall Loss 2.849574 Objective Loss 2.849574 LR 0.000008 Time 0.217472 -2023-04-27 08:45:40,453 - Epoch: [387][ 518/ 518] Overall Loss 2.850538 Objective Loss 2.850538 LR 0.000008 Time 
0.217242 -2023-04-27 08:45:40,530 - --- validate (epoch=387)----------- -2023-04-27 08:45:40,531 - 4952 samples (32 per mini-batch) -2023-04-27 08:45:48,860 - Epoch: [387][ 50/ 155] Loss 3.031534 mAP 0.452711 -2023-04-27 08:45:56,768 - Epoch: [387][ 100/ 155] Loss 3.016503 mAP 0.450557 -2023-04-27 08:46:04,680 - Epoch: [387][ 150/ 155] Loss 3.009257 mAP 0.454487 -2023-04-27 08:46:05,399 - Epoch: [387][ 155/ 155] Loss 3.016417 mAP 0.452711 -2023-04-27 08:46:05,474 - ==> mAP: 0.45271 Loss: 3.016 - -2023-04-27 08:46:05,478 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 08:46:05,478 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:46:05,512 - - -2023-04-27 08:46:05,512 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:46:17,032 - Epoch: [388][ 50/ 518] Overall Loss 2.907274 Objective Loss 2.907274 LR 0.000008 Time 0.230357 -2023-04-27 08:46:27,873 - Epoch: [388][ 100/ 518] Overall Loss 2.896158 Objective Loss 2.896158 LR 0.000008 Time 0.223571 -2023-04-27 08:46:38,754 - Epoch: [388][ 150/ 518] Overall Loss 2.878404 Objective Loss 2.878404 LR 0.000008 Time 0.221574 -2023-04-27 08:46:49,524 - Epoch: [388][ 200/ 518] Overall Loss 2.868649 Objective Loss 2.868649 LR 0.000008 Time 0.220023 -2023-04-27 08:47:00,247 - Epoch: [388][ 250/ 518] Overall Loss 2.863539 Objective Loss 2.863539 LR 0.000008 Time 0.218906 -2023-04-27 08:47:11,111 - Epoch: [388][ 300/ 518] Overall Loss 2.852898 Objective Loss 2.852898 LR 0.000008 Time 0.218630 -2023-04-27 08:47:21,931 - Epoch: [388][ 350/ 518] Overall Loss 2.847851 Objective Loss 2.847851 LR 0.000008 Time 0.218305 -2023-04-27 08:47:32,740 - Epoch: [388][ 400/ 518] Overall Loss 2.851586 Objective Loss 2.851586 LR 0.000008 Time 0.218037 -2023-04-27 08:47:43,586 - Epoch: [388][ 450/ 518] Overall Loss 2.850619 Objective Loss 2.850619 LR 0.000008 Time 0.217909 -2023-04-27 08:47:54,412 - Epoch: [388][ 500/ 518] Overall Loss 2.852990 
Objective Loss 2.852990 LR 0.000008 Time 0.217767 -2023-04-27 08:47:58,113 - Epoch: [388][ 518/ 518] Overall Loss 2.853207 Objective Loss 2.853207 LR 0.000008 Time 0.217342 -2023-04-27 08:47:58,189 - --- validate (epoch=388)----------- -2023-04-27 08:47:58,190 - 4952 samples (32 per mini-batch) -2023-04-27 08:48:06,488 - Epoch: [388][ 50/ 155] Loss 3.009388 mAP 0.454285 -2023-04-27 08:48:14,474 - Epoch: [388][ 100/ 155] Loss 3.020181 mAP 0.460567 -2023-04-27 08:48:22,374 - Epoch: [388][ 150/ 155] Loss 3.040772 mAP 0.454443 -2023-04-27 08:48:23,086 - Epoch: [388][ 155/ 155] Loss 3.039271 mAP 0.452007 -2023-04-27 08:48:23,182 - ==> mAP: 0.45201 Loss: 3.039 - -2023-04-27 08:48:23,187 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 08:48:23,187 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:48:23,222 - - -2023-04-27 08:48:23,222 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:48:34,884 - Epoch: [389][ 50/ 518] Overall Loss 2.789259 Objective Loss 2.789259 LR 0.000008 Time 0.233183 -2023-04-27 08:48:45,752 - Epoch: [389][ 100/ 518] Overall Loss 2.795462 Objective Loss 2.795462 LR 0.000008 Time 0.225256 -2023-04-27 08:48:56,576 - Epoch: [389][ 150/ 518] Overall Loss 2.811716 Objective Loss 2.811716 LR 0.000008 Time 0.222323 -2023-04-27 08:49:07,405 - Epoch: [389][ 200/ 518] Overall Loss 2.825911 Objective Loss 2.825911 LR 0.000008 Time 0.220880 -2023-04-27 08:49:18,205 - Epoch: [389][ 250/ 518] Overall Loss 2.837898 Objective Loss 2.837898 LR 0.000008 Time 0.219896 -2023-04-27 08:49:29,025 - Epoch: [389][ 300/ 518] Overall Loss 2.845855 Objective Loss 2.845855 LR 0.000008 Time 0.219309 -2023-04-27 08:49:39,917 - Epoch: [389][ 350/ 518] Overall Loss 2.845391 Objective Loss 2.845391 LR 0.000008 Time 0.219095 -2023-04-27 08:49:50,777 - Epoch: [389][ 400/ 518] Overall Loss 2.848477 Objective Loss 2.848477 LR 0.000008 Time 0.218853 -2023-04-27 08:50:01,668 - Epoch: 
[389][ 450/ 518] Overall Loss 2.850352 Objective Loss 2.850352 LR 0.000008 Time 0.218735 -2023-04-27 08:50:12,534 - Epoch: [389][ 500/ 518] Overall Loss 2.848494 Objective Loss 2.848494 LR 0.000008 Time 0.218590 -2023-04-27 08:50:16,274 - Epoch: [389][ 518/ 518] Overall Loss 2.848160 Objective Loss 2.848160 LR 0.000008 Time 0.218214 -2023-04-27 08:50:16,350 - --- validate (epoch=389)----------- -2023-04-27 08:50:16,350 - 4952 samples (32 per mini-batch) -2023-04-27 08:50:24,670 - Epoch: [389][ 50/ 155] Loss 3.019130 mAP 0.442860 -2023-04-27 08:50:32,662 - Epoch: [389][ 100/ 155] Loss 3.011736 mAP 0.457784 -2023-04-27 08:50:40,593 - Epoch: [389][ 150/ 155] Loss 3.036749 mAP 0.449083 -2023-04-27 08:50:41,314 - Epoch: [389][ 155/ 155] Loss 3.038026 mAP 0.449045 -2023-04-27 08:50:41,380 - ==> mAP: 0.44905 Loss: 3.038 - -2023-04-27 08:50:41,385 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 08:50:41,385 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:50:41,419 - - -2023-04-27 08:50:41,420 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:50:53,061 - Epoch: [390][ 50/ 518] Overall Loss 2.867159 Objective Loss 2.867159 LR 0.000008 Time 0.232770 -2023-04-27 08:51:03,883 - Epoch: [390][ 100/ 518] Overall Loss 2.864087 Objective Loss 2.864087 LR 0.000008 Time 0.224587 -2023-04-27 08:51:14,719 - Epoch: [390][ 150/ 518] Overall Loss 2.851771 Objective Loss 2.851771 LR 0.000008 Time 0.221958 -2023-04-27 08:51:25,509 - Epoch: [390][ 200/ 518] Overall Loss 2.843303 Objective Loss 2.843303 LR 0.000008 Time 0.220412 -2023-04-27 08:51:36,297 - Epoch: [390][ 250/ 518] Overall Loss 2.846618 Objective Loss 2.846618 LR 0.000008 Time 0.219476 -2023-04-27 08:51:47,138 - Epoch: [390][ 300/ 518] Overall Loss 2.844917 Objective Loss 2.844917 LR 0.000008 Time 0.219028 -2023-04-27 08:51:57,921 - Epoch: [390][ 350/ 518] Overall Loss 2.840128 Objective Loss 2.840128 LR 0.000008 Time 0.218541 
-2023-04-27 08:52:08,690 - Epoch: [390][ 400/ 518] Overall Loss 2.849814 Objective Loss 2.849814 LR 0.000008 Time 0.218142 -2023-04-27 08:52:19,555 - Epoch: [390][ 450/ 518] Overall Loss 2.855117 Objective Loss 2.855117 LR 0.000008 Time 0.218045 -2023-04-27 08:52:30,297 - Epoch: [390][ 500/ 518] Overall Loss 2.854512 Objective Loss 2.854512 LR 0.000008 Time 0.217721 -2023-04-27 08:52:34,050 - Epoch: [390][ 518/ 518] Overall Loss 2.856792 Objective Loss 2.856792 LR 0.000008 Time 0.217400 -2023-04-27 08:52:34,127 - --- validate (epoch=390)----------- -2023-04-27 08:52:34,128 - 4952 samples (32 per mini-batch) -2023-04-27 08:52:42,474 - Epoch: [390][ 50/ 155] Loss 3.095078 mAP 0.443146 -2023-04-27 08:52:50,422 - Epoch: [390][ 100/ 155] Loss 3.070221 mAP 0.447778 -2023-04-27 08:52:58,351 - Epoch: [390][ 150/ 155] Loss 3.032854 mAP 0.454169 -2023-04-27 08:52:59,075 - Epoch: [390][ 155/ 155] Loss 3.029321 mAP 0.455607 -2023-04-27 08:52:59,150 - ==> mAP: 0.45561 Loss: 3.029 - -2023-04-27 08:52:59,154 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 08:52:59,154 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:52:59,188 - - -2023-04-27 08:52:59,188 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:53:10,671 - Epoch: [391][ 50/ 518] Overall Loss 2.805705 Objective Loss 2.805705 LR 0.000008 Time 0.229594 -2023-04-27 08:53:21,448 - Epoch: [391][ 100/ 518] Overall Loss 2.838521 Objective Loss 2.838521 LR 0.000008 Time 0.222552 -2023-04-27 08:53:32,294 - Epoch: [391][ 150/ 518] Overall Loss 2.840934 Objective Loss 2.840934 LR 0.000008 Time 0.220667 -2023-04-27 08:53:43,163 - Epoch: [391][ 200/ 518] Overall Loss 2.842952 Objective Loss 2.842952 LR 0.000008 Time 0.219835 -2023-04-27 08:53:53,987 - Epoch: [391][ 250/ 518] Overall Loss 2.844120 Objective Loss 2.844120 LR 0.000008 Time 0.219161 -2023-04-27 08:54:04,861 - Epoch: [391][ 300/ 518] Overall Loss 2.856096 Objective Loss 
2.856096 LR 0.000008 Time 0.218873 -2023-04-27 08:54:15,723 - Epoch: [391][ 350/ 518] Overall Loss 2.854114 Objective Loss 2.854114 LR 0.000008 Time 0.218635 -2023-04-27 08:54:26,563 - Epoch: [391][ 400/ 518] Overall Loss 2.852878 Objective Loss 2.852878 LR 0.000008 Time 0.218403 -2023-04-27 08:54:37,374 - Epoch: [391][ 450/ 518] Overall Loss 2.857822 Objective Loss 2.857822 LR 0.000008 Time 0.218156 -2023-04-27 08:54:48,126 - Epoch: [391][ 500/ 518] Overall Loss 2.855251 Objective Loss 2.855251 LR 0.000008 Time 0.217841 -2023-04-27 08:54:51,875 - Epoch: [391][ 518/ 518] Overall Loss 2.855880 Objective Loss 2.855880 LR 0.000008 Time 0.217508 -2023-04-27 08:54:51,952 - --- validate (epoch=391)----------- -2023-04-27 08:54:51,952 - 4952 samples (32 per mini-batch) -2023-04-27 08:55:00,233 - Epoch: [391][ 50/ 155] Loss 3.079060 mAP 0.436689 -2023-04-27 08:55:08,115 - Epoch: [391][ 100/ 155] Loss 3.032654 mAP 0.455208 -2023-04-27 08:55:15,981 - Epoch: [391][ 150/ 155] Loss 3.044892 mAP 0.446128 -2023-04-27 08:55:16,701 - Epoch: [391][ 155/ 155] Loss 3.039157 mAP 0.445856 -2023-04-27 08:55:16,778 - ==> mAP: 0.44586 Loss: 3.039 - -2023-04-27 08:55:16,782 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 08:55:16,782 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:55:16,817 - - -2023-04-27 08:55:16,817 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:55:28,402 - Epoch: [392][ 50/ 518] Overall Loss 2.898677 Objective Loss 2.898677 LR 0.000008 Time 0.231642 -2023-04-27 08:55:39,252 - Epoch: [392][ 100/ 518] Overall Loss 2.889155 Objective Loss 2.889155 LR 0.000008 Time 0.224307 -2023-04-27 08:55:50,103 - Epoch: [392][ 150/ 518] Overall Loss 2.877968 Objective Loss 2.877968 LR 0.000008 Time 0.221865 -2023-04-27 08:56:00,929 - Epoch: [392][ 200/ 518] Overall Loss 2.871596 Objective Loss 2.871596 LR 0.000008 Time 0.220525 -2023-04-27 08:56:11,680 - Epoch: [392][ 250/ 518] 
Overall Loss 2.861046 Objective Loss 2.861046 LR 0.000008 Time 0.219416 -2023-04-27 08:56:22,505 - Epoch: [392][ 300/ 518] Overall Loss 2.857039 Objective Loss 2.857039 LR 0.000008 Time 0.218926 -2023-04-27 08:56:33,328 - Epoch: [392][ 350/ 518] Overall Loss 2.865254 Objective Loss 2.865254 LR 0.000008 Time 0.218568 -2023-04-27 08:56:44,184 - Epoch: [392][ 400/ 518] Overall Loss 2.854395 Objective Loss 2.854395 LR 0.000008 Time 0.218383 -2023-04-27 08:56:55,036 - Epoch: [392][ 450/ 518] Overall Loss 2.851711 Objective Loss 2.851711 LR 0.000008 Time 0.218230 -2023-04-27 08:57:05,851 - Epoch: [392][ 500/ 518] Overall Loss 2.850259 Objective Loss 2.850259 LR 0.000008 Time 0.218034 -2023-04-27 08:57:09,570 - Epoch: [392][ 518/ 518] Overall Loss 2.848551 Objective Loss 2.848551 LR 0.000008 Time 0.217637 -2023-04-27 08:57:09,647 - --- validate (epoch=392)----------- -2023-04-27 08:57:09,647 - 4952 samples (32 per mini-batch) -2023-04-27 08:57:17,915 - Epoch: [392][ 50/ 155] Loss 3.083833 mAP 0.447890 -2023-04-27 08:57:25,841 - Epoch: [392][ 100/ 155] Loss 3.050502 mAP 0.452508 -2023-04-27 08:57:33,754 - Epoch: [392][ 150/ 155] Loss 3.030914 mAP 0.453812 -2023-04-27 08:57:34,465 - Epoch: [392][ 155/ 155] Loss 3.030393 mAP 0.449737 -2023-04-27 08:57:34,541 - ==> mAP: 0.44974 Loss: 3.030 - -2023-04-27 08:57:34,545 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 08:57:34,545 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:57:34,579 - - -2023-04-27 08:57:34,580 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 08:57:46,347 - Epoch: [393][ 50/ 518] Overall Loss 2.861631 Objective Loss 2.861631 LR 0.000008 Time 0.235291 -2023-04-27 08:57:57,111 - Epoch: [393][ 100/ 518] Overall Loss 2.883427 Objective Loss 2.883427 LR 0.000008 Time 0.225269 -2023-04-27 08:58:07,931 - Epoch: [393][ 150/ 518] Overall Loss 2.864893 Objective Loss 2.864893 LR 0.000008 Time 0.222300 -2023-04-27 
08:58:18,747 - Epoch: [393][ 200/ 518] Overall Loss 2.854621 Objective Loss 2.854621 LR 0.000008 Time 0.220799 -2023-04-27 08:58:29,565 - Epoch: [393][ 250/ 518] Overall Loss 2.838378 Objective Loss 2.838378 LR 0.000008 Time 0.219906 -2023-04-27 08:58:40,395 - Epoch: [393][ 300/ 518] Overall Loss 2.842262 Objective Loss 2.842262 LR 0.000008 Time 0.219349 -2023-04-27 08:58:51,259 - Epoch: [393][ 350/ 518] Overall Loss 2.848010 Objective Loss 2.848010 LR 0.000008 Time 0.219050 -2023-04-27 08:59:02,021 - Epoch: [393][ 400/ 518] Overall Loss 2.845162 Objective Loss 2.845162 LR 0.000008 Time 0.218569 -2023-04-27 08:59:12,806 - Epoch: [393][ 450/ 518] Overall Loss 2.851758 Objective Loss 2.851758 LR 0.000008 Time 0.218247 -2023-04-27 08:59:23,652 - Epoch: [393][ 500/ 518] Overall Loss 2.847052 Objective Loss 2.847052 LR 0.000008 Time 0.218111 -2023-04-27 08:59:27,388 - Epoch: [393][ 518/ 518] Overall Loss 2.846443 Objective Loss 2.846443 LR 0.000008 Time 0.217743 -2023-04-27 08:59:27,464 - --- validate (epoch=393)----------- -2023-04-27 08:59:27,465 - 4952 samples (32 per mini-batch) -2023-04-27 08:59:35,723 - Epoch: [393][ 50/ 155] Loss 3.043279 mAP 0.427450 -2023-04-27 08:59:43,630 - Epoch: [393][ 100/ 155] Loss 3.045599 mAP 0.435736 -2023-04-27 08:59:51,552 - Epoch: [393][ 150/ 155] Loss 3.035047 mAP 0.437729 -2023-04-27 08:59:52,266 - Epoch: [393][ 155/ 155] Loss 3.032598 mAP 0.437713 -2023-04-27 08:59:52,342 - ==> mAP: 0.43771 Loss: 3.033 - -2023-04-27 08:59:52,346 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 08:59:52,346 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 08:59:52,380 - - -2023-04-27 08:59:52,380 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 09:00:04,319 - Epoch: [394][ 50/ 518] Overall Loss 2.837725 Objective Loss 2.837725 LR 0.000008 Time 0.238728 -2023-04-27 09:00:15,233 - Epoch: [394][ 100/ 518] Overall Loss 2.850817 Objective Loss 2.850817 LR 
0.000008 Time 0.228489 -2023-04-27 09:00:26,011 - Epoch: [394][ 150/ 518] Overall Loss 2.843131 Objective Loss 2.843131 LR 0.000008 Time 0.224169 -2023-04-27 09:00:36,828 - Epoch: [394][ 200/ 518] Overall Loss 2.844387 Objective Loss 2.844387 LR 0.000008 Time 0.222204 -2023-04-27 09:00:47,678 - Epoch: [394][ 250/ 518] Overall Loss 2.852007 Objective Loss 2.852007 LR 0.000008 Time 0.221155 -2023-04-27 09:00:58,450 - Epoch: [394][ 300/ 518] Overall Loss 2.854395 Objective Loss 2.854395 LR 0.000008 Time 0.220200 -2023-04-27 09:01:09,255 - Epoch: [394][ 350/ 518] Overall Loss 2.850606 Objective Loss 2.850606 LR 0.000008 Time 0.219609 -2023-04-27 09:01:20,121 - Epoch: [394][ 400/ 518] Overall Loss 2.845524 Objective Loss 2.845524 LR 0.000008 Time 0.219317 -2023-04-27 09:01:30,956 - Epoch: [394][ 450/ 518] Overall Loss 2.846373 Objective Loss 2.846373 LR 0.000008 Time 0.219024 -2023-04-27 09:01:41,801 - Epoch: [394][ 500/ 518] Overall Loss 2.846538 Objective Loss 2.846538 LR 0.000008 Time 0.218808 -2023-04-27 09:01:45,500 - Epoch: [394][ 518/ 518] Overall Loss 2.844859 Objective Loss 2.844859 LR 0.000008 Time 0.218346 -2023-04-27 09:01:45,577 - --- validate (epoch=394)----------- -2023-04-27 09:01:45,577 - 4952 samples (32 per mini-batch) -2023-04-27 09:01:53,909 - Epoch: [394][ 50/ 155] Loss 3.044907 mAP 0.454007 -2023-04-27 09:02:01,917 - Epoch: [394][ 100/ 155] Loss 3.036424 mAP 0.456919 -2023-04-27 09:02:09,849 - Epoch: [394][ 150/ 155] Loss 3.029036 mAP 0.456215 -2023-04-27 09:02:10,576 - Epoch: [394][ 155/ 155] Loss 3.031016 mAP 0.456989 -2023-04-27 09:02:10,647 - ==> mAP: 0.45699 Loss: 3.031 - -2023-04-27 09:02:10,651 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 09:02:10,651 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 09:02:10,685 - - -2023-04-27 09:02:10,685 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 09:02:22,253 - Epoch: [395][ 50/ 518] Overall Loss 
2.864311 Objective Loss 2.864311 LR 0.000008 Time 0.231291 -2023-04-27 09:02:33,015 - Epoch: [395][ 100/ 518] Overall Loss 2.856179 Objective Loss 2.856179 LR 0.000008 Time 0.223253 -2023-04-27 09:02:43,799 - Epoch: [395][ 150/ 518] Overall Loss 2.848957 Objective Loss 2.848957 LR 0.000008 Time 0.220716 -2023-04-27 09:02:54,587 - Epoch: [395][ 200/ 518] Overall Loss 2.852150 Objective Loss 2.852150 LR 0.000008 Time 0.219471 -2023-04-27 09:03:05,413 - Epoch: [395][ 250/ 518] Overall Loss 2.842139 Objective Loss 2.842139 LR 0.000008 Time 0.218874 -2023-04-27 09:03:16,321 - Epoch: [395][ 300/ 518] Overall Loss 2.844124 Objective Loss 2.844124 LR 0.000008 Time 0.218750 -2023-04-27 09:03:27,185 - Epoch: [395][ 350/ 518] Overall Loss 2.845452 Objective Loss 2.845452 LR 0.000008 Time 0.218534 -2023-04-27 09:03:37,990 - Epoch: [395][ 400/ 518] Overall Loss 2.849428 Objective Loss 2.849428 LR 0.000008 Time 0.218228 -2023-04-27 09:03:48,770 - Epoch: [395][ 450/ 518] Overall Loss 2.850638 Objective Loss 2.850638 LR 0.000008 Time 0.217933 -2023-04-27 09:03:59,616 - Epoch: [395][ 500/ 518] Overall Loss 2.845845 Objective Loss 2.845845 LR 0.000008 Time 0.217828 -2023-04-27 09:04:03,365 - Epoch: [395][ 518/ 518] Overall Loss 2.845517 Objective Loss 2.845517 LR 0.000008 Time 0.217494 -2023-04-27 09:04:03,442 - --- validate (epoch=395)----------- -2023-04-27 09:04:03,442 - 4952 samples (32 per mini-batch) -2023-04-27 09:04:11,795 - Epoch: [395][ 50/ 155] Loss 3.046234 mAP 0.451913 -2023-04-27 09:04:19,803 - Epoch: [395][ 100/ 155] Loss 3.039536 mAP 0.457541 -2023-04-27 09:04:27,751 - Epoch: [395][ 150/ 155] Loss 3.025629 mAP 0.458328 -2023-04-27 09:04:28,478 - Epoch: [395][ 155/ 155] Loss 3.025943 mAP 0.458235 -2023-04-27 09:04:28,545 - ==> mAP: 0.45823 Loss: 3.026 - -2023-04-27 09:04:28,548 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 09:04:28,548 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 
09:04:28,583 - - -2023-04-27 09:04:28,583 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 09:04:40,330 - Epoch: [396][ 50/ 518] Overall Loss 2.889794 Objective Loss 2.889794 LR 0.000008 Time 0.234897 -2023-04-27 09:04:51,182 - Epoch: [396][ 100/ 518] Overall Loss 2.873933 Objective Loss 2.873933 LR 0.000008 Time 0.225948 -2023-04-27 09:05:01,943 - Epoch: [396][ 150/ 518] Overall Loss 2.870502 Objective Loss 2.870502 LR 0.000008 Time 0.222359 -2023-04-27 09:05:12,718 - Epoch: [396][ 200/ 518] Overall Loss 2.860234 Objective Loss 2.860234 LR 0.000008 Time 0.220635 -2023-04-27 09:05:23,571 - Epoch: [396][ 250/ 518] Overall Loss 2.861818 Objective Loss 2.861818 LR 0.000008 Time 0.219915 -2023-04-27 09:05:34,402 - Epoch: [396][ 300/ 518] Overall Loss 2.862359 Objective Loss 2.862359 LR 0.000008 Time 0.219362 -2023-04-27 09:05:45,271 - Epoch: [396][ 350/ 518] Overall Loss 2.864253 Objective Loss 2.864253 LR 0.000008 Time 0.219073 -2023-04-27 09:05:56,110 - Epoch: [396][ 400/ 518] Overall Loss 2.860673 Objective Loss 2.860673 LR 0.000008 Time 0.218784 -2023-04-27 09:06:06,900 - Epoch: [396][ 450/ 518] Overall Loss 2.853595 Objective Loss 2.853595 LR 0.000008 Time 0.218448 -2023-04-27 09:06:17,731 - Epoch: [396][ 500/ 518] Overall Loss 2.849766 Objective Loss 2.849766 LR 0.000008 Time 0.218261 -2023-04-27 09:06:21,524 - Epoch: [396][ 518/ 518] Overall Loss 2.854478 Objective Loss 2.854478 LR 0.000008 Time 0.217999 -2023-04-27 09:06:21,601 - --- validate (epoch=396)----------- -2023-04-27 09:06:21,602 - 4952 samples (32 per mini-batch) -2023-04-27 09:06:29,888 - Epoch: [396][ 50/ 155] Loss 3.086509 mAP 0.438031 -2023-04-27 09:06:37,834 - Epoch: [396][ 100/ 155] Loss 3.046841 mAP 0.441977 -2023-04-27 09:06:45,767 - Epoch: [396][ 150/ 155] Loss 3.028003 mAP 0.449807 -2023-04-27 09:06:46,497 - Epoch: [396][ 155/ 155] Loss 3.027378 mAP 0.448580 -2023-04-27 09:06:46,581 - ==> mAP: 0.44858 Loss: 3.027 - -2023-04-27 09:06:46,584 - ==> Best [mAP: 0.461116 vloss: 
3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 09:06:46,584 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 09:06:46,619 - - -2023-04-27 09:06:46,619 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 09:06:58,191 - Epoch: [397][ 50/ 518] Overall Loss 2.854081 Objective Loss 2.854081 LR 0.000008 Time 0.231391 -2023-04-27 09:07:09,038 - Epoch: [397][ 100/ 518] Overall Loss 2.853952 Objective Loss 2.853952 LR 0.000008 Time 0.224152 -2023-04-27 09:07:19,795 - Epoch: [397][ 150/ 518] Overall Loss 2.863148 Objective Loss 2.863148 LR 0.000008 Time 0.221136 -2023-04-27 09:07:30,692 - Epoch: [397][ 200/ 518] Overall Loss 2.862215 Objective Loss 2.862215 LR 0.000008 Time 0.220328 -2023-04-27 09:07:41,582 - Epoch: [397][ 250/ 518] Overall Loss 2.853305 Objective Loss 2.853305 LR 0.000008 Time 0.219815 -2023-04-27 09:07:52,407 - Epoch: [397][ 300/ 518] Overall Loss 2.847071 Objective Loss 2.847071 LR 0.000008 Time 0.219257 -2023-04-27 09:08:03,148 - Epoch: [397][ 350/ 518] Overall Loss 2.844636 Objective Loss 2.844636 LR 0.000008 Time 0.218621 -2023-04-27 09:08:13,938 - Epoch: [397][ 400/ 518] Overall Loss 2.838235 Objective Loss 2.838235 LR 0.000008 Time 0.218265 -2023-04-27 09:08:24,688 - Epoch: [397][ 450/ 518] Overall Loss 2.838606 Objective Loss 2.838606 LR 0.000008 Time 0.217897 -2023-04-27 09:08:35,477 - Epoch: [397][ 500/ 518] Overall Loss 2.840534 Objective Loss 2.840534 LR 0.000008 Time 0.217684 -2023-04-27 09:08:39,206 - Epoch: [397][ 518/ 518] Overall Loss 2.839304 Objective Loss 2.839304 LR 0.000008 Time 0.217317 -2023-04-27 09:08:39,282 - --- validate (epoch=397)----------- -2023-04-27 09:08:39,283 - 4952 samples (32 per mini-batch) -2023-04-27 09:08:47,583 - Epoch: [397][ 50/ 155] Loss 3.034619 mAP 0.443384 -2023-04-27 09:08:55,564 - Epoch: [397][ 100/ 155] Loss 3.036795 mAP 0.439716 -2023-04-27 09:09:03,493 - Epoch: [397][ 150/ 155] Loss 3.033041 mAP 0.443013 -2023-04-27 09:09:04,203 - Epoch: 
[397][ 155/ 155] Loss 3.034255 mAP 0.441952 -2023-04-27 09:09:04,277 - ==> mAP: 0.44195 Loss: 3.034 - -2023-04-27 09:09:04,281 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 09:09:04,281 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 09:09:04,315 - - -2023-04-27 09:09:04,316 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 09:09:15,982 - Epoch: [398][ 50/ 518] Overall Loss 2.828162 Objective Loss 2.828162 LR 0.000008 Time 0.233270 -2023-04-27 09:09:26,794 - Epoch: [398][ 100/ 518] Overall Loss 2.845164 Objective Loss 2.845164 LR 0.000008 Time 0.224744 -2023-04-27 09:09:37,643 - Epoch: [398][ 150/ 518] Overall Loss 2.836454 Objective Loss 2.836454 LR 0.000008 Time 0.222146 -2023-04-27 09:09:48,520 - Epoch: [398][ 200/ 518] Overall Loss 2.839886 Objective Loss 2.839886 LR 0.000008 Time 0.220988 -2023-04-27 09:09:59,379 - Epoch: [398][ 250/ 518] Overall Loss 2.835748 Objective Loss 2.835748 LR 0.000008 Time 0.220216 -2023-04-27 09:10:10,169 - Epoch: [398][ 300/ 518] Overall Loss 2.843931 Objective Loss 2.843931 LR 0.000008 Time 0.219477 -2023-04-27 09:10:20,964 - Epoch: [398][ 350/ 518] Overall Loss 2.840271 Objective Loss 2.840271 LR 0.000008 Time 0.218960 -2023-04-27 09:10:31,757 - Epoch: [398][ 400/ 518] Overall Loss 2.843867 Objective Loss 2.843867 LR 0.000008 Time 0.218569 -2023-04-27 09:10:42,495 - Epoch: [398][ 450/ 518] Overall Loss 2.843937 Objective Loss 2.843937 LR 0.000008 Time 0.218142 -2023-04-27 09:10:53,350 - Epoch: [398][ 500/ 518] Overall Loss 2.844779 Objective Loss 2.844779 LR 0.000008 Time 0.218035 -2023-04-27 09:10:57,097 - Epoch: [398][ 518/ 518] Overall Loss 2.845338 Objective Loss 2.845338 LR 0.000008 Time 0.217692 -2023-04-27 09:10:57,175 - --- validate (epoch=398)----------- -2023-04-27 09:10:57,175 - 4952 samples (32 per mini-batch) -2023-04-27 09:11:05,534 - Epoch: [398][ 50/ 155] Loss 3.059549 mAP 0.439681 -2023-04-27 09:11:13,478 - 
Epoch: [398][ 100/ 155] Loss 3.042070 mAP 0.448026 -2023-04-27 09:11:21,434 - Epoch: [398][ 150/ 155] Loss 3.033927 mAP 0.455180 -2023-04-27 09:11:22,160 - Epoch: [398][ 155/ 155] Loss 3.028840 mAP 0.457442 -2023-04-27 09:11:22,234 - ==> mAP: 0.45744 Loss: 3.029 - -2023-04-27 09:11:22,238 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 09:11:22,238 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 09:11:22,273 - - -2023-04-27 09:11:22,274 - Training epoch: 16551 samples (32 per mini-batch) -2023-04-27 09:11:33,784 - Epoch: [399][ 50/ 518] Overall Loss 2.900167 Objective Loss 2.900167 LR 0.000008 Time 0.230155 -2023-04-27 09:11:44,576 - Epoch: [399][ 100/ 518] Overall Loss 2.892673 Objective Loss 2.892673 LR 0.000008 Time 0.222984 -2023-04-27 09:11:55,322 - Epoch: [399][ 150/ 518] Overall Loss 2.862588 Objective Loss 2.862588 LR 0.000008 Time 0.220285 -2023-04-27 09:12:06,100 - Epoch: [399][ 200/ 518] Overall Loss 2.858342 Objective Loss 2.858342 LR 0.000008 Time 0.219094 -2023-04-27 09:12:16,891 - Epoch: [399][ 250/ 518] Overall Loss 2.860577 Objective Loss 2.860577 LR 0.000008 Time 0.218434 -2023-04-27 09:12:27,718 - Epoch: [399][ 300/ 518] Overall Loss 2.857732 Objective Loss 2.857732 LR 0.000008 Time 0.218113 -2023-04-27 09:12:38,484 - Epoch: [399][ 350/ 518] Overall Loss 2.861838 Objective Loss 2.861838 LR 0.000008 Time 0.217709 -2023-04-27 09:12:49,273 - Epoch: [399][ 400/ 518] Overall Loss 2.859286 Objective Loss 2.859286 LR 0.000008 Time 0.217463 -2023-04-27 09:13:00,120 - Epoch: [399][ 450/ 518] Overall Loss 2.863566 Objective Loss 2.863566 LR 0.000008 Time 0.217402 -2023-04-27 09:13:10,982 - Epoch: [399][ 500/ 518] Overall Loss 2.864670 Objective Loss 2.864670 LR 0.000008 Time 0.217383 -2023-04-27 09:13:14,740 - Epoch: [399][ 518/ 518] Overall Loss 2.860107 Objective Loss 2.860107 LR 0.000008 Time 0.217084 -2023-04-27 09:13:14,817 - --- validate (epoch=399)----------- 
-2023-04-27 09:13:14,817 - 4952 samples (32 per mini-batch) -2023-04-27 09:13:23,096 - Epoch: [399][ 50/ 155] Loss 3.032493 mAP 0.453379 -2023-04-27 09:13:31,052 - Epoch: [399][ 100/ 155] Loss 3.023493 mAP 0.449258 -2023-04-27 09:13:38,973 - Epoch: [399][ 150/ 155] Loss 3.024590 mAP 0.459217 -2023-04-27 09:13:39,693 - Epoch: [399][ 155/ 155] Loss 3.023713 mAP 0.458375 -2023-04-27 09:13:39,772 - ==> mAP: 0.45838 Loss: 3.024 - -2023-04-27 09:13:39,775 - ==> Best [mAP: 0.461116 vloss: 3.029455 Sparsity:0.00 Params: 2177087 on epoch: 385] -2023-04-27 09:13:39,776 - Saving checkpoint to: logs/2023.04.26-184348/qat_checkpoint.pth.tar -2023-04-27 09:13:39,810 - --- test --------------------- -2023-04-27 09:13:39,810 - 4952 samples (32 per mini-batch) -2023-04-27 09:13:48,098 - Test: [ 50/ 155] Loss 3.027062 mAP 0.451130 -2023-04-27 09:13:56,022 - Test: [ 100/ 155] Loss 3.032714 mAP 0.440267 -2023-04-27 09:14:03,925 - Test: [ 150/ 155] Loss 3.031370 mAP 0.441334 -2023-04-27 09:14:04,636 - Test: [ 155/ 155] Loss 3.028019 mAP 0.442482 -2023-04-27 09:14:04,710 - ==> mAP: 0.44248 Loss: 3.028 - -2023-04-27 09:14:04,713 - -2023-04-27 09:14:04,713 - Log file for this run: /home/seldauyanik/Workspace/ai8x-training/logs/2023.04.26-184348/2023.04.26-184348.log +2025-05-16 17:57:09,519 - + +2025-05-16 17:57:09,519 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 17:57:45,923 - Epoch: [64][ 50/ 518] Overall Loss 2.776415 Objective Loss 2.776415 LR 0.000250 Time 0.727946 +2025-05-16 17:58:14,519 - Epoch: [64][ 100/ 518] Overall Loss 2.799637 Objective Loss 2.799637 LR 0.000250 Time 0.649912 +2025-05-16 17:58:39,806 - Epoch: [64][ 150/ 518] Overall Loss 2.800914 Objective Loss 2.800914 LR 0.000250 Time 0.601794 +2025-05-16 17:59:01,124 - Epoch: [64][ 200/ 518] Overall Loss 2.818121 Objective Loss 2.818121 LR 0.000250 Time 0.557928 +2025-05-16 17:59:23,853 - Epoch: [64][ 250/ 518] Overall Loss 2.814098 Objective Loss 2.814098 LR 0.000250 Time 0.537250 
+2025-05-16 17:59:46,654 - Epoch: [64][ 300/ 518] Overall Loss 2.816723 Objective Loss 2.816723 LR 0.000250 Time 0.523670 +2025-05-16 18:00:08,625 - Epoch: [64][ 350/ 518] Overall Loss 2.810894 Objective Loss 2.810894 LR 0.000250 Time 0.511628 +2025-05-16 18:00:30,826 - Epoch: [64][ 400/ 518] Overall Loss 2.814380 Objective Loss 2.814380 LR 0.000250 Time 0.503172 +2025-05-16 18:00:52,582 - Epoch: [64][ 450/ 518] Overall Loss 2.815057 Objective Loss 2.815057 LR 0.000250 Time 0.495606 +2025-05-16 18:01:15,061 - Epoch: [64][ 500/ 518] Overall Loss 2.816873 Objective Loss 2.816873 LR 0.000250 Time 0.490999 +2025-05-16 18:01:22,938 - Epoch: [64][ 518/ 518] Overall Loss 2.817094 Objective Loss 2.817094 LR 0.000250 Time 0.489142 +2025-05-16 18:01:23,009 - --- validate (epoch=64)----------- +2025-05-16 18:01:23,011 - 4952 samples (32 per mini-batch) +2025-05-16 18:01:23,015 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 18:01:46,898 - Epoch: [64][ 50/ 155] Loss 3.073188 mAP 0.433425 +2025-05-16 18:02:10,149 - Epoch: [64][ 100/ 155] Loss 3.100219 mAP 0.426315 +2025-05-16 18:02:35,522 - Epoch: [64][ 150/ 155] Loss 3.099323 mAP 0.429420 +2025-05-16 18:02:41,554 - Epoch: [64][ 155/ 155] Loss 3.097561 mAP 0.429216 +2025-05-16 18:02:41,660 - ==> mAP: 0.42922 Loss: 3.098 + +2025-05-16 18:02:41,681 - ==> Best [mAP: 0.429216 vloss: 3.097561 Params: 2177088 on epoch: 64] +2025-05-16 18:02:41,681 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 18:02:41,914 - + +2025-05-16 18:02:41,924 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 18:03:05,604 - Epoch: [65][ 50/ 518] Overall Loss 2.813766 Objective Loss 2.813766 LR 0.000250 Time 0.473488 +2025-05-16 18:03:29,120 - Epoch: [65][ 100/ 518] Overall Loss 2.810949 Objective Loss 2.810949 LR 0.000250 Time 0.471878 +2025-05-16 18:03:51,481 - Epoch: [65][ 150/ 518] Overall Loss 
2.807671 Objective Loss 2.807671 LR 0.000250 Time 0.463632 +2025-05-16 18:04:14,602 - Epoch: [65][ 200/ 518] Overall Loss 2.818850 Objective Loss 2.818850 LR 0.000250 Time 0.463320 +2025-05-16 18:04:36,888 - Epoch: [65][ 250/ 518] Overall Loss 2.813100 Objective Loss 2.813100 LR 0.000250 Time 0.459791 +2025-05-16 18:04:59,301 - Epoch: [65][ 300/ 518] Overall Loss 2.812238 Objective Loss 2.812238 LR 0.000250 Time 0.457864 +2025-05-16 18:05:21,917 - Epoch: [65][ 350/ 518] Overall Loss 2.814155 Objective Loss 2.814155 LR 0.000250 Time 0.457059 +2025-05-16 18:05:43,840 - Epoch: [65][ 400/ 518] Overall Loss 2.812748 Objective Loss 2.812748 LR 0.000250 Time 0.454729 +2025-05-16 18:06:06,758 - Epoch: [65][ 450/ 518] Overall Loss 2.819120 Objective Loss 2.819120 LR 0.000250 Time 0.455127 +2025-05-16 18:06:29,052 - Epoch: [65][ 500/ 518] Overall Loss 2.820230 Objective Loss 2.820230 LR 0.000250 Time 0.454200 +2025-05-16 18:06:37,036 - Epoch: [65][ 518/ 518] Overall Loss 2.820487 Objective Loss 2.820487 LR 0.000250 Time 0.453828 +2025-05-16 18:06:37,132 - --- validate (epoch=65)----------- +2025-05-16 18:06:37,133 - 4952 samples (32 per mini-batch) +2025-05-16 18:06:37,138 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 18:07:01,563 - Epoch: [65][ 50/ 155] Loss 3.130664 mAP 0.427951 +2025-05-16 18:07:26,860 - Epoch: [65][ 100/ 155] Loss 3.109971 mAP 0.419223 +2025-05-16 18:07:52,299 - Epoch: [65][ 150/ 155] Loss 3.101793 mAP 0.423670 +2025-05-16 18:07:57,205 - Epoch: [65][ 155/ 155] Loss 3.107182 mAP 0.421174 +2025-05-16 18:07:57,349 - ==> mAP: 0.42117 Loss: 3.107 + +2025-05-16 18:07:57,360 - ==> Best [mAP: 0.429216 vloss: 3.097561 Params: 2177088 on epoch: 64] +2025-05-16 18:07:57,361 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 18:07:57,600 - + +2025-05-16 18:07:57,600 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) 
+2025-05-16 18:08:22,816 - Epoch: [66][ 50/ 518] Overall Loss 2.832123 Objective Loss 2.832123 LR 0.000250 Time 0.504202
+2025-05-16 18:08:46,292 - Epoch: [66][ 100/ 518] Overall Loss 2.847876 Objective Loss 2.847876 LR 0.000250 Time 0.486844
+2025-05-16 18:09:08,900 - Epoch: [66][ 150/ 518] Overall Loss 2.837179 Objective Loss 2.837179 LR 0.000250 Time 0.475271
+2025-05-16 18:09:31,095 - Epoch: [66][ 200/ 518] Overall Loss 2.849293 Objective Loss 2.849293 LR 0.000250 Time 0.467392
+2025-05-16 18:09:54,168 - Epoch: [66][ 250/ 518] Overall Loss 2.839395 Objective Loss 2.839395 LR 0.000250 Time 0.466196
+2025-05-16 18:10:16,079 - Epoch: [66][ 300/ 518] Overall Loss 2.826583 Objective Loss 2.826583 LR 0.000250 Time 0.461525
+2025-05-16 18:10:38,630 - Epoch: [66][ 350/ 518] Overall Loss 2.830270 Objective Loss 2.830270 LR 0.000250 Time 0.460018
+2025-05-16 18:11:01,176 - Epoch: [66][ 400/ 518] Overall Loss 2.828818 Objective Loss 2.828818 LR 0.000250 Time 0.458873
+2025-05-16 18:11:23,513 - Epoch: [66][ 450/ 518] Overall Loss 2.824024 Objective Loss 2.824024 LR 0.000250 Time 0.457522
+2025-05-16 18:11:45,573 - Epoch: [66][ 500/ 518] Overall Loss 2.823408 Objective Loss 2.823408 LR 0.000250 Time 0.455885
+2025-05-16 18:11:53,230 - Epoch: [66][ 518/ 518] Overall Loss 2.823097 Objective Loss 2.823097 LR 0.000250 Time 0.454824
+2025-05-16 18:11:53,296 - --- validate (epoch=66)-----------
+2025-05-16 18:11:53,297 - 4952 samples (32 per mini-batch)
+2025-05-16 18:11:53,301 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:12:16,729 - Epoch: [66][ 50/ 155] Loss 3.146520 mAP 0.413048
+2025-05-16 18:12:41,222 - Epoch: [66][ 100/ 155] Loss 3.113265 mAP 0.420494
+2025-05-16 18:13:05,535 - Epoch: [66][ 150/ 155] Loss 3.113479 mAP 0.420937
+2025-05-16 18:13:11,137 - Epoch: [66][ 155/ 155] Loss 3.116721 mAP 0.417733
+2025-05-16 18:13:11,216 - ==> mAP: 0.41773 Loss: 3.117
+
+2025-05-16 18:13:11,227 - ==> Best [mAP: 0.429216 vloss: 3.097561 Params: 2177088 on epoch: 64]
+2025-05-16 18:13:11,227 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 18:13:11,416 -
+
+2025-05-16 18:13:11,416 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 18:13:35,199 - Epoch: [67][ 50/ 518] Overall Loss 2.793617 Objective Loss 2.793617 LR 0.000250 Time 0.475539
+2025-05-16 18:13:58,439 - Epoch: [67][ 100/ 518] Overall Loss 2.817851 Objective Loss 2.817851 LR 0.000250 Time 0.470153
+2025-05-16 18:14:21,517 - Epoch: [67][ 150/ 518] Overall Loss 2.816843 Objective Loss 2.816843 LR 0.000250 Time 0.467226
+2025-05-16 18:14:42,649 - Epoch: [67][ 200/ 518] Overall Loss 2.809741 Objective Loss 2.809741 LR 0.000250 Time 0.456071
+2025-05-16 18:15:05,479 - Epoch: [67][ 250/ 518] Overall Loss 2.799505 Objective Loss 2.799505 LR 0.000250 Time 0.456168
+2025-05-16 18:15:28,651 - Epoch: [67][ 300/ 518] Overall Loss 2.804069 Objective Loss 2.804069 LR 0.000250 Time 0.457366
+2025-05-16 18:15:51,192 - Epoch: [67][ 350/ 518] Overall Loss 2.803692 Objective Loss 2.803692 LR 0.000250 Time 0.456424
+2025-05-16 18:16:13,543 - Epoch: [67][ 400/ 518] Overall Loss 2.802847 Objective Loss 2.802847 LR 0.000250 Time 0.455244
+2025-05-16 18:16:36,960 - Epoch: [67][ 450/ 518] Overall Loss 2.800724 Objective Loss 2.800724 LR 0.000250 Time 0.456695
+2025-05-16 18:16:59,777 - Epoch: [67][ 500/ 518] Overall Loss 2.802573 Objective Loss 2.802573 LR 0.000250 Time 0.456646
+2025-05-16 18:17:07,848 - Epoch: [67][ 518/ 518] Overall Loss 2.803509 Objective Loss 2.803509 LR 0.000250 Time 0.456357
+2025-05-16 18:17:07,945 - --- validate (epoch=67)-----------
+2025-05-16 18:17:07,946 - 4952 samples (32 per mini-batch)
+2025-05-16 18:17:07,951 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:17:32,088 - Epoch: [67][ 50/ 155] Loss 3.135817 mAP 0.428612
+2025-05-16 18:17:56,451 - Epoch: [67][ 100/ 155] Loss 3.117931 mAP 0.433260
+2025-05-16 18:18:21,587 - Epoch: [67][ 150/ 155] Loss 3.108294 mAP 0.433672
+2025-05-16 18:18:27,054 - Epoch: [67][ 155/ 155] Loss 3.109681 mAP 0.430747
+2025-05-16 18:18:27,156 - ==> mAP: 0.43075 Loss: 3.110
+
+2025-05-16 18:18:27,434 - ==> Best [mAP: 0.430747 vloss: 3.109681 Params: 2177088 on epoch: 67]
+2025-05-16 18:18:27,439 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 18:18:27,592 -
+
+2025-05-16 18:18:27,593 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 18:18:52,332 - Epoch: [68][ 50/ 518] Overall Loss 2.810733 Objective Loss 2.810733 LR 0.000250 Time 0.494671
+2025-05-16 18:19:14,416 - Epoch: [68][ 100/ 518] Overall Loss 2.816422 Objective Loss 2.816422 LR 0.000250 Time 0.468154
+2025-05-16 18:19:36,679 - Epoch: [68][ 150/ 518] Overall Loss 2.798790 Objective Loss 2.798790 LR 0.000250 Time 0.460511
+2025-05-16 18:19:59,211 - Epoch: [68][ 200/ 518] Overall Loss 2.790850 Objective Loss 2.790850 LR 0.000250 Time 0.458033
+2025-05-16 18:20:20,961 - Epoch: [68][ 250/ 518] Overall Loss 2.790071 Objective Loss 2.790071 LR 0.000250 Time 0.453418
+2025-05-16 18:20:43,460 - Epoch: [68][ 300/ 518] Overall Loss 2.792008 Objective Loss 2.792008 LR 0.000250 Time 0.452796
+2025-05-16 18:21:05,238 - Epoch: [68][ 350/ 518] Overall Loss 2.789433 Objective Loss 2.789433 LR 0.000250 Time 0.450327
+2025-05-16 18:21:27,250 - Epoch: [68][ 400/ 518] Overall Loss 2.794172 Objective Loss 2.794172 LR 0.000250 Time 0.449060
+2025-05-16 18:21:49,148 - Epoch: [68][ 450/ 518] Overall Loss 2.793786 Objective Loss 2.793786 LR 0.000250 Time 0.447824
+2025-05-16 18:22:11,388 - Epoch: [68][ 500/ 518] Overall Loss 2.798610 Objective Loss 2.798610 LR 0.000250 Time 0.447517
+2025-05-16 18:22:19,264 - Epoch: [68][ 518/ 518] Overall Loss 2.800461 Objective Loss 2.800461 LR 0.000250 Time 0.447170
+2025-05-16 18:22:19,365 - --- validate (epoch=68)-----------
+2025-05-16 18:22:19,366 - 4952 samples (32 per mini-batch)
+2025-05-16 18:22:19,371 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:22:42,566 - Epoch: [68][ 50/ 155] Loss 3.090871 mAP 0.427932
+2025-05-16 18:23:06,092 - Epoch: [68][ 100/ 155] Loss 3.095557 mAP 0.426811
+2025-05-16 18:23:30,695 - Epoch: [68][ 150/ 155] Loss 3.097250 mAP 0.426804
+2025-05-16 18:23:36,183 - Epoch: [68][ 155/ 155] Loss 3.098817 mAP 0.424760
+2025-05-16 18:23:36,251 - ==> mAP: 0.42476 Loss: 3.099
+
+2025-05-16 18:23:36,262 - ==> Best [mAP: 0.430747 vloss: 3.109681 Params: 2177088 on epoch: 67]
+2025-05-16 18:23:36,262 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 18:23:36,378 -
+
+2025-05-16 18:23:36,378 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 18:24:01,048 - Epoch: [69][ 50/ 518] Overall Loss 2.832626 Objective Loss 2.832626 LR 0.000250 Time 0.493276
+2025-05-16 18:24:23,224 - Epoch: [69][ 100/ 518] Overall Loss 2.802376 Objective Loss 2.802376 LR 0.000250 Time 0.468337
+2025-05-16 18:24:45,624 - Epoch: [69][ 150/ 518] Overall Loss 2.776951 Objective Loss 2.776951 LR 0.000250 Time 0.461544
+2025-05-16 18:25:07,798 - Epoch: [69][ 200/ 518] Overall Loss 2.776491 Objective Loss 2.776491 LR 0.000250 Time 0.457018
+2025-05-16 18:25:30,247 - Epoch: [69][ 250/ 518] Overall Loss 2.786917 Objective Loss 2.786917 LR 0.000250 Time 0.455399
+2025-05-16 18:25:53,216 - Epoch: [69][ 300/ 518] Overall Loss 2.786110 Objective Loss 2.786110 LR 0.000250 Time 0.456057
+2025-05-16 18:26:17,155 - Epoch: [69][ 350/ 518] Overall Loss 2.779916 Objective Loss 2.779916 LR 0.000250 Time 0.459297
+2025-05-16 18:26:39,282 - Epoch: [69][ 400/ 518] Overall Loss 2.782762 Objective Loss 2.782762 LR 0.000250 Time 0.457197
+2025-05-16 18:27:02,148 - Epoch: [69][ 450/ 518] Overall Loss 2.785448 Objective Loss 2.785448 LR 0.000250 Time 0.457206
+2025-05-16 18:27:24,530 - Epoch: [69][ 500/ 518] Overall Loss 2.790491 Objective Loss 2.790491 LR 0.000250 Time 0.456245
+2025-05-16 18:27:32,587 - Epoch: [69][ 518/ 518] Overall Loss 2.794188 Objective Loss 2.794188 LR 0.000250 Time 0.455944
+2025-05-16 18:27:32,719 - --- validate (epoch=69)-----------
+2025-05-16 18:27:32,720 - 4952 samples (32 per mini-batch)
+2025-05-16 18:27:32,725 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:27:54,756 - Epoch: [69][ 50/ 155] Loss 3.081090 mAP 0.426078
+2025-05-16 18:28:18,059 - Epoch: [69][ 100/ 155] Loss 3.093791 mAP 0.422229
+2025-05-16 18:28:41,872 - Epoch: [69][ 150/ 155] Loss 3.096346 mAP 0.421253
+2025-05-16 18:28:47,176 - Epoch: [69][ 155/ 155] Loss 3.098498 mAP 0.419200
+2025-05-16 18:28:47,248 - ==> mAP: 0.41920 Loss: 3.098
+
+2025-05-16 18:28:47,259 - ==> Best [mAP: 0.430747 vloss: 3.109681 Params: 2177088 on epoch: 67]
+2025-05-16 18:28:47,259 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 18:28:47,500 -
+
+2025-05-16 18:28:47,501 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 18:29:12,221 - Epoch: [70][ 50/ 518] Overall Loss 2.840196 Objective Loss 2.840196 LR 0.000250 Time 0.494306
+2025-05-16 18:29:34,966 - Epoch: [70][ 100/ 518] Overall Loss 2.803723 Objective Loss 2.803723 LR 0.000250 Time 0.474574
+2025-05-16 18:29:58,100 - Epoch: [70][ 150/ 518] Overall Loss 2.795544 Objective Loss 2.795544 LR 0.000250 Time 0.470597
+2025-05-16 18:30:21,432 - Epoch: [70][ 200/ 518] Overall Loss 2.794390 Objective Loss 2.794390 LR 0.000250 Time 0.469582
+2025-05-16 18:30:43,268 - Epoch: [70][ 250/ 518] Overall Loss 2.794145 Objective Loss 2.794145 LR 0.000250 Time 0.463000
+2025-05-16 18:31:05,470 - Epoch: [70][ 300/ 518] Overall Loss 2.796486 Objective Loss 2.796486 LR 0.000250 Time 0.459834
+2025-05-16 18:31:28,563 - Epoch: [70][ 350/ 518] Overall Loss 2.797339 Objective Loss 2.797339 LR 0.000250 Time 0.460118
+2025-05-16 18:31:51,031 - Epoch: [70][ 400/ 518] Overall Loss 2.798280 Objective Loss 2.798280 LR 0.000250 Time 0.458769
+2025-05-16 18:32:13,377 - Epoch: [70][ 450/ 518] Overall Loss 2.791061 Objective Loss 2.791061 LR 0.000250 Time 0.457447
+2025-05-16 18:32:36,377 - Epoch: [70][ 500/ 518] Overall Loss 2.787076 Objective Loss 2.787076 LR 0.000250 Time 0.457699
+2025-05-16 18:32:44,841 - Epoch: [70][ 518/ 518] Overall Loss 2.794181 Objective Loss 2.794181 LR 0.000250 Time 0.458133
+2025-05-16 18:32:44,921 - --- validate (epoch=70)-----------
+2025-05-16 18:32:44,922 - 4952 samples (32 per mini-batch)
+2025-05-16 18:32:44,927 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:33:08,424 - Epoch: [70][ 50/ 155] Loss 3.096041 mAP 0.427626
+2025-05-16 18:33:34,458 - Epoch: [70][ 100/ 155] Loss 3.115661 mAP 0.416260
+2025-05-16 18:33:59,841 - Epoch: [70][ 150/ 155] Loss 3.124224 mAP 0.418394
+2025-05-16 18:34:05,698 - Epoch: [70][ 155/ 155] Loss 3.123897 mAP 0.417474
+2025-05-16 18:34:05,810 - ==> mAP: 0.41747 Loss: 3.124
+
+2025-05-16 18:34:05,822 - ==> Best [mAP: 0.430747 vloss: 3.109681 Params: 2177088 on epoch: 67]
+2025-05-16 18:34:05,822 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 18:34:06,024 -
+
+2025-05-16 18:34:06,025 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 18:34:29,962 - Epoch: [71][ 50/ 518] Overall Loss 2.910418 Objective Loss 2.910418 LR 0.000250 Time 0.478626
+2025-05-16 18:34:53,091 - Epoch: [71][ 100/ 518] Overall Loss 2.860936 Objective Loss 2.860936 LR 0.000250 Time 0.470581
+2025-05-16 18:35:15,208 - Epoch: [71][ 150/ 518] Overall Loss 2.858063 Objective Loss 2.858063 LR 0.000250 Time 0.461156
+2025-05-16 18:35:37,071 - Epoch: [71][ 200/ 518] Overall Loss 2.823701 Objective Loss 2.823701 LR 0.000250 Time 0.455171
+2025-05-16 18:35:59,278 - Epoch: [71][ 250/ 518] Overall Loss 2.811641 Objective Loss 2.811641 LR 0.000250 Time 0.452935
+2025-05-16 18:36:22,522 - Epoch: [71][ 300/ 518] Overall Loss 2.811346 Objective Loss 2.811346 LR 0.000250 Time 0.454922
+2025-05-16 18:36:45,105 - Epoch: [71][ 350/ 518] Overall Loss 2.814151 Objective Loss 2.814151 LR 0.000250 Time 0.454450
+2025-05-16 18:37:07,337 - Epoch: [71][ 400/ 518] Overall Loss 2.809777 Objective Loss 2.809777 LR 0.000250 Time 0.453216
+2025-05-16 18:37:29,544 - Epoch: [71][ 450/ 518] Overall Loss 2.809919 Objective Loss 2.809919 LR 0.000250 Time 0.452193
+2025-05-16 18:37:51,068 - Epoch: [71][ 500/ 518] Overall Loss 2.809055 Objective Loss 2.809055 LR 0.000250 Time 0.450017
+2025-05-16 18:37:59,714 - Epoch: [71][ 518/ 518] Overall Loss 2.806274 Objective Loss 2.806274 LR 0.000250 Time 0.451070
+2025-05-16 18:37:59,805 - --- validate (epoch=71)-----------
+2025-05-16 18:37:59,807 - 4952 samples (32 per mini-batch)
+2025-05-16 18:37:59,811 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:38:23,608 - Epoch: [71][ 50/ 155] Loss 3.063061 mAP 0.429238
+2025-05-16 18:38:48,250 - Epoch: [71][ 100/ 155] Loss 3.047885 mAP 0.433362
+2025-05-16 18:39:12,257 - Epoch: [71][ 150/ 155] Loss 3.054194 mAP 0.433496
+2025-05-16 18:39:17,908 - Epoch: [71][ 155/ 155] Loss 3.056688 mAP 0.434528
+2025-05-16 18:39:17,983 - ==> mAP: 0.43453 Loss: 3.057
+
+2025-05-16 18:39:18,022 - ==> Best [mAP: 0.434528 vloss: 3.056688 Params: 2177088 on epoch: 71]
+2025-05-16 18:39:18,035 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 18:39:18,257 -
+
+2025-05-16 18:39:18,257 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 18:39:42,624 - Epoch: [72][ 50/ 518] Overall Loss 2.779945 Objective Loss 2.779945 LR 0.000250 Time 0.487228
+2025-05-16 18:40:05,305 - Epoch: [72][ 100/ 518] Overall Loss 2.780406 Objective Loss 2.780406 LR 0.000250 Time 0.470404
+2025-05-16 18:40:27,033 - Epoch: [72][ 150/ 518] Overall Loss 2.800147 Objective Loss 2.800147 LR 0.000250 Time 0.458442
+2025-05-16 18:40:49,604 - Epoch: [72][ 200/ 518] Overall Loss 2.811712 Objective Loss 2.811712 LR 0.000250 Time 0.456678
+2025-05-16 18:41:11,974 - Epoch: [72][ 250/ 518] Overall Loss 2.797038 Objective Loss 2.797038 LR 0.000250 Time 0.454815
+2025-05-16 18:41:34,521 - Epoch: [72][ 300/ 518] Overall Loss 2.797908 Objective Loss 2.797908 LR 0.000250 Time 0.454158
+2025-05-16 18:41:57,505 - Epoch: [72][ 350/ 518] Overall Loss 2.797001 Objective Loss 2.797001 LR 0.000250 Time 0.454939
+2025-05-16 18:42:19,697 - Epoch: [72][ 400/ 518] Overall Loss 2.795988 Objective Loss 2.795988 LR 0.000250 Time 0.453548
+2025-05-16 18:42:42,808 - Epoch: [72][ 450/ 518] Overall Loss 2.789305 Objective Loss 2.789305 LR 0.000250 Time 0.454507
+2025-05-16 18:43:06,212 - Epoch: [72][ 500/ 518] Overall Loss 2.794530 Objective Loss 2.794530 LR 0.000250 Time 0.455858
+2025-05-16 18:43:13,994 - Epoch: [72][ 518/ 518] Overall Loss 2.792068 Objective Loss 2.792068 LR 0.000250 Time 0.455039
+2025-05-16 18:43:14,071 - --- validate (epoch=72)-----------
+2025-05-16 18:43:14,072 - 4952 samples (32 per mini-batch)
+2025-05-16 18:43:14,077 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:43:37,999 - Epoch: [72][ 50/ 155] Loss 3.035406 mAP 0.453591
+2025-05-16 18:44:02,838 - Epoch: [72][ 100/ 155] Loss 3.067625 mAP 0.437424
+2025-05-16 18:44:28,700 - Epoch: [72][ 150/ 155] Loss 3.063979 mAP 0.434405
+2025-05-16 18:44:34,269 - Epoch: [72][ 155/ 155] Loss 3.065379 mAP 0.435388
+2025-05-16 18:44:34,367 - ==> mAP: 0.43539 Loss: 3.065
+
+2025-05-16 18:44:34,378 - ==> Best [mAP: 0.435388 vloss: 3.065379 Params: 2177088 on epoch: 72]
+2025-05-16 18:44:34,378 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 18:44:34,732 -
+
+2025-05-16 18:44:34,733 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 18:44:59,499 - Epoch: [73][ 50/ 518] Overall Loss 2.756061 Objective Loss 2.756061 LR 0.000250 Time 0.495212
+2025-05-16 18:45:21,741 - Epoch: [73][ 100/ 518] Overall Loss 2.767017 Objective Loss 2.767017 LR 0.000250 Time 0.469937
+2025-05-16 18:45:43,316 - Epoch: [73][ 150/ 518] Overall Loss 2.771183 Objective Loss 2.771183 LR 0.000250 Time 0.457113
+2025-05-16 18:46:05,583 - Epoch: [73][ 200/ 518] Overall Loss 2.774507 Objective Loss 2.774507 LR 0.000250 Time 0.454157
+2025-05-16 18:46:28,539 - Epoch: [73][ 250/ 518] Overall Loss 2.766746 Objective Loss 2.766746 LR 0.000250 Time 0.455113
+2025-05-16 18:46:52,348 - Epoch: [73][ 300/ 518] Overall Loss 2.779817 Objective Loss 2.779817 LR 0.000250 Time 0.458597
+2025-05-16 18:47:14,571 - Epoch: [73][ 350/ 518] Overall Loss 2.777978 Objective Loss 2.777978 LR 0.000250 Time 0.456571
+2025-05-16 18:47:36,037 - Epoch: [73][ 400/ 518] Overall Loss 2.779287 Objective Loss 2.779287 LR 0.000250 Time 0.453161
+2025-05-16 18:47:58,340 - Epoch: [73][ 450/ 518] Overall Loss 2.776804 Objective Loss 2.776804 LR 0.000250 Time 0.452366
+2025-05-16 18:48:21,646 - Epoch: [73][ 500/ 518] Overall Loss 2.779897 Objective Loss 2.779897 LR 0.000250 Time 0.453737
+2025-05-16 18:48:29,058 - Epoch: [73][ 518/ 518] Overall Loss 2.779233 Objective Loss 2.779233 LR 0.000250 Time 0.452279
+2025-05-16 18:48:29,132 - --- validate (epoch=73)-----------
+2025-05-16 18:48:29,133 - 4952 samples (32 per mini-batch)
+2025-05-16 18:48:29,137 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:48:51,982 - Epoch: [73][ 50/ 155] Loss 3.028594 mAP 0.448897
+2025-05-16 18:49:16,913 - Epoch: [73][ 100/ 155] Loss 3.042903 mAP 0.448217
+2025-05-16 18:49:41,056 - Epoch: [73][ 150/ 155] Loss 3.046279 mAP 0.441993
+2025-05-16 18:49:47,334 - Epoch: [73][ 155/ 155] Loss 3.046252 mAP 0.439926
+2025-05-16 18:49:47,413 - ==> mAP: 0.43993 Loss: 3.046
+
+2025-05-16 18:49:47,425 - ==> Best [mAP: 0.439926 vloss: 3.046252 Params: 2177088 on epoch: 73]
+2025-05-16 18:49:47,425 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 18:49:47,619 -
+
+2025-05-16 18:49:47,620 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 18:50:11,491 - Epoch: [74][ 50/ 518] Overall Loss 2.748402 Objective Loss 2.748402 LR 0.000250 Time 0.477312
+2025-05-16 18:50:33,920 - Epoch: [74][ 100/ 518] Overall Loss 2.760367 Objective Loss 2.760367 LR 0.000250 Time 0.462924
+2025-05-16 18:50:55,617 - Epoch: [74][ 150/ 518] Overall Loss 2.761782 Objective Loss 2.761782 LR 0.000250 Time 0.453249
+2025-05-16 18:51:17,784 - Epoch: [74][ 200/ 518] Overall Loss 2.777740 Objective Loss 2.777740 LR 0.000250 Time 0.450762
+2025-05-16 18:51:39,994 - Epoch: [74][ 250/ 518] Overall Loss 2.771558 Objective Loss 2.771558 LR 0.000250 Time 0.449443
+2025-05-16 18:52:02,595 - Epoch: [74][ 300/ 518] Overall Loss 2.777190 Objective Loss 2.777190 LR 0.000250 Time 0.449864
+2025-05-16 18:52:24,881 - Epoch: [74][ 350/ 518] Overall Loss 2.775838 Objective Loss 2.775838 LR 0.000250 Time 0.449266
+2025-05-16 18:52:47,320 - Epoch: [74][ 400/ 518] Overall Loss 2.775504 Objective Loss 2.775504 LR 0.000250 Time 0.449200
+2025-05-16 18:53:10,067 - Epoch: [74][ 450/ 518] Overall Loss 2.773933 Objective Loss 2.773933 LR 0.000250 Time 0.449834
+2025-05-16 18:53:32,547 - Epoch: [74][ 500/ 518] Overall Loss 2.771206 Objective Loss 2.771206 LR 0.000250 Time 0.449805
+2025-05-16 18:53:40,047 - Epoch: [74][ 518/ 518] Overall Loss 2.767945 Objective Loss 2.767945 LR 0.000250 Time 0.448652
+2025-05-16 18:53:40,126 - --- validate (epoch=74)-----------
+2025-05-16 18:53:40,127 - 4952 samples (32 per mini-batch)
+2025-05-16 18:53:40,132 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:54:02,319 - Epoch: [74][ 50/ 155] Loss 3.084132 mAP 0.435594
+2025-05-16 18:54:26,069 - Epoch: [74][ 100/ 155] Loss 3.073053 mAP 0.435746
+2025-05-16 18:54:50,701 - Epoch: [74][ 150/ 155] Loss 3.077021 mAP 0.435717
+2025-05-16 18:54:56,717 - Epoch: [74][ 155/ 155] Loss 3.079974 mAP 0.435073
+2025-05-16 18:54:56,805 - ==> mAP: 0.43507 Loss: 3.080
+
+2025-05-16 18:54:56,824 - ==> Best [mAP: 0.439926 vloss: 3.046252 Params: 2177088 on epoch: 73]
+2025-05-16 18:54:56,829 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 18:54:57,153 -
+
+2025-05-16 18:54:57,154 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 18:55:21,933 - Epoch: [75][ 50/ 518] Overall Loss 2.717312 Objective Loss 2.717312 LR 0.000250 Time 0.495388
+2025-05-16 18:55:44,087 - Epoch: [75][ 100/ 518] Overall Loss 2.728517 Objective Loss 2.728517 LR 0.000250 Time 0.469145
+2025-05-16 18:56:06,619 - Epoch: [75][ 150/ 518] Overall Loss 2.747019 Objective Loss 2.747019 LR 0.000250 Time 0.462963
+2025-05-16 18:56:29,055 - Epoch: [75][ 200/ 518] Overall Loss 2.764024 Objective Loss 2.764024 LR 0.000250 Time 0.459392
+2025-05-16 18:56:51,563 - Epoch: [75][ 250/ 518] Overall Loss 2.751974 Objective Loss 2.751974 LR 0.000250 Time 0.457540
+2025-05-16 18:57:14,603 - Epoch: [75][ 300/ 518] Overall Loss 2.770129 Objective Loss 2.770129 LR 0.000250 Time 0.458079
+2025-05-16 18:57:36,800 - Epoch: [75][ 350/ 518] Overall Loss 2.778508 Objective Loss 2.778508 LR 0.000250 Time 0.456054
+2025-05-16 18:57:58,624 - Epoch: [75][ 400/ 518] Overall Loss 2.769566 Objective Loss 2.769566 LR 0.000250 Time 0.453603
+2025-05-16 18:58:22,146 - Epoch: [75][ 450/ 518] Overall Loss 2.770707 Objective Loss 2.770707 LR 0.000250 Time 0.455468
+2025-05-16 18:58:45,442 - Epoch: [75][ 500/ 518] Overall Loss 2.763374 Objective Loss 2.763374 LR 0.000250 Time 0.456510
+2025-05-16 18:58:53,521 - Epoch: [75][ 518/ 518] Overall Loss 2.765326 Objective Loss 2.765326 LR 0.000250 Time 0.456241
+2025-05-16 18:58:53,599 - --- validate (epoch=75)-----------
+2025-05-16 18:58:53,600 - 4952 samples (32 per mini-batch)
+2025-05-16 18:58:53,605 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 18:59:19,520 - Epoch: [75][ 50/ 155] Loss 3.064424 mAP 0.450249
+2025-05-16 18:59:44,735 - Epoch: [75][ 100/ 155] Loss 3.045133 mAP 0.447941
+2025-05-16 19:00:11,274 - Epoch: [75][ 150/ 155] Loss 3.044480 mAP 0.440386
+2025-05-16 19:00:16,873 - Epoch: [75][ 155/ 155] Loss 3.041635 mAP 0.441292
+2025-05-16 19:00:16,998 - ==> mAP: 0.44129 Loss: 3.042
+
+2025-05-16 19:00:17,009 - ==> Best [mAP: 0.441292 vloss: 3.041635 Params: 2177088 on epoch: 75]
+2025-05-16 19:00:17,009 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 19:00:17,200 -
+
+2025-05-16 19:00:17,200 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 19:00:43,176 - Epoch: [76][ 50/ 518] Overall Loss 2.860175 Objective Loss 2.860175 LR 0.000250 Time 0.519401
+2025-05-16 19:01:05,467 - Epoch: [76][ 100/ 518] Overall Loss 2.778328 Objective Loss 2.778328 LR 0.000250 Time 0.482593
+2025-05-16 19:01:28,166 - Epoch: [76][ 150/ 518] Overall Loss 2.782507 Objective Loss 2.782507 LR 0.000250 Time 0.473039
+2025-05-16 19:01:50,420 - Epoch: [76][ 200/ 518] Overall Loss 2.779577 Objective Loss 2.779577 LR 0.000250 Time 0.466040
+2025-05-16 19:02:12,944 - Epoch: [76][ 250/ 518] Overall Loss 2.779990 Objective Loss 2.779990 LR 0.000250 Time 0.462865
+2025-05-16 19:02:35,652 - Epoch: [76][ 300/ 518] Overall Loss 2.774451 Objective Loss 2.774451 LR 0.000250 Time 0.461392
+2025-05-16 19:02:58,866 - Epoch: [76][ 350/ 518] Overall Loss 2.780480 Objective Loss 2.780480 LR 0.000250 Time 0.461797
+2025-05-16 19:03:22,136 - Epoch: [76][ 400/ 518] Overall Loss 2.778726 Objective Loss 2.778726 LR 0.000250 Time 0.462243
+2025-05-16 19:03:44,392 - Epoch: [76][ 450/ 518] Overall Loss 2.776588 Objective Loss 2.776588 LR 0.000250 Time 0.460335
+2025-05-16 19:04:07,176 - Epoch: [76][ 500/ 518] Overall Loss 2.777912 Objective Loss 2.777912 LR 0.000250 Time 0.459860
+2025-05-16 19:04:14,604 - Epoch: [76][ 518/ 518] Overall Loss 2.777222 Objective Loss 2.777222 LR 0.000250 Time 0.458219
+2025-05-16 19:04:14,746 - --- validate (epoch=76)-----------
+2025-05-16 19:04:14,747 - 4952 samples (32 per mini-batch)
+2025-05-16 19:04:14,752 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 19:04:38,981 - Epoch: [76][ 50/ 155] Loss 3.068889 mAP 0.438145
+2025-05-16 19:05:03,281 - Epoch: [76][ 100/ 155] Loss 3.087098 mAP 0.437196
+2025-05-16 19:05:26,543 - Epoch: [76][ 150/ 155] Loss 3.095277 mAP 0.433790
+2025-05-16 19:05:31,697 - Epoch: [76][ 155/ 155] Loss 3.099176 mAP 0.432002
+2025-05-16 19:05:31,820 - ==> mAP: 0.43200 Loss: 3.099
+
+2025-05-16 19:05:31,832 - ==> Best [mAP: 0.441292 vloss: 3.041635 Params: 2177088 on epoch: 75]
+2025-05-16 19:05:31,832 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 19:05:31,973 -
+
+2025-05-16 19:05:31,974 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 19:05:56,356 - Epoch: [77][ 50/ 518] Overall Loss 2.729483 Objective Loss 2.729483 LR 0.000250 Time 0.487525
+2025-05-16 19:06:18,700 - Epoch: [77][ 100/ 518] Overall Loss 2.742547 Objective Loss 2.742547 LR 0.000250 Time 0.467188
+2025-05-16 19:06:42,090 - Epoch: [77][ 150/ 518] Overall Loss 2.741313 Objective Loss 2.741313 LR 0.000250 Time 0.467375
+2025-05-16 19:07:04,078 - Epoch: [77][ 200/ 518] Overall Loss 2.738129 Objective Loss 2.738129 LR 0.000250 Time 0.460464
+2025-05-16 19:07:26,743 - Epoch: [77][ 250/ 518] Overall Loss 2.739370 Objective Loss 2.739370 LR 0.000250 Time 0.459021
+2025-05-16 19:07:48,673 - Epoch: [77][ 300/ 518] Overall Loss 2.737058 Objective Loss 2.737058 LR 0.000250 Time 0.455613
+2025-05-16 19:08:10,208 - Epoch: [77][ 350/ 518] Overall Loss 2.745175 Objective Loss 2.745175 LR 0.000250 Time 0.452050
+2025-05-16 19:08:33,502 - Epoch: [77][ 400/ 518] Overall Loss 2.740726 Objective Loss 2.740726 LR 0.000250 Time 0.453750
+2025-05-16 19:08:55,703 - Epoch: [77][ 450/ 518] Overall Loss 2.746659 Objective Loss 2.746659 LR 0.000250 Time 0.452666
+2025-05-16 19:09:18,963 - Epoch: [77][ 500/ 518] Overall Loss 2.751925 Objective Loss 2.751925 LR 0.000250 Time 0.453914
+2025-05-16 19:09:26,363 - Epoch: [77][ 518/ 518] Overall Loss 2.748465 Objective Loss 2.748465 LR 0.000250 Time 0.452425
+2025-05-16 19:09:26,435 - --- validate (epoch=77)-----------
+2025-05-16 19:09:26,436 - 4952 samples (32 per mini-batch)
+2025-05-16 19:09:26,440 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 19:09:49,307 - Epoch: [77][ 50/ 155] Loss 3.113243 mAP 0.426982
+2025-05-16 19:10:11,462 - Epoch: [77][ 100/ 155] Loss 3.087512 mAP 0.429513
+2025-05-16 19:10:37,158 - Epoch: [77][ 150/ 155] Loss 3.088981 mAP 0.427499
+2025-05-16 19:10:43,267 - Epoch: [77][ 155/ 155] Loss 3.086414 mAP 0.427415
+2025-05-16 19:10:43,365 - ==> mAP: 0.42742 Loss: 3.086
+
+2025-05-16 19:10:43,376 - ==> Best [mAP: 0.441292 vloss: 3.041635 Params: 2177088 on epoch: 75]
+2025-05-16 19:10:43,376 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 19:10:43,544 -
+
+2025-05-16 19:10:43,545 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 19:11:08,402 - Epoch: [78][ 50/ 518] Overall Loss 2.770838 Objective Loss 2.770838 LR 0.000250 Time 0.497032
+2025-05-16 19:11:31,021 - Epoch: [78][ 100/ 518] Overall Loss 2.771399 Objective Loss 2.771399 LR 0.000250 Time 0.474687
+2025-05-16 19:11:53,105 - Epoch: [78][ 150/ 518] Overall Loss 2.756858 Objective Loss 2.756858 LR 0.000250 Time 0.463677
+2025-05-16 19:12:15,026 - Epoch: [78][ 200/ 518] Overall Loss 2.757105 Objective Loss 2.757105 LR 0.000250 Time 0.457349
+2025-05-16 19:12:37,668 - Epoch: [78][ 250/ 518] Overall Loss 2.763194 Objective Loss 2.763194 LR 0.000250 Time 0.456437
+2025-05-16 19:13:00,290 - Epoch: [78][ 300/ 518] Overall Loss 2.765306 Objective Loss 2.765306 LR 0.000250 Time 0.455763
+2025-05-16 19:13:23,312 - Epoch: [78][ 350/ 518] Overall Loss 2.763697 Objective Loss 2.763697 LR 0.000250 Time 0.456425
+2025-05-16 19:13:46,287 - Epoch: [78][ 400/ 518] Overall Loss 2.758824 Objective Loss 2.758824 LR 0.000250 Time 0.456804
+2025-05-16 19:14:09,028 - Epoch: [78][ 450/ 518] Overall Loss 2.757024 Objective Loss 2.757024 LR 0.000250 Time 0.456579
+2025-05-16 19:14:31,460 - Epoch: [78][ 500/ 518] Overall Loss 2.756569 Objective Loss 2.756569 LR 0.000250 Time 0.455782
+2025-05-16 19:14:39,198 - Epoch: [78][ 518/ 518] Overall Loss 2.755010 Objective Loss 2.755010 LR 0.000250 Time 0.454879
+2025-05-16 19:14:39,296 - --- validate (epoch=78)-----------
+2025-05-16 19:14:39,297 - 4952 samples (32 per mini-batch)
+2025-05-16 19:14:39,302 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 19:15:00,681 - Epoch: [78][ 50/ 155] Loss 3.111960 mAP 0.430924
+2025-05-16 19:15:24,034 - Epoch: [78][ 100/ 155] Loss 3.092952 mAP 0.426495
+2025-05-16 19:15:49,185 - Epoch: [78][ 150/ 155] Loss 3.098519 mAP 0.416488
+2025-05-16 19:15:55,367 - Epoch: [78][ 155/ 155] Loss 3.096432 mAP 0.415892
+2025-05-16 19:15:55,509 - ==> mAP: 0.41589 Loss: 3.096
+
+2025-05-16 19:15:55,520 - ==> Best [mAP: 0.441292 vloss: 3.041635 Params: 2177088 on epoch: 75]
+2025-05-16 19:15:55,521 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 19:15:55,657 -
+
+2025-05-16 19:15:55,657 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 19:16:20,799 - Epoch: [79][ 50/ 518] Overall Loss 2.776070 Objective Loss 2.776070 LR 0.000250 Time 0.502721
+2025-05-16 19:16:43,134 - Epoch: [79][ 100/ 518] Overall Loss 2.770651 Objective Loss 2.770651 LR 0.000250 Time 0.474437
+2025-05-16 19:17:06,181 - Epoch: [79][ 150/ 518] Overall Loss 2.757388 Objective Loss 2.757388 LR 0.000250 Time 0.469921
+2025-05-16 19:17:28,132 - Epoch: [79][ 200/ 518] Overall Loss 2.757392 Objective Loss 2.757392 LR 0.000250 Time 0.462185
+2025-05-16 19:17:49,606 - Epoch: [79][ 250/ 518] Overall Loss 2.756311 Objective Loss 2.756311 LR 0.000250 Time 0.455612
+2025-05-16 19:18:12,789 - Epoch: [79][ 300/ 518] Overall Loss 2.766606 Objective Loss 2.766606 LR 0.000250 Time 0.456947
+2025-05-16 19:18:34,778 - Epoch: [79][ 350/ 518] Overall Loss 2.756900 Objective Loss 2.756900 LR 0.000250 Time 0.454488
+2025-05-16 19:18:57,247 - Epoch: [79][ 400/ 518] Overall Loss 2.753576 Objective Loss 2.753576 LR 0.000250 Time 0.453845
+2025-05-16 19:19:19,031 - Epoch: [79][ 450/ 518] Overall Loss 2.748594 Objective Loss 2.748594 LR 0.000250 Time 0.451823
+2025-05-16 19:19:42,036 - Epoch: [79][ 500/ 518] Overall Loss 2.746929 Objective Loss 2.746929 LR 0.000250 Time 0.452646
+2025-05-16 19:19:49,789 - Epoch: [79][ 518/ 518] Overall Loss 2.749482 Objective Loss 2.749482 LR 0.000250 Time 0.451883
+2025-05-16 19:19:49,895 - --- validate (epoch=79)-----------
+2025-05-16 19:19:49,896 - 4952 samples (32 per mini-batch)
+2025-05-16 19:19:49,901 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 19:20:13,949 - Epoch: [79][ 50/ 155] Loss 3.056778 mAP 0.451965
+2025-05-16 19:20:38,052 - Epoch: [79][ 100/ 155] Loss 3.044535 mAP 0.447693
+2025-05-16 19:21:03,023 - Epoch: [79][ 150/ 155] Loss 3.063102 mAP 0.444803
+2025-05-16 19:21:09,440 - Epoch: [79][ 155/ 155] Loss 3.062238 mAP 0.444585
+2025-05-16 19:21:09,594 - ==> mAP: 0.44458 Loss: 3.062
+
+2025-05-16 19:21:09,609 - ==> Best [mAP: 0.444585 vloss: 3.062238 Params: 2177088 on epoch: 79]
+2025-05-16 19:21:09,609 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 19:21:09,861 -
+
+2025-05-16 19:21:09,861 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 19:21:35,087 - Epoch: [80][ 50/ 518] Overall Loss 2.759390 Objective Loss 2.759390 LR 0.000250 Time 0.504407
+2025-05-16 19:21:57,247 - Epoch: [80][ 100/ 518] Overall Loss 2.771010 Objective Loss 2.771010 LR 0.000250 Time 0.473782
+2025-05-16 19:22:20,109 - Epoch: [80][ 150/ 518] Overall Loss 2.761389 Objective Loss 2.761389 LR 0.000250 Time 0.468257
+2025-05-16 19:22:43,028 - Epoch: [80][ 200/ 518] Overall Loss 2.753550 Objective Loss 2.753550 LR 0.000250 Time 0.465774
+2025-05-16 19:23:05,905 - Epoch: [80][ 250/ 518] Overall Loss 2.742992 Objective Loss 2.742992 LR 0.000250 Time 0.464118
+2025-05-16 19:23:28,950 - Epoch: [80][ 300/ 518] Overall Loss 2.750962 Objective Loss 2.750962 LR 0.000250 Time 0.463575
+2025-05-16 19:23:52,307 - Epoch: [80][ 350/ 518] Overall Loss 2.746105 Objective Loss 2.746105 LR 0.000250 Time 0.464079
+2025-05-16 19:24:14,404 - Epoch: [80][ 400/ 518] Overall Loss 2.749584 Objective Loss 2.749584 LR 0.000250 Time 0.461306
+2025-05-16 19:24:37,125 - Epoch: [80][ 450/ 518] Overall Loss 2.746033 Objective Loss 2.746033 LR 0.000250 Time 0.460537
+2025-05-16 19:25:00,012 - Epoch: [80][ 500/ 518] Overall Loss 2.749892 Objective Loss 2.749892 LR 0.000250 Time 0.460253
+2025-05-16 19:25:07,751 - Epoch: [80][ 518/ 518] Overall Loss 2.750050 Objective Loss 2.750050 LR 0.000250 Time 0.459200
+2025-05-16 19:25:07,869 - --- validate (epoch=80)-----------
+2025-05-16 19:25:07,870 - 4952 samples (32 per mini-batch)
+2025-05-16 19:25:07,875 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 19:25:31,959 - Epoch: [80][ 50/ 155] Loss 3.009267 mAP 0.462109
+2025-05-16 19:25:54,933 - Epoch: [80][ 100/ 155] Loss 3.023269 mAP 0.448056
+2025-05-16 19:26:20,243 - Epoch: [80][ 150/ 155] Loss 3.045924 mAP 0.441785
+2025-05-16 19:26:25,173 - Epoch: [80][ 155/ 155] Loss 3.046414 mAP 0.442638
+2025-05-16 19:26:25,274 - ==> mAP: 0.44264 Loss: 3.046
+
+2025-05-16 19:26:25,292 - ==> Best [mAP: 0.444585 vloss: 3.062238 Params: 2177088 on epoch: 79]
+2025-05-16 19:26:25,298 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 19:26:25,542 -
+
+2025-05-16 19:26:25,543 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 19:26:50,346 - Epoch: [81][ 50/ 518] Overall Loss 2.782039 Objective Loss 2.782039 LR 0.000250 Time 0.495955
+2025-05-16 19:27:12,604 - Epoch: [81][ 100/ 518] Overall Loss 2.766902 Objective Loss 2.766902 LR 0.000250 Time 0.470531
+2025-05-16 19:27:35,958 - Epoch: [81][ 150/ 518] Overall Loss 2.738655 Objective Loss 2.738655 LR 0.000250 Time 0.469368
+2025-05-16 19:27:57,671 - Epoch: [81][ 200/ 518] Overall Loss 2.742491 Objective Loss 2.742491 LR 0.000250 Time 0.460585
+2025-05-16 19:28:20,266 - Epoch: [81][ 250/ 518] Overall Loss 2.738881 Objective Loss 2.738881 LR 0.000250 Time 0.458839
+2025-05-16 19:28:42,914 - Epoch: [81][ 300/ 518] Overall Loss 2.751872 Objective Loss 2.751872 LR 0.000250 Time 0.457854
+2025-05-16 19:29:05,825 - Epoch: [81][ 350/ 518] Overall Loss 2.749465 Objective Loss 2.749465 LR 0.000250 Time 0.457899
+2025-05-16 19:29:28,322 - Epoch: [81][ 400/ 518] Overall Loss 2.744427 Objective Loss 2.744427 LR 0.000250 Time 0.456893
+2025-05-16 19:29:51,002 - Epoch: [81][ 450/ 518] Overall Loss 2.745978 Objective Loss 2.745978 LR 0.000250 Time 0.456523
+2025-05-16 19:30:13,211 - Epoch: [81][ 500/ 518] Overall Loss 2.749843 Objective Loss 2.749843 LR 0.000250 Time 0.455285
+2025-05-16 19:30:21,652 - Epoch: [81][ 518/ 518] Overall Loss 2.749332 Objective Loss 2.749332 LR 0.000250 Time 0.455759
+2025-05-16 19:30:21,817 - --- validate (epoch=81)-----------
+2025-05-16 19:30:21,819 - 4952 samples (32 per mini-batch)
+2025-05-16 19:30:21,824 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 19:30:47,028 - Epoch: [81][ 50/ 155] Loss 3.025315 mAP 0.450416
+2025-05-16 19:31:10,871 - Epoch: [81][ 100/ 155] Loss 3.047218 mAP 0.447082
+2025-05-16 19:31:36,880 - Epoch: [81][ 150/ 155] Loss 3.053890 mAP 0.442921
+2025-05-16 19:31:42,610 - Epoch: [81][ 155/ 155] Loss 3.054648 mAP 0.442267
+2025-05-16 19:31:42,688 - ==> mAP: 0.44227 Loss: 3.055
+
+2025-05-16 19:31:42,699 - ==> Best [mAP: 0.444585 vloss: 3.062238 Params: 2177088 on epoch: 79]
+2025-05-16 19:31:42,700 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 19:31:42,832 -
+
+2025-05-16 19:31:42,832 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 19:32:07,824 - Epoch: [82][ 50/ 518] Overall Loss 2.715944 Objective Loss 2.715944 LR 0.000250 Time 0.499727
+2025-05-16 19:32:29,710 - Epoch: [82][ 100/ 518] Overall Loss 2.701086 Objective Loss 2.701086 LR 0.000250 Time 0.468515
+2025-05-16 19:32:52,529 - Epoch: [82][ 150/ 518] Overall Loss 2.724067 Objective Loss 2.724067 LR 0.000250 Time 0.464460
+2025-05-16 19:33:15,181 - Epoch: [82][ 200/ 518] Overall Loss 2.712717 Objective Loss 2.712717 LR 0.000250 Time 0.461593
+2025-05-16 19:33:38,268 - Epoch: [82][ 250/ 518] Overall Loss 2.727240 Objective Loss 2.727240 LR 0.000250 Time 0.461608
+2025-05-16 19:34:00,704 - Epoch: [82][ 300/ 518] Overall Loss 2.733048 Objective Loss 2.733048 LR 0.000250 Time 0.459453
+2025-05-16 19:34:24,176 - Epoch: [82][ 350/ 518] Overall Loss 2.732734 Objective Loss 2.732734 LR 0.000250 Time 0.460873
+2025-05-16 19:34:46,619 - Epoch: [82][ 400/ 518] Overall Loss 2.735177 Objective Loss 2.735177 LR 0.000250 Time 0.459367
+2025-05-16 19:35:09,209 - Epoch: [82][ 450/ 518] Overall Loss 2.734410 Objective Loss 2.734410 LR 0.000250 Time 0.458521
+2025-05-16 19:35:31,888 - Epoch: [82][ 500/ 518] Overall Loss 2.738068 Objective Loss 2.738068 LR 0.000250 Time 0.458023
+2025-05-16 19:35:39,576 - Epoch: [82][ 518/ 518] Overall Loss 2.737734 Objective Loss 2.737734 LR 0.000250 Time 0.456948
+2025-05-16 19:35:39,699 - --- validate (epoch=82)-----------
+2025-05-16 19:35:39,700 - 4952 samples (32 per mini-batch)
+2025-05-16 19:35:39,705 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 19:36:02,722 - Epoch: [82][ 50/ 155] Loss 3.061200 mAP 0.435872
+2025-05-16 19:36:26,087 - Epoch: [82][ 100/ 155] Loss 3.037790 mAP 0.444058
+2025-05-16 19:36:53,269 - Epoch: [82][ 150/ 155] Loss 3.027487 mAP 0.447928
+2025-05-16 19:36:59,281 - Epoch: [82][ 155/ 155] Loss 3.027363 mAP 0.447879
+2025-05-16 19:36:59,422 - ==> mAP: 0.44788 Loss: 3.027
+
+2025-05-16 19:36:59,434 - ==> Best [mAP: 0.447879 vloss: 3.027363 Params: 2177088 on epoch: 82]
+2025-05-16 19:36:59,434 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 19:36:59,647 -
+
+2025-05-16 19:36:59,647 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 19:37:24,773 - Epoch: [83][ 50/ 518] Overall Loss 2.703746 Objective Loss 2.703746 LR 0.000250 Time 0.502390
+2025-05-16 19:37:46,495 - Epoch: [83][ 100/ 518] Overall Loss 2.724241 Objective Loss 2.724241 LR 0.000250 Time 0.468395
+2025-05-16 19:38:09,913 - Epoch: [83][ 150/ 518] Overall Loss 2.750438 Objective Loss 2.750438 LR 0.000250 Time 0.468370
+2025-05-16 19:38:32,556 - Epoch: [83][ 200/ 518] Overall Loss 2.739127 Objective Loss 2.739127 LR 0.000250 Time 0.464484
+2025-05-16 19:38:55,610 - Epoch: [83][ 250/ 518] Overall Loss 2.740978 Objective Loss 2.740978 LR 0.000250 Time 0.463792
+2025-05-16 19:39:19,031 - Epoch: [83][ 300/ 518] Overall Loss 2.734779 Objective Loss 2.734779 LR 0.000250 Time 0.464560
+2025-05-16
19:39:41,665 - Epoch: [83][ 350/ 518] Overall Loss 2.734466 Objective Loss 2.734466 LR 0.000250 Time 0.462856 +2025-05-16 19:40:04,631 - Epoch: [83][ 400/ 518] Overall Loss 2.734384 Objective Loss 2.734384 LR 0.000250 Time 0.462410 +2025-05-16 19:40:27,060 - Epoch: [83][ 450/ 518] Overall Loss 2.734707 Objective Loss 2.734707 LR 0.000250 Time 0.460858 +2025-05-16 19:40:49,666 - Epoch: [83][ 500/ 518] Overall Loss 2.733336 Objective Loss 2.733336 LR 0.000250 Time 0.459979 +2025-05-16 19:40:57,486 - Epoch: [83][ 518/ 518] Overall Loss 2.729381 Objective Loss 2.729381 LR 0.000250 Time 0.459089 +2025-05-16 19:40:57,583 - --- validate (epoch=83)----------- +2025-05-16 19:40:57,584 - 4952 samples (32 per mini-batch) +2025-05-16 19:40:57,589 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 19:41:19,921 - Epoch: [83][ 50/ 155] Loss 3.046639 mAP 0.444319 +2025-05-16 19:41:43,641 - Epoch: [83][ 100/ 155] Loss 3.009068 mAP 0.452771 +2025-05-16 19:42:08,231 - Epoch: [83][ 150/ 155] Loss 3.033134 mAP 0.446874 +2025-05-16 19:42:13,938 - Epoch: [83][ 155/ 155] Loss 3.028214 mAP 0.448439 +2025-05-16 19:42:14,028 - ==> mAP: 0.44844 Loss: 3.028 + +2025-05-16 19:42:14,049 - ==> Best [mAP: 0.448439 vloss: 3.028214 Params: 2177088 on epoch: 83] +2025-05-16 19:42:14,052 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 19:42:14,265 - + +2025-05-16 19:42:14,266 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 19:42:40,051 - Epoch: [84][ 50/ 518] Overall Loss 2.727304 Objective Loss 2.727304 LR 0.000250 Time 0.515590 +2025-05-16 19:43:02,097 - Epoch: [84][ 100/ 518] Overall Loss 2.708533 Objective Loss 2.708533 LR 0.000250 Time 0.478239 +2025-05-16 19:43:24,766 - Epoch: [84][ 150/ 518] Overall Loss 2.707145 Objective Loss 2.707145 LR 0.000250 Time 0.469935 +2025-05-16 19:43:47,639 - Epoch: [84][ 200/ 518] Overall Loss 2.707009 
Objective Loss 2.707009 LR 0.000250 Time 0.466795 +2025-05-16 19:44:09,838 - Epoch: [84][ 250/ 518] Overall Loss 2.726658 Objective Loss 2.726658 LR 0.000250 Time 0.462206 +2025-05-16 19:44:33,288 - Epoch: [84][ 300/ 518] Overall Loss 2.729206 Objective Loss 2.729206 LR 0.000250 Time 0.463333 +2025-05-16 19:44:55,773 - Epoch: [84][ 350/ 518] Overall Loss 2.726788 Objective Loss 2.726788 LR 0.000250 Time 0.461380 +2025-05-16 19:45:19,050 - Epoch: [84][ 400/ 518] Overall Loss 2.730876 Objective Loss 2.730876 LR 0.000250 Time 0.461896 +2025-05-16 19:45:41,664 - Epoch: [84][ 450/ 518] Overall Loss 2.729800 Objective Loss 2.729800 LR 0.000250 Time 0.460822 +2025-05-16 19:46:03,884 - Epoch: [84][ 500/ 518] Overall Loss 2.731104 Objective Loss 2.731104 LR 0.000250 Time 0.459175 +2025-05-16 19:46:11,982 - Epoch: [84][ 518/ 518] Overall Loss 2.732425 Objective Loss 2.732425 LR 0.000250 Time 0.458852 +2025-05-16 19:46:12,067 - --- validate (epoch=84)----------- +2025-05-16 19:46:12,068 - 4952 samples (32 per mini-batch) +2025-05-16 19:46:12,073 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 19:46:36,441 - Epoch: [84][ 50/ 155] Loss 3.071683 mAP 0.437036 +2025-05-16 19:47:01,655 - Epoch: [84][ 100/ 155] Loss 3.066448 mAP 0.442600 +2025-05-16 19:47:26,376 - Epoch: [84][ 150/ 155] Loss 3.060870 mAP 0.447423 +2025-05-16 19:47:32,196 - Epoch: [84][ 155/ 155] Loss 3.060944 mAP 0.446535 +2025-05-16 19:47:32,287 - ==> mAP: 0.44653 Loss: 3.061 + +2025-05-16 19:47:32,303 - ==> Best [mAP: 0.448439 vloss: 3.028214 Params: 2177088 on epoch: 83] +2025-05-16 19:47:32,306 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 19:47:32,532 - + +2025-05-16 19:47:32,533 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 19:47:57,519 - Epoch: [85][ 50/ 518] Overall Loss 2.733195 Objective Loss 2.733195 LR 0.000250 Time 0.499615 +2025-05-16 
19:48:20,189 - Epoch: [85][ 100/ 518] Overall Loss 2.719784 Objective Loss 2.719784 LR 0.000250 Time 0.476482 +2025-05-16 19:48:42,228 - Epoch: [85][ 150/ 518] Overall Loss 2.718641 Objective Loss 2.718641 LR 0.000250 Time 0.464570 +2025-05-16 19:49:04,493 - Epoch: [85][ 200/ 518] Overall Loss 2.712991 Objective Loss 2.712991 LR 0.000250 Time 0.459739 +2025-05-16 19:49:27,272 - Epoch: [85][ 250/ 518] Overall Loss 2.706353 Objective Loss 2.706353 LR 0.000250 Time 0.458899 +2025-05-16 19:49:50,344 - Epoch: [85][ 300/ 518] Overall Loss 2.716855 Objective Loss 2.716855 LR 0.000250 Time 0.459319 +2025-05-16 19:50:13,107 - Epoch: [85][ 350/ 518] Overall Loss 2.724941 Objective Loss 2.724941 LR 0.000250 Time 0.458733 +2025-05-16 19:50:34,844 - Epoch: [85][ 400/ 518] Overall Loss 2.719792 Objective Loss 2.719792 LR 0.000250 Time 0.455729 +2025-05-16 19:50:57,884 - Epoch: [85][ 450/ 518] Overall Loss 2.712843 Objective Loss 2.712843 LR 0.000250 Time 0.456287 +2025-05-16 19:51:19,347 - Epoch: [85][ 500/ 518] Overall Loss 2.713692 Objective Loss 2.713692 LR 0.000250 Time 0.453579 +2025-05-16 19:51:27,550 - Epoch: [85][ 518/ 518] Overall Loss 2.717587 Objective Loss 2.717587 LR 0.000250 Time 0.453652 +2025-05-16 19:51:27,658 - --- validate (epoch=85)----------- +2025-05-16 19:51:27,659 - 4952 samples (32 per mini-batch) +2025-05-16 19:51:27,668 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 19:51:50,204 - Epoch: [85][ 50/ 155] Loss 3.050934 mAP 0.448308 +2025-05-16 19:52:14,777 - Epoch: [85][ 100/ 155] Loss 3.073606 mAP 0.441496 +2025-05-16 19:52:41,104 - Epoch: [85][ 150/ 155] Loss 3.070658 mAP 0.434123 +2025-05-16 19:52:46,440 - Epoch: [85][ 155/ 155] Loss 3.069579 mAP 0.433599 +2025-05-16 19:52:46,522 - ==> mAP: 0.43360 Loss: 3.070 + +2025-05-16 19:52:46,535 - ==> Best [mAP: 0.448439 vloss: 3.028214 Params: 2177088 on epoch: 83] +2025-05-16 19:52:46,536 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 19:52:46,684 - + +2025-05-16 19:52:46,684 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 19:53:10,298 - Epoch: [86][ 50/ 518] Overall Loss 2.673627 Objective Loss 2.673627 LR 0.000250 Time 0.472166 +2025-05-16 19:53:32,725 - Epoch: [86][ 100/ 518] Overall Loss 2.683053 Objective Loss 2.683053 LR 0.000250 Time 0.460334 +2025-05-16 19:53:54,869 - Epoch: [86][ 150/ 518] Overall Loss 2.695385 Objective Loss 2.695385 LR 0.000250 Time 0.454500 +2025-05-16 19:54:17,108 - Epoch: [86][ 200/ 518] Overall Loss 2.714165 Objective Loss 2.714165 LR 0.000250 Time 0.452062 +2025-05-16 19:54:39,859 - Epoch: [86][ 250/ 518] Overall Loss 2.714121 Objective Loss 2.714121 LR 0.000250 Time 0.452646 +2025-05-16 19:55:02,643 - Epoch: [86][ 300/ 518] Overall Loss 2.696915 Objective Loss 2.696915 LR 0.000250 Time 0.453144 +2025-05-16 19:55:25,571 - Epoch: [86][ 350/ 518] Overall Loss 2.699187 Objective Loss 2.699187 LR 0.000250 Time 0.453913 +2025-05-16 19:55:47,901 - Epoch: [86][ 400/ 518] Overall Loss 2.702779 Objective Loss 2.702779 LR 0.000250 Time 0.452995 +2025-05-16 19:56:11,330 - Epoch: [86][ 450/ 518] Overall Loss 2.708533 Objective Loss 2.708533 LR 0.000250 Time 0.454722 +2025-05-16 19:56:33,802 - Epoch: [86][ 500/ 518] Overall Loss 2.709836 Objective Loss 2.709836 LR 0.000250 Time 0.454181 +2025-05-16 19:56:41,426 - Epoch: [86][ 518/ 518] Overall Loss 2.708885 Objective Loss 2.708885 LR 0.000250 Time 0.453076 +2025-05-16 19:56:41,576 - --- validate (epoch=86)----------- +2025-05-16 19:56:41,577 - 4952 samples (32 per mini-batch) +2025-05-16 19:56:41,582 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 19:57:04,509 - Epoch: [86][ 50/ 155] Loss 3.085957 mAP 0.438393 +2025-05-16 19:57:29,552 - Epoch: [86][ 100/ 155] Loss 3.067240 mAP 0.441072 +2025-05-16 19:57:54,458 - Epoch: [86][ 150/ 155] Loss 
3.039933 mAP 0.448523 +2025-05-16 19:58:01,146 - Epoch: [86][ 155/ 155] Loss 3.042412 mAP 0.446393 +2025-05-16 19:58:01,254 - ==> mAP: 0.44639 Loss: 3.042 + +2025-05-16 19:58:01,266 - ==> Best [mAP: 0.448439 vloss: 3.028214 Params: 2177088 on epoch: 83] +2025-05-16 19:58:01,266 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 19:58:01,468 - + +2025-05-16 19:58:01,468 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 19:58:27,124 - Epoch: [87][ 50/ 518] Overall Loss 2.724521 Objective Loss 2.724521 LR 0.000250 Time 0.513007 +2025-05-16 19:58:49,811 - Epoch: [87][ 100/ 518] Overall Loss 2.686346 Objective Loss 2.686346 LR 0.000250 Time 0.483356 +2025-05-16 19:59:12,704 - Epoch: [87][ 150/ 518] Overall Loss 2.702264 Objective Loss 2.702264 LR 0.000250 Time 0.474845 +2025-05-16 19:59:35,971 - Epoch: [87][ 200/ 518] Overall Loss 2.682907 Objective Loss 2.682907 LR 0.000250 Time 0.472459 +2025-05-16 19:59:57,881 - Epoch: [87][ 250/ 518] Overall Loss 2.697098 Objective Loss 2.697098 LR 0.000250 Time 0.465598 +2025-05-16 20:00:19,493 - Epoch: [87][ 300/ 518] Overall Loss 2.697572 Objective Loss 2.697572 LR 0.000250 Time 0.460031 +2025-05-16 20:00:42,168 - Epoch: [87][ 350/ 518] Overall Loss 2.702981 Objective Loss 2.702981 LR 0.000250 Time 0.459092 +2025-05-16 20:01:03,918 - Epoch: [87][ 400/ 518] Overall Loss 2.701703 Objective Loss 2.701703 LR 0.000250 Time 0.456075 +2025-05-16 20:01:26,697 - Epoch: [87][ 450/ 518] Overall Loss 2.699890 Objective Loss 2.699890 LR 0.000250 Time 0.456016 +2025-05-16 20:01:48,807 - Epoch: [87][ 500/ 518] Overall Loss 2.702942 Objective Loss 2.702942 LR 0.000250 Time 0.454630 +2025-05-16 20:01:56,161 - Epoch: [87][ 518/ 518] Overall Loss 2.701452 Objective Loss 2.701452 LR 0.000250 Time 0.453028 +2025-05-16 20:01:56,308 - --- validate (epoch=87)----------- +2025-05-16 20:01:56,309 - 4952 samples (32 per mini-batch) +2025-05-16 20:01:56,314 - 
{'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 20:02:19,907 - Epoch: [87][ 50/ 155] Loss 3.052586 mAP 0.428289 +2025-05-16 20:02:45,167 - Epoch: [87][ 100/ 155] Loss 3.058079 mAP 0.431727 +2025-05-16 20:03:09,940 - Epoch: [87][ 150/ 155] Loss 3.049730 mAP 0.436364 +2025-05-16 20:03:15,542 - Epoch: [87][ 155/ 155] Loss 3.046300 mAP 0.438028 +2025-05-16 20:03:15,669 - ==> mAP: 0.43803 Loss: 3.046 + +2025-05-16 20:03:15,680 - ==> Best [mAP: 0.448439 vloss: 3.028214 Params: 2177088 on epoch: 83] +2025-05-16 20:03:15,681 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 20:03:15,825 - + +2025-05-16 20:03:15,825 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 20:03:40,506 - Epoch: [88][ 50/ 518] Overall Loss 2.723494 Objective Loss 2.723494 LR 0.000250 Time 0.493496 +2025-05-16 20:04:03,288 - Epoch: [88][ 100/ 518] Overall Loss 2.697213 Objective Loss 2.697213 LR 0.000250 Time 0.474548 +2025-05-16 20:04:25,436 - Epoch: [88][ 150/ 518] Overall Loss 2.680627 Objective Loss 2.680627 LR 0.000250 Time 0.464005 +2025-05-16 20:04:49,045 - Epoch: [88][ 200/ 518] Overall Loss 2.681401 Objective Loss 2.681401 LR 0.000250 Time 0.466037 +2025-05-16 20:05:11,674 - Epoch: [88][ 250/ 518] Overall Loss 2.680354 Objective Loss 2.680354 LR 0.000250 Time 0.463338 +2025-05-16 20:05:34,858 - Epoch: [88][ 300/ 518] Overall Loss 2.691909 Objective Loss 2.691909 LR 0.000250 Time 0.463389 +2025-05-16 20:05:57,155 - Epoch: [88][ 350/ 518] Overall Loss 2.701034 Objective Loss 2.701034 LR 0.000250 Time 0.460889 +2025-05-16 20:06:20,354 - Epoch: [88][ 400/ 518] Overall Loss 2.702387 Objective Loss 2.702387 LR 0.000250 Time 0.461271 +2025-05-16 20:06:43,101 - Epoch: [88][ 450/ 518] Overall Loss 2.704012 Objective Loss 2.704012 LR 0.000250 Time 0.460564 +2025-05-16 20:07:05,612 - Epoch: [88][ 500/ 518] Overall Loss 2.708202 Objective Loss 
2.708202 LR 0.000250 Time 0.459527 +2025-05-16 20:07:13,366 - Epoch: [88][ 518/ 518] Overall Loss 2.712062 Objective Loss 2.712062 LR 0.000250 Time 0.458526 +2025-05-16 20:07:13,497 - --- validate (epoch=88)----------- +2025-05-16 20:07:13,498 - 4952 samples (32 per mini-batch) +2025-05-16 20:07:13,503 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 20:07:37,550 - Epoch: [88][ 50/ 155] Loss 3.084318 mAP 0.437208 +2025-05-16 20:07:59,621 - Epoch: [88][ 100/ 155] Loss 3.048701 mAP 0.447306 +2025-05-16 20:08:25,330 - Epoch: [88][ 150/ 155] Loss 3.041891 mAP 0.450637 +2025-05-16 20:08:29,647 - Epoch: [88][ 155/ 155] Loss 3.041336 mAP 0.450674 +2025-05-16 20:08:29,781 - ==> mAP: 0.45067 Loss: 3.041 + +2025-05-16 20:08:30,106 - ==> Best [mAP: 0.450674 vloss: 3.041336 Params: 2177088 on epoch: 88] +2025-05-16 20:08:30,106 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 20:08:30,396 - + +2025-05-16 20:08:30,396 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 20:08:56,542 - Epoch: [89][ 50/ 518] Overall Loss 2.718566 Objective Loss 2.718566 LR 0.000250 Time 0.522797 +2025-05-16 20:09:18,918 - Epoch: [89][ 100/ 518] Overall Loss 2.722526 Objective Loss 2.722526 LR 0.000250 Time 0.485138 +2025-05-16 20:09:41,707 - Epoch: [89][ 150/ 518] Overall Loss 2.725550 Objective Loss 2.725550 LR 0.000250 Time 0.475341 +2025-05-16 20:10:04,141 - Epoch: [89][ 200/ 518] Overall Loss 2.709664 Objective Loss 2.709664 LR 0.000250 Time 0.468667 +2025-05-16 20:10:25,784 - Epoch: [89][ 250/ 518] Overall Loss 2.694841 Objective Loss 2.694841 LR 0.000250 Time 0.461498 +2025-05-16 20:10:47,328 - Epoch: [89][ 300/ 518] Overall Loss 2.700842 Objective Loss 2.700842 LR 0.000250 Time 0.456386 +2025-05-16 20:11:09,732 - Epoch: [89][ 350/ 518] Overall Loss 2.697441 Objective Loss 2.697441 LR 0.000250 Time 0.455196 +2025-05-16 20:11:31,126 - 
Epoch: [89][ 400/ 518] Overall Loss 2.704301 Objective Loss 2.704301 LR 0.000250 Time 0.451761 +2025-05-16 20:11:53,256 - Epoch: [89][ 450/ 518] Overall Loss 2.699471 Objective Loss 2.699471 LR 0.000250 Time 0.450738 +2025-05-16 20:12:15,982 - Epoch: [89][ 500/ 518] Overall Loss 2.700431 Objective Loss 2.700431 LR 0.000250 Time 0.451112 +2025-05-16 20:12:23,557 - Epoch: [89][ 518/ 518] Overall Loss 2.701009 Objective Loss 2.701009 LR 0.000250 Time 0.450058 +2025-05-16 20:12:23,668 - --- validate (epoch=89)----------- +2025-05-16 20:12:23,670 - 4952 samples (32 per mini-batch) +2025-05-16 20:12:23,675 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 20:12:47,702 - Epoch: [89][ 50/ 155] Loss 3.074890 mAP 0.447381 +2025-05-16 20:13:12,818 - Epoch: [89][ 100/ 155] Loss 3.063789 mAP 0.447457 +2025-05-16 20:13:37,098 - Epoch: [89][ 150/ 155] Loss 3.053708 mAP 0.444130 +2025-05-16 20:13:43,109 - Epoch: [89][ 155/ 155] Loss 3.048254 mAP 0.444056 +2025-05-16 20:13:43,214 - ==> mAP: 0.44406 Loss: 3.048 + +2025-05-16 20:13:43,226 - ==> Best [mAP: 0.450674 vloss: 3.041336 Params: 2177088 on epoch: 88] +2025-05-16 20:13:43,226 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 20:13:43,422 - + +2025-05-16 20:13:43,422 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 20:14:08,011 - Epoch: [90][ 50/ 518] Overall Loss 2.714151 Objective Loss 2.714151 LR 0.000250 Time 0.491668 +2025-05-16 20:14:29,996 - Epoch: [90][ 100/ 518] Overall Loss 2.677323 Objective Loss 2.677323 LR 0.000250 Time 0.465662 +2025-05-16 20:14:52,053 - Epoch: [90][ 150/ 518] Overall Loss 2.703581 Objective Loss 2.703581 LR 0.000250 Time 0.457477 +2025-05-16 20:15:13,904 - Epoch: [90][ 200/ 518] Overall Loss 2.693439 Objective Loss 2.693439 LR 0.000250 Time 0.452353 +2025-05-16 20:15:36,841 - Epoch: [90][ 250/ 518] Overall Loss 2.686748 Objective Loss 
2.686748 LR 0.000250 Time 0.453623 +2025-05-16 20:15:58,911 - Epoch: [90][ 300/ 518] Overall Loss 2.688998 Objective Loss 2.688998 LR 0.000250 Time 0.451580 +2025-05-16 20:16:22,157 - Epoch: [90][ 350/ 518] Overall Loss 2.688109 Objective Loss 2.688109 LR 0.000250 Time 0.453479 +2025-05-16 20:16:44,200 - Epoch: [90][ 400/ 518] Overall Loss 2.690562 Objective Loss 2.690562 LR 0.000250 Time 0.451896 +2025-05-16 20:17:07,161 - Epoch: [90][ 450/ 518] Overall Loss 2.696405 Objective Loss 2.696405 LR 0.000250 Time 0.452707 +2025-05-16 20:17:30,127 - Epoch: [90][ 500/ 518] Overall Loss 2.697957 Objective Loss 2.697957 LR 0.000250 Time 0.453364 +2025-05-16 20:17:38,319 - Epoch: [90][ 518/ 518] Overall Loss 2.697112 Objective Loss 2.697112 LR 0.000250 Time 0.453422 +2025-05-16 20:17:38,412 - --- validate (epoch=90)----------- +2025-05-16 20:17:38,413 - 4952 samples (32 per mini-batch) +2025-05-16 20:17:38,417 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 20:18:01,939 - Epoch: [90][ 50/ 155] Loss 3.053911 mAP 0.443283 +2025-05-16 20:18:26,625 - Epoch: [90][ 100/ 155] Loss 3.042895 mAP 0.451478 +2025-05-16 20:18:53,027 - Epoch: [90][ 150/ 155] Loss 3.026611 mAP 0.448806 +2025-05-16 20:18:58,467 - Epoch: [90][ 155/ 155] Loss 3.028763 mAP 0.447798 +2025-05-16 20:18:58,537 - ==> mAP: 0.44780 Loss: 3.029 + +2025-05-16 20:18:58,551 - ==> Best [mAP: 0.450674 vloss: 3.041336 Params: 2177088 on epoch: 88] +2025-05-16 20:18:58,551 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 20:18:58,820 - + +2025-05-16 20:18:58,820 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 20:19:23,057 - Epoch: [91][ 50/ 518] Overall Loss 2.709316 Objective Loss 2.709316 LR 0.000250 Time 0.484614 +2025-05-16 20:19:46,489 - Epoch: [91][ 100/ 518] Overall Loss 2.717957 Objective Loss 2.717957 LR 0.000250 Time 0.476606 +2025-05-16 20:20:07,952 - 
Epoch: [91][ 150/ 518] Overall Loss 2.709620 Objective Loss 2.709620 LR 0.000250 Time 0.460817 +2025-05-16 20:20:30,514 - Epoch: [91][ 200/ 518] Overall Loss 2.705950 Objective Loss 2.705950 LR 0.000250 Time 0.458409 +2025-05-16 20:20:53,027 - Epoch: [91][ 250/ 518] Overall Loss 2.699344 Objective Loss 2.699344 LR 0.000250 Time 0.456772 +2025-05-16 20:21:16,057 - Epoch: [91][ 300/ 518] Overall Loss 2.695507 Objective Loss 2.695507 LR 0.000250 Time 0.457401 +2025-05-16 20:21:39,489 - Epoch: [91][ 350/ 518] Overall Loss 2.685422 Objective Loss 2.685422 LR 0.000250 Time 0.459002 +2025-05-16 20:21:59,431 - Epoch: [91][ 400/ 518] Overall Loss 2.682926 Objective Loss 2.682926 LR 0.000250 Time 0.451472 +2025-05-16 20:22:15,324 - Epoch: [91][ 450/ 518] Overall Loss 2.684909 Objective Loss 2.684909 LR 0.000250 Time 0.436623 +2025-05-16 20:22:31,266 - Epoch: [91][ 500/ 518] Overall Loss 2.685960 Objective Loss 2.685960 LR 0.000250 Time 0.424845 +2025-05-16 20:22:36,887 - Epoch: [91][ 518/ 518] Overall Loss 2.684168 Objective Loss 2.684168 LR 0.000250 Time 0.420932 +2025-05-16 20:22:36,921 - --- validate (epoch=91)----------- +2025-05-16 20:22:36,921 - 4952 samples (32 per mini-batch) +2025-05-16 20:22:36,925 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 20:22:49,045 - Epoch: [91][ 50/ 155] Loss 3.032193 mAP 0.454511 +2025-05-16 20:23:01,174 - Epoch: [91][ 100/ 155] Loss 3.034840 mAP 0.442916 +2025-05-16 20:23:14,012 - Epoch: [91][ 150/ 155] Loss 3.028220 mAP 0.447280 +2025-05-16 20:23:16,835 - Epoch: [91][ 155/ 155] Loss 3.025378 mAP 0.449920 +2025-05-16 20:23:16,873 - ==> mAP: 0.44992 Loss: 3.025 + +2025-05-16 20:23:16,881 - ==> Best [mAP: 0.450674 vloss: 3.041336 Params: 2177088 on epoch: 88] +2025-05-16 20:23:16,882 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 20:23:16,976 - + +2025-05-16 20:23:16,977 - Training epoch: 16551 samples 
(32 per mini-batch, world size: 1) +2025-05-16 20:23:33,729 - Epoch: [92][ 50/ 518] Overall Loss 2.716353 Objective Loss 2.716353 LR 0.000250 Time 0.334994 +2025-05-16 20:23:49,597 - Epoch: [92][ 100/ 518] Overall Loss 2.714271 Objective Loss 2.714271 LR 0.000250 Time 0.326165 +2025-05-16 20:24:05,475 - Epoch: [92][ 150/ 518] Overall Loss 2.700210 Objective Loss 2.700210 LR 0.000250 Time 0.323289 +2025-05-16 20:24:21,381 - Epoch: [92][ 200/ 518] Overall Loss 2.702345 Objective Loss 2.702345 LR 0.000250 Time 0.321991 +2025-05-16 20:24:37,295 - Epoch: [92][ 250/ 518] Overall Loss 2.695675 Objective Loss 2.695675 LR 0.000250 Time 0.321245 +2025-05-16 20:24:53,221 - Epoch: [92][ 300/ 518] Overall Loss 2.693299 Objective Loss 2.693299 LR 0.000250 Time 0.320787 +2025-05-16 20:25:09,146 - Epoch: [92][ 350/ 518] Overall Loss 2.696399 Objective Loss 2.696399 LR 0.000250 Time 0.320459 +2025-05-16 20:25:25,078 - Epoch: [92][ 400/ 518] Overall Loss 2.691321 Objective Loss 2.691321 LR 0.000250 Time 0.320230 +2025-05-16 20:25:41,004 - Epoch: [92][ 450/ 518] Overall Loss 2.689628 Objective Loss 2.689628 LR 0.000250 Time 0.320040 +2025-05-16 20:25:56,915 - Epoch: [92][ 500/ 518] Overall Loss 2.690709 Objective Loss 2.690709 LR 0.000250 Time 0.319856 +2025-05-16 20:26:02,548 - Epoch: [92][ 518/ 518] Overall Loss 2.692021 Objective Loss 2.692021 LR 0.000250 Time 0.319615 +2025-05-16 20:26:02,583 - --- validate (epoch=92)----------- +2025-05-16 20:26:02,583 - 4952 samples (32 per mini-batch) +2025-05-16 20:26:02,587 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 20:26:14,874 - Epoch: [92][ 50/ 155] Loss 3.049839 mAP 0.451827 +2025-05-16 20:26:27,416 - Epoch: [92][ 100/ 155] Loss 3.063435 mAP 0.446751 +2025-05-16 20:26:40,582 - Epoch: [92][ 150/ 155] Loss 3.047113 mAP 0.445429 +2025-05-16 20:26:43,458 - Epoch: [92][ 155/ 155] Loss 3.044836 mAP 0.445394 +2025-05-16 20:26:43,499 - ==> mAP: 0.44539 Loss: 
3.045 + +2025-05-16 20:26:43,508 - ==> Best [mAP: 0.450674 vloss: 3.041336 Params: 2177088 on epoch: 88] +2025-05-16 20:26:43,508 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 20:26:43,605 - + +2025-05-16 20:26:43,605 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 20:27:00,295 - Epoch: [93][ 50/ 518] Overall Loss 2.708834 Objective Loss 2.708834 LR 0.000250 Time 0.333733 +2025-05-16 20:27:16,175 - Epoch: [93][ 100/ 518] Overall Loss 2.681462 Objective Loss 2.681462 LR 0.000250 Time 0.325656 +2025-05-16 20:27:32,072 - Epoch: [93][ 150/ 518] Overall Loss 2.682805 Objective Loss 2.682805 LR 0.000250 Time 0.323081 +2025-05-16 20:27:47,982 - Epoch: [93][ 200/ 518] Overall Loss 2.664612 Objective Loss 2.664612 LR 0.000250 Time 0.321855 +2025-05-16 20:28:03,884 - Epoch: [93][ 250/ 518] Overall Loss 2.681134 Objective Loss 2.681134 LR 0.000250 Time 0.321090 +2025-05-16 20:28:19,795 - Epoch: [93][ 300/ 518] Overall Loss 2.671375 Objective Loss 2.671375 LR 0.000250 Time 0.320609 +2025-05-16 20:28:35,718 - Epoch: [93][ 350/ 518] Overall Loss 2.674972 Objective Loss 2.674972 LR 0.000250 Time 0.320300 +2025-05-16 20:28:51,647 - Epoch: [93][ 400/ 518] Overall Loss 2.678865 Objective Loss 2.678865 LR 0.000250 Time 0.320082 +2025-05-16 20:29:07,551 - Epoch: [93][ 450/ 518] Overall Loss 2.682572 Objective Loss 2.682572 LR 0.000250 Time 0.319857 +2025-05-16 20:29:23,465 - Epoch: [93][ 500/ 518] Overall Loss 2.680741 Objective Loss 2.680741 LR 0.000250 Time 0.319698 +2025-05-16 20:29:29,093 - Epoch: [93][ 518/ 518] Overall Loss 2.683115 Objective Loss 2.683115 LR 0.000250 Time 0.319453 +2025-05-16 20:29:29,130 - --- validate (epoch=93)----------- +2025-05-16 20:29:29,131 - 4952 samples (32 per mini-batch) +2025-05-16 20:29:29,134 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 20:29:41,452 - Epoch: [93][ 50/ 155] Loss 
3.056007 mAP 0.434781 +2025-05-16 20:29:53,919 - Epoch: [93][ 100/ 155] Loss 3.023058 mAP 0.454502 +2025-05-16 20:30:07,141 - Epoch: [93][ 150/ 155] Loss 3.024011 mAP 0.454633 +2025-05-16 20:30:10,118 - Epoch: [93][ 155/ 155] Loss 3.024263 mAP 0.455613 +2025-05-16 20:30:10,158 - ==> mAP: 0.45561 Loss: 3.024 + +2025-05-16 20:30:10,167 - ==> Best [mAP: 0.455613 vloss: 3.024263 Params: 2177088 on epoch: 93] +2025-05-16 20:30:10,168 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 20:30:10,290 - + +2025-05-16 20:30:10,290 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 20:30:27,083 - Epoch: [94][ 50/ 518] Overall Loss 2.657135 Objective Loss 2.657135 LR 0.000250 Time 0.335803 +2025-05-16 20:30:42,987 - Epoch: [94][ 100/ 518] Overall Loss 2.669744 Objective Loss 2.669744 LR 0.000250 Time 0.326930 +2025-05-16 20:30:58,884 - Epoch: [94][ 150/ 518] Overall Loss 2.671919 Objective Loss 2.671919 LR 0.000250 Time 0.323932 +2025-05-16 20:31:14,801 - Epoch: [94][ 200/ 518] Overall Loss 2.662197 Objective Loss 2.662197 LR 0.000250 Time 0.322531 +2025-05-16 20:31:30,729 - Epoch: [94][ 250/ 518] Overall Loss 2.668417 Objective Loss 2.668417 LR 0.000250 Time 0.321731 +2025-05-16 20:31:46,639 - Epoch: [94][ 300/ 518] Overall Loss 2.680182 Objective Loss 2.680182 LR 0.000250 Time 0.321141 +2025-05-16 20:32:02,552 - Epoch: [94][ 350/ 518] Overall Loss 2.672814 Objective Loss 2.672814 LR 0.000250 Time 0.320726 +2025-05-16 20:32:18,482 - Epoch: [94][ 400/ 518] Overall Loss 2.666155 Objective Loss 2.666155 LR 0.000250 Time 0.320456 +2025-05-16 20:32:34,419 - Epoch: [94][ 450/ 518] Overall Loss 2.675302 Objective Loss 2.675302 LR 0.000250 Time 0.320266 +2025-05-16 20:32:50,345 - Epoch: [94][ 500/ 518] Overall Loss 2.678878 Objective Loss 2.678878 LR 0.000250 Time 0.320089 +2025-05-16 20:32:55,968 - Epoch: [94][ 518/ 518] Overall Loss 2.680659 Objective Loss 2.680659 LR 0.000250 Time 0.319819 +2025-05-16 
20:32:56,006 - --- validate (epoch=94)----------- +2025-05-16 20:32:56,007 - 4952 samples (32 per mini-batch) +2025-05-16 20:32:56,010 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 20:33:08,350 - Epoch: [94][ 50/ 155] Loss 3.045530 mAP 0.447848 +2025-05-16 20:33:20,692 - Epoch: [94][ 100/ 155] Loss 3.013101 mAP 0.454399 +2025-05-16 20:33:33,798 - Epoch: [94][ 150/ 155] Loss 3.018234 mAP 0.451911 +2025-05-16 20:33:36,736 - Epoch: [94][ 155/ 155] Loss 3.017550 mAP 0.451185 +2025-05-16 20:33:36,774 - ==> mAP: 0.45118 Loss: 3.018 + +2025-05-16 20:33:36,783 - ==> Best [mAP: 0.455613 vloss: 3.024263 Params: 2177088 on epoch: 93] +2025-05-16 20:33:36,783 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 20:33:36,881 - + +2025-05-16 20:33:36,881 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 20:33:53,818 - Epoch: [95][ 50/ 518] Overall Loss 2.664434 Objective Loss 2.664434 LR 0.000250 Time 0.338674 +2025-05-16 20:34:09,719 - Epoch: [95][ 100/ 518] Overall Loss 2.670439 Objective Loss 2.670439 LR 0.000250 Time 0.328343 +2025-05-16 20:34:25,601 - Epoch: [95][ 150/ 518] Overall Loss 2.664116 Objective Loss 2.664116 LR 0.000250 Time 0.324766 +2025-05-16 20:34:41,510 - Epoch: [95][ 200/ 518] Overall Loss 2.664476 Objective Loss 2.664476 LR 0.000250 Time 0.323117 +2025-05-16 20:34:57,415 - Epoch: [95][ 250/ 518] Overall Loss 2.669500 Objective Loss 2.669500 LR 0.000250 Time 0.322106 +2025-05-16 20:35:13,309 - Epoch: [95][ 300/ 518] Overall Loss 2.670794 Objective Loss 2.670794 LR 0.000250 Time 0.321401 +2025-05-16 20:35:29,237 - Epoch: [95][ 350/ 518] Overall Loss 2.674401 Objective Loss 2.674401 LR 0.000250 Time 0.320991 +2025-05-16 20:35:45,146 - Epoch: [95][ 400/ 518] Overall Loss 2.680629 Objective Loss 2.680629 LR 0.000250 Time 0.320638 +2025-05-16 20:36:01,049 - Epoch: [95][ 450/ 518] Overall Loss 2.685931 
Objective Loss 2.685931 LR 0.000250 Time 0.320349
+2025-05-16 20:36:16,975 - Epoch: [95][ 500/ 518] Overall Loss 2.682571 Objective Loss 2.682571 LR 0.000250 Time 0.320164
+2025-05-16 20:36:22,588 - Epoch: [95][ 518/ 518] Overall Loss 2.683544 Objective Loss 2.683544 LR 0.000250 Time 0.319875
+2025-05-16 20:36:22,624 - --- validate (epoch=95)-----------
+2025-05-16 20:36:22,624 - 4952 samples (32 per mini-batch)
+2025-05-16 20:36:22,628 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 20:36:34,830 - Epoch: [95][ 50/ 155] Loss 3.097206 mAP 0.468207
+2025-05-16 20:36:47,237 - Epoch: [95][ 100/ 155] Loss 3.065377 mAP 0.462233
+2025-05-16 20:37:00,378 - Epoch: [95][ 150/ 155] Loss 3.065567 mAP 0.449529
+2025-05-16 20:37:03,366 - Epoch: [95][ 155/ 155] Loss 3.059061 mAP 0.450953
+2025-05-16 20:37:03,400 - ==> mAP: 0.45095 Loss: 3.059
+
+2025-05-16 20:37:03,410 - ==> Best [mAP: 0.455613 vloss: 3.024263 Params: 2177088 on epoch: 93]
+2025-05-16 20:37:03,410 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 20:37:03,508 - 
+
+2025-05-16 20:37:03,508 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 20:37:20,380 - Epoch: [96][ 50/ 518] Overall Loss 2.634568 Objective Loss 2.634568 LR 0.000250 Time 0.337360
+2025-05-16 20:37:36,260 - Epoch: [96][ 100/ 518] Overall Loss 2.636991 Objective Loss 2.636991 LR 0.000250 Time 0.327470
+2025-05-16 20:37:52,134 - Epoch: [96][ 150/ 518] Overall Loss 2.641326 Objective Loss 2.641326 LR 0.000250 Time 0.324138
+2025-05-16 20:38:08,053 - Epoch: [96][ 200/ 518] Overall Loss 2.644537 Objective Loss 2.644537 LR 0.000250 Time 0.322690
+2025-05-16 20:38:23,963 - Epoch: [96][ 250/ 518] Overall Loss 2.663291 Objective Loss 2.663291 LR 0.000250 Time 0.321789
+2025-05-16 20:38:39,874 - Epoch: [96][ 300/ 518] Overall Loss 2.673974 Objective Loss 2.673974 LR 0.000250 Time 0.321192
+2025-05-16 20:38:55,786 - Epoch: [96][ 350/ 518] Overall Loss 2.672374 Objective Loss 2.672374 LR 0.000250 Time 0.320766
+2025-05-16 20:39:11,700 - Epoch: [96][ 400/ 518] Overall Loss 2.672500 Objective Loss 2.672500 LR 0.000250 Time 0.320453
+2025-05-16 20:39:27,620 - Epoch: [96][ 450/ 518] Overall Loss 2.668447 Objective Loss 2.668447 LR 0.000250 Time 0.320223
+2025-05-16 20:39:43,542 - Epoch: [96][ 500/ 518] Overall Loss 2.670285 Objective Loss 2.670285 LR 0.000250 Time 0.320041
+2025-05-16 20:39:49,170 - Epoch: [96][ 518/ 518] Overall Loss 2.672065 Objective Loss 2.672065 LR 0.000250 Time 0.319784
+2025-05-16 20:39:49,207 - --- validate (epoch=96)-----------
+2025-05-16 20:39:49,208 - 4952 samples (32 per mini-batch)
+2025-05-16 20:39:49,212 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 20:40:01,420 - Epoch: [96][ 50/ 155] Loss 3.033137 mAP 0.467166
+2025-05-16 20:40:13,558 - Epoch: [96][ 100/ 155] Loss 3.021518 mAP 0.464131
+2025-05-16 20:40:26,391 - Epoch: [96][ 150/ 155] Loss 3.027631 mAP 0.457881
+2025-05-16 20:40:29,270 - Epoch: [96][ 155/ 155] Loss 3.026206 mAP 0.458293
+2025-05-16 20:40:29,310 - ==> mAP: 0.45829 Loss: 3.026
+
+2025-05-16 20:40:29,319 - ==> Best [mAP: 0.458293 vloss: 3.026206 Params: 2177088 on epoch: 96]
+2025-05-16 20:40:29,319 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 20:40:29,438 - 
+
+2025-05-16 20:40:29,438 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 20:40:46,266 - Epoch: [97][ 50/ 518] Overall Loss 2.682047 Objective Loss 2.682047 LR 0.000250 Time 0.336482
+2025-05-16 20:41:02,139 - Epoch: [97][ 100/ 518] Overall Loss 2.670101 Objective Loss 2.670101 LR 0.000250 Time 0.326972
+2025-05-16 20:41:18,005 - Epoch: [97][ 150/ 518] Overall Loss 2.681131 Objective Loss 2.681131 LR 0.000250 Time 0.323744
+2025-05-16 20:41:33,897 - Epoch: [97][ 200/ 518] Overall Loss 2.670568 Objective Loss 2.670568 LR 0.000250 Time 0.322262
+2025-05-16 20:41:49,803 - Epoch: [97][ 250/ 518] Overall Loss 2.673824 Objective Loss 2.673824 LR 0.000250 Time 0.321434
+2025-05-16 20:42:05,711 - Epoch: [97][ 300/ 518] Overall Loss 2.673487 Objective Loss 2.673487 LR 0.000250 Time 0.320884
+2025-05-16 20:42:21,610 - Epoch: [97][ 350/ 518] Overall Loss 2.676727 Objective Loss 2.676727 LR 0.000250 Time 0.320465
+2025-05-16 20:42:37,559 - Epoch: [97][ 400/ 518] Overall Loss 2.678533 Objective Loss 2.678533 LR 0.000250 Time 0.320279
+2025-05-16 20:42:53,495 - Epoch: [97][ 450/ 518] Overall Loss 2.680878 Objective Loss 2.680878 LR 0.000250 Time 0.320104
+2025-05-16 20:43:09,446 - Epoch: [97][ 500/ 518] Overall Loss 2.680388 Objective Loss 2.680388 LR 0.000250 Time 0.319994
+2025-05-16 20:43:15,083 - Epoch: [97][ 518/ 518] Overall Loss 2.683048 Objective Loss 2.683048 LR 0.000250 Time 0.319757
+2025-05-16 20:43:15,121 - --- validate (epoch=97)-----------
+2025-05-16 20:43:15,122 - 4952 samples (32 per mini-batch)
+2025-05-16 20:43:15,125 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 20:43:27,151 - Epoch: [97][ 50/ 155] Loss 3.028047 mAP 0.464185
+2025-05-16 20:43:39,514 - Epoch: [97][ 100/ 155] Loss 3.041338 mAP 0.439693
+2025-05-16 20:43:52,430 - Epoch: [97][ 150/ 155] Loss 3.042887 mAP 0.444110
+2025-05-16 20:43:55,149 - Epoch: [97][ 155/ 155] Loss 3.036601 mAP 0.444952
+2025-05-16 20:43:55,183 - ==> mAP: 0.44495 Loss: 3.037
+
+2025-05-16 20:43:55,192 - ==> Best [mAP: 0.458293 vloss: 3.026206 Params: 2177088 on epoch: 96]
+2025-05-16 20:43:55,192 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 20:43:55,289 - 
+
+2025-05-16 20:43:55,289 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 20:44:12,049 - Epoch: [98][ 50/ 518] Overall Loss 2.613728 Objective Loss 2.613728 LR 0.000250 Time 0.335113
+2025-05-16 20:44:27,939 - Epoch: [98][ 100/ 518] Overall Loss 2.587691 Objective Loss 2.587691 LR 0.000250 Time 0.326448
+2025-05-16 20:44:43,828 - Epoch: [98][ 150/ 518] Overall Loss 2.609515 Objective Loss 2.609515 LR 0.000250 Time 0.323551
+2025-05-16 20:44:59,769 - Epoch: [98][ 200/ 518] Overall Loss 2.621123 Objective Loss 2.621123 LR 0.000250 Time 0.322366
+2025-05-16 20:45:15,696 - Epoch: [98][ 250/ 518] Overall Loss 2.630524 Objective Loss 2.630524 LR 0.000250 Time 0.321599
+2025-05-16 20:45:31,641 - Epoch: [98][ 300/ 518] Overall Loss 2.639037 Objective Loss 2.639037 LR 0.000250 Time 0.321148
+2025-05-16 20:45:47,586 - Epoch: [98][ 350/ 518] Overall Loss 2.650555 Objective Loss 2.650555 LR 0.000250 Time 0.320823
+2025-05-16 20:46:03,526 - Epoch: [98][ 400/ 518] Overall Loss 2.656115 Objective Loss 2.656115 LR 0.000250 Time 0.320569
+2025-05-16 20:46:19,466 - Epoch: [98][ 450/ 518] Overall Loss 2.656599 Objective Loss 2.656599 LR 0.000250 Time 0.320371
+2025-05-16 20:46:35,422 - Epoch: [98][ 500/ 518] Overall Loss 2.657508 Objective Loss 2.657508 LR 0.000250 Time 0.320246
+2025-05-16 20:46:41,064 - Epoch: [98][ 518/ 518] Overall Loss 2.659654 Objective Loss 2.659654 LR 0.000250 Time 0.320007
+2025-05-16 20:46:41,101 - --- validate (epoch=98)-----------
+2025-05-16 20:46:41,102 - 4952 samples (32 per mini-batch)
+2025-05-16 20:46:41,105 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 20:46:53,564 - Epoch: [98][ 50/ 155] Loss 3.082953 mAP 0.453467
+2025-05-16 20:47:06,102 - Epoch: [98][ 100/ 155] Loss 3.077725 mAP 0.450279
+2025-05-16 20:47:19,244 - Epoch: [98][ 150/ 155] Loss 3.044547 mAP 0.459296
+2025-05-16 20:47:22,168 - Epoch: [98][ 155/ 155] Loss 3.048562 mAP 0.457447
+2025-05-16 20:47:22,202 - ==> mAP: 0.45745 Loss: 3.049
+
+2025-05-16 20:47:22,211 - ==> Best [mAP: 0.458293 vloss: 3.026206 Params: 2177088 on epoch: 96]
+2025-05-16 20:47:22,211 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 20:47:22,313 - 
+
+2025-05-16 20:47:22,313 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 20:47:38,985 - Epoch: [99][ 50/ 518] Overall Loss 2.705727 Objective Loss 2.705727 LR 0.000250 Time 0.333369
+2025-05-16 20:47:54,865 - Epoch: [99][ 100/ 518] Overall Loss 2.703409 Objective Loss 2.703409 LR 0.000250 Time 0.325477
+2025-05-16 20:48:10,755 - Epoch: [99][ 150/ 518] Overall Loss 2.697789 Objective Loss 2.697789 LR 0.000250 Time 0.322909
+2025-05-16 20:48:26,689 - Epoch: [99][ 200/ 518] Overall Loss 2.689902 Objective Loss 2.689902 LR 0.000250 Time 0.321850
+2025-05-16 20:48:42,621 - Epoch: [99][ 250/ 518] Overall Loss 2.690705 Objective Loss 2.690705 LR 0.000250 Time 0.321207
+2025-05-16 20:48:58,531 - Epoch: [99][ 300/ 518] Overall Loss 2.682258 Objective Loss 2.682258 LR 0.000250 Time 0.320701
+2025-05-16 20:49:14,426 - Epoch: [99][ 350/ 518] Overall Loss 2.682255 Objective Loss 2.682255 LR 0.000250 Time 0.320300
+2025-05-16 20:49:30,325 - Epoch: [99][ 400/ 518] Overall Loss 2.682742 Objective Loss 2.682742 LR 0.000250 Time 0.320008
+2025-05-16 20:49:46,238 - Epoch: [99][ 450/ 518] Overall Loss 2.675387 Objective Loss 2.675387 LR 0.000250 Time 0.319810
+2025-05-16 20:50:02,183 - Epoch: [99][ 500/ 518] Overall Loss 2.676028 Objective Loss 2.676028 LR 0.000250 Time 0.319718
+2025-05-16 20:50:07,801 - Epoch: [99][ 518/ 518] Overall Loss 2.681382 Objective Loss 2.681382 LR 0.000250 Time 0.319453
+2025-05-16 20:50:07,837 - --- validate (epoch=99)-----------
+2025-05-16 20:50:07,838 - 4952 samples (32 per mini-batch)
+2025-05-16 20:50:07,841 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 20:50:20,160 - Epoch: [99][ 50/ 155] Loss 3.033231 mAP 0.467639
+2025-05-16 20:50:32,770 - Epoch: [99][ 100/ 155] Loss 3.031578 mAP 0.464301
+2025-05-16 20:50:45,823 - Epoch: [99][ 150/ 155] Loss 3.012860 mAP 0.466075
+2025-05-16 20:50:48,631 - Epoch: [99][ 155/ 155] Loss 3.017910 mAP 0.465308
+2025-05-16 20:50:48,668 - ==> mAP: 0.46531 Loss: 3.018
+
+2025-05-16 20:50:48,677 - ==> Best [mAP: 0.465308 vloss: 3.017910 Params: 2177088 on epoch: 99]
+2025-05-16 20:50:48,677 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 20:50:48,947 - 
+
+2025-05-16 20:50:48,947 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 20:51:05,812 - Epoch: [100][ 50/ 518] Overall Loss 2.645199 Objective Loss 2.645199 LR 0.000063 Time 0.337227
+2025-05-16 20:51:21,669 - Epoch: [100][ 100/ 518] Overall Loss 2.642311 Objective Loss 2.642311 LR 0.000063 Time 0.327179
+2025-05-16 20:51:37,545 - Epoch: [100][ 150/ 518] Overall Loss 2.631884 Objective Loss 2.631884 LR 0.000063 Time 0.323949
+2025-05-16 20:51:53,458 - Epoch: [100][ 200/ 518] Overall Loss 2.630177 Objective Loss 2.630177 LR 0.000063 Time 0.322526
+2025-05-16 20:52:09,401 - Epoch: [100][ 250/ 518] Overall Loss 2.612682 Objective Loss 2.612682 LR 0.000063 Time 0.321789
+2025-05-16 20:52:25,302 - Epoch: [100][ 300/ 518] Overall Loss 2.613194 Objective Loss 2.613194 LR 0.000063 Time 0.321157
+2025-05-16 20:52:41,211 - Epoch: [100][ 350/ 518] Overall Loss 2.622718 Objective Loss 2.622718 LR 0.000063 Time 0.320729
+2025-05-16 20:52:57,122 - Epoch: [100][ 400/ 518] Overall Loss 2.623402 Objective Loss 2.623402 LR 0.000063 Time 0.320414
+2025-05-16 20:53:13,049 - Epoch: [100][ 450/ 518] Overall Loss 2.623185 Objective Loss 2.623185 LR 0.000063 Time 0.320205
+2025-05-16 20:53:28,958 - Epoch: [100][ 500/ 518] Overall Loss 2.625745 Objective Loss 2.625745 LR 0.000063 Time 0.320000
+2025-05-16 20:53:34,575 - Epoch: [100][ 518/ 518] Overall Loss 2.624261 Objective Loss 2.624261 LR 0.000063 Time 0.319722
+2025-05-16 20:53:34,615 - --- validate (epoch=100)-----------
+2025-05-16 20:53:34,615 - 4952 samples (32 per mini-batch)
+2025-05-16 20:53:34,619 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 20:53:46,702 - Epoch: [100][ 50/ 155] Loss 2.928338 mAP 0.479721
+2025-05-16 20:53:59,051 - Epoch: [100][ 100/ 155] Loss 2.963509 mAP 0.461471
+2025-05-16 20:54:11,829 - Epoch: [100][ 150/ 155] Loss 2.974343 mAP 0.464882
+2025-05-16 20:54:14,726 - Epoch: [100][ 155/ 155] Loss 2.977054 mAP 0.464248
+2025-05-16 20:54:14,766 - ==> mAP: 0.46425 Loss: 2.977
+
+2025-05-16 20:54:14,775 - ==> Best [mAP: 0.465308 vloss: 3.017910 Params: 2177088 on epoch: 99]
+2025-05-16 20:54:14,775 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 20:54:14,876 - 
+
+2025-05-16 20:54:14,877 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 20:54:31,668 - Epoch: [101][ 50/ 518] Overall Loss 2.598696 Objective Loss 2.598696 LR 0.000063 Time 0.335765
+2025-05-16 20:54:47,558 - Epoch: [101][ 100/ 518] Overall Loss 2.597596 Objective Loss 2.597596 LR 0.000063 Time 0.326768
+2025-05-16 20:55:03,468 - Epoch: [101][ 150/ 518] Overall Loss 2.592724 Objective Loss 2.592724 LR 0.000063 Time 0.323911
+2025-05-16 20:55:19,386 - Epoch: [101][ 200/ 518] Overall Loss 2.586045 Objective Loss 2.586045 LR 0.000063 Time 0.322517
+2025-05-16 20:55:35,291 - Epoch: [101][ 250/ 518] Overall Loss 2.593701 Objective Loss 2.593701 LR 0.000063 Time 0.321632
+2025-05-16 20:55:51,200 - Epoch: [101][ 300/ 518] Overall Loss 2.594482 Objective Loss 2.594482 LR 0.000063 Time 0.321052
+2025-05-16 20:56:07,108 - Epoch: [101][ 350/ 518] Overall Loss 2.593530 Objective Loss 2.593530 LR 0.000063 Time 0.320636
+2025-05-16 20:56:23,016 - Epoch: [101][ 400/ 518] Overall Loss 2.605417 Objective Loss 2.605417 LR 0.000063 Time 0.320324
+2025-05-16 20:56:38,923 - Epoch: [101][ 450/ 518] Overall Loss 2.608925 Objective Loss 2.608925 LR 0.000063 Time 0.320080
+2025-05-16 20:56:54,849 - Epoch: [101][ 500/ 518] Overall Loss 2.604782 Objective Loss 2.604782 LR 0.000063 Time 0.319921
+2025-05-16 20:57:00,485 - Epoch: [101][ 518/ 518] Overall Loss 2.605784 Objective Loss 2.605784 LR 0.000063 Time 0.319683
+2025-05-16 20:57:00,523 - --- validate (epoch=101)-----------
+2025-05-16 20:57:00,524 - 4952 samples (32 per mini-batch)
+2025-05-16 20:57:00,527 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 20:57:12,897 - Epoch: [101][ 50/ 155] Loss 2.980608 mAP 0.469958
+2025-05-16 20:57:25,275 - Epoch: [101][ 100/ 155] Loss 2.999141 mAP 0.458428
+2025-05-16 20:57:38,429 - Epoch: [101][ 150/ 155] Loss 2.969227 mAP 0.464146
+2025-05-16 20:57:41,393 - Epoch: [101][ 155/ 155] Loss 2.974564 mAP 0.464768
+2025-05-16 20:57:41,431 - ==> mAP: 0.46477 Loss: 2.975
+
+2025-05-16 20:57:41,440 - ==> Best [mAP: 0.465308 vloss: 3.017910 Params: 2177088 on epoch: 99]
+2025-05-16 20:57:41,440 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 20:57:41,544 - 
+
+2025-05-16 20:57:41,544 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 20:57:58,218 - Epoch: [102][ 50/ 518] Overall Loss 2.673404 Objective Loss 2.673404 LR 0.000063 Time 0.333399
+2025-05-16 20:58:14,090 - Epoch: [102][ 100/ 518] Overall Loss 2.655085 Objective Loss 2.655085 LR 0.000063 Time 0.325406
+2025-05-16 20:58:30,000 - Epoch: [102][ 150/ 518] Overall Loss 2.640157 Objective Loss 2.640157 LR 0.000063 Time 0.323002
+2025-05-16 20:58:45,928 - Epoch: [102][ 200/ 518] Overall Loss 2.622168 Objective Loss 2.622168 LR 0.000063 Time 0.321890
+2025-05-16 20:59:01,865 - Epoch: [102][ 250/ 518] Overall Loss 2.612922 Objective Loss 2.612922 LR 0.000063 Time 0.321257
+2025-05-16 20:59:17,805 - Epoch: [102][ 300/ 518] Overall Loss 2.615103 Objective Loss 2.615103 LR 0.000063 Time 0.320843
+2025-05-16 20:59:33,722 - Epoch: [102][ 350/ 518] Overall Loss 2.606684 Objective Loss 2.606684 LR 0.000063 Time 0.320484
+2025-05-16 20:59:49,675 - Epoch: [102][ 400/ 518] Overall Loss 2.603404 Objective Loss 2.603404 LR 0.000063 Time 0.320303
+2025-05-16 21:00:05,638 - Epoch: [102][ 450/ 518] Overall Loss 2.602128 Objective Loss 2.602128 LR 0.000063 Time 0.320186
+2025-05-16 21:00:21,584 - Epoch: [102][ 500/ 518] Overall Loss 2.597114 Objective Loss 2.597114 LR 0.000063 Time 0.320059
+2025-05-16 21:00:27,200 - Epoch: [102][ 518/ 518] Overall Loss 2.597672 Objective Loss 2.597672 LR 0.000063 Time 0.319777
+2025-05-16 21:00:27,238 - --- validate (epoch=102)-----------
+2025-05-16 21:00:27,239 - 4952 samples (32 per mini-batch)
+2025-05-16 21:00:27,242 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:00:39,392 - Epoch: [102][ 50/ 155] Loss 2.973786 mAP 0.457852
+2025-05-16 21:00:51,948 - Epoch: [102][ 100/ 155] Loss 2.971209 mAP 0.468591
+2025-05-16 21:01:04,761 - Epoch: [102][ 150/ 155] Loss 2.974142 mAP 0.465454
+2025-05-16 21:01:07,581 - Epoch: [102][ 155/ 155] Loss 2.973819 mAP 0.465562
+2025-05-16 21:01:07,615 - ==> mAP: 0.46556 Loss: 2.974
+
+2025-05-16 21:01:07,624 - ==> Best [mAP: 0.465562 vloss: 2.973819 Params: 2177088 on epoch: 102]
+2025-05-16 21:01:07,624 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:01:07,765 - 
+
+2025-05-16 21:01:07,765 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:01:24,422 - Epoch: [103][ 50/ 518] Overall Loss 2.646351 Objective Loss 2.646351 LR 0.000063 Time 0.333075
+2025-05-16 21:01:40,287 - Epoch: [103][ 100/ 518] Overall Loss 2.592988 Objective Loss 2.592988 LR 0.000063 Time 0.325174
+2025-05-16 21:01:56,173 - Epoch: [103][ 150/ 518] Overall Loss 2.599328 Objective Loss 2.599328 LR 0.000063 Time 0.322682
+2025-05-16 21:02:12,063 - Epoch: [103][ 200/ 518] Overall Loss 2.600071 Objective Loss 2.600071 LR 0.000063 Time 0.321454
+2025-05-16 21:02:27,978 - Epoch: [103][ 250/ 518] Overall Loss 2.600429 Objective Loss 2.600429 LR 0.000063 Time 0.320821
+2025-05-16 21:02:43,887 - Epoch: [103][ 300/ 518] Overall Loss 2.598044 Objective Loss 2.598044 LR 0.000063 Time 0.320379
+2025-05-16 21:02:59,793 - Epoch: [103][ 350/ 518] Overall Loss 2.593376 Objective Loss 2.593376 LR 0.000063 Time 0.320052
+2025-05-16 21:03:15,723 - Epoch: [103][ 400/ 518] Overall Loss 2.599118 Objective Loss 2.599118 LR 0.000063 Time 0.319869
+2025-05-16 21:03:31,656 - Epoch: [103][ 450/ 518] Overall Loss 2.595100 Objective Loss 2.595100 LR 0.000063 Time 0.319733
+2025-05-16 21:03:47,578 - Epoch: [103][ 500/ 518] Overall Loss 2.600602 Objective Loss 2.600602 LR 0.000063 Time 0.319603
+2025-05-16 21:03:53,180 - Epoch: [103][ 518/ 518] Overall Loss 2.603205 Objective Loss 2.603205 LR 0.000063 Time 0.319310
+2025-05-16 21:03:53,220 - --- validate (epoch=103)-----------
+2025-05-16 21:03:53,221 - 4952 samples (32 per mini-batch)
+2025-05-16 21:03:53,224 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:04:05,743 - Epoch: [103][ 50/ 155] Loss 2.956516 mAP 0.480112
+2025-05-16 21:04:18,322 - Epoch: [103][ 100/ 155] Loss 2.948256 mAP 0.476713
+2025-05-16 21:04:31,603 - Epoch: [103][ 150/ 155] Loss 2.961737 mAP 0.476622
+2025-05-16 21:04:34,602 - Epoch: [103][ 155/ 155] Loss 2.962095 mAP 0.476832
+2025-05-16 21:04:34,641 - ==> mAP: 0.47683 Loss: 2.962
+
+2025-05-16 21:04:34,651 - ==> Best [mAP: 0.476832 vloss: 2.962095 Params: 2177088 on epoch: 103]
+2025-05-16 21:04:34,651 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:04:34,787 - 
+
+2025-05-16 21:04:34,787 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:04:51,720 - Epoch: [104][ 50/ 518] Overall Loss 2.609584 Objective Loss 2.609584 LR 0.000063 Time 0.338588
+2025-05-16 21:05:07,573 - Epoch: [104][ 100/ 518] Overall Loss 2.608729 Objective Loss 2.608729 LR 0.000063 Time 0.327818
+2025-05-16 21:05:23,467 - Epoch: [104][ 150/ 518] Overall Loss 2.585045 Objective Loss 2.585045 LR 0.000063 Time 0.324496
+2025-05-16 21:05:39,372 - Epoch: [104][ 200/ 518] Overall Loss 2.598831 Objective Loss 2.598831 LR 0.000063 Time 0.322893
+2025-05-16 21:05:55,270 - Epoch: [104][ 250/ 518] Overall Loss 2.593193 Objective Loss 2.593193 LR 0.000063 Time 0.321901
+2025-05-16 21:06:11,175 - Epoch: [104][ 300/ 518] Overall Loss 2.591003 Objective Loss 2.591003 LR 0.000063 Time 0.321267
+2025-05-16 21:06:27,102 - Epoch: [104][ 350/ 518] Overall Loss 2.585644 Objective Loss 2.585644 LR 0.000063 Time 0.320875
+2025-05-16 21:06:43,022 - Epoch: [104][ 400/ 518] Overall Loss 2.586224 Objective Loss 2.586224 LR 0.000063 Time 0.320562
+2025-05-16 21:06:58,924 - Epoch: [104][ 450/ 518] Overall Loss 2.578689 Objective Loss 2.578689 LR 0.000063 Time 0.320279
+2025-05-16 21:07:14,836 - Epoch: [104][ 500/ 518] Overall Loss 2.578655 Objective Loss 2.578655 LR 0.000063 Time 0.320075
+2025-05-16 21:07:20,463 - Epoch: [104][ 518/ 518] Overall Loss 2.578643 Objective Loss 2.578643 LR 0.000063 Time 0.319814
+2025-05-16 21:07:20,499 - --- validate (epoch=104)-----------
+2025-05-16 21:07:20,500 - 4952 samples (32 per mini-batch)
+2025-05-16 21:07:20,503 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:07:32,840 - Epoch: [104][ 50/ 155] Loss 3.029371 mAP 0.460961
+2025-05-16 21:07:45,145 - Epoch: [104][ 100/ 155] Loss 2.979258 mAP 0.468442
+2025-05-16 21:07:58,223 - Epoch: [104][ 150/ 155] Loss 2.974270 mAP 0.470940
+2025-05-16 21:08:01,086 - Epoch: [104][ 155/ 155] Loss 2.976019 mAP 0.470876
+2025-05-16 21:08:01,128 - ==> mAP: 0.47088 Loss: 2.976
+
+2025-05-16 21:08:01,137 - ==> Best [mAP: 0.476832 vloss: 2.962095 Params: 2177088 on epoch: 103]
+2025-05-16 21:08:01,137 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:08:01,231 - 
+
+2025-05-16 21:08:01,232 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:08:17,991 - Epoch: [105][ 50/ 518] Overall Loss 2.573257 Objective Loss 2.573257 LR 0.000063 Time 0.335121
+2025-05-16 21:08:33,867 - Epoch: [105][ 100/ 518] Overall Loss 2.575628 Objective Loss 2.575628 LR 0.000063 Time 0.326306
+2025-05-16 21:08:49,741 - Epoch: [105][ 150/ 518] Overall Loss 2.578134 Objective Loss 2.578134 LR 0.000063 Time 0.323360
+2025-05-16 21:09:05,645 - Epoch: [105][ 200/ 518] Overall Loss 2.578533 Objective Loss 2.578533 LR 0.000063 Time 0.322034
+2025-05-16 21:09:21,574 - Epoch: [105][ 250/ 518] Overall Loss 2.574781 Objective Loss 2.574781 LR 0.000063 Time 0.321340
+2025-05-16 21:09:37,514 - Epoch: [105][ 300/ 518] Overall Loss 2.577367 Objective Loss 2.577367 LR 0.000063 Time 0.320917
+2025-05-16 21:09:53,462 - Epoch: [105][ 350/ 518] Overall Loss 2.579285 Objective Loss 2.579285 LR 0.000063 Time 0.320635
+2025-05-16 21:10:09,393 - Epoch: [105][ 400/ 518] Overall Loss 2.585447 Objective Loss 2.585447 LR 0.000063 Time 0.320381
+2025-05-16 21:10:25,323 - Epoch: [105][ 450/ 518] Overall Loss 2.585494 Objective Loss 2.585494 LR 0.000063 Time 0.320182
+2025-05-16 21:10:41,260 - Epoch: [105][ 500/ 518] Overall Loss 2.588151 Objective Loss 2.588151 LR 0.000063 Time 0.320037
+2025-05-16 21:10:46,895 - Epoch: [105][ 518/ 518] Overall Loss 2.587859 Objective Loss 2.587859 LR 0.000063 Time 0.319793
+2025-05-16 21:10:46,929 - --- validate (epoch=105)-----------
+2025-05-16 21:10:46,930 - 4952 samples (32 per mini-batch)
+2025-05-16 21:10:46,933 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:10:59,097 - Epoch: [105][ 50/ 155] Loss 2.968014 mAP 0.455913
+2025-05-16 21:11:11,555 - Epoch: [105][ 100/ 155] Loss 2.980422 mAP 0.464048
+2025-05-16 21:11:24,340 - Epoch: [105][ 150/ 155] Loss 2.970443 mAP 0.467949
+2025-05-16 21:11:27,196 - Epoch: [105][ 155/ 155] Loss 2.970624 mAP 0.468166
+2025-05-16 21:11:27,235 - ==> mAP: 0.46817 Loss: 2.971
+
+2025-05-16 21:11:27,243 - ==> Best [mAP: 0.476832 vloss: 2.962095 Params: 2177088 on epoch: 103]
+2025-05-16 21:11:27,243 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:11:27,339 - 
+
+2025-05-16 21:11:27,339 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:11:44,052 - Epoch: [106][ 50/ 518] Overall Loss 2.557789 Objective Loss 2.557789 LR 0.000063 Time 0.334192
+2025-05-16 21:11:59,913 - Epoch: [106][ 100/ 518] Overall Loss 2.550399 Objective Loss 2.550399 LR 0.000063 Time 0.325686
+2025-05-16 21:12:15,804 - Epoch: [106][ 150/ 518] Overall Loss 2.557734 Objective Loss 2.557734 LR 0.000063 Time 0.323062
+2025-05-16 21:12:31,712 - Epoch: [106][ 200/ 518] Overall Loss 2.550612 Objective Loss 2.550612 LR 0.000063 Time 0.321832
+2025-05-16 21:12:47,649 - Epoch: [106][ 250/ 518] Overall Loss 2.554968 Objective Loss 2.554968 LR 0.000063 Time 0.321209
+2025-05-16 21:13:03,550 - Epoch: [106][ 300/ 518] Overall Loss 2.567126 Objective Loss 2.567126 LR 0.000063 Time 0.320673
+2025-05-16 21:13:19,470 - Epoch: [106][ 350/ 518] Overall Loss 2.579942 Objective Loss 2.579942 LR 0.000063 Time 0.320346
+2025-05-16 21:13:35,390 - Epoch: [106][ 400/ 518] Overall Loss 2.584739 Objective Loss 2.584739 LR 0.000063 Time 0.320100
+2025-05-16 21:13:51,315 - Epoch: [106][ 450/ 518] Overall Loss 2.578321 Objective Loss 2.578321 LR 0.000063 Time 0.319921
+2025-05-16 21:14:07,254 - Epoch: [106][ 500/ 518] Overall Loss 2.570829 Objective Loss 2.570829 LR 0.000063 Time 0.319805
+2025-05-16 21:14:12,886 - Epoch: [106][ 518/ 518] Overall Loss 2.570473 Objective Loss 2.570473 LR 0.000063 Time 0.319564
+2025-05-16 21:14:12,924 - --- validate (epoch=106)-----------
+2025-05-16 21:14:12,925 - 4952 samples (32 per mini-batch)
+2025-05-16 21:14:12,928 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:14:25,331 - Epoch: [106][ 50/ 155] Loss 2.949397 mAP 0.477174
+2025-05-16 21:14:37,677 - Epoch: [106][ 100/ 155] Loss 2.971089 mAP 0.468532
+2025-05-16 21:14:50,850 - Epoch: [106][ 150/ 155] Loss 2.972324 mAP 0.471602
+2025-05-16 21:14:53,839 - Epoch: [106][ 155/ 155] Loss 2.967786 mAP 0.472321
+2025-05-16 21:14:53,880 - ==> mAP: 0.47232 Loss: 2.968
+
+2025-05-16 21:14:53,889 - ==> Best [mAP: 0.476832 vloss: 2.962095 Params: 2177088 on epoch: 103]
+2025-05-16 21:14:53,889 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:14:53,986 - 
+
+2025-05-16 21:14:53,986 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:15:10,731 - Epoch: [107][ 50/ 518] Overall Loss 2.563675 Objective Loss 2.563675 LR 0.000063 Time 0.334832
+2025-05-16 21:15:26,614 - Epoch: [107][ 100/ 518] Overall Loss 2.562722 Objective Loss 2.562722 LR 0.000063 Time 0.326234
+2025-05-16 21:15:42,507 - Epoch: [107][ 150/ 518] Overall Loss 2.568547 Objective Loss 2.568547 LR 0.000063 Time 0.323436
+2025-05-16 21:15:58,442 - Epoch: [107][ 200/ 518] Overall Loss 2.575551 Objective Loss 2.575551 LR 0.000063 Time 0.322247
+2025-05-16 21:16:14,367 - Epoch: [107][ 250/ 518] Overall Loss 2.567767 Objective Loss 2.567767 LR 0.000063 Time 0.321493
+2025-05-16 21:16:30,296 - Epoch: [107][ 300/ 518] Overall Loss 2.561867 Objective Loss 2.561867 LR 0.000063 Time 0.321005
+2025-05-16 21:16:46,216 - Epoch: [107][ 350/ 518] Overall Loss 2.572599 Objective Loss 2.572599 LR 0.000063 Time 0.320630
+2025-05-16 21:17:02,143 - Epoch: [107][ 400/ 518] Overall Loss 2.568691 Objective Loss 2.568691 LR 0.000063 Time 0.320368
+2025-05-16 21:17:18,071 - Epoch: [107][ 450/ 518] Overall Loss 2.572134 Objective Loss 2.572134 LR 0.000063 Time 0.320164
+2025-05-16 21:17:33,984 - Epoch: [107][ 500/ 518] Overall Loss 2.569944 Objective Loss 2.569944 LR 0.000063 Time 0.319971
+2025-05-16 21:17:39,612 - Epoch: [107][ 518/ 518] Overall Loss 2.571848 Objective Loss 2.571848 LR 0.000063 Time 0.319716
+2025-05-16 21:17:39,652 - --- validate (epoch=107)-----------
+2025-05-16 21:17:39,653 - 4952 samples (32 per mini-batch)
+2025-05-16 21:17:39,656 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:17:51,969 - Epoch: [107][ 50/ 155] Loss 2.954481 mAP 0.488596
+2025-05-16 21:18:04,664 - Epoch: [107][ 100/ 155] Loss 2.943464 mAP 0.484781
+2025-05-16 21:18:17,822 - Epoch: [107][ 150/ 155] Loss 2.967205 mAP 0.472178
+2025-05-16 21:18:20,650 - Epoch: [107][ 155/ 155] Loss 2.964709 mAP 0.472725
+2025-05-16 21:18:20,691 - ==> mAP: 0.47273 Loss: 2.965
+
+2025-05-16 21:18:20,700 - ==> Best [mAP: 0.476832 vloss: 2.962095 Params: 2177088 on epoch: 103]
+2025-05-16 21:18:20,700 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:18:20,797 - 
+
+2025-05-16 21:18:20,797 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:18:37,707 - Epoch: [108][ 50/ 518] Overall Loss 2.549729 Objective Loss 2.549729 LR 0.000063 Time 0.338122
+2025-05-16 21:18:53,603 - Epoch: [108][ 100/ 518] Overall Loss 2.549671 Objective Loss 2.549671 LR 0.000063 Time 0.328013
+2025-05-16 21:19:09,514 - Epoch: [108][ 150/ 518] Overall Loss 2.549630 Objective Loss 2.549630 LR 0.000063 Time 0.324742
+2025-05-16 21:19:25,410 - Epoch: [108][ 200/ 518] Overall Loss 2.544554 Objective Loss 2.544554 LR 0.000063 Time 0.323032
+2025-05-16 21:19:41,307 - Epoch: [108][ 250/ 518] Overall Loss 2.531645 Objective Loss 2.531645 LR 0.000063 Time 0.322010
+2025-05-16 21:19:57,219 - Epoch: [108][ 300/ 518] Overall Loss 2.536025 Objective Loss 2.536025 LR 0.000063 Time 0.321378
+2025-05-16 21:20:13,165 - Epoch: [108][ 350/ 518] Overall Loss 2.544538 Objective Loss 2.544538 LR 0.000063 Time 0.321025
+2025-05-16 21:20:29,082 - Epoch: [108][ 400/ 518] Overall Loss 2.547309 Objective Loss 2.547309 LR 0.000063 Time 0.320688
+2025-05-16 21:20:44,995 - Epoch: [108][ 450/ 518] Overall Loss 2.547101 Objective Loss 2.547101 LR 0.000063 Time 0.320414
+2025-05-16 21:21:00,916 - Epoch: [108][ 500/ 518] Overall Loss 2.550538 Objective Loss 2.550538 LR 0.000063 Time 0.320212
+2025-05-16 21:21:06,531 - Epoch: [108][ 518/ 518] Overall Loss 2.554469 Objective Loss 2.554469 LR 0.000063 Time 0.319925
+2025-05-16 21:21:06,564 - --- validate (epoch=108)-----------
+2025-05-16 21:21:06,564 - 4952 samples (32 per mini-batch)
+2025-05-16 21:21:06,568 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:21:18,747 - Epoch: [108][ 50/ 155] Loss 2.986007 mAP 0.466994
+2025-05-16 21:21:31,188 - Epoch: [108][ 100/ 155] Loss 2.979062 mAP 0.468364
+2025-05-16 21:21:44,348 - Epoch: [108][ 150/ 155] Loss 2.957840 mAP 0.472915
+2025-05-16 21:21:47,308 - Epoch: [108][ 155/ 155] Loss 2.961333 mAP 0.474255
+2025-05-16 21:21:47,349 - ==> mAP: 0.47425 Loss: 2.961
+
+2025-05-16 21:21:47,359 - ==> Best [mAP: 0.476832 vloss: 2.962095 Params: 2177088 on epoch: 103]
+2025-05-16 21:21:47,359 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:21:47,456 - 
+
+2025-05-16 21:21:47,456 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:22:04,311 - Epoch: [109][ 50/ 518] Overall Loss 2.578907 Objective Loss 2.578907 LR 0.000063 Time 0.337024
+2025-05-16 21:22:20,168 - Epoch: [109][ 100/ 518] Overall Loss 2.561630 Objective Loss 2.561630 LR 0.000063 Time 0.327077
+2025-05-16 21:22:36,041 - Epoch: [109][ 150/ 518] Overall Loss 2.570886 Objective Loss 2.570886 LR 0.000063 Time 0.323864
+2025-05-16 21:22:51,923 - Epoch: [109][ 200/ 518] Overall Loss 2.579175 Objective Loss 2.579175 LR 0.000063 Time 0.322303
+2025-05-16 21:23:07,835 - Epoch: [109][ 250/ 518] Overall Loss 2.573623 Objective Loss 2.573623 LR 0.000063 Time 0.321487
+2025-05-16 21:23:23,735 - Epoch: [109][ 300/ 518] Overall Loss 2.567125 Objective Loss 2.567125 LR 0.000063 Time 0.320901
+2025-05-16 21:23:39,635 - Epoch: [109][ 350/ 518] Overall Loss 2.570683 Objective Loss 2.570683 LR 0.000063 Time 0.320484
+2025-05-16 21:23:55,535 - Epoch: [109][ 400/ 518] Overall Loss 2.570149 Objective Loss 2.570149 LR 0.000063 Time 0.320171
+2025-05-16 21:24:11,450 - Epoch: [109][ 450/ 518] Overall Loss 2.564627 Objective Loss 2.564627 LR 0.000063 Time 0.319960
+2025-05-16 21:24:27,351 - Epoch: [109][ 500/ 518] Overall Loss 2.559278 Objective Loss 2.559278 LR 0.000063 Time 0.319763
+2025-05-16 21:24:32,975 - Epoch: [109][ 518/ 518] Overall Loss 2.553459 Objective Loss 2.553459 LR 0.000063 Time 0.319509
+2025-05-16 21:24:33,017 - --- validate (epoch=109)-----------
+2025-05-16 21:24:33,018 - 4952 samples (32 per mini-batch)
+2025-05-16 21:24:33,021 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:24:45,268 - Epoch: [109][ 50/ 155] Loss 2.956856 mAP 0.469845
+2025-05-16 21:24:57,481 - Epoch: [109][ 100/ 155] Loss 2.961914 mAP 0.471189
+2025-05-16 21:25:10,376 - Epoch: [109][ 150/ 155] Loss 2.976971 mAP 0.470535
+2025-05-16 21:25:13,226 - Epoch: [109][ 155/ 155] Loss 2.970653 mAP 0.472268
+2025-05-16 21:25:13,267 - ==> mAP: 0.47227 Loss: 2.971
+
+2025-05-16 21:25:13,276 - ==> Best [mAP: 0.476832 vloss: 2.962095 Params: 2177088 on epoch: 103]
+2025-05-16 21:25:13,276 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:25:13,370 - 
+
+2025-05-16 21:25:13,370 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:25:30,184 - Epoch: [110][ 50/ 518] Overall Loss 2.582662 Objective Loss 2.582662 LR 0.000063 Time 0.336211
+2025-05-16 21:25:46,054 - Epoch: [110][ 100/ 518] Overall Loss 2.552254 Objective Loss 2.552254 LR 0.000063 Time 0.326799
+2025-05-16 21:26:01,953 - Epoch: [110][ 150/ 518] Overall Loss 2.547252 Objective Loss 2.547252 LR 0.000063 Time 0.323858
+2025-05-16 21:26:17,894 - Epoch: [110][ 200/ 518] Overall Loss 2.547874 Objective Loss 2.547874 LR 0.000063 Time 0.322595
+2025-05-16 21:26:33,843 - Epoch: [110][ 250/ 518] Overall Loss 2.558486 Objective Loss 2.558486 LR 0.000063 Time 0.321868
+2025-05-16 21:26:49,775 - Epoch: [110][ 300/ 518] Overall Loss 2.556833 Objective Loss 2.556833 LR 0.000063 Time 0.321330
+2025-05-16 21:27:05,705 - Epoch: [110][ 350/ 518] Overall Loss 2.554438 Objective Loss 2.554438 LR 0.000063 Time 0.320938
+2025-05-16 21:27:21,631 - Epoch: [110][ 400/ 518] Overall Loss 2.555681 Objective Loss 2.555681 LR 0.000063 Time 0.320631
+2025-05-16 21:27:37,541 - Epoch: [110][ 450/ 518] Overall Loss 2.553709 Objective Loss 2.553709 LR 0.000063 Time 0.320359
+2025-05-16 21:27:53,453 - Epoch: [110][ 500/ 518] Overall Loss 2.555728 Objective Loss 2.555728 LR 0.000063 Time 0.320145
+2025-05-16 21:27:59,069 - Epoch: [110][ 518/ 518] Overall Loss 2.555469 Objective Loss 2.555469 LR 0.000063 Time 0.319862
+2025-05-16 21:27:59,105 - --- validate (epoch=110)-----------
+2025-05-16 21:27:59,106 - 4952 samples (32 per mini-batch)
+2025-05-16 21:27:59,109 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:28:11,237 - Epoch: [110][ 50/ 155] Loss 3.044938 mAP 0.469954
+2025-05-16 21:28:23,548 - Epoch: [110][ 100/ 155] Loss 2.978099 mAP 0.476883
+2025-05-16 21:28:36,503 - Epoch: [110][ 150/ 155] Loss 2.963254 mAP 0.476912
+2025-05-16 21:28:39,369 - Epoch: [110][ 155/ 155] Loss 2.963537 mAP 0.476044
+2025-05-16 21:28:39,410 - ==> mAP: 0.47604 Loss: 2.964
+
+2025-05-16 21:28:39,419 - ==> Best [mAP: 0.476832 vloss: 2.962095 Params: 2177088 on epoch: 103]
+2025-05-16 21:28:39,419 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:28:39,510 - 
+
+2025-05-16 21:28:39,510 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:28:56,337 - Epoch: [111][ 50/ 518] Overall Loss 2.551813 Objective Loss 2.551813 LR 0.000063 Time 0.336463
+2025-05-16 21:29:12,198 - Epoch: [111][ 100/ 518] Overall Loss 2.537172 Objective Loss 2.537172 LR 0.000063 Time 0.326833
+2025-05-16 21:29:28,086 - Epoch: [111][ 150/ 518] Overall Loss 2.550244 Objective Loss 2.550244 LR 0.000063 Time 0.323802
+2025-05-16 21:29:44,008 - Epoch: [111][ 200/ 518] Overall Loss 2.556839 Objective Loss 2.556839 LR 0.000063 Time 0.322459
+2025-05-16 21:29:59,915 - Epoch: [111][ 250/ 518] Overall Loss 2.561467 Objective Loss 2.561467 LR 0.000063 Time 0.321591
+2025-05-16 21:30:15,834 - Epoch: [111][ 300/ 518] Overall Loss 2.559745 Objective Loss 2.559745 LR 0.000063 Time 0.321053
+2025-05-16 21:30:31,749 - Epoch: [111][ 350/ 518] Overall Loss 2.552936 Objective Loss 2.552936 LR 0.000063 Time 0.320657
+2025-05-16 21:30:47,685 - Epoch: [111][ 400/ 518] Overall Loss 2.552147 Objective Loss 2.552147 LR 0.000063 Time 0.320413
+2025-05-16 21:31:03,616 - Epoch: [111][ 450/ 518] Overall Loss 2.549398 Objective Loss 2.549398 LR 0.000063 Time 0.320211
+2025-05-16 21:31:19,562 - Epoch: [111][ 500/ 518] Overall Loss 2.551053 Objective Loss 2.551053 LR 0.000063 Time 0.320081
+2025-05-16 21:31:25,184 - Epoch: [111][ 518/ 518] Overall Loss 2.547467 Objective Loss 2.547467 LR 0.000063 Time 0.319810
+2025-05-16 21:31:25,219 - --- validate (epoch=111)-----------
+2025-05-16 21:31:25,220 - 4952 samples (32 per mini-batch)
+2025-05-16 21:31:25,223 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:31:37,576 - Epoch: [111][ 50/ 155] Loss 3.007724 mAP 0.467315
+2025-05-16 21:31:50,064 - Epoch: [111][ 100/ 155] Loss 2.960382 mAP 0.470526
+2025-05-16 21:32:03,303 - Epoch: [111][ 150/ 155] Loss 2.958824 mAP 0.474385
+2025-05-16 21:32:06,283 - Epoch: [111][ 155/ 155] Loss 2.954263 mAP 0.476997
+2025-05-16 21:32:06,323 - ==> mAP: 0.47700 Loss: 2.954
+
+2025-05-16 21:32:06,333 - ==> Best [mAP: 0.476997 vloss: 2.954263 Params: 2177088 on epoch: 111]
+2025-05-16 21:32:06,333 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:32:06,456 - 
+
+2025-05-16 21:32:06,456 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:32:23,248 - Epoch: [112][ 50/ 518] Overall Loss 2.491912 Objective Loss 2.491912 LR 0.000063 Time 0.335764
+2025-05-16 21:32:39,114 - Epoch: [112][ 100/ 518] Overall Loss 2.497968 Objective Loss 2.497968 LR 0.000063 Time 0.326530
+2025-05-16 21:32:55,004 - Epoch: [112][ 150/ 518] Overall Loss 2.520099 Objective Loss 2.520099 LR 0.000063 Time 0.323620
+2025-05-16 21:33:10,921 - Epoch: [112][ 200/ 518] Overall Loss 2.536935 Objective Loss 2.536935 LR 0.000063 Time 0.322296
+2025-05-16 21:33:26,853 - Epoch: [112][ 250/ 518] Overall Loss 2.542120 Objective Loss 2.542120 LR 0.000063 Time 0.321560
+2025-05-16 21:33:42,794 - Epoch: [112][ 300/ 518] Overall Loss 2.543089 Objective Loss 2.543089 LR 0.000063 Time 0.321102
+2025-05-16 21:33:58,739 - Epoch: [112][ 350/ 518] Overall Loss 2.534346 Objective Loss 2.534346 LR 0.000063 Time 0.320786
+2025-05-16 21:34:14,694 - Epoch: [112][ 400/ 518] Overall Loss 2.539519 Objective Loss 2.539519 LR 0.000063 Time 0.320574
+2025-05-16 21:34:30,644 - Epoch: [112][ 450/ 518] Overall Loss 2.536110 Objective Loss 2.536110 LR 0.000063 Time 0.320397
+2025-05-16 21:34:46,572 - Epoch: [112][ 500/ 518] Overall Loss 2.539859 Objective Loss 2.539859 LR 0.000063 Time 0.320212
+2025-05-16 21:34:52,179 - Epoch: [112][ 518/ 518] Overall Loss 2.536056 Objective Loss 2.536056 LR 0.000063 Time 0.319908
+2025-05-16 21:34:52,216 - --- validate (epoch=112)-----------
+2025-05-16 21:34:52,217 - 4952 samples (32 per mini-batch)
+2025-05-16 21:34:52,220 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-16 21:35:04,336 - Epoch: [112][ 50/ 155] Loss 2.965306 mAP 0.483659
+2025-05-16 21:35:16,887 - Epoch: [112][ 100/ 155] Loss 2.955163 mAP 0.478106
+2025-05-16 21:35:29,984 - Epoch: [112][ 150/ 155] Loss 2.962906 mAP 0.475023
+2025-05-16 21:35:32,793 - Epoch: [112][ 155/ 155] Loss 2.963331 mAP 0.475649
+2025-05-16 21:35:32,835 - ==> mAP: 0.47565 Loss: 2.963
+
+2025-05-16 21:35:32,845 - ==> Best [mAP: 0.476997 vloss: 2.954263 Params: 2177088 on epoch: 111]
+2025-05-16 21:35:32,845 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-16 21:35:32,941 - 
+
+2025-05-16 21:35:32,941 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-16 21:35:49,889 - Epoch: [113][ 50/ 518] Overall Loss 2.522344 Objective Loss 2.522344 LR 0.000063 Time 0.338884
+2025-05-16 21:36:05,826 - Epoch: [113][ 100/ 518] Overall Loss 2.535202 Objective Loss 2.535202 LR 0.000063 Time 0.328807
+2025-05-16 21:36:21,752 - Epoch: [113][ 150/ 518] Overall Loss 2.544339 Objective Loss 2.544339 LR 0.000063 Time 0.325374
+2025-05-16 21:36:37,691 - Epoch: [113][ 200/ 518] Overall Loss 2.544889 Objective Loss 2.544889 LR 0.000063 Time 0.323722
+2025-05-16 21:36:53,617 - Epoch: [113][ 250/ 518] Overall Loss 2.534087 Objective Loss 2.534087 LR 0.000063 Time 0.322678
+2025-05-16 21:37:09,541 - Epoch: [113][ 300/ 518] Overall Loss 2.535019 Objective Loss 2.535019 LR 0.000063 Time 0.321973
+2025-05-16 21:37:25,447 - Epoch: [113][ 350/ 518] Overall Loss 2.536695 Objective Loss 2.536695 LR 0.000063 Time 0.321420
+2025-05-16 21:37:41,364 - Epoch: [113][ 400/ 518] Overall Loss 2.538651 Objective Loss 2.538651 LR 0.000063 Time 0.321034
+2025-05-16 21:37:57,285 - Epoch: [113][ 450/ 518] Overall Loss 2.539452 Objective Loss 2.539452 LR 0.000063 Time 0.320741
+2025-05-16 21:38:13,238 - Epoch: [113][ 500/ 518] Overall Loss 2.538290 Objective Loss 2.538290 LR 0.000063 Time 0.320570
+2025-05-16 21:38:18,863 - Epoch: [113][ 518/ 518]
Overall Loss 2.535065 Objective Loss 2.535065 LR 0.000063 Time 0.320290 +2025-05-16 21:38:18,896 - --- validate (epoch=113)----------- +2025-05-16 21:38:18,897 - 4952 samples (32 per mini-batch) +2025-05-16 21:38:18,901 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 21:38:31,275 - Epoch: [113][ 50/ 155] Loss 2.922918 mAP 0.469894 +2025-05-16 21:38:43,699 - Epoch: [113][ 100/ 155] Loss 2.928847 mAP 0.475742 +2025-05-16 21:38:56,837 - Epoch: [113][ 150/ 155] Loss 2.950531 mAP 0.473618 +2025-05-16 21:38:59,812 - Epoch: [113][ 155/ 155] Loss 2.951743 mAP 0.474282 +2025-05-16 21:38:59,853 - ==> mAP: 0.47428 Loss: 2.952 + +2025-05-16 21:38:59,862 - ==> Best [mAP: 0.476997 vloss: 2.954263 Params: 2177088 on epoch: 111] +2025-05-16 21:38:59,862 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 21:38:59,960 - + +2025-05-16 21:38:59,960 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 21:39:16,724 - Epoch: [114][ 50/ 518] Overall Loss 2.565821 Objective Loss 2.565821 LR 0.000063 Time 0.335221 +2025-05-16 21:39:32,602 - Epoch: [114][ 100/ 518] Overall Loss 2.562341 Objective Loss 2.562341 LR 0.000063 Time 0.326374 +2025-05-16 21:39:48,480 - Epoch: [114][ 150/ 518] Overall Loss 2.584059 Objective Loss 2.584059 LR 0.000063 Time 0.323430 +2025-05-16 21:40:04,370 - Epoch: [114][ 200/ 518] Overall Loss 2.588392 Objective Loss 2.588392 LR 0.000063 Time 0.322017 +2025-05-16 21:40:20,259 - Epoch: [114][ 250/ 518] Overall Loss 2.591085 Objective Loss 2.591085 LR 0.000063 Time 0.321168 +2025-05-16 21:40:36,154 - Epoch: [114][ 300/ 518] Overall Loss 2.598924 Objective Loss 2.598924 LR 0.000063 Time 0.320619 +2025-05-16 21:40:52,064 - Epoch: [114][ 350/ 518] Overall Loss 2.600973 Objective Loss 2.600973 LR 0.000063 Time 0.320271 +2025-05-16 21:41:07,986 - Epoch: [114][ 400/ 518] Overall Loss 2.596341 Objective Loss 2.596341 LR 
0.000063 Time 0.320039 +2025-05-16 21:41:23,907 - Epoch: [114][ 450/ 518] Overall Loss 2.595620 Objective Loss 2.595620 LR 0.000063 Time 0.319856 +2025-05-16 21:41:39,849 - Epoch: [114][ 500/ 518] Overall Loss 2.588443 Objective Loss 2.588443 LR 0.000063 Time 0.319753 +2025-05-16 21:41:45,483 - Epoch: [114][ 518/ 518] Overall Loss 2.585907 Objective Loss 2.585907 LR 0.000063 Time 0.319519 +2025-05-16 21:41:45,519 - --- validate (epoch=114)----------- +2025-05-16 21:41:45,520 - 4952 samples (32 per mini-batch) +2025-05-16 21:41:45,524 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 21:41:57,565 - Epoch: [114][ 50/ 155] Loss 2.983983 mAP 0.483248 +2025-05-16 21:42:09,969 - Epoch: [114][ 100/ 155] Loss 2.961756 mAP 0.481084 +2025-05-16 21:42:22,987 - Epoch: [114][ 150/ 155] Loss 2.956398 mAP 0.473443 +2025-05-16 21:42:25,779 - Epoch: [114][ 155/ 155] Loss 2.957020 mAP 0.473505 +2025-05-16 21:42:25,819 - ==> mAP: 0.47350 Loss: 2.957 + +2025-05-16 21:42:25,828 - ==> Best [mAP: 0.476997 vloss: 2.954263 Params: 2177088 on epoch: 111] +2025-05-16 21:42:25,828 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 21:42:25,926 - + +2025-05-16 21:42:25,926 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 21:42:42,755 - Epoch: [115][ 50/ 518] Overall Loss 2.592761 Objective Loss 2.592761 LR 0.000063 Time 0.336527 +2025-05-16 21:42:58,638 - Epoch: [115][ 100/ 518] Overall Loss 2.570571 Objective Loss 2.570571 LR 0.000063 Time 0.327085 +2025-05-16 21:43:14,517 - Epoch: [115][ 150/ 518] Overall Loss 2.596946 Objective Loss 2.596946 LR 0.000063 Time 0.323908 +2025-05-16 21:43:30,415 - Epoch: [115][ 200/ 518] Overall Loss 2.576018 Objective Loss 2.576018 LR 0.000063 Time 0.322417 +2025-05-16 21:43:46,356 - Epoch: [115][ 250/ 518] Overall Loss 2.579610 Objective Loss 2.579610 LR 0.000063 Time 0.321694 +2025-05-16 21:44:02,263 
- Epoch: [115][ 300/ 518] Overall Loss 2.587411 Objective Loss 2.587411 LR 0.000063 Time 0.321097 +2025-05-16 21:44:18,180 - Epoch: [115][ 350/ 518] Overall Loss 2.585190 Objective Loss 2.585190 LR 0.000063 Time 0.320700 +2025-05-16 21:44:34,096 - Epoch: [115][ 400/ 518] Overall Loss 2.585427 Objective Loss 2.585427 LR 0.000063 Time 0.320399 +2025-05-16 21:44:50,009 - Epoch: [115][ 450/ 518] Overall Loss 2.581683 Objective Loss 2.581683 LR 0.000063 Time 0.320159 +2025-05-16 21:45:05,938 - Epoch: [115][ 500/ 518] Overall Loss 2.579718 Objective Loss 2.579718 LR 0.000063 Time 0.319999 +2025-05-16 21:45:11,552 - Epoch: [115][ 518/ 518] Overall Loss 2.575226 Objective Loss 2.575226 LR 0.000063 Time 0.319716 +2025-05-16 21:45:11,588 - --- validate (epoch=115)----------- +2025-05-16 21:45:11,589 - 4952 samples (32 per mini-batch) +2025-05-16 21:45:11,592 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 21:45:23,736 - Epoch: [115][ 50/ 155] Loss 2.970411 mAP 0.457577 +2025-05-16 21:45:36,256 - Epoch: [115][ 100/ 155] Loss 2.967971 mAP 0.464282 +2025-05-16 21:45:49,174 - Epoch: [115][ 150/ 155] Loss 2.948768 mAP 0.476483 +2025-05-16 21:45:52,070 - Epoch: [115][ 155/ 155] Loss 2.950384 mAP 0.475162 +2025-05-16 21:45:52,108 - ==> mAP: 0.47516 Loss: 2.950 + +2025-05-16 21:45:52,118 - ==> Best [mAP: 0.476997 vloss: 2.954263 Params: 2177088 on epoch: 111] +2025-05-16 21:45:52,118 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 21:45:52,212 - + +2025-05-16 21:45:52,213 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 21:46:08,934 - Epoch: [116][ 50/ 518] Overall Loss 2.591461 Objective Loss 2.591461 LR 0.000063 Time 0.334356 +2025-05-16 21:46:24,805 - Epoch: [116][ 100/ 518] Overall Loss 2.589223 Objective Loss 2.589223 LR 0.000063 Time 0.325876 +2025-05-16 21:46:40,690 - Epoch: [116][ 150/ 518] Overall Loss 2.591937 
Objective Loss 2.591937 LR 0.000063 Time 0.323144 +2025-05-16 21:46:56,604 - Epoch: [116][ 200/ 518] Overall Loss 2.579127 Objective Loss 2.579127 LR 0.000063 Time 0.321922 +2025-05-16 21:47:12,502 - Epoch: [116][ 250/ 518] Overall Loss 2.565558 Objective Loss 2.565558 LR 0.000063 Time 0.321128 +2025-05-16 21:47:28,409 - Epoch: [116][ 300/ 518] Overall Loss 2.572027 Objective Loss 2.572027 LR 0.000063 Time 0.320626 +2025-05-16 21:47:44,356 - Epoch: [116][ 350/ 518] Overall Loss 2.568854 Objective Loss 2.568854 LR 0.000063 Time 0.320383 +2025-05-16 21:48:00,284 - Epoch: [116][ 400/ 518] Overall Loss 2.565812 Objective Loss 2.565812 LR 0.000063 Time 0.320154 +2025-05-16 21:48:16,201 - Epoch: [116][ 450/ 518] Overall Loss 2.564905 Objective Loss 2.564905 LR 0.000063 Time 0.319951 +2025-05-16 21:48:32,125 - Epoch: [116][ 500/ 518] Overall Loss 2.571391 Objective Loss 2.571391 LR 0.000063 Time 0.319803 +2025-05-16 21:48:37,744 - Epoch: [116][ 518/ 518] Overall Loss 2.569982 Objective Loss 2.569982 LR 0.000063 Time 0.319536 +2025-05-16 21:48:37,778 - --- validate (epoch=116)----------- +2025-05-16 21:48:37,779 - 4952 samples (32 per mini-batch) +2025-05-16 21:48:37,782 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 21:48:50,099 - Epoch: [116][ 50/ 155] Loss 2.943611 mAP 0.482706 +2025-05-16 21:49:02,490 - Epoch: [116][ 100/ 155] Loss 2.955545 mAP 0.480080 +2025-05-16 21:49:15,603 - Epoch: [116][ 150/ 155] Loss 2.956708 mAP 0.473234 +2025-05-16 21:49:18,573 - Epoch: [116][ 155/ 155] Loss 2.961346 mAP 0.472643 +2025-05-16 21:49:18,615 - ==> mAP: 0.47264 Loss: 2.961 + +2025-05-16 21:49:18,625 - ==> Best [mAP: 0.476997 vloss: 2.954263 Params: 2177088 on epoch: 111] +2025-05-16 21:49:18,625 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 21:49:18,722 - + +2025-05-16 21:49:18,723 - Training epoch: 16551 samples (32 per mini-batch, world size: 
1) +2025-05-16 21:49:35,450 - Epoch: [117][ 50/ 518] Overall Loss 2.550543 Objective Loss 2.550543 LR 0.000063 Time 0.334477 +2025-05-16 21:49:51,311 - Epoch: [117][ 100/ 518] Overall Loss 2.562025 Objective Loss 2.562025 LR 0.000063 Time 0.325839 +2025-05-16 21:50:07,190 - Epoch: [117][ 150/ 518] Overall Loss 2.556553 Objective Loss 2.556553 LR 0.000063 Time 0.323080 +2025-05-16 21:50:23,097 - Epoch: [117][ 200/ 518] Overall Loss 2.560536 Objective Loss 2.560536 LR 0.000063 Time 0.321836 +2025-05-16 21:50:39,007 - Epoch: [117][ 250/ 518] Overall Loss 2.556798 Objective Loss 2.556798 LR 0.000063 Time 0.321105 +2025-05-16 21:50:54,943 - Epoch: [117][ 300/ 518] Overall Loss 2.562493 Objective Loss 2.562493 LR 0.000063 Time 0.320706 +2025-05-16 21:51:10,881 - Epoch: [117][ 350/ 518] Overall Loss 2.568819 Objective Loss 2.568819 LR 0.000063 Time 0.320426 +2025-05-16 21:51:26,861 - Epoch: [117][ 400/ 518] Overall Loss 2.566144 Objective Loss 2.566144 LR 0.000063 Time 0.320322 +2025-05-16 21:51:42,808 - Epoch: [117][ 450/ 518] Overall Loss 2.572200 Objective Loss 2.572200 LR 0.000063 Time 0.320167 +2025-05-16 21:51:58,762 - Epoch: [117][ 500/ 518] Overall Loss 2.571050 Objective Loss 2.571050 LR 0.000063 Time 0.320056 +2025-05-16 21:52:04,389 - Epoch: [117][ 518/ 518] Overall Loss 2.570148 Objective Loss 2.570148 LR 0.000063 Time 0.319796 +2025-05-16 21:52:04,422 - --- validate (epoch=117)----------- +2025-05-16 21:52:04,423 - 4952 samples (32 per mini-batch) +2025-05-16 21:52:04,427 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 21:52:16,650 - Epoch: [117][ 50/ 155] Loss 2.890766 mAP 0.485526 +2025-05-16 21:52:29,159 - Epoch: [117][ 100/ 155] Loss 2.946385 mAP 0.482060 +2025-05-16 21:52:42,430 - Epoch: [117][ 150/ 155] Loss 2.950326 mAP 0.477215 +2025-05-16 21:52:45,397 - Epoch: [117][ 155/ 155] Loss 2.950625 mAP 0.477833 +2025-05-16 21:52:45,438 - ==> mAP: 0.47783 Loss: 2.951 + +2025-05-16 
21:52:45,448 - ==> Best [mAP: 0.477833 vloss: 2.950625 Params: 2177088 on epoch: 117] +2025-05-16 21:52:45,448 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 21:52:45,571 - + +2025-05-16 21:52:45,571 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 21:53:02,444 - Epoch: [118][ 50/ 518] Overall Loss 2.569540 Objective Loss 2.569540 LR 0.000063 Time 0.337398 +2025-05-16 21:53:18,317 - Epoch: [118][ 100/ 518] Overall Loss 2.534750 Objective Loss 2.534750 LR 0.000063 Time 0.327416 +2025-05-16 21:53:34,221 - Epoch: [118][ 150/ 518] Overall Loss 2.544759 Objective Loss 2.544759 LR 0.000063 Time 0.324295 +2025-05-16 21:53:50,145 - Epoch: [118][ 200/ 518] Overall Loss 2.544818 Objective Loss 2.544818 LR 0.000063 Time 0.322842 +2025-05-16 21:54:06,073 - Epoch: [118][ 250/ 518] Overall Loss 2.551762 Objective Loss 2.551762 LR 0.000063 Time 0.321982 +2025-05-16 21:54:21,990 - Epoch: [118][ 300/ 518] Overall Loss 2.557725 Objective Loss 2.557725 LR 0.000063 Time 0.321370 +2025-05-16 21:54:37,901 - Epoch: [118][ 350/ 518] Overall Loss 2.566355 Objective Loss 2.566355 LR 0.000063 Time 0.320918 +2025-05-16 21:54:53,820 - Epoch: [118][ 400/ 518] Overall Loss 2.570496 Objective Loss 2.570496 LR 0.000063 Time 0.320598 +2025-05-16 21:55:09,734 - Epoch: [118][ 450/ 518] Overall Loss 2.570377 Objective Loss 2.570377 LR 0.000063 Time 0.320337 +2025-05-16 21:55:25,646 - Epoch: [118][ 500/ 518] Overall Loss 2.567490 Objective Loss 2.567490 LR 0.000063 Time 0.320126 +2025-05-16 21:55:31,271 - Epoch: [118][ 518/ 518] Overall Loss 2.569250 Objective Loss 2.569250 LR 0.000063 Time 0.319861 +2025-05-16 21:55:31,310 - --- validate (epoch=118)----------- +2025-05-16 21:55:31,311 - 4952 samples (32 per mini-batch) +2025-05-16 21:55:31,314 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 21:55:43,650 - Epoch: [118][ 50/ 155] Loss 
2.996561 mAP 0.468329 +2025-05-16 21:55:55,973 - Epoch: [118][ 100/ 155] Loss 2.973587 mAP 0.471948 +2025-05-16 21:56:09,138 - Epoch: [118][ 150/ 155] Loss 2.969644 mAP 0.475174 +2025-05-16 21:56:12,124 - Epoch: [118][ 155/ 155] Loss 2.970499 mAP 0.475220 +2025-05-16 21:56:12,162 - ==> mAP: 0.47522 Loss: 2.970 + +2025-05-16 21:56:12,172 - ==> Best [mAP: 0.477833 vloss: 2.950625 Params: 2177088 on epoch: 117] +2025-05-16 21:56:12,172 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 21:56:12,268 - + +2025-05-16 21:56:12,269 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 21:56:29,193 - Epoch: [119][ 50/ 518] Overall Loss 2.511216 Objective Loss 2.511216 LR 0.000063 Time 0.338419 +2025-05-16 21:56:45,045 - Epoch: [119][ 100/ 518] Overall Loss 2.551063 Objective Loss 2.551063 LR 0.000063 Time 0.327716 +2025-05-16 21:57:00,945 - Epoch: [119][ 150/ 518] Overall Loss 2.557317 Objective Loss 2.557317 LR 0.000063 Time 0.324475 +2025-05-16 21:57:16,880 - Epoch: [119][ 200/ 518] Overall Loss 2.552840 Objective Loss 2.552840 LR 0.000063 Time 0.323027 +2025-05-16 21:57:32,781 - Epoch: [119][ 250/ 518] Overall Loss 2.570560 Objective Loss 2.570560 LR 0.000063 Time 0.322023 +2025-05-16 21:57:48,675 - Epoch: [119][ 300/ 518] Overall Loss 2.568900 Objective Loss 2.568900 LR 0.000063 Time 0.321329 +2025-05-16 21:58:04,587 - Epoch: [119][ 350/ 518] Overall Loss 2.559708 Objective Loss 2.559708 LR 0.000063 Time 0.320886 +2025-05-16 21:58:20,500 - Epoch: [119][ 400/ 518] Overall Loss 2.568471 Objective Loss 2.568471 LR 0.000063 Time 0.320554 +2025-05-16 21:58:36,411 - Epoch: [119][ 450/ 518] Overall Loss 2.565355 Objective Loss 2.565355 LR 0.000063 Time 0.320294 +2025-05-16 21:58:52,317 - Epoch: [119][ 500/ 518] Overall Loss 2.568753 Objective Loss 2.568753 LR 0.000063 Time 0.320074 +2025-05-16 21:58:57,929 - Epoch: [119][ 518/ 518] Overall Loss 2.569665 Objective Loss 2.569665 LR 0.000063 Time 
0.319785 +2025-05-16 21:58:57,966 - --- validate (epoch=119)----------- +2025-05-16 21:58:57,967 - 4952 samples (32 per mini-batch) +2025-05-16 21:58:57,970 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 21:59:10,213 - Epoch: [119][ 50/ 155] Loss 2.921352 mAP 0.491978 +2025-05-16 21:59:22,801 - Epoch: [119][ 100/ 155] Loss 2.920020 mAP 0.489887 +2025-05-16 21:59:35,855 - Epoch: [119][ 150/ 155] Loss 2.944339 mAP 0.482978 +2025-05-16 21:59:38,797 - Epoch: [119][ 155/ 155] Loss 2.943503 mAP 0.481545 +2025-05-16 21:59:38,832 - ==> mAP: 0.48154 Loss: 2.944 + +2025-05-16 21:59:38,842 - ==> Best [mAP: 0.481545 vloss: 2.943503 Params: 2177088 on epoch: 119] +2025-05-16 21:59:38,842 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 21:59:38,963 - + +2025-05-16 21:59:38,964 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 21:59:55,901 - Epoch: [120][ 50/ 518] Overall Loss 2.601799 Objective Loss 2.601799 LR 0.000063 Time 0.338684 +2025-05-16 22:00:11,797 - Epoch: [120][ 100/ 518] Overall Loss 2.592788 Objective Loss 2.592788 LR 0.000063 Time 0.328300 +2025-05-16 22:00:27,700 - Epoch: [120][ 150/ 518] Overall Loss 2.605062 Objective Loss 2.605062 LR 0.000063 Time 0.324880 +2025-05-16 22:00:43,627 - Epoch: [120][ 200/ 518] Overall Loss 2.586695 Objective Loss 2.586695 LR 0.000063 Time 0.323289 +2025-05-16 22:00:59,521 - Epoch: [120][ 250/ 518] Overall Loss 2.584187 Objective Loss 2.584187 LR 0.000063 Time 0.322204 +2025-05-16 22:01:15,418 - Epoch: [120][ 300/ 518] Overall Loss 2.579361 Objective Loss 2.579361 LR 0.000063 Time 0.321490 +2025-05-16 22:01:31,328 - Epoch: [120][ 350/ 518] Overall Loss 2.571798 Objective Loss 2.571798 LR 0.000063 Time 0.321017 +2025-05-16 22:01:47,237 - Epoch: [120][ 400/ 518] Overall Loss 2.566463 Objective Loss 2.566463 LR 0.000063 Time 0.320659 +2025-05-16 22:02:03,147 - Epoch: 
[120][ 450/ 518] Overall Loss 2.573712 Objective Loss 2.573712 LR 0.000063 Time 0.320383 +2025-05-16 22:02:19,057 - Epoch: [120][ 500/ 518] Overall Loss 2.569261 Objective Loss 2.569261 LR 0.000063 Time 0.320164 +2025-05-16 22:02:24,668 - Epoch: [120][ 518/ 518] Overall Loss 2.567160 Objective Loss 2.567160 LR 0.000063 Time 0.319869 +2025-05-16 22:02:24,704 - --- validate (epoch=120)----------- +2025-05-16 22:02:24,705 - 4952 samples (32 per mini-batch) +2025-05-16 22:02:24,708 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:02:37,074 - Epoch: [120][ 50/ 155] Loss 2.926664 mAP 0.504739 +2025-05-16 22:02:49,361 - Epoch: [120][ 100/ 155] Loss 2.944538 mAP 0.483184 +2025-05-16 22:03:02,499 - Epoch: [120][ 150/ 155] Loss 2.943474 mAP 0.477913 +2025-05-16 22:03:05,444 - Epoch: [120][ 155/ 155] Loss 2.944881 mAP 0.477138 +2025-05-16 22:03:05,484 - ==> mAP: 0.47714 Loss: 2.945 + +2025-05-16 22:03:05,494 - ==> Best [mAP: 0.481545 vloss: 2.943503 Params: 2177088 on epoch: 119] +2025-05-16 22:03:05,494 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:03:05,588 - + +2025-05-16 22:03:05,588 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:03:22,322 - Epoch: [121][ 50/ 518] Overall Loss 2.540669 Objective Loss 2.540669 LR 0.000063 Time 0.334617 +2025-05-16 22:03:38,193 - Epoch: [121][ 100/ 518] Overall Loss 2.546974 Objective Loss 2.546974 LR 0.000063 Time 0.326003 +2025-05-16 22:03:54,068 - Epoch: [121][ 150/ 518] Overall Loss 2.544415 Objective Loss 2.544415 LR 0.000063 Time 0.323161 +2025-05-16 22:04:09,975 - Epoch: [121][ 200/ 518] Overall Loss 2.536042 Objective Loss 2.536042 LR 0.000063 Time 0.321905 +2025-05-16 22:04:25,904 - Epoch: [121][ 250/ 518] Overall Loss 2.549130 Objective Loss 2.549130 LR 0.000063 Time 0.321237 +2025-05-16 22:04:41,840 - Epoch: [121][ 300/ 518] Overall Loss 2.546331 Objective 
Loss 2.546331 LR 0.000063 Time 0.320816 +2025-05-16 22:04:57,793 - Epoch: [121][ 350/ 518] Overall Loss 2.550620 Objective Loss 2.550620 LR 0.000063 Time 0.320562 +2025-05-16 22:05:13,734 - Epoch: [121][ 400/ 518] Overall Loss 2.554840 Objective Loss 2.554840 LR 0.000063 Time 0.320342 +2025-05-16 22:05:29,649 - Epoch: [121][ 450/ 518] Overall Loss 2.557964 Objective Loss 2.557964 LR 0.000063 Time 0.320113 +2025-05-16 22:05:45,560 - Epoch: [121][ 500/ 518] Overall Loss 2.564025 Objective Loss 2.564025 LR 0.000063 Time 0.319922 +2025-05-16 22:05:51,172 - Epoch: [121][ 518/ 518] Overall Loss 2.564705 Objective Loss 2.564705 LR 0.000063 Time 0.319638 +2025-05-16 22:05:51,208 - --- validate (epoch=121)----------- +2025-05-16 22:05:51,209 - 4952 samples (32 per mini-batch) +2025-05-16 22:05:51,213 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:06:03,324 - Epoch: [121][ 50/ 155] Loss 2.980556 mAP 0.473056 +2025-05-16 22:06:16,004 - Epoch: [121][ 100/ 155] Loss 2.965844 mAP 0.465196 +2025-05-16 22:06:29,029 - Epoch: [121][ 150/ 155] Loss 2.948384 mAP 0.471575 +2025-05-16 22:06:31,916 - Epoch: [121][ 155/ 155] Loss 2.947256 mAP 0.473130 +2025-05-16 22:06:31,950 - ==> mAP: 0.47313 Loss: 2.947 + +2025-05-16 22:06:31,959 - ==> Best [mAP: 0.481545 vloss: 2.943503 Params: 2177088 on epoch: 119] +2025-05-16 22:06:31,959 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:06:32,055 - + +2025-05-16 22:06:32,056 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:06:49,019 - Epoch: [122][ 50/ 518] Overall Loss 2.554688 Objective Loss 2.554688 LR 0.000063 Time 0.339210 +2025-05-16 22:07:04,932 - Epoch: [122][ 100/ 518] Overall Loss 2.550571 Objective Loss 2.550571 LR 0.000063 Time 0.328724 +2025-05-16 22:07:20,829 - Epoch: [122][ 150/ 518] Overall Loss 2.550230 Objective Loss 2.550230 LR 0.000063 Time 0.325122 
+2025-05-16 22:07:36,737 - Epoch: [122][ 200/ 518] Overall Loss 2.565317 Objective Loss 2.565317 LR 0.000063 Time 0.323380 +2025-05-16 22:07:52,639 - Epoch: [122][ 250/ 518] Overall Loss 2.566047 Objective Loss 2.566047 LR 0.000063 Time 0.322306 +2025-05-16 22:08:08,545 - Epoch: [122][ 300/ 518] Overall Loss 2.564312 Objective Loss 2.564312 LR 0.000063 Time 0.321606 +2025-05-16 22:08:24,458 - Epoch: [122][ 350/ 518] Overall Loss 2.563413 Objective Loss 2.563413 LR 0.000063 Time 0.321125 +2025-05-16 22:08:40,378 - Epoch: [122][ 400/ 518] Overall Loss 2.562916 Objective Loss 2.562916 LR 0.000063 Time 0.320783 +2025-05-16 22:08:56,297 - Epoch: [122][ 450/ 518] Overall Loss 2.566096 Objective Loss 2.566096 LR 0.000063 Time 0.320514 +2025-05-16 22:09:12,244 - Epoch: [122][ 500/ 518] Overall Loss 2.560520 Objective Loss 2.560520 LR 0.000063 Time 0.320355 +2025-05-16 22:09:17,858 - Epoch: [122][ 518/ 518] Overall Loss 2.561364 Objective Loss 2.561364 LR 0.000063 Time 0.320060 +2025-05-16 22:09:17,893 - --- validate (epoch=122)----------- +2025-05-16 22:09:17,894 - 4952 samples (32 per mini-batch) +2025-05-16 22:09:17,897 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:09:30,302 - Epoch: [122][ 50/ 155] Loss 2.944197 mAP 0.483291 +2025-05-16 22:09:42,855 - Epoch: [122][ 100/ 155] Loss 2.956194 mAP 0.479260 +2025-05-16 22:09:56,152 - Epoch: [122][ 150/ 155] Loss 2.953950 mAP 0.477491 +2025-05-16 22:09:59,130 - Epoch: [122][ 155/ 155] Loss 2.955214 mAP 0.477586 +2025-05-16 22:09:59,171 - ==> mAP: 0.47759 Loss: 2.955 + +2025-05-16 22:09:59,181 - ==> Best [mAP: 0.481545 vloss: 2.943503 Params: 2177088 on epoch: 119] +2025-05-16 22:09:59,181 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:09:59,278 - + +2025-05-16 22:09:59,278 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:10:16,068 - Epoch: [123][ 50/ 
518] Overall Loss 2.596272 Objective Loss 2.596272 LR 0.000063 Time 0.335717 +2025-05-16 22:10:31,957 - Epoch: [123][ 100/ 518] Overall Loss 2.576096 Objective Loss 2.576096 LR 0.000063 Time 0.326746 +2025-05-16 22:10:47,864 - Epoch: [123][ 150/ 518] Overall Loss 2.565063 Objective Loss 2.565063 LR 0.000063 Time 0.323872 +2025-05-16 22:11:03,784 - Epoch: [123][ 200/ 518] Overall Loss 2.546818 Objective Loss 2.546818 LR 0.000063 Time 0.322501 +2025-05-16 22:11:19,719 - Epoch: [123][ 250/ 518] Overall Loss 2.545695 Objective Loss 2.545695 LR 0.000063 Time 0.321735 +2025-05-16 22:11:35,658 - Epoch: [123][ 300/ 518] Overall Loss 2.541197 Objective Loss 2.541197 LR 0.000063 Time 0.321241 +2025-05-16 22:11:51,574 - Epoch: [123][ 350/ 518] Overall Loss 2.548385 Objective Loss 2.548385 LR 0.000063 Time 0.320820 +2025-05-16 22:12:07,499 - Epoch: [123][ 400/ 518] Overall Loss 2.549475 Objective Loss 2.549475 LR 0.000063 Time 0.320528 +2025-05-16 22:12:23,443 - Epoch: [123][ 450/ 518] Overall Loss 2.548692 Objective Loss 2.548692 LR 0.000063 Time 0.320345 +2025-05-16 22:12:39,367 - Epoch: [123][ 500/ 518] Overall Loss 2.554200 Objective Loss 2.554200 LR 0.000063 Time 0.320157 +2025-05-16 22:12:44,982 - Epoch: [123][ 518/ 518] Overall Loss 2.551682 Objective Loss 2.551682 LR 0.000063 Time 0.319871 +2025-05-16 22:12:45,020 - --- validate (epoch=123)----------- +2025-05-16 22:12:45,021 - 4952 samples (32 per mini-batch) +2025-05-16 22:12:45,024 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:12:57,057 - Epoch: [123][ 50/ 155] Loss 2.918029 mAP 0.495105 +2025-05-16 22:13:09,466 - Epoch: [123][ 100/ 155] Loss 2.923912 mAP 0.491491 +2025-05-16 22:13:22,476 - Epoch: [123][ 150/ 155] Loss 2.943948 mAP 0.483084 +2025-05-16 22:13:25,208 - Epoch: [123][ 155/ 155] Loss 2.944103 mAP 0.482380 +2025-05-16 22:13:25,249 - ==> mAP: 0.48238 Loss: 2.944 + +2025-05-16 22:13:25,258 - ==> Best [mAP: 0.482380 vloss: 
2.944103 Params: 2177088 on epoch: 123] +2025-05-16 22:13:25,258 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:13:25,386 - + +2025-05-16 22:13:25,386 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:13:42,136 - Epoch: [124][ 50/ 518] Overall Loss 2.516026 Objective Loss 2.516026 LR 0.000063 Time 0.334924 +2025-05-16 22:13:58,012 - Epoch: [124][ 100/ 518] Overall Loss 2.555798 Objective Loss 2.555798 LR 0.000063 Time 0.326215 +2025-05-16 22:14:13,897 - Epoch: [124][ 150/ 518] Overall Loss 2.566070 Objective Loss 2.566070 LR 0.000063 Time 0.323372 +2025-05-16 22:14:29,808 - Epoch: [124][ 200/ 518] Overall Loss 2.550535 Objective Loss 2.550535 LR 0.000063 Time 0.322076 +2025-05-16 22:14:45,731 - Epoch: [124][ 250/ 518] Overall Loss 2.558977 Objective Loss 2.558977 LR 0.000063 Time 0.321349 +2025-05-16 22:15:01,656 - Epoch: [124][ 300/ 518] Overall Loss 2.558720 Objective Loss 2.558720 LR 0.000063 Time 0.320870 +2025-05-16 22:15:17,567 - Epoch: [124][ 350/ 518] Overall Loss 2.558777 Objective Loss 2.558777 LR 0.000063 Time 0.320491 +2025-05-16 22:15:33,476 - Epoch: [124][ 400/ 518] Overall Loss 2.553683 Objective Loss 2.553683 LR 0.000063 Time 0.320198 +2025-05-16 22:15:49,388 - Epoch: [124][ 450/ 518] Overall Loss 2.560846 Objective Loss 2.560846 LR 0.000063 Time 0.319979 +2025-05-16 22:16:05,300 - Epoch: [124][ 500/ 518] Overall Loss 2.557955 Objective Loss 2.557955 LR 0.000063 Time 0.319802 +2025-05-16 22:16:10,908 - Epoch: [124][ 518/ 518] Overall Loss 2.556923 Objective Loss 2.556923 LR 0.000063 Time 0.319515 +2025-05-16 22:16:10,947 - --- validate (epoch=124)----------- +2025-05-16 22:16:10,948 - 4952 samples (32 per mini-batch) +2025-05-16 22:16:10,951 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:16:23,326 - Epoch: [124][ 50/ 155] Loss 2.937654 mAP 0.479957 +2025-05-16 22:16:35,675 - 
Epoch: [124][ 100/ 155] Loss 2.954643 mAP 0.480547 +2025-05-16 22:16:48,683 - Epoch: [124][ 150/ 155] Loss 2.948427 mAP 0.479257 +2025-05-16 22:16:51,556 - Epoch: [124][ 155/ 155] Loss 2.947553 mAP 0.480310 +2025-05-16 22:16:51,593 - ==> mAP: 0.48031 Loss: 2.948 + +2025-05-16 22:16:51,603 - ==> Best [mAP: 0.482380 vloss: 2.944103 Params: 2177088 on epoch: 123] +2025-05-16 22:16:51,603 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:16:51,707 - + +2025-05-16 22:16:51,707 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:17:08,349 - Epoch: [125][ 50/ 518] Overall Loss 2.545677 Objective Loss 2.545677 LR 0.000063 Time 0.332767 +2025-05-16 22:17:24,212 - Epoch: [125][ 100/ 518] Overall Loss 2.539136 Objective Loss 2.539136 LR 0.000063 Time 0.325006 +2025-05-16 22:17:40,089 - Epoch: [125][ 150/ 518] Overall Loss 2.559124 Objective Loss 2.559124 LR 0.000063 Time 0.322510 +2025-05-16 22:17:55,991 - Epoch: [125][ 200/ 518] Overall Loss 2.550747 Objective Loss 2.550747 LR 0.000063 Time 0.321385 +2025-05-16 22:18:11,912 - Epoch: [125][ 250/ 518] Overall Loss 2.550208 Objective Loss 2.550208 LR 0.000063 Time 0.320787 +2025-05-16 22:18:27,829 - Epoch: [125][ 300/ 518] Overall Loss 2.545724 Objective Loss 2.545724 LR 0.000063 Time 0.320376 +2025-05-16 22:18:43,739 - Epoch: [125][ 350/ 518] Overall Loss 2.539292 Objective Loss 2.539292 LR 0.000063 Time 0.320063 +2025-05-16 22:18:59,653 - Epoch: [125][ 400/ 518] Overall Loss 2.546551 Objective Loss 2.546551 LR 0.000063 Time 0.319838 +2025-05-16 22:19:15,564 - Epoch: [125][ 450/ 518] Overall Loss 2.548734 Objective Loss 2.548734 LR 0.000063 Time 0.319656 +2025-05-16 22:19:31,513 - Epoch: [125][ 500/ 518] Overall Loss 2.553708 Objective Loss 2.553708 LR 0.000063 Time 0.319586 +2025-05-16 22:19:37,147 - Epoch: [125][ 518/ 518] Overall Loss 2.555993 Objective Loss 2.555993 LR 0.000063 Time 0.319358 +2025-05-16 22:19:37,182 - --- validate 
(epoch=125)----------- +2025-05-16 22:19:37,183 - 4952 samples (32 per mini-batch) +2025-05-16 22:19:37,187 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:19:49,375 - Epoch: [125][ 50/ 155] Loss 2.925209 mAP 0.468924 +2025-05-16 22:20:01,577 - Epoch: [125][ 100/ 155] Loss 2.960450 mAP 0.463338 +2025-05-16 22:20:14,455 - Epoch: [125][ 150/ 155] Loss 2.941130 mAP 0.473968 +2025-05-16 22:20:17,328 - Epoch: [125][ 155/ 155] Loss 2.942440 mAP 0.475401 +2025-05-16 22:20:17,365 - ==> mAP: 0.47540 Loss: 2.942 + +2025-05-16 22:20:17,374 - ==> Best [mAP: 0.482380 vloss: 2.944103 Params: 2177088 on epoch: 123] +2025-05-16 22:20:17,374 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:20:17,469 - + +2025-05-16 22:20:17,469 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:20:34,334 - Epoch: [126][ 50/ 518] Overall Loss 2.537385 Objective Loss 2.537385 LR 0.000063 Time 0.337225 +2025-05-16 22:20:50,208 - Epoch: [126][ 100/ 518] Overall Loss 2.539751 Objective Loss 2.539751 LR 0.000063 Time 0.327343 +2025-05-16 22:21:06,095 - Epoch: [126][ 150/ 518] Overall Loss 2.548884 Objective Loss 2.548884 LR 0.000063 Time 0.324133 +2025-05-16 22:21:22,020 - Epoch: [126][ 200/ 518] Overall Loss 2.568326 Objective Loss 2.568326 LR 0.000063 Time 0.322723 +2025-05-16 22:21:37,972 - Epoch: [126][ 250/ 518] Overall Loss 2.563169 Objective Loss 2.563169 LR 0.000063 Time 0.321984 +2025-05-16 22:21:53,927 - Epoch: [126][ 300/ 518] Overall Loss 2.562133 Objective Loss 2.562133 LR 0.000063 Time 0.321498 +2025-05-16 22:22:09,848 - Epoch: [126][ 350/ 518] Overall Loss 2.566434 Objective Loss 2.566434 LR 0.000063 Time 0.321058 +2025-05-16 22:22:25,777 - Epoch: [126][ 400/ 518] Overall Loss 2.567252 Objective Loss 2.567252 LR 0.000063 Time 0.320744 +2025-05-16 22:22:41,701 - Epoch: [126][ 450/ 518] Overall Loss 2.573498 Objective Loss 
2.573498 LR 0.000063 Time 0.320491 +2025-05-16 22:22:57,628 - Epoch: [126][ 500/ 518] Overall Loss 2.573526 Objective Loss 2.573526 LR 0.000063 Time 0.320293 +2025-05-16 22:23:03,267 - Epoch: [126][ 518/ 518] Overall Loss 2.569876 Objective Loss 2.569876 LR 0.000063 Time 0.320048 +2025-05-16 22:23:03,304 - --- validate (epoch=126)----------- +2025-05-16 22:23:03,305 - 4952 samples (32 per mini-batch) +2025-05-16 22:23:03,309 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:23:15,438 - Epoch: [126][ 50/ 155] Loss 2.932529 mAP 0.494627 +2025-05-16 22:23:28,166 - Epoch: [126][ 100/ 155] Loss 2.936950 mAP 0.494695 +2025-05-16 22:23:41,183 - Epoch: [126][ 150/ 155] Loss 2.946032 mAP 0.485048 +2025-05-16 22:23:44,155 - Epoch: [126][ 155/ 155] Loss 2.943982 mAP 0.484358 +2025-05-16 22:23:44,195 - ==> mAP: 0.48436 Loss: 2.944 + +2025-05-16 22:23:44,205 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:23:44,205 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:23:44,338 - + +2025-05-16 22:23:44,338 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:24:01,126 - Epoch: [127][ 50/ 518] Overall Loss 2.523040 Objective Loss 2.523040 LR 0.000063 Time 0.335674 +2025-05-16 22:24:17,005 - Epoch: [127][ 100/ 518] Overall Loss 2.533073 Objective Loss 2.533073 LR 0.000063 Time 0.326617 +2025-05-16 22:24:32,890 - Epoch: [127][ 150/ 518] Overall Loss 2.528633 Objective Loss 2.528633 LR 0.000063 Time 0.323640 +2025-05-16 22:24:48,788 - Epoch: [127][ 200/ 518] Overall Loss 2.539458 Objective Loss 2.539458 LR 0.000063 Time 0.322213 +2025-05-16 22:25:04,697 - Epoch: [127][ 250/ 518] Overall Loss 2.554023 Objective Loss 2.554023 LR 0.000063 Time 0.321401 +2025-05-16 22:25:20,613 - Epoch: [127][ 300/ 518] Overall Loss 2.558598 Objective Loss 2.558598 LR 0.000063 Time 0.320885 +2025-05-16 
22:25:36,545 - Epoch: [127][ 350/ 518] Overall Loss 2.560857 Objective Loss 2.560857 LR 0.000063 Time 0.320561 +2025-05-16 22:25:52,507 - Epoch: [127][ 400/ 518] Overall Loss 2.556112 Objective Loss 2.556112 LR 0.000063 Time 0.320395 +2025-05-16 22:26:08,454 - Epoch: [127][ 450/ 518] Overall Loss 2.559795 Objective Loss 2.559795 LR 0.000063 Time 0.320231 +2025-05-16 22:26:24,401 - Epoch: [127][ 500/ 518] Overall Loss 2.556724 Objective Loss 2.556724 LR 0.000063 Time 0.320100 +2025-05-16 22:26:30,018 - Epoch: [127][ 518/ 518] Overall Loss 2.556414 Objective Loss 2.556414 LR 0.000063 Time 0.319820 +2025-05-16 22:26:30,056 - --- validate (epoch=127)----------- +2025-05-16 22:26:30,057 - 4952 samples (32 per mini-batch) +2025-05-16 22:26:30,060 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:26:42,439 - Epoch: [127][ 50/ 155] Loss 2.899419 mAP 0.502887 +2025-05-16 22:26:54,977 - Epoch: [127][ 100/ 155] Loss 2.938210 mAP 0.478812 +2025-05-16 22:27:08,036 - Epoch: [127][ 150/ 155] Loss 2.942164 mAP 0.482751 +2025-05-16 22:27:10,966 - Epoch: [127][ 155/ 155] Loss 2.946166 mAP 0.483113 +2025-05-16 22:27:11,006 - ==> mAP: 0.48311 Loss: 2.946 + +2025-05-16 22:27:11,016 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:27:11,016 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:27:11,111 - + +2025-05-16 22:27:11,111 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:27:27,814 - Epoch: [128][ 50/ 518] Overall Loss 2.580904 Objective Loss 2.580904 LR 0.000063 Time 0.333975 +2025-05-16 22:27:43,687 - Epoch: [128][ 100/ 518] Overall Loss 2.604918 Objective Loss 2.604918 LR 0.000063 Time 0.325711 +2025-05-16 22:27:59,564 - Epoch: [128][ 150/ 518] Overall Loss 2.592400 Objective Loss 2.592400 LR 0.000063 Time 0.322979 +2025-05-16 22:28:15,488 - Epoch: [128][ 200/ 518] Overall 
Loss 2.584808 Objective Loss 2.584808 LR 0.000063 Time 0.321851 +2025-05-16 22:28:31,449 - Epoch: [128][ 250/ 518] Overall Loss 2.574115 Objective Loss 2.574115 LR 0.000063 Time 0.321320 +2025-05-16 22:28:47,397 - Epoch: [128][ 300/ 518] Overall Loss 2.587968 Objective Loss 2.587968 LR 0.000063 Time 0.320928 +2025-05-16 22:29:03,327 - Epoch: [128][ 350/ 518] Overall Loss 2.584443 Objective Loss 2.584443 LR 0.000063 Time 0.320592 +2025-05-16 22:29:19,262 - Epoch: [128][ 400/ 518] Overall Loss 2.582885 Objective Loss 2.582885 LR 0.000063 Time 0.320352 +2025-05-16 22:29:35,182 - Epoch: [128][ 450/ 518] Overall Loss 2.588775 Objective Loss 2.588775 LR 0.000063 Time 0.320133 +2025-05-16 22:29:51,142 - Epoch: [128][ 500/ 518] Overall Loss 2.590482 Objective Loss 2.590482 LR 0.000063 Time 0.320039 +2025-05-16 22:29:56,768 - Epoch: [128][ 518/ 518] Overall Loss 2.591745 Objective Loss 2.591745 LR 0.000063 Time 0.319778 +2025-05-16 22:29:56,803 - --- validate (epoch=128)----------- +2025-05-16 22:29:56,803 - 4952 samples (32 per mini-batch) +2025-05-16 22:29:56,807 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:30:08,964 - Epoch: [128][ 50/ 155] Loss 2.923728 mAP 0.498836 +2025-05-16 22:30:21,396 - Epoch: [128][ 100/ 155] Loss 2.925411 mAP 0.491069 +2025-05-16 22:30:34,082 - Epoch: [128][ 150/ 155] Loss 2.942003 mAP 0.478071 +2025-05-16 22:30:36,914 - Epoch: [128][ 155/ 155] Loss 2.940829 mAP 0.478569 +2025-05-16 22:30:36,954 - ==> mAP: 0.47857 Loss: 2.941 + +2025-05-16 22:30:36,963 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:30:36,963 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:30:37,057 - + +2025-05-16 22:30:37,057 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:30:53,918 - Epoch: [129][ 50/ 518] Overall Loss 2.590454 Objective Loss 2.590454 LR 
0.000063 Time 0.337132 +2025-05-16 22:31:09,808 - Epoch: [129][ 100/ 518] Overall Loss 2.593036 Objective Loss 2.593036 LR 0.000063 Time 0.327466 +2025-05-16 22:31:25,705 - Epoch: [129][ 150/ 518] Overall Loss 2.580020 Objective Loss 2.580020 LR 0.000063 Time 0.324286 +2025-05-16 22:31:41,627 - Epoch: [129][ 200/ 518] Overall Loss 2.584470 Objective Loss 2.584470 LR 0.000063 Time 0.322820 +2025-05-16 22:31:57,530 - Epoch: [129][ 250/ 518] Overall Loss 2.581803 Objective Loss 2.581803 LR 0.000063 Time 0.321865 +2025-05-16 22:32:13,421 - Epoch: [129][ 300/ 518] Overall Loss 2.597765 Objective Loss 2.597765 LR 0.000063 Time 0.321188 +2025-05-16 22:32:29,320 - Epoch: [129][ 350/ 518] Overall Loss 2.586429 Objective Loss 2.586429 LR 0.000063 Time 0.320727 +2025-05-16 22:32:45,239 - Epoch: [129][ 400/ 518] Overall Loss 2.587457 Objective Loss 2.587457 LR 0.000063 Time 0.320429 +2025-05-16 22:33:01,153 - Epoch: [129][ 450/ 518] Overall Loss 2.587404 Objective Loss 2.587404 LR 0.000063 Time 0.320189 +2025-05-16 22:33:17,080 - Epoch: [129][ 500/ 518] Overall Loss 2.585351 Objective Loss 2.585351 LR 0.000063 Time 0.320022 +2025-05-16 22:33:22,702 - Epoch: [129][ 518/ 518] Overall Loss 2.585798 Objective Loss 2.585798 LR 0.000063 Time 0.319753 +2025-05-16 22:33:22,739 - --- validate (epoch=129)----------- +2025-05-16 22:33:22,739 - 4952 samples (32 per mini-batch) +2025-05-16 22:33:22,743 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:33:34,927 - Epoch: [129][ 50/ 155] Loss 2.944706 mAP 0.478925 +2025-05-16 22:33:47,183 - Epoch: [129][ 100/ 155] Loss 2.937780 mAP 0.482512 +2025-05-16 22:34:00,092 - Epoch: [129][ 150/ 155] Loss 2.937670 mAP 0.482519 +2025-05-16 22:34:02,996 - Epoch: [129][ 155/ 155] Loss 2.942110 mAP 0.482067 +2025-05-16 22:34:03,035 - ==> mAP: 0.48207 Loss: 2.942 + +2025-05-16 22:34:03,045 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 
22:34:03,045 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:34:03,140 - + +2025-05-16 22:34:03,140 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:34:19,835 - Epoch: [130][ 50/ 518] Overall Loss 2.564582 Objective Loss 2.564582 LR 0.000063 Time 0.333836 +2025-05-16 22:34:35,735 - Epoch: [130][ 100/ 518] Overall Loss 2.579969 Objective Loss 2.579969 LR 0.000063 Time 0.325911 +2025-05-16 22:34:51,618 - Epoch: [130][ 150/ 518] Overall Loss 2.594931 Objective Loss 2.594931 LR 0.000063 Time 0.323153 +2025-05-16 22:35:07,525 - Epoch: [130][ 200/ 518] Overall Loss 2.583352 Objective Loss 2.583352 LR 0.000063 Time 0.321895 +2025-05-16 22:35:23,446 - Epoch: [130][ 250/ 518] Overall Loss 2.594753 Objective Loss 2.594753 LR 0.000063 Time 0.321197 +2025-05-16 22:35:39,385 - Epoch: [130][ 300/ 518] Overall Loss 2.605762 Objective Loss 2.605762 LR 0.000063 Time 0.320790 +2025-05-16 22:35:55,339 - Epoch: [130][ 350/ 518] Overall Loss 2.601257 Objective Loss 2.601257 LR 0.000063 Time 0.320545 +2025-05-16 22:36:11,285 - Epoch: [130][ 400/ 518] Overall Loss 2.600587 Objective Loss 2.600587 LR 0.000063 Time 0.320340 +2025-05-16 22:36:27,197 - Epoch: [130][ 450/ 518] Overall Loss 2.597302 Objective Loss 2.597302 LR 0.000063 Time 0.320104 +2025-05-16 22:36:43,106 - Epoch: [130][ 500/ 518] Overall Loss 2.591580 Objective Loss 2.591580 LR 0.000063 Time 0.319910 +2025-05-16 22:36:48,732 - Epoch: [130][ 518/ 518] Overall Loss 2.588147 Objective Loss 2.588147 LR 0.000063 Time 0.319652 +2025-05-16 22:36:48,770 - --- validate (epoch=130)----------- +2025-05-16 22:36:48,771 - 4952 samples (32 per mini-batch) +2025-05-16 22:36:48,774 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:37:00,882 - Epoch: [130][ 50/ 155] Loss 2.971532 mAP 0.476420 +2025-05-16 22:37:13,433 - Epoch: [130][ 100/ 155] Loss 2.942209 mAP 0.484646 
+2025-05-16 22:37:26,427 - Epoch: [130][ 150/ 155] Loss 2.932544 mAP 0.477544 +2025-05-16 22:37:29,380 - Epoch: [130][ 155/ 155] Loss 2.932370 mAP 0.476742 +2025-05-16 22:37:29,416 - ==> mAP: 0.47674 Loss: 2.932 + +2025-05-16 22:37:29,425 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:37:29,425 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:37:29,522 - + +2025-05-16 22:37:29,522 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:37:46,231 - Epoch: [131][ 50/ 518] Overall Loss 2.566144 Objective Loss 2.566144 LR 0.000063 Time 0.334121 +2025-05-16 22:38:02,139 - Epoch: [131][ 100/ 518] Overall Loss 2.596639 Objective Loss 2.596639 LR 0.000063 Time 0.326136 +2025-05-16 22:38:18,055 - Epoch: [131][ 150/ 518] Overall Loss 2.588732 Objective Loss 2.588732 LR 0.000063 Time 0.323521 +2025-05-16 22:38:33,969 - Epoch: [131][ 200/ 518] Overall Loss 2.585326 Objective Loss 2.585326 LR 0.000063 Time 0.322211 +2025-05-16 22:38:49,902 - Epoch: [131][ 250/ 518] Overall Loss 2.578153 Objective Loss 2.578153 LR 0.000063 Time 0.321495 +2025-05-16 22:39:05,848 - Epoch: [131][ 300/ 518] Overall Loss 2.575981 Objective Loss 2.575981 LR 0.000063 Time 0.321063 +2025-05-16 22:39:21,786 - Epoch: [131][ 350/ 518] Overall Loss 2.579453 Objective Loss 2.579453 LR 0.000063 Time 0.320732 +2025-05-16 22:39:37,709 - Epoch: [131][ 400/ 518] Overall Loss 2.582051 Objective Loss 2.582051 LR 0.000063 Time 0.320445 +2025-05-16 22:39:53,630 - Epoch: [131][ 450/ 518] Overall Loss 2.582243 Objective Loss 2.582243 LR 0.000063 Time 0.320217 +2025-05-16 22:40:09,543 - Epoch: [131][ 500/ 518] Overall Loss 2.581567 Objective Loss 2.581567 LR 0.000063 Time 0.320021 +2025-05-16 22:40:15,154 - Epoch: [131][ 518/ 518] Overall Loss 2.582653 Objective Loss 2.582653 LR 0.000063 Time 0.319731 +2025-05-16 22:40:15,188 - --- validate (epoch=131)----------- +2025-05-16 22:40:15,188 - 4952 
samples (32 per mini-batch) +2025-05-16 22:40:15,192 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:40:27,487 - Epoch: [131][ 50/ 155] Loss 2.987021 mAP 0.465700 +2025-05-16 22:40:39,812 - Epoch: [131][ 100/ 155] Loss 2.986222 mAP 0.462264 +2025-05-16 22:40:52,744 - Epoch: [131][ 150/ 155] Loss 2.952085 mAP 0.472514 +2025-05-16 22:40:55,694 - Epoch: [131][ 155/ 155] Loss 2.950765 mAP 0.475721 +2025-05-16 22:40:55,731 - ==> mAP: 0.47572 Loss: 2.951 + +2025-05-16 22:40:55,740 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:40:55,740 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:40:55,832 - + +2025-05-16 22:40:55,832 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:41:12,717 - Epoch: [132][ 50/ 518] Overall Loss 2.584458 Objective Loss 2.584458 LR 0.000063 Time 0.337628 +2025-05-16 22:41:28,628 - Epoch: [132][ 100/ 518] Overall Loss 2.598308 Objective Loss 2.598308 LR 0.000063 Time 0.327916 +2025-05-16 22:41:44,531 - Epoch: [132][ 150/ 518] Overall Loss 2.580448 Objective Loss 2.580448 LR 0.000063 Time 0.324628 +2025-05-16 22:42:00,463 - Epoch: [132][ 200/ 518] Overall Loss 2.575214 Objective Loss 2.575214 LR 0.000063 Time 0.323129 +2025-05-16 22:42:16,417 - Epoch: [132][ 250/ 518] Overall Loss 2.575537 Objective Loss 2.575537 LR 0.000063 Time 0.322314 +2025-05-16 22:42:32,347 - Epoch: [132][ 300/ 518] Overall Loss 2.577920 Objective Loss 2.577920 LR 0.000063 Time 0.321692 +2025-05-16 22:42:48,280 - Epoch: [132][ 350/ 518] Overall Loss 2.571985 Objective Loss 2.571985 LR 0.000063 Time 0.321257 +2025-05-16 22:43:04,220 - Epoch: [132][ 400/ 518] Overall Loss 2.581730 Objective Loss 2.581730 LR 0.000063 Time 0.320947 +2025-05-16 22:43:20,150 - Epoch: [132][ 450/ 518] Overall Loss 2.576897 Objective Loss 2.576897 LR 0.000063 Time 0.320683 +2025-05-16 
22:43:36,072 - Epoch: [132][ 500/ 518] Overall Loss 2.577942 Objective Loss 2.577942 LR 0.000063 Time 0.320457 +2025-05-16 22:43:41,699 - Epoch: [132][ 518/ 518] Overall Loss 2.579251 Objective Loss 2.579251 LR 0.000063 Time 0.320183 +2025-05-16 22:43:41,739 - --- validate (epoch=132)----------- +2025-05-16 22:43:41,740 - 4952 samples (32 per mini-batch) +2025-05-16 22:43:41,743 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:43:53,819 - Epoch: [132][ 50/ 155] Loss 2.964452 mAP 0.493079 +2025-05-16 22:44:06,208 - Epoch: [132][ 100/ 155] Loss 2.955625 mAP 0.481471 +2025-05-16 22:44:19,021 - Epoch: [132][ 150/ 155] Loss 2.950551 mAP 0.475373 +2025-05-16 22:44:21,813 - Epoch: [132][ 155/ 155] Loss 2.945418 mAP 0.477037 +2025-05-16 22:44:21,853 - ==> mAP: 0.47704 Loss: 2.945 + +2025-05-16 22:44:21,863 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:44:21,863 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:44:21,961 - + +2025-05-16 22:44:21,961 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:44:39,053 - Epoch: [133][ 50/ 518] Overall Loss 2.497396 Objective Loss 2.497396 LR 0.000063 Time 0.341785 +2025-05-16 22:44:54,975 - Epoch: [133][ 100/ 518] Overall Loss 2.562147 Objective Loss 2.562147 LR 0.000063 Time 0.330097 +2025-05-16 22:45:10,895 - Epoch: [133][ 150/ 518] Overall Loss 2.561090 Objective Loss 2.561090 LR 0.000063 Time 0.326196 +2025-05-16 22:45:26,799 - Epoch: [133][ 200/ 518] Overall Loss 2.558173 Objective Loss 2.558173 LR 0.000063 Time 0.324162 +2025-05-16 22:45:42,735 - Epoch: [133][ 250/ 518] Overall Loss 2.568132 Objective Loss 2.568132 LR 0.000063 Time 0.323068 +2025-05-16 22:45:58,668 - Epoch: [133][ 300/ 518] Overall Loss 2.569714 Objective Loss 2.569714 LR 0.000063 Time 0.322333 +2025-05-16 22:46:14,593 - Epoch: [133][ 350/ 518] Overall 
Loss 2.568100 Objective Loss 2.568100 LR 0.000063 Time 0.321781 +2025-05-16 22:46:30,525 - Epoch: [133][ 400/ 518] Overall Loss 2.578311 Objective Loss 2.578311 LR 0.000063 Time 0.321385 +2025-05-16 22:46:46,456 - Epoch: [133][ 450/ 518] Overall Loss 2.580332 Objective Loss 2.580332 LR 0.000063 Time 0.321076 +2025-05-16 22:47:02,385 - Epoch: [133][ 500/ 518] Overall Loss 2.580928 Objective Loss 2.580928 LR 0.000063 Time 0.320825 +2025-05-16 22:47:08,015 - Epoch: [133][ 518/ 518] Overall Loss 2.581064 Objective Loss 2.581064 LR 0.000063 Time 0.320544 +2025-05-16 22:47:08,053 - --- validate (epoch=133)----------- +2025-05-16 22:47:08,054 - 4952 samples (32 per mini-batch) +2025-05-16 22:47:08,058 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:47:20,060 - Epoch: [133][ 50/ 155] Loss 3.024360 mAP 0.465926 +2025-05-16 22:47:32,260 - Epoch: [133][ 100/ 155] Loss 2.968301 mAP 0.463688 +2025-05-16 22:47:45,240 - Epoch: [133][ 150/ 155] Loss 2.949000 mAP 0.475403 +2025-05-16 22:47:48,115 - Epoch: [133][ 155/ 155] Loss 2.948199 mAP 0.477829 +2025-05-16 22:47:48,153 - ==> mAP: 0.47783 Loss: 2.948 + +2025-05-16 22:47:48,162 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:47:48,162 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:47:48,257 - + +2025-05-16 22:47:48,257 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:48:05,042 - Epoch: [134][ 50/ 518] Overall Loss 2.526443 Objective Loss 2.526443 LR 0.000063 Time 0.335628 +2025-05-16 22:48:20,930 - Epoch: [134][ 100/ 518] Overall Loss 2.563114 Objective Loss 2.563114 LR 0.000063 Time 0.326685 +2025-05-16 22:48:36,796 - Epoch: [134][ 150/ 518] Overall Loss 2.583073 Objective Loss 2.583073 LR 0.000063 Time 0.323555 +2025-05-16 22:48:52,680 - Epoch: [134][ 200/ 518] Overall Loss 2.586313 Objective Loss 2.586313 LR 
0.000063 Time 0.322081 +2025-05-16 22:49:08,618 - Epoch: [134][ 250/ 518] Overall Loss 2.572685 Objective Loss 2.572685 LR 0.000063 Time 0.321416 +2025-05-16 22:49:24,552 - Epoch: [134][ 300/ 518] Overall Loss 2.572481 Objective Loss 2.572481 LR 0.000063 Time 0.320957 +2025-05-16 22:49:40,506 - Epoch: [134][ 350/ 518] Overall Loss 2.572931 Objective Loss 2.572931 LR 0.000063 Time 0.320689 +2025-05-16 22:49:56,428 - Epoch: [134][ 400/ 518] Overall Loss 2.573848 Objective Loss 2.573848 LR 0.000063 Time 0.320405 +2025-05-16 22:50:12,368 - Epoch: [134][ 450/ 518] Overall Loss 2.578089 Objective Loss 2.578089 LR 0.000063 Time 0.320223 +2025-05-16 22:50:28,310 - Epoch: [134][ 500/ 518] Overall Loss 2.583006 Objective Loss 2.583006 LR 0.000063 Time 0.320086 +2025-05-16 22:50:33,938 - Epoch: [134][ 518/ 518] Overall Loss 2.586299 Objective Loss 2.586299 LR 0.000063 Time 0.319826 +2025-05-16 22:50:33,977 - --- validate (epoch=134)----------- +2025-05-16 22:50:33,978 - 4952 samples (32 per mini-batch) +2025-05-16 22:50:33,981 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:50:46,127 - Epoch: [134][ 50/ 155] Loss 2.953335 mAP 0.489067 +2025-05-16 22:50:58,332 - Epoch: [134][ 100/ 155] Loss 2.944413 mAP 0.486651 +2025-05-16 22:51:11,359 - Epoch: [134][ 150/ 155] Loss 2.942832 mAP 0.480337 +2025-05-16 22:51:14,236 - Epoch: [134][ 155/ 155] Loss 2.939562 mAP 0.480840 +2025-05-16 22:51:14,276 - ==> mAP: 0.48084 Loss: 2.940 + +2025-05-16 22:51:14,286 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:51:14,286 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:51:14,380 - + +2025-05-16 22:51:14,380 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:51:31,202 - Epoch: [135][ 50/ 518] Overall Loss 2.576244 Objective Loss 2.576244 LR 0.000063 Time 0.336377 +2025-05-16 22:51:47,084 
- Epoch: [135][ 100/ 518] Overall Loss 2.585455 Objective Loss 2.585455 LR 0.000063 Time 0.326990 +2025-05-16 22:52:02,959 - Epoch: [135][ 150/ 518] Overall Loss 2.580149 Objective Loss 2.580149 LR 0.000063 Time 0.323822 +2025-05-16 22:52:18,868 - Epoch: [135][ 200/ 518] Overall Loss 2.575568 Objective Loss 2.575568 LR 0.000063 Time 0.322409 +2025-05-16 22:52:34,823 - Epoch: [135][ 250/ 518] Overall Loss 2.590829 Objective Loss 2.590829 LR 0.000063 Time 0.321742 +2025-05-16 22:52:50,758 - Epoch: [135][ 300/ 518] Overall Loss 2.601667 Objective Loss 2.601667 LR 0.000063 Time 0.321233 +2025-05-16 22:53:06,690 - Epoch: [135][ 350/ 518] Overall Loss 2.592708 Objective Loss 2.592708 LR 0.000063 Time 0.320863 +2025-05-16 22:53:22,634 - Epoch: [135][ 400/ 518] Overall Loss 2.591312 Objective Loss 2.591312 LR 0.000063 Time 0.320613 +2025-05-16 22:53:38,552 - Epoch: [135][ 450/ 518] Overall Loss 2.594829 Objective Loss 2.594829 LR 0.000063 Time 0.320361 +2025-05-16 22:53:54,466 - Epoch: [135][ 500/ 518] Overall Loss 2.588321 Objective Loss 2.588321 LR 0.000063 Time 0.320150 +2025-05-16 22:54:00,076 - Epoch: [135][ 518/ 518] Overall Loss 2.585286 Objective Loss 2.585286 LR 0.000063 Time 0.319854 +2025-05-16 22:54:00,114 - --- validate (epoch=135)----------- +2025-05-16 22:54:00,115 - 4952 samples (32 per mini-batch) +2025-05-16 22:54:00,118 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:54:12,206 - Epoch: [135][ 50/ 155] Loss 2.936704 mAP 0.483875 +2025-05-16 22:54:24,650 - Epoch: [135][ 100/ 155] Loss 2.941248 mAP 0.480248 +2025-05-16 22:54:37,302 - Epoch: [135][ 150/ 155] Loss 2.925974 mAP 0.475719 +2025-05-16 22:54:40,107 - Epoch: [135][ 155/ 155] Loss 2.931252 mAP 0.475026 +2025-05-16 22:54:40,146 - ==> mAP: 0.47503 Loss: 2.931 + +2025-05-16 22:54:40,155 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:54:40,155 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:54:40,250 - + +2025-05-16 22:54:40,250 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:54:57,128 - Epoch: [136][ 50/ 518] Overall Loss 2.584369 Objective Loss 2.584369 LR 0.000063 Time 0.337508 +2025-05-16 22:55:12,996 - Epoch: [136][ 100/ 518] Overall Loss 2.583496 Objective Loss 2.583496 LR 0.000063 Time 0.327424 +2025-05-16 22:55:28,873 - Epoch: [136][ 150/ 518] Overall Loss 2.566606 Objective Loss 2.566606 LR 0.000063 Time 0.324120 +2025-05-16 22:55:44,770 - Epoch: [136][ 200/ 518] Overall Loss 2.570770 Objective Loss 2.570770 LR 0.000063 Time 0.322569 +2025-05-16 22:56:00,671 - Epoch: [136][ 250/ 518] Overall Loss 2.567931 Objective Loss 2.567931 LR 0.000063 Time 0.321655 +2025-05-16 22:56:16,587 - Epoch: [136][ 300/ 518] Overall Loss 2.569235 Objective Loss 2.569235 LR 0.000063 Time 0.321096 +2025-05-16 22:56:32,519 - Epoch: [136][ 350/ 518] Overall Loss 2.576847 Objective Loss 2.576847 LR 0.000063 Time 0.320743 +2025-05-16 22:56:48,443 - Epoch: [136][ 400/ 518] Overall Loss 2.566984 Objective Loss 2.566984 LR 0.000063 Time 0.320460 +2025-05-16 22:57:04,364 - Epoch: [136][ 450/ 518] Overall Loss 2.567422 Objective Loss 2.567422 LR 0.000063 Time 0.320229 +2025-05-16 22:57:20,303 - Epoch: [136][ 500/ 518] Overall Loss 2.564154 Objective Loss 2.564154 LR 0.000063 Time 0.320084 +2025-05-16 22:57:25,925 - Epoch: [136][ 518/ 518] Overall Loss 2.566191 Objective Loss 2.566191 LR 0.000063 Time 0.319814 +2025-05-16 22:57:25,961 - --- validate (epoch=136)----------- +2025-05-16 22:57:25,962 - 4952 samples (32 per mini-batch) +2025-05-16 22:57:25,965 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 22:57:38,171 - Epoch: [136][ 50/ 155] Loss 2.924480 mAP 0.487542 +2025-05-16 22:57:50,443 - Epoch: [136][ 100/ 155] Loss 2.938386 mAP 0.483904 +2025-05-16 22:58:03,626 - Epoch: [136][ 
150/ 155] Loss 2.938501 mAP 0.478192 +2025-05-16 22:58:06,595 - Epoch: [136][ 155/ 155] Loss 2.940689 mAP 0.479591 +2025-05-16 22:58:06,635 - ==> mAP: 0.47959 Loss: 2.941 + +2025-05-16 22:58:06,644 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 22:58:06,644 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 22:58:06,741 - + +2025-05-16 22:58:06,741 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 22:58:23,602 - Epoch: [137][ 50/ 518] Overall Loss 2.598416 Objective Loss 2.598416 LR 0.000063 Time 0.337155 +2025-05-16 22:58:39,472 - Epoch: [137][ 100/ 518] Overall Loss 2.598303 Objective Loss 2.598303 LR 0.000063 Time 0.327266 +2025-05-16 22:58:55,361 - Epoch: [137][ 150/ 518] Overall Loss 2.601817 Objective Loss 2.601817 LR 0.000063 Time 0.324098 +2025-05-16 22:59:11,274 - Epoch: [137][ 200/ 518] Overall Loss 2.591650 Objective Loss 2.591650 LR 0.000063 Time 0.322631 +2025-05-16 22:59:27,170 - Epoch: [137][ 250/ 518] Overall Loss 2.586440 Objective Loss 2.586440 LR 0.000063 Time 0.321686 +2025-05-16 22:59:43,083 - Epoch: [137][ 300/ 518] Overall Loss 2.589207 Objective Loss 2.589207 LR 0.000063 Time 0.321112 +2025-05-16 22:59:59,009 - Epoch: [137][ 350/ 518] Overall Loss 2.591176 Objective Loss 2.591176 LR 0.000063 Time 0.320739 +2025-05-16 23:00:14,934 - Epoch: [137][ 400/ 518] Overall Loss 2.584822 Objective Loss 2.584822 LR 0.000063 Time 0.320456 +2025-05-16 23:00:30,860 - Epoch: [137][ 450/ 518] Overall Loss 2.586368 Objective Loss 2.586368 LR 0.000063 Time 0.320238 +2025-05-16 23:00:46,787 - Epoch: [137][ 500/ 518] Overall Loss 2.579512 Objective Loss 2.579512 LR 0.000063 Time 0.320066 +2025-05-16 23:00:52,422 - Epoch: [137][ 518/ 518] Overall Loss 2.579512 Objective Loss 2.579512 LR 0.000063 Time 0.319823 +2025-05-16 23:00:52,458 - --- validate (epoch=137)----------- +2025-05-16 23:00:52,458 - 4952 samples (32 per mini-batch) +2025-05-16 
23:00:52,462 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:01:04,588 - Epoch: [137][ 50/ 155] Loss 2.911134 mAP 0.489923 +2025-05-16 23:01:16,991 - Epoch: [137][ 100/ 155] Loss 2.927261 mAP 0.481444 +2025-05-16 23:01:29,866 - Epoch: [137][ 150/ 155] Loss 2.942454 mAP 0.477304 +2025-05-16 23:01:32,736 - Epoch: [137][ 155/ 155] Loss 2.934706 mAP 0.477018 +2025-05-16 23:01:32,774 - ==> mAP: 0.47702 Loss: 2.935 + +2025-05-16 23:01:32,784 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 23:01:32,784 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:01:32,879 - + +2025-05-16 23:01:32,879 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:01:49,710 - Epoch: [138][ 50/ 518] Overall Loss 2.530650 Objective Loss 2.530650 LR 0.000063 Time 0.336556 +2025-05-16 23:02:05,608 - Epoch: [138][ 100/ 518] Overall Loss 2.557451 Objective Loss 2.557451 LR 0.000063 Time 0.327254 +2025-05-16 23:02:21,539 - Epoch: [138][ 150/ 518] Overall Loss 2.551020 Objective Loss 2.551020 LR 0.000063 Time 0.324372 +2025-05-16 23:02:37,476 - Epoch: [138][ 200/ 518] Overall Loss 2.562215 Objective Loss 2.562215 LR 0.000063 Time 0.322960 +2025-05-16 23:02:53,418 - Epoch: [138][ 250/ 518] Overall Loss 2.565845 Objective Loss 2.565845 LR 0.000063 Time 0.322134 +2025-05-16 23:03:09,347 - Epoch: [138][ 300/ 518] Overall Loss 2.562475 Objective Loss 2.562475 LR 0.000063 Time 0.321538 +2025-05-16 23:03:25,259 - Epoch: [138][ 350/ 518] Overall Loss 2.571006 Objective Loss 2.571006 LR 0.000063 Time 0.321064 +2025-05-16 23:03:41,183 - Epoch: [138][ 400/ 518] Overall Loss 2.566769 Objective Loss 2.566769 LR 0.000063 Time 0.320739 +2025-05-16 23:03:57,132 - Epoch: [138][ 450/ 518] Overall Loss 2.565706 Objective Loss 2.565706 LR 0.000063 Time 0.320541 +2025-05-16 23:04:13,075 - Epoch: [138][ 500/ 518] Overall 
Loss 2.574705 Objective Loss 2.574705 LR 0.000063 Time 0.320370 +2025-05-16 23:04:18,712 - Epoch: [138][ 518/ 518] Overall Loss 2.573715 Objective Loss 2.573715 LR 0.000063 Time 0.320119 +2025-05-16 23:04:18,747 - --- validate (epoch=138)----------- +2025-05-16 23:04:18,748 - 4952 samples (32 per mini-batch) +2025-05-16 23:04:18,751 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:04:30,868 - Epoch: [138][ 50/ 155] Loss 2.926617 mAP 0.480666 +2025-05-16 23:04:43,429 - Epoch: [138][ 100/ 155] Loss 2.930419 mAP 0.479373 +2025-05-16 23:04:56,426 - Epoch: [138][ 150/ 155] Loss 2.935787 mAP 0.478102 +2025-05-16 23:04:59,377 - Epoch: [138][ 155/ 155] Loss 2.936017 mAP 0.477393 +2025-05-16 23:04:59,414 - ==> mAP: 0.47739 Loss: 2.936 + +2025-05-16 23:04:59,424 - ==> Best [mAP: 0.484358 vloss: 2.943982 Params: 2177088 on epoch: 126] +2025-05-16 23:04:59,424 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:04:59,521 - + +2025-05-16 23:04:59,521 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:05:16,343 - Epoch: [139][ 50/ 518] Overall Loss 2.541017 Objective Loss 2.541017 LR 0.000063 Time 0.336362 +2025-05-16 23:05:32,240 - Epoch: [139][ 100/ 518] Overall Loss 2.565307 Objective Loss 2.565307 LR 0.000063 Time 0.327142 +2025-05-16 23:05:48,136 - Epoch: [139][ 150/ 518] Overall Loss 2.575364 Objective Loss 2.575364 LR 0.000063 Time 0.324062 +2025-05-16 23:06:04,033 - Epoch: [139][ 200/ 518] Overall Loss 2.591677 Objective Loss 2.591677 LR 0.000063 Time 0.322526 +2025-05-16 23:06:19,946 - Epoch: [139][ 250/ 518] Overall Loss 2.589464 Objective Loss 2.589464 LR 0.000063 Time 0.321672 +2025-05-16 23:06:35,857 - Epoch: [139][ 300/ 518] Overall Loss 2.596339 Objective Loss 2.596339 LR 0.000063 Time 0.321092 +2025-05-16 23:06:51,768 - Epoch: [139][ 350/ 518] Overall Loss 2.597169 Objective Loss 2.597169 LR 
0.000063 Time 0.320679 +2025-05-16 23:07:07,676 - Epoch: [139][ 400/ 518] Overall Loss 2.602509 Objective Loss 2.602509 LR 0.000063 Time 0.320362 +2025-05-16 23:07:23,607 - Epoch: [139][ 450/ 518] Overall Loss 2.593573 Objective Loss 2.593573 LR 0.000063 Time 0.320165 +2025-05-16 23:07:39,536 - Epoch: [139][ 500/ 518] Overall Loss 2.586876 Objective Loss 2.586876 LR 0.000063 Time 0.320005 +2025-05-16 23:07:45,169 - Epoch: [139][ 518/ 518] Overall Loss 2.588096 Objective Loss 2.588096 LR 0.000063 Time 0.319759 +2025-05-16 23:07:45,209 - --- validate (epoch=139)----------- +2025-05-16 23:07:45,210 - 4952 samples (32 per mini-batch) +2025-05-16 23:07:45,213 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:07:57,759 - Epoch: [139][ 50/ 155] Loss 2.903033 mAP 0.497188 +2025-05-16 23:08:10,187 - Epoch: [139][ 100/ 155] Loss 2.901295 mAP 0.495486 +2025-05-16 23:08:23,248 - Epoch: [139][ 150/ 155] Loss 2.923794 mAP 0.485438 +2025-05-16 23:08:26,230 - Epoch: [139][ 155/ 155] Loss 2.929144 mAP 0.486261 +2025-05-16 23:08:26,270 - ==> mAP: 0.48626 Loss: 2.929 + +2025-05-16 23:08:26,280 - ==> Best [mAP: 0.486261 vloss: 2.929144 Params: 2177088 on epoch: 139] +2025-05-16 23:08:26,280 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:08:26,401 - + +2025-05-16 23:08:26,401 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:08:43,337 - Epoch: [140][ 50/ 518] Overall Loss 2.602111 Objective Loss 2.602111 LR 0.000063 Time 0.338642 +2025-05-16 23:08:59,219 - Epoch: [140][ 100/ 518] Overall Loss 2.554589 Objective Loss 2.554589 LR 0.000063 Time 0.328139 +2025-05-16 23:09:15,105 - Epoch: [140][ 150/ 518] Overall Loss 2.576470 Objective Loss 2.576470 LR 0.000063 Time 0.324657 +2025-05-16 23:09:30,994 - Epoch: [140][ 200/ 518] Overall Loss 2.581192 Objective Loss 2.581192 LR 0.000063 Time 0.322929 +2025-05-16 23:09:46,900 
- Epoch: [140][ 250/ 518] Overall Loss 2.584784 Objective Loss 2.584784 LR 0.000063 Time 0.321966 +2025-05-16 23:10:02,839 - Epoch: [140][ 300/ 518] Overall Loss 2.586636 Objective Loss 2.586636 LR 0.000063 Time 0.321432 +2025-05-16 23:10:18,772 - Epoch: [140][ 350/ 518] Overall Loss 2.574168 Objective Loss 2.574168 LR 0.000063 Time 0.321033 +2025-05-16 23:10:34,719 - Epoch: [140][ 400/ 518] Overall Loss 2.578078 Objective Loss 2.578078 LR 0.000063 Time 0.320770 +2025-05-16 23:10:50,665 - Epoch: [140][ 450/ 518] Overall Loss 2.570672 Objective Loss 2.570672 LR 0.000063 Time 0.320563 +2025-05-16 23:11:06,615 - Epoch: [140][ 500/ 518] Overall Loss 2.569732 Objective Loss 2.569732 LR 0.000063 Time 0.320406 +2025-05-16 23:11:12,251 - Epoch: [140][ 518/ 518] Overall Loss 2.572264 Objective Loss 2.572264 LR 0.000063 Time 0.320151 +2025-05-16 23:11:12,290 - --- validate (epoch=140)----------- +2025-05-16 23:11:12,291 - 4952 samples (32 per mini-batch) +2025-05-16 23:11:12,294 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:11:24,306 - Epoch: [140][ 50/ 155] Loss 2.915376 mAP 0.492858 +2025-05-16 23:11:36,868 - Epoch: [140][ 100/ 155] Loss 2.931794 mAP 0.489425 +2025-05-16 23:11:49,834 - Epoch: [140][ 150/ 155] Loss 2.930323 mAP 0.480686 +2025-05-16 23:11:52,767 - Epoch: [140][ 155/ 155] Loss 2.932750 mAP 0.479474 +2025-05-16 23:11:52,807 - ==> mAP: 0.47947 Loss: 2.933 + +2025-05-16 23:11:52,817 - ==> Best [mAP: 0.486261 vloss: 2.929144 Params: 2177088 on epoch: 139] +2025-05-16 23:11:52,817 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:11:52,914 - + +2025-05-16 23:11:52,915 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:12:09,752 - Epoch: [141][ 50/ 518] Overall Loss 2.588782 Objective Loss 2.588782 LR 0.000063 Time 0.336686 +2025-05-16 23:12:25,641 - Epoch: [141][ 100/ 518] Overall Loss 2.573076 
Objective Loss 2.573076 LR 0.000063 Time 0.327219 +2025-05-16 23:12:41,547 - Epoch: [141][ 150/ 518] Overall Loss 2.577559 Objective Loss 2.577559 LR 0.000063 Time 0.324182 +2025-05-16 23:12:57,475 - Epoch: [141][ 200/ 518] Overall Loss 2.563481 Objective Loss 2.563481 LR 0.000063 Time 0.322772 +2025-05-16 23:13:13,413 - Epoch: [141][ 250/ 518] Overall Loss 2.566811 Objective Loss 2.566811 LR 0.000063 Time 0.321969 +2025-05-16 23:13:29,331 - Epoch: [141][ 300/ 518] Overall Loss 2.573077 Objective Loss 2.573077 LR 0.000063 Time 0.321366 +2025-05-16 23:13:45,242 - Epoch: [141][ 350/ 518] Overall Loss 2.580624 Objective Loss 2.580624 LR 0.000063 Time 0.320912 +2025-05-16 23:14:01,166 - Epoch: [141][ 400/ 518] Overall Loss 2.571535 Objective Loss 2.571535 LR 0.000063 Time 0.320605 +2025-05-16 23:14:17,091 - Epoch: [141][ 450/ 518] Overall Loss 2.579868 Objective Loss 2.579868 LR 0.000063 Time 0.320370 +2025-05-16 23:14:33,016 - Epoch: [141][ 500/ 518] Overall Loss 2.579073 Objective Loss 2.579073 LR 0.000063 Time 0.320181 +2025-05-16 23:14:38,642 - Epoch: [141][ 518/ 518] Overall Loss 2.582413 Objective Loss 2.582413 LR 0.000063 Time 0.319914 +2025-05-16 23:14:38,678 - --- validate (epoch=141)----------- +2025-05-16 23:14:38,678 - 4952 samples (32 per mini-batch) +2025-05-16 23:14:38,682 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:14:50,897 - Epoch: [141][ 50/ 155] Loss 2.922532 mAP 0.488312 +2025-05-16 23:15:03,225 - Epoch: [141][ 100/ 155] Loss 2.921572 mAP 0.483860 +2025-05-16 23:15:16,173 - Epoch: [141][ 150/ 155] Loss 2.929072 mAP 0.483200 +2025-05-16 23:15:19,063 - Epoch: [141][ 155/ 155] Loss 2.925565 mAP 0.483570 +2025-05-16 23:15:19,103 - ==> mAP: 0.48357 Loss: 2.926 + +2025-05-16 23:15:19,112 - ==> Best [mAP: 0.486261 vloss: 2.929144 Params: 2177088 on epoch: 139] +2025-05-16 23:15:19,112 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:15:19,207 - + +2025-05-16 23:15:19,207 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:15:35,946 - Epoch: [142][ 50/ 518] Overall Loss 2.592793 Objective Loss 2.592793 LR 0.000063 Time 0.334709 +2025-05-16 23:15:51,837 - Epoch: [142][ 100/ 518] Overall Loss 2.571365 Objective Loss 2.571365 LR 0.000063 Time 0.326258 +2025-05-16 23:16:07,746 - Epoch: [142][ 150/ 518] Overall Loss 2.571135 Objective Loss 2.571135 LR 0.000063 Time 0.323561 +2025-05-16 23:16:23,692 - Epoch: [142][ 200/ 518] Overall Loss 2.564149 Objective Loss 2.564149 LR 0.000063 Time 0.322398 +2025-05-16 23:16:39,607 - Epoch: [142][ 250/ 518] Overall Loss 2.568908 Objective Loss 2.568908 LR 0.000063 Time 0.321577 +2025-05-16 23:16:55,517 - Epoch: [142][ 300/ 518] Overall Loss 2.572263 Objective Loss 2.572263 LR 0.000063 Time 0.321009 +2025-05-16 23:17:11,424 - Epoch: [142][ 350/ 518] Overall Loss 2.575895 Objective Loss 2.575895 LR 0.000063 Time 0.320596 +2025-05-16 23:17:27,347 - Epoch: [142][ 400/ 518] Overall Loss 2.572378 Objective Loss 2.572378 LR 0.000063 Time 0.320328 +2025-05-16 23:17:43,246 - Epoch: [142][ 450/ 518] Overall Loss 2.568396 Objective Loss 2.568396 LR 0.000063 Time 0.320065 +2025-05-16 23:17:59,156 - Epoch: [142][ 500/ 518] Overall Loss 2.571477 Objective Loss 2.571477 LR 0.000063 Time 0.319875 +2025-05-16 23:18:04,757 - Epoch: [142][ 518/ 518] Overall Loss 2.570699 Objective Loss 2.570699 LR 0.000063 Time 0.319573 +2025-05-16 23:18:04,792 - --- validate (epoch=142)----------- +2025-05-16 23:18:04,793 - 4952 samples (32 per mini-batch) +2025-05-16 23:18:04,796 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:18:17,072 - Epoch: [142][ 50/ 155] Loss 2.906467 mAP 0.496436 +2025-05-16 23:18:29,761 - Epoch: [142][ 100/ 155] Loss 2.925596 mAP 0.486446 +2025-05-16 23:18:42,808 - Epoch: [142][ 
150/ 155] Loss 2.919947 mAP 0.485235 +2025-05-16 23:18:45,758 - Epoch: [142][ 155/ 155] Loss 2.926558 mAP 0.483932 +2025-05-16 23:18:45,797 - ==> mAP: 0.48393 Loss: 2.927 + +2025-05-16 23:18:45,806 - ==> Best [mAP: 0.486261 vloss: 2.929144 Params: 2177088 on epoch: 139] +2025-05-16 23:18:45,806 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:18:45,904 - + +2025-05-16 23:18:45,905 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:19:02,707 - Epoch: [143][ 50/ 518] Overall Loss 2.535291 Objective Loss 2.535291 LR 0.000063 Time 0.335979 +2025-05-16 23:19:18,591 - Epoch: [143][ 100/ 518] Overall Loss 2.565924 Objective Loss 2.565924 LR 0.000063 Time 0.326822 +2025-05-16 23:19:34,482 - Epoch: [143][ 150/ 518] Overall Loss 2.591394 Objective Loss 2.591394 LR 0.000063 Time 0.323809 +2025-05-16 23:19:50,390 - Epoch: [143][ 200/ 518] Overall Loss 2.592684 Objective Loss 2.592684 LR 0.000063 Time 0.322392 +2025-05-16 23:20:06,314 - Epoch: [143][ 250/ 518] Overall Loss 2.590852 Objective Loss 2.590852 LR 0.000063 Time 0.321608 +2025-05-16 23:20:22,254 - Epoch: [143][ 300/ 518] Overall Loss 2.580722 Objective Loss 2.580722 LR 0.000063 Time 0.321138 +2025-05-16 23:20:38,190 - Epoch: [143][ 350/ 518] Overall Loss 2.581395 Objective Loss 2.581395 LR 0.000063 Time 0.320789 +2025-05-16 23:20:54,151 - Epoch: [143][ 400/ 518] Overall Loss 2.571960 Objective Loss 2.571960 LR 0.000063 Time 0.320592 +2025-05-16 23:21:10,093 - Epoch: [143][ 450/ 518] Overall Loss 2.575663 Objective Loss 2.575663 LR 0.000063 Time 0.320396 +2025-05-16 23:21:26,030 - Epoch: [143][ 500/ 518] Overall Loss 2.573222 Objective Loss 2.573222 LR 0.000063 Time 0.320228 +2025-05-16 23:21:31,668 - Epoch: [143][ 518/ 518] Overall Loss 2.574657 Objective Loss 2.574657 LR 0.000063 Time 0.319985 +2025-05-16 23:21:31,707 - --- validate (epoch=143)----------- +2025-05-16 23:21:31,707 - 4952 samples (32 per mini-batch) +2025-05-16 
23:21:31,711 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:21:44,058 - Epoch: [143][ 50/ 155] Loss 2.920326 mAP 0.489088 +2025-05-16 23:21:56,507 - Epoch: [143][ 100/ 155] Loss 2.942806 mAP 0.485641 +2025-05-16 23:22:09,625 - Epoch: [143][ 150/ 155] Loss 2.928631 mAP 0.484332 +2025-05-16 23:22:12,595 - Epoch: [143][ 155/ 155] Loss 2.934673 mAP 0.482805 +2025-05-16 23:22:12,635 - ==> mAP: 0.48281 Loss: 2.935 + +2025-05-16 23:22:12,645 - ==> Best [mAP: 0.486261 vloss: 2.929144 Params: 2177088 on epoch: 139] +2025-05-16 23:22:12,645 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:22:12,742 - + +2025-05-16 23:22:12,743 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:22:29,546 - Epoch: [144][ 50/ 518] Overall Loss 2.550073 Objective Loss 2.550073 LR 0.000063 Time 0.335989 +2025-05-16 23:22:45,400 - Epoch: [144][ 100/ 518] Overall Loss 2.553066 Objective Loss 2.553066 LR 0.000063 Time 0.326522 +2025-05-16 23:23:01,293 - Epoch: [144][ 150/ 518] Overall Loss 2.555767 Objective Loss 2.555767 LR 0.000063 Time 0.323628 +2025-05-16 23:23:17,193 - Epoch: [144][ 200/ 518] Overall Loss 2.572863 Objective Loss 2.572863 LR 0.000063 Time 0.322220 +2025-05-16 23:23:33,089 - Epoch: [144][ 250/ 518] Overall Loss 2.564314 Objective Loss 2.564314 LR 0.000063 Time 0.321354 +2025-05-16 23:23:49,001 - Epoch: [144][ 300/ 518] Overall Loss 2.577144 Objective Loss 2.577144 LR 0.000063 Time 0.320831 +2025-05-16 23:24:04,906 - Epoch: [144][ 350/ 518] Overall Loss 2.569675 Objective Loss 2.569675 LR 0.000063 Time 0.320437 +2025-05-16 23:24:20,841 - Epoch: [144][ 400/ 518] Overall Loss 2.580923 Objective Loss 2.580923 LR 0.000063 Time 0.320219 +2025-05-16 23:24:36,768 - Epoch: [144][ 450/ 518] Overall Loss 2.578932 Objective Loss 2.578932 LR 0.000063 Time 0.320031 +2025-05-16 23:24:52,686 - Epoch: [144][ 500/ 518] Overall 
Loss 2.579180 Objective Loss 2.579180 LR 0.000063 Time 0.319861 +2025-05-16 23:24:58,321 - Epoch: [144][ 518/ 518] Overall Loss 2.576870 Objective Loss 2.576870 LR 0.000063 Time 0.319624 +2025-05-16 23:24:58,358 - --- validate (epoch=144)----------- +2025-05-16 23:24:58,359 - 4952 samples (32 per mini-batch) +2025-05-16 23:24:58,362 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:25:10,515 - Epoch: [144][ 50/ 155] Loss 2.967786 mAP 0.476419 +2025-05-16 23:25:23,075 - Epoch: [144][ 100/ 155] Loss 2.945419 mAP 0.478419 +2025-05-16 23:25:36,034 - Epoch: [144][ 150/ 155] Loss 2.924686 mAP 0.479177 +2025-05-16 23:25:39,007 - Epoch: [144][ 155/ 155] Loss 2.925453 mAP 0.480513 +2025-05-16 23:25:39,046 - ==> mAP: 0.48051 Loss: 2.925 + +2025-05-16 23:25:39,056 - ==> Best [mAP: 0.486261 vloss: 2.929144 Params: 2177088 on epoch: 139] +2025-05-16 23:25:39,056 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:25:39,153 - + +2025-05-16 23:25:39,154 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:25:55,891 - Epoch: [145][ 50/ 518] Overall Loss 2.590388 Objective Loss 2.590388 LR 0.000063 Time 0.334681 +2025-05-16 23:26:11,772 - Epoch: [145][ 100/ 518] Overall Loss 2.567871 Objective Loss 2.567871 LR 0.000063 Time 0.326142 +2025-05-16 23:26:27,664 - Epoch: [145][ 150/ 518] Overall Loss 2.569857 Objective Loss 2.569857 LR 0.000063 Time 0.323370 +2025-05-16 23:26:43,582 - Epoch: [145][ 200/ 518] Overall Loss 2.571537 Objective Loss 2.571537 LR 0.000063 Time 0.322110 +2025-05-16 23:26:59,518 - Epoch: [145][ 250/ 518] Overall Loss 2.568623 Objective Loss 2.568623 LR 0.000063 Time 0.321433 +2025-05-16 23:27:15,462 - Epoch: [145][ 300/ 518] Overall Loss 2.572691 Objective Loss 2.572691 LR 0.000063 Time 0.321005 +2025-05-16 23:27:31,415 - Epoch: [145][ 350/ 518] Overall Loss 2.579760 Objective Loss 2.579760 LR 
0.000063 Time 0.320723 +2025-05-16 23:27:47,339 - Epoch: [145][ 400/ 518] Overall Loss 2.580740 Objective Loss 2.580740 LR 0.000063 Time 0.320441 +2025-05-16 23:28:03,284 - Epoch: [145][ 450/ 518] Overall Loss 2.577201 Objective Loss 2.577201 LR 0.000063 Time 0.320269 +2025-05-16 23:28:19,224 - Epoch: [145][ 500/ 518] Overall Loss 2.577602 Objective Loss 2.577602 LR 0.000063 Time 0.320120 +2025-05-16 23:28:24,843 - Epoch: [145][ 518/ 518] Overall Loss 2.575278 Objective Loss 2.575278 LR 0.000063 Time 0.319843 +2025-05-16 23:28:24,881 - --- validate (epoch=145)----------- +2025-05-16 23:28:24,882 - 4952 samples (32 per mini-batch) +2025-05-16 23:28:24,885 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:28:37,114 - Epoch: [145][ 50/ 155] Loss 2.901211 mAP 0.492051 +2025-05-16 23:28:49,376 - Epoch: [145][ 100/ 155] Loss 2.916490 mAP 0.484776 +2025-05-16 23:29:02,362 - Epoch: [145][ 150/ 155] Loss 2.916293 mAP 0.481754 +2025-05-16 23:29:05,232 - Epoch: [145][ 155/ 155] Loss 2.924569 mAP 0.478713 +2025-05-16 23:29:05,271 - ==> mAP: 0.47871 Loss: 2.925 + +2025-05-16 23:29:05,281 - ==> Best [mAP: 0.486261 vloss: 2.929144 Params: 2177088 on epoch: 139] +2025-05-16 23:29:05,281 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:29:05,375 - + +2025-05-16 23:29:05,375 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:29:22,071 - Epoch: [146][ 50/ 518] Overall Loss 2.524471 Objective Loss 2.524471 LR 0.000063 Time 0.333846 +2025-05-16 23:29:37,933 - Epoch: [146][ 100/ 518] Overall Loss 2.555229 Objective Loss 2.555229 LR 0.000063 Time 0.325534 +2025-05-16 23:29:53,802 - Epoch: [146][ 150/ 518] Overall Loss 2.548939 Objective Loss 2.548939 LR 0.000063 Time 0.322810 +2025-05-16 23:30:09,695 - Epoch: [146][ 200/ 518] Overall Loss 2.567873 Objective Loss 2.567873 LR 0.000063 Time 0.321566 +2025-05-16 23:30:25,595 
- Epoch: [146][ 250/ 518] Overall Loss 2.568291 Objective Loss 2.568291 LR 0.000063 Time 0.320850 +2025-05-16 23:30:41,501 - Epoch: [146][ 300/ 518] Overall Loss 2.573760 Objective Loss 2.573760 LR 0.000063 Time 0.320390 +2025-05-16 23:30:57,427 - Epoch: [146][ 350/ 518] Overall Loss 2.574397 Objective Loss 2.574397 LR 0.000063 Time 0.320121 +2025-05-16 23:31:13,354 - Epoch: [146][ 400/ 518] Overall Loss 2.566940 Objective Loss 2.566940 LR 0.000063 Time 0.319923 +2025-05-16 23:31:29,263 - Epoch: [146][ 450/ 518] Overall Loss 2.560533 Objective Loss 2.560533 LR 0.000063 Time 0.319726 +2025-05-16 23:31:45,200 - Epoch: [146][ 500/ 518] Overall Loss 2.562711 Objective Loss 2.562711 LR 0.000063 Time 0.319624 +2025-05-16 23:31:50,834 - Epoch: [146][ 518/ 518] Overall Loss 2.562729 Objective Loss 2.562729 LR 0.000063 Time 0.319394 +2025-05-16 23:31:50,874 - --- validate (epoch=146)----------- +2025-05-16 23:31:50,874 - 4952 samples (32 per mini-batch) +2025-05-16 23:31:50,878 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:32:03,175 - Epoch: [146][ 50/ 155] Loss 2.901266 mAP 0.498877 +2025-05-16 23:32:15,758 - Epoch: [146][ 100/ 155] Loss 2.908312 mAP 0.497415 +2025-05-16 23:32:28,929 - Epoch: [146][ 150/ 155] Loss 2.934781 mAP 0.480239 +2025-05-16 23:32:31,918 - Epoch: [146][ 155/ 155] Loss 2.937772 mAP 0.480701 +2025-05-16 23:32:31,957 - ==> mAP: 0.48070 Loss: 2.938 + +2025-05-16 23:32:31,967 - ==> Best [mAP: 0.486261 vloss: 2.929144 Params: 2177088 on epoch: 139] +2025-05-16 23:32:31,968 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:32:32,064 - + +2025-05-16 23:32:32,065 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:32:48,837 - Epoch: [147][ 50/ 518] Overall Loss 2.563911 Objective Loss 2.563911 LR 0.000063 Time 0.335371 +2025-05-16 23:33:04,716 - Epoch: [147][ 100/ 518] Overall Loss 2.580148 
Objective Loss 2.580148 LR 0.000063 Time 0.326469 +2025-05-16 23:33:20,605 - Epoch: [147][ 150/ 518] Overall Loss 2.570528 Objective Loss 2.570528 LR 0.000063 Time 0.323568 +2025-05-16 23:33:36,507 - Epoch: [147][ 200/ 518] Overall Loss 2.570330 Objective Loss 2.570330 LR 0.000063 Time 0.322177 +2025-05-16 23:33:52,434 - Epoch: [147][ 250/ 518] Overall Loss 2.574191 Objective Loss 2.574191 LR 0.000063 Time 0.321447 +2025-05-16 23:34:08,386 - Epoch: [147][ 300/ 518] Overall Loss 2.575238 Objective Loss 2.575238 LR 0.000063 Time 0.321043 +2025-05-16 23:34:24,323 - Epoch: [147][ 350/ 518] Overall Loss 2.578594 Objective Loss 2.578594 LR 0.000063 Time 0.320711 +2025-05-16 23:34:40,245 - Epoch: [147][ 400/ 518] Overall Loss 2.579615 Objective Loss 2.579615 LR 0.000063 Time 0.320425 +2025-05-16 23:34:56,192 - Epoch: [147][ 450/ 518] Overall Loss 2.582415 Objective Loss 2.582415 LR 0.000063 Time 0.320259 +2025-05-16 23:35:12,149 - Epoch: [147][ 500/ 518] Overall Loss 2.580599 Objective Loss 2.580599 LR 0.000063 Time 0.320146 +2025-05-16 23:35:17,758 - Epoch: [147][ 518/ 518] Overall Loss 2.586466 Objective Loss 2.586466 LR 0.000063 Time 0.319847 +2025-05-16 23:35:17,796 - --- validate (epoch=147)----------- +2025-05-16 23:35:17,797 - 4952 samples (32 per mini-batch) +2025-05-16 23:35:17,800 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:35:29,940 - Epoch: [147][ 50/ 155] Loss 2.948165 mAP 0.482856 +2025-05-16 23:35:42,375 - Epoch: [147][ 100/ 155] Loss 2.948437 mAP 0.484072 +2025-05-16 23:35:55,158 - Epoch: [147][ 150/ 155] Loss 2.964061 mAP 0.480669 +2025-05-16 23:35:58,043 - Epoch: [147][ 155/ 155] Loss 2.965976 mAP 0.479057 +2025-05-16 23:35:58,082 - ==> mAP: 0.47906 Loss: 2.966 + +2025-05-16 23:35:58,091 - ==> Best [mAP: 0.486261 vloss: 2.929144 Params: 2177088 on epoch: 139] +2025-05-16 23:35:58,091 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:35:58,187 - + +2025-05-16 23:35:58,187 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:36:15,030 - Epoch: [148][ 50/ 518] Overall Loss 2.569558 Objective Loss 2.569558 LR 0.000063 Time 0.336783 +2025-05-16 23:36:30,908 - Epoch: [148][ 100/ 518] Overall Loss 2.571563 Objective Loss 2.571563 LR 0.000063 Time 0.327159 +2025-05-16 23:36:46,795 - Epoch: [148][ 150/ 518] Overall Loss 2.577238 Objective Loss 2.577238 LR 0.000063 Time 0.324014 +2025-05-16 23:37:02,732 - Epoch: [148][ 200/ 518] Overall Loss 2.568950 Objective Loss 2.568950 LR 0.000063 Time 0.322693 +2025-05-16 23:37:18,674 - Epoch: [148][ 250/ 518] Overall Loss 2.575964 Objective Loss 2.575964 LR 0.000063 Time 0.321919 +2025-05-16 23:37:34,599 - Epoch: [148][ 300/ 518] Overall Loss 2.565607 Objective Loss 2.565607 LR 0.000063 Time 0.321344 +2025-05-16 23:37:50,521 - Epoch: [148][ 350/ 518] Overall Loss 2.567610 Objective Loss 2.567610 LR 0.000063 Time 0.320928 +2025-05-16 23:38:06,460 - Epoch: [148][ 400/ 518] Overall Loss 2.570024 Objective Loss 2.570024 LR 0.000063 Time 0.320655 +2025-05-16 23:38:22,385 - Epoch: [148][ 450/ 518] Overall Loss 2.571243 Objective Loss 2.571243 LR 0.000063 Time 0.320414 +2025-05-16 23:38:38,316 - Epoch: [148][ 500/ 518] Overall Loss 2.570626 Objective Loss 2.570626 LR 0.000063 Time 0.320232 +2025-05-16 23:38:43,929 - Epoch: [148][ 518/ 518] Overall Loss 2.572769 Objective Loss 2.572769 LR 0.000063 Time 0.319939 +2025-05-16 23:38:43,966 - --- validate (epoch=148)----------- +2025-05-16 23:38:43,966 - 4952 samples (32 per mini-batch) +2025-05-16 23:38:43,970 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:38:56,395 - Epoch: [148][ 50/ 155] Loss 2.888544 mAP 0.499424 +2025-05-16 23:39:08,856 - Epoch: [148][ 100/ 155] Loss 2.907266 mAP 0.492022 +2025-05-16 23:39:21,715 - Epoch: [148][ 
150/ 155] Loss 2.925398 mAP 0.487180 +2025-05-16 23:39:24,583 - Epoch: [148][ 155/ 155] Loss 2.925915 mAP 0.486563 +2025-05-16 23:39:24,616 - ==> mAP: 0.48656 Loss: 2.926 + +2025-05-16 23:39:24,626 - ==> Best [mAP: 0.486563 vloss: 2.925915 Params: 2177088 on epoch: 148] +2025-05-16 23:39:24,626 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:39:24,745 - + +2025-05-16 23:39:24,745 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:39:41,578 - Epoch: [149][ 50/ 518] Overall Loss 2.607370 Objective Loss 2.607370 LR 0.000063 Time 0.336578 +2025-05-16 23:39:57,444 - Epoch: [149][ 100/ 518] Overall Loss 2.600365 Objective Loss 2.600365 LR 0.000063 Time 0.326944 +2025-05-16 23:40:13,327 - Epoch: [149][ 150/ 518] Overall Loss 2.585628 Objective Loss 2.585628 LR 0.000063 Time 0.323841 +2025-05-16 23:40:29,227 - Epoch: [149][ 200/ 518] Overall Loss 2.580763 Objective Loss 2.580763 LR 0.000063 Time 0.322379 +2025-05-16 23:40:45,155 - Epoch: [149][ 250/ 518] Overall Loss 2.575887 Objective Loss 2.575887 LR 0.000063 Time 0.321609 +2025-05-16 23:41:01,059 - Epoch: [149][ 300/ 518] Overall Loss 2.571572 Objective Loss 2.571572 LR 0.000063 Time 0.321017 +2025-05-16 23:41:16,998 - Epoch: [149][ 350/ 518] Overall Loss 2.582297 Objective Loss 2.582297 LR 0.000063 Time 0.320698 +2025-05-16 23:41:32,917 - Epoch: [149][ 400/ 518] Overall Loss 2.581127 Objective Loss 2.581127 LR 0.000063 Time 0.320405 +2025-05-16 23:41:48,835 - Epoch: [149][ 450/ 518] Overall Loss 2.575324 Objective Loss 2.575324 LR 0.000063 Time 0.320175 +2025-05-16 23:42:04,756 - Epoch: [149][ 500/ 518] Overall Loss 2.571286 Objective Loss 2.571286 LR 0.000063 Time 0.319997 +2025-05-16 23:42:10,383 - Epoch: [149][ 518/ 518] Overall Loss 2.574884 Objective Loss 2.574884 LR 0.000063 Time 0.319740 +2025-05-16 23:42:10,419 - --- validate (epoch=149)----------- +2025-05-16 23:42:10,420 - 4952 samples (32 per mini-batch) +2025-05-16 
23:42:10,423 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:42:22,489 - Epoch: [149][ 50/ 155] Loss 2.907963 mAP 0.502499 +2025-05-16 23:42:34,974 - Epoch: [149][ 100/ 155] Loss 2.922412 mAP 0.492954 +2025-05-16 23:42:48,329 - Epoch: [149][ 150/ 155] Loss 2.934630 mAP 0.486878 +2025-05-16 23:42:51,331 - Epoch: [149][ 155/ 155] Loss 2.934826 mAP 0.484826 +2025-05-16 23:42:51,371 - ==> mAP: 0.48483 Loss: 2.935 + +2025-05-16 23:42:51,382 - ==> Best [mAP: 0.486563 vloss: 2.925915 Params: 2177088 on epoch: 148] +2025-05-16 23:42:51,382 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:42:51,479 - + +2025-05-16 23:42:51,479 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:43:08,162 - Epoch: [150][ 50/ 518] Overall Loss 2.582551 Objective Loss 2.582551 LR 0.000016 Time 0.333586 +2025-05-16 23:43:24,068 - Epoch: [150][ 100/ 518] Overall Loss 2.594840 Objective Loss 2.594840 LR 0.000016 Time 0.325847 +2025-05-16 23:43:39,958 - Epoch: [150][ 150/ 518] Overall Loss 2.591458 Objective Loss 2.591458 LR 0.000016 Time 0.323155 +2025-05-16 23:43:55,878 - Epoch: [150][ 200/ 518] Overall Loss 2.587847 Objective Loss 2.587847 LR 0.000016 Time 0.321960 +2025-05-16 23:44:11,793 - Epoch: [150][ 250/ 518] Overall Loss 2.573076 Objective Loss 2.573076 LR 0.000016 Time 0.321226 +2025-05-16 23:44:27,705 - Epoch: [150][ 300/ 518] Overall Loss 2.569072 Objective Loss 2.569072 LR 0.000016 Time 0.320722 +2025-05-16 23:44:43,618 - Epoch: [150][ 350/ 518] Overall Loss 2.570320 Objective Loss 2.570320 LR 0.000016 Time 0.320368 +2025-05-16 23:44:59,534 - Epoch: [150][ 400/ 518] Overall Loss 2.565476 Objective Loss 2.565476 LR 0.000016 Time 0.320109 +2025-05-16 23:45:15,462 - Epoch: [150][ 450/ 518] Overall Loss 2.563528 Objective Loss 2.563528 LR 0.000016 Time 0.319936 +2025-05-16 23:45:31,414 - Epoch: [150][ 500/ 518] Overall 
Loss 2.564837 Objective Loss 2.564837 LR 0.000016 Time 0.319846 +2025-05-16 23:45:37,049 - Epoch: [150][ 518/ 518] Overall Loss 2.568150 Objective Loss 2.568150 LR 0.000016 Time 0.319609 +2025-05-16 23:45:37,084 - --- validate (epoch=150)----------- +2025-05-16 23:45:37,085 - 4952 samples (32 per mini-batch) +2025-05-16 23:45:37,088 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:45:49,409 - Epoch: [150][ 50/ 155] Loss 2.955615 mAP 0.493529 +2025-05-16 23:46:01,768 - Epoch: [150][ 100/ 155] Loss 2.936744 mAP 0.486338 +2025-05-16 23:46:14,902 - Epoch: [150][ 150/ 155] Loss 2.919948 mAP 0.490363 +2025-05-16 23:46:17,901 - Epoch: [150][ 155/ 155] Loss 2.917984 mAP 0.490555 +2025-05-16 23:46:17,939 - ==> mAP: 0.49056 Loss: 2.918 + +2025-05-16 23:46:17,949 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-16 23:46:17,949 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:46:18,071 - + +2025-05-16 23:46:18,071 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:46:34,804 - Epoch: [151][ 50/ 518] Overall Loss 2.599760 Objective Loss 2.599760 LR 0.000016 Time 0.334580 +2025-05-16 23:46:50,703 - Epoch: [151][ 100/ 518] Overall Loss 2.563210 Objective Loss 2.563210 LR 0.000016 Time 0.326273 +2025-05-16 23:47:06,604 - Epoch: [151][ 150/ 518] Overall Loss 2.559234 Objective Loss 2.559234 LR 0.000016 Time 0.323520 +2025-05-16 23:47:22,532 - Epoch: [151][ 200/ 518] Overall Loss 2.560681 Objective Loss 2.560681 LR 0.000016 Time 0.322278 +2025-05-16 23:47:38,445 - Epoch: [151][ 250/ 518] Overall Loss 2.563660 Objective Loss 2.563660 LR 0.000016 Time 0.321468 +2025-05-16 23:47:54,369 - Epoch: [151][ 300/ 518] Overall Loss 2.554461 Objective Loss 2.554461 LR 0.000016 Time 0.320966 +2025-05-16 23:48:10,277 - Epoch: [151][ 350/ 518] Overall Loss 2.551980 Objective Loss 2.551980 LR 
0.000016 Time 0.320563 +2025-05-16 23:48:26,187 - Epoch: [151][ 400/ 518] Overall Loss 2.556283 Objective Loss 2.556283 LR 0.000016 Time 0.320266 +2025-05-16 23:48:42,108 - Epoch: [151][ 450/ 518] Overall Loss 2.553476 Objective Loss 2.553476 LR 0.000016 Time 0.320058 +2025-05-16 23:48:58,050 - Epoch: [151][ 500/ 518] Overall Loss 2.546683 Objective Loss 2.546683 LR 0.000016 Time 0.319935 +2025-05-16 23:49:03,679 - Epoch: [151][ 518/ 518] Overall Loss 2.545977 Objective Loss 2.545977 LR 0.000016 Time 0.319682 +2025-05-16 23:49:03,716 - --- validate (epoch=151)----------- +2025-05-16 23:49:03,717 - 4952 samples (32 per mini-batch) +2025-05-16 23:49:03,720 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:49:15,895 - Epoch: [151][ 50/ 155] Loss 2.891763 mAP 0.504711 +2025-05-16 23:49:28,490 - Epoch: [151][ 100/ 155] Loss 2.910509 mAP 0.486475 +2025-05-16 23:49:41,560 - Epoch: [151][ 150/ 155] Loss 2.914137 mAP 0.485359 +2025-05-16 23:49:44,490 - Epoch: [151][ 155/ 155] Loss 2.915970 mAP 0.483019 +2025-05-16 23:49:44,536 - ==> mAP: 0.48302 Loss: 2.916 + +2025-05-16 23:49:44,546 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-16 23:49:44,546 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:49:44,642 - + +2025-05-16 23:49:44,643 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:50:01,449 - Epoch: [152][ 50/ 518] Overall Loss 2.520350 Objective Loss 2.520350 LR 0.000016 Time 0.336062 +2025-05-16 23:50:17,324 - Epoch: [152][ 100/ 518] Overall Loss 2.558787 Objective Loss 2.558787 LR 0.000016 Time 0.326770 +2025-05-16 23:50:33,240 - Epoch: [152][ 150/ 518] Overall Loss 2.571167 Objective Loss 2.571167 LR 0.000016 Time 0.323948 +2025-05-16 23:50:49,155 - Epoch: [152][ 200/ 518] Overall Loss 2.563728 Objective Loss 2.563728 LR 0.000016 Time 0.322528 +2025-05-16 23:51:05,083 
- Epoch: [152][ 250/ 518] Overall Loss 2.553543 Objective Loss 2.553543 LR 0.000016 Time 0.321733 +2025-05-16 23:51:21,025 - Epoch: [152][ 300/ 518] Overall Loss 2.553785 Objective Loss 2.553785 LR 0.000016 Time 0.321247 +2025-05-16 23:51:36,958 - Epoch: [152][ 350/ 518] Overall Loss 2.546674 Objective Loss 2.546674 LR 0.000016 Time 0.320874 +2025-05-16 23:51:52,907 - Epoch: [152][ 400/ 518] Overall Loss 2.546372 Objective Loss 2.546372 LR 0.000016 Time 0.320636 +2025-05-16 23:52:08,857 - Epoch: [152][ 450/ 518] Overall Loss 2.546205 Objective Loss 2.546205 LR 0.000016 Time 0.320453 +2025-05-16 23:52:24,786 - Epoch: [152][ 500/ 518] Overall Loss 2.548678 Objective Loss 2.548678 LR 0.000016 Time 0.320262 +2025-05-16 23:52:30,396 - Epoch: [152][ 518/ 518] Overall Loss 2.547168 Objective Loss 2.547168 LR 0.000016 Time 0.319964 +2025-05-16 23:52:30,432 - --- validate (epoch=152)----------- +2025-05-16 23:52:30,433 - 4952 samples (32 per mini-batch) +2025-05-16 23:52:30,436 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:52:42,696 - Epoch: [152][ 50/ 155] Loss 2.934533 mAP 0.482524 +2025-05-16 23:52:55,203 - Epoch: [152][ 100/ 155] Loss 2.927936 mAP 0.480554 +2025-05-16 23:53:08,350 - Epoch: [152][ 150/ 155] Loss 2.920783 mAP 0.481896 +2025-05-16 23:53:11,310 - Epoch: [152][ 155/ 155] Loss 2.921050 mAP 0.482804 +2025-05-16 23:53:11,352 - ==> mAP: 0.48280 Loss: 2.921 + +2025-05-16 23:53:11,362 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-16 23:53:11,362 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:53:11,456 - + +2025-05-16 23:53:11,457 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:53:28,165 - Epoch: [153][ 50/ 518] Overall Loss 2.550631 Objective Loss 2.550631 LR 0.000016 Time 0.334090 +2025-05-16 23:53:44,061 - Epoch: [153][ 100/ 518] Overall Loss 2.542865 
Objective Loss 2.542865 LR 0.000016 Time 0.325999 +2025-05-16 23:53:59,972 - Epoch: [153][ 150/ 518] Overall Loss 2.552244 Objective Loss 2.552244 LR 0.000016 Time 0.323403 +2025-05-16 23:54:15,896 - Epoch: [153][ 200/ 518] Overall Loss 2.555051 Objective Loss 2.555051 LR 0.000016 Time 0.322172 +2025-05-16 23:54:31,835 - Epoch: [153][ 250/ 518] Overall Loss 2.564606 Objective Loss 2.564606 LR 0.000016 Time 0.321489 +2025-05-16 23:54:47,750 - Epoch: [153][ 300/ 518] Overall Loss 2.560308 Objective Loss 2.560308 LR 0.000016 Time 0.320954 +2025-05-16 23:55:03,674 - Epoch: [153][ 350/ 518] Overall Loss 2.553782 Objective Loss 2.553782 LR 0.000016 Time 0.320597 +2025-05-16 23:55:19,588 - Epoch: [153][ 400/ 518] Overall Loss 2.554154 Objective Loss 2.554154 LR 0.000016 Time 0.320304 +2025-05-16 23:55:35,512 - Epoch: [153][ 450/ 518] Overall Loss 2.552565 Objective Loss 2.552565 LR 0.000016 Time 0.320099 +2025-05-16 23:55:51,456 - Epoch: [153][ 500/ 518] Overall Loss 2.551228 Objective Loss 2.551228 LR 0.000016 Time 0.319976 +2025-05-16 23:55:57,074 - Epoch: [153][ 518/ 518] Overall Loss 2.547826 Objective Loss 2.547826 LR 0.000016 Time 0.319702 +2025-05-16 23:55:57,111 - --- validate (epoch=153)----------- +2025-05-16 23:55:57,112 - 4952 samples (32 per mini-batch) +2025-05-16 23:55:57,115 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:56:09,425 - Epoch: [153][ 50/ 155] Loss 2.907642 mAP 0.496138 +2025-05-16 23:56:22,152 - Epoch: [153][ 100/ 155] Loss 2.903664 mAP 0.486624 +2025-05-16 23:56:35,359 - Epoch: [153][ 150/ 155] Loss 2.919552 mAP 0.486359 +2025-05-16 23:56:38,361 - Epoch: [153][ 155/ 155] Loss 2.921619 mAP 0.484899 +2025-05-16 23:56:38,400 - ==> mAP: 0.48490 Loss: 2.922 + +2025-05-16 23:56:38,409 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-16 23:56:38,410 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-16 23:56:38,503 - + +2025-05-16 23:56:38,503 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-16 23:56:55,178 - Epoch: [154][ 50/ 518] Overall Loss 2.569297 Objective Loss 2.569297 LR 0.000016 Time 0.333424 +2025-05-16 23:57:11,041 - Epoch: [154][ 100/ 518] Overall Loss 2.572377 Objective Loss 2.572377 LR 0.000016 Time 0.325336 +2025-05-16 23:57:26,938 - Epoch: [154][ 150/ 518] Overall Loss 2.573725 Objective Loss 2.573725 LR 0.000016 Time 0.322865 +2025-05-16 23:57:42,850 - Epoch: [154][ 200/ 518] Overall Loss 2.567083 Objective Loss 2.567083 LR 0.000016 Time 0.321702 +2025-05-16 23:57:58,770 - Epoch: [154][ 250/ 518] Overall Loss 2.554734 Objective Loss 2.554734 LR 0.000016 Time 0.321036 +2025-05-16 23:58:14,697 - Epoch: [154][ 300/ 518] Overall Loss 2.564410 Objective Loss 2.564410 LR 0.000016 Time 0.320618 +2025-05-16 23:58:30,635 - Epoch: [154][ 350/ 518] Overall Loss 2.557288 Objective Loss 2.557288 LR 0.000016 Time 0.320348 +2025-05-16 23:58:46,591 - Epoch: [154][ 400/ 518] Overall Loss 2.557635 Objective Loss 2.557635 LR 0.000016 Time 0.320194 +2025-05-16 23:59:02,525 - Epoch: [154][ 450/ 518] Overall Loss 2.561028 Objective Loss 2.561028 LR 0.000016 Time 0.320024 +2025-05-16 23:59:18,451 - Epoch: [154][ 500/ 518] Overall Loss 2.564807 Objective Loss 2.564807 LR 0.000016 Time 0.319871 +2025-05-16 23:59:24,073 - Epoch: [154][ 518/ 518] Overall Loss 2.562407 Objective Loss 2.562407 LR 0.000016 Time 0.319608 +2025-05-16 23:59:24,111 - --- validate (epoch=154)----------- +2025-05-16 23:59:24,112 - 4952 samples (32 per mini-batch) +2025-05-16 23:59:24,115 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-16 23:59:36,205 - Epoch: [154][ 50/ 155] Loss 2.856321 mAP 0.500139 +2025-05-16 23:59:48,606 - Epoch: [154][ 100/ 155] Loss 2.901506 mAP 0.481801 +2025-05-17 00:00:01,440 - Epoch: [154][ 
150/ 155] Loss 2.898566 mAP 0.486810 +2025-05-17 00:00:04,314 - Epoch: [154][ 155/ 155] Loss 2.906274 mAP 0.485234 +2025-05-17 00:00:04,351 - ==> mAP: 0.48523 Loss: 2.906 + +2025-05-17 00:00:04,361 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:00:04,362 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:00:04,458 - + +2025-05-17 00:00:04,459 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:00:21,222 - Epoch: [155][ 50/ 518] Overall Loss 2.560130 Objective Loss 2.560130 LR 0.000016 Time 0.335208 +2025-05-17 00:00:37,127 - Epoch: [155][ 100/ 518] Overall Loss 2.545248 Objective Loss 2.545248 LR 0.000016 Time 0.326642 +2025-05-17 00:00:53,053 - Epoch: [155][ 150/ 518] Overall Loss 2.557419 Objective Loss 2.557419 LR 0.000016 Time 0.323933 +2025-05-17 00:01:09,001 - Epoch: [155][ 200/ 518] Overall Loss 2.556222 Objective Loss 2.556222 LR 0.000016 Time 0.322684 +2025-05-17 00:01:24,950 - Epoch: [155][ 250/ 518] Overall Loss 2.571035 Objective Loss 2.571035 LR 0.000016 Time 0.321940 +2025-05-17 00:01:40,911 - Epoch: [155][ 300/ 518] Overall Loss 2.558836 Objective Loss 2.558836 LR 0.000016 Time 0.321487 +2025-05-17 00:01:56,863 - Epoch: [155][ 350/ 518] Overall Loss 2.560129 Objective Loss 2.560129 LR 0.000016 Time 0.321134 +2025-05-17 00:02:12,816 - Epoch: [155][ 400/ 518] Overall Loss 2.558760 Objective Loss 2.558760 LR 0.000016 Time 0.320874 +2025-05-17 00:02:28,786 - Epoch: [155][ 450/ 518] Overall Loss 2.559370 Objective Loss 2.559370 LR 0.000016 Time 0.320708 +2025-05-17 00:02:44,755 - Epoch: [155][ 500/ 518] Overall Loss 2.559783 Objective Loss 2.559783 LR 0.000016 Time 0.320575 +2025-05-17 00:02:50,395 - Epoch: [155][ 518/ 518] Overall Loss 2.560446 Objective Loss 2.560446 LR 0.000016 Time 0.320322 +2025-05-17 00:02:50,432 - --- validate (epoch=155)----------- +2025-05-17 00:02:50,433 - 4952 samples (32 per mini-batch) +2025-05-17 
00:02:50,436 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:03:02,667 - Epoch: [155][ 50/ 155] Loss 2.903873 mAP 0.500483 +2025-05-17 00:03:15,070 - Epoch: [155][ 100/ 155] Loss 2.900763 mAP 0.483505 +2025-05-17 00:03:27,998 - Epoch: [155][ 150/ 155] Loss 2.915396 mAP 0.484235 +2025-05-17 00:03:30,888 - Epoch: [155][ 155/ 155] Loss 2.916201 mAP 0.485544 +2025-05-17 00:03:30,926 - ==> mAP: 0.48554 Loss: 2.916 + +2025-05-17 00:03:30,936 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:03:30,936 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:03:31,031 - + +2025-05-17 00:03:31,032 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:03:47,834 - Epoch: [156][ 50/ 518] Overall Loss 2.604548 Objective Loss 2.604548 LR 0.000016 Time 0.335989 +2025-05-17 00:04:03,720 - Epoch: [156][ 100/ 518] Overall Loss 2.538178 Objective Loss 2.538178 LR 0.000016 Time 0.326851 +2025-05-17 00:04:19,632 - Epoch: [156][ 150/ 518] Overall Loss 2.542215 Objective Loss 2.542215 LR 0.000016 Time 0.323975 +2025-05-17 00:04:35,556 - Epoch: [156][ 200/ 518] Overall Loss 2.540249 Objective Loss 2.540249 LR 0.000016 Time 0.322597 +2025-05-17 00:04:51,499 - Epoch: [156][ 250/ 518] Overall Loss 2.541891 Objective Loss 2.541891 LR 0.000016 Time 0.321848 +2025-05-17 00:05:07,438 - Epoch: [156][ 300/ 518] Overall Loss 2.545032 Objective Loss 2.545032 LR 0.000016 Time 0.321332 +2025-05-17 00:05:23,353 - Epoch: [156][ 350/ 518] Overall Loss 2.552876 Objective Loss 2.552876 LR 0.000016 Time 0.320896 +2025-05-17 00:05:39,259 - Epoch: [156][ 400/ 518] Overall Loss 2.553144 Objective Loss 2.553144 LR 0.000016 Time 0.320547 +2025-05-17 00:05:55,175 - Epoch: [156][ 450/ 518] Overall Loss 2.554999 Objective Loss 2.554999 LR 0.000016 Time 0.320298 +2025-05-17 00:06:11,105 - Epoch: [156][ 500/ 518] Overall 
Loss 2.552921 Objective Loss 2.552921 LR 0.000016 Time 0.320125 +2025-05-17 00:06:16,732 - Epoch: [156][ 518/ 518] Overall Loss 2.551643 Objective Loss 2.551643 LR 0.000016 Time 0.319864 +2025-05-17 00:06:16,772 - --- validate (epoch=156)----------- +2025-05-17 00:06:16,773 - 4952 samples (32 per mini-batch) +2025-05-17 00:06:16,776 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:06:28,914 - Epoch: [156][ 50/ 155] Loss 2.920064 mAP 0.489918 +2025-05-17 00:06:41,413 - Epoch: [156][ 100/ 155] Loss 2.926384 mAP 0.488482 +2025-05-17 00:06:54,406 - Epoch: [156][ 150/ 155] Loss 2.912657 mAP 0.489200 +2025-05-17 00:06:57,399 - Epoch: [156][ 155/ 155] Loss 2.914426 mAP 0.490168 +2025-05-17 00:06:57,441 - ==> mAP: 0.49017 Loss: 2.914 + +2025-05-17 00:06:57,451 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:06:57,452 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:06:57,553 - + +2025-05-17 00:06:57,553 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:07:14,317 - Epoch: [157][ 50/ 518] Overall Loss 2.544835 Objective Loss 2.544835 LR 0.000016 Time 0.335222 +2025-05-17 00:07:30,209 - Epoch: [157][ 100/ 518] Overall Loss 2.553137 Objective Loss 2.553137 LR 0.000016 Time 0.326518 +2025-05-17 00:07:46,101 - Epoch: [157][ 150/ 518] Overall Loss 2.563387 Objective Loss 2.563387 LR 0.000016 Time 0.323615 +2025-05-17 00:08:01,997 - Epoch: [157][ 200/ 518] Overall Loss 2.562265 Objective Loss 2.562265 LR 0.000016 Time 0.322187 +2025-05-17 00:08:17,896 - Epoch: [157][ 250/ 518] Overall Loss 2.560942 Objective Loss 2.560942 LR 0.000016 Time 0.321340 +2025-05-17 00:08:33,811 - Epoch: [157][ 300/ 518] Overall Loss 2.555234 Objective Loss 2.555234 LR 0.000016 Time 0.320832 +2025-05-17 00:08:49,749 - Epoch: [157][ 350/ 518] Overall Loss 2.547363 Objective Loss 2.547363 LR 
0.000016 Time 0.320533 +2025-05-17 00:09:05,698 - Epoch: [157][ 400/ 518] Overall Loss 2.556461 Objective Loss 2.556461 LR 0.000016 Time 0.320336 +2025-05-17 00:09:21,646 - Epoch: [157][ 450/ 518] Overall Loss 2.556727 Objective Loss 2.556727 LR 0.000016 Time 0.320182 +2025-05-17 00:09:37,579 - Epoch: [157][ 500/ 518] Overall Loss 2.554092 Objective Loss 2.554092 LR 0.000016 Time 0.320029 +2025-05-17 00:09:43,191 - Epoch: [157][ 518/ 518] Overall Loss 2.554287 Objective Loss 2.554287 LR 0.000016 Time 0.319741 +2025-05-17 00:09:43,229 - --- validate (epoch=157)----------- +2025-05-17 00:09:43,229 - 4952 samples (32 per mini-batch) +2025-05-17 00:09:43,233 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:09:55,387 - Epoch: [157][ 50/ 155] Loss 2.893195 mAP 0.500962 +2025-05-17 00:10:07,878 - Epoch: [157][ 100/ 155] Loss 2.890457 mAP 0.490326 +2025-05-17 00:10:20,928 - Epoch: [157][ 150/ 155] Loss 2.912951 mAP 0.483589 +2025-05-17 00:10:23,876 - Epoch: [157][ 155/ 155] Loss 2.909887 mAP 0.484887 +2025-05-17 00:10:23,917 - ==> mAP: 0.48489 Loss: 2.910 + +2025-05-17 00:10:23,927 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:10:23,927 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:10:24,027 - + +2025-05-17 00:10:24,028 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:10:40,800 - Epoch: [158][ 50/ 518] Overall Loss 2.591451 Objective Loss 2.591451 LR 0.000016 Time 0.335380 +2025-05-17 00:10:56,662 - Epoch: [158][ 100/ 518] Overall Loss 2.558596 Objective Loss 2.558596 LR 0.000016 Time 0.326300 +2025-05-17 00:11:12,531 - Epoch: [158][ 150/ 518] Overall Loss 2.563248 Objective Loss 2.563248 LR 0.000016 Time 0.323317 +2025-05-17 00:11:28,421 - Epoch: [158][ 200/ 518] Overall Loss 2.552814 Objective Loss 2.552814 LR 0.000016 Time 0.321935 +2025-05-17 00:11:44,329 
- Epoch: [158][ 250/ 518] Overall Loss 2.566243 Objective Loss 2.566243 LR 0.000016 Time 0.321176 +2025-05-17 00:12:00,246 - Epoch: [158][ 300/ 518] Overall Loss 2.567177 Objective Loss 2.567177 LR 0.000016 Time 0.320701 +2025-05-17 00:12:16,183 - Epoch: [158][ 350/ 518] Overall Loss 2.568858 Objective Loss 2.568858 LR 0.000016 Time 0.320417 +2025-05-17 00:12:32,107 - Epoch: [158][ 400/ 518] Overall Loss 2.565092 Objective Loss 2.565092 LR 0.000016 Time 0.320174 +2025-05-17 00:12:48,032 - Epoch: [158][ 450/ 518] Overall Loss 2.562168 Objective Loss 2.562168 LR 0.000016 Time 0.319985 +2025-05-17 00:13:03,986 - Epoch: [158][ 500/ 518] Overall Loss 2.560925 Objective Loss 2.560925 LR 0.000016 Time 0.319894 +2025-05-17 00:13:09,631 - Epoch: [158][ 518/ 518] Overall Loss 2.562374 Objective Loss 2.562374 LR 0.000016 Time 0.319674 +2025-05-17 00:13:09,665 - --- validate (epoch=158)----------- +2025-05-17 00:13:09,666 - 4952 samples (32 per mini-batch) +2025-05-17 00:13:09,669 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:13:21,768 - Epoch: [158][ 50/ 155] Loss 2.900809 mAP 0.500810 +2025-05-17 00:13:34,353 - Epoch: [158][ 100/ 155] Loss 2.924531 mAP 0.485955 +2025-05-17 00:13:47,357 - Epoch: [158][ 150/ 155] Loss 2.916397 mAP 0.487625 +2025-05-17 00:13:50,304 - Epoch: [158][ 155/ 155] Loss 2.921624 mAP 0.484029 +2025-05-17 00:13:50,345 - ==> mAP: 0.48403 Loss: 2.922 + +2025-05-17 00:13:50,355 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:13:50,355 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:13:50,457 - + +2025-05-17 00:13:50,457 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:14:07,117 - Epoch: [159][ 50/ 518] Overall Loss 2.544079 Objective Loss 2.544079 LR 0.000016 Time 0.333121 +2025-05-17 00:14:22,981 - Epoch: [159][ 100/ 518] Overall Loss 2.524279 
Objective Loss 2.524279 LR 0.000016 Time 0.325190 +2025-05-17 00:14:38,854 - Epoch: [159][ 150/ 518] Overall Loss 2.539655 Objective Loss 2.539655 LR 0.000016 Time 0.322607 +2025-05-17 00:14:54,766 - Epoch: [159][ 200/ 518] Overall Loss 2.530756 Objective Loss 2.530756 LR 0.000016 Time 0.321510 +2025-05-17 00:15:10,664 - Epoch: [159][ 250/ 518] Overall Loss 2.542847 Objective Loss 2.542847 LR 0.000016 Time 0.320796 +2025-05-17 00:15:26,558 - Epoch: [159][ 300/ 518] Overall Loss 2.544766 Objective Loss 2.544766 LR 0.000016 Time 0.320307 +2025-05-17 00:15:42,473 - Epoch: [159][ 350/ 518] Overall Loss 2.544255 Objective Loss 2.544255 LR 0.000016 Time 0.320016 +2025-05-17 00:15:58,393 - Epoch: [159][ 400/ 518] Overall Loss 2.549752 Objective Loss 2.549752 LR 0.000016 Time 0.319812 +2025-05-17 00:16:14,309 - Epoch: [159][ 450/ 518] Overall Loss 2.551883 Objective Loss 2.551883 LR 0.000016 Time 0.319645 +2025-05-17 00:16:30,242 - Epoch: [159][ 500/ 518] Overall Loss 2.550262 Objective Loss 2.550262 LR 0.000016 Time 0.319544 +2025-05-17 00:16:35,876 - Epoch: [159][ 518/ 518] Overall Loss 2.550491 Objective Loss 2.550491 LR 0.000016 Time 0.319316 +2025-05-17 00:16:35,910 - --- validate (epoch=159)----------- +2025-05-17 00:16:35,911 - 4952 samples (32 per mini-batch) +2025-05-17 00:16:35,914 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:16:48,322 - Epoch: [159][ 50/ 155] Loss 2.909722 mAP 0.492837 +2025-05-17 00:17:00,692 - Epoch: [159][ 100/ 155] Loss 2.910270 mAP 0.485480 +2025-05-17 00:17:13,792 - Epoch: [159][ 150/ 155] Loss 2.917141 mAP 0.482739 +2025-05-17 00:17:16,763 - Epoch: [159][ 155/ 155] Loss 2.914269 mAP 0.484710 +2025-05-17 00:17:16,802 - ==> mAP: 0.48471 Loss: 2.914 + +2025-05-17 00:17:16,812 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:17:16,812 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:17:16,914 - + +2025-05-17 00:17:16,914 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:17:33,672 - Epoch: [160][ 50/ 518] Overall Loss 2.611294 Objective Loss 2.611294 LR 0.000016 Time 0.335076 +2025-05-17 00:17:49,527 - Epoch: [160][ 100/ 518] Overall Loss 2.568310 Objective Loss 2.568310 LR 0.000016 Time 0.326079 +2025-05-17 00:18:05,418 - Epoch: [160][ 150/ 518] Overall Loss 2.560778 Objective Loss 2.560778 LR 0.000016 Time 0.323319 +2025-05-17 00:18:21,330 - Epoch: [160][ 200/ 518] Overall Loss 2.564623 Objective Loss 2.564623 LR 0.000016 Time 0.322046 +2025-05-17 00:18:37,231 - Epoch: [160][ 250/ 518] Overall Loss 2.560295 Objective Loss 2.560295 LR 0.000016 Time 0.321234 +2025-05-17 00:18:53,142 - Epoch: [160][ 300/ 518] Overall Loss 2.556807 Objective Loss 2.556807 LR 0.000016 Time 0.320731 +2025-05-17 00:19:09,053 - Epoch: [160][ 350/ 518] Overall Loss 2.559299 Objective Loss 2.559299 LR 0.000016 Time 0.320367 +2025-05-17 00:19:24,976 - Epoch: [160][ 400/ 518] Overall Loss 2.560390 Objective Loss 2.560390 LR 0.000016 Time 0.320127 +2025-05-17 00:19:40,889 - Epoch: [160][ 450/ 518] Overall Loss 2.556891 Objective Loss 2.556891 LR 0.000016 Time 0.319917 +2025-05-17 00:19:56,817 - Epoch: [160][ 500/ 518] Overall Loss 2.552148 Objective Loss 2.552148 LR 0.000016 Time 0.319780 +2025-05-17 00:20:02,436 - Epoch: [160][ 518/ 518] Overall Loss 2.551280 Objective Loss 2.551280 LR 0.000016 Time 0.319514 +2025-05-17 00:20:02,474 - --- validate (epoch=160)----------- +2025-05-17 00:20:02,474 - 4952 samples (32 per mini-batch) +2025-05-17 00:20:02,478 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:20:14,560 - Epoch: [160][ 50/ 155] Loss 2.927078 mAP 0.486783 +2025-05-17 00:20:26,994 - Epoch: [160][ 100/ 155] Loss 2.910103 mAP 0.494576 +2025-05-17 00:20:39,762 - Epoch: [160][ 
150/ 155] Loss 2.917346 mAP 0.485893 +2025-05-17 00:20:42,632 - Epoch: [160][ 155/ 155] Loss 2.915238 mAP 0.485440 +2025-05-17 00:20:42,667 - ==> mAP: 0.48544 Loss: 2.915 + +2025-05-17 00:20:42,677 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:20:42,677 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:20:42,777 - + +2025-05-17 00:20:42,778 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:20:59,801 - Epoch: [161][ 50/ 518] Overall Loss 2.568074 Objective Loss 2.568074 LR 0.000016 Time 0.340403 +2025-05-17 00:21:15,682 - Epoch: [161][ 100/ 518] Overall Loss 2.564463 Objective Loss 2.564463 LR 0.000016 Time 0.329006 +2025-05-17 00:21:31,580 - Epoch: [161][ 150/ 518] Overall Loss 2.556751 Objective Loss 2.556751 LR 0.000016 Time 0.325322 +2025-05-17 00:21:47,506 - Epoch: [161][ 200/ 518] Overall Loss 2.558694 Objective Loss 2.558694 LR 0.000016 Time 0.323614 +2025-05-17 00:22:03,469 - Epoch: [161][ 250/ 518] Overall Loss 2.551129 Objective Loss 2.551129 LR 0.000016 Time 0.322743 +2025-05-17 00:22:19,383 - Epoch: [161][ 300/ 518] Overall Loss 2.550409 Objective Loss 2.550409 LR 0.000016 Time 0.321996 +2025-05-17 00:22:35,321 - Epoch: [161][ 350/ 518] Overall Loss 2.550460 Objective Loss 2.550460 LR 0.000016 Time 0.321529 +2025-05-17 00:22:51,256 - Epoch: [161][ 400/ 518] Overall Loss 2.547279 Objective Loss 2.547279 LR 0.000016 Time 0.321174 +2025-05-17 00:23:07,177 - Epoch: [161][ 450/ 518] Overall Loss 2.546210 Objective Loss 2.546210 LR 0.000016 Time 0.320867 +2025-05-17 00:23:23,093 - Epoch: [161][ 500/ 518] Overall Loss 2.547243 Objective Loss 2.547243 LR 0.000016 Time 0.320610 +2025-05-17 00:23:28,701 - Epoch: [161][ 518/ 518] Overall Loss 2.549409 Objective Loss 2.549409 LR 0.000016 Time 0.320293 +2025-05-17 00:23:28,739 - --- validate (epoch=161)----------- +2025-05-17 00:23:28,740 - 4952 samples (32 per mini-batch) +2025-05-17 
00:23:28,743 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:23:41,004 - Epoch: [161][ 50/ 155] Loss 2.904421 mAP 0.502582 +2025-05-17 00:23:53,556 - Epoch: [161][ 100/ 155] Loss 2.922147 mAP 0.487184 +2025-05-17 00:24:06,838 - Epoch: [161][ 150/ 155] Loss 2.913895 mAP 0.488237 +2025-05-17 00:24:09,825 - Epoch: [161][ 155/ 155] Loss 2.914154 mAP 0.486982 +2025-05-17 00:24:09,860 - ==> mAP: 0.48698 Loss: 2.914 + +2025-05-17 00:24:09,870 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:24:09,870 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:24:09,966 - + +2025-05-17 00:24:09,966 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:24:26,833 - Epoch: [162][ 50/ 518] Overall Loss 2.570248 Objective Loss 2.570248 LR 0.000016 Time 0.337270 +2025-05-17 00:24:42,715 - Epoch: [162][ 100/ 518] Overall Loss 2.580220 Objective Loss 2.580220 LR 0.000016 Time 0.327454 +2025-05-17 00:24:58,589 - Epoch: [162][ 150/ 518] Overall Loss 2.580174 Objective Loss 2.580174 LR 0.000016 Time 0.324117 +2025-05-17 00:25:14,469 - Epoch: [162][ 200/ 518] Overall Loss 2.584197 Objective Loss 2.584197 LR 0.000016 Time 0.322483 +2025-05-17 00:25:30,383 - Epoch: [162][ 250/ 518] Overall Loss 2.569885 Objective Loss 2.569885 LR 0.000016 Time 0.321640 +2025-05-17 00:25:46,282 - Epoch: [162][ 300/ 518] Overall Loss 2.563452 Objective Loss 2.563452 LR 0.000016 Time 0.321028 +2025-05-17 00:26:02,191 - Epoch: [162][ 350/ 518] Overall Loss 2.556720 Objective Loss 2.556720 LR 0.000016 Time 0.320617 +2025-05-17 00:26:18,094 - Epoch: [162][ 400/ 518] Overall Loss 2.552062 Objective Loss 2.552062 LR 0.000016 Time 0.320294 +2025-05-17 00:26:34,013 - Epoch: [162][ 450/ 518] Overall Loss 2.551738 Objective Loss 2.551738 LR 0.000016 Time 0.320079 +2025-05-17 00:26:49,918 - Epoch: [162][ 500/ 518] Overall 
Loss 2.554551 Objective Loss 2.554551 LR 0.000016 Time 0.319881 +2025-05-17 00:26:55,555 - Epoch: [162][ 518/ 518] Overall Loss 2.554221 Objective Loss 2.554221 LR 0.000016 Time 0.319645 +2025-05-17 00:26:55,588 - --- validate (epoch=162)----------- +2025-05-17 00:26:55,589 - 4952 samples (32 per mini-batch) +2025-05-17 00:26:55,592 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:27:07,935 - Epoch: [162][ 50/ 155] Loss 2.918345 mAP 0.496028 +2025-05-17 00:27:20,417 - Epoch: [162][ 100/ 155] Loss 2.898410 mAP 0.495908 +2025-05-17 00:27:33,637 - Epoch: [162][ 150/ 155] Loss 2.909786 mAP 0.487768 +2025-05-17 00:27:36,625 - Epoch: [162][ 155/ 155] Loss 2.913069 mAP 0.487489 +2025-05-17 00:27:36,670 - ==> mAP: 0.48749 Loss: 2.913 + +2025-05-17 00:27:36,680 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:27:36,680 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:27:36,774 - + +2025-05-17 00:27:36,774 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:27:53,459 - Epoch: [163][ 50/ 518] Overall Loss 2.552446 Objective Loss 2.552446 LR 0.000016 Time 0.333629 +2025-05-17 00:28:09,343 - Epoch: [163][ 100/ 518] Overall Loss 2.549773 Objective Loss 2.549773 LR 0.000016 Time 0.325646 +2025-05-17 00:28:25,221 - Epoch: [163][ 150/ 518] Overall Loss 2.541187 Objective Loss 2.541187 LR 0.000016 Time 0.322944 +2025-05-17 00:28:41,119 - Epoch: [163][ 200/ 518] Overall Loss 2.546129 Objective Loss 2.546129 LR 0.000016 Time 0.321696 +2025-05-17 00:28:57,031 - Epoch: [163][ 250/ 518] Overall Loss 2.551922 Objective Loss 2.551922 LR 0.000016 Time 0.320998 +2025-05-17 00:29:12,962 - Epoch: [163][ 300/ 518] Overall Loss 2.544526 Objective Loss 2.544526 LR 0.000016 Time 0.320598 +2025-05-17 00:29:28,875 - Epoch: [163][ 350/ 518] Overall Loss 2.548083 Objective Loss 2.548083 LR 
0.000016 Time 0.320262 +2025-05-17 00:29:44,803 - Epoch: [163][ 400/ 518] Overall Loss 2.541939 Objective Loss 2.541939 LR 0.000016 Time 0.320049 +2025-05-17 00:30:00,724 - Epoch: [163][ 450/ 518] Overall Loss 2.549853 Objective Loss 2.549853 LR 0.000016 Time 0.319865 +2025-05-17 00:30:16,638 - Epoch: [163][ 500/ 518] Overall Loss 2.546725 Objective Loss 2.546725 LR 0.000016 Time 0.319705 +2025-05-17 00:30:22,247 - Epoch: [163][ 518/ 518] Overall Loss 2.548354 Objective Loss 2.548354 LR 0.000016 Time 0.319423 +2025-05-17 00:30:22,286 - --- validate (epoch=163)----------- +2025-05-17 00:30:22,287 - 4952 samples (32 per mini-batch) +2025-05-17 00:30:22,290 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:30:34,558 - Epoch: [163][ 50/ 155] Loss 2.900289 mAP 0.483699 +2025-05-17 00:30:47,178 - Epoch: [163][ 100/ 155] Loss 2.921592 mAP 0.482270 +2025-05-17 00:31:00,239 - Epoch: [163][ 150/ 155] Loss 2.918390 mAP 0.480882 +2025-05-17 00:31:03,208 - Epoch: [163][ 155/ 155] Loss 2.912551 mAP 0.482328 +2025-05-17 00:31:03,247 - ==> mAP: 0.48233 Loss: 2.913 + +2025-05-17 00:31:03,257 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:31:03,257 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:31:03,356 - + +2025-05-17 00:31:03,356 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:31:20,043 - Epoch: [164][ 50/ 518] Overall Loss 2.572976 Objective Loss 2.572976 LR 0.000016 Time 0.333681 +2025-05-17 00:31:35,907 - Epoch: [164][ 100/ 518] Overall Loss 2.567723 Objective Loss 2.567723 LR 0.000016 Time 0.325462 +2025-05-17 00:31:51,794 - Epoch: [164][ 150/ 518] Overall Loss 2.546294 Objective Loss 2.546294 LR 0.000016 Time 0.322882 +2025-05-17 00:32:07,694 - Epoch: [164][ 200/ 518] Overall Loss 2.541827 Objective Loss 2.541827 LR 0.000016 Time 0.321659 +2025-05-17 00:32:23,608 
- Epoch: [164][ 250/ 518] Overall Loss 2.536602 Objective Loss 2.536602 LR 0.000016 Time 0.320980 +2025-05-17 00:32:39,528 - Epoch: [164][ 300/ 518] Overall Loss 2.543130 Objective Loss 2.543130 LR 0.000016 Time 0.320546 +2025-05-17 00:32:55,465 - Epoch: [164][ 350/ 518] Overall Loss 2.551053 Objective Loss 2.551053 LR 0.000016 Time 0.320284 +2025-05-17 00:33:11,403 - Epoch: [164][ 400/ 518] Overall Loss 2.545873 Objective Loss 2.545873 LR 0.000016 Time 0.320093 +2025-05-17 00:33:27,352 - Epoch: [164][ 450/ 518] Overall Loss 2.542990 Objective Loss 2.542990 LR 0.000016 Time 0.319968 +2025-05-17 00:33:43,275 - Epoch: [164][ 500/ 518] Overall Loss 2.546570 Objective Loss 2.546570 LR 0.000016 Time 0.319815 +2025-05-17 00:33:48,904 - Epoch: [164][ 518/ 518] Overall Loss 2.545445 Objective Loss 2.545445 LR 0.000016 Time 0.319568 +2025-05-17 00:33:48,940 - --- validate (epoch=164)----------- +2025-05-17 00:33:48,940 - 4952 samples (32 per mini-batch) +2025-05-17 00:33:48,944 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:34:01,271 - Epoch: [164][ 50/ 155] Loss 2.936644 mAP 0.506384 +2025-05-17 00:34:13,655 - Epoch: [164][ 100/ 155] Loss 2.916754 mAP 0.496518 +2025-05-17 00:34:26,748 - Epoch: [164][ 150/ 155] Loss 2.915986 mAP 0.489002 +2025-05-17 00:34:29,657 - Epoch: [164][ 155/ 155] Loss 2.918456 mAP 0.485722 +2025-05-17 00:34:29,695 - ==> mAP: 0.48572 Loss: 2.918 + +2025-05-17 00:34:29,705 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:34:29,705 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:34:29,801 - + +2025-05-17 00:34:29,802 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:34:46,409 - Epoch: [165][ 50/ 518] Overall Loss 2.488918 Objective Loss 2.488918 LR 0.000016 Time 0.332084 +2025-05-17 00:35:02,264 - Epoch: [165][ 100/ 518] Overall Loss 2.521337 
Objective Loss 2.521337 LR 0.000016 Time 0.324576 +2025-05-17 00:35:18,146 - Epoch: [165][ 150/ 518] Overall Loss 2.519884 Objective Loss 2.519884 LR 0.000016 Time 0.322259 +2025-05-17 00:35:34,043 - Epoch: [165][ 200/ 518] Overall Loss 2.531490 Objective Loss 2.531490 LR 0.000016 Time 0.321175 +2025-05-17 00:35:49,952 - Epoch: [165][ 250/ 518] Overall Loss 2.530023 Objective Loss 2.530023 LR 0.000016 Time 0.320572 +2025-05-17 00:36:05,855 - Epoch: [165][ 300/ 518] Overall Loss 2.538264 Objective Loss 2.538264 LR 0.000016 Time 0.320150 +2025-05-17 00:36:21,764 - Epoch: [165][ 350/ 518] Overall Loss 2.538679 Objective Loss 2.538679 LR 0.000016 Time 0.319864 +2025-05-17 00:36:37,664 - Epoch: [165][ 400/ 518] Overall Loss 2.543542 Objective Loss 2.543542 LR 0.000016 Time 0.319630 +2025-05-17 00:36:53,567 - Epoch: [165][ 450/ 518] Overall Loss 2.541004 Objective Loss 2.541004 LR 0.000016 Time 0.319452 +2025-05-17 00:37:09,464 - Epoch: [165][ 500/ 518] Overall Loss 2.544289 Objective Loss 2.544289 LR 0.000016 Time 0.319298 +2025-05-17 00:37:15,071 - Epoch: [165][ 518/ 518] Overall Loss 2.547546 Objective Loss 2.547546 LR 0.000016 Time 0.319028 +2025-05-17 00:37:15,105 - --- validate (epoch=165)----------- +2025-05-17 00:37:15,106 - 4952 samples (32 per mini-batch) +2025-05-17 00:37:15,109 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:37:27,292 - Epoch: [165][ 50/ 155] Loss 2.900653 mAP 0.505234 +2025-05-17 00:37:39,732 - Epoch: [165][ 100/ 155] Loss 2.911896 mAP 0.494426 +2025-05-17 00:37:52,671 - Epoch: [165][ 150/ 155] Loss 2.907196 mAP 0.488556 +2025-05-17 00:37:55,580 - Epoch: [165][ 155/ 155] Loss 2.911161 mAP 0.487892 +2025-05-17 00:37:55,619 - ==> mAP: 0.48789 Loss: 2.911 + +2025-05-17 00:37:55,630 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:37:55,630 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:37:55,725 - + +2025-05-17 00:37:55,725 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:38:12,417 - Epoch: [166][ 50/ 518] Overall Loss 2.547165 Objective Loss 2.547165 LR 0.000016 Time 0.333771 +2025-05-17 00:38:28,308 - Epoch: [166][ 100/ 518] Overall Loss 2.528279 Objective Loss 2.528279 LR 0.000016 Time 0.325785 +2025-05-17 00:38:44,212 - Epoch: [166][ 150/ 518] Overall Loss 2.531011 Objective Loss 2.531011 LR 0.000016 Time 0.323212 +2025-05-17 00:39:00,129 - Epoch: [166][ 200/ 518] Overall Loss 2.541975 Objective Loss 2.541975 LR 0.000016 Time 0.321990 +2025-05-17 00:39:16,058 - Epoch: [166][ 250/ 518] Overall Loss 2.559723 Objective Loss 2.559723 LR 0.000016 Time 0.321303 +2025-05-17 00:39:31,960 - Epoch: [166][ 300/ 518] Overall Loss 2.559846 Objective Loss 2.559846 LR 0.000016 Time 0.320759 +2025-05-17 00:39:47,890 - Epoch: [166][ 350/ 518] Overall Loss 2.555202 Objective Loss 2.555202 LR 0.000016 Time 0.320446 +2025-05-17 00:40:03,815 - Epoch: [166][ 400/ 518] Overall Loss 2.556016 Objective Loss 2.556016 LR 0.000016 Time 0.320200 +2025-05-17 00:40:19,727 - Epoch: [166][ 450/ 518] Overall Loss 2.557126 Objective Loss 2.557126 LR 0.000016 Time 0.319980 +2025-05-17 00:40:35,644 - Epoch: [166][ 500/ 518] Overall Loss 2.555951 Objective Loss 2.555951 LR 0.000016 Time 0.319814 +2025-05-17 00:40:41,265 - Epoch: [166][ 518/ 518] Overall Loss 2.557789 Objective Loss 2.557789 LR 0.000016 Time 0.319552 +2025-05-17 00:40:41,304 - --- validate (epoch=166)----------- +2025-05-17 00:40:41,305 - 4952 samples (32 per mini-batch) +2025-05-17 00:40:41,308 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:40:53,385 - Epoch: [166][ 50/ 155] Loss 2.922065 mAP 0.487913 +2025-05-17 00:41:05,939 - Epoch: [166][ 100/ 155] Loss 2.910643 mAP 0.482945 +2025-05-17 00:41:18,939 - Epoch: [166][ 
150/ 155] Loss 2.901399 mAP 0.482437 +2025-05-17 00:41:21,865 - Epoch: [166][ 155/ 155] Loss 2.903707 mAP 0.481043 +2025-05-17 00:41:21,905 - ==> mAP: 0.48104 Loss: 2.904 + +2025-05-17 00:41:21,915 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:41:21,915 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:41:22,012 - + +2025-05-17 00:41:22,012 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:41:38,774 - Epoch: [167][ 50/ 518] Overall Loss 2.521993 Objective Loss 2.521993 LR 0.000016 Time 0.335172 +2025-05-17 00:41:54,661 - Epoch: [167][ 100/ 518] Overall Loss 2.530639 Objective Loss 2.530639 LR 0.000016 Time 0.326444 +2025-05-17 00:42:10,579 - Epoch: [167][ 150/ 518] Overall Loss 2.538525 Objective Loss 2.538525 LR 0.000016 Time 0.323748 +2025-05-17 00:42:26,515 - Epoch: [167][ 200/ 518] Overall Loss 2.535219 Objective Loss 2.535219 LR 0.000016 Time 0.322484 +2025-05-17 00:42:42,448 - Epoch: [167][ 250/ 518] Overall Loss 2.550621 Objective Loss 2.550621 LR 0.000016 Time 0.321717 +2025-05-17 00:42:58,392 - Epoch: [167][ 300/ 518] Overall Loss 2.551089 Objective Loss 2.551089 LR 0.000016 Time 0.321243 +2025-05-17 00:43:14,344 - Epoch: [167][ 350/ 518] Overall Loss 2.553374 Objective Loss 2.553374 LR 0.000016 Time 0.320927 +2025-05-17 00:43:30,265 - Epoch: [167][ 400/ 518] Overall Loss 2.548132 Objective Loss 2.548132 LR 0.000016 Time 0.320609 +2025-05-17 00:43:46,197 - Epoch: [167][ 450/ 518] Overall Loss 2.547721 Objective Loss 2.547721 LR 0.000016 Time 0.320388 +2025-05-17 00:44:02,121 - Epoch: [167][ 500/ 518] Overall Loss 2.547847 Objective Loss 2.547847 LR 0.000016 Time 0.320195 +2025-05-17 00:44:07,749 - Epoch: [167][ 518/ 518] Overall Loss 2.547868 Objective Loss 2.547868 LR 0.000016 Time 0.319933 +2025-05-17 00:44:07,782 - --- validate (epoch=167)----------- +2025-05-17 00:44:07,783 - 4952 samples (32 per mini-batch) +2025-05-17 
00:44:07,786 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:44:20,055 - Epoch: [167][ 50/ 155] Loss 2.869327 mAP 0.503012 +2025-05-17 00:44:32,795 - Epoch: [167][ 100/ 155] Loss 2.918779 mAP 0.484488 +2025-05-17 00:44:46,077 - Epoch: [167][ 150/ 155] Loss 2.913537 mAP 0.486929 +2025-05-17 00:44:48,900 - Epoch: [167][ 155/ 155] Loss 2.917166 mAP 0.485523 +2025-05-17 00:44:48,936 - ==> mAP: 0.48552 Loss: 2.917 + +2025-05-17 00:44:48,946 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:44:48,946 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:44:49,048 - + +2025-05-17 00:44:49,049 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:45:06,090 - Epoch: [168][ 50/ 518] Overall Loss 2.657927 Objective Loss 2.657927 LR 0.000016 Time 0.340771 +2025-05-17 00:45:21,975 - Epoch: [168][ 100/ 518] Overall Loss 2.590150 Objective Loss 2.590150 LR 0.000016 Time 0.329222 +2025-05-17 00:45:37,872 - Epoch: [168][ 150/ 518] Overall Loss 2.576407 Objective Loss 2.576407 LR 0.000016 Time 0.325460 +2025-05-17 00:45:53,806 - Epoch: [168][ 200/ 518] Overall Loss 2.563321 Objective Loss 2.563321 LR 0.000016 Time 0.323762 +2025-05-17 00:46:09,743 - Epoch: [168][ 250/ 518] Overall Loss 2.550320 Objective Loss 2.550320 LR 0.000016 Time 0.322753 +2025-05-17 00:46:25,674 - Epoch: [168][ 300/ 518] Overall Loss 2.546854 Objective Loss 2.546854 LR 0.000016 Time 0.322063 +2025-05-17 00:46:41,595 - Epoch: [168][ 350/ 518] Overall Loss 2.540610 Objective Loss 2.540610 LR 0.000016 Time 0.321541 +2025-05-17 00:46:57,523 - Epoch: [168][ 400/ 518] Overall Loss 2.547155 Objective Loss 2.547155 LR 0.000016 Time 0.321165 +2025-05-17 00:47:13,442 - Epoch: [168][ 450/ 518] Overall Loss 2.544467 Objective Loss 2.544467 LR 0.000016 Time 0.320853 +2025-05-17 00:47:29,379 - Epoch: [168][ 500/ 518] Overall 
Loss 2.544605 Objective Loss 2.544605 LR 0.000016 Time 0.320640 +2025-05-17 00:47:35,006 - Epoch: [168][ 518/ 518] Overall Loss 2.545052 Objective Loss 2.545052 LR 0.000016 Time 0.320360 +2025-05-17 00:47:35,048 - --- validate (epoch=168)----------- +2025-05-17 00:47:35,048 - 4952 samples (32 per mini-batch) +2025-05-17 00:47:35,052 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:47:47,119 - Epoch: [168][ 50/ 155] Loss 2.922218 mAP 0.492851 +2025-05-17 00:47:59,557 - Epoch: [168][ 100/ 155] Loss 2.910523 mAP 0.489365 +2025-05-17 00:48:12,376 - Epoch: [168][ 150/ 155] Loss 2.911534 mAP 0.485739 +2025-05-17 00:48:15,266 - Epoch: [168][ 155/ 155] Loss 2.917585 mAP 0.486084 +2025-05-17 00:48:15,304 - ==> mAP: 0.48608 Loss: 2.918 + +2025-05-17 00:48:15,314 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:48:15,314 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:48:15,416 - + +2025-05-17 00:48:15,417 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:48:32,146 - Epoch: [169][ 50/ 518] Overall Loss 2.539833 Objective Loss 2.539833 LR 0.000016 Time 0.334524 +2025-05-17 00:48:48,058 - Epoch: [169][ 100/ 518] Overall Loss 2.549517 Objective Loss 2.549517 LR 0.000016 Time 0.326368 +2025-05-17 00:49:03,956 - Epoch: [169][ 150/ 518] Overall Loss 2.554278 Objective Loss 2.554278 LR 0.000016 Time 0.323566 +2025-05-17 00:49:19,896 - Epoch: [169][ 200/ 518] Overall Loss 2.560844 Objective Loss 2.560844 LR 0.000016 Time 0.322367 +2025-05-17 00:49:35,836 - Epoch: [169][ 250/ 518] Overall Loss 2.553530 Objective Loss 2.553530 LR 0.000016 Time 0.321654 +2025-05-17 00:49:51,789 - Epoch: [169][ 300/ 518] Overall Loss 2.554931 Objective Loss 2.554931 LR 0.000016 Time 0.321219 +2025-05-17 00:50:07,736 - Epoch: [169][ 350/ 518] Overall Loss 2.544942 Objective Loss 2.544942 LR 
0.000016 Time 0.320892 +2025-05-17 00:50:23,696 - Epoch: [169][ 400/ 518] Overall Loss 2.542195 Objective Loss 2.542195 LR 0.000016 Time 0.320677 +2025-05-17 00:50:39,613 - Epoch: [169][ 450/ 518] Overall Loss 2.547776 Objective Loss 2.547776 LR 0.000016 Time 0.320415 +2025-05-17 00:50:55,541 - Epoch: [169][ 500/ 518] Overall Loss 2.547935 Objective Loss 2.547935 LR 0.000016 Time 0.320228 +2025-05-17 00:51:01,164 - Epoch: [169][ 518/ 518] Overall Loss 2.547569 Objective Loss 2.547569 LR 0.000016 Time 0.319955 +2025-05-17 00:51:01,199 - --- validate (epoch=169)----------- +2025-05-17 00:51:01,200 - 4952 samples (32 per mini-batch) +2025-05-17 00:51:01,204 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:51:13,220 - Epoch: [169][ 50/ 155] Loss 2.950433 mAP 0.493620 +2025-05-17 00:51:25,617 - Epoch: [169][ 100/ 155] Loss 2.919359 mAP 0.483241 +2025-05-17 00:51:38,512 - Epoch: [169][ 150/ 155] Loss 2.917165 mAP 0.482622 +2025-05-17 00:51:41,390 - Epoch: [169][ 155/ 155] Loss 2.916248 mAP 0.484713 +2025-05-17 00:51:41,430 - ==> mAP: 0.48471 Loss: 2.916 + +2025-05-17 00:51:41,439 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:51:41,439 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:51:41,535 - + +2025-05-17 00:51:41,535 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:51:58,215 - Epoch: [170][ 50/ 518] Overall Loss 2.543835 Objective Loss 2.543835 LR 0.000016 Time 0.333528 +2025-05-17 00:52:14,122 - Epoch: [170][ 100/ 518] Overall Loss 2.566840 Objective Loss 2.566840 LR 0.000016 Time 0.325830 +2025-05-17 00:52:30,063 - Epoch: [170][ 150/ 518] Overall Loss 2.554116 Objective Loss 2.554116 LR 0.000016 Time 0.323488 +2025-05-17 00:52:45,957 - Epoch: [170][ 200/ 518] Overall Loss 2.555081 Objective Loss 2.555081 LR 0.000016 Time 0.322079 +2025-05-17 00:53:01,875 
- Epoch: [170][ 250/ 518] Overall Loss 2.549818 Objective Loss 2.549818 LR 0.000016 Time 0.321331 +2025-05-17 00:53:17,813 - Epoch: [170][ 300/ 518] Overall Loss 2.555572 Objective Loss 2.555572 LR 0.000016 Time 0.320899 +2025-05-17 00:53:33,748 - Epoch: [170][ 350/ 518] Overall Loss 2.554455 Objective Loss 2.554455 LR 0.000016 Time 0.320584 +2025-05-17 00:53:49,705 - Epoch: [170][ 400/ 518] Overall Loss 2.561315 Objective Loss 2.561315 LR 0.000016 Time 0.320402 +2025-05-17 00:54:05,636 - Epoch: [170][ 450/ 518] Overall Loss 2.556280 Objective Loss 2.556280 LR 0.000016 Time 0.320201 +2025-05-17 00:54:21,571 - Epoch: [170][ 500/ 518] Overall Loss 2.553740 Objective Loss 2.553740 LR 0.000016 Time 0.320049 +2025-05-17 00:54:27,200 - Epoch: [170][ 518/ 518] Overall Loss 2.555401 Objective Loss 2.555401 LR 0.000016 Time 0.319793 +2025-05-17 00:54:27,239 - --- validate (epoch=170)----------- +2025-05-17 00:54:27,240 - 4952 samples (32 per mini-batch) +2025-05-17 00:54:27,244 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:54:39,480 - Epoch: [170][ 50/ 155] Loss 2.888662 mAP 0.492969 +2025-05-17 00:54:51,973 - Epoch: [170][ 100/ 155] Loss 2.909855 mAP 0.494380 +2025-05-17 00:55:04,939 - Epoch: [170][ 150/ 155] Loss 2.918373 mAP 0.485845 +2025-05-17 00:55:07,912 - Epoch: [170][ 155/ 155] Loss 2.916486 mAP 0.484649 +2025-05-17 00:55:07,951 - ==> mAP: 0.48465 Loss: 2.916 + +2025-05-17 00:55:07,962 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:55:07,962 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:55:08,060 - + +2025-05-17 00:55:08,061 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:55:25,000 - Epoch: [171][ 50/ 518] Overall Loss 2.563594 Objective Loss 2.563594 LR 0.000016 Time 0.338727 +2025-05-17 00:55:40,914 - Epoch: [171][ 100/ 518] Overall Loss 2.567126 
Objective Loss 2.567126 LR 0.000016 Time 0.328495 +2025-05-17 00:55:56,829 - Epoch: [171][ 150/ 518] Overall Loss 2.577092 Objective Loss 2.577092 LR 0.000016 Time 0.325089 +2025-05-17 00:56:12,730 - Epoch: [171][ 200/ 518] Overall Loss 2.573706 Objective Loss 2.573706 LR 0.000016 Time 0.323318 +2025-05-17 00:56:28,650 - Epoch: [171][ 250/ 518] Overall Loss 2.573476 Objective Loss 2.573476 LR 0.000016 Time 0.322330 +2025-05-17 00:56:44,570 - Epoch: [171][ 300/ 518] Overall Loss 2.559325 Objective Loss 2.559325 LR 0.000016 Time 0.321674 +2025-05-17 00:57:00,513 - Epoch: [171][ 350/ 518] Overall Loss 2.552271 Objective Loss 2.552271 LR 0.000016 Time 0.321270 +2025-05-17 00:57:16,454 - Epoch: [171][ 400/ 518] Overall Loss 2.562055 Objective Loss 2.562055 LR 0.000016 Time 0.320962 +2025-05-17 00:57:32,398 - Epoch: [171][ 450/ 518] Overall Loss 2.560441 Objective Loss 2.560441 LR 0.000016 Time 0.320729 +2025-05-17 00:57:48,340 - Epoch: [171][ 500/ 518] Overall Loss 2.564669 Objective Loss 2.564669 LR 0.000016 Time 0.320540 +2025-05-17 00:57:53,974 - Epoch: [171][ 518/ 518] Overall Loss 2.561059 Objective Loss 2.561059 LR 0.000016 Time 0.320277 +2025-05-17 00:57:54,012 - --- validate (epoch=171)----------- +2025-05-17 00:57:54,013 - 4952 samples (32 per mini-batch) +2025-05-17 00:57:54,017 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 00:58:06,175 - Epoch: [171][ 50/ 155] Loss 2.962845 mAP 0.480461 +2025-05-17 00:58:18,563 - Epoch: [171][ 100/ 155] Loss 2.918455 mAP 0.480133 +2025-05-17 00:58:31,674 - Epoch: [171][ 150/ 155] Loss 2.919100 mAP 0.481078 +2025-05-17 00:58:34,645 - Epoch: [171][ 155/ 155] Loss 2.913456 mAP 0.483296 +2025-05-17 00:58:34,680 - ==> mAP: 0.48330 Loss: 2.913 + +2025-05-17 00:58:34,690 - ==> Best [mAP: 0.490555 vloss: 2.917984 Params: 2177088 on epoch: 150] +2025-05-17 00:58:34,690 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 00:58:34,788 - + +2025-05-17 00:58:34,789 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 00:58:51,635 - Epoch: [172][ 50/ 518] Overall Loss 2.570996 Objective Loss 2.570996 LR 0.000016 Time 0.336858 +2025-05-17 00:59:07,530 - Epoch: [172][ 100/ 518] Overall Loss 2.559434 Objective Loss 2.559434 LR 0.000016 Time 0.327365 +2025-05-17 00:59:23,419 - Epoch: [172][ 150/ 518] Overall Loss 2.554513 Objective Loss 2.554513 LR 0.000016 Time 0.324167 +2025-05-17 00:59:39,324 - Epoch: [172][ 200/ 518] Overall Loss 2.536043 Objective Loss 2.536043 LR 0.000016 Time 0.322646 +2025-05-17 00:59:55,236 - Epoch: [172][ 250/ 518] Overall Loss 2.542557 Objective Loss 2.542557 LR 0.000016 Time 0.321760 +2025-05-17 01:00:11,169 - Epoch: [172][ 300/ 518] Overall Loss 2.548098 Objective Loss 2.548098 LR 0.000016 Time 0.321238 +2025-05-17 01:00:27,123 - Epoch: [172][ 350/ 518] Overall Loss 2.551831 Objective Loss 2.551831 LR 0.000016 Time 0.320926 +2025-05-17 01:00:43,070 - Epoch: [172][ 400/ 518] Overall Loss 2.554086 Objective Loss 2.554086 LR 0.000016 Time 0.320678 +2025-05-17 01:00:59,022 - Epoch: [172][ 450/ 518] Overall Loss 2.547627 Objective Loss 2.547627 LR 0.000016 Time 0.320494 +2025-05-17 01:01:14,995 - Epoch: [172][ 500/ 518] Overall Loss 2.546178 Objective Loss 2.546178 LR 0.000016 Time 0.320390 +2025-05-17 01:01:20,636 - Epoch: [172][ 518/ 518] Overall Loss 2.549713 Objective Loss 2.549713 LR 0.000016 Time 0.320145 +2025-05-17 01:01:20,674 - --- validate (epoch=172)----------- +2025-05-17 01:01:20,675 - 4952 samples (32 per mini-batch) +2025-05-17 01:01:20,678 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:01:32,797 - Epoch: [172][ 50/ 155] Loss 2.928260 mAP 0.498501 +2025-05-17 01:01:45,216 - Epoch: [172][ 100/ 155] Loss 2.928247 mAP 0.492905 +2025-05-17 01:01:58,292 - Epoch: [172][ 
150/ 155] Loss 2.922555 mAP 0.491025 +2025-05-17 01:02:01,291 - Epoch: [172][ 155/ 155] Loss 2.923876 mAP 0.491273 +2025-05-17 01:02:01,331 - ==> mAP: 0.49127 Loss: 2.924 + +2025-05-17 01:02:01,342 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:02:01,342 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:02:01,480 - + +2025-05-17 01:02:01,481 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:02:18,158 - Epoch: [173][ 50/ 518] Overall Loss 2.513376 Objective Loss 2.513376 LR 0.000016 Time 0.333485 +2025-05-17 01:02:34,055 - Epoch: [173][ 100/ 518] Overall Loss 2.534459 Objective Loss 2.534459 LR 0.000016 Time 0.325704 +2025-05-17 01:02:49,969 - Epoch: [173][ 150/ 518] Overall Loss 2.543256 Objective Loss 2.543256 LR 0.000016 Time 0.323229 +2025-05-17 01:03:05,899 - Epoch: [173][ 200/ 518] Overall Loss 2.561534 Objective Loss 2.561534 LR 0.000016 Time 0.322065 +2025-05-17 01:03:21,831 - Epoch: [173][ 250/ 518] Overall Loss 2.566593 Objective Loss 2.566593 LR 0.000016 Time 0.321378 +2025-05-17 01:03:37,775 - Epoch: [173][ 300/ 518] Overall Loss 2.559523 Objective Loss 2.559523 LR 0.000016 Time 0.320959 +2025-05-17 01:03:53,716 - Epoch: [173][ 350/ 518] Overall Loss 2.560240 Objective Loss 2.560240 LR 0.000016 Time 0.320649 +2025-05-17 01:04:09,654 - Epoch: [173][ 400/ 518] Overall Loss 2.564717 Objective Loss 2.564717 LR 0.000016 Time 0.320412 +2025-05-17 01:04:25,587 - Epoch: [173][ 450/ 518] Overall Loss 2.556179 Objective Loss 2.556179 LR 0.000016 Time 0.320214 +2025-05-17 01:04:41,514 - Epoch: [173][ 500/ 518] Overall Loss 2.557609 Objective Loss 2.557609 LR 0.000016 Time 0.320046 +2025-05-17 01:04:47,124 - Epoch: [173][ 518/ 518] Overall Loss 2.557998 Objective Loss 2.557998 LR 0.000016 Time 0.319754 +2025-05-17 01:04:47,161 - --- validate (epoch=173)----------- +2025-05-17 01:04:47,162 - 4952 samples (32 per mini-batch) +2025-05-17 
01:04:47,165 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:04:59,346 - Epoch: [173][ 50/ 155] Loss 2.890900 mAP 0.513988 +2025-05-17 01:05:11,634 - Epoch: [173][ 100/ 155] Loss 2.919348 mAP 0.495579 +2025-05-17 01:05:24,535 - Epoch: [173][ 150/ 155] Loss 2.918772 mAP 0.484564 +2025-05-17 01:05:27,374 - Epoch: [173][ 155/ 155] Loss 2.915028 mAP 0.484265 +2025-05-17 01:05:27,413 - ==> mAP: 0.48426 Loss: 2.915 + +2025-05-17 01:05:27,423 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:05:27,424 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:05:27,515 - + +2025-05-17 01:05:27,515 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:05:44,269 - Epoch: [174][ 50/ 518] Overall Loss 2.508790 Objective Loss 2.508790 LR 0.000016 Time 0.335025 +2025-05-17 01:06:00,158 - Epoch: [174][ 100/ 518] Overall Loss 2.503611 Objective Loss 2.503611 LR 0.000016 Time 0.326389 +2025-05-17 01:06:16,071 - Epoch: [174][ 150/ 518] Overall Loss 2.520931 Objective Loss 2.520931 LR 0.000016 Time 0.323677 +2025-05-17 01:06:32,007 - Epoch: [174][ 200/ 518] Overall Loss 2.527397 Objective Loss 2.527397 LR 0.000016 Time 0.322432 +2025-05-17 01:06:47,946 - Epoch: [174][ 250/ 518] Overall Loss 2.537206 Objective Loss 2.537206 LR 0.000016 Time 0.321701 +2025-05-17 01:07:03,884 - Epoch: [174][ 300/ 518] Overall Loss 2.529288 Objective Loss 2.529288 LR 0.000016 Time 0.321209 +2025-05-17 01:07:19,821 - Epoch: [174][ 350/ 518] Overall Loss 2.534065 Objective Loss 2.534065 LR 0.000016 Time 0.320855 +2025-05-17 01:07:35,769 - Epoch: [174][ 400/ 518] Overall Loss 2.533669 Objective Loss 2.533669 LR 0.000016 Time 0.320616 +2025-05-17 01:07:51,709 - Epoch: [174][ 450/ 518] Overall Loss 2.532882 Objective Loss 2.532882 LR 0.000016 Time 0.320413 +2025-05-17 01:08:07,651 - Epoch: [174][ 500/ 518] Overall 
Loss 2.532436 Objective Loss 2.532436 LR 0.000016 Time 0.320254 +2025-05-17 01:08:13,269 - Epoch: [174][ 518/ 518] Overall Loss 2.533630 Objective Loss 2.533630 LR 0.000016 Time 0.319970 +2025-05-17 01:08:13,303 - --- validate (epoch=174)----------- +2025-05-17 01:08:13,304 - 4952 samples (32 per mini-batch) +2025-05-17 01:08:13,307 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:08:25,604 - Epoch: [174][ 50/ 155] Loss 2.911541 mAP 0.486596 +2025-05-17 01:08:38,249 - Epoch: [174][ 100/ 155] Loss 2.921732 mAP 0.486153 +2025-05-17 01:08:51,489 - Epoch: [174][ 150/ 155] Loss 2.908684 mAP 0.489799 +2025-05-17 01:08:54,495 - Epoch: [174][ 155/ 155] Loss 2.911512 mAP 0.488711 +2025-05-17 01:08:54,537 - ==> mAP: 0.48871 Loss: 2.912 + +2025-05-17 01:08:54,547 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:08:54,547 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:08:54,645 - + +2025-05-17 01:08:54,645 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:09:11,520 - Epoch: [175][ 50/ 518] Overall Loss 2.582048 Objective Loss 2.582048 LR 0.000016 Time 0.337414 +2025-05-17 01:09:27,391 - Epoch: [175][ 100/ 518] Overall Loss 2.579909 Objective Loss 2.579909 LR 0.000016 Time 0.327412 +2025-05-17 01:09:43,264 - Epoch: [175][ 150/ 518] Overall Loss 2.564044 Objective Loss 2.564044 LR 0.000016 Time 0.324090 +2025-05-17 01:09:59,167 - Epoch: [175][ 200/ 518] Overall Loss 2.560763 Objective Loss 2.560763 LR 0.000016 Time 0.322577 +2025-05-17 01:10:15,079 - Epoch: [175][ 250/ 518] Overall Loss 2.555172 Objective Loss 2.555172 LR 0.000016 Time 0.321705 +2025-05-17 01:10:31,000 - Epoch: [175][ 300/ 518] Overall Loss 2.557937 Objective Loss 2.557937 LR 0.000016 Time 0.321153 +2025-05-17 01:10:46,916 - Epoch: [175][ 350/ 518] Overall Loss 2.555231 Objective Loss 2.555231 LR 
0.000016 Time 0.320744 +2025-05-17 01:11:02,829 - Epoch: [175][ 400/ 518] Overall Loss 2.548717 Objective Loss 2.548717 LR 0.000016 Time 0.320431 +2025-05-17 01:11:18,741 - Epoch: [175][ 450/ 518] Overall Loss 2.552586 Objective Loss 2.552586 LR 0.000016 Time 0.320187 +2025-05-17 01:11:34,663 - Epoch: [175][ 500/ 518] Overall Loss 2.548046 Objective Loss 2.548046 LR 0.000016 Time 0.320010 +2025-05-17 01:11:40,300 - Epoch: [175][ 518/ 518] Overall Loss 2.549760 Objective Loss 2.549760 LR 0.000016 Time 0.319772 +2025-05-17 01:11:40,336 - --- validate (epoch=175)----------- +2025-05-17 01:11:40,336 - 4952 samples (32 per mini-batch) +2025-05-17 01:11:40,340 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:11:52,435 - Epoch: [175][ 50/ 155] Loss 2.927792 mAP 0.500374 +2025-05-17 01:12:04,684 - Epoch: [175][ 100/ 155] Loss 2.932384 mAP 0.490498 +2025-05-17 01:12:17,609 - Epoch: [175][ 150/ 155] Loss 2.915329 mAP 0.488973 +2025-05-17 01:12:20,496 - Epoch: [175][ 155/ 155] Loss 2.914511 mAP 0.487877 +2025-05-17 01:12:20,537 - ==> mAP: 0.48788 Loss: 2.915 + +2025-05-17 01:12:20,547 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:12:20,547 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:12:20,640 - + +2025-05-17 01:12:20,641 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:12:37,351 - Epoch: [176][ 50/ 518] Overall Loss 2.487684 Objective Loss 2.487684 LR 0.000016 Time 0.334139 +2025-05-17 01:12:53,227 - Epoch: [176][ 100/ 518] Overall Loss 2.506436 Objective Loss 2.506436 LR 0.000016 Time 0.325818 +2025-05-17 01:13:09,141 - Epoch: [176][ 150/ 518] Overall Loss 2.517445 Objective Loss 2.517445 LR 0.000016 Time 0.323303 +2025-05-17 01:13:25,051 - Epoch: [176][ 200/ 518] Overall Loss 2.504517 Objective Loss 2.504517 LR 0.000016 Time 0.322022 +2025-05-17 01:13:40,963 
- Epoch: [176][ 250/ 518] Overall Loss 2.509607 Objective Loss 2.509607 LR 0.000016 Time 0.321261 +2025-05-17 01:13:56,883 - Epoch: [176][ 300/ 518] Overall Loss 2.517925 Objective Loss 2.517925 LR 0.000016 Time 0.320780 +2025-05-17 01:14:12,804 - Epoch: [176][ 350/ 518] Overall Loss 2.528864 Objective Loss 2.528864 LR 0.000016 Time 0.320441 +2025-05-17 01:14:28,733 - Epoch: [176][ 400/ 518] Overall Loss 2.531122 Objective Loss 2.531122 LR 0.000016 Time 0.320206 +2025-05-17 01:14:44,656 - Epoch: [176][ 450/ 518] Overall Loss 2.536759 Objective Loss 2.536759 LR 0.000016 Time 0.320010 +2025-05-17 01:15:00,581 - Epoch: [176][ 500/ 518] Overall Loss 2.540083 Objective Loss 2.540083 LR 0.000016 Time 0.319856 +2025-05-17 01:15:06,221 - Epoch: [176][ 518/ 518] Overall Loss 2.542537 Objective Loss 2.542537 LR 0.000016 Time 0.319629 +2025-05-17 01:15:06,254 - --- validate (epoch=176)----------- +2025-05-17 01:15:06,255 - 4952 samples (32 per mini-batch) +2025-05-17 01:15:06,258 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:15:18,544 - Epoch: [176][ 50/ 155] Loss 2.895720 mAP 0.493678 +2025-05-17 01:15:30,872 - Epoch: [176][ 100/ 155] Loss 2.916966 mAP 0.487056 +2025-05-17 01:15:43,846 - Epoch: [176][ 150/ 155] Loss 2.912893 mAP 0.487814 +2025-05-17 01:15:46,738 - Epoch: [176][ 155/ 155] Loss 2.914067 mAP 0.486107 +2025-05-17 01:15:46,773 - ==> mAP: 0.48611 Loss: 2.914 + +2025-05-17 01:15:46,783 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:15:46,783 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:15:46,879 - + +2025-05-17 01:15:46,879 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:16:03,564 - Epoch: [177][ 50/ 518] Overall Loss 2.542208 Objective Loss 2.542208 LR 0.000016 Time 0.333614 +2025-05-17 01:16:19,460 - Epoch: [177][ 100/ 518] Overall Loss 2.519618 
Objective Loss 2.519618 LR 0.000016 Time 0.325764 +2025-05-17 01:16:35,364 - Epoch: [177][ 150/ 518] Overall Loss 2.519092 Objective Loss 2.519092 LR 0.000016 Time 0.323196 +2025-05-17 01:16:51,298 - Epoch: [177][ 200/ 518] Overall Loss 2.525391 Objective Loss 2.525391 LR 0.000016 Time 0.322066 +2025-05-17 01:17:07,228 - Epoch: [177][ 250/ 518] Overall Loss 2.533591 Objective Loss 2.533591 LR 0.000016 Time 0.321369 +2025-05-17 01:17:23,153 - Epoch: [177][ 300/ 518] Overall Loss 2.540953 Objective Loss 2.540953 LR 0.000016 Time 0.320889 +2025-05-17 01:17:39,077 - Epoch: [177][ 350/ 518] Overall Loss 2.541483 Objective Loss 2.541483 LR 0.000016 Time 0.320541 +2025-05-17 01:17:54,994 - Epoch: [177][ 400/ 518] Overall Loss 2.544680 Objective Loss 2.544680 LR 0.000016 Time 0.320262 +2025-05-17 01:18:10,910 - Epoch: [177][ 450/ 518] Overall Loss 2.544816 Objective Loss 2.544816 LR 0.000016 Time 0.320044 +2025-05-17 01:18:26,859 - Epoch: [177][ 500/ 518] Overall Loss 2.542287 Objective Loss 2.542287 LR 0.000016 Time 0.319937 +2025-05-17 01:18:32,493 - Epoch: [177][ 518/ 518] Overall Loss 2.541773 Objective Loss 2.541773 LR 0.000016 Time 0.319696 +2025-05-17 01:18:32,528 - --- validate (epoch=177)----------- +2025-05-17 01:18:32,529 - 4952 samples (32 per mini-batch) +2025-05-17 01:18:32,533 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:18:44,785 - Epoch: [177][ 50/ 155] Loss 2.934043 mAP 0.481560 +2025-05-17 01:18:57,316 - Epoch: [177][ 100/ 155] Loss 2.913610 mAP 0.490837 +2025-05-17 01:19:10,338 - Epoch: [177][ 150/ 155] Loss 2.906264 mAP 0.485682 +2025-05-17 01:19:13,292 - Epoch: [177][ 155/ 155] Loss 2.910125 mAP 0.485533 +2025-05-17 01:19:13,333 - ==> mAP: 0.48553 Loss: 2.910 + +2025-05-17 01:19:13,344 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:19:13,344 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:19:13,441 - + +2025-05-17 01:19:13,441 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:19:30,065 - Epoch: [178][ 50/ 518] Overall Loss 2.454770 Objective Loss 2.454770 LR 0.000016 Time 0.332395 +2025-05-17 01:19:45,956 - Epoch: [178][ 100/ 518] Overall Loss 2.498458 Objective Loss 2.498458 LR 0.000016 Time 0.325101 +2025-05-17 01:20:01,862 - Epoch: [178][ 150/ 518] Overall Loss 2.521557 Objective Loss 2.521557 LR 0.000016 Time 0.322769 +2025-05-17 01:20:17,761 - Epoch: [178][ 200/ 518] Overall Loss 2.527313 Objective Loss 2.527313 LR 0.000016 Time 0.321565 +2025-05-17 01:20:33,671 - Epoch: [178][ 250/ 518] Overall Loss 2.530632 Objective Loss 2.530632 LR 0.000016 Time 0.320888 +2025-05-17 01:20:49,593 - Epoch: [178][ 300/ 518] Overall Loss 2.527634 Objective Loss 2.527634 LR 0.000016 Time 0.320479 +2025-05-17 01:21:05,516 - Epoch: [178][ 350/ 518] Overall Loss 2.524315 Objective Loss 2.524315 LR 0.000016 Time 0.320185 +2025-05-17 01:21:21,439 - Epoch: [178][ 400/ 518] Overall Loss 2.528675 Objective Loss 2.528675 LR 0.000016 Time 0.319968 +2025-05-17 01:21:37,359 - Epoch: [178][ 450/ 518] Overall Loss 2.531981 Objective Loss 2.531981 LR 0.000016 Time 0.319791 +2025-05-17 01:21:53,286 - Epoch: [178][ 500/ 518] Overall Loss 2.532158 Objective Loss 2.532158 LR 0.000016 Time 0.319664 +2025-05-17 01:21:58,916 - Epoch: [178][ 518/ 518] Overall Loss 2.533894 Objective Loss 2.533894 LR 0.000016 Time 0.319424 +2025-05-17 01:21:58,953 - --- validate (epoch=178)----------- +2025-05-17 01:21:58,954 - 4952 samples (32 per mini-batch) +2025-05-17 01:21:58,957 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:22:11,104 - Epoch: [178][ 50/ 155] Loss 2.919195 mAP 0.479904 +2025-05-17 01:22:23,382 - Epoch: [178][ 100/ 155] Loss 2.927344 mAP 0.479166 +2025-05-17 01:22:36,275 - Epoch: [178][ 
150/ 155] Loss 2.915255 mAP 0.484569 +2025-05-17 01:22:39,153 - Epoch: [178][ 155/ 155] Loss 2.914107 mAP 0.483460 +2025-05-17 01:22:39,193 - ==> mAP: 0.48346 Loss: 2.914 + +2025-05-17 01:22:39,203 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:22:39,203 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:22:39,298 - + +2025-05-17 01:22:39,298 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:22:56,003 - Epoch: [179][ 50/ 518] Overall Loss 2.551568 Objective Loss 2.551568 LR 0.000016 Time 0.334034 +2025-05-17 01:23:11,911 - Epoch: [179][ 100/ 518] Overall Loss 2.560506 Objective Loss 2.560506 LR 0.000016 Time 0.326081 +2025-05-17 01:23:27,815 - Epoch: [179][ 150/ 518] Overall Loss 2.546999 Objective Loss 2.546999 LR 0.000016 Time 0.323409 +2025-05-17 01:23:43,756 - Epoch: [179][ 200/ 518] Overall Loss 2.544855 Objective Loss 2.544855 LR 0.000016 Time 0.322260 +2025-05-17 01:23:59,714 - Epoch: [179][ 250/ 518] Overall Loss 2.544267 Objective Loss 2.544267 LR 0.000016 Time 0.321638 +2025-05-17 01:24:15,649 - Epoch: [179][ 300/ 518] Overall Loss 2.527627 Objective Loss 2.527627 LR 0.000016 Time 0.321145 +2025-05-17 01:24:31,592 - Epoch: [179][ 350/ 518] Overall Loss 2.535762 Objective Loss 2.535762 LR 0.000016 Time 0.320816 +2025-05-17 01:24:47,533 - Epoch: [179][ 400/ 518] Overall Loss 2.537789 Objective Loss 2.537789 LR 0.000016 Time 0.320565 +2025-05-17 01:25:03,456 - Epoch: [179][ 450/ 518] Overall Loss 2.541165 Objective Loss 2.541165 LR 0.000016 Time 0.320328 +2025-05-17 01:25:19,365 - Epoch: [179][ 500/ 518] Overall Loss 2.541468 Objective Loss 2.541468 LR 0.000016 Time 0.320112 +2025-05-17 01:25:24,974 - Epoch: [179][ 518/ 518] Overall Loss 2.543067 Objective Loss 2.543067 LR 0.000016 Time 0.319816 +2025-05-17 01:25:25,011 - --- validate (epoch=179)----------- +2025-05-17 01:25:25,012 - 4952 samples (32 per mini-batch) +2025-05-17 
01:25:25,015 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:25:37,050 - Epoch: [179][ 50/ 155] Loss 2.910007 mAP 0.493928 +2025-05-17 01:25:49,442 - Epoch: [179][ 100/ 155] Loss 2.913147 mAP 0.485606 +2025-05-17 01:26:02,237 - Epoch: [179][ 150/ 155] Loss 2.911290 mAP 0.488715 +2025-05-17 01:26:05,093 - Epoch: [179][ 155/ 155] Loss 2.910563 mAP 0.488353 +2025-05-17 01:26:05,133 - ==> mAP: 0.48835 Loss: 2.911 + +2025-05-17 01:26:05,143 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:26:05,144 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:26:05,238 - + +2025-05-17 01:26:05,238 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:26:22,084 - Epoch: [180][ 50/ 518] Overall Loss 2.578314 Objective Loss 2.578314 LR 0.000016 Time 0.336835 +2025-05-17 01:26:37,962 - Epoch: [180][ 100/ 518] Overall Loss 2.542603 Objective Loss 2.542603 LR 0.000016 Time 0.327194 +2025-05-17 01:26:53,859 - Epoch: [180][ 150/ 518] Overall Loss 2.531702 Objective Loss 2.531702 LR 0.000016 Time 0.324098 +2025-05-17 01:27:09,765 - Epoch: [180][ 200/ 518] Overall Loss 2.554873 Objective Loss 2.554873 LR 0.000016 Time 0.322601 +2025-05-17 01:27:25,673 - Epoch: [180][ 250/ 518] Overall Loss 2.560413 Objective Loss 2.560413 LR 0.000016 Time 0.321707 +2025-05-17 01:27:41,582 - Epoch: [180][ 300/ 518] Overall Loss 2.559041 Objective Loss 2.559041 LR 0.000016 Time 0.321114 +2025-05-17 01:27:57,519 - Epoch: [180][ 350/ 518] Overall Loss 2.555667 Objective Loss 2.555667 LR 0.000016 Time 0.320771 +2025-05-17 01:28:13,451 - Epoch: [180][ 400/ 518] Overall Loss 2.557802 Objective Loss 2.557802 LR 0.000016 Time 0.320503 +2025-05-17 01:28:29,400 - Epoch: [180][ 450/ 518] Overall Loss 2.546680 Objective Loss 2.546680 LR 0.000016 Time 0.320333 +2025-05-17 01:28:45,349 - Epoch: [180][ 500/ 518] Overall 
Loss 2.547483 Objective Loss 2.547483 LR 0.000016 Time 0.320196 +2025-05-17 01:28:50,983 - Epoch: [180][ 518/ 518] Overall Loss 2.548297 Objective Loss 2.548297 LR 0.000016 Time 0.319946 +2025-05-17 01:28:51,022 - --- validate (epoch=180)----------- +2025-05-17 01:28:51,023 - 4952 samples (32 per mini-batch) +2025-05-17 01:28:51,026 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:29:03,284 - Epoch: [180][ 50/ 155] Loss 2.945298 mAP 0.486989 +2025-05-17 01:29:15,677 - Epoch: [180][ 100/ 155] Loss 2.918971 mAP 0.490749 +2025-05-17 01:29:28,777 - Epoch: [180][ 150/ 155] Loss 2.919173 mAP 0.485846 +2025-05-17 01:29:31,742 - Epoch: [180][ 155/ 155] Loss 2.912681 mAP 0.487048 +2025-05-17 01:29:31,777 - ==> mAP: 0.48705 Loss: 2.913 + +2025-05-17 01:29:31,787 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:29:31,787 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:29:31,880 - + +2025-05-17 01:29:31,880 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:29:48,746 - Epoch: [181][ 50/ 518] Overall Loss 2.552567 Objective Loss 2.552567 LR 0.000016 Time 0.337245 +2025-05-17 01:30:04,607 - Epoch: [181][ 100/ 518] Overall Loss 2.532732 Objective Loss 2.532732 LR 0.000016 Time 0.327225 +2025-05-17 01:30:20,488 - Epoch: [181][ 150/ 518] Overall Loss 2.535481 Objective Loss 2.535481 LR 0.000016 Time 0.324017 +2025-05-17 01:30:36,386 - Epoch: [181][ 200/ 518] Overall Loss 2.539817 Objective Loss 2.539817 LR 0.000016 Time 0.322495 +2025-05-17 01:30:52,335 - Epoch: [181][ 250/ 518] Overall Loss 2.558704 Objective Loss 2.558704 LR 0.000016 Time 0.321790 +2025-05-17 01:31:08,271 - Epoch: [181][ 300/ 518] Overall Loss 2.552344 Objective Loss 2.552344 LR 0.000016 Time 0.321277 +2025-05-17 01:31:24,205 - Epoch: [181][ 350/ 518] Overall Loss 2.545397 Objective Loss 2.545397 LR 
0.000016 Time 0.320902 +2025-05-17 01:31:40,145 - Epoch: [181][ 400/ 518] Overall Loss 2.540718 Objective Loss 2.540718 LR 0.000016 Time 0.320640 +2025-05-17 01:31:56,083 - Epoch: [181][ 450/ 518] Overall Loss 2.537608 Objective Loss 2.537608 LR 0.000016 Time 0.320428 +2025-05-17 01:32:12,027 - Epoch: [181][ 500/ 518] Overall Loss 2.543186 Objective Loss 2.543186 LR 0.000016 Time 0.320270 +2025-05-17 01:32:17,651 - Epoch: [181][ 518/ 518] Overall Loss 2.545524 Objective Loss 2.545524 LR 0.000016 Time 0.319998 +2025-05-17 01:32:17,688 - --- validate (epoch=181)----------- +2025-05-17 01:32:17,688 - 4952 samples (32 per mini-batch) +2025-05-17 01:32:17,692 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:32:29,678 - Epoch: [181][ 50/ 155] Loss 2.918333 mAP 0.489648 +2025-05-17 01:32:42,087 - Epoch: [181][ 100/ 155] Loss 2.897402 mAP 0.491346 +2025-05-17 01:32:54,911 - Epoch: [181][ 150/ 155] Loss 2.914897 mAP 0.486960 +2025-05-17 01:32:57,769 - Epoch: [181][ 155/ 155] Loss 2.916019 mAP 0.487360 +2025-05-17 01:32:57,810 - ==> mAP: 0.48736 Loss: 2.916 + +2025-05-17 01:32:57,820 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:32:57,820 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:32:57,914 - + +2025-05-17 01:32:57,914 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:33:14,595 - Epoch: [182][ 50/ 518] Overall Loss 2.552986 Objective Loss 2.552986 LR 0.000016 Time 0.333563 +2025-05-17 01:33:30,465 - Epoch: [182][ 100/ 518] Overall Loss 2.544440 Objective Loss 2.544440 LR 0.000016 Time 0.325467 +2025-05-17 01:33:46,361 - Epoch: [182][ 150/ 518] Overall Loss 2.540487 Objective Loss 2.540487 LR 0.000016 Time 0.322945 +2025-05-17 01:34:02,309 - Epoch: [182][ 200/ 518] Overall Loss 2.547787 Objective Loss 2.547787 LR 0.000016 Time 0.321941 +2025-05-17 01:34:18,259 
- Epoch: [182][ 250/ 518] Overall Loss 2.547068 Objective Loss 2.547068 LR 0.000016 Time 0.321352 +2025-05-17 01:34:34,217 - Epoch: [182][ 300/ 518] Overall Loss 2.553162 Objective Loss 2.553162 LR 0.000016 Time 0.320985 +2025-05-17 01:34:50,184 - Epoch: [182][ 350/ 518] Overall Loss 2.553695 Objective Loss 2.553695 LR 0.000016 Time 0.320748 +2025-05-17 01:35:06,111 - Epoch: [182][ 400/ 518] Overall Loss 2.551889 Objective Loss 2.551889 LR 0.000016 Time 0.320469 +2025-05-17 01:35:22,031 - Epoch: [182][ 450/ 518] Overall Loss 2.550767 Objective Loss 2.550767 LR 0.000016 Time 0.320236 +2025-05-17 01:35:37,972 - Epoch: [182][ 500/ 518] Overall Loss 2.545836 Objective Loss 2.545836 LR 0.000016 Time 0.320093 +2025-05-17 01:35:43,590 - Epoch: [182][ 518/ 518] Overall Loss 2.546902 Objective Loss 2.546902 LR 0.000016 Time 0.319815 +2025-05-17 01:35:43,629 - --- validate (epoch=182)----------- +2025-05-17 01:35:43,630 - 4952 samples (32 per mini-batch) +2025-05-17 01:35:43,633 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:35:55,985 - Epoch: [182][ 50/ 155] Loss 2.917624 mAP 0.477067 +2025-05-17 01:36:08,426 - Epoch: [182][ 100/ 155] Loss 2.921247 mAP 0.490230 +2025-05-17 01:36:21,577 - Epoch: [182][ 150/ 155] Loss 2.927396 mAP 0.484276 +2025-05-17 01:36:24,557 - Epoch: [182][ 155/ 155] Loss 2.926874 mAP 0.484966 +2025-05-17 01:36:24,595 - ==> mAP: 0.48497 Loss: 2.927 + +2025-05-17 01:36:24,606 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:36:24,606 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 01:36:24,703 - + +2025-05-17 01:36:24,703 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 01:36:41,397 - Epoch: [183][ 50/ 518] Overall Loss 2.529145 Objective Loss 2.529145 LR 0.000016 Time 0.333787 +2025-05-17 01:36:57,256 - Epoch: [183][ 100/ 518] Overall Loss 2.526343 
Objective Loss 2.526343 LR 0.000016 Time 0.325477 +2025-05-17 01:37:13,133 - Epoch: [183][ 150/ 518] Overall Loss 2.549876 Objective Loss 2.549876 LR 0.000016 Time 0.322826 +2025-05-17 01:37:29,027 - Epoch: [183][ 200/ 518] Overall Loss 2.538578 Objective Loss 2.538578 LR 0.000016 Time 0.321585 +2025-05-17 01:37:44,937 - Epoch: [183][ 250/ 518] Overall Loss 2.540650 Objective Loss 2.540650 LR 0.000016 Time 0.320902 +2025-05-17 01:38:00,849 - Epoch: [183][ 300/ 518] Overall Loss 2.541545 Objective Loss 2.541545 LR 0.000016 Time 0.320455 +2025-05-17 01:38:16,779 - Epoch: [183][ 350/ 518] Overall Loss 2.543239 Objective Loss 2.543239 LR 0.000016 Time 0.320188 +2025-05-17 01:38:32,692 - Epoch: [183][ 400/ 518] Overall Loss 2.544753 Objective Loss 2.544753 LR 0.000016 Time 0.319942 +2025-05-17 01:38:48,602 - Epoch: [183][ 450/ 518] Overall Loss 2.540481 Objective Loss 2.540481 LR 0.000016 Time 0.319748 +2025-05-17 01:39:04,520 - Epoch: [183][ 500/ 518] Overall Loss 2.540436 Objective Loss 2.540436 LR 0.000016 Time 0.319607 +2025-05-17 01:39:10,134 - Epoch: [183][ 518/ 518] Overall Loss 2.538930 Objective Loss 2.538930 LR 0.000016 Time 0.319337 +2025-05-17 01:39:10,171 - --- validate (epoch=183)----------- +2025-05-17 01:39:10,172 - 4952 samples (32 per mini-batch) +2025-05-17 01:39:10,175 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 01:39:22,208 - Epoch: [183][ 50/ 155] Loss 2.888439 mAP 0.498552 +2025-05-17 01:39:34,436 - Epoch: [183][ 100/ 155] Loss 2.908900 mAP 0.490432 +2025-05-17 01:39:47,372 - Epoch: [183][ 150/ 155] Loss 2.907388 mAP 0.487726 +2025-05-17 01:39:50,254 - Epoch: [183][ 155/ 155] Loss 2.911670 mAP 0.488644 +2025-05-17 01:39:50,292 - ==> mAP: 0.48864 Loss: 2.912 + +2025-05-17 01:39:50,303 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 01:39:50,304 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 01:39:50,397 - 
+
+2025-05-17 01:39:50,398 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 01:40:07,178 - Epoch: [184][ 50/ 518] Overall Loss 2.560373 Objective Loss 2.560373 LR 0.000016 Time 0.335538
+2025-05-17 01:40:23,087 - Epoch: [184][ 100/ 518] Overall Loss 2.552613 Objective Loss 2.552613 LR 0.000016 Time 0.326853
+2025-05-17 01:40:38,990 - Epoch: [184][ 150/ 518] Overall Loss 2.562805 Objective Loss 2.562805 LR 0.000016 Time 0.323922
+2025-05-17 01:40:54,918 - Epoch: [184][ 200/ 518] Overall Loss 2.573080 Objective Loss 2.573080 LR 0.000016 Time 0.322575
+2025-05-17 01:41:10,864 - Epoch: [184][ 250/ 518] Overall Loss 2.563742 Objective Loss 2.563742 LR 0.000016 Time 0.321842
+2025-05-17 01:41:26,792 - Epoch: [184][ 300/ 518] Overall Loss 2.559068 Objective Loss 2.559068 LR 0.000016 Time 0.321292
+2025-05-17 01:41:42,735 - Epoch: [184][ 350/ 518] Overall Loss 2.555786 Objective Loss 2.555786 LR 0.000016 Time 0.320943
+2025-05-17 01:41:58,673 - Epoch: [184][ 400/ 518] Overall Loss 2.549113 Objective Loss 2.549113 LR 0.000016 Time 0.320669
+2025-05-17 01:42:14,618 - Epoch: [184][ 450/ 518] Overall Loss 2.544268 Objective Loss 2.544268 LR 0.000016 Time 0.320470
+2025-05-17 01:42:30,559 - Epoch: [184][ 500/ 518] Overall Loss 2.540897 Objective Loss 2.540897 LR 0.000016 Time 0.320304
+2025-05-17 01:42:36,176 - Epoch: [184][ 518/ 518] Overall Loss 2.542675 Objective Loss 2.542675 LR 0.000016 Time 0.320018
+2025-05-17 01:42:36,214 - --- validate (epoch=184)-----------
+2025-05-17 01:42:36,214 - 4952 samples (32 per mini-batch)
+2025-05-17 01:42:36,218 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 01:42:48,459 - Epoch: [184][ 50/ 155] Loss 2.944700 mAP 0.473737
+2025-05-17 01:43:00,784 - Epoch: [184][ 100/ 155] Loss 2.921630 mAP 0.484300
+2025-05-17 01:43:13,893 - Epoch: [184][ 150/ 155] Loss 2.913656 mAP 0.490578
+2025-05-17 01:43:16,880 - Epoch: [184][ 155/ 155] Loss 2.915788 mAP 0.488069
+2025-05-17 01:43:16,926 - ==> mAP: 0.48807 Loss: 2.916
+
+2025-05-17 01:43:16,937 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 01:43:16,937 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 01:43:17,034 - 
+
+2025-05-17 01:43:17,034 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 01:43:33,778 - Epoch: [185][ 50/ 518] Overall Loss 2.597494 Objective Loss 2.597494 LR 0.000016 Time 0.334799
+2025-05-17 01:43:49,658 - Epoch: [185][ 100/ 518] Overall Loss 2.548874 Objective Loss 2.548874 LR 0.000016 Time 0.326196
+2025-05-17 01:44:05,530 - Epoch: [185][ 150/ 518] Overall Loss 2.540159 Objective Loss 2.540159 LR 0.000016 Time 0.323271
+2025-05-17 01:44:21,454 - Epoch: [185][ 200/ 518] Overall Loss 2.546096 Objective Loss 2.546096 LR 0.000016 Time 0.322069
+2025-05-17 01:44:37,381 - Epoch: [185][ 250/ 518] Overall Loss 2.548287 Objective Loss 2.548287 LR 0.000016 Time 0.321360
+2025-05-17 01:44:53,321 - Epoch: [185][ 300/ 518] Overall Loss 2.554044 Objective Loss 2.554044 LR 0.000016 Time 0.320932
+2025-05-17 01:45:09,259 - Epoch: [185][ 350/ 518] Overall Loss 2.551802 Objective Loss 2.551802 LR 0.000016 Time 0.320620
+2025-05-17 01:45:25,205 - Epoch: [185][ 400/ 518] Overall Loss 2.548750 Objective Loss 2.548750 LR 0.000016 Time 0.320405
+2025-05-17 01:45:41,115 - Epoch: [185][ 450/ 518] Overall Loss 2.547745 Objective Loss 2.547745 LR 0.000016 Time 0.320159
+2025-05-17 01:45:57,027 - Epoch: [185][ 500/ 518] Overall Loss 2.545100 Objective Loss 2.545100 LR 0.000016 Time 0.319962
+2025-05-17 01:46:02,650 - Epoch: [185][ 518/ 518] Overall Loss 2.542605 Objective Loss 2.542605 LR 0.000016 Time 0.319699
+2025-05-17 01:46:02,688 - --- validate (epoch=185)-----------
+2025-05-17 01:46:02,689 - 4952 samples (32 per mini-batch)
+2025-05-17 01:46:02,692 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 01:46:14,958 - Epoch: [185][ 50/ 155] Loss 2.975362 mAP 0.469156
+2025-05-17 01:46:27,616 - Epoch: [185][ 100/ 155] Loss 2.934177 mAP 0.478127
+2025-05-17 01:46:40,732 - Epoch: [185][ 150/ 155] Loss 2.915199 mAP 0.487437
+2025-05-17 01:46:43,714 - Epoch: [185][ 155/ 155] Loss 2.914890 mAP 0.485875
+2025-05-17 01:46:43,755 - ==> mAP: 0.48587 Loss: 2.915
+
+2025-05-17 01:46:43,765 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 01:46:43,765 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 01:46:43,863 - 
+
+2025-05-17 01:46:43,863 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 01:47:00,486 - Epoch: [186][ 50/ 518] Overall Loss 2.521339 Objective Loss 2.521339 LR 0.000016 Time 0.332383
+2025-05-17 01:47:16,358 - Epoch: [186][ 100/ 518] Overall Loss 2.498841 Objective Loss 2.498841 LR 0.000016 Time 0.324911
+2025-05-17 01:47:32,245 - Epoch: [186][ 150/ 518] Overall Loss 2.507063 Objective Loss 2.507063 LR 0.000016 Time 0.322507
+2025-05-17 01:47:48,172 - Epoch: [186][ 200/ 518] Overall Loss 2.522303 Objective Loss 2.522303 LR 0.000016 Time 0.321513
+2025-05-17 01:48:04,078 - Epoch: [186][ 250/ 518] Overall Loss 2.529498 Objective Loss 2.529498 LR 0.000016 Time 0.320831
+2025-05-17 01:48:19,996 - Epoch: [186][ 300/ 518] Overall Loss 2.536606 Objective Loss 2.536606 LR 0.000016 Time 0.320416
+2025-05-17 01:48:35,917 - Epoch: [186][ 350/ 518] Overall Loss 2.542272 Objective Loss 2.542272 LR 0.000016 Time 0.320129
+2025-05-17 01:48:51,840 - Epoch: [186][ 400/ 518] Overall Loss 2.538974 Objective Loss 2.538974 LR 0.000016 Time 0.319917
+2025-05-17 01:49:07,755 - Epoch: [186][ 450/ 518] Overall Loss 2.537490 Objective Loss 2.537490 LR 0.000016 Time 0.319737
+2025-05-17 01:49:23,685 - Epoch: [186][ 500/ 518] Overall Loss 2.539051 Objective Loss 2.539051 LR 0.000016 Time 0.319620
+2025-05-17 01:49:29,304 - Epoch: [186][ 518/ 518] Overall Loss 2.543153 Objective Loss 2.543153 LR 0.000016 Time 0.319362
+2025-05-17 01:49:29,340 - --- validate (epoch=186)-----------
+2025-05-17 01:49:29,340 - 4952 samples (32 per mini-batch)
+2025-05-17 01:49:29,344 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 01:49:41,755 - Epoch: [186][ 50/ 155] Loss 2.925536 mAP 0.491789
+2025-05-17 01:49:54,059 - Epoch: [186][ 100/ 155] Loss 2.929777 mAP 0.485019
+2025-05-17 01:50:07,015 - Epoch: [186][ 150/ 155] Loss 2.919531 mAP 0.486127
+2025-05-17 01:50:09,913 - Epoch: [186][ 155/ 155] Loss 2.916696 mAP 0.485832
+2025-05-17 01:50:09,949 - ==> mAP: 0.48583 Loss: 2.917
+
+2025-05-17 01:50:09,959 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 01:50:09,959 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 01:50:10,054 - 
+
+2025-05-17 01:50:10,054 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 01:50:26,853 - Epoch: [187][ 50/ 518] Overall Loss 2.521111 Objective Loss 2.521111 LR 0.000016 Time 0.335911
+2025-05-17 01:50:42,751 - Epoch: [187][ 100/ 518] Overall Loss 2.541498 Objective Loss 2.541498 LR 0.000016 Time 0.326926
+2025-05-17 01:50:58,665 - Epoch: [187][ 150/ 518] Overall Loss 2.538506 Objective Loss 2.538506 LR 0.000016 Time 0.324044
+2025-05-17 01:51:14,608 - Epoch: [187][ 200/ 518] Overall Loss 2.545871 Objective Loss 2.545871 LR 0.000016 Time 0.322746
+2025-05-17 01:51:30,521 - Epoch: [187][ 250/ 518] Overall Loss 2.546764 Objective Loss 2.546764 LR 0.000016 Time 0.321841
+2025-05-17 01:51:46,443 - Epoch: [187][ 300/ 518] Overall Loss 2.552629 Objective Loss 2.552629 LR 0.000016 Time 0.321272
+2025-05-17 01:52:02,363 - Epoch: [187][ 350/ 518] Overall Loss 2.548373 Objective Loss 2.548373 LR 0.000016 Time 0.320859
+2025-05-17 01:52:18,278 - Epoch: [187][ 400/ 518] Overall Loss 2.546085 Objective Loss 2.546085 LR 0.000016 Time 0.320538
+2025-05-17 01:52:34,228 - Epoch: [187][ 450/ 518] Overall Loss 2.544613 Objective Loss 2.544613 LR 0.000016 Time 0.320364
+2025-05-17 01:52:50,139 - Epoch: [187][ 500/ 518] Overall Loss 2.540467 Objective Loss 2.540467 LR 0.000016 Time 0.320148
+2025-05-17 01:52:55,775 - Epoch: [187][ 518/ 518] Overall Loss 2.536629 Objective Loss 2.536629 LR 0.000016 Time 0.319903
+2025-05-17 01:52:55,814 - --- validate (epoch=187)-----------
+2025-05-17 01:52:55,814 - 4952 samples (32 per mini-batch)
+2025-05-17 01:52:55,818 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 01:53:08,027 - Epoch: [187][ 50/ 155] Loss 2.938998 mAP 0.462915
+2025-05-17 01:53:20,243 - Epoch: [187][ 100/ 155] Loss 2.900817 mAP 0.480538
+2025-05-17 01:53:33,264 - Epoch: [187][ 150/ 155] Loss 2.912953 mAP 0.483835
+2025-05-17 01:53:36,177 - Epoch: [187][ 155/ 155] Loss 2.910278 mAP 0.485187
+2025-05-17 01:53:36,212 - ==> mAP: 0.48519 Loss: 2.910
+
+2025-05-17 01:53:36,223 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 01:53:36,223 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 01:53:36,318 - 
+
+2025-05-17 01:53:36,318 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 01:53:53,076 - Epoch: [188][ 50/ 518] Overall Loss 2.522829 Objective Loss 2.522829 LR 0.000016 Time 0.335102
+2025-05-17 01:54:08,973 - Epoch: [188][ 100/ 518] Overall Loss 2.556336 Objective Loss 2.556336 LR 0.000016 Time 0.326515
+2025-05-17 01:54:24,896 - Epoch: [188][ 150/ 518] Overall Loss 2.568961 Objective Loss 2.568961 LR 0.000016 Time 0.323827
+2025-05-17 01:54:40,804 - Epoch: [188][ 200/ 518] Overall Loss 2.565239 Objective Loss 2.565239 LR 0.000016 Time 0.322402
+2025-05-17 01:54:56,733 - Epoch: [188][ 250/ 518] Overall Loss 2.560800 Objective Loss 2.560800 LR 0.000016 Time 0.321635
+2025-05-17 01:55:12,658 - Epoch: [188][ 300/ 518] Overall Loss 2.560472 Objective Loss 2.560472 LR 0.000016 Time 0.321109
+2025-05-17 01:55:28,574 - Epoch: [188][ 350/ 518] Overall Loss 2.547108 Objective Loss 2.547108 LR 0.000016 Time 0.320707
+2025-05-17 01:55:44,503 - Epoch: [188][ 400/ 518] Overall Loss 2.547697 Objective Loss 2.547697 LR 0.000016 Time 0.320438
+2025-05-17 01:56:00,434 - Epoch: [188][ 450/ 518] Overall Loss 2.544334 Objective Loss 2.544334 LR 0.000016 Time 0.320235
+2025-05-17 01:56:16,390 - Epoch: [188][ 500/ 518] Overall Loss 2.538417 Objective Loss 2.538417 LR 0.000016 Time 0.320122
+2025-05-17 01:56:22,031 - Epoch: [188][ 518/ 518] Overall Loss 2.538883 Objective Loss 2.538883 LR 0.000016 Time 0.319887
+2025-05-17 01:56:22,066 - --- validate (epoch=188)-----------
+2025-05-17 01:56:22,067 - 4952 samples (32 per mini-batch)
+2025-05-17 01:56:22,070 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 01:56:33,989 - Epoch: [188][ 50/ 155] Loss 2.932420 mAP 0.493385
+2025-05-17 01:56:46,357 - Epoch: [188][ 100/ 155] Loss 2.934972 mAP 0.479827
+2025-05-17 01:56:59,076 - Epoch: [188][ 150/ 155] Loss 2.914485 mAP 0.480860
+2025-05-17 01:57:01,905 - Epoch: [188][ 155/ 155] Loss 2.914540 mAP 0.481179
+2025-05-17 01:57:01,943 - ==> mAP: 0.48118 Loss: 2.915
+
+2025-05-17 01:57:01,953 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 01:57:01,953 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 01:57:02,047 - 
+
+2025-05-17 01:57:02,048 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 01:57:19,063 - Epoch: [189][ 50/ 518] Overall Loss 2.599661 Objective Loss 2.599661 LR 0.000016 Time 0.340231
+2025-05-17 01:57:34,934 - Epoch: [189][ 100/ 518] Overall Loss 2.531532 Objective Loss 2.531532 LR 0.000016 Time 0.328813
+2025-05-17 01:57:50,818 - Epoch: [189][ 150/ 518] Overall Loss 2.541600 Objective Loss 2.541600 LR 0.000016 Time 0.325100
+2025-05-17 01:58:06,737 - Epoch: [189][ 200/ 518] Overall Loss 2.547949 Objective Loss 2.547949 LR 0.000016 Time 0.323412
+2025-05-17 01:58:22,654 - Epoch: [189][ 250/ 518] Overall Loss 2.540997 Objective Loss 2.540997 LR 0.000016 Time 0.322394
+2025-05-17 01:58:38,576 - Epoch: [189][ 300/ 518] Overall Loss 2.541045 Objective Loss 2.541045 LR 0.000016 Time 0.321731
+2025-05-17 01:58:54,514 - Epoch: [189][ 350/ 518] Overall Loss 2.539943 Objective Loss 2.539943 LR 0.000016 Time 0.321304
+2025-05-17 01:59:10,461 - Epoch: [189][ 400/ 518] Overall Loss 2.539133 Objective Loss 2.539133 LR 0.000016 Time 0.321006
+2025-05-17 01:59:26,400 - Epoch: [189][ 450/ 518] Overall Loss 2.544910 Objective Loss 2.544910 LR 0.000016 Time 0.320757
+2025-05-17 01:59:42,330 - Epoch: [189][ 500/ 518] Overall Loss 2.541638 Objective Loss 2.541638 LR 0.000016 Time 0.320539
+2025-05-17 01:59:47,962 - Epoch: [189][ 518/ 518] Overall Loss 2.541107 Objective Loss 2.541107 LR 0.000016 Time 0.320273
+2025-05-17 01:59:48,001 - --- validate (epoch=189)-----------
+2025-05-17 01:59:48,002 - 4952 samples (32 per mini-batch)
+2025-05-17 01:59:48,005 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:00:00,448 - Epoch: [189][ 50/ 155] Loss 2.866677 mAP 0.495143
+2025-05-17 02:00:13,003 - Epoch: [189][ 100/ 155] Loss 2.878500 mAP 0.485313
+2025-05-17 02:00:25,852 - Epoch: [189][ 150/ 155] Loss 2.910922 mAP 0.481741
+2025-05-17 02:00:28,717 - Epoch: [189][ 155/ 155] Loss 2.910026 mAP 0.483646
+2025-05-17 02:00:28,753 - ==> mAP: 0.48365 Loss: 2.910
+
+2025-05-17 02:00:28,764 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:00:28,764 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:00:28,858 - 
+
+2025-05-17 02:00:28,859 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:00:45,658 - Epoch: [190][ 50/ 518] Overall Loss 2.553407 Objective Loss 2.553407 LR 0.000016 Time 0.335928
+2025-05-17 02:01:01,533 - Epoch: [190][ 100/ 518] Overall Loss 2.533243 Objective Loss 2.533243 LR 0.000016 Time 0.326708
+2025-05-17 02:01:17,431 - Epoch: [190][ 150/ 518] Overall Loss 2.547028 Objective Loss 2.547028 LR 0.000016 Time 0.323784
+2025-05-17 02:01:33,334 - Epoch: [190][ 200/ 518] Overall Loss 2.541083 Objective Loss 2.541083 LR 0.000016 Time 0.322349
+2025-05-17 02:01:49,252 - Epoch: [190][ 250/ 518] Overall Loss 2.535694 Objective Loss 2.535694 LR 0.000016 Time 0.321545
+2025-05-17 02:02:05,170 - Epoch: [190][ 300/ 518] Overall Loss 2.529810 Objective Loss 2.529810 LR 0.000016 Time 0.321011
+2025-05-17 02:02:21,092 - Epoch: [190][ 350/ 518] Overall Loss 2.542743 Objective Loss 2.542743 LR 0.000016 Time 0.320640
+2025-05-17 02:02:37,010 - Epoch: [190][ 400/ 518] Overall Loss 2.535431 Objective Loss 2.535431 LR 0.000016 Time 0.320352
+2025-05-17 02:02:52,934 - Epoch: [190][ 450/ 518] Overall Loss 2.543830 Objective Loss 2.543830 LR 0.000016 Time 0.320143
+2025-05-17 02:03:08,860 - Epoch: [190][ 500/ 518] Overall Loss 2.540076 Objective Loss 2.540076 LR 0.000016 Time 0.319977
+2025-05-17 02:03:14,494 - Epoch: [190][ 518/ 518] Overall Loss 2.539981 Objective Loss 2.539981 LR 0.000016 Time 0.319735
+2025-05-17 02:03:14,529 - --- validate (epoch=190)-----------
+2025-05-17 02:03:14,530 - 4952 samples (32 per mini-batch)
+2025-05-17 02:03:14,533 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:03:26,533 - Epoch: [190][ 50/ 155] Loss 2.942019 mAP 0.502544
+2025-05-17 02:03:38,895 - Epoch: [190][ 100/ 155] Loss 2.911935 mAP 0.502776
+2025-05-17 02:03:51,566 - Epoch: [190][ 150/ 155] Loss 2.913801 mAP 0.486511
+2025-05-17 02:03:54,436 - Epoch: [190][ 155/ 155] Loss 2.915799 mAP 0.484907
+2025-05-17 02:03:54,474 - ==> mAP: 0.48491 Loss: 2.916
+
+2025-05-17 02:03:54,484 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:03:54,484 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:03:54,578 - 
+
+2025-05-17 02:03:54,579 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:04:11,380 - Epoch: [191][ 50/ 518] Overall Loss 2.522407 Objective Loss 2.522407 LR 0.000016 Time 0.335952
+2025-05-17 02:04:27,267 - Epoch: [191][ 100/ 518] Overall Loss 2.524369 Objective Loss 2.524369 LR 0.000016 Time 0.326840
+2025-05-17 02:04:43,158 - Epoch: [191][ 150/ 518] Overall Loss 2.507033 Objective Loss 2.507033 LR 0.000016 Time 0.323823
+2025-05-17 02:04:59,071 - Epoch: [191][ 200/ 518] Overall Loss 2.518159 Objective Loss 2.518159 LR 0.000016 Time 0.322431
+2025-05-17 02:05:15,028 - Epoch: [191][ 250/ 518] Overall Loss 2.530972 Objective Loss 2.530972 LR 0.000016 Time 0.321770
+2025-05-17 02:05:30,963 - Epoch: [191][ 300/ 518] Overall Loss 2.534232 Objective Loss 2.534232 LR 0.000016 Time 0.321254
+2025-05-17 02:05:46,859 - Epoch: [191][ 350/ 518] Overall Loss 2.529433 Objective Loss 2.529433 LR 0.000016 Time 0.320776
+2025-05-17 02:06:02,766 - Epoch: [191][ 400/ 518] Overall Loss 2.533150 Objective Loss 2.533150 LR 0.000016 Time 0.320444
+2025-05-17 02:06:18,656 - Epoch: [191][ 450/ 518] Overall Loss 2.531886 Objective Loss 2.531886 LR 0.000016 Time 0.320148
+2025-05-17 02:06:34,560 - Epoch: [191][ 500/ 518] Overall Loss 2.528109 Objective Loss 2.528109 LR 0.000016 Time 0.319938
+2025-05-17 02:06:40,181 - Epoch: [191][ 518/ 518] Overall Loss 2.525596 Objective Loss 2.525596 LR 0.000016 Time 0.319670
+2025-05-17 02:06:40,217 - --- validate (epoch=191)-----------
+2025-05-17 02:06:40,218 - 4952 samples (32 per mini-batch)
+2025-05-17 02:06:40,222 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:06:52,494 - Epoch: [191][ 50/ 155] Loss 2.949503 mAP 0.492769
+2025-05-17 02:07:04,759 - Epoch: [191][ 100/ 155] Loss 2.917002 mAP 0.492124
+2025-05-17 02:07:17,786 - Epoch: [191][ 150/ 155] Loss 2.921821 mAP 0.490122
+2025-05-17 02:07:20,703 - Epoch: [191][ 155/ 155] Loss 2.922993 mAP 0.490358
+2025-05-17 02:07:20,741 - ==> mAP: 0.49036 Loss: 2.923
+
+2025-05-17 02:07:20,751 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:07:20,751 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:07:20,846 - 
+
+2025-05-17 02:07:20,847 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:07:37,635 - Epoch: [192][ 50/ 518] Overall Loss 2.579339 Objective Loss 2.579339 LR 0.000016 Time 0.335700
+2025-05-17 02:07:53,532 - Epoch: [192][ 100/ 518] Overall Loss 2.567933 Objective Loss 2.567933 LR 0.000016 Time 0.326815
+2025-05-17 02:08:09,403 - Epoch: [192][ 150/ 518] Overall Loss 2.571865 Objective Loss 2.571865 LR 0.000016 Time 0.323676
+2025-05-17 02:08:25,321 - Epoch: [192][ 200/ 518] Overall Loss 2.571441 Objective Loss 2.571441 LR 0.000016 Time 0.322344
+2025-05-17 02:08:41,226 - Epoch: [192][ 250/ 518] Overall Loss 2.563642 Objective Loss 2.563642 LR 0.000016 Time 0.321490
+2025-05-17 02:08:57,128 - Epoch: [192][ 300/ 518] Overall Loss 2.551255 Objective Loss 2.551255 LR 0.000016 Time 0.320911
+2025-05-17 02:09:13,037 - Epoch: [192][ 350/ 518] Overall Loss 2.544506 Objective Loss 2.544506 LR 0.000016 Time 0.320520
+2025-05-17 02:09:28,948 - Epoch: [192][ 400/ 518] Overall Loss 2.537971 Objective Loss 2.537971 LR 0.000016 Time 0.320228
+2025-05-17 02:09:44,872 - Epoch: [192][ 450/ 518] Overall Loss 2.538879 Objective Loss 2.538879 LR 0.000016 Time 0.320033
+2025-05-17 02:10:00,825 - Epoch: [192][ 500/ 518] Overall Loss 2.541428 Objective Loss 2.541428 LR 0.000016 Time 0.319933
+2025-05-17 02:10:06,460 - Epoch: [192][ 518/ 518] Overall Loss 2.541585 Objective Loss 2.541585 LR 0.000016 Time 0.319694
+2025-05-17 02:10:06,495 - --- validate (epoch=192)-----------
+2025-05-17 02:10:06,496 - 4952 samples (32 per mini-batch)
+2025-05-17 02:10:06,499 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:10:18,614 - Epoch: [192][ 50/ 155] Loss 2.892947 mAP 0.495509
+2025-05-17 02:10:31,010 - Epoch: [192][ 100/ 155] Loss 2.928443 mAP 0.484085
+2025-05-17 02:10:43,994 - Epoch: [192][ 150/ 155] Loss 2.909866 mAP 0.486052
+2025-05-17 02:10:46,909 - Epoch: [192][ 155/ 155] Loss 2.915565 mAP 0.484635
+2025-05-17 02:10:46,947 - ==> mAP: 0.48463 Loss: 2.916
+
+2025-05-17 02:10:46,957 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:10:46,958 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:10:47,053 - 
+
+2025-05-17 02:10:47,053 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:11:03,874 - Epoch: [193][ 50/ 518] Overall Loss 2.503362 Objective Loss 2.503362 LR 0.000016 Time 0.336353
+2025-05-17 02:11:19,750 - Epoch: [193][ 100/ 518] Overall Loss 2.521873 Objective Loss 2.521873 LR 0.000016 Time 0.326922
+2025-05-17 02:11:35,636 - Epoch: [193][ 150/ 518] Overall Loss 2.538104 Objective Loss 2.538104 LR 0.000016 Time 0.323847
+2025-05-17 02:11:51,553 - Epoch: [193][ 200/ 518] Overall Loss 2.533979 Objective Loss 2.533979 LR 0.000016 Time 0.322465
+2025-05-17 02:12:07,469 - Epoch: [193][ 250/ 518] Overall Loss 2.545003 Objective Loss 2.545003 LR 0.000016 Time 0.321631
+2025-05-17 02:12:23,385 - Epoch: [193][ 300/ 518] Overall Loss 2.553092 Objective Loss 2.553092 LR 0.000016 Time 0.321078
+2025-05-17 02:12:39,312 - Epoch: [193][ 350/ 518] Overall Loss 2.554411 Objective Loss 2.554411 LR 0.000016 Time 0.320711
+2025-05-17 02:12:55,256 - Epoch: [193][ 400/ 518] Overall Loss 2.552533 Objective Loss 2.552533 LR 0.000016 Time 0.320481
+2025-05-17 02:13:11,205 - Epoch: [193][ 450/ 518] Overall Loss 2.550728 Objective Loss 2.550728 LR 0.000016 Time 0.320312
+2025-05-17 02:13:27,152 - Epoch: [193][ 500/ 518] Overall Loss 2.553720 Objective Loss 2.553720 LR 0.000016 Time 0.320173
+2025-05-17 02:13:32,781 - Epoch: [193][ 518/ 518] Overall Loss 2.548694 Objective Loss 2.548694 LR 0.000016 Time 0.319914
+2025-05-17 02:13:32,819 - --- validate (epoch=193)-----------
+2025-05-17 02:13:32,819 - 4952 samples (32 per mini-batch)
+2025-05-17 02:13:32,823 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:13:44,892 - Epoch: [193][ 50/ 155] Loss 2.857373 mAP 0.507083
+2025-05-17 02:13:57,307 - Epoch: [193][ 100/ 155] Loss 2.917049 mAP 0.485031
+2025-05-17 02:14:10,252 - Epoch: [193][ 150/ 155] Loss 2.910391 mAP 0.485623
+2025-05-17 02:14:13,100 - Epoch: [193][ 155/ 155] Loss 2.909397 mAP 0.485873
+2025-05-17 02:14:13,135 - ==> mAP: 0.48587 Loss: 2.909
+
+2025-05-17 02:14:13,145 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:14:13,145 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:14:13,239 - 
+
+2025-05-17 02:14:13,239 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:14:30,063 - Epoch: [194][ 50/ 518] Overall Loss 2.508894 Objective Loss 2.508894 LR 0.000016 Time 0.336404
+2025-05-17 02:14:45,967 - Epoch: [194][ 100/ 518] Overall Loss 2.524568 Objective Loss 2.524568 LR 0.000016 Time 0.327238
+2025-05-17 02:15:01,877 - Epoch: [194][ 150/ 518] Overall Loss 2.537321 Objective Loss 2.537321 LR 0.000016 Time 0.324219
+2025-05-17 02:15:17,795 - Epoch: [194][ 200/ 518] Overall Loss 2.533554 Objective Loss 2.533554 LR 0.000016 Time 0.322751
+2025-05-17 02:15:33,719 - Epoch: [194][ 250/ 518] Overall Loss 2.535948 Objective Loss 2.535948 LR 0.000016 Time 0.321892
+2025-05-17 02:15:49,667 - Epoch: [194][ 300/ 518] Overall Loss 2.537978 Objective Loss 2.537978 LR 0.000016 Time 0.321399
+2025-05-17 02:16:05,609 - Epoch: [194][ 350/ 518] Overall Loss 2.546259 Objective Loss 2.546259 LR 0.000016 Time 0.321031
+2025-05-17 02:16:21,533 - Epoch: [194][ 400/ 518] Overall Loss 2.538156 Objective Loss 2.538156 LR 0.000016 Time 0.320710
+2025-05-17 02:16:37,470 - Epoch: [194][ 450/ 518] Overall Loss 2.542417 Objective Loss 2.542417 LR 0.000016 Time 0.320489
+2025-05-17 02:16:53,407 - Epoch: [194][ 500/ 518] Overall Loss 2.537535 Objective Loss 2.537535 LR 0.000016 Time 0.320313
+2025-05-17 02:16:59,033 - Epoch: [194][ 518/ 518] Overall Loss 2.541409 Objective Loss 2.541409 LR 0.000016 Time 0.320042
+2025-05-17 02:16:59,072 - --- validate (epoch=194)-----------
+2025-05-17 02:16:59,073 - 4952 samples (32 per mini-batch)
+2025-05-17 02:16:59,076 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:17:11,299 - Epoch: [194][ 50/ 155] Loss 2.949159 mAP 0.485532
+2025-05-17 02:17:23,564 - Epoch: [194][ 100/ 155] Loss 2.919927 mAP 0.491877
+2025-05-17 02:17:36,471 - Epoch: [194][ 150/ 155] Loss 2.914670 mAP 0.485578
+2025-05-17 02:17:39,388 - Epoch: [194][ 155/ 155] Loss 2.912909 mAP 0.486860
+2025-05-17 02:17:39,428 - ==> mAP: 0.48686 Loss: 2.913
+
+2025-05-17 02:17:39,438 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:17:39,438 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:17:39,533 - 
+
+2025-05-17 02:17:39,534 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:17:56,419 - Epoch: [195][ 50/ 518] Overall Loss 2.615649 Objective Loss 2.615649 LR 0.000016 Time 0.337646
+2025-05-17 02:18:12,277 - Epoch: [195][ 100/ 518] Overall Loss 2.534813 Objective Loss 2.534813 LR 0.000016 Time 0.327388
+2025-05-17 02:18:28,150 - Epoch: [195][ 150/ 518] Overall Loss 2.534234 Objective Loss 2.534234 LR 0.000016 Time 0.324073
+2025-05-17 02:18:44,036 - Epoch: [195][ 200/ 518] Overall Loss 2.551624 Objective Loss 2.551624 LR 0.000016 Time 0.322481
+2025-05-17 02:18:59,955 - Epoch: [195][ 250/ 518] Overall Loss 2.542498 Objective Loss 2.542498 LR 0.000016 Time 0.321658
+2025-05-17 02:19:15,896 - Epoch: [195][ 300/ 518] Overall Loss 2.543380 Objective Loss 2.543380 LR 0.000016 Time 0.321181
+2025-05-17 02:19:31,849 - Epoch: [195][ 350/ 518] Overall Loss 2.543272 Objective Loss 2.543272 LR 0.000016 Time 0.320876
+2025-05-17 02:19:47,783 - Epoch: [195][ 400/ 518] Overall Loss 2.539078 Objective Loss 2.539078 LR 0.000016 Time 0.320601
+2025-05-17 02:20:03,699 - Epoch: [195][ 450/ 518] Overall Loss 2.536320 Objective Loss 2.536320 LR 0.000016 Time 0.320344
+2025-05-17 02:20:19,612 - Epoch: [195][ 500/ 518] Overall Loss 2.545254 Objective Loss 2.545254 LR 0.000016 Time 0.320134
+2025-05-17 02:20:25,245 - Epoch: [195][ 518/ 518] Overall Loss 2.547051 Objective Loss 2.547051 LR 0.000016 Time 0.319884
+2025-05-17 02:20:25,282 - --- validate (epoch=195)-----------
+2025-05-17 02:20:25,283 - 4952 samples (32 per mini-batch)
+2025-05-17 02:20:25,287 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:20:37,395 - Epoch: [195][ 50/ 155] Loss 2.905358 mAP 0.489643
+2025-05-17 02:20:49,819 - Epoch: [195][ 100/ 155] Loss 2.908492 mAP 0.486605
+2025-05-17 02:21:02,693 - Epoch: [195][ 150/ 155] Loss 2.912689 mAP 0.488198
+2025-05-17 02:21:05,612 - Epoch: [195][ 155/ 155] Loss 2.910720 mAP 0.488497
+2025-05-17 02:21:05,653 - ==> mAP: 0.48850 Loss: 2.911
+
+2025-05-17 02:21:05,663 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:21:05,663 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:21:05,754 - 
+
+2025-05-17 02:21:05,754 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:21:22,502 - Epoch: [196][ 50/ 518] Overall Loss 2.567340 Objective Loss 2.567340 LR 0.000016 Time 0.334882
+2025-05-17 02:21:38,408 - Epoch: [196][ 100/ 518] Overall Loss 2.559297 Objective Loss 2.559297 LR 0.000016 Time 0.326497
+2025-05-17 02:21:54,286 - Epoch: [196][ 150/ 518] Overall Loss 2.546824 Objective Loss 2.546824 LR 0.000016 Time 0.323510
+2025-05-17 02:22:10,195 - Epoch: [196][ 200/ 518] Overall Loss 2.548402 Objective Loss 2.548402 LR 0.000016 Time 0.322175
+2025-05-17 02:22:26,135 - Epoch: [196][ 250/ 518] Overall Loss 2.539695 Objective Loss 2.539695 LR 0.000016 Time 0.321497
+2025-05-17 02:22:42,075 - Epoch: [196][ 300/ 518] Overall Loss 2.533967 Objective Loss 2.533967 LR 0.000016 Time 0.321043
+2025-05-17 02:22:58,026 - Epoch: [196][ 350/ 518] Overall Loss 2.536311 Objective Loss 2.536311 LR 0.000016 Time 0.320754
+2025-05-17 02:23:13,976 - Epoch: [196][ 400/ 518] Overall Loss 2.534018 Objective Loss 2.534018 LR 0.000016 Time 0.320533
+2025-05-17 02:23:29,910 - Epoch: [196][ 450/ 518] Overall Loss 2.538540 Objective Loss 2.538540 LR 0.000016 Time 0.320325
+2025-05-17 02:23:45,841 - Epoch: [196][ 500/ 518] Overall Loss 2.537135 Objective Loss 2.537135 LR 0.000016 Time 0.320152
+2025-05-17 02:23:51,451 - Epoch: [196][ 518/ 518] Overall Loss 2.539460 Objective Loss 2.539460 LR 0.000016 Time 0.319856
+2025-05-17 02:23:51,489 - --- validate (epoch=196)-----------
+2025-05-17 02:23:51,490 - 4952 samples (32 per mini-batch)
+2025-05-17 02:23:51,493 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:24:03,783 - Epoch: [196][ 50/ 155] Loss 2.892004 mAP 0.502074
+2025-05-17 02:24:15,994 - Epoch: [196][ 100/ 155] Loss 2.878080 mAP 0.495227
+2025-05-17 02:24:28,861 - Epoch: [196][ 150/ 155] Loss 2.898704 mAP 0.487185
+2025-05-17 02:24:31,726 - Epoch: [196][ 155/ 155] Loss 2.905912 mAP 0.485153
+2025-05-17 02:24:31,771 - ==> mAP: 0.48515 Loss: 2.906
+
+2025-05-17 02:24:31,781 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:24:31,781 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:24:31,861 - 
+
+2025-05-17 02:24:31,861 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:24:48,526 - Epoch: [197][ 50/ 518] Overall Loss 2.491158 Objective Loss 2.491158 LR 0.000016 Time 0.333224
+2025-05-17 02:25:04,388 - Epoch: [197][ 100/ 518] Overall Loss 2.515220 Objective Loss 2.515220 LR 0.000016 Time 0.325225
+2025-05-17 02:25:20,278 - Epoch: [197][ 150/ 518] Overall Loss 2.533543 Objective Loss 2.533543 LR 0.000016 Time 0.322740
+2025-05-17 02:25:36,170 - Epoch: [197][ 200/ 518] Overall Loss 2.530143 Objective Loss 2.530143 LR 0.000016 Time 0.321512
+2025-05-17 02:25:52,099 - Epoch: [197][ 250/ 518] Overall Loss 2.524307 Objective Loss 2.524307 LR 0.000016 Time 0.320923
+2025-05-17 02:26:08,018 - Epoch: [197][ 300/ 518] Overall Loss 2.514803 Objective Loss 2.514803 LR 0.000016 Time 0.320493
+2025-05-17 02:26:23,928 - Epoch: [197][ 350/ 518] Overall Loss 2.523958 Objective Loss 2.523958 LR 0.000016 Time 0.320165
+2025-05-17 02:26:39,846 - Epoch: [197][ 400/ 518] Overall Loss 2.526104 Objective Loss 2.526104 LR 0.000016 Time 0.319936
+2025-05-17 02:26:55,771 - Epoch: [197][ 450/ 518] Overall Loss 2.525654 Objective Loss 2.525654 LR 0.000016 Time 0.319774
+2025-05-17 02:27:11,690 - Epoch: [197][ 500/ 518] Overall Loss 2.529048 Objective Loss 2.529048 LR 0.000016 Time 0.319632
+2025-05-17 02:27:17,304 - Epoch: [197][ 518/ 518] Overall Loss 2.529127 Objective Loss 2.529127 LR 0.000016 Time 0.319363
+2025-05-17 02:27:17,344 - --- validate (epoch=197)-----------
+2025-05-17 02:27:17,344 - 4952 samples (32 per mini-batch)
+2025-05-17 02:27:17,348 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:27:29,407 - Epoch: [197][ 50/ 155] Loss 2.929249 mAP 0.495009
+2025-05-17 02:27:41,835 - Epoch: [197][ 100/ 155] Loss 2.940518 mAP 0.483077
+2025-05-17 02:27:54,752 - Epoch: [197][ 150/ 155] Loss 2.904533 mAP 0.489976
+2025-05-17 02:27:57,720 - Epoch: [197][ 155/ 155] Loss 2.906260 mAP 0.488665
+2025-05-17 02:27:57,755 - ==> mAP: 0.48867 Loss: 2.906
+
+2025-05-17 02:27:57,765 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:27:57,765 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:27:57,863 - 
+
+2025-05-17 02:27:57,863 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:28:14,536 - Epoch: [198][ 50/ 518] Overall Loss 2.534345 Objective Loss 2.534345 LR 0.000016 Time 0.333383
+2025-05-17 02:28:30,400 - Epoch: [198][ 100/ 518] Overall Loss 2.535972 Objective Loss 2.535972 LR 0.000016 Time 0.325325
+2025-05-17 02:28:46,282 - Epoch: [198][ 150/ 518] Overall Loss 2.566643 Objective Loss 2.566643 LR 0.000016 Time 0.322755
+2025-05-17 02:29:02,218 - Epoch: [198][ 200/ 518] Overall Loss 2.556231 Objective Loss 2.556231 LR 0.000016 Time 0.321746
+2025-05-17 02:29:18,147 - Epoch: [198][ 250/ 518] Overall Loss 2.545708 Objective Loss 2.545708 LR 0.000016 Time 0.321107
+2025-05-17 02:29:34,065 - Epoch: [198][ 300/ 518] Overall Loss 2.548258 Objective Loss 2.548258 LR 0.000016 Time 0.320647
+2025-05-17 02:29:49,982 - Epoch: [198][ 350/ 518] Overall Loss 2.555358 Objective Loss 2.555358 LR 0.000016 Time 0.320315
+2025-05-17 02:30:05,904 - Epoch: [198][ 400/ 518] Overall Loss 2.553259 Objective Loss 2.553259 LR 0.000016 Time 0.320078
+2025-05-17 02:30:21,836 - Epoch: [198][ 450/ 518] Overall Loss 2.546858 Objective Loss 2.546858 LR 0.000016 Time 0.319917
+2025-05-17 02:30:37,769 - Epoch: [198][ 500/ 518] Overall Loss 2.543447 Objective Loss 2.543447 LR 0.000016 Time 0.319790
+2025-05-17 02:30:43,404 - Epoch: [198][ 518/ 518] Overall Loss 2.546398 Objective Loss 2.546398 LR 0.000016 Time 0.319554
+2025-05-17 02:30:43,443 - --- validate (epoch=198)-----------
+2025-05-17 02:30:43,444 - 4952 samples (32 per mini-batch)
+2025-05-17 02:30:43,447 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:30:55,909 - Epoch: [198][ 50/ 155] Loss 2.898978 mAP 0.484105
+2025-05-17 02:31:08,422 - Epoch: [198][ 100/ 155] Loss 2.911433 mAP 0.484176
+2025-05-17 02:31:21,630 - Epoch: [198][ 150/ 155] Loss 2.909287 mAP 0.488265
+2025-05-17 02:31:24,604 - Epoch: [198][ 155/ 155] Loss 2.906334 mAP 0.488682
+2025-05-17 02:31:24,645 - ==> mAP: 0.48868 Loss: 2.906
+
+2025-05-17 02:31:24,655 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:31:24,655 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:31:24,749 - 
+
+2025-05-17 02:31:24,749 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:31:41,645 - Epoch: [199][ 50/ 518] Overall Loss 2.535080 Objective Loss 2.535080 LR 0.000016 Time 0.337855
+2025-05-17 02:31:57,500 - Epoch: [199][ 100/ 518] Overall Loss 2.495974 Objective Loss 2.495974 LR 0.000016 Time 0.327459
+2025-05-17 02:32:13,378 - Epoch: [199][ 150/ 518] Overall Loss 2.495061 Objective Loss 2.495061 LR 0.000016 Time 0.324158
+2025-05-17 02:32:29,291 - Epoch: [199][ 200/ 518] Overall Loss 2.504725 Objective Loss 2.504725 LR 0.000016 Time 0.322676
+2025-05-17 02:32:45,212 - Epoch: [199][ 250/ 518] Overall Loss 2.509599 Objective Loss 2.509599 LR 0.000016 Time 0.321820
+2025-05-17 02:33:01,153 - Epoch: [199][ 300/ 518] Overall Loss 2.514695 Objective Loss 2.514695 LR 0.000016 Time 0.321320
+2025-05-17 02:33:17,090 - Epoch: [199][ 350/ 518] Overall Loss 2.523220 Objective Loss 2.523220 LR 0.000016 Time 0.320950
+2025-05-17 02:33:33,043 - Epoch: [199][ 400/ 518] Overall Loss 2.526712 Objective Loss 2.526712 LR 0.000016 Time 0.320711
+2025-05-17 02:33:48,989 - Epoch: [199][ 450/ 518] Overall Loss 2.528361 Objective Loss 2.528361 LR 0.000016 Time 0.320510
+2025-05-17 02:34:04,924 - Epoch: [199][ 500/ 518] Overall Loss 2.530336 Objective Loss 2.530336 LR 0.000016 Time 0.320327
+2025-05-17 02:34:10,559 - Epoch: [199][ 518/ 518] Overall Loss 2.531194 Objective Loss 2.531194 LR 0.000016 Time 0.320075
+2025-05-17 02:34:10,593 - --- validate (epoch=199)-----------
+2025-05-17 02:34:10,593 - 4952 samples (32 per mini-batch)
+2025-05-17 02:34:10,597 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 02:34:22,905 - Epoch: [199][ 50/ 155] Loss 2.879533 mAP 0.493261
+2025-05-17 02:34:35,423 - Epoch: [199][ 100/ 155] Loss 2.905988 mAP 0.489944
+2025-05-17 02:34:48,613 - Epoch: [199][ 150/ 155] Loss 2.920138 mAP 0.483874
+2025-05-17 02:34:51,576 - Epoch: [199][ 155/ 155] Loss 2.916726 mAP 0.486843
+2025-05-17 02:34:51,613 - ==> mAP: 0.48684 Loss: 2.917
+
+2025-05-17 02:34:51,623 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 02:34:51,623 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 02:34:51,721 - 
+
+2025-05-17 02:34:51,721 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 02:35:08,533 - Epoch: [200][ 50/ 518] Overall Loss 2.577821 Objective Loss 2.577821 LR 0.000004 Time 0.336172
+2025-05-17 02:35:24,433 - Epoch: [200][ 100/ 518] Overall Loss 2.549684 Objective Loss 2.549684 LR 0.000004 Time 0.327082
+2025-05-17 02:35:40,347 - Epoch: [200][ 150/ 518] Overall Loss 2.546070 Objective Loss 2.546070 LR 0.000004 Time 0.324144
+2025-05-17 02:35:56,282 - Epoch: [200][ 200/ 518] Overall Loss 2.540955 Objective Loss 2.540955 LR 0.000004 Time 0.322780
+2025-05-17 02:36:12,213
- Epoch: [200][ 250/ 518] Overall Loss 2.535204 Objective Loss 2.535204 LR 0.000004 Time 0.321946 +2025-05-17 02:36:28,140 - Epoch: [200][ 300/ 518] Overall Loss 2.535918 Objective Loss 2.535918 LR 0.000004 Time 0.321373 +2025-05-17 02:36:44,069 - Epoch: [200][ 350/ 518] Overall Loss 2.532701 Objective Loss 2.532701 LR 0.000004 Time 0.320973 +2025-05-17 02:36:59,995 - Epoch: [200][ 400/ 518] Overall Loss 2.536633 Objective Loss 2.536633 LR 0.000004 Time 0.320664 +2025-05-17 02:37:15,946 - Epoch: [200][ 450/ 518] Overall Loss 2.536934 Objective Loss 2.536934 LR 0.000004 Time 0.320479 +2025-05-17 02:37:31,872 - Epoch: [200][ 500/ 518] Overall Loss 2.537559 Objective Loss 2.537559 LR 0.000004 Time 0.320282 +2025-05-17 02:37:37,505 - Epoch: [200][ 518/ 518] Overall Loss 2.539129 Objective Loss 2.539129 LR 0.000004 Time 0.320024 +2025-05-17 02:37:37,540 - --- validate (epoch=200)----------- +2025-05-17 02:37:37,541 - 4952 samples (32 per mini-batch) +2025-05-17 02:37:37,545 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 02:37:49,649 - Epoch: [200][ 50/ 155] Loss 2.935830 mAP 0.480991 +2025-05-17 02:38:02,163 - Epoch: [200][ 100/ 155] Loss 2.921212 mAP 0.486944 +2025-05-17 02:38:14,977 - Epoch: [200][ 150/ 155] Loss 2.914452 mAP 0.487454 +2025-05-17 02:38:17,876 - Epoch: [200][ 155/ 155] Loss 2.912534 mAP 0.486715 +2025-05-17 02:38:17,915 - ==> mAP: 0.48672 Loss: 2.913 + +2025-05-17 02:38:17,925 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 02:38:17,925 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 02:38:18,021 - + +2025-05-17 02:38:18,021 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 02:38:34,758 - Epoch: [201][ 50/ 518] Overall Loss 2.483109 Objective Loss 2.483109 LR 0.000004 Time 0.334686 +2025-05-17 02:38:50,656 - Epoch: [201][ 100/ 518] Overall Loss 2.497284 
Objective Loss 2.497284 LR 0.000004 Time 0.326315 +2025-05-17 02:39:06,553 - Epoch: [201][ 150/ 518] Overall Loss 2.502547 Objective Loss 2.502547 LR 0.000004 Time 0.323514 +2025-05-17 02:39:22,465 - Epoch: [201][ 200/ 518] Overall Loss 2.493462 Objective Loss 2.493462 LR 0.000004 Time 0.322190 +2025-05-17 02:39:38,399 - Epoch: [201][ 250/ 518] Overall Loss 2.504601 Objective Loss 2.504601 LR 0.000004 Time 0.321485 +2025-05-17 02:39:54,359 - Epoch: [201][ 300/ 518] Overall Loss 2.499963 Objective Loss 2.499963 LR 0.000004 Time 0.321101 +2025-05-17 02:40:10,302 - Epoch: [201][ 350/ 518] Overall Loss 2.506221 Objective Loss 2.506221 LR 0.000004 Time 0.320781 +2025-05-17 02:40:26,241 - Epoch: [201][ 400/ 518] Overall Loss 2.509376 Objective Loss 2.509376 LR 0.000004 Time 0.320528 +2025-05-17 02:40:42,150 - Epoch: [201][ 450/ 518] Overall Loss 2.515280 Objective Loss 2.515280 LR 0.000004 Time 0.320265 +2025-05-17 02:40:58,070 - Epoch: [201][ 500/ 518] Overall Loss 2.516182 Objective Loss 2.516182 LR 0.000004 Time 0.320077 +2025-05-17 02:41:03,684 - Epoch: [201][ 518/ 518] Overall Loss 2.518132 Objective Loss 2.518132 LR 0.000004 Time 0.319791 +2025-05-17 02:41:03,719 - --- validate (epoch=201)----------- +2025-05-17 02:41:03,719 - 4952 samples (32 per mini-batch) +2025-05-17 02:41:03,723 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 02:41:15,996 - Epoch: [201][ 50/ 155] Loss 2.937049 mAP 0.503053 +2025-05-17 02:41:28,375 - Epoch: [201][ 100/ 155] Loss 2.926431 mAP 0.484793 +2025-05-17 02:41:41,355 - Epoch: [201][ 150/ 155] Loss 2.906258 mAP 0.487925 +2025-05-17 02:41:44,253 - Epoch: [201][ 155/ 155] Loss 2.905337 mAP 0.487421 +2025-05-17 02:41:44,297 - ==> mAP: 0.48742 Loss: 2.905 + +2025-05-17 02:41:44,307 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 02:41:44,307 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 02:41:44,402 - + +2025-05-17 02:41:44,402 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 02:42:01,098 - Epoch: [202][ 50/ 518] Overall Loss 2.520652 Objective Loss 2.520652 LR 0.000004 Time 0.333855 +2025-05-17 02:42:16,985 - Epoch: [202][ 100/ 518] Overall Loss 2.507739 Objective Loss 2.507739 LR 0.000004 Time 0.325791 +2025-05-17 02:42:32,880 - Epoch: [202][ 150/ 518] Overall Loss 2.522032 Objective Loss 2.522032 LR 0.000004 Time 0.323151 +2025-05-17 02:42:48,801 - Epoch: [202][ 200/ 518] Overall Loss 2.527405 Objective Loss 2.527405 LR 0.000004 Time 0.321962 +2025-05-17 02:43:04,718 - Epoch: [202][ 250/ 518] Overall Loss 2.515523 Objective Loss 2.515523 LR 0.000004 Time 0.321237 +2025-05-17 02:43:20,646 - Epoch: [202][ 300/ 518] Overall Loss 2.527249 Objective Loss 2.527249 LR 0.000004 Time 0.320787 +2025-05-17 02:43:36,583 - Epoch: [202][ 350/ 518] Overall Loss 2.531627 Objective Loss 2.531627 LR 0.000004 Time 0.320494 +2025-05-17 02:43:52,533 - Epoch: [202][ 400/ 518] Overall Loss 2.529533 Objective Loss 2.529533 LR 0.000004 Time 0.320304 +2025-05-17 02:44:08,477 - Epoch: [202][ 450/ 518] Overall Loss 2.529131 Objective Loss 2.529131 LR 0.000004 Time 0.320145 +2025-05-17 02:44:24,420 - Epoch: [202][ 500/ 518] Overall Loss 2.523790 Objective Loss 2.523790 LR 0.000004 Time 0.320016 +2025-05-17 02:44:30,049 - Epoch: [202][ 518/ 518] Overall Loss 2.526139 Objective Loss 2.526139 LR 0.000004 Time 0.319762 +2025-05-17 02:44:30,087 - --- validate (epoch=202)----------- +2025-05-17 02:44:30,088 - 4952 samples (32 per mini-batch) +2025-05-17 02:44:30,092 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 02:44:42,138 - Epoch: [202][ 50/ 155] Loss 2.870288 mAP 0.491456 +2025-05-17 02:44:54,538 - Epoch: [202][ 100/ 155] Loss 2.919517 mAP 0.483138 +2025-05-17 02:45:07,456 - Epoch: [202][ 
150/ 155] Loss 2.908662 mAP 0.484149 +2025-05-17 02:45:10,366 - Epoch: [202][ 155/ 155] Loss 2.911520 mAP 0.484047 +2025-05-17 02:45:10,406 - ==> mAP: 0.48405 Loss: 2.912 + +2025-05-17 02:45:10,416 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 02:45:10,416 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 02:45:10,511 - + +2025-05-17 02:45:10,511 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 02:45:27,322 - Epoch: [203][ 50/ 518] Overall Loss 2.474296 Objective Loss 2.474296 LR 0.000004 Time 0.336143 +2025-05-17 02:45:43,211 - Epoch: [203][ 100/ 518] Overall Loss 2.511884 Objective Loss 2.511884 LR 0.000004 Time 0.326957 +2025-05-17 02:45:59,101 - Epoch: [203][ 150/ 518] Overall Loss 2.519328 Objective Loss 2.519328 LR 0.000004 Time 0.323898 +2025-05-17 02:46:15,009 - Epoch: [203][ 200/ 518] Overall Loss 2.529295 Objective Loss 2.529295 LR 0.000004 Time 0.322458 +2025-05-17 02:46:30,926 - Epoch: [203][ 250/ 518] Overall Loss 2.514213 Objective Loss 2.514213 LR 0.000004 Time 0.321627 +2025-05-17 02:46:46,832 - Epoch: [203][ 300/ 518] Overall Loss 2.522280 Objective Loss 2.522280 LR 0.000004 Time 0.321040 +2025-05-17 02:47:02,744 - Epoch: [203][ 350/ 518] Overall Loss 2.518982 Objective Loss 2.518982 LR 0.000004 Time 0.320636 +2025-05-17 02:47:18,670 - Epoch: [203][ 400/ 518] Overall Loss 2.523294 Objective Loss 2.523294 LR 0.000004 Time 0.320369 +2025-05-17 02:47:34,589 - Epoch: [203][ 450/ 518] Overall Loss 2.521382 Objective Loss 2.521382 LR 0.000004 Time 0.320146 +2025-05-17 02:47:50,519 - Epoch: [203][ 500/ 518] Overall Loss 2.529880 Objective Loss 2.529880 LR 0.000004 Time 0.319989 +2025-05-17 02:47:56,144 - Epoch: [203][ 518/ 518] Overall Loss 2.531962 Objective Loss 2.531962 LR 0.000004 Time 0.319729 +2025-05-17 02:47:56,181 - --- validate (epoch=203)----------- +2025-05-17 02:47:56,182 - 4952 samples (32 per mini-batch) +2025-05-17 
02:47:56,186 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 02:48:08,415 - Epoch: [203][ 50/ 155] Loss 2.924746 mAP 0.496707 +2025-05-17 02:48:20,911 - Epoch: [203][ 100/ 155] Loss 2.924823 mAP 0.492973 +2025-05-17 02:48:33,708 - Epoch: [203][ 150/ 155] Loss 2.909096 mAP 0.490072 +2025-05-17 02:48:36,561 - Epoch: [203][ 155/ 155] Loss 2.913930 mAP 0.488518 +2025-05-17 02:48:36,602 - ==> mAP: 0.48852 Loss: 2.914 + +2025-05-17 02:48:36,612 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 02:48:36,612 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 02:48:36,705 - + +2025-05-17 02:48:36,706 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 02:48:53,494 - Epoch: [204][ 50/ 518] Overall Loss 2.526424 Objective Loss 2.526424 LR 0.000004 Time 0.335698 +2025-05-17 02:49:09,362 - Epoch: [204][ 100/ 518] Overall Loss 2.534516 Objective Loss 2.534516 LR 0.000004 Time 0.326516 +2025-05-17 02:49:25,254 - Epoch: [204][ 150/ 518] Overall Loss 2.552957 Objective Loss 2.552957 LR 0.000004 Time 0.323622 +2025-05-17 02:49:41,156 - Epoch: [204][ 200/ 518] Overall Loss 2.555562 Objective Loss 2.555562 LR 0.000004 Time 0.322220 +2025-05-17 02:49:57,056 - Epoch: [204][ 250/ 518] Overall Loss 2.543668 Objective Loss 2.543668 LR 0.000004 Time 0.321372 +2025-05-17 02:50:12,982 - Epoch: [204][ 300/ 518] Overall Loss 2.553315 Objective Loss 2.553315 LR 0.000004 Time 0.320893 +2025-05-17 02:50:28,920 - Epoch: [204][ 350/ 518] Overall Loss 2.549616 Objective Loss 2.549616 LR 0.000004 Time 0.320585 +2025-05-17 02:50:44,854 - Epoch: [204][ 400/ 518] Overall Loss 2.536290 Objective Loss 2.536290 LR 0.000004 Time 0.320345 +2025-05-17 02:51:00,772 - Epoch: [204][ 450/ 518] Overall Loss 2.537462 Objective Loss 2.537462 LR 0.000004 Time 0.320122 +2025-05-17 02:51:16,700 - Epoch: [204][ 500/ 518] Overall 
Loss 2.539350 Objective Loss 2.539350 LR 0.000004 Time 0.319964 +2025-05-17 02:51:22,313 - Epoch: [204][ 518/ 518] Overall Loss 2.539028 Objective Loss 2.539028 LR 0.000004 Time 0.319682 +2025-05-17 02:51:22,350 - --- validate (epoch=204)----------- +2025-05-17 02:51:22,351 - 4952 samples (32 per mini-batch) +2025-05-17 02:51:22,354 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 02:51:34,633 - Epoch: [204][ 50/ 155] Loss 2.897982 mAP 0.494404 +2025-05-17 02:51:46,870 - Epoch: [204][ 100/ 155] Loss 2.910251 mAP 0.487020 +2025-05-17 02:51:59,849 - Epoch: [204][ 150/ 155] Loss 2.914387 mAP 0.486347 +2025-05-17 02:52:02,766 - Epoch: [204][ 155/ 155] Loss 2.913240 mAP 0.486924 +2025-05-17 02:52:02,804 - ==> mAP: 0.48692 Loss: 2.913 + +2025-05-17 02:52:02,814 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 02:52:02,814 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 02:52:02,909 - + +2025-05-17 02:52:02,910 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 02:52:19,645 - Epoch: [205][ 50/ 518] Overall Loss 2.564835 Objective Loss 2.564835 LR 0.000004 Time 0.334632 +2025-05-17 02:52:35,501 - Epoch: [205][ 100/ 518] Overall Loss 2.542468 Objective Loss 2.542468 LR 0.000004 Time 0.325870 +2025-05-17 02:52:51,391 - Epoch: [205][ 150/ 518] Overall Loss 2.530572 Objective Loss 2.530572 LR 0.000004 Time 0.323172 +2025-05-17 02:53:07,292 - Epoch: [205][ 200/ 518] Overall Loss 2.539335 Objective Loss 2.539335 LR 0.000004 Time 0.321875 +2025-05-17 02:53:23,213 - Epoch: [205][ 250/ 518] Overall Loss 2.529625 Objective Loss 2.529625 LR 0.000004 Time 0.321183 +2025-05-17 02:53:39,155 - Epoch: [205][ 300/ 518] Overall Loss 2.530493 Objective Loss 2.530493 LR 0.000004 Time 0.320789 +2025-05-17 02:53:55,071 - Epoch: [205][ 350/ 518] Overall Loss 2.532822 Objective Loss 2.532822 LR 
0.000004 Time 0.320436 +2025-05-17 02:54:10,996 - Epoch: [205][ 400/ 518] Overall Loss 2.531182 Objective Loss 2.531182 LR 0.000004 Time 0.320190 +2025-05-17 02:54:26,938 - Epoch: [205][ 450/ 518] Overall Loss 2.531050 Objective Loss 2.531050 LR 0.000004 Time 0.320039 +2025-05-17 02:54:42,889 - Epoch: [205][ 500/ 518] Overall Loss 2.530044 Objective Loss 2.530044 LR 0.000004 Time 0.319935 +2025-05-17 02:54:48,529 - Epoch: [205][ 518/ 518] Overall Loss 2.531906 Objective Loss 2.531906 LR 0.000004 Time 0.319705 +2025-05-17 02:54:48,567 - --- validate (epoch=205)----------- +2025-05-17 02:54:48,568 - 4952 samples (32 per mini-batch) +2025-05-17 02:54:48,571 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 02:55:00,825 - Epoch: [205][ 50/ 155] Loss 2.852124 mAP 0.502255 +2025-05-17 02:55:13,456 - Epoch: [205][ 100/ 155] Loss 2.874445 mAP 0.495292 +2025-05-17 02:55:26,307 - Epoch: [205][ 150/ 155] Loss 2.904366 mAP 0.486807 +2025-05-17 02:55:29,205 - Epoch: [205][ 155/ 155] Loss 2.901209 mAP 0.487452 +2025-05-17 02:55:29,243 - ==> mAP: 0.48745 Loss: 2.901 + +2025-05-17 02:55:29,253 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 02:55:29,253 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 02:55:29,348 - + +2025-05-17 02:55:29,348 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 02:55:46,133 - Epoch: [206][ 50/ 518] Overall Loss 2.522792 Objective Loss 2.522792 LR 0.000004 Time 0.335634 +2025-05-17 02:56:01,992 - Epoch: [206][ 100/ 518] Overall Loss 2.529260 Objective Loss 2.529260 LR 0.000004 Time 0.326394 +2025-05-17 02:56:17,872 - Epoch: [206][ 150/ 518] Overall Loss 2.513355 Objective Loss 2.513355 LR 0.000004 Time 0.323458 +2025-05-17 02:56:33,779 - Epoch: [206][ 200/ 518] Overall Loss 2.513282 Objective Loss 2.513282 LR 0.000004 Time 0.322122 +2025-05-17 02:56:49,714 
- Epoch: [206][ 250/ 518] Overall Loss 2.516617 Objective Loss 2.516617 LR 0.000004 Time 0.321431 +2025-05-17 02:57:05,676 - Epoch: [206][ 300/ 518] Overall Loss 2.524461 Objective Loss 2.524461 LR 0.000004 Time 0.321065 +2025-05-17 02:57:21,629 - Epoch: [206][ 350/ 518] Overall Loss 2.526147 Objective Loss 2.526147 LR 0.000004 Time 0.320778 +2025-05-17 02:57:37,552 - Epoch: [206][ 400/ 518] Overall Loss 2.521884 Objective Loss 2.521884 LR 0.000004 Time 0.320485 +2025-05-17 02:57:53,484 - Epoch: [206][ 450/ 518] Overall Loss 2.523474 Objective Loss 2.523474 LR 0.000004 Time 0.320276 +2025-05-17 02:58:09,436 - Epoch: [206][ 500/ 518] Overall Loss 2.526147 Objective Loss 2.526147 LR 0.000004 Time 0.320152 +2025-05-17 02:58:15,079 - Epoch: [206][ 518/ 518] Overall Loss 2.526327 Objective Loss 2.526327 LR 0.000004 Time 0.319920 +2025-05-17 02:58:15,112 - --- validate (epoch=206)----------- +2025-05-17 02:58:15,113 - 4952 samples (32 per mini-batch) +2025-05-17 02:58:15,117 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 02:58:27,410 - Epoch: [206][ 50/ 155] Loss 2.920591 mAP 0.502647 +2025-05-17 02:58:39,747 - Epoch: [206][ 100/ 155] Loss 2.910677 mAP 0.496428 +2025-05-17 02:58:52,726 - Epoch: [206][ 150/ 155] Loss 2.914282 mAP 0.490196 +2025-05-17 02:58:55,606 - Epoch: [206][ 155/ 155] Loss 2.911880 mAP 0.489396 +2025-05-17 02:58:55,645 - ==> mAP: 0.48940 Loss: 2.912 + +2025-05-17 02:58:55,655 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 02:58:55,655 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 02:58:55,746 - + +2025-05-17 02:58:55,747 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 02:59:12,589 - Epoch: [207][ 50/ 518] Overall Loss 2.494220 Objective Loss 2.494220 LR 0.000004 Time 0.336791 +2025-05-17 02:59:28,486 - Epoch: [207][ 100/ 518] Overall Loss 2.513525 
Objective Loss 2.513525 LR 0.000004 Time 0.327348 +2025-05-17 02:59:44,392 - Epoch: [207][ 150/ 518] Overall Loss 2.507765 Objective Loss 2.507765 LR 0.000004 Time 0.324269 +2025-05-17 03:00:00,310 - Epoch: [207][ 200/ 518] Overall Loss 2.529615 Objective Loss 2.529615 LR 0.000004 Time 0.322788 +2025-05-17 03:00:16,253 - Epoch: [207][ 250/ 518] Overall Loss 2.543032 Objective Loss 2.543032 LR 0.000004 Time 0.322003 +2025-05-17 03:00:32,191 - Epoch: [207][ 300/ 518] Overall Loss 2.538879 Objective Loss 2.538879 LR 0.000004 Time 0.321461 +2025-05-17 03:00:48,128 - Epoch: [207][ 350/ 518] Overall Loss 2.531689 Objective Loss 2.531689 LR 0.000004 Time 0.321069 +2025-05-17 03:01:04,043 - Epoch: [207][ 400/ 518] Overall Loss 2.536527 Objective Loss 2.536527 LR 0.000004 Time 0.320720 +2025-05-17 03:01:19,956 - Epoch: [207][ 450/ 518] Overall Loss 2.535133 Objective Loss 2.535133 LR 0.000004 Time 0.320445 +2025-05-17 03:01:35,890 - Epoch: [207][ 500/ 518] Overall Loss 2.534093 Objective Loss 2.534093 LR 0.000004 Time 0.320267 +2025-05-17 03:01:41,519 - Epoch: [207][ 518/ 518] Overall Loss 2.531312 Objective Loss 2.531312 LR 0.000004 Time 0.320003 +2025-05-17 03:01:41,555 - --- validate (epoch=207)----------- +2025-05-17 03:01:41,556 - 4952 samples (32 per mini-batch) +2025-05-17 03:01:41,559 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 03:01:53,754 - Epoch: [207][ 50/ 155] Loss 2.911832 mAP 0.463198 +2025-05-17 03:02:06,326 - Epoch: [207][ 100/ 155] Loss 2.924273 mAP 0.476136 +2025-05-17 03:02:19,390 - Epoch: [207][ 150/ 155] Loss 2.918135 mAP 0.486415 +2025-05-17 03:02:22,353 - Epoch: [207][ 155/ 155] Loss 2.912772 mAP 0.484546 +2025-05-17 03:02:22,392 - ==> mAP: 0.48455 Loss: 2.913 + +2025-05-17 03:02:22,402 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 03:02:22,402 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 03:02:22,497 - + +2025-05-17 03:02:22,497 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 03:02:39,175 - Epoch: [208][ 50/ 518] Overall Loss 2.560109 Objective Loss 2.560109 LR 0.000004 Time 0.333480 +2025-05-17 03:02:55,051 - Epoch: [208][ 100/ 518] Overall Loss 2.543298 Objective Loss 2.543298 LR 0.000004 Time 0.325496 +2025-05-17 03:03:10,942 - Epoch: [208][ 150/ 518] Overall Loss 2.556578 Objective Loss 2.556578 LR 0.000004 Time 0.322928 +2025-05-17 03:03:26,850 - Epoch: [208][ 200/ 518] Overall Loss 2.544736 Objective Loss 2.544736 LR 0.000004 Time 0.321728 +2025-05-17 03:03:42,779 - Epoch: [208][ 250/ 518] Overall Loss 2.544326 Objective Loss 2.544326 LR 0.000004 Time 0.321097 +2025-05-17 03:03:58,718 - Epoch: [208][ 300/ 518] Overall Loss 2.536248 Objective Loss 2.536248 LR 0.000004 Time 0.320707 +2025-05-17 03:04:14,642 - Epoch: [208][ 350/ 518] Overall Loss 2.531842 Objective Loss 2.531842 LR 0.000004 Time 0.320386 +2025-05-17 03:04:30,570 - Epoch: [208][ 400/ 518] Overall Loss 2.527583 Objective Loss 2.527583 LR 0.000004 Time 0.320155 +2025-05-17 03:04:46,520 - Epoch: [208][ 450/ 518] Overall Loss 2.523990 Objective Loss 2.523990 LR 0.000004 Time 0.320025 +2025-05-17 03:05:02,448 - Epoch: [208][ 500/ 518] Overall Loss 2.523859 Objective Loss 2.523859 LR 0.000004 Time 0.319875 +2025-05-17 03:05:08,073 - Epoch: [208][ 518/ 518] Overall Loss 2.524016 Objective Loss 2.524016 LR 0.000004 Time 0.319618 +2025-05-17 03:05:08,112 - --- validate (epoch=208)----------- +2025-05-17 03:05:08,113 - 4952 samples (32 per mini-batch) +2025-05-17 03:05:08,116 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 03:05:20,329 - Epoch: [208][ 50/ 155] Loss 2.891405 mAP 0.485539 +2025-05-17 03:05:32,635 - Epoch: [208][ 100/ 155] Loss 2.910150 mAP 0.482996 +2025-05-17 03:05:45,609 - Epoch: [208][ 
150/ 155] Loss 2.912723 mAP 0.481825 +2025-05-17 03:05:48,538 - Epoch: [208][ 155/ 155] Loss 2.910254 mAP 0.483619 +2025-05-17 03:05:48,575 - ==> mAP: 0.48362 Loss: 2.910 + +2025-05-17 03:05:48,585 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 03:05:48,585 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 03:05:48,680 - + +2025-05-17 03:05:48,681 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 03:06:05,352 - Epoch: [209][ 50/ 518] Overall Loss 2.529755 Objective Loss 2.529755 LR 0.000004 Time 0.333350 +2025-05-17 03:06:21,237 - Epoch: [209][ 100/ 518] Overall Loss 2.534503 Objective Loss 2.534503 LR 0.000004 Time 0.325516 +2025-05-17 03:06:37,107 - Epoch: [209][ 150/ 518] Overall Loss 2.525155 Objective Loss 2.525155 LR 0.000004 Time 0.322804 +2025-05-17 03:06:53,045 - Epoch: [209][ 200/ 518] Overall Loss 2.526433 Objective Loss 2.526433 LR 0.000004 Time 0.321790 +2025-05-17 03:07:08,978 - Epoch: [209][ 250/ 518] Overall Loss 2.532567 Objective Loss 2.532567 LR 0.000004 Time 0.321160 +2025-05-17 03:07:24,920 - Epoch: [209][ 300/ 518] Overall Loss 2.537016 Objective Loss 2.537016 LR 0.000004 Time 0.320773 +2025-05-17 03:07:40,830 - Epoch: [209][ 350/ 518] Overall Loss 2.539064 Objective Loss 2.539064 LR 0.000004 Time 0.320402 +2025-05-17 03:07:56,768 - Epoch: [209][ 400/ 518] Overall Loss 2.547459 Objective Loss 2.547459 LR 0.000004 Time 0.320194 +2025-05-17 03:08:12,721 - Epoch: [209][ 450/ 518] Overall Loss 2.548804 Objective Loss 2.548804 LR 0.000004 Time 0.320067 +2025-05-17 03:08:28,685 - Epoch: [209][ 500/ 518] Overall Loss 2.545337 Objective Loss 2.545337 LR 0.000004 Time 0.319987 +2025-05-17 03:08:34,314 - Epoch: [209][ 518/ 518] Overall Loss 2.545161 Objective Loss 2.545161 LR 0.000004 Time 0.319734 +2025-05-17 03:08:34,351 - --- validate (epoch=209)----------- +2025-05-17 03:08:34,352 - 4952 samples (32 per mini-batch) +2025-05-17 
03:08:34,355 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 03:08:46,562 - Epoch: [209][ 50/ 155] Loss 2.948112 mAP 0.478184 +2025-05-17 03:08:59,194 - Epoch: [209][ 100/ 155] Loss 2.939503 mAP 0.483810 +2025-05-17 03:09:12,431 - Epoch: [209][ 150/ 155] Loss 2.923048 mAP 0.488003 +2025-05-17 03:09:15,429 - Epoch: [209][ 155/ 155] Loss 2.921227 mAP 0.488385 +2025-05-17 03:09:15,466 - ==> mAP: 0.48838 Loss: 2.921 + +2025-05-17 03:09:15,476 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 03:09:15,477 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 03:09:15,577 - + +2025-05-17 03:09:15,578 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 03:09:32,370 - Epoch: [210][ 50/ 518] Overall Loss 2.549625 Objective Loss 2.549625 LR 0.000004 Time 0.335773 +2025-05-17 03:09:48,270 - Epoch: [210][ 100/ 518] Overall Loss 2.539818 Objective Loss 2.539818 LR 0.000004 Time 0.326874 +2025-05-17 03:10:04,164 - Epoch: [210][ 150/ 518] Overall Loss 2.527450 Objective Loss 2.527450 LR 0.000004 Time 0.323875 +2025-05-17 03:10:20,071 - Epoch: [210][ 200/ 518] Overall Loss 2.522890 Objective Loss 2.522890 LR 0.000004 Time 0.322435 +2025-05-17 03:10:35,986 - Epoch: [210][ 250/ 518] Overall Loss 2.528610 Objective Loss 2.528610 LR 0.000004 Time 0.321606 +2025-05-17 03:10:51,911 - Epoch: [210][ 300/ 518] Overall Loss 2.537832 Objective Loss 2.537832 LR 0.000004 Time 0.321085 +2025-05-17 03:11:07,847 - Epoch: [210][ 350/ 518] Overall Loss 2.536623 Objective Loss 2.536623 LR 0.000004 Time 0.320743 +2025-05-17 03:11:23,772 - Epoch: [210][ 400/ 518] Overall Loss 2.535927 Objective Loss 2.535927 LR 0.000004 Time 0.320461 +2025-05-17 03:11:39,699 - Epoch: [210][ 450/ 518] Overall Loss 2.537960 Objective Loss 2.537960 LR 0.000004 Time 0.320246 +2025-05-17 03:11:55,662 - Epoch: [210][ 500/ 518] Overall 
Loss 2.536644 Objective Loss 2.536644 LR 0.000004 Time 0.320146 +2025-05-17 03:12:01,303 - Epoch: [210][ 518/ 518] Overall Loss 2.549635 Objective Loss 2.549635 LR 0.000004 Time 0.319911 +2025-05-17 03:12:01,341 - --- validate (epoch=210)----------- +2025-05-17 03:12:01,341 - 4952 samples (32 per mini-batch) +2025-05-17 03:12:01,345 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 03:12:13,688 - Epoch: [210][ 50/ 155] Loss 2.915668 mAP 0.483261 +2025-05-17 03:12:26,166 - Epoch: [210][ 100/ 155] Loss 2.910298 mAP 0.493349 +2025-05-17 03:12:39,345 - Epoch: [210][ 150/ 155] Loss 2.906356 mAP 0.489826 +2025-05-17 03:12:42,328 - Epoch: [210][ 155/ 155] Loss 2.908128 mAP 0.489498 +2025-05-17 03:12:42,370 - ==> mAP: 0.48950 Loss: 2.908 + +2025-05-17 03:12:42,380 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 03:12:42,380 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 03:12:42,477 - + +2025-05-17 03:12:42,477 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 03:12:59,292 - Epoch: [211][ 50/ 518] Overall Loss 2.517592 Objective Loss 2.517592 LR 0.000004 Time 0.336243 +2025-05-17 03:13:15,171 - Epoch: [211][ 100/ 518] Overall Loss 2.517343 Objective Loss 2.517343 LR 0.000004 Time 0.326899 +2025-05-17 03:13:31,076 - Epoch: [211][ 150/ 518] Overall Loss 2.519663 Objective Loss 2.519663 LR 0.000004 Time 0.323961 +2025-05-17 03:13:47,012 - Epoch: [211][ 200/ 518] Overall Loss 2.532757 Objective Loss 2.532757 LR 0.000004 Time 0.322646 +2025-05-17 03:14:02,935 - Epoch: [211][ 250/ 518] Overall Loss 2.537476 Objective Loss 2.537476 LR 0.000004 Time 0.321805 +2025-05-17 03:14:18,861 - Epoch: [211][ 300/ 518] Overall Loss 2.529053 Objective Loss 2.529053 LR 0.000004 Time 0.321253 +2025-05-17 03:14:34,790 - Epoch: [211][ 350/ 518] Overall Loss 2.529751 Objective Loss 2.529751 LR 
0.000004 Time 0.320869 +2025-05-17 03:14:50,722 - Epoch: [211][ 400/ 518] Overall Loss 2.533415 Objective Loss 2.533415 LR 0.000004 Time 0.320588 +2025-05-17 03:15:06,673 - Epoch: [211][ 450/ 518] Overall Loss 2.534241 Objective Loss 2.534241 LR 0.000004 Time 0.320411 +2025-05-17 03:15:22,629 - Epoch: [211][ 500/ 518] Overall Loss 2.533586 Objective Loss 2.533586 LR 0.000004 Time 0.320281 +2025-05-17 03:15:28,253 - Epoch: [211][ 518/ 518] Overall Loss 2.535990 Objective Loss 2.535990 LR 0.000004 Time 0.320007 +2025-05-17 03:15:28,290 - --- validate (epoch=211)----------- +2025-05-17 03:15:28,291 - 4952 samples (32 per mini-batch) +2025-05-17 03:15:28,294 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 03:15:40,480 - Epoch: [211][ 50/ 155] Loss 2.934395 mAP 0.476407 +2025-05-17 03:15:53,047 - Epoch: [211][ 100/ 155] Loss 2.920636 mAP 0.477547 +2025-05-17 03:16:06,270 - Epoch: [211][ 150/ 155] Loss 2.908326 mAP 0.488030 +2025-05-17 03:16:09,119 - Epoch: [211][ 155/ 155] Loss 2.903985 mAP 0.488247 +2025-05-17 03:16:09,158 - ==> mAP: 0.48825 Loss: 2.904 + +2025-05-17 03:16:09,168 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 03:16:09,168 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 03:16:09,404 - + +2025-05-17 03:16:09,405 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 03:16:26,197 - Epoch: [212][ 50/ 518] Overall Loss 2.546658 Objective Loss 2.546658 LR 0.000004 Time 0.335782 +2025-05-17 03:16:42,058 - Epoch: [212][ 100/ 518] Overall Loss 2.532550 Objective Loss 2.532550 LR 0.000004 Time 0.326491 +2025-05-17 03:16:57,944 - Epoch: [212][ 150/ 518] Overall Loss 2.534441 Objective Loss 2.534441 LR 0.000004 Time 0.323564 +2025-05-17 03:17:13,875 - Epoch: [212][ 200/ 518] Overall Loss 2.531391 Objective Loss 2.531391 LR 0.000004 Time 0.322324 +2025-05-17 03:17:29,799 
- Epoch: [212][ 250/ 518] Overall Loss 2.530244 Objective Loss 2.530244 LR 0.000004 Time 0.321547
+2025-05-17 03:17:45,718 - Epoch: [212][ 300/ 518] Overall Loss 2.525467 Objective Loss 2.525467 LR 0.000004 Time 0.321018
+2025-05-17 03:18:01,653 - Epoch: [212][ 350/ 518] Overall Loss 2.532566 Objective Loss 2.532566 LR 0.000004 Time 0.320685
+2025-05-17 03:18:17,596 - Epoch: [212][ 400/ 518] Overall Loss 2.537145 Objective Loss 2.537145 LR 0.000004 Time 0.320454
+2025-05-17 03:18:33,539 - Epoch: [212][ 450/ 518] Overall Loss 2.537152 Objective Loss 2.537152 LR 0.000004 Time 0.320276
+2025-05-17 03:18:49,489 - Epoch: [212][ 500/ 518] Overall Loss 2.534133 Objective Loss 2.534133 LR 0.000004 Time 0.320148
+2025-05-17 03:18:55,113 - Epoch: [212][ 518/ 518] Overall Loss 2.533430 Objective Loss 2.533430 LR 0.000004 Time 0.319879
+2025-05-17 03:18:55,151 - --- validate (epoch=212)-----------
+2025-05-17 03:18:55,152 - 4952 samples (32 per mini-batch)
+2025-05-17 03:18:55,155 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:19:07,350 - Epoch: [212][ 50/ 155] Loss 2.917683 mAP 0.478847
+2025-05-17 03:19:19,973 - Epoch: [212][ 100/ 155] Loss 2.919295 mAP 0.482350
+2025-05-17 03:19:33,008 - Epoch: [212][ 150/ 155] Loss 2.909345 mAP 0.483780
+2025-05-17 03:19:35,959 - Epoch: [212][ 155/ 155] Loss 2.907991 mAP 0.484923
+2025-05-17 03:19:35,995 - ==> mAP: 0.48492 Loss: 2.908
+
+2025-05-17 03:19:36,005 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:19:36,005 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:19:36,102 - 
+
+2025-05-17 03:19:36,103 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:19:52,979 - Epoch: [213][ 50/ 518] Overall Loss 2.574115 Objective Loss 2.574115 LR 0.000004 Time 0.337463
+2025-05-17 03:20:08,873 - Epoch: [213][ 100/ 518] Overall Loss 2.544698 Objective Loss 2.544698 LR 0.000004 Time 0.327665
+2025-05-17 03:20:24,766 - Epoch: [213][ 150/ 518] Overall Loss 2.539688 Objective Loss 2.539688 LR 0.000004 Time 0.324391
+2025-05-17 03:20:40,661 - Epoch: [213][ 200/ 518] Overall Loss 2.542564 Objective Loss 2.542564 LR 0.000004 Time 0.322763
+2025-05-17 03:20:56,562 - Epoch: [213][ 250/ 518] Overall Loss 2.541663 Objective Loss 2.541663 LR 0.000004 Time 0.321808
+2025-05-17 03:21:12,466 - Epoch: [213][ 300/ 518] Overall Loss 2.541099 Objective Loss 2.541099 LR 0.000004 Time 0.321183
+2025-05-17 03:21:28,399 - Epoch: [213][ 350/ 518] Overall Loss 2.536902 Objective Loss 2.536902 LR 0.000004 Time 0.320821
+2025-05-17 03:21:44,324 - Epoch: [213][ 400/ 518] Overall Loss 2.536812 Objective Loss 2.536812 LR 0.000004 Time 0.320529
+2025-05-17 03:22:00,276 - Epoch: [213][ 450/ 518] Overall Loss 2.540665 Objective Loss 2.540665 LR 0.000004 Time 0.320362
+2025-05-17 03:22:16,214 - Epoch: [213][ 500/ 518] Overall Loss 2.538781 Objective Loss 2.538781 LR 0.000004 Time 0.320201
+2025-05-17 03:22:21,837 - Epoch: [213][ 518/ 518] Overall Loss 2.541071 Objective Loss 2.541071 LR 0.000004 Time 0.319928
+2025-05-17 03:22:21,872 - --- validate (epoch=213)-----------
+2025-05-17 03:22:21,873 - 4952 samples (32 per mini-batch)
+2025-05-17 03:22:21,877 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:22:34,208 - Epoch: [213][ 50/ 155] Loss 2.908356 mAP 0.485778
+2025-05-17 03:22:46,659 - Epoch: [213][ 100/ 155] Loss 2.902766 mAP 0.490639
+2025-05-17 03:22:59,841 - Epoch: [213][ 150/ 155] Loss 2.908951 mAP 0.485831
+2025-05-17 03:23:02,820 - Epoch: [213][ 155/ 155] Loss 2.908383 mAP 0.485641
+2025-05-17 03:23:02,856 - ==> mAP: 0.48564 Loss: 2.908
+
+2025-05-17 03:23:02,867 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:23:02,867 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:23:02,965 - 
+
+2025-05-17 03:23:02,965 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:23:19,781 - Epoch: [214][ 50/ 518] Overall Loss 2.524967 Objective Loss 2.524967 LR 0.000004 Time 0.336247
+2025-05-17 03:23:35,695 - Epoch: [214][ 100/ 518] Overall Loss 2.502693 Objective Loss 2.502693 LR 0.000004 Time 0.327259
+2025-05-17 03:23:51,591 - Epoch: [214][ 150/ 518] Overall Loss 2.500703 Objective Loss 2.500703 LR 0.000004 Time 0.324139
+2025-05-17 03:24:07,532 - Epoch: [214][ 200/ 518] Overall Loss 2.524056 Objective Loss 2.524056 LR 0.000004 Time 0.322808
+2025-05-17 03:24:23,451 - Epoch: [214][ 250/ 518] Overall Loss 2.529753 Objective Loss 2.529753 LR 0.000004 Time 0.321916
+2025-05-17 03:24:39,382 - Epoch: [214][ 300/ 518] Overall Loss 2.528724 Objective Loss 2.528724 LR 0.000004 Time 0.321364
+2025-05-17 03:24:55,316 - Epoch: [214][ 350/ 518] Overall Loss 2.524134 Objective Loss 2.524134 LR 0.000004 Time 0.320978
+2025-05-17 03:25:11,240 - Epoch: [214][ 400/ 518] Overall Loss 2.524300 Objective Loss 2.524300 LR 0.000004 Time 0.320664
+2025-05-17 03:25:27,212 - Epoch: [214][ 450/ 518] Overall Loss 2.529483 Objective Loss 2.529483 LR 0.000004 Time 0.320526
+2025-05-17 03:25:43,170 - Epoch: [214][ 500/ 518] Overall Loss 2.529232 Objective Loss 2.529232 LR 0.000004 Time 0.320389
+2025-05-17 03:25:48,796 - Epoch: [214][ 518/ 518] Overall Loss 2.533407 Objective Loss 2.533407 LR 0.000004 Time 0.320116
+2025-05-17 03:25:48,830 - --- validate (epoch=214)-----------
+2025-05-17 03:25:48,831 - 4952 samples (32 per mini-batch)
+2025-05-17 03:25:48,834 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:26:01,010 - Epoch: [214][ 50/ 155] Loss 2.891298 mAP 0.489803
+2025-05-17 03:26:13,523 - Epoch: [214][ 100/ 155] Loss 2.905208 mAP 0.493622
+2025-05-17 03:26:26,498 - Epoch: [214][ 150/ 155] Loss 2.906320 mAP 0.487567
+2025-05-17 03:26:29,478 - Epoch: [214][ 155/ 155] Loss 2.912252 mAP 0.484587
+2025-05-17 03:26:29,519 - ==> mAP: 0.48459 Loss: 2.912
+
+2025-05-17 03:26:29,530 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:26:29,530 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:26:29,633 - 
+
+2025-05-17 03:26:29,633 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:26:46,395 - Epoch: [215][ 50/ 518] Overall Loss 2.583711 Objective Loss 2.583711 LR 0.000004 Time 0.335174
+2025-05-17 03:27:02,309 - Epoch: [215][ 100/ 518] Overall Loss 2.570253 Objective Loss 2.570253 LR 0.000004 Time 0.326719
+2025-05-17 03:27:18,224 - Epoch: [215][ 150/ 518] Overall Loss 2.553483 Objective Loss 2.553483 LR 0.000004 Time 0.323905
+2025-05-17 03:27:34,166 - Epoch: [215][ 200/ 518] Overall Loss 2.543183 Objective Loss 2.543183 LR 0.000004 Time 0.322639
+2025-05-17 03:27:50,105 - Epoch: [215][ 250/ 518] Overall Loss 2.534919 Objective Loss 2.534919 LR 0.000004 Time 0.321863
+2025-05-17 03:28:06,038 - Epoch: [215][ 300/ 518] Overall Loss 2.530645 Objective Loss 2.530645 LR 0.000004 Time 0.321326
+2025-05-17 03:28:21,955 - Epoch: [215][ 350/ 518] Overall Loss 2.528035 Objective Loss 2.528035 LR 0.000004 Time 0.320897
+2025-05-17 03:28:37,873 - Epoch: [215][ 400/ 518] Overall Loss 2.523306 Objective Loss 2.523306 LR 0.000004 Time 0.320577
+2025-05-17 03:28:53,784 - Epoch: [215][ 450/ 518] Overall Loss 2.522184 Objective Loss 2.522184 LR 0.000004 Time 0.320314
+2025-05-17 03:29:09,698 - Epoch: [215][ 500/ 518] Overall Loss 2.521508 Objective Loss 2.521508 LR 0.000004 Time 0.320108
+2025-05-17 03:29:15,318 - Epoch: [215][ 518/ 518] Overall Loss 2.518136 Objective Loss 2.518136 LR 0.000004 Time 0.319833
+2025-05-17 03:29:15,355 - --- validate (epoch=215)-----------
+2025-05-17 03:29:15,356 - 4952 samples (32 per mini-batch)
+2025-05-17 03:29:15,359 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:29:27,432 - Epoch: [215][ 50/ 155] Loss 2.902974 mAP 0.492115
+2025-05-17 03:29:39,923 - Epoch: [215][ 100/ 155] Loss 2.908858 mAP 0.487289
+2025-05-17 03:29:52,910 - Epoch: [215][ 150/ 155] Loss 2.902748 mAP 0.488914
+2025-05-17 03:29:55,839 - Epoch: [215][ 155/ 155] Loss 2.905546 mAP 0.488236
+2025-05-17 03:29:55,876 - ==> mAP: 0.48824 Loss: 2.906
+
+2025-05-17 03:29:55,886 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:29:55,886 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:29:55,987 - 
+
+2025-05-17 03:29:55,987 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:30:12,828 - Epoch: [216][ 50/ 518] Overall Loss 2.544421 Objective Loss 2.544421 LR 0.000004 Time 0.336752
+2025-05-17 03:30:28,734 - Epoch: [216][ 100/ 518] Overall Loss 2.546432 Objective Loss 2.546432 LR 0.000004 Time 0.327423
+2025-05-17 03:30:44,640 - Epoch: [216][ 150/ 518] Overall Loss 2.530754 Objective Loss 2.530754 LR 0.000004 Time 0.324316
+2025-05-17 03:31:00,576 - Epoch: [216][ 200/ 518] Overall Loss 2.526704 Objective Loss 2.526704 LR 0.000004 Time 0.322912
+2025-05-17 03:31:16,497 - Epoch: [216][ 250/ 518] Overall Loss 2.533483 Objective Loss 2.533483 LR 0.000004 Time 0.322010
+2025-05-17 03:31:32,410 - Epoch: [216][ 300/ 518] Overall Loss 2.532476 Objective Loss 2.532476 LR 0.000004 Time 0.321383
+2025-05-17 03:31:48,328 - Epoch: [216][ 350/ 518] Overall Loss 2.531320 Objective Loss 2.531320 LR 0.000004 Time 0.320949
+2025-05-17 03:32:04,282 - Epoch: [216][ 400/ 518] Overall Loss 2.531144 Objective Loss 2.531144 LR 0.000004 Time 0.320714
+2025-05-17 03:32:20,231 - Epoch: [216][ 450/ 518] Overall Loss 2.529272 Objective Loss 2.529272 LR 0.000004 Time 0.320520
+2025-05-17 03:32:36,191 - Epoch: [216][ 500/ 518] Overall Loss 2.534663 Objective Loss 2.534663 LR 0.000004 Time 0.320386
+2025-05-17 03:32:41,815 - Epoch: [216][ 518/ 518] Overall Loss 2.534337 Objective Loss 2.534337 LR 0.000004 Time 0.320108
+2025-05-17 03:32:41,851 - --- validate (epoch=216)-----------
+2025-05-17 03:32:41,852 - 4952 samples (32 per mini-batch)
+2025-05-17 03:32:41,855 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:32:54,173 - Epoch: [216][ 50/ 155] Loss 2.925662 mAP 0.481769
+2025-05-17 03:33:06,497 - Epoch: [216][ 100/ 155] Loss 2.897840 mAP 0.484315
+2025-05-17 03:33:19,603 - Epoch: [216][ 150/ 155] Loss 2.909280 mAP 0.485187
+2025-05-17 03:33:22,579 - Epoch: [216][ 155/ 155] Loss 2.906546 mAP 0.485627
+2025-05-17 03:33:22,618 - ==> mAP: 0.48563 Loss: 2.907
+
+2025-05-17 03:33:22,628 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:33:22,628 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:33:22,734 - 
+
+2025-05-17 03:33:22,734 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:33:39,487 - Epoch: [217][ 50/ 518] Overall Loss 2.421267 Objective Loss 2.421267 LR 0.000004 Time 0.334988
+2025-05-17 03:33:55,377 - Epoch: [217][ 100/ 518] Overall Loss 2.502912 Objective Loss 2.502912 LR 0.000004 Time 0.326393
+2025-05-17 03:34:11,305 - Epoch: [217][ 150/ 518] Overall Loss 2.509903 Objective Loss 2.509903 LR 0.000004 Time 0.323775
+2025-05-17 03:34:27,241 - Epoch: [217][ 200/ 518] Overall Loss 2.520554 Objective Loss 2.520554 LR 0.000004 Time 0.322509
+2025-05-17 03:34:43,190 - Epoch: [217][ 250/ 518] Overall Loss 2.518995 Objective Loss 2.518995 LR 0.000004 Time 0.321800
+2025-05-17 03:34:59,116 - Epoch: [217][ 300/ 518] Overall Loss 2.519604 Objective Loss 2.519604 LR 0.000004 Time 0.321251
+2025-05-17 03:35:15,023 - Epoch: [217][ 350/ 518] Overall Loss 2.522775 Objective Loss 2.522775 LR 0.000004 Time 0.320804
+2025-05-17 03:35:30,937 - Epoch: [217][ 400/ 518] Overall Loss 2.524311 Objective Loss 2.524311 LR 0.000004 Time 0.320486
+2025-05-17 03:35:46,865 - Epoch: [217][ 450/ 518] Overall Loss 2.524053 Objective Loss 2.524053 LR 0.000004 Time 0.320270
+2025-05-17 03:36:02,816 - Epoch: [217][ 500/ 518] Overall Loss 2.526155 Objective Loss 2.526155 LR 0.000004 Time 0.320142
+2025-05-17 03:36:08,433 - Epoch: [217][ 518/ 518] Overall Loss 2.529262 Objective Loss 2.529262 LR 0.000004 Time 0.319861
+2025-05-17 03:36:08,472 - --- validate (epoch=217)-----------
+2025-05-17 03:36:08,473 - 4952 samples (32 per mini-batch)
+2025-05-17 03:36:08,476 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:36:20,627 - Epoch: [217][ 50/ 155] Loss 2.918550 mAP 0.500208
+2025-05-17 03:36:32,935 - Epoch: [217][ 100/ 155] Loss 2.928542 mAP 0.482256
+2025-05-17 03:36:45,833 - Epoch: [217][ 150/ 155] Loss 2.912003 mAP 0.486182
+2025-05-17 03:36:48,714 - Epoch: [217][ 155/ 155] Loss 2.906440 mAP 0.489924
+2025-05-17 03:36:48,757 - ==> mAP: 0.48992 Loss: 2.906
+
+2025-05-17 03:36:48,767 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:36:48,767 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:36:48,866 - 
+
+2025-05-17 03:36:48,866 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:37:05,697 - Epoch: [218][ 50/ 518] Overall Loss 2.497720 Objective Loss 2.497720 LR 0.000004 Time 0.336561
+2025-05-17 03:37:21,613 - Epoch: [218][ 100/ 518] Overall Loss 2.503340 Objective Loss 2.503340 LR 0.000004 Time 0.327434
+2025-05-17 03:37:37,518 - Epoch: [218][ 150/ 518] Overall Loss 2.500189 Objective Loss 2.500189 LR 0.000004 Time 0.324314
+2025-05-17 03:37:53,452 - Epoch: [218][ 200/ 518] Overall Loss 2.505663 Objective Loss 2.505663 LR 0.000004 Time 0.322904
+2025-05-17 03:38:09,381 - Epoch: [218][ 250/ 518] Overall Loss 2.511029 Objective Loss 2.511029 LR 0.000004 Time 0.322033
+2025-05-17 03:38:25,298 - Epoch: [218][ 300/ 518] Overall Loss 2.511017 Objective Loss 2.511017 LR 0.000004 Time 0.321415
+2025-05-17 03:38:41,210 - Epoch: [218][ 350/ 518] Overall Loss 2.512140 Objective Loss 2.512140 LR 0.000004 Time 0.320956
+2025-05-17 03:38:57,132 - Epoch: [218][ 400/ 518] Overall Loss 2.516250 Objective Loss 2.516250 LR 0.000004 Time 0.320639
+2025-05-17 03:39:13,065 - Epoch: [218][ 450/ 518] Overall Loss 2.513276 Objective Loss 2.513276 LR 0.000004 Time 0.320418
+2025-05-17 03:39:29,005 - Epoch: [218][ 500/ 518] Overall Loss 2.517553 Objective Loss 2.517553 LR 0.000004 Time 0.320254
+2025-05-17 03:39:34,636 - Epoch: [218][ 518/ 518] Overall Loss 2.516703 Objective Loss 2.516703 LR 0.000004 Time 0.319994
+2025-05-17 03:39:34,675 - --- validate (epoch=218)-----------
+2025-05-17 03:39:34,676 - 4952 samples (32 per mini-batch)
+2025-05-17 03:39:34,679 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:39:46,961 - Epoch: [218][ 50/ 155] Loss 2.878721 mAP 0.495382
+2025-05-17 03:39:59,554 - Epoch: [218][ 100/ 155] Loss 2.886107 mAP 0.495722
+2025-05-17 03:40:12,689 - Epoch: [218][ 150/ 155] Loss 2.902263 mAP 0.488612
+2025-05-17 03:40:15,662 - Epoch: [218][ 155/ 155] Loss 2.906827 mAP 0.486205
+2025-05-17 03:40:15,701 - ==> mAP: 0.48621 Loss: 2.907
+
+2025-05-17 03:40:15,712 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:40:15,712 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:40:15,816 - 
+
+2025-05-17 03:40:15,817 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:40:32,537 - Epoch: [219][ 50/ 518] Overall Loss 2.496222 Objective Loss 2.496222 LR 0.000004 Time 0.334340
+2025-05-17 03:40:48,442 - Epoch: [219][ 100/ 518] Overall Loss 2.520328 Objective Loss 2.520328 LR 0.000004 Time 0.326217
+2025-05-17 03:41:04,360 - Epoch: [219][ 150/ 518] Overall Loss 2.530737 Objective Loss 2.530737 LR 0.000004 Time 0.323592
+2025-05-17 03:41:20,270 - Epoch: [219][ 200/ 518] Overall Loss 2.524210 Objective Loss 2.524210 LR 0.000004 Time 0.322235
+2025-05-17 03:41:36,189 - Epoch: [219][ 250/ 518] Overall Loss 2.529196 Objective Loss 2.529196 LR 0.000004 Time 0.321462
+2025-05-17 03:41:52,126 - Epoch: [219][ 300/ 518] Overall Loss 2.534314 Objective Loss 2.534314 LR 0.000004 Time 0.321004
+2025-05-17 03:42:08,064 - Epoch: [219][ 350/ 518] Overall Loss 2.529012 Objective Loss 2.529012 LR 0.000004 Time 0.320683
+2025-05-17 03:42:24,020 - Epoch: [219][ 400/ 518] Overall Loss 2.533943 Objective Loss 2.533943 LR 0.000004 Time 0.320484
+2025-05-17 03:42:39,992 - Epoch: [219][ 450/ 518] Overall Loss 2.527175 Objective Loss 2.527175 LR 0.000004 Time 0.320368
+2025-05-17 03:42:55,975 - Epoch: [219][ 500/ 518] Overall Loss 2.527437 Objective Loss 2.527437 LR 0.000004 Time 0.320296
+2025-05-17 03:43:01,626 - Epoch: [219][ 518/ 518] Overall Loss 2.531354 Objective Loss 2.531354 LR 0.000004 Time 0.320073
+2025-05-17 03:43:01,664 - --- validate (epoch=219)-----------
+2025-05-17 03:43:01,665 - 4952 samples (32 per mini-batch)
+2025-05-17 03:43:01,668 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:43:13,664 - Epoch: [219][ 50/ 155] Loss 2.891632 mAP 0.496570
+2025-05-17 03:43:25,960 - Epoch: [219][ 100/ 155] Loss 2.909970 mAP 0.487519
+2025-05-17 03:43:38,665 - Epoch: [219][ 150/ 155] Loss 2.923480 mAP 0.483430
+2025-05-17 03:43:41,491 - Epoch: [219][ 155/ 155] Loss 2.918981 mAP 0.484216
+2025-05-17 03:43:41,531 - ==> mAP: 0.48422 Loss: 2.919
+
+2025-05-17 03:43:41,541 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:43:41,541 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:43:41,644 - 
+
+2025-05-17 03:43:41,645 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:43:58,351 - Epoch: [220][ 50/ 518] Overall Loss 2.504890 Objective Loss 2.504890 LR 0.000004 Time 0.334051
+2025-05-17 03:44:14,246 - Epoch: [220][ 100/ 518] Overall Loss 2.526479 Objective Loss 2.526479 LR 0.000004 Time 0.325966
+2025-05-17 03:44:30,170 - Epoch: [220][ 150/ 518] Overall Loss 2.509757 Objective Loss 2.509757 LR 0.000004 Time 0.323467
+2025-05-17 03:44:46,080 - Epoch: [220][ 200/ 518] Overall Loss 2.519498 Objective Loss 2.519498 LR 0.000004 Time 0.322145
+2025-05-17 03:45:01,998 - Epoch: [220][ 250/ 518] Overall Loss 2.517006 Objective Loss 2.517006 LR 0.000004 Time 0.321382
+2025-05-17 03:45:17,933 - Epoch: [220][ 300/ 518] Overall Loss 2.520599 Objective Loss 2.520599 LR 0.000004 Time 0.320931
+2025-05-17 03:45:33,859 - Epoch: [220][ 350/ 518] Overall Loss 2.521177 Objective Loss 2.521177 LR 0.000004 Time 0.320584
+2025-05-17 03:45:49,794 - Epoch: [220][ 400/ 518] Overall Loss 2.521488 Objective Loss 2.521488 LR 0.000004 Time 0.320346
+2025-05-17 03:46:05,716 - Epoch: [220][ 450/ 518] Overall Loss 2.523473 Objective Loss 2.523473 LR 0.000004 Time 0.320132
+2025-05-17 03:46:21,638 - Epoch: [220][ 500/ 518] Overall Loss 2.524082 Objective Loss 2.524082 LR 0.000004 Time 0.319962
+2025-05-17 03:46:27,249 - Epoch: [220][ 518/ 518] Overall Loss 2.522351 Objective Loss 2.522351 LR 0.000004 Time 0.319673
+2025-05-17 03:46:27,281 - --- validate (epoch=220)-----------
+2025-05-17 03:46:27,282 - 4952 samples (32 per mini-batch)
+2025-05-17 03:46:27,286 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:46:39,512 - Epoch: [220][ 50/ 155] Loss 2.952028 mAP 0.485594
+2025-05-17 03:46:51,839 - Epoch: [220][ 100/ 155] Loss 2.923316 mAP 0.485603
+2025-05-17 03:47:04,805 - Epoch: [220][ 150/ 155] Loss 2.913418 mAP 0.487723
+2025-05-17 03:47:07,727 - Epoch: [220][ 155/ 155] Loss 2.907187 mAP 0.489792
+2025-05-17 03:47:07,765 - ==> mAP: 0.48979 Loss: 2.907
+
+2025-05-17 03:47:07,775 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:47:07,775 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:47:07,874 - 
+
+2025-05-17 03:47:07,874 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:47:24,650 - Epoch: [221][ 50/ 518] Overall Loss 2.567524 Objective Loss 2.567524 LR 0.000004 Time 0.335430
+2025-05-17 03:47:40,534 - Epoch: [221][ 100/ 518] Overall Loss 2.544135 Objective Loss 2.544135 LR 0.000004 Time 0.326555
+2025-05-17 03:47:56,433 - Epoch: [221][ 150/ 518] Overall Loss 2.519632 Objective Loss 2.519632 LR 0.000004 Time 0.323688
+2025-05-17 03:48:12,337 - Epoch: [221][ 200/ 518] Overall Loss 2.524981 Objective Loss 2.524981 LR 0.000004 Time 0.322281
+2025-05-17 03:48:28,239 - Epoch: [221][ 250/ 518] Overall Loss 2.530725 Objective Loss 2.530725 LR 0.000004 Time 0.321429
+2025-05-17 03:48:44,149 - Epoch: [221][ 300/ 518] Overall Loss 2.531812 Objective Loss 2.531812 LR 0.000004 Time 0.320887
+2025-05-17 03:49:00,058 - Epoch: [221][ 350/ 518] Overall Loss 2.524460 Objective Loss 2.524460 LR 0.000004 Time 0.320496
+2025-05-17 03:49:15,981 - Epoch: [221][ 400/ 518] Overall Loss 2.530758 Objective Loss 2.530758 LR 0.000004 Time 0.320240
+2025-05-17 03:49:31,915 - Epoch: [221][ 450/ 518] Overall Loss 2.521956 Objective Loss 2.521956 LR 0.000004 Time 0.320065
+2025-05-17 03:49:47,864 - Epoch: [221][ 500/ 518] Overall Loss 2.519787 Objective Loss 2.519787 LR 0.000004 Time 0.319955
+2025-05-17 03:49:53,475 - Epoch: [221][ 518/ 518] Overall Loss 2.522008 Objective Loss 2.522008 LR 0.000004 Time 0.319668
+2025-05-17 03:49:53,511 - --- validate (epoch=221)-----------
+2025-05-17 03:49:53,512 - 4952 samples (32 per mini-batch)
+2025-05-17 03:49:53,515 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:50:05,695 - Epoch: [221][ 50/ 155] Loss 2.895161 mAP 0.502525
+2025-05-17 03:50:18,339 - Epoch: [221][ 100/ 155] Loss 2.896491 mAP 0.488798
+2025-05-17 03:50:31,365 - Epoch: [221][ 150/ 155] Loss 2.909339 mAP 0.484694
+2025-05-17 03:50:34,292 - Epoch: [221][ 155/ 155] Loss 2.905831 mAP 0.485162
+2025-05-17 03:50:34,330 - ==> mAP: 0.48516 Loss: 2.906
+
+2025-05-17 03:50:34,340 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:50:34,340 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:50:34,437 - 
+
+2025-05-17 03:50:34,437 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:50:51,335 - Epoch: [222][ 50/ 518] Overall Loss 2.533400 Objective Loss 2.533400 LR 0.000004 Time 0.337879
+2025-05-17 03:51:07,202 - Epoch: [222][ 100/ 518] Overall Loss 2.546855 Objective Loss 2.546855 LR 0.000004 Time 0.327604
+2025-05-17 03:51:23,085 - Epoch: [222][ 150/ 518] Overall Loss 2.565608 Objective Loss 2.565608 LR 0.000004 Time 0.324279
+2025-05-17 03:51:38,992 - Epoch: [222][ 200/ 518] Overall Loss 2.548503 Objective Loss 2.548503 LR 0.000004 Time 0.322743
+2025-05-17 03:51:54,902 - Epoch: [222][ 250/ 518] Overall Loss 2.534488 Objective Loss 2.534488 LR 0.000004 Time 0.321830
+2025-05-17 03:52:10,828 - Epoch: [222][ 300/ 518] Overall Loss 2.531704 Objective Loss 2.531704 LR 0.000004 Time 0.321277
+2025-05-17 03:52:26,773 - Epoch: [222][ 350/ 518] Overall Loss 2.527974 Objective Loss 2.527974 LR 0.000004 Time 0.320934
+2025-05-17 03:52:42,727 - Epoch: [222][ 400/ 518] Overall Loss 2.520241 Objective Loss 2.520241 LR 0.000004 Time 0.320701
+2025-05-17 03:52:58,640 - Epoch: [222][ 450/ 518] Overall Loss 2.522971 Objective Loss 2.522971 LR 0.000004 Time 0.320429
+2025-05-17 03:53:14,554 - Epoch: [222][ 500/ 518] Overall Loss 2.523206 Objective Loss 2.523206 LR 0.000004 Time 0.320210
+2025-05-17 03:53:20,166 - Epoch: [222][ 518/ 518] Overall Loss 2.525815 Objective Loss 2.525815 LR 0.000004 Time 0.319918
+2025-05-17 03:53:20,206 - --- validate (epoch=222)-----------
+2025-05-17 03:53:20,206 - 4952 samples (32 per mini-batch)
+2025-05-17 03:53:20,209 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:53:32,646 - Epoch: [222][ 50/ 155] Loss 2.936788 mAP 0.484265
+2025-05-17 03:53:45,185 - Epoch: [222][ 100/ 155] Loss 2.909051 mAP 0.493051
+2025-05-17 03:53:58,300 - Epoch: [222][ 150/ 155] Loss 2.908851 mAP 0.488295
+2025-05-17 03:54:01,174 - Epoch: [222][ 155/ 155] Loss 2.914906 mAP 0.487557
+2025-05-17 03:54:01,211 - ==> mAP: 0.48756 Loss: 2.915
+
+2025-05-17 03:54:01,221 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:54:01,221 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:54:01,315 - 
+
+2025-05-17 03:54:01,316 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:54:18,001 - Epoch: [223][ 50/ 518] Overall Loss 2.536845 Objective Loss 2.536845 LR 0.000004 Time 0.333644
+2025-05-17 03:54:33,899 - Epoch: [223][ 100/ 518] Overall Loss 2.528695 Objective Loss 2.528695 LR 0.000004 Time 0.325788
+2025-05-17 03:54:49,799 - Epoch: [223][ 150/ 518] Overall Loss 2.520840 Objective Loss 2.520840 LR 0.000004 Time 0.323188
+2025-05-17 03:55:05,701 - Epoch: [223][ 200/ 518] Overall Loss 2.516625 Objective Loss 2.516625 LR 0.000004 Time 0.321895
+2025-05-17 03:55:21,605 - Epoch: [223][ 250/ 518] Overall Loss 2.527581 Objective Loss 2.527581 LR 0.000004 Time 0.321129
+2025-05-17 03:55:37,530 - Epoch: [223][ 300/ 518] Overall Loss 2.531561 Objective Loss 2.531561 LR 0.000004 Time 0.320689
+2025-05-17 03:55:53,479 - Epoch: [223][ 350/ 518] Overall Loss 2.532646 Objective Loss 2.532646 LR 0.000004 Time 0.320442
+2025-05-17 03:56:09,408 - Epoch: [223][ 400/ 518] Overall Loss 2.534500 Objective Loss 2.534500 LR 0.000004 Time 0.320207
+2025-05-17 03:56:25,341 - Epoch: [223][ 450/ 518] Overall Loss 2.539251 Objective Loss 2.539251 LR 0.000004 Time 0.320033
+2025-05-17 03:56:41,269 - Epoch: [223][ 500/ 518] Overall Loss 2.537640 Objective Loss 2.537640 LR 0.000004 Time 0.319884
+2025-05-17 03:56:46,891 - Epoch: [223][ 518/ 518] Overall Loss 2.536842 Objective Loss 2.536842 LR 0.000004 Time 0.319619
+2025-05-17 03:56:46,936 - --- validate (epoch=223)-----------
+2025-05-17 03:56:46,936 - 4952 samples (32 per mini-batch)
+2025-05-17 03:56:46,940 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 03:56:59,274 - Epoch: [223][ 50/ 155] Loss 2.939064 mAP 0.477476
+2025-05-17 03:57:12,014 - Epoch: [223][ 100/ 155] Loss 2.924056 mAP 0.486739
+2025-05-17 03:57:25,192 - Epoch: [223][ 150/ 155] Loss 2.913076 mAP 0.488213
+2025-05-17 03:57:28,024 - Epoch: [223][ 155/ 155] Loss 2.911296 mAP 0.488042
+2025-05-17 03:57:28,063 - ==> mAP: 0.48804 Loss: 2.911
+
+2025-05-17 03:57:28,073 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 03:57:28,073 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 03:57:28,169 - 
+
+2025-05-17 03:57:28,170 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 03:57:44,869 - Epoch: [224][ 50/ 518] Overall Loss 2.573953 Objective Loss 2.573953 LR 0.000004 Time 0.333918
+2025-05-17 03:58:00,741 - Epoch: [224][ 100/ 518] Overall Loss 2.524857 Objective Loss 2.524857 LR 0.000004 Time 0.325669
+2025-05-17 03:58:16,653 - Epoch: [224][ 150/ 518] Overall Loss 2.541570 Objective Loss 2.541570 LR 0.000004 Time 0.323187
+2025-05-17 03:58:32,583 - Epoch: [224][ 200/ 518] Overall Loss 2.525133 Objective Loss 2.525133 LR 0.000004 Time 0.322037
+2025-05-17 03:58:48,505 - Epoch: [224][ 250/ 518] Overall Loss 2.531989 Objective Loss 2.531989 LR 0.000004 Time 0.321314
+2025-05-17 03:59:04,441 - Epoch: [224][ 300/ 518] Overall Loss 2.528873 Objective Loss 2.528873 LR 0.000004 Time 0.320878
+2025-05-17 03:59:20,381 - Epoch: [224][ 350/ 518] Overall Loss 2.536016 Objective Loss 2.536016 LR 0.000004 Time 0.320577
+2025-05-17 03:59:36,296 - Epoch: [224][ 400/ 518] Overall Loss 2.536207 Objective Loss 2.536207 LR 0.000004 Time 0.320291
+2025-05-17 03:59:52,239 - Epoch: [224][ 450/ 518] Overall Loss 2.541211 Objective Loss 2.541211 LR 0.000004 Time 0.320130
+2025-05-17 04:00:08,195 - Epoch: [224][ 500/ 518] Overall Loss 2.542709 Objective Loss 2.542709 LR 0.000004 Time 0.320027
+2025-05-17 04:00:13,815 - Epoch: [224][ 518/ 518] Overall Loss 2.542003 Objective Loss 2.542003 LR 0.000004 Time 0.319755
+2025-05-17 04:00:13,854 - --- validate (epoch=224)-----------
+2025-05-17 04:00:13,855 - 4952 samples (32 per mini-batch)
+2025-05-17 04:00:13,859 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 04:00:26,046 - Epoch: [224][ 50/ 155] Loss 2.871020 mAP 0.479541
+2025-05-17 04:00:38,253 - Epoch: [224][ 100/ 155] Loss 2.893892 mAP 0.479739
+2025-05-17 04:00:51,135 - Epoch: [224][ 150/ 155] Loss 2.902219 mAP 0.487744
+2025-05-17 04:00:54,006 - Epoch: [224][ 155/ 155] Loss 2.907250 mAP 0.487513
+2025-05-17 04:00:54,045 - ==> mAP: 0.48751 Loss: 2.907
+
+2025-05-17 04:00:54,055 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 04:00:54,055 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 04:00:54,149 - 
+
+2025-05-17 04:00:54,150 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 04:01:11,061 - Epoch: [225][ 50/ 518] Overall Loss 2.507145 Objective Loss 2.507145 LR 0.000004 Time 0.338166
+2025-05-17 04:01:26,946 - Epoch: [225][ 100/ 518] Overall Loss 2.504729 Objective Loss 2.504729 LR 0.000004 Time 0.327922
+2025-05-17 04:01:42,841 - Epoch: [225][ 150/ 518] Overall Loss 2.509977 Objective Loss 2.509977 LR 0.000004 Time 0.324573
+2025-05-17 04:01:58,752 - Epoch: [225][ 200/ 518] Overall Loss 2.529854 Objective Loss 2.529854 LR 0.000004 Time 0.322982
+2025-05-17 04:02:14,654 - Epoch: [225][ 250/ 518] Overall Loss 2.526701 Objective Loss 2.526701 LR 0.000004 Time 0.321990
+2025-05-17 04:02:30,573 - Epoch: [225][ 300/ 518] Overall Loss 2.525908 Objective Loss 2.525908 LR 0.000004 Time 0.321382
+2025-05-17 04:02:46,516 - Epoch: [225][ 350/ 518] Overall Loss 2.534222 Objective Loss 2.534222 LR 0.000004 Time 0.321022
+2025-05-17 04:03:02,465 - Epoch: [225][ 400/ 518] Overall Loss 2.523940 Objective Loss 2.523940 LR 0.000004 Time 0.320764
+2025-05-17 04:03:18,402 - Epoch: [225][ 450/ 518] Overall Loss 2.525996 Objective Loss 2.525996 LR 0.000004 Time 0.320538
+2025-05-17 04:03:34,341 - Epoch: [225][ 500/ 518] Overall Loss 2.528210 Objective Loss 2.528210 LR 0.000004 Time 0.320359
+2025-05-17 04:03:39,964 - Epoch: [225][ 518/ 518] Overall Loss 2.528480 Objective Loss 2.528480 LR 0.000004 Time 0.320081
+2025-05-17 04:03:40,002 - --- validate (epoch=225)-----------
+2025-05-17 04:03:40,003 - 4952 samples (32 per mini-batch)
+2025-05-17 04:03:40,006 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 04:03:52,181 - Epoch: [225][ 50/ 155] Loss 2.884141 mAP 0.492942
+2025-05-17 04:04:04,715 - Epoch: [225][ 100/ 155] Loss 2.916077 mAP 0.485296
+2025-05-17 04:04:17,819 - Epoch: [225][ 150/ 155] Loss 2.910731 mAP 0.483368
+2025-05-17 04:04:20,626 - Epoch: [225][ 155/ 155] Loss 2.907874 mAP 0.484782
+2025-05-17 04:04:20,663 - ==> mAP: 0.48478 Loss: 2.908
+
+2025-05-17 04:04:20,674 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 04:04:20,674 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 04:04:20,770 - 
+
+2025-05-17 04:04:20,770 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 04:04:37,467 - Epoch: [226][ 50/ 518] Overall Loss 2.509119 Objective Loss 2.509119 LR 0.000004 Time 0.333856
+2025-05-17 04:04:53,368 - Epoch: [226][ 100/ 518] Overall Loss 2.528083 Objective Loss 2.528083 LR 0.000004 Time 0.325936
+2025-05-17 04:05:09,277 - Epoch: [226][ 150/ 518] Overall Loss 2.533298 Objective Loss 2.533298 LR 0.000004 Time 0.323342
+2025-05-17 04:05:25,183 - Epoch: [226][ 200/ 518] Overall Loss 2.519302 Objective Loss 2.519302 LR 0.000004 Time 0.322032
+2025-05-17 04:05:41,100 - Epoch: [226][ 250/ 518] Overall Loss 2.524132 Objective Loss 2.524132 LR 0.000004 Time 0.321289
+2025-05-17 04:05:57,024 - Epoch: [226][ 300/ 518] Overall Loss 2.514608 Objective Loss 2.514608 LR 0.000004 Time 0.320819
+2025-05-17 04:06:12,946 - Epoch: [226][ 350/ 518] Overall Loss 2.522625 Objective Loss 2.522625 LR 0.000004 Time 0.320474
+2025-05-17 04:06:28,898 - Epoch: [226][ 400/ 518] Overall Loss 2.526528 Objective Loss 2.526528 LR 0.000004 Time 0.320295
+2025-05-17 04:06:44,834 - Epoch: [226][ 450/ 518] Overall Loss 2.519480 Objective Loss 2.519480 LR 0.000004 Time 0.320118
+2025-05-17 04:07:00,765 - Epoch: [226][ 500/ 518] Overall Loss 2.529864 Objective Loss 2.529864 LR 0.000004 Time 0.319966
+2025-05-17 04:07:06,403 - Epoch: [226][ 518/ 518] Overall Loss 2.530426 Objective Loss 2.530426 LR 0.000004 Time 0.319730
+2025-05-17 04:07:06,437 - --- validate (epoch=226)-----------
+2025-05-17 04:07:06,438 - 4952 samples (32 per mini-batch)
+2025-05-17 04:07:06,441 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 04:07:18,701 - Epoch: [226][ 50/ 155] Loss 2.909764 mAP 0.487072
+2025-05-17 04:07:30,964 - Epoch: [226][ 100/ 155] Loss 2.923984 mAP 0.489422
+2025-05-17 04:07:44,045 - Epoch: [226][ 150/ 155] Loss 2.911407 mAP 0.487855
+2025-05-17 04:07:47,038 - Epoch: [226][ 155/ 155] Loss 2.910268 mAP 0.486711
+2025-05-17 04:07:47,077 - ==> mAP: 0.48671 Loss: 2.910
+
+2025-05-17 04:07:47,087 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 04:07:47,087 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 04:07:47,184 - 
+
+2025-05-17 04:07:47,184 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 04:08:03,955 - Epoch: [227][ 50/ 518] Overall Loss 2.535898 Objective Loss 2.535898 LR 0.000004 Time 0.335350
+2025-05-17 04:08:19,848 - Epoch: [227][ 100/ 518] Overall Loss 2.496679 Objective Loss 2.496679 LR 0.000004 Time 0.326588
+2025-05-17 04:08:35,732 - Epoch: [227][ 150/ 518] Overall Loss 2.504805 Objective Loss 2.504805 LR 0.000004 Time 0.323616
+2025-05-17 04:08:51,628 - Epoch: [227][ 200/ 518] Overall Loss 2.507218 Objective Loss 2.507218 LR 0.000004 Time 0.322185
+2025-05-17 04:09:07,548 - Epoch: [227][ 250/ 518] Overall Loss 2.515069 Objective Loss 2.515069 LR 0.000004 Time 0.321424
+2025-05-17 04:09:23,466 - Epoch: [227][ 300/ 518] Overall Loss 2.526100 Objective Loss 2.526100 LR 0.000004 Time 0.320911
+2025-05-17 04:09:39,383 - Epoch: [227][ 350/ 518] Overall Loss 2.528815 Objective Loss 2.528815 LR 0.000004 Time 0.320542
+2025-05-17 04:09:55,335 - Epoch: [227][ 400/ 518] Overall Loss 2.524583 Objective Loss 2.524583 LR 0.000004 Time 0.320350
+2025-05-17 04:10:11,270 - Epoch: [227][ 450/ 518] Overall Loss 2.524685 Objective Loss 2.524685 LR 0.000004 Time 0.320165
+2025-05-17 04:10:27,193 - Epoch: [227][ 500/ 518] Overall Loss 2.525463 Objective Loss 2.525463 LR 0.000004 Time 0.319993
+2025-05-17 04:10:32,822 - Epoch: [227][ 518/ 518] Overall Loss 2.532022 Objective Loss 2.532022 LR 0.000004 Time 0.319740
+2025-05-17 04:10:32,857 - --- validate (epoch=227)-----------
+2025-05-17 04:10:32,858 - 4952 samples (32 per mini-batch)
+2025-05-17 04:10:32,861 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 04:10:44,953 - Epoch: [227][ 50/ 155] Loss 2.914579 mAP 0.484873
+2025-05-17 04:10:57,534 - Epoch: [227][ 100/ 155] Loss 2.889751 mAP 0.479310
+2025-05-17 04:11:10,445 - Epoch: [227][ 150/ 155] Loss 2.901925 mAP 0.483702
+2025-05-17 04:11:13,154 - Epoch: [227][ 155/ 155] Loss 2.905018 mAP 0.483426
+2025-05-17 04:11:13,189 - ==> mAP: 0.48343 Loss: 2.905
+
+2025-05-17 04:11:13,199 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 04:11:13,199 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 04:11:13,294 - 
+
+2025-05-17 04:11:13,294 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 04:11:30,127 - Epoch: [228][ 50/ 518] Overall Loss 2.499128 Objective Loss 2.499128 LR 0.000004 Time 0.336581
+2025-05-17 04:11:46,020 - Epoch: [228][ 100/ 518] Overall Loss 2.503568 Objective Loss 2.503568 LR 0.000004 Time 0.327212
+2025-05-17 04:12:01,895 - Epoch: [228][ 150/ 518] Overall Loss 2.502920 Objective Loss 2.502920 LR 0.000004 Time 0.323972
+2025-05-17 04:12:17,801 - Epoch: [228][ 200/ 518] Overall Loss 2.510116 Objective Loss 2.510116 LR 0.000004 Time 0.322499
+2025-05-17 04:12:33,725 - Epoch: [228][ 250/ 518] Overall Loss 2.523204 Objective Loss 2.523204 LR 0.000004 Time 0.321692
+2025-05-17 04:12:49,696 - Epoch: [228][ 300/ 518] Overall Loss 2.535200 Objective Loss 2.535200 LR 0.000004 Time 0.321313
+2025-05-17 04:13:05,640 - Epoch: [228][ 350/ 518] Overall Loss 2.528242 Objective Loss 2.528242 LR 0.000004 Time 0.320962
+2025-05-17 04:13:21,607 - Epoch: [228][ 400/ 518] Overall Loss 2.532690 Objective Loss 2.532690 LR 0.000004 Time 0.320757
+2025-05-17 04:13:37,558 - Epoch: [228][ 450/ 518] Overall Loss 2.525722 Objective Loss 2.525722 LR 0.000004 Time 0.320562
+2025-05-17 04:13:53,514 - Epoch: [228][ 500/ 518] Overall Loss 2.530962 Objective Loss 2.530962 LR 0.000004 Time 0.320418
+2025-05-17 04:13:59,132 - Epoch: [228][ 518/ 518] Overall Loss 2.529208 Objective Loss 2.529208 LR 0.000004 Time 0.320128
+2025-05-17 04:13:59,170 - --- validate (epoch=228)-----------
+2025-05-17 04:13:59,171 - 4952 samples (32 per mini-batch)
+2025-05-17 04:13:59,174 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 04:14:11,505 - Epoch: [228][ 50/ 155] Loss 2.893510 mAP 0.504581
+2025-05-17 04:14:23,980 - Epoch: [228][ 100/ 155] Loss 2.903814 mAP 0.492242
+2025-05-17 04:14:37,082 - Epoch: [228][ 150/ 155] Loss 2.903188 mAP 0.487153
+2025-05-17 04:14:40,041 - Epoch: [228][ 155/ 155] Loss 2.904227 mAP 0.486069
+2025-05-17 04:14:40,075 - ==> mAP: 0.48607 Loss: 2.904
+
+2025-05-17 04:14:40,085 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 04:14:40,085 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 04:14:40,183 - 
+
+2025-05-17 04:14:40,183 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 04:14:56,915 - Epoch: [229][ 50/ 518] Overall Loss 2.523466 Objective Loss 2.523466 LR 0.000004 Time 0.334568
+2025-05-17 04:15:12,796 - Epoch: [229][ 100/ 518] Overall Loss 2.514883 Objective Loss 2.514883 LR 0.000004 Time 0.326082
+2025-05-17 04:15:28,689 - Epoch: [229][ 150/ 518] Overall Loss 2.511592 Objective Loss 2.511592 LR 0.000004 Time 0.323339
+2025-05-17 04:15:44,601 - Epoch: [229][ 200/ 518] Overall Loss 2.529918 Objective Loss 2.529918 LR 0.000004 Time 0.322056
+2025-05-17 04:16:00,548 - Epoch: [229][ 250/ 518] Overall Loss 2.533808 Objective Loss 2.533808 LR 0.000004 Time 0.321431
+2025-05-17 04:16:16,468 - Epoch: [229][ 300/ 518] Overall Loss 2.543915 Objective Loss 2.543915 LR 0.000004 Time 0.320923
+2025-05-17 04:16:32,389 - Epoch: [229][ 350/ 518] Overall Loss 2.543838 Objective Loss 2.543838 LR 0.000004 Time 0.320562
+2025-05-17 04:16:48,319 - Epoch: [229][ 400/ 518] Overall Loss 2.539156 Objective Loss 2.539156 LR 0.000004 Time 0.320312
+2025-05-17 04:17:04,250 - Epoch: [229][ 450/ 518] Overall Loss 2.537579 Objective Loss 2.537579 LR 0.000004 Time 0.320124
+2025-05-17 04:17:20,167 - Epoch: [229][ 500/ 518] Overall Loss 2.540160 Objective Loss 2.540160 LR 0.000004 Time 0.319943
+2025-05-17 04:17:25,790 - Epoch: [229][ 518/ 518] Overall Loss 2.534917 Objective Loss 2.534917 LR 0.000004 Time 0.319680
+2025-05-17 04:17:25,830 - --- validate (epoch=229)-----------
+2025-05-17 04:17:25,831 - 4952 samples (32 per mini-batch)
+2025-05-17 04:17:25,834 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 04:17:37,969 - Epoch: [229][ 50/ 155] Loss 2.891439 mAP 0.488950
+2025-05-17 04:17:50,531 - Epoch: [229][ 100/ 155] Loss 2.899767 mAP 0.491923
+2025-05-17 04:18:03,622 - Epoch: [229][ 150/ 155] Loss 2.912332 mAP 0.486874
+2025-05-17 04:18:06,474 - Epoch: [229][ 155/ 155] Loss 2.907398 mAP 0.486789
+2025-05-17 04:18:06,515 - ==> mAP: 0.48679 Loss: 2.907
+
+2025-05-17 04:18:06,526 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 04:18:06,526 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 04:18:06,624 - 
+
+2025-05-17 04:18:06,624 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 04:18:23,549 - Epoch: [230][ 50/ 518] Overall Loss 2.461867 Objective Loss 2.461867 LR 0.000004 Time 0.338438
+2025-05-17 04:18:39,456 - Epoch: [230][ 100/ 518] Overall Loss 2.494875 Objective Loss 2.494875 LR 0.000004 Time 0.328279
+2025-05-17 04:18:55,367 - Epoch: [230][ 150/ 518] Overall Loss 2.499089 Objective Loss 2.499089 LR 0.000004 Time 0.324924
+2025-05-17 04:19:11,308 - Epoch: [230][ 200/ 518] Overall Loss 2.507586 Objective Loss 2.507586 LR 0.000004 Time 0.323394
+2025-05-17 04:19:27,240
- Epoch: [230][ 250/ 518] Overall Loss 2.517248 Objective Loss 2.517248 LR 0.000004 Time 0.322439 +2025-05-17 04:19:43,173 - Epoch: [230][ 300/ 518] Overall Loss 2.516672 Objective Loss 2.516672 LR 0.000004 Time 0.321807 +2025-05-17 04:19:59,096 - Epoch: [230][ 350/ 518] Overall Loss 2.519304 Objective Loss 2.519304 LR 0.000004 Time 0.321323 +2025-05-17 04:20:15,026 - Epoch: [230][ 400/ 518] Overall Loss 2.522563 Objective Loss 2.522563 LR 0.000004 Time 0.320981 +2025-05-17 04:20:30,955 - Epoch: [230][ 450/ 518] Overall Loss 2.522557 Objective Loss 2.522557 LR 0.000004 Time 0.320713 +2025-05-17 04:20:46,877 - Epoch: [230][ 500/ 518] Overall Loss 2.527200 Objective Loss 2.527200 LR 0.000004 Time 0.320482 +2025-05-17 04:20:52,492 - Epoch: [230][ 518/ 518] Overall Loss 2.526346 Objective Loss 2.526346 LR 0.000004 Time 0.320185 +2025-05-17 04:20:52,534 - --- validate (epoch=230)----------- +2025-05-17 04:20:52,535 - 4952 samples (32 per mini-batch) +2025-05-17 04:20:52,538 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:21:04,594 - Epoch: [230][ 50/ 155] Loss 2.904378 mAP 0.497349 +2025-05-17 04:21:16,804 - Epoch: [230][ 100/ 155] Loss 2.912691 mAP 0.485807 +2025-05-17 04:21:29,700 - Epoch: [230][ 150/ 155] Loss 2.913394 mAP 0.485960 +2025-05-17 04:21:32,579 - Epoch: [230][ 155/ 155] Loss 2.916205 mAP 0.486757 +2025-05-17 04:21:32,621 - ==> mAP: 0.48676 Loss: 2.916 + +2025-05-17 04:21:32,631 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:21:32,631 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:21:32,726 - + +2025-05-17 04:21:32,726 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:21:49,486 - Epoch: [231][ 50/ 518] Overall Loss 2.490634 Objective Loss 2.490634 LR 0.000004 Time 0.335129 +2025-05-17 04:22:05,359 - Epoch: [231][ 100/ 518] Overall Loss 2.525821 
Objective Loss 2.525821 LR 0.000004 Time 0.326286 +2025-05-17 04:22:21,247 - Epoch: [231][ 150/ 518] Overall Loss 2.535044 Objective Loss 2.535044 LR 0.000004 Time 0.323435 +2025-05-17 04:22:37,154 - Epoch: [231][ 200/ 518] Overall Loss 2.533230 Objective Loss 2.533230 LR 0.000004 Time 0.322108 +2025-05-17 04:22:53,087 - Epoch: [231][ 250/ 518] Overall Loss 2.531884 Objective Loss 2.531884 LR 0.000004 Time 0.321416 +2025-05-17 04:23:09,010 - Epoch: [231][ 300/ 518] Overall Loss 2.528397 Objective Loss 2.528397 LR 0.000004 Time 0.320921 +2025-05-17 04:23:24,926 - Epoch: [231][ 350/ 518] Overall Loss 2.527824 Objective Loss 2.527824 LR 0.000004 Time 0.320545 +2025-05-17 04:23:40,840 - Epoch: [231][ 400/ 518] Overall Loss 2.530779 Objective Loss 2.530779 LR 0.000004 Time 0.320260 +2025-05-17 04:23:56,758 - Epoch: [231][ 450/ 518] Overall Loss 2.530013 Objective Loss 2.530013 LR 0.000004 Time 0.320045 +2025-05-17 04:24:12,691 - Epoch: [231][ 500/ 518] Overall Loss 2.534672 Objective Loss 2.534672 LR 0.000004 Time 0.319905 +2025-05-17 04:24:18,320 - Epoch: [231][ 518/ 518] Overall Loss 2.530811 Objective Loss 2.530811 LR 0.000004 Time 0.319654 +2025-05-17 04:24:18,357 - --- validate (epoch=231)----------- +2025-05-17 04:24:18,357 - 4952 samples (32 per mini-batch) +2025-05-17 04:24:18,361 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:24:30,764 - Epoch: [231][ 50/ 155] Loss 2.916951 mAP 0.489359 +2025-05-17 04:24:43,414 - Epoch: [231][ 100/ 155] Loss 2.918648 mAP 0.490950 +2025-05-17 04:24:56,758 - Epoch: [231][ 150/ 155] Loss 2.905965 mAP 0.489026 +2025-05-17 04:24:59,762 - Epoch: [231][ 155/ 155] Loss 2.908445 mAP 0.485866 +2025-05-17 04:24:59,803 - ==> mAP: 0.48587 Loss: 2.908 + +2025-05-17 04:24:59,813 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:24:59,814 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:24:59,911 - + +2025-05-17 04:24:59,911 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:25:16,553 - Epoch: [232][ 50/ 518] Overall Loss 2.534406 Objective Loss 2.534406 LR 0.000004 Time 0.332763 +2025-05-17 04:25:32,426 - Epoch: [232][ 100/ 518] Overall Loss 2.532979 Objective Loss 2.532979 LR 0.000004 Time 0.325101 +2025-05-17 04:25:48,340 - Epoch: [232][ 150/ 518] Overall Loss 2.528961 Objective Loss 2.528961 LR 0.000004 Time 0.322822 +2025-05-17 04:26:04,240 - Epoch: [232][ 200/ 518] Overall Loss 2.537710 Objective Loss 2.537710 LR 0.000004 Time 0.321614 +2025-05-17 04:26:20,154 - Epoch: [232][ 250/ 518] Overall Loss 2.535558 Objective Loss 2.535558 LR 0.000004 Time 0.320940 +2025-05-17 04:26:36,066 - Epoch: [232][ 300/ 518] Overall Loss 2.535768 Objective Loss 2.535768 LR 0.000004 Time 0.320486 +2025-05-17 04:26:51,996 - Epoch: [232][ 350/ 518] Overall Loss 2.538906 Objective Loss 2.538906 LR 0.000004 Time 0.320213 +2025-05-17 04:27:07,944 - Epoch: [232][ 400/ 518] Overall Loss 2.539725 Objective Loss 2.539725 LR 0.000004 Time 0.320055 +2025-05-17 04:27:23,878 - Epoch: [232][ 450/ 518] Overall Loss 2.533667 Objective Loss 2.533667 LR 0.000004 Time 0.319900 +2025-05-17 04:27:39,813 - Epoch: [232][ 500/ 518] Overall Loss 2.533888 Objective Loss 2.533888 LR 0.000004 Time 0.319777 +2025-05-17 04:27:45,448 - Epoch: [232][ 518/ 518] Overall Loss 2.535393 Objective Loss 2.535393 LR 0.000004 Time 0.319543 +2025-05-17 04:27:45,488 - --- validate (epoch=232)----------- +2025-05-17 04:27:45,489 - 4952 samples (32 per mini-batch) +2025-05-17 04:27:45,492 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:27:57,743 - Epoch: [232][ 50/ 155] Loss 2.919049 mAP 0.499051 +2025-05-17 04:28:10,350 - Epoch: [232][ 100/ 155] Loss 2.922135 mAP 0.486713 +2025-05-17 04:28:23,348 - Epoch: [232][ 
150/ 155] Loss 2.908871 mAP 0.489331 +2025-05-17 04:28:26,102 - Epoch: [232][ 155/ 155] Loss 2.908918 mAP 0.488637 +2025-05-17 04:28:26,142 - ==> mAP: 0.48864 Loss: 2.909 + +2025-05-17 04:28:26,287 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:28:26,287 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:28:26,382 - + +2025-05-17 04:28:26,382 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:28:43,262 - Epoch: [233][ 50/ 518] Overall Loss 2.535529 Objective Loss 2.535529 LR 0.000004 Time 0.337531 +2025-05-17 04:28:59,160 - Epoch: [233][ 100/ 518] Overall Loss 2.524632 Objective Loss 2.524632 LR 0.000004 Time 0.327734 +2025-05-17 04:29:15,059 - Epoch: [233][ 150/ 518] Overall Loss 2.532930 Objective Loss 2.532930 LR 0.000004 Time 0.324481 +2025-05-17 04:29:30,960 - Epoch: [233][ 200/ 518] Overall Loss 2.545228 Objective Loss 2.545228 LR 0.000004 Time 0.322859 +2025-05-17 04:29:46,871 - Epoch: [233][ 250/ 518] Overall Loss 2.540245 Objective Loss 2.540245 LR 0.000004 Time 0.321930 +2025-05-17 04:30:02,781 - Epoch: [233][ 300/ 518] Overall Loss 2.541935 Objective Loss 2.541935 LR 0.000004 Time 0.321304 +2025-05-17 04:30:18,725 - Epoch: [233][ 350/ 518] Overall Loss 2.542729 Objective Loss 2.542729 LR 0.000004 Time 0.320954 +2025-05-17 04:30:34,670 - Epoch: [233][ 400/ 518] Overall Loss 2.539396 Objective Loss 2.539396 LR 0.000004 Time 0.320697 +2025-05-17 04:30:50,595 - Epoch: [233][ 450/ 518] Overall Loss 2.527238 Objective Loss 2.527238 LR 0.000004 Time 0.320450 +2025-05-17 04:31:06,519 - Epoch: [233][ 500/ 518] Overall Loss 2.524846 Objective Loss 2.524846 LR 0.000004 Time 0.320251 +2025-05-17 04:31:12,132 - Epoch: [233][ 518/ 518] Overall Loss 2.528302 Objective Loss 2.528302 LR 0.000004 Time 0.319957 +2025-05-17 04:31:12,166 - --- validate (epoch=233)----------- +2025-05-17 04:31:12,167 - 4952 samples (32 per mini-batch) +2025-05-17 
04:31:12,170 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:31:24,284 - Epoch: [233][ 50/ 155] Loss 2.907074 mAP 0.486238 +2025-05-17 04:31:36,864 - Epoch: [233][ 100/ 155] Loss 2.894519 mAP 0.485864 +2025-05-17 04:31:49,869 - Epoch: [233][ 150/ 155] Loss 2.916049 mAP 0.484695 +2025-05-17 04:31:52,804 - Epoch: [233][ 155/ 155] Loss 2.911286 mAP 0.484533 +2025-05-17 04:31:52,844 - ==> mAP: 0.48453 Loss: 2.911 + +2025-05-17 04:31:52,854 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:31:52,855 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:31:52,952 - + +2025-05-17 04:31:52,953 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:32:09,545 - Epoch: [234][ 50/ 518] Overall Loss 2.502668 Objective Loss 2.502668 LR 0.000004 Time 0.331784 +2025-05-17 04:32:25,425 - Epoch: [234][ 100/ 518] Overall Loss 2.524274 Objective Loss 2.524274 LR 0.000004 Time 0.324676 +2025-05-17 04:32:41,299 - Epoch: [234][ 150/ 518] Overall Loss 2.507938 Objective Loss 2.507938 LR 0.000004 Time 0.322275 +2025-05-17 04:32:57,229 - Epoch: [234][ 200/ 518] Overall Loss 2.523567 Objective Loss 2.523567 LR 0.000004 Time 0.321351 +2025-05-17 04:33:13,171 - Epoch: [234][ 250/ 518] Overall Loss 2.521898 Objective Loss 2.521898 LR 0.000004 Time 0.320844 +2025-05-17 04:33:29,109 - Epoch: [234][ 300/ 518] Overall Loss 2.520166 Objective Loss 2.520166 LR 0.000004 Time 0.320494 +2025-05-17 04:33:45,052 - Epoch: [234][ 350/ 518] Overall Loss 2.515759 Objective Loss 2.515759 LR 0.000004 Time 0.320260 +2025-05-17 04:34:01,002 - Epoch: [234][ 400/ 518] Overall Loss 2.517804 Objective Loss 2.517804 LR 0.000004 Time 0.320102 +2025-05-17 04:34:16,937 - Epoch: [234][ 450/ 518] Overall Loss 2.514186 Objective Loss 2.514186 LR 0.000004 Time 0.319942 +2025-05-17 04:34:32,856 - Epoch: [234][ 500/ 518] Overall 
Loss 2.516202 Objective Loss 2.516202 LR 0.000004 Time 0.319785 +2025-05-17 04:34:38,468 - Epoch: [234][ 518/ 518] Overall Loss 2.514131 Objective Loss 2.514131 LR 0.000004 Time 0.319505 +2025-05-17 04:34:38,506 - --- validate (epoch=234)----------- +2025-05-17 04:34:38,507 - 4952 samples (32 per mini-batch) +2025-05-17 04:34:38,510 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:34:50,684 - Epoch: [234][ 50/ 155] Loss 2.887321 mAP 0.486374 +2025-05-17 04:35:02,991 - Epoch: [234][ 100/ 155] Loss 2.911060 mAP 0.489243 +2025-05-17 04:35:16,087 - Epoch: [234][ 150/ 155] Loss 2.914478 mAP 0.487258 +2025-05-17 04:35:19,075 - Epoch: [234][ 155/ 155] Loss 2.911450 mAP 0.487298 +2025-05-17 04:35:19,113 - ==> mAP: 0.48730 Loss: 2.911 + +2025-05-17 04:35:19,124 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:35:19,124 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:35:19,221 - + +2025-05-17 04:35:19,222 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:35:36,059 - Epoch: [235][ 50/ 518] Overall Loss 2.551486 Objective Loss 2.551486 LR 0.000004 Time 0.336670 +2025-05-17 04:35:51,943 - Epoch: [235][ 100/ 518] Overall Loss 2.532421 Objective Loss 2.532421 LR 0.000004 Time 0.327175 +2025-05-17 04:36:07,835 - Epoch: [235][ 150/ 518] Overall Loss 2.528067 Objective Loss 2.528067 LR 0.000004 Time 0.324055 +2025-05-17 04:36:23,761 - Epoch: [235][ 200/ 518] Overall Loss 2.534189 Objective Loss 2.534189 LR 0.000004 Time 0.322667 +2025-05-17 04:36:39,711 - Epoch: [235][ 250/ 518] Overall Loss 2.534552 Objective Loss 2.534552 LR 0.000004 Time 0.321933 +2025-05-17 04:36:55,650 - Epoch: [235][ 300/ 518] Overall Loss 2.528790 Objective Loss 2.528790 LR 0.000004 Time 0.321404 +2025-05-17 04:37:11,604 - Epoch: [235][ 350/ 518] Overall Loss 2.523539 Objective Loss 2.523539 LR 
0.000004 Time 0.321070 +2025-05-17 04:37:27,554 - Epoch: [235][ 400/ 518] Overall Loss 2.533485 Objective Loss 2.533485 LR 0.000004 Time 0.320809 +2025-05-17 04:37:43,488 - Epoch: [235][ 450/ 518] Overall Loss 2.527391 Objective Loss 2.527391 LR 0.000004 Time 0.320571 +2025-05-17 04:37:59,411 - Epoch: [235][ 500/ 518] Overall Loss 2.531933 Objective Loss 2.531933 LR 0.000004 Time 0.320359 +2025-05-17 04:38:05,045 - Epoch: [235][ 518/ 518] Overall Loss 2.531242 Objective Loss 2.531242 LR 0.000004 Time 0.320101 +2025-05-17 04:38:05,082 - --- validate (epoch=235)----------- +2025-05-17 04:38:05,083 - 4952 samples (32 per mini-batch) +2025-05-17 04:38:05,086 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:38:17,073 - Epoch: [235][ 50/ 155] Loss 2.894167 mAP 0.523592 +2025-05-17 04:38:29,467 - Epoch: [235][ 100/ 155] Loss 2.904198 mAP 0.492966 +2025-05-17 04:38:42,356 - Epoch: [235][ 150/ 155] Loss 2.910407 mAP 0.485226 +2025-05-17 04:38:45,229 - Epoch: [235][ 155/ 155] Loss 2.906890 mAP 0.487536 +2025-05-17 04:38:45,264 - ==> mAP: 0.48754 Loss: 2.907 + +2025-05-17 04:38:45,275 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:38:45,275 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:38:45,370 - + +2025-05-17 04:38:45,370 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:39:02,329 - Epoch: [236][ 50/ 518] Overall Loss 2.478193 Objective Loss 2.478193 LR 0.000004 Time 0.339104 +2025-05-17 04:39:18,221 - Epoch: [236][ 100/ 518] Overall Loss 2.516116 Objective Loss 2.516116 LR 0.000004 Time 0.328468 +2025-05-17 04:39:34,120 - Epoch: [236][ 150/ 518] Overall Loss 2.529077 Objective Loss 2.529077 LR 0.000004 Time 0.324964 +2025-05-17 04:39:50,036 - Epoch: [236][ 200/ 518] Overall Loss 2.535728 Objective Loss 2.535728 LR 0.000004 Time 0.323302 +2025-05-17 04:40:05,955 
- Epoch: [236][ 250/ 518] Overall Loss 2.544603 Objective Loss 2.544603 LR 0.000004 Time 0.322311 +2025-05-17 04:40:21,864 - Epoch: [236][ 300/ 518] Overall Loss 2.526408 Objective Loss 2.526408 LR 0.000004 Time 0.321620 +2025-05-17 04:40:37,797 - Epoch: [236][ 350/ 518] Overall Loss 2.527601 Objective Loss 2.527601 LR 0.000004 Time 0.321196 +2025-05-17 04:40:53,732 - Epoch: [236][ 400/ 518] Overall Loss 2.534398 Objective Loss 2.534398 LR 0.000004 Time 0.320882 +2025-05-17 04:41:09,667 - Epoch: [236][ 450/ 518] Overall Loss 2.536917 Objective Loss 2.536917 LR 0.000004 Time 0.320637 +2025-05-17 04:41:25,592 - Epoch: [236][ 500/ 518] Overall Loss 2.532973 Objective Loss 2.532973 LR 0.000004 Time 0.320423 +2025-05-17 04:41:31,204 - Epoch: [236][ 518/ 518] Overall Loss 2.529651 Objective Loss 2.529651 LR 0.000004 Time 0.320121 +2025-05-17 04:41:31,243 - --- validate (epoch=236)----------- +2025-05-17 04:41:31,244 - 4952 samples (32 per mini-batch) +2025-05-17 04:41:31,247 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:41:43,438 - Epoch: [236][ 50/ 155] Loss 2.912030 mAP 0.493976 +2025-05-17 04:41:55,805 - Epoch: [236][ 100/ 155] Loss 2.911474 mAP 0.485977 +2025-05-17 04:42:08,867 - Epoch: [236][ 150/ 155] Loss 2.913281 mAP 0.484614 +2025-05-17 04:42:11,759 - Epoch: [236][ 155/ 155] Loss 2.906987 mAP 0.485054 +2025-05-17 04:42:11,800 - ==> mAP: 0.48505 Loss: 2.907 + +2025-05-17 04:42:11,811 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:42:11,811 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:42:11,905 - + +2025-05-17 04:42:11,905 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:42:28,724 - Epoch: [237][ 50/ 518] Overall Loss 2.511035 Objective Loss 2.511035 LR 0.000004 Time 0.336303 +2025-05-17 04:42:44,633 - Epoch: [237][ 100/ 518] Overall Loss 2.524282 
Objective Loss 2.524282 LR 0.000004 Time 0.327238 +2025-05-17 04:43:00,541 - Epoch: [237][ 150/ 518] Overall Loss 2.547344 Objective Loss 2.547344 LR 0.000004 Time 0.324205 +2025-05-17 04:43:16,479 - Epoch: [237][ 200/ 518] Overall Loss 2.525481 Objective Loss 2.525481 LR 0.000004 Time 0.322840 +2025-05-17 04:43:32,423 - Epoch: [237][ 250/ 518] Overall Loss 2.517007 Objective Loss 2.517007 LR 0.000004 Time 0.322044 +2025-05-17 04:43:48,372 - Epoch: [237][ 300/ 518] Overall Loss 2.511376 Objective Loss 2.511376 LR 0.000004 Time 0.321533 +2025-05-17 04:44:04,298 - Epoch: [237][ 350/ 518] Overall Loss 2.509672 Objective Loss 2.509672 LR 0.000004 Time 0.321099 +2025-05-17 04:44:20,236 - Epoch: [237][ 400/ 518] Overall Loss 2.518305 Objective Loss 2.518305 LR 0.000004 Time 0.320804 +2025-05-17 04:44:36,162 - Epoch: [237][ 450/ 518] Overall Loss 2.521360 Objective Loss 2.521360 LR 0.000004 Time 0.320548 +2025-05-17 04:44:52,090 - Epoch: [237][ 500/ 518] Overall Loss 2.523388 Objective Loss 2.523388 LR 0.000004 Time 0.320346 +2025-05-17 04:44:57,728 - Epoch: [237][ 518/ 518] Overall Loss 2.525864 Objective Loss 2.525864 LR 0.000004 Time 0.320099 +2025-05-17 04:44:57,769 - --- validate (epoch=237)----------- +2025-05-17 04:44:57,770 - 4952 samples (32 per mini-batch) +2025-05-17 04:44:57,773 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:45:09,790 - Epoch: [237][ 50/ 155] Loss 2.906657 mAP 0.498276 +2025-05-17 04:45:22,094 - Epoch: [237][ 100/ 155] Loss 2.915141 mAP 0.486723 +2025-05-17 04:45:34,985 - Epoch: [237][ 150/ 155] Loss 2.907162 mAP 0.487856 +2025-05-17 04:45:37,712 - Epoch: [237][ 155/ 155] Loss 2.911216 mAP 0.487532 +2025-05-17 04:45:37,747 - ==> mAP: 0.48753 Loss: 2.911 + +2025-05-17 04:45:37,757 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:45:37,757 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:45:37,848 - + +2025-05-17 04:45:37,848 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:45:54,685 - Epoch: [238][ 50/ 518] Overall Loss 2.514485 Objective Loss 2.514485 LR 0.000004 Time 0.336673 +2025-05-17 04:46:10,560 - Epoch: [238][ 100/ 518] Overall Loss 2.484765 Objective Loss 2.484765 LR 0.000004 Time 0.327071 +2025-05-17 04:46:26,470 - Epoch: [238][ 150/ 518] Overall Loss 2.495255 Objective Loss 2.495255 LR 0.000004 Time 0.324111 +2025-05-17 04:46:42,376 - Epoch: [238][ 200/ 518] Overall Loss 2.511693 Objective Loss 2.511693 LR 0.000004 Time 0.322609 +2025-05-17 04:46:58,281 - Epoch: [238][ 250/ 518] Overall Loss 2.508329 Objective Loss 2.508329 LR 0.000004 Time 0.321701 +2025-05-17 04:47:14,216 - Epoch: [238][ 300/ 518] Overall Loss 2.509522 Objective Loss 2.509522 LR 0.000004 Time 0.321200 +2025-05-17 04:47:30,153 - Epoch: [238][ 350/ 518] Overall Loss 2.513204 Objective Loss 2.513204 LR 0.000004 Time 0.320847 +2025-05-17 04:47:46,097 - Epoch: [238][ 400/ 518] Overall Loss 2.516354 Objective Loss 2.516354 LR 0.000004 Time 0.320598 +2025-05-17 04:48:02,051 - Epoch: [238][ 450/ 518] Overall Loss 2.521181 Objective Loss 2.521181 LR 0.000004 Time 0.320427 +2025-05-17 04:48:18,001 - Epoch: [238][ 500/ 518] Overall Loss 2.520827 Objective Loss 2.520827 LR 0.000004 Time 0.320284 +2025-05-17 04:48:23,638 - Epoch: [238][ 518/ 518] Overall Loss 2.521476 Objective Loss 2.521476 LR 0.000004 Time 0.320035 +2025-05-17 04:48:23,678 - --- validate (epoch=238)----------- +2025-05-17 04:48:23,679 - 4952 samples (32 per mini-batch) +2025-05-17 04:48:23,683 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:48:35,883 - Epoch: [238][ 50/ 155] Loss 2.881405 mAP 0.483737 +2025-05-17 04:48:48,139 - Epoch: [238][ 100/ 155] Loss 2.900096 mAP 0.483981 +2025-05-17 04:49:01,099 - Epoch: [238][ 
150/ 155] Loss 2.916192 mAP 0.486694 +2025-05-17 04:49:03,949 - Epoch: [238][ 155/ 155] Loss 2.912744 mAP 0.487923 +2025-05-17 04:49:03,990 - ==> mAP: 0.48792 Loss: 2.913 + +2025-05-17 04:49:04,000 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:49:04,000 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:49:04,094 - + +2025-05-17 04:49:04,094 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:49:20,930 - Epoch: [239][ 50/ 518] Overall Loss 2.521668 Objective Loss 2.521668 LR 0.000004 Time 0.336650 +2025-05-17 04:49:36,814 - Epoch: [239][ 100/ 518] Overall Loss 2.551936 Objective Loss 2.551936 LR 0.000004 Time 0.327151 +2025-05-17 04:49:52,689 - Epoch: [239][ 150/ 518] Overall Loss 2.543106 Objective Loss 2.543106 LR 0.000004 Time 0.323928 +2025-05-17 04:50:08,606 - Epoch: [239][ 200/ 518] Overall Loss 2.548621 Objective Loss 2.548621 LR 0.000004 Time 0.322526 +2025-05-17 04:50:24,539 - Epoch: [239][ 250/ 518] Overall Loss 2.545781 Objective Loss 2.545781 LR 0.000004 Time 0.321747 +2025-05-17 04:50:40,455 - Epoch: [239][ 300/ 518] Overall Loss 2.537912 Objective Loss 2.537912 LR 0.000004 Time 0.321175 +2025-05-17 04:50:56,380 - Epoch: [239][ 350/ 518] Overall Loss 2.540333 Objective Loss 2.540333 LR 0.000004 Time 0.320788 +2025-05-17 04:51:12,332 - Epoch: [239][ 400/ 518] Overall Loss 2.534053 Objective Loss 2.534053 LR 0.000004 Time 0.320568 +2025-05-17 04:51:28,268 - Epoch: [239][ 450/ 518] Overall Loss 2.533724 Objective Loss 2.533724 LR 0.000004 Time 0.320362 +2025-05-17 04:51:44,212 - Epoch: [239][ 500/ 518] Overall Loss 2.530761 Objective Loss 2.530761 LR 0.000004 Time 0.320212 +2025-05-17 04:51:49,828 - Epoch: [239][ 518/ 518] Overall Loss 2.531501 Objective Loss 2.531501 LR 0.000004 Time 0.319927 +2025-05-17 04:51:49,867 - --- validate (epoch=239)----------- +2025-05-17 04:51:49,868 - 4952 samples (32 per mini-batch) +2025-05-17 
04:51:49,871 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:52:02,067 - Epoch: [239][ 50/ 155] Loss 2.929426 mAP 0.479313 +2025-05-17 04:52:14,335 - Epoch: [239][ 100/ 155] Loss 2.931857 mAP 0.476528 +2025-05-17 04:52:27,222 - Epoch: [239][ 150/ 155] Loss 2.903893 mAP 0.483671 +2025-05-17 04:52:30,108 - Epoch: [239][ 155/ 155] Loss 2.904996 mAP 0.482562 +2025-05-17 04:52:30,145 - ==> mAP: 0.48256 Loss: 2.905 + +2025-05-17 04:52:30,156 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:52:30,156 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:52:30,250 - + +2025-05-17 04:52:30,251 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:52:46,964 - Epoch: [240][ 50/ 518] Overall Loss 2.550417 Objective Loss 2.550417 LR 0.000004 Time 0.334211 +2025-05-17 04:53:02,861 - Epoch: [240][ 100/ 518] Overall Loss 2.544517 Objective Loss 2.544517 LR 0.000004 Time 0.326068 +2025-05-17 04:53:18,766 - Epoch: [240][ 150/ 518] Overall Loss 2.529809 Objective Loss 2.529809 LR 0.000004 Time 0.323409 +2025-05-17 04:53:34,682 - Epoch: [240][ 200/ 518] Overall Loss 2.534790 Objective Loss 2.534790 LR 0.000004 Time 0.322130 +2025-05-17 04:53:50,590 - Epoch: [240][ 250/ 518] Overall Loss 2.540393 Objective Loss 2.540393 LR 0.000004 Time 0.321332 +2025-05-17 04:54:06,508 - Epoch: [240][ 300/ 518] Overall Loss 2.535588 Objective Loss 2.535588 LR 0.000004 Time 0.320832 +2025-05-17 04:54:22,448 - Epoch: [240][ 350/ 518] Overall Loss 2.527718 Objective Loss 2.527718 LR 0.000004 Time 0.320541 +2025-05-17 04:54:38,375 - Epoch: [240][ 400/ 518] Overall Loss 2.519372 Objective Loss 2.519372 LR 0.000004 Time 0.320286 +2025-05-17 04:54:54,303 - Epoch: [240][ 450/ 518] Overall Loss 2.520140 Objective Loss 2.520140 LR 0.000004 Time 0.320094 +2025-05-17 04:55:10,230 - Epoch: [240][ 500/ 518] Overall 
Loss 2.527326 Objective Loss 2.527326 LR 0.000004 Time 0.319935 +2025-05-17 04:55:15,851 - Epoch: [240][ 518/ 518] Overall Loss 2.529797 Objective Loss 2.529797 LR 0.000004 Time 0.319669 +2025-05-17 04:55:15,889 - --- validate (epoch=240)----------- +2025-05-17 04:55:15,890 - 4952 samples (32 per mini-batch) +2025-05-17 04:55:15,893 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:55:28,010 - Epoch: [240][ 50/ 155] Loss 2.894952 mAP 0.477438 +2025-05-17 04:55:40,577 - Epoch: [240][ 100/ 155] Loss 2.914025 mAP 0.481344 +2025-05-17 04:55:53,717 - Epoch: [240][ 150/ 155] Loss 2.912255 mAP 0.485472 +2025-05-17 04:55:56,516 - Epoch: [240][ 155/ 155] Loss 2.912829 mAP 0.485703 +2025-05-17 04:55:56,555 - ==> mAP: 0.48570 Loss: 2.913 + +2025-05-17 04:55:56,566 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:55:56,566 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:55:56,663 - + +2025-05-17 04:55:56,663 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:56:13,625 - Epoch: [241][ 50/ 518] Overall Loss 2.602483 Objective Loss 2.602483 LR 0.000004 Time 0.339171 +2025-05-17 04:56:29,514 - Epoch: [241][ 100/ 518] Overall Loss 2.562959 Objective Loss 2.562959 LR 0.000004 Time 0.328465 +2025-05-17 04:56:45,393 - Epoch: [241][ 150/ 518] Overall Loss 2.543440 Objective Loss 2.543440 LR 0.000004 Time 0.324835 +2025-05-17 04:57:01,329 - Epoch: [241][ 200/ 518] Overall Loss 2.534553 Objective Loss 2.534553 LR 0.000004 Time 0.323304 +2025-05-17 04:57:17,289 - Epoch: [241][ 250/ 518] Overall Loss 2.534683 Objective Loss 2.534683 LR 0.000004 Time 0.322478 +2025-05-17 04:57:33,224 - Epoch: [241][ 300/ 518] Overall Loss 2.535888 Objective Loss 2.535888 LR 0.000004 Time 0.321848 +2025-05-17 04:57:49,155 - Epoch: [241][ 350/ 518] Overall Loss 2.533703 Objective Loss 2.533703 LR 
0.000004 Time 0.321385 +2025-05-17 04:58:05,087 - Epoch: [241][ 400/ 518] Overall Loss 2.526101 Objective Loss 2.526101 LR 0.000004 Time 0.321041 +2025-05-17 04:58:21,006 - Epoch: [241][ 450/ 518] Overall Loss 2.524736 Objective Loss 2.524736 LR 0.000004 Time 0.320743 +2025-05-17 04:58:36,953 - Epoch: [241][ 500/ 518] Overall Loss 2.530170 Objective Loss 2.530170 LR 0.000004 Time 0.320561 +2025-05-17 04:58:42,577 - Epoch: [241][ 518/ 518] Overall Loss 2.534007 Objective Loss 2.534007 LR 0.000004 Time 0.320278 +2025-05-17 04:58:42,615 - --- validate (epoch=241)----------- +2025-05-17 04:58:42,616 - 4952 samples (32 per mini-batch) +2025-05-17 04:58:42,619 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 04:58:54,812 - Epoch: [241][ 50/ 155] Loss 2.933359 mAP 0.485422 +2025-05-17 04:59:07,254 - Epoch: [241][ 100/ 155] Loss 2.905197 mAP 0.487509 +2025-05-17 04:59:20,304 - Epoch: [241][ 150/ 155] Loss 2.912469 mAP 0.483074 +2025-05-17 04:59:23,165 - Epoch: [241][ 155/ 155] Loss 2.908283 mAP 0.487257 +2025-05-17 04:59:23,204 - ==> mAP: 0.48726 Loss: 2.908 + +2025-05-17 04:59:23,214 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172] +2025-05-17 04:59:23,214 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar +2025-05-17 04:59:23,309 - + +2025-05-17 04:59:23,309 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 04:59:40,139 - Epoch: [242][ 50/ 518] Overall Loss 2.510179 Objective Loss 2.510179 LR 0.000004 Time 0.336535 +2025-05-17 04:59:56,039 - Epoch: [242][ 100/ 518] Overall Loss 2.512729 Objective Loss 2.512729 LR 0.000004 Time 0.327262 +2025-05-17 05:00:11,930 - Epoch: [242][ 150/ 518] Overall Loss 2.525265 Objective Loss 2.525265 LR 0.000004 Time 0.324109 +2025-05-17 05:00:27,834 - Epoch: [242][ 200/ 518] Overall Loss 2.531772 Objective Loss 2.531772 LR 0.000004 Time 0.322596 +2025-05-17 05:00:43,734 
- Epoch: [242][ 250/ 518] Overall Loss 2.525094 Objective Loss 2.525094 LR 0.000004 Time 0.321670
+2025-05-17 05:00:59,664 - Epoch: [242][ 300/ 518] Overall Loss 2.525325 Objective Loss 2.525325 LR 0.000004 Time 0.321156
+2025-05-17 05:01:15,583 - Epoch: [242][ 350/ 518] Overall Loss 2.523333 Objective Loss 2.523333 LR 0.000004 Time 0.320756
+2025-05-17 05:01:31,501 - Epoch: [242][ 400/ 518] Overall Loss 2.527028 Objective Loss 2.527028 LR 0.000004 Time 0.320455
+2025-05-17 05:01:47,453 - Epoch: [242][ 450/ 518] Overall Loss 2.525964 Objective Loss 2.525964 LR 0.000004 Time 0.320296
+2025-05-17 05:02:03,383 - Epoch: [242][ 500/ 518] Overall Loss 2.521141 Objective Loss 2.521141 LR 0.000004 Time 0.320125
+2025-05-17 05:02:08,989 - Epoch: [242][ 518/ 518] Overall Loss 2.524705 Objective Loss 2.524705 LR 0.000004 Time 0.319823
+2025-05-17 05:02:09,022 - --- validate (epoch=242)-----------
+2025-05-17 05:02:09,023 - 4952 samples (32 per mini-batch)
+2025-05-17 05:02:09,026 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:02:21,461 - Epoch: [242][ 50/ 155] Loss 2.879759 mAP 0.490842
+2025-05-17 05:02:33,814 - Epoch: [242][ 100/ 155] Loss 2.902096 mAP 0.483947
+2025-05-17 05:02:46,961 - Epoch: [242][ 150/ 155] Loss 2.905555 mAP 0.487239
+2025-05-17 05:02:49,884 - Epoch: [242][ 155/ 155] Loss 2.909560 mAP 0.487136
+2025-05-17 05:02:49,918 - ==> mAP: 0.48714 Loss: 2.910
+
+2025-05-17 05:02:49,929 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 05:02:49,929 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 05:02:50,021 - 
+
+2025-05-17 05:02:50,021 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:03:07,014 - Epoch: [243][ 50/ 518] Overall Loss 2.566396 Objective Loss 2.566396 LR 0.000004 Time 0.339800
+2025-05-17 05:03:22,901 - Epoch: [243][ 100/ 518] Overall Loss 2.539072 Objective Loss 2.539072 LR 0.000004 Time 0.328757
+2025-05-17 05:03:38,801 - Epoch: [243][ 150/ 518] Overall Loss 2.538687 Objective Loss 2.538687 LR 0.000004 Time 0.325163
+2025-05-17 05:03:54,715 - Epoch: [243][ 200/ 518] Overall Loss 2.531182 Objective Loss 2.531182 LR 0.000004 Time 0.323440
+2025-05-17 05:04:10,634 - Epoch: [243][ 250/ 518] Overall Loss 2.538465 Objective Loss 2.538465 LR 0.000004 Time 0.322424
+2025-05-17 05:04:26,604 - Epoch: [243][ 300/ 518] Overall Loss 2.527838 Objective Loss 2.527838 LR 0.000004 Time 0.321919
+2025-05-17 05:04:42,552 - Epoch: [243][ 350/ 518] Overall Loss 2.526455 Objective Loss 2.526455 LR 0.000004 Time 0.321492
+2025-05-17 05:04:58,508 - Epoch: [243][ 400/ 518] Overall Loss 2.533754 Objective Loss 2.533754 LR 0.000004 Time 0.321196
+2025-05-17 05:05:14,481 - Epoch: [243][ 450/ 518] Overall Loss 2.538880 Objective Loss 2.538880 LR 0.000004 Time 0.321000
+2025-05-17 05:05:30,414 - Epoch: [243][ 500/ 518] Overall Loss 2.532018 Objective Loss 2.532018 LR 0.000004 Time 0.320764
+2025-05-17 05:05:36,030 - Epoch: [243][ 518/ 518] Overall Loss 2.532593 Objective Loss 2.532593 LR 0.000004 Time 0.320459
+2025-05-17 05:05:36,068 - --- validate (epoch=243)-----------
+2025-05-17 05:05:36,069 - 4952 samples (32 per mini-batch)
+2025-05-17 05:05:36,072 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:05:48,171 - Epoch: [243][ 50/ 155] Loss 2.899100 mAP 0.491367
+2025-05-17 05:06:00,556 - Epoch: [243][ 100/ 155] Loss 2.895465 mAP 0.486440
+2025-05-17 05:06:13,312 - Epoch: [243][ 150/ 155] Loss 2.906798 mAP 0.483897
+2025-05-17 05:06:16,175 - Epoch: [243][ 155/ 155] Loss 2.905444 mAP 0.485647
+2025-05-17 05:06:16,211 - ==> mAP: 0.48565 Loss: 2.905
+
+2025-05-17 05:06:16,221 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 05:06:16,221 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 05:06:16,317 - 
+
+2025-05-17 05:06:16,317 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:06:33,342 - Epoch: [244][ 50/ 518] Overall Loss 2.550620 Objective Loss 2.550620 LR 0.000004 Time 0.340437
+2025-05-17 05:06:49,227 - Epoch: [244][ 100/ 518] Overall Loss 2.540337 Objective Loss 2.540337 LR 0.000004 Time 0.329065
+2025-05-17 05:07:05,116 - Epoch: [244][ 150/ 518] Overall Loss 2.537279 Objective Loss 2.537279 LR 0.000004 Time 0.325292
+2025-05-17 05:07:21,023 - Epoch: [244][ 200/ 518] Overall Loss 2.543453 Objective Loss 2.543453 LR 0.000004 Time 0.323499
+2025-05-17 05:07:36,955 - Epoch: [244][ 250/ 518] Overall Loss 2.532628 Objective Loss 2.532628 LR 0.000004 Time 0.322526
+2025-05-17 05:07:52,896 - Epoch: [244][ 300/ 518] Overall Loss 2.534973 Objective Loss 2.534973 LR 0.000004 Time 0.321905
+2025-05-17 05:08:08,845 - Epoch: [244][ 350/ 518] Overall Loss 2.535207 Objective Loss 2.535207 LR 0.000004 Time 0.321487
+2025-05-17 05:08:24,769 - Epoch: [244][ 400/ 518] Overall Loss 2.531779 Objective Loss 2.531779 LR 0.000004 Time 0.321107
+2025-05-17 05:08:40,684 - Epoch: [244][ 450/ 518] Overall Loss 2.527342 Objective Loss 2.527342 LR 0.000004 Time 0.320794
+2025-05-17 05:08:56,608 - Epoch: [244][ 500/ 518] Overall Loss 2.524594 Objective Loss 2.524594 LR 0.000004 Time 0.320559
+2025-05-17 05:09:02,228 - Epoch: [244][ 518/ 518] Overall Loss 2.523273 Objective Loss 2.523273 LR 0.000004 Time 0.320270
+2025-05-17 05:09:02,266 - --- validate (epoch=244)-----------
+2025-05-17 05:09:02,267 - 4952 samples (32 per mini-batch)
+2025-05-17 05:09:02,270 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:09:14,724 - Epoch: [244][ 50/ 155] Loss 2.921696 mAP 0.475856
+2025-05-17 05:09:27,001 - Epoch: [244][ 100/ 155] Loss 2.912925 mAP 0.482569
+2025-05-17 05:09:40,018 - Epoch: [244][ 150/ 155] Loss 2.905767 mAP 0.490033
+2025-05-17 05:09:42,896 - Epoch: [244][ 155/ 155] Loss 2.906406 mAP 0.489698
+2025-05-17 05:09:42,935 - ==> mAP: 0.48970 Loss: 2.906
+
+2025-05-17 05:09:42,945 - ==> Best [mAP: 0.491273 vloss: 2.923876 Params: 2177088 on epoch: 172]
+2025-05-17 05:09:42,946 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 05:09:43,039 - 
+
+2025-05-17 05:09:43,040 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:09:59,871 - Epoch: [245][ 50/ 518] Overall Loss 2.541009 Objective Loss 2.541009 LR 0.000004 Time 0.336572
+2025-05-17 05:10:15,757 - Epoch: [245][ 100/ 518] Overall Loss 2.541674 Objective Loss 2.541674 LR 0.000004 Time 0.327130
+2025-05-17 05:10:31,655 - Epoch: [245][ 150/ 518] Overall Loss 2.538876 Objective Loss 2.538876 LR 0.000004 Time 0.324071
+2025-05-17 05:10:47,597 - Epoch: [245][ 200/ 518] Overall Loss 2.537700 Objective Loss 2.537700 LR 0.000004 Time 0.322759
+2025-05-17 05:11:03,549 - Epoch: [245][ 250/ 518] Overall Loss 2.544213 Objective Loss 2.544213 LR 0.000004 Time 0.322011
+2025-05-17 05:11:19,513 - Epoch: [245][ 300/ 518] Overall Loss 2.543250 Objective Loss 2.543250 LR 0.000004 Time 0.321553
+2025-05-17 05:11:35,458 - Epoch: [245][ 350/ 518] Overall Loss 2.538517 Objective Loss 2.538517 LR 0.000004 Time 0.321172
+2025-05-17 05:11:51,409 - Epoch: [245][ 400/ 518] Overall Loss 2.537836 Objective Loss 2.537836 LR 0.000004 Time 0.320902
+2025-05-17 05:12:07,366 - Epoch: [245][ 450/ 518] Overall Loss 2.533676 Objective Loss 2.533676 LR 0.000004 Time 0.320705
+2025-05-17 05:12:23,295 - Epoch: [245][ 500/ 518] Overall Loss 2.536453 Objective Loss 2.536453 LR 0.000004 Time 0.320489
+2025-05-17 05:12:28,925 - Epoch: [245][ 518/ 518] Overall Loss 2.537256 Objective Loss 2.537256 LR 0.000004 Time 0.320221
+2025-05-17 05:12:28,960 - --- validate (epoch=245)-----------
+2025-05-17 05:12:28,961 - 4952 samples (32 per mini-batch)
+2025-05-17 05:12:28,964 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:12:41,308 - Epoch: [245][ 50/ 155] Loss 2.925760 mAP 0.497511
+2025-05-17 05:12:54,068 - Epoch: [245][ 100/ 155] Loss 2.900168 mAP 0.492036
+2025-05-17 05:13:07,365 - Epoch: [245][ 150/ 155] Loss 2.917019 mAP 0.491363
+2025-05-17 05:13:10,390 - Epoch: [245][ 155/ 155] Loss 2.914685 mAP 0.491557
+2025-05-17 05:13:10,430 - ==> mAP: 0.49156 Loss: 2.915
+
+2025-05-17 05:13:10,441 - ==> Best [mAP: 0.491557 vloss: 2.914685 Params: 2177088 on epoch: 245]
+2025-05-17 05:13:10,441 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 05:13:10,564 - 
+
+2025-05-17 05:13:10,565 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:13:27,300 - Epoch: [246][ 50/ 518] Overall Loss 2.578231 Objective Loss 2.578231 LR 0.000004 Time 0.334646
+2025-05-17 05:13:43,200 - Epoch: [246][ 100/ 518] Overall Loss 2.580304 Objective Loss 2.580304 LR 0.000004 Time 0.326312
+2025-05-17 05:13:59,094 - Epoch: [246][ 150/ 518] Overall Loss 2.556096 Objective Loss 2.556096 LR 0.000004 Time 0.323495
+2025-05-17 05:14:15,009 - Epoch: [246][ 200/ 518] Overall Loss 2.533407 Objective Loss 2.533407 LR 0.000004 Time 0.322192
+2025-05-17 05:14:30,933 - Epoch: [246][ 250/ 518] Overall Loss 2.537530 Objective Loss 2.537530 LR 0.000004 Time 0.321446
+2025-05-17 05:14:46,873 - Epoch: [246][ 300/ 518] Overall Loss 2.528730 Objective Loss 2.528730 LR 0.000004 Time 0.320999
+2025-05-17 05:15:02,785 - Epoch: [246][ 350/ 518] Overall Loss 2.529580 Objective Loss 2.529580 LR 0.000004 Time 0.320604
+2025-05-17 05:15:18,730 - Epoch: [246][ 400/ 518] Overall Loss 2.528929 Objective Loss 2.528929 LR 0.000004 Time 0.320388
+2025-05-17 05:15:34,690 - Epoch: [246][ 450/ 518] Overall Loss 2.532269 Objective Loss 2.532269 LR 0.000004 Time 0.320255
+2025-05-17 05:15:50,638 - Epoch: [246][ 500/ 518] Overall Loss 2.532434 Objective Loss 2.532434 LR 0.000004 Time 0.320125
+2025-05-17 05:15:56,258 - Epoch: [246][ 518/ 518] Overall Loss 2.533074 Objective Loss 2.533074 LR 0.000004 Time 0.319850
+2025-05-17 05:15:56,298 - --- validate (epoch=246)-----------
+2025-05-17 05:15:56,299 - 4952 samples (32 per mini-batch)
+2025-05-17 05:15:56,302 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:16:08,464 - Epoch: [246][ 50/ 155] Loss 2.913663 mAP 0.483438
+2025-05-17 05:16:20,756 - Epoch: [246][ 100/ 155] Loss 2.903559 mAP 0.482753
+2025-05-17 05:16:33,609 - Epoch: [246][ 150/ 155] Loss 2.910630 mAP 0.481326
+2025-05-17 05:16:36,448 - Epoch: [246][ 155/ 155] Loss 2.906601 mAP 0.482874
+2025-05-17 05:16:36,487 - ==> mAP: 0.48287 Loss: 2.907
+
+2025-05-17 05:16:36,498 - ==> Best [mAP: 0.491557 vloss: 2.914685 Params: 2177088 on epoch: 245]
+2025-05-17 05:16:36,498 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 05:16:36,591 - 
+
+2025-05-17 05:16:36,592 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:16:53,429 - Epoch: [247][ 50/ 518] Overall Loss 2.547789 Objective Loss 2.547789 LR 0.000004 Time 0.336676
+2025-05-17 05:17:09,337 - Epoch: [247][ 100/ 518] Overall Loss 2.509085 Objective Loss 2.509085 LR 0.000004 Time 0.327418
+2025-05-17 05:17:25,247 - Epoch: [247][ 150/ 518] Overall Loss 2.504316 Objective Loss 2.504316 LR 0.000004 Time 0.324335
+2025-05-17 05:17:41,165 - Epoch: [247][ 200/ 518] Overall Loss 2.515525 Objective Loss 2.515525 LR 0.000004 Time 0.322837
+2025-05-17 05:17:57,094 - Epoch: [247][ 250/ 518] Overall Loss 2.508260 Objective Loss 2.508260 LR 0.000004 Time 0.321980
+2025-05-17 05:18:13,015 - Epoch: [247][ 300/ 518] Overall Loss 2.510877 Objective Loss 2.510877 LR 0.000004 Time 0.321384
+2025-05-17 05:18:28,973 - Epoch: [247][ 350/ 518] Overall Loss 2.518762 Objective Loss 2.518762 LR 0.000004 Time 0.321065
+2025-05-17 05:18:44,918 - Epoch: [247][ 400/ 518] Overall Loss 2.520638 Objective Loss 2.520638 LR 0.000004 Time 0.320792
+2025-05-17 05:19:00,841 - Epoch: [247][ 450/ 518] Overall Loss 2.523710 Objective Loss 2.523710 LR 0.000004 Time 0.320530
+2025-05-17 05:19:16,780 - Epoch: [247][ 500/ 518] Overall Loss 2.525187 Objective Loss 2.525187 LR 0.000004 Time 0.320353
+2025-05-17 05:19:22,410 - Epoch: [247][ 518/ 518] Overall Loss 2.525695 Objective Loss 2.525695 LR 0.000004 Time 0.320089
+2025-05-17 05:19:22,449 - --- validate (epoch=247)-----------
+2025-05-17 05:19:22,450 - 4952 samples (32 per mini-batch)
+2025-05-17 05:19:22,453 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:19:34,866 - Epoch: [247][ 50/ 155] Loss 2.958187 mAP 0.461392
+2025-05-17 05:19:47,311 - Epoch: [247][ 100/ 155] Loss 2.931575 mAP 0.476163
+2025-05-17 05:20:00,368 - Epoch: [247][ 150/ 155] Loss 2.907942 mAP 0.488680
+2025-05-17 05:20:03,240 - Epoch: [247][ 155/ 155] Loss 2.911529 mAP 0.487729
+2025-05-17 05:20:03,280 - ==> mAP: 0.48773 Loss: 2.912
+
+2025-05-17 05:20:03,290 - ==> Best [mAP: 0.491557 vloss: 2.914685 Params: 2177088 on epoch: 245]
+2025-05-17 05:20:03,291 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 05:20:03,385 - 
+
+2025-05-17 05:20:03,385 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:20:20,133 - Epoch: [248][ 50/ 518] Overall Loss 2.577831 Objective Loss 2.577831 LR 0.000004 Time 0.334900
+2025-05-17 05:20:36,022 - Epoch: [248][ 100/ 518] Overall Loss 2.552323 Objective Loss 2.552323 LR 0.000004 Time 0.326334
+2025-05-17 05:20:51,897 - Epoch: [248][ 150/ 518] Overall Loss 2.557735 Objective Loss 2.557735 LR 0.000004 Time 0.323380
+2025-05-17 05:21:07,798 - Epoch: [248][ 200/ 518] Overall Loss 2.540406 Objective Loss 2.540406 LR 0.000004 Time 0.322038
+2025-05-17 05:21:23,706 - Epoch: [248][ 250/ 518] Overall Loss 2.538425 Objective Loss 2.538425 LR 0.000004 Time 0.321255
+2025-05-17 05:21:39,618 - Epoch: [248][ 300/ 518] Overall Loss 2.538994 Objective Loss 2.538994 LR 0.000004 Time 0.320751
+2025-05-17 05:21:55,540 - Epoch: [248][ 350/ 518] Overall Loss 2.538322 Objective Loss 2.538322 LR 0.000004 Time 0.320417
+2025-05-17 05:22:11,473 - Epoch: [248][ 400/ 518] Overall Loss 2.538107 Objective Loss 2.538107 LR 0.000004 Time 0.320195
+2025-05-17 05:22:27,409 - Epoch: [248][ 450/ 518] Overall Loss 2.538952 Objective Loss 2.538952 LR 0.000004 Time 0.320028
+2025-05-17 05:22:43,342 - Epoch: [248][ 500/ 518] Overall Loss 2.537092 Objective Loss 2.537092 LR 0.000004 Time 0.319888
+2025-05-17 05:22:48,970 - Epoch: [248][ 518/ 518] Overall Loss 2.538208 Objective Loss 2.538208 LR 0.000004 Time 0.319637
+2025-05-17 05:22:49,010 - --- validate (epoch=248)-----------
+2025-05-17 05:22:49,011 - 4952 samples (32 per mini-batch)
+2025-05-17 05:22:49,015 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:23:01,037 - Epoch: [248][ 50/ 155] Loss 2.920700 mAP 0.493727
+2025-05-17 05:23:13,462 - Epoch: [248][ 100/ 155] Loss 2.915113 mAP 0.485677
+2025-05-17 05:23:26,369 - Epoch: [248][ 150/ 155] Loss 2.907810 mAP 0.488273
+2025-05-17 05:23:29,284 - Epoch: [248][ 155/ 155] Loss 2.909687 mAP 0.488782
+2025-05-17 05:23:29,323 - ==> mAP: 0.48878 Loss: 2.910
+
+2025-05-17 05:23:29,333 - ==> Best [mAP: 0.491557 vloss: 2.914685 Params: 2177088 on epoch: 245]
+2025-05-17 05:23:29,334 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 05:23:29,427 - 
+
+2025-05-17 05:23:29,428 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:23:46,135 - Epoch: [249][ 50/ 518] Overall Loss 2.532403 Objective Loss 2.532403 LR 0.000004 Time 0.334080
+2025-05-17 05:24:02,024 - Epoch: [249][ 100/ 518] Overall Loss 2.554051 Objective Loss 2.554051 LR 0.000004 Time 0.325918
+2025-05-17 05:24:17,928 - Epoch: [249][ 150/ 518] Overall Loss 2.534950 Objective Loss 2.534950 LR 0.000004 Time 0.323299
+2025-05-17 05:24:33,852 - Epoch: [249][ 200/ 518] Overall Loss 2.524856 Objective Loss 2.524856 LR 0.000004 Time 0.322088
+2025-05-17 05:24:49,782 - Epoch: [249][ 250/ 518] Overall Loss 2.533115 Objective Loss 2.533115 LR 0.000004 Time 0.321384
+2025-05-17 05:25:05,705 - Epoch: [249][ 300/ 518] Overall Loss 2.536117 Objective Loss 2.536117 LR 0.000004 Time 0.320895
+2025-05-17 05:25:21,624 - Epoch: [249][ 350/ 518] Overall Loss 2.533827 Objective Loss 2.533827 LR 0.000004 Time 0.320531
+2025-05-17 05:25:37,556 - Epoch: [249][ 400/ 518] Overall Loss 2.538893 Objective Loss 2.538893 LR 0.000004 Time 0.320293
+2025-05-17 05:25:53,507 - Epoch: [249][ 450/ 518] Overall Loss 2.536899 Objective Loss 2.536899 LR 0.000004 Time 0.320151
+2025-05-17 05:26:09,455 - Epoch: [249][ 500/ 518] Overall Loss 2.534877 Objective Loss 2.534877 LR 0.000004 Time 0.320029
+2025-05-17 05:26:15,081 - Epoch: [249][ 518/ 518] Overall Loss 2.536743 Objective Loss 2.536743 LR 0.000004 Time 0.319770
+2025-05-17 05:26:15,123 - --- validate (epoch=249)-----------
+2025-05-17 05:26:15,124 - 4952 samples (32 per mini-batch)
+2025-05-17 05:26:15,127 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:26:27,256 - Epoch: [249][ 50/ 155] Loss 2.888588 mAP 0.481393
+2025-05-17 05:26:39,831 - Epoch: [249][ 100/ 155] Loss 2.897161 mAP 0.483056
+2025-05-17 05:26:52,990 - Epoch: [249][ 150/ 155] Loss 2.905665 mAP 0.483918
+2025-05-17 05:26:55,943 - Epoch: [249][ 155/ 155] Loss 2.904061 mAP 0.486170
+2025-05-17 05:26:55,984 - ==> mAP: 0.48617 Loss: 2.904
+
+2025-05-17 05:26:55,995 - ==> Best [mAP: 0.491557 vloss: 2.914685 Params: 2177088 on epoch: 245]
+2025-05-17 05:26:55,995 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_checkpoint.pth.tar
+2025-05-17 05:26:56,092 - Initiating quantization aware training (QAT)...
+2025-05-17 05:26:56,099 - Collecting statistics for quantization aware training (QAT)...
+2025-05-17 05:29:19,418 - 
+
+2025-05-17 05:29:19,419 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:29:39,904 - Epoch: [250][ 50/ 518] Overall Loss 3.486155 Objective Loss 3.486155 LR 0.000004 Time 0.409642
+2025-05-17 05:29:58,861 - Epoch: [250][ 100/ 518] Overall Loss 3.281517 Objective Loss 3.281517 LR 0.000004 Time 0.394382
+2025-05-17 05:30:17,819 - Epoch: [250][ 150/ 518] Overall Loss 3.154542 Objective Loss 3.154542 LR 0.000004 Time 0.389302
+2025-05-17 05:30:36,792 - Epoch: [250][ 200/ 518] Overall Loss 3.085080 Objective Loss 3.085080 LR 0.000004 Time 0.386833
+2025-05-17 05:30:55,774 - Epoch: [250][ 250/ 518] Overall Loss 3.028125 Objective Loss 3.028125 LR 0.000004 Time 0.385390
+2025-05-17 05:31:14,962 - Epoch: [250][ 300/ 518] Overall Loss 2.995499 Objective Loss 2.995499 LR 0.000004 Time 0.385116
+2025-05-17 05:31:33,949 - Epoch: [250][ 350/ 518] Overall Loss 2.969907 Objective Loss 2.969907 LR 0.000004 Time 0.384344
+2025-05-17 05:31:52,934 - Epoch: [250][ 400/ 518] Overall Loss 2.947755 Objective Loss 2.947755 LR 0.000004 Time 0.383760
+2025-05-17 05:32:11,937 - Epoch: [250][ 450/ 518] Overall Loss 2.919199 Objective Loss 2.919199 LR 0.000004 Time 0.383347
+2025-05-17 05:32:30,901 - Epoch: [250][ 500/ 518] Overall Loss 2.899789 Objective Loss 2.899789 LR 0.000004 Time 0.382937
+2025-05-17 05:32:37,578 - Epoch: [250][ 518/ 518] Overall Loss 2.897175 Objective Loss 2.897175 LR 0.000004 Time 0.382520
+2025-05-17 05:32:37,665 - --- validate (epoch=250)-----------
+2025-05-17 05:32:37,666 - 4952 samples (32 per mini-batch)
+2025-05-17 05:32:37,669 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:32:52,479 - Epoch: [250][ 50/ 155] Loss 2.926910 mAP 0.482386
+2025-05-17 05:33:07,490 - Epoch: [250][ 100/ 155] Loss 2.881744 mAP 0.490465
+2025-05-17 05:33:23,006 - Epoch: [250][ 150/ 155] Loss 2.906891 mAP 0.493940
+2025-05-17 05:33:25,918 - Epoch: [250][ 155/ 155] Loss 2.909246 mAP 0.492767
+2025-05-17 05:33:25,984 - ==> mAP: 0.49277 Loss: 2.909
+
+2025-05-17 05:33:25,992 - ==> Best [mAP: 0.492767 vloss: 2.909246 Params: 2177088 on epoch: 250]
+2025-05-17 05:33:25,992 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 05:33:26,075 - 
+
+2025-05-17 05:33:26,075 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:33:46,470 - Epoch: [251][ 50/ 518] Overall Loss 2.647114 Objective Loss 2.647114 LR 0.000004 Time 0.407818
+2025-05-17 05:34:05,469 - Epoch: [251][ 100/ 518] Overall Loss 2.691157 Objective Loss 2.691157 LR 0.000004 Time 0.393888
+2025-05-17 05:34:24,475 - Epoch: [251][ 150/ 518] Overall Loss 2.706199 Objective Loss 2.706199 LR 0.000004 Time 0.389296
+2025-05-17 05:34:43,524 - Epoch: [251][ 200/ 518] Overall Loss 2.690882 Objective Loss 2.690882 LR 0.000004 Time 0.387211
+2025-05-17 05:35:02,578 - Epoch: [251][ 250/ 518] Overall Loss 2.684870 Objective Loss 2.684870 LR 0.000004 Time 0.385984
+2025-05-17 05:35:21,561 - Epoch: [251][ 300/ 518] Overall Loss 2.686467 Objective Loss 2.686467 LR 0.000004 Time 0.384924
+2025-05-17 05:35:40,721 - Epoch: [251][ 350/ 518] Overall Loss 2.689403 Objective Loss 2.689403 LR 0.000004 Time 0.384674
+2025-05-17 05:35:59,742 - Epoch: [251][ 400/ 518] Overall Loss 2.691530 Objective Loss 2.691530 LR 0.000004 Time 0.384140
+2025-05-17 05:36:18,782 - Epoch: [251][ 450/ 518] Overall Loss 2.694462 Objective Loss 2.694462 LR 0.000004 Time 0.383766
+2025-05-17 05:36:37,781 - Epoch: [251][ 500/ 518] Overall Loss 2.695413 Objective Loss 2.695413 LR 0.000004 Time 0.383387
+2025-05-17 05:36:44,481 - Epoch: [251][ 518/ 518] Overall Loss 2.692884 Objective Loss 2.692884 LR 0.000004 Time 0.382998
+2025-05-17 05:36:44,561 - --- validate (epoch=251)-----------
+2025-05-17 05:36:44,562 - 4952 samples (32 per mini-batch)
+2025-05-17 05:36:44,565 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:36:59,227 - Epoch: [251][ 50/ 155] Loss 2.844823 mAP 0.499936
+2025-05-17 05:37:14,128 - Epoch: [251][ 100/ 155] Loss 2.852827 mAP 0.484544
+2025-05-17 05:37:29,556 - Epoch: [251][ 150/ 155] Loss 2.860467 mAP 0.487891
+2025-05-17 05:37:32,432 - Epoch: [251][ 155/ 155] Loss 2.857871 mAP 0.489513
+2025-05-17 05:37:32,485 - ==> mAP: 0.48951 Loss: 2.858
+
+2025-05-17 05:37:32,492 - ==> Best [mAP: 0.492767 vloss: 2.909246 Params: 2177088 on epoch: 250]
+2025-05-17 05:37:32,492 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 05:37:32,580 - 
+
+2025-05-17 05:37:32,581 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:37:52,829 - Epoch: [252][ 50/ 518] Overall Loss 2.710758 Objective Loss 2.710758 LR 0.000004 Time 0.404899
+2025-05-17 05:38:11,820 - Epoch: [252][ 100/ 518] Overall Loss 2.690450 Objective Loss 2.690450 LR 0.000004 Time 0.392345
+2025-05-17 05:38:30,827 - Epoch: [252][ 150/ 518] Overall Loss 2.666283 Objective Loss 2.666283 LR 0.000004 Time 0.388269
+2025-05-17 05:38:49,817 - Epoch: [252][ 200/ 518] Overall Loss 2.666775 Objective Loss 2.666775 LR 0.000004 Time 0.386145
+2025-05-17 05:39:08,802 - Epoch: [252][ 250/ 518] Overall Loss 2.654594 Objective Loss 2.654594 LR 0.000004 Time 0.384852
+2025-05-17 05:39:27,967 - Epoch: [252][ 300/ 518] Overall Loss 2.647600 Objective Loss 2.647600 LR 0.000004 Time 0.384588
+2025-05-17 05:39:46,972 - Epoch: [252][ 350/ 518] Overall Loss 2.649856 Objective Loss 2.649856 LR 0.000004 Time 0.383943
+2025-05-17 05:40:05,984 - Epoch: [252][ 400/ 518] Overall Loss 2.648269 Objective Loss 2.648269 LR 0.000004 Time 0.383479
+2025-05-17 05:40:25,019 - Epoch: [252][ 450/ 518] Overall Loss 2.641068 Objective Loss 2.641068 LR 0.000004 Time 0.383169
+2025-05-17 05:40:44,054 - Epoch: [252][ 500/ 518] Overall Loss 2.636489 Objective Loss 2.636489 LR 0.000004 Time 0.382920
+2025-05-17 05:40:50,772 - Epoch: [252][ 518/ 518] Overall Loss 2.636483 Objective Loss 2.636483 LR 0.000004 Time 0.382581
+2025-05-17 05:40:50,845 - --- validate (epoch=252)-----------
+2025-05-17 05:40:50,845 - 4952 samples (32 per mini-batch)
+2025-05-17 05:40:50,849 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:41:05,773 - Epoch: [252][ 50/ 155] Loss 2.843235 mAP 0.487614
+2025-05-17 05:41:20,665 - Epoch: [252][ 100/ 155] Loss 2.852491 mAP 0.480325
+2025-05-17 05:41:36,305 - Epoch: [252][ 150/ 155] Loss 2.818925 mAP 0.496166
+2025-05-17 05:41:39,318 - Epoch: [252][ 155/ 155] Loss 2.817933 mAP 0.495674
+2025-05-17 05:41:39,378 - ==> mAP: 0.49567 Loss: 2.818
+
+2025-05-17 05:41:39,386 - ==> Best [mAP: 0.495674 vloss: 2.817933 Params: 2177088 on epoch: 252]
+2025-05-17 05:41:39,386 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 05:41:39,497 - 
+
+2025-05-17 05:41:39,498 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:41:59,516 - Epoch: [253][ 50/ 518] Overall Loss 2.652420 Objective Loss 2.652420 LR 0.000004 Time 0.400301
+2025-05-17 05:42:18,492 - Epoch: [253][ 100/ 518] Overall Loss 2.632823 Objective Loss 2.632823 LR 0.000004 Time 0.389898
+2025-05-17 05:42:37,470 - Epoch: [253][ 150/ 518] Overall Loss 2.602160 Objective Loss 2.602160 LR 0.000004 Time 0.386448
+2025-05-17 05:42:56,462 - Epoch: [253][ 200/ 518] Overall Loss 2.610420 Objective Loss 2.610420 LR 0.000004 Time 0.384789
+2025-05-17 05:43:15,443 - Epoch: [253][ 250/ 518] Overall Loss 2.609936 Objective Loss 2.609936 LR 0.000004 Time 0.383751
+2025-05-17 05:43:34,455 - Epoch: [253][ 300/ 518] Overall Loss 2.621633 Objective Loss 2.621633 LR 0.000004 Time 0.383163
+2025-05-17 05:43:53,454 - Epoch: [253][ 350/ 518] Overall Loss 2.623527 Objective Loss 2.623527 LR 0.000004 Time 0.382704
+2025-05-17 05:44:12,648 - Epoch: [253][ 400/ 518] Overall Loss 2.629392 Objective Loss 2.629392 LR 0.000004 Time 0.382849
+2025-05-17 05:44:31,653 - Epoch: [253][ 450/ 518] Overall Loss 2.628875 Objective Loss 2.628875 LR 0.000004 Time 0.382540
+2025-05-17 05:44:50,648 - Epoch: [253][ 500/ 518] Overall Loss 2.630589 Objective Loss 2.630589 LR 0.000004 Time 0.382275
+2025-05-17 05:44:57,364 - Epoch: [253][ 518/ 518] Overall Loss 2.631963 Objective Loss 2.631963 LR 0.000004 Time 0.381955
+2025-05-17 05:44:57,443 - --- validate (epoch=253)-----------
+2025-05-17 05:44:57,444 - 4952 samples (32 per mini-batch)
+2025-05-17 05:44:57,447 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:45:12,230 - Epoch: [253][ 50/ 155] Loss 2.887839 mAP 0.472009
+2025-05-17 05:45:27,225 - Epoch: [253][ 100/ 155] Loss 2.840672 mAP 0.483970
+2025-05-17 05:45:42,474 - Epoch: [253][ 150/ 155] Loss 2.815341 mAP 0.492562
+2025-05-17 05:45:45,467 - Epoch: [253][ 155/ 155] Loss 2.812964 mAP 0.492396
+2025-05-17 05:45:45,526 - ==> mAP: 0.49240 Loss: 2.813
+
+2025-05-17 05:45:45,533 - ==> Best [mAP: 0.495674 vloss: 2.817933 Params: 2177088 on epoch: 252]
+2025-05-17 05:45:45,533 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 05:45:45,622 - 
+
+2025-05-17 05:45:45,622 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:46:06,000 - Epoch: [254][ 50/ 518] Overall Loss 2.636205 Objective Loss 2.636205 LR 0.000004 Time 0.407497
+2025-05-17 05:46:24,988 - Epoch: [254][ 100/ 518] Overall Loss 2.628659 Objective Loss 2.628659 LR 0.000004 Time 0.393614
+2025-05-17 05:46:43,958 - Epoch: [254][ 150/ 518] Overall Loss 2.617194 Objective Loss 2.617194 LR 0.000004 Time 0.388866
+2025-05-17 05:47:02,936 - Epoch: [254][ 200/ 518] Overall Loss 2.608358 Objective Loss 2.608358 LR 0.000004 Time 0.386534
+2025-05-17 05:47:21,931 - Epoch: [254][ 250/ 518] Overall Loss 2.609902 Objective Loss 2.609902 LR 0.000004 Time 0.385202
+2025-05-17 05:47:40,931 - Epoch: [254][ 300/ 518] Overall Loss 2.615786 Objective Loss 2.615786 LR 0.000004 Time 0.384333
+2025-05-17 05:47:59,965 - Epoch: [254][ 350/ 518] Overall Loss 2.612715 Objective Loss 2.612715 LR 0.000004 Time 0.383806
+2025-05-17 05:48:19,177 - Epoch: [254][ 400/ 518] Overall Loss 2.611642 Objective Loss 2.611642 LR 0.000004 Time 0.383860
+2025-05-17 05:48:38,195 - Epoch: [254][ 450/ 518] Overall Loss 2.616111 Objective Loss 2.616111 LR 0.000004 Time 0.383469
+2025-05-17 05:48:57,255 - Epoch: [254][ 500/ 518] Overall Loss 2.614315 Objective Loss 2.614315 LR 0.000004 Time 0.383240
+2025-05-17 05:49:03,944 - Epoch: [254][ 518/ 518] Overall Loss 2.613509 Objective Loss 2.613509 LR 0.000004 Time 0.382834
+2025-05-17 05:49:04,019 - --- validate (epoch=254)-----------
+2025-05-17 05:49:04,020 - 4952 samples (32 per mini-batch)
+2025-05-17 05:49:04,023 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:49:18,919 - Epoch: [254][ 50/ 155] Loss 2.816301 mAP 0.493915
+2025-05-17 05:49:34,090 - Epoch: [254][ 100/ 155] Loss 2.802498 mAP 0.493033
+2025-05-17 05:49:49,672 - Epoch: [254][ 150/ 155] Loss 2.820129 mAP 0.489405
+2025-05-17 05:49:52,672 - Epoch: [254][ 155/ 155] Loss 2.815929 mAP 0.489324
+2025-05-17 05:49:52,734 - ==> mAP: 0.48932 Loss: 2.816
+
+2025-05-17 05:49:52,742 - ==> Best [mAP: 0.495674 vloss: 2.817933 Params: 2177088 on epoch: 252]
+2025-05-17 05:49:52,742 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 05:49:52,838 - 
+
+2025-05-17 05:49:52,838 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:50:13,028 - Epoch: [255][ 50/ 518] Overall Loss 2.621130 Objective Loss 2.621130 LR 0.000004 Time 0.403732
+2025-05-17 05:50:31,993 - Epoch: [255][ 100/ 518] Overall Loss 2.617324 Objective Loss 2.617324 LR 0.000004 Time 0.391498
+2025-05-17 05:50:50,999 - Epoch: [255][ 150/ 518] Overall Loss 2.591981 Objective Loss 2.591981 LR 0.000004 Time 0.387699
+2025-05-17 05:51:09,986 - Epoch: [255][ 200/ 518] Overall Loss 2.592174 Objective Loss 2.592174 LR 0.000004 Time 0.385702
+2025-05-17 05:51:28,980 - Epoch: [255][ 250/ 518] Overall Loss 2.604481 Objective Loss 2.604481 LR 0.000004 Time 0.384533
+2025-05-17 05:51:47,979 - Epoch: [255][ 300/ 518] Overall Loss 2.604268 Objective Loss 2.604268 LR 0.000004 Time 0.383771
+2025-05-17 05:52:07,030 - Epoch: [255][ 350/ 518] Overall Loss 2.607634 Objective Loss 2.607634 LR 0.000004 Time 0.383374
+2025-05-17 05:52:26,053 - Epoch: [255][ 400/ 518] Overall Loss 2.607073 Objective Loss 2.607073 LR 0.000004 Time 0.383007
+2025-05-17 05:52:45,210 - Epoch: [255][ 450/ 518] Overall Loss 2.604805 Objective Loss 2.604805 LR 0.000004 Time 0.383020
+2025-05-17 05:53:04,219 - Epoch: [255][ 500/ 518] Overall Loss 2.602008 Objective Loss 2.602008 LR 0.000004 Time 0.382733
+2025-05-17 05:53:10,941 - Epoch: [255][ 518/ 518] Overall Loss 2.600169 Objective Loss 2.600169 LR 0.000004 Time 0.382410
+2025-05-17 05:53:11,016 - --- validate (epoch=255)-----------
+2025-05-17 05:53:11,017 - 4952 samples (32 per mini-batch)
+2025-05-17 05:53:11,020 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:53:25,670 - Epoch: [255][ 50/ 155] Loss 2.878581 mAP 0.488323
+2025-05-17 05:53:40,559 - Epoch: [255][ 100/ 155] Loss 2.815722 mAP 0.491706
+2025-05-17 05:53:55,848 - Epoch: [255][ 150/ 155] Loss 2.803132 mAP 0.490270
+2025-05-17 05:53:58,857 - Epoch: [255][ 155/ 155] Loss 2.808403 mAP 0.488365
+2025-05-17 05:53:58,910 - ==> mAP: 0.48836 Loss: 2.808
+
+2025-05-17 05:53:58,918 - ==> Best [mAP: 0.495674 vloss: 2.817933 Params: 2177088 on epoch: 252]
+2025-05-17 05:53:58,918 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 05:53:59,013 - 
+
+2025-05-17 05:53:59,013 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:54:18,975 - Epoch: [256][ 50/ 518] Overall Loss 2.582847 Objective Loss 2.582847 LR 0.000004 Time 0.399151
+2025-05-17 05:54:38,112 - Epoch: [256][ 100/ 518] Overall Loss 2.598888 Objective Loss 2.598888 LR 0.000004 Time 0.390934
+2025-05-17 05:54:57,085 - Epoch: [256][ 150/ 518] Overall Loss 2.600120 Objective Loss 2.600120 LR 0.000004 Time 0.387105
+2025-05-17 05:55:16,107 - Epoch: [256][ 200/ 518] Overall Loss 2.601927 Objective Loss 2.601927 LR 0.000004 Time 0.385432
+2025-05-17 05:55:35,146 - Epoch: [256][ 250/ 518] Overall Loss 2.603990 Objective Loss 2.603990 LR 0.000004 Time 0.384495
+2025-05-17 05:55:54,166 - Epoch: [256][ 300/ 518] Overall Loss 2.594682 Objective Loss 2.594682 LR 0.000004 Time 0.383812
+2025-05-17 05:56:13,164 - Epoch: [256][ 350/ 518] Overall Loss 2.589264 Objective Loss 2.589264 LR 0.000004 Time 0.383257
+2025-05-17 05:56:32,165 - Epoch: [256][ 400/ 518] Overall Loss 2.593113 Objective Loss 2.593113 LR 0.000004 Time 0.382850
+2025-05-17 05:56:51,169 - Epoch: [256][ 450/ 518] Overall Loss 2.588394 Objective Loss 2.588394 LR 0.000004 Time 0.382538
+2025-05-17 05:57:10,355 - Epoch: [256][ 500/ 518] Overall Loss 2.587278 Objective Loss 2.587278 LR 0.000004 Time 0.382654
+2025-05-17 05:57:17,065 - Epoch: [256][ 518/ 518] Overall Loss 2.588959 Objective Loss 2.588959 LR 0.000004 Time 0.382310
+2025-05-17 05:57:17,141 - --- validate (epoch=256)-----------
+2025-05-17 05:57:17,142 - 4952 samples (32 per mini-batch)
+2025-05-17 05:57:17,145 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 05:57:32,076 - Epoch: [256][ 50/ 155] Loss 2.817375 mAP 0.505569
+2025-05-17 05:57:47,327 - Epoch: [256][ 100/ 155] Loss 2.775563 mAP 0.500935
+2025-05-17 05:58:02,769 - Epoch: [256][ 150/ 155] Loss 2.799512 mAP 0.493355
+2025-05-17 05:58:05,862 - Epoch: [256][ 155/ 155] Loss 2.799764 mAP 0.492724
+2025-05-17 05:58:05,919 - ==> mAP: 0.49272 Loss: 2.800
+
+2025-05-17 05:58:05,927 - ==> Best [mAP: 0.495674 vloss: 2.817933 Params: 2177088 on epoch: 252]
+2025-05-17 05:58:05,927 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 05:58:06,028 - 
+
+2025-05-17 05:58:06,028 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 05:58:26,226 - Epoch: [257][ 50/ 518] Overall Loss 2.561871 Objective Loss 2.561871 LR 0.000004 Time 0.403887
+2025-05-17 05:58:45,197 - Epoch: [257][ 100/ 518] Overall Loss 2.563676 Objective Loss 2.563676 LR 0.000004 Time 0.391638
+2025-05-17 05:59:04,363 - Epoch: [257][ 150/ 518] Overall Loss 2.589812 Objective Loss 2.589812 LR 0.000004 Time 0.388860
+2025-05-17 05:59:23,364 - Epoch: [257][ 200/ 518] Overall Loss 2.589893 Objective Loss 2.589893 LR 0.000004 Time 0.386643
+2025-05-17 05:59:42,348 - Epoch: [257][ 250/ 518] Overall Loss 2.593236 Objective Loss 2.593236 LR 0.000004 Time 0.385247
+2025-05-17 06:00:01,371 - Epoch: [257][ 300/ 518] Overall Loss 2.592357 Objective Loss 2.592357 LR 0.000004 Time 0.384443
+2025-05-17 06:00:20,430 - Epoch: [257][ 350/ 518] Overall Loss 2.589650 Objective Loss 2.589650 LR 0.000004 Time 0.383975
+2025-05-17 06:00:39,476 - Epoch: [257][ 400/ 518] Overall Loss 2.586834 Objective Loss 2.586834 LR 0.000004 Time 0.383592
+2025-05-17 06:00:58,485 - Epoch: [257][ 450/ 518] Overall Loss 2.590588 Objective Loss 2.590588 LR 0.000004 Time 0.383209
+2025-05-17 06:01:17,494 - Epoch: [257][ 500/ 518] Overall Loss 2.592038 Objective Loss 2.592038 LR 0.000004 Time 0.382904
+2025-05-17 06:01:24,372 - Epoch: [257][ 518/ 518] Overall Loss 2.589609 Objective Loss 2.589609 LR 0.000004 Time 0.382875
+2025-05-17 06:01:24,450 - --- validate (epoch=257)-----------
+2025-05-17 06:01:24,450 - 4952 samples (32 per mini-batch)
+2025-05-17 06:01:24,454 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 06:01:39,142 - Epoch: [257][ 50/ 155] Loss 2.829374 mAP 0.496057
+2025-05-17 06:01:53,849 - Epoch: [257][ 100/ 155] Loss 2.819524 mAP 0.488394
+2025-05-17 06:02:09,287 - Epoch: [257][ 150/ 155] Loss 2.794124 mAP 0.495815
+2025-05-17 06:02:12,318 - Epoch: [257][ 155/ 155] Loss 2.791942 mAP 0.496977
+2025-05-17 06:02:12,372 - ==> mAP: 0.49698 Loss: 2.792
+
+2025-05-17 06:02:12,380 - ==> Best [mAP: 0.496977 vloss: 2.791942 Params: 2177088 on epoch: 257]
+2025-05-17 06:02:12,380 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 06:02:12,505 - 
+
+2025-05-17 06:02:12,505 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 06:02:32,437 - Epoch: [258][ 50/ 518] Overall Loss 2.561647 Objective Loss 2.561647 LR 0.000004 Time 0.398587
+2025-05-17 06:02:51,438 - Epoch: [258][ 100/ 518] Overall Loss 2.578559 Objective Loss 2.578559 LR 0.000004 Time 0.389292
+2025-05-17 06:03:10,426 - Epoch: [258][ 150/ 518] Overall Loss 2.582963 Objective Loss 2.582963 LR 0.000004 Time 0.386114
+2025-05-17 06:03:29,671 - Epoch: [258][ 200/ 518] Overall Loss 2.576993 Objective Loss 2.576993 LR 0.000004 Time 0.385804
+2025-05-17 06:03:48,665 - Epoch: [258][ 250/ 518] Overall Loss 2.575689 Objective Loss 2.575689 LR 0.000004 Time 0.384614
+2025-05-17 06:04:07,681 - Epoch: [258][ 300/ 518] Overall Loss 2.575003 Objective Loss 2.575003 LR 0.000004 Time 0.383899
+2025-05-17 06:04:26,745 - Epoch: [258][ 350/ 518] Overall Loss 2.586743 Objective Loss 2.586743 LR 0.000004 Time 0.383522
+2025-05-17 06:04:45,779 - Epoch: [258][ 400/ 518] Overall Loss 2.593352 Objective Loss 2.593352 LR 0.000004 Time 0.383164
+2025-05-17 06:05:04,784 - Epoch: [258][ 450/ 518] Overall Loss 2.589270 Objective Loss 2.589270 LR 0.000004 Time 0.382819
+2025-05-17 06:05:23,784 - Epoch: [258][ 500/ 518] Overall Loss 2.591502 Objective Loss 2.591502 LR 0.000004 Time 0.382536
+2025-05-17 06:05:30,494 - Epoch: [258][ 518/ 518] Overall Loss 2.591061 Objective Loss 2.591061 LR 0.000004 Time 0.382195
+2025-05-17 06:05:30,570 - --- validate (epoch=258)-----------
+2025-05-17 06:05:30,571 - 4952 samples (32 per mini-batch)
+2025-05-17 06:05:30,574 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 06:05:45,755 - Epoch: [258][ 50/ 155] Loss 2.784399 mAP 0.494767
+2025-05-17 06:06:00,925 - Epoch: [258][ 100/ 155] Loss 2.766723 mAP 0.493535
+2025-05-17 06:06:16,772 - Epoch: [258][ 150/ 155] Loss 2.778859 mAP 0.488886
+2025-05-17 06:06:19,912 - Epoch: [258][ 155/ 155] Loss 2.776966 mAP 0.491174
+2025-05-17 06:06:19,967 - ==> mAP: 0.49117 Loss: 2.777
+
+2025-05-17 06:06:19,975 - ==> Best [mAP: 0.496977 vloss: 2.791942 Params: 2177088 on epoch: 257]
+2025-05-17 06:06:19,975 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 06:06:20,073 - 
+
+2025-05-17 06:06:20,073 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 06:06:40,237 - Epoch: [259][ 50/ 518] Overall Loss 2.602618 Objective Loss 2.602618 LR 0.000004 Time 0.403221
+2025-05-17 06:06:59,208 - Epoch: [259][ 100/ 518] Overall Loss 2.591890 Objective Loss 2.591890 LR 0.000004 Time 0.391307
+2025-05-17 06:07:18,188 - Epoch: [259][ 150/ 518] Overall Loss 2.592855 Objective Loss 2.592855 LR 0.000004 Time 0.387402
+2025-05-17 06:07:37,381 - Epoch: [259][ 200/ 518] Overall Loss 2.590427 Objective Loss 2.590427 LR 0.000004 Time 0.386510
+2025-05-17 06:07:56,407 - Epoch: [259][ 250/ 518] Overall Loss 2.578095 Objective Loss 2.578095 LR 0.000004 Time 0.385305
+2025-05-17 06:08:15,452 - Epoch: [259][ 300/ 518] Overall Loss 2.579268
Objective Loss 2.579268 LR 0.000004 Time 0.384565 +2025-05-17 06:08:34,494 - Epoch: [259][ 350/ 518] Overall Loss 2.578128 Objective Loss 2.578128 LR 0.000004 Time 0.384031 +2025-05-17 06:08:53,508 - Epoch: [259][ 400/ 518] Overall Loss 2.581861 Objective Loss 2.581861 LR 0.000004 Time 0.383559 +2025-05-17 06:09:12,505 - Epoch: [259][ 450/ 518] Overall Loss 2.584642 Objective Loss 2.584642 LR 0.000004 Time 0.383155 +2025-05-17 06:09:31,704 - Epoch: [259][ 500/ 518] Overall Loss 2.583457 Objective Loss 2.583457 LR 0.000004 Time 0.383234 +2025-05-17 06:09:38,417 - Epoch: [259][ 518/ 518] Overall Loss 2.585429 Objective Loss 2.585429 LR 0.000004 Time 0.382876 +2025-05-17 06:09:38,492 - --- validate (epoch=259)----------- +2025-05-17 06:09:38,493 - 4952 samples (32 per mini-batch) +2025-05-17 06:09:38,496 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:09:53,135 - Epoch: [259][ 50/ 155] Loss 2.828519 mAP 0.504176 +2025-05-17 06:10:07,983 - Epoch: [259][ 100/ 155] Loss 2.796726 mAP 0.490240 +2025-05-17 06:10:23,233 - Epoch: [259][ 150/ 155] Loss 2.794063 mAP 0.490003 +2025-05-17 06:10:26,237 - Epoch: [259][ 155/ 155] Loss 2.793126 mAP 0.490062 +2025-05-17 06:10:26,289 - ==> mAP: 0.49006 Loss: 2.793 + +2025-05-17 06:10:26,297 - ==> Best [mAP: 0.496977 vloss: 2.791942 Params: 2177088 on epoch: 257] +2025-05-17 06:10:26,297 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:10:26,385 - + +2025-05-17 06:10:26,385 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:10:46,465 - Epoch: [260][ 50/ 518] Overall Loss 2.588681 Objective Loss 2.588681 LR 0.000004 Time 0.401535 +2025-05-17 06:11:05,421 - Epoch: [260][ 100/ 518] Overall Loss 2.590535 Objective Loss 2.590535 LR 0.000004 Time 0.390320 +2025-05-17 06:11:24,577 - Epoch: [260][ 150/ 518] Overall Loss 2.590765 Objective Loss 2.590765 LR 0.000004 Time 
0.387911 +2025-05-17 06:11:43,576 - Epoch: [260][ 200/ 518] Overall Loss 2.586623 Objective Loss 2.586623 LR 0.000004 Time 0.385921 +2025-05-17 06:12:02,583 - Epoch: [260][ 250/ 518] Overall Loss 2.583554 Objective Loss 2.583554 LR 0.000004 Time 0.384759 +2025-05-17 06:12:21,584 - Epoch: [260][ 300/ 518] Overall Loss 2.588730 Objective Loss 2.588730 LR 0.000004 Time 0.383966 +2025-05-17 06:12:40,608 - Epoch: [260][ 350/ 518] Overall Loss 2.578942 Objective Loss 2.578942 LR 0.000004 Time 0.383465 +2025-05-17 06:12:59,601 - Epoch: [260][ 400/ 518] Overall Loss 2.581276 Objective Loss 2.581276 LR 0.000004 Time 0.383012 +2025-05-17 06:13:18,781 - Epoch: [260][ 450/ 518] Overall Loss 2.585537 Objective Loss 2.585537 LR 0.000004 Time 0.383075 +2025-05-17 06:13:37,779 - Epoch: [260][ 500/ 518] Overall Loss 2.585556 Objective Loss 2.585556 LR 0.000004 Time 0.382761 +2025-05-17 06:13:44,496 - Epoch: [260][ 518/ 518] Overall Loss 2.584598 Objective Loss 2.584598 LR 0.000004 Time 0.382426 +2025-05-17 06:13:44,574 - --- validate (epoch=260)----------- +2025-05-17 06:13:44,575 - 4952 samples (32 per mini-batch) +2025-05-17 06:13:44,578 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:13:59,576 - Epoch: [260][ 50/ 155] Loss 2.825337 mAP 0.480287 +2025-05-17 06:14:14,890 - Epoch: [260][ 100/ 155] Loss 2.815736 mAP 0.485905 +2025-05-17 06:14:30,443 - Epoch: [260][ 150/ 155] Loss 2.799717 mAP 0.484024 +2025-05-17 06:14:33,523 - Epoch: [260][ 155/ 155] Loss 2.802435 mAP 0.484299 +2025-05-17 06:14:33,582 - ==> mAP: 0.48430 Loss: 2.802 + +2025-05-17 06:14:33,590 - ==> Best [mAP: 0.496977 vloss: 2.791942 Params: 2177088 on epoch: 257] +2025-05-17 06:14:33,590 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:14:33,681 - + +2025-05-17 06:14:33,681 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:14:53,707 - Epoch: 
[261][ 50/ 518] Overall Loss 2.618417 Objective Loss 2.618417 LR 0.000004 Time 0.400462 +2025-05-17 06:15:12,713 - Epoch: [261][ 100/ 518] Overall Loss 2.619477 Objective Loss 2.619477 LR 0.000004 Time 0.390278 +2025-05-17 06:15:31,727 - Epoch: [261][ 150/ 518] Overall Loss 2.609733 Objective Loss 2.609733 LR 0.000004 Time 0.386941 +2025-05-17 06:15:50,772 - Epoch: [261][ 200/ 518] Overall Loss 2.606929 Objective Loss 2.606929 LR 0.000004 Time 0.385423 +2025-05-17 06:16:09,786 - Epoch: [261][ 250/ 518] Overall Loss 2.603954 Objective Loss 2.603954 LR 0.000004 Time 0.384394 +2025-05-17 06:16:28,782 - Epoch: [261][ 300/ 518] Overall Loss 2.601736 Objective Loss 2.601736 LR 0.000004 Time 0.383641 +2025-05-17 06:16:47,766 - Epoch: [261][ 350/ 518] Overall Loss 2.595085 Objective Loss 2.595085 LR 0.000004 Time 0.383072 +2025-05-17 06:17:06,754 - Epoch: [261][ 400/ 518] Overall Loss 2.592706 Objective Loss 2.592706 LR 0.000004 Time 0.382655 +2025-05-17 06:17:25,931 - Epoch: [261][ 450/ 518] Overall Loss 2.587795 Objective Loss 2.587795 LR 0.000004 Time 0.382751 +2025-05-17 06:17:44,931 - Epoch: [261][ 500/ 518] Overall Loss 2.583569 Objective Loss 2.583569 LR 0.000004 Time 0.382475 +2025-05-17 06:17:51,620 - Epoch: [261][ 518/ 518] Overall Loss 2.582984 Objective Loss 2.582984 LR 0.000004 Time 0.382096 +2025-05-17 06:17:51,697 - --- validate (epoch=261)----------- +2025-05-17 06:17:51,698 - 4952 samples (32 per mini-batch) +2025-05-17 06:17:51,701 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:18:06,492 - Epoch: [261][ 50/ 155] Loss 2.758492 mAP 0.490468 +2025-05-17 06:18:21,470 - Epoch: [261][ 100/ 155] Loss 2.775083 mAP 0.492141 +2025-05-17 06:18:36,848 - Epoch: [261][ 150/ 155] Loss 2.781416 mAP 0.495467 +2025-05-17 06:18:39,880 - Epoch: [261][ 155/ 155] Loss 2.774638 mAP 0.494715 +2025-05-17 06:18:39,933 - ==> mAP: 0.49472 Loss: 2.775 + +2025-05-17 06:18:39,941 - ==> Best [mAP: 
0.496977 vloss: 2.791942 Params: 2177088 on epoch: 257] +2025-05-17 06:18:39,941 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:18:40,029 - + +2025-05-17 06:18:40,030 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:19:00,140 - Epoch: [262][ 50/ 518] Overall Loss 2.537315 Objective Loss 2.537315 LR 0.000004 Time 0.402142 +2025-05-17 06:19:19,272 - Epoch: [262][ 100/ 518] Overall Loss 2.560968 Objective Loss 2.560968 LR 0.000004 Time 0.392375 +2025-05-17 06:19:38,248 - Epoch: [262][ 150/ 518] Overall Loss 2.581225 Objective Loss 2.581225 LR 0.000004 Time 0.388083 +2025-05-17 06:19:57,301 - Epoch: [262][ 200/ 518] Overall Loss 2.578741 Objective Loss 2.578741 LR 0.000004 Time 0.386323 +2025-05-17 06:20:16,335 - Epoch: [262][ 250/ 518] Overall Loss 2.576521 Objective Loss 2.576521 LR 0.000004 Time 0.385189 +2025-05-17 06:20:35,383 - Epoch: [262][ 300/ 518] Overall Loss 2.573827 Objective Loss 2.573827 LR 0.000004 Time 0.384480 +2025-05-17 06:20:54,424 - Epoch: [262][ 350/ 518] Overall Loss 2.572465 Objective Loss 2.572465 LR 0.000004 Time 0.383955 +2025-05-17 06:21:13,596 - Epoch: [262][ 400/ 518] Overall Loss 2.577571 Objective Loss 2.577571 LR 0.000004 Time 0.383887 +2025-05-17 06:21:32,613 - Epoch: [262][ 450/ 518] Overall Loss 2.578678 Objective Loss 2.578678 LR 0.000004 Time 0.383491 +2025-05-17 06:21:51,639 - Epoch: [262][ 500/ 518] Overall Loss 2.577061 Objective Loss 2.577061 LR 0.000004 Time 0.383192 +2025-05-17 06:21:58,338 - Epoch: [262][ 518/ 518] Overall Loss 2.572595 Objective Loss 2.572595 LR 0.000004 Time 0.382807 +2025-05-17 06:21:58,415 - --- validate (epoch=262)----------- +2025-05-17 06:21:58,416 - 4952 samples (32 per mini-batch) +2025-05-17 06:21:58,419 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:22:13,295 - Epoch: [262][ 50/ 155] Loss 2.763065 mAP 0.501340 
+2025-05-17 06:22:28,509 - Epoch: [262][ 100/ 155] Loss 2.795945 mAP 0.492224 +2025-05-17 06:22:44,115 - Epoch: [262][ 150/ 155] Loss 2.782102 mAP 0.492390 +2025-05-17 06:22:46,983 - Epoch: [262][ 155/ 155] Loss 2.787097 mAP 0.492016 +2025-05-17 06:22:47,036 - ==> mAP: 0.49202 Loss: 2.787 + +2025-05-17 06:22:47,044 - ==> Best [mAP: 0.496977 vloss: 2.791942 Params: 2177088 on epoch: 257] +2025-05-17 06:22:47,044 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:22:47,131 - + +2025-05-17 06:22:47,132 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:23:07,243 - Epoch: [263][ 50/ 518] Overall Loss 2.575670 Objective Loss 2.575670 LR 0.000004 Time 0.402162 +2025-05-17 06:23:26,474 - Epoch: [263][ 100/ 518] Overall Loss 2.572515 Objective Loss 2.572515 LR 0.000004 Time 0.393386 +2025-05-17 06:23:45,451 - Epoch: [263][ 150/ 518] Overall Loss 2.569843 Objective Loss 2.569843 LR 0.000004 Time 0.388759 +2025-05-17 06:24:04,423 - Epoch: [263][ 200/ 518] Overall Loss 2.569257 Objective Loss 2.569257 LR 0.000004 Time 0.386426 +2025-05-17 06:24:23,401 - Epoch: [263][ 250/ 518] Overall Loss 2.570614 Objective Loss 2.570614 LR 0.000004 Time 0.385047 +2025-05-17 06:24:42,387 - Epoch: [263][ 300/ 518] Overall Loss 2.566223 Objective Loss 2.566223 LR 0.000004 Time 0.384153 +2025-05-17 06:25:01,534 - Epoch: [263][ 350/ 518] Overall Loss 2.573018 Objective Loss 2.573018 LR 0.000004 Time 0.383977 +2025-05-17 06:25:20,546 - Epoch: [263][ 400/ 518] Overall Loss 2.573904 Objective Loss 2.573904 LR 0.000004 Time 0.383508 +2025-05-17 06:25:39,562 - Epoch: [263][ 450/ 518] Overall Loss 2.578075 Objective Loss 2.578075 LR 0.000004 Time 0.383151 +2025-05-17 06:25:58,575 - Epoch: [263][ 500/ 518] Overall Loss 2.575674 Objective Loss 2.575674 LR 0.000004 Time 0.382859 +2025-05-17 06:26:05,270 - Epoch: [263][ 518/ 518] Overall Loss 2.572093 Objective Loss 2.572093 LR 0.000004 Time 0.382478 +2025-05-17 
06:26:05,339 - --- validate (epoch=263)----------- +2025-05-17 06:26:05,340 - 4952 samples (32 per mini-batch) +2025-05-17 06:26:05,343 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:26:20,067 - Epoch: [263][ 50/ 155] Loss 2.765761 mAP 0.491530 +2025-05-17 06:26:34,965 - Epoch: [263][ 100/ 155] Loss 2.775306 mAP 0.495093 +2025-05-17 06:26:50,341 - Epoch: [263][ 150/ 155] Loss 2.794139 mAP 0.485954 +2025-05-17 06:26:53,209 - Epoch: [263][ 155/ 155] Loss 2.794323 mAP 0.488295 +2025-05-17 06:26:53,261 - ==> mAP: 0.48829 Loss: 2.794 + +2025-05-17 06:26:53,269 - ==> Best [mAP: 0.496977 vloss: 2.791942 Params: 2177088 on epoch: 257] +2025-05-17 06:26:53,269 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:26:53,357 - + +2025-05-17 06:26:53,357 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:27:13,781 - Epoch: [264][ 50/ 518] Overall Loss 2.576952 Objective Loss 2.576952 LR 0.000004 Time 0.408423 +2025-05-17 06:27:32,822 - Epoch: [264][ 100/ 518] Overall Loss 2.585623 Objective Loss 2.585623 LR 0.000004 Time 0.394614 +2025-05-17 06:27:51,792 - Epoch: [264][ 150/ 518] Overall Loss 2.589170 Objective Loss 2.589170 LR 0.000004 Time 0.389531 +2025-05-17 06:28:10,794 - Epoch: [264][ 200/ 518] Overall Loss 2.572577 Objective Loss 2.572577 LR 0.000004 Time 0.387153 +2025-05-17 06:28:29,782 - Epoch: [264][ 250/ 518] Overall Loss 2.572639 Objective Loss 2.572639 LR 0.000004 Time 0.385672 +2025-05-17 06:28:49,023 - Epoch: [264][ 300/ 518] Overall Loss 2.576888 Objective Loss 2.576888 LR 0.000004 Time 0.385526 +2025-05-17 06:29:08,083 - Epoch: [264][ 350/ 518] Overall Loss 2.578270 Objective Loss 2.578270 LR 0.000004 Time 0.384907 +2025-05-17 06:29:27,122 - Epoch: [264][ 400/ 518] Overall Loss 2.579720 Objective Loss 2.579720 LR 0.000004 Time 0.384389 +2025-05-17 06:29:46,115 - Epoch: [264][ 450/ 518] 
Overall Loss 2.575369 Objective Loss 2.575369 LR 0.000004 Time 0.383882 +2025-05-17 06:30:05,162 - Epoch: [264][ 500/ 518] Overall Loss 2.570649 Objective Loss 2.570649 LR 0.000004 Time 0.383586 +2025-05-17 06:30:11,872 - Epoch: [264][ 518/ 518] Overall Loss 2.571598 Objective Loss 2.571598 LR 0.000004 Time 0.383211 +2025-05-17 06:30:11,949 - --- validate (epoch=264)----------- +2025-05-17 06:30:11,950 - 4952 samples (32 per mini-batch) +2025-05-17 06:30:11,953 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:30:27,115 - Epoch: [264][ 50/ 155] Loss 2.735604 mAP 0.523345 +2025-05-17 06:30:42,218 - Epoch: [264][ 100/ 155] Loss 2.740969 mAP 0.504695 +2025-05-17 06:30:57,939 - Epoch: [264][ 150/ 155] Loss 2.768092 mAP 0.500040 +2025-05-17 06:31:01,047 - Epoch: [264][ 155/ 155] Loss 2.770152 mAP 0.497710 +2025-05-17 06:31:01,101 - ==> mAP: 0.49771 Loss: 2.770 + +2025-05-17 06:31:01,109 - ==> Best [mAP: 0.497710 vloss: 2.770152 Params: 2177088 on epoch: 264] +2025-05-17 06:31:01,109 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:31:01,224 - + +2025-05-17 06:31:01,224 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:31:21,237 - Epoch: [265][ 50/ 518] Overall Loss 2.505725 Objective Loss 2.505725 LR 0.000004 Time 0.400181 +2025-05-17 06:31:40,174 - Epoch: [265][ 100/ 518] Overall Loss 2.537939 Objective Loss 2.537939 LR 0.000004 Time 0.389455 +2025-05-17 06:31:59,127 - Epoch: [265][ 150/ 518] Overall Loss 2.535085 Objective Loss 2.535085 LR 0.000004 Time 0.385980 +2025-05-17 06:32:18,123 - Epoch: [265][ 200/ 518] Overall Loss 2.540878 Objective Loss 2.540878 LR 0.000004 Time 0.384463 +2025-05-17 06:32:37,119 - Epoch: [265][ 250/ 518] Overall Loss 2.552495 Objective Loss 2.552495 LR 0.000004 Time 0.383551 +2025-05-17 06:32:56,147 - Epoch: [265][ 300/ 518] Overall Loss 2.562326 Objective Loss 
2.562326 LR 0.000004 Time 0.383051 +2025-05-17 06:33:15,169 - Epoch: [265][ 350/ 518] Overall Loss 2.565061 Objective Loss 2.565061 LR 0.000004 Time 0.382675 +2025-05-17 06:33:34,370 - Epoch: [265][ 400/ 518] Overall Loss 2.564451 Objective Loss 2.564451 LR 0.000004 Time 0.382841 +2025-05-17 06:33:53,382 - Epoch: [265][ 450/ 518] Overall Loss 2.563150 Objective Loss 2.563150 LR 0.000004 Time 0.382551 +2025-05-17 06:34:12,363 - Epoch: [265][ 500/ 518] Overall Loss 2.558247 Objective Loss 2.558247 LR 0.000004 Time 0.382254 +2025-05-17 06:34:19,064 - Epoch: [265][ 518/ 518] Overall Loss 2.557282 Objective Loss 2.557282 LR 0.000004 Time 0.381908 +2025-05-17 06:34:19,141 - --- validate (epoch=265)----------- +2025-05-17 06:34:19,142 - 4952 samples (32 per mini-batch) +2025-05-17 06:34:19,145 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:34:34,188 - Epoch: [265][ 50/ 155] Loss 2.820370 mAP 0.488936 +2025-05-17 06:34:49,544 - Epoch: [265][ 100/ 155] Loss 2.791319 mAP 0.494007 +2025-05-17 06:35:05,373 - Epoch: [265][ 150/ 155] Loss 2.788112 mAP 0.494226 +2025-05-17 06:35:08,339 - Epoch: [265][ 155/ 155] Loss 2.784116 mAP 0.493587 +2025-05-17 06:35:08,394 - ==> mAP: 0.49359 Loss: 2.784 + +2025-05-17 06:35:08,402 - ==> Best [mAP: 0.497710 vloss: 2.770152 Params: 2177088 on epoch: 264] +2025-05-17 06:35:08,402 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:35:08,493 - + +2025-05-17 06:35:08,493 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:35:28,543 - Epoch: [266][ 50/ 518] Overall Loss 2.543810 Objective Loss 2.543810 LR 0.000004 Time 0.400930 +2025-05-17 06:35:47,718 - Epoch: [266][ 100/ 518] Overall Loss 2.575044 Objective Loss 2.575044 LR 0.000004 Time 0.392203 +2025-05-17 06:36:06,751 - Epoch: [266][ 150/ 518] Overall Loss 2.570813 Objective Loss 2.570813 LR 0.000004 Time 0.388344 
+2025-05-17 06:36:25,754 - Epoch: [266][ 200/ 518] Overall Loss 2.568020 Objective Loss 2.568020 LR 0.000004 Time 0.386268 +2025-05-17 06:36:44,733 - Epoch: [266][ 250/ 518] Overall Loss 2.568905 Objective Loss 2.568905 LR 0.000004 Time 0.384925 +2025-05-17 06:37:03,701 - Epoch: [266][ 300/ 518] Overall Loss 2.555865 Objective Loss 2.555865 LR 0.000004 Time 0.383995 +2025-05-17 06:37:22,692 - Epoch: [266][ 350/ 518] Overall Loss 2.558128 Objective Loss 2.558128 LR 0.000004 Time 0.383395 +2025-05-17 06:37:41,699 - Epoch: [266][ 400/ 518] Overall Loss 2.558516 Objective Loss 2.558516 LR 0.000004 Time 0.382985 +2025-05-17 06:38:00,865 - Epoch: [266][ 450/ 518] Overall Loss 2.558620 Objective Loss 2.558620 LR 0.000004 Time 0.383020 +2025-05-17 06:38:19,859 - Epoch: [266][ 500/ 518] Overall Loss 2.558995 Objective Loss 2.558995 LR 0.000004 Time 0.382702 +2025-05-17 06:38:26,574 - Epoch: [266][ 518/ 518] Overall Loss 2.559634 Objective Loss 2.559634 LR 0.000004 Time 0.382366 +2025-05-17 06:38:26,653 - --- validate (epoch=266)----------- +2025-05-17 06:38:26,654 - 4952 samples (32 per mini-batch) +2025-05-17 06:38:26,657 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:38:41,364 - Epoch: [266][ 50/ 155] Loss 2.763586 mAP 0.521043 +2025-05-17 06:38:56,220 - Epoch: [266][ 100/ 155] Loss 2.765778 mAP 0.499591 +2025-05-17 06:39:11,740 - Epoch: [266][ 150/ 155] Loss 2.771333 mAP 0.498073 +2025-05-17 06:39:14,768 - Epoch: [266][ 155/ 155] Loss 2.772443 mAP 0.495327 +2025-05-17 06:39:14,821 - ==> mAP: 0.49533 Loss: 2.772 + +2025-05-17 06:39:14,829 - ==> Best [mAP: 0.497710 vloss: 2.770152 Params: 2177088 on epoch: 264] +2025-05-17 06:39:14,829 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:39:14,918 - + +2025-05-17 06:39:14,918 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:39:34,911 - Epoch: [267][ 
50/ 518] Overall Loss 2.531031 Objective Loss 2.531031 LR 0.000004 Time 0.399787 +2025-05-17 06:39:54,069 - Epoch: [267][ 100/ 518] Overall Loss 2.506581 Objective Loss 2.506581 LR 0.000004 Time 0.391456 +2025-05-17 06:40:13,050 - Epoch: [267][ 150/ 518] Overall Loss 2.518837 Objective Loss 2.518837 LR 0.000004 Time 0.387505 +2025-05-17 06:40:32,020 - Epoch: [267][ 200/ 518] Overall Loss 2.538404 Objective Loss 2.538404 LR 0.000004 Time 0.385468 +2025-05-17 06:40:51,006 - Epoch: [267][ 250/ 518] Overall Loss 2.550348 Objective Loss 2.550348 LR 0.000004 Time 0.384315 +2025-05-17 06:41:09,984 - Epoch: [267][ 300/ 518] Overall Loss 2.561016 Objective Loss 2.561016 LR 0.000004 Time 0.383518 +2025-05-17 06:41:29,006 - Epoch: [267][ 350/ 518] Overall Loss 2.566393 Objective Loss 2.566393 LR 0.000004 Time 0.383074 +2025-05-17 06:41:48,215 - Epoch: [267][ 400/ 518] Overall Loss 2.569888 Objective Loss 2.569888 LR 0.000004 Time 0.383209 +2025-05-17 06:42:07,235 - Epoch: [267][ 450/ 518] Overall Loss 2.566769 Objective Loss 2.566769 LR 0.000004 Time 0.382896 +2025-05-17 06:42:26,232 - Epoch: [267][ 500/ 518] Overall Loss 2.567115 Objective Loss 2.567115 LR 0.000004 Time 0.382598 +2025-05-17 06:42:32,926 - Epoch: [267][ 518/ 518] Overall Loss 2.564794 Objective Loss 2.564794 LR 0.000004 Time 0.382225 +2025-05-17 06:42:33,002 - --- validate (epoch=267)----------- +2025-05-17 06:42:33,003 - 4952 samples (32 per mini-batch) +2025-05-17 06:42:33,006 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:42:47,996 - Epoch: [267][ 50/ 155] Loss 2.787256 mAP 0.488275 +2025-05-17 06:43:03,044 - Epoch: [267][ 100/ 155] Loss 2.788650 mAP 0.486704 +2025-05-17 06:43:18,587 - Epoch: [267][ 150/ 155] Loss 2.773551 mAP 0.490468 +2025-05-17 06:43:21,444 - Epoch: [267][ 155/ 155] Loss 2.776627 mAP 0.489579 +2025-05-17 06:43:21,506 - ==> mAP: 0.48958 Loss: 2.777 + +2025-05-17 06:43:21,514 - ==> Best [mAP: 0.497710 
vloss: 2.770152 Params: 2177088 on epoch: 264] +2025-05-17 06:43:21,514 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:43:21,602 - + +2025-05-17 06:43:21,602 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:43:41,913 - Epoch: [268][ 50/ 518] Overall Loss 2.535468 Objective Loss 2.535468 LR 0.000004 Time 0.406138 +2025-05-17 06:44:00,883 - Epoch: [268][ 100/ 518] Overall Loss 2.547041 Objective Loss 2.547041 LR 0.000004 Time 0.392761 +2025-05-17 06:44:19,868 - Epoch: [268][ 150/ 518] Overall Loss 2.560779 Objective Loss 2.560779 LR 0.000004 Time 0.388395 +2025-05-17 06:44:38,866 - Epoch: [268][ 200/ 518] Overall Loss 2.571973 Objective Loss 2.571973 LR 0.000004 Time 0.386281 +2025-05-17 06:44:57,892 - Epoch: [268][ 250/ 518] Overall Loss 2.579919 Objective Loss 2.579919 LR 0.000004 Time 0.385125 +2025-05-17 06:45:16,899 - Epoch: [268][ 300/ 518] Overall Loss 2.570561 Objective Loss 2.570561 LR 0.000004 Time 0.384290 +2025-05-17 06:45:35,919 - Epoch: [268][ 350/ 518] Overall Loss 2.574094 Objective Loss 2.574094 LR 0.000004 Time 0.383730 +2025-05-17 06:45:55,111 - Epoch: [268][ 400/ 518] Overall Loss 2.577363 Objective Loss 2.577363 LR 0.000004 Time 0.383739 +2025-05-17 06:46:14,125 - Epoch: [268][ 450/ 518] Overall Loss 2.576489 Objective Loss 2.576489 LR 0.000004 Time 0.383354 +2025-05-17 06:46:33,131 - Epoch: [268][ 500/ 518] Overall Loss 2.578788 Objective Loss 2.578788 LR 0.000004 Time 0.383027 +2025-05-17 06:46:39,859 - Epoch: [268][ 518/ 518] Overall Loss 2.580341 Objective Loss 2.580341 LR 0.000004 Time 0.382704 +2025-05-17 06:46:39,934 - --- validate (epoch=268)----------- +2025-05-17 06:46:39,935 - 4952 samples (32 per mini-batch) +2025-05-17 06:46:39,938 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:46:54,750 - Epoch: [268][ 50/ 155] Loss 2.753545 mAP 0.493741 +2025-05-17 
06:47:09,875 - Epoch: [268][ 100/ 155] Loss 2.775975 mAP 0.493140 +2025-05-17 06:47:25,220 - Epoch: [268][ 150/ 155] Loss 2.769901 mAP 0.495224 +2025-05-17 06:47:28,252 - Epoch: [268][ 155/ 155] Loss 2.772623 mAP 0.495123 +2025-05-17 06:47:28,307 - ==> mAP: 0.49512 Loss: 2.773 + +2025-05-17 06:47:28,315 - ==> Best [mAP: 0.497710 vloss: 2.770152 Params: 2177088 on epoch: 264] +2025-05-17 06:47:28,315 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:47:28,399 - + +2025-05-17 06:47:28,399 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:47:48,525 - Epoch: [269][ 50/ 518] Overall Loss 2.613779 Objective Loss 2.613779 LR 0.000004 Time 0.402444 +2025-05-17 06:48:07,621 - Epoch: [269][ 100/ 518] Overall Loss 2.570954 Objective Loss 2.570954 LR 0.000004 Time 0.392168 +2025-05-17 06:48:26,654 - Epoch: [269][ 150/ 518] Overall Loss 2.562525 Objective Loss 2.562525 LR 0.000004 Time 0.388332 +2025-05-17 06:48:45,688 - Epoch: [269][ 200/ 518] Overall Loss 2.553661 Objective Loss 2.553661 LR 0.000004 Time 0.386412 +2025-05-17 06:49:04,693 - Epoch: [269][ 250/ 518] Overall Loss 2.556095 Objective Loss 2.556095 LR 0.000004 Time 0.385145 +2025-05-17 06:49:23,680 - Epoch: [269][ 300/ 518] Overall Loss 2.550474 Objective Loss 2.550474 LR 0.000004 Time 0.384240 +2025-05-17 06:49:42,674 - Epoch: [269][ 350/ 518] Overall Loss 2.544260 Objective Loss 2.544260 LR 0.000004 Time 0.383615 +2025-05-17 06:50:01,710 - Epoch: [269][ 400/ 518] Overall Loss 2.537729 Objective Loss 2.537729 LR 0.000004 Time 0.383250 +2025-05-17 06:50:20,748 - Epoch: [269][ 450/ 518] Overall Loss 2.539902 Objective Loss 2.539902 LR 0.000004 Time 0.382971 +2025-05-17 06:50:39,972 - Epoch: [269][ 500/ 518] Overall Loss 2.540887 Objective Loss 2.540887 LR 0.000004 Time 0.383122 +2025-05-17 06:50:46,687 - Epoch: [269][ 518/ 518] Overall Loss 2.538279 Objective Loss 2.538279 LR 0.000004 Time 0.382770 +2025-05-17 06:50:46,762 - 
--- validate (epoch=269)----------- +2025-05-17 06:50:46,763 - 4952 samples (32 per mini-batch) +2025-05-17 06:50:46,766 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:51:01,479 - Epoch: [269][ 50/ 155] Loss 2.780246 mAP 0.489318 +2025-05-17 06:51:16,291 - Epoch: [269][ 100/ 155] Loss 2.746733 mAP 0.493884 +2025-05-17 06:51:31,832 - Epoch: [269][ 150/ 155] Loss 2.761814 mAP 0.490638 +2025-05-17 06:51:34,827 - Epoch: [269][ 155/ 155] Loss 2.760500 mAP 0.493023 +2025-05-17 06:51:34,880 - ==> mAP: 0.49302 Loss: 2.760 + +2025-05-17 06:51:34,889 - ==> Best [mAP: 0.497710 vloss: 2.770152 Params: 2177088 on epoch: 264] +2025-05-17 06:51:34,889 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:51:34,976 - + +2025-05-17 06:51:34,976 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:51:55,138 - Epoch: [270][ 50/ 518] Overall Loss 2.540367 Objective Loss 2.540367 LR 0.000004 Time 0.403166 +2025-05-17 06:52:14,348 - Epoch: [270][ 100/ 518] Overall Loss 2.550685 Objective Loss 2.550685 LR 0.000004 Time 0.393675 +2025-05-17 06:52:33,376 - Epoch: [270][ 150/ 518] Overall Loss 2.565763 Objective Loss 2.565763 LR 0.000004 Time 0.389294 +2025-05-17 06:52:52,418 - Epoch: [270][ 200/ 518] Overall Loss 2.557875 Objective Loss 2.557875 LR 0.000004 Time 0.387175 +2025-05-17 06:53:11,438 - Epoch: [270][ 250/ 518] Overall Loss 2.556905 Objective Loss 2.556905 LR 0.000004 Time 0.385814 +2025-05-17 06:53:30,444 - Epoch: [270][ 300/ 518] Overall Loss 2.556252 Objective Loss 2.556252 LR 0.000004 Time 0.384843 +2025-05-17 06:53:49,464 - Epoch: [270][ 350/ 518] Overall Loss 2.550068 Objective Loss 2.550068 LR 0.000004 Time 0.384202 +2025-05-17 06:54:08,679 - Epoch: [270][ 400/ 518] Overall Loss 2.545368 Objective Loss 2.545368 LR 0.000004 Time 0.384213 +2025-05-17 06:54:27,743 - Epoch: [270][ 450/ 518] Overall Loss 
2.554152 Objective Loss 2.554152 LR 0.000004 Time 0.383886 +2025-05-17 06:54:46,826 - Epoch: [270][ 500/ 518] Overall Loss 2.559293 Objective Loss 2.559293 LR 0.000004 Time 0.383662 +2025-05-17 06:54:53,558 - Epoch: [270][ 518/ 518] Overall Loss 2.558899 Objective Loss 2.558899 LR 0.000004 Time 0.383325 +2025-05-17 06:54:53,638 - --- validate (epoch=270)----------- +2025-05-17 06:54:53,639 - 4952 samples (32 per mini-batch) +2025-05-17 06:54:53,642 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:55:08,453 - Epoch: [270][ 50/ 155] Loss 2.807820 mAP 0.471962 +2025-05-17 06:55:23,540 - Epoch: [270][ 100/ 155] Loss 2.806655 mAP 0.487307 +2025-05-17 06:55:39,120 - Epoch: [270][ 150/ 155] Loss 2.778003 mAP 0.497828 +2025-05-17 06:55:42,105 - Epoch: [270][ 155/ 155] Loss 2.784220 mAP 0.495262 +2025-05-17 06:55:42,163 - ==> mAP: 0.49526 Loss: 2.784 + +2025-05-17 06:55:42,172 - ==> Best [mAP: 0.497710 vloss: 2.770152 Params: 2177088 on epoch: 264] +2025-05-17 06:55:42,172 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:55:42,262 - + +2025-05-17 06:55:42,263 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 06:56:02,541 - Epoch: [271][ 50/ 518] Overall Loss 2.603301 Objective Loss 2.603301 LR 0.000004 Time 0.405487 +2025-05-17 06:56:21,525 - Epoch: [271][ 100/ 518] Overall Loss 2.593272 Objective Loss 2.593272 LR 0.000004 Time 0.392577 +2025-05-17 06:56:40,510 - Epoch: [271][ 150/ 518] Overall Loss 2.569817 Objective Loss 2.569817 LR 0.000004 Time 0.388270 +2025-05-17 06:56:59,519 - Epoch: [271][ 200/ 518] Overall Loss 2.581964 Objective Loss 2.581964 LR 0.000004 Time 0.386246 +2025-05-17 06:57:18,537 - Epoch: [271][ 250/ 518] Overall Loss 2.568392 Objective Loss 2.568392 LR 0.000004 Time 0.385063 +2025-05-17 06:57:37,553 - Epoch: [271][ 300/ 518] Overall Loss 2.556632 Objective Loss 2.556632 LR 0.000004 
Time 0.384272 +2025-05-17 06:57:56,542 - Epoch: [271][ 350/ 518] Overall Loss 2.555234 Objective Loss 2.555234 LR 0.000004 Time 0.383627 +2025-05-17 06:58:15,576 - Epoch: [271][ 400/ 518] Overall Loss 2.554658 Objective Loss 2.554658 LR 0.000004 Time 0.383255 +2025-05-17 06:58:34,745 - Epoch: [271][ 450/ 518] Overall Loss 2.554137 Objective Loss 2.554137 LR 0.000004 Time 0.383267 +2025-05-17 06:58:53,743 - Epoch: [271][ 500/ 518] Overall Loss 2.555539 Objective Loss 2.555539 LR 0.000004 Time 0.382933 +2025-05-17 06:59:00,434 - Epoch: [271][ 518/ 518] Overall Loss 2.558327 Objective Loss 2.558327 LR 0.000004 Time 0.382543 +2025-05-17 06:59:00,509 - --- validate (epoch=271)----------- +2025-05-17 06:59:00,510 - 4952 samples (32 per mini-batch) +2025-05-17 06:59:00,513 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 06:59:15,598 - Epoch: [271][ 50/ 155] Loss 2.780890 mAP 0.492455 +2025-05-17 06:59:30,992 - Epoch: [271][ 100/ 155] Loss 2.772767 mAP 0.496975 +2025-05-17 06:59:46,771 - Epoch: [271][ 150/ 155] Loss 2.776862 mAP 0.495856 +2025-05-17 06:59:49,917 - Epoch: [271][ 155/ 155] Loss 2.773444 mAP 0.497778 +2025-05-17 06:59:49,970 - ==> mAP: 0.49778 Loss: 2.773 + +2025-05-17 06:59:49,979 - ==> Best [mAP: 0.497778 vloss: 2.773444 Params: 2177088 on epoch: 271] +2025-05-17 06:59:49,979 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 06:59:50,095 - + +2025-05-17 06:59:50,095 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:00:10,147 - Epoch: [272][ 50/ 518] Overall Loss 2.620191 Objective Loss 2.620191 LR 0.000004 Time 0.400977 +2025-05-17 07:00:29,340 - Epoch: [272][ 100/ 518] Overall Loss 2.577249 Objective Loss 2.577249 LR 0.000004 Time 0.392410 +2025-05-17 07:00:48,371 - Epoch: [272][ 150/ 518] Overall Loss 2.570441 Objective Loss 2.570441 LR 0.000004 Time 0.388471 +2025-05-17 07:01:07,400 - 
Epoch: [272][ 200/ 518] Overall Loss 2.574423 Objective Loss 2.574423 LR 0.000004 Time 0.386495 +2025-05-17 07:01:26,389 - Epoch: [272][ 250/ 518] Overall Loss 2.579873 Objective Loss 2.579873 LR 0.000004 Time 0.385146 +2025-05-17 07:01:45,413 - Epoch: [272][ 300/ 518] Overall Loss 2.568549 Objective Loss 2.568549 LR 0.000004 Time 0.384367 +2025-05-17 07:02:04,402 - Epoch: [272][ 350/ 518] Overall Loss 2.559870 Objective Loss 2.559870 LR 0.000004 Time 0.383706 +2025-05-17 07:02:23,399 - Epoch: [272][ 400/ 518] Overall Loss 2.554395 Objective Loss 2.554395 LR 0.000004 Time 0.383233 +2025-05-17 07:02:42,403 - Epoch: [272][ 450/ 518] Overall Loss 2.555569 Objective Loss 2.555569 LR 0.000004 Time 0.382879 +2025-05-17 07:03:01,546 - Epoch: [272][ 500/ 518] Overall Loss 2.551014 Objective Loss 2.551014 LR 0.000004 Time 0.382875 +2025-05-17 07:03:08,259 - Epoch: [272][ 518/ 518] Overall Loss 2.550733 Objective Loss 2.550733 LR 0.000004 Time 0.382530 +2025-05-17 07:03:08,338 - --- validate (epoch=272)----------- +2025-05-17 07:03:08,339 - 4952 samples (32 per mini-batch) +2025-05-17 07:03:08,343 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:03:23,085 - Epoch: [272][ 50/ 155] Loss 2.715615 mAP 0.513439 +2025-05-17 07:03:38,033 - Epoch: [272][ 100/ 155] Loss 2.754579 mAP 0.497007 +2025-05-17 07:03:53,824 - Epoch: [272][ 150/ 155] Loss 2.774435 mAP 0.491075 +2025-05-17 07:03:56,949 - Epoch: [272][ 155/ 155] Loss 2.776194 mAP 0.491446 +2025-05-17 07:03:57,003 - ==> mAP: 0.49145 Loss: 2.776 + +2025-05-17 07:03:57,012 - ==> Best [mAP: 0.497778 vloss: 2.773444 Params: 2177088 on epoch: 271] +2025-05-17 07:03:57,012 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:03:57,102 - + +2025-05-17 07:03:57,102 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:04:17,285 - Epoch: [273][ 50/ 518] Overall Loss 
2.518202 Objective Loss 2.518202 LR 0.000004 Time 0.403575 +2025-05-17 07:04:36,456 - Epoch: [273][ 100/ 518] Overall Loss 2.555535 Objective Loss 2.555535 LR 0.000004 Time 0.393498 +2025-05-17 07:04:55,487 - Epoch: [273][ 150/ 518] Overall Loss 2.537849 Objective Loss 2.537849 LR 0.000004 Time 0.389201 +2025-05-17 07:05:14,531 - Epoch: [273][ 200/ 518] Overall Loss 2.535445 Objective Loss 2.535445 LR 0.000004 Time 0.387115 +2025-05-17 07:05:33,512 - Epoch: [273][ 250/ 518] Overall Loss 2.537920 Objective Loss 2.537920 LR 0.000004 Time 0.385613 +2025-05-17 07:05:52,523 - Epoch: [273][ 300/ 518] Overall Loss 2.547074 Objective Loss 2.547074 LR 0.000004 Time 0.384710 +2025-05-17 07:06:11,551 - Epoch: [273][ 350/ 518] Overall Loss 2.553176 Objective Loss 2.553176 LR 0.000004 Time 0.384112 +2025-05-17 07:06:30,726 - Epoch: [273][ 400/ 518] Overall Loss 2.556091 Objective Loss 2.556091 LR 0.000004 Time 0.384034 +2025-05-17 07:06:49,779 - Epoch: [273][ 450/ 518] Overall Loss 2.555692 Objective Loss 2.555692 LR 0.000004 Time 0.383701 +2025-05-17 07:07:08,799 - Epoch: [273][ 500/ 518] Overall Loss 2.551514 Objective Loss 2.551514 LR 0.000004 Time 0.383370 +2025-05-17 07:07:15,479 - Epoch: [273][ 518/ 518] Overall Loss 2.551650 Objective Loss 2.551650 LR 0.000004 Time 0.382943 +2025-05-17 07:07:15,558 - --- validate (epoch=273)----------- +2025-05-17 07:07:15,559 - 4952 samples (32 per mini-batch) +2025-05-17 07:07:15,562 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:07:30,211 - Epoch: [273][ 50/ 155] Loss 2.703968 mAP 0.503767 +2025-05-17 07:07:45,248 - Epoch: [273][ 100/ 155] Loss 2.751967 mAP 0.492827 +2025-05-17 07:08:00,498 - Epoch: [273][ 150/ 155] Loss 2.754446 mAP 0.496036 +2025-05-17 07:08:03,489 - Epoch: [273][ 155/ 155] Loss 2.761488 mAP 0.496004 +2025-05-17 07:08:03,541 - ==> mAP: 0.49600 Loss: 2.761 + +2025-05-17 07:08:03,550 - ==> Best [mAP: 0.497778 vloss: 2.773444 Params: 
2177088 on epoch: 271] +2025-05-17 07:08:03,550 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:08:03,637 - + +2025-05-17 07:08:03,637 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:08:23,810 - Epoch: [274][ 50/ 518] Overall Loss 2.520733 Objective Loss 2.520733 LR 0.000004 Time 0.403397 +2025-05-17 07:08:42,821 - Epoch: [274][ 100/ 518] Overall Loss 2.543939 Objective Loss 2.543939 LR 0.000004 Time 0.391793 +2025-05-17 07:09:01,830 - Epoch: [274][ 150/ 518] Overall Loss 2.547647 Objective Loss 2.547647 LR 0.000004 Time 0.387915 +2025-05-17 07:09:20,824 - Epoch: [274][ 200/ 518] Overall Loss 2.541627 Objective Loss 2.541627 LR 0.000004 Time 0.385901 +2025-05-17 07:09:39,827 - Epoch: [274][ 250/ 518] Overall Loss 2.547567 Objective Loss 2.547567 LR 0.000004 Time 0.384725 +2025-05-17 07:09:58,829 - Epoch: [274][ 300/ 518] Overall Loss 2.548885 Objective Loss 2.548885 LR 0.000004 Time 0.383942 +2025-05-17 07:10:17,832 - Epoch: [274][ 350/ 518] Overall Loss 2.548961 Objective Loss 2.548961 LR 0.000004 Time 0.383384 +2025-05-17 07:10:36,826 - Epoch: [274][ 400/ 518] Overall Loss 2.549765 Objective Loss 2.549765 LR 0.000004 Time 0.382941 +2025-05-17 07:10:55,968 - Epoch: [274][ 450/ 518] Overall Loss 2.547811 Objective Loss 2.547811 LR 0.000004 Time 0.382928 +2025-05-17 07:11:14,958 - Epoch: [274][ 500/ 518] Overall Loss 2.546108 Objective Loss 2.546108 LR 0.000004 Time 0.382611 +2025-05-17 07:11:21,661 - Epoch: [274][ 518/ 518] Overall Loss 2.546763 Objective Loss 2.546763 LR 0.000004 Time 0.382255 +2025-05-17 07:11:21,736 - --- validate (epoch=274)----------- +2025-05-17 07:11:21,737 - 4952 samples (32 per mini-batch) +2025-05-17 07:11:21,740 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:11:36,545 - Epoch: [274][ 50/ 155] Loss 2.792128 mAP 0.502534 +2025-05-17 07:11:51,514 - Epoch: [274][ 
100/ 155] Loss 2.766861 mAP 0.494606 +2025-05-17 07:12:06,740 - Epoch: [274][ 150/ 155] Loss 2.770435 mAP 0.490127 +2025-05-17 07:12:09,733 - Epoch: [274][ 155/ 155] Loss 2.767510 mAP 0.491862 +2025-05-17 07:12:09,787 - ==> mAP: 0.49186 Loss: 2.768 + +2025-05-17 07:12:09,795 - ==> Best [mAP: 0.497778 vloss: 2.773444 Params: 2177088 on epoch: 271] +2025-05-17 07:12:09,795 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:12:09,882 - + +2025-05-17 07:12:09,883 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:12:29,919 - Epoch: [275][ 50/ 518] Overall Loss 2.575774 Objective Loss 2.575774 LR 0.000004 Time 0.400674 +2025-05-17 07:12:49,113 - Epoch: [275][ 100/ 518] Overall Loss 2.566413 Objective Loss 2.566413 LR 0.000004 Time 0.392261 +2025-05-17 07:13:08,101 - Epoch: [275][ 150/ 518] Overall Loss 2.556154 Objective Loss 2.556154 LR 0.000004 Time 0.388085 +2025-05-17 07:13:27,104 - Epoch: [275][ 200/ 518] Overall Loss 2.555862 Objective Loss 2.555862 LR 0.000004 Time 0.386072 +2025-05-17 07:13:46,136 - Epoch: [275][ 250/ 518] Overall Loss 2.547039 Objective Loss 2.547039 LR 0.000004 Time 0.384985 +2025-05-17 07:14:05,178 - Epoch: [275][ 300/ 518] Overall Loss 2.555701 Objective Loss 2.555701 LR 0.000004 Time 0.384292 +2025-05-17 07:14:24,178 - Epoch: [275][ 350/ 518] Overall Loss 2.560410 Objective Loss 2.560410 LR 0.000004 Time 0.383674 +2025-05-17 07:14:43,396 - Epoch: [275][ 400/ 518] Overall Loss 2.558205 Objective Loss 2.558205 LR 0.000004 Time 0.383758 +2025-05-17 07:15:02,371 - Epoch: [275][ 450/ 518] Overall Loss 2.557243 Objective Loss 2.557243 LR 0.000004 Time 0.383281 +2025-05-17 07:15:21,427 - Epoch: [275][ 500/ 518] Overall Loss 2.559790 Objective Loss 2.559790 LR 0.000004 Time 0.383064 +2025-05-17 07:15:28,130 - Epoch: [275][ 518/ 518] Overall Loss 2.560961 Objective Loss 2.560961 LR 0.000004 Time 0.382692 +2025-05-17 07:15:28,201 - --- validate 
(epoch=275)----------- +2025-05-17 07:15:28,202 - 4952 samples (32 per mini-batch) +2025-05-17 07:15:28,205 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:15:43,264 - Epoch: [275][ 50/ 155] Loss 2.737845 mAP 0.479616 +2025-05-17 07:15:58,606 - Epoch: [275][ 100/ 155] Loss 2.774333 mAP 0.492875 +2025-05-17 07:16:14,490 - Epoch: [275][ 150/ 155] Loss 2.771235 mAP 0.495260 +2025-05-17 07:16:17,470 - Epoch: [275][ 155/ 155] Loss 2.768830 mAP 0.495504 +2025-05-17 07:16:17,525 - ==> mAP: 0.49550 Loss: 2.769 + +2025-05-17 07:16:17,533 - ==> Best [mAP: 0.497778 vloss: 2.773444 Params: 2177088 on epoch: 271] +2025-05-17 07:16:17,534 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:16:17,624 - + +2025-05-17 07:16:17,625 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:16:37,946 - Epoch: [276][ 50/ 518] Overall Loss 2.594060 Objective Loss 2.594060 LR 0.000004 Time 0.406362 +2025-05-17 07:16:56,918 - Epoch: [276][ 100/ 518] Overall Loss 2.568704 Objective Loss 2.568704 LR 0.000004 Time 0.392885 +2025-05-17 07:17:15,959 - Epoch: [276][ 150/ 518] Overall Loss 2.552436 Objective Loss 2.552436 LR 0.000004 Time 0.388861 +2025-05-17 07:17:35,012 - Epoch: [276][ 200/ 518] Overall Loss 2.547979 Objective Loss 2.547979 LR 0.000004 Time 0.386906 +2025-05-17 07:17:54,068 - Epoch: [276][ 250/ 518] Overall Loss 2.570427 Objective Loss 2.570427 LR 0.000004 Time 0.385746 +2025-05-17 07:18:13,125 - Epoch: [276][ 300/ 518] Overall Loss 2.568740 Objective Loss 2.568740 LR 0.000004 Time 0.384977 +2025-05-17 07:18:32,137 - Epoch: [276][ 350/ 518] Overall Loss 2.565235 Objective Loss 2.565235 LR 0.000004 Time 0.384296 +2025-05-17 07:18:51,127 - Epoch: [276][ 400/ 518] Overall Loss 2.565547 Objective Loss 2.565547 LR 0.000004 Time 0.383731 +2025-05-17 07:19:10,305 - Epoch: [276][ 450/ 518] Overall Loss 2.556712 Objective 
Loss 2.556712 LR 0.000004 Time 0.383708 +2025-05-17 07:19:29,303 - Epoch: [276][ 500/ 518] Overall Loss 2.561574 Objective Loss 2.561574 LR 0.000004 Time 0.383331 +2025-05-17 07:19:35,997 - Epoch: [276][ 518/ 518] Overall Loss 2.563366 Objective Loss 2.563366 LR 0.000004 Time 0.382933 +2025-05-17 07:19:36,073 - --- validate (epoch=276)----------- +2025-05-17 07:19:36,074 - 4952 samples (32 per mini-batch) +2025-05-17 07:19:36,077 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:19:50,838 - Epoch: [276][ 50/ 155] Loss 2.753609 mAP 0.487680 +2025-05-17 07:20:05,992 - Epoch: [276][ 100/ 155] Loss 2.773473 mAP 0.486021 +2025-05-17 07:20:21,468 - Epoch: [276][ 150/ 155] Loss 2.781264 mAP 0.486529 +2025-05-17 07:20:24,555 - Epoch: [276][ 155/ 155] Loss 2.777111 mAP 0.488183 +2025-05-17 07:20:24,610 - ==> mAP: 0.48818 Loss: 2.777 + +2025-05-17 07:20:24,618 - ==> Best [mAP: 0.497778 vloss: 2.773444 Params: 2177088 on epoch: 271] +2025-05-17 07:20:24,618 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:20:24,708 - + +2025-05-17 07:20:24,708 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:20:44,862 - Epoch: [277][ 50/ 518] Overall Loss 2.533056 Objective Loss 2.533056 LR 0.000004 Time 0.403005 +2025-05-17 07:21:04,064 - Epoch: [277][ 100/ 518] Overall Loss 2.515903 Objective Loss 2.515903 LR 0.000004 Time 0.393515 +2025-05-17 07:21:23,108 - Epoch: [277][ 150/ 518] Overall Loss 2.511018 Objective Loss 2.511018 LR 0.000004 Time 0.389292 +2025-05-17 07:21:42,145 - Epoch: [277][ 200/ 518] Overall Loss 2.514465 Objective Loss 2.514465 LR 0.000004 Time 0.387154 +2025-05-17 07:22:01,147 - Epoch: [277][ 250/ 518] Overall Loss 2.520568 Objective Loss 2.520568 LR 0.000004 Time 0.385723 +2025-05-17 07:22:20,162 - Epoch: [277][ 300/ 518] Overall Loss 2.528209 Objective Loss 2.528209 LR 0.000004 Time 0.384817 
+2025-05-17 07:22:39,167 - Epoch: [277][ 350/ 518] Overall Loss 2.529786 Objective Loss 2.529786 LR 0.000004 Time 0.384139 +2025-05-17 07:22:58,160 - Epoch: [277][ 400/ 518] Overall Loss 2.530190 Objective Loss 2.530190 LR 0.000004 Time 0.383600 +2025-05-17 07:23:17,165 - Epoch: [277][ 450/ 518] Overall Loss 2.530229 Objective Loss 2.530229 LR 0.000004 Time 0.383208 +2025-05-17 07:23:36,354 - Epoch: [277][ 500/ 518] Overall Loss 2.530440 Objective Loss 2.530440 LR 0.000004 Time 0.383263 +2025-05-17 07:23:43,070 - Epoch: [277][ 518/ 518] Overall Loss 2.531340 Objective Loss 2.531340 LR 0.000004 Time 0.382910 +2025-05-17 07:23:43,150 - --- validate (epoch=277)----------- +2025-05-17 07:23:43,151 - 4952 samples (32 per mini-batch) +2025-05-17 07:23:43,154 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:23:57,985 - Epoch: [277][ 50/ 155] Loss 2.764110 mAP 0.505477 +2025-05-17 07:24:12,887 - Epoch: [277][ 100/ 155] Loss 2.757582 mAP 0.502047 +2025-05-17 07:24:28,450 - Epoch: [277][ 150/ 155] Loss 2.764151 mAP 0.496908 +2025-05-17 07:24:31,501 - Epoch: [277][ 155/ 155] Loss 2.765060 mAP 0.496109 +2025-05-17 07:24:31,555 - ==> mAP: 0.49611 Loss: 2.765 + +2025-05-17 07:24:31,563 - ==> Best [mAP: 0.497778 vloss: 2.773444 Params: 2177088 on epoch: 271] +2025-05-17 07:24:31,563 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:24:31,651 - + +2025-05-17 07:24:31,652 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:24:51,586 - Epoch: [278][ 50/ 518] Overall Loss 2.519931 Objective Loss 2.519931 LR 0.000004 Time 0.398613 +2025-05-17 07:25:10,557 - Epoch: [278][ 100/ 518] Overall Loss 2.510470 Objective Loss 2.510470 LR 0.000004 Time 0.389013 +2025-05-17 07:25:29,751 - Epoch: [278][ 150/ 518] Overall Loss 2.521285 Objective Loss 2.521285 LR 0.000004 Time 0.387291 +2025-05-17 07:25:48,787 - Epoch: [278][ 
200/ 518] Overall Loss 2.532443 Objective Loss 2.532443 LR 0.000004 Time 0.385643 +2025-05-17 07:26:07,830 - Epoch: [278][ 250/ 518] Overall Loss 2.538106 Objective Loss 2.538106 LR 0.000004 Time 0.384682 +2025-05-17 07:26:26,867 - Epoch: [278][ 300/ 518] Overall Loss 2.543338 Objective Loss 2.543338 LR 0.000004 Time 0.384023 +2025-05-17 07:26:45,873 - Epoch: [278][ 350/ 518] Overall Loss 2.541103 Objective Loss 2.541103 LR 0.000004 Time 0.383461 +2025-05-17 07:27:04,914 - Epoch: [278][ 400/ 518] Overall Loss 2.537600 Objective Loss 2.537600 LR 0.000004 Time 0.383130 +2025-05-17 07:27:23,930 - Epoch: [278][ 450/ 518] Overall Loss 2.537324 Objective Loss 2.537324 LR 0.000004 Time 0.382815 +2025-05-17 07:27:42,931 - Epoch: [278][ 500/ 518] Overall Loss 2.540607 Objective Loss 2.540607 LR 0.000004 Time 0.382534 +2025-05-17 07:27:49,786 - Epoch: [278][ 518/ 518] Overall Loss 2.539591 Objective Loss 2.539591 LR 0.000004 Time 0.382473 +2025-05-17 07:27:49,861 - --- validate (epoch=278)----------- +2025-05-17 07:27:49,862 - 4952 samples (32 per mini-batch) +2025-05-17 07:27:49,865 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:28:04,680 - Epoch: [278][ 50/ 155] Loss 2.773913 mAP 0.521221 +2025-05-17 07:28:19,566 - Epoch: [278][ 100/ 155] Loss 2.769825 mAP 0.502868 +2025-05-17 07:28:35,061 - Epoch: [278][ 150/ 155] Loss 2.768980 mAP 0.499447 +2025-05-17 07:28:38,085 - Epoch: [278][ 155/ 155] Loss 2.768126 mAP 0.498911 +2025-05-17 07:28:38,139 - ==> mAP: 0.49891 Loss: 2.768 + +2025-05-17 07:28:38,147 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 07:28:38,148 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:28:38,259 - + +2025-05-17 07:28:38,259 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:28:58,251 - Epoch: [279][ 50/ 518] Overall Loss 2.529089 Objective 
Loss 2.529089 LR 0.000004 Time 0.399768 +2025-05-17 07:29:17,404 - Epoch: [279][ 100/ 518] Overall Loss 2.530254 Objective Loss 2.530254 LR 0.000004 Time 0.391406 +2025-05-17 07:29:36,425 - Epoch: [279][ 150/ 518] Overall Loss 2.520077 Objective Loss 2.520077 LR 0.000004 Time 0.387738 +2025-05-17 07:29:55,404 - Epoch: [279][ 200/ 518] Overall Loss 2.539735 Objective Loss 2.539735 LR 0.000004 Time 0.385696 +2025-05-17 07:30:14,381 - Epoch: [279][ 250/ 518] Overall Loss 2.545597 Objective Loss 2.545597 LR 0.000004 Time 0.384457 +2025-05-17 07:30:33,377 - Epoch: [279][ 300/ 518] Overall Loss 2.548570 Objective Loss 2.548570 LR 0.000004 Time 0.383696 +2025-05-17 07:30:52,389 - Epoch: [279][ 350/ 518] Overall Loss 2.551341 Objective Loss 2.551341 LR 0.000004 Time 0.383200 +2025-05-17 07:31:11,540 - Epoch: [279][ 400/ 518] Overall Loss 2.551738 Objective Loss 2.551738 LR 0.000004 Time 0.383174 +2025-05-17 07:31:30,555 - Epoch: [279][ 450/ 518] Overall Loss 2.547606 Objective Loss 2.547606 LR 0.000004 Time 0.382852 +2025-05-17 07:31:49,548 - Epoch: [279][ 500/ 518] Overall Loss 2.545142 Objective Loss 2.545142 LR 0.000004 Time 0.382551 +2025-05-17 07:31:56,239 - Epoch: [279][ 518/ 518] Overall Loss 2.545183 Objective Loss 2.545183 LR 0.000004 Time 0.382173 +2025-05-17 07:31:56,317 - --- validate (epoch=279)----------- +2025-05-17 07:31:56,318 - 4952 samples (32 per mini-batch) +2025-05-17 07:31:56,321 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:32:11,152 - Epoch: [279][ 50/ 155] Loss 2.778142 mAP 0.500206 +2025-05-17 07:32:26,242 - Epoch: [279][ 100/ 155] Loss 2.766158 mAP 0.495293 +2025-05-17 07:32:42,000 - Epoch: [279][ 150/ 155] Loss 2.762780 mAP 0.496907 +2025-05-17 07:32:44,974 - Epoch: [279][ 155/ 155] Loss 2.766334 mAP 0.495052 +2025-05-17 07:32:45,029 - ==> mAP: 0.49505 Loss: 2.766 + +2025-05-17 07:32:45,038 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] 
+2025-05-17 07:32:45,038 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:32:45,128 - + +2025-05-17 07:32:45,128 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:33:05,250 - Epoch: [280][ 50/ 518] Overall Loss 2.524651 Objective Loss 2.524651 LR 0.000004 Time 0.402372 +2025-05-17 07:33:24,430 - Epoch: [280][ 100/ 518] Overall Loss 2.517610 Objective Loss 2.517610 LR 0.000004 Time 0.392969 +2025-05-17 07:33:43,446 - Epoch: [280][ 150/ 518] Overall Loss 2.522426 Objective Loss 2.522426 LR 0.000004 Time 0.388745 +2025-05-17 07:34:02,463 - Epoch: [280][ 200/ 518] Overall Loss 2.526434 Objective Loss 2.526434 LR 0.000004 Time 0.386636 +2025-05-17 07:34:21,462 - Epoch: [280][ 250/ 518] Overall Loss 2.529159 Objective Loss 2.529159 LR 0.000004 Time 0.385300 +2025-05-17 07:34:40,492 - Epoch: [280][ 300/ 518] Overall Loss 2.546165 Objective Loss 2.546165 LR 0.000004 Time 0.384513 +2025-05-17 07:34:59,569 - Epoch: [280][ 350/ 518] Overall Loss 2.538902 Objective Loss 2.538902 LR 0.000004 Time 0.384085 +2025-05-17 07:35:18,609 - Epoch: [280][ 400/ 518] Overall Loss 2.541887 Objective Loss 2.541887 LR 0.000004 Time 0.383672 +2025-05-17 07:35:37,630 - Epoch: [280][ 450/ 518] Overall Loss 2.541492 Objective Loss 2.541492 LR 0.000004 Time 0.383310 +2025-05-17 07:35:56,778 - Epoch: [280][ 500/ 518] Overall Loss 2.543441 Objective Loss 2.543441 LR 0.000004 Time 0.383272 +2025-05-17 07:36:03,491 - Epoch: [280][ 518/ 518] Overall Loss 2.546267 Objective Loss 2.546267 LR 0.000004 Time 0.382911 +2025-05-17 07:36:03,569 - --- validate (epoch=280)----------- +2025-05-17 07:36:03,570 - 4952 samples (32 per mini-batch) +2025-05-17 07:36:03,573 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:36:18,326 - Epoch: [280][ 50/ 155] Loss 2.748771 mAP 0.494082 +2025-05-17 07:36:33,381 - Epoch: [280][ 100/ 155] Loss 2.758411 
mAP 0.500592 +2025-05-17 07:36:48,803 - Epoch: [280][ 150/ 155] Loss 2.766836 mAP 0.492470 +2025-05-17 07:36:51,888 - Epoch: [280][ 155/ 155] Loss 2.763780 mAP 0.495901 +2025-05-17 07:36:51,940 - ==> mAP: 0.49590 Loss: 2.764 + +2025-05-17 07:36:51,948 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 07:36:51,948 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:36:52,039 - + +2025-05-17 07:36:52,039 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:37:12,232 - Epoch: [281][ 50/ 518] Overall Loss 2.582792 Objective Loss 2.582792 LR 0.000004 Time 0.403792 +2025-05-17 07:37:31,177 - Epoch: [281][ 100/ 518] Overall Loss 2.562404 Objective Loss 2.562404 LR 0.000004 Time 0.391334 +2025-05-17 07:37:50,404 - Epoch: [281][ 150/ 518] Overall Loss 2.558362 Objective Loss 2.558362 LR 0.000004 Time 0.389059 +2025-05-17 07:38:09,414 - Epoch: [281][ 200/ 518] Overall Loss 2.553902 Objective Loss 2.553902 LR 0.000004 Time 0.386843 +2025-05-17 07:38:28,429 - Epoch: [281][ 250/ 518] Overall Loss 2.548160 Objective Loss 2.548160 LR 0.000004 Time 0.385529 +2025-05-17 07:38:47,483 - Epoch: [281][ 300/ 518] Overall Loss 2.545417 Objective Loss 2.545417 LR 0.000004 Time 0.384783 +2025-05-17 07:39:06,521 - Epoch: [281][ 350/ 518] Overall Loss 2.548015 Objective Loss 2.548015 LR 0.000004 Time 0.384207 +2025-05-17 07:39:25,569 - Epoch: [281][ 400/ 518] Overall Loss 2.547642 Objective Loss 2.547642 LR 0.000004 Time 0.383798 +2025-05-17 07:39:44,625 - Epoch: [281][ 450/ 518] Overall Loss 2.545094 Objective Loss 2.545094 LR 0.000004 Time 0.383500 +2025-05-17 07:40:03,679 - Epoch: [281][ 500/ 518] Overall Loss 2.537869 Objective Loss 2.537869 LR 0.000004 Time 0.383256 +2025-05-17 07:40:10,555 - Epoch: [281][ 518/ 518] Overall Loss 2.537885 Objective Loss 2.537885 LR 0.000004 Time 0.383211 +2025-05-17 07:40:10,628 - --- validate (epoch=281)----------- +2025-05-17 
07:40:10,629 - 4952 samples (32 per mini-batch) +2025-05-17 07:40:10,632 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:40:25,647 - Epoch: [281][ 50/ 155] Loss 2.759533 mAP 0.499148 +2025-05-17 07:40:40,815 - Epoch: [281][ 100/ 155] Loss 2.746132 mAP 0.504540 +2025-05-17 07:40:56,709 - Epoch: [281][ 150/ 155] Loss 2.765214 mAP 0.497456 +2025-05-17 07:40:59,844 - Epoch: [281][ 155/ 155] Loss 2.766374 mAP 0.496165 +2025-05-17 07:40:59,898 - ==> mAP: 0.49617 Loss: 2.766 + +2025-05-17 07:40:59,907 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 07:40:59,907 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:40:59,998 - + +2025-05-17 07:40:59,998 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:41:20,072 - Epoch: [282][ 50/ 518] Overall Loss 2.549031 Objective Loss 2.549031 LR 0.000004 Time 0.401417 +2025-05-17 07:41:39,065 - Epoch: [282][ 100/ 518] Overall Loss 2.540796 Objective Loss 2.540796 LR 0.000004 Time 0.390631 +2025-05-17 07:41:58,117 - Epoch: [282][ 150/ 518] Overall Loss 2.530532 Objective Loss 2.530532 LR 0.000004 Time 0.387423 +2025-05-17 07:42:17,320 - Epoch: [282][ 200/ 518] Overall Loss 2.526744 Objective Loss 2.526744 LR 0.000004 Time 0.386581 +2025-05-17 07:42:36,384 - Epoch: [282][ 250/ 518] Overall Loss 2.531508 Objective Loss 2.531508 LR 0.000004 Time 0.385517 +2025-05-17 07:42:55,410 - Epoch: [282][ 300/ 518] Overall Loss 2.521889 Objective Loss 2.521889 LR 0.000004 Time 0.384681 +2025-05-17 07:43:14,451 - Epoch: [282][ 350/ 518] Overall Loss 2.527837 Objective Loss 2.527837 LR 0.000004 Time 0.384126 +2025-05-17 07:43:33,478 - Epoch: [282][ 400/ 518] Overall Loss 2.540538 Objective Loss 2.540538 LR 0.000004 Time 0.383678 +2025-05-17 07:43:52,525 - Epoch: [282][ 450/ 518] Overall Loss 2.540453 Objective Loss 2.540453 LR 0.000004 Time 
0.383371 +2025-05-17 07:44:11,538 - Epoch: [282][ 500/ 518] Overall Loss 2.543317 Objective Loss 2.543317 LR 0.000004 Time 0.383060 +2025-05-17 07:44:18,264 - Epoch: [282][ 518/ 518] Overall Loss 2.542023 Objective Loss 2.542023 LR 0.000004 Time 0.382731 +2025-05-17 07:44:18,339 - --- validate (epoch=282)----------- +2025-05-17 07:44:18,340 - 4952 samples (32 per mini-batch) +2025-05-17 07:44:18,343 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:44:33,441 - Epoch: [282][ 50/ 155] Loss 2.725672 mAP 0.509940 +2025-05-17 07:44:48,511 - Epoch: [282][ 100/ 155] Loss 2.763869 mAP 0.498989 +2025-05-17 07:45:04,184 - Epoch: [282][ 150/ 155] Loss 2.759077 mAP 0.497312 +2025-05-17 07:45:07,288 - Epoch: [282][ 155/ 155] Loss 2.758992 mAP 0.496575 +2025-05-17 07:45:07,343 - ==> mAP: 0.49658 Loss: 2.759 + +2025-05-17 07:45:07,352 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 07:45:07,352 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:45:07,442 - + +2025-05-17 07:45:07,442 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:45:27,458 - Epoch: [283][ 50/ 518] Overall Loss 2.584221 Objective Loss 2.584221 LR 0.000004 Time 0.400242 +2025-05-17 07:45:46,409 - Epoch: [283][ 100/ 518] Overall Loss 2.549148 Objective Loss 2.549148 LR 0.000004 Time 0.389618 +2025-05-17 07:46:05,434 - Epoch: [283][ 150/ 518] Overall Loss 2.554156 Objective Loss 2.554156 LR 0.000004 Time 0.386573 +2025-05-17 07:46:24,445 - Epoch: [283][ 200/ 518] Overall Loss 2.560254 Objective Loss 2.560254 LR 0.000004 Time 0.384981 +2025-05-17 07:46:43,623 - Epoch: [283][ 250/ 518] Overall Loss 2.558785 Objective Loss 2.558785 LR 0.000004 Time 0.384690 +2025-05-17 07:47:02,629 - Epoch: [283][ 300/ 518] Overall Loss 2.552170 Objective Loss 2.552170 LR 0.000004 Time 0.383923 +2025-05-17 07:47:21,601 - Epoch: 
[283][ 350/ 518] Overall Loss 2.556563 Objective Loss 2.556563 LR 0.000004 Time 0.383279 +2025-05-17 07:47:40,668 - Epoch: [283][ 400/ 518] Overall Loss 2.548585 Objective Loss 2.548585 LR 0.000004 Time 0.383034 +2025-05-17 07:47:59,660 - Epoch: [283][ 450/ 518] Overall Loss 2.545285 Objective Loss 2.545285 LR 0.000004 Time 0.382676 +2025-05-17 07:48:18,647 - Epoch: [283][ 500/ 518] Overall Loss 2.544954 Objective Loss 2.544954 LR 0.000004 Time 0.382381 +2025-05-17 07:48:25,355 - Epoch: [283][ 518/ 518] Overall Loss 2.545500 Objective Loss 2.545500 LR 0.000004 Time 0.382042 +2025-05-17 07:48:25,438 - --- validate (epoch=283)----------- +2025-05-17 07:48:25,439 - 4952 samples (32 per mini-batch) +2025-05-17 07:48:25,443 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:48:40,224 - Epoch: [283][ 50/ 155] Loss 2.763202 mAP 0.501156 +2025-05-17 07:48:54,943 - Epoch: [283][ 100/ 155] Loss 2.770357 mAP 0.498109 +2025-05-17 07:49:10,375 - Epoch: [283][ 150/ 155] Loss 2.762954 mAP 0.492829 +2025-05-17 07:49:13,410 - Epoch: [283][ 155/ 155] Loss 2.761978 mAP 0.492605 +2025-05-17 07:49:13,463 - ==> mAP: 0.49260 Loss: 2.762 + +2025-05-17 07:49:13,471 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 07:49:13,472 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:49:13,559 - + +2025-05-17 07:49:13,560 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:49:33,631 - Epoch: [284][ 50/ 518] Overall Loss 2.576199 Objective Loss 2.576199 LR 0.000004 Time 0.401349 +2025-05-17 07:49:52,589 - Epoch: [284][ 100/ 518] Overall Loss 2.545998 Objective Loss 2.545998 LR 0.000004 Time 0.390249 +2025-05-17 07:50:11,596 - Epoch: [284][ 150/ 518] Overall Loss 2.543046 Objective Loss 2.543046 LR 0.000004 Time 0.386869 +2025-05-17 07:50:30,573 - Epoch: [284][ 200/ 518] Overall Loss 2.537241 
Objective Loss 2.537241 LR 0.000004 Time 0.385034 +2025-05-17 07:50:49,770 - Epoch: [284][ 250/ 518] Overall Loss 2.533036 Objective Loss 2.533036 LR 0.000004 Time 0.384812 +2025-05-17 07:51:08,757 - Epoch: [284][ 300/ 518] Overall Loss 2.526822 Objective Loss 2.526822 LR 0.000004 Time 0.383961 +2025-05-17 07:51:27,746 - Epoch: [284][ 350/ 518] Overall Loss 2.525606 Objective Loss 2.525606 LR 0.000004 Time 0.383359 +2025-05-17 07:51:46,738 - Epoch: [284][ 400/ 518] Overall Loss 2.534653 Objective Loss 2.534653 LR 0.000004 Time 0.382916 +2025-05-17 07:52:05,753 - Epoch: [284][ 450/ 518] Overall Loss 2.537984 Objective Loss 2.537984 LR 0.000004 Time 0.382624 +2025-05-17 07:52:24,760 - Epoch: [284][ 500/ 518] Overall Loss 2.533691 Objective Loss 2.533691 LR 0.000004 Time 0.382372 +2025-05-17 07:52:31,586 - Epoch: [284][ 518/ 518] Overall Loss 2.533304 Objective Loss 2.533304 LR 0.000004 Time 0.382261 +2025-05-17 07:52:31,658 - --- validate (epoch=284)----------- +2025-05-17 07:52:31,659 - 4952 samples (32 per mini-batch) +2025-05-17 07:52:31,662 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:52:46,634 - Epoch: [284][ 50/ 155] Loss 2.792518 mAP 0.509035 +2025-05-17 07:53:01,752 - Epoch: [284][ 100/ 155] Loss 2.802006 mAP 0.491315 +2025-05-17 07:53:17,563 - Epoch: [284][ 150/ 155] Loss 2.768835 mAP 0.497655 +2025-05-17 07:53:20,686 - Epoch: [284][ 155/ 155] Loss 2.766446 mAP 0.497097 +2025-05-17 07:53:20,741 - ==> mAP: 0.49710 Loss: 2.766 + +2025-05-17 07:53:20,750 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 07:53:20,750 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:53:20,841 - + +2025-05-17 07:53:20,841 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:53:40,766 - Epoch: [285][ 50/ 518] Overall Loss 2.567766 Objective Loss 2.567766 LR 0.000004 Time 
0.398425 +2025-05-17 07:53:59,734 - Epoch: [285][ 100/ 518] Overall Loss 2.563333 Objective Loss 2.563333 LR 0.000004 Time 0.388875 +2025-05-17 07:54:18,938 - Epoch: [285][ 150/ 518] Overall Loss 2.550901 Objective Loss 2.550901 LR 0.000004 Time 0.387271 +2025-05-17 07:54:37,940 - Epoch: [285][ 200/ 518] Overall Loss 2.536918 Objective Loss 2.536918 LR 0.000004 Time 0.385457 +2025-05-17 07:54:56,936 - Epoch: [285][ 250/ 518] Overall Loss 2.540709 Objective Loss 2.540709 LR 0.000004 Time 0.384348 +2025-05-17 07:55:15,932 - Epoch: [285][ 300/ 518] Overall Loss 2.545662 Objective Loss 2.545662 LR 0.000004 Time 0.383605 +2025-05-17 07:55:34,925 - Epoch: [285][ 350/ 518] Overall Loss 2.539795 Objective Loss 2.539795 LR 0.000004 Time 0.383066 +2025-05-17 07:55:53,921 - Epoch: [285][ 400/ 518] Overall Loss 2.534351 Objective Loss 2.534351 LR 0.000004 Time 0.382668 +2025-05-17 07:56:13,062 - Epoch: [285][ 450/ 518] Overall Loss 2.529914 Objective Loss 2.529914 LR 0.000004 Time 0.382684 +2025-05-17 07:56:32,069 - Epoch: [285][ 500/ 518] Overall Loss 2.532841 Objective Loss 2.532841 LR 0.000004 Time 0.382427 +2025-05-17 07:56:38,788 - Epoch: [285][ 518/ 518] Overall Loss 2.534490 Objective Loss 2.534490 LR 0.000004 Time 0.382108 +2025-05-17 07:56:38,858 - --- validate (epoch=285)----------- +2025-05-17 07:56:38,859 - 4952 samples (32 per mini-batch) +2025-05-17 07:56:38,863 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 07:56:53,789 - Epoch: [285][ 50/ 155] Loss 2.711063 mAP 0.512641 +2025-05-17 07:57:08,954 - Epoch: [285][ 100/ 155] Loss 2.756742 mAP 0.497220 +2025-05-17 07:57:24,523 - Epoch: [285][ 150/ 155] Loss 2.755546 mAP 0.498971 +2025-05-17 07:57:27,613 - Epoch: [285][ 155/ 155] Loss 2.758350 mAP 0.498023 +2025-05-17 07:57:27,666 - ==> mAP: 0.49802 Loss: 2.758 + +2025-05-17 07:57:27,674 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 07:57:27,674 - 
Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 07:57:27,761 - + +2025-05-17 07:57:27,761 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 07:57:47,999 - Epoch: [286][ 50/ 518] Overall Loss 2.501666 Objective Loss 2.501666 LR 0.000004 Time 0.404686 +2025-05-17 07:58:07,163 - Epoch: [286][ 100/ 518] Overall Loss 2.524122 Objective Loss 2.524122 LR 0.000004 Time 0.393974 +2025-05-17 07:58:26,165 - Epoch: [286][ 150/ 518] Overall Loss 2.532356 Objective Loss 2.532356 LR 0.000004 Time 0.389322 +2025-05-17 07:58:45,235 - Epoch: [286][ 200/ 518] Overall Loss 2.506766 Objective Loss 2.506766 LR 0.000004 Time 0.387338 +2025-05-17 07:59:04,258 - Epoch: [286][ 250/ 518] Overall Loss 2.517311 Objective Loss 2.517311 LR 0.000004 Time 0.385962 +2025-05-17 07:59:23,311 - Epoch: [286][ 300/ 518] Overall Loss 2.529095 Objective Loss 2.529095 LR 0.000004 Time 0.385142 +2025-05-17 07:59:42,332 - Epoch: [286][ 350/ 518] Overall Loss 2.529493 Objective Loss 2.529493 LR 0.000004 Time 0.384465 +2025-05-17 08:00:01,358 - Epoch: [286][ 400/ 518] Overall Loss 2.535012 Objective Loss 2.535012 LR 0.000004 Time 0.383968 +2025-05-17 08:00:20,368 - Epoch: [286][ 450/ 518] Overall Loss 2.536481 Objective Loss 2.536481 LR 0.000004 Time 0.383547 +2025-05-17 08:00:39,507 - Epoch: [286][ 500/ 518] Overall Loss 2.540317 Objective Loss 2.540317 LR 0.000004 Time 0.383468 +2025-05-17 08:00:46,179 - Epoch: [286][ 518/ 518] Overall Loss 2.539104 Objective Loss 2.539104 LR 0.000004 Time 0.383022 +2025-05-17 08:00:46,255 - --- validate (epoch=286)----------- +2025-05-17 08:00:46,256 - 4952 samples (32 per mini-batch) +2025-05-17 08:00:46,259 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:01:01,292 - Epoch: [286][ 50/ 155] Loss 2.731442 mAP 0.505611 +2025-05-17 08:01:16,394 - Epoch: [286][ 100/ 155] Loss 2.770968 mAP 0.496829 +2025-05-17 
08:01:32,152 - Epoch: [286][ 150/ 155] Loss 2.769302 mAP 0.489157 +2025-05-17 08:01:35,263 - Epoch: [286][ 155/ 155] Loss 2.758486 mAP 0.492612 +2025-05-17 08:01:35,318 - ==> mAP: 0.49261 Loss: 2.758 + +2025-05-17 08:01:35,327 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 08:01:35,328 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:01:35,418 - + +2025-05-17 08:01:35,419 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:01:55,442 - Epoch: [287][ 50/ 518] Overall Loss 2.568367 Objective Loss 2.568367 LR 0.000004 Time 0.400395 +2025-05-17 08:02:14,385 - Epoch: [287][ 100/ 518] Overall Loss 2.572567 Objective Loss 2.572567 LR 0.000004 Time 0.389609 +2025-05-17 08:02:33,556 - Epoch: [287][ 150/ 518] Overall Loss 2.570961 Objective Loss 2.570961 LR 0.000004 Time 0.387538 +2025-05-17 08:02:52,562 - Epoch: [287][ 200/ 518] Overall Loss 2.574533 Objective Loss 2.574533 LR 0.000004 Time 0.385678 +2025-05-17 08:03:11,537 - Epoch: [287][ 250/ 518] Overall Loss 2.567882 Objective Loss 2.567882 LR 0.000004 Time 0.384441 +2025-05-17 08:03:30,584 - Epoch: [287][ 300/ 518] Overall Loss 2.561480 Objective Loss 2.561480 LR 0.000004 Time 0.383852 +2025-05-17 08:03:49,594 - Epoch: [287][ 350/ 518] Overall Loss 2.561646 Objective Loss 2.561646 LR 0.000004 Time 0.383330 +2025-05-17 08:04:08,611 - Epoch: [287][ 400/ 518] Overall Loss 2.556016 Objective Loss 2.556016 LR 0.000004 Time 0.382953 +2025-05-17 08:04:27,610 - Epoch: [287][ 450/ 518] Overall Loss 2.545349 Objective Loss 2.545349 LR 0.000004 Time 0.382620 +2025-05-17 08:04:46,667 - Epoch: [287][ 500/ 518] Overall Loss 2.549056 Objective Loss 2.549056 LR 0.000004 Time 0.382470 +2025-05-17 08:04:53,519 - Epoch: [287][ 518/ 518] Overall Loss 2.551514 Objective Loss 2.551514 LR 0.000004 Time 0.382405 +2025-05-17 08:04:53,597 - --- validate (epoch=287)----------- +2025-05-17 08:04:53,598 - 4952 samples 
(32 per mini-batch) +2025-05-17 08:04:53,601 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:05:08,436 - Epoch: [287][ 50/ 155] Loss 2.770728 mAP 0.498723 +2025-05-17 08:05:23,345 - Epoch: [287][ 100/ 155] Loss 2.751783 mAP 0.497341 +2025-05-17 08:05:39,035 - Epoch: [287][ 150/ 155] Loss 2.757606 mAP 0.495101 +2025-05-17 08:05:42,152 - Epoch: [287][ 155/ 155] Loss 2.758786 mAP 0.494454 +2025-05-17 08:05:42,213 - ==> mAP: 0.49445 Loss: 2.759 + +2025-05-17 08:05:42,222 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 08:05:42,222 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:05:42,313 - + +2025-05-17 08:05:42,313 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:06:02,200 - Epoch: [288][ 50/ 518] Overall Loss 2.595665 Objective Loss 2.595665 LR 0.000004 Time 0.397665 +2025-05-17 08:06:21,401 - Epoch: [288][ 100/ 518] Overall Loss 2.556743 Objective Loss 2.556743 LR 0.000004 Time 0.390839 +2025-05-17 08:06:40,408 - Epoch: [288][ 150/ 518] Overall Loss 2.553170 Objective Loss 2.553170 LR 0.000004 Time 0.387269 +2025-05-17 08:06:59,430 - Epoch: [288][ 200/ 518] Overall Loss 2.548850 Objective Loss 2.548850 LR 0.000004 Time 0.385553 +2025-05-17 08:07:18,480 - Epoch: [288][ 250/ 518] Overall Loss 2.539432 Objective Loss 2.539432 LR 0.000004 Time 0.384639 +2025-05-17 08:07:37,483 - Epoch: [288][ 300/ 518] Overall Loss 2.542471 Objective Loss 2.542471 LR 0.000004 Time 0.383873 +2025-05-17 08:07:56,494 - Epoch: [288][ 350/ 518] Overall Loss 2.545105 Objective Loss 2.545105 LR 0.000004 Time 0.383348 +2025-05-17 08:08:15,699 - Epoch: [288][ 400/ 518] Overall Loss 2.536852 Objective Loss 2.536852 LR 0.000004 Time 0.383440 +2025-05-17 08:08:34,696 - Epoch: [288][ 450/ 518] Overall Loss 2.532732 Objective Loss 2.532732 LR 0.000004 Time 0.383047 +2025-05-17 08:08:53,687 
- Epoch: [288][ 500/ 518] Overall Loss 2.534373 Objective Loss 2.534373 LR 0.000004 Time 0.382721 +2025-05-17 08:09:00,368 - Epoch: [288][ 518/ 518] Overall Loss 2.533917 Objective Loss 2.533917 LR 0.000004 Time 0.382320 +2025-05-17 08:09:00,445 - --- validate (epoch=288)----------- +2025-05-17 08:09:00,446 - 4952 samples (32 per mini-batch) +2025-05-17 08:09:00,449 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:09:15,456 - Epoch: [288][ 50/ 155] Loss 2.738353 mAP 0.520239 +2025-05-17 08:09:30,683 - Epoch: [288][ 100/ 155] Loss 2.758939 mAP 0.507144 +2025-05-17 08:09:46,340 - Epoch: [288][ 150/ 155] Loss 2.757775 mAP 0.493199 +2025-05-17 08:09:49,287 - Epoch: [288][ 155/ 155] Loss 2.765754 mAP 0.494172 +2025-05-17 08:09:49,349 - ==> mAP: 0.49417 Loss: 2.766 + +2025-05-17 08:09:49,358 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 08:09:49,358 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:09:49,444 - + +2025-05-17 08:09:49,444 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:10:09,504 - Epoch: [289][ 50/ 518] Overall Loss 2.576844 Objective Loss 2.576844 LR 0.000004 Time 0.401132 +2025-05-17 08:10:28,719 - Epoch: [289][ 100/ 518] Overall Loss 2.565404 Objective Loss 2.565404 LR 0.000004 Time 0.392704 +2025-05-17 08:10:47,717 - Epoch: [289][ 150/ 518] Overall Loss 2.571050 Objective Loss 2.571050 LR 0.000004 Time 0.388452 +2025-05-17 08:11:06,728 - Epoch: [289][ 200/ 518] Overall Loss 2.556514 Objective Loss 2.556514 LR 0.000004 Time 0.386388 +2025-05-17 08:11:25,772 - Epoch: [289][ 250/ 518] Overall Loss 2.559690 Objective Loss 2.559690 LR 0.000004 Time 0.385284 +2025-05-17 08:11:44,761 - Epoch: [289][ 300/ 518] Overall Loss 2.553914 Objective Loss 2.553914 LR 0.000004 Time 0.384361 +2025-05-17 08:12:03,755 - Epoch: [289][ 350/ 518] Overall Loss 
2.559414 Objective Loss 2.559414 LR 0.000004 Time 0.383718 +2025-05-17 08:12:22,757 - Epoch: [289][ 400/ 518] Overall Loss 2.552637 Objective Loss 2.552637 LR 0.000004 Time 0.383255 +2025-05-17 08:12:41,931 - Epoch: [289][ 450/ 518] Overall Loss 2.548511 Objective Loss 2.548511 LR 0.000004 Time 0.383278 +2025-05-17 08:13:00,974 - Epoch: [289][ 500/ 518] Overall Loss 2.546133 Objective Loss 2.546133 LR 0.000004 Time 0.383035 +2025-05-17 08:13:07,687 - Epoch: [289][ 518/ 518] Overall Loss 2.548319 Objective Loss 2.548319 LR 0.000004 Time 0.382684 +2025-05-17 08:13:07,763 - --- validate (epoch=289)----------- +2025-05-17 08:13:07,764 - 4952 samples (32 per mini-batch) +2025-05-17 08:13:07,767 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:13:22,639 - Epoch: [289][ 50/ 155] Loss 2.784881 mAP 0.481034 +2025-05-17 08:13:37,770 - Epoch: [289][ 100/ 155] Loss 2.775813 mAP 0.496548 +2025-05-17 08:13:53,569 - Epoch: [289][ 150/ 155] Loss 2.772065 mAP 0.493435 +2025-05-17 08:13:56,680 - Epoch: [289][ 155/ 155] Loss 2.772164 mAP 0.493881 +2025-05-17 08:13:56,735 - ==> mAP: 0.49388 Loss: 2.772 + +2025-05-17 08:13:56,744 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 08:13:56,744 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:13:56,835 - + +2025-05-17 08:13:56,835 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:14:16,989 - Epoch: [290][ 50/ 518] Overall Loss 2.482023 Objective Loss 2.482023 LR 0.000004 Time 0.402992 +2025-05-17 08:14:36,107 - Epoch: [290][ 100/ 518] Overall Loss 2.527444 Objective Loss 2.527444 LR 0.000004 Time 0.392665 +2025-05-17 08:14:55,065 - Epoch: [290][ 150/ 518] Overall Loss 2.528375 Objective Loss 2.528375 LR 0.000004 Time 0.388155 +2025-05-17 08:15:14,089 - Epoch: [290][ 200/ 518] Overall Loss 2.546081 Objective Loss 2.546081 LR 0.000004 
Time 0.386230 +2025-05-17 08:15:33,161 - Epoch: [290][ 250/ 518] Overall Loss 2.545048 Objective Loss 2.545048 LR 0.000004 Time 0.385271 +2025-05-17 08:15:52,274 - Epoch: [290][ 300/ 518] Overall Loss 2.553033 Objective Loss 2.553033 LR 0.000004 Time 0.384766 +2025-05-17 08:16:11,303 - Epoch: [290][ 350/ 518] Overall Loss 2.543426 Objective Loss 2.543426 LR 0.000004 Time 0.384164 +2025-05-17 08:16:30,345 - Epoch: [290][ 400/ 518] Overall Loss 2.547523 Objective Loss 2.547523 LR 0.000004 Time 0.383748 +2025-05-17 08:16:49,427 - Epoch: [290][ 450/ 518] Overall Loss 2.556073 Objective Loss 2.556073 LR 0.000004 Time 0.383512 +2025-05-17 08:17:08,602 - Epoch: [290][ 500/ 518] Overall Loss 2.554267 Objective Loss 2.554267 LR 0.000004 Time 0.383508 +2025-05-17 08:17:15,296 - Epoch: [290][ 518/ 518] Overall Loss 2.552599 Objective Loss 2.552599 LR 0.000004 Time 0.383104 +2025-05-17 08:17:15,371 - --- validate (epoch=290)----------- +2025-05-17 08:17:15,372 - 4952 samples (32 per mini-batch) +2025-05-17 08:17:15,375 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:17:30,089 - Epoch: [290][ 50/ 155] Loss 2.754274 mAP 0.512509 +2025-05-17 08:17:44,840 - Epoch: [290][ 100/ 155] Loss 2.792428 mAP 0.489820 +2025-05-17 08:18:00,341 - Epoch: [290][ 150/ 155] Loss 2.771426 mAP 0.490536 +2025-05-17 08:18:03,373 - Epoch: [290][ 155/ 155] Loss 2.767710 mAP 0.492713 +2025-05-17 08:18:03,426 - ==> mAP: 0.49271 Loss: 2.768 + +2025-05-17 08:18:03,434 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 08:18:03,434 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:18:03,524 - + +2025-05-17 08:18:03,524 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:18:23,510 - Epoch: [291][ 50/ 518] Overall Loss 2.503381 Objective Loss 2.503381 LR 0.000004 Time 0.399646 +2025-05-17 08:18:42,722 - 
Epoch: [291][ 100/ 518] Overall Loss 2.514651 Objective Loss 2.514651 LR 0.000004 Time 0.391934 +2025-05-17 08:19:01,677 - Epoch: [291][ 150/ 518] Overall Loss 2.510635 Objective Loss 2.510635 LR 0.000004 Time 0.387647 +2025-05-17 08:19:20,654 - Epoch: [291][ 200/ 518] Overall Loss 2.516429 Objective Loss 2.516429 LR 0.000004 Time 0.385614 +2025-05-17 08:19:39,664 - Epoch: [291][ 250/ 518] Overall Loss 2.519115 Objective Loss 2.519115 LR 0.000004 Time 0.384528 +2025-05-17 08:19:58,647 - Epoch: [291][ 300/ 518] Overall Loss 2.524368 Objective Loss 2.524368 LR 0.000004 Time 0.383712 +2025-05-17 08:20:17,661 - Epoch: [291][ 350/ 518] Overall Loss 2.528417 Objective Loss 2.528417 LR 0.000004 Time 0.383217 +2025-05-17 08:20:36,884 - Epoch: [291][ 400/ 518] Overall Loss 2.531207 Objective Loss 2.531207 LR 0.000004 Time 0.383372 +2025-05-17 08:20:55,939 - Epoch: [291][ 450/ 518] Overall Loss 2.535515 Objective Loss 2.535515 LR 0.000004 Time 0.383118 +2025-05-17 08:21:14,954 - Epoch: [291][ 500/ 518] Overall Loss 2.535823 Objective Loss 2.535823 LR 0.000004 Time 0.382834 +2025-05-17 08:21:21,638 - Epoch: [291][ 518/ 518] Overall Loss 2.534180 Objective Loss 2.534180 LR 0.000004 Time 0.382433 +2025-05-17 08:21:21,715 - --- validate (epoch=291)----------- +2025-05-17 08:21:21,716 - 4952 samples (32 per mini-batch) +2025-05-17 08:21:21,719 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:21:36,758 - Epoch: [291][ 50/ 155] Loss 2.790594 mAP 0.492714 +2025-05-17 08:21:52,063 - Epoch: [291][ 100/ 155] Loss 2.766599 mAP 0.482996 +2025-05-17 08:22:07,841 - Epoch: [291][ 150/ 155] Loss 2.770690 mAP 0.490787 +2025-05-17 08:22:10,811 - Epoch: [291][ 155/ 155] Loss 2.772951 mAP 0.490888 +2025-05-17 08:22:10,870 - ==> mAP: 0.49089 Loss: 2.773 + +2025-05-17 08:22:10,879 - ==> Best [mAP: 0.498911 vloss: 2.768126 Params: 2177088 on epoch: 278] +2025-05-17 08:22:10,879 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:22:10,969 - + +2025-05-17 08:22:10,969 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:22:31,361 - Epoch: [292][ 50/ 518] Overall Loss 2.596125 Objective Loss 2.596125 LR 0.000004 Time 0.407768 +2025-05-17 08:22:50,403 - Epoch: [292][ 100/ 518] Overall Loss 2.599000 Objective Loss 2.599000 LR 0.000004 Time 0.394303 +2025-05-17 08:23:09,445 - Epoch: [292][ 150/ 518] Overall Loss 2.558031 Objective Loss 2.558031 LR 0.000004 Time 0.389806 +2025-05-17 08:23:28,430 - Epoch: [292][ 200/ 518] Overall Loss 2.544687 Objective Loss 2.544687 LR 0.000004 Time 0.387275 +2025-05-17 08:23:47,476 - Epoch: [292][ 250/ 518] Overall Loss 2.543125 Objective Loss 2.543125 LR 0.000004 Time 0.386001 +2025-05-17 08:24:06,515 - Epoch: [292][ 300/ 518] Overall Loss 2.546759 Objective Loss 2.546759 LR 0.000004 Time 0.385126 +2025-05-17 08:24:25,747 - Epoch: [292][ 350/ 518] Overall Loss 2.548720 Objective Loss 2.548720 LR 0.000004 Time 0.385055 +2025-05-17 08:24:44,818 - Epoch: [292][ 400/ 518] Overall Loss 2.544353 Objective Loss 2.544353 LR 0.000004 Time 0.384599 +2025-05-17 08:25:03,793 - Epoch: [292][ 450/ 518] Overall Loss 2.542109 Objective Loss 2.542109 LR 0.000004 Time 0.384030 +2025-05-17 08:25:22,776 - Epoch: [292][ 500/ 518] Overall Loss 2.537671 Objective Loss 2.537671 LR 0.000004 Time 0.383590 +2025-05-17 08:25:29,501 - Epoch: [292][ 518/ 518] Overall Loss 2.540508 Objective Loss 2.540508 LR 0.000004 Time 0.383242 +2025-05-17 08:25:29,575 - --- validate (epoch=292)----------- +2025-05-17 08:25:29,576 - 4952 samples (32 per mini-batch) +2025-05-17 08:25:29,580 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:25:44,781 - Epoch: [292][ 50/ 155] Loss 2.775827 mAP 0.487300 +2025-05-17 08:25:59,907 - Epoch: [292][ 100/ 155] Loss 2.774916 mAP 0.495794 +2025-05-17 08:26:15,667 - Epoch: 
[292][ 150/ 155] Loss 2.763191 mAP 0.499629 +2025-05-17 08:26:18,720 - Epoch: [292][ 155/ 155] Loss 2.759028 mAP 0.499302 +2025-05-17 08:26:18,772 - ==> mAP: 0.49930 Loss: 2.759 + +2025-05-17 08:26:18,781 - ==> Best [mAP: 0.499302 vloss: 2.759028 Params: 2177088 on epoch: 292] +2025-05-17 08:26:18,781 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:26:18,894 - + +2025-05-17 08:26:18,894 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:26:38,825 - Epoch: [293][ 50/ 518] Overall Loss 2.566578 Objective Loss 2.566578 LR 0.000004 Time 0.398545 +2025-05-17 08:26:57,781 - Epoch: [293][ 100/ 518] Overall Loss 2.543344 Objective Loss 2.543344 LR 0.000004 Time 0.388824 +2025-05-17 08:27:16,761 - Epoch: [293][ 150/ 518] Overall Loss 2.546669 Objective Loss 2.546669 LR 0.000004 Time 0.385741 +2025-05-17 08:27:35,722 - Epoch: [293][ 200/ 518] Overall Loss 2.537807 Objective Loss 2.537807 LR 0.000004 Time 0.384103 +2025-05-17 08:27:54,739 - Epoch: [293][ 250/ 518] Overall Loss 2.537413 Objective Loss 2.537413 LR 0.000004 Time 0.383347 +2025-05-17 08:28:13,969 - Epoch: [293][ 300/ 518] Overall Loss 2.540587 Objective Loss 2.540587 LR 0.000004 Time 0.383552 +2025-05-17 08:28:33,021 - Epoch: [293][ 350/ 518] Overall Loss 2.534854 Objective Loss 2.534854 LR 0.000004 Time 0.383193 +2025-05-17 08:28:52,077 - Epoch: [293][ 400/ 518] Overall Loss 2.532656 Objective Loss 2.532656 LR 0.000004 Time 0.382930 +2025-05-17 08:29:11,126 - Epoch: [293][ 450/ 518] Overall Loss 2.534660 Objective Loss 2.534660 LR 0.000004 Time 0.382712 +2025-05-17 08:29:30,138 - Epoch: [293][ 500/ 518] Overall Loss 2.534914 Objective Loss 2.534914 LR 0.000004 Time 0.382462 +2025-05-17 08:29:36,830 - Epoch: [293][ 518/ 518] Overall Loss 2.533776 Objective Loss 2.533776 LR 0.000004 Time 0.382090 +2025-05-17 08:29:36,904 - --- validate (epoch=293)----------- +2025-05-17 08:29:36,904 - 4952 samples (32 per mini-batch) 
+2025-05-17 08:29:36,908 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:29:52,061 - Epoch: [293][ 50/ 155] Loss 2.760047 mAP 0.511476 +2025-05-17 08:30:06,949 - Epoch: [293][ 100/ 155] Loss 2.765238 mAP 0.497172 +2025-05-17 08:30:22,607 - Epoch: [293][ 150/ 155] Loss 2.760748 mAP 0.494682 +2025-05-17 08:30:25,747 - Epoch: [293][ 155/ 155] Loss 2.758822 mAP 0.495518 +2025-05-17 08:30:25,802 - ==> mAP: 0.49552 Loss: 2.759 + +2025-05-17 08:30:25,811 - ==> Best [mAP: 0.499302 vloss: 2.759028 Params: 2177088 on epoch: 292] +2025-05-17 08:30:25,811 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:30:25,901 - + +2025-05-17 08:30:25,902 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:30:45,933 - Epoch: [294][ 50/ 518] Overall Loss 2.577090 Objective Loss 2.577090 LR 0.000004 Time 0.400559 +2025-05-17 08:31:04,883 - Epoch: [294][ 100/ 518] Overall Loss 2.563201 Objective Loss 2.563201 LR 0.000004 Time 0.389766 +2025-05-17 08:31:23,885 - Epoch: [294][ 150/ 518] Overall Loss 2.552829 Objective Loss 2.552829 LR 0.000004 Time 0.386516 +2025-05-17 08:31:42,878 - Epoch: [294][ 200/ 518] Overall Loss 2.556942 Objective Loss 2.556942 LR 0.000004 Time 0.384848 +2025-05-17 08:32:01,883 - Epoch: [294][ 250/ 518] Overall Loss 2.554228 Objective Loss 2.554228 LR 0.000004 Time 0.383895 +2025-05-17 08:32:21,104 - Epoch: [294][ 300/ 518] Overall Loss 2.550296 Objective Loss 2.550296 LR 0.000004 Time 0.383977 +2025-05-17 08:32:40,118 - Epoch: [294][ 350/ 518] Overall Loss 2.551350 Objective Loss 2.551350 LR 0.000004 Time 0.383447 +2025-05-17 08:32:59,139 - Epoch: [294][ 400/ 518] Overall Loss 2.545780 Objective Loss 2.545780 LR 0.000004 Time 0.383068 +2025-05-17 08:33:18,135 - Epoch: [294][ 450/ 518] Overall Loss 2.540991 Objective Loss 2.540991 LR 0.000004 Time 0.382716 +2025-05-17 08:33:37,144 - Epoch: [294][ 
500/ 518] Overall Loss 2.535615 Objective Loss 2.535615 LR 0.000004 Time 0.382459 +2025-05-17 08:33:43,860 - Epoch: [294][ 518/ 518] Overall Loss 2.534024 Objective Loss 2.534024 LR 0.000004 Time 0.382132 +2025-05-17 08:33:43,933 - --- validate (epoch=294)----------- +2025-05-17 08:33:43,934 - 4952 samples (32 per mini-batch) +2025-05-17 08:33:43,938 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:33:59,090 - Epoch: [294][ 50/ 155] Loss 2.840446 mAP 0.489166 +2025-05-17 08:34:13,968 - Epoch: [294][ 100/ 155] Loss 2.784552 mAP 0.497103 +2025-05-17 08:34:29,683 - Epoch: [294][ 150/ 155] Loss 2.756749 mAP 0.500603 +2025-05-17 08:34:32,811 - Epoch: [294][ 155/ 155] Loss 2.760340 mAP 0.500620 +2025-05-17 08:34:32,871 - ==> mAP: 0.50062 Loss: 2.760 + +2025-05-17 08:34:32,880 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 08:34:32,880 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:34:32,995 - + +2025-05-17 08:34:32,995 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:34:53,086 - Epoch: [295][ 50/ 518] Overall Loss 2.499145 Objective Loss 2.499145 LR 0.000004 Time 0.401742 +2025-05-17 08:35:12,050 - Epoch: [295][ 100/ 518] Overall Loss 2.521779 Objective Loss 2.521779 LR 0.000004 Time 0.390506 +2025-05-17 08:35:31,033 - Epoch: [295][ 150/ 518] Overall Loss 2.525515 Objective Loss 2.525515 LR 0.000004 Time 0.386882 +2025-05-17 08:35:50,023 - Epoch: [295][ 200/ 518] Overall Loss 2.520520 Objective Loss 2.520520 LR 0.000004 Time 0.385107 +2025-05-17 08:36:09,037 - Epoch: [295][ 250/ 518] Overall Loss 2.528443 Objective Loss 2.528443 LR 0.000004 Time 0.384136 +2025-05-17 08:36:28,229 - Epoch: [295][ 300/ 518] Overall Loss 2.540506 Objective Loss 2.540506 LR 0.000004 Time 0.384082 +2025-05-17 08:36:47,266 - Epoch: [295][ 350/ 518] Overall Loss 2.545537 Objective 
Loss 2.545537 LR 0.000004 Time 0.383602 +2025-05-17 08:37:06,290 - Epoch: [295][ 400/ 518] Overall Loss 2.538328 Objective Loss 2.538328 LR 0.000004 Time 0.383209 +2025-05-17 08:37:25,322 - Epoch: [295][ 450/ 518] Overall Loss 2.530135 Objective Loss 2.530135 LR 0.000004 Time 0.382922 +2025-05-17 08:37:44,326 - Epoch: [295][ 500/ 518] Overall Loss 2.532877 Objective Loss 2.532877 LR 0.000004 Time 0.382636 +2025-05-17 08:37:51,026 - Epoch: [295][ 518/ 518] Overall Loss 2.533590 Objective Loss 2.533590 LR 0.000004 Time 0.382273 +2025-05-17 08:37:51,105 - --- validate (epoch=295)----------- +2025-05-17 08:37:51,106 - 4952 samples (32 per mini-batch) +2025-05-17 08:37:51,109 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:38:06,036 - Epoch: [295][ 50/ 155] Loss 2.764950 mAP 0.521915 +2025-05-17 08:38:21,006 - Epoch: [295][ 100/ 155] Loss 2.749723 mAP 0.513988 +2025-05-17 08:38:36,613 - Epoch: [295][ 150/ 155] Loss 2.772892 mAP 0.497833 +2025-05-17 08:38:39,690 - Epoch: [295][ 155/ 155] Loss 2.768955 mAP 0.498679 +2025-05-17 08:38:39,743 - ==> mAP: 0.49868 Loss: 2.769 + +2025-05-17 08:38:39,752 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 08:38:39,752 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:38:39,840 - + +2025-05-17 08:38:39,840 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:38:59,847 - Epoch: [296][ 50/ 518] Overall Loss 2.539269 Objective Loss 2.539269 LR 0.000004 Time 0.400064 +2025-05-17 08:39:18,781 - Epoch: [296][ 100/ 518] Overall Loss 2.543394 Objective Loss 2.543394 LR 0.000004 Time 0.389365 +2025-05-17 08:39:37,729 - Epoch: [296][ 150/ 518] Overall Loss 2.531812 Objective Loss 2.531812 LR 0.000004 Time 0.385883 +2025-05-17 08:39:56,689 - Epoch: [296][ 200/ 518] Overall Loss 2.527920 Objective Loss 2.527920 LR 0.000004 Time 0.384210 
+2025-05-17 08:40:15,873 - Epoch: [296][ 250/ 518] Overall Loss 2.528481 Objective Loss 2.528481 LR 0.000004 Time 0.384099 +2025-05-17 08:40:34,917 - Epoch: [296][ 300/ 518] Overall Loss 2.516663 Objective Loss 2.516663 LR 0.000004 Time 0.383558 +2025-05-17 08:40:53,942 - Epoch: [296][ 350/ 518] Overall Loss 2.519799 Objective Loss 2.519799 LR 0.000004 Time 0.383120 +2025-05-17 08:41:12,971 - Epoch: [296][ 400/ 518] Overall Loss 2.526022 Objective Loss 2.526022 LR 0.000004 Time 0.382800 +2025-05-17 08:41:32,002 - Epoch: [296][ 450/ 518] Overall Loss 2.524287 Objective Loss 2.524287 LR 0.000004 Time 0.382557 +2025-05-17 08:41:51,020 - Epoch: [296][ 500/ 518] Overall Loss 2.526670 Objective Loss 2.526670 LR 0.000004 Time 0.382334 +2025-05-17 08:41:57,733 - Epoch: [296][ 518/ 518] Overall Loss 2.526478 Objective Loss 2.526478 LR 0.000004 Time 0.382007 +2025-05-17 08:41:57,808 - --- validate (epoch=296)----------- +2025-05-17 08:41:57,809 - 4952 samples (32 per mini-batch) +2025-05-17 08:41:57,812 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:42:12,738 - Epoch: [296][ 50/ 155] Loss 2.756582 mAP 0.494833 +2025-05-17 08:42:27,608 - Epoch: [296][ 100/ 155] Loss 2.754657 mAP 0.506544 +2025-05-17 08:42:43,309 - Epoch: [296][ 150/ 155] Loss 2.744373 mAP 0.501706 +2025-05-17 08:42:46,400 - Epoch: [296][ 155/ 155] Loss 2.747173 mAP 0.499977 +2025-05-17 08:42:46,454 - ==> mAP: 0.49998 Loss: 2.747 + +2025-05-17 08:42:46,463 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 08:42:46,463 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:42:46,553 - + +2025-05-17 08:42:46,553 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:43:06,473 - Epoch: [297][ 50/ 518] Overall Loss 2.516519 Objective Loss 2.516519 LR 0.000004 Time 0.398327 +2025-05-17 08:43:25,477 - Epoch: [297][ 
100/ 518] Overall Loss 2.542691 Objective Loss 2.542691 LR 0.000004 Time 0.389202 +2025-05-17 08:43:44,659 - Epoch: [297][ 150/ 518] Overall Loss 2.536054 Objective Loss 2.536054 LR 0.000004 Time 0.387341 +2025-05-17 08:44:03,684 - Epoch: [297][ 200/ 518] Overall Loss 2.535720 Objective Loss 2.535720 LR 0.000004 Time 0.385623 +2025-05-17 08:44:22,693 - Epoch: [297][ 250/ 518] Overall Loss 2.528812 Objective Loss 2.528812 LR 0.000004 Time 0.384531 +2025-05-17 08:44:41,699 - Epoch: [297][ 300/ 518] Overall Loss 2.529793 Objective Loss 2.529793 LR 0.000004 Time 0.383792 +2025-05-17 08:45:00,679 - Epoch: [297][ 350/ 518] Overall Loss 2.531808 Objective Loss 2.531808 LR 0.000004 Time 0.383189 +2025-05-17 08:45:19,721 - Epoch: [297][ 400/ 518] Overall Loss 2.533622 Objective Loss 2.533622 LR 0.000004 Time 0.382894 +2025-05-17 08:45:38,882 - Epoch: [297][ 450/ 518] Overall Loss 2.532648 Objective Loss 2.532648 LR 0.000004 Time 0.382927 +2025-05-17 08:45:57,865 - Epoch: [297][ 500/ 518] Overall Loss 2.535698 Objective Loss 2.535698 LR 0.000004 Time 0.382598 +2025-05-17 08:46:04,576 - Epoch: [297][ 518/ 518] Overall Loss 2.534129 Objective Loss 2.534129 LR 0.000004 Time 0.382257 +2025-05-17 08:46:04,651 - --- validate (epoch=297)----------- +2025-05-17 08:46:04,652 - 4952 samples (32 per mini-batch) +2025-05-17 08:46:04,655 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:46:19,602 - Epoch: [297][ 50/ 155] Loss 2.728645 mAP 0.514243 +2025-05-17 08:46:34,701 - Epoch: [297][ 100/ 155] Loss 2.739892 mAP 0.503028 +2025-05-17 08:46:50,056 - Epoch: [297][ 150/ 155] Loss 2.757199 mAP 0.496013 +2025-05-17 08:46:53,051 - Epoch: [297][ 155/ 155] Loss 2.754043 mAP 0.497177 +2025-05-17 08:46:53,111 - ==> mAP: 0.49718 Loss: 2.754 + +2025-05-17 08:46:53,120 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 08:46:53,120 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:46:53,207 - + +2025-05-17 08:46:53,208 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:47:13,406 - Epoch: [298][ 50/ 518] Overall Loss 2.594848 Objective Loss 2.594848 LR 0.000004 Time 0.403895 +2025-05-17 08:47:32,589 - Epoch: [298][ 100/ 518] Overall Loss 2.536198 Objective Loss 2.536198 LR 0.000004 Time 0.393771 +2025-05-17 08:47:51,554 - Epoch: [298][ 150/ 518] Overall Loss 2.524167 Objective Loss 2.524167 LR 0.000004 Time 0.388937 +2025-05-17 08:48:10,547 - Epoch: [298][ 200/ 518] Overall Loss 2.524201 Objective Loss 2.524201 LR 0.000004 Time 0.386664 +2025-05-17 08:48:29,549 - Epoch: [298][ 250/ 518] Overall Loss 2.537352 Objective Loss 2.537352 LR 0.000004 Time 0.385334 +2025-05-17 08:48:48,562 - Epoch: [298][ 300/ 518] Overall Loss 2.539209 Objective Loss 2.539209 LR 0.000004 Time 0.384482 +2025-05-17 08:49:07,614 - Epoch: [298][ 350/ 518] Overall Loss 2.542614 Objective Loss 2.542614 LR 0.000004 Time 0.383990 +2025-05-17 08:49:26,785 - Epoch: [298][ 400/ 518] Overall Loss 2.541843 Objective Loss 2.541843 LR 0.000004 Time 0.383915 +2025-05-17 08:49:45,823 - Epoch: [298][ 450/ 518] Overall Loss 2.533353 Objective Loss 2.533353 LR 0.000004 Time 0.383563 +2025-05-17 08:50:04,816 - Epoch: [298][ 500/ 518] Overall Loss 2.529123 Objective Loss 2.529123 LR 0.000004 Time 0.383189 +2025-05-17 08:50:11,487 - Epoch: [298][ 518/ 518] Overall Loss 2.528443 Objective Loss 2.528443 LR 0.000004 Time 0.382751 +2025-05-17 08:50:11,562 - --- validate (epoch=298)----------- +2025-05-17 08:50:11,563 - 4952 samples (32 per mini-batch) +2025-05-17 08:50:11,566 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:50:26,604 - Epoch: [298][ 50/ 155] Loss 2.775647 mAP 0.499013 +2025-05-17 08:50:41,711 - Epoch: [298][ 100/ 155] Loss 2.766372 mAP 0.496862 +2025-05-17 08:50:57,265 - Epoch: 
[298][ 150/ 155] Loss 2.778260 mAP 0.490849 +2025-05-17 08:51:00,151 - Epoch: [298][ 155/ 155] Loss 2.775481 mAP 0.490736 +2025-05-17 08:51:00,211 - ==> mAP: 0.49074 Loss: 2.775 + +2025-05-17 08:51:00,220 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 08:51:00,221 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:51:00,306 - + +2025-05-17 08:51:00,307 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:51:20,544 - Epoch: [299][ 50/ 518] Overall Loss 2.549287 Objective Loss 2.549287 LR 0.000004 Time 0.404677 +2025-05-17 08:51:39,528 - Epoch: [299][ 100/ 518] Overall Loss 2.525609 Objective Loss 2.525609 LR 0.000004 Time 0.392176 +2025-05-17 08:51:58,503 - Epoch: [299][ 150/ 518] Overall Loss 2.522236 Objective Loss 2.522236 LR 0.000004 Time 0.387938 +2025-05-17 08:52:17,534 - Epoch: [299][ 200/ 518] Overall Loss 2.538485 Objective Loss 2.538485 LR 0.000004 Time 0.386109 +2025-05-17 08:52:36,553 - Epoch: [299][ 250/ 518] Overall Loss 2.544523 Objective Loss 2.544523 LR 0.000004 Time 0.384957 +2025-05-17 08:52:55,595 - Epoch: [299][ 300/ 518] Overall Loss 2.545166 Objective Loss 2.545166 LR 0.000004 Time 0.384268 +2025-05-17 08:53:14,813 - Epoch: [299][ 350/ 518] Overall Loss 2.541366 Objective Loss 2.541366 LR 0.000004 Time 0.384279 +2025-05-17 08:53:33,812 - Epoch: [299][ 400/ 518] Overall Loss 2.542082 Objective Loss 2.542082 LR 0.000004 Time 0.383739 +2025-05-17 08:53:52,809 - Epoch: [299][ 450/ 518] Overall Loss 2.534989 Objective Loss 2.534989 LR 0.000004 Time 0.383314 +2025-05-17 08:54:11,796 - Epoch: [299][ 500/ 518] Overall Loss 2.532727 Objective Loss 2.532727 LR 0.000004 Time 0.382954 +2025-05-17 08:54:18,479 - Epoch: [299][ 518/ 518] Overall Loss 2.534133 Objective Loss 2.534133 LR 0.000004 Time 0.382547 +2025-05-17 08:54:18,556 - --- validate (epoch=299)----------- +2025-05-17 08:54:18,557 - 4952 samples (32 per mini-batch) 
+2025-05-17 08:54:18,560 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:54:33,549 - Epoch: [299][ 50/ 155] Loss 2.724450 mAP 0.514695 +2025-05-17 08:54:48,830 - Epoch: [299][ 100/ 155] Loss 2.755710 mAP 0.500638 +2025-05-17 08:55:04,524 - Epoch: [299][ 150/ 155] Loss 2.757638 mAP 0.497490 +2025-05-17 08:55:07,484 - Epoch: [299][ 155/ 155] Loss 2.752744 mAP 0.497102 +2025-05-17 08:55:07,540 - ==> mAP: 0.49710 Loss: 2.753 + +2025-05-17 08:55:07,550 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 08:55:07,550 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:55:07,645 - + +2025-05-17 08:55:07,646 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:55:28,000 - Epoch: [300][ 50/ 518] Overall Loss 2.467008 Objective Loss 2.467008 LR 0.000004 Time 0.407017 +2025-05-17 08:55:46,988 - Epoch: [300][ 100/ 518] Overall Loss 2.487736 Objective Loss 2.487736 LR 0.000004 Time 0.393370 +2025-05-17 08:56:06,024 - Epoch: [300][ 150/ 518] Overall Loss 2.500172 Objective Loss 2.500172 LR 0.000004 Time 0.389151 +2025-05-17 08:56:25,063 - Epoch: [300][ 200/ 518] Overall Loss 2.504337 Objective Loss 2.504337 LR 0.000004 Time 0.387055 +2025-05-17 08:56:44,071 - Epoch: [300][ 250/ 518] Overall Loss 2.514947 Objective Loss 2.514947 LR 0.000004 Time 0.385669 +2025-05-17 08:57:03,271 - Epoch: [300][ 300/ 518] Overall Loss 2.529508 Objective Loss 2.529508 LR 0.000004 Time 0.385388 +2025-05-17 08:57:22,281 - Epoch: [300][ 350/ 518] Overall Loss 2.530220 Objective Loss 2.530220 LR 0.000004 Time 0.384644 +2025-05-17 08:57:41,281 - Epoch: [300][ 400/ 518] Overall Loss 2.533579 Objective Loss 2.533579 LR 0.000004 Time 0.384060 +2025-05-17 08:58:00,301 - Epoch: [300][ 450/ 518] Overall Loss 2.534031 Objective Loss 2.534031 LR 0.000004 Time 0.383651 +2025-05-17 08:58:19,333 - Epoch: [300][ 
500/ 518] Overall Loss 2.536487 Objective Loss 2.536487 LR 0.000004 Time 0.383347 +2025-05-17 08:58:26,033 - Epoch: [300][ 518/ 518] Overall Loss 2.533799 Objective Loss 2.533799 LR 0.000004 Time 0.382959 +2025-05-17 08:58:26,110 - --- validate (epoch=300)----------- +2025-05-17 08:58:26,111 - 4952 samples (32 per mini-batch) +2025-05-17 08:58:26,114 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 08:58:41,258 - Epoch: [300][ 50/ 155] Loss 2.730192 mAP 0.489307 +2025-05-17 08:58:56,388 - Epoch: [300][ 100/ 155] Loss 2.735907 mAP 0.499316 +2025-05-17 08:59:12,187 - Epoch: [300][ 150/ 155] Loss 2.762155 mAP 0.495231 +2025-05-17 08:59:15,228 - Epoch: [300][ 155/ 155] Loss 2.763062 mAP 0.494856 +2025-05-17 08:59:15,281 - ==> mAP: 0.49486 Loss: 2.763 + +2025-05-17 08:59:15,290 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 08:59:15,291 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 08:59:15,384 - + +2025-05-17 08:59:15,384 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 08:59:35,497 - Epoch: [301][ 50/ 518] Overall Loss 2.511732 Objective Loss 2.511732 LR 0.000004 Time 0.402181 +2025-05-17 08:59:54,484 - Epoch: [301][ 100/ 518] Overall Loss 2.518218 Objective Loss 2.518218 LR 0.000004 Time 0.390954 +2025-05-17 09:00:13,467 - Epoch: [301][ 150/ 518] Overall Loss 2.509748 Objective Loss 2.509748 LR 0.000004 Time 0.387180 +2025-05-17 09:00:32,433 - Epoch: [301][ 200/ 518] Overall Loss 2.515898 Objective Loss 2.515898 LR 0.000004 Time 0.385210 +2025-05-17 09:00:51,612 - Epoch: [301][ 250/ 518] Overall Loss 2.513790 Objective Loss 2.513790 LR 0.000004 Time 0.384879 +2025-05-17 09:01:10,622 - Epoch: [301][ 300/ 518] Overall Loss 2.514922 Objective Loss 2.514922 LR 0.000004 Time 0.384094 +2025-05-17 09:01:29,613 - Epoch: [301][ 350/ 518] Overall Loss 2.515057 Objective 
Loss 2.515057 LR 0.000004 Time 0.383480 +2025-05-17 09:01:48,650 - Epoch: [301][ 400/ 518] Overall Loss 2.516535 Objective Loss 2.516535 LR 0.000004 Time 0.383135 +2025-05-17 09:02:07,695 - Epoch: [301][ 450/ 518] Overall Loss 2.521608 Objective Loss 2.521608 LR 0.000004 Time 0.382885 +2025-05-17 09:02:26,728 - Epoch: [301][ 500/ 518] Overall Loss 2.524586 Objective Loss 2.524586 LR 0.000004 Time 0.382661 +2025-05-17 09:02:33,606 - Epoch: [301][ 518/ 518] Overall Loss 2.524955 Objective Loss 2.524955 LR 0.000004 Time 0.382641 +2025-05-17 09:02:33,677 - --- validate (epoch=301)----------- +2025-05-17 09:02:33,678 - 4952 samples (32 per mini-batch) +2025-05-17 09:02:33,681 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:02:48,473 - Epoch: [301][ 50/ 155] Loss 2.764923 mAP 0.513055 +2025-05-17 09:03:03,368 - Epoch: [301][ 100/ 155] Loss 2.736998 mAP 0.500534 +2025-05-17 09:03:18,841 - Epoch: [301][ 150/ 155] Loss 2.755739 mAP 0.495007 +2025-05-17 09:03:21,870 - Epoch: [301][ 155/ 155] Loss 2.759709 mAP 0.495711 +2025-05-17 09:03:21,923 - ==> mAP: 0.49571 Loss: 2.760 + +2025-05-17 09:03:21,932 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:03:21,932 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:03:22,020 - + +2025-05-17 09:03:22,020 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:03:42,015 - Epoch: [302][ 50/ 518] Overall Loss 2.464971 Objective Loss 2.464971 LR 0.000004 Time 0.399832 +2025-05-17 09:04:00,945 - Epoch: [302][ 100/ 518] Overall Loss 2.502453 Objective Loss 2.502453 LR 0.000004 Time 0.389202 +2025-05-17 09:04:19,889 - Epoch: [302][ 150/ 518] Overall Loss 2.513972 Objective Loss 2.513972 LR 0.000004 Time 0.385748 +2025-05-17 09:04:39,043 - Epoch: [302][ 200/ 518] Overall Loss 2.523553 Objective Loss 2.523553 LR 0.000004 Time 0.385075 
+2025-05-17 09:04:58,057 - Epoch: [302][ 250/ 518] Overall Loss 2.527727 Objective Loss 2.527727 LR 0.000004 Time 0.384111 +2025-05-17 09:05:17,083 - Epoch: [302][ 300/ 518] Overall Loss 2.526446 Objective Loss 2.526446 LR 0.000004 Time 0.383512 +2025-05-17 09:05:36,097 - Epoch: [302][ 350/ 518] Overall Loss 2.522904 Objective Loss 2.522904 LR 0.000004 Time 0.383047 +2025-05-17 09:05:55,136 - Epoch: [302][ 400/ 518] Overall Loss 2.518479 Objective Loss 2.518479 LR 0.000004 Time 0.382761 +2025-05-17 09:06:14,170 - Epoch: [302][ 450/ 518] Overall Loss 2.528627 Objective Loss 2.528627 LR 0.000004 Time 0.382526 +2025-05-17 09:06:33,185 - Epoch: [302][ 500/ 518] Overall Loss 2.532812 Objective Loss 2.532812 LR 0.000004 Time 0.382303 +2025-05-17 09:06:39,867 - Epoch: [302][ 518/ 518] Overall Loss 2.533451 Objective Loss 2.533451 LR 0.000004 Time 0.381916 +2025-05-17 09:06:39,943 - --- validate (epoch=302)----------- +2025-05-17 09:06:39,944 - 4952 samples (32 per mini-batch) +2025-05-17 09:06:39,947 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:06:54,974 - Epoch: [302][ 50/ 155] Loss 2.758843 mAP 0.497819 +2025-05-17 09:07:10,009 - Epoch: [302][ 100/ 155] Loss 2.776828 mAP 0.489028 +2025-05-17 09:07:25,455 - Epoch: [302][ 150/ 155] Loss 2.755749 mAP 0.495869 +2025-05-17 09:07:28,504 - Epoch: [302][ 155/ 155] Loss 2.755007 mAP 0.495028 +2025-05-17 09:07:28,558 - ==> mAP: 0.49503 Loss: 2.755 + +2025-05-17 09:07:28,568 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:07:28,568 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:07:28,660 - + +2025-05-17 09:07:28,660 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:07:48,794 - Epoch: [303][ 50/ 518] Overall Loss 2.564531 Objective Loss 2.564531 LR 0.000004 Time 0.402598 +2025-05-17 09:08:07,805 - Epoch: [303][ 
100/ 518] Overall Loss 2.557253 Objective Loss 2.557253 LR 0.000004 Time 0.391402 +2025-05-17 09:08:26,832 - Epoch: [303][ 150/ 518] Overall Loss 2.560645 Objective Loss 2.560645 LR 0.000004 Time 0.387779 +2025-05-17 09:08:45,830 - Epoch: [303][ 200/ 518] Overall Loss 2.536615 Objective Loss 2.536615 LR 0.000004 Time 0.385821 +2025-05-17 09:09:04,993 - Epoch: [303][ 250/ 518] Overall Loss 2.531010 Objective Loss 2.531010 LR 0.000004 Time 0.385302 +2025-05-17 09:09:24,039 - Epoch: [303][ 300/ 518] Overall Loss 2.532379 Objective Loss 2.532379 LR 0.000004 Time 0.384568 +2025-05-17 09:09:43,072 - Epoch: [303][ 350/ 518] Overall Loss 2.534534 Objective Loss 2.534534 LR 0.000004 Time 0.384008 +2025-05-17 09:10:02,087 - Epoch: [303][ 400/ 518] Overall Loss 2.539833 Objective Loss 2.539833 LR 0.000004 Time 0.383542 +2025-05-17 09:10:21,086 - Epoch: [303][ 450/ 518] Overall Loss 2.535864 Objective Loss 2.535864 LR 0.000004 Time 0.383144 +2025-05-17 09:10:40,083 - Epoch: [303][ 500/ 518] Overall Loss 2.536896 Objective Loss 2.536896 LR 0.000004 Time 0.382821 +2025-05-17 09:10:46,907 - Epoch: [303][ 518/ 518] Overall Loss 2.536808 Objective Loss 2.536808 LR 0.000004 Time 0.382690 +2025-05-17 09:10:46,984 - --- validate (epoch=303)----------- +2025-05-17 09:10:46,985 - 4952 samples (32 per mini-batch) +2025-05-17 09:10:46,988 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:11:01,702 - Epoch: [303][ 50/ 155] Loss 2.802571 mAP 0.485190 +2025-05-17 09:11:16,579 - Epoch: [303][ 100/ 155] Loss 2.758839 mAP 0.489126 +2025-05-17 09:11:32,117 - Epoch: [303][ 150/ 155] Loss 2.752747 mAP 0.493772 +2025-05-17 09:11:35,153 - Epoch: [303][ 155/ 155] Loss 2.756127 mAP 0.493479 +2025-05-17 09:11:35,206 - ==> mAP: 0.49348 Loss: 2.756 + +2025-05-17 09:11:35,215 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:11:35,216 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:11:35,306 - + +2025-05-17 09:11:35,307 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:11:55,422 - Epoch: [304][ 50/ 518] Overall Loss 2.534659 Objective Loss 2.534659 LR 0.000004 Time 0.402225 +2025-05-17 09:12:14,363 - Epoch: [304][ 100/ 518] Overall Loss 2.551471 Objective Loss 2.551471 LR 0.000004 Time 0.390510 +2025-05-17 09:12:33,535 - Epoch: [304][ 150/ 518] Overall Loss 2.529640 Objective Loss 2.529640 LR 0.000004 Time 0.388144 +2025-05-17 09:12:52,532 - Epoch: [304][ 200/ 518] Overall Loss 2.522797 Objective Loss 2.522797 LR 0.000004 Time 0.386090 +2025-05-17 09:13:11,508 - Epoch: [304][ 250/ 518] Overall Loss 2.526495 Objective Loss 2.526495 LR 0.000004 Time 0.384770 +2025-05-17 09:13:30,490 - Epoch: [304][ 300/ 518] Overall Loss 2.521208 Objective Loss 2.521208 LR 0.000004 Time 0.383911 +2025-05-17 09:13:49,469 - Epoch: [304][ 350/ 518] Overall Loss 2.523424 Objective Loss 2.523424 LR 0.000004 Time 0.383289 +2025-05-17 09:14:08,444 - Epoch: [304][ 400/ 518] Overall Loss 2.524445 Objective Loss 2.524445 LR 0.000004 Time 0.382812 +2025-05-17 09:14:27,551 - Epoch: [304][ 450/ 518] Overall Loss 2.515176 Objective Loss 2.515176 LR 0.000004 Time 0.382733 +2025-05-17 09:14:46,517 - Epoch: [304][ 500/ 518] Overall Loss 2.518309 Objective Loss 2.518309 LR 0.000004 Time 0.382391 +2025-05-17 09:14:53,201 - Epoch: [304][ 518/ 518] Overall Loss 2.521925 Objective Loss 2.521925 LR 0.000004 Time 0.382005 +2025-05-17 09:14:53,275 - --- validate (epoch=304)----------- +2025-05-17 09:14:53,275 - 4952 samples (32 per mini-batch) +2025-05-17 09:14:53,278 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:15:07,997 - Epoch: [304][ 50/ 155] Loss 2.789175 mAP 0.510451 +2025-05-17 09:15:22,956 - Epoch: [304][ 100/ 155] Loss 2.762209 mAP 0.498201 +2025-05-17 09:15:38,223 - Epoch: 
[304][ 150/ 155] Loss 2.751982 mAP 0.500107 +2025-05-17 09:15:41,220 - Epoch: [304][ 155/ 155] Loss 2.748671 mAP 0.499291 +2025-05-17 09:15:41,278 - ==> mAP: 0.49929 Loss: 2.749 + +2025-05-17 09:15:41,287 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:15:41,287 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:15:41,381 - + +2025-05-17 09:15:41,381 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:16:01,738 - Epoch: [305][ 50/ 518] Overall Loss 2.530592 Objective Loss 2.530592 LR 0.000004 Time 0.407057 +2025-05-17 09:16:20,722 - Epoch: [305][ 100/ 518] Overall Loss 2.542369 Objective Loss 2.542369 LR 0.000004 Time 0.393363 +2025-05-17 09:16:39,727 - Epoch: [305][ 150/ 518] Overall Loss 2.539670 Objective Loss 2.539670 LR 0.000004 Time 0.388936 +2025-05-17 09:16:58,751 - Epoch: [305][ 200/ 518] Overall Loss 2.535984 Objective Loss 2.535984 LR 0.000004 Time 0.386814 +2025-05-17 09:17:17,764 - Epoch: [305][ 250/ 518] Overall Loss 2.529220 Objective Loss 2.529220 LR 0.000004 Time 0.385499 +2025-05-17 09:17:36,787 - Epoch: [305][ 300/ 518] Overall Loss 2.524047 Objective Loss 2.524047 LR 0.000004 Time 0.384656 +2025-05-17 09:17:55,966 - Epoch: [305][ 350/ 518] Overall Loss 2.520774 Objective Loss 2.520774 LR 0.000004 Time 0.384500 +2025-05-17 09:18:14,990 - Epoch: [305][ 400/ 518] Overall Loss 2.522526 Objective Loss 2.522526 LR 0.000004 Time 0.383994 +2025-05-17 09:18:34,035 - Epoch: [305][ 450/ 518] Overall Loss 2.520087 Objective Loss 2.520087 LR 0.000004 Time 0.383647 +2025-05-17 09:18:53,070 - Epoch: [305][ 500/ 518] Overall Loss 2.519207 Objective Loss 2.519207 LR 0.000004 Time 0.383350 +2025-05-17 09:18:59,774 - Epoch: [305][ 518/ 518] Overall Loss 2.522302 Objective Loss 2.522302 LR 0.000004 Time 0.382970 +2025-05-17 09:18:59,849 - --- validate (epoch=305)----------- +2025-05-17 09:18:59,850 - 4952 samples (32 per mini-batch) 
+2025-05-17 09:18:59,853 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:19:14,707 - Epoch: [305][ 50/ 155] Loss 2.705119 mAP 0.526729 +2025-05-17 09:19:29,803 - Epoch: [305][ 100/ 155] Loss 2.746607 mAP 0.509464 +2025-05-17 09:19:45,309 - Epoch: [305][ 150/ 155] Loss 2.765869 mAP 0.496136 +2025-05-17 09:19:48,228 - Epoch: [305][ 155/ 155] Loss 2.765009 mAP 0.497373 +2025-05-17 09:19:48,282 - ==> mAP: 0.49737 Loss: 2.765 + +2025-05-17 09:19:48,291 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:19:48,291 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:19:48,378 - + +2025-05-17 09:19:48,379 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:20:08,626 - Epoch: [306][ 50/ 518] Overall Loss 2.552047 Objective Loss 2.552047 LR 0.000004 Time 0.404869 +2025-05-17 09:20:27,613 - Epoch: [306][ 100/ 518] Overall Loss 2.540997 Objective Loss 2.540997 LR 0.000004 Time 0.392299 +2025-05-17 09:20:46,631 - Epoch: [306][ 150/ 518] Overall Loss 2.518411 Objective Loss 2.518411 LR 0.000004 Time 0.388309 +2025-05-17 09:21:05,608 - Epoch: [306][ 200/ 518] Overall Loss 2.519594 Objective Loss 2.519594 LR 0.000004 Time 0.386111 +2025-05-17 09:21:24,666 - Epoch: [306][ 250/ 518] Overall Loss 2.520803 Objective Loss 2.520803 LR 0.000004 Time 0.385119 +2025-05-17 09:21:43,686 - Epoch: [306][ 300/ 518] Overall Loss 2.517742 Objective Loss 2.517742 LR 0.000004 Time 0.384329 +2025-05-17 09:22:02,851 - Epoch: [306][ 350/ 518] Overall Loss 2.524881 Objective Loss 2.524881 LR 0.000004 Time 0.384180 +2025-05-17 09:22:21,836 - Epoch: [306][ 400/ 518] Overall Loss 2.520504 Objective Loss 2.520504 LR 0.000004 Time 0.383615 +2025-05-17 09:22:40,834 - Epoch: [306][ 450/ 518] Overall Loss 2.521793 Objective Loss 2.521793 LR 0.000004 Time 0.383206 +2025-05-17 09:22:59,870 - Epoch: [306][ 
500/ 518] Overall Loss 2.522284 Objective Loss 2.522284 LR 0.000004 Time 0.382955 +2025-05-17 09:23:06,567 - Epoch: [306][ 518/ 518] Overall Loss 2.526416 Objective Loss 2.526416 LR 0.000004 Time 0.382578 +2025-05-17 09:23:06,642 - --- validate (epoch=306)----------- +2025-05-17 09:23:06,643 - 4952 samples (32 per mini-batch) +2025-05-17 09:23:06,646 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:23:21,308 - Epoch: [306][ 50/ 155] Loss 2.726171 mAP 0.495726 +2025-05-17 09:23:36,278 - Epoch: [306][ 100/ 155] Loss 2.745134 mAP 0.497167 +2025-05-17 09:23:51,699 - Epoch: [306][ 150/ 155] Loss 2.756162 mAP 0.496765 +2025-05-17 09:23:54,550 - Epoch: [306][ 155/ 155] Loss 2.761526 mAP 0.497441 +2025-05-17 09:23:54,602 - ==> mAP: 0.49744 Loss: 2.762 + +2025-05-17 09:23:54,611 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:23:54,611 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:23:54,700 - + +2025-05-17 09:23:54,701 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:24:14,971 - Epoch: [307][ 50/ 518] Overall Loss 2.522459 Objective Loss 2.522459 LR 0.000004 Time 0.405333 +2025-05-17 09:24:33,962 - Epoch: [307][ 100/ 518] Overall Loss 2.535130 Objective Loss 2.535130 LR 0.000004 Time 0.392570 +2025-05-17 09:24:52,948 - Epoch: [307][ 150/ 518] Overall Loss 2.528653 Objective Loss 2.528653 LR 0.000004 Time 0.388278 +2025-05-17 09:25:11,949 - Epoch: [307][ 200/ 518] Overall Loss 2.524878 Objective Loss 2.524878 LR 0.000004 Time 0.386206 +2025-05-17 09:25:30,973 - Epoch: [307][ 250/ 518] Overall Loss 2.529337 Objective Loss 2.529337 LR 0.000004 Time 0.385056 +2025-05-17 09:25:49,970 - Epoch: [307][ 300/ 518] Overall Loss 2.520027 Objective Loss 2.520027 LR 0.000004 Time 0.384200 +2025-05-17 09:26:08,970 - Epoch: [307][ 350/ 518] Overall Loss 2.526198 Objective 
Loss 2.526198 LR 0.000004 Time 0.383594 +2025-05-17 09:26:27,970 - Epoch: [307][ 400/ 518] Overall Loss 2.523461 Objective Loss 2.523461 LR 0.000004 Time 0.383141 +2025-05-17 09:26:47,128 - Epoch: [307][ 450/ 518] Overall Loss 2.523350 Objective Loss 2.523350 LR 0.000004 Time 0.383142 +2025-05-17 09:27:06,183 - Epoch: [307][ 500/ 518] Overall Loss 2.529598 Objective Loss 2.529598 LR 0.000004 Time 0.382936 +2025-05-17 09:27:12,907 - Epoch: [307][ 518/ 518] Overall Loss 2.530948 Objective Loss 2.530948 LR 0.000004 Time 0.382609 +2025-05-17 09:27:12,986 - --- validate (epoch=307)----------- +2025-05-17 09:27:12,987 - 4952 samples (32 per mini-batch) +2025-05-17 09:27:12,991 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:27:27,752 - Epoch: [307][ 50/ 155] Loss 2.765060 mAP 0.502308 +2025-05-17 09:27:42,766 - Epoch: [307][ 100/ 155] Loss 2.776274 mAP 0.493778 +2025-05-17 09:27:58,159 - Epoch: [307][ 150/ 155] Loss 2.758859 mAP 0.495336 +2025-05-17 09:28:01,178 - Epoch: [307][ 155/ 155] Loss 2.756138 mAP 0.496341 +2025-05-17 09:28:01,231 - ==> mAP: 0.49634 Loss: 2.756 + +2025-05-17 09:28:01,241 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:28:01,241 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:28:01,331 - + +2025-05-17 09:28:01,331 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:28:21,408 - Epoch: [308][ 50/ 518] Overall Loss 2.542272 Objective Loss 2.542272 LR 0.000004 Time 0.401459 +2025-05-17 09:28:40,581 - Epoch: [308][ 100/ 518] Overall Loss 2.558998 Objective Loss 2.558998 LR 0.000004 Time 0.392450 +2025-05-17 09:28:59,602 - Epoch: [308][ 150/ 518] Overall Loss 2.542320 Objective Loss 2.542320 LR 0.000004 Time 0.388430 +2025-05-17 09:29:18,584 - Epoch: [308][ 200/ 518] Overall Loss 2.545928 Objective Loss 2.545928 LR 0.000004 Time 0.386226 
+2025-05-17 09:29:37,583 - Epoch: [308][ 250/ 518] Overall Loss 2.534710 Objective Loss 2.534710 LR 0.000004 Time 0.384974 +2025-05-17 09:29:56,619 - Epoch: [308][ 300/ 518] Overall Loss 2.520000 Objective Loss 2.520000 LR 0.000004 Time 0.384262 +2025-05-17 09:30:15,653 - Epoch: [308][ 350/ 518] Overall Loss 2.521183 Objective Loss 2.521183 LR 0.000004 Time 0.383747 +2025-05-17 09:30:34,706 - Epoch: [308][ 400/ 518] Overall Loss 2.522680 Objective Loss 2.522680 LR 0.000004 Time 0.383411 +2025-05-17 09:30:53,724 - Epoch: [308][ 450/ 518] Overall Loss 2.519787 Objective Loss 2.519787 LR 0.000004 Time 0.383068 +2025-05-17 09:31:12,878 - Epoch: [308][ 500/ 518] Overall Loss 2.519491 Objective Loss 2.519491 LR 0.000004 Time 0.383067 +2025-05-17 09:31:19,592 - Epoch: [308][ 518/ 518] Overall Loss 2.519119 Objective Loss 2.519119 LR 0.000004 Time 0.382716 +2025-05-17 09:31:19,662 - --- validate (epoch=308)----------- +2025-05-17 09:31:19,663 - 4952 samples (32 per mini-batch) +2025-05-17 09:31:19,667 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:31:34,402 - Epoch: [308][ 50/ 155] Loss 2.800331 mAP 0.498607 +2025-05-17 09:31:49,431 - Epoch: [308][ 100/ 155] Loss 2.781605 mAP 0.493803 +2025-05-17 09:32:05,128 - Epoch: [308][ 150/ 155] Loss 2.762479 mAP 0.496799 +2025-05-17 09:32:08,125 - Epoch: [308][ 155/ 155] Loss 2.759482 mAP 0.495963 +2025-05-17 09:32:08,178 - ==> mAP: 0.49596 Loss: 2.759 + +2025-05-17 09:32:08,187 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:32:08,187 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:32:08,274 - + +2025-05-17 09:32:08,275 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:32:28,307 - Epoch: [309][ 50/ 518] Overall Loss 2.539243 Objective Loss 2.539243 LR 0.000004 Time 0.400588 +2025-05-17 09:32:47,465 - Epoch: [309][ 
100/ 518] Overall Loss 2.518712 Objective Loss 2.518712 LR 0.000004 Time 0.391862 +2025-05-17 09:33:06,506 - Epoch: [309][ 150/ 518] Overall Loss 2.522209 Objective Loss 2.522209 LR 0.000004 Time 0.388178 +2025-05-17 09:33:25,525 - Epoch: [309][ 200/ 518] Overall Loss 2.520776 Objective Loss 2.520776 LR 0.000004 Time 0.386226 +2025-05-17 09:33:44,569 - Epoch: [309][ 250/ 518] Overall Loss 2.532812 Objective Loss 2.532812 LR 0.000004 Time 0.385150 +2025-05-17 09:34:03,589 - Epoch: [309][ 300/ 518] Overall Loss 2.533640 Objective Loss 2.533640 LR 0.000004 Time 0.384357 +2025-05-17 09:34:22,556 - Epoch: [309][ 350/ 518] Overall Loss 2.533190 Objective Loss 2.533190 LR 0.000004 Time 0.383635 +2025-05-17 09:34:41,785 - Epoch: [309][ 400/ 518] Overall Loss 2.531336 Objective Loss 2.531336 LR 0.000004 Time 0.383751 +2025-05-17 09:35:00,810 - Epoch: [309][ 450/ 518] Overall Loss 2.525896 Objective Loss 2.525896 LR 0.000004 Time 0.383390 +2025-05-17 09:35:19,835 - Epoch: [309][ 500/ 518] Overall Loss 2.525793 Objective Loss 2.525793 LR 0.000004 Time 0.383099 +2025-05-17 09:35:26,537 - Epoch: [309][ 518/ 518] Overall Loss 2.522506 Objective Loss 2.522506 LR 0.000004 Time 0.382723 +2025-05-17 09:35:26,608 - --- validate (epoch=309)----------- +2025-05-17 09:35:26,609 - 4952 samples (32 per mini-batch) +2025-05-17 09:35:26,612 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:35:41,399 - Epoch: [309][ 50/ 155] Loss 2.762568 mAP 0.493784 +2025-05-17 09:35:56,287 - Epoch: [309][ 100/ 155] Loss 2.748680 mAP 0.498676 +2025-05-17 09:36:11,667 - Epoch: [309][ 150/ 155] Loss 2.765410 mAP 0.490396 +2025-05-17 09:36:14,530 - Epoch: [309][ 155/ 155] Loss 2.765984 mAP 0.491834 +2025-05-17 09:36:14,583 - ==> mAP: 0.49183 Loss: 2.766 + +2025-05-17 09:36:14,592 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:36:14,592 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:36:14,679 - + +2025-05-17 09:36:14,680 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:36:34,708 - Epoch: [310][ 50/ 518] Overall Loss 2.548339 Objective Loss 2.548339 LR 0.000004 Time 0.400509 +2025-05-17 09:36:53,833 - Epoch: [310][ 100/ 518] Overall Loss 2.515767 Objective Loss 2.515767 LR 0.000004 Time 0.391490 +2025-05-17 09:37:12,802 - Epoch: [310][ 150/ 518] Overall Loss 2.509981 Objective Loss 2.509981 LR 0.000004 Time 0.387446 +2025-05-17 09:37:31,771 - Epoch: [310][ 200/ 518] Overall Loss 2.509442 Objective Loss 2.509442 LR 0.000004 Time 0.385420 +2025-05-17 09:37:50,759 - Epoch: [310][ 250/ 518] Overall Loss 2.511230 Objective Loss 2.511230 LR 0.000004 Time 0.384285 +2025-05-17 09:38:09,749 - Epoch: [310][ 300/ 518] Overall Loss 2.513108 Objective Loss 2.513108 LR 0.000004 Time 0.383532 +2025-05-17 09:38:28,744 - Epoch: [310][ 350/ 518] Overall Loss 2.514045 Objective Loss 2.514045 LR 0.000004 Time 0.383010 +2025-05-17 09:38:47,719 - Epoch: [310][ 400/ 518] Overall Loss 2.516170 Objective Loss 2.516170 LR 0.000004 Time 0.382566 +2025-05-17 09:39:06,861 - Epoch: [310][ 450/ 518] Overall Loss 2.515267 Objective Loss 2.515267 LR 0.000004 Time 0.382594 +2025-05-17 09:39:25,849 - Epoch: [310][ 500/ 518] Overall Loss 2.519047 Objective Loss 2.519047 LR 0.000004 Time 0.382308 +2025-05-17 09:39:32,561 - Epoch: [310][ 518/ 518] Overall Loss 2.519216 Objective Loss 2.519216 LR 0.000004 Time 0.381980 +2025-05-17 09:39:32,637 - --- validate (epoch=310)----------- +2025-05-17 09:39:32,638 - 4952 samples (32 per mini-batch) +2025-05-17 09:39:32,641 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:39:47,421 - Epoch: [310][ 50/ 155] Loss 2.812011 mAP 0.495186 +2025-05-17 09:40:02,354 - Epoch: [310][ 100/ 155] Loss 2.788126 mAP 0.484590 +2025-05-17 09:40:17,740 - Epoch: 
[310][ 150/ 155] Loss 2.756525 mAP 0.496189 +2025-05-17 09:40:20,784 - Epoch: [310][ 155/ 155] Loss 2.756019 mAP 0.496694 +2025-05-17 09:40:20,838 - ==> mAP: 0.49669 Loss: 2.756 + +2025-05-17 09:40:20,847 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:40:20,847 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:40:20,936 - + +2025-05-17 09:40:20,936 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:40:41,023 - Epoch: [311][ 50/ 518] Overall Loss 2.509962 Objective Loss 2.509962 LR 0.000004 Time 0.401680 +2025-05-17 09:41:00,006 - Epoch: [311][ 100/ 518] Overall Loss 2.509061 Objective Loss 2.509061 LR 0.000004 Time 0.390653 +2025-05-17 09:41:19,227 - Epoch: [311][ 150/ 518] Overall Loss 2.507773 Objective Loss 2.507773 LR 0.000004 Time 0.388571 +2025-05-17 09:41:38,263 - Epoch: [311][ 200/ 518] Overall Loss 2.515059 Objective Loss 2.515059 LR 0.000004 Time 0.386603 +2025-05-17 09:41:57,271 - Epoch: [311][ 250/ 518] Overall Loss 2.512663 Objective Loss 2.512663 LR 0.000004 Time 0.385310 +2025-05-17 09:42:16,292 - Epoch: [311][ 300/ 518] Overall Loss 2.520070 Objective Loss 2.520070 LR 0.000004 Time 0.384494 +2025-05-17 09:42:35,370 - Epoch: [311][ 350/ 518] Overall Loss 2.519425 Objective Loss 2.519425 LR 0.000004 Time 0.384074 +2025-05-17 09:42:54,392 - Epoch: [311][ 400/ 518] Overall Loss 2.516169 Objective Loss 2.516169 LR 0.000004 Time 0.383616 +2025-05-17 09:43:13,388 - Epoch: [311][ 450/ 518] Overall Loss 2.513052 Objective Loss 2.513052 LR 0.000004 Time 0.383203 +2025-05-17 09:43:32,429 - Epoch: [311][ 500/ 518] Overall Loss 2.515587 Objective Loss 2.515587 LR 0.000004 Time 0.382962 +2025-05-17 09:43:39,274 - Epoch: [311][ 518/ 518] Overall Loss 2.514816 Objective Loss 2.514816 LR 0.000004 Time 0.382868 +2025-05-17 09:43:39,352 - --- validate (epoch=311)----------- +2025-05-17 09:43:39,353 - 4952 samples (32 per mini-batch) 
+2025-05-17 09:43:39,356 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:43:54,233 - Epoch: [311][ 50/ 155] Loss 2.742838 mAP 0.507514 +2025-05-17 09:44:09,196 - Epoch: [311][ 100/ 155] Loss 2.768561 mAP 0.499532 +2025-05-17 09:44:24,834 - Epoch: [311][ 150/ 155] Loss 2.753132 mAP 0.497696 +2025-05-17 09:44:27,830 - Epoch: [311][ 155/ 155] Loss 2.747866 mAP 0.499123 +2025-05-17 09:44:27,882 - ==> mAP: 0.49912 Loss: 2.748 + +2025-05-17 09:44:27,892 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:44:27,892 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:44:27,979 - + +2025-05-17 09:44:27,979 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:44:48,085 - Epoch: [312][ 50/ 518] Overall Loss 2.506671 Objective Loss 2.506671 LR 0.000004 Time 0.402053 +2025-05-17 09:45:07,049 - Epoch: [312][ 100/ 518] Overall Loss 2.523310 Objective Loss 2.523310 LR 0.000004 Time 0.390654 +2025-05-17 09:45:26,215 - Epoch: [312][ 150/ 518] Overall Loss 2.514353 Objective Loss 2.514353 LR 0.000004 Time 0.388196 +2025-05-17 09:45:45,236 - Epoch: [312][ 200/ 518] Overall Loss 2.512252 Objective Loss 2.512252 LR 0.000004 Time 0.386252 +2025-05-17 09:46:04,266 - Epoch: [312][ 250/ 518] Overall Loss 2.510894 Objective Loss 2.510894 LR 0.000004 Time 0.385117 +2025-05-17 09:46:23,292 - Epoch: [312][ 300/ 518] Overall Loss 2.505091 Objective Loss 2.505091 LR 0.000004 Time 0.384347 +2025-05-17 09:46:42,274 - Epoch: [312][ 350/ 518] Overall Loss 2.506689 Objective Loss 2.506689 LR 0.000004 Time 0.383673 +2025-05-17 09:47:01,281 - Epoch: [312][ 400/ 518] Overall Loss 2.505660 Objective Loss 2.505660 LR 0.000004 Time 0.383228 +2025-05-17 09:47:20,449 - Epoch: [312][ 450/ 518] Overall Loss 2.509390 Objective Loss 2.509390 LR 0.000004 Time 0.383239 +2025-05-17 09:47:39,484 - Epoch: [312][ 
500/ 518] Overall Loss 2.513401 Objective Loss 2.513401 LR 0.000004 Time 0.382984 +2025-05-17 09:47:46,202 - Epoch: [312][ 518/ 518] Overall Loss 2.517393 Objective Loss 2.517393 LR 0.000004 Time 0.382645 +2025-05-17 09:47:46,271 - --- validate (epoch=312)----------- +2025-05-17 09:47:46,272 - 4952 samples (32 per mini-batch) +2025-05-17 09:47:46,275 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:48:01,237 - Epoch: [312][ 50/ 155] Loss 2.776524 mAP 0.467921 +2025-05-17 09:48:16,336 - Epoch: [312][ 100/ 155] Loss 2.777834 mAP 0.480333 +2025-05-17 09:48:31,750 - Epoch: [312][ 150/ 155] Loss 2.764956 mAP 0.488628 +2025-05-17 09:48:34,724 - Epoch: [312][ 155/ 155] Loss 2.761751 mAP 0.489179 +2025-05-17 09:48:34,776 - ==> mAP: 0.48918 Loss: 2.762 + +2025-05-17 09:48:34,785 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:48:34,785 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:48:34,873 - + +2025-05-17 09:48:34,874 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:48:54,974 - Epoch: [313][ 50/ 518] Overall Loss 2.530557 Objective Loss 2.530557 LR 0.000004 Time 0.401932 +2025-05-17 09:49:14,154 - Epoch: [313][ 100/ 518] Overall Loss 2.524135 Objective Loss 2.524135 LR 0.000004 Time 0.392759 +2025-05-17 09:49:33,161 - Epoch: [313][ 150/ 518] Overall Loss 2.514186 Objective Loss 2.514186 LR 0.000004 Time 0.388549 +2025-05-17 09:49:52,145 - Epoch: [313][ 200/ 518] Overall Loss 2.525242 Objective Loss 2.525242 LR 0.000004 Time 0.386321 +2025-05-17 09:50:11,126 - Epoch: [313][ 250/ 518] Overall Loss 2.523823 Objective Loss 2.523823 LR 0.000004 Time 0.384976 +2025-05-17 09:50:30,106 - Epoch: [313][ 300/ 518] Overall Loss 2.529955 Objective Loss 2.529955 LR 0.000004 Time 0.384076 +2025-05-17 09:50:49,127 - Epoch: [313][ 350/ 518] Overall Loss 2.533520 Objective 
Loss 2.533520 LR 0.000004 Time 0.383552 +2025-05-17 09:51:08,123 - Epoch: [313][ 400/ 518] Overall Loss 2.531124 Objective Loss 2.531124 LR 0.000004 Time 0.383094 +2025-05-17 09:51:27,272 - Epoch: [313][ 450/ 518] Overall Loss 2.532115 Objective Loss 2.532115 LR 0.000004 Time 0.383077 +2025-05-17 09:51:46,264 - Epoch: [313][ 500/ 518] Overall Loss 2.528718 Objective Loss 2.528718 LR 0.000004 Time 0.382752 +2025-05-17 09:51:52,976 - Epoch: [313][ 518/ 518] Overall Loss 2.526367 Objective Loss 2.526367 LR 0.000004 Time 0.382407 +2025-05-17 09:51:53,047 - --- validate (epoch=313)----------- +2025-05-17 09:51:53,048 - 4952 samples (32 per mini-batch) +2025-05-17 09:51:53,051 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:52:07,938 - Epoch: [313][ 50/ 155] Loss 2.761615 mAP 0.503904 +2025-05-17 09:52:22,971 - Epoch: [313][ 100/ 155] Loss 2.766645 mAP 0.499517 +2025-05-17 09:52:38,328 - Epoch: [313][ 150/ 155] Loss 2.761682 mAP 0.500178 +2025-05-17 09:52:41,363 - Epoch: [313][ 155/ 155] Loss 2.758011 mAP 0.499646 +2025-05-17 09:52:41,415 - ==> mAP: 0.49965 Loss: 2.758 + +2025-05-17 09:52:41,423 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:52:41,423 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:52:41,511 - + +2025-05-17 09:52:41,511 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:53:01,545 - Epoch: [314][ 50/ 518] Overall Loss 2.562568 Objective Loss 2.562568 LR 0.000004 Time 0.400621 +2025-05-17 09:53:20,702 - Epoch: [314][ 100/ 518] Overall Loss 2.505203 Objective Loss 2.505203 LR 0.000004 Time 0.391869 +2025-05-17 09:53:39,707 - Epoch: [314][ 150/ 518] Overall Loss 2.518968 Objective Loss 2.518968 LR 0.000004 Time 0.387937 +2025-05-17 09:53:58,697 - Epoch: [314][ 200/ 518] Overall Loss 2.520024 Objective Loss 2.520024 LR 0.000004 Time 0.385897 
+2025-05-17 09:54:17,682 - Epoch: [314][ 250/ 518] Overall Loss 2.527066 Objective Loss 2.527066 LR 0.000004 Time 0.384653 +2025-05-17 09:54:36,703 - Epoch: [314][ 300/ 518] Overall Loss 2.527110 Objective Loss 2.527110 LR 0.000004 Time 0.383945 +2025-05-17 09:54:55,744 - Epoch: [314][ 350/ 518] Overall Loss 2.522539 Objective Loss 2.522539 LR 0.000004 Time 0.383496 +2025-05-17 09:55:14,947 - Epoch: [314][ 400/ 518] Overall Loss 2.522819 Objective Loss 2.522819 LR 0.000004 Time 0.383562 +2025-05-17 09:55:33,968 - Epoch: [314][ 450/ 518] Overall Loss 2.521643 Objective Loss 2.521643 LR 0.000004 Time 0.383211 +2025-05-17 09:55:52,981 - Epoch: [314][ 500/ 518] Overall Loss 2.519846 Objective Loss 2.519846 LR 0.000004 Time 0.382914 +2025-05-17 09:55:59,675 - Epoch: [314][ 518/ 518] Overall Loss 2.523578 Objective Loss 2.523578 LR 0.000004 Time 0.382530 +2025-05-17 09:55:59,752 - --- validate (epoch=314)----------- +2025-05-17 09:55:59,753 - 4952 samples (32 per mini-batch) +2025-05-17 09:55:59,756 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 09:56:14,741 - Epoch: [314][ 50/ 155] Loss 2.778540 mAP 0.504687 +2025-05-17 09:56:30,018 - Epoch: [314][ 100/ 155] Loss 2.768659 mAP 0.488515 +2025-05-17 09:56:45,805 - Epoch: [314][ 150/ 155] Loss 2.767569 mAP 0.496240 +2025-05-17 09:56:48,766 - Epoch: [314][ 155/ 155] Loss 2.762591 mAP 0.497584 +2025-05-17 09:56:48,819 - ==> mAP: 0.49758 Loss: 2.763 + +2025-05-17 09:56:48,828 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 09:56:48,828 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 09:56:48,918 - + +2025-05-17 09:56:48,918 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 09:57:09,016 - Epoch: [315][ 50/ 518] Overall Loss 2.509600 Objective Loss 2.509600 LR 0.000004 Time 0.401906 +2025-05-17 09:57:28,193 - Epoch: [315][ 
100/ 518] Overall Loss 2.509600 Objective Loss 2.509600 LR 0.000004 Time 0.392706 +2025-05-17 09:57:47,170 - Epoch: [315][ 150/ 518] Overall Loss 2.496329 Objective Loss 2.496329 LR 0.000004 Time 0.388308 +2025-05-17 09:58:06,187 - Epoch: [315][ 200/ 518] Overall Loss 2.502545 Objective Loss 2.502545 LR 0.000004 Time 0.386313 +2025-05-17 09:58:25,205 - Epoch: [315][ 250/ 518] Overall Loss 2.500116 Objective Loss 2.500116 LR 0.000004 Time 0.385115 +2025-05-17 09:58:44,238 - Epoch: [315][ 300/ 518] Overall Loss 2.502186 Objective Loss 2.502186 LR 0.000004 Time 0.384371 +2025-05-17 09:59:03,451 - Epoch: [315][ 350/ 518] Overall Loss 2.506870 Objective Loss 2.506870 LR 0.000004 Time 0.384353 +2025-05-17 09:59:22,515 - Epoch: [315][ 400/ 518] Overall Loss 2.511491 Objective Loss 2.511491 LR 0.000004 Time 0.383966 +2025-05-17 09:59:41,534 - Epoch: [315][ 450/ 518] Overall Loss 2.506843 Objective Loss 2.506843 LR 0.000004 Time 0.383567 +2025-05-17 10:00:00,585 - Epoch: [315][ 500/ 518] Overall Loss 2.513304 Objective Loss 2.513304 LR 0.000004 Time 0.383310 +2025-05-17 10:00:07,275 - Epoch: [315][ 518/ 518] Overall Loss 2.516954 Objective Loss 2.516954 LR 0.000004 Time 0.382904 +2025-05-17 10:00:07,350 - --- validate (epoch=315)----------- +2025-05-17 10:00:07,351 - 4952 samples (32 per mini-batch) +2025-05-17 10:00:07,354 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:00:22,336 - Epoch: [315][ 50/ 155] Loss 2.771138 mAP 0.507480 +2025-05-17 10:00:37,632 - Epoch: [315][ 100/ 155] Loss 2.746242 mAP 0.505296 +2025-05-17 10:00:53,379 - Epoch: [315][ 150/ 155] Loss 2.747690 mAP 0.496103 +2025-05-17 10:00:56,345 - Epoch: [315][ 155/ 155] Loss 2.756689 mAP 0.494406 +2025-05-17 10:00:56,399 - ==> mAP: 0.49441 Loss: 2.757 + +2025-05-17 10:00:56,408 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 10:00:56,408 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:00:56,498 - + +2025-05-17 10:00:56,498 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:01:16,788 - Epoch: [316][ 50/ 518] Overall Loss 2.534405 Objective Loss 2.534405 LR 0.000004 Time 0.405738 +2025-05-17 10:01:35,813 - Epoch: [316][ 100/ 518] Overall Loss 2.522723 Objective Loss 2.522723 LR 0.000004 Time 0.393112 +2025-05-17 10:01:54,852 - Epoch: [316][ 150/ 518] Overall Loss 2.529516 Objective Loss 2.529516 LR 0.000004 Time 0.388991 +2025-05-17 10:02:13,862 - Epoch: [316][ 200/ 518] Overall Loss 2.524296 Objective Loss 2.524296 LR 0.000004 Time 0.386788 +2025-05-17 10:02:32,851 - Epoch: [316][ 250/ 518] Overall Loss 2.512863 Objective Loss 2.512863 LR 0.000004 Time 0.385382 +2025-05-17 10:02:52,000 - Epoch: [316][ 300/ 518] Overall Loss 2.513514 Objective Loss 2.513514 LR 0.000004 Time 0.384977 +2025-05-17 10:03:11,003 - Epoch: [316][ 350/ 518] Overall Loss 2.514937 Objective Loss 2.514937 LR 0.000004 Time 0.384271 +2025-05-17 10:03:29,993 - Epoch: [316][ 400/ 518] Overall Loss 2.518042 Objective Loss 2.518042 LR 0.000004 Time 0.383710 +2025-05-17 10:03:48,984 - Epoch: [316][ 450/ 518] Overall Loss 2.520015 Objective Loss 2.520015 LR 0.000004 Time 0.383275 +2025-05-17 10:04:07,987 - Epoch: [316][ 500/ 518] Overall Loss 2.513930 Objective Loss 2.513930 LR 0.000004 Time 0.382951 +2025-05-17 10:04:14,673 - Epoch: [316][ 518/ 518] Overall Loss 2.517331 Objective Loss 2.517331 LR 0.000004 Time 0.382550 +2025-05-17 10:04:14,741 - --- validate (epoch=316)----------- +2025-05-17 10:04:14,742 - 4952 samples (32 per mini-batch) +2025-05-17 10:04:14,745 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:04:29,859 - Epoch: [316][ 50/ 155] Loss 2.747395 mAP 0.503063 +2025-05-17 10:04:44,852 - Epoch: [316][ 100/ 155] Loss 2.769680 mAP 0.497248 +2025-05-17 10:05:00,551 - Epoch: 
[316][ 150/ 155] Loss 2.759120 mAP 0.493902 +2025-05-17 10:05:03,671 - Epoch: [316][ 155/ 155] Loss 2.759720 mAP 0.495410 +2025-05-17 10:05:03,724 - ==> mAP: 0.49541 Loss: 2.760 + +2025-05-17 10:05:03,733 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 10:05:03,733 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:05:03,823 - + +2025-05-17 10:05:03,823 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:05:23,857 - Epoch: [317][ 50/ 518] Overall Loss 2.491965 Objective Loss 2.491965 LR 0.000004 Time 0.400590 +2025-05-17 10:05:42,802 - Epoch: [317][ 100/ 518] Overall Loss 2.520989 Objective Loss 2.520989 LR 0.000004 Time 0.389733 +2025-05-17 10:06:01,775 - Epoch: [317][ 150/ 518] Overall Loss 2.528627 Objective Loss 2.528627 LR 0.000004 Time 0.386305 +2025-05-17 10:06:20,755 - Epoch: [317][ 200/ 518] Overall Loss 2.520826 Objective Loss 2.520826 LR 0.000004 Time 0.384619 +2025-05-17 10:06:39,742 - Epoch: [317][ 250/ 518] Overall Loss 2.519387 Objective Loss 2.519387 LR 0.000004 Time 0.383638 +2025-05-17 10:06:58,930 - Epoch: [317][ 300/ 518] Overall Loss 2.524586 Objective Loss 2.524586 LR 0.000004 Time 0.383656 +2025-05-17 10:07:17,920 - Epoch: [317][ 350/ 518] Overall Loss 2.523456 Objective Loss 2.523456 LR 0.000004 Time 0.383102 +2025-05-17 10:07:36,910 - Epoch: [317][ 400/ 518] Overall Loss 2.524183 Objective Loss 2.524183 LR 0.000004 Time 0.382684 +2025-05-17 10:07:55,907 - Epoch: [317][ 450/ 518] Overall Loss 2.522768 Objective Loss 2.522768 LR 0.000004 Time 0.382377 +2025-05-17 10:08:14,932 - Epoch: [317][ 500/ 518] Overall Loss 2.529699 Objective Loss 2.529699 LR 0.000004 Time 0.382188 +2025-05-17 10:08:21,610 - Epoch: [317][ 518/ 518] Overall Loss 2.527242 Objective Loss 2.527242 LR 0.000004 Time 0.381799 +2025-05-17 10:08:21,689 - --- validate (epoch=317)----------- +2025-05-17 10:08:21,690 - 4952 samples (32 per mini-batch) 
+2025-05-17 10:08:21,693 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:08:36,620 - Epoch: [317][ 50/ 155] Loss 2.761379 mAP 0.498400 +2025-05-17 10:08:51,402 - Epoch: [317][ 100/ 155] Loss 2.775023 mAP 0.490538 +2025-05-17 10:09:06,928 - Epoch: [317][ 150/ 155] Loss 2.768113 mAP 0.494302 +2025-05-17 10:09:09,964 - Epoch: [317][ 155/ 155] Loss 2.770003 mAP 0.494407 +2025-05-17 10:09:10,017 - ==> mAP: 0.49441 Loss: 2.770 + +2025-05-17 10:09:10,026 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 10:09:10,026 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:09:10,115 - + +2025-05-17 10:09:10,115 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:09:30,111 - Epoch: [318][ 50/ 518] Overall Loss 2.524778 Objective Loss 2.524778 LR 0.000004 Time 0.399847 +2025-05-17 10:09:49,066 - Epoch: [318][ 100/ 518] Overall Loss 2.526572 Objective Loss 2.526572 LR 0.000004 Time 0.389463 +2025-05-17 10:10:08,049 - Epoch: [318][ 150/ 518] Overall Loss 2.512233 Objective Loss 2.512233 LR 0.000004 Time 0.386186 +2025-05-17 10:10:27,017 - Epoch: [318][ 200/ 518] Overall Loss 2.511193 Objective Loss 2.511193 LR 0.000004 Time 0.384471 +2025-05-17 10:10:46,020 - Epoch: [318][ 250/ 518] Overall Loss 2.516559 Objective Loss 2.516559 LR 0.000004 Time 0.383587 +2025-05-17 10:11:04,998 - Epoch: [318][ 300/ 518] Overall Loss 2.516470 Objective Loss 2.516470 LR 0.000004 Time 0.382909 +2025-05-17 10:11:24,206 - Epoch: [318][ 350/ 518] Overall Loss 2.517612 Objective Loss 2.517612 LR 0.000004 Time 0.383085 +2025-05-17 10:11:43,247 - Epoch: [318][ 400/ 518] Overall Loss 2.523191 Objective Loss 2.523191 LR 0.000004 Time 0.382800 +2025-05-17 10:12:02,285 - Epoch: [318][ 450/ 518] Overall Loss 2.519814 Objective Loss 2.519814 LR 0.000004 Time 0.382572 +2025-05-17 10:12:21,324 - Epoch: [318][ 
500/ 518] Overall Loss 2.515469 Objective Loss 2.515469 LR 0.000004 Time 0.382390 +2025-05-17 10:12:28,044 - Epoch: [318][ 518/ 518] Overall Loss 2.514155 Objective Loss 2.514155 LR 0.000004 Time 0.382075 +2025-05-17 10:12:28,121 - --- validate (epoch=318)----------- +2025-05-17 10:12:28,122 - 4952 samples (32 per mini-batch) +2025-05-17 10:12:28,125 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:12:43,058 - Epoch: [318][ 50/ 155] Loss 2.753842 mAP 0.494363 +2025-05-17 10:12:58,128 - Epoch: [318][ 100/ 155] Loss 2.765376 mAP 0.487550 +2025-05-17 10:13:13,651 - Epoch: [318][ 150/ 155] Loss 2.766261 mAP 0.492656 +2025-05-17 10:13:16,546 - Epoch: [318][ 155/ 155] Loss 2.763639 mAP 0.493874 +2025-05-17 10:13:16,599 - ==> mAP: 0.49387 Loss: 2.764 + +2025-05-17 10:13:16,608 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 10:13:16,608 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:13:16,696 - + +2025-05-17 10:13:16,696 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:13:36,936 - Epoch: [319][ 50/ 518] Overall Loss 2.511542 Objective Loss 2.511542 LR 0.000004 Time 0.404732 +2025-05-17 10:13:55,911 - Epoch: [319][ 100/ 518] Overall Loss 2.529526 Objective Loss 2.529526 LR 0.000004 Time 0.392100 +2025-05-17 10:14:14,957 - Epoch: [319][ 150/ 518] Overall Loss 2.518959 Objective Loss 2.518959 LR 0.000004 Time 0.388374 +2025-05-17 10:14:33,947 - Epoch: [319][ 200/ 518] Overall Loss 2.519909 Objective Loss 2.519909 LR 0.000004 Time 0.386220 +2025-05-17 10:14:52,942 - Epoch: [319][ 250/ 518] Overall Loss 2.525173 Objective Loss 2.525173 LR 0.000004 Time 0.384953 +2025-05-17 10:15:11,925 - Epoch: [319][ 300/ 518] Overall Loss 2.520082 Objective Loss 2.520082 LR 0.000004 Time 0.384065 +2025-05-17 10:15:30,929 - Epoch: [319][ 350/ 518] Overall Loss 2.518579 Objective 
Loss 2.518579 LR 0.000004 Time 0.383494 +2025-05-17 10:15:49,966 - Epoch: [319][ 400/ 518] Overall Loss 2.527011 Objective Loss 2.527011 LR 0.000004 Time 0.383145 +2025-05-17 10:16:09,161 - Epoch: [319][ 450/ 518] Overall Loss 2.525180 Objective Loss 2.525180 LR 0.000004 Time 0.383228 +2025-05-17 10:16:28,177 - Epoch: [319][ 500/ 518] Overall Loss 2.525058 Objective Loss 2.525058 LR 0.000004 Time 0.382934 +2025-05-17 10:16:34,892 - Epoch: [319][ 518/ 518] Overall Loss 2.524111 Objective Loss 2.524111 LR 0.000004 Time 0.382589 +2025-05-17 10:16:34,967 - --- validate (epoch=319)----------- +2025-05-17 10:16:34,968 - 4952 samples (32 per mini-batch) +2025-05-17 10:16:34,971 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:16:50,036 - Epoch: [319][ 50/ 155] Loss 2.754135 mAP 0.486138 +2025-05-17 10:17:05,339 - Epoch: [319][ 100/ 155] Loss 2.761888 mAP 0.496723 +2025-05-17 10:17:21,060 - Epoch: [319][ 150/ 155] Loss 2.757892 mAP 0.497987 +2025-05-17 10:17:24,187 - Epoch: [319][ 155/ 155] Loss 2.760456 mAP 0.497696 +2025-05-17 10:17:24,242 - ==> mAP: 0.49770 Loss: 2.760 + +2025-05-17 10:17:24,251 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 10:17:24,252 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:17:24,342 - + +2025-05-17 10:17:24,342 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:17:44,537 - Epoch: [320][ 50/ 518] Overall Loss 2.522960 Objective Loss 2.522960 LR 0.000004 Time 0.403824 +2025-05-17 10:18:03,762 - Epoch: [320][ 100/ 518] Overall Loss 2.516266 Objective Loss 2.516266 LR 0.000004 Time 0.394164 +2025-05-17 10:18:22,797 - Epoch: [320][ 150/ 518] Overall Loss 2.522254 Objective Loss 2.522254 LR 0.000004 Time 0.389671 +2025-05-17 10:18:41,840 - Epoch: [320][ 200/ 518] Overall Loss 2.523918 Objective Loss 2.523918 LR 0.000004 Time 0.387466 
+2025-05-17 10:19:00,874 - Epoch: [320][ 250/ 518] Overall Loss 2.520776 Objective Loss 2.520776 LR 0.000004 Time 0.386103 +2025-05-17 10:19:19,882 - Epoch: [320][ 300/ 518] Overall Loss 2.505828 Objective Loss 2.505828 LR 0.000004 Time 0.385108 +2025-05-17 10:19:38,934 - Epoch: [320][ 350/ 518] Overall Loss 2.507857 Objective Loss 2.507857 LR 0.000004 Time 0.384526 +2025-05-17 10:19:58,111 - Epoch: [320][ 400/ 518] Overall Loss 2.512314 Objective Loss 2.512314 LR 0.000004 Time 0.384400 +2025-05-17 10:20:17,113 - Epoch: [320][ 450/ 518] Overall Loss 2.515881 Objective Loss 2.515881 LR 0.000004 Time 0.383914 +2025-05-17 10:20:36,133 - Epoch: [320][ 500/ 518] Overall Loss 2.520770 Objective Loss 2.520770 LR 0.000004 Time 0.383560 +2025-05-17 10:20:42,826 - Epoch: [320][ 518/ 518] Overall Loss 2.521162 Objective Loss 2.521162 LR 0.000004 Time 0.383151 +2025-05-17 10:20:42,903 - --- validate (epoch=320)----------- +2025-05-17 10:20:42,904 - 4952 samples (32 per mini-batch) +2025-05-17 10:20:42,907 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:20:57,658 - Epoch: [320][ 50/ 155] Loss 2.758909 mAP 0.504099 +2025-05-17 10:21:12,626 - Epoch: [320][ 100/ 155] Loss 2.750000 mAP 0.500144 +2025-05-17 10:21:28,168 - Epoch: [320][ 150/ 155] Loss 2.752460 mAP 0.496253 +2025-05-17 10:21:31,083 - Epoch: [320][ 155/ 155] Loss 2.748384 mAP 0.499629 +2025-05-17 10:21:31,137 - ==> mAP: 0.49963 Loss: 2.748 + +2025-05-17 10:21:31,146 - ==> Best [mAP: 0.500620 vloss: 2.760340 Params: 2177088 on epoch: 294] +2025-05-17 10:21:31,146 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:21:31,234 - + +2025-05-17 10:21:31,234 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:21:51,413 - Epoch: [321][ 50/ 518] Overall Loss 2.532582 Objective Loss 2.532582 LR 0.000004 Time 0.403498 +2025-05-17 10:22:10,351 - Epoch: [321][ 
100/ 518] Overall Loss 2.539075 Objective Loss 2.539075 LR 0.000004 Time 0.391123 +2025-05-17 10:22:29,367 - Epoch: [321][ 150/ 518] Overall Loss 2.536791 Objective Loss 2.536791 LR 0.000004 Time 0.387515 +2025-05-17 10:22:48,373 - Epoch: [321][ 200/ 518] Overall Loss 2.531405 Objective Loss 2.531405 LR 0.000004 Time 0.385665 +2025-05-17 10:23:07,373 - Epoch: [321][ 250/ 518] Overall Loss 2.523987 Objective Loss 2.523987 LR 0.000004 Time 0.384528 +2025-05-17 10:23:26,547 - Epoch: [321][ 300/ 518] Overall Loss 2.526247 Objective Loss 2.526247 LR 0.000004 Time 0.384348 +2025-05-17 10:23:45,578 - Epoch: [321][ 350/ 518] Overall Loss 2.524368 Objective Loss 2.524368 LR 0.000004 Time 0.383814 +2025-05-17 10:24:04,599 - Epoch: [321][ 400/ 518] Overall Loss 2.527381 Objective Loss 2.527381 LR 0.000004 Time 0.383388 +2025-05-17 10:24:23,563 - Epoch: [321][ 450/ 518] Overall Loss 2.527934 Objective Loss 2.527934 LR 0.000004 Time 0.382929 +2025-05-17 10:24:42,544 - Epoch: [321][ 500/ 518] Overall Loss 2.523135 Objective Loss 2.523135 LR 0.000004 Time 0.382595 +2025-05-17 10:24:49,221 - Epoch: [321][ 518/ 518] Overall Loss 2.520325 Objective Loss 2.520325 LR 0.000004 Time 0.382189 +2025-05-17 10:24:49,294 - --- validate (epoch=321)----------- +2025-05-17 10:24:49,294 - 4952 samples (32 per mini-batch) +2025-05-17 10:24:49,298 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:25:04,211 - Epoch: [321][ 50/ 155] Loss 2.746008 mAP 0.508115 +2025-05-17 10:25:19,023 - Epoch: [321][ 100/ 155] Loss 2.749613 mAP 0.503146 +2025-05-17 10:25:34,466 - Epoch: [321][ 150/ 155] Loss 2.761138 mAP 0.501335 +2025-05-17 10:25:37,511 - Epoch: [321][ 155/ 155] Loss 2.758762 mAP 0.501039 +2025-05-17 10:25:37,572 - ==> mAP: 0.50104 Loss: 2.759 + +2025-05-17 10:25:37,580 - ==> Best [mAP: 0.501039 vloss: 2.758762 Params: 2177088 on epoch: 321] +2025-05-17 10:25:37,580 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:25:37,692 - + +2025-05-17 10:25:37,692 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:25:57,856 - Epoch: [322][ 50/ 518] Overall Loss 2.509656 Objective Loss 2.509656 LR 0.000004 Time 0.403196 +2025-05-17 10:26:16,862 - Epoch: [322][ 100/ 518] Overall Loss 2.492708 Objective Loss 2.492708 LR 0.000004 Time 0.391652 +2025-05-17 10:26:35,909 - Epoch: [322][ 150/ 518] Overall Loss 2.515475 Objective Loss 2.515475 LR 0.000004 Time 0.388078 +2025-05-17 10:26:54,890 - Epoch: [322][ 200/ 518] Overall Loss 2.521056 Objective Loss 2.521056 LR 0.000004 Time 0.385959 +2025-05-17 10:27:13,904 - Epoch: [322][ 250/ 518] Overall Loss 2.532685 Objective Loss 2.532685 LR 0.000004 Time 0.384817 +2025-05-17 10:27:32,901 - Epoch: [322][ 300/ 518] Overall Loss 2.533489 Objective Loss 2.533489 LR 0.000004 Time 0.384002 +2025-05-17 10:27:51,913 - Epoch: [322][ 350/ 518] Overall Loss 2.529401 Objective Loss 2.529401 LR 0.000004 Time 0.383461 +2025-05-17 10:28:11,115 - Epoch: [322][ 400/ 518] Overall Loss 2.529513 Objective Loss 2.529513 LR 0.000004 Time 0.383531 +2025-05-17 10:28:30,187 - Epoch: [322][ 450/ 518] Overall Loss 2.525067 Objective Loss 2.525067 LR 0.000004 Time 0.383298 +2025-05-17 10:28:49,229 - Epoch: [322][ 500/ 518] Overall Loss 2.525146 Objective Loss 2.525146 LR 0.000004 Time 0.383050 +2025-05-17 10:28:55,955 - Epoch: [322][ 518/ 518] Overall Loss 2.526282 Objective Loss 2.526282 LR 0.000004 Time 0.382724 +2025-05-17 10:28:56,035 - --- validate (epoch=322)----------- +2025-05-17 10:28:56,036 - 4952 samples (32 per mini-batch) +2025-05-17 10:28:56,039 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:29:10,882 - Epoch: [322][ 50/ 155] Loss 2.714963 mAP 0.503370 +2025-05-17 10:29:25,979 - Epoch: [322][ 100/ 155] Loss 2.758727 mAP 0.492478 +2025-05-17 10:29:41,360 - Epoch: 
[322][ 150/ 155] Loss 2.768895 mAP 0.495822 +2025-05-17 10:29:44,395 - Epoch: [322][ 155/ 155] Loss 2.764943 mAP 0.497348 +2025-05-17 10:29:44,449 - ==> mAP: 0.49735 Loss: 2.765 + +2025-05-17 10:29:44,458 - ==> Best [mAP: 0.501039 vloss: 2.758762 Params: 2177088 on epoch: 321] +2025-05-17 10:29:44,458 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:29:44,546 - + +2025-05-17 10:29:44,546 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:30:04,717 - Epoch: [323][ 50/ 518] Overall Loss 2.555189 Objective Loss 2.555189 LR 0.000004 Time 0.403335 +2025-05-17 10:30:23,733 - Epoch: [323][ 100/ 518] Overall Loss 2.537005 Objective Loss 2.537005 LR 0.000004 Time 0.391825 +2025-05-17 10:30:42,763 - Epoch: [323][ 150/ 518] Overall Loss 2.535602 Objective Loss 2.535602 LR 0.000004 Time 0.388080 +2025-05-17 10:31:01,821 - Epoch: [323][ 200/ 518] Overall Loss 2.517798 Objective Loss 2.517798 LR 0.000004 Time 0.386344 +2025-05-17 10:31:20,885 - Epoch: [323][ 250/ 518] Overall Loss 2.515989 Objective Loss 2.515989 LR 0.000004 Time 0.385329 +2025-05-17 10:31:39,920 - Epoch: [323][ 300/ 518] Overall Loss 2.523072 Objective Loss 2.523072 LR 0.000004 Time 0.384556 +2025-05-17 10:31:58,939 - Epoch: [323][ 350/ 518] Overall Loss 2.509519 Objective Loss 2.509519 LR 0.000004 Time 0.383955 +2025-05-17 10:32:17,906 - Epoch: [323][ 400/ 518] Overall Loss 2.509920 Objective Loss 2.509920 LR 0.000004 Time 0.383376 +2025-05-17 10:32:37,070 - Epoch: [323][ 450/ 518] Overall Loss 2.508790 Objective Loss 2.508790 LR 0.000004 Time 0.383363 +2025-05-17 10:32:56,093 - Epoch: [323][ 500/ 518] Overall Loss 2.512957 Objective Loss 2.512957 LR 0.000004 Time 0.383070 +2025-05-17 10:33:02,784 - Epoch: [323][ 518/ 518] Overall Loss 2.514022 Objective Loss 2.514022 LR 0.000004 Time 0.382676 +2025-05-17 10:33:02,851 - --- validate (epoch=323)----------- +2025-05-17 10:33:02,852 - 4952 samples (32 per mini-batch) 
+2025-05-17 10:33:02,855 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:33:17,571 - Epoch: [323][ 50/ 155] Loss 2.701146 mAP 0.493557 +2025-05-17 10:33:32,626 - Epoch: [323][ 100/ 155] Loss 2.753016 mAP 0.498106 +2025-05-17 10:33:48,061 - Epoch: [323][ 150/ 155] Loss 2.757849 mAP 0.496090 +2025-05-17 10:33:51,117 - Epoch: [323][ 155/ 155] Loss 2.754174 mAP 0.496744 +2025-05-17 10:33:51,171 - ==> mAP: 0.49674 Loss: 2.754 + +2025-05-17 10:33:51,180 - ==> Best [mAP: 0.501039 vloss: 2.758762 Params: 2177088 on epoch: 321] +2025-05-17 10:33:51,180 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:33:51,267 - + +2025-05-17 10:33:51,268 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:34:11,373 - Epoch: [324][ 50/ 518] Overall Loss 2.511088 Objective Loss 2.511088 LR 0.000004 Time 0.402037 +2025-05-17 10:34:30,560 - Epoch: [324][ 100/ 518] Overall Loss 2.496601 Objective Loss 2.496601 LR 0.000004 Time 0.392872 +2025-05-17 10:34:49,555 - Epoch: [324][ 150/ 518] Overall Loss 2.525883 Objective Loss 2.525883 LR 0.000004 Time 0.388542 +2025-05-17 10:35:08,550 - Epoch: [324][ 200/ 518] Overall Loss 2.519852 Objective Loss 2.519852 LR 0.000004 Time 0.386372 +2025-05-17 10:35:27,534 - Epoch: [324][ 250/ 518] Overall Loss 2.516419 Objective Loss 2.516419 LR 0.000004 Time 0.385030 +2025-05-17 10:35:46,531 - Epoch: [324][ 300/ 518] Overall Loss 2.511434 Objective Loss 2.511434 LR 0.000004 Time 0.384179 +2025-05-17 10:36:05,540 - Epoch: [324][ 350/ 518] Overall Loss 2.508844 Objective Loss 2.508844 LR 0.000004 Time 0.383601 +2025-05-17 10:36:24,538 - Epoch: [324][ 400/ 518] Overall Loss 2.508926 Objective Loss 2.508926 LR 0.000004 Time 0.383145 +2025-05-17 10:36:43,514 - Epoch: [324][ 450/ 518] Overall Loss 2.507791 Objective Loss 2.507791 LR 0.000004 Time 0.382740 +2025-05-17 10:37:02,653 - Epoch: [324][ 
500/ 518] Overall Loss 2.505455 Objective Loss 2.505455 LR 0.000004 Time 0.382740 +2025-05-17 10:37:09,346 - Epoch: [324][ 518/ 518] Overall Loss 2.507349 Objective Loss 2.507349 LR 0.000004 Time 0.382361 +2025-05-17 10:37:09,423 - --- validate (epoch=324)----------- +2025-05-17 10:37:09,424 - 4952 samples (32 per mini-batch) +2025-05-17 10:37:09,427 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:37:24,337 - Epoch: [324][ 50/ 155] Loss 2.749962 mAP 0.498384 +2025-05-17 10:37:39,406 - Epoch: [324][ 100/ 155] Loss 2.763073 mAP 0.489211 +2025-05-17 10:37:55,237 - Epoch: [324][ 150/ 155] Loss 2.782122 mAP 0.491189 +2025-05-17 10:37:58,345 - Epoch: [324][ 155/ 155] Loss 2.777792 mAP 0.493808 +2025-05-17 10:37:58,403 - ==> mAP: 0.49381 Loss: 2.778 + +2025-05-17 10:37:58,413 - ==> Best [mAP: 0.501039 vloss: 2.758762 Params: 2177088 on epoch: 321] +2025-05-17 10:37:58,413 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:37:58,504 - + +2025-05-17 10:37:58,504 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:38:18,626 - Epoch: [325][ 50/ 518] Overall Loss 2.466816 Objective Loss 2.466816 LR 0.000004 Time 0.402371 +2025-05-17 10:38:37,755 - Epoch: [325][ 100/ 518] Overall Loss 2.506539 Objective Loss 2.506539 LR 0.000004 Time 0.392458 +2025-05-17 10:38:56,715 - Epoch: [325][ 150/ 518] Overall Loss 2.491839 Objective Loss 2.491839 LR 0.000004 Time 0.388034 +2025-05-17 10:39:15,741 - Epoch: [325][ 200/ 518] Overall Loss 2.504626 Objective Loss 2.504626 LR 0.000004 Time 0.386148 +2025-05-17 10:39:34,730 - Epoch: [325][ 250/ 518] Overall Loss 2.496357 Objective Loss 2.496357 LR 0.000004 Time 0.384874 +2025-05-17 10:39:53,719 - Epoch: [325][ 300/ 518] Overall Loss 2.497274 Objective Loss 2.497274 LR 0.000004 Time 0.384021 +2025-05-17 10:40:12,757 - Epoch: [325][ 350/ 518] Overall Loss 2.511238 Objective 
Loss 2.511238 LR 0.000004 Time 0.383550 +2025-05-17 10:40:31,960 - Epoch: [325][ 400/ 518] Overall Loss 2.514580 Objective Loss 2.514580 LR 0.000004 Time 0.383611 +2025-05-17 10:40:51,006 - Epoch: [325][ 450/ 518] Overall Loss 2.515685 Objective Loss 2.515685 LR 0.000004 Time 0.383312 +2025-05-17 10:41:10,023 - Epoch: [325][ 500/ 518] Overall Loss 2.521186 Objective Loss 2.521186 LR 0.000004 Time 0.383011 +2025-05-17 10:41:16,717 - Epoch: [325][ 518/ 518] Overall Loss 2.520067 Objective Loss 2.520067 LR 0.000004 Time 0.382623 +2025-05-17 10:41:16,786 - --- validate (epoch=325)----------- +2025-05-17 10:41:16,787 - 4952 samples (32 per mini-batch) +2025-05-17 10:41:16,790 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:41:31,583 - Epoch: [325][ 50/ 155] Loss 2.781458 mAP 0.498703 +2025-05-17 10:41:46,599 - Epoch: [325][ 100/ 155] Loss 2.784190 mAP 0.495187 +2025-05-17 10:42:02,077 - Epoch: [325][ 150/ 155] Loss 2.769782 mAP 0.490853 +2025-05-17 10:42:04,986 - Epoch: [325][ 155/ 155] Loss 2.766918 mAP 0.491193 +2025-05-17 10:42:05,040 - ==> mAP: 0.49119 Loss: 2.767 + +2025-05-17 10:42:05,049 - ==> Best [mAP: 0.501039 vloss: 2.758762 Params: 2177088 on epoch: 321] +2025-05-17 10:42:05,049 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:42:05,137 - + +2025-05-17 10:42:05,137 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:42:25,433 - Epoch: [326][ 50/ 518] Overall Loss 2.481403 Objective Loss 2.481403 LR 0.000004 Time 0.405855 +2025-05-17 10:42:44,389 - Epoch: [326][ 100/ 518] Overall Loss 2.475584 Objective Loss 2.475584 LR 0.000004 Time 0.392467 +2025-05-17 10:43:03,401 - Epoch: [326][ 150/ 518] Overall Loss 2.489358 Objective Loss 2.489358 LR 0.000004 Time 0.388387 +2025-05-17 10:43:22,370 - Epoch: [326][ 200/ 518] Overall Loss 2.494467 Objective Loss 2.494467 LR 0.000004 Time 0.386130 
+2025-05-17 10:43:41,370 - Epoch: [326][ 250/ 518] Overall Loss 2.502481 Objective Loss 2.502481 LR 0.000004 Time 0.384898 +2025-05-17 10:44:00,375 - Epoch: [326][ 300/ 518] Overall Loss 2.499167 Objective Loss 2.499167 LR 0.000004 Time 0.384095 +2025-05-17 10:44:19,370 - Epoch: [326][ 350/ 518] Overall Loss 2.494857 Objective Loss 2.494857 LR 0.000004 Time 0.383492 +2025-05-17 10:44:38,363 - Epoch: [326][ 400/ 518] Overall Loss 2.500203 Objective Loss 2.500203 LR 0.000004 Time 0.383036 +2025-05-17 10:44:57,541 - Epoch: [326][ 450/ 518] Overall Loss 2.495133 Objective Loss 2.495133 LR 0.000004 Time 0.383091 +2025-05-17 10:45:16,545 - Epoch: [326][ 500/ 518] Overall Loss 2.498327 Objective Loss 2.498327 LR 0.000004 Time 0.382786 +2025-05-17 10:45:23,262 - Epoch: [326][ 518/ 518] Overall Loss 2.500195 Objective Loss 2.500195 LR 0.000004 Time 0.382452 +2025-05-17 10:45:23,340 - --- validate (epoch=326)----------- +2025-05-17 10:45:23,341 - 4952 samples (32 per mini-batch) +2025-05-17 10:45:23,344 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:45:38,316 - Epoch: [326][ 50/ 155] Loss 2.743931 mAP 0.490186 +2025-05-17 10:45:53,626 - Epoch: [326][ 100/ 155] Loss 2.749898 mAP 0.493234 +2025-05-17 10:46:09,219 - Epoch: [326][ 150/ 155] Loss 2.747984 mAP 0.495978 +2025-05-17 10:46:12,310 - Epoch: [326][ 155/ 155] Loss 2.750864 mAP 0.495589 +2025-05-17 10:46:12,364 - ==> mAP: 0.49559 Loss: 2.751 + +2025-05-17 10:46:12,374 - ==> Best [mAP: 0.501039 vloss: 2.758762 Params: 2177088 on epoch: 321] +2025-05-17 10:46:12,374 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:46:12,460 - + +2025-05-17 10:46:12,460 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:46:32,446 - Epoch: [327][ 50/ 518] Overall Loss 2.559216 Objective Loss 2.559216 LR 0.000004 Time 0.399660 +2025-05-17 10:46:51,581 - Epoch: [327][ 
100/ 518] Overall Loss 2.578225 Objective Loss 2.578225 LR 0.000004 Time 0.391167 +2025-05-17 10:47:10,574 - Epoch: [327][ 150/ 518] Overall Loss 2.563152 Objective Loss 2.563152 LR 0.000004 Time 0.387391 +2025-05-17 10:47:29,594 - Epoch: [327][ 200/ 518] Overall Loss 2.559313 Objective Loss 2.559313 LR 0.000004 Time 0.385639 +2025-05-17 10:47:48,652 - Epoch: [327][ 250/ 518] Overall Loss 2.552713 Objective Loss 2.552713 LR 0.000004 Time 0.384737 +2025-05-17 10:48:07,658 - Epoch: [327][ 300/ 518] Overall Loss 2.539933 Objective Loss 2.539933 LR 0.000004 Time 0.383965 +2025-05-17 10:48:26,649 - Epoch: [327][ 350/ 518] Overall Loss 2.538609 Objective Loss 2.538609 LR 0.000004 Time 0.383369 +2025-05-17 10:48:45,870 - Epoch: [327][ 400/ 518] Overall Loss 2.535522 Objective Loss 2.535522 LR 0.000004 Time 0.383500 +2025-05-17 10:49:04,908 - Epoch: [327][ 450/ 518] Overall Loss 2.528839 Objective Loss 2.528839 LR 0.000004 Time 0.383193 +2025-05-17 10:49:23,922 - Epoch: [327][ 500/ 518] Overall Loss 2.527708 Objective Loss 2.527708 LR 0.000004 Time 0.382900 +2025-05-17 10:49:30,640 - Epoch: [327][ 518/ 518] Overall Loss 2.528734 Objective Loss 2.528734 LR 0.000004 Time 0.382563 +2025-05-17 10:49:30,719 - --- validate (epoch=327)----------- +2025-05-17 10:49:30,720 - 4952 samples (32 per mini-batch) +2025-05-17 10:49:30,724 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:49:45,672 - Epoch: [327][ 50/ 155] Loss 2.813324 mAP 0.488908 +2025-05-17 10:50:00,825 - Epoch: [327][ 100/ 155] Loss 2.769528 mAP 0.494017 +2025-05-17 10:50:16,623 - Epoch: [327][ 150/ 155] Loss 2.747624 mAP 0.500712 +2025-05-17 10:50:19,608 - Epoch: [327][ 155/ 155] Loss 2.748736 mAP 0.500763 +2025-05-17 10:50:19,663 - ==> mAP: 0.50076 Loss: 2.749 + +2025-05-17 10:50:19,672 - ==> Best [mAP: 0.501039 vloss: 2.758762 Params: 2177088 on epoch: 321] +2025-05-17 10:50:19,672 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 10:50:19,757 - + +2025-05-17 10:50:19,757 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 10:50:40,102 - Epoch: [328][ 50/ 518] Overall Loss 2.475895 Objective Loss 2.475895 LR 0.000004 Time 0.406826 +2025-05-17 10:50:59,118 - Epoch: [328][ 100/ 518] Overall Loss 2.491329 Objective Loss 2.491329 LR 0.000004 Time 0.393556 +2025-05-17 10:51:18,143 - Epoch: [328][ 150/ 518] Overall Loss 2.489906 Objective Loss 2.489906 LR 0.000004 Time 0.389197 +2025-05-17 10:51:37,153 - Epoch: [328][ 200/ 518] Overall Loss 2.486547 Objective Loss 2.486547 LR 0.000004 Time 0.386941 +2025-05-17 10:51:56,142 - Epoch: [328][ 250/ 518] Overall Loss 2.495872 Objective Loss 2.495872 LR 0.000004 Time 0.385506 +2025-05-17 10:52:15,150 - Epoch: [328][ 300/ 518] Overall Loss 2.506310 Objective Loss 2.506310 LR 0.000004 Time 0.384611 +2025-05-17 10:52:34,366 - Epoch: [328][ 350/ 518] Overall Loss 2.509878 Objective Loss 2.509878 LR 0.000004 Time 0.384566 +2025-05-17 10:52:53,396 - Epoch: [328][ 400/ 518] Overall Loss 2.514730 Objective Loss 2.514730 LR 0.000004 Time 0.384068 +2025-05-17 10:53:12,394 - Epoch: [328][ 450/ 518] Overall Loss 2.516154 Objective Loss 2.516154 LR 0.000004 Time 0.383608 +2025-05-17 10:53:31,378 - Epoch: [328][ 500/ 518] Overall Loss 2.519627 Objective Loss 2.519627 LR 0.000004 Time 0.383213 +2025-05-17 10:53:38,090 - Epoch: [328][ 518/ 518] Overall Loss 2.515793 Objective Loss 2.515793 LR 0.000004 Time 0.382853 +2025-05-17 10:53:38,169 - --- validate (epoch=328)----------- +2025-05-17 10:53:38,170 - 4952 samples (32 per mini-batch) +2025-05-17 10:53:38,174 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 10:53:53,079 - Epoch: [328][ 50/ 155] Loss 2.697822 mAP 0.523097 +2025-05-17 10:54:07,916 - Epoch: [328][ 100/ 155] Loss 2.732769 mAP 0.506526 +2025-05-17 10:54:23,524 - Epoch: 
[328][ 150/ 155] Loss 2.756926 mAP 0.498244
+2025-05-17 10:54:26,609 - Epoch: [328][ 155/ 155] Loss 2.753084 mAP 0.499046
+2025-05-17 10:54:26,670 - ==> mAP: 0.49905 Loss: 2.753
+
+2025-05-17 10:54:26,679 - ==> Best [mAP: 0.501039 vloss: 2.758762 Params: 2177088 on epoch: 321]
+2025-05-17 10:54:26,679 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 10:54:26,768 - 
+
+2025-05-17 10:54:26,768 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 10:54:46,858 - Epoch: [329][ 50/ 518] Overall Loss 2.479812 Objective Loss 2.479812 LR 0.000004 Time 0.401745
+2025-05-17 10:55:05,868 - Epoch: [329][ 100/ 518] Overall Loss 2.481295 Objective Loss 2.481295 LR 0.000004 Time 0.390960
+2025-05-17 10:55:24,873 - Epoch: [329][ 150/ 518] Overall Loss 2.476944 Objective Loss 2.476944 LR 0.000004 Time 0.387339
+2025-05-17 10:55:43,826 - Epoch: [329][ 200/ 518] Overall Loss 2.489713 Objective Loss 2.489713 LR 0.000004 Time 0.385259
+2025-05-17 10:56:02,813 - Epoch: [329][ 250/ 518] Overall Loss 2.492692 Objective Loss 2.492692 LR 0.000004 Time 0.384154
+2025-05-17 10:56:21,775 - Epoch: [329][ 300/ 518] Overall Loss 2.509911 Objective Loss 2.509911 LR 0.000004 Time 0.383330
+2025-05-17 10:56:40,734 - Epoch: [329][ 350/ 518] Overall Loss 2.511109 Objective Loss 2.511109 LR 0.000004 Time 0.382733
+2025-05-17 10:56:59,907 - Epoch: [329][ 400/ 518] Overall Loss 2.515320 Objective Loss 2.515320 LR 0.000004 Time 0.382820
+2025-05-17 10:57:18,894 - Epoch: [329][ 450/ 518] Overall Loss 2.518384 Objective Loss 2.518384 LR 0.000004 Time 0.382476
+2025-05-17 10:57:37,885 - Epoch: [329][ 500/ 518] Overall Loss 2.518252 Objective Loss 2.518252 LR 0.000004 Time 0.382208
+2025-05-17 10:57:44,578 - Epoch: [329][ 518/ 518] Overall Loss 2.517954 Objective Loss 2.517954 LR 0.000004 Time 0.381846
+2025-05-17 10:57:44,654 - --- validate (epoch=329)-----------
+2025-05-17 10:57:44,655 - 2000 samples (256 per mini-batch)
+2025-05-17 10:57:44,655 - 4952 samples (32 per mini-batch)
+2025-05-17 10:57:44,658 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 10:57:59,771 - Epoch: [329][ 50/ 155] Loss 2.729049 mAP 0.515520
+2025-05-17 10:58:15,148 - Epoch: [329][ 100/ 155] Loss 2.733148 mAP 0.509187
+2025-05-17 10:58:31,024 - Epoch: [329][ 150/ 155] Loss 2.743397 mAP 0.506684
+2025-05-17 10:58:34,037 - Epoch: [329][ 155/ 155] Loss 2.746593 mAP 0.506541
+2025-05-17 10:58:34,091 - ==> mAP: 0.50654 Loss: 2.747
+
+2025-05-17 10:58:34,100 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 10:58:34,100 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 10:58:34,215 - 
+
+2025-05-17 10:58:34,215 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 10:58:54,300 - Epoch: [330][ 50/ 518] Overall Loss 2.546983 Objective Loss 2.546983 LR 0.000004 Time 0.401618
+2025-05-17 10:59:13,288 - Epoch: [330][ 100/ 518] Overall Loss 2.518298 Objective Loss 2.518298 LR 0.000004 Time 0.390682
+2025-05-17 10:59:32,278 - Epoch: [330][ 150/ 518] Overall Loss 2.508259 Objective Loss 2.508259 LR 0.000004 Time 0.387041
+2025-05-17 10:59:51,269 - Epoch: [330][ 200/ 518] Overall Loss 2.502792 Objective Loss 2.502792 LR 0.000004 Time 0.385231
+2025-05-17 11:00:10,262 - Epoch: [330][ 250/ 518] Overall Loss 2.498549 Objective Loss 2.498549 LR 0.000004 Time 0.384153
+2025-05-17 11:00:29,253 - Epoch: [330][ 300/ 518] Overall Loss 2.498426 Objective Loss 2.498426 LR 0.000004 Time 0.383425
+2025-05-17 11:00:48,251 - Epoch: [330][ 350/ 518] Overall Loss 2.506038 Objective Loss 2.506038 LR 0.000004 Time 0.382929
+2025-05-17 11:01:07,255 - Epoch: [330][ 400/ 518] Overall Loss 2.501667 Objective Loss 2.501667 LR 0.000004 Time 0.382569
+2025-05-17 11:01:26,429 - Epoch: [330][ 450/ 518] Overall Loss 2.508987 Objective Loss 2.508987 LR 0.000004 Time 0.382669
+2025-05-17 11:01:45,442 - Epoch: [330][ 500/ 518] Overall Loss 2.513266 Objective Loss 2.513266 LR 0.000004 Time 0.382425
+2025-05-17 11:01:52,152 - Epoch: [330][ 518/ 518] Overall Loss 2.514656 Objective Loss 2.514656 LR 0.000004 Time 0.382088
+2025-05-17 11:01:52,232 - --- validate (epoch=330)-----------
+2025-05-17 11:01:52,233 - 4952 samples (32 per mini-batch)
+2025-05-17 11:01:52,236 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:02:06,978 - Epoch: [330][ 50/ 155] Loss 2.730953 mAP 0.489054
+2025-05-17 11:02:21,927 - Epoch: [330][ 100/ 155] Loss 2.746767 mAP 0.493175
+2025-05-17 11:02:37,178 - Epoch: [330][ 150/ 155] Loss 2.762268 mAP 0.492594
+2025-05-17 11:02:40,195 - Epoch: [330][ 155/ 155] Loss 2.762114 mAP 0.493478
+2025-05-17 11:02:40,247 - ==> mAP: 0.49348 Loss: 2.762
+
+2025-05-17 11:02:40,256 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:02:40,257 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:02:40,344 - 
+
+2025-05-17 11:02:40,344 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:03:00,287 - Epoch: [331][ 50/ 518] Overall Loss 2.539472 Objective Loss 2.539472 LR 0.000004 Time 0.398798
+2025-05-17 11:03:19,458 - Epoch: [331][ 100/ 518] Overall Loss 2.547453 Objective Loss 2.547453 LR 0.000004 Time 0.391092
+2025-05-17 11:03:38,451 - Epoch: [331][ 150/ 518] Overall Loss 2.530590 Objective Loss 2.530590 LR 0.000004 Time 0.387342
+2025-05-17 11:03:57,433 - Epoch: [331][ 200/ 518] Overall Loss 2.523962 Objective Loss 2.523962 LR 0.000004 Time 0.385409
+2025-05-17 11:04:16,415 - Epoch: [331][ 250/ 518] Overall Loss 2.509303 Objective Loss 2.509303 LR 0.000004 Time 0.384250
+2025-05-17 11:04:35,420 - Epoch: [331][ 300/ 518] Overall Loss 2.517188 Objective Loss 2.517188 LR 0.000004 Time 0.383557
+2025-05-17 11:04:54,447 - Epoch: [331][ 350/ 518] Overall Loss 2.516157 Objective Loss 2.516157 LR 0.000004 Time 0.383122
+2025-05-17 11:05:13,614 - Epoch: [331][ 400/ 518] Overall Loss 2.516439 Objective Loss 2.516439 LR 0.000004 Time 0.383147
+2025-05-17 11:05:32,617 - Epoch: [331][ 450/ 518] Overall Loss 2.521840 Objective Loss 2.521840 LR 0.000004 Time 0.382800
+2025-05-17 11:05:51,618 - Epoch: [331][ 500/ 518] Overall Loss 2.522346 Objective Loss 2.522346 LR 0.000004 Time 0.382519
+2025-05-17 11:05:58,328 - Epoch: [331][ 518/ 518] Overall Loss 2.518790 Objective Loss 2.518790 LR 0.000004 Time 0.382180
+2025-05-17 11:05:58,403 - --- validate (epoch=331)-----------
+2025-05-17 11:05:58,404 - 4952 samples (32 per mini-batch)
+2025-05-17 11:05:58,407 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:06:13,118 - Epoch: [331][ 50/ 155] Loss 2.741633 mAP 0.491608
+2025-05-17 11:06:28,033 - Epoch: [331][ 100/ 155] Loss 2.763530 mAP 0.487648
+2025-05-17 11:06:43,448 - Epoch: [331][ 150/ 155] Loss 2.771535 mAP 0.493573
+2025-05-17 11:06:46,320 - Epoch: [331][ 155/ 155] Loss 2.774567 mAP 0.492073
+2025-05-17 11:06:46,378 - ==> mAP: 0.49207 Loss: 2.775
+
+2025-05-17 11:06:46,387 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:06:46,387 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:06:46,475 - 
+
+2025-05-17 11:06:46,475 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:07:06,838 - Epoch: [332][ 50/ 518] Overall Loss 2.484789 Objective Loss 2.484789 LR 0.000004 Time 0.407186
+2025-05-17 11:07:25,839 - Epoch: [332][ 100/ 518] Overall Loss 2.518321 Objective Loss 2.518321 LR 0.000004 Time 0.393595
+2025-05-17 11:07:44,880 - Epoch: [332][ 150/ 518] Overall Loss 2.511870 Objective Loss 2.511870 LR 0.000004 Time 0.389330
+2025-05-17 11:08:03,895 - Epoch: [332][ 200/ 518] Overall Loss 2.521753 Objective Loss 2.521753 LR 0.000004 Time 0.387068
+2025-05-17 11:08:22,922 - Epoch: [332][ 250/ 518] Overall Loss 2.519229 Objective Loss 2.519229 LR 0.000004 Time 0.385756
+2025-05-17 11:08:41,921 - Epoch: [332][ 300/ 518] Overall Loss 2.520867 Objective Loss 2.520867 LR 0.000004 Time 0.384790
+2025-05-17 11:09:01,122 - Epoch: [332][ 350/ 518] Overall Loss 2.511929 Objective Loss 2.511929 LR 0.000004 Time 0.384677
+2025-05-17 11:09:20,127 - Epoch: [332][ 400/ 518] Overall Loss 2.512261 Objective Loss 2.512261 LR 0.000004 Time 0.384100
+2025-05-17 11:09:39,151 - Epoch: [332][ 450/ 518] Overall Loss 2.521538 Objective Loss 2.521538 LR 0.000004 Time 0.383697
+2025-05-17 11:09:58,173 - Epoch: [332][ 500/ 518] Overall Loss 2.514790 Objective Loss 2.514790 LR 0.000004 Time 0.383368
+2025-05-17 11:10:04,887 - Epoch: [332][ 518/ 518] Overall Loss 2.515706 Objective Loss 2.515706 LR 0.000004 Time 0.383007
+2025-05-17 11:10:04,967 - --- validate (epoch=332)-----------
+2025-05-17 11:10:04,968 - 4952 samples (32 per mini-batch)
+2025-05-17 11:10:04,971 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:10:20,142 - Epoch: [332][ 50/ 155] Loss 2.754394 mAP 0.496595
+2025-05-17 11:10:35,239 - Epoch: [332][ 100/ 155] Loss 2.758859 mAP 0.504224
+2025-05-17 11:10:50,986 - Epoch: [332][ 150/ 155] Loss 2.753205 mAP 0.503741
+2025-05-17 11:10:54,142 - Epoch: [332][ 155/ 155] Loss 2.748099 mAP 0.503058
+2025-05-17 11:10:54,195 - ==> mAP: 0.50306 Loss: 2.748
+
+2025-05-17 11:10:54,204 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:10:54,204 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:10:54,290 - 
+
+2025-05-17 11:10:54,290 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:11:14,461 - Epoch: [333][ 50/ 518] Overall Loss 2.552427 Objective Loss 2.552427 LR 0.000004 Time 0.403335
+2025-05-17 11:11:33,447 - Epoch: [333][ 100/ 518] Overall Loss 2.506206 Objective Loss 2.506206 LR 0.000004 Time 0.391520
+2025-05-17 11:11:52,429 - Epoch: [333][ 150/ 518] Overall Loss 2.493376 Objective Loss 2.493376 LR 0.000004 Time 0.387553
+2025-05-17 11:12:11,439 - Epoch: [333][ 200/ 518] Overall Loss 2.513698 Objective Loss 2.513698 LR 0.000004 Time 0.385709
+2025-05-17 11:12:30,470 - Epoch: [333][ 250/ 518] Overall Loss 2.522474 Objective Loss 2.522474 LR 0.000004 Time 0.384691
+2025-05-17 11:12:49,679 - Epoch: [333][ 300/ 518] Overall Loss 2.511680 Objective Loss 2.511680 LR 0.000004 Time 0.384603
+2025-05-17 11:13:08,697 - Epoch: [333][ 350/ 518] Overall Loss 2.510781 Objective Loss 2.510781 LR 0.000004 Time 0.383992
+2025-05-17 11:13:27,698 - Epoch: [333][ 400/ 518] Overall Loss 2.511521 Objective Loss 2.511521 LR 0.000004 Time 0.383495
+2025-05-17 11:13:46,671 - Epoch: [333][ 450/ 518] Overall Loss 2.517246 Objective Loss 2.517246 LR 0.000004 Time 0.383044
+2025-05-17 11:14:05,690 - Epoch: [333][ 500/ 518] Overall Loss 2.519973 Objective Loss 2.519973 LR 0.000004 Time 0.382775
+2025-05-17 11:14:12,388 - Epoch: [333][ 518/ 518] Overall Loss 2.519784 Objective Loss 2.519784 LR 0.000004 Time 0.382404
+2025-05-17 11:14:12,464 - --- validate (epoch=333)-----------
+2025-05-17 11:14:12,465 - 4952 samples (32 per mini-batch)
+2025-05-17 11:14:12,469 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:14:27,520 - Epoch: [333][ 50/ 155] Loss 2.752773 mAP 0.503801
+2025-05-17 11:14:42,568 - Epoch: [333][ 100/ 155] Loss 2.758217 mAP 0.494159
+2025-05-17 11:14:58,369 - Epoch: [333][ 150/ 155] Loss 2.753171 mAP 0.490203
+2025-05-17 11:15:01,495 - Epoch: [333][ 155/ 155] Loss 2.753920 mAP 0.491777
+2025-05-17 11:15:01,549 - ==> mAP: 0.49178 Loss: 2.754
+
+2025-05-17 11:15:01,559 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:15:01,559 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:15:01,649 - 
+
+2025-05-17 11:15:01,649 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:15:21,741 - Epoch: [334][ 50/ 518] Overall Loss 2.491640 Objective Loss 2.491640 LR 0.000004 Time 0.401766
+2025-05-17 11:15:40,694 - Epoch: [334][ 100/ 518] Overall Loss 2.466871 Objective Loss 2.466871 LR 0.000004 Time 0.390400
+2025-05-17 11:15:59,677 - Epoch: [334][ 150/ 518] Overall Loss 2.464378 Objective Loss 2.464378 LR 0.000004 Time 0.386813
+2025-05-17 11:16:18,655 - Epoch: [334][ 200/ 518] Overall Loss 2.487508 Objective Loss 2.487508 LR 0.000004 Time 0.384990
+2025-05-17 11:16:37,672 - Epoch: [334][ 250/ 518] Overall Loss 2.499541 Objective Loss 2.499541 LR 0.000004 Time 0.384061
+2025-05-17 11:16:56,876 - Epoch: [334][ 300/ 518] Overall Loss 2.509414 Objective Loss 2.509414 LR 0.000004 Time 0.384058
+2025-05-17 11:17:15,887 - Epoch: [334][ 350/ 518] Overall Loss 2.513003 Objective Loss 2.513003 LR 0.000004 Time 0.383506
+2025-05-17 11:17:34,888 - Epoch: [334][ 400/ 518] Overall Loss 2.506843 Objective Loss 2.506843 LR 0.000004 Time 0.383068
+2025-05-17 11:17:53,890 - Epoch: [334][ 450/ 518] Overall Loss 2.504857 Objective Loss 2.504857 LR 0.000004 Time 0.382728
+2025-05-17 11:18:12,903 - Epoch: [334][ 500/ 518] Overall Loss 2.505739 Objective Loss 2.505739 LR 0.000004 Time 0.382479
+2025-05-17 11:18:19,622 - Epoch: [334][ 518/ 518] Overall Loss 2.503984 Objective Loss 2.503984 LR 0.000004 Time 0.382159
+2025-05-17 11:18:19,700 - --- validate (epoch=334)-----------
+2025-05-17 11:18:19,701 - 4952 samples (32 per mini-batch)
+2025-05-17 11:18:19,704 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:18:34,645 - Epoch: [334][ 50/ 155] Loss 2.799150 mAP 0.491962
+2025-05-17 11:18:49,534 - Epoch: [334][ 100/ 155] Loss 2.762543 mAP 0.496884
+2025-05-17 11:19:04,985 - Epoch: [334][ 150/ 155] Loss 2.755290 mAP 0.498029
+2025-05-17 11:19:07,995 - Epoch: [334][ 155/ 155] Loss 2.756386 mAP 0.497346
+2025-05-17 11:19:08,047 - ==> mAP: 0.49735 Loss: 2.756
+
+2025-05-17 11:19:08,057 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:19:08,057 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:19:08,144 - 
+
+2025-05-17 11:19:08,144 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:19:28,195 - Epoch: [335][ 50/ 518] Overall Loss 2.507414 Objective Loss 2.507414 LR 0.000004 Time 0.400957
+2025-05-17 11:19:47,122 - Epoch: [335][ 100/ 518] Overall Loss 2.494710 Objective Loss 2.494710 LR 0.000004 Time 0.389735
+2025-05-17 11:20:06,093 - Epoch: [335][ 150/ 518] Overall Loss 2.514794 Objective Loss 2.514794 LR 0.000004 Time 0.386286
+2025-05-17 11:20:25,101 - Epoch: [335][ 200/ 518] Overall Loss 2.516516 Objective Loss 2.516516 LR 0.000004 Time 0.384751
+2025-05-17 11:20:44,278 - Epoch: [335][ 250/ 518] Overall Loss 2.526910 Objective Loss 2.526910 LR 0.000004 Time 0.384504
+2025-05-17 11:21:03,298 - Epoch: [335][ 300/ 518] Overall Loss 2.529618 Objective Loss 2.529618 LR 0.000004 Time 0.383817
+2025-05-17 11:21:22,344 - Epoch: [335][ 350/ 518] Overall Loss 2.526710 Objective Loss 2.526710 LR 0.000004 Time 0.383400
+2025-05-17 11:21:41,379 - Epoch: [335][ 400/ 518] Overall Loss 2.523280 Objective Loss 2.523280 LR 0.000004 Time 0.383062
+2025-05-17 11:22:00,426 - Epoch: [335][ 450/ 518] Overall Loss 2.523972 Objective Loss 2.523972 LR 0.000004 Time 0.382825
+2025-05-17 11:22:19,459 - Epoch: [335][ 500/ 518] Overall Loss 2.520890 Objective Loss 2.520890 LR 0.000004 Time 0.382604
+2025-05-17 11:22:26,136 - Epoch: [335][ 518/ 518] Overall Loss 2.520699 Objective Loss 2.520699 LR 0.000004 Time 0.382199
+2025-05-17 11:22:26,213 - --- validate (epoch=335)-----------
+2025-05-17 11:22:26,214 - 4952 samples (32 per mini-batch)
+2025-05-17 11:22:26,217 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:22:41,253 - Epoch: [335][ 50/ 155] Loss 2.778299 mAP 0.485653
+2025-05-17 11:22:56,299 - Epoch: [335][ 100/ 155] Loss 2.763493 mAP 0.496181
+2025-05-17 11:23:11,906 - Epoch: [335][ 150/ 155] Loss 2.757031 mAP 0.502551
+2025-05-17 11:23:14,920 - Epoch: [335][ 155/ 155] Loss 2.757859 mAP 0.502233
+2025-05-17 11:23:14,974 - ==> mAP: 0.50223 Loss: 2.758
+
+2025-05-17 11:23:14,983 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:23:14,983 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:23:15,071 - 
+
+2025-05-17 11:23:15,071 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:23:35,033 - Epoch: [336][ 50/ 518] Overall Loss 2.454452 Objective Loss 2.454452 LR 0.000004 Time 0.399167
+2025-05-17 11:23:54,042 - Epoch: [336][ 100/ 518] Overall Loss 2.483215 Objective Loss 2.483215 LR 0.000004 Time 0.389666
+2025-05-17 11:24:13,030 - Epoch: [336][ 150/ 518] Overall Loss 2.493290 Objective Loss 2.493290 LR 0.000004 Time 0.386357
+2025-05-17 11:24:32,244 - Epoch: [336][ 200/ 518] Overall Loss 2.502323 Objective Loss 2.502323 LR 0.000004 Time 0.385832
+2025-05-17 11:24:51,246 - Epoch: [336][ 250/ 518] Overall Loss 2.499714 Objective Loss 2.499714 LR 0.000004 Time 0.384669
+2025-05-17 11:25:10,243 - Epoch: [336][ 300/ 518] Overall Loss 2.499043 Objective Loss 2.499043 LR 0.000004 Time 0.383876
+2025-05-17 11:25:29,274 - Epoch: [336][ 350/ 518] Overall Loss 2.502862 Objective Loss 2.502862 LR 0.000004 Time 0.383407
+2025-05-17 11:25:48,274 - Epoch: [336][ 400/ 518] Overall Loss 2.502269 Objective Loss 2.502269 LR 0.000004 Time 0.382979
+2025-05-17 11:26:07,265 - Epoch: [336][ 450/ 518] Overall Loss 2.496833 Objective Loss 2.496833 LR 0.000004 Time 0.382624
+2025-05-17 11:26:26,302 - Epoch: [336][ 500/ 518] Overall Loss 2.497867 Objective Loss 2.497867 LR 0.000004 Time 0.382434
+2025-05-17 11:26:33,022 - Epoch: [336][ 518/ 518] Overall Loss 2.501338 Objective Loss 2.501338 LR 0.000004 Time 0.382117
+2025-05-17 11:26:33,094 - --- validate (epoch=336)-----------
+2025-05-17 11:26:33,095 - 4952 samples (32 per mini-batch)
+2025-05-17 11:26:33,098 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:26:48,266 - Epoch: [336][ 50/ 155] Loss 2.755659 mAP 0.510302
+2025-05-17 11:27:03,391 - Epoch: [336][ 100/ 155] Loss 2.763275 mAP 0.497297
+2025-05-17 11:27:19,295 - Epoch: [336][ 150/ 155] Loss 2.767624 mAP 0.497508
+2025-05-17 11:27:22,438 - Epoch: [336][ 155/ 155] Loss 2.769172 mAP 0.497166
+2025-05-17 11:27:22,503 - ==> mAP: 0.49717 Loss: 2.769
+
+2025-05-17 11:27:22,512 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:27:22,512 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:27:22,597 - 
+
+2025-05-17 11:27:22,597 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:27:42,660 - Epoch: [337][ 50/ 518] Overall Loss 2.570994 Objective Loss 2.570994 LR 0.000004 Time 0.401179
+2025-05-17 11:28:01,755 - Epoch: [337][ 100/ 518] Overall Loss 2.553650 Objective Loss 2.553650 LR 0.000004 Time 0.391535
+2025-05-17 11:28:20,831 - Epoch: [337][ 150/ 518] Overall Loss 2.529907 Objective Loss 2.529907 LR 0.000004 Time 0.388188
+2025-05-17 11:28:39,832 - Epoch: [337][ 200/ 518] Overall Loss 2.520203 Objective Loss 2.520203 LR 0.000004 Time 0.386144
+2025-05-17 11:28:58,783 - Epoch: [337][ 250/ 518] Overall Loss 2.516679 Objective Loss 2.516679 LR 0.000004 Time 0.384713
+2025-05-17 11:29:17,969 - Epoch: [337][ 300/ 518] Overall Loss 2.509031 Objective Loss 2.509031 LR 0.000004 Time 0.384545
+2025-05-17 11:29:36,988 - Epoch: [337][ 350/ 518] Overall Loss 2.507365 Objective Loss 2.507365 LR 0.000004 Time 0.383947
+2025-05-17 11:29:56,063 - Epoch: [337][ 400/ 518] Overall Loss 2.512811 Objective Loss 2.512811 LR 0.000004 Time 0.383638
+2025-05-17 11:30:15,142 - Epoch: [337][ 450/ 518] Overall Loss 2.511634 Objective Loss 2.511634 LR 0.000004 Time 0.383408
+2025-05-17 11:30:34,127 - Epoch: [337][ 500/ 518] Overall Loss 2.515371 Objective Loss 2.515371 LR 0.000004 Time 0.383035
+2025-05-17 11:30:40,844 - Epoch: [337][ 518/ 518] Overall Loss 2.516415 Objective Loss 2.516415 LR 0.000004 Time 0.382691
+2025-05-17 11:30:40,924 - --- validate (epoch=337)-----------
+2025-05-17 11:30:40,925 - 4952 samples (32 per mini-batch)
+2025-05-17 11:30:40,928 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:30:56,127 - Epoch: [337][ 50/ 155] Loss 2.748128 mAP 0.498715
+2025-05-17 11:31:11,167 - Epoch: [337][ 100/ 155] Loss 2.725468 mAP 0.499086
+2025-05-17 11:31:26,898 - Epoch: [337][ 150/ 155] Loss 2.742028 mAP 0.494091
+2025-05-17 11:31:30,014 - Epoch: [337][ 155/ 155] Loss 2.742428 mAP 0.497004
+2025-05-17 11:31:30,069 - ==> mAP: 0.49700 Loss: 2.742
+
+2025-05-17 11:31:30,079 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:31:30,079 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:31:30,168 - 
+
+2025-05-17 11:31:30,169 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:31:50,339 - Epoch: [338][ 50/ 518] Overall Loss 2.521954 Objective Loss 2.521954 LR 0.000004 Time 0.403335
+2025-05-17 11:32:09,290 - Epoch: [338][ 100/ 518] Overall Loss 2.522679 Objective Loss 2.522679 LR 0.000004 Time 0.391164
+2025-05-17 11:32:28,248 - Epoch: [338][ 150/ 518] Overall Loss 2.525296 Objective Loss 2.525296 LR 0.000004 Time 0.387155
+2025-05-17 11:32:47,247 - Epoch: [338][ 200/ 518] Overall Loss 2.526165 Objective Loss 2.526165 LR 0.000004 Time 0.385356
+2025-05-17 11:33:06,248 - Epoch: [338][ 250/ 518] Overall Loss 2.519576 Objective Loss 2.519576 LR 0.000004 Time 0.384288
+2025-05-17 11:33:25,418 - Epoch: [338][ 300/ 518] Overall Loss 2.514842 Objective Loss 2.514842 LR 0.000004 Time 0.384135
+2025-05-17 11:33:44,427 - Epoch: [338][ 350/ 518] Overall Loss 2.510156 Objective Loss 2.510156 LR 0.000004 Time 0.383566
+2025-05-17 11:34:03,421 - Epoch: [338][ 400/ 518] Overall Loss 2.507820 Objective Loss 2.507820 LR 0.000004 Time 0.383101
+2025-05-17 11:34:22,416 - Epoch: [338][ 450/ 518] Overall Loss 2.505052 Objective Loss 2.505052 LR 0.000004 Time 0.382744
+2025-05-17 11:34:41,405 - Epoch: [338][ 500/ 518] Overall Loss 2.508923 Objective Loss 2.508923 LR 0.000004 Time 0.382445
+2025-05-17 11:34:48,124 - Epoch: [338][ 518/ 518] Overall Loss 2.514546 Objective Loss 2.514546 LR 0.000004 Time 0.382125
+2025-05-17 11:34:48,199 - --- validate (epoch=338)-----------
+2025-05-17 11:34:48,200 - 4952 samples (32 per mini-batch)
+2025-05-17 11:34:48,204 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:35:03,187 - Epoch: [338][ 50/ 155] Loss 2.795545 mAP 0.489107
+2025-05-17 11:35:17,945 - Epoch: [338][ 100/ 155] Loss 2.767023 mAP 0.500721
+2025-05-17 11:35:33,478 - Epoch: [338][ 150/ 155] Loss 2.752220 mAP 0.500804
+2025-05-17 11:35:36,554 - Epoch: [338][ 155/ 155] Loss 2.746789 mAP 0.502771
+2025-05-17 11:35:36,609 - ==> mAP: 0.50277 Loss: 2.747
+
+2025-05-17 11:35:36,618 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:35:36,618 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:35:36,707 - 
+
+2025-05-17 11:35:36,707 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:35:56,697 - Epoch: [339][ 50/ 518] Overall Loss 2.544046 Objective Loss 2.544046 LR 0.000004 Time 0.399728
+2025-05-17 11:36:15,667 - Epoch: [339][ 100/ 518] Overall Loss 2.501589 Objective Loss 2.501589 LR 0.000004 Time 0.389547
+2025-05-17 11:36:34,687 - Epoch: [339][ 150/ 518] Overall Loss 2.495766 Objective Loss 2.495766 LR 0.000004 Time 0.386497
+2025-05-17 11:36:53,649 - Epoch: [339][ 200/ 518] Overall Loss 2.499897 Objective Loss 2.499897 LR 0.000004 Time 0.384675
+2025-05-17 11:37:12,634 - Epoch: [339][ 250/ 518] Overall Loss 2.513122 Objective Loss 2.513122 LR 0.000004 Time 0.383675
+2025-05-17 11:37:31,633 - Epoch: [339][ 300/ 518] Overall Loss 2.519033 Objective Loss 2.519033 LR 0.000004 Time 0.383055
+2025-05-17 11:37:50,637 - Epoch: [339][ 350/ 518] Overall Loss 2.524414 Objective Loss 2.524414 LR 0.000004 Time 0.382627
+2025-05-17 11:38:09,837 - Epoch: [339][ 400/ 518] Overall Loss 2.525674 Objective Loss 2.525674 LR 0.000004 Time 0.382796
+2025-05-17 11:38:28,873 - Epoch: [339][ 450/ 518] Overall Loss 2.529689 Objective Loss 2.529689 LR 0.000004 Time 0.382562
+2025-05-17 11:38:47,901 - Epoch: [339][ 500/ 518] Overall Loss 2.524077 Objective Loss 2.524077 LR 0.000004 Time 0.382361
+2025-05-17 11:38:54,636 - Epoch: [339][ 518/ 518] Overall Loss 2.519826 Objective Loss 2.519826 LR 0.000004 Time 0.382075
+2025-05-17 11:38:54,711 - --- validate (epoch=339)-----------
+2025-05-17 11:38:54,712 - 4952 samples (32 per mini-batch)
+2025-05-17 11:38:54,715 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:39:09,749 - Epoch: [339][ 50/ 155] Loss 2.754256 mAP 0.491745
+2025-05-17 11:39:25,024 - Epoch: [339][ 100/ 155] Loss 2.755957 mAP 0.496374
+2025-05-17 11:39:40,579 - Epoch: [339][ 150/ 155] Loss 2.747282 mAP 0.493789
+2025-05-17 11:39:43,692 - Epoch: [339][ 155/ 155] Loss 2.746445 mAP 0.494582
+2025-05-17 11:39:43,747 - ==> mAP: 0.49458 Loss: 2.746
+
+2025-05-17 11:39:43,757 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:39:43,757 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:39:43,847 - 
+
+2025-05-17 11:39:43,847 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:40:04,079 - Epoch: [340][ 50/ 518] Overall Loss 2.474166 Objective Loss 2.474166 LR 0.000004 Time 0.404562
+2025-05-17 11:40:23,110 - Epoch: [340][ 100/ 518] Overall Loss 2.469326 Objective Loss 2.469326 LR 0.000004 Time 0.392584
+2025-05-17 11:40:42,151 - Epoch: [340][ 150/ 518] Overall Loss 2.458258 Objective Loss 2.458258 LR 0.000004 Time 0.388655
+2025-05-17 11:41:01,112 - Epoch: [340][ 200/ 518] Overall Loss 2.474889 Objective Loss 2.474889 LR 0.000004 Time 0.386291
+2025-05-17 11:41:20,103 - Epoch: [340][ 250/ 518] Overall Loss 2.484545 Objective Loss 2.484545 LR 0.000004 Time 0.384993
+2025-05-17 11:41:39,084 - Epoch: [340][ 300/ 518] Overall Loss 2.490517 Objective Loss 2.490517 LR 0.000004 Time 0.384092
+2025-05-17 11:41:58,226 - Epoch: [340][ 350/ 518] Overall Loss 2.496098 Objective Loss 2.496098 LR 0.000004 Time 0.383910
+2025-05-17 11:42:17,204 - Epoch: [340][ 400/ 518] Overall Loss 2.498161 Objective Loss 2.498161 LR 0.000004 Time 0.383364
+2025-05-17 11:42:36,186 - Epoch: [340][ 450/ 518] Overall Loss 2.496648 Objective Loss 2.496648 LR 0.000004 Time 0.382948
+2025-05-17 11:42:55,187 - Epoch: [340][ 500/ 518] Overall Loss 2.505289 Objective Loss 2.505289 LR 0.000004 Time 0.382653
+2025-05-17 11:43:01,892 - Epoch: [340][ 518/ 518] Overall Loss 2.503660 Objective Loss 2.503660 LR 0.000004 Time 0.382298
+2025-05-17 11:43:01,970 - --- validate (epoch=340)-----------
+2025-05-17 11:43:01,970 - 4952 samples (32 per mini-batch)
+2025-05-17 11:43:01,973 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:43:16,824 - Epoch: [340][ 50/ 155] Loss 2.786442 mAP 0.507387
+2025-05-17 11:43:31,550 - Epoch: [340][ 100/ 155] Loss 2.749010 mAP 0.495861
+2025-05-17 11:43:47,141 - Epoch: [340][ 150/ 155] Loss 2.748295 mAP 0.493760
+2025-05-17 11:43:50,196 - Epoch: [340][ 155/ 155] Loss 2.750971 mAP 0.494739
+2025-05-17 11:43:50,250 - ==> mAP: 0.49474 Loss: 2.751
+
+2025-05-17 11:43:50,260 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:43:50,260 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:43:50,348 - 
+
+2025-05-17 11:43:50,348 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:44:10,549 - Epoch: [341][ 50/ 518] Overall Loss 2.490364 Objective Loss 2.490364 LR 0.000004 Time 0.403945
+2025-05-17 11:44:29,510 - Epoch: [341][ 100/ 518] Overall Loss 2.503026 Objective Loss 2.503026 LR 0.000004 Time 0.391565
+2025-05-17 11:44:48,491 - Epoch: [341][ 150/ 518] Overall Loss 2.519656 Objective Loss 2.519656 LR 0.000004 Time 0.387577
+2025-05-17 11:45:07,436 - Epoch: [341][ 200/ 518] Overall Loss 2.521351 Objective Loss 2.521351 LR 0.000004 Time 0.385398
+2025-05-17 11:45:26,411 - Epoch: [341][ 250/ 518] Overall Loss 2.524061 Objective Loss 2.524061 LR 0.000004 Time 0.384217
+2025-05-17 11:45:45,590 - Epoch: [341][ 300/ 518] Overall Loss 2.521786 Objective Loss 2.521786 LR 0.000004 Time 0.384104
+2025-05-17 11:46:04,595 - Epoch: [341][ 350/ 518] Overall Loss 2.520998 Objective Loss 2.520998 LR 0.000004 Time 0.383530
+2025-05-17 11:46:23,579 - Epoch: [341][ 400/ 518] Overall Loss 2.526244 Objective Loss 2.526244 LR 0.000004 Time 0.383046
+2025-05-17 11:46:42,630 - Epoch: [341][ 450/ 518] Overall Loss 2.520040 Objective Loss 2.520040 LR 0.000004 Time 0.382817
+2025-05-17 11:47:01,677 - Epoch: [341][ 500/ 518] Overall Loss 2.519565 Objective Loss 2.519565 LR 0.000004 Time 0.382629
+2025-05-17 11:47:08,370 - Epoch: [341][ 518/ 518] Overall Loss 2.523658 Objective Loss 2.523658 LR 0.000004 Time 0.382253
+2025-05-17 11:47:08,447 - --- validate (epoch=341)-----------
+2025-05-17 11:47:08,448 - 4952 samples (32 per mini-batch)
+2025-05-17 11:47:08,451 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:47:23,499 - Epoch: [341][ 50/ 155] Loss 2.692827 mAP 0.515444
+2025-05-17 11:47:38,318 - Epoch: [341][ 100/ 155] Loss 2.743345 mAP 0.501339
+2025-05-17 11:47:53,782 - Epoch: [341][ 150/ 155] Loss 2.753093 mAP 0.498018
+2025-05-17 11:47:56,845 - Epoch: [341][ 155/ 155] Loss 2.743764 mAP 0.499132
+2025-05-17 11:47:56,899 - ==> mAP: 0.49913 Loss: 2.744
+
+2025-05-17 11:47:56,908 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:47:56,908 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:47:56,997 - 
+
+2025-05-17 11:47:56,997 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:48:16,988 - Epoch: [342][ 50/ 518] Overall Loss 2.513694 Objective Loss 2.513694 LR 0.000004 Time 0.399743
+2025-05-17 11:48:35,967 - Epoch: [342][ 100/ 518] Overall Loss 2.525660 Objective Loss 2.525660 LR 0.000004 Time 0.389655
+2025-05-17 11:48:54,965 - Epoch: [342][ 150/ 518] Overall Loss 2.529067 Objective Loss 2.529067 LR 0.000004 Time 0.386417
+2025-05-17 11:49:13,956 - Epoch: [342][ 200/ 518] Overall Loss 2.533809 Objective Loss 2.533809 LR 0.000004 Time 0.384763
+2025-05-17 11:49:32,984 - Epoch: [342][ 250/ 518] Overall Loss 2.525960 Objective Loss 2.525960 LR 0.000004 Time 0.383919
+2025-05-17 11:49:52,168 - Epoch: [342][ 300/ 518] Overall Loss 2.520152 Objective Loss 2.520152 LR 0.000004 Time 0.383876
+2025-05-17 11:50:11,149 - Epoch: [342][ 350/ 518] Overall Loss 2.519837 Objective Loss 2.519837 LR 0.000004 Time 0.383264
+2025-05-17 11:50:30,114 - Epoch: [342][ 400/ 518] Overall Loss 2.514472 Objective Loss 2.514472 LR 0.000004 Time 0.382765
+2025-05-17 11:50:49,108 - Epoch: [342][ 450/ 518] Overall Loss 2.517899 Objective Loss 2.517899 LR 0.000004 Time 0.382442
+2025-05-17 11:51:08,100 - Epoch: [342][ 500/ 518] Overall Loss 2.512922 Objective Loss 2.512922 LR 0.000004 Time 0.382181
+2025-05-17 11:51:14,825 - Epoch: [342][ 518/ 518] Overall Loss 2.511782 Objective Loss 2.511782 LR 0.000004 Time 0.381881
+2025-05-17 11:51:14,900 - --- validate (epoch=342)-----------
+2025-05-17 11:51:14,901 - 4952 samples (32 per mini-batch)
+2025-05-17 11:51:14,904 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:51:29,863 - Epoch: [342][ 50/ 155] Loss 2.769534 mAP 0.516201
+2025-05-17 11:51:45,008 - Epoch: [342][ 100/ 155] Loss 2.775798 mAP 0.499671
+2025-05-17 11:52:00,573 - Epoch: [342][ 150/ 155] Loss 2.766413 mAP 0.498988
+2025-05-17 11:52:03,509 - Epoch: [342][ 155/ 155] Loss 2.764069 mAP 0.500309
+2025-05-17 11:52:03,562 - ==> mAP: 0.50031 Loss: 2.764
+
+2025-05-17 11:52:03,571 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:52:03,571 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:52:03,659 - 
+
+2025-05-17 11:52:03,660 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:52:23,967 - Epoch: [343][ 50/ 518] Overall Loss 2.505751 Objective Loss 2.505751 LR 0.000004 Time 0.406069
+2025-05-17 11:52:42,956 - Epoch: [343][ 100/ 518] Overall Loss 2.508286 Objective Loss 2.508286 LR 0.000004 Time 0.392913
+2025-05-17 11:53:01,958 - Epoch: [343][ 150/ 518] Overall Loss 2.524410 Objective Loss 2.524410 LR 0.000004 Time 0.388615
+2025-05-17 11:53:20,941 - Epoch: [343][ 200/ 518] Overall Loss 2.508046 Objective Loss 2.508046 LR 0.000004 Time 0.386367
+2025-05-17 11:53:39,928 - Epoch: [343][ 250/ 518] Overall Loss 2.498691 Objective Loss 2.498691 LR 0.000004 Time 0.385038
+2025-05-17 11:53:58,910 - Epoch: [343][ 300/ 518] Overall Loss 2.504608 Objective Loss 2.504608 LR 0.000004 Time 0.384133
+2025-05-17 11:54:17,922 - Epoch: [343][ 350/ 518] Overall Loss 2.503682 Objective Loss 2.503682 LR 0.000004 Time 0.383576
+2025-05-17 11:54:37,120 - Epoch: [343][ 400/ 518] Overall Loss 2.504623 Objective Loss 2.504623 LR 0.000004 Time 0.383622
+2025-05-17 11:54:56,176 - Epoch: [343][ 450/ 518] Overall Loss 2.505674 Objective Loss 2.505674 LR 0.000004 Time 0.383341
+2025-05-17 11:55:15,200 - Epoch: [343][ 500/ 518] Overall Loss 2.508946 Objective Loss 2.508946 LR 0.000004 Time 0.383055
+2025-05-17 11:55:21,930 - Epoch: [343][ 518/ 518] Overall Loss 2.510936 Objective Loss 2.510936 LR 0.000004 Time 0.382736
+2025-05-17 11:55:22,008 - --- validate (epoch=343)-----------
+2025-05-17 11:55:22,009 - 4952 samples (32 per mini-batch)
+2025-05-17 11:55:22,012 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:55:36,932 - Epoch: [343][ 50/ 155] Loss 2.793458 mAP 0.500766
+2025-05-17 11:55:52,093 - Epoch: [343][ 100/ 155] Loss 2.759152 mAP 0.505304
+2025-05-17 11:56:07,686 - Epoch: [343][ 150/ 155] Loss 2.748495 mAP 0.498234
+2025-05-17 11:56:10,558 - Epoch: [343][ 155/ 155] Loss 2.748955 mAP 0.497354
+2025-05-17 11:56:10,613 - ==> mAP: 0.49735 Loss: 2.749
+
+2025-05-17 11:56:10,622 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 11:56:10,622 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 11:56:10,709 - 
+
+2025-05-17 11:56:10,710 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 11:56:30,875 - Epoch: [344][ 50/ 518] Overall Loss 2.480953 Objective Loss 2.480953 LR 0.000004 Time 0.403240
+2025-05-17 11:56:50,070 - Epoch: [344][ 100/ 518] Overall Loss 2.502261 Objective Loss 2.502261 LR 0.000004 Time 0.393554
+2025-05-17 11:57:09,056 - Epoch: [344][ 150/ 518] Overall Loss 2.501178 Objective Loss 2.501178 LR 0.000004 Time 0.388937
+2025-05-17 11:57:28,035 - Epoch: [344][ 200/ 518] Overall Loss 2.505202 Objective Loss 2.505202 LR 0.000004 Time 0.386592
+2025-05-17 11:57:47,020 - Epoch: [344][ 250/ 518] Overall Loss 2.495366 Objective Loss 2.495366 LR 0.000004 Time 0.385208
+2025-05-17 11:58:06,025 - Epoch: [344][ 300/ 518] Overall Loss 2.484271 Objective Loss 2.484271 LR 0.000004 Time 0.384354
+2025-05-17 11:58:25,006 - Epoch: [344][ 350/ 518] Overall Loss 2.491723 Objective Loss 2.491723 LR 0.000004 Time 0.383674
+2025-05-17 11:58:43,998 - Epoch: [344][ 400/ 518] Overall Loss 2.498562 Objective Loss 2.498562 LR 0.000004 Time 0.383192
+2025-05-17 11:59:03,106 - Epoch: [344][ 450/ 518] Overall Loss 2.506954 Objective Loss 2.506954 LR 0.000004 Time 0.383073
+2025-05-17 11:59:22,085 - Epoch: [344][ 500/ 518] Overall Loss 2.505375 Objective Loss 2.505375 LR 0.000004 Time 0.382723
+2025-05-17 11:59:28,802 - Epoch: [344][ 518/ 518] Overall Loss 2.506501 Objective Loss 2.506501 LR 0.000004 Time 0.382389
+2025-05-17 11:59:28,881 - --- validate (epoch=344)-----------
+2025-05-17 11:59:28,882 - 4952 samples (32 per mini-batch)
+2025-05-17 11:59:28,885 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 11:59:43,727 - Epoch: [344][ 50/ 155] Loss 2.806984 mAP 0.502231
+2025-05-17 11:59:58,690 - Epoch: [344][ 100/ 155] Loss 2.789068 mAP 0.492240
+2025-05-17 12:00:13,983 - Epoch: [344][ 150/ 155] Loss 2.763330 mAP 0.495807
+2025-05-17 12:00:17,011 - Epoch: [344][ 155/ 155] Loss 2.764068 mAP 0.495689
+2025-05-17 12:00:17,074 - ==> mAP: 0.49569 Loss: 2.764
+
+2025-05-17 12:00:17,083 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 12:00:17,084 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 12:00:17,171 - 
+
+2025-05-17 12:00:17,171 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 12:00:37,130 - Epoch: [345][ 50/ 518] Overall Loss 2.529484 Objective Loss 2.529484 LR 0.000004 Time 0.399111
+2025-05-17 12:00:56,116 - Epoch: [345][ 100/ 518] Overall Loss 2.526553 Objective Loss 2.526553 LR 0.000004 Time 0.389404
+2025-05-17 12:01:15,366 - Epoch: [345][ 150/ 518] Overall Loss 2.523838 Objective Loss 2.523838 LR 0.000004 Time 0.387929
+2025-05-17 12:01:34,401 - Epoch: [345][ 200/ 518] Overall Loss 2.509796 Objective Loss 2.509796 LR 0.000004 Time 0.386114
+2025-05-17 12:01:53,470 - Epoch: [345][ 250/ 518] Overall Loss 2.506410 Objective Loss 2.506410 LR 0.000004 Time 0.385165
+2025-05-17 12:02:12,534 - Epoch: [345][ 300/ 518] Overall Loss 2.501477 Objective Loss 2.501477 LR 0.000004 Time 0.384515
+2025-05-17 12:02:31,589 - Epoch: [345][ 350/ 518] Overall Loss 2.507132 Objective Loss 2.507132 LR 0.000004 Time 0.384024
+2025-05-17 12:02:50,752 - Epoch: [345][ 400/ 518] Overall Loss 2.510199 Objective Loss 2.510199 LR 0.000004 Time 0.383926
+2025-05-17 12:03:09,746 - Epoch: [345][ 450/ 518] Overall Loss 2.516870 Objective Loss 2.516870 LR 0.000004 Time 0.383474
+2025-05-17 12:03:28,736 - Epoch: [345][ 500/ 518] Overall Loss 2.516429 Objective Loss 2.516429 LR 0.000004 Time 0.383104
+2025-05-17 12:03:35,422 - Epoch: [345][ 518/ 518] Overall Loss 2.512517 Objective Loss 2.512517 LR 0.000004 Time 0.382698
+2025-05-17 12:03:35,497 - --- validate (epoch=345)-----------
+2025-05-17 12:03:35,498 - 4952 samples (32 per mini-batch)
+2025-05-17 12:03:35,501 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 12:03:50,430 - Epoch: [345][ 50/ 155] Loss 2.738098 mAP 0.506091
+2025-05-17 12:04:05,349 - Epoch: [345][ 100/ 155] Loss 2.737641 mAP 0.511742
+2025-05-17 12:04:20,622 - Epoch: [345][ 150/ 155] Loss 2.740964 mAP 0.504993
+2025-05-17 12:04:23,623 - Epoch: [345][ 155/ 155] Loss 2.751421 mAP 0.504886
+2025-05-17 12:04:23,677 - ==> mAP: 0.50489 Loss: 2.751
+
+2025-05-17 12:04:23,686 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329]
+2025-05-17 12:04:23,686 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar
+2025-05-17 12:04:23,773 - 
+
+2025-05-17 12:04:23,774 - Training epoch: 16551 samples (32 per mini-batch, world size: 1)
+2025-05-17 12:04:44,249 - Epoch: [346][ 50/ 518] Overall Loss 2.533030 Objective Loss 2.533030 LR 0.000004 Time 0.409437
+2025-05-17 12:05:03,308 - Epoch: [346][ 100/ 518] Overall Loss 2.520071 Objective Loss 2.520071 LR 0.000004 Time 0.395299
+2025-05-17 12:05:22,340 - Epoch: [346][ 150/ 518] Overall Loss 2.512477 Objective Loss 2.512477 LR 0.000004 Time 0.390414
+2025-05-17 12:05:41,371 - Epoch: [346][ 200/ 518] Overall Loss 2.523324 Objective Loss 2.523324 LR 0.000004 Time 0.387957
+2025-05-17 12:06:00,374 - Epoch: [346][ 250/ 518] Overall Loss 2.520233 Objective Loss 2.520233 LR 0.000004 Time 0.386370
+2025-05-17 12:06:19,414 - Epoch: [346][ 300/ 518] Overall Loss 2.524700 Objective Loss 2.524700 LR 0.000004 Time 0.385439
+2025-05-17 12:06:38,469 - Epoch: [346][ 350/ 518] Overall Loss 2.526897 Objective Loss 2.526897 LR 0.000004 Time 0.384817
+2025-05-17 12:06:57,715 - Epoch: [346][ 400/ 518] Overall Loss 2.523317 Objective Loss 2.523317 LR 0.000004 Time 0.384827
+2025-05-17 12:07:16,748 - Epoch: [346][ 450/ 518] Overall Loss 2.520741 Objective Loss 2.520741 LR 0.000004 Time 0.384364
+2025-05-17 12:07:35,820 - Epoch: [346][ 500/ 518] Overall Loss 2.519834 Objective Loss 2.519834 LR 0.000004 Time 0.384069
+2025-05-17 12:07:42,525 - Epoch: [346][ 518/ 518] Overall Loss 2.518059 Objective Loss 2.518059 LR 0.000004 Time 0.383667
+2025-05-17 12:07:42,601 - --- validate (epoch=346)-----------
+2025-05-17 12:07:42,602 - 4952 samples (32 per mini-batch)
+2025-05-17 12:07:42,605 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}}
+2025-05-17 12:07:57,570 - Epoch: [346][ 50/ 155] Loss 2.720413 mAP 0.490972
+2025-05-17 12:08:12,822 - Epoch: [346][ 100/ 155] Loss 2.760555 mAP 0.498661
+2025-05-17 12:08:28,595 - Epoch: 
[346][ 150/ 155] Loss 2.749174 mAP 0.498507 +2025-05-17 12:08:31,565 - Epoch: [346][ 155/ 155] Loss 2.748910 mAP 0.499622 +2025-05-17 12:08:31,620 - ==> mAP: 0.49962 Loss: 2.749 + +2025-05-17 12:08:31,629 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:08:31,629 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:08:31,720 - + +2025-05-17 12:08:31,720 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:08:51,799 - Epoch: [347][ 50/ 518] Overall Loss 2.471970 Objective Loss 2.471970 LR 0.000004 Time 0.401516 +2025-05-17 12:09:10,975 - Epoch: [347][ 100/ 518] Overall Loss 2.477394 Objective Loss 2.477394 LR 0.000004 Time 0.392502 +2025-05-17 12:09:30,025 - Epoch: [347][ 150/ 518] Overall Loss 2.482827 Objective Loss 2.482827 LR 0.000004 Time 0.388665 +2025-05-17 12:09:49,044 - Epoch: [347][ 200/ 518] Overall Loss 2.473805 Objective Loss 2.473805 LR 0.000004 Time 0.386590 +2025-05-17 12:10:08,042 - Epoch: [347][ 250/ 518] Overall Loss 2.485263 Objective Loss 2.485263 LR 0.000004 Time 0.385260 +2025-05-17 12:10:27,045 - Epoch: [347][ 300/ 518] Overall Loss 2.490858 Objective Loss 2.490858 LR 0.000004 Time 0.384389 +2025-05-17 12:10:46,230 - Epoch: [347][ 350/ 518] Overall Loss 2.494417 Objective Loss 2.494417 LR 0.000004 Time 0.384286 +2025-05-17 12:11:05,274 - Epoch: [347][ 400/ 518] Overall Loss 2.498442 Objective Loss 2.498442 LR 0.000004 Time 0.383858 +2025-05-17 12:11:24,291 - Epoch: [347][ 450/ 518] Overall Loss 2.500042 Objective Loss 2.500042 LR 0.000004 Time 0.383466 +2025-05-17 12:11:43,314 - Epoch: [347][ 500/ 518] Overall Loss 2.504037 Objective Loss 2.504037 LR 0.000004 Time 0.383163 +2025-05-17 12:11:50,029 - Epoch: [347][ 518/ 518] Overall Loss 2.502244 Objective Loss 2.502244 LR 0.000004 Time 0.382811 +2025-05-17 12:11:50,106 - --- validate (epoch=347)----------- +2025-05-17 12:11:50,107 - 4952 samples (32 per mini-batch) 
+2025-05-17 12:11:50,110 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:12:04,838 - Epoch: [347][ 50/ 155] Loss 2.748163 mAP 0.502593 +2025-05-17 12:12:19,769 - Epoch: [347][ 100/ 155] Loss 2.751131 mAP 0.498900 +2025-05-17 12:12:35,202 - Epoch: [347][ 150/ 155] Loss 2.743468 mAP 0.498687 +2025-05-17 12:12:38,077 - Epoch: [347][ 155/ 155] Loss 2.740311 mAP 0.499354 +2025-05-17 12:12:38,129 - ==> mAP: 0.49935 Loss: 2.740 + +2025-05-17 12:12:38,139 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:12:38,139 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:12:38,226 - + +2025-05-17 12:12:38,227 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:12:58,561 - Epoch: [348][ 50/ 518] Overall Loss 2.523529 Objective Loss 2.523529 LR 0.000004 Time 0.406612 +2025-05-17 12:13:17,554 - Epoch: [348][ 100/ 518] Overall Loss 2.515764 Objective Loss 2.515764 LR 0.000004 Time 0.393225 +2025-05-17 12:13:36,536 - Epoch: [348][ 150/ 518] Overall Loss 2.511090 Objective Loss 2.511090 LR 0.000004 Time 0.388691 +2025-05-17 12:13:55,544 - Epoch: [348][ 200/ 518] Overall Loss 2.528201 Objective Loss 2.528201 LR 0.000004 Time 0.386549 +2025-05-17 12:14:14,551 - Epoch: [348][ 250/ 518] Overall Loss 2.531697 Objective Loss 2.531697 LR 0.000004 Time 0.385262 +2025-05-17 12:14:33,583 - Epoch: [348][ 300/ 518] Overall Loss 2.524273 Objective Loss 2.524273 LR 0.000004 Time 0.384490 +2025-05-17 12:14:52,623 - Epoch: [348][ 350/ 518] Overall Loss 2.517309 Objective Loss 2.517309 LR 0.000004 Time 0.383961 +2025-05-17 12:15:11,817 - Epoch: [348][ 400/ 518] Overall Loss 2.517008 Objective Loss 2.517008 LR 0.000004 Time 0.383948 +2025-05-17 12:15:30,812 - Epoch: [348][ 450/ 518] Overall Loss 2.517898 Objective Loss 2.517898 LR 0.000004 Time 0.383495 +2025-05-17 12:15:49,827 - Epoch: [348][ 
500/ 518] Overall Loss 2.521675 Objective Loss 2.521675 LR 0.000004 Time 0.383175 +2025-05-17 12:15:56,533 - Epoch: [348][ 518/ 518] Overall Loss 2.518649 Objective Loss 2.518649 LR 0.000004 Time 0.382804 +2025-05-17 12:15:56,603 - --- validate (epoch=348)----------- +2025-05-17 12:15:56,604 - 4952 samples (32 per mini-batch) +2025-05-17 12:15:56,607 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:16:11,484 - Epoch: [348][ 50/ 155] Loss 2.768466 mAP 0.490343 +2025-05-17 12:16:26,563 - Epoch: [348][ 100/ 155] Loss 2.751238 mAP 0.493140 +2025-05-17 12:16:42,004 - Epoch: [348][ 150/ 155] Loss 2.754950 mAP 0.497626 +2025-05-17 12:16:45,064 - Epoch: [348][ 155/ 155] Loss 2.757012 mAP 0.498967 +2025-05-17 12:16:45,118 - ==> mAP: 0.49897 Loss: 2.757 + +2025-05-17 12:16:45,127 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:16:45,127 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:16:45,216 - + +2025-05-17 12:16:45,216 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:17:05,263 - Epoch: [349][ 50/ 518] Overall Loss 2.509111 Objective Loss 2.509111 LR 0.000004 Time 0.400876 +2025-05-17 12:17:24,471 - Epoch: [349][ 100/ 518] Overall Loss 2.539633 Objective Loss 2.539633 LR 0.000004 Time 0.392501 +2025-05-17 12:17:43,471 - Epoch: [349][ 150/ 518] Overall Loss 2.538548 Objective Loss 2.538548 LR 0.000004 Time 0.388326 +2025-05-17 12:18:02,478 - Epoch: [349][ 200/ 518] Overall Loss 2.524510 Objective Loss 2.524510 LR 0.000004 Time 0.386275 +2025-05-17 12:18:21,478 - Epoch: [349][ 250/ 518] Overall Loss 2.515519 Objective Loss 2.515519 LR 0.000004 Time 0.385017 +2025-05-17 12:18:40,452 - Epoch: [349][ 300/ 518] Overall Loss 2.512973 Objective Loss 2.512973 LR 0.000004 Time 0.384089 +2025-05-17 12:18:59,450 - Epoch: [349][ 350/ 518] Overall Loss 2.517757 Objective 
Loss 2.517757 LR 0.000004 Time 0.383497 +2025-05-17 12:19:18,628 - Epoch: [349][ 400/ 518] Overall Loss 2.515405 Objective Loss 2.515405 LR 0.000004 Time 0.383499 +2025-05-17 12:19:37,619 - Epoch: [349][ 450/ 518] Overall Loss 2.514593 Objective Loss 2.514593 LR 0.000004 Time 0.383088 +2025-05-17 12:19:56,612 - Epoch: [349][ 500/ 518] Overall Loss 2.515022 Objective Loss 2.515022 LR 0.000004 Time 0.382764 +2025-05-17 12:20:03,331 - Epoch: [349][ 518/ 518] Overall Loss 2.517031 Objective Loss 2.517031 LR 0.000004 Time 0.382433 +2025-05-17 12:20:03,410 - --- validate (epoch=349)----------- +2025-05-17 12:20:03,411 - 4952 samples (32 per mini-batch) +2025-05-17 12:20:03,415 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:20:18,429 - Epoch: [349][ 50/ 155] Loss 2.774224 mAP 0.508859 +2025-05-17 12:20:33,540 - Epoch: [349][ 100/ 155] Loss 2.755603 mAP 0.508131 +2025-05-17 12:20:49,129 - Epoch: [349][ 150/ 155] Loss 2.748674 mAP 0.503882 +2025-05-17 12:20:52,063 - Epoch: [349][ 155/ 155] Loss 2.751656 mAP 0.502405 +2025-05-17 12:20:52,117 - ==> mAP: 0.50240 Loss: 2.752 + +2025-05-17 12:20:52,127 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:20:52,127 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:20:52,215 - + +2025-05-17 12:20:52,215 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:21:12,524 - Epoch: [350][ 50/ 518] Overall Loss 2.478668 Objective Loss 2.478668 LR 0.000004 Time 0.406090 +2025-05-17 12:21:31,523 - Epoch: [350][ 100/ 518] Overall Loss 2.482040 Objective Loss 2.482040 LR 0.000004 Time 0.393033 +2025-05-17 12:21:50,499 - Epoch: [350][ 150/ 518] Overall Loss 2.497726 Objective Loss 2.497726 LR 0.000004 Time 0.388516 +2025-05-17 12:22:09,485 - Epoch: [350][ 200/ 518] Overall Loss 2.499135 Objective Loss 2.499135 LR 0.000004 Time 0.386311 
+2025-05-17 12:22:28,476 - Epoch: [350][ 250/ 518] Overall Loss 2.507002 Objective Loss 2.507002 LR 0.000004 Time 0.385007 +2025-05-17 12:22:47,625 - Epoch: [350][ 300/ 518] Overall Loss 2.507293 Objective Loss 2.507293 LR 0.000004 Time 0.384665 +2025-05-17 12:23:06,628 - Epoch: [350][ 350/ 518] Overall Loss 2.503003 Objective Loss 2.503003 LR 0.000004 Time 0.384004 +2025-05-17 12:23:25,658 - Epoch: [350][ 400/ 518] Overall Loss 2.504415 Objective Loss 2.504415 LR 0.000004 Time 0.383576 +2025-05-17 12:23:44,687 - Epoch: [350][ 450/ 518] Overall Loss 2.502402 Objective Loss 2.502402 LR 0.000004 Time 0.383239 +2025-05-17 12:24:03,713 - Epoch: [350][ 500/ 518] Overall Loss 2.506958 Objective Loss 2.506958 LR 0.000004 Time 0.382966 +2025-05-17 12:24:10,428 - Epoch: [350][ 518/ 518] Overall Loss 2.508636 Objective Loss 2.508636 LR 0.000004 Time 0.382619 +2025-05-17 12:24:10,503 - --- validate (epoch=350)----------- +2025-05-17 12:24:10,504 - 4952 samples (32 per mini-batch) +2025-05-17 12:24:10,507 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:24:25,561 - Epoch: [350][ 50/ 155] Loss 2.745827 mAP 0.507985 +2025-05-17 12:24:40,436 - Epoch: [350][ 100/ 155] Loss 2.741024 mAP 0.507961 +2025-05-17 12:24:55,991 - Epoch: [350][ 150/ 155] Loss 2.757375 mAP 0.500160 +2025-05-17 12:24:59,033 - Epoch: [350][ 155/ 155] Loss 2.753714 mAP 0.499972 +2025-05-17 12:24:59,086 - ==> mAP: 0.49997 Loss: 2.754 + +2025-05-17 12:24:59,096 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:24:59,096 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:24:59,184 - + +2025-05-17 12:24:59,185 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:25:19,260 - Epoch: [351][ 50/ 518] Overall Loss 2.521758 Objective Loss 2.521758 LR 0.000004 Time 0.401436 +2025-05-17 12:25:38,225 - Epoch: [351][ 
100/ 518] Overall Loss 2.535988 Objective Loss 2.535988 LR 0.000004 Time 0.390356 +2025-05-17 12:25:57,180 - Epoch: [351][ 150/ 518] Overall Loss 2.533817 Objective Loss 2.533817 LR 0.000004 Time 0.386595 +2025-05-17 12:26:16,160 - Epoch: [351][ 200/ 518] Overall Loss 2.531176 Objective Loss 2.531176 LR 0.000004 Time 0.384840 +2025-05-17 12:26:35,167 - Epoch: [351][ 250/ 518] Overall Loss 2.507395 Objective Loss 2.507395 LR 0.000004 Time 0.383893 +2025-05-17 12:26:54,178 - Epoch: [351][ 300/ 518] Overall Loss 2.510816 Objective Loss 2.510816 LR 0.000004 Time 0.383277 +2025-05-17 12:27:13,374 - Epoch: [351][ 350/ 518] Overall Loss 2.504122 Objective Loss 2.504122 LR 0.000004 Time 0.383365 +2025-05-17 12:27:32,380 - Epoch: [351][ 400/ 518] Overall Loss 2.507161 Objective Loss 2.507161 LR 0.000004 Time 0.382958 +2025-05-17 12:27:51,391 - Epoch: [351][ 450/ 518] Overall Loss 2.504605 Objective Loss 2.504605 LR 0.000004 Time 0.382649 +2025-05-17 12:28:10,390 - Epoch: [351][ 500/ 518] Overall Loss 2.505406 Objective Loss 2.505406 LR 0.000004 Time 0.382381 +2025-05-17 12:28:17,104 - Epoch: [351][ 518/ 518] Overall Loss 2.506542 Objective Loss 2.506542 LR 0.000004 Time 0.382054 +2025-05-17 12:28:17,181 - --- validate (epoch=351)----------- +2025-05-17 12:28:17,182 - 4952 samples (32 per mini-batch) +2025-05-17 12:28:17,185 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:28:31,944 - Epoch: [351][ 50/ 155] Loss 2.730943 mAP 0.519212 +2025-05-17 12:28:46,995 - Epoch: [351][ 100/ 155] Loss 2.748614 mAP 0.508483 +2025-05-17 12:29:02,531 - Epoch: [351][ 150/ 155] Loss 2.737697 mAP 0.505149 +2025-05-17 12:29:05,438 - Epoch: [351][ 155/ 155] Loss 2.748387 mAP 0.501864 +2025-05-17 12:29:05,491 - ==> mAP: 0.50186 Loss: 2.748 + +2025-05-17 12:29:05,501 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:29:05,501 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:29:05,589 - + +2025-05-17 12:29:05,589 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:29:25,920 - Epoch: [352][ 50/ 518] Overall Loss 2.480955 Objective Loss 2.480955 LR 0.000004 Time 0.406563 +2025-05-17 12:29:44,893 - Epoch: [352][ 100/ 518] Overall Loss 2.488390 Objective Loss 2.488390 LR 0.000004 Time 0.393001 +2025-05-17 12:30:03,867 - Epoch: [352][ 150/ 518] Overall Loss 2.482617 Objective Loss 2.482617 LR 0.000004 Time 0.388483 +2025-05-17 12:30:22,907 - Epoch: [352][ 200/ 518] Overall Loss 2.494269 Objective Loss 2.494269 LR 0.000004 Time 0.386556 +2025-05-17 12:30:41,970 - Epoch: [352][ 250/ 518] Overall Loss 2.500468 Objective Loss 2.500468 LR 0.000004 Time 0.385495 +2025-05-17 12:31:00,997 - Epoch: [352][ 300/ 518] Overall Loss 2.508205 Objective Loss 2.508205 LR 0.000004 Time 0.384665 +2025-05-17 12:31:20,038 - Epoch: [352][ 350/ 518] Overall Loss 2.515053 Objective Loss 2.515053 LR 0.000004 Time 0.384112 +2025-05-17 12:31:39,210 - Epoch: [352][ 400/ 518] Overall Loss 2.519861 Objective Loss 2.519861 LR 0.000004 Time 0.384026 +2025-05-17 12:31:58,216 - Epoch: [352][ 450/ 518] Overall Loss 2.517260 Objective Loss 2.517260 LR 0.000004 Time 0.383589 +2025-05-17 12:32:17,253 - Epoch: [352][ 500/ 518] Overall Loss 2.516800 Objective Loss 2.516800 LR 0.000004 Time 0.383302 +2025-05-17 12:32:23,968 - Epoch: [352][ 518/ 518] Overall Loss 2.514078 Objective Loss 2.514078 LR 0.000004 Time 0.382944 +2025-05-17 12:32:24,047 - --- validate (epoch=352)----------- +2025-05-17 12:32:24,048 - 4952 samples (32 per mini-batch) +2025-05-17 12:32:24,051 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:32:38,869 - Epoch: [352][ 50/ 155] Loss 2.771439 mAP 0.501634 +2025-05-17 12:32:53,950 - Epoch: [352][ 100/ 155] Loss 2.729136 mAP 0.507361 +2025-05-17 12:33:09,310 - Epoch: 
[352][ 150/ 155] Loss 2.750664 mAP 0.498290 +2025-05-17 12:33:12,349 - Epoch: [352][ 155/ 155] Loss 2.753542 mAP 0.498155 +2025-05-17 12:33:12,403 - ==> mAP: 0.49816 Loss: 2.754 + +2025-05-17 12:33:12,412 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:33:12,412 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:33:12,501 - + +2025-05-17 12:33:12,501 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:33:32,751 - Epoch: [353][ 50/ 518] Overall Loss 2.504449 Objective Loss 2.504449 LR 0.000004 Time 0.404933 +2025-05-17 12:33:51,777 - Epoch: [353][ 100/ 518] Overall Loss 2.488610 Objective Loss 2.488610 LR 0.000004 Time 0.392721 +2025-05-17 12:34:10,791 - Epoch: [353][ 150/ 518] Overall Loss 2.481640 Objective Loss 2.481640 LR 0.000004 Time 0.388570 +2025-05-17 12:34:29,785 - Epoch: [353][ 200/ 518] Overall Loss 2.489477 Objective Loss 2.489477 LR 0.000004 Time 0.386389 +2025-05-17 12:34:48,799 - Epoch: [353][ 250/ 518] Overall Loss 2.501242 Objective Loss 2.501242 LR 0.000004 Time 0.385162 +2025-05-17 12:35:07,790 - Epoch: [353][ 300/ 518] Overall Loss 2.499882 Objective Loss 2.499882 LR 0.000004 Time 0.384267 +2025-05-17 12:35:26,973 - Epoch: [353][ 350/ 518] Overall Loss 2.497640 Objective Loss 2.497640 LR 0.000004 Time 0.384178 +2025-05-17 12:35:45,998 - Epoch: [353][ 400/ 518] Overall Loss 2.495707 Objective Loss 2.495707 LR 0.000004 Time 0.383714 +2025-05-17 12:36:05,039 - Epoch: [353][ 450/ 518] Overall Loss 2.499760 Objective Loss 2.499760 LR 0.000004 Time 0.383392 +2025-05-17 12:36:24,044 - Epoch: [353][ 500/ 518] Overall Loss 2.501544 Objective Loss 2.501544 LR 0.000004 Time 0.383060 +2025-05-17 12:36:30,736 - Epoch: [353][ 518/ 518] Overall Loss 2.499298 Objective Loss 2.499298 LR 0.000004 Time 0.382666 +2025-05-17 12:36:30,811 - --- validate (epoch=353)----------- +2025-05-17 12:36:30,812 - 4952 samples (32 per mini-batch) 
+2025-05-17 12:36:30,815 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:36:45,968 - Epoch: [353][ 50/ 155] Loss 2.746831 mAP 0.507083 +2025-05-17 12:37:01,177 - Epoch: [353][ 100/ 155] Loss 2.760469 mAP 0.500133 +2025-05-17 12:37:17,002 - Epoch: [353][ 150/ 155] Loss 2.757401 mAP 0.499553 +2025-05-17 12:37:20,169 - Epoch: [353][ 155/ 155] Loss 2.756974 mAP 0.499313 +2025-05-17 12:37:20,225 - ==> mAP: 0.49931 Loss: 2.757 + +2025-05-17 12:37:20,235 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:37:20,235 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:37:20,325 - + +2025-05-17 12:37:20,326 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:37:40,292 - Epoch: [354][ 50/ 518] Overall Loss 2.576526 Objective Loss 2.576526 LR 0.000004 Time 0.399257 +2025-05-17 12:37:59,304 - Epoch: [354][ 100/ 518] Overall Loss 2.584453 Objective Loss 2.584453 LR 0.000004 Time 0.389736 +2025-05-17 12:38:18,253 - Epoch: [354][ 150/ 518] Overall Loss 2.561792 Objective Loss 2.561792 LR 0.000004 Time 0.386143 +2025-05-17 12:38:37,248 - Epoch: [354][ 200/ 518] Overall Loss 2.534178 Objective Loss 2.534178 LR 0.000004 Time 0.384578 +2025-05-17 12:38:56,248 - Epoch: [354][ 250/ 518] Overall Loss 2.527208 Objective Loss 2.527208 LR 0.000004 Time 0.383658 +2025-05-17 12:39:15,254 - Epoch: [354][ 300/ 518] Overall Loss 2.517883 Objective Loss 2.517883 LR 0.000004 Time 0.383063 +2025-05-17 12:39:34,237 - Epoch: [354][ 350/ 518] Overall Loss 2.510538 Objective Loss 2.510538 LR 0.000004 Time 0.382572 +2025-05-17 12:39:53,439 - Epoch: [354][ 400/ 518] Overall Loss 2.519461 Objective Loss 2.519461 LR 0.000004 Time 0.382754 +2025-05-17 12:40:12,483 - Epoch: [354][ 450/ 518] Overall Loss 2.514322 Objective Loss 2.514322 LR 0.000004 Time 0.382542 +2025-05-17 12:40:31,517 - Epoch: [354][ 
500/ 518] Overall Loss 2.514341 Objective Loss 2.514341 LR 0.000004 Time 0.382355 +2025-05-17 12:40:38,231 - Epoch: [354][ 518/ 518] Overall Loss 2.513352 Objective Loss 2.513352 LR 0.000004 Time 0.382028 +2025-05-17 12:40:38,303 - --- validate (epoch=354)----------- +2025-05-17 12:40:38,303 - 4952 samples (32 per mini-batch) +2025-05-17 12:40:38,307 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:40:53,339 - Epoch: [354][ 50/ 155] Loss 2.722206 mAP 0.507480 +2025-05-17 12:41:08,639 - Epoch: [354][ 100/ 155] Loss 2.736503 mAP 0.495851 +2025-05-17 12:41:24,453 - Epoch: [354][ 150/ 155] Loss 2.742247 mAP 0.496801 +2025-05-17 12:41:27,439 - Epoch: [354][ 155/ 155] Loss 2.744587 mAP 0.496398 +2025-05-17 12:41:27,496 - ==> mAP: 0.49640 Loss: 2.745 + +2025-05-17 12:41:27,506 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:41:27,507 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:41:27,597 - + +2025-05-17 12:41:27,597 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:41:47,599 - Epoch: [355][ 50/ 518] Overall Loss 2.504403 Objective Loss 2.504403 LR 0.000004 Time 0.399955 +2025-05-17 12:42:06,789 - Epoch: [355][ 100/ 518] Overall Loss 2.480750 Objective Loss 2.480750 LR 0.000004 Time 0.391871 +2025-05-17 12:42:25,750 - Epoch: [355][ 150/ 518] Overall Loss 2.511431 Objective Loss 2.511431 LR 0.000004 Time 0.387643 +2025-05-17 12:42:44,775 - Epoch: [355][ 200/ 518] Overall Loss 2.506550 Objective Loss 2.506550 LR 0.000004 Time 0.385854 +2025-05-17 12:43:03,806 - Epoch: [355][ 250/ 518] Overall Loss 2.502146 Objective Loss 2.502146 LR 0.000004 Time 0.384802 +2025-05-17 12:43:22,799 - Epoch: [355][ 300/ 518] Overall Loss 2.508747 Objective Loss 2.508747 LR 0.000004 Time 0.383975 +2025-05-17 12:43:42,009 - Epoch: [355][ 350/ 518] Overall Loss 2.512416 Objective 
Loss 2.512416 LR 0.000004 Time 0.384002 +2025-05-17 12:44:01,010 - Epoch: [355][ 400/ 518] Overall Loss 2.511557 Objective Loss 2.511557 LR 0.000004 Time 0.383501 +2025-05-17 12:44:19,994 - Epoch: [355][ 450/ 518] Overall Loss 2.517039 Objective Loss 2.517039 LR 0.000004 Time 0.383075 +2025-05-17 12:44:39,041 - Epoch: [355][ 500/ 518] Overall Loss 2.514170 Objective Loss 2.514170 LR 0.000004 Time 0.382860 +2025-05-17 12:44:45,754 - Epoch: [355][ 518/ 518] Overall Loss 2.512497 Objective Loss 2.512497 LR 0.000004 Time 0.382514 +2025-05-17 12:44:45,831 - --- validate (epoch=355)----------- +2025-05-17 12:44:45,832 - 4952 samples (32 per mini-batch) +2025-05-17 12:44:45,835 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:45:00,871 - Epoch: [355][ 50/ 155] Loss 2.831463 mAP 0.501727 +2025-05-17 12:45:16,227 - Epoch: [355][ 100/ 155] Loss 2.769400 mAP 0.502290 +2025-05-17 12:45:32,087 - Epoch: [355][ 150/ 155] Loss 2.751327 mAP 0.502609 +2025-05-17 12:45:35,072 - Epoch: [355][ 155/ 155] Loss 2.756015 mAP 0.501132 +2025-05-17 12:45:35,126 - ==> mAP: 0.50113 Loss: 2.756 + +2025-05-17 12:45:35,137 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:45:35,137 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:45:35,227 - + +2025-05-17 12:45:35,227 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:45:55,427 - Epoch: [356][ 50/ 518] Overall Loss 2.467031 Objective Loss 2.467031 LR 0.000004 Time 0.403933 +2025-05-17 12:46:14,427 - Epoch: [356][ 100/ 518] Overall Loss 2.449693 Objective Loss 2.449693 LR 0.000004 Time 0.391958 +2025-05-17 12:46:33,385 - Epoch: [356][ 150/ 518] Overall Loss 2.469876 Objective Loss 2.469876 LR 0.000004 Time 0.387684 +2025-05-17 12:46:52,382 - Epoch: [356][ 200/ 518] Overall Loss 2.476491 Objective Loss 2.476491 LR 0.000004 Time 0.385742 
+2025-05-17 12:47:11,391 - Epoch: [356][ 250/ 518] Overall Loss 2.487738 Objective Loss 2.487738 LR 0.000004 Time 0.384623 +2025-05-17 12:47:30,446 - Epoch: [356][ 300/ 518] Overall Loss 2.486344 Objective Loss 2.486344 LR 0.000004 Time 0.384032 +2025-05-17 12:47:49,636 - Epoch: [356][ 350/ 518] Overall Loss 2.495361 Objective Loss 2.495361 LR 0.000004 Time 0.383997 +2025-05-17 12:48:08,644 - Epoch: [356][ 400/ 518] Overall Loss 2.501570 Objective Loss 2.501570 LR 0.000004 Time 0.383515 +2025-05-17 12:48:27,674 - Epoch: [356][ 450/ 518] Overall Loss 2.502118 Objective Loss 2.502118 LR 0.000004 Time 0.383187 +2025-05-17 12:48:46,677 - Epoch: [356][ 500/ 518] Overall Loss 2.500502 Objective Loss 2.500502 LR 0.000004 Time 0.382871 +2025-05-17 12:48:53,389 - Epoch: [356][ 518/ 518] Overall Loss 2.503464 Objective Loss 2.503464 LR 0.000004 Time 0.382523 +2025-05-17 12:48:53,465 - --- validate (epoch=356)----------- +2025-05-17 12:48:53,466 - 4952 samples (32 per mini-batch) +2025-05-17 12:48:53,469 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:49:08,303 - Epoch: [356][ 50/ 155] Loss 2.712618 mAP 0.513290 +2025-05-17 12:49:23,066 - Epoch: [356][ 100/ 155] Loss 2.734134 mAP 0.497319 +2025-05-17 12:49:38,478 - Epoch: [356][ 150/ 155] Loss 2.755755 mAP 0.496921 +2025-05-17 12:49:41,505 - Epoch: [356][ 155/ 155] Loss 2.758412 mAP 0.497043 +2025-05-17 12:49:41,558 - ==> mAP: 0.49704 Loss: 2.758 + +2025-05-17 12:49:41,568 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:49:41,568 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:49:41,655 - + +2025-05-17 12:49:41,656 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:50:01,650 - Epoch: [357][ 50/ 518] Overall Loss 2.446418 Objective Loss 2.446418 LR 0.000004 Time 0.399826 +2025-05-17 12:50:20,589 - Epoch: [357][ 
100/ 518] Overall Loss 2.508043 Objective Loss 2.508043 LR 0.000004 Time 0.389284 +2025-05-17 12:50:39,575 - Epoch: [357][ 150/ 518] Overall Loss 2.519027 Objective Loss 2.519027 LR 0.000004 Time 0.386092 +2025-05-17 12:50:58,553 - Epoch: [357][ 200/ 518] Overall Loss 2.505481 Objective Loss 2.505481 LR 0.000004 Time 0.384453 +2025-05-17 12:51:17,562 - Epoch: [357][ 250/ 518] Overall Loss 2.509496 Objective Loss 2.509496 LR 0.000004 Time 0.383595 +2025-05-17 12:51:36,590 - Epoch: [357][ 300/ 518] Overall Loss 2.506316 Objective Loss 2.506316 LR 0.000004 Time 0.383086 +2025-05-17 12:51:55,602 - Epoch: [357][ 350/ 518] Overall Loss 2.503967 Objective Loss 2.503967 LR 0.000004 Time 0.382676 +2025-05-17 12:52:14,777 - Epoch: [357][ 400/ 518] Overall Loss 2.509598 Objective Loss 2.509598 LR 0.000004 Time 0.382776 +2025-05-17 12:52:33,777 - Epoch: [357][ 450/ 518] Overall Loss 2.512626 Objective Loss 2.512626 LR 0.000004 Time 0.382463 +2025-05-17 12:52:52,789 - Epoch: [357][ 500/ 518] Overall Loss 2.515787 Objective Loss 2.515787 LR 0.000004 Time 0.382240 +2025-05-17 12:52:59,478 - Epoch: [357][ 518/ 518] Overall Loss 2.511002 Objective Loss 2.511002 LR 0.000004 Time 0.381869 +2025-05-17 12:52:59,553 - --- validate (epoch=357)----------- +2025-05-17 12:52:59,553 - 4952 samples (32 per mini-batch) +2025-05-17 12:52:59,556 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:53:14,375 - Epoch: [357][ 50/ 155] Loss 2.761928 mAP 0.497714 +2025-05-17 12:53:29,468 - Epoch: [357][ 100/ 155] Loss 2.752539 mAP 0.499592 +2025-05-17 12:53:44,929 - Epoch: [357][ 150/ 155] Loss 2.757502 mAP 0.497645 +2025-05-17 12:53:47,982 - Epoch: [357][ 155/ 155] Loss 2.756248 mAP 0.498030 +2025-05-17 12:53:48,035 - ==> mAP: 0.49803 Loss: 2.756 + +2025-05-17 12:53:48,045 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:53:48,045 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:53:48,133 - + +2025-05-17 12:53:48,133 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:54:08,354 - Epoch: [358][ 50/ 518] Overall Loss 2.497722 Objective Loss 2.497722 LR 0.000004 Time 0.404344 +2025-05-17 12:54:27,384 - Epoch: [358][ 100/ 518] Overall Loss 2.508178 Objective Loss 2.508178 LR 0.000004 Time 0.392461 +2025-05-17 12:54:46,345 - Epoch: [358][ 150/ 518] Overall Loss 2.520971 Objective Loss 2.520971 LR 0.000004 Time 0.388038 +2025-05-17 12:55:05,374 - Epoch: [358][ 200/ 518] Overall Loss 2.511175 Objective Loss 2.511175 LR 0.000004 Time 0.386168 +2025-05-17 12:55:24,413 - Epoch: [358][ 250/ 518] Overall Loss 2.504144 Objective Loss 2.504144 LR 0.000004 Time 0.385089 +2025-05-17 12:55:43,438 - Epoch: [358][ 300/ 518] Overall Loss 2.503835 Objective Loss 2.503835 LR 0.000004 Time 0.384319 +2025-05-17 12:56:02,625 - Epoch: [358][ 350/ 518] Overall Loss 2.498505 Objective Loss 2.498505 LR 0.000004 Time 0.384232 +2025-05-17 12:56:21,695 - Epoch: [358][ 400/ 518] Overall Loss 2.500802 Objective Loss 2.500802 LR 0.000004 Time 0.383876 +2025-05-17 12:56:40,730 - Epoch: [358][ 450/ 518] Overall Loss 2.509848 Objective Loss 2.509848 LR 0.000004 Time 0.383521 +2025-05-17 12:56:59,772 - Epoch: [358][ 500/ 518] Overall Loss 2.510870 Objective Loss 2.510870 LR 0.000004 Time 0.383252 +2025-05-17 12:57:06,489 - Epoch: [358][ 518/ 518] Overall Loss 2.509909 Objective Loss 2.509909 LR 0.000004 Time 0.382901 +2025-05-17 12:57:06,569 - --- validate (epoch=358)----------- +2025-05-17 12:57:06,570 - 4952 samples (32 per mini-batch) +2025-05-17 12:57:06,573 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 12:57:21,343 - Epoch: [358][ 50/ 155] Loss 2.765054 mAP 0.499822 +2025-05-17 12:57:36,284 - Epoch: [358][ 100/ 155] Loss 2.760400 mAP 0.491775 +2025-05-17 12:57:51,830 - Epoch: 
[358][ 150/ 155] Loss 2.759764 mAP 0.492155 +2025-05-17 12:57:54,721 - Epoch: [358][ 155/ 155] Loss 2.761947 mAP 0.491945 +2025-05-17 12:57:54,774 - ==> mAP: 0.49194 Loss: 2.762 + +2025-05-17 12:57:54,784 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 12:57:54,784 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 12:57:54,872 - + +2025-05-17 12:57:54,872 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 12:58:15,077 - Epoch: [359][ 50/ 518] Overall Loss 2.512016 Objective Loss 2.512016 LR 0.000004 Time 0.404033 +2025-05-17 12:58:34,101 - Epoch: [359][ 100/ 518] Overall Loss 2.516379 Objective Loss 2.516379 LR 0.000004 Time 0.392243 +2025-05-17 12:58:53,102 - Epoch: [359][ 150/ 518] Overall Loss 2.507214 Objective Loss 2.507214 LR 0.000004 Time 0.388162 +2025-05-17 12:59:12,095 - Epoch: [359][ 200/ 518] Overall Loss 2.512745 Objective Loss 2.512745 LR 0.000004 Time 0.386079 +2025-05-17 12:59:31,103 - Epoch: [359][ 250/ 518] Overall Loss 2.506653 Objective Loss 2.506653 LR 0.000004 Time 0.384891 +2025-05-17 12:59:50,272 - Epoch: [359][ 300/ 518] Overall Loss 2.505226 Objective Loss 2.505226 LR 0.000004 Time 0.384634 +2025-05-17 13:00:09,265 - Epoch: [359][ 350/ 518] Overall Loss 2.518314 Objective Loss 2.518314 LR 0.000004 Time 0.383950 +2025-05-17 13:00:28,268 - Epoch: [359][ 400/ 518] Overall Loss 2.516377 Objective Loss 2.516377 LR 0.000004 Time 0.383461 +2025-05-17 13:00:47,280 - Epoch: [359][ 450/ 518] Overall Loss 2.509506 Objective Loss 2.509506 LR 0.000004 Time 0.383101 +2025-05-17 13:01:06,295 - Epoch: [359][ 500/ 518] Overall Loss 2.507294 Objective Loss 2.507294 LR 0.000004 Time 0.382817 +2025-05-17 13:01:13,029 - Epoch: [359][ 518/ 518] Overall Loss 2.507040 Objective Loss 2.507040 LR 0.000004 Time 0.382514 +2025-05-17 13:01:13,107 - --- validate (epoch=359)----------- +2025-05-17 13:01:13,108 - 4952 samples (32 per mini-batch) 
+2025-05-17 13:01:13,111 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:01:28,176 - Epoch: [359][ 50/ 155] Loss 2.809344 mAP 0.492854 +2025-05-17 13:01:43,091 - Epoch: [359][ 100/ 155] Loss 2.770226 mAP 0.492697 +2025-05-17 13:01:58,748 - Epoch: [359][ 150/ 155] Loss 2.754661 mAP 0.500819 +2025-05-17 13:02:01,850 - Epoch: [359][ 155/ 155] Loss 2.749503 mAP 0.503531 +2025-05-17 13:02:01,903 - ==> mAP: 0.50353 Loss: 2.750 + +2025-05-17 13:02:01,913 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:02:01,913 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:02:02,002 - + +2025-05-17 13:02:02,002 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:02:21,999 - Epoch: [360][ 50/ 518] Overall Loss 2.492042 Objective Loss 2.492042 LR 0.000004 Time 0.399858 +2025-05-17 13:02:40,964 - Epoch: [360][ 100/ 518] Overall Loss 2.516817 Objective Loss 2.516817 LR 0.000004 Time 0.389572 +2025-05-17 13:02:59,958 - Epoch: [360][ 150/ 518] Overall Loss 2.494852 Objective Loss 2.494852 LR 0.000004 Time 0.386337 +2025-05-17 13:03:18,985 - Epoch: [360][ 200/ 518] Overall Loss 2.486884 Objective Loss 2.486884 LR 0.000004 Time 0.384882 +2025-05-17 13:03:37,951 - Epoch: [360][ 250/ 518] Overall Loss 2.485363 Objective Loss 2.485363 LR 0.000004 Time 0.383766 +2025-05-17 13:03:57,142 - Epoch: [360][ 300/ 518] Overall Loss 2.491858 Objective Loss 2.491858 LR 0.000004 Time 0.383769 +2025-05-17 13:04:16,187 - Epoch: [360][ 350/ 518] Overall Loss 2.486588 Objective Loss 2.486588 LR 0.000004 Time 0.383354 +2025-05-17 13:04:35,236 - Epoch: [360][ 400/ 518] Overall Loss 2.492209 Objective Loss 2.492209 LR 0.000004 Time 0.383055 +2025-05-17 13:04:54,258 - Epoch: [360][ 450/ 518] Overall Loss 2.492658 Objective Loss 2.492658 LR 0.000004 Time 0.382760 +2025-05-17 13:05:13,286 - Epoch: [360][ 
500/ 518] Overall Loss 2.500066 Objective Loss 2.500066 LR 0.000004 Time 0.382539 +2025-05-17 13:05:20,012 - Epoch: [360][ 518/ 518] Overall Loss 2.502569 Objective Loss 2.502569 LR 0.000004 Time 0.382230 +2025-05-17 13:05:20,088 - --- validate (epoch=360)----------- +2025-05-17 13:05:20,089 - 4952 samples (32 per mini-batch) +2025-05-17 13:05:20,092 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:05:35,023 - Epoch: [360][ 50/ 155] Loss 2.734749 mAP 0.503013 +2025-05-17 13:05:49,900 - Epoch: [360][ 100/ 155] Loss 2.729226 mAP 0.509999 +2025-05-17 13:06:05,420 - Epoch: [360][ 150/ 155] Loss 2.744815 mAP 0.503808 +2025-05-17 13:06:08,487 - Epoch: [360][ 155/ 155] Loss 2.746754 mAP 0.503435 +2025-05-17 13:06:08,541 - ==> mAP: 0.50344 Loss: 2.747 + +2025-05-17 13:06:08,551 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:06:08,551 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:06:08,639 - + +2025-05-17 13:06:08,639 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:06:28,799 - Epoch: [361][ 50/ 518] Overall Loss 2.512628 Objective Loss 2.512628 LR 0.000004 Time 0.403135 +2025-05-17 13:06:47,818 - Epoch: [361][ 100/ 518] Overall Loss 2.515107 Objective Loss 2.515107 LR 0.000004 Time 0.391747 +2025-05-17 13:07:06,796 - Epoch: [361][ 150/ 518] Overall Loss 2.519639 Objective Loss 2.519639 LR 0.000004 Time 0.387674 +2025-05-17 13:07:25,779 - Epoch: [361][ 200/ 518] Overall Loss 2.518859 Objective Loss 2.518859 LR 0.000004 Time 0.385666 +2025-05-17 13:07:45,014 - Epoch: [361][ 250/ 518] Overall Loss 2.520908 Objective Loss 2.520908 LR 0.000004 Time 0.385470 +2025-05-17 13:08:04,057 - Epoch: [361][ 300/ 518] Overall Loss 2.509956 Objective Loss 2.509956 LR 0.000004 Time 0.384697 +2025-05-17 13:08:23,040 - Epoch: [361][ 350/ 518] Overall Loss 2.504352 Objective 
Loss 2.504352 LR 0.000004 Time 0.383975 +2025-05-17 13:08:42,058 - Epoch: [361][ 400/ 518] Overall Loss 2.508326 Objective Loss 2.508326 LR 0.000004 Time 0.383520 +2025-05-17 13:09:01,060 - Epoch: [361][ 450/ 518] Overall Loss 2.508001 Objective Loss 2.508001 LR 0.000004 Time 0.383129 +2025-05-17 13:09:20,069 - Epoch: [361][ 500/ 518] Overall Loss 2.510981 Objective Loss 2.510981 LR 0.000004 Time 0.382833 +2025-05-17 13:09:26,879 - Epoch: [361][ 518/ 518] Overall Loss 2.506184 Objective Loss 2.506184 LR 0.000004 Time 0.382674 +2025-05-17 13:09:26,954 - --- validate (epoch=361)----------- +2025-05-17 13:09:26,955 - 4952 samples (32 per mini-batch) +2025-05-17 13:09:26,958 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:09:41,965 - Epoch: [361][ 50/ 155] Loss 2.765070 mAP 0.491857 +2025-05-17 13:09:57,081 - Epoch: [361][ 100/ 155] Loss 2.737968 mAP 0.495874 +2025-05-17 13:10:12,875 - Epoch: [361][ 150/ 155] Loss 2.758809 mAP 0.492401 +2025-05-17 13:10:15,993 - Epoch: [361][ 155/ 155] Loss 2.762899 mAP 0.492517 +2025-05-17 13:10:16,047 - ==> mAP: 0.49252 Loss: 2.763 + +2025-05-17 13:10:16,057 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:10:16,057 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:10:16,147 - + +2025-05-17 13:10:16,148 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:10:36,304 - Epoch: [362][ 50/ 518] Overall Loss 2.575382 Objective Loss 2.575382 LR 0.000004 Time 0.403050 +2025-05-17 13:10:55,238 - Epoch: [362][ 100/ 518] Overall Loss 2.534248 Objective Loss 2.534248 LR 0.000004 Time 0.390853 +2025-05-17 13:11:14,176 - Epoch: [362][ 150/ 518] Overall Loss 2.522293 Objective Loss 2.522293 LR 0.000004 Time 0.386814 +2025-05-17 13:11:33,359 - Epoch: [362][ 200/ 518] Overall Loss 2.519024 Objective Loss 2.519024 LR 0.000004 Time 0.386018 
+2025-05-17 13:11:52,332 - Epoch: [362][ 250/ 518] Overall Loss 2.516992 Objective Loss 2.516992 LR 0.000004 Time 0.384701 +2025-05-17 13:12:11,299 - Epoch: [362][ 300/ 518] Overall Loss 2.510662 Objective Loss 2.510662 LR 0.000004 Time 0.383804 +2025-05-17 13:12:30,276 - Epoch: [362][ 350/ 518] Overall Loss 2.515877 Objective Loss 2.515877 LR 0.000004 Time 0.383193 +2025-05-17 13:12:49,272 - Epoch: [362][ 400/ 518] Overall Loss 2.513708 Objective Loss 2.513708 LR 0.000004 Time 0.382779 +2025-05-17 13:13:08,287 - Epoch: [362][ 450/ 518] Overall Loss 2.516401 Objective Loss 2.516401 LR 0.000004 Time 0.382504 +2025-05-17 13:13:27,292 - Epoch: [362][ 500/ 518] Overall Loss 2.510532 Objective Loss 2.510532 LR 0.000004 Time 0.382260 +2025-05-17 13:13:34,001 - Epoch: [362][ 518/ 518] Overall Loss 2.511839 Objective Loss 2.511839 LR 0.000004 Time 0.381928 +2025-05-17 13:13:34,080 - --- validate (epoch=362)----------- +2025-05-17 13:13:34,081 - 4952 samples (32 per mini-batch) +2025-05-17 13:13:34,084 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:13:49,098 - Epoch: [362][ 50/ 155] Loss 2.761967 mAP 0.490988 +2025-05-17 13:14:04,028 - Epoch: [362][ 100/ 155] Loss 2.749334 mAP 0.498062 +2025-05-17 13:14:19,564 - Epoch: [362][ 150/ 155] Loss 2.770770 mAP 0.498644 +2025-05-17 13:14:22,597 - Epoch: [362][ 155/ 155] Loss 2.768833 mAP 0.495527 +2025-05-17 13:14:22,650 - ==> mAP: 0.49553 Loss: 2.769 + +2025-05-17 13:14:22,660 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:14:22,660 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:14:22,747 - + +2025-05-17 13:14:22,747 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:14:42,843 - Epoch: [363][ 50/ 518] Overall Loss 2.486414 Objective Loss 2.486414 LR 0.000004 Time 0.401846 +2025-05-17 13:15:01,838 - Epoch: [363][ 
100/ 518] Overall Loss 2.483248 Objective Loss 2.483248 LR 0.000004 Time 0.390863 +2025-05-17 13:15:20,792 - Epoch: [363][ 150/ 518] Overall Loss 2.508746 Objective Loss 2.508746 LR 0.000004 Time 0.386928 +2025-05-17 13:15:39,953 - Epoch: [363][ 200/ 518] Overall Loss 2.518904 Objective Loss 2.518904 LR 0.000004 Time 0.385997 +2025-05-17 13:15:58,995 - Epoch: [363][ 250/ 518] Overall Loss 2.507233 Objective Loss 2.507233 LR 0.000004 Time 0.384961 +2025-05-17 13:16:18,006 - Epoch: [363][ 300/ 518] Overall Loss 2.505086 Objective Loss 2.505086 LR 0.000004 Time 0.384165 +2025-05-17 13:16:37,000 - Epoch: [363][ 350/ 518] Overall Loss 2.503552 Objective Loss 2.503552 LR 0.000004 Time 0.383550 +2025-05-17 13:16:56,032 - Epoch: [363][ 400/ 518] Overall Loss 2.508077 Objective Loss 2.508077 LR 0.000004 Time 0.383185 +2025-05-17 13:17:15,074 - Epoch: [363][ 450/ 518] Overall Loss 2.503065 Objective Loss 2.503065 LR 0.000004 Time 0.382922 +2025-05-17 13:17:34,255 - Epoch: [363][ 500/ 518] Overall Loss 2.502875 Objective Loss 2.502875 LR 0.000004 Time 0.382989 +2025-05-17 13:17:40,972 - Epoch: [363][ 518/ 518] Overall Loss 2.511381 Objective Loss 2.511381 LR 0.000004 Time 0.382647 +2025-05-17 13:17:41,050 - --- validate (epoch=363)----------- +2025-05-17 13:17:41,051 - 4952 samples (32 per mini-batch) +2025-05-17 13:17:41,054 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:17:55,813 - Epoch: [363][ 50/ 155] Loss 2.797428 mAP 0.490984 +2025-05-17 13:18:10,679 - Epoch: [363][ 100/ 155] Loss 2.756420 mAP 0.501810 +2025-05-17 13:18:26,241 - Epoch: [363][ 150/ 155] Loss 2.753431 mAP 0.499029 +2025-05-17 13:18:29,290 - Epoch: [363][ 155/ 155] Loss 2.753645 mAP 0.499426 +2025-05-17 13:18:29,343 - ==> mAP: 0.49943 Loss: 2.754 + +2025-05-17 13:18:29,353 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:18:29,353 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:18:29,442 - + +2025-05-17 13:18:29,442 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:18:49,510 - Epoch: [364][ 50/ 518] Overall Loss 2.574329 Objective Loss 2.574329 LR 0.000004 Time 0.401275 +2025-05-17 13:19:08,670 - Epoch: [364][ 100/ 518] Overall Loss 2.501189 Objective Loss 2.501189 LR 0.000004 Time 0.392225 +2025-05-17 13:19:27,650 - Epoch: [364][ 150/ 518] Overall Loss 2.511867 Objective Loss 2.511867 LR 0.000004 Time 0.388010 +2025-05-17 13:19:46,632 - Epoch: [364][ 200/ 518] Overall Loss 2.512134 Objective Loss 2.512134 LR 0.000004 Time 0.385911 +2025-05-17 13:20:05,646 - Epoch: [364][ 250/ 518] Overall Loss 2.503956 Objective Loss 2.503956 LR 0.000004 Time 0.384782 +2025-05-17 13:20:24,643 - Epoch: [364][ 300/ 518] Overall Loss 2.507598 Objective Loss 2.507598 LR 0.000004 Time 0.383971 +2025-05-17 13:20:43,645 - Epoch: [364][ 350/ 518] Overall Loss 2.511242 Objective Loss 2.511242 LR 0.000004 Time 0.383405 +2025-05-17 13:21:02,838 - Epoch: [364][ 400/ 518] Overall Loss 2.512715 Objective Loss 2.512715 LR 0.000004 Time 0.383460 +2025-05-17 13:21:21,858 - Epoch: [364][ 450/ 518] Overall Loss 2.510289 Objective Loss 2.510289 LR 0.000004 Time 0.383116 +2025-05-17 13:21:40,857 - Epoch: [364][ 500/ 518] Overall Loss 2.508498 Objective Loss 2.508498 LR 0.000004 Time 0.382801 +2025-05-17 13:21:47,569 - Epoch: [364][ 518/ 518] Overall Loss 2.512389 Objective Loss 2.512389 LR 0.000004 Time 0.382456 +2025-05-17 13:21:47,640 - --- validate (epoch=364)----------- +2025-05-17 13:21:47,641 - 4952 samples (32 per mini-batch) +2025-05-17 13:21:47,644 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:22:02,487 - Epoch: [364][ 50/ 155] Loss 2.750151 mAP 0.502881 +2025-05-17 13:22:17,490 - Epoch: [364][ 100/ 155] Loss 2.785931 mAP 0.485847 +2025-05-17 13:22:33,028 - Epoch: 
[364][ 150/ 155] Loss 2.776613 mAP 0.492604 +2025-05-17 13:22:35,936 - Epoch: [364][ 155/ 155] Loss 2.771361 mAP 0.494511 +2025-05-17 13:22:35,990 - ==> mAP: 0.49451 Loss: 2.771 + +2025-05-17 13:22:36,000 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:22:36,000 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:22:36,090 - + +2025-05-17 13:22:36,091 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:22:56,193 - Epoch: [365][ 50/ 518] Overall Loss 2.528678 Objective Loss 2.528678 LR 0.000004 Time 0.401966 +2025-05-17 13:23:15,361 - Epoch: [365][ 100/ 518] Overall Loss 2.491593 Objective Loss 2.491593 LR 0.000004 Time 0.392658 +2025-05-17 13:23:34,308 - Epoch: [365][ 150/ 518] Overall Loss 2.506309 Objective Loss 2.506309 LR 0.000004 Time 0.388074 +2025-05-17 13:23:53,293 - Epoch: [365][ 200/ 518] Overall Loss 2.492978 Objective Loss 2.492978 LR 0.000004 Time 0.385977 +2025-05-17 13:24:12,288 - Epoch: [365][ 250/ 518] Overall Loss 2.498881 Objective Loss 2.498881 LR 0.000004 Time 0.384756 +2025-05-17 13:24:31,278 - Epoch: [365][ 300/ 518] Overall Loss 2.506094 Objective Loss 2.506094 LR 0.000004 Time 0.383926 +2025-05-17 13:24:50,304 - Epoch: [365][ 350/ 518] Overall Loss 2.513697 Objective Loss 2.513697 LR 0.000004 Time 0.383436 +2025-05-17 13:25:09,310 - Epoch: [365][ 400/ 518] Overall Loss 2.510020 Objective Loss 2.510020 LR 0.000004 Time 0.383017 +2025-05-17 13:25:28,491 - Epoch: [365][ 450/ 518] Overall Loss 2.502749 Objective Loss 2.502749 LR 0.000004 Time 0.383080 +2025-05-17 13:25:47,518 - Epoch: [365][ 500/ 518] Overall Loss 2.502381 Objective Loss 2.502381 LR 0.000004 Time 0.382825 +2025-05-17 13:25:54,210 - Epoch: [365][ 518/ 518] Overall Loss 2.502324 Objective Loss 2.502324 LR 0.000004 Time 0.382440 +2025-05-17 13:25:54,292 - --- validate (epoch=365)----------- +2025-05-17 13:25:54,293 - 4952 samples (32 per mini-batch) 
+2025-05-17 13:25:54,296 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:26:09,255 - Epoch: [365][ 50/ 155] Loss 2.774330 mAP 0.501428 +2025-05-17 13:26:24,604 - Epoch: [365][ 100/ 155] Loss 2.748229 mAP 0.502191 +2025-05-17 13:26:40,206 - Epoch: [365][ 150/ 155] Loss 2.748756 mAP 0.503402 +2025-05-17 13:26:43,330 - Epoch: [365][ 155/ 155] Loss 2.749376 mAP 0.502246 +2025-05-17 13:26:43,385 - ==> mAP: 0.50225 Loss: 2.749 + +2025-05-17 13:26:43,395 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:26:43,395 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:26:43,485 - + +2025-05-17 13:26:43,485 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:27:03,554 - Epoch: [366][ 50/ 518] Overall Loss 2.529819 Objective Loss 2.529819 LR 0.000004 Time 0.401310 +2025-05-17 13:27:22,603 - Epoch: [366][ 100/ 518] Overall Loss 2.527123 Objective Loss 2.527123 LR 0.000004 Time 0.391131 +2025-05-17 13:27:41,796 - Epoch: [366][ 150/ 518] Overall Loss 2.519991 Objective Loss 2.519991 LR 0.000004 Time 0.388703 +2025-05-17 13:28:00,785 - Epoch: [366][ 200/ 518] Overall Loss 2.514869 Objective Loss 2.514869 LR 0.000004 Time 0.386464 +2025-05-17 13:28:19,832 - Epoch: [366][ 250/ 518] Overall Loss 2.500791 Objective Loss 2.500791 LR 0.000004 Time 0.385357 +2025-05-17 13:28:38,842 - Epoch: [366][ 300/ 518] Overall Loss 2.506282 Objective Loss 2.506282 LR 0.000004 Time 0.384494 +2025-05-17 13:28:57,891 - Epoch: [366][ 350/ 518] Overall Loss 2.502088 Objective Loss 2.502088 LR 0.000004 Time 0.383989 +2025-05-17 13:29:16,942 - Epoch: [366][ 400/ 518] Overall Loss 2.505438 Objective Loss 2.505438 LR 0.000004 Time 0.383616 +2025-05-17 13:29:35,996 - Epoch: [366][ 450/ 518] Overall Loss 2.506144 Objective Loss 2.506144 LR 0.000004 Time 0.383333 +2025-05-17 13:29:55,062 - Epoch: [366][ 
500/ 518] Overall Loss 2.505003 Objective Loss 2.505003 LR 0.000004 Time 0.383130 +2025-05-17 13:30:01,948 - Epoch: [366][ 518/ 518] Overall Loss 2.504868 Objective Loss 2.504868 LR 0.000004 Time 0.383109 +2025-05-17 13:30:02,027 - --- validate (epoch=366)----------- +2025-05-17 13:30:02,028 - 4952 samples (32 per mini-batch) +2025-05-17 13:30:02,031 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:30:16,838 - Epoch: [366][ 50/ 155] Loss 2.739273 mAP 0.491772 +2025-05-17 13:30:31,698 - Epoch: [366][ 100/ 155] Loss 2.763314 mAP 0.494851 +2025-05-17 13:30:47,188 - Epoch: [366][ 150/ 155] Loss 2.762399 mAP 0.495947 +2025-05-17 13:30:50,227 - Epoch: [366][ 155/ 155] Loss 2.764920 mAP 0.495136 +2025-05-17 13:30:50,287 - ==> mAP: 0.49514 Loss: 2.765 + +2025-05-17 13:30:50,297 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:30:50,297 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:30:50,375 - + +2025-05-17 13:30:50,375 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:31:10,289 - Epoch: [367][ 50/ 518] Overall Loss 2.499222 Objective Loss 2.499222 LR 0.000004 Time 0.398205 +2025-05-17 13:31:29,241 - Epoch: [367][ 100/ 518] Overall Loss 2.476908 Objective Loss 2.476908 LR 0.000004 Time 0.388609 +2025-05-17 13:31:48,398 - Epoch: [367][ 150/ 518] Overall Loss 2.475467 Objective Loss 2.475467 LR 0.000004 Time 0.386780 +2025-05-17 13:32:07,397 - Epoch: [367][ 200/ 518] Overall Loss 2.488348 Objective Loss 2.488348 LR 0.000004 Time 0.385074 +2025-05-17 13:32:26,412 - Epoch: [367][ 250/ 518] Overall Loss 2.486830 Objective Loss 2.486830 LR 0.000004 Time 0.384111 +2025-05-17 13:32:45,445 - Epoch: [367][ 300/ 518] Overall Loss 2.496533 Objective Loss 2.496533 LR 0.000004 Time 0.383534 +2025-05-17 13:33:04,494 - Epoch: [367][ 350/ 518] Overall Loss 2.494349 Objective 
Loss 2.494349 LR 0.000004 Time 0.383166 +2025-05-17 13:33:23,513 - Epoch: [367][ 400/ 518] Overall Loss 2.497540 Objective Loss 2.497540 LR 0.000004 Time 0.382816 +2025-05-17 13:33:42,687 - Epoch: [367][ 450/ 518] Overall Loss 2.496672 Objective Loss 2.496672 LR 0.000004 Time 0.382886 +2025-05-17 13:34:01,710 - Epoch: [367][ 500/ 518] Overall Loss 2.503395 Objective Loss 2.503395 LR 0.000004 Time 0.382642 +2025-05-17 13:34:08,421 - Epoch: [367][ 518/ 518] Overall Loss 2.503170 Objective Loss 2.503170 LR 0.000004 Time 0.382299 +2025-05-17 13:34:08,496 - --- validate (epoch=367)----------- +2025-05-17 13:34:08,497 - 4952 samples (32 per mini-batch) +2025-05-17 13:34:08,500 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:34:23,347 - Epoch: [367][ 50/ 155] Loss 2.763072 mAP 0.505221 +2025-05-17 13:34:38,421 - Epoch: [367][ 100/ 155] Loss 2.740695 mAP 0.504416 +2025-05-17 13:34:53,929 - Epoch: [367][ 150/ 155] Loss 2.746241 mAP 0.504049 +2025-05-17 13:34:56,989 - Epoch: [367][ 155/ 155] Loss 2.747521 mAP 0.504573 +2025-05-17 13:34:57,043 - ==> mAP: 0.50457 Loss: 2.748 + +2025-05-17 13:34:57,053 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:34:57,053 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:34:57,141 - + +2025-05-17 13:34:57,141 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:35:17,435 - Epoch: [368][ 50/ 518] Overall Loss 2.457787 Objective Loss 2.457787 LR 0.000004 Time 0.405825 +2025-05-17 13:35:36,438 - Epoch: [368][ 100/ 518] Overall Loss 2.470006 Objective Loss 2.470006 LR 0.000004 Time 0.392927 +2025-05-17 13:35:55,412 - Epoch: [368][ 150/ 518] Overall Loss 2.480244 Objective Loss 2.480244 LR 0.000004 Time 0.388438 +2025-05-17 13:36:14,425 - Epoch: [368][ 200/ 518] Overall Loss 2.499849 Objective Loss 2.499849 LR 0.000004 Time 0.386388 
+2025-05-17 13:36:33,416 - Epoch: [368][ 250/ 518] Overall Loss 2.504609 Objective Loss 2.504609 LR 0.000004 Time 0.385073 +2025-05-17 13:36:52,426 - Epoch: [368][ 300/ 518] Overall Loss 2.511650 Objective Loss 2.511650 LR 0.000004 Time 0.384255 +2025-05-17 13:37:11,613 - Epoch: [368][ 350/ 518] Overall Loss 2.519819 Objective Loss 2.519819 LR 0.000004 Time 0.384179 +2025-05-17 13:37:30,624 - Epoch: [368][ 400/ 518] Overall Loss 2.517405 Objective Loss 2.517405 LR 0.000004 Time 0.383681 +2025-05-17 13:37:49,678 - Epoch: [368][ 450/ 518] Overall Loss 2.520447 Objective Loss 2.520447 LR 0.000004 Time 0.383389 +2025-05-17 13:38:08,745 - Epoch: [368][ 500/ 518] Overall Loss 2.525861 Objective Loss 2.525861 LR 0.000004 Time 0.383183 +2025-05-17 13:38:15,467 - Epoch: [368][ 518/ 518] Overall Loss 2.521070 Objective Loss 2.521070 LR 0.000004 Time 0.382844 +2025-05-17 13:38:15,544 - --- validate (epoch=368)----------- +2025-05-17 13:38:15,545 - 4952 samples (32 per mini-batch) +2025-05-17 13:38:15,549 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:38:30,438 - Epoch: [368][ 50/ 155] Loss 2.722063 mAP 0.516562 +2025-05-17 13:38:45,614 - Epoch: [368][ 100/ 155] Loss 2.745457 mAP 0.504568 +2025-05-17 13:39:01,339 - Epoch: [368][ 150/ 155] Loss 2.750734 mAP 0.503900 +2025-05-17 13:39:04,287 - Epoch: [368][ 155/ 155] Loss 2.748112 mAP 0.501439 +2025-05-17 13:39:04,346 - ==> mAP: 0.50144 Loss: 2.748 + +2025-05-17 13:39:04,356 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:39:04,356 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:39:04,447 - + +2025-05-17 13:39:04,447 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:39:24,662 - Epoch: [369][ 50/ 518] Overall Loss 2.514698 Objective Loss 2.514698 LR 0.000004 Time 0.404216 +2025-05-17 13:39:43,688 - Epoch: [369][ 
100/ 518] Overall Loss 2.527840 Objective Loss 2.527840 LR 0.000004 Time 0.392360 +2025-05-17 13:40:02,686 - Epoch: [369][ 150/ 518] Overall Loss 2.507180 Objective Loss 2.507180 LR 0.000004 Time 0.388222 +2025-05-17 13:40:21,696 - Epoch: [369][ 200/ 518] Overall Loss 2.485960 Objective Loss 2.485960 LR 0.000004 Time 0.386211 +2025-05-17 13:40:40,771 - Epoch: [369][ 250/ 518] Overall Loss 2.496733 Objective Loss 2.496733 LR 0.000004 Time 0.385265 +2025-05-17 13:40:59,808 - Epoch: [369][ 300/ 518] Overall Loss 2.505091 Objective Loss 2.505091 LR 0.000004 Time 0.384507 +2025-05-17 13:41:19,020 - Epoch: [369][ 350/ 518] Overall Loss 2.502066 Objective Loss 2.502066 LR 0.000004 Time 0.384469 +2025-05-17 13:41:38,074 - Epoch: [369][ 400/ 518] Overall Loss 2.498042 Objective Loss 2.498042 LR 0.000004 Time 0.384043 +2025-05-17 13:41:57,128 - Epoch: [369][ 450/ 518] Overall Loss 2.503213 Objective Loss 2.503213 LR 0.000004 Time 0.383711 +2025-05-17 13:42:16,153 - Epoch: [369][ 500/ 518] Overall Loss 2.499162 Objective Loss 2.499162 LR 0.000004 Time 0.383388 +2025-05-17 13:42:22,865 - Epoch: [369][ 518/ 518] Overall Loss 2.500290 Objective Loss 2.500290 LR 0.000004 Time 0.383023 +2025-05-17 13:42:22,942 - --- validate (epoch=369)----------- +2025-05-17 13:42:22,943 - 4952 samples (32 per mini-batch) +2025-05-17 13:42:22,946 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:42:38,169 - Epoch: [369][ 50/ 155] Loss 2.747757 mAP 0.506271 +2025-05-17 13:42:53,142 - Epoch: [369][ 100/ 155] Loss 2.753172 mAP 0.507056 +2025-05-17 13:43:08,850 - Epoch: [369][ 150/ 155] Loss 2.757666 mAP 0.503787 +2025-05-17 13:43:11,980 - Epoch: [369][ 155/ 155] Loss 2.758560 mAP 0.503632 +2025-05-17 13:43:12,035 - ==> mAP: 0.50363 Loss: 2.759 + +2025-05-17 13:43:12,045 - ==> Best [mAP: 0.506541 vloss: 2.746593 Params: 2177088 on epoch: 329] +2025-05-17 13:43:12,045 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:43:12,139 - + +2025-05-17 13:43:12,139 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:43:32,165 - Epoch: [370][ 50/ 518] Overall Loss 2.495707 Objective Loss 2.495707 LR 0.000004 Time 0.400439 +2025-05-17 13:43:51,133 - Epoch: [370][ 100/ 518] Overall Loss 2.521053 Objective Loss 2.521053 LR 0.000004 Time 0.389883 +2025-05-17 13:44:10,116 - Epoch: [370][ 150/ 518] Overall Loss 2.511184 Objective Loss 2.511184 LR 0.000004 Time 0.386468 +2025-05-17 13:44:29,087 - Epoch: [370][ 200/ 518] Overall Loss 2.512912 Objective Loss 2.512912 LR 0.000004 Time 0.384702 +2025-05-17 13:44:48,069 - Epoch: [370][ 250/ 518] Overall Loss 2.516610 Objective Loss 2.516610 LR 0.000004 Time 0.383685 +2025-05-17 13:45:07,072 - Epoch: [370][ 300/ 518] Overall Loss 2.516249 Objective Loss 2.516249 LR 0.000004 Time 0.383074 +2025-05-17 13:45:26,097 - Epoch: [370][ 350/ 518] Overall Loss 2.514701 Objective Loss 2.514701 LR 0.000004 Time 0.382707 +2025-05-17 13:45:45,292 - Epoch: [370][ 400/ 518] Overall Loss 2.511857 Objective Loss 2.511857 LR 0.000004 Time 0.382854 +2025-05-17 13:46:04,321 - Epoch: [370][ 450/ 518] Overall Loss 2.510227 Objective Loss 2.510227 LR 0.000004 Time 0.382600 +2025-05-17 13:46:23,365 - Epoch: [370][ 500/ 518] Overall Loss 2.514225 Objective Loss 2.514225 LR 0.000004 Time 0.382425 +2025-05-17 13:46:30,042 - Epoch: [370][ 518/ 518] Overall Loss 2.515682 Objective Loss 2.515682 LR 0.000004 Time 0.382025 +2025-05-17 13:46:30,124 - --- validate (epoch=370)----------- +2025-05-17 13:46:30,125 - 4952 samples (32 per mini-batch) +2025-05-17 13:46:30,128 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:46:45,053 - Epoch: [370][ 50/ 155] Loss 2.781435 mAP 0.493919 +2025-05-17 13:47:00,405 - Epoch: [370][ 100/ 155] Loss 2.747624 mAP 0.507727 +2025-05-17 13:47:16,079 - Epoch: 
[370][ 150/ 155] Loss 2.750282 mAP 0.508703 +2025-05-17 13:47:19,212 - Epoch: [370][ 155/ 155] Loss 2.750493 mAP 0.507587 +2025-05-17 13:47:19,272 - ==> mAP: 0.50759 Loss: 2.750 + +2025-05-17 13:47:19,282 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 13:47:19,283 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:47:19,408 - + +2025-05-17 13:47:19,408 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:47:39,657 - Epoch: [371][ 50/ 518] Overall Loss 2.510595 Objective Loss 2.510595 LR 0.000004 Time 0.404916 +2025-05-17 13:47:58,683 - Epoch: [371][ 100/ 518] Overall Loss 2.531286 Objective Loss 2.531286 LR 0.000004 Time 0.392706 +2025-05-17 13:48:17,682 - Epoch: [371][ 150/ 518] Overall Loss 2.529289 Objective Loss 2.529289 LR 0.000004 Time 0.388458 +2025-05-17 13:48:36,666 - Epoch: [371][ 200/ 518] Overall Loss 2.533671 Objective Loss 2.533671 LR 0.000004 Time 0.386255 +2025-05-17 13:48:55,666 - Epoch: [371][ 250/ 518] Overall Loss 2.528195 Objective Loss 2.528195 LR 0.000004 Time 0.385000 +2025-05-17 13:49:14,693 - Epoch: [371][ 300/ 518] Overall Loss 2.525349 Objective Loss 2.525349 LR 0.000004 Time 0.384253 +2025-05-17 13:49:33,858 - Epoch: [371][ 350/ 518] Overall Loss 2.529687 Objective Loss 2.529687 LR 0.000004 Time 0.384113 +2025-05-17 13:49:52,922 - Epoch: [371][ 400/ 518] Overall Loss 2.520715 Objective Loss 2.520715 LR 0.000004 Time 0.383757 +2025-05-17 13:50:11,988 - Epoch: [371][ 450/ 518] Overall Loss 2.516611 Objective Loss 2.516611 LR 0.000004 Time 0.383485 +2025-05-17 13:50:30,988 - Epoch: [371][ 500/ 518] Overall Loss 2.508715 Objective Loss 2.508715 LR 0.000004 Time 0.383134 +2025-05-17 13:50:37,718 - Epoch: [371][ 518/ 518] Overall Loss 2.508213 Objective Loss 2.508213 LR 0.000004 Time 0.382812 +2025-05-17 13:50:37,798 - --- validate (epoch=371)----------- +2025-05-17 13:50:37,798 - 4952 samples (32 per mini-batch) 
+2025-05-17 13:50:37,802 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:50:52,860 - Epoch: [371][ 50/ 155] Loss 2.698238 mAP 0.528571 +2025-05-17 13:51:08,216 - Epoch: [371][ 100/ 155] Loss 2.736179 mAP 0.507033 +2025-05-17 13:51:24,012 - Epoch: [371][ 150/ 155] Loss 2.735473 mAP 0.504930 +2025-05-17 13:51:27,002 - Epoch: [371][ 155/ 155] Loss 2.738199 mAP 0.504191 +2025-05-17 13:51:27,057 - ==> mAP: 0.50419 Loss: 2.738 + +2025-05-17 13:51:27,067 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 13:51:27,067 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:51:27,157 - + +2025-05-17 13:51:27,157 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:51:47,492 - Epoch: [372][ 50/ 518] Overall Loss 2.493374 Objective Loss 2.493374 LR 0.000004 Time 0.406631 +2025-05-17 13:52:06,490 - Epoch: [372][ 100/ 518] Overall Loss 2.491672 Objective Loss 2.491672 LR 0.000004 Time 0.393277 +2025-05-17 13:52:25,499 - Epoch: [372][ 150/ 518] Overall Loss 2.490290 Objective Loss 2.490290 LR 0.000004 Time 0.388902 +2025-05-17 13:52:44,499 - Epoch: [372][ 200/ 518] Overall Loss 2.481993 Objective Loss 2.481993 LR 0.000004 Time 0.386671 +2025-05-17 13:53:03,500 - Epoch: [372][ 250/ 518] Overall Loss 2.490223 Objective Loss 2.490223 LR 0.000004 Time 0.385336 +2025-05-17 13:53:22,512 - Epoch: [372][ 300/ 518] Overall Loss 2.493264 Objective Loss 2.493264 LR 0.000004 Time 0.384482 +2025-05-17 13:53:41,718 - Epoch: [372][ 350/ 518] Overall Loss 2.488873 Objective Loss 2.488873 LR 0.000004 Time 0.384429 +2025-05-17 13:54:00,734 - Epoch: [372][ 400/ 518] Overall Loss 2.494609 Objective Loss 2.494609 LR 0.000004 Time 0.383911 +2025-05-17 13:54:19,751 - Epoch: [372][ 450/ 518] Overall Loss 2.494397 Objective Loss 2.494397 LR 0.000004 Time 0.383511 +2025-05-17 13:54:38,777 - Epoch: [372][ 
500/ 518] Overall Loss 2.494842 Objective Loss 2.494842 LR 0.000004 Time 0.383209 +2025-05-17 13:54:45,483 - Epoch: [372][ 518/ 518] Overall Loss 2.497135 Objective Loss 2.497135 LR 0.000004 Time 0.382837 +2025-05-17 13:54:45,558 - --- validate (epoch=372)----------- +2025-05-17 13:54:45,559 - 4952 samples (32 per mini-batch) +2025-05-17 13:54:45,563 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:55:00,636 - Epoch: [372][ 50/ 155] Loss 2.729169 mAP 0.502042 +2025-05-17 13:55:15,929 - Epoch: [372][ 100/ 155] Loss 2.766578 mAP 0.501858 +2025-05-17 13:55:31,646 - Epoch: [372][ 150/ 155] Loss 2.744994 mAP 0.498231 +2025-05-17 13:55:34,608 - Epoch: [372][ 155/ 155] Loss 2.743002 mAP 0.499076 +2025-05-17 13:55:34,663 - ==> mAP: 0.49908 Loss: 2.743 + +2025-05-17 13:55:34,673 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 13:55:34,673 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:55:34,763 - + +2025-05-17 13:55:34,763 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 13:55:54,850 - Epoch: [373][ 50/ 518] Overall Loss 2.503330 Objective Loss 2.503330 LR 0.000004 Time 0.401667 +2025-05-17 13:56:14,042 - Epoch: [373][ 100/ 518] Overall Loss 2.466493 Objective Loss 2.466493 LR 0.000004 Time 0.392748 +2025-05-17 13:56:33,038 - Epoch: [373][ 150/ 518] Overall Loss 2.478411 Objective Loss 2.478411 LR 0.000004 Time 0.388462 +2025-05-17 13:56:52,097 - Epoch: [373][ 200/ 518] Overall Loss 2.496936 Objective Loss 2.496936 LR 0.000004 Time 0.386636 +2025-05-17 13:57:11,136 - Epoch: [373][ 250/ 518] Overall Loss 2.505181 Objective Loss 2.505181 LR 0.000004 Time 0.385465 +2025-05-17 13:57:30,164 - Epoch: [373][ 300/ 518] Overall Loss 2.498983 Objective Loss 2.498983 LR 0.000004 Time 0.384642 +2025-05-17 13:57:49,215 - Epoch: [373][ 350/ 518] Overall Loss 2.501397 Objective 
Loss 2.501397 LR 0.000004 Time 0.384124 +2025-05-17 13:58:08,362 - Epoch: [373][ 400/ 518] Overall Loss 2.503798 Objective Loss 2.503798 LR 0.000004 Time 0.383972 +2025-05-17 13:58:27,374 - Epoch: [373][ 450/ 518] Overall Loss 2.503355 Objective Loss 2.503355 LR 0.000004 Time 0.383555 +2025-05-17 13:58:46,392 - Epoch: [373][ 500/ 518] Overall Loss 2.494433 Objective Loss 2.494433 LR 0.000004 Time 0.383233 +2025-05-17 13:58:53,105 - Epoch: [373][ 518/ 518] Overall Loss 2.492555 Objective Loss 2.492555 LR 0.000004 Time 0.382874 +2025-05-17 13:58:53,180 - --- validate (epoch=373)----------- +2025-05-17 13:58:53,181 - 4952 samples (32 per mini-batch) +2025-05-17 13:58:53,184 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 13:59:07,968 - Epoch: [373][ 50/ 155] Loss 2.770727 mAP 0.500063 +2025-05-17 13:59:22,978 - Epoch: [373][ 100/ 155] Loss 2.738145 mAP 0.506915 +2025-05-17 13:59:38,473 - Epoch: [373][ 150/ 155] Loss 2.747575 mAP 0.505173 +2025-05-17 13:59:41,383 - Epoch: [373][ 155/ 155] Loss 2.749207 mAP 0.505956 +2025-05-17 13:59:41,437 - ==> mAP: 0.50596 Loss: 2.749 + +2025-05-17 13:59:41,446 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 13:59:41,446 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 13:59:41,533 - + +2025-05-17 13:59:41,534 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:00:01,760 - Epoch: [374][ 50/ 518] Overall Loss 2.537716 Objective Loss 2.537716 LR 0.000004 Time 0.404443 +2025-05-17 14:00:20,734 - Epoch: [374][ 100/ 518] Overall Loss 2.503765 Objective Loss 2.503765 LR 0.000004 Time 0.391952 +2025-05-17 14:00:39,705 - Epoch: [374][ 150/ 518] Overall Loss 2.501719 Objective Loss 2.501719 LR 0.000004 Time 0.387769 +2025-05-17 14:00:58,692 - Epoch: [374][ 200/ 518] Overall Loss 2.497543 Objective Loss 2.497543 LR 0.000004 Time 0.385756 
+2025-05-17 14:01:17,710 - Epoch: [374][ 250/ 518] Overall Loss 2.498794 Objective Loss 2.498794 LR 0.000004 Time 0.384669 +2025-05-17 14:01:36,748 - Epoch: [374][ 300/ 518] Overall Loss 2.504507 Objective Loss 2.504507 LR 0.000004 Time 0.384018 +2025-05-17 14:01:55,770 - Epoch: [374][ 350/ 518] Overall Loss 2.498944 Objective Loss 2.498944 LR 0.000004 Time 0.383501 +2025-05-17 14:02:14,976 - Epoch: [374][ 400/ 518] Overall Loss 2.498500 Objective Loss 2.498500 LR 0.000004 Time 0.383578 +2025-05-17 14:02:34,002 - Epoch: [374][ 450/ 518] Overall Loss 2.497405 Objective Loss 2.497405 LR 0.000004 Time 0.383237 +2025-05-17 14:02:53,013 - Epoch: [374][ 500/ 518] Overall Loss 2.495429 Objective Loss 2.495429 LR 0.000004 Time 0.382932 +2025-05-17 14:02:59,730 - Epoch: [374][ 518/ 518] Overall Loss 2.494555 Objective Loss 2.494555 LR 0.000004 Time 0.382592 +2025-05-17 14:02:59,807 - --- validate (epoch=374)----------- +2025-05-17 14:02:59,808 - 4952 samples (32 per mini-batch) +2025-05-17 14:02:59,811 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:03:14,797 - Epoch: [374][ 50/ 155] Loss 2.708972 mAP 0.524606 +2025-05-17 14:03:30,130 - Epoch: [374][ 100/ 155] Loss 2.741422 mAP 0.502837 +2025-05-17 14:03:45,861 - Epoch: [374][ 150/ 155] Loss 2.746129 mAP 0.506907 +2025-05-17 14:03:48,977 - Epoch: [374][ 155/ 155] Loss 2.745287 mAP 0.505648 +2025-05-17 14:03:49,031 - ==> mAP: 0.50565 Loss: 2.745 + +2025-05-17 14:03:49,042 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:03:49,042 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:03:49,132 - + +2025-05-17 14:03:49,132 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:04:09,471 - Epoch: [375][ 50/ 518] Overall Loss 2.498599 Objective Loss 2.498599 LR 0.000004 Time 0.406702 +2025-05-17 14:04:28,481 - Epoch: [375][ 
100/ 518] Overall Loss 2.528551 Objective Loss 2.528551 LR 0.000004 Time 0.393445 +2025-05-17 14:04:47,465 - Epoch: [375][ 150/ 518] Overall Loss 2.504854 Objective Loss 2.504854 LR 0.000004 Time 0.388850 +2025-05-17 14:05:06,496 - Epoch: [375][ 200/ 518] Overall Loss 2.512766 Objective Loss 2.512766 LR 0.000004 Time 0.386787 +2025-05-17 14:05:25,528 - Epoch: [375][ 250/ 518] Overall Loss 2.527904 Objective Loss 2.527904 LR 0.000004 Time 0.385554 +2025-05-17 14:05:44,576 - Epoch: [375][ 300/ 518] Overall Loss 2.528619 Objective Loss 2.528619 LR 0.000004 Time 0.384787 +2025-05-17 14:06:03,623 - Epoch: [375][ 350/ 518] Overall Loss 2.520499 Objective Loss 2.520499 LR 0.000004 Time 0.384233 +2025-05-17 14:06:22,677 - Epoch: [375][ 400/ 518] Overall Loss 2.521205 Objective Loss 2.521205 LR 0.000004 Time 0.383838 +2025-05-17 14:06:41,890 - Epoch: [375][ 450/ 518] Overall Loss 2.520638 Objective Loss 2.520638 LR 0.000004 Time 0.383884 +2025-05-17 14:07:00,924 - Epoch: [375][ 500/ 518] Overall Loss 2.519021 Objective Loss 2.519021 LR 0.000004 Time 0.383562 +2025-05-17 14:07:07,661 - Epoch: [375][ 518/ 518] Overall Loss 2.521259 Objective Loss 2.521259 LR 0.000004 Time 0.383239 +2025-05-17 14:07:07,740 - --- validate (epoch=375)----------- +2025-05-17 14:07:07,741 - 4952 samples (32 per mini-batch) +2025-05-17 14:07:07,744 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:07:22,790 - Epoch: [375][ 50/ 155] Loss 2.731304 mAP 0.497013 +2025-05-17 14:07:38,199 - Epoch: [375][ 100/ 155] Loss 2.751916 mAP 0.495430 +2025-05-17 14:07:53,943 - Epoch: [375][ 150/ 155] Loss 2.750709 mAP 0.504204 +2025-05-17 14:07:57,074 - Epoch: [375][ 155/ 155] Loss 2.751258 mAP 0.501519 +2025-05-17 14:07:57,129 - ==> mAP: 0.50152 Loss: 2.751 + +2025-05-17 14:07:57,139 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:07:57,140 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:07:57,230 - + +2025-05-17 14:07:57,230 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:08:17,344 - Epoch: [376][ 50/ 518] Overall Loss 2.481788 Objective Loss 2.481788 LR 0.000004 Time 0.402200 +2025-05-17 14:08:36,452 - Epoch: [376][ 100/ 518] Overall Loss 2.470677 Objective Loss 2.470677 LR 0.000004 Time 0.392176 +2025-05-17 14:08:55,429 - Epoch: [376][ 150/ 518] Overall Loss 2.478960 Objective Loss 2.478960 LR 0.000004 Time 0.387951 +2025-05-17 14:09:14,451 - Epoch: [376][ 200/ 518] Overall Loss 2.510630 Objective Loss 2.510630 LR 0.000004 Time 0.386072 +2025-05-17 14:09:33,444 - Epoch: [376][ 250/ 518] Overall Loss 2.513531 Objective Loss 2.513531 LR 0.000004 Time 0.384822 +2025-05-17 14:09:52,453 - Epoch: [376][ 300/ 518] Overall Loss 2.510778 Objective Loss 2.510778 LR 0.000004 Time 0.384044 +2025-05-17 14:10:11,476 - Epoch: [376][ 350/ 518] Overall Loss 2.505267 Objective Loss 2.505267 LR 0.000004 Time 0.383529 +2025-05-17 14:10:30,463 - Epoch: [376][ 400/ 518] Overall Loss 2.505268 Objective Loss 2.505268 LR 0.000004 Time 0.383052 +2025-05-17 14:10:49,447 - Epoch: [376][ 450/ 518] Overall Loss 2.498275 Objective Loss 2.498275 LR 0.000004 Time 0.382677 +2025-05-17 14:11:08,609 - Epoch: [376][ 500/ 518] Overall Loss 2.501260 Objective Loss 2.501260 LR 0.000004 Time 0.382728 +2025-05-17 14:11:15,331 - Epoch: [376][ 518/ 518] Overall Loss 2.497324 Objective Loss 2.497324 LR 0.000004 Time 0.382405 +2025-05-17 14:11:15,408 - --- validate (epoch=376)----------- +2025-05-17 14:11:15,409 - 4952 samples (32 per mini-batch) +2025-05-17 14:11:15,412 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:11:30,290 - Epoch: [376][ 50/ 155] Loss 2.708126 mAP 0.508536 +2025-05-17 14:11:45,346 - Epoch: [376][ 100/ 155] Loss 2.760537 mAP 0.503099 +2025-05-17 14:12:01,041 - Epoch: 
[376][ 150/ 155] Loss 2.752593 mAP 0.501763 +2025-05-17 14:12:04,149 - Epoch: [376][ 155/ 155] Loss 2.747039 mAP 0.501704 +2025-05-17 14:12:04,209 - ==> mAP: 0.50170 Loss: 2.747 + +2025-05-17 14:12:04,219 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:12:04,219 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:12:04,309 - + +2025-05-17 14:12:04,309 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:12:24,493 - Epoch: [377][ 50/ 518] Overall Loss 2.508352 Objective Loss 2.508352 LR 0.000004 Time 0.403601 +2025-05-17 14:12:43,658 - Epoch: [377][ 100/ 518] Overall Loss 2.535128 Objective Loss 2.535128 LR 0.000004 Time 0.393441 +2025-05-17 14:13:02,615 - Epoch: [377][ 150/ 518] Overall Loss 2.522354 Objective Loss 2.522354 LR 0.000004 Time 0.388667 +2025-05-17 14:13:21,581 - Epoch: [377][ 200/ 518] Overall Loss 2.509490 Objective Loss 2.509490 LR 0.000004 Time 0.386322 +2025-05-17 14:13:40,594 - Epoch: [377][ 250/ 518] Overall Loss 2.505612 Objective Loss 2.505612 LR 0.000004 Time 0.385106 +2025-05-17 14:13:59,600 - Epoch: [377][ 300/ 518] Overall Loss 2.508695 Objective Loss 2.508695 LR 0.000004 Time 0.384269 +2025-05-17 14:14:18,573 - Epoch: [377][ 350/ 518] Overall Loss 2.514181 Objective Loss 2.514181 LR 0.000004 Time 0.383580 +2025-05-17 14:14:37,563 - Epoch: [377][ 400/ 518] Overall Loss 2.514441 Objective Loss 2.514441 LR 0.000004 Time 0.383104 +2025-05-17 14:14:56,590 - Epoch: [377][ 450/ 518] Overall Loss 2.510187 Objective Loss 2.510187 LR 0.000004 Time 0.382819 +2025-05-17 14:15:15,813 - Epoch: [377][ 500/ 518] Overall Loss 2.503492 Objective Loss 2.503492 LR 0.000004 Time 0.382980 +2025-05-17 14:15:22,526 - Epoch: [377][ 518/ 518] Overall Loss 2.502309 Objective Loss 2.502309 LR 0.000004 Time 0.382631 +2025-05-17 14:15:22,605 - --- validate (epoch=377)----------- +2025-05-17 14:15:22,606 - 4952 samples (32 per mini-batch) 
+2025-05-17 14:15:22,609 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:15:37,475 - Epoch: [377][ 50/ 155] Loss 2.710675 mAP 0.513511 +2025-05-17 14:15:52,323 - Epoch: [377][ 100/ 155] Loss 2.740945 mAP 0.502890 +2025-05-17 14:16:07,901 - Epoch: [377][ 150/ 155] Loss 2.742078 mAP 0.503439 +2025-05-17 14:16:11,044 - Epoch: [377][ 155/ 155] Loss 2.742985 mAP 0.501860 +2025-05-17 14:16:11,098 - ==> mAP: 0.50186 Loss: 2.743 + +2025-05-17 14:16:11,108 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:16:11,108 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:16:11,198 - + +2025-05-17 14:16:11,199 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:16:31,215 - Epoch: [378][ 50/ 518] Overall Loss 2.523723 Objective Loss 2.523723 LR 0.000004 Time 0.400243 +2025-05-17 14:16:50,152 - Epoch: [378][ 100/ 518] Overall Loss 2.477572 Objective Loss 2.477572 LR 0.000004 Time 0.389483 +2025-05-17 14:17:09,336 - Epoch: [378][ 150/ 518] Overall Loss 2.496853 Objective Loss 2.496853 LR 0.000004 Time 0.387545 +2025-05-17 14:17:28,341 - Epoch: [378][ 200/ 518] Overall Loss 2.494740 Objective Loss 2.494740 LR 0.000004 Time 0.385676 +2025-05-17 14:17:47,346 - Epoch: [378][ 250/ 518] Overall Loss 2.498429 Objective Loss 2.498429 LR 0.000004 Time 0.384554 +2025-05-17 14:18:06,361 - Epoch: [378][ 300/ 518] Overall Loss 2.503261 Objective Loss 2.503261 LR 0.000004 Time 0.383843 +2025-05-17 14:18:25,406 - Epoch: [378][ 350/ 518] Overall Loss 2.497978 Objective Loss 2.497978 LR 0.000004 Time 0.383419 +2025-05-17 14:18:44,441 - Epoch: [378][ 400/ 518] Overall Loss 2.507941 Objective Loss 2.507941 LR 0.000004 Time 0.383076 +2025-05-17 14:19:03,416 - Epoch: [378][ 450/ 518] Overall Loss 2.508930 Objective Loss 2.508930 LR 0.000004 Time 0.382677 +2025-05-17 14:19:22,400 - Epoch: [378][ 
500/ 518] Overall Loss 2.501243 Objective Loss 2.501243 LR 0.000004 Time 0.382374 +2025-05-17 14:19:29,254 - Epoch: [378][ 518/ 518] Overall Loss 2.503589 Objective Loss 2.503589 LR 0.000004 Time 0.382317 +2025-05-17 14:19:29,323 - --- validate (epoch=378)----------- +2025-05-17 14:19:29,324 - 4952 samples (32 per mini-batch) +2025-05-17 14:19:29,327 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:19:44,287 - Epoch: [378][ 50/ 155] Loss 2.734998 mAP 0.504839 +2025-05-17 14:19:59,204 - Epoch: [378][ 100/ 155] Loss 2.747438 mAP 0.490020 +2025-05-17 14:20:14,786 - Epoch: [378][ 150/ 155] Loss 2.744270 mAP 0.497490 +2025-05-17 14:20:17,918 - Epoch: [378][ 155/ 155] Loss 2.747960 mAP 0.497442 +2025-05-17 14:20:17,973 - ==> mAP: 0.49744 Loss: 2.748 + +2025-05-17 14:20:17,983 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:20:17,983 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:20:18,074 - + +2025-05-17 14:20:18,074 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:20:38,179 - Epoch: [379][ 50/ 518] Overall Loss 2.534952 Objective Loss 2.534952 LR 0.000004 Time 0.402025 +2025-05-17 14:20:57,122 - Epoch: [379][ 100/ 518] Overall Loss 2.508850 Objective Loss 2.508850 LR 0.000004 Time 0.390425 +2025-05-17 14:21:16,090 - Epoch: [379][ 150/ 518] Overall Loss 2.495443 Objective Loss 2.495443 LR 0.000004 Time 0.386733 +2025-05-17 14:21:35,258 - Epoch: [379][ 200/ 518] Overall Loss 2.501297 Objective Loss 2.501297 LR 0.000004 Time 0.385881 +2025-05-17 14:21:54,255 - Epoch: [379][ 250/ 518] Overall Loss 2.506313 Objective Loss 2.506313 LR 0.000004 Time 0.384690 +2025-05-17 14:22:13,277 - Epoch: [379][ 300/ 518] Overall Loss 2.497599 Objective Loss 2.497599 LR 0.000004 Time 0.383978 +2025-05-17 14:22:32,312 - Epoch: [379][ 350/ 518] Overall Loss 2.494792 Objective 
Loss 2.494792 LR 0.000004 Time 0.383505 +2025-05-17 14:22:51,367 - Epoch: [379][ 400/ 518] Overall Loss 2.493775 Objective Loss 2.493775 LR 0.000004 Time 0.383203 +2025-05-17 14:23:10,399 - Epoch: [379][ 450/ 518] Overall Loss 2.497203 Objective Loss 2.497203 LR 0.000004 Time 0.382915 +2025-05-17 14:23:29,429 - Epoch: [379][ 500/ 518] Overall Loss 2.498367 Objective Loss 2.498367 LR 0.000004 Time 0.382683 +2025-05-17 14:23:36,131 - Epoch: [379][ 518/ 518] Overall Loss 2.499918 Objective Loss 2.499918 LR 0.000004 Time 0.382322 +2025-05-17 14:23:36,205 - --- validate (epoch=379)----------- +2025-05-17 14:23:36,206 - 4952 samples (32 per mini-batch) +2025-05-17 14:23:36,210 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:23:51,270 - Epoch: [379][ 50/ 155] Loss 2.788383 mAP 0.475721 +2025-05-17 14:24:06,393 - Epoch: [379][ 100/ 155] Loss 2.766612 mAP 0.493821 +2025-05-17 14:24:22,214 - Epoch: [379][ 150/ 155] Loss 2.752337 mAP 0.501597 +2025-05-17 14:24:25,360 - Epoch: [379][ 155/ 155] Loss 2.758666 mAP 0.499851 +2025-05-17 14:24:25,416 - ==> mAP: 0.49985 Loss: 2.759 + +2025-05-17 14:24:25,426 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:24:25,426 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:24:25,516 - + +2025-05-17 14:24:25,517 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:24:45,489 - Epoch: [380][ 50/ 518] Overall Loss 2.505713 Objective Loss 2.505713 LR 0.000004 Time 0.399377 +2025-05-17 14:25:04,463 - Epoch: [380][ 100/ 518] Overall Loss 2.515977 Objective Loss 2.515977 LR 0.000004 Time 0.389415 +2025-05-17 14:25:23,504 - Epoch: [380][ 150/ 518] Overall Loss 2.519727 Objective Loss 2.519727 LR 0.000004 Time 0.386548 +2025-05-17 14:25:42,542 - Epoch: [380][ 200/ 518] Overall Loss 2.532883 Objective Loss 2.532883 LR 0.000004 Time 0.385094 
+2025-05-17 14:26:01,518 - Epoch: [380][ 250/ 518] Overall Loss 2.521430 Objective Loss 2.521430 LR 0.000004 Time 0.383976 +2025-05-17 14:26:20,664 - Epoch: [380][ 300/ 518] Overall Loss 2.515215 Objective Loss 2.515215 LR 0.000004 Time 0.383794 +2025-05-17 14:26:39,695 - Epoch: [380][ 350/ 518] Overall Loss 2.515439 Objective Loss 2.515439 LR 0.000004 Time 0.383340 +2025-05-17 14:26:58,728 - Epoch: [380][ 400/ 518] Overall Loss 2.513744 Objective Loss 2.513744 LR 0.000004 Time 0.383000 +2025-05-17 14:27:17,711 - Epoch: [380][ 450/ 518] Overall Loss 2.504520 Objective Loss 2.504520 LR 0.000004 Time 0.382627 +2025-05-17 14:27:36,706 - Epoch: [380][ 500/ 518] Overall Loss 2.506889 Objective Loss 2.506889 LR 0.000004 Time 0.382352 +2025-05-17 14:27:43,408 - Epoch: [380][ 518/ 518] Overall Loss 2.509735 Objective Loss 2.509735 LR 0.000004 Time 0.382003 +2025-05-17 14:27:43,485 - --- validate (epoch=380)----------- +2025-05-17 14:27:43,486 - 4952 samples (32 per mini-batch) +2025-05-17 14:27:43,490 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:27:58,195 - Epoch: [380][ 50/ 155] Loss 2.752967 mAP 0.512084 +2025-05-17 14:28:13,149 - Epoch: [380][ 100/ 155] Loss 2.735675 mAP 0.512508 +2025-05-17 14:28:28,635 - Epoch: [380][ 150/ 155] Loss 2.744972 mAP 0.506489 +2025-05-17 14:28:31,524 - Epoch: [380][ 155/ 155] Loss 2.748260 mAP 0.504828 +2025-05-17 14:28:31,577 - ==> mAP: 0.50483 Loss: 2.748 + +2025-05-17 14:28:31,586 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:28:31,587 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:28:31,674 - + +2025-05-17 14:28:31,675 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:28:51,774 - Epoch: [381][ 50/ 518] Overall Loss 2.531726 Objective Loss 2.531726 LR 0.000004 Time 0.401933 +2025-05-17 14:29:10,714 - Epoch: [381][ 
100/ 518] Overall Loss 2.489990 Objective Loss 2.489990 LR 0.000004 Time 0.390347 +2025-05-17 14:29:29,747 - Epoch: [381][ 150/ 518] Overall Loss 2.490588 Objective Loss 2.490588 LR 0.000004 Time 0.387113 +2025-05-17 14:29:48,760 - Epoch: [381][ 200/ 518] Overall Loss 2.490425 Objective Loss 2.490425 LR 0.000004 Time 0.385394 +2025-05-17 14:30:07,760 - Epoch: [381][ 250/ 518] Overall Loss 2.501762 Objective Loss 2.501762 LR 0.000004 Time 0.384313 +2025-05-17 14:30:26,989 - Epoch: [381][ 300/ 518] Overall Loss 2.501764 Objective Loss 2.501764 LR 0.000004 Time 0.384353 +2025-05-17 14:30:46,016 - Epoch: [381][ 350/ 518] Overall Loss 2.502585 Objective Loss 2.502585 LR 0.000004 Time 0.383807 +2025-05-17 14:31:05,004 - Epoch: [381][ 400/ 518] Overall Loss 2.503277 Objective Loss 2.503277 LR 0.000004 Time 0.383298 +2025-05-17 14:31:24,041 - Epoch: [381][ 450/ 518] Overall Loss 2.503312 Objective Loss 2.503312 LR 0.000004 Time 0.383010 +2025-05-17 14:31:43,070 - Epoch: [381][ 500/ 518] Overall Loss 2.507953 Objective Loss 2.507953 LR 0.000004 Time 0.382765 +2025-05-17 14:31:49,794 - Epoch: [381][ 518/ 518] Overall Loss 2.502331 Objective Loss 2.502331 LR 0.000004 Time 0.382444 +2025-05-17 14:31:49,870 - --- validate (epoch=381)----------- +2025-05-17 14:31:49,870 - 4952 samples (32 per mini-batch) +2025-05-17 14:31:49,874 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:32:04,817 - Epoch: [381][ 50/ 155] Loss 2.729946 mAP 0.514064 +2025-05-17 14:32:19,634 - Epoch: [381][ 100/ 155] Loss 2.755088 mAP 0.495335 +2025-05-17 14:32:35,137 - Epoch: [381][ 150/ 155] Loss 2.736370 mAP 0.498144 +2025-05-17 14:32:38,184 - Epoch: [381][ 155/ 155] Loss 2.736633 mAP 0.497816 +2025-05-17 14:32:38,237 - ==> mAP: 0.49782 Loss: 2.737 + +2025-05-17 14:32:38,247 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:32:38,247 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:32:38,334 - + +2025-05-17 14:32:38,334 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:32:58,295 - Epoch: [382][ 50/ 518] Overall Loss 2.467951 Objective Loss 2.467951 LR 0.000004 Time 0.399143 +2025-05-17 14:33:17,271 - Epoch: [382][ 100/ 518] Overall Loss 2.460165 Objective Loss 2.460165 LR 0.000004 Time 0.389325 +2025-05-17 14:33:36,242 - Epoch: [382][ 150/ 518] Overall Loss 2.474960 Objective Loss 2.474960 LR 0.000004 Time 0.386015 +2025-05-17 14:33:55,212 - Epoch: [382][ 200/ 518] Overall Loss 2.480554 Objective Loss 2.480554 LR 0.000004 Time 0.384355 +2025-05-17 14:34:14,208 - Epoch: [382][ 250/ 518] Overall Loss 2.484840 Objective Loss 2.484840 LR 0.000004 Time 0.383462 +2025-05-17 14:34:33,182 - Epoch: [382][ 300/ 518] Overall Loss 2.486039 Objective Loss 2.486039 LR 0.000004 Time 0.382796 +2025-05-17 14:34:52,194 - Epoch: [382][ 350/ 518] Overall Loss 2.486882 Objective Loss 2.486882 LR 0.000004 Time 0.382427 +2025-05-17 14:35:11,405 - Epoch: [382][ 400/ 518] Overall Loss 2.491548 Objective Loss 2.491548 LR 0.000004 Time 0.382649 +2025-05-17 14:35:30,418 - Epoch: [382][ 450/ 518] Overall Loss 2.497168 Objective Loss 2.497168 LR 0.000004 Time 0.382380 +2025-05-17 14:35:49,431 - Epoch: [382][ 500/ 518] Overall Loss 2.497378 Objective Loss 2.497378 LR 0.000004 Time 0.382166 +2025-05-17 14:35:56,168 - Epoch: [382][ 518/ 518] Overall Loss 2.495415 Objective Loss 2.495415 LR 0.000004 Time 0.381892 +2025-05-17 14:35:56,238 - --- validate (epoch=382)----------- +2025-05-17 14:35:56,239 - 4952 samples (32 per mini-batch) +2025-05-17 14:35:56,242 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:36:11,058 - Epoch: [382][ 50/ 155] Loss 2.725294 mAP 0.495502 +2025-05-17 14:36:26,142 - Epoch: [382][ 100/ 155] Loss 2.737526 mAP 0.504023 +2025-05-17 14:36:41,992 - Epoch: 
[382][ 150/ 155] Loss 2.748374 mAP 0.501726 +2025-05-17 14:36:44,995 - Epoch: [382][ 155/ 155] Loss 2.742571 mAP 0.503086 +2025-05-17 14:36:45,054 - ==> mAP: 0.50309 Loss: 2.743 + +2025-05-17 14:36:45,063 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:36:45,063 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:36:45,153 - + +2025-05-17 14:36:45,154 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:37:05,121 - Epoch: [383][ 50/ 518] Overall Loss 2.468860 Objective Loss 2.468860 LR 0.000004 Time 0.399274 +2025-05-17 14:37:24,359 - Epoch: [383][ 100/ 518] Overall Loss 2.458170 Objective Loss 2.458170 LR 0.000004 Time 0.392011 +2025-05-17 14:37:43,409 - Epoch: [383][ 150/ 518] Overall Loss 2.487918 Objective Loss 2.487918 LR 0.000004 Time 0.388331 +2025-05-17 14:38:02,467 - Epoch: [383][ 200/ 518] Overall Loss 2.503245 Objective Loss 2.503245 LR 0.000004 Time 0.386536 +2025-05-17 14:38:21,509 - Epoch: [383][ 250/ 518] Overall Loss 2.498010 Objective Loss 2.498010 LR 0.000004 Time 0.385394 +2025-05-17 14:38:40,528 - Epoch: [383][ 300/ 518] Overall Loss 2.499793 Objective Loss 2.499793 LR 0.000004 Time 0.384553 +2025-05-17 14:38:59,686 - Epoch: [383][ 350/ 518] Overall Loss 2.500732 Objective Loss 2.500732 LR 0.000004 Time 0.384352 +2025-05-17 14:39:18,736 - Epoch: [383][ 400/ 518] Overall Loss 2.495677 Objective Loss 2.495677 LR 0.000004 Time 0.383929 +2025-05-17 14:39:37,775 - Epoch: [383][ 450/ 518] Overall Loss 2.496119 Objective Loss 2.496119 LR 0.000004 Time 0.383579 +2025-05-17 14:39:56,787 - Epoch: [383][ 500/ 518] Overall Loss 2.503363 Objective Loss 2.503363 LR 0.000004 Time 0.383242 +2025-05-17 14:40:03,529 - Epoch: [383][ 518/ 518] Overall Loss 2.501593 Objective Loss 2.501593 LR 0.000004 Time 0.382939 +2025-05-17 14:40:03,602 - --- validate (epoch=383)----------- +2025-05-17 14:40:03,603 - 4952 samples (32 per mini-batch) 
+2025-05-17 14:40:03,607 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:40:18,420 - Epoch: [383][ 50/ 155] Loss 2.747631 mAP 0.491207 +2025-05-17 14:40:33,443 - Epoch: [383][ 100/ 155] Loss 2.741209 mAP 0.495080 +2025-05-17 14:40:49,022 - Epoch: [383][ 150/ 155] Loss 2.731199 mAP 0.497849 +2025-05-17 14:40:51,948 - Epoch: [383][ 155/ 155] Loss 2.736143 mAP 0.497624 +2025-05-17 14:40:52,000 - ==> mAP: 0.49762 Loss: 2.736 + +2025-05-17 14:40:52,010 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:40:52,010 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:40:52,098 - + +2025-05-17 14:40:52,098 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:41:12,280 - Epoch: [384][ 50/ 518] Overall Loss 2.482815 Objective Loss 2.482815 LR 0.000004 Time 0.403571 +2025-05-17 14:41:31,268 - Epoch: [384][ 100/ 518] Overall Loss 2.471589 Objective Loss 2.471589 LR 0.000004 Time 0.391663 +2025-05-17 14:41:50,269 - Epoch: [384][ 150/ 518] Overall Loss 2.509789 Objective Loss 2.509789 LR 0.000004 Time 0.387776 +2025-05-17 14:42:09,308 - Epoch: [384][ 200/ 518] Overall Loss 2.512422 Objective Loss 2.512422 LR 0.000004 Time 0.386021 +2025-05-17 14:42:28,340 - Epoch: [384][ 250/ 518] Overall Loss 2.514725 Objective Loss 2.514725 LR 0.000004 Time 0.384941 +2025-05-17 14:42:47,365 - Epoch: [384][ 300/ 518] Overall Loss 2.500699 Objective Loss 2.500699 LR 0.000004 Time 0.384199 +2025-05-17 14:43:06,401 - Epoch: [384][ 350/ 518] Overall Loss 2.504538 Objective Loss 2.504538 LR 0.000004 Time 0.383701 +2025-05-17 14:43:25,585 - Epoch: [384][ 400/ 518] Overall Loss 2.505789 Objective Loss 2.505789 LR 0.000004 Time 0.383694 +2025-05-17 14:43:44,615 - Epoch: [384][ 450/ 518] Overall Loss 2.503775 Objective Loss 2.503775 LR 0.000004 Time 0.383348 +2025-05-17 14:44:03,659 - Epoch: [384][ 
500/ 518] Overall Loss 2.500207 Objective Loss 2.500207 LR 0.000004 Time 0.383101 +2025-05-17 14:44:10,385 - Epoch: [384][ 518/ 518] Overall Loss 2.499926 Objective Loss 2.499926 LR 0.000004 Time 0.382771 +2025-05-17 14:44:10,460 - --- validate (epoch=384)----------- +2025-05-17 14:44:10,461 - 4952 samples (32 per mini-batch) +2025-05-17 14:44:10,464 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:44:25,446 - Epoch: [384][ 50/ 155] Loss 2.738872 mAP 0.511200 +2025-05-17 14:44:40,819 - Epoch: [384][ 100/ 155] Loss 2.742658 mAP 0.507087 +2025-05-17 14:44:56,514 - Epoch: [384][ 150/ 155] Loss 2.746130 mAP 0.505696 +2025-05-17 14:44:59,652 - Epoch: [384][ 155/ 155] Loss 2.749491 mAP 0.504677 +2025-05-17 14:44:59,710 - ==> mAP: 0.50468 Loss: 2.749 + +2025-05-17 14:44:59,720 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:44:59,720 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:44:59,810 - + +2025-05-17 14:44:59,810 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:45:19,788 - Epoch: [385][ 50/ 518] Overall Loss 2.466729 Objective Loss 2.466729 LR 0.000004 Time 0.399489 +2025-05-17 14:45:38,968 - Epoch: [385][ 100/ 518] Overall Loss 2.470757 Objective Loss 2.470757 LR 0.000004 Time 0.391544 +2025-05-17 14:45:58,007 - Epoch: [385][ 150/ 518] Overall Loss 2.497802 Objective Loss 2.497802 LR 0.000004 Time 0.387949 +2025-05-17 14:46:17,034 - Epoch: [385][ 200/ 518] Overall Loss 2.490440 Objective Loss 2.490440 LR 0.000004 Time 0.386093 +2025-05-17 14:46:36,030 - Epoch: [385][ 250/ 518] Overall Loss 2.496902 Objective Loss 2.496902 LR 0.000004 Time 0.384853 +2025-05-17 14:46:55,017 - Epoch: [385][ 300/ 518] Overall Loss 2.492382 Objective Loss 2.492382 LR 0.000004 Time 0.383997 +2025-05-17 14:47:14,026 - Epoch: [385][ 350/ 518] Overall Loss 2.502865 Objective 
Loss 2.502865 LR 0.000004 Time 0.383448 +2025-05-17 14:47:33,035 - Epoch: [385][ 400/ 518] Overall Loss 2.506898 Objective Loss 2.506898 LR 0.000004 Time 0.383035 +2025-05-17 14:47:52,228 - Epoch: [385][ 450/ 518] Overall Loss 2.506724 Objective Loss 2.506724 LR 0.000004 Time 0.383126 +2025-05-17 14:48:11,242 - Epoch: [385][ 500/ 518] Overall Loss 2.510637 Objective Loss 2.510637 LR 0.000004 Time 0.382839 +2025-05-17 14:48:17,952 - Epoch: [385][ 518/ 518] Overall Loss 2.508235 Objective Loss 2.508235 LR 0.000004 Time 0.382488 +2025-05-17 14:48:18,028 - --- validate (epoch=385)----------- +2025-05-17 14:48:18,029 - 4952 samples (32 per mini-batch) +2025-05-17 14:48:18,032 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:48:33,104 - Epoch: [385][ 50/ 155] Loss 2.760250 mAP 0.518078 +2025-05-17 14:48:48,567 - Epoch: [385][ 100/ 155] Loss 2.746331 mAP 0.501954 +2025-05-17 14:49:04,380 - Epoch: [385][ 150/ 155] Loss 2.734821 mAP 0.502004 +2025-05-17 14:49:07,520 - Epoch: [385][ 155/ 155] Loss 2.736538 mAP 0.502977 +2025-05-17 14:49:07,578 - ==> mAP: 0.50298 Loss: 2.737 + +2025-05-17 14:49:07,588 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:49:07,588 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:49:07,677 - + +2025-05-17 14:49:07,678 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:49:27,659 - Epoch: [386][ 50/ 518] Overall Loss 2.560422 Objective Loss 2.560422 LR 0.000004 Time 0.399562 +2025-05-17 14:49:46,615 - Epoch: [386][ 100/ 518] Overall Loss 2.511473 Objective Loss 2.511473 LR 0.000004 Time 0.389329 +2025-05-17 14:50:05,805 - Epoch: [386][ 150/ 518] Overall Loss 2.528753 Objective Loss 2.528753 LR 0.000004 Time 0.387481 +2025-05-17 14:50:24,829 - Epoch: [386][ 200/ 518] Overall Loss 2.503307 Objective Loss 2.503307 LR 0.000004 Time 0.385725 
+2025-05-17 14:50:43,841 - Epoch: [386][ 250/ 518] Overall Loss 2.508014 Objective Loss 2.508014 LR 0.000004 Time 0.384622 +2025-05-17 14:51:02,855 - Epoch: [386][ 300/ 518] Overall Loss 2.505567 Objective Loss 2.505567 LR 0.000004 Time 0.383897 +2025-05-17 14:51:21,883 - Epoch: [386][ 350/ 518] Overall Loss 2.508592 Objective Loss 2.508592 LR 0.000004 Time 0.383418 +2025-05-17 14:51:40,913 - Epoch: [386][ 400/ 518] Overall Loss 2.507221 Objective Loss 2.507221 LR 0.000004 Time 0.383063 +2025-05-17 14:51:59,959 - Epoch: [386][ 450/ 518] Overall Loss 2.501362 Objective Loss 2.501362 LR 0.000004 Time 0.382824 +2025-05-17 14:52:19,005 - Epoch: [386][ 500/ 518] Overall Loss 2.501692 Objective Loss 2.501692 LR 0.000004 Time 0.382632 +2025-05-17 14:52:25,837 - Epoch: [386][ 518/ 518] Overall Loss 2.502162 Objective Loss 2.502162 LR 0.000004 Time 0.382524 +2025-05-17 14:52:25,915 - --- validate (epoch=386)----------- +2025-05-17 14:52:25,915 - 4952 samples (32 per mini-batch) +2025-05-17 14:52:25,919 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:52:40,923 - Epoch: [386][ 50/ 155] Loss 2.765904 mAP 0.509152 +2025-05-17 14:52:56,114 - Epoch: [386][ 100/ 155] Loss 2.766419 mAP 0.503078 +2025-05-17 14:53:11,918 - Epoch: [386][ 150/ 155] Loss 2.760442 mAP 0.497699 +2025-05-17 14:53:15,052 - Epoch: [386][ 155/ 155] Loss 2.757878 mAP 0.497796 +2025-05-17 14:53:15,106 - ==> mAP: 0.49780 Loss: 2.758 + +2025-05-17 14:53:15,115 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:53:15,116 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:53:15,205 - + +2025-05-17 14:53:15,206 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:53:35,157 - Epoch: [387][ 50/ 518] Overall Loss 2.476583 Objective Loss 2.476583 LR 0.000004 Time 0.398953 +2025-05-17 14:53:54,341 - Epoch: [387][ 
100/ 518] Overall Loss 2.482393 Objective Loss 2.482393 LR 0.000004 Time 0.391306 +2025-05-17 14:54:13,366 - Epoch: [387][ 150/ 518] Overall Loss 2.482499 Objective Loss 2.482499 LR 0.000004 Time 0.387698 +2025-05-17 14:54:32,395 - Epoch: [387][ 200/ 518] Overall Loss 2.488004 Objective Loss 2.488004 LR 0.000004 Time 0.385916 +2025-05-17 14:54:51,423 - Epoch: [387][ 250/ 518] Overall Loss 2.485555 Objective Loss 2.485555 LR 0.000004 Time 0.384839 +2025-05-17 14:55:10,433 - Epoch: [387][ 300/ 518] Overall Loss 2.487829 Objective Loss 2.487829 LR 0.000004 Time 0.384066 +2025-05-17 14:55:29,435 - Epoch: [387][ 350/ 518] Overall Loss 2.481864 Objective Loss 2.481864 LR 0.000004 Time 0.383486 +2025-05-17 14:55:48,632 - Epoch: [387][ 400/ 518] Overall Loss 2.485124 Objective Loss 2.485124 LR 0.000004 Time 0.383541 +2025-05-17 14:56:07,673 - Epoch: [387][ 450/ 518] Overall Loss 2.486642 Objective Loss 2.486642 LR 0.000004 Time 0.383237 +2025-05-17 14:56:26,735 - Epoch: [387][ 500/ 518] Overall Loss 2.490728 Objective Loss 2.490728 LR 0.000004 Time 0.383036 +2025-05-17 14:56:33,461 - Epoch: [387][ 518/ 518] Overall Loss 2.490540 Objective Loss 2.490540 LR 0.000004 Time 0.382709 +2025-05-17 14:56:33,537 - --- validate (epoch=387)----------- +2025-05-17 14:56:33,538 - 4952 samples (32 per mini-batch) +2025-05-17 14:56:33,541 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 14:56:48,370 - Epoch: [387][ 50/ 155] Loss 2.773048 mAP 0.515094 +2025-05-17 14:57:03,410 - Epoch: [387][ 100/ 155] Loss 2.766733 mAP 0.503044 +2025-05-17 14:57:18,857 - Epoch: [387][ 150/ 155] Loss 2.757835 mAP 0.502430 +2025-05-17 14:57:21,742 - Epoch: [387][ 155/ 155] Loss 2.755381 mAP 0.503473 +2025-05-17 14:57:21,795 - ==> mAP: 0.50347 Loss: 2.755 + +2025-05-17 14:57:21,805 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 14:57:21,805 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 14:57:21,892 - + +2025-05-17 14:57:21,893 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 14:57:41,920 - Epoch: [388][ 50/ 518] Overall Loss 2.483262 Objective Loss 2.483262 LR 0.000004 Time 0.400482 +2025-05-17 14:58:01,082 - Epoch: [388][ 100/ 518] Overall Loss 2.508310 Objective Loss 2.508310 LR 0.000004 Time 0.391854 +2025-05-17 14:58:20,070 - Epoch: [388][ 150/ 518] Overall Loss 2.509382 Objective Loss 2.509382 LR 0.000004 Time 0.387814 +2025-05-17 14:58:39,051 - Epoch: [388][ 200/ 518] Overall Loss 2.507139 Objective Loss 2.507139 LR 0.000004 Time 0.385758 +2025-05-17 14:58:58,061 - Epoch: [388][ 250/ 518] Overall Loss 2.493405 Objective Loss 2.493405 LR 0.000004 Time 0.384641 +2025-05-17 14:59:17,107 - Epoch: [388][ 300/ 518] Overall Loss 2.501108 Objective Loss 2.501108 LR 0.000004 Time 0.384019 +2025-05-17 14:59:36,123 - Epoch: [388][ 350/ 518] Overall Loss 2.512653 Objective Loss 2.512653 LR 0.000004 Time 0.383486 +2025-05-17 14:59:55,281 - Epoch: [388][ 400/ 518] Overall Loss 2.511300 Objective Loss 2.511300 LR 0.000004 Time 0.383442 +2025-05-17 15:00:14,257 - Epoch: [388][ 450/ 518] Overall Loss 2.507613 Objective Loss 2.507613 LR 0.000004 Time 0.383004 +2025-05-17 15:00:33,230 - Epoch: [388][ 500/ 518] Overall Loss 2.503708 Objective Loss 2.503708 LR 0.000004 Time 0.382647 +2025-05-17 15:00:39,911 - Epoch: [388][ 518/ 518] Overall Loss 2.499918 Objective Loss 2.499918 LR 0.000004 Time 0.382248 +2025-05-17 15:00:39,983 - --- validate (epoch=388)----------- +2025-05-17 15:00:39,984 - 4952 samples (32 per mini-batch) +2025-05-17 15:00:39,987 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:00:54,899 - Epoch: [388][ 50/ 155] Loss 2.780936 mAP 0.498728 +2025-05-17 15:01:10,270 - Epoch: [388][ 100/ 155] Loss 2.769381 mAP 0.496024 +2025-05-17 15:01:26,180 - Epoch: 
[388][ 150/ 155] Loss 2.748529 mAP 0.505802 +2025-05-17 15:01:29,190 - Epoch: [388][ 155/ 155] Loss 2.746202 mAP 0.504572 +2025-05-17 15:01:29,245 - ==> mAP: 0.50457 Loss: 2.746 + +2025-05-17 15:01:29,255 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:01:29,255 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:01:29,345 - + +2025-05-17 15:01:29,345 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:01:49,723 - Epoch: [389][ 50/ 518] Overall Loss 2.493511 Objective Loss 2.493511 LR 0.000004 Time 0.407468 +2025-05-17 15:02:08,708 - Epoch: [389][ 100/ 518] Overall Loss 2.491065 Objective Loss 2.491065 LR 0.000004 Time 0.393573 +2025-05-17 15:02:27,709 - Epoch: [389][ 150/ 518] Overall Loss 2.493671 Objective Loss 2.493671 LR 0.000004 Time 0.389052 +2025-05-17 15:02:46,714 - Epoch: [389][ 200/ 518] Overall Loss 2.499067 Objective Loss 2.499067 LR 0.000004 Time 0.386806 +2025-05-17 15:03:05,704 - Epoch: [389][ 250/ 518] Overall Loss 2.494397 Objective Loss 2.494397 LR 0.000004 Time 0.385400 +2025-05-17 15:03:24,739 - Epoch: [389][ 300/ 518] Overall Loss 2.500778 Objective Loss 2.500778 LR 0.000004 Time 0.384613 +2025-05-17 15:03:43,751 - Epoch: [389][ 350/ 518] Overall Loss 2.501998 Objective Loss 2.501998 LR 0.000004 Time 0.383984 +2025-05-17 15:04:02,931 - Epoch: [389][ 400/ 518] Overall Loss 2.505127 Objective Loss 2.505127 LR 0.000004 Time 0.383935 +2025-05-17 15:04:21,943 - Epoch: [389][ 450/ 518] Overall Loss 2.506040 Objective Loss 2.506040 LR 0.000004 Time 0.383521 +2025-05-17 15:04:40,975 - Epoch: [389][ 500/ 518] Overall Loss 2.504595 Objective Loss 2.504595 LR 0.000004 Time 0.383232 +2025-05-17 15:04:47,696 - Epoch: [389][ 518/ 518] Overall Loss 2.507604 Objective Loss 2.507604 LR 0.000004 Time 0.382889 +2025-05-17 15:04:47,771 - --- validate (epoch=389)----------- +2025-05-17 15:04:47,772 - 4952 samples (32 per mini-batch) 
+2025-05-17 15:04:47,776 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:05:02,552 - Epoch: [389][ 50/ 155] Loss 2.731605 mAP 0.519629 +2025-05-17 15:05:17,488 - Epoch: [389][ 100/ 155] Loss 2.757311 mAP 0.501526 +2025-05-17 15:05:32,762 - Epoch: [389][ 150/ 155] Loss 2.743625 mAP 0.498978 +2025-05-17 15:05:35,769 - Epoch: [389][ 155/ 155] Loss 2.742125 mAP 0.500484 +2025-05-17 15:05:35,821 - ==> mAP: 0.50048 Loss: 2.742 + +2025-05-17 15:05:35,831 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:05:35,831 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:05:35,919 - + +2025-05-17 15:05:35,919 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:05:56,281 - Epoch: [390][ 50/ 518] Overall Loss 2.513232 Objective Loss 2.513232 LR 0.000004 Time 0.407183 +2025-05-17 15:06:15,260 - Epoch: [390][ 100/ 518] Overall Loss 2.486135 Objective Loss 2.486135 LR 0.000004 Time 0.393366 +2025-05-17 15:06:34,229 - Epoch: [390][ 150/ 518] Overall Loss 2.495540 Objective Loss 2.495540 LR 0.000004 Time 0.388694 +2025-05-17 15:06:53,208 - Epoch: [390][ 200/ 518] Overall Loss 2.498169 Objective Loss 2.498169 LR 0.000004 Time 0.386412 +2025-05-17 15:07:12,206 - Epoch: [390][ 250/ 518] Overall Loss 2.497725 Objective Loss 2.497725 LR 0.000004 Time 0.385117 +2025-05-17 15:07:31,202 - Epoch: [390][ 300/ 518] Overall Loss 2.497694 Objective Loss 2.497694 LR 0.000004 Time 0.384245 +2025-05-17 15:07:50,342 - Epoch: [390][ 350/ 518] Overall Loss 2.503867 Objective Loss 2.503867 LR 0.000004 Time 0.384036 +2025-05-17 15:08:09,340 - Epoch: [390][ 400/ 518] Overall Loss 2.504677 Objective Loss 2.504677 LR 0.000004 Time 0.383524 +2025-05-17 15:08:28,372 - Epoch: [390][ 450/ 518] Overall Loss 2.502764 Objective Loss 2.502764 LR 0.000004 Time 0.383201 +2025-05-17 15:08:47,422 - Epoch: [390][ 
500/ 518] Overall Loss 2.506835 Objective Loss 2.506835 LR 0.000004 Time 0.382979 +2025-05-17 15:08:54,124 - Epoch: [390][ 518/ 518] Overall Loss 2.502381 Objective Loss 2.502381 LR 0.000004 Time 0.382607 +2025-05-17 15:08:54,198 - --- validate (epoch=390)----------- +2025-05-17 15:08:54,199 - 4952 samples (32 per mini-batch) +2025-05-17 15:08:54,202 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:09:09,204 - Epoch: [390][ 50/ 155] Loss 2.732988 mAP 0.507010 +2025-05-17 15:09:24,495 - Epoch: [390][ 100/ 155] Loss 2.739469 mAP 0.511061 +2025-05-17 15:09:40,383 - Epoch: [390][ 150/ 155] Loss 2.746083 mAP 0.504301 +2025-05-17 15:09:43,370 - Epoch: [390][ 155/ 155] Loss 2.743897 mAP 0.505041 +2025-05-17 15:09:43,424 - ==> mAP: 0.50504 Loss: 2.744 + +2025-05-17 15:09:43,434 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:09:43,435 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:09:43,524 - + +2025-05-17 15:09:43,525 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:10:03,784 - Epoch: [391][ 50/ 518] Overall Loss 2.508097 Objective Loss 2.508097 LR 0.000004 Time 0.405118 +2025-05-17 15:10:22,802 - Epoch: [391][ 100/ 518] Overall Loss 2.493658 Objective Loss 2.493658 LR 0.000004 Time 0.392724 +2025-05-17 15:10:41,801 - Epoch: [391][ 150/ 518] Overall Loss 2.486383 Objective Loss 2.486383 LR 0.000004 Time 0.388470 +2025-05-17 15:11:00,805 - Epoch: [391][ 200/ 518] Overall Loss 2.483156 Objective Loss 2.483156 LR 0.000004 Time 0.386367 +2025-05-17 15:11:19,833 - Epoch: [391][ 250/ 518] Overall Loss 2.487691 Objective Loss 2.487691 LR 0.000004 Time 0.385200 +2025-05-17 15:11:38,822 - Epoch: [391][ 300/ 518] Overall Loss 2.493278 Objective Loss 2.493278 LR 0.000004 Time 0.384294 +2025-05-17 15:11:57,813 - Epoch: [391][ 350/ 518] Overall Loss 2.493522 Objective 
Loss 2.493522 LR 0.000004 Time 0.383651 +2025-05-17 15:12:16,962 - Epoch: [391][ 400/ 518] Overall Loss 2.494610 Objective Loss 2.494610 LR 0.000004 Time 0.383564 +2025-05-17 15:12:35,939 - Epoch: [391][ 450/ 518] Overall Loss 2.498899 Objective Loss 2.498899 LR 0.000004 Time 0.383113 +2025-05-17 15:12:54,925 - Epoch: [391][ 500/ 518] Overall Loss 2.500355 Objective Loss 2.500355 LR 0.000004 Time 0.382771 +2025-05-17 15:13:01,610 - Epoch: [391][ 518/ 518] Overall Loss 2.502968 Objective Loss 2.502968 LR 0.000004 Time 0.382375 +2025-05-17 15:13:01,686 - --- validate (epoch=391)----------- +2025-05-17 15:13:01,687 - 4952 samples (32 per mini-batch) +2025-05-17 15:13:01,690 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:13:16,711 - Epoch: [391][ 50/ 155] Loss 2.729575 mAP 0.519633 +2025-05-17 15:13:31,854 - Epoch: [391][ 100/ 155] Loss 2.734366 mAP 0.506300 +2025-05-17 15:13:47,723 - Epoch: [391][ 150/ 155] Loss 2.740563 mAP 0.502685 +2025-05-17 15:13:50,723 - Epoch: [391][ 155/ 155] Loss 2.743256 mAP 0.499780 +2025-05-17 15:13:50,778 - ==> mAP: 0.49978 Loss: 2.743 + +2025-05-17 15:13:50,788 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:13:50,788 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:13:50,878 - + +2025-05-17 15:13:50,879 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:14:10,987 - Epoch: [392][ 50/ 518] Overall Loss 2.541595 Objective Loss 2.541595 LR 0.000004 Time 0.402103 +2025-05-17 15:14:30,147 - Epoch: [392][ 100/ 518] Overall Loss 2.537050 Objective Loss 2.537050 LR 0.000004 Time 0.392632 +2025-05-17 15:14:49,153 - Epoch: [392][ 150/ 518] Overall Loss 2.518696 Objective Loss 2.518696 LR 0.000004 Time 0.388454 +2025-05-17 15:15:08,194 - Epoch: [392][ 200/ 518] Overall Loss 2.510643 Objective Loss 2.510643 LR 0.000004 Time 0.386542 
+2025-05-17 15:15:27,227 - Epoch: [392][ 250/ 518] Overall Loss 2.502350 Objective Loss 2.502350 LR 0.000004 Time 0.385363 +2025-05-17 15:15:46,225 - Epoch: [392][ 300/ 518] Overall Loss 2.501189 Objective Loss 2.501189 LR 0.000004 Time 0.384460 +2025-05-17 15:16:05,383 - Epoch: [392][ 350/ 518] Overall Loss 2.503358 Objective Loss 2.503358 LR 0.000004 Time 0.384269 +2025-05-17 15:16:24,388 - Epoch: [392][ 400/ 518] Overall Loss 2.502611 Objective Loss 2.502611 LR 0.000004 Time 0.383746 +2025-05-17 15:16:43,385 - Epoch: [392][ 450/ 518] Overall Loss 2.503365 Objective Loss 2.503365 LR 0.000004 Time 0.383320 +2025-05-17 15:17:02,403 - Epoch: [392][ 500/ 518] Overall Loss 2.499725 Objective Loss 2.499725 LR 0.000004 Time 0.383022 +2025-05-17 15:17:09,129 - Epoch: [392][ 518/ 518] Overall Loss 2.502648 Objective Loss 2.502648 LR 0.000004 Time 0.382696 +2025-05-17 15:17:09,207 - --- validate (epoch=392)----------- +2025-05-17 15:17:09,208 - 4952 samples (32 per mini-batch) +2025-05-17 15:17:09,211 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:17:24,141 - Epoch: [392][ 50/ 155] Loss 2.775194 mAP 0.512166 +2025-05-17 15:17:39,232 - Epoch: [392][ 100/ 155] Loss 2.743942 mAP 0.505312 +2025-05-17 15:17:54,860 - Epoch: [392][ 150/ 155] Loss 2.744647 mAP 0.502987 +2025-05-17 15:17:57,782 - Epoch: [392][ 155/ 155] Loss 2.748756 mAP 0.504717 +2025-05-17 15:17:57,843 - ==> mAP: 0.50472 Loss: 2.749 + +2025-05-17 15:17:57,853 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:17:57,853 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:17:57,941 - + +2025-05-17 15:17:57,941 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:18:18,267 - Epoch: [393][ 50/ 518] Overall Loss 2.476342 Objective Loss 2.476342 LR 0.000004 Time 0.406434 +2025-05-17 15:18:37,271 - Epoch: [393][ 
100/ 518] Overall Loss 2.505119 Objective Loss 2.505119 LR 0.000004 Time 0.393248 +2025-05-17 15:18:56,307 - Epoch: [393][ 150/ 518] Overall Loss 2.497105 Objective Loss 2.497105 LR 0.000004 Time 0.389072 +2025-05-17 15:19:15,341 - Epoch: [393][ 200/ 518] Overall Loss 2.502969 Objective Loss 2.502969 LR 0.000004 Time 0.386968 +2025-05-17 15:19:34,385 - Epoch: [393][ 250/ 518] Overall Loss 2.502403 Objective Loss 2.502403 LR 0.000004 Time 0.385748 +2025-05-17 15:19:53,446 - Epoch: [393][ 300/ 518] Overall Loss 2.498885 Objective Loss 2.498885 LR 0.000004 Time 0.384989 +2025-05-17 15:20:12,488 - Epoch: [393][ 350/ 518] Overall Loss 2.494220 Objective Loss 2.494220 LR 0.000004 Time 0.384395 +2025-05-17 15:20:31,693 - Epoch: [393][ 400/ 518] Overall Loss 2.497091 Objective Loss 2.497091 LR 0.000004 Time 0.384355 +2025-05-17 15:20:50,761 - Epoch: [393][ 450/ 518] Overall Loss 2.492625 Objective Loss 2.492625 LR 0.000004 Time 0.384021 +2025-05-17 15:21:09,805 - Epoch: [393][ 500/ 518] Overall Loss 2.491801 Objective Loss 2.491801 LR 0.000004 Time 0.383707 +2025-05-17 15:21:16,492 - Epoch: [393][ 518/ 518] Overall Loss 2.495259 Objective Loss 2.495259 LR 0.000004 Time 0.383280 +2025-05-17 15:21:16,570 - --- validate (epoch=393)----------- +2025-05-17 15:21:16,570 - 4952 samples (32 per mini-batch) +2025-05-17 15:21:16,574 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:21:31,473 - Epoch: [393][ 50/ 155] Loss 2.683235 mAP 0.509137 +2025-05-17 15:21:46,590 - Epoch: [393][ 100/ 155] Loss 2.701642 mAP 0.506878 +2025-05-17 15:22:02,042 - Epoch: [393][ 150/ 155] Loss 2.741266 mAP 0.505580 +2025-05-17 15:22:05,112 - Epoch: [393][ 155/ 155] Loss 2.741977 mAP 0.504720 +2025-05-17 15:22:05,166 - ==> mAP: 0.50472 Loss: 2.742 + +2025-05-17 15:22:05,176 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:22:05,176 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:22:05,264 - + +2025-05-17 15:22:05,264 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:22:25,600 - Epoch: [394][ 50/ 518] Overall Loss 2.525031 Objective Loss 2.525031 LR 0.000004 Time 0.406654 +2025-05-17 15:22:44,569 - Epoch: [394][ 100/ 518] Overall Loss 2.530277 Objective Loss 2.530277 LR 0.000004 Time 0.392996 +2025-05-17 15:23:03,549 - Epoch: [394][ 150/ 518] Overall Loss 2.534965 Objective Loss 2.534965 LR 0.000004 Time 0.388528 +2025-05-17 15:23:22,554 - Epoch: [394][ 200/ 518] Overall Loss 2.531823 Objective Loss 2.531823 LR 0.000004 Time 0.386412 +2025-05-17 15:23:41,558 - Epoch: [394][ 250/ 518] Overall Loss 2.525337 Objective Loss 2.525337 LR 0.000004 Time 0.385143 +2025-05-17 15:24:00,558 - Epoch: [394][ 300/ 518] Overall Loss 2.517046 Objective Loss 2.517046 LR 0.000004 Time 0.384281 +2025-05-17 15:24:19,550 - Epoch: [394][ 350/ 518] Overall Loss 2.518619 Objective Loss 2.518619 LR 0.000004 Time 0.383643 +2025-05-17 15:24:38,545 - Epoch: [394][ 400/ 518] Overall Loss 2.514823 Objective Loss 2.514823 LR 0.000004 Time 0.383170 +2025-05-17 15:24:57,730 - Epoch: [394][ 450/ 518] Overall Loss 2.514810 Objective Loss 2.514810 LR 0.000004 Time 0.383228 +2025-05-17 15:25:16,738 - Epoch: [394][ 500/ 518] Overall Loss 2.516004 Objective Loss 2.516004 LR 0.000004 Time 0.382918 +2025-05-17 15:25:23,419 - Epoch: [394][ 518/ 518] Overall Loss 2.514498 Objective Loss 2.514498 LR 0.000004 Time 0.382509 +2025-05-17 15:25:23,490 - --- validate (epoch=394)----------- +2025-05-17 15:25:23,490 - 4952 samples (32 per mini-batch) +2025-05-17 15:25:23,494 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:25:38,479 - Epoch: [394][ 50/ 155] Loss 2.772149 mAP 0.499018 +2025-05-17 15:25:53,848 - Epoch: [394][ 100/ 155] Loss 2.737555 mAP 0.500208 +2025-05-17 15:26:09,538 - Epoch: 
[394][ 150/ 155] Loss 2.744904 mAP 0.502937 +2025-05-17 15:26:12,683 - Epoch: [394][ 155/ 155] Loss 2.744180 mAP 0.503306 +2025-05-17 15:26:12,737 - ==> mAP: 0.50331 Loss: 2.744 + +2025-05-17 15:26:12,747 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:26:12,747 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:26:12,838 - + +2025-05-17 15:26:12,838 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:26:33,202 - Epoch: [395][ 50/ 518] Overall Loss 2.477128 Objective Loss 2.477128 LR 0.000004 Time 0.407218 +2025-05-17 15:26:52,214 - Epoch: [395][ 100/ 518] Overall Loss 2.502974 Objective Loss 2.502974 LR 0.000004 Time 0.393718 +2025-05-17 15:27:11,257 - Epoch: [395][ 150/ 518] Overall Loss 2.507201 Objective Loss 2.507201 LR 0.000004 Time 0.389426 +2025-05-17 15:27:30,298 - Epoch: [395][ 200/ 518] Overall Loss 2.499148 Objective Loss 2.499148 LR 0.000004 Time 0.387268 +2025-05-17 15:27:49,312 - Epoch: [395][ 250/ 518] Overall Loss 2.512367 Objective Loss 2.512367 LR 0.000004 Time 0.385864 +2025-05-17 15:28:08,352 - Epoch: [395][ 300/ 518] Overall Loss 2.499635 Objective Loss 2.499635 LR 0.000004 Time 0.385018 +2025-05-17 15:28:27,542 - Epoch: [395][ 350/ 518] Overall Loss 2.502137 Objective Loss 2.502137 LR 0.000004 Time 0.384840 +2025-05-17 15:28:46,537 - Epoch: [395][ 400/ 518] Overall Loss 2.509572 Objective Loss 2.509572 LR 0.000004 Time 0.384221 +2025-05-17 15:29:05,531 - Epoch: [395][ 450/ 518] Overall Loss 2.515969 Objective Loss 2.515969 LR 0.000004 Time 0.383735 +2025-05-17 15:29:24,537 - Epoch: [395][ 500/ 518] Overall Loss 2.518696 Objective Loss 2.518696 LR 0.000004 Time 0.383370 +2025-05-17 15:29:31,255 - Epoch: [395][ 518/ 518] Overall Loss 2.517180 Objective Loss 2.517180 LR 0.000004 Time 0.383015 +2025-05-17 15:29:31,328 - --- validate (epoch=395)----------- +2025-05-17 15:29:31,328 - 4952 samples (32 per mini-batch) 
+2025-05-17 15:29:31,332 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:29:46,160 - Epoch: [395][ 50/ 155] Loss 2.681120 mAP 0.525636 +2025-05-17 15:30:01,142 - Epoch: [395][ 100/ 155] Loss 2.741676 mAP 0.511364 +2025-05-17 15:30:16,607 - Epoch: [395][ 150/ 155] Loss 2.756745 mAP 0.504287 +2025-05-17 15:30:19,510 - Epoch: [395][ 155/ 155] Loss 2.756549 mAP 0.505369 +2025-05-17 15:30:19,563 - ==> mAP: 0.50537 Loss: 2.757 + +2025-05-17 15:30:19,573 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:30:19,573 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:30:19,660 - + +2025-05-17 15:30:19,660 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:30:39,802 - Epoch: [396][ 50/ 518] Overall Loss 2.467236 Objective Loss 2.467236 LR 0.000004 Time 0.402773 +2025-05-17 15:30:58,818 - Epoch: [396][ 100/ 518] Overall Loss 2.466737 Objective Loss 2.466737 LR 0.000004 Time 0.391536 +2025-05-17 15:31:17,845 - Epoch: [396][ 150/ 518] Overall Loss 2.470043 Objective Loss 2.470043 LR 0.000004 Time 0.387864 +2025-05-17 15:31:36,858 - Epoch: [396][ 200/ 518] Overall Loss 2.482331 Objective Loss 2.482331 LR 0.000004 Time 0.385958 +2025-05-17 15:31:55,853 - Epoch: [396][ 250/ 518] Overall Loss 2.491725 Objective Loss 2.491725 LR 0.000004 Time 0.384741 +2025-05-17 15:32:14,858 - Epoch: [396][ 300/ 518] Overall Loss 2.493144 Objective Loss 2.493144 LR 0.000004 Time 0.383960 +2025-05-17 15:32:33,872 - Epoch: [396][ 350/ 518] Overall Loss 2.500052 Objective Loss 2.500052 LR 0.000004 Time 0.383433 +2025-05-17 15:32:53,045 - Epoch: [396][ 400/ 518] Overall Loss 2.491062 Objective Loss 2.491062 LR 0.000004 Time 0.383434 +2025-05-17 15:33:12,107 - Epoch: [396][ 450/ 518] Overall Loss 2.489639 Objective Loss 2.489639 LR 0.000004 Time 0.383188 +2025-05-17 15:33:31,145 - Epoch: [396][ 
500/ 518] Overall Loss 2.495253 Objective Loss 2.495253 LR 0.000004 Time 0.382943 +2025-05-17 15:33:37,873 - Epoch: [396][ 518/ 518] Overall Loss 2.496409 Objective Loss 2.496409 LR 0.000004 Time 0.382624 +2025-05-17 15:33:37,947 - --- validate (epoch=396)----------- +2025-05-17 15:33:37,948 - 4952 samples (32 per mini-batch) +2025-05-17 15:33:37,952 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:33:52,830 - Epoch: [396][ 50/ 155] Loss 2.747297 mAP 0.518279 +2025-05-17 15:34:07,966 - Epoch: [396][ 100/ 155] Loss 2.779241 mAP 0.492495 +2025-05-17 15:34:23,391 - Epoch: [396][ 150/ 155] Loss 2.745120 mAP 0.504076 +2025-05-17 15:34:26,448 - Epoch: [396][ 155/ 155] Loss 2.741017 mAP 0.504808 +2025-05-17 15:34:26,501 - ==> mAP: 0.50481 Loss: 2.741 + +2025-05-17 15:34:26,510 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:34:26,511 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:34:26,598 - + +2025-05-17 15:34:26,598 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:34:46,887 - Epoch: [397][ 50/ 518] Overall Loss 2.546779 Objective Loss 2.546779 LR 0.000004 Time 0.405701 +2025-05-17 15:35:05,849 - Epoch: [397][ 100/ 518] Overall Loss 2.513783 Objective Loss 2.513783 LR 0.000004 Time 0.392454 +2025-05-17 15:35:24,821 - Epoch: [397][ 150/ 518] Overall Loss 2.509079 Objective Loss 2.509079 LR 0.000004 Time 0.388111 +2025-05-17 15:35:43,787 - Epoch: [397][ 200/ 518] Overall Loss 2.503162 Objective Loss 2.503162 LR 0.000004 Time 0.385902 +2025-05-17 15:36:02,775 - Epoch: [397][ 250/ 518] Overall Loss 2.500102 Objective Loss 2.500102 LR 0.000004 Time 0.384670 +2025-05-17 15:36:21,786 - Epoch: [397][ 300/ 518] Overall Loss 2.509480 Objective Loss 2.509480 LR 0.000004 Time 0.383923 +2025-05-17 15:36:40,957 - Epoch: [397][ 350/ 518] Overall Loss 2.500750 Objective 
Loss 2.500750 LR 0.000004 Time 0.383848 +2025-05-17 15:36:59,974 - Epoch: [397][ 400/ 518] Overall Loss 2.495227 Objective Loss 2.495227 LR 0.000004 Time 0.383407 +2025-05-17 15:37:18,981 - Epoch: [397][ 450/ 518] Overall Loss 2.492453 Objective Loss 2.492453 LR 0.000004 Time 0.383041 +2025-05-17 15:37:37,996 - Epoch: [397][ 500/ 518] Overall Loss 2.492368 Objective Loss 2.492368 LR 0.000004 Time 0.382765 +2025-05-17 15:37:44,713 - Epoch: [397][ 518/ 518] Overall Loss 2.494047 Objective Loss 2.494047 LR 0.000004 Time 0.382430 +2025-05-17 15:37:44,792 - --- validate (epoch=397)----------- +2025-05-17 15:37:44,793 - 4952 samples (32 per mini-batch) +2025-05-17 15:37:44,796 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:37:59,831 - Epoch: [397][ 50/ 155] Loss 2.735224 mAP 0.509525 +2025-05-17 15:38:14,932 - Epoch: [397][ 100/ 155] Loss 2.725606 mAP 0.511112 +2025-05-17 15:38:30,704 - Epoch: [397][ 150/ 155] Loss 2.734294 mAP 0.504535 +2025-05-17 15:38:33,836 - Epoch: [397][ 155/ 155] Loss 2.735794 mAP 0.504763 +2025-05-17 15:38:33,896 - ==> mAP: 0.50476 Loss: 2.736 + +2025-05-17 15:38:33,906 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:38:33,906 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:38:33,996 - + +2025-05-17 15:38:33,997 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:38:54,175 - Epoch: [398][ 50/ 518] Overall Loss 2.516131 Objective Loss 2.516131 LR 0.000004 Time 0.403508 +2025-05-17 15:39:13,176 - Epoch: [398][ 100/ 518] Overall Loss 2.511340 Objective Loss 2.511340 LR 0.000004 Time 0.391756 +2025-05-17 15:39:32,218 - Epoch: [398][ 150/ 518] Overall Loss 2.490686 Objective Loss 2.490686 LR 0.000004 Time 0.388108 +2025-05-17 15:39:51,241 - Epoch: [398][ 200/ 518] Overall Loss 2.492666 Objective Loss 2.492666 LR 0.000004 Time 0.386194 
+2025-05-17 15:40:10,284 - Epoch: [398][ 250/ 518] Overall Loss 2.506304 Objective Loss 2.506304 LR 0.000004 Time 0.385123 +2025-05-17 15:40:29,497 - Epoch: [398][ 300/ 518] Overall Loss 2.499310 Objective Loss 2.499310 LR 0.000004 Time 0.384977 +2025-05-17 15:40:48,475 - Epoch: [398][ 350/ 518] Overall Loss 2.491598 Objective Loss 2.491598 LR 0.000004 Time 0.384199 +2025-05-17 15:41:07,503 - Epoch: [398][ 400/ 518] Overall Loss 2.488899 Objective Loss 2.488899 LR 0.000004 Time 0.383742 +2025-05-17 15:41:26,550 - Epoch: [398][ 450/ 518] Overall Loss 2.485953 Objective Loss 2.485953 LR 0.000004 Time 0.383430 +2025-05-17 15:41:45,556 - Epoch: [398][ 500/ 518] Overall Loss 2.485129 Objective Loss 2.485129 LR 0.000004 Time 0.383096 +2025-05-17 15:41:52,240 - Epoch: [398][ 518/ 518] Overall Loss 2.484634 Objective Loss 2.484634 LR 0.000004 Time 0.382684 +2025-05-17 15:41:52,314 - --- validate (epoch=398)----------- +2025-05-17 15:41:52,315 - 4952 samples (32 per mini-batch) +2025-05-17 15:41:52,318 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:42:07,378 - Epoch: [398][ 50/ 155] Loss 2.738458 mAP 0.506868 +2025-05-17 15:42:22,500 - Epoch: [398][ 100/ 155] Loss 2.739537 mAP 0.501368 +2025-05-17 15:42:38,234 - Epoch: [398][ 150/ 155] Loss 2.741915 mAP 0.497208 +2025-05-17 15:42:41,263 - Epoch: [398][ 155/ 155] Loss 2.748192 mAP 0.496905 +2025-05-17 15:42:41,323 - ==> mAP: 0.49691 Loss: 2.748 + +2025-05-17 15:42:41,332 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:42:41,332 - Saving checkpoint to: logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:42:41,421 - + +2025-05-17 15:42:41,421 - Training epoch: 16551 samples (32 per mini-batch, world size: 1) +2025-05-17 15:43:01,548 - Epoch: [399][ 50/ 518] Overall Loss 2.455615 Objective Loss 2.455615 LR 0.000004 Time 0.402460 +2025-05-17 15:43:20,480 - Epoch: [399][ 
100/ 518] Overall Loss 2.473017 Objective Loss 2.473017 LR 0.000004 Time 0.390538 +2025-05-17 15:43:39,446 - Epoch: [399][ 150/ 518] Overall Loss 2.460588 Objective Loss 2.460588 LR 0.000004 Time 0.386794 +2025-05-17 15:43:58,614 - Epoch: [399][ 200/ 518] Overall Loss 2.474186 Objective Loss 2.474186 LR 0.000004 Time 0.385926 +2025-05-17 15:44:17,619 - Epoch: [399][ 250/ 518] Overall Loss 2.479996 Objective Loss 2.479996 LR 0.000004 Time 0.384755 +2025-05-17 15:44:36,670 - Epoch: [399][ 300/ 518] Overall Loss 2.475477 Objective Loss 2.475477 LR 0.000004 Time 0.384131 +2025-05-17 15:44:55,714 - Epoch: [399][ 350/ 518] Overall Loss 2.476210 Objective Loss 2.476210 LR 0.000004 Time 0.383664 +2025-05-17 15:45:14,714 - Epoch: [399][ 400/ 518] Overall Loss 2.477568 Objective Loss 2.477568 LR 0.000004 Time 0.383201 +2025-05-17 15:45:33,759 - Epoch: [399][ 450/ 518] Overall Loss 2.480895 Objective Loss 2.480895 LR 0.000004 Time 0.382944 +2025-05-17 15:45:52,983 - Epoch: [399][ 500/ 518] Overall Loss 2.480912 Objective Loss 2.480912 LR 0.000004 Time 0.383094 +2025-05-17 15:45:59,685 - Epoch: [399][ 518/ 518] Overall Loss 2.482406 Objective Loss 2.482406 LR 0.000004 Time 0.382720 +2025-05-17 15:45:59,753 - --- validate (epoch=399)----------- +2025-05-17 15:45:59,754 - 4952 samples (32 per mini-batch) +2025-05-17 15:45:59,757 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:46:14,664 - Epoch: [399][ 50/ 155] Loss 2.784860 mAP 0.512402 +2025-05-17 15:46:29,787 - Epoch: [399][ 100/ 155] Loss 2.762371 mAP 0.501620 +2025-05-17 15:46:45,468 - Epoch: [399][ 150/ 155] Loss 2.744860 mAP 0.502324 +2025-05-17 15:46:48,499 - Epoch: [399][ 155/ 155] Loss 2.743659 mAP 0.502962 +2025-05-17 15:46:48,556 - ==> mAP: 0.50296 Loss: 2.744 + +2025-05-17 15:46:48,566 - ==> Best [mAP: 0.507587 vloss: 2.750493 Params: 2177088 on epoch: 370] +2025-05-17 15:46:48,566 - Saving checkpoint to: 
logs/fpndetector___2025.05.16-175707/fpndetector_qat_checkpoint.pth.tar +2025-05-17 15:46:48,654 - --- test (ckpt) --------------------- +2025-05-17 15:46:48,654 - 4952 samples (32 per mini-batch) +2025-05-17 15:46:48,657 - {'multi_box_loss': {'alpha': 2, 'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:47:03,633 - Test: [ 50/ 155] Loss 2.706027 mAP 0.526245 +2025-05-17 15:47:18,970 - Test: [ 100/ 155] Loss 2.739194 mAP 0.509697 +2025-05-17 15:47:34,844 - Test: [ 150/ 155] Loss 2.741037 mAP 0.502500 +2025-05-17 15:47:37,853 - Test: [ 155/ 155] Loss 2.739557 mAP 0.501512 +2025-05-17 15:47:37,908 - ==> mAP: 0.50151 Loss: 2.740 + +2025-05-17 15:47:37,909 - --- test (best) --------------------- +2025-05-17 15:47:37,910 - => loading checkpoint logs/fpndetector___2025.05.16-175707/fpndetector_qat_best.pth.tar +2025-05-17 15:47:37,945 - => Checkpoint contents: ++----------------------+-------------+-----------------+ +| Key | Type | Value | +|----------------------+-------------+-----------------| +| arch | str | ai87fpndetector | +| compression_sched | dict | | +| epoch | int | 370 | +| extras | dict | | +| optimizer_state_dict | dict | | +| optimizer_type | type | Adam | +| state_dict | OrderedDict | | ++----------------------+-------------+-----------------+ + +2025-05-17 15:47:37,946 - => Checkpoint['extras'] contents: ++--------------+--------+---------+ +| Key | Type | Value | +|--------------+--------+---------| +| best_epoch | int | 370 | +| best_mAP | Tensor | | +| best_top1 | int | 0 | +| current_mAP | Tensor | | +| current_top1 | int | 0 | ++--------------+--------+---------+ + +2025-05-17 15:47:37,948 - Loaded compression schedule from checkpoint (epoch 370) +2025-05-17 15:47:37,987 - => loaded 'state_dict' from checkpoint 'logs/fpndetector___2025.05.16-175707/fpndetector_qat_best.pth.tar' +2025-05-17 15:47:37,988 - 4952 samples (32 per mini-batch) +2025-05-17 15:47:37,991 - {'multi_box_loss': {'alpha': 2, 
'neg_pos_ratio': 3}, 'nms': {'min_score': 0.3, 'max_overlap': 0.3, 'top_k': 50}} +2025-05-17 15:47:52,926 - Test: [ 50/ 155] Loss 2.732939 mAP 0.509588 +2025-05-17 15:48:07,861 - Test: [ 100/ 155] Loss 2.756865 mAP 0.502574 +2025-05-17 15:48:23,402 - Test: [ 150/ 155] Loss 2.754240 mAP 0.504071 +2025-05-17 15:48:26,450 - Test: [ 155/ 155] Loss 2.749494 mAP 0.505121 +2025-05-17 15:48:26,503 - ==> mAP: 0.50512 Loss: 2.749 + +2025-05-17 15:48:26,509 - +2025-05-17 15:48:26,510 - Log file for this run: /home/asyaturhal/ai8x-training/logs/fpndetector___2025.05.16-175707/fpndetector___2025.05.16-175707.log diff --git a/trained/ai87-pascalvoc-fpndetector-qat8.pth.tar b/trained/ai87-pascalvoc-fpndetector-qat8.pth.tar index 3b380f38..a477bc33 100644 Binary files a/trained/ai87-pascalvoc-fpndetector-qat8.pth.tar and b/trained/ai87-pascalvoc-fpndetector-qat8.pth.tar differ