Description
After adjusting landmarks with the manual alignments tool, the output images have a certain probability of producing the following error during training. My understanding is that the affected images are mostly ones whose initial landmarks were wrong or not detected at all, and which only received correct landmarks through manual editing.
Deleting the affected images allows training to run normally; however, re-editing their landmarks with the manual alignments tool does not fix them.
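For anyone hitting the same thing: the bad files can be located without starting a full training run. Below is a minimal standalone sketch (not faceswap code) that walks an extracted-faces folder and applies the same `ast.literal_eval` step the traceback shows `png_read_meta` performing. The `b"faceswap"` chunk keyword and the `rest[4:]` offset are assumptions inferred from the traceback and the PNG iTXt layout, so adjust them if your files differ.

```python
import ast
import struct
import sys
from pathlib import Path

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def itxt_payloads(data: bytes):
    """Yield the payload bytes of every iTXt chunk in a PNG byte string."""
    pos = len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"iTXt":
            yield data[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 (length) + 4 (type) + payload + 4 (CRC)

def metadata_is_corrupt(path: Path) -> bool:
    data = path.read_bytes()
    if not data.startswith(PNG_SIG):
        return True
    for payload in itxt_payloads(data):
        keyword, _, rest = payload.partition(b"\x00")
        if keyword != b"faceswap":  # assumption: the chunk keyword faceswap uses
            continue
        try:
            # rest[4:] skips the compression flag/method bytes and the two
            # empty language/translated-keyword fields, mirroring the
            # value[4:] slice visible in the traceback.
            ast.literal_eval(rest[4:].decode("utf-8", errors="ignore"))
        except (ValueError, SyntaxError):
            return True
    return False

if __name__ == "__main__":
    for png in sorted(Path(sys.argv[1]).glob("*.png")):
        if metadata_is_corrupt(png):
            print(png)
```

Running it against the training folder (e.g. `python find_bad_meta.py F:\train\lt`, with a hypothetical script name) prints the paths of images whose embedded metadata no longer parses, which should match the images that otherwise have to be found by trial and error.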
=========================================
01/19/2026 11:32:16 MainProcess _training multithreading start DEBUG Starting thread 1 of 1: '_run_2'
01/19/2026 11:32:16 MainProcess _run_2 generator _minibatch DEBUG Loading minibatch generator: (image_count: 849, do_shuffle: True)
01/19/2026 11:32:16 MainProcess _training multithreading start DEBUG Started all threads '_run_2': 1
01/19/2026 11:32:16 MainProcess _training generator init DEBUG Initialized Feeder:
01/19/2026 11:32:16 MainProcess _training lr_warmup init DEBUG Initialized LearningRateWarmup(model=, target_learning_rate=3.5000000934815034e-05, steps=0) [current_lr: 0.0, current_step: 0, reporting_points: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
01/19/2026 11:32:16 MainProcess _training state add_session_batchsize DEBUG Adding session batch size: 10
01/19/2026 11:32:16 MainProcess _training training _set_tensorboard DEBUG Enabling TensorBoard Logging
01/19/2026 11:32:16 MainProcess _training training _set_tensorboard DEBUG Setting up TensorBoard Logging
01/19/2026 11:32:16 MainProcess _training tensorboard init DEBUG Initializing TorchTensorBoard(log_dir='F:\train\lt\dfaker_logs\session_231', write_graph=True, update_freq='batch', class=<class 'lib.training.tensorboard.TorchTensorBoard'>)
01/19/2026 11:32:16 MainProcess _training tensorboard init DEBUG Initialized TorchTensorBoard
01/19/2026 11:32:16 MainProcess _training logger verbose VERBOSE Enabled TensorBoard Logging
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initializing Samples: model: '<plugins.train.model.dfaker.Model object at 0x0000019403413B60>', coverage_ratio: 0.8, mask_opacity: 30, mask_color: #7ba758)
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initialized Samples
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initializing Timelapse: model: <plugins.train.model.dfaker.Model object at 0x0000019403413B60>, coverage_ratio: 0.8, image_count: 14, mask_opacity: 30, mask_color: #7ba758, feeder: <lib.training.generator.Feeder object at 0x00000194036EDBE0>, image_paths: 2)
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initializing Samples: model: '<plugins.train.model.dfaker.Model object at 0x0000019403413B60>', coverage_ratio: 0.8, mask_opacity: 30, mask_color: #7ba758)
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initialized Samples
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initialized Timelapse
01/19/2026 11:32:16 MainProcess _training training init DEBUG Initialized Trainer
01/19/2026 11:32:16 MainProcess _training train _load_trainer DEBUG Loaded Trainer
01/19/2026 11:32:16 MainProcess _training train _run_training_cycle DEBUG Running Training Cycle
01/19/2026 11:32:16 MainProcess _run cache _validate_version DEBUG Setting initial extract version: 2.4
01/19/2026 11:32:16 MainProcess _run_0 cache _validate_version DEBUG Setting initial extract version: 2.3
01/19/2026 11:32:17 MainProcess _training tensorboard _write_keras_model_train_graph DEBUG Tensorboard graph logging not yet implemented
01/19/2026 11:32:17 MainProcess _training train _run_training_cycle DEBUG Saving (save_iterations: True, save_now: False) Iteration: (iteration: 1)
01/19/2026 11:32:17 MainProcess _training io save DEBUG Backing up and saving models
01/19/2026 11:32:17 MainProcess _training io save INFO Saving Model...
01/19/2026 11:32:17 MainProcess _training io _remove_optimizer DEBUG Removed optimizer for saving: <keras.src.optimizers.loss_scale_optimizer.LossScaleOptimizer object at 0x000001940358F620>
01/19/2026 11:32:18 MainProcess _training attrs create DEBUG Creating converter from 5 to 3
01/19/2026 11:32:25 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training'
Traceback (most recent call last):
File "C:\Users\osma\faceswap\lib\image.py", line 340, in read_image
metadata = T.cast("PNGHeaderDict", png_read_meta(raw_file))
~~~~~~~~~~~~~^^^^^^^^^^
File "C:\Users\osma\faceswap\lib\image.py", line 839, in png_read_meta
retval = literal_eval(value[4:].decode("utf-8", errors="ignore"))
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 110, in literal_eval
return _convert(node_or_string)
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 99, in _convert
return dict(zip(map(_convert, node.keys),
map(_convert, node.values)))
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 99, in _convert
return dict(zip(map(_convert, node.keys),
map(_convert, node.values)))
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 109, in _convert
return _convert_signed_num(node)
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 83, in _convert_signed_num
return _convert_num(node)
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 74, in _convert_num
_raise_malformed_node(node)
~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 71, in _raise_malformed_node
raise ValueError(msg + f': {node!r}')
ValueError: malformed node or string on line 1: <ast.Call object at 0x000001948AAC9090>
During handling of the above exception, another exception occurred:
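The ValueError itself is informative: `ast.literal_eval` only accepts Python literals (strings, numbers, tuples, lists, dicts, sets, booleans, None), so any call expression inside the stored metadata string raises exactly this "malformed node or string ... ast.Call" error. A minimal reproduction, with a made-up payload for illustration:

```python
from ast import literal_eval

literal_eval("{'landmarks_xy': [[1.0, 2.0]]}")         # parses: plain literals only
literal_eval("{'landmarks_xy': array([[1.0, 2.0]])}")  # raises ValueError: malformed
# node or string: <ast.Call ...> -- "array(...)" is a call, not a literal. One
# plausible cause is an un-serialised repr like this being written into the
# PNG header when the landmarks are edited manually.
```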