Error while reading image. This can be caused by special characters in the filename or a corrupt image file #1521

@vlccdl

Description

After adjusting landmarks with the manual alignment tool, some of the output face images trigger the error below during training. From what I can tell, the affected images are mostly ones where the initial landmarks were wrong or not detected at all, and the correct landmarks were only obtained through manual editing.

Deleting the affected images allows training to proceed normally; however, re-editing their landmarks with the manual alignment tool does not fix them.
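For anyone who wants to find the affected files before training: the call chain in the traceback below goes through `lib.image.read_image_batch(filenames, with_metadata=True)`, so the same helper can be used to list every face whose embedded metadata is unreadable. A minimal sketch, assuming it is run from the faceswap repo root inside the faceswap conda environment (the script name and the wrapper function are my own; only `read_image_batch` and its arguments come from the traceback):

```python
# scan_bad_meta.py -- diagnostic sketch (hypothetical helper, not part of faceswap).
# Run from the faceswap repo root, inside the faceswap conda environment, so that
# lib.image is importable. It pushes each extracted face through the same
# read_image_batch(..., with_metadata=True) call the training feeder uses and
# lists every file whose embedded PNG metadata cannot be parsed.
import sys
from pathlib import Path

from lib.image import read_image_batch  # the helper shown in the traceback


def find_unreadable_faces(faces_dir: str) -> list[Path]:
    """Return every face image that raises the 'Error while reading image' exception."""
    bad: list[Path] = []
    for png in sorted(Path(faces_dir).glob("*.png")):
        try:
            read_image_batch([str(png)], with_metadata=True)
        except Exception as err:  # faceswap raises a plain Exception for bad metadata
            print(f"UNREADABLE: {png}\n    {err}")
            bad.append(png)
    return bad


if __name__ == "__main__":
    # usage: python scan_bad_meta.py "F:\fc\workspace\video"
    broken = find_unreadable_faces(sys.argv[1])
    print(f"{len(broken)} unreadable face image(s) found")
```

Deleting (or re-extracting) whatever this lists should be equivalent to the manual deletion described above.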

=========================================
01/19/2026 11:32:16 MainProcess _training multithreading start DEBUG Starting thread 1 of 1: '_run_2'
01/19/2026 11:32:16 MainProcess _run_2 generator _minibatch DEBUG Loading minibatch generator: (image_count: 849, do_shuffle: True)
01/19/2026 11:32:16 MainProcess _training multithreading start DEBUG Started all threads '_run_2': 1
01/19/2026 11:32:16 MainProcess _training generator init DEBUG Initialized Feeder:
01/19/2026 11:32:16 MainProcess _training lr_warmup init DEBUG Initialized LearningRateWarmup(model=, target_learning_rate=3.5000000934815034e-05, steps=0) [current_lr: 0.0, current_step: 0, reporting_points: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
01/19/2026 11:32:16 MainProcess _training state add_session_batchsize DEBUG Adding session batch size: 10
01/19/2026 11:32:16 MainProcess _training training _set_tensorboard DEBUG Enabling TensorBoard Logging
01/19/2026 11:32:16 MainProcess _training training _set_tensorboard DEBUG Setting up TensorBoard Logging
01/19/2026 11:32:16 MainProcess _training tensorboard init DEBUG Initializing TorchTensorBoard(log_dir='F:\train\lt\dfaker_logs\session_231', write_graph=True, update_freq='batch', class=<class 'lib.training.tensorboard.TorchTensorBoard'>)
01/19/2026 11:32:16 MainProcess _training tensorboard init DEBUG Initialized TorchTensorBoard
01/19/2026 11:32:16 MainProcess _training logger verbose VERBOSE Enabled TensorBoard Logging
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initializing Samples: model: '<plugins.train.model.dfaker.Model object at 0x0000019403413B60>', coverage_ratio: 0.8, mask_opacity: 30, mask_color: #7ba758)
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initialized Samples
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initializing Timelapse: model: <plugins.train.model.dfaker.Model object at 0x0000019403413B60>, coverage_ratio: 0.8, image_count: 14, mask_opacity: 30, mask_color: #7ba758, feeder: <lib.training.generator.Feeder object at 0x00000194036EDBE0>, image_paths: 2)
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initializing Samples: model: '<plugins.train.model.dfaker.Model object at 0x0000019403413B60>', coverage_ratio: 0.8, mask_opacity: 30, mask_color: #7ba758)
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initialized Samples
01/19/2026 11:32:16 MainProcess _training _display init DEBUG Initialized Timelapse
01/19/2026 11:32:16 MainProcess _training training init DEBUG Initialized Trainer
01/19/2026 11:32:16 MainProcess _training train _load_trainer DEBUG Loaded Trainer
01/19/2026 11:32:16 MainProcess _training train _run_training_cycle DEBUG Running Training Cycle
01/19/2026 11:32:16 MainProcess _run cache _validate_version DEBUG Setting initial extract version: 2.4
01/19/2026 11:32:16 MainProcess _run_0 cache _validate_version DEBUG Setting initial extract version: 2.3
01/19/2026 11:32:17 MainProcess _training tensorboard _write_keras_model_train_graph DEBUG Tensorboard graph logging not yet implemented
01/19/2026 11:32:17 MainProcess _training train _run_training_cycle DEBUG Saving (save_iterations: True, save_now: False) Iteration: (iteration: 1)
01/19/2026 11:32:17 MainProcess _training io save DEBUG Backing up and saving models
01/19/2026 11:32:17 MainProcess _training io save INFO Saving Model...
01/19/2026 11:32:17 MainProcess _training io _remove_optimizer DEBUG Removed optimizer for saving: <keras.src.optimizers.loss_scale_optimizer.LossScaleOptimizer object at 0x000001940358F620>
01/19/2026 11:32:18 MainProcess _training attrs create DEBUG Creating converter from 5 to 3

01/19/2026 11:32:25 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training'
Traceback (most recent call last):
File "C:\Users\osma\faceswap\lib\image.py", line 340, in read_image
metadata = T.cast("PNGHeaderDict", png_read_meta(raw_file))
~~~~~~~~~~~~~^^^^^^^^^^
File "C:\Users\osma\faceswap\lib\image.py", line 839, in png_read_meta
retval = literal_eval(value[4:].decode("utf-8", errors="ignore"))
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 110, in literal_eval
return _convert(node_or_string)
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 99, in _convert
return dict(zip(map(_convert, node.keys),
map(_convert, node.values)))
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 99, in _convert
return dict(zip(map(_convert, node.keys),
map(_convert, node.values)))
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 109, in _convert
return _convert_signed_num(node)
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 83, in _convert_signed_num
return _convert_num(node)
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 74, in _convert_num
_raise_malformed_node(node)
~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\ast.py", line 71, in _raise_malformed_node
raise ValueError(msg + f': {node!r}')
ValueError: malformed node or string on line 1: <ast.Call object at 0x000001948AAC9090>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\osma\faceswap\lib\cli\launcher.py", line 192, in execute_script
process.process()
~~~~~~~~~~~~~~~^^
File "C:\Users\osma\faceswap\scripts\train.py", line 204, in process
self._end_thread(thread, err)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "C:\Users\osma\faceswap\scripts\train.py", line 244, in _end_thread
thread.join()
~~~~~~~~~~~^^
File "C:\Users\osma\faceswap\lib\multithreading.py", line 226, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\osma\faceswap\lib\multithreading.py", line 102, in run
self._target(*self._args, **self._kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\osma\faceswap\scripts\train.py", line 271, in _training
raise err
File "C:\Users\osma\faceswap\scripts\train.py", line 261, in _training
self._run_training_cycle(trainer)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "C:\Users\osma\faceswap\scripts\train.py", line 353, in _run_training_cycle
trainer.train_one_step(viewer, timelapse)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "C:\Users\osma\faceswap\plugins\train\training.py", line 241, in train_one_step
loss = self.train_one_batch()
File "C:\Users\osma\faceswap\plugins\train\training.py", line 182, in train_one_batch
inputs, targets = self._feeder.get_batch()
~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\osma\faceswap\lib\training\generator.py", line 838, in get_batch
side_feed, side_targets = next(self._feeds[side])
~~~~^^^^^^^^^^^^^^^^^^^
File "C:\Users\osma\faceswap\lib\multithreading.py", line 298, in iterator
self.check_and_raise_error()
~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\osma\faceswap\lib\multithreading.py", line 175, in check_and_raise_error
raise error[1].with_traceback(error[2])
File "C:\Users\osma\faceswap\lib\multithreading.py", line 102, in run
self._target(*self._args, **self._kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\osma\faceswap\lib\multithreading.py", line 281, in _run
for item in self.generator(*self._gen_args, **self._gen_kwargs):
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\osma\faceswap\lib\training\generator.py", line 214, in _minibatch
retval = self._process_batch(img_paths)
File "C:\Users\osma\faceswap\lib\training\generator.py", line 328, in _process_batch
raw_faces, detected_faces = self._get_images_with_meta(filenames)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "C:\Users\osma\faceswap\lib\training\generator.py", line 238, in _get_images_with_meta
raw_faces = self._face_cache.cache_metadata(filenames)
File "C:\Users\osma\faceswap\lib\training\cache.py", line 570, in cache_metadata
batch, metadata = self._get_batch_with_metadata(filenames)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "C:\Users\osma\faceswap\lib\training\cache.py", line 507, in _get_batch_with_metadata
batch, metadata = read_image_batch(filenames, with_metadata=True)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\osma\faceswap\lib\image.py", line 433, in read_image_batch
result = T.cast(np.ndarray | tuple[np.ndarray, "PNGHeaderDict"], future.result())
~~~~~~~~~~~~~^^
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\concurrent\futures_base.py", line 449, in result
return self.__get_result()
~~~~~~~~~~~~~~~~~^^
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\concurrent\futures_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\osma\MiniConda3\envs\faceswap\Lib\concurrent\futures\thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\osma\faceswap\lib\image.py", line 358, in read_image
raise Exception(msg)
Exception: Error while reading image. This can be caused by special characters in the filename or a corrupt image file: 'F:\fc\workspace\video\video_000521_0.png'. Original error message: malformed node or string on line 1: <ast.Call object at 0x000001948AAC9090>
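What the traceback suggests (my interpretation, not confirmed by the developers): `png_read_meta` passes the embedded alignment string to `ast.literal_eval`, which only accepts plain Python literals. If the manually edited alignments were written back with a non-literal representation, for example a numpy `array(...)` repr, `literal_eval` fails with exactly this "malformed node or string ... <ast.Call ...>" error. A minimal stand-alone repro (the `array(...)` payload is an assumption about the cause, not something taken from the affected files):

```python
from ast import literal_eval

good = "{'landmarks_xy': [[1, 2], [3, 4]]}"        # plain literals: parses fine
bad = "{'landmarks_xy': array([[1, 2], [3, 4]])}"  # contains a call node, not a literal

print(literal_eval(good))
try:
    literal_eval(bad)
except ValueError as err:
    print(err)  # malformed node or string on line 1: <ast.Call object at 0x...>
```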
