Description
Hi, I converted the models to ONNX but could only use the inpainting part, not the line and edge extraction (I'm working with the DNN module of OpenCV 4.10 in C++). I can produce lines and edges with OpenCV functions, and feeding them to the model works pretty well, but not every time.
I am wondering why the colors of the output are sometimes so messed up. My impression is that the larger the "active" mask surface, the more faithful the output colors are. If the mask contains only a few pixels, the colors are badly distorted: they tend to be much more saturated than in the source image.
So, is there some kind of normalization of the input image that could be linked to the mask? I didn't find anything like that in your code.
Or maybe the ONNX conversion is the cause?