Description
Ticket created after a discussion in this group chat: https://teams.microsoft.com/l/message/19:[email protected]/1764195675772?context=%7B%22contextType%22%3A%22chat%22%7D
To be fair, Galen already has code that does QC on the first-stage compression video, while Bruno's code assumes the video is twice-compressed. Keeping this ticket open to retain the information.
Per Galen:
But there are other issues. Really, the luma of YUV videos should be in the "standard" (limited) range (16 to 235) at the output of the second-stage encoder, whereas we developed the first-stage encoder to use the full range (0 to 255) to reduce quantization noise before the data are gamma-encoded.
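For reference, a minimal sketch of the scaling between full-range and limited-range 8-bit luma, assuming a plain linear remap of code values (these helpers are hypothetical and not part of either encoder's actual code):

```python
import numpy as np

def full_to_limited_range_luma(y_full: np.ndarray) -> np.ndarray:
    """Map full-range 8-bit luma (0-255) into the standard/limited range (16-235).

    Hypothetical helper for illustration; the repo's first- and second-stage
    encoders may handle this conversion differently.
    """
    y = y_full.astype(np.float32)
    y_limited = 16.0 + y * (235.0 - 16.0) / 255.0  # linear remap of code values
    return np.clip(np.round(y_limited), 16, 235).astype(np.uint8)

def limited_to_full_range_luma(y_limited: np.ndarray) -> np.ndarray:
    """Inverse remap: expand limited-range luma (16-235) back to full range (0-255)."""
    y = y_limited.astype(np.float32)
    y_full = (y - 16.0) * 255.0 / (235.0 - 16.0)
    return np.clip(np.round(y_full), 0, 255).astype(np.uint8)
```

The limited range offers only 220 luma code values instead of 256, so using the full range shrinks the effective quantization step by roughly 255/220, which is the noise reduction the first-stage encoder is after.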
However, you will not be able to see that if your data are automatically converted (through an unknown conversion path) to RGB (presumably 0 to 255). Worse, if your data are unusual in some way, like the first-stage encoder's output, which uses the full range, then how those values are converted to RGB is entirely implementation-dependent. Is the full range properly detected and honored? Or is it ignored, so that values 0-15 and 236-255 now appear saturated? Worse yet, many videos carry no color-space metadata at all and might, for example, use the full range (0-255) without properly advertising it, in which case it is really not clear how OpenCV or any other consumer should interpret luma values in 0-15 and 236-255.
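As a rough illustration of the kind of QC check this suggests, here is a sketch (assuming ffprobe/ffmpeg are on PATH and the video decodes to 8-bit 4:2:0) that reads the advertised color range and counts luma samples outside 16-235 directly from the decoded Y plane, bypassing any YUV-to-RGB conversion path:

```python
import json
import subprocess
from typing import Optional

import numpy as np

def probe_color_range(path: str) -> Optional[str]:
    """Return the advertised color_range of the first video stream
    ('tv' = limited, 'pc' = full), or None if no range metadata is present."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=color_range", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    streams = json.loads(out).get("streams", [])
    return streams[0].get("color_range") if streams else None

def luma_fraction_outside_limited_range(path: str, width: int, height: int,
                                        max_frames: int = 30) -> float:
    """Fraction of Y-plane samples below 16 or above 235 in the first frames.

    Raw planes are dumped straight from the decoder (no pixel-format conversion
    is requested), so the values reflect what the encoder actually wrote rather
    than what an RGB conversion path would make of them. Assumes the decoded
    format is 8-bit 4:2:0 (yuv420p / yuvj420p).
    """
    proc = subprocess.Popen(
        ["ffmpeg", "-v", "error", "-i", path,
         "-frames:v", str(max_frames), "-f", "rawvideo", "-"],
        stdout=subprocess.PIPE,
    )
    frame_bytes = width * height * 3 // 2  # 4:2:0 = Y plane + quarter-size U and V
    outside = total = 0
    while True:
        buf = proc.stdout.read(frame_bytes)
        if len(buf) < frame_bytes:
            break
        y = np.frombuffer(buf, dtype=np.uint8)[: width * height]  # Y plane only
        outside += int(np.count_nonzero((y < 16) | (y > 235)))
        total += y.size
    proc.stdout.close()
    proc.wait()
    return outside / total if total else 0.0
```

A file that carries no color_range metadata yet has a sizable fraction of luma samples outside 16-235 is exactly the ambiguous case described above: full-range content that does not advertise itself as such.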