Advice for inverse rendering camera photos #1674
Unanswered
test3211234 asked this question in Q&A
Replies: 1 comment 1 reply
-
Hi @test3211234 Ideally, you would want to model all of these steps inside your code/setup, so that the system can differentiate through them. But you could also, as you mentioned, process the reference image to try to revert these effects.
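The "model the steps inside your setup" idea can be sketched as a tiny differentiable forward ISP applied to the linear render before computing the loss. Everything here is illustrative: the stage order, the white-balance gains, the color matrix, and the gamma value are assumptions, not the actual phone pipeline, and in practice you would implement this with the renderer's autodiff framework (e.g. whatever Mitsuba/Dr.Jit or PyTorch setup you are using) so gradients flow through it.

```python
import numpy as np

# Hypothetical minimal ISP forward model. The parameter values below are
# placeholders for illustration, not a real device's calibration.
def isp_forward(linear_rgb, wb_gains=(2.0, 1.0, 1.5), gamma=1.0 / 2.2):
    """Map a linear rendered image of shape (H, W, 3) to a display-referred one."""
    x = linear_rgb * np.asarray(wb_gains)        # white balance
    ccm = np.array([[ 1.6, -0.4, -0.2],          # toy color-correction matrix
                    [-0.3,  1.5, -0.2],
                    [-0.1, -0.5,  1.6]])
    x = np.clip(x @ ccm.T, 0.0, 1.0)             # color correction + clipping
    return x ** gamma                            # gamma / tone curve

# Compare the *processed* render against the phone photo, so the optimizer
# never sees the domain gap between linear radiance and an sRGB JPEG.
def isp_loss(linear_render, reference_srgb):
    return float(np.mean((isp_forward(linear_render) - reference_srgb) ** 2))
```

With a real autodiff backend, the ISP parameters (gains, matrix, gamma) could themselves be optimized jointly with the scene if they are unknown.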
-
So, smartphone photos look very different from a regular 3D render. They have compression artifacts plus everything that happens during the RAW-to-RGB conversion (the ISP), like demosaicing, the vendor's own processing algorithms, and noise. So how does one actually optimize for inverse rendering given this difference? Do we want a neural net to somehow undo whatever the ISP did?
For example, look at this image from an iPhone 11 (which I denoised, inaccurately, just to remove that variable):
You can see the color inconsistency, smoothness at color transitions, and artifacts.
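The other direction mentioned in the question, "un-processing" the reference photo back toward linear RGB, can be roughly sketched as inverting the easy stages. The gamma and white-balance values below are guesses (assumptions for illustration); demosaicing, denoising, and local tone mapping are not cleanly invertible, so this can only ever be approximate.

```python
import numpy as np

# Rough approximate linearization of a display-referred phone image.
# Gamma and white-balance gains are assumed, not recovered from the device.
def approx_linearize(srgb, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
    """Undo an assumed gamma curve and white balance on an (H, W, 3) image."""
    x = np.clip(srgb, 0.0, 1.0) ** gamma     # undo display gamma
    return x / np.asarray(wb_gains)          # undo assumed white-balance gains
```

Since the remaining ISP effects (sharpening, local tone mapping, compression) cannot be inverted exactly, comparing in this linearized space still leaves a residual domain gap, which is why modeling the ISP in the forward direction is usually the cleaner option.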