Replies: 1 comment
-
Hi @a-3wais, we do not provide any utility functions for something like this, and I don't think we have an example of a similar computation either. I can't tell you exactly what's wrong with your setup; however, debugging it should be pretty easy.
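One easy way to do that kind of debugging is to check each matrix in the chain on its own before combining them. A minimal sketch; the matrices below are placeholders, substitute the cam2world and projection matrices from the actual setup:

```python
import numpy as np

# Placeholders; replace with the cam2world and projection matrices from your setup.
cam2world = np.array([[1.0, 0.0, 0.0,  2.0],
                      [0.0, 1.0, 0.0,  1.0],
                      [0.0, 0.0, 1.0, -5.0],
                      [0.0, 0.0, 0.0,  1.0]])
P = np.array([[256.0,   0.0, 128.0, 0.0],   # toy projection straight to pixel coordinates
              [  0.0, 256.0, 128.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])

# Check 1: cam2world applied to the camera-space origin must give the camera's
# world position (here (2, 1, -5)); if not, the matrix is inverted or transposed.
print(cam2world @ np.array([0.0, 0.0, 0.0, 1.0]))

# Check 2: a point straight ahead of the camera must land at the image centre
# after the perspective divide; if not, the projection convention is off.
q = P @ np.array([0.0, 0.0, 3.0, 1.0])
print(q[:2] / q[2])                         # expect ~ (128, 128)
```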
-
Hi!
I have a bunch of RGBA images and depth maps of the same scene that were rendered by the same camera from different locations (same intrinsics, different extrinsics) using the AOV integrator.
I used something like this snippet for rendering the dataset.
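Roughly, the setup looks like this (a minimal sketch; the scene contents, resolution, and FOV here are just placeholders):

```python
import mitsuba as mi

mi.set_variant('scalar_rgb')   # or 'cuda_ad_rgb', depending on the build

scene = mi.load_dict({
    'type': 'scene',
    # Wrap the actual integrator in an AOV integrator that also writes per-pixel depth.
    'integrator': {
        'type': 'aov',
        'aovs': 'dd:depth',
        'my_image': {'type': 'path'},
    },
    'sensor': {
        'type': 'perspective',
        'fov': 45,
        'to_world': mi.ScalarTransform4f.look_at(
            origin=[0, 0, -5], target=[0, 0, 0], up=[0, 1, 0]),
        'film': {'type': 'hdrfilm', 'width': 256, 'height': 256,
                 'pixel_format': 'rgba'},
    },
    'shape': {'type': 'sphere'},
    'light': {'type': 'constant'},
})

img = mi.render(scene, spp=64)   # TensorXf; AOV channels are appended after the RGBA channels
depth = img[:, :, -1]            # per-pixel depth from the 'dd:depth' AOV
```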
I have the cam2world matrix by doing this
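Something along these lines, reading the to-world transform off the sensor (continuing from the rendering sketch above; the exact call may differ by version):

```python
# Continuing from the rendering sketch above.
sensor = scene.sensors()[0]
cam2world = sensor.world_transform()    # mi.Transform4f: camera space -> world space
world2cam = cam2world.inverse()

# Quick sanity check: the camera-space origin should map to the camera's world position.
print(cam2world @ mi.Point3f(0, 0, 0))
```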
And the projection matrix by doing this, which gives this matrix; let's call it P.
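Equivalently, a plain pinhole intrinsics matrix K that maps camera space directly to pixel coordinates can be built by hand from the FOV and film size (the numbers below are placeholders; I use this K in the sketch further down):

```python
import numpy as np

width, height = 256, 256                   # film resolution (placeholder)
fov_x = np.deg2rad(45.0)                   # horizontal field of view (placeholder)

fx = width / (2.0 * np.tan(fov_x / 2.0))   # focal length expressed in pixels
fy = fx                                    # square pixels
cx, cy = width / 2.0, height / 2.0         # principal point at the image centre

# Pinhole intrinsics: maps a camera-space point (x, y, z) to homogeneous pixel coordinates.
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
```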
Now I want to get the 3D world coordinates from the depth image and then reproject them to another camera (for a kind of consistency loss I am developing).
I understand that the projection from 3D to 2D was done as

    2D_depth = cam2world x P x 3D_point

where 2D_depth is [x, y, depth] and x, y are the pixel indices.
So I am doing the reverse via

    P_inv x cam2world_inv x 2D_depth = 3D_point
but I am not getting the correct values; they are far from the original values, not even in the same range. What exactly am I doing wrong here? Is there a better and easier way to do the inverse projection? For example, PyTorch3D provides this utility function.
Plugging the same numbers into the PyTorch3D function also doesn't give the correct world coordinates; different conventions, I suppose.
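For concreteness, here is a minimal NumPy sketch of the pixel-plus-depth unprojection and reprojection I am after, using the hand-built K from above (the extrinsics and the depth map are placeholders, and the depth is assumed to be the distance from the camera along the ray, which I believe is what the depth AOV stores):

```python
import numpy as np

# Placeholder inputs; substitute the real intrinsics, extrinsics and depth map.
width, height = 256, 256
K = np.array([[256.0, 0.0, 128.0],
              [0.0, 256.0, 128.0],
              [0.0, 0.0, 1.0]])                        # pinhole intrinsics in pixels
cam2world_a = np.eye(4)                                # camera A -> world
cam2world_b = np.eye(4)                                # camera B -> world
depth_a = np.full((height, width), 3.0)                # depth map rendered from camera A

# Build a camera-space ray direction for every pixel of camera A.
ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
pix = np.stack([xs + 0.5, ys + 0.5, np.ones_like(xs)], axis=-1)  # homogeneous pixel coords
rays = pix @ np.linalg.inv(K).T                                  # camera-space directions, z = 1
rays /= np.linalg.norm(rays, axis=-1, keepdims=True)             # depth measured along the ray
# (If the depth map stores z-depth instead, drop the normalisation above.)
# (Depending on the renderer's camera convention, x and/or y may also need flipping.)

# Unproject: pixel + depth -> camera A space -> world space.
pts_cam_a = rays * depth_a[..., None]
pts_h = np.concatenate([pts_cam_a, np.ones((height, width, 1))], axis=-1)
pts_world = pts_h @ cam2world_a.T

# Reproject the world points into camera B.
pts_cam_b = pts_world @ np.linalg.inv(cam2world_b).T
uvw = pts_cam_b[..., :3] @ K.T
uv_b = uvw[..., :2] / uvw[..., 2:3]     # pixel coordinates of the same points in camera B
print(uv_b[height // 2, width // 2])    # e.g. inspect the centre pixel
```

The conventions that have to match here are whether depth is measured along the ray or along the z axis, where the pixel origin sits, and which way the camera's x and y axes point.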