I am just wondering why the RGB-D pipeline uses a virtual baseline to find the corresponding feature points on the "right" side.
The code at this line calculates $f_x \cdot b$ (`fx_b`), this line uses it to compute the disparity, and this line then obtains `uR` for `right_point`.
Since the depth image is registered, the depth of each point is already known, so why compute the pixel position of the point in the "right" image at all?
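For context, here is my understanding of the computation I am asking about, as a minimal sketch (the variable names, example values, and the helper function are my own, not taken from the actual code):

```python
# Sketch of the virtual-right-coordinate computation as I understand it:
# with a (virtual) baseline b, a point at depth z has disparity d = fx*b / z,
# so its column in the virtual right image is uR = uL - d.

fx = 525.0       # focal length in pixels (example value, my assumption)
b = 0.08         # virtual baseline in meters (example value, my assumption)
fx_b = fx * b    # the fx*b product, precomputed once

def virtual_right_u(u_left: float, depth: float) -> float:
    """Column of a registered-depth point in the virtual right image."""
    disparity = fx_b / depth   # d = fx*b / z
    return u_left - disparity  # uR = uL - d

# e.g. a feature at uL = 320 px with depth 2 m:
uR = virtual_right_u(320.0, 2.0)
print(uR)
```

So the depth is already enough to recover `uR`; my question is what the pipeline gains by materializing this virtual right coordinate instead of using the depth directly.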
Thanks.