Extracting 3D coordinates given 2D image points, a depth map, and the camera calibration matrix

2023-09-08 01:07:12 · Author: 失去你ヽ我该怎样呼吸


I have a set of 2D image keypoints that are outputted from the OpenCV FAST corner detection function. Using an Asus Xtion I also have a time-synchronised depth map with all camera calibration parameters known. Using this information I would like to extract a set of 3D coordinates (point cloud) in OpenCV.

Can anyone give me any pointers regarding how to do so? Thanks in advance!

Solution

Nicolas Burrus has created a great tutorial for depth sensors like the Kinect.

http://nicolas.burrus.name/index.php/Research/KinectCalibration

I'll copy & paste the most important parts:

Mapping depth pixels with color pixels

The first step is to undistort the RGB and depth images using the estimated distortion coefficients. Then, using the depth camera intrinsics, each pixel (x_d, y_d) of the depth camera can be projected into metric 3D space using the following formula:

P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x_d,y_d)

with fx_d, fy_d, cx_d and cy_d the intrinsics of the depth camera.
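
In OpenCV this back-projection takes only a few lines. Below is a minimal sketch (the helper name and signature are mine, not from the tutorial); it assumes the keypoints were detected on the already-undistorted depth image, that the depth map is a CV_32F image in metres (raw Xtion/Kinect depth typically comes in millimetres and must be scaled first), and that invalid pixels hold zero.

#include <opencv2/core.hpp>
#include <vector>

// Hypothetical helper: back-project FAST keypoints to 3D using the
// depth camera intrinsics fx_d, fy_d, cx_d, cy_d from the formula above.
std::vector<cv::Point3d> backprojectKeypoints(const std::vector<cv::KeyPoint>& keypoints,
                                              const cv::Mat& depth, // CV_32F, metres
                                              double fx_d, double fy_d,
                                              double cx_d, double cy_d)
{
    std::vector<cv::Point3d> cloud;
    cloud.reserve(keypoints.size());
    for (const cv::KeyPoint& kp : keypoints) {
        const int x_d = cvRound(kp.pt.x);
        const int y_d = cvRound(kp.pt.y);
        const float z = depth.at<float>(y_d, x_d); // depth(x_d, y_d)
        if (z <= 0.0f) continue;                   // skip missing depth readings
        cloud.emplace_back((x_d - cx_d) * z / fx_d, // P3D.x
                           (y_d - cy_d) * z / fy_d, // P3D.y
                           z);                      // P3D.z
    }
    return cloud;
}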

If you are further interested in stereo mapping (values for the Kinect):

We can then reproject each 3D point onto the color image and get its color:

P3D' = R.P3D + T
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb

with R and T the rotation and translation parameters estimated during the stereo calibration.
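
A minimal sketch of this reprojection step, again with a hypothetical helper name of my own; it assumes R and T map depth-camera coordinates into the RGB camera frame, as above. Round and bounds-check the result before sampling the color image.

#include <opencv2/core.hpp>

// Hypothetical helper: project a 3D point from the depth camera frame
// into the RGB image using R, T and the RGB intrinsics.
cv::Point2d projectToRgb(const cv::Point3d& p3d,
                         const cv::Matx33d& R, const cv::Vec3d& T,
                         double fx_rgb, double fy_rgb,
                         double cx_rgb, double cy_rgb)
{
    // P3D' = R.P3D + T : move the point into the RGB camera frame
    const cv::Vec3d p = R * cv::Vec3d(p3d.x, p3d.y, p3d.z) + T;

    // Pinhole projection with the RGB intrinsics
    return cv::Point2d(p[0] * fx_rgb / p[2] + cx_rgb,
                       p[1] * fy_rgb / p[2] + cy_rgb);
}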

The parameters I could estimate for my Kinect are:

Color

fx_rgb 5.2921508098293293e+02 
fy_rgb 5.2556393630057437e+02 
cx_rgb 3.2894272028759258e+02 
cy_rgb 2.6748068171871557e+02 
k1_rgb 2.6451622333009589e-01 
k2_rgb -8.3990749424620825e-01 
p1_rgb -1.9922302173693159e-03 
p2_rgb 1.4371995932897616e-03 
k3_rgb 9.1192465078713847e-01

Depth

fx_d 5.9421434211923247e+02 
fy_d 5.9104053696870778e+02 
cx_d 3.3930780975300314e+02 
cy_d 2.4273913761751615e+02 
k1_d -2.6386489753128833e-01 
k2_d 9.9966832163729757e-01 
p1_d -7.6275862143610667e-04 
p2_d 5.0350940090814270e-03 
k3_d -1.3053628089976321e+00
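
These coefficients are in OpenCV's (k1, k2, p1, p2, k3) order, so the undistortion mentioned in the first step can be done directly with cv::undistort. A minimal sketch for the depth camera, using the values above (variable names are mine):

#include <opencv2/opencv.hpp>

void undistortDepth(const cv::Mat& rawDepth, cv::Mat& undistorted)
{
    // Depth camera matrix and (k1, k2, p1, p2, k3) coefficients from above.
    const cv::Matx33d K_d(5.9421434211923247e+02, 0.0, 3.3930780975300314e+02,
                          0.0, 5.9104053696870778e+02, 2.4273913761751615e+02,
                          0.0, 0.0, 1.0);
    const cv::Vec<double, 5> dist_d(-2.6386489753128833e-01, 9.9966832163729757e-01,
                                    -7.6275862143610667e-04, 5.0350940090814270e-03,
                                    -1.3053628089976321e+00);
    cv::undistort(rawDepth, undistorted, K_d, dist_d);
}

One caveat: cv::undistort interpolates pixel values, which can smear depth discontinuities; for depth maps it is often cleaner to undistort only the keypoint coordinates with cv::undistortPoints instead of resampling the whole image.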

Relative transform between the sensors (in meters)

R [ 9.9984628826577793e-01, 1.2635359098409581e-03, -1.7487233004436643e-02, 
-1.4779096108364480e-03, 9.9992385683542895e-01, -1.2251380107679535e-02,
1.7470421412464927e-02, 1.2275341476520762e-02, 9.9977202419716948e-01 ]

T [ 1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02 ]
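
For completeness, a sketch of how these values could be packed into OpenCV types and combined with the two helpers sketched above; for the Asus Xtion in the question you would substitute your own calibration results.

#include <opencv2/core.hpp>
#include <vector>

// Burrus's Kinect calibration as OpenCV types (R is row-major).
const cv::Matx33d R( 9.9984628826577793e-01,  1.2635359098409581e-03, -1.7487233004436643e-02,
                    -1.4779096108364480e-03,  9.9992385683542895e-01, -1.2251380107679535e-02,
                     1.7470421412464927e-02,  1.2275341476520762e-02,  9.9977202419716948e-01);
const cv::Vec3d T(1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02);

// Usage sketch, assuming keypoints and an undistorted metric depth map:
// std::vector<cv::Point3d> cloud = backprojectKeypoints(keypoints, depth,
//     5.9421434211923247e+02, 5.9104053696870778e+02,
//     3.3930780975300314e+02, 2.4273913761751615e+02);
// for (const cv::Point3d& p : cloud) {
//     cv::Point2d uv = projectToRgb(p, R, T,
//         5.2921508098293293e+02, 5.2556393630057437e+02,
//         3.2894272028759258e+02, 2.6748068171871557e+02);
//     // ... bounds-check uv, then sample the RGB image for this point's color
// }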