opencv - Camera motion from corresponding images


I'm trying to calculate the new camera position based on the motion between corresponding images. The images conform to the pinhole camera model.

As a matter of fact, I don't get useful results, so I'll try to describe my procedure and hope that somebody can help me.

I detect features in the corresponding images with SIFT, match them with OpenCV's FlannBasedMatcher and calculate the fundamental matrix with OpenCV's findFundamentalMat (method RANSAC).
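For reference, a minimal sketch of this matching stage could look like the following (assuming OpenCV 4.4 or newer, where SIFT is part of the main features2d module; the function and variable names are just placeholders):

#include <opencv2/opencv.hpp>

using namespace cv;

// Hypothetical sketch of the matching stage, not the exact code from the question.
void matchAndEstimateF(const Mat& img1, const Mat& img2, Mat& F,
                       std::vector<Point2f>& pts1, std::vector<Point2f>& pts2,
                       std::vector<uchar>& inlierMask)
{
    Ptr<SIFT> sift = SIFT::create();
    std::vector<KeyPoint> kp1, kp2;
    Mat desc1, desc2;
    sift->detectAndCompute(img1, noArray(), kp1, desc1);
    sift->detectAndCompute(img2, noArray(), kp2, desc2);

    FlannBasedMatcher matcher;
    std::vector<DMatch> matches;
    matcher.match(desc1, desc2, matches);

    for (const DMatch& m : matches)
    {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // RANSAC; inlierMask marks the correspondences that survive.
    F = findFundamentalMat(pts1, pts2, FM_RANSAC, 3.0, 0.99, inlierMask);
}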

Then I calculate the essential matrix from the camera intrinsic matrix (K):

Mat E = K.t() * F * K;

I decompose the essential matrix into rotation and translation with singular value decomposition:

SVD decomp = SVD(E);
Matx33d W(0, -1, 0,
          1,  0, 0,
          0,  0, 1);
Matx33d Wt(0,  1, 0,
          -1,  0, 0,
           0,  0, 1);
R1 = decomp.u * Mat(W)  * decomp.vt;
R2 = decomp.u * Mat(Wt) * decomp.vt;
t1 =  decomp.u.col(2); // u3
t2 = -decomp.u.col(2); // u3

Then I try to find the correct solution by triangulation. (This part is from http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/ so I think it should work correctly.)

The new position is then calculated with:

new_pos = old_pos + -R.t()*t;

where new_pos and old_pos are vectors (3x1), R is the rotation matrix (3x3) and t is the translation vector (3x1).

Unfortunately I got no useful results, so maybe someone has an idea what is wrong.

Here are my results (just in case someone can confirm that any of them are definitely wrong):

F = [8.093827077399547e-07, 1.102681999632987e-06, -0.0007939604310854831;
     1.29246107737264e-06, 1.492629957878578e-06, -0.001211264339006535;
     -0.001052930954975217, -0.001278667878010564, 1]

K = [150, 0, 300;
     0, 150, 400;
     0, 0, 1]

E = [0.01821111092414898, 0.02481034499174221, -0.01651092283654529;
     0.02908037424088439, 0.03358417405226801, -0.03397110489649674;
     -0.04396975675562629, -0.05262169424538553, 0.04904210357279387]

t = [0.2970648246214448; 0.7352053067682792; 0.6092828956013705]

R = [0.2048034356172475, 0.4709818957303019, -0.858039396912323;
     -0.8690270040802598, -0.3158728880490416, -0.3808101689488421;
     -0.4503860776474556, 0.8236506374002566, 0.3446041331317597]

First of all you should check whether

x'^T * F * x = 0

holds for your point correspondences x' and x. This should of course be the case for the inliers of the fundamental matrix estimation with RANSAC.
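As an illustration, checking this constraint for a single correspondence could look like the sketch below (pt1 and pt2 are assumed to be one matched inlier pair in pixel coordinates, and F the fundamental matrix from above):

Mat x  = (Mat_<double>(3, 1) << pt1.x, pt1.y, 1.0);   // point in image 1, homogeneous
Mat xp = (Mat_<double>(3, 1) << pt2.x, pt2.y, 1.0);   // point in image 2, homogeneous
double residual = Mat(xp.t() * F * x).at<double>(0, 0);
// For inliers the residual should be close to zero.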

Thereafter, you have to transform your point correspondences to normalized image coordinates (NCC) like this:

xn = inv(K) * x
xn' = inv(K') * x'

where K' is the intrinsic camera matrix of the second image and x' are the points of the second image. I think in your case it is K = K'.
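In code this step could be sketched as follows (again assuming K = K' and one matched pair pt1, pt2 in pixel coordinates; K is assumed to be a CV_64F Mat):

Mat x1h = (Mat_<double>(3, 1) << pt1.x, pt1.y, 1.0);
Mat x2h = (Mat_<double>(3, 1) << pt2.x, pt2.y, 1.0);
Mat Kinv = K.inv();
Mat xn  = Kinv * x1h;   // normalized coordinates, image 1
Mat xnp = Kinv * x2h;   // normalized coordinates, image 2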

With these NCCs you can decompose your essential matrix as you described. You triangulate the normalized camera coordinates and check the depth of your triangulated points. But be careful: in the literature they say that one point is sufficient to get the correct rotation and translation. From my experience you should check a few points, since one point can still be an outlier even after RANSAC.
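A possible way to do this check is sketched below; this is a hypothetical helper, not code from the question, and it assumes R and t are CV_64F matrices while ptsN1/ptsN2 hold the normalized coordinates of a few inliers. You would call it for all four (R, t) candidates and keep the combination that puts the most points in front of both cameras:

// Count how many triangulated points end up in front of both cameras.
int countInFront(const Mat& R, const Mat& t,
                 const std::vector<Point2f>& ptsN1,
                 const std::vector<Point2f>& ptsN2)
{
    Mat P1 = Mat::eye(3, 4, CV_64F);             // first camera:  [I | 0]
    Mat P2(3, 4, CV_64F);                        // second camera: [R | t]
    R.copyTo(P2(Rect(0, 0, 3, 3)));
    t.copyTo(P2(Rect(3, 0, 1, 3)));

    Mat X4;                                      // 4xN homogeneous 3D points
    triangulatePoints(P1, P2, ptsN1, ptsN2, X4);
    X4.convertTo(X4, CV_64F);

    int good = 0;
    for (int i = 0; i < X4.cols; ++i)
    {
        Mat X = X4.col(i) / X4.at<double>(3, i); // dehomogenize
        double z1 = X.at<double>(2);             // depth in the first camera
        double z2 = Mat(P2 * X).at<double>(2);   // depth in the second camera
        if (z1 > 0 && z2 > 0)
            ++good;
    }
    return good;
}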

Before you decompose the essential matrix, make sure that E = U * diag(1,1,0) * Vt. This condition is required to get correct results for the four possible choices of the projection matrix.
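One way to enforce this, sketched with the variable names from the question, is to recompose E from its SVD with the singular values replaced by (1, 1, 0):

SVD decomp(E);
Mat D = Mat::zeros(3, 3, CV_64F);
D.at<double>(0, 0) = 1.0;                 // diag(1, 1, 0)
D.at<double>(1, 1) = 1.0;
E = decomp.u * D * decomp.vt;             // E now satisfies E = U * diag(1,1,0) * Vt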

When you've got the correct rotation and translation you can triangulate all your point correspondences (the inliers of the fundamental matrix estimation with RANSAC). Then, you should compute the reprojection error. First, you compute the reprojected positions like this:

xp = K * P * X
xp' = K' * P' * X

where X is the computed (homogeneous) 3D position. P and P' are the 3x4 projection matrices. The projection matrix P of the first camera is given by the identity, i.e. [I | 0]. P' = [R | t] is given by the rotation matrix in the first 3 columns and rows and the translation in the fourth column, so that P' is a 3x4 matrix. This only works if you transform your 3D position to homogeneous coordinates, i.e. as a 4x1 vector instead of 3x1. Then, xp and xp' are homogeneous coordinates representing the (reprojected) 2D positions of your corresponding points.
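As a rough sketch (assuming K = K', R and t of type CV_64F, and X a triangulated 3D point such as a Point3d):

Mat P  = Mat::eye(3, 4, CV_64F);                       // P  = [I | 0]
Mat Pp(3, 4, CV_64F);                                  // P' = [R | t]
R.copyTo(Pp(Rect(0, 0, 3, 3)));
t.copyTo(Pp(Rect(3, 0, 1, 3)));

Mat Xh = (Mat_<double>(4, 1) << X.x, X.y, X.z, 1.0);   // homogeneous 3D point
Mat xp  = K * P  * Xh;                                 // reprojected point in image 1
Mat xpp = K * Pp * Xh;                                 // reprojected point in image 2 (K' = K)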

I think the line

new_pos = old_pos + -R.t()*t;

is incorrect since, firstly, you only translate the old_pos and do not rotate it and, secondly, you translate it with the wrong vector. The correct way is given above.

So, after you have computed the reprojected points you can calculate the reprojection error. Since you are working with homogeneous coordinates you have to normalize them first (xp = xp / xp(2), i.e. divide by the last coordinate). The error is given by

error = (x(0)-xp(0))^2 + (x(1)-xp(1))^2 

If the error is large, such as 10^2, your intrinsic camera calibration or your rotation/translation is incorrect (perhaps both). Depending on your coordinate system you can try to invert your projection matrices. For that you need to transform them to homogeneous coordinates first, since you cannot invert a 3x4 matrix (without the pseudo inverse). Thus, add the fourth row [0 0 0 1], compute the inverse and remove the fourth row again.
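For example, the 3x4 projection matrix P' from the sketch above could be inverted like this:

Mat P4 = Mat::eye(4, 4, CV_64F);                 // extend to 4x4, last row stays [0 0 0 1]
Pp.copyTo(P4(Rect(0, 0, 4, 3)));
Mat P4inv = P4.inv();
Mat PpInv = P4inv(Rect(0, 0, 4, 3)).clone();     // back to 3x4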

There is one more thing about the reprojection error. In general, the reprojection error is the squared distance between your original point correspondence (in each image) and the reprojected position. You can take the square root to get the Euclidean distance between both points.
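Putting it together for one correspondence (a fragment building on the sketches above, with x1 the original pixel position in the first image and xp its reprojection; std::sqrt needs <cmath>):

xp /= xp.at<double>(2);                          // normalize homogeneous coordinates
double dx = x1.x - xp.at<double>(0);
double dy = x1.y - xp.at<double>(1);
double sqError = dx * dx + dy * dy;              // squared reprojection error
double dist    = std::sqrt(sqError);             // Euclidean distance between both points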

