
  1. CS 395/495-26: Spring 2003 IBMR: Week 4B Chapter 3: Estimation & Accuracy Jack Tumblin jet@cs.northwestern.edu

  2. Reminders • Project A due today (before midnight); homework submission instructions are on the website • Project B assigned today, 4/24 (on the website); due 5/1 (1 week to complete it) • Take-Home Midterm starts 5/1 (next Thurs.); due Thurs. 5/8 (1 week to complete it).

  3. Direct Linear Transform (DLT)
  • For each point pair (xi, xi'), solve Hxi × xi' = 0
  • Robust! Accepts any scale (x3 = w ≠ 1, x3' = w' ≠ 1), finds any H
  • 'Vectorize': make 2 rows of A per point pair
  • Solve Ai h = 0, stacked up:

      [  0    0    0   -w'x  -w'y  -w'w   y'x   y'y   y'w ]  h  =  0
      [ w'x  w'y  w'w    0     0     0   -x'x  -x'y  -x'w ]

    or, written in the book's vector notation (pg 71, eq. 3.1):

      [   0T      -w'i xiT    y'i xiT ]   [ h1 ]
      [ w'i xiT      0T      -x'i xiT ]   [ h2 ]   =  0
                                          [ h3 ]
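A minimal NumPy sketch of building those two rows for one correspondence (the function name dlt_rows and the argument layout are mine, not from the slides):

```python
import numpy as np

def dlt_rows(x, xp):
    """Two rows of A for one correspondence x <-> x' (book notation, pg 71, eq. 3.1).
    x and xp are homogeneous 3-vectors (x, y, w) and (x', y', w')."""
    x = np.asarray(x, dtype=float)
    xp_x, xp_y, xp_w = xp
    zero = np.zeros(3)
    return np.array([
        np.concatenate([zero,       -xp_w * x,  xp_y * x]),   # [  0^T    -w' x^T   y' x^T ]
        np.concatenate([xp_w * x,    zero,     -xp_x * x]),   # [ w' x^T    0^T    -x' x^T ]
    ])
```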

  4. Direct Linear Transform (DLT)
  • Solve Ah = 0 using the Singular Value Decomposition: SVD(A) = USVT
  • Solution: h is V8, the last column of V
  • Assumes the rank of A is 8, so the smallest singular value s8 ≈ 0
  • 'Find Input & Output Axes, Linked by Scale'
    [Figure: Ax = b with A = USVT; the vi are axes of the input (9 dim.) space, the ui are axes of the output (8 dim.) space, and the singular values link them.]
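A sketch of the SVD solve, reusing dlt_rows from above (estimate_h_dlt is my name; note that np.linalg.svd returns V transposed, so the last column of V is the last row of Vt):

```python
def estimate_h_dlt(pairs):
    """DLT: pairs is a list of (x, x') homogeneous 3-vectors. Returns H (3x3, up to scale)."""
    A = np.vstack([dlt_rows(x, xp) for x, xp in pairs])   # 2 rows per point pair
    U, S, Vt = np.linalg.svd(A)
    h = Vt[-1]                       # right singular vector of the smallest singular value
    return h.reshape(3, 3)
```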

  5. Adding More Measurements
  What if we use >4 point correspondences? Quick-and-sloppy:
  • Adds row pairs to our 8x9 matrix A (now 2n x 9): A·h = 0
  • Use SVD to find the (near-)null space (always gives an answer!)
  • Result: a 'least squares' solution that minimizes the 'algebraic distances' εi² between point pairs:

      || A h ||² = ε² = || tall, near-zero vector ||² = Σi εi² = ε1² + ε2² + ε3² + …

    where εi² is the error for the i-th point correspondence:

      εi² = || Hxi × xi' ||² = || (2 rows of A)·h ||² = 'algebraic distance'
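The per-pair algebraic distance can be read directly off the stacked rows; a small sketch using the helpers above (the function name is mine):

```python
def algebraic_errors(H, pairs):
    """eps_i^2 = ||A_i h||^2 for each pair; their sum is ||A h||^2."""
    h = H.reshape(9)
    return [float(np.sum((dlt_rows(x, xp) @ h) ** 2)) for x, xp in pairs]
```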

  6. Adding More Measurements
  • 2D 'algebraic distance'? No geometric meaning!
  • 2D 'geometric distance' d(a,b)² is better: a measurable length in input or output space.
    If a = (a1, a2, a3) and b = (b1, b2, b3), then define

      d(a,b)² = (a1/a3 − b1/b3)² + (a2/a3 − b2/b3)²

  • Turns out that (for P2):  d(a,b) = dalgebraic(a,b) / (a3·b3)
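A one-function sketch of that geometric distance between homogeneous 2D points (geom_dist2 is my name):

```python
def geom_dist2(a, b):
    """Squared geometric distance between homogeneous points a = (a1,a2,a3), b = (b1,b2,b3)."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a1 / a3 - b1 / b3) ** 2 + (a2 / a3 - b2 / b3) ** 2
```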

  7. Adding More Measurements
  Overall Strategy:
  • Overconstrain the answer H: collect extra measurements (>4 point pairs, etc.)
  • Expect errors; use adjustable 'estimates' x̂
  • Compute a 1st solution Ĥ (probably by SVD)
  • Compute the error d(Ĥx̂, x')², and use this to…
  • 'Tweak' the answer Ĥ and the estimates x̂
  • Compute a new answer
  • Stop when error < useful threshold

  8. Error Measures & Corrections
  • One-sided:
    – Perfect inputs xi, flawed outputs xi'
    – Find H to minimize Σi d(Hxi, xi')²
  • Two-sided (symmetric):
    – Flawed inputs xi, flawed outputs xi'
    – Interleave: find the best H, then the best H⁻¹, to minimize Σi d(Hxi, xi')² + Σi d(xi, H⁻¹xi')²
  • Reprojection:
    – Both the inputs xi and the outputs xi' have errors
    – Unify 'estimates' of all of them: x̂i, x̂i' and Ĥ
    – Define Ĥ for perfect matches: Ĥx̂i = x̂i'
    – Minimize estimates vs. real: Σi d(xi, x̂i)² + d(x'i, x̂i')²
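A sketch of the two-sided (symmetric transfer) error using the helpers above (the name is mine; the reprojection error also needs the estimates x̂i, so it is not shown here):

```python
def symmetric_transfer_error(H, pairs):
    """Sum over pairs of d(H x, x')^2 + d(x, H^-1 x')^2 (two-sided geometric error)."""
    H_inv = np.linalg.inv(H)
    return sum(geom_dist2(H @ x, xp) + geom_dist2(x, H_inv @ xp) for x, xp in pairs)
```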

  9. HOW? Reprojection in R4
  • Rearrange each point pair into a 4-vector Xi:
    the i-th point pair xi, xi'  →  Xi = (xi, xi') = [x, y, x', y']T  (assume w = w' = 1)
  • 1) Define an R4 'measurement space' for all the Xi
    – (CAREFUL! this is NOT homogeneous P3: it is true 4-D)
    – it holds all possible point pairs, from perfect to horrid
  • 2) Find the "shape of perfection" for an H in R4:
    – which Xi points in R4 are an error-free fit to H?
    – e.g. find all Xi points where Hxi × xi' = 0
  • 3) Find the nearest point on that shape to each measured Xi, and choose it as the 'estimate' X̂i

  10. Find "Perfect Shape of H" in R4:
  • Xi = [x, y, x', y']T  (assumes w = w' = 1)
  • Rewrite one row of Ah = 0 using Xi and get one multilinear quadric in R4:

      XT Q X + XT P + C = 0,   with   Q = [ 0 0 a b ]      P = [ e ]
                                          [ 0 0 c d ]          [ f ]
                                          [ 0 0 0 0 ]          [ g ]
                                          [ 0 0 0 0 ]          [ h ]

    (a, b, c, d, e, f, g, h, C are scalar constants)
  • H sets 2 rows → 2 quadrics → a 'variety' vH
    ('variety' = a collection of shapes in higher dimensions)
  • "Each H forms a variety in R4"

  11. Find "Perfect Shape of H" in R4:
  • Recall that a point pair Xi sets two rows of Ah = 0:

      [   0T      -w'i xiT    y'i xiT ]   [ h1 ]
      [ w'i xiT      0T      -x'i xiT ]   [ h2 ]   =  0
                                          [ h3 ]

  • Rewrite one row of Ah = 0 using Xi = [x, y, x', y']T (assumes w = w' = 1) and get

      XT Q X + XT P + C = 0

    – a (messy) quadric in R4, but
    – 'multilinear': no variable appears squared (the squared-term coefficients are zero)
    – a, b, c, d, e, f, g, h, C are scalar constants from H. Can you find them?

  12. Find "Perfect Shape of H" in R4:
  (Same slide, with the answer to "can you find them?")
  ! NO ! It's simpler if you include w': augment Xi to [x, y, x', y', w']T, so that the linear terms and the constant fold into the quadric as well.

  13. Find "Perfect Shape of H" in R4:
  • For the i-th point pair, DLT makes 2 rows Ai (plus a third, dependent row)
  • In R4 the point pair is Xi = [x, y, x', y']T (assume w = w' = 1)
  • Augment Xi with w', i.e. X = [x, y, x', y', w']T; then the R4+ quadrics are XT Q X = 0, one per row of

      Ai h  =  [   0T      -w'i xiT    y'i xiT ]   [ h1 ]      [ 0 ]
               [ w'i xiT      0T      -x'i xiT ]   [ h2 ]   =  [ 0 ]
               [ -y'i xiT    x'i xiT      0T   ]   [ h3 ]      [ 0 ]

  14. Find "Perfect Shape of H" in R4:
  • For the i-th point pair, DLT makes the rows Ai:

      Ai h  =  [   0T      -w'i xiT    y'i xiT ]   [ h1 ]      [ 0 ]
               [ w'i xiT      0T      -x'i xiT ]   [ h2 ]   =  [ 0 ]
               [ -y'i xiT    x'i xiT      0T   ]   [ h3 ]      [ 0 ]

  • Write each row as an R4+ quadric XT Qi X = 0, with X = [x, y, x', y', w']T:

      Qi1 (Row 1) = [ 0  0   0    h31   -h21  ]
                    [ 0  0   0    h32   -h22  ]
                    [ 0  0   0     0      0   ]
                    [ 0  0   0     0    w·h33 ]
                    [ 0  0   0     0   -w·h23 ]

      Qi2 (Row 2) = [ 0  0  -h31   0     h11  ]
                    [ 0  0  -h32   0     h12  ]
                    [ 0  0    0    0   -w·h33 ]
                    [ 0  0    0    0      0   ]
                    [ 0  0    0    0    w·h13 ]

      Qi3 (Row 3) = [ 0  0   h21  -h11    0   ]
                    [ 0  0   h22  -h12    0   ]
                    [ 0  0    0     0   w·h23 ]
                    [ 0  0    0     0  -w·h13 ]
                    [ 0  0    0     0     0   ]
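To make the construction concrete, here is a small sketch that builds the Row-1 quadric from H (with w = w' = 1, so the fifth component of X is simply 1) and notes how to check it against the DLT residual; q_row1 and the check are mine, not from the slides:

```python
def q_row1(H):
    """Quadric for row 1 of A_i h, in X = [x, y, x', y', 1]^T (w = w' = 1 assumed)."""
    Q = np.zeros((5, 5))
    Q[0, 3] =  H[2, 0]   #  h31 : x*y' term
    Q[1, 3] =  H[2, 1]   #  h32 : y*y' term
    Q[3, 4] =  H[2, 2]   #  h33 : y' term
    Q[0, 4] = -H[1, 0]   # -h21 : x term
    Q[1, 4] = -H[1, 1]   # -h22 : y term
    Q[4, 4] = -H[1, 2]   # -h23 : constant term
    return Q

# Check against the DLT residual for a pair (x, y) <-> (x', y'):
#   X = np.array([x, y, xp, yp, 1.0])
#   X @ q_row1(H) @ X   equals   (dlt_rows((x, y, 1), (xp, yp, 1)) @ H.reshape(9))[0]
```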

  15. R4 Quadrics for the Variety vH
  • For the i-th point pair, DLT makes the rows Ai (as on the previous slide)
  • H sets 2 rows → 2 quadrics → a 'variety' vH
    – 'variety' = a collection of shapes in higher dimensions
    – "Each H forms a variety in R4" means the perfect shape of H is the intersection of the 2 (or 3) quadrics XT Qi X = 0 (the Qi1, Qi2, Qi3 of the previous slide)

  16. Reprojection: The Big Idea
  • Given H, find its R4 'variety' vH (its 2 quadrics)
  • For each measured Xi, find the nearest estimate X̂i on vH by 'projection'
  • VERY nice result (robust!):
    – Minimizes input and output geometric error
    – Is also the 'ML' (maximum likelihood) estimate
    – Better than DLT: invariant to origin, scaling, similarity
  • BUT in R4, finding the 'nearest estimate' d(Xi, vH) is a mess! Look at a simplified case…

  17. Try R2 (not R4) first…
  Find d(Xi, vH) for R2: the distance to an 'x2 family' curve.
  • Instead of an (x,y)↔(x',y') correspondence, use (x)↔(y)
  • Instead of Xi = (x, y, x', y'), use Xi = (x, y)
  • Replace the 4D 'variety' vH with a 2D conic curve C
  • The estimate X̂i is on the conic: X̂iT C X̂i = 0
  • The line from Xi to X̂i is ⊥ to the conic's tangent at X̂i
  • Finding X̂i from Xi and C is root-finding!

  18. R4: Sampson Error Approx.
  • Sampson error's estimate: X̂i = Xi + δX
  • Re-name the DLT result Ai h = εi as CH(X) = εi = [εi1, εi2, εi3]T
    (Note: CH(X) is a stack of the 2 (or 3) rows of A, applied to h)
  • Use a Taylor-series approximation: CH(Xi + δX) ≈ CH(Xi) + (∂CH/∂X)·δX + …
  • Find δX for error ≈ 0:  CH(Xi + δX) ≈ 0 = εi + (∂CH/∂X)·δX
  • But how? (2-, 3- and 4-vectors here!)

  19. R4 Solutions: Sampson Error
  • CH(X) = [ε1, ε2, …]T: the error is a vector polynomial in X
  • Do a Taylor series at the measured point Xi
  • Follow the −(error gradient) direction: distance·gradient = error
  • Estimate X̂i = Xi + δX
  • Error vector δX = −JT(JJT)⁻¹ε   (J = the matrix of partial derivatives ∂CH/∂X)

  20. R4 Quadrics for the Variety vH
  • For the i-th point pair, DLT now minimizes εi rather than forcing it to 0:

      Ai h  =  εi  =  [ εi1, εi2, εi3 ]T   (same Ai as on slide 13)

  • In R4 the point pair is Xi = [x, y, x', y']T (assume w = w' = 1)
  • Augment Xi with w'; then the R4 quadrics are XiT Q Xi = εi

  21. R4 Quadrics for the Variety vH
  • For the i-th point pair, DLT minimizes εi: Ai h = εi
  • Write each row as an R4 quadric XiT Qij Xi = εij, with the same Qi1, Qi2, Qi3 (built from the entries of H and the scales w, w') as on slide 14

  22. R4 Quadrics for the Variety vH
  • Write each row as an R4 quadric: XT Qi X = 0
  • ALL of them have this form, with X = [x, y, x', y', 1]T:

      Qi = [ 0 0 a b c ]
           [ 0 0 d e f ]
           [ 0 0 0 0 g ]
           [ 0 0 0 0 h ]
           [ 0 0 0 0 j ]

    (Can also be written as symmetric without changing its meaning!)
  • The quadric's derivatives are easy:

      ∂(XT Qi X)/∂x  = a·x' + b·y' + c        ∂(XT Qi X)/∂x' = a·x + d·y + g
      ∂(XT Qi X)/∂y  = d·x' + e·y' + f        ∂(XT Qi X)/∂y' = b·x + e·y + h

  23. Sampson Error Approx.
  • Sampson error's estimate: X̂i = Xi + δX
  • Re-name the DLT result Ai h = εi as CH(X) = εi = [εi1, εi2, εi3]T
  • Find δX: solve εi + (∂CH/∂X)·δX = 0
  • Or, as in the book (pg. 82), εi + J·δX = 0:

      [ εi1 ]     [ ∂CH1/∂x  ∂CH1/∂y  ∂CH1/∂x'  ∂CH1/∂y' ]   [ δx  ]
      [ εi2 ]  +  [ ∂CH2/∂x  ∂CH2/∂y  ∂CH2/∂x'  ∂CH2/∂y' ]   [ δy  ]   =  0
                                                              [ δx' ]
                                                              [ δy' ]

  • Find the J terms from the R4 quadrics, then solve for δX …

  24. Sampson Error Approx.
  • Same system as the previous slide: εi + J·δX = 0 (pg. 82)
  • !TROUBLE! J has rank 2, but δX has 4 unknowns: the system is underdetermined.

  25. Sampson Error Approx.
  • As before: εi + J·δX = 0, with the J terms from ∂CH/∂X (pg. 82)
  • Answer: find the shortest δX, i.e. minimize ||δX||² subject to J·δX = −εi, by Lagrange multipliers…
  • ANSWER:  δX = −JT(JJT)⁻¹εi
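A compact sketch of that correction for one correspondence (w = w' = 1 assumed; the residual and Jacobian are written out directly from the two DLT rows, which is equivalent to reading them off the quadrics; the function name is mine):

```python
def sampson_correction(H, x, y, xp, yp):
    """delta_X = -J^T (J J^T)^-1 eps for X = [x, y, x', y'], one point pair."""
    h11, h12, h13 = H[0]
    h21, h22, h23 = H[1]
    h31, h32, h33 = H[2]
    d = h31 * x + h32 * y + h33                     # (H x)_3
    eps = np.array([
        -(h21 * x + h22 * y + h23) + yp * d,        # row 1 of A_i h
         (h11 * x + h12 * y + h13) - xp * d,        # row 2 of A_i h
    ])
    J = np.array([                                  # d(eps) / d[x, y, x', y']
        [-h21 + yp * h31, -h22 + yp * h32, 0.0,  d],
        [ h11 - xp * h31,  h12 - xp * h32, -d,  0.0],
    ])
    return -J.T @ np.linalg.solve(J @ J.T, eps)     # minimum-norm step with J*delta = -eps
```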

  26. How to use it:
  • Initialize: use DLT to find H from the point pairs
  • Revise: correct the estimates by Sampson: X̂i = Xi + δX
  • Adjust: find a new H from the estimates (DLT again)
  • Repeat until ||δX||² is small enough.
  Refinements: apply only a fraction of δX on each iteration, so that H can change more…
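One possible reading of that loop, sketched with the earlier helpers (estimate_h_dlt, sampson_correction); the names, the step fraction, and the stopping threshold are mine:

```python
def refine_homography(pairs, n_iter=20, step=1.0, tol=1e-12):
    """Initialize with DLT, Sampson-correct the estimates, re-fit H, repeat.
    pairs: list of ((x, y), (x', y')) inhomogeneous correspondences."""
    X_meas = np.array([[x, y, xp, yp] for (x, y), (xp, yp) in pairs], dtype=float)
    X_est = X_meas.copy()
    H = None
    for _ in range(n_iter):
        homog = [((x, y, 1.0), (xp, yp, 1.0)) for x, y, xp, yp in X_est]
        H = estimate_h_dlt(homog)                                 # Adjust: new H from estimates
        D = np.array([sampson_correction(H, *row) for row in X_meas])
        X_est = X_meas + step * D                                 # Revise: X_hat = X + delta_X
        if np.sum(D ** 2) < tol:                                  # Stop when ||delta_X||^2 is small
            break
    return H
```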

  27. END

  28. Aside: Lagrange Multipliers
  "Given only f(x) = 0, find the shortest-length x."
  Semi-magical vector trick:
  • Length² is a scalar value: ||x||² = xTx
  • Mix in an unknown vector λ (the 'Lagrange multiplier'):

      xTx − 2λT f(x) = g(x, λ) = weird_length²

  29. Aside: Lagrange Multipliers
  (Same slide, annotated): in xTx − 2λT f(x) = g(x, λ) = weird_length², the xTx term is the length², and the 2λT f(x) term MUST be = 0, because f(x) = 0.

  30. Aside: Lagrange Multipliers
  "Given only f(x) = 0, find the shortest-length x."
  • As before:  xTx − 2λT f(x) = g(x, λ) = weird_length²   (the f(x) term MUST be = 0)
  Find the shortest 'weird_length²':
  • Set ∂g/∂x = 0 and ∂g/∂λ = 0,
  • Solve them for λ (substitutions),
  • then use λ in g(x, λ) to find x. Done!
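A quick numerical check (with a made-up J and ε) that the Lagrange-multiplier answer δX = −JT(JJT)⁻¹ε really is the shortest vector satisfying J·δX = −ε; NumPy's lstsq returns the minimum-norm solution of an underdetermined system, so the two must agree:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(2, 4))                            # stand-in rank-2 Jacobian
eps = rng.normal(size=2)                               # stand-in residual

delta = -J.T @ np.linalg.solve(J @ J.T, eps)           # Lagrange-multiplier answer
delta_min = np.linalg.lstsq(J, -eps, rcond=None)[0]    # minimum-norm solution of J d = -eps

assert np.allclose(J @ delta, -eps)                    # constraint satisfied
assert np.allclose(delta, delta_min)                   # and it is the shortest such vector
```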

  31. More General Approach:
  • Define a 'measurement space' (e.g. R4)
    – A vector is a complete measurement set
    – The vector holds only known values, but
    – can be ANYTHING: input, output, length, H, …
    – All possible measurements (good or bad)
  • Define a 'model'
    – All possible error-free 'perfect measurements', such as the variety vH
    – Defined by a known function (e.g. CH(X) = 0)
    – A subset of measurement space

  32. More General Approach:
  • Gather points Xi in 'measurement space'
  • Replace each Xi with an 'estimate' X̂i that:
    – satisfies the model, and
    – minimizes the distance ||Xi − X̂i||
  • The book (pg 85) gives examples for:
    – Error in both images (what we just did)
    – Error in one image only
  • Maximum-likelihood estimation: the same as minimizing geometric error, which is the same as minimizing reprojection error

  33. DLT Non-Invariance
  !Accuracy depends on origin location!
  'Non-optional' correction required (outline on pg. 92):
  • Find the centroid of the measured points
  • Find the average distance of the points to the centroid
  • Offset the origin to the centroid for all points
  • Scale about the new origin so that the average distance-to-origin is sqrt(2)
  • Reverse the process after DLT. See the book for the proof.
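A sketch of that normalization as a similarity transform (normalize_points is my name; the de-normalization at the end follows from x' = Hx with x̃ = T1·x and x̃' = T2·x'):

```python
def normalize_points(pts):
    """Translate the centroid to the origin and scale so the mean distance to it is sqrt(2).
    pts: (n, 2) array of inhomogeneous points. Returns (T, normalized homogeneous points)."""
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    mean_dist = np.mean(np.linalg.norm(pts - centroid, axis=1))
    s = np.sqrt(2.0) / mean_dist
    T = np.array([[s,   0.0, -s * centroid[0]],
                  [0.0, s,   -s * centroid[1]],
                  [0.0, 0.0,  1.0]])
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return T, (T @ homog.T).T

# Usage: normalize both point sets, run DLT on the normalized pairs to get H_norm,
# then undo the normalization:  H = np.linalg.inv(T2) @ H_norm @ T1
```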
