Video Object Tracking and Replacement for Post TV Production


  1. Video Object Tracking and Replacement for Post TV Production LYU0303 Final Year Project Fall 2003

  2. Outline • Project Introduction • Basic parts of the proposed system • Working principles of individual parts • Future Work • Q&A

  3. Introduction • Post-TV production software changes the content of the original video clips. • Extensively used in video-making industries. • Why change the content of a video? • Reducing video production cost • Performing dangerous actions • Producing effects that are impossible in reality

  4. Difficulties to be overcome • Things in a video can be treated individually as “video objects”. • Computers cannot perform object detection directly because… • The image is processed byte by byte • There is no prior knowledge about the video objects to be detected • The result is definite; there is no fuzzy logic • Although computers cannot perform object detection directly, they can be programmed to do it indirectly.

  5. Basic parts of the proposed system • Simple bitmap reader/writer • RGB/HSV converter • Edge detector with smoother • Edge equation finder • Equation processor • Translation detector • Texture mapper

  6. RGB/HSV converter • Human eyes are more sensitive to brightness than to the true color components of an object. • It is therefore more reasonable to convert the representation of colors into the HSV (Hue, Saturation and Value (brightness)) model. • After processing, convert back to RGB and save to disk.

  7. RGB/HSV converter [diagrams of the RGB-to-HSV and HSV-to-RGB conversions]
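A minimal per-pixel sketch of this conversion, using Python's standard colorsys module; the 0-255 scaling of H, S and V mirrors the ranges the slides appear to use (e.g. HSV (0, 255, 255)) and is an assumption, not the project's own code:

```python
import colorsys

def rgb_to_hsv255(r, g, b):
    """Convert 8-bit RGB to 8-bit H, S, V (assumed 0-255 scaling)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return round(h * 255), round(s * 255), round(v * 255)

def hsv255_to_rgb(h, s, v):
    """Convert 8-bit H, S, V back to 8-bit RGB before saving to disk."""
    r, g, b = colorsys.hsv_to_rgb(h / 255.0, s / 255.0, v / 255.0)
    return round(r * 255), round(g * 255), round(b * 255)

# Example: pure red has hue 0, full saturation and full brightness.
print(rgb_to_hsv255(255, 0, 0))    # (0, 255, 255)
print(hsv255_to_rgb(0, 255, 255))  # (255, 0, 0)
```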

  8. Basic parts of the proposed system • Simple bitmap reader/writer • RGB/HSV converter • Edge detector with smoother • Edge equation finder • Equation processor • Translation detector • Texture mapper

  9. Edge detector • Usually, a sharp change in hue, saturation or brightness means that there exists a boundary line. [example: adjacent regions with HSV (0, 0, 0) and HSV (0, 255, 255)]
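As a sketch of this rule, a pixel can be flagged as an edge point when any HSV channel differs sharply from a neighbour; the per-channel comparison and the threshold of 40 are illustrative assumptions, not the project's actual parameters:

```python
def detect_edges(hsv, threshold=40):
    """hsv: 2-D list of (h, s, v) tuples; returns a 2-D list of 0/1 edge flags."""
    rows, cols = len(hsv), len(hsv[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):           # right and bottom neighbours
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    diff = max(abs(a - b) for a, b in zip(hsv[r][c], hsv[nr][nc]))
                    if diff > threshold:
                        edges[r][c] = 1               # sharp change -> boundary pixel
    return edges
```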

  10. Edge detector [images before and after edge highlighting]

  11. Smoother for Edge detector • Sometimes noise affects the edge detection result of low-resolution images. • An image smoother is included to remove large noise points in the image. • In some cases, smoothing greatly improves edge detection because it reduces the number of “fake” edge points.
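The slides do not say which smoothing filter is used; as one hedged possibility, a 3x3 median filter on a single channel (for example V) removes isolated noise points while keeping edges reasonably sharp:

```python
def median_smooth(channel):
    """channel: 2-D list of ints; returns a copy smoothed with a 3x3 median filter."""
    rows, cols = len(channel), len(channel[0])
    out = [row[:] for row in channel]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = sorted(channel[r + dr][c + dc]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = window[4]        # median of the 9 neighbours
    return out
```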

  12. Smoother for Edge detector

  13. Basic parts of the proposed system • Simple bitmap reader/writer • RGB/HSV converter • Edge detector with smoother • Edge equation finder • Equation processor • Translation detector • Texture mapper

  14. Edge equation finder • Derives mathematical facts (line equations) from the edge points. • Works with the voting algorithm of the Hough Transform. • Automatically adjusts the tolerance value to minimize the effect of noise points. • This helps when the edge is not completely straight or is blurred.

  15. Edge equation finder • Example vote table (angle in degrees : frequency): 0 : 1, 45 : 3, 90 : 1, 135 : 1 • The winning angle gives the slope m, and together with an edge point (x1, y1) the desired linear equation in point-slope form is y - y1 = m(x - x1)
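A toy illustration of this angle-voting idea; the pairwise voting and the 45-degree quantization are simplifications chosen for brevity, whereas the project's finder uses the Hough Transform with an automatically adjusted tolerance:

```python
import math
from collections import Counter
from itertools import combinations

def dominant_line(edge_points, step=45):
    """Vote on quantized segment angles and return a point plus slope m."""
    votes = Counter()
    for (x1, y1), (x2, y2) in combinations(edge_points, 2):
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180
        votes[round(angle / step) * step % 180] += 1
    angle = votes.most_common(1)[0][0]     # winning angle in the vote table
    x1, y1 = edge_points[0]
    m = math.tan(math.radians(angle))      # slope for y - y1 = m(x - x1)
    return (x1, y1), m

print(dominant_line([(0, 0), (1, 1), (2, 2), (3, 3)]))  # 45-degree line, m ~ 1
```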

  16. Edge equation finder

  17. Basic parts of the proposed system • Simple bitmap reader/writer • RGB/HSV converter • Edge detector with smoother • Edge equation finder • Equation processor • Translation detector • Texture mapper

  18. Equation processor • After the equations are found, constraints can be applied to remove redundant equations, get shadows or detect occlusion. • It also finds the corner points needed by the translation detector and the texture mapper.

  19. Equation processor [images: before edge finding, after edge and equation finding, after extra equation removal]

  20. Equation processor • The following criteria are currently adopted in the equation processor: • Distance between the equations • Angle between the equations • Whether the equation intersects the object or not • Since the equation processor is a potential performance burden on the system, it may be replaced in the future by improving the other parts.
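A minimal sketch of the first two criteria, assuming each detected line is stored as an (angle, distance-from-origin) pair; this representation and the thresholds are illustrative, not the project's values:

```python
def remove_redundant(lines, angle_tol=5.0, dist_tol=4.0):
    """lines: list of (angle_deg, rho) pairs; drop lines too close to one already kept."""
    kept = []
    for angle, rho in lines:
        duplicate = any(abs(angle - a) < angle_tol and abs(rho - r) < dist_tol
                        for a, r in kept)
        if not duplicate:
            kept.append((angle, rho))
    return kept

print(remove_redundant([(45.0, 10.0), (46.5, 11.0), (135.0, 30.0)]))
# -> [(45.0, 10.0), (135.0, 30.0)]
```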

  21. Basic parts of the proposed system • Simple bitmap reader/writer • RGB/HSV converter • Edge detector with smoother • Edge equation finder • Equation processor • Translation detector • Texture mapper

  22. Translation detector • A simple object motion tracker. • Collects data from the first key frame to accelerate the processing of the remaining video frames. • Can be beneficial if the video segment is long and the scene seldom changes.

  23. Translation detector • Records the approximate location of the object in the key frame. • When processing the following video frames, it only scans within a certain boundary near the recorded location. • Its efficiency will be improved later.
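A hedged sketch of that bounded search; detect_object and the margin are hypothetical placeholders for whatever detector and tolerance the system actually uses:

```python
def track(frames, detect_object, margin=20):
    """frames: list of images; detect_object(image, region) -> (x, y, w, h) or None."""
    region = None                      # None means "scan the whole frame"
    locations = []
    for frame in frames:
        box = detect_object(frame, region)
        if box is not None:
            x, y, w, h = box
            # Remember an enlarged box so small motions stay inside it.
            region = (x - margin, y - margin, w + 2 * margin, h + 2 * margin)
        locations.append(box)
    return locations
```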

  24. Basic parts of the proposed system • Simple bitmap reader/writer • RGB/HSV converter • Edge detector with smoother • Edge equation finder • Equation processor • Translation detector • Texture mapper

  25. Texture Mapper • A graphics design technique used to wrap the surface of a 3-D object with a texture map. • The 3-D object acquires a surface texture similar to the texture map. • Colors, brightness values or altitudes

  26. Texture Mapper [diagram: the colors of a texture map being mapped onto a region of the image]

  27. Texture Mapper • Definition of terms: • Image coordinates (r, c): location of a pixel in the image • Texture coordinates (u, v): location in the texture map which contains the color information for the image coordinates • Mapping function: determines how texture coordinates are mapped to image coordinates or vice versa, e.g. linear scan-line interpolation

  28. Texture Mapper [diagram: the mapping function relates image coordinates (r, c) and texture coordinates (u, v)]

  29. Texture Mapper • Definition of terms • Forward mapping maps from texture space to image space • Inverse mapping maps from image space to texture space • Scan-line conversion: an area-filling technique that processes a surface line by line; it can be applied with either forward or inverse mapping

  30. Texture Mapper [diagrams of forward mapping and inverse mapping]

  31. Texture Mapper [diagram: scan-line conversion, contrasting texture scanning with image scanning, the more important case here]

  32. Scan-line conversion [diagram: scanning order from scanline yk to scanline yk+1] • Triangle/parallelogram scanning • Line by line, from top to bottom • Process each pixel on every line

  33. Scan-line conversion [diagram: the intersection coordinates on successive scanlines yk and yk+1 differ by Δx and Δy] • For a row scan, maintain a list of scanline/polygon intersections. • The intersection at scanline r+1 is computed efficiently from that at scanline r.
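A small sketch of that incremental update, assuming straight polygon edges: once an edge's crossing with one scanline is known, the next crossing is found by adding a constant step instead of re-solving the line equation:

```python
def edge_intersections(x0, y0, x1, y1):
    """Yield (y, x) intersections of the edge (x0,y0)-(x1,y1) with each scanline."""
    if y1 == y0:                       # horizontal edge: no scanline crossings
        return
    if y1 < y0:                        # scan from top to bottom
        x0, y0, x1, y1 = x1, y1, x0, y0
    step = (x1 - x0) / (y1 - y0)       # dx per unit dy
    x = float(x0)
    for y in range(int(y0), int(y1) + 1):
        yield y, x
        x += step                      # the next scanline differs by a constant dx

print(list(edge_intersections(0, 0, 4, 4)))  # x advances by 1 per scanline
```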

  34. Scan-line conversion • Quadrilateral → triangles or parallelograms • Scan each sub-polygon • Special case: only 2 sub-polygons [diagram: a quadrilateral split into sub-polygons 1, 2, 3]

  35. Scan-line conversion with forward mapping • Algorithm (texture scanning, using the forward mapping functions R and C):
      for u = umin to umax
        for v = vmin to vmax
          r = R(u, v)
          c = C(u, v)
          copy pixel at source (u, v) to destination (r, c)
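A direct, hedged transcription of that pseudocode into Python; the dict-of-pixels images and the simple translation used as R and C in the example are illustrative assumptions, not the project's bitmap code:

```python
def forward_map(texture, dest, R, C, umin, umax, vmin, vmax):
    for u in range(umin, umax + 1):            # texture scanning
        for v in range(vmin, vmax + 1):
            r, c = R(u, v), C(u, v)            # forward mapping functions
            dest[(r, c)] = texture[(u, v)]     # copy source (u,v) -> destination (r,c)
    return dest

# Example: map a 2x2 texture onto the image with a simple translation.
texture = {(u, v): (u, v) for u in range(2) for v in range(2)}
image = forward_map(texture, {}, R=lambda u, v: u + 10, C=lambda u, v: v + 20,
                    umin=0, umax=1, vmin=0, vmax=1)
print(image[(10, 20)])   # (0, 0)
```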

  36. Scan-line conversion with inverse mapping • Algorithm (image scanning, using the inverse mapping functions U and V):
      for each polygon pixel (r, c)
        u = U(r, c)
        v = V(r, c)
        copy pixel at source (u, v) to destination (r, c)
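The inverse-mapping counterpart: iterating over the destination pixels guarantees every image pixel receives exactly one value. Here polygon_pixels, U and V are hypothetical stand-ins for the scan-line converter and the inverse mapping functions:

```python
def inverse_map(texture, dest, polygon_pixels, U, V):
    for r, c in polygon_pixels:                # image scanning
        u, v = U(r, c), V(r, c)                # inverse mapping functions
        dest[(r, c)] = texture[(u, v)]         # copy source (u,v) -> destination (r,c)
    return dest

# Example: undo the translation used in the forward-mapping sketch.
texture = {(u, v): (u, v) for u in range(2) for v in range(2)}
pixels = [(r, c) for r in range(10, 12) for c in range(20, 22)]
restored = inverse_map(texture, {}, pixels,
                       U=lambda r, c: r - 10, V=lambda r, c: c - 20)
print(restored[(11, 21)])  # (1, 1)
```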

  37. Comparison
      • Principle: forward mapping goes texture → image; inverse mapping goes image → texture
      • Ease of implementation: forward mapping is easy if the mapping function is known; inverse mapping is a bit more complicated as it involves image scanning
      • Calculation of fractional area of pixel coverage: forward mapping yes; inverse mapping no
      • Possibility of aliasing: (see slide 39)

  38. Comparison Aliasing – filtering/resampling techniques can be applied for inverse mapping

  39. Comparison
      • Principle: forward mapping goes texture → image; inverse mapping goes image → texture
      • Ease of implementation: forward mapping is easy if the mapping function is known; inverse mapping is a bit more complicated as it involves image scanning
      • Calculation of fractional area of pixel coverage: forward mapping yes; inverse mapping no
      • Possibility of aliasing: forward mapping yes; inverse mapping yes, but it can be avoided with simple filtering/resampling
      We used inverse mapping.

  40. Mapping functions • Simple linear transformations (translation, scaling, etc.): handle parallelograms only, very fast → not suitable • Linear scan-line interpolation: based on proportion, handles any quadrilateral → feasible

  41. Linear scan-line interpolation • Idea of interpolation: the point (x, y) that lies a fraction a of the way from (x1, y1) to (x2, y2) is (x, y) = a(x2, y2) + (1 - a)(x1, y1)

  42. Linear scan-line interpolation [diagram: for a particular scanline, an image pixel (r, c) between the corner points (r1, c1) … (r5, c5) is matched to texture coordinates (u, v) interpolated between (u1, v1) … (u5, v5) in the texture map]
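A minimal sketch of one scanline of this interpolation, assuming the texture coordinates at the two ends of the image scanline have already been interpolated along the polygon edges:

```python
def lerp(p, q, a):
    """(x, y) = a*q + (1 - a)*p, the point a fraction a of the way from p to q."""
    return tuple((1 - a) * pi + a * qi for pi, qi in zip(p, q))

def scanline_texture_coords(c_left, c_right, uv_left, uv_right):
    """Texture coordinates for every pixel column between c_left and c_right."""
    coords = {}
    for c in range(c_left, c_right + 1):
        a = (c - c_left) / (c_right - c_left) if c_right != c_left else 0.0
        coords[c] = lerp(uv_left, uv_right, a)
    return coords

# Example: a scanline from column 10 to 14 spanning texture u in [0, 1] at v = 0.5.
print(scanline_texture_coords(10, 14, (0.0, 0.5), (1.0, 0.5)))
```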

  43. After mapping: a bit strange!

  44. Shadow mapping • Mapping of surface brightness • Retain the brightness of the original surface • Method: • Convert the original surface to HSV • Get the V value • Replace the V value of the mapped surface
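A hedged sketch of that method, with pixels stored as (h, s, v) tuples in a dict keyed by position; this representation is chosen only for illustration:

```python
def shadow_map(mapped_hsv, original_hsv):
    """Keep the mapped texture's H and S, but reuse the original surface's V."""
    result = {}
    for pos, (h, s, _) in mapped_hsv.items():
        _, _, v_original = original_hsv[pos]   # brightness of the original surface
        result[pos] = (h, s, v_original)       # replace the V value of the mapped surface
    return result

# Example: a dark pixel in the original keeps the new colour but stays dark.
mapped = {(0, 0): (30, 200, 255)}
original = {(0, 0): (0, 0, 60)}
print(shadow_map(mapped, original))  # {(0, 0): (30, 200, 60)}
```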

  45. After Shadow mapping: more natural!

  46. Future Work • Anti-aliasing • Mapping different shapes like cans • Speed optimization • Movie manipulation • Use of 3D markers

  47. Q & A
