
FIRST PROJECT- “Motion detection in urban traffic”





Presentation Transcript


  1. FIRST PROJECT- “Motion detection in urban traffic” Group No. 11 Group Members- Shilpa Sarawagi (Y08uc111) & Disha Ajmera (Y08uc051)

  2. Input- A video sequence of urban traffic. Output- The moving objects in that sequence which differ significantly from the background, e.g. vehicles, pedestrians, etc.

  3. Approaches that may be used- 1) Division of the video into frames at a specific rate. 2) Background separation. 3) Connected component analysis. 4) Morphological opening- erosion followed by dilation. Challenges associated- 1) Avoiding detection of non-stationary background objects such as swinging leaves, rain, etc., using thresholding. 2) Being robust against changes in illumination. 3) Finally, the constructed background model should react quickly to changes in the background, such as vehicles starting and stopping.
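The background-separation and thresholding steps above can be sketched in a few lines. This is only an illustrative toy, not the method the slides commit to: the function name and the simple frame-differencing background model are our own assumptions. A pixel counts as moving when it differs from the background frame by more than a threshold, which is what suppresses small fluctuations such as swinging leaves or rain.

```python
# Toy background subtraction: frames are grayscale images as lists of lists.
# A pixel is "moving" (1) when it differs from the background frame by more
# than `threshold`; small fluctuations stay below the threshold and are ignored.

def motion_mask(background, frame, threshold=30):
    """Return a binary mask: 1 where |frame - background| > threshold."""
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

background = [
    [10, 10, 10],
    [10, 10, 10],
    [10, 10, 10],
]
frame = [
    [12, 10, 10],   # +2: below threshold, treated as background noise
    [10, 200, 10],  # +190: a moving object
    [10, 10, 11],
]

print(motion_mask(background, frame))  # [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

A real system would maintain a background model that adapts over time (as the third challenge above demands) rather than a single static frame.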

  4. Tentative Deadlines- 1) Division of frames by some technique (by 4-5 Nov). 2) Background separation (by 11-12 Nov). 3) Connected component analysis (by 17-18 Nov). 4) Erosion and dilation (by 22-23 Nov).
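Among the milestones above, connected component analysis is the step that groups thresholded foreground pixels into individual objects (vehicles, pedestrians). A minimal 4-connected flood-fill labelling sketch, with hypothetical names of our own choosing, might look like:

```python
from collections import deque

def label_components(mask):
    """Label 4-connected components of a binary mask; 0 stays background."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1                      # start a new component
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

mask = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
labels, count = label_components(mask)
print(count)  # 2 separate blobs
```

Each labelled blob can then be filtered by size, which is another way to discard leaf- or rain-sized noise before tracking.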

  5. SECOND PROJECT- “Object selection using freehand sketches”

  6. Input and Expected Output • Input : The inputs to the algorithm are the image containing the object to be selected and the freehand sketches drawn by a human user over the image. • Expected output : The selected image object; the algorithm allows selection of objects with complex boundaries using only roughly drawn, simple sketches.

  7. Approaches that may be used- 1) Sketch processing after taking the freehand sketch. 2) The initial selection and the boundary triangles are then processed by a local alpha-estimation algorithm (implemented to track the boundary). 3) Segmentation. Challenges associated- 1) The object may not have well-defined boundaries. 2) There may be overlapping objects.
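The pipeline above relies on local alpha estimation, which is beyond a short sketch. As a simpler stand-in for the core idea of expanding a rough user sketch into a full object selection, here is a seed-based region grow; the function name, the seed representation, and the intensity-tolerance criterion are all our own assumptions, not the slides' algorithm:

```python
from collections import deque

def grow_selection(image, seeds, tolerance=20):
    """Grow a selection from user-sketch seed pixels: include a 4-connected
    neighbour when its intensity is within `tolerance` of the pixel it was
    reached from."""
    rows, cols = len(image), len(image[0])
    selected = [[False] * cols for _ in range(rows)]
    queue = deque(seeds)
    for r, c in seeds:            # seeds come from the user's stroke
        selected[r][c] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if (0 <= nr < rows and 0 <= nc < cols and not selected[nr][nc]
                    and abs(image[nr][nc] - image[r][c]) <= tolerance):
                selected[nr][nc] = True
                queue.append((nr, nc))
    return selected

image = [
    [200, 200,  50],   # bright object on the left,
    [200, 210,  50],   # dark background on the right
    [ 60,  60,  55],
]
sel = grow_selection(image, seeds=[(0, 0)])   # one stroke pixel on the object
```

This hard in/out decision is exactly what alpha estimation improves on: near a fuzzy boundary, each pixel instead gets a fractional opacity rather than a binary label.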

  8. Tentative Deadlines- 1) Sketch processing after taking the freehand sketch (6-7 Nov). 2) Processing the initial selection and boundary triangles with the local alpha-estimation algorithm (15-16 Nov). 3) Segmentation (22-23 Nov).

  9. THIRD PROJECT- “A Vision-Based Boundary Following Framework for Aerial Vehicles”

  10. Input and Expected Output • Input : An aerial image containing the coastline (which has to be tracked) or the boundary to be followed, such as a road surrounded by forests. • Expected Output : An image representing the boundary, or a binary image differentiating the road from its surroundings (such as forests).

  11. Approaches and Challenges : • Approaches: Estimating the boundary of a specific region of interest from an aerial image involves two phases: • First, a segmentation algorithm labels each pixel in the scene as either belonging to the target region or not, clustering the region's pixels together. • Then, a curve is fit through the connected group of edge elements in the binary labelled image corresponding to the desired boundary. • Challenges associated : • Sand and water do not have a well-defined boundary (in the case of detecting a coastline). • The algorithm should be robust to illumination changes.
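The second phase above fits a curve through the boundary's edge elements. As the simplest possible instance of such a fit, here is a least-squares straight line y = m·x + b through toy edge points; a real coastline would need a more flexible curve model, and the function name and data are our own illustration:

```python
def fit_line(points):
    """Least-squares fit of y = m*x + b through (x, y) edge points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    # Standard closed-form normal-equation solution for a 1-D linear fit.
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Boundary pixels extracted from the binary labelled image (toy values):
edge_points = [(0, 1.0), (1, 2.1), (2, 2.9), (3, 4.2)]
m, b = fit_line(edge_points)
```

For a winding boundary one would fit the line piecewise, or replace it with a spline or active contour, but the two-phase structure (label, then fit) stays the same.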

  12. Tentative Timeline: • Segmentation Phase (hue-based clustering or texture-based) (by 2nd-3rd Nov.) • Contour Detection Phase: • Edge Detection (by 11th-12th Nov.) • Temporal boundary tracking or per-frame boundary detection (by 17th-18th Nov.) • Final Presentation (by 25th Nov.)
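The Edge Detection milestone above can be illustrated with the crudest possible detector: thresholding the horizontal intensity difference between neighbouring pixels. This is a stand-in for a full Sobel or Canny pass, and the function name and threshold are our own assumptions:

```python
def horizontal_edges(image, threshold=50):
    """Mark an edge (1) where |I[r][c+1] - I[r][c]| exceeds `threshold`.
    Output has one fewer column than the input."""
    return [
        [1 if abs(row[c + 1] - row[c]) > threshold else 0
         for c in range(len(row) - 1)]
        for row in image
    ]

# Toy "road vs. forest" rows: bright road pixels meet dark forest pixels.
image = [
    [200, 200, 40, 40],
    [200, 210, 45, 40],
]
print(horizontal_edges(image))  # [[0, 1, 0], [0, 1, 0]]
```

The edge column detected here is exactly the kind of connected edge-element chain that the curve-fitting phase would then track from frame to frame.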
