
Two papers in icfda14


Presentation Transcript


  1. Two papers in icfda14. Guimei Zhang, MESA (Mechatronics, Embedded Systems and Automation) Lab, School of Engineering, University of California, Merced. E: guimei.zh@163.com. Phone: 209-658-4838. Lab: CAS Eng 820 (T: 228-4398). June 30, 2014, Monday 4:00-6:00 PM. Applied Fractional Calculus Workshop Series @ MESA Lab @ UC Merced

  2. The first paper. Paper title: AFC Workshop Series @ MESA Lab @ UC Merced

  3. Motivation • 1. Detect and localize objects in single-view RGB images, in environments containing arbitrary illumination and heavy clutter, for the purpose of autonomous grasping. • 2. Objects can be of arbitrary color and interior texture; we therefore assume knowledge of only their 3D model, without any appearance/texture information. • 3. Using 3D models makes an object detector immune to intra-class texture variations.

  4. Motivation • In this paper, we address the problem of a robot grasping 3D objects of known 3D shape from their projections in single images of cluttered scenes. • We further abstract the 3D model by using only its 2D contour; detection is thus driven by the shape of the 3D object's projected occluding boundary.

  5. Main achievements

  6. Overview of the proposed approach. a) The input image. b) Edge image computed with the gPb method. c) The hypothesis bounding box (red) is segmented into superpixels. d) The set of superpixels with the closest distance to the model contour is selected (a small sketch of this step follows below). e) Three textured synthetic views of the final pose estimate are shown.
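To make step d) concrete, here is a minimal sketch of selecting superpixels by chamfer distance to the projected model contour. The greedy search, the scipy-based distance transform, and all function names are illustrative assumptions, not the paper's exact selection procedure.

    import numpy as np
    from scipy.ndimage import binary_erosion, distance_transform_edt

    def chamfer_distance(candidate_mask, model_contour_mask):
        """Mean distance from the candidate region's boundary pixels to the
        projected model contour. Both inputs: boolean 2D arrays, same shape."""
        if not candidate_mask.any():
            return np.inf
        # Each pixel of the distance transform holds its distance to the
        # nearest contour pixel (contour pixels are the zeros of ~mask).
        dist_to_contour = distance_transform_edt(~model_contour_mask)
        boundary = candidate_mask & ~binary_erosion(candidate_mask)
        return dist_to_contour[boundary].mean()

    def select_superpixels(superpixel_labels, model_contour_mask):
        """Greedily grow the set of superpixels whose combined boundary is
        closest (in chamfer distance) to the model contour."""
        selected = np.zeros_like(model_contour_mask, dtype=bool)
        remaining = set(np.unique(superpixel_labels))
        best = np.inf
        improved = True
        while improved and remaining:
            improved = False
            for sp in sorted(remaining):
                trial = selected | (superpixel_labels == sp)
                d = chamfer_distance(trial, model_contour_mask)
                if d < best:          # keep the superpixel only if it helps
                    best, selected = d, trial
                    remaining.discard(sp)
                    improved = True
        return selected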

  7. How it is done: 1. 3D model acquisition and rendering (using a low-cost RGB-D depth sensor and a dense surface reconstruction algorithm, KinectFusion). 2. Image features (edges). 3. Object detection. 4. Shape descriptor. 5. Shape verification for contour extraction. 6. Pose estimation (image registration; a toy registration sketch follows below).
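As a toy illustration of step 6, the following is a minimal 2D rigid ICP that aligns a projected model contour to an extracted image contour. It is a hypothetical stand-in for the paper's image-registration step (which estimates a full 3D pose), shown only to convey the iterate-correspond-refit loop.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_2d(model_pts, image_pts, iters=20):
        """Rigidly align model_pts (N,2) to image_pts (M,2); returns R, t."""
        R, t = np.eye(2), np.zeros(2)
        tree = cKDTree(image_pts)
        for _ in range(iters):
            moved = model_pts @ R.T + t
            _, idx = tree.query(moved)        # nearest-neighbor correspondences
            target = image_pts[idx]
            # Closed-form rigid fit (Kabsch/Procrustes) on the current pairs.
            mu_m, mu_t = moved.mean(0), target.mean(0)
            H = (moved - mu_m).T @ (target - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R_step = Vt.T @ U.T
            if np.linalg.det(R_step) < 0:     # guard against reflections
                Vt[-1] *= -1
                R_step = Vt.T @ U.T
            t_step = mu_t - R_step @ mu_m
            # Compose the incremental fix into the accumulated transform.
            R, t = R_step @ R, R_step @ t + t_step
        return R, t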

  8. Example. (a) Bounding boxes ordered by the detection score. (b) Corresponding pose output. (c) Segmentation of the top-scored hypothesis. (d) Foreground mask selected by shape. (e) Three iterations in pose refinement. (f) Visualization of the PR2 model with the Kinect point cloud. (g) Another view of the same scene.

  9. The second paper. Paper title:

  10. Motivation. Problems: • In big and complex scenes there are many 3D point clouds that require human labeling, which costs a great deal of time. • Model learning suffers from a bias problem caused by bias accumulated during sample collection.

  11. Motivation. Therefore, this paper proposes a semi-supervised method to learn category models from unlabeled "big point cloud data". The algorithm only requires labeling a small number of object seeds in each object category to start the model learning, as shown in Fig. 1. This design saves both manual labeling and computation cost, satisfying the model-mining efficiency requirement (a self-training sketch of the idea follows below).
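A common way to picture seed-driven semi-supervised learning is self-training: fit a model on the few labeled seeds, then repeatedly adopt the most confident predictions on the unlabeled pool as new training data. The classifier choice, features, and confidence threshold below are assumptions for illustration; the paper's actual model-mining procedure is more involved.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_seed, y_seed, X_pool, confidence=0.95, rounds=10):
        """Grow a classifier from a few labeled seeds (numpy arrays) by
        adopting confident predictions on the unlabeled pool as labels."""
        X_lab, y_lab = X_seed.copy(), y_seed.copy()
        pool = X_pool.copy()
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            if len(pool) == 0:
                break
            clf.fit(X_lab, y_lab)
            proba = clf.predict_proba(pool)
            conf = proba.max(axis=1)
            take = conf >= confidence        # adopt only confident points
            if not take.any():
                break                        # nothing confident left
            new_labels = clf.classes_[proba[take].argmax(axis=1)]
            X_lab = np.vstack([X_lab, pool[take]])
            y_lab = np.concatenate([y_lab, new_labels])
            pool = pool[~take]               # shrink the unlabeled pool
        return clf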

  12. The main contributions • To the best of our knowledge, this is the first proposal for efficiently mining category models from "big point cloud data". With limited computation and human labeling, the method is oriented toward efficient construction of a category model base. • A multiple-model strategy is proposed as a solution to the bias problem, providing several discrete and selective category boundaries (see the sketch below).
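A minimal sketch of the multiple-model idea: rather than one boundary per category, train several models on resampled subsets so that bias accumulated in any single collection is diluted, then vote across the discrete boundaries. The bootstrap resampling, LinearSVC choice, and voting rule are assumptions, not the paper's exact strategy.

    import numpy as np
    from sklearn.svm import LinearSVC

    def train_multiple_models(X, y, n_models=5, seed=0):
        """Train several category models on bootstrap resamples of (X, y)."""
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_models):
            idx = rng.choice(len(X), size=len(X), replace=True)
            models.append(LinearSVC().fit(X[idx], y[idx]))
        return models

    def predict_majority(models, X):
        """Majority vote across the discrete category boundaries.
        Assumes integer labels 0..K-1."""
        votes = np.stack([m.predict(X) for m in models])  # (n_models, n_samples)
        return np.array([np.bincount(col).argmax() for col in votes.T])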

  13. Experiment. Model-based point labeling results. Different colors indicate different categories, i.e., wall (green), tree (red), and street (blue).

  14. Thanks
