Princeton Shape Benchmark: Comparison of 3D Model Shape Descriptors

An overview of the Princeton Shape Benchmark: a publicly available database of classified 3D models with standardized classifications, evaluation tools, and visualization software, used here to compare twelve shape descriptors for 3D model retrieval.

Presentation Transcript


  1. The Princeton Shape Benchmark. Philip Shilane, Patrick Min, Michael Kazhdan, and Thomas Funkhouser

  2. Shape Retrieval Problem: 3D Model → Shape Descriptor → Best Matches (retrieved from a Model Database)
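
The pipeline sketched on this slide is easy to state in code. The following is a minimal illustration, not code from the talk: it assumes every model in the database has already been reduced to a fixed-length descriptor vector (the random 64-dimensional vectors and the function name rank_matches are placeholders of my own), and it ranks the database by Euclidean distance to the query's descriptor.

    import numpy as np

    def rank_matches(query_desc, database_descs):
        """Return database indices ordered from best to worst match."""
        # Euclidean distance between descriptor vectors; some descriptors
        # use other dissimilarity measures (L1, histogram distances, ...).
        dists = np.linalg.norm(database_descs - query_desc, axis=1)
        return np.argsort(dists)

    # Placeholder data standing in for a model database: 100 models,
    # each summarized by a 64-dimensional descriptor.
    rng = np.random.default_rng(0)
    database = rng.random((100, 64))
    query = rng.random(64)
    print(rank_matches(query, database)[:5])  # indices of the 5 best matches

Any of the descriptors listed on the next slides can be dropped into this loop; only the descriptor computation and, for some descriptors, the distance function change.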

  3. Example Shape Descriptors • D2 Shape Distributions • Extended Gaussian Image • Shape Histograms • Spherical Extent Function • Spherical Harmonic Descriptor • Light Field Descriptor • etc.

  4. Example Shape Descriptors • D2 Shape Distributions • Extended Gaussian Image • Shape Histograms • Spherical Extent Function • Spherical Harmonic Descriptor • Light Field Descriptor • etc. How do we know which is best?
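
To make one of these descriptors concrete, here is a rough sketch of a D2 shape distribution in the spirit of Osada et al.: sample random point pairs on the mesh surface (area-weighted) and histogram the distances between them. The tetrahedron mesh, bin count, sampling budget, and mean-distance normalization below are illustrative assumptions, not the settings used in the paper.

    import numpy as np

    def sample_surface_points(vertices, faces, n, rng):
        """Sample n points uniformly by area over a triangle mesh surface."""
        v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
        areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
        tri = rng.choice(len(faces), size=n, p=areas / areas.sum())
        # Uniform barycentric coordinates inside each chosen triangle.
        r1, r2 = rng.random(n), rng.random(n)
        s = np.sqrt(r1)
        return ((1 - s)[:, None] * v0[tri]
                + (s * (1 - r2))[:, None] * v1[tri]
                + (s * r2)[:, None] * v2[tri])

    def d2_descriptor(vertices, faces, n_pairs=100_000, n_bins=64, rng=None):
        """Histogram of distances between random surface point pairs."""
        if rng is None:
            rng = np.random.default_rng(0)
        a = sample_surface_points(vertices, faces, n_pairs, rng)
        b = sample_surface_points(vertices, faces, n_pairs, rng)
        d = np.linalg.norm(a - b, axis=1)
        d /= d.mean()                      # one common scale normalization
        hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 3.0), density=True)
        return hist

    # Placeholder mesh: a unit tetrahedron (stand-in for a benchmark model).
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    tris = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
    print(d2_descriptor(verts, tris)[:8])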

  5. Typical Retrieval Experiment • Create a database of 3D models • Group the models into classes • For each model: • Rank other models by similarity • Measure how many models in the same class appear near the top of the ranked list • Present average results
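
A compact sketch of the experiment described on this slide, assuming descriptor vectors compared with Euclidean distance; the synthetic three-class database and the top-5 cutoff are placeholders rather than the benchmark's actual protocol, which uses the measures discussed later (precision/recall, tiers, DCG).

    import numpy as np

    def retrieval_experiment(descriptors, labels, k=5):
        """For each model, rank all other models by descriptor distance and
        report the average fraction of top-k matches from the same class."""
        scores = []
        for q in range(len(descriptors)):
            dists = np.linalg.norm(descriptors - descriptors[q], axis=1)
            dists[q] = np.inf                  # exclude the query model itself
            top_k = np.argsort(dists)[:k]
            scores.append(np.mean(labels[top_k] == labels[q]))
        return float(np.mean(scores))

    # Placeholder database: 3 classes of 20 models with noisy 32-D descriptors.
    rng = np.random.default_rng(1)
    labels = np.repeat(np.arange(3), 20)
    descriptors = rng.normal(size=(3, 32))[labels] + 0.3 * rng.normal(size=(60, 32))
    print(f"average top-5 precision: {retrieval_experiment(descriptors, labels):.2f}")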

  13. Shape Retrieval Results

  14. Outline • Introduction • Related work • Princeton Shape Benchmark • Comparison of 12 descriptors • Evaluation techniques • Results • Conclusion

  15. Typical Shape Databases

  17. Typical Shape Databases [figure label: Aerodynamic]

  18. Typical Shape Databases [figure label: Letter ‘C’]

  21. Typical Shape Databases: 153 dining chairs, 25 living room chairs, 16 beds, 12 dining tables, 8 chests, 28 bottles, 39 vases, 36 end tables

  23. Goal: Benchmark for 3D Shape Retrieval • Large number of classified models • Wide variety of class types • Not too many or too few models in each class • Standardized evaluation tools • Ability to investigate properties of descriptors • Freely available to researchers

  24. Princeton Shape Benchmark • Large shape database • 6,670 models • 1,814 classified models, 161 classes • Separate training and test sets • Standardized suite of tests • Multiple classifications • Targeted sets of queries • Standardized evaluation tools • Visualization software • Quantitative metrics

  25. Princeton Shape Benchmark: 51 potted plants, 33 faces, 15 desk chairs, 22 dining chairs, 100 humans, 28 biplanes, 14 flying birds, 11 ships

  26. Princeton Shape Benchmark (PSB)

  28. Outline • Introduction • Related work • Princeton Shape Benchmark • Comparison of 12 descriptors • Evaluation techniques • Results • Conclusion

  29. Comparison of Shape Descriptors • Shape Histograms (Shells) • Shape Histograms (Sectors) • Shape Histograms (SecShells) • D2 Shape Distributions • Extended Gaussian Image (EGI) • Complex Extended Gaussian Image (CEGI) • Spherical Extent Function (EXT) • Radialized Spherical Extent Function (REXT) • Voxel • Gaussian Euclidean Distance Transform (GEDT) • Spherical Harmonic Descriptor (SHD) • Light Field Descriptor (LFD)
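
To make one more of these descriptors concrete, here is a rough sketch of an Extended Gaussian Image: triangle area accumulated into orientation bins on the unit sphere, indexed by each face normal's spherical coordinates. The bin counts and the example tetrahedron are arbitrary assumptions, and a real pipeline would normalize the model's pose first (e.g., PCA alignment), since the EGI is not rotation invariant.

    import numpy as np

    def egi_descriptor(vertices, faces, n_theta=8, n_phi=16):
        """Accumulate triangle area into orientation bins on the unit sphere,
        indexed by the spherical coordinates of each face normal."""
        v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
        normals = np.cross(v1 - v0, v2 - v0)        # length = 2 * triangle area
        areas = 0.5 * np.linalg.norm(normals, axis=1)
        # Unit normals (assumes no degenerate, zero-area triangles).
        normals = normals / (2.0 * areas)[:, None]
        theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))        # polar angle
        phi = np.mod(np.arctan2(normals[:, 1], normals[:, 0]), 2 * np.pi)
        t_bin = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
        p_bin = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
        hist = np.zeros((n_theta, n_phi))
        np.add.at(hist, (t_bin, p_bin), areas)      # area-weighted orientation bins
        return (hist / areas.sum()).ravel()

    # Placeholder mesh: a unit tetrahedron.
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
    print(egi_descriptor(verts, tris, n_theta=2, n_phi=4))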

  30. Comparison of Shape Descriptors

  31. Evaluation Tools • Visualization tools: Precision/recall plot, Best matches, Distance image, Tier image • Quantitative metrics: Nearest neighbor, First and Second tier, E-Measure, Discounted Cumulative Gain (DCG)
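
The precision/recall plot, the first visualization tool listed here, can be reproduced from a single ranked list. The sketch below is my own illustration rather than the benchmark's evaluation code: it returns the precision and recall values measured after each correct retrieval for one query; a full plot would interpolate and average such curves over many queries.

    import numpy as np

    def precision_recall(ranked_labels, query_label):
        """Precision and recall measured after each relevant retrieval
        in a ranked list (the points behind a precision/recall curve)."""
        relevant = np.asarray(ranked_labels) == query_label
        total_relevant = relevant.sum()
        hits = np.cumsum(relevant)
        ranks = np.flatnonzero(relevant) + 1     # 1-based ranks of the hits
        precision = hits[ranks - 1] / ranks
        recall = hits[ranks - 1] / total_relevant
        return recall, precision

    # Made-up ranked list for a query of class "chair".
    ranked = ["chair", "table", "chair", "chair", "bed", "chair"]
    r, p = precision_recall(ranked, "chair")
    for ri, pi in zip(r, p):
        print(f"recall {ri:.2f}  precision {pi:.2f}")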

  33. Evaluation Tools • Visualization tools: Precision/recall plot, Best matches, Distance image, Tier image • Quantitative metrics: Nearest neighbor, First and Second tier, E-Measure, Discounted Cumulative Gain (DCG) [figure labels: Query, Correct class, Wrong class]

  36. Evaluation Tools • Visualization tools: Precision/recall plot, Best matches, Distance image, Tier image • Quantitative metrics: Nearest neighbor, First and Second tier, E-Measure, Discounted Cumulative Gain (DCG) [figure labels: Dining Chair, Desk Chair]
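
A sketch of the quantitative metrics for a single query, using the standard definitions associated with the benchmark as I understand them (treat them as assumptions): nearest neighbor checks the top match, first and second tier measure recall within the first |C|-1 and 2(|C|-1) results for a class of size |C|, and DCG discounts correct matches by the log of their rank, normalized by the ideal ordering. The E-Measure (precision and recall over a fixed-size result list) is omitted, and the ranked list is a made-up example.

    import numpy as np

    def retrieval_metrics(ranked_labels, query_label):
        """Nearest neighbor, first/second tier, and normalized DCG for one
        ranked list (the query itself is assumed to be excluded from it)."""
        rel = (np.asarray(ranked_labels) == query_label).astype(float)
        class_size = int(rel.sum()) + 1        # +1 for the query model itself
        nearest_neighbor = rel[0]
        first_tier = rel[:class_size - 1].sum() / (class_size - 1)
        second_tier = rel[:2 * (class_size - 1)].sum() / (class_size - 1)
        # Discounted cumulative gain: later correct matches count less;
        # rank 1 gets weight 1, rank i > 1 gets weight 1/log2(i).
        discounts = 1.0 / np.log2(np.maximum(np.arange(len(rel)) + 1, 2))
        dcg = (rel * discounts).sum()
        ideal = (np.sort(rel)[::-1] * discounts).sum()
        return nearest_neighbor, first_tier, second_tier, dcg / ideal

    ranked = ["chair", "table", "chair", "chair", "bed", "chair"]
    print(retrieval_metrics(ranked, "chair"))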

  37. Function vs. Shape • Functional at the top levels of the hierarchy, shape-based at the lower levels [hierarchy figure labels: root, Man-made, Natural, Vehicle, Furniture, Table, Chair, Round table, Rectangular table]

  38. Base Classification (92 classes) [example path: Man-made → Furniture → Table → Round table]

  39. Coarse Classification (44 classes) [example path: Man-made → Furniture → Table → Round table]

  40. Coarser Classification (6 classes) [example path: Man-made → Furniture → Table → Round table]

  41. Coarsest Classification (2 classes) [example path: Man-made → Furniture → Table → Round table]
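
The four preceding slides cut the same hierarchy at different depths. A hypothetical sketch of that idea: if each model carries its full path in the class hierarchy, the coarser classifications fall out by truncating the paths. The model ids and paths below are invented for illustration; they are not the benchmark's files or its actual class structure.

    # Hypothetical model-to-hierarchy-path table.
    paths = {
        "m1": ("Man-made", "Furniture", "Table", "Round table"),
        "m2": ("Man-made", "Furniture", "Table", "Rectangular table"),
        "m3": ("Man-made", "Vehicle", "Airplane", "Biplane"),
        "m4": ("Natural", "Animal", "Bird", "Flying bird"),
    }

    def classification_at_depth(paths, depth):
        """Group models by the first `depth` levels of their hierarchy path."""
        classes = {}
        for model, path in paths.items():
            classes.setdefault(path[:depth], []).append(model)
        return classes

    for depth in (1, 2, 4):      # coarsest ... base classification
        print(depth, classification_at_depth(paths, depth))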

  42. Granularity Comparison: Base (92 classes) vs. Man-made vs. Natural (2 classes)

  43. Rotationally Aligned Models (650)

  44. All Models (907)

  45. Complex Models (200)

  46. Performance by Property

  47. Conclusion • Methodology to compare shape descriptors • Vary classifications • Query lists targeted at specific properties • Unexpected results • EGI: good at discriminating man-made vs. natural objects, though poor at fine-grained distinctions • LFD: good overall performance across tests • Freely available Princeton Shape Benchmark • 1,814 classified polygonal models • Source code for evaluation tools

  48. Future Work • Multi-classifiers • Evaluate statistical significance of results • Application of techniques to other domains • Text retrieval • Image retrieval • Protein classification

  49. Acknowledgements David Bengali partitioned thousands of models. Ming Ouhyoung and his students provided the light field descriptor. Dejan Vranic provided the CCCC and MPEG-7 databases. Viewpoint Data Labs donated the Viewpoint database. Remco Veltkamp and Hans Tangelder provided the Utrecht database. Funding: National Science Foundation grants CCR-0093343 and IIS-0121446.

  50. The End http://shape.cs.princeton.edu/benchmark
