
AUTOMATIC ANNOTATION OF GEO-INFORMATION IN PANORAMIC STREET VIEW BY IMAGE RETRIEVAL






Presentation Transcript


  1. AUTOMATIC ANNOTATION OF GEO-INFORMATION IN PANORAMIC STREET VIEW BY IMAGE RETRIEVAL Ming Chen, Yueting Zhuang, Fei Wu College of Computer Science, Zhejiang University ICIP 2010

  2. Motivation • Recently, geo-tagged photos have been added to street view as additional illustration images of the same scenes. • But the quality of the annotated locations is often poor. In this paper, we propose a system that annotates users' images with location tags in panoramic street view using only visual features.

  3. System Overview

  4. Coordinate Transformation • A panoramic image is often projected onto a spherical surface to give users a photo-realistic, interactive virtual experience.
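The slide does not give the projection formulas; a minimal sketch of the standard mapping, assuming the panorama is stored as an equirectangular image (pixel column = longitude, pixel row = latitude), might look like this:

```python
import math

def pixel_to_sphere(u, v, width, height):
    """Map a pixel (u, v) of an equirectangular panorama to spherical
    angles: longitude theta in [-pi, pi), latitude phi in [-pi/2, pi/2]."""
    theta = (u / width) * 2.0 * math.pi - math.pi
    phi = math.pi / 2.0 - (v / height) * math.pi
    return theta, phi

def sphere_to_xyz(theta, phi, r=1.0):
    """Convert spherical angles to a 3-D point on a sphere of radius r,
    i.e. where the pixel is placed on the viewing sphere."""
    x = r * math.cos(phi) * math.cos(theta)
    y = r * math.cos(phi) * math.sin(theta)
    z = r * math.sin(phi)
    return x, y, z
```

For example, the center pixel of the panorama maps to (theta, phi) = (0, 0), i.e. the point (1, 0, 0) straight ahead on the unit sphere.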

  5. Key Points Extraction • We use a conventional motion-detection method to remove moving foreground regions. • After the moving objects are removed, we extract key points from all panoramic images using SIFT features, which are invariant to image scale and rotation. • The SIFT descriptors are quantized into visual words to reduce the computational cost of matching.
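The quantization step above assigns each 128-D SIFT descriptor to its nearest visual word. A minimal NumPy sketch of that assignment (with a toy 2-D vocabulary for clarity; in the paper the descriptors would come from a SIFT extractor and the vocabulary from clustering):

```python
import numpy as np

def quantize_descriptors(descriptors, vocabulary):
    """Assign each descriptor to its nearest visual word (centroid)
    by Euclidean distance; returns an array of word ids.
    `descriptors` has shape (n, d), `vocabulary` has shape (k, d)."""
    # (n, 1, d) - (1, k, d) broadcasts to an (n, k) squared-distance matrix
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# toy example: 3 descriptors, 2 visual words (dimension reduced to 2)
vocab = np.array([[0.0, 0.0], [10.0, 10.0]])
desc = np.array([[1.0, 1.0], [9.0, 9.0], [0.5, 0.0]])
words = quantize_descriptors(desc, vocab)  # -> [0, 1, 0]
```

Brute-force nearest-centroid search is shown here only for illustration; with a large vocabulary one would descend a hierarchical k-means tree instead, comparing against only a few centroids per level.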

  6. Image Retrieval • We use hierarchical k-means to learn visual words from a training set of 20,000 images; locality-sensitive hashing is used to index these visual words. • Given a query image, its bundled features are extracted and then retrieved from the database. • Querying yields a candidate subset, and the candidate images are ranked by how frequently they appear among the matches.
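The ranking step can be read as a voting scheme: each query visual word retrieves the panoramas indexed under it, and candidates are ordered by vote count. A minimal sketch, assuming a plain inverted index in place of the LSH structure the slide mentions (the image ids and word ids below are hypothetical):

```python
from collections import Counter

def rank_candidates(query_words, inverted_index, top_k=3):
    """Each visual word of the query casts one vote per database image
    listed under it; candidates are ranked by vote count, i.e. by the
    frequency of their appearance across the retrieved sets."""
    votes = Counter()
    for w in query_words:
        for image_id in inverted_index.get(w, ()):
            votes[image_id] += 1
    return [image_id for image_id, _ in votes.most_common(top_k)]

# toy inverted index: visual word id -> panoramas containing that word
index = {
    5: ["pano_a", "pano_b"],
    9: ["pano_a"],
    12: ["pano_c", "pano_a"],
}
ranking = rank_candidates([5, 9, 12, 9], index)  # "pano_a" gets the most votes
```

In practice the votes would be weighted (e.g. tf-idf) rather than uniform, but the top-ranked candidate then serves as the panorama whose location tag is transferred to the query image.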

  7. Result
