
Improving web image search results using query-relative classifiers



  1. Improving web image search results using query-relative classifiers • Josip Krapac, Moray Allan, Jakob Verbeek, Frédéric Jurie

  2. Outline • Introduction • Query-relative features • Experimental evaluation • Conclusion

  3. Outline • Introduction • Query-relative features • Experimental evaluation • Conclusion

  4. Introduction • Google’s image search engine has a precision of only 39% [16] • Recent research improves image search performance using visual information, not only text • The task resembles outlier detection, but in the current setting the majority of retrieved images may be outliers, and the inliers can be visually diverse

  5. Introduction • Recent methods share the same drawback: • A separate image re-ranking model is learned for each and every query – the large number of possible queries makes these approaches computationally wasteful

  6. Introduction • Key contributions: • We propose an image re-ranking method based on textual and visual features • It does not require learning a separate model for every query • The model parameters are shared across queries and learned once

  7. Introduction • [Figures: the standard image re-ranking approach vs. our image re-ranking approach]

  8. Outline • Introduction • Query-relative features • Experimental evaluation • Conclusion

  9. Query-relative features • Query-relative text features • Binary features • Contextual features • Visual features • Query-relative visual features

  10. Query-relative text features • Our base query-relative text features follow [6,16] • ContexR • Context10 • Filedir • Filename • Imagealt • Imagetitle • Websitetitle

  11. Binary features • Nine binary features indicate the presence or absence of all query terms in a field: • Surrounding text • Image’s alternative text • Web page’s title • Image file URL’s hostname, directory and filename • Web page URL’s hostname, directory and filename • Nine further binary features are active if some of the query terms, but not all, are present in the field
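A minimal Python sketch of these 9 + 9 binary features, assuming a hypothetical `annotation` dict keyed by field name (the authors' actual data schema is not given here):

```python
FIELDS = [
    "surrounding_text", "image_alt", "page_title",
    "image_url_hostname", "image_url_directory", "image_url_filename",
    "page_url_hostname", "page_url_directory", "page_url_filename",
]

def binary_text_features(annotation, query_terms):
    """18 binary features: per field, all query terms present (9),
    then some-but-not-all query terms present (9)."""
    full, partial = [], []
    for field in FIELDS:
        words = set(annotation.get(field, "").lower().split())
        hits = [term.lower() in words for term in query_terms]
        full.append(int(all(hits)))
        partial.append(int(any(hits) and not all(hits)))
    return full + partial
```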

  12. Contextual features • Can be understood as a form of pseudo-relevance feedback • Divide the image’s text annotation into three parts: • Text surrounding the image • Image’s alternative text • Words in the web page’s title

  13. Contextual features • Define contextual features by computing word histograms over all the images in the query set • Histogram of word counts for image i: h_i, with h_{ik} the count of the word indexed k • Average over the query set A: \bar{h}_k = \frac{1}{|A|} \sum_{i \in A} h_{ik} (1)
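A small sketch of the word histograms behind equation (1); the `annotations` mapping and function names are illustrative:

```python
from collections import Counter

def word_histogram(text):
    """h_i: word counts for one image's annotation text."""
    return Counter(text.lower().split())

def average_histogram(annotations):
    """Equation (1): per-word average count over the query set A.
    `annotations` maps image id -> annotation text for one context."""
    total = Counter()
    for text in annotations.values():
        total.update(word_histogram(text))
    return {w: c / len(annotations) for w, c in total.items()}
```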

  14. Contextual features • Use (1) to define a set of additional context features • The kth binary feature represents the presence or absence of the kth most common word • We trim these features down to the first N elements per context, so we have 9 + 9 + 3N binary features in total
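Continuing the sketch above, the N most common words of each context become binary presence features (names are illustrative):

```python
def context_features(avg_hist, text, n):
    """Binary presence of the n most common words of this context."""
    top_words = sorted(avg_hist, key=avg_hist.get, reverse=True)[:n]
    words = set(text.lower().split())
    return [int(w in words) for w in top_words]
```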

  15. Visual features • Our image representation is based on local appearance and position histograms • Local appearance: • Hierarchical k-means clustering • 11 levels of quantisation, with k = 2 • Position quantisation: • Quad-tree with three levels • The image is represented by a joint appearance-position histogram
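A sketch of the joint appearance-position histogram, assuming visual word ids have already been assigned by the hierarchical k-means tree (k = 2 with 11 levels gives on the order of 2^11 leaf words); the quad-tree cell layout below is an assumption:

```python
import numpy as np

def quadtree_cells(x, y, levels=3):
    """Quad-tree cells containing (x, y) in [0,1)^2, one per level
    (1 + 4 + 16 = 21 cells for three levels)."""
    cells, offset = [], 0
    for level in range(levels):
        grid = 2 ** level
        col = min(int(x * grid), grid - 1)
        row = min(int(y * grid), grid - 1)
        cells.append(offset + row * grid + col)
        offset += grid * grid
    return cells

def appearance_position_histogram(words, positions, vocab_size, levels=3):
    """words[i]: visual word of descriptor i; positions[i]: its (x, y)
    in normalised image coordinates."""
    n_cells = sum(4 ** l for l in range(levels))
    hist = np.zeros((vocab_size, n_cells))
    for w, (x, y) in zip(words, positions):
        for c in quadtree_cells(x, y, levels):
            hist[w, c] += 1
    return hist.ravel() / max(hist.sum(), 1)  # normalised joint histogram
```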

  16. Query-relative visual features • There is no direct correspondence between query terms and image appearance • We can find which visual words are strongly associated with the query set, by analogy with the contextual text features • We define a set of visual features to represent their presence or absence in a given image

  17. Query-relative visual features • Order the visual features by how strongly each visual word is associated with the query: \rho_k = \bar{v}^A_k / \bar{v}^T_k • A : query set • T : training set • \bar{v}^A, \bar{v}^T : average visual word histograms over A and T • The kth feature relates to the visual word kth most related to this query
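A sketch of this ordering, with `hist_A` and `hist_T` as stacked per-image visual word histograms (illustrative names):

```python
import numpy as np

def order_visual_words(hist_A, hist_T):
    """Rank visual words by the ratio of their average frequency in
    the query set A to their average frequency in the training set T."""
    v_bar_A = hist_A.mean(axis=0)
    v_bar_T = hist_T.mean(axis=0)
    ratio = v_bar_A / np.maximum(v_bar_T, 1e-12)  # association with the query
    return np.argsort(-ratio)                     # most query-related first
```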

  18. Query-relative visual features • We compared three ways of representing each visual word’s presence or absence: • The visual word’s normalised count for this image, v_{ik} • The ratio v_{ik} / \bar{v}^T_k • A binarised version of this ratio, thresholded at 1
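A sketch of the three representations for one image, reusing the ordering from the previous slide (names are illustrative):

```python
import numpy as np

def query_relative_visual_features(v_i, v_bar_T, order, n):
    """v_i: normalised visual word counts for one image; v_bar_T: average
    counts over the training set; order: words sorted by query relevance."""
    top = order[:n]
    counts = v_i[top]                                   # (a) normalised count
    ratio = v_i[top] / np.maximum(v_bar_T[top], 1e-12)  # (b) ratio
    binary = (ratio > 1.0).astype(float)                # (c) thresholded at 1
    return counts, ratio, binary
```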

  19. Outline • Introduction • Query-relative features • Experimental evaluation • Conclusion

  20. Experimental evaluation • New data set • Model training • Evaluation • Ranking images by textual features • Ranking images by visual features • Combining textual and visual features • Performance on Fergus data set

  21. New data set • Previous data sets contain images for only a few classes, and in most cases without their corresponding meta-data • In our data set, we provide the top-ranked images together with their associated meta-data • Our data set covers 353 image search queries, with 71,478 images in total

  22. Model training • Train a binary logistic discriminant classifier • Query-relative features of relevant images are used as positive examples • Query-relative features of irrelevant images are used as negative examples • Rank images for a query by the classifier’s probability of relevance • The classifier only needs to be learnt once
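A minimal sketch of this training setup, using scikit-learn's logistic regression as the logistic discriminant (the authors' exact solver and regularisation are not specified here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_generic_classifier(X_train, y_train):
    """X_train: query-relative features pooled across training queries;
    y_train: 1 for relevant images, 0 for irrelevant ones.
    Trained once, then reused for every new query."""
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)

def rank_images(clf, X_query):
    scores = clf.predict_proba(X_query)[:, 1]  # P(relevant | features)
    return np.argsort(-scores)                 # best images first
```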

  23. Evaluation • We use mean average precision (mAP) • Low Precision (LP): the 25 queries where the search engine performs worst • High Precision (HP): the 25 queries where the search engine performs best • Search Engine Poor (SEP): the 25 queries where the search engine improves least over a random ordering of the query set • Search Engine Good (SEG): the 25 queries where the search engine improves most over a random ordering of the query set
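A short sketch of the metric, assuming binary relevance labels given in ranked order:

```python
import numpy as np

def average_precision(relevant):
    """relevant: binary relevance labels of one query's ranked images."""
    relevant = np.asarray(relevant)
    hits = np.cumsum(relevant)
    precisions = hits / (np.arange(len(relevant)) + 1)
    return float((precisions * relevant).sum() / max(relevant.sum(), 1))

def mean_average_precision(ranked_labels):
    """Mean of per-query average precision over all queries."""
    return float(np.mean([average_precision(r) for r in ranked_labels]))
```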

  24. Ranking images by textual features • Diminishing gain per additional feature

  25. Ranking images by visual features

  26. Ranking images by visual features • Adding more visual features increases the overall performance, but with diminishing gain

  27. Combining textual and visual features • a = number of visual features (50–400) • b = number of additional context features per context (20–100)

  28. Performance on Fergus data set • Our method performs better than Google’s original ranking • [4], [7] perform better, but they require time-consuming training for every new query

  29. Results

  30. Outline • Introduction • Query-relative features • Experimental evaluation • Conclusion

  31. Conclusion • We construct query-relative features that can be used to train generic classifiers • These rank images for previously unseen search queries without additional model training • The features combine textual and visual information • We present a new public data set

  32. Thank you!!! & Happy New Year!!!!
