How Well Do Computational Features Perceptually Rank Textures? A Comparative Evaluation

Dong, Xinghui; Methven, Thomas S.; Chantler, Mike J.


Abstract

Inspired by studies [4, 23, 40] that compared rankings produced by search engines with those produced by human observers, in this paper we compare texture rankings derived from 51 sets of computational features against perceptual texture rankings obtained from a free-grouping experiment with 30 human observers, using a unified evaluation framework. Experimental results show that the MRSAR [37], VZNEIGHBORHOOD [62], LBPHF [2] and LBPBASIC [3] feature sets perform better than their counterparts. However, none of these feature sets is ideal. The best average G and M measures (measures of ranking accuracy ranging from 0 to 1) [15, 5] obtained are only 0.36 and 0.25, respectively. We suggest that this poor performance may be due to the small local neighborhood used to calculate higher-order features, which cannot capture the long-range interactions that humans have been shown to exploit [14, 16, 49, 56].
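The paper's own accuracy measures (G and M) are defined in [15, 5] and are not reproduced here. As a generic illustration of the underlying task, comparing a feature-derived ranking against a perceptual ranking of the same textures, the sketch below scores agreement with Kendall's tau, a standard rank-correlation statistic. The texture IDs and rank values are hypothetical, and tau is a stand-in measure, not the paper's G or M.

```python
from itertools import combinations

def kendall_tau(rank_a: dict, rank_b: dict) -> float:
    """Kendall's tau between two rankings of the same items.

    Each ranking maps item -> rank position (0 = most similar to the query).
    Returns +1 for identical orderings, -1 for fully reversed ones.
    """
    items = list(rank_a)
    concordant = discordant = 0
    for i, j in combinations(items, 2):
        a = rank_a[i] - rank_a[j]
        b = rank_b[i] - rank_b[j]
        if a * b > 0:        # pair ordered the same way in both rankings
            concordant += 1
        elif a * b < 0:      # pair ordered oppositely
            discordant += 1
    n = len(items)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical example: five textures ranked by similarity to a query,
# once by human observers and once by a computational feature set.
perceptual = {"t1": 0, "t2": 1, "t3": 2, "t4": 3, "t5": 4}
feature    = {"t1": 0, "t2": 2, "t3": 1, "t4": 3, "t5": 4}
print(kendall_tau(perceptual, feature))  # → 0.8 (one swapped pair)
```

A perfect feature set would score 1.0 against the perceptual ranking for every query texture; the paper's finding that the best average scores were 0.36 (G) and 0.25 (M) indicates substantial disagreement under its measures.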

Presentation Conference Type Conference Paper (Published)
Conference Name ACM International Conference on Multimedia Retrieval
Start Date Apr 1, 2014
End Date Apr 4, 2014
Acceptance Date Mar 1, 2014
Publication Date Apr 1, 2014
Deposit Date Oct 23, 2018
Publisher Association for Computing Machinery (ACM)
Pages 815-824
Book Title ICMR '14 Proceedings of International Conference on Multimedia Retrieval
ISBN 9781450327824
DOI https://doi.org/10.1145/2578726.2578762
Keywords Computational features, Evaluation, Perceptual texture ranking, Texture ranking, Texture retrieval, Texture similarity
Public URL http://researchrepository.napier.ac.uk/Output/1320794
