Document Type

Article

Publication Date

9-2015

Publication Title

IEEE Signal Processing Letters

Volume

22

Issue

9

First Page

1404

Last Page

1408

Abstract

The state of the art in near-duplicate image retrieval is largely based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual-word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words in order to immediately discard mismatches and reduce the number of candidate images. The new descriptor encodes the relationships of dominant orientation and spatial position between a referential visual word and the visual words in its context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.
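
The sketch below is a minimal, hypothetical illustration of the matching idea summarized in the abstract: each referential visual word is described by the relative dominant orientations and spatial positions of the visual words around it, and a visual-word match is kept only if the two contexts agree. Function names, binning choices, and the threshold are illustrative assumptions, not the authors' exact formulation.

```python
import math
from collections import Counter

def contextual_descriptor(ref, neighbors, num_orient_bins=8, num_pos_bins=4):
    """Encode a referential keypoint's context as a small histogram.

    ref       : dict with 'x', 'y', 'orientation' (radians) of the referential word
    neighbors : list of dicts with the same keys for nearby visual words
    """
    hist = Counter()
    for nb in neighbors:
        # Relative dominant orientation, rotation-invariant w.r.t. the referential word.
        d_theta = (nb["orientation"] - ref["orientation"]) % (2 * math.pi)
        orient_bin = int(d_theta / (2 * math.pi) * num_orient_bins) % num_orient_bins
        # Relative spatial position: angle of the neighbor around the referential word.
        phi = (math.atan2(nb["y"] - ref["y"], nb["x"] - ref["x"])
               - ref["orientation"]) % (2 * math.pi)
        pos_bin = int(phi / (2 * math.pi) * num_pos_bins) % num_pos_bins
        hist[(orient_bin, pos_bin)] += 1
    return hist

def contextual_similarity(h1, h2):
    """Histogram-intersection similarity in [0, 1]."""
    inter = sum(min(h1[k], h2[k]) for k in set(h1) | set(h2))
    denom = max(sum(h1.values()), sum(h2.values()), 1)
    return inter / denom

def is_valid_match(ctx_query, ctx_db, threshold=0.5):
    """Keep a visual-word match only if the contexts agree; otherwise
    discard it immediately, before the candidate image is scored."""
    return contextual_similarity(ctx_query, ctx_db) >= threshold
```

In this reading, filtering matches by contextual agreement prunes mismatched visual words early, which in turn shrinks the set of candidate images that must be ranked.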

Comments

© Copyright 2015 IEEE. The final published version of this article can be found at: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6975087.