
The Indexing based on Multiple Visual Dictionaries for Object Based Image Retrieval
Heng Qi, Milos Stojmenovic, Keqiu Li and Zhiyang Li

This paper addresses the problem of Object Based Image Retrieval (OBIR), where the goal is to retrieve images containing the same object shown in a query image. State-of-the-art approaches to large-scale OBIR are based on the bag of visual words model. In this model, k-means clustering is performed on the space of feature descriptors to build a visual vocabulary for vector quantization, from which an inverted file index is obtained for fast retrieval. However, traditional k-means clustering is difficult to scale to large vocabularies. Although approximate k-means clustering and product quantization (PQ) have been proposed to address this problem, the information loss in these methods degrades OBIR performance. To reduce this information loss, we propose a novel approach that builds a multiple visual dictionary index for large-scale OBIR. The proposed approach differs from existing methods in three respects. Firstly, we use multiple visual vocabularies built in multiple sub-spaces for vector quantization instead of a single visual vocabulary. Secondly, we propose visual dictionary indexing, which is more discriminative than inverted file indexing. Thirdly, in addition to the TF-IDF weighting scheme, a new weighting scheme is introduced to compute the relevance score of an image to the query more accurately. To evaluate the proposed approach, we conduct experiments on public image datasets. The experimental results demonstrate significant improvements over state-of-the-art approaches on these datasets.
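For readers unfamiliar with the baseline the abstract builds on, the following minimal Python sketch illustrates the standard single-vocabulary bag of visual words pipeline with an inverted file index and TF-IDF scoring; it does not reproduce the paper's multi-dictionary indexing or its new weighting scheme, and all names, parameters, and library choices (scikit-learn, k = 1000, pre-extracted local descriptors) are illustrative assumptions rather than the authors' implementation.

    # Minimal bag-of-visual-words sketch (assumes local descriptors are already
    # extracted per image, e.g. SIFT-like vectors stacked into numpy arrays).
    import numpy as np
    from collections import defaultdict
    from sklearn.cluster import MiniBatchKMeans

    def build_vocabulary(all_descriptors, k=1000):
        """Cluster descriptors into k visual words (the visual vocabulary)."""
        vocab = MiniBatchKMeans(n_clusters=k, random_state=0)
        vocab.fit(all_descriptors)            # all_descriptors: (N, d) array
        return vocab

    def build_inverted_index(image_descriptors, vocab):
        """Map each visual word to the images containing it, with term counts."""
        inverted = defaultdict(dict)          # word -> {image_id: term frequency}
        for image_id, desc in image_descriptors.items():
            for w in vocab.predict(desc):     # quantize descriptors to visual words
                inverted[w][image_id] = inverted[w].get(image_id, 0) + 1
        return inverted

    def score_query(query_desc, vocab, inverted, n_images):
        """Rank database images for a query using TF-IDF weighting."""
        scores = defaultdict(float)
        for w in vocab.predict(query_desc):
            postings = inverted.get(w, {})
            if not postings:
                continue
            idf = np.log(n_images / len(postings))
            for image_id, tf in postings.items():
                scores[image_id] += tf * idf
        return sorted(scores.items(), key=lambda x: -x[1])

The information loss the abstract refers to arises in the quantization step (vocab.predict above): all descriptors falling into the same cluster become indistinguishable, which is what the proposed multiple sub-space vocabularies and visual dictionary indexing aim to mitigate.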

Keywords: Object based image retrieval, bag of visual words model, visual vocabulary, inverted file, visual dictionary.
