Sparse multi-modal hashing

Publication Type:
Journal Article
Citation:
IEEE Transactions on Multimedia, 2014, 16 (2), pp. 427 - 439
Issue Date:
2014-02-01
File: Sparse Multi-Modal Hashing.pdf (Published Version, Adobe PDF, 2.49 MB)
Learning hash functions across heterogeneous high-dimensional features is very desirable for many applications involving multi-modal data objects. In this paper, we propose an approach that obtains sparse codesets for data objects across different modalities via joint multi-modal dictionary learning, which we call sparse multi-modal hashing (abbreviated as SM2). In SM2, both intra-modality similarity and inter-modality similarity are first modeled by a hypergraph; then multi-modal dictionaries are jointly learned by hypergraph Laplacian sparse coding. Based on the learned dictionaries, the sparse codeset of each data object is computed and used for multi-modal approximate nearest neighbor retrieval under a sensitive Jaccard metric. The experimental results show that SM2 outperforms other methods in terms of mAP and Percentage on two real-world data sets. © 2013 IEEE.
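To illustrate the retrieval step described in the abstract, the sketch below ranks database items by a weighted Jaccard similarity between sparse codes. Note this is a generic weighted Jaccard, used here as a hedged stand-in: the paper's "sensitive Jaccard metric" has its own definition, and the function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def weighted_jaccard(a, b):
    """Weighted Jaccard similarity between two sparse codes:
    J(a, b) = sum_i min(|a_i|, |b_i|) / sum_i max(|a_i|, |b_i|).
    A generic stand-in for the paper's sensitive Jaccard metric.
    """
    a, b = np.abs(a), np.abs(b)
    denom = np.maximum(a, b).sum()
    return np.minimum(a, b).sum() / denom if denom > 0 else 0.0

def rank_by_jaccard(query, database):
    """Return database indices sorted by similarity to the query, best first."""
    sims = np.array([weighted_jaccard(query, d) for d in database])
    return np.argsort(-sims)

# Toy example: sparse codes over a 6-atom dictionary.
q = np.array([0.0, 0.9, 0.0, 0.4, 0.0, 0.0])
db = np.array([
    [0.0, 0.8, 0.0, 0.5, 0.0, 0.0],  # close to the query
    [0.7, 0.0, 0.0, 0.0, 0.6, 0.0],  # disjoint support
    [0.0, 0.3, 0.2, 0.0, 0.0, 0.0],  # partial overlap
])
order = rank_by_jaccard(q, db)
print(list(order))  # nearest neighbors first: [0, 2, 1]
```

Because the codes are sparse, only the overlapping nonzero atoms contribute to the numerator, which is what makes a Jaccard-style metric a natural fit for sparse codesets.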