An Approach for the Cross Modality Content-Based Image Retrieval between Different Image Modalities
Jeong, Inseong; Kim, Gihong
 
 Abstract
Content-based image retrieval (CBIR) is an effective tool for searching and extracting image content from a large remote sensing image database in response to a query by an operator or end user. However, because imaging principles differ from sensor to sensor, the visual representation of the same scene varies with image modality. Since a database typically archives images of various modalities, this modality difference must be tackled for a successful CBIR implementation; yet the topic has seldom been addressed and still poses a practical challenge. This study proposes a cross-modality CBIR method (termed CM-CBIR) that transforms a given query feature vector through a supervised procedure in order to link the modalities. The procedure leverages the analyst's skill during a training step, after which the transformed query vector is used to search target images of a different modality. Initial results show the potential of the proposed CM-CBIR method by retrieving the image content of interest from images of a different modality. Although its retrieval performance falls short of same-modality CBIR (SM-CBIR), the gap can be compensated by the user's relevance feedback, a conventional technique for retrieval enhancement.
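The transform-then-retrieve pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the abstract does not specify the form of the supervised transformation, so a linear least-squares mapping learned from hypothetical paired training features is assumed here, with cosine-similarity ranking for retrieval and a classic Rocchio update standing in for the user's relevance feedback. All function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def learn_mapping(src, tgt):
    # Supervised step (assumed linear here): fit a least-squares map W so that
    # src @ W approximates tgt, where src (n, d_s) holds training features from
    # the query modality and tgt (n, d_t) the paired target-modality features.
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W  # shape (d_s, d_t)

def retrieve(query, W, database, k=5):
    # Transform the query vector into the target-modality feature space,
    # then rank database images by cosine similarity.
    q = query @ W
    sims = database @ q / (np.linalg.norm(database, axis=1)
                           * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k]  # indices of the k best matches

def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    # Rocchio-style relevance feedback on the transformed query: pull it
    # toward features the user marked relevant, away from non-relevant ones.
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return q
```

In practice the paired training vectors would come from the analyst's training step described in the abstract, and the feedback update would be iterated as the user marks retrieved results relevant or non-relevant.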
 Keywords
Cross modality; Content-based image retrieval; Feature vector transformation; User's relevance feedback
 Language
English
 References
1. Brocker, L., Bogen, M., and Cremers, A. B. (2001), Improving the retrieval performance of content-based image retrieval systems: The GIVBAC approach, Fifth International Conference on Information Visualisation, 25-27 July, London, England, pp. 659-664.

2. Gelasca, E. D., Guzman, J. D., Gauglitz, S., Ghosh, P., Xu, J., Moxley, E., Rahimi, A. M., Bi, Z., and Manjunath, B. S. (2007), CORTINA: Searching a 10 Million+ Images Database, Technical Report, VRL, ECE, University of California, Santa Barbara.

3. Jeong, I. (2012), An approach for improving the performance of the content-based image retrieval (CBIR), Journal of Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, Vol. 30, No. 6-2, pp. 665-672.

4. Jia, Y., Salzmann, M., and Darrell, T. (2011), Learning cross-modality similarity for multinomial data, 2011 IEEE International Conference on Computer Vision (ICCV), 6-13 November, Barcelona, Spain, pp. 2407-2414.

5. Li, J. and Narayanan, R. M. (2004), Integrated spectral and spatial information mining in remote sensing imagery, IEEE Transactions on Geoscience and Remote Sensing, Vol. 42, No. 3, pp. 673-685.

6. Newsam, S., Wang, L., Bhagavathy, S., and Manjunath, B. S. (2004), Using texture to analyze and manage large collections of remote sensed image and video data, Journal of Applied Optics: Information Processing, Vol. 43, No. 2, pp. 210-217.

7. Rasiwasia, N., Pereira, J. C., Coviello, E., Doyle, G., Lanckriet, G. R. G., Levy, R., and Vasconcelos, N. (2010), A new approach to cross-modal multimedia retrieval, Proceedings of the International Conference on Multimedia, ACM, 25-29 October, Firenze, Italy, pp. 251-260.

8. Streilein, W., Waxman, A., Ross, W., Liu, F., Braun, M., Fay, D., Harmon, P., and Read, C. H. (2000), Fused multi-sensor image mining for feature foundation data, Proceedings of the 3rd International Conference on Information Fusion, 10-13 July, Paris, France, Vol. 1, pp. TuC3/18-TuC3/25.

9. Zhang, H. J., Chen, Z., Liu, W. Y., and Li, M. (2001), Relevance feedback in content-based image search, Proceedings of 12th International Conference on New Information Technology (NIT), 29-31 May, Beijing, China.