 Title & Authors
Extraction of an Effective Saliency Map for Stereoscopic Images using Texture Information and Color Contrast
Kim, Seong-Hyun; Kang, Hang-Bong;
 Abstract
In this paper, we propose a method that constructs a saliency map in which important regions are localized accurately and are less influenced by similar surrounding colors. To reduce the reliance on color information, our method uses LBP (Local Binary Pattern) histograms to compare and analyze the texture of surrounding regions. We extract the saliency of stereoscopic images by integrating a 2D saliency map with the depth information of the stereo pair. For each pixel, we build LBP histograms over two different neighborhood sizes and measure the distance between them; this distance expresses the texture difference between a region and its surroundings, and the saliency value is assigned according to it. For evaluation, we threshold the saliency map at 0.8 and compute the F-measure against the ground truth. The average F-measure is 0.65, and the experimental results show improved performance compared with existing saliency map extraction methods.
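As an illustration of the texture-comparison step described in the abstract (LBP histograms built over two different neighborhood sizes around each pixel, with the histogram distance used as the saliency value), a minimal sketch in Python follows. It is not the authors' code: the neighborhood sizes, the block step, the chi-square distance, and the beta^2 weight of the F-measure are assumptions made here, since the abstract does not specify them; the depth-integration step for the stereoscopic map is also omitted.

import numpy as np

def lbp_image(gray):
    """Basic 8-neighbor LBP codes (values 0-255) for a grayscale image."""
    g = gray.astype(np.float32)
    c = g[1:-1, 1:-1]                                   # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),      # 8 neighbors, clockwise
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += (nb >= c).astype(np.int32) * (1 << bit)
    return np.pad(codes, 1, mode='edge')                # keep the original size

def lbp_histogram(codes, y, x, half):
    """Normalized 256-bin LBP histogram of the window of radius `half` around (y, x)."""
    h, w = codes.shape
    win = codes[max(0, y - half):min(h, y + half + 1),
                max(0, x - half):min(w, x + half + 1)]
    hist = np.bincount(win.ravel(), minlength=256).astype(np.float64)
    return hist / max(hist.sum(), 1.0)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def texture_saliency(gray, small=3, large=9, step=4):
    """2D texture saliency: for each block, the distance between the LBP
    histograms of a small and a large surrounding window. A large distance
    means the local texture differs from its surroundings."""
    codes = lbp_image(gray)
    sal = np.zeros(gray.shape, dtype=np.float64)
    for y in range(0, gray.shape[0], step):
        for x in range(0, gray.shape[1], step):
            d = chi_square(lbp_histogram(codes, y, x, small),
                           lbp_histogram(codes, y, x, large))
            sal[y:y + step, x:x + step] = d
    return sal / max(sal.max(), 1e-10)                  # scale to [0, 1]

def f_measure(sal, gt, thresh=0.8, beta2=0.3):
    """F-measure of the saliency map binarized at `thresh` against a binary
    ground-truth mask; beta2 = 0.3 is an assumption borrowed from common
    salient-object benchmarks, as the abstract only states the 0.8 threshold."""
    pred, gt = sal >= thresh, gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    return (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-10)

Given a grayscale left image gray and a binary ground-truth mask gt, sal = texture_saliency(gray) followed by f_measure(sal, gt) mirrors the evaluation protocol described in the abstract; the depth-weighting step for the stereoscopic map would blend sal with a normalized disparity map before thresholding, in a way the abstract does not detail.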
 Keywords
Stereo Image; Saliency Map; Local Binary Pattern
 Language
Korean
 Cited by
 References
1.
S.M. Kim, C.H. Park and J.C. Namkung, "Face Feature Extraction Method Through Stereo Image's Matching Value," Journal of Korea Multimedia Society, Vol. 8, No. 4, pp. 461-472, 2005.

2.
J. Wang, M.P.D. Silva, P.L. Callet, and V. Ricordel, "Computational Model of Stereoscopic 3D Visual Saliency," IEEE Transactions on Image Processing, Vol. 22, No. 6, pp. 2151-2165, 2013.

3.
M. Cheng, G. Zhang, N.J. Mitra, X. Huang, and S. Hu, "Global Contrast Based Salient Region Detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 409-416, 2011.

4.
J.H. Reynolds and R. Desimone, "Interacting Roles of Attention and Visual Salience in V4," Neuron, Vol. 37, No. 5, pp. 853-863, 2003.

5.
J. Zhu, Y. Qiu, R. Zhang, J. Huang, and W. Zhang, "Top-Down Saliency Detection via Contextual Pooling," Journal of Signal Processing Systems, Vol. 74, No. 1, pp. 33-46, 2014.

6.
S. Goferman, L. Zelnik-Manor, and A. Tal, "Context-Aware Saliency Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 34, No. 10, pp. 1915-1926, 2012.

7.
J. Li, M.D. Levine, X. An, X. Xu, and H. He, "Visual Saliency Based on Scale-Space Analysis in the Frequency Domain," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 4, pp. 996-1010, 2013.

8.
J. Harel, C. Koch, and P. Perona, "Graph-Based Visual Saliency," Proceedings of the Conference on Neural Information Processing Systems, pp. 545-552, 2006.

9.
L. Jansen, S. Onat, and P. König, "Influence of Disparity on Fixation and Saccades in Free Viewing of Natural Scenes," Journal of Vision, Vol. 9, No. 1, pp. 1-19, 2009.

10.
P. Anandan, "A Computational Framework and an Algorithm for the Measurement of Visual Motion," International Journal of Computer Vision, Vol. 2, No. 3, pp. 283-310, 1989.

11.
D. Terzopoulos, "Regularization of Inverse Visual Problems Involving Discontinuities," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 4, pp. 413-424, 1986.

12.
S. Birchfield and C. Tomasi, "A Pixel Dissimilarity Measure That is Insensitive to Image Sampling," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 4, pp. 401-406, 1998.

13.
D. Marr and T. Poggio, "Cooperative Computation of Stereo Disparity," Science, Vol. 194, No. 4262, pp. 283-287, 1976.

14.
K. Yoon and I.S. Kweon, "Adaptive Support-weight Approach for Correspondence Search," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 4, pp. 650-656, 2006.

15.
B.J. Balas, "Texture Synthesis and Perception: Using Computational Models to Study Texture Representations in the Human Visual System," Vision Research, Vol. 46, No. 3, pp. 299-309, 2006.

16.
T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, pp. 971-987, 2002.

17.
H. Jung, K. Cho, and K. Han, "The Impact of Brightness, Polarity, and Hue Difference on Legibility and Emotional Effect of Word in Visual Display," Journal of Korean Society for Cognitive Science, Vol. 17, No. 4, pp. 337-356, 2006.

18.
Z. Yang and H. Ai, "Demographic Classification with Local Binary Patterns," Advances in Biometrics, Vol. 4642, pp. 464-473, 2007.

19.
J. Ren, X. Jiang, and J. Yuan, "Noise-Resistant Local Binary Pattern with an Embedded Error-Correction Mechanism," IEEE Transactions on Image Processing, Vol. 22, No. 10, pp. 4049-4060, 2013.

20.
Z. Guo, L. Zhang, and D. Zhang, "Rotation Invariant Texture Classification using LBP Variance (LBPV) with Global Matching," Pattern Recognition, Vol. 43, No. 3, pp. 706-719, 2010.

21.
J. Häkkinen, T. Kawai, J. Takatalo, R. Mitsuya, and G. Nyman, "What Do People Look at When They Watch Stereoscopic Movies?," Proceedings of IS&T/SPIE Electronic Imaging, Vol. 7524, 2010.

22.
R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk, "Frequency-tuned Salient Region Detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1597-1604, 2009.

23.
Y. Niu, Y. Geng, X. Li, and F. Liu, "Leveraging Stereopsis for Saliency Analysis," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 454-461, 2012.