Visual Attention Detection By Adaptive Non-Local Filter
Anh, Dao Nam
Analyzing global and local factors of a set of features, from a single image or from multiple images, is a common approach in image processing. This paper introduces an application of an adaptive version of the non-local filter, whose original version searches for non-local similarity to remove noise. Since most images contain texture patterns in both foreground and background, extracting salient regions with texture is a challenging task. Aiming at the detection of visual attention regions in textured images, we present a contrast analysis of image patches located across the whole image, not only nearby, with the assistance of the adaptive filter for estimating non-local divergence. The method allows the extraction of salient textured regions from images of wildlife. Experimental results on a benchmark demonstrate the ability of the proposed method to deal with this challenge.
Keywords: adaptive non-local means filter; saliency; dissimilarity
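The abstract does not give the exact formulation of the method. As an illustration only, the core idea — scoring a patch by its dissimilarity to patches sampled non-locally across the whole image, using NL-means-style Gaussian patch weights — can be sketched as follows (the function name, the sampling stride, and the parameter `h` are assumptions for this sketch, not taken from the paper):

```python
import numpy as np

def nonlocal_saliency(img, patch=3, stride=4, h=0.2):
    """Toy sketch: a patch is salient when it is dissimilar to
    patches sampled elsewhere in the image (non-local contrast).
    `h` plays the role of the NL-means filtering parameter."""
    img = img.astype(np.float64)
    H, W = img.shape
    r = patch // 2
    # Collect patch vectors on a coarse grid over the whole image.
    coords, vecs = [], []
    for y in range(r, H - r, stride):
        for x in range(r, W - r, stride):
            coords.append((y, x))
            vecs.append(img[y - r:y + r + 1, x - r:x + r + 1].ravel())
    V = np.asarray(vecs)
    # Mean squared distance between every pair of patch vectors.
    d2 = ((V[:, None, :] - V[None, :, :]) ** 2).mean(axis=-1)
    # NL-means-style similarity weights; a patch that resembles
    # many others gets high mean similarity, hence low saliency.
    w = np.exp(-d2 / (h * h))
    sal = 1.0 - w.mean(axis=1)
    smap = np.zeros((H, W))
    for (y, x), s in zip(coords, sal):
        smap[y, x] = s
    return smap
```

On a uniform background with one distinct textured or bright region, the minority patches drawn from that region receive low similarity to the majority and therefore a high saliency score, which is the non-local contrast effect the abstract describes.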