• Title/Summary/Keyword: Saliency map

Saliency Map Creation Method Robust to the Contour of Objects (객체의 윤곽선에 강인한 Saliency Map 생성 기법)

  • Han, Sung-Ho;Hong, Yeong-Pyo;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.10 no.3
    • /
    • pp.173-178
    • /
    • 2012
  • This paper discusses a new saliency map generation method that extracts objects effectively using the extracted salient region. Feature maps are first constructed from four features: edge, hue of the HSV color model, focus, and entropy. Conspicuity maps are then generated from center-surround differences on these feature maps, and the final saliency map is constructed by combining the conspicuity maps. The saliency map generated by this procedure is compared with the conventional technique, confirming that the new technique produces better results.
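
The center-surround stage summarized above can be illustrated with a short sketch (not the authors' code): a Gaussian pyramid is built for one feature map, center and surround levels are differenced, and the resulting conspicuity maps are averaged into a saliency map. The pyramid levels and the plain mean used for the combination are assumptions.

```python
import cv2
import numpy as np

def center_surround(feature, center_levels=(2, 3), deltas=(2, 3)):
    """Conspicuity map from center-surround differences on a Gaussian pyramid.

    `feature` stands in for any of the edge / HSV-hue / focus / entropy maps
    and is assumed to be a single-channel float32 image scaled to [0, 1].
    """
    pyramid = [feature.astype(np.float32)]
    for _ in range(max(center_levels) + max(deltas)):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    h, w = feature.shape[:2]
    conspicuity = np.zeros((h, w), np.float32)
    for c in center_levels:
        for d in deltas:
            center = cv2.resize(pyramid[c], (w, h))
            surround = cv2.resize(pyramid[c + d], (w, h))
            conspicuity += np.abs(center - surround)   # across-scale difference
    return cv2.normalize(conspicuity, None, 0.0, 1.0, cv2.NORM_MINMAX)

def combine_conspicuity_maps(conspicuity_maps):
    """Final saliency map as the normalized mean of the conspicuity maps."""
    saliency = np.mean(np.stack(conspicuity_maps, axis=0), axis=0)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)
```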

A Study on Visual Saliency Detection in Infrared Images Using Boolean Map Approach

  • Truong, Mai Thanh Nhat;Kim, Sanghoon
    • Journal of Information Processing Systems
    • /
    • v.16 no.5
    • /
    • pp.1183-1195
    • /
    • 2020
  • Visual saliency detection is an essential task because it is an important part of various vision-based applications. There are many techniques for saliency detection in color images; however, the number of methods for saliency detection in infrared images is limited. In this paper, we introduce a simple approach to saliency detection in infrared images based on thresholding. The input image is thresholded into several Boolean maps, and an initial saliency map is calculated as a weighted sum of these Boolean maps. The initial map is further refined by thresholding, morphological operations, and Gaussian filtering to produce the final, high-quality saliency map. Experiments showed that the proposed method performs well when applied to real-life data.
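
A minimal sketch of the pipeline the abstract outlines: thresholding into Boolean maps, a weighted sum, then refinement with thresholding, morphology, and Gaussian smoothing. The number of thresholds, the uniform weights, and the kernel sizes are illustrative assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

def boolean_map_saliency(ir_image, n_thresholds=8):
    """Sketch of Boolean-map saliency for a single-channel infrared image."""
    img = ir_image.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)

    # 1) Threshold the image into several Boolean maps.
    thresholds = np.linspace(0.1, 0.9, n_thresholds)
    boolean_maps = [(img >= t).astype(np.float32) for t in thresholds]

    # 2) Initial saliency map as a weighted sum of the Boolean maps
    #    (uniform weights used here as a placeholder).
    initial = np.zeros_like(img)
    for boolean_map in boolean_maps:
        initial += boolean_map / n_thresholds

    # 3) Refinement: threshold, morphological opening/closing, Gaussian blur.
    _, refined = cv2.threshold(initial, 0.5, 1.0, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    refined = cv2.morphologyEx(refined, cv2.MORPH_OPEN, kernel)
    refined = cv2.morphologyEx(refined, cv2.MORPH_CLOSE, kernel)
    return cv2.GaussianBlur(refined, (9, 9), 0)
```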

Enhanced Object Extraction Method Based on Multi-channel Saliency Map (Saliency Map 다중 채널을 기반으로 한 개선된 객체 추출 방법)

  • Choi, Young-jin;Cui, Run;Kim, Kwang-Rag;Kim, Hyoung Joong
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.2
    • /
    • pp.53-61
    • /
    • 2016
  • Extracting the focused object with a saliency map remains one of the most challenging research areas in computer vision because it is hard to estimate. In this paper, we propose an enhanced object extraction method based on a multi-channel saliency map that works automatically without machine learning. Using the SLIC, Euclidean distance, and LBP algorithms, the proposed method achieves higher object extraction accuracy than the Itti method. Experimental results show that our approach can be used for automatic object extraction without any prior training procedure, since it focuses on the main object in the image instead of estimating the whole image from background to foreground.
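
As a rough illustration of the ingredients named in the abstract (SLIC superpixels, Euclidean distance, LBP texture), the sketch below scores each superpixel by its Lab color distance to the image mean plus an LBP-histogram term; the weighting and parameters are assumptions, not the paper's procedure.

```python
import numpy as np
from skimage.color import rgb2gray, rgb2lab
from skimage.feature import local_binary_pattern
from skimage.segmentation import slic

def superpixel_saliency(rgb, n_segments=200, texture_weight=10.0):
    """Score SLIC superpixels by Lab color contrast plus LBP texture contrast."""
    lab = rgb2lab(rgb)
    gray = rgb2gray(rgb)
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")

    mean_color = lab.reshape(-1, 3).mean(axis=0)
    global_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    saliency = np.zeros(labels.shape, np.float32)
    for segment in np.unique(labels):
        mask = labels == segment
        color_dist = np.linalg.norm(lab[mask].mean(axis=0) - mean_color)
        seg_hist, _ = np.histogram(lbp[mask], bins=10, range=(0, 10), density=True)
        texture_dist = np.linalg.norm(seg_hist - global_hist)
        saliency[mask] = color_dist + texture_weight * texture_dist
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```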

Face Detection through Implementation of adaptive Saliency map (적응적인 Saliency map 모델 구현을 통한 얼굴 검출)

  • Kim, Gi-Jung;Han, Yeong-Jun;Han, Hyeon-Su
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.04a
    • /
    • pp.153-156
    • /
    • 2007
  • Through selective attention, the human visual system extracts only the necessary information from the many objects reaching the visual receptors and performs the desired task. Itti and Koch proposed a computational model, inspired by the nervous system, that can control visual attention, but it constructs a saliency map that is fixed with respect to the lighting environment. This paper therefore presents a technique for constructing a saliency map model that adapts to the lighting environment in order to detect the ROI (region of interest) in an image. To emphasize the desired features under changing conditions, dynamic weights that adapt to the situation are assigned. The dynamic weights are implemented by applying the PIM (Picture Information Measure) proposed by S. K. Chang to each conspicuity map to measure its information content, and then assigning normalized values accordingly. The performance of the proposed adaptive saliency map model, which is robust to the lighting environment, was verified through face detection experiments.
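
A minimal sketch of the dynamic-weighting idea described above, assuming Chang's PIM is computed as the number of pixels outside the most frequent gray level, normalized by the total pixel count; the normalization details differ from the paper.

```python
import numpy as np

def pim(gray_u8):
    """Picture Information Measure (assumed formulation): total pixel count
    minus the count of the most frequent gray level, normalized to [0, 1]."""
    hist, _ = np.histogram(gray_u8, bins=256, range=(0, 256))
    return (hist.sum() - hist.max()) / float(hist.sum())

def adaptive_saliency(conspicuity_maps):
    """Weight each conspicuity map by its normalized PIM before summing, so
    maps carrying more information under the current lighting contribute more."""
    weights = np.array([pim((m * 255).astype(np.uint8)) for m in conspicuity_maps])
    weights /= weights.sum() + 1e-8
    saliency = np.zeros_like(conspicuity_maps[0], dtype=np.float32)
    for w, m in zip(weights, conspicuity_maps):
        saliency += np.float32(w) * m.astype(np.float32)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```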

Analysis of the effect of class classification learning on the saliency map of Self-Supervised Transformer (클래스분류 학습이 Self-Supervised Transformer의 saliency map에 미치는 영향 분석)

  • Kim, JaeWook;Kim, Hyeoncheol
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.67-70
    • /
    • 2022
  • As the Transformer model, which first saw wide adoption in NLP, has been applied to the vision field, it has overcome the stagnant performance of existing CNN-based models in areas such as object detection and segmentation. In addition, a ViT (Vision Transformer) model trained by self-supervised learning on images alone, without label data, can be used to extract a saliency map that detects the regions of the important objects contained in an image, and research on object detection and semantic segmentation through self-supervised ViT training is therefore actively underway. In this paper, we attach a classifier to a ViT model and compare and analyze, through visualization, the saliency maps of a model trained in the ordinary supervised way and a model transfer-learned from self-supervised pretrained weights. Through this comparison, we confirm the effect of class-classification-based transfer learning on the saliency map of the transformer.
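
The attention-derived saliency maps compared above can be visualized along the lines of the sketch below, which uses the Hugging Face transformers ViT as a stand-in; the checkpoint name, the input file, and the use of last-layer CLS-to-patch attention are illustrative assumptions, not the paper's setup.

```python
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

# Hypothetical checkpoint; the paper compares scratch-trained vs. self-supervised
# pretrained + transfer-learned weights, which would be loaded here instead.
name = "google/vit-base-patch16-224-in21k"
processor = ViTImageProcessor.from_pretrained(name)
model = ViTModel.from_pretrained(name, output_attentions=True).eval()

image = Image.open("example.jpg").convert("RGB")      # hypothetical input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last-layer attention has shape (batch, heads, tokens, tokens); token 0 is [CLS].
attn = outputs.attentions[-1]
cls_to_patches = attn[0, :, 0, 1:].mean(dim=0)        # average over heads
side = int(cls_to_patches.numel() ** 0.5)             # 14 for 224px / 16px patches
saliency = cls_to_patches.reshape(side, side)
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min())
```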

Extraction of an Effective Saliency Map for Stereoscopic Images using Texture Information and Color Contrast (색상 대비와 텍스처 정보를 이용한 효과적인 스테레오 영상 중요도 맵 추출)

  • Kim, Seong-Hyun;Kang, Hang-Bong
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.9
    • /
    • pp.1008-1018
    • /
    • 2015
  • In this paper, we propose a method that constructs a saliency map in which important regions are accurately specified and the colors of those regions are less influenced by similar surrounding colors. Our method uses LBP (Local Binary Pattern) histogram information to compare and analyze the texture of surrounding regions in order to reduce the influence of color information. We extract the saliency of stereoscopic images by integrating a 2D saliency map with the depth information of the stereoscopic images. We then measure the distance between LBP histograms computed from pixels over two different region sizes; this distance represents the texture difference between the surrounding regions, and a saliency value is assigned according to it. To evaluate the experimental results, we measure the F-measure against ground truth after thresholding the saliency map at 0.8. The average F-measure is 0.65, and our experimental results show improved performance compared with existing saliency map extraction methods.
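
The LBP-histogram comparison described above can be sketched as follows; the block size, the chi-square distance, and the reduction of the depth integration to a simple weighted sum are illustrative simplifications rather than the paper's exact procedure.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def chi_square(h1, h2, eps=1e-8):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def texture_saliency_with_depth(gray, depth, block=32, P=8, R=1, alpha=0.5):
    """Per-block LBP-histogram contrast against the whole image, fused with a
    normalized depth map by a weighted sum (a stand-in for the paper's
    2D saliency + depth integration)."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    bins, rng = P + 2, (0, P + 2)
    global_hist, _ = np.histogram(lbp, bins=bins, range=rng, density=True)

    h, w = gray.shape
    saliency = np.zeros((h, w), np.float32)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = lbp[y:y + block, x:x + block]
            hist, _ = np.histogram(patch, bins=bins, range=rng, density=True)
            saliency[y:y + block, x:x + block] = chi_square(hist, global_hist)

    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    depth_n = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return alpha * saliency + (1.0 - alpha) * depth_n
```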

Implementation of an Adaptive Saliency Map Model (적응적인 Saliency Map 모델 구현)

  • Park, Sang-Bum;Kim, Ki-Joong;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.25 no.2
    • /
    • pp.131-139
    • /
    • 2008
  • This paper presents a new saliency map that is constructed by assigning dynamic weights to individual features of an input image in order to search for the ROI (Region Of Interest) or FOA (Focus Of Attention). To construct a saliency map when no a priori information is available, three feature maps are first constructed, emphasizing the orientation, color, and intensity of individual pixels, respectively. From the feature maps, conspicuity maps are generated using Itti's algorithm, and their information content is measured in terms of entropy. The final saliency map is constructed by summing the conspicuity maps weighted by their individual entropies. The effectiveness of the proposed algorithm has been demonstrated by showing that the ROIs it detects in ten different images are similar to those selected by the naked eyes of one hundred people.
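
The entropy-weighted combination described above can be sketched as follows; the histogram-based entropy estimate is an assumption about how the information quantity is measured, and the conspicuity maps are assumed to be scaled to [0, 1].

```python
import numpy as np

def map_entropy(conspicuity, bins=256):
    """Shannon entropy (bits) of a conspicuity map's intensity histogram."""
    hist, _ = np.histogram(conspicuity, bins=bins, range=(0.0, 1.0))
    p = hist.astype(np.float64) / (hist.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_saliency(conspicuity_maps):
    """Sum the orientation, color, and intensity conspicuity maps, each
    weighted by its own entropy, then rescale the result to [0, 1]."""
    weights = np.array([map_entropy(m) for m in conspicuity_maps])
    weights /= weights.sum() + 1e-12
    saliency = np.zeros_like(conspicuity_maps[0], dtype=np.float64)
    for w, m in zip(weights, conspicuity_maps):
        saliency += w * m
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
```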

Implementation of saliency map model using independent component analysis (독립성분해석을 이용한 Saliency map 모델 구현)

  • Sohn, Jun-Il;Lee, Min-Ho;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology
    • /
    • v.10 no.5
    • /
    • pp.286-291
    • /
    • 2001
  • We propose a new saliency map model for selecting an attended location in an arbitrary visual scene, which is one of the most important characteristics of the human vision system. In selecting an attended location, edge information can be used as the feature basis for constructing the saliency map. Edge filters are obtained by independent component analysis (ICA), which is the best way to find independent edges in natural gray-level scenes. To reflect the non-uniform receptor density of the retina, we use a multi-scale pyramid of the input image instead of the original input image alone. Computer simulation results show that the proposed multi-scale saliency map model successfully generates plausible attended locations.
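
A rough sketch of the two ingredients named above: ICA basis (edge-like) filters learned from gray-image patches, and filter responses pooled over a multi-scale Gaussian pyramid. The patch size, the number of components, and the rectified-response pooling are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d

def learn_ica_filters(gray_images, patch=8, n_filters=16, patches_per_image=2000):
    """Learn ICA basis (edge-like) filters from random gray-image patches."""
    data = []
    for img in gray_images:
        patches = extract_patches_2d(img.astype(np.float32), (patch, patch),
                                     max_patches=patches_per_image, random_state=0)
        data.append(patches.reshape(len(patches), -1))
    X = np.vstack(data)
    X -= X.mean(axis=1, keepdims=True)          # remove each patch's DC component
    ica = FastICA(n_components=n_filters, random_state=0, max_iter=500)
    ica.fit(X)
    return ica.components_.reshape(n_filters, patch, patch).astype(np.float32)

def ica_saliency(gray, filters, levels=3):
    """Pool rectified filter responses over a multi-scale Gaussian pyramid."""
    h, w = gray.shape
    saliency = np.zeros((h, w), np.float32)
    level_img = gray.astype(np.float32)
    for _ in range(levels):
        for flt in filters:
            response = np.abs(cv2.filter2D(level_img, -1, flt))
            saliency += cv2.resize(response, (w, h))
        level_img = cv2.pyrDown(level_img)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```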

Estimate Saliency map based on Multi Feature Assistance of Learning Algorithm (다중 특징을 지원하는 학습 기반의 saliency map에 관한 연구)

  • Han, Hyun-Ho;Lee, Gang-Seong;Park, Young-Soo;Lee, Sang-Hun
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.6
    • /
    • pp.29-36
    • /
    • 2017
  • In this paper, we propose a method for generating an improved saliency map by learning multiple features, in order to increase the accuracy and reliability of saliency maps so that they better match human visual perception. To overcome the inaccurate results of reversed selection or partial loss that occur in color-based salient area estimation with existing saliency map generation, the proposed method generates multi-feature data based on learning. The features to be considered in the image are analyzed by distinguishing the color patterns and the regions with distinctive characteristics in the original image, and the learning data is composed by combining the definition of similar salient areas with those distinctive regions using LAB color space based color analysis. After combining the training data with additional information obtained from low-level features such as frequency, color, and focus, we reconstruct the final saliency map so as to minimize inaccurate salient areas. For the experiment, we compared the results with ground truth images and obtained precision-recall values.
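
The abstract stays at a high level, but the LAB-based color analysis and the low-level frequency/focus cues it mentions can be illustrated with a simple baseline such as the one below (per-pixel Lab distance from the mean image color plus a Laplacian focus term); this is only a stand-in for the paper's learning-based combination.

```python
import cv2
import numpy as np

def lab_frequency_baseline(bgr, blur=5, alpha=0.7):
    """Baseline saliency: Lab color contrast against the mean image color,
    combined with a Laplacian (high-frequency / focus) response."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab = cv2.GaussianBlur(lab, (blur, blur), 0)
    color_sal = np.linalg.norm(lab - lab.reshape(-1, 3).mean(axis=0), axis=2)

    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    focus_sal = np.abs(cv2.Laplacian(gray, cv2.CV_32F, ksize=3))

    def rescale(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    return alpha * rescale(color_sal) + (1.0 - alpha) * rescale(focus_sal)
```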

Saliency Map Based Color Image Compression for Visual Quality Enhancement of Image (영상의 시각적 품질향상을 위한 Saliency 맵 기반의 컬러 영상압축)

  • Jung, Sung-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.3
    • /
    • pp.446-455
    • /
    • 2017
  • A color image compression method based on a saliency map is proposed. At a given bitrate, the proposed method provides higher quality in salient blocks, on which people's attention focuses, than in non-salient blocks, which receive less attention. The method uses three different quantization tables according to each block's saliency level. In an experiment using six typical images, we compared the proposed method with JPEG and other conventional methods. The results show that the proposed method (Qup = 0.5*Qx) is about 1.2 to 3.1 dB better in PSNR than JPEG and the other methods in salient blocks at a nearly identical bitrate. In a comparison of the resulting images, the proposed method also shows less error than the others in salient blocks.
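
A minimal sketch of the block-wise idea described above: one of three quantization tables is chosen per 8x8 block according to its saliency level, with finer quantization where saliency is high. The scale factors (including 0.5 * Q as a stand-in for the paper's Qup = 0.5*Qx table) and the level thresholds are illustrative.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Standard JPEG luminance quantization table (the baseline Qx).
Q_BASE = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float32)

# Three tables: finer quantization (0.5 * Q) for the most salient blocks,
# coarser tables for less salient ones; the scale factors are illustrative.
Q_TABLES = [0.5 * Q_BASE, 1.0 * Q_BASE, 2.0 * Q_BASE]

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def saliency_level(mean_saliency, hi=0.66, lo=0.33):
    """Map a block's mean saliency in [0, 1] to level 0 (most salient) .. 2."""
    return 0 if mean_saliency >= hi else (1 if mean_saliency >= lo else 2)

def compress_block(block, level):
    """Quantize and reconstruct one 8x8 luminance block with the table chosen
    by its saliency level."""
    q = Q_TABLES[level]
    coeffs = np.round(dct2(block.astype(np.float32) - 128.0) / q)
    return idct2(coeffs * q) + 128.0
```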