Title/Summary/Keyword: Visual Feature Extraction

Adaptive Processing for Feature Extraction: Application of Two-Dimensional Gabor Function

  • Lee, Dong-Cheon
    • Korean Journal of Remote Sensing / v.17 no.4 / pp.319-334 / 2001
  • Extracting primitives from imagery is an important task in visual information processing, since primitives provide useful information about the characteristics of objects and patterns. The human visual system utilizes such features without difficulty for image interpretation, scene analysis, and object recognition; extracting and analyzing features automatically, however, is a difficult process. The ultimate goal of digital image processing is to extract information and reconstruct objects automatically, and the objective of this study is to develop a robust method toward that goal. In this study, an adaptive strategy was developed by implementing Gabor filters to extract feature information and to segment images. Gabor filters are conceived as hypothetical structures of the retinal receptive fields in the human visual system; it is therefore possible to develop a method that resembles the performance of human visual perception using Gabor filters. A method to compute appropriate parameters of the Gabor filters without human visual inspection is proposed, and the entire framework is based on the theory of human visual perception. Digital images were used to evaluate the performance of the proposed strategy. The results show that the proposed adaptive approach improves the performance of Gabor filters for feature extraction and segmentation.
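
As an informal illustration of the kind of Gabor filtering the paper builds on, the sketch below constructs a small bank of 2D Gabor kernels and computes per-orientation filter responses with NumPy/SciPy. The kernel size, sigma, wavelength, and orientation count are arbitrary stand-ins; the paper's adaptive parameter-selection step is not reproduced here.

```python
# Minimal sketch (not the paper's adaptive method): a small 2D Gabor filter bank.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, sigma, theta, wavelength, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor function: a Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

def gabor_responses(image, orientations=4, size=21, sigma=4.0, wavelength=8.0):
    """Filter the image with one kernel per orientation and stack the responses."""
    thetas = np.arange(orientations) * np.pi / orientations
    return np.stack([convolve2d(image, gabor_kernel(size, sigma, t, wavelength),
                                mode="same", boundary="symm") for t in thetas])

if __name__ == "__main__":
    img = np.random.rand(64, 64)      # stand-in for a grayscale image
    resp = gabor_responses(img)
    print(resp.shape)                 # (4, 64, 64): one response map per orientation
```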

CLASSIFIED EIGEN BLOCK: LOCAL FEATURE EXTRACTION AND IMAGE MATCHING ALGORITHM

  • Hochul Shin;Kim, Seong-Dae
    • Proceedings of the IEEK Conference / 2003.07e / pp.2108-2111 / 2003
  • This paper introduces a new local feature extraction method and image matching method for the localization and classification of targets. The proposed method is based on block-by-block projection associated with the directional pattern of each block. Each pattern has its own eigenvectors, called CEBs (Classified Eigen-Blocks). The proposed block-based image matching method is also robust to translation and occlusion. The performance of the proposed feature extraction and matching method is verified by face localization and FLIR vehicle image classification tests.
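
The abstract does not give the exact CEB construction, so the following is only a hedged sketch of the general idea: group training blocks by a dominant gradient direction, learn per-group eigen-blocks with PCA, and describe a block by its projection onto the eigen-blocks of its class. The block size, the number of directional classes, and the classification rule are assumptions made for illustration.

```python
# Illustrative sketch only, not the authors' exact CEB construction.
import numpy as np

N_DIRS = 4           # number of directional pattern classes (assumed)
BLOCK = 8            # block size in pixels (assumed)

def dominant_direction(block):
    """Quantize the mean gradient orientation of a block into N_DIRS classes."""
    gy, gx = np.gradient(block.astype(float))
    angle = np.arctan2(gy.sum(), gx.sum()) % np.pi
    return int(angle / (np.pi / N_DIRS)) % N_DIRS

def learn_eigen_blocks(blocks, n_components=8):
    """PCA per directional class: rows of each basis act as 'classified eigen-blocks'."""
    bases = {}
    for d in range(N_DIRS):
        members = [b.ravel() for b in blocks if dominant_direction(b) == d]
        if not members:
            continue
        data = np.stack(members) - np.mean(members, axis=0)
        _, _, vt = np.linalg.svd(data, full_matrices=False)
        bases[d] = vt[:n_components]
    return bases

def block_feature(block, bases):
    """Project a block onto the eigen-blocks of its directional class."""
    d = dominant_direction(block)
    return bases[d] @ (block.ravel() - block.mean()) if d in bases else None

if __name__ == "__main__":
    train = [np.random.rand(BLOCK, BLOCK) for _ in range(200)]
    bases = learn_eigen_blocks(train)
    print(block_feature(np.random.rand(BLOCK, BLOCK), bases))
```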

Visual Touch Recognition for NUI Using Voronoi-Tessellation Algorithm

  • Kim, Sung Kwan;Joo, Young Hoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.64 no.3 / pp.465-472 / 2015
  • This paper presents a visual touch recognition method for NUI (Natural User Interface) using the Voronoi-tessellation algorithm. The proposed approach consists of three parts: hand region extraction, hand feature point extraction, and visual touch recognition. To improve the robustness of hand region extraction, we propose using an RGB/HSI color model, the Canny edge detection algorithm, and spatial frequency information. To improve the accuracy of hand feature point extraction, we propose the use of the Douglas-Peucker algorithm, and to recognize the visual touch we propose the use of the Voronoi-tessellation algorithm. Finally, we demonstrate the feasibility and applicability of the proposed algorithms through experiments.
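
A minimal sketch of two of the listed ingredients, using off-the-shelf routines rather than the authors' implementation: Douglas-Peucker contour simplification via OpenCV's approxPolyDP and a Voronoi tessellation of the resulting feature points via SciPy, with a nearest-site query standing in for deciding which cell a touch location falls in. The synthetic contour and the epsilon value are placeholders.

```python
# Sketch only: Douglas-Peucker simplification + Voronoi cells over feature points.
import numpy as np
import cv2
from scipy.spatial import Voronoi, cKDTree

# Synthetic closed contour standing in for a segmented hand outline.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.stack([100 + 60 * np.cos(t), 100 + 40 * np.sin(t)], axis=1).astype(np.float32)

# Douglas-Peucker simplification: keep only salient vertices (candidate feature points).
approx = cv2.approxPolyDP(contour.reshape(-1, 1, 2), epsilon=3.0, closed=True)
points = approx.reshape(-1, 2)

# Voronoi tessellation of the feature points; each cell is "owned" by one feature point.
vor = Voronoi(points)
print("feature points:", len(points), "Voronoi cells:", len(vor.point_region))

# Assigning a touch location to a Voronoi cell is equivalent to a nearest-site query.
touch = np.array([150.0, 110.0])
owner = cKDTree(points).query(touch)[1]
print("touch falls in the cell of feature point", points[owner])
```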

Visual Feature Extraction Technique for Content-Based Image Retrieval

  • Park, Won-Bae;Song, Young-Jun;Kwon, Heak-Bong;Ahn, Jae-Hyeong
    • Journal of Korea Multimedia Society / v.7 no.12 / pp.1671-1679 / 2004
  • This study proposes visual feature extraction methods for each band in the wavelet domain, using both spatial frequency features and multi-resolution features. It also introduces a similarity measurement method based on fuzzy theory and a new color feature representation that uses the frequency of each color after color quantization, reducing the quantization error that is a disadvantage of the existing color histogram intersection method. Experiments were performed on a database containing 1,000 color images. The proposed method gives better performance than the conventional method in both objective and subjective performance evaluations.
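
The following is a rough sketch under stated assumptions rather than the paper's exact design: PyWavelets provides the per-band decomposition, sub-band energies serve as the visual features, and a simple min/max ratio plays the role of a fuzzy similarity measure.

```python
# Sketch only: wavelet sub-band energies as features, fuzzy-style similarity.
import numpy as np
import pywt

def wavelet_features(gray, wavelet="haar", level=2):
    """Energy of every sub-band of a 2-level wavelet decomposition."""
    coeffs = pywt.wavedec2(gray, wavelet=wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]              # approximation band
    for (ch, cv, cd) in coeffs[1:]:                   # detail bands per level
        feats += [np.mean(np.abs(ch)), np.mean(np.abs(cv)), np.mean(np.abs(cd))]
    return np.array(feats)

def fuzzy_similarity(f1, f2, eps=1e-9):
    """Fuzzy-style similarity: mean of min/max membership ratios per feature."""
    return float(np.mean(np.minimum(f1, f2) / (np.maximum(f1, f2) + eps)))

if __name__ == "__main__":
    a = np.random.rand(128, 128)
    b = np.random.rand(128, 128)
    print(fuzzy_similarity(wavelet_features(a), wavelet_features(b)))
```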

Efficient Content-Based Image Retrieval Methods Using Color and Texture

  • Lee, Sang-Mi;Bae, Hee-Jung;Jung, Sung-Hwan
    • ETRI Journal / v.20 no.3 / pp.272-283 / 1998
  • In this paper, we propose efficient content-based image retrieval methods using the automatic extraction of low-level visual features as image content. Two new feature extraction methods are presented. The first is an advanced color feature extraction derived from a modification of Stricker's method. The second is a texture feature extraction using DCT coefficients that represent dominant directions and gray-level variations of the image. In experiments with an image database of 200 natural images, the proposed methods show higher performance than other methods, and they can be combined into an efficient hierarchical retrieval method.
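
A hedged sketch in the spirit of the two features described, not the paper's exact formulas: color moments per channel, following the general idea of Stricker-style color features, and the average magnitude of a few low-frequency 8x8 block DCT coefficients as a texture descriptor. The particular coefficients kept are an arbitrary choice.

```python
# Sketch only: per-channel color moments and block-DCT texture coefficients.
import numpy as np
from scipy.fft import dctn

def color_moments(rgb):
    """Mean, standard deviation, and skewness of each color channel."""
    feats = []
    for c in range(3):
        ch = rgb[..., c].astype(float).ravel()
        mu, sd = ch.mean(), ch.std()
        skew = np.cbrt(np.mean((ch - mu) ** 3))
        feats += [mu, sd, skew]
    return np.array(feats)

def dct_texture(gray, block=8, keep=4):
    """Average magnitude of a few low-frequency DCT coefficients over 8x8 blocks."""
    h, w = (gray.shape[0] // block) * block, (gray.shape[1] // block) * block
    acc, n = np.zeros((block, block)), 0
    for y in range(0, h, block):
        for x in range(0, w, block):
            acc += np.abs(dctn(gray[y:y + block, x:x + block].astype(float), norm="ortho"))
            n += 1
    coeffs = acc / max(n, 1)
    # Keep a small set of low-frequency coefficients (DC excluded) as texture descriptors.
    return np.array([coeffs[0, 1], coeffs[1, 0], coeffs[1, 1], coeffs[0, 2]])[:keep]

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3) * 255
    print(color_moments(img), dct_texture(img.mean(axis=2)))
```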

A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society / v.18 no.4 / pp.460-472 / 2015
  • Most previous visual attention systems find attention regions based on a saliency map that combines multiple extracted features; these systems differ in how the features are extracted and combined. This paper presents a new system with improved feature extraction for color and motion and an improved weight decision method for spatial and temporal features. Our system dynamically extracts the single color with the strongest response among two opponent colors, and it detects moving objects rather than moving pixels. To combine spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration method improves the detection rate of attention regions.
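
As a loose illustration of dynamically weighted fusion, and not the paper's feature set or object-level motion detection, the sketch below builds one spatial map from red-green color opponency, one temporal map from frame differencing, and fuses them with weights proportional to each map's relative activity.

```python
# Sketch only: opponent-color map + motion map, fused with activity-based weights.
import numpy as np

def opponent_map(rgb):
    """Pick the stronger of the two opponent responses (R-G vs. G-R) as a single map."""
    r, g = rgb[..., 0].astype(float), rgb[..., 1].astype(float)
    rg, gr = np.clip(r - g, 0, None), np.clip(g - r, 0, None)
    return rg if rg.sum() >= gr.sum() else gr

def motion_map(prev_gray, cur_gray, thresh=15.0):
    """Crude motion cue: thresholded absolute frame difference."""
    return (np.abs(cur_gray.astype(float) - prev_gray.astype(float)) > thresh).astype(float)

def fused_saliency(spatial, temporal, eps=1e-9):
    """Weight each normalized map by its relative activity (mean response)."""
    s = spatial / (spatial.max() + eps)
    t = temporal / (temporal.max() + eps)
    ws, wt = s.mean(), t.mean()
    total = ws + wt + eps
    return (ws / total) * s + (wt / total) * t

if __name__ == "__main__":
    frame0 = np.random.rand(60, 80, 3) * 255
    frame1 = frame0 + np.random.randn(60, 80, 3) * 5
    sal = fused_saliency(opponent_map(frame1), motion_map(frame0.mean(2), frame1.mean(2)))
    print(sal.shape, float(sal.max()))
```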

Automatic Extraction and Measurement of Visual Features of Mushroom (Lentinus edodes L.)

  • Hwang, Heon;Lee, Yong-Guk
    • Journal of Bio-Environment Control / v.1 no.1 / pp.37-51 / 1992
  • Quantizing and extracting visual features of mushroom (Lentinus edodes L.) are crucial to sorting and grading automation, growth state measurement, and drying performance indexing. A computer image processing system was utilized for the extraction and measurement of visual features of the front and back sides of the mushroom. The system is composed of an IBM PC compatible 386DX, an ITEX PCVISION Plus frame grabber, a B/W CCD camera, a VGA color graphics monitor, and an RGB image output monitor. In this paper, an automatic thresholding algorithm was developed to yield a segmented binary image representing the skin states of the front and back sides. An eight-directional Freeman chain coding was modified to solve edge disconnectivity by gradually expanding the mask size from 3×3 to 9×9. A real-scaled geometric quantity of the object was extracted directly from the 8-directional chain elements. The external shape of the mushroom was analyzed and converted to quantitative feature patterns. Efficient algorithms for extracting the selected feature patterns and recognizing the front and back sides were developed. The algorithms were coded in a menu-driven way using MS C Ver. 6.0, PC VISION PLUS library functions, and VGA graphics functions.
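
A simplified sketch of the contour-coding step, assuming OpenCV 4's findContours return signature: Otsu thresholding stands in for the paper's automatic thresholding, and an 8-directional Freeman chain code is read off consecutive contour points. The mask-expansion fix for disconnected edges is not reproduced.

```python
# Sketch only: Otsu threshold + 8-directional Freeman chain code of the largest contour.
import numpy as np
import cv2

# Map (dx, dy) between consecutive 8-connected contour points to Freeman directions 0..7.
FREEMAN = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
           (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def freeman_chain(binary):
    """Chain code of the largest external contour of a binary image."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2)
    code = []
    for p, q in zip(pts, np.roll(pts, -1, axis=0)):
        dx, dy = int(q[0] - p[0]), int(q[1] - p[1])
        code.append(FREEMAN.get((dx, dy)))
    return code

if __name__ == "__main__":
    img = np.zeros((100, 100), np.uint8)
    cv2.circle(img, (50, 50), 30, 180, -1)      # synthetic bright object
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    chain = freeman_chain(binary)
    print(len(chain), chain[:10])
```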

A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation (다중 채널 동적 객체 정보 추정을 통한 특징점 기반 Visual SLAM)

  • Geunhyeong Park;HyungGi Jo
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.65-71 / 2024
  • An indirect visual SLAM takes raw image data and exploits geometric information such as keypoints and line edges. Its performance may decrease under various environmental changes, and the main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM, built on ORB-SLAM, that uses multi-channel dynamic object estimation. An optical flow algorithm and a deep learning-based object detection algorithm each estimate a different type of dynamic object information. The proposed method combines these two sources of information to create multi-channel dynamic masks, capturing both objects that are actually moving and objects that are potentially dynamic. Finally, dynamic objects included in the masks are removed in the feature extraction stage. As a result, the proposed method obtains more precise camera poses. The superiority of our ORB-SLAM variant over conventional ORB-SLAM was verified by experiments using the KITTI odometry dataset.
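
A minimal sketch of the masking idea rather than the full SLAM pipeline: one mask channel comes from dense optical-flow magnitude ("actually moving"), one from detector bounding boxes ("potentially dynamic", here simply assumed to be given), and ORB keypoints inside the combined mask are discarded before matching. The thresholds, box list, and feature counts are placeholders.

```python
# Sketch only: multi-channel dynamic masks combined and used to filter ORB keypoints.
import numpy as np
import cv2

def flow_mask(prev_gray, cur_gray, mag_thresh=2.0):
    """Mark pixels whose Farneback optical-flow magnitude exceeds a threshold."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    return (mag > mag_thresh).astype(np.uint8)

def detection_mask(shape, boxes):
    """Mark pixels inside detector bounding boxes of potentially dynamic classes."""
    mask = np.zeros(shape, np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = 1
    return mask

def static_keypoints(gray, dynamic_mask, n_features=1000):
    """Extract ORB keypoints only where the combined dynamic mask is zero."""
    keep = np.where(dynamic_mask > 0, 0, 255).astype(np.uint8)
    orb = cv2.ORB_create(nfeatures=n_features)
    return orb.detect(gray, keep)

if __name__ == "__main__":
    prev = np.random.randint(0, 255, (120, 160), np.uint8)
    cur = np.roll(prev, 2, axis=1)                  # fake camera/object motion
    dyn = np.maximum(flow_mask(prev, cur), detection_mask(prev.shape, [(40, 30, 80, 70)]))
    print(len(static_keypoints(cur, dyn)))
```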

Development of Robust Feature Recognition and Extraction Algorithm for Dried Oak Mushrooms

  • Lee, C.H.;Hwang, H.
    • Journal of Biosystems Engineering / v.21 no.3 / pp.325-335 / 1996
  • Visual features are crucial for monitoring the growth state, indexing the drying performance, and grading the quality of oak mushrooms. A computer vision system with a neural network information processing technique was utilized to quantize quality factors of dried oak mushrooms distributed over the cap and gill sides. In this paper, visual feature extraction algorithms were integrated with neural network processing to deal with various fuzzy patterns of mushroom shapes and to compensate for the fault sensitivity of the crisp criteria and heuristic rules derived from the image processing results. The proposed algorithm improved the segmentation of the skin features of each side, the identification of cap and gill surfaces, the identification of stipe states, and the removal of the stipe. The visual characteristics of dried oak mushrooms were analyzed, and the primary visual features essential to quality evaluation were extracted and quantized. In this study, black-and-white gray images were captured and used for algorithm development.
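
The paper's network and feature definitions are not given in the abstract, so the block below is purely illustrative: a tiny feed-forward classifier that maps a handful of hypothetical normalized visual features (for example cap area, roundness, crack ratio) to grade probabilities, showing how extracted features and neural-net processing can be chained.

```python
# Purely illustrative: hypothetical features fed to a tiny feed-forward grader.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in=4, n_hidden=8, n_out=3):
    """One hidden layer; weights are random stand-ins for trained parameters."""
    return {"w1": rng.normal(0, 0.5, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
            "w2": rng.normal(0, 0.5, (n_hidden, n_out)), "b2": np.zeros(n_out)}

def grade(features, net):
    """Forward pass: sigmoid hidden layer, softmax over quality grades."""
    h = 1.0 / (1.0 + np.exp(-(features @ net["w1"] + net["b1"])))
    logits = h @ net["w2"] + net["b2"]
    p = np.exp(logits - logits.max())
    return p / p.sum()

if __name__ == "__main__":
    feats = np.array([0.8, 0.6, 0.1, 0.3])    # hypothetical normalized visual features
    print(grade(feats, init_mlp()))
```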

Infrared Visual Inertial Odometry via Gaussian Mixture Model Approximation of Thermal Image Histogram

  • Jaeho Shin;Myung-Hwan Jeon;Ayoung Kim
    • The Journal of Korea Robotics Society / v.18 no.3 / pp.260-270 / 2023
  • We introduce a novel Visual Inertial Odometry (VIO) algorithm designed to improve the performance of thermal-inertial odometry. Thermal infrared images, though advantageous for feature extraction in low-light conditions, typically suffer from a high noise level and significant information loss during 8-bit conversion. Our algorithm overcomes these limitations by approximating the 14-bit raw pixel histogram with a Gaussian mixture model. The conversion method effectively emphasizes image regions where texture for visual tracking is abundant while reducing unnecessary background information. We incorporate robust learning-based feature extraction and matching methods, SuperPoint and SuperGlue, and a zero-velocity detection module to further reduce the uncertainty of visual odometry. Tested across various datasets, the proposed algorithm shows improved performance compared to other state-of-the-art VIO algorithms, paving the way for robust thermal-inertial odometry.
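
A rough sketch of the histogram-approximation idea under stated assumptions: scikit-learn's GaussianMixture is fit to subsampled 14-bit intensities, and the mixture CDF is used as a smooth 14-bit to 8-bit tone-mapping lookup table. The paper's exact approximation and emphasis scheme are not reproduced.

```python
# Sketch only: GMM fit to the 14-bit intensity distribution, mixture CDF as a tone map.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmm_tonemap(raw14, n_components=3, n_samples=20000, seed=0):
    """Fit a GMM to the 14-bit intensities and remap to 8 bits via the mixture CDF."""
    rng = np.random.default_rng(seed)
    samples = rng.choice(raw14.ravel(), size=min(n_samples, raw14.size), replace=False)
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(samples.reshape(-1, 1).astype(float))

    levels = np.arange(2 ** 14, dtype=float)
    cdf = np.zeros_like(levels)
    for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
        cdf += w * norm.cdf(levels, loc=mu, scale=np.sqrt(var))   # mixture CDF
    lut = np.clip(np.round(cdf * 255), 0, 255).astype(np.uint8)   # 14-bit -> 8-bit LUT
    return lut[raw14]

if __name__ == "__main__":
    thermal = np.random.randint(0, 2 ** 14, (240, 320), dtype=np.uint16)
    img8 = gmm_tonemap(thermal)
    print(img8.dtype, img8.min(), img8.max())
```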