• Title, Summary, Keyword: Feature Fusion


A Study on the Performance Enhancement of Radar Target Classification Using the Two-Level Feature Vector Fusion Method

  • Kim, In-Ha;Choi, In-Sik;Chae, Dae-Young
    • Journal of Electromagnetic Engineering and Science / v.18 no.3 / pp.206-211 / 2018
  • In this paper, we propose a two-level feature vector fusion technique to improve the performance of target classification. The proposed method combines the feature vectors of the early-time and late-time regions in the first-level fusion; in the second-level fusion, it combines the monostatic and bistatic features obtained at the first level. The radar cross section (RCS) of a 3D full-scale model is obtained using the electromagnetic analysis tool FEKO, and the feature vector of the target is extracted from it. A feature vector based on the waveform structure is used for the early-time region, while the resonance frequencies extracted using the evolutionary-programming-based CLEAN algorithm are used for the late-time region. The results show that the two-level fusion method outperforms the one-level fusion method.
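The abstract gives no implementation, but the two-level scheme it describes amounts to staged concatenation of feature vectors. A minimal NumPy sketch, with placeholder feature values and dimensions:

```python
import numpy as np

def fuse(*feature_vectors):
    """Concatenate feature vectors into a single fused vector."""
    return np.concatenate(feature_vectors)

# First level: combine early-time (waveform-structure) and
# late-time (resonance-frequency) features for each geometry.
mono_early, mono_late = np.array([0.2, 0.7]), np.array([1.1])
bi_early,   bi_late   = np.array([0.4, 0.5]), np.array([0.9])

mono_level1 = fuse(mono_early, mono_late)   # monostatic, first-level fusion
bi_level1   = fuse(bi_early, bi_late)       # bistatic, first-level fusion

# Second level: combine the monostatic and bistatic first-level vectors.
level2 = fuse(mono_level1, bi_level1)
print(level2.shape)  # (6,)
```

The fused vector would then feed whatever classifier the pipeline uses.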

Multi Scale Object Detection Based on Single Shot Multibox Detector with Feature Fusion and Inception Network

  • Haque, Md Foysal;Kang, Dae-Seong
    • The Journal of Korean Institute of Information Technology / v.16 no.10 / pp.93-100 / 2018
  • Inception layers are among the most effective and constructive architectures in current object detection, and feature fusion layers likewise deliver significant performance gains. The Single Shot MultiBox Detector (SSD) is one of the most convenient and fastest algorithms in the field: it uses a single convolutional neural network to detect objects in the input. SSD achieves high accuracy on multi-scale objects, but its performance on small objects is not satisfactory. In this paper, we add feature fusion layers and Inception layers to SSD to increase its performance. Our main focus is to improve the features of different layers through feature fusion while minimizing the computational cost with Inception modules. The extra layers improve performance without affecting speed: with Inception and feature fusion layers, the SSD network can capture more information without increasing network complexity, and it is able to detect more small objects. The proposed algorithm detects both small and large objects with satisfactory accuracy.
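The core of such a fusion layer is combining a fine, high-resolution feature map (which sees small objects) with an upsampled deeper map. A NumPy sketch of that step alone, with layer shapes chosen to mimic typical SSD feature maps (the exact layers fused are not stated in the abstract):

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def fuse_maps(fine, coarse):
    """Channel-wise concatenation of a fine map with an upsampled coarse map."""
    return np.concatenate([fine, upsample2x(coarse)], axis=0)

fine = np.random.rand(256, 38, 38)    # early, high-resolution layer (small objects)
coarse = np.random.rand(512, 19, 19)  # deeper, coarser layer (more semantics)
fused = fuse_maps(fine, coarse)
print(fused.shape)  # (768, 38, 38)
```

In the real network the concatenation would be followed by a convolution to mix the channels; here only the fusion itself is shown.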

Gait Recognition Algorithm Based on Feature Fusion of GEI Dynamic Region and Gabor Wavelets

  • Huang, Jun;Wang, Xiuhui;Wang, Jun
    • Journal of Information Processing Systems / v.14 no.4 / pp.892-903 / 2018
  • The paper proposes a novel gait recognition algorithm based on feature fusion of the gait energy image (GEI) dynamic region and Gabor wavelets, which consists of four steps. First, gait contour images are extracted through object detection, binarization, and morphological processing. Second, features of the GEI at different angles and Gabor features with multiple orientations are extracted from the dynamic part of the GEI. Then, an averaging method fuses the GEI dynamic-region features with the Gabor wavelet features at the feature layer, and the feature-space dimension is reduced by an improved kernel principal component analysis (KPCA). Finally, the fused feature vectors are input into a multi-class support vector machine (SVM) to classify and recognize the gait. The primary contributions of the paper are: a novel gait recognition algorithm based on feature fusion of GEI and Gabor features; an improved KPCA method to reduce the feature-matrix dimension; and an SVM to identify gait sequences. The experimental results show that the proposed algorithm yields over 90% correct classification, indicating that it distinguishes different human gaits better and achieves better recognition than existing algorithms.
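The averaging fusion and the dimensionality-reduction step can be sketched with standard (not the paper's improved) kernel PCA; the GEI and Gabor extraction is omitted and random vectors stand in for the real features:

```python
import numpy as np

def average_fuse(f_gei, f_gabor):
    """Feature-layer fusion: element-wise average of two equal-length vectors."""
    return 0.5 * (f_gei + f_gabor)

def kpca(X, n_components, gamma=1.0):
    """Plain RBF-kernel PCA; returns the projected training samples."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # top eigenvectors
    return Kc @ vecs[:, idx]

rng = np.random.default_rng(0)
fused = average_fuse(rng.random((20, 64)), rng.random((20, 64)))
reduced = kpca(fused, n_components=5)
print(reduced.shape)  # (20, 5)
```

The reduced vectors would then be passed to a multi-class SVM, as the abstract describes.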

Multimodal Biometric Using a Hierarchical Fusion of a Person's Face, Voice, and Online Signature

  • Elmir, Youssef;Elberrichi, Zakaria;Adjoudj, Reda
    • Journal of Information Processing Systems / v.10 no.4 / pp.555-567 / 2014
  • Improving biometric performance is a challenging task. In this paper, a hierarchical fusion strategy for a multimodal biometric system is presented. The strategy combines several biometric traits using a multi-level fusion hierarchy: a pre-classification fusion with optimal feature selection, and a post-classification fusion based on the maximum of the matching scores. The proposed solution enhances recognition performance through suitable feature selection and reduction, such as principal component analysis (PCA) and linear discriminant analysis (LDA), since not all feature vector components contribute to the performance improvement.
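A sketch of the two levels, assuming PCA for the pre-classification reduction and max-score fusion for the post-classification step; the matching scores below are hypothetical:

```python
import numpy as np

def pca_reduce(X, k):
    """Pre-classification step: project samples onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def max_score_fusion(score_sets):
    """Post-classification step: keep, per enrolled identity,
    the maximum matching score across the modalities."""
    return np.max(np.stack(score_sets), axis=0)

rng = np.random.default_rng(0)
reduced = pca_reduce(rng.random((10, 6)), k=3)  # reduced multimodal features

face  = np.array([0.61, 0.20, 0.15])  # one matching score per enrolled identity
voice = np.array([0.40, 0.75, 0.10])
sign  = np.array([0.35, 0.30, 0.80])
fused = max_score_fusion([face, voice, sign])
print(fused.tolist())          # [0.61, 0.75, 0.8]
print(int(np.argmax(fused)))   # 2
```

Max-score fusion lets the strongest modality dominate each comparison, which is one common choice among score-fusion rules (sum and product rules are alternatives).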

Convolutional Neural Network Based Multi-feature Fusion for Non-rigid 3D Model Retrieval

  • Zeng, Hui;Liu, Yanrong;Li, Siqi;Che, JianYong;Wang, Xiuqing
    • Journal of Information Processing Systems / v.14 no.1 / pp.176-190 / 2018
  • This paper presents a novel convolutional neural network based multi-feature fusion learning method for non-rigid 3D model retrieval, which exploits the discriminative information of the heat kernel signature (HKS) and wave kernel signature (WKS) descriptors. First, we compute the 2D shape distributions of the two descriptors to represent the 3D model and use them as network inputs. We then construct separate convolutional neural networks for the HKS and WKS distributions and connect them with a multi-feature fusion layer. The fusion layer not only exploits more discriminative characteristics of the two descriptors but also draws on the correlated information between them. Furthermore, to improve the descriptive ability, a cross-connected layer is built to combine low-level with high-level features. Extensive experiments validate the effectiveness of the designed multi-feature fusion learning method.
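The two-branch topology with a cross-connection can be sketched without any deep-learning framework: each branch below is a toy two-stage network, and the fusion keeps both low- and high-level outputs of both branches (weights and sizes are placeholders, not the paper's architecture):

```python
import numpy as np

def branch(x, w1, w2):
    """A toy two-stage branch: returns (low-level, high-level) features."""
    low = np.maximum(x @ w1, 0.0)   # ReLU after first stage
    high = np.maximum(low @ w2, 0.0)
    return low, high

rng = np.random.default_rng(1)
hks_dist = rng.random(32)   # flattened 2D shape distribution of the HKS
wks_dist = rng.random(32)   # flattened 2D shape distribution of the WKS

lo_h, hi_h = branch(hks_dist, rng.random((32, 16)), rng.random((16, 8)))
lo_w, hi_w = branch(wks_dist, rng.random((32, 16)), rng.random((16, 8)))

# Fusion layer joins the two branches; the cross-connection keeps the
# low-level features alongside the high-level ones.
fused = np.concatenate([lo_h, hi_h, lo_w, hi_w])
print(fused.shape)  # (48,)
```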

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm, an emotion recognition method able to classify four emotions (happy, sad, angry, surprise) using a facial image and a speech signal together. We extract feature vectors from the speech signal using acoustic features, without linguistic features, and classify the emotional pattern with a neural network. From the facial image we select features of the mouth, eyes, and eyebrows, and the extracted feature vectors are reduced to a low-dimensional representation by principal component analysis (PCA). We then propose a method that fuses the recognition results obtained from the facial image and the speech signal.
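The final fusion of the two classifiers' results can be sketched as a weighted sum of their per-emotion scores; the weights and score values below are illustrative, not taken from the paper:

```python
import numpy as np

EMOTIONS = ["Happy", "Sad", "Angry", "Surprise"]

def fuse_decisions(p_face, p_speech, w_face=0.6):
    """Decision-level fusion: weighted sum of the two classifiers'
    per-emotion scores (weights are illustrative)."""
    return w_face * p_face + (1.0 - w_face) * p_speech

p_face = np.array([0.10, 0.15, 0.70, 0.05])    # facial-image classifier output
p_speech = np.array([0.05, 0.20, 0.60, 0.15])  # speech classifier output

fused = fuse_decisions(p_face, p_speech)
print(EMOTIONS[int(np.argmax(fused))])  # Angry
```

Because both inputs are probability-like vectors and the weights sum to one, the fused scores also sum to one.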

Change Detection in Bitemporal Remote Sensing Images by using Feature Fusion and Fuzzy C-Means

  • Wang, Xin;Huang, Jing;Chu, Yanli;Shi, Aiye;Xu, Lizhong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.4 / pp.1714-1729 / 2018
  • Change detection in remote sensing images is a profound challenge in remote sensing image analysis. This paper proposes a novel change detection method for bitemporal remote sensing images based on feature fusion and fuzzy c-means (FCM). Unlike state-of-the-art methods that mainly use a single image feature to construct the difference image, the proposed method fuses multiple image features for the task. The subsequent problem is treated as difference-image classification, for which a modified fuzzy c-means approach is proposed. The method has been validated on real bitemporal remote sensing data sets, and the experimental results confirm its effectiveness.
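The classification step can be sketched with standard (unmodified) fuzzy c-means clustering the difference values into "changed" and "unchanged"; the toy difference image below stands in for the fused real data:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on 1-D samples x; returns memberships and centres."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # random initial memberships
    for _ in range(iters):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)    # weighted cluster centres
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # membership update rule
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centres

# Toy "difference image": small values = unchanged, large = changed pixels.
diff = np.array([0.10, 0.20, 0.15, 0.90, 0.85, 0.95, 0.05])
u, centres = fcm(diff)
labels = np.argmax(u, axis=1)
changed = labels == np.argmax(centres)   # the cluster with the larger centre
print(changed)
```

The paper's modification to FCM is not specified in the abstract, so the plain algorithm is shown.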

Construction of Attractor System by Integrity Evaluation of Polyethylene Piping Materials (폴리에틸렌 배관재의 건전성 평가를 위한 어트랙터 시스템의 구축)

  • Taik, Hwang-Yeong;Kyu, Oh-Seung;Won, Yi
    • Proceedings of the KSME Conference / pp.609-615 / 2001
  • This study proposes a method for analyzing and evaluating time-series ultrasonic signals from the fusion joints of polyethylene piping using attractor analysis. The characteristics of the fusion joint are analyzed quantitatively through features extracted from the time series. Trajectory changes in the attractor indicated a substantial difference in fractal characteristics, and these differences enable the evaluation of the unique characteristics of the fusion joint. For quantitative fractal feature extraction, feature values of 4.291 for debonding and 3.694 for bonding were proposed on the basis of fractal dimensions. For quantitative quadrant feature extraction, 1,306 points (one quadrant) for bonding and 1,209 points (one quadrant) for debonding were proposed. The proposed attractor feature extraction can be used for the integrity evaluation of polyethylene piping material in the bonded or debonded condition.
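The attractor itself is typically reconstructed from a single measured signal by time-delay embedding; a minimal sketch of that reconstruction step, with a sine wave standing in for the ultrasonic signal and an arbitrary embedding dimension and delay:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Reconstruct an attractor trajectory from a 1-D time series
    by time-delay embedding (Takens-style)."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t)                      # stand-in for an ultrasonic signal
traj = delay_embed(signal, dim=3, tau=25)
print(traj.shape)  # (1950, 3)
```

Fractal-dimension and quadrant-point features would then be computed from `traj`; those feature extractors are not described in enough detail in the abstract to reproduce here.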


Bio-Inspired Object Recognition Using Parameterized Metric Learning

  • Li, Xiong;Wang, Bin;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.4 / pp.819-833 / 2013
  • Computing global features based on local features using a bio-inspired framework has shown promising performance. However, for some tough applications with large intra-class variances, a single local feature is inadequate to represent all the attributes of the images. To integrate the complementary abilities of multiple local features, in this paper we have extended the efficacy of the bio-inspired framework, HMAX, to adapt heterogeneous features for global feature extraction. Given multiple global features, we propose an approach, designated as parameterized metric learning, for high dimensional feature fusion. The fusion parameters are solved by maximizing the canonical correlation with respect to the parameters. Experimental results show that our method achieves significant improvements over the benchmark bio-inspired framework, HMAX, and other related methods on the Caltech dataset, under varying numbers of training samples and feature elements.
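The fusion parameters are found by maximizing canonical correlation; a self-contained sketch of the classical first canonical pair for two feature views (the paper's parameterization is more elaborate, and the data below is synthetic with one shared latent factor):

```python
import numpy as np

def first_canonical_pair(X, Y, eps=1e-6):
    """Weights maximising the correlation between X @ a and Y @ b (first CCA pair)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T  # whitened cross-covariance
    U, _, Vt = np.linalg.svd(M)
    a = np.linalg.solve(Lx.T, U[:, 0])
    b = np.linalg.solve(Ly.T, Vt[0])
    return a, b

rng = np.random.default_rng(2)
shared = rng.random((100, 1))                   # latent factor both views observe
X = np.hstack([shared, rng.random((100, 2))])   # global feature from view 1
Y = np.hstack([shared, rng.random((100, 3))])   # global feature from view 2
a, b = first_canonical_pair(X, Y)
r = np.corrcoef(X @ a, Y @ b)[0, 1]
print(round(abs(r), 2))  # ≈ 1.0: the shared factor is recovered
```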

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems / v.18 no.1 / pp.54-61 / 2012
  • In this paper, a pose estimation method for a satellite ground test bed (GTB) using an integrated vision/MEMS IMU (inertial measurement unit) system is presented. The GTB, used to verify a satellite system on the ground, is similar to a mobile robot: it has thrusters and a reaction wheel as actuators and floats on the floor on compressed air. An extended Kalman filter (EKF) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LEDs used as ceiling landmarks. A fusion filter generally uses the positions of feature points in the image as measurements; however, if the IMU bias is not properly estimated by the filter, this approach can cause position error whenever no camera image is available. Therefore, a fusion method is proposed that uses both the positions of the feature points and the camera velocity determined from their optical flow. Experiments verify that the proposed method is more robust to IMU bias than the method that uses only the feature-point positions.
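The measurement-update step of such a filter can be illustrated with a linear Kalman update on a toy position/velocity state, where the landmark-derived position and the optical-flow-derived velocity enter as one stacked measurement; all numbers are placeholders, and the real system is nonlinear (EKF):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update (linear sketch of the EKF step)."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)                 # corrected state
    P = (np.eye(len(x)) - K @ H) @ P        # corrected covariance
    return x, P

# State: [position, velocity]; IMU propagation supplies the prediction.
x = np.array([1.0, 0.2])
P = np.diag([0.5, 0.5])

# Stacked vision measurement: landmark position AND optical-flow velocity,
# so the IMU bias drift stays observable.
H = np.eye(2)                    # both states observed directly (sketch)
R = np.diag([0.05, 0.05])
z = np.array([1.2, 0.1])         # [position from landmarks, velocity from flow]

x, P = kf_update(x, P, z, H, R)
print(x.round(3))  # ≈ [1.182 0.109]
```

Adding the velocity row to `H` is what distinguishes this from a position-only update: the velocity estimate is corrected directly rather than only through the position innovation.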