• Title/Summary/Keyword: ConvNet


Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • Convolutional Neural Networks (ConvNets) are a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not widely used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky achieved a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a great deal of effort. Moreover, even when a large-scale dataset is available, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first uses the ConvNet as a fixed feature extractor; the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, which carries more information about the image; the concatenated representation has 9,192 (4,096 + 4,096 + 1,000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy, since they come from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for multiple ConvNet layer representations. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on Caltech-256, 73.1% compared to 69.2% for the FC8 layer on VOC07, and 52.2% compared to 48.7% for the FC7 layer on SUN397. We also show that our approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
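
A minimal sketch of this kind of multi-layer feature pipeline, assuming torchvision's pre-trained AlexNet as the fixed feature extractor and scikit-learn for PCA and the classifier; the layer indices, the 512 PCA components, and the LinearSVC classifier are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# Pre-trained AlexNet used as a fixed feature extractor (no fine-tuning).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

# Capture activations of the three fully connected layers (FC6, FC7, FC8).
activations = {}
def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

alexnet.classifier[1].register_forward_hook(make_hook("fc6"))  # 4096-d
alexnet.classifier[4].register_forward_hook(make_hook("fc7"))  # 4096-d
alexnet.classifier[6].register_forward_hook(make_hook("fc8"))  # 1000-d

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def multi_layer_features(pil_images):
    """Concatenate FC6/FC7/FC8 activations into one 9,192-d vector per image."""
    feats = []
    with torch.no_grad():
        for img in pil_images:
            alexnet(preprocess(img).unsqueeze(0))
            feats.append(torch.cat([activations["fc6"], activations["fc7"],
                                    activations["fc8"]], dim=1).squeeze(0))
    return torch.stack(feats).numpy()

# Hypothetical target-task data: train_imgs/test_imgs are lists of PIL images.
# X_train = multi_layer_features(train_imgs)
# pca = PCA(n_components=512).fit(X_train)               # keep salient components
# clf = LinearSVC().fit(pca.transform(X_train), y_train)
# accuracy = clf.score(pca.transform(multi_layer_features(test_imgs)), y_test)
```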

Transfer Learning based on Adaboost for Feature Selection from Multiple ConvNet Layer Features (다중 신경망 레이어에서 특징점을 선택하기 위한 전이 학습 기반의 AdaBoost 기법)

  • Alikhanov, Jumabek;Ga, Myeong Hyeon;Ko, Seunghyun;Jo, Geun-Sik
    • Proceedings of the Korea Information Processing Society Conference / 2016.04a / pp.633-635 / 2016
  • Convolutional Networks (ConvNets) are powerful models that learn hierarchies of visual features and can also be used to obtain image representations for transfer learning. The basic pipeline for transfer learning is to first train a ConvNet on a large dataset (source task) and then use the feed-forward unit activations of the trained ConvNet as image representations for smaller datasets (target task). Our key contribution is to demonstrate the superior performance of multiple ConvNet layer features over single ConvNet layer features. Combining multiple ConvNet layer features results in a more complex feature space with some repetitive features, which requires some form of feature selection. We use AdaBoost with single stumps to implicitly select, from the concatenated ConvNet features, only the distinct features that are useful for classification. Experimental results show that using multiple ConvNet layer activation features instead of single ConvNet layer features consistently produces superior performance, and the improvement becomes significant as the distance between the source task and the target task increases.
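
A minimal sketch of stump-based AdaBoost used as an implicit feature selector, assuming scikit-learn ≥ 1.2; the feature matrix X is assumed to be concatenated ConvNet layer activations, and the number of boosting rounds is an arbitrary choice:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def adaboost_feature_selection(X, y, n_rounds=500):
    """Boost single-split stumps; each round picks exactly one feature, so the
    features chosen across rounds act as an implicitly selected subset."""
    stump = DecisionTreeClassifier(max_depth=1)
    booster = AdaBoostClassifier(estimator=stump, n_estimators=n_rounds)
    booster.fit(X, y)
    # Indices of features actually used by the fitted stumps (skip pure leaves).
    selected = sorted({est.tree_.feature[0] for est in booster.estimators_
                       if est.tree_.feature[0] >= 0})
    return booster, np.asarray(selected)

# X: concatenated multi-layer ConvNet features (n_samples, n_features), y: labels.
# booster, selected_idx = adaboost_feature_selection(X_train, y_train)
# X_train_selected = X_train[:, selected_idx]
```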

Radar rainfall prediction based on deep learning considering temporal consistency (시간 연속성을 고려한 딥러닝 기반 레이더 강우예측)

  • Shin, Hongjoon;Yoon, Seongsim;Choi, Jaemin
    • Journal of Korea Water Resources Association / v.54 no.5 / pp.301-309 / 2021
  • In this study, we tried to improve the performance of an existing U-Net-based deep learning rainfall prediction model, which can weaken the temporal ordering of the input sequence. To this end, a ConvLSTM2D U-Net model that considers the temporal consistency of the data was applied, and its accuracy was evaluated against a RainNet model and an extrapolation-based advection model. In addition, we tried to reduce the uncertainty in the training process by training not only a single model but also an ensemble of 10 models. The trained neural network rainfall prediction model was optimized to generate 10-minute-ahead predictions using four consecutive radar fields from the past 30 minutes. It is difficult to identify visually distinct differences among the deep learning rainfall prediction results, but ConvLSTM2D U-Net yields the smallest prediction error and locates rainfall relatively accurately. In particular, the ensemble ConvLSTM2D U-Net showed high CSI, low MAE, and a narrow error range, predicting rainfall more accurately and more stably than the other models. However, the prediction performance at specific points was very low compared to the performance over the entire area, showing that the deep learning rainfall prediction model also has limitations. Through this study, it was confirmed that a ConvLSTM2D U-Net structure that accounts for temporal change can increase prediction accuracy, but convolutional deep neural network models are still limited by spatial smoothing in regions of strong rainfall and in detailed rainfall prediction.
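
A cut-down illustration of how a ConvLSTM2D stack can consume the four consecutive radar fields described above; this is not the paper's U-Net architecture, and the grid size, filter counts, and loss are assumptions:

```python
from tensorflow.keras import layers, models

def build_convlstm_predictor(height=128, width=128):
    """Map four past radar fields to a single 10-minute-ahead field."""
    inputs = layers.Input(shape=(4, height, width, 1))      # (time, H, W, channels)
    x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(inputs)
    x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=False)(x)
    outputs = layers.Conv2D(1, 1, padding="same", activation="relu")(x)
    return models.Model(inputs, outputs)

model = build_convlstm_predictor()
model.compile(optimizer="adam", loss="mae")
# model.fit(past_sequences, future_fields, batch_size=8, epochs=50)
# An ensemble would repeat this training with different initializations or samples
# and combine the member predictions.
```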

Spatio-Temporal Residual Networks for Slide Transition Detection in Lecture Videos

  • Liu, Zhijin;Li, Kai;Shen, Liquan;Ma, Ran;An, Ping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.8 / pp.4026-4040 / 2019
  • In this paper, we present an approach for detecting slide transitions in lecture videos by introducing spatio-temporal residual networks. Given a lecture video that records the digital slides, the speaker, and the audience with multiple cameras, our goal is to find keyframes where the slide content changes. Since temporal dependency among video frames is important for detecting slide changes, 3D Convolutional Networks have been regarded as an efficient approach to learning spatio-temporal features in videos. However, a 3D ConvNet requires long training times and a large amount of memory. Hence, we utilize residual connections (ResNet) to ease the training of the network and make it easier to optimize. Consequently, we present a novel ConvNet architecture based on 3D ConvNets and ResNet for slide transition detection in lecture videos. Experimental results show that the proposed architecture achieves better accuracy than other slide progression detection approaches.
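
A minimal sketch of a spatio-temporal residual block combining 3D convolutions with an identity skip connection, in the spirit of the architecture described above; channel counts and clip shape are assumptions:

```python
import torch
import torch.nn as nn

class SpatioTemporalResBlock(nn.Module):
    """Two 3D convolutions with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                 # x: (batch, channels, frames, H, W)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)         # the residual path eases optimization

# clip = torch.randn(1, 16, 8, 112, 112)   # an 8-frame clip with 16 channels
# features = SpatioTemporalResBlock(16)(clip)
```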

Fast and All-Purpose Area-Based Imagery Registration Using ConvNets (ConvNet을 활용한 영역기반 신속/범용 영상정합 기술)

  • Baek, Seung-Cheol
    • Journal of KIISE / v.43 no.9 / pp.1034-1042 / 2016
  • Together with machine-learning frameworks, area-based imagery registration techniques can be applied easily to diverse types of image pairs without predefined features and feature descriptors. However, feature detectors are often used to quickly identify candidate image patch pairs, limiting the applicability of these registration techniques. In this paper, we propose a ConvNet (Convolutional Network) "Dart" that provides not only the matching metric between patches but also information about their distance, both of which help reduce the search space of corresponding patch pairs. In addition, we propose a ConvNet "Fad" that identifies the patches that are difficult for Dart, in order to improve registration accuracy. These two networks were implemented using deep learning with the help of a number of training instances generated from a few registered image pairs, and were successfully applied to a simple image registration problem, suggesting that this line of research is promising.
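
A hedged sketch of a two-headed patch network in the spirit of "Dart": the abstract only states that the network outputs a matching metric and distance information, so the pair encoding, layer sizes, and head definitions below are purely illustrative assumptions:

```python
import torch
import torch.nn as nn

class DartLikeNet(nn.Module):
    """Two-headed patch network: a match score plus a coarse distance estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.match_head = nn.Linear(32, 1)      # matching metric (logit)
        self.distance_head = nn.Linear(32, 1)   # estimated distance between patches

    def forward(self, patch_a, patch_b):        # each: (batch, 1, H, W) grayscale
        z = self.features(torch.cat([patch_a, patch_b], dim=1))
        return self.match_head(z), self.distance_head(z)

# a, b = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
# match_logit, distance = DartLikeNet()(a, b)
```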

Aerial Scene Labeling Based on Convolutional Neural Networks (Convolutional Neural Networks기반 항공영상 영역분할 및 분류)

  • Na, Jong-Pil;Hwang, Seung-Jun;Park, Seung-Je;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.19 no.6 / pp.484-491 / 2015
  • The supply of aerial imagery has greatly increased with the growth of digital optical imaging technology and the development of UAVs. Aerial images have been used for the extraction of ground properties, classification, change detection, image fusion, and mapping. In particular, deep learning algorithms have opened a new paradigm in image analysis, overcoming limitations of traditional pattern recognition. This paper shows that deep learning (ConvNet)-based segmentation and classification of aerial scenes can be applied across a wide range of fields. We build a 4-class image database consisting of Road, Building, Yard, and Forest, with 3,000 images in total. Each class has a distinct pattern, so the resulting feature vector maps come out differently. Our system consists of feature extraction, classification, and training: feature extraction is built from two ConvNet layers, and classification is then performed with a multilayer perceptron and logistic regression.
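
A minimal sketch of a two-layer ConvNet feature extractor followed by an MLP classifier over the four classes described above; the patch size and channel counts are assumptions:

```python
import torch.nn as nn

class AerialSceneNet(nn.Module):
    """Two ConvNet feature-extraction layers followed by an MLP head whose
    softmax output acts as multinomial logistic regression over the four
    classes (Road, Building, Yard, Forest)."""
    def __init__(self, num_classes=4, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat_dim = 32 * (patch_size // 4) * (patch_size // 4)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),        # softmax applied by the loss
        )

    def forward(self, x):                       # x: (batch, 3, patch, patch)
        return self.classifier(self.features(x))

# Train with nn.CrossEntropyLoss() on labeled patches from the 4-class database.
```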

RoutingConvNet: A Light-weight Speech Emotion Recognition Model Based on Bidirectional MFCC (RoutingConvNet: 양방향 MFCC 기반 경량 음성감정인식 모델)

  • Hyun Taek Lim;Soo Hyung Kim;Guee Sang Lee;Hyung Jeong Yang
    • Smart Media Journal / v.12 no.5 / pp.28-35 / 2023
  • In this study, we propose RoutingConvNet, a new light-weight model with fewer parameters, to improve the applicability and practicality of speech emotion recognition. To reduce the number of learnable parameters, the proposed model connects bidirectional MFCCs on a channel-by-channel basis to learn long-term emotion dependence and extract contextual features. A light-weight deep CNN is constructed for low-level feature extraction, and self-attention is used to obtain information about channel and spatial signals in the speech signal. In addition, we apply dynamic routing to improve accuracy and build a model that is robust to feature variations. The proposed model shows both parameter reduction and accuracy improvement in experiments on speech emotion datasets (EMO-DB, RAVDESS, and IEMOCAP), achieving 87.86%, 83.44%, and 66.06% accuracy, respectively, with about 156,000 parameters. We also propose a metric that quantifies the trade-off between the number of parameters and accuracy for evaluating light-weight models.
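
A hedged sketch of preparing a "bidirectional MFCC" input by stacking an MFCC sequence with its time-reversed copy, using librosa; the exact construction used by RoutingConvNet may differ, and the sample rate and coefficient count are assumptions:

```python
import numpy as np
import librosa

def bidirectional_mfcc(wav_path, n_mfcc=40, sr=16000):
    """Stack an MFCC sequence and its time-reversed copy as two channels."""
    signal, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    backward = mfcc[:, ::-1]                                      # reversed in time
    return np.stack([mfcc, backward], axis=-1)                    # (n_mfcc, frames, 2)

# features = bidirectional_mfcc("sample.wav")
# The two-channel features would then feed the light-weight CNN with
# self-attention and dynamic routing described above.
```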

ConvXGB: A new deep learning model for classification problems based on CNN and XGBoost

  • Thongsuwan, Setthanun;Jaiyen, Saichon;Padcharoen, Anantachai;Agarwal, Praveen
    • Nuclear Engineering and Technology / v.53 no.2 / pp.522-531 / 2021
  • We describe a new deep learning model, Convolutional eXtreme Gradient Boosting (ConvXGB), for classification problems based on convolutional neural nets and Chen et al.'s XGBoost. In addition to image data, ConvXGB also supports general classification problems through a data preprocessing module. ConvXGB consists of several stacked convolutional layers that learn features of the input automatically, followed by XGBoost in the last layer to predict the class labels. The ConvXGB model is simplified by reducing the number of parameters under appropriate conditions, since it is not necessary to re-adjust the weight values in a backpropagation cycle. Experiments on several data sets from the UCL Repository, including image and general data sets, showed that our model handled the classification problems, for all the tested data sets, slightly better than CNN and XGBoost alone and was sometimes significantly better.
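
A minimal sketch of a ConvXGB-style data flow in which stacked convolutional layers produce features and XGBoost takes the place of the final fully connected layer; the layer sizes and booster settings are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn
from xgboost import XGBClassifier

# Stacked convolutional layers encode the input; XGBoost predicts class labels
# from the flattened feature maps instead of a final fully connected layer.
conv_features = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
)

def extract(images):                     # images: (batch, 1, H, W) tensor
    with torch.no_grad():
        return conv_features(images).numpy()

# X_train_t / X_test_t: image tensors; y_train / y_test: integer class labels.
# clf = XGBClassifier(n_estimators=200, max_depth=6)
# clf.fit(extract(X_train_t), y_train)
# predictions = clf.predict(extract(X_test_t))
```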

Deep learning based Person Re-identification with RGB-D sensors

  • Kim, Min;Park, Dong-Hyun
    • Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.35-42 / 2021
  • In this paper, we propose a deep learning-based person re-identification method using a three-dimensional RGB-Depth Xtion2 camera that considers joint coordinates and dynamic features (velocity, acceleration). The main idea of the proposed methodology is to easily extract gait data such as joint coordinates and dynamic features with an RGB-D camera and to automatically identify gait patterns through a self-designed one-dimensional convolutional neural network classifier (1D-ConvNet). Accuracy was measured using the F1 score, and the influence of the dynamic features was measured by comparing the accuracy with a classifier model (JC) that did not consider them. As a result, the proposed classifier model that considers the dynamic characteristics (JCSpeed) showed an F1 score about 8% higher than JC.
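
A minimal sketch of deriving velocity and acceleration from joint coordinates and classifying the resulting sequence with a 1D ConvNet; the feature layout, window handling, and layer sizes are assumptions rather than the paper's design:

```python
import numpy as np
import torch
import torch.nn as nn

def gait_features(joints):
    """joints: (frames, n_joints * 3) array of joint coordinates per frame.
    Append first and second temporal differences (velocity, acceleration)."""
    velocity = np.diff(joints, axis=0, prepend=joints[:1])
    acceleration = np.diff(velocity, axis=0, prepend=velocity[:1])
    return np.concatenate([joints, velocity, acceleration], axis=1)

class GaitConvNet1D(nn.Module):
    """1D ConvNet over the per-frame feature channels."""
    def __init__(self, in_channels, num_people):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, num_people),
        )

    def forward(self, x):                # x: (batch, channels, frames)
        return self.net(x)

# seq = gait_features(joint_sequence)                       # (frames, channels)
# logits = GaitConvNet1D(seq.shape[1], num_people=10)(
#     torch.from_numpy(seq.T[None].astype(np.float32)))     # -> (1, num_people)
```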

Hair and Fur Synthesizer via ConvNet Using Strand Geometry Images

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.27 no.5 / pp.85-92 / 2022
  • In this paper, we propose a technique that can render low-resolution hair and fur simulations at high resolution without noise, using a ConvNet and geometry images of line-shaped strands. Pairs of low-resolution and high-resolution data are obtained through physics-based simulation, and these pairs form the training set. The training data are prepared by converting the positions of the hair strands into geometry images. The hair and fur network proposed in this paper serves as an image synthesizer that upscales a low-resolution image to a high-resolution image. When the high-resolution geometry image obtained at test time is converted back to high-resolution hair, it is possible to express the elastic movement of hair, which is difficult to capture with a single mapping function. In terms of performance, the synthesis is faster than traditional physics-based simulation and can be run easily without knowledge of complex numerical analysis.
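
A hedged sketch of packing strand vertex positions into a geometry image and upscaling it with a small ConvNet; the packing scheme and the toy upscaler below are illustrative assumptions, not the paper's synthesizer:

```python
import numpy as np
import torch
import torch.nn as nn

def strands_to_geometry_image(strands):
    """strands: (n_strands, n_vertices, 3) positions. Pack as a 3-channel image
    with one row per strand and one column per vertex."""
    return np.transpose(strands, (2, 0, 1)).astype(np.float32)   # (3, strands, verts)

class GeometryUpscaler(nn.Module):
    """Toy synthesizer that upsamples a low-resolution geometry image 2x."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):                # x: (batch, 3, n_strands, n_vertices)
        return self.net(x)

# low = torch.from_numpy(strands_to_geometry_image(low_res_strands)).unsqueeze(0)
# high = GeometryUpscaler()(low)   # high-res geometry image, converted back to strands
```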