• Title, Summary, Keyword: Deep Neural Network (DNN)

Search Results: 100

Deep neural networks for speaker verification with short speech utterances (짧은 음성을 대상으로 하는 화자 확인을 위한 심층 신경망)

  • Yang, IL-Ho;Heo, Hee-Soo;Yoon, Sung-Hyun;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.35 no.6
    • /
    • pp.501-509
    • /
    • 2016
  • We propose a method to improve the robustness of speaker verification on short test utterances. The accuracy of state-of-the-art i-vector/probabilistic linear discriminant analysis systems can degrade when test utterance durations are short. The proposed method compensates for utterance variations of short test feature vectors using deep neural networks. We design three different DNN (Deep Neural Network) structures, each trained with a different target output vector: each DNN is trained to minimize the discrepancy between the feed-forwarded output of a given short-utterance feature and its original long-utterance feature. We use the short 2-10 s condition of the NIST (National Institute of Standards and Technology, U.S.) 2008 SRE (Speaker Recognition Evaluation) corpus to evaluate the method. The experimental results show that the proposed method reduces the minimum detection cost relative to the baseline system.
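The training objective described above, mapping a short-utterance feature to its long-utterance counterpart, can be sketched as follows. The 400-dimensional features and the one-hidden-layer network are illustrative assumptions, not details from the paper, which trains three DNN variants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions and the one-hidden-layer shape are illustrative assumptions.
DIM, HIDDEN = 400, 256
W1 = rng.normal(0.0, 0.01, (DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.01, (HIDDEN, DIM)); b2 = np.zeros(DIM)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2                 # compensated ("long-like") feature

def mse(pred, target):
    # Discrepancy between the network output for a short-utterance feature
    # and the matching long-utterance feature; training minimizes this.
    return float(np.mean((pred - target) ** 2))

short_feat = rng.normal(size=DIM)   # feature from a short test utterance
long_feat = rng.normal(size=DIM)    # target from the full-length utterance
loss = mse(forward(short_feat), long_feat)
```

At test time only `forward` is applied, so the compensated feature can be fed to the unchanged i-vector/PLDA back end.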

Validation Data Augmentation for Improving the Grading Accuracy of Diabetic Macular Edema using Deep Learning (딥러닝을 이용한 당뇨성황반부종 등급 분류의 정확도 개선을 위한 검증 데이터 증강 기법)

  • Lee, Tae Soo
    • Journal of Biomedical Engineering Research
    • /
    • v.40 no.2
    • /
    • pp.48-54
    • /
    • 2019
  • This paper proposes a method of validation data augmentation for improving the grading accuracy of diabetic macular edema (DME) using deep learning. Data augmentation is normally applied to secure diversity of data by transforming one image into several images through random translation, rotation, scaling, and reflection when preparing the input data of a deep neural network (DNN). In this paper, we apply this technique in the validation process of the trained DNN and improve the grading accuracy by combining the classification results of the augmented images. To verify the effectiveness, 1,200 retinal images of the Messidor dataset were divided into training and validation data at a 7:3 ratio. By applying random augmentation to 359 validation images, an accuracy improvement of 1.61 ± 0.55 % was achieved with six-fold augmentation (N=6). This simple method showed that accuracy can be improved over the N range from 2 to 6, with a correlation coefficient of 0.5667. It is therefore expected to help improve the diagnostic accuracy of DME with the grading information provided by the proposed DNN.
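The combine-over-augmented-copies step can be sketched as test-time augmentation with prediction averaging. The transform set is simplified and the classifier is a placeholder; only the averaging over N=6 copies follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(img, rng):
    # One random transform: horizontal flip and/or 90-degree rotation, a
    # simplified stand-in for random translation/rotation/scaling/reflection.
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, k=int(rng.integers(0, 4)))

def classify(img):
    # Placeholder grader returning pseudo-probabilities over 4 DME grades;
    # in the paper this is the trained deep network.
    z = np.array([img.mean(), img.std(), img.max(), img.min()])
    e = np.exp(z - z.max())
    return e / e.sum()

def grade_with_tta(img, n=6):
    # Average classifier outputs over n augmented copies (N=6 gave the best
    # improvement in the paper) and take the most probable grade.
    probs = np.mean([classify(augment(img, rng)) for _ in range(n)], axis=0)
    return int(np.argmax(probs))

img = rng.random((64, 64))   # stand-in for a retinal image
grade = grade_with_tta(img)
```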

Development of PM10 Forecasting Model for Seoul Based on DNN Using East Asian Wide Area Data (동아시아 광역 데이터를 활용한 DNN 기반의 서울지역 PM10 예보모델의 개발)

  • Yu, SukHyun
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.11
    • /
    • pp.1300-1312
    • /
    • 2019
  • In this paper, a PM10 forecast model using a DNN (Deep Neural Network) is developed for the Seoul region. The previous Julian forecast model was developed using weather and air quality data of the Seoul region only. That model gives excellent results for accuracy and false alarm rates, but poor results for POD (Probability of Detection). To solve this problem, a WA (Wide Area) forecasting model that additionally uses Chinese data is developed. These data are highly correlated with the emergence of high concentrations of PM10 in Korea. As a result, the WA model shows better accuracy and improves POD by 3 % (D+0), 21 % (D+1), and 36 % (D+2) for each forecast period compared with the Julian model.

Audio Event Classification Using Deep Neural Networks (깊은 신경망을 이용한 오디오 이벤트 분류)

  • Lim, Minkyu;Lee, Donghyun;Kim, Kwang-Ho;Kim, Ji-Hwan
    • Phonetics and Speech Sciences
    • /
    • v.7 no.4
    • /
    • pp.27-33
    • /
    • 2015
  • This paper proposes an audio event classification method using Deep Neural Networks (DNN). The proposed method applies a Feed-Forward Neural Network (FFNN) to generate, for each frame, event probabilities of ten audio events (dog barks, engine idling, and so on). For each frame, the mel-scale filter bank features of its consecutive frames are used as the input vector of the FFNN. These event probabilities are accumulated over the frames, and the classification result is the event with the highest accumulated probability. On the same dataset, the best accuracy of previous studies was about 70 %, obtained with a Support Vector Machine (SVM). The proposed method achieves 79.23 % accuracy on the UrbanSound8K dataset when 80 mel-scale filter bank features from each of 7 consecutive frames (560 in total) are used as the input vector of an FFNN with two hidden layers and 2,000 neurons per hidden layer. In this configuration, the rectified linear unit is used as the activation function.
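The accumulate-and-argmax decision rule can be sketched as follows. The FFNN is replaced by a random placeholder, while the 80 × 7 = 560-dimensional sliding input windows match the configuration in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

N_EVENTS, N_FRAMES, CONTEXT, N_MELS = 10, 100, 7, 80

def frame_probs(window):
    # Placeholder for the FFNN softmax output over 10 audio events; the real
    # model has two hidden layers of 2,000 neurons with ReLU activations.
    z = rng.normal(size=N_EVENTS)
    e = np.exp(z - z.max())
    return e / e.sum()

# 80 mel filter-bank features per frame; 7 consecutive frames -> 560-dim input.
features = rng.random((N_FRAMES, N_MELS))
accumulated = np.zeros(N_EVENTS)
for t in range(N_FRAMES - CONTEXT + 1):
    window = features[t:t + CONTEXT].reshape(-1)   # 560-dim input vector
    accumulated += frame_probs(window)             # accumulate event probabilities

predicted_event = int(np.argmax(accumulated))      # highest accumulated probability
```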

A Novel SOC Estimation Method for Multiple Number of Lithium Batteries Using Deep Neural Network (딥 뉴럴 네트워크를 이용한 새로운 리튬이온 배터리의 SOC 추정법)

  • Khan, Asad;Ko, Young-hwi;Choi, Woojin
    • Proceedings of the KIPE Conference
    • /
    • /
    • pp.70-72
    • /
    • 2019
  • For the safe and reliable operation of lithium-ion batteries in Electric Vehicles (EVs) or Energy Storage Systems (ESSs), it is essential to have accurate information about the battery, such as its State of Charge (SOC). Many techniques for estimating battery SOC have been developed, such as the Kalman filter. However, when applied to a large number of batteries, it is difficult to maintain estimation accuracy across all cells because of cell-to-cell differences in parameter values. Moreover, these parameter differences may grow as operating time accumulates due to aging. In this paper, a novel Deep Neural Network (DNN) based SOC estimation method for multi-cell applications is proposed. In the proposed method, a DNN learns the non-linear relationship between the voltage and current of a lithium-ion battery at different SOCs and different temperatures. For training, voltage and current data from charge and discharge cycles obtained at different temperatures are used. After comprehensive training with data from one cell, the resulting estimation algorithm is applied to the other cells. The experimental results show that the Mean Absolute Error (MAE) of the estimation is 0.56 % at 25 °C and 3.16 % at 60 °C with the proposed SOC estimation algorithm.
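The estimation idea, a network mapping measured voltage, current, and temperature to SOC, can be sketched as below. The architecture, weights, and input values are illustrative assumptions; the abstract gives neither the network shape nor its trained parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny feed-forward regressor mapping (voltage, current, temperature) -> SOC.
W1 = rng.normal(0.0, 0.1, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

def estimate_soc(voltage, current, temperature):
    x = np.array([voltage, current, temperature])
    h = np.tanh(x @ W1 + b1)                     # non-linear hidden layer
    out = (h @ W2 + b2)[0]
    return float(1.0 / (1.0 + np.exp(-out)))     # sigmoid keeps SOC in [0, 1]

def mae(pred, true):
    # Mean Absolute Error, the metric reported in the paper.
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(true))))

soc = estimate_soc(3.7, -1.5, 25.0)  # e.g. 3.7 V, 1.5 A discharge, 25 degrees C
```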


Performance assessments of feature vectors and classification algorithms for amphibian sound classification (양서류 울음 소리 식별을 위한 특징 벡터 및 인식 알고리즘 성능 분석)

  • Park, Sangwook;Ko, Kyungdeuk;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.36 no.6
    • /
    • pp.401-406
    • /
    • 2017
  • This paper presents a performance assessment of several key algorithms for amphibian species sound classification. First, 9 target species, including endangered species, are defined and a database of their sounds is built. For the assessment, three feature vectors, MFCC (Mel Frequency Cepstral Coefficient), RCGCC (Robust Compressive Gammachirp filterbank Cepstral Coefficient), and SPCC (Subspace Projection Cepstral Coefficient), and three classifiers, GMM (Gaussian Mixture Model), SVM (Support Vector Machine), and DBN-DNN (Deep Belief Network - Deep Neural Network), are considered. In addition, an i-vector based classification system, which is widely used for speaker recognition, is assessed for this task. Experimental results indicate that SPCC-SVM achieved the best performance with 98.81 %, while the other methods also attained good performance, above 90 %.
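As a rough illustration of the first of the three feature types, a minimal MFCC-style computation for a single frame can be sketched as follows. Real toolkits add pre-emphasis, windowing over many frames, and liftering; all parameter values here are common defaults, not the paper's settings:

```python
import numpy as np

def mfcc_like(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13):
    # Power spectrum of one frame.
    spec = np.abs(np.fft.rfft(signal[:n_fft], n_fft)) ** 2
    # Triangular mel filterbank between 0 Hz and sr/2.
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = imel(np.linspace(0, mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fbank[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[i, k] = (r - k) / max(r - c, 1)
    logmel = np.log(fbank @ spec + 1e-10)
    # DCT-II decorrelates the log-mel energies into cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return dct @ logmel

sig = np.sin(2 * np.pi * 440 * np.arange(512) / 16000)  # 440 Hz test tone
feats = mfcc_like(sig)                                  # 13 cepstral coefficients
```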

Fault Diagnosis of Induction Motor using Linear Predictive Coding and Deep Neural Network (LPC와 DNN을 결합한 유도전동기 고장진단)

  • Ryu, Jin Won;Park, Min Su;Kim, Nam Kyu;Chong, Ui Pil;Lee, Jung Chul
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.11
    • /
    • pp.1811-1819
    • /
    • 2017
  • As the induction motor is a core piece of production equipment in industry, it is necessary to build a fault prediction and diagnosis system through continuous monitoring. Much research has been conducted on motor fault diagnosis algorithms based on signal processing techniques using the Fourier transform, neural networks, and fuzzy inference. In this paper, we propose a fault diagnosis method for induction motors that combines LPC and a DNN. To evaluate its performance, fault diagnosis was carried out using vibration data of an induction motor in steady state and under various simulated fault conditions. Experimental results show that the learning times of the proposed method and the conventional spectrum+DNN method are 139 seconds and 974 seconds, respectively, on the experimental PC; the proposed method thus reduces learning time to about 1/7 of the conventional method's. The success rate of the proposed method is 98.08 %, comparable to the 99.54 % of the conventional method.
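The LPC front end can be sketched with the standard autocorrelation (Levinson-Durbin) method. The order of 12 and the synthetic frame are illustrative assumptions; in the paper the resulting coefficients feed the DNN classifier:

```python
import numpy as np

def lpc(signal, order=12):
    # LPC coefficients via the autocorrelation method (Levinson-Durbin).
    x = np.asarray(signal, dtype=float)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)])  # autocorrelation
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k = -acc / err                      # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)                # remaining prediction error energy
    return a, err

rng = np.random.default_rng(4)
frame = rng.standard_normal(400)            # synthetic stand-in for one vibration frame
coeffs, residual = lpc(frame)               # 12 LPC coefficients -> DNN input
```

Because a dozen LPC coefficients replace a full spectrum as the DNN input, the network and its training become much smaller, which is consistent with the reported drop in learning time.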

Deep Neural Network Model For Short-term Electric Peak Load Forecasting (단기 전력 부하 첨두치 예측을 위한 심층 신경회로망 모델)

  • Hwang, Heesoo
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.5
    • /
    • pp.1-6
    • /
    • 2018
  • In a smart grid, accurate load forecasting is crucial for resource planning, as it improves operating efficiency and reduces the dynamic uncertainties of energy systems. Research in this area has included shallow neural networks and other machine learning techniques. Recent research in computer vision and speech recognition has shown great promise for Deep Neural Networks (DNNs). To improve the performance of daily electric peak load forecasting, the paper presents a new deep neural network model whose architecture consists of two multi-layer neural networks connected in series. The proposed network is progressively pre-trained layer by layer before the whole network is trained. For both one-day and two-day-ahead peak load forecasting, the proposed models are trained and tested using four years of hourly load data obtained from the Korea Power Exchange (KPX).
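The serial composition of two multi-layer networks can be sketched as below. The layer sizes and the 48-hour input window are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def mlp(sizes, rng):
    # One multi-layer network as a list of (weights, bias) pairs.
    return [(rng.normal(0.0, 0.1, (i, o)), np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

def forward(net, x):
    for W, b in net:
        x = np.maximum(0.0, x @ W + b)   # ReLU; also keeps the load non-negative
    return x

# Two multi-layer networks connected in series; in the paper each part is
# pre-learned layer by layer before the whole network is trained end to end.
net1 = mlp([48, 64, 32], rng)
net2 = mlp([32, 16, 1], rng)

x = rng.random(48)                       # e.g. 48 recent hourly load values
peak_forecast = float(forward(net2, forward(net1, x))[0])
```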

Hybrid CTC-Attention Based End-to-End Speech Recognition Using Korean Grapheme Unit (한국어 자소 기반 Hybrid CTC-Attention End-to-End 음성 인식)

  • Park, Hosung;Lee, Donghyun;Lim, Minkyu;Kang, Yoseb;Oh, Junseok;Seo, Soonshin;Rim, Daniel;Kim, Ji-Hwan
    • Annual Conference on Human and Language Technology
    • /
    • /
    • pp.453-458
    • /
    • 2018
  • This paper proposes end-to-end speech recognition based on a hybrid CTC-Attention model that uses Korean graphemes as the recognition unit. End-to-end speech recognition replaces the conventional multi-module pipeline (a DNN-HMM based acoustic model, an N-gram based language model, and a WFST decoding network) with a single DNN. In this paper, a grapheme-level output structure is used to estimate the output of the end-to-end model. When the network is built on graphemes, the number of output parameters to estimate drops from 11,172 to 49, enabling more efficient training. To implement this, the end-to-end model is constructed by combining CTC and an attention network, the DNN structures commonly used for end-to-end training. Experiments show a syllable error rate of 10.05 %.
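The grapheme-unit idea rests on the fact that the 11,172 precomposed Hangul syllables decompose by Unicode arithmetic into a small jamo inventory (19 initial, 21 medial, 28 final). A sketch of that standard decomposition, plus the usual hybrid CTC-Attention objective (the 0.5 weight is an assumption, not a value from the paper):

```python
# Hangul syllable -> grapheme (jamo) decomposition via Unicode arithmetic.
JUNG, JONG = 21, 28          # 21 medial and 28 final jamo (incl. "no final")

def to_jamo(syllable):
    code = ord(syllable) - 0xAC00            # Hangul syllables start at U+AC00
    cho, rest = divmod(code, JUNG * JONG)
    jung, jong = divmod(rest, JONG)
    return cho, jung, jong                   # (initial, medial, final) indices

# Hybrid CTC-Attention training combines both losses in one objective.
def hybrid_loss(ctc_loss, att_loss, lam=0.5):
    return lam * ctc_loss + (1 - lam) * att_loss
```

For example, `to_jamo('한')` yields the indices of ㅎ, ㅏ, and ㄴ, so a grapheme output layer only needs a few dozen units instead of one per syllable.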


Parameter-Efficient Neural Networks Using Template Reuse (템플릿 재사용을 통한 패러미터 효율적 신경망 네트워크)

  • Kim, Daeyeon;Kang, Woochul
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.5
    • /
    • pp.169-176
    • /
    • 2020
  • Recently, deep neural networks (DNNs) have brought revolutions to many mobile and embedded devices by providing human-level machine intelligence for various applications. However, the high inference accuracy of such DNNs comes at a high computational cost, and hence there have been significant efforts to reduce the computational overhead of DNNs, either by compressing off-the-shelf models or by designing new small-footprint DNN architectures tailored to resource-constrained devices. One notable recent paradigm in designing small-footprint DNN models is sharing parameters across several layers. However, in previous approaches, parameter-sharing techniques have been applied to large deep networks, such as ResNet, that are known to have high redundancy. In this paper, we propose a parameter-sharing method for already parameter-efficient small networks such as ShuffleNetV2. In our approach, small templates are combined with small layer-specific parameters to generate weights. Our experimental results on the ImageNet and CIFAR100 datasets show that our approach can reduce the parameter size of ShuffleNetV2 by 15%-35% while achieving smaller drops in accuracy than previous parameter-sharing and pruning approaches. We further show that the proposed approach is efficient in terms of latency and energy consumption on modern embedded devices.
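The template-plus-small-parameters idea can be sketched as below. The channel-wise scaling used to combine the shared template with per-layer parameters is an assumption for illustration; the abstract does not give the exact combination scheme:

```python
import numpy as np

rng = np.random.default_rng(7)

# One template shared across four layers; each layer stores only a small
# channel-wise scale vector as its layer-specific parameters.
template = rng.normal(0.0, 0.1, (64, 64))
layer_scales = [rng.normal(1.0, 0.1, 64) for _ in range(4)]

def layer_weights(layer_idx):
    # Generate a layer's weight matrix from the shared template plus
    # its small layer-specific parameters.
    return template * layer_scales[layer_idx][None, :]

shared = template.size                          # stored once for all layers
per_layer = sum(s.size for s in layer_scales)   # four small scale vectors
naive = 4 * template.size                       # four independent weight matrices
savings = 1 - (shared + per_layer) / naive      # fraction of parameters saved
```

Here the four layers together store about a quarter of the parameters that four independent weight matrices would need, which illustrates why template reuse shrinks even already-compact networks.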