• Title/Summary/Keyword: Inception-V3


Evaluation of Classification Performance of Inception V3 Algorithm for Chest X-ray Images of Patients with Cardiomegaly (심장비대증 환자의 흉부 X선 영상에 대한 Inception V3 알고리즘의 분류 성능평가)

  • Jeong, Woo-Yeon;Kim, Jung-Hun;Park, Ji-Eun;Kim, Min-Jeong;Lee, Jong-Min
    • Journal of the Korean Society of Radiology / v.15 no.4 / pp.455-461 / 2021
  • Cardiomegaly is one of the most common findings on chest X-rays, but if it is not detected early it can lead to serious complications. Against this background, much recent research has applied deep learning algorithms to medical image analysis. This paper evaluates whether the Inception V3 deep learning model is useful for classifying Cardiomegaly from chest X-ray images. A total of 1,026 chest X-ray images of patients diagnosed as normal or with Cardiomegaly at Kyungpook National University Hospital were used. In the experiment, the classification accuracy and loss of the Inception V3 model for the presence or absence of Cardiomegaly were 96.0% and 0.22%, respectively. The results indicate that Inception V3 is an excellent model for feature extraction and classification of chest image data. It is therefore considered a useful model for classifying chest diseases, and if similarly good results can be obtained on a wider variety of medical image data, it should be of great help in supporting physicians' diagnoses.
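
The abstract above does not include implementation details; a minimal Keras transfer-learning sketch of the kind of setup it describes (an InceptionV3 backbone with a binary normal/Cardiomegaly head) might look like the following. Directory names, image size, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: InceptionV3 transfer learning for binary chest X-ray classification.
# Paths, batch size, and epochs are placeholders, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)   # InceptionV3's native input resolution
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze ImageNet features; only the new head is trained

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # InceptionV3 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),       # normal vs. Cardiomegaly
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```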

A study on evaluation method of NIDS datasets in closed military network (군 폐쇄망 환경에서의 모의 네트워크 데이터 셋 평가 방법 연구)

  • Park, Yong-bin;Shin, Sung-uk;Lee, In-sup
    • Journal of Internet Computing and Services / v.21 no.2 / pp.121-130 / 2020
  • This paper suggests evaluating military closed-network data as images generated by a Generative Adversarial Network (GAN), applying image evaluation methods such as the InceptionV3-based Inception Score (IS) and Frechet Inception Distance (FID). We employed well-known image classification models in place of InceptionV3, added layers to those models, and converted the network data to images in diverse ways. Experimental results show that a DenseNet121 model with one added Dense layer achieves the best performance on data converted with the arctangent algorithm at an 8 x 8 image size.
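
For reference, the Frechet Inception Distance mentioned above is conventionally computed from InceptionV3 pooled features; a generic sketch (not the authors' code) is shown below. The Inception Score is computed similarly, but from the full InceptionV3 softmax outputs.

```python
# Illustrative FID computation from InceptionV3 average-pooled features.
import numpy as np
import tensorflow as tf
from scipy import linalg

inception = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(299, 299, 3))

def inception_features(images):
    """images: float array in [0, 255], shape (N, 299, 299, 3)."""
    x = tf.keras.applications.inception_v3.preprocess_input(images)
    return inception.predict(x, verbose=0)

def fid(real_images, fake_images):
    """Frechet Inception Distance between a real and a generated image set."""
    f_real = inception_features(real_images)
    f_fake = inception_features(fake_images)
    mu_r, mu_f = f_real.mean(axis=0), f_fake.mean(axis=0)
    cov_r = np.cov(f_real, rowvar=False)
    cov_f = np.cov(f_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical imaginary noise
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean))
```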

A Study on the Improvement of Accuracy of Cardiomegaly Classification Based on InceptionV3 (InceptionV3 기반의 심장비대증 분류 정확도 향상 연구)

  • Jeong, Woo Yeon;Kim, Jung Hun
    • Journal of Biomedical Engineering Research / v.43 no.1 / pp.45-51 / 2022
  • The purpose of this study is to improve classification accuracy over the existing InceptionV3 model by proposing a new model with a modified fully connected structure on top of InceptionV3, which has shown excellent performance in medical image classification. The training data were 1,026 chest X-ray images of patients diagnosed as normal or with Cardiomegaly at Kyungpook National University Hospital, trained after data augmentation. In the experiment, the training classification accuracy and loss of the InceptionV3 model were 99.57% and 1.42, while those of the proposed model were 99.81% and 0.92. In the per-class evaluation of InceptionV3, the precision for normal hearts was 78%, the recall 100%, and the F1 score 88; the precision for Cardiomegaly was 100%, the recall 78%, and the F1 score 88. For the proposed model, the precision for normal hearts was 100%, the recall 92%, and the F1 score 96, while the precision for Cardiomegaly was 95%, the recall 100%, and the F1 score 97. If chest X-ray images of normal hearts and Cardiomegaly can be classified with the proposed model, better classification will be possible and the reliability of classification performance will gradually increase.
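
The exact modified fully connected structure is not given in the abstract; a hypothetical Keras sketch of replacing InceptionV3's top with a custom head plus simple augmentation, with all layer sizes and rates assumed, could look like this:

```python
# Hypothetical custom fully connected head on InceptionV3; hidden sizes, dropout rate,
# and augmentation settings are assumptions, not the authors' exact architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False

inputs = layers.Input(shape=(299, 299, 3))
x = layers.RandomFlip("horizontal")(inputs)         # assumed augmentation
x = layers.RandomRotation(0.05)(x)
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(x)   # scale to [-1, 1] for InceptionV3
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(512, activation="relu")(x)         # assumed hidden layers
x = layers.Dropout(0.5)(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(2, activation="softmax")(x)  # normal vs. Cardiomegaly

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```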

A Hierarchical Deep Convolutional Neural Network for Crop Species and Diseases Classification (Deep Convolutional Neural Network(DCNN)을 이용한 계층적 농작물의 종류와 질병 분류 기법)

  • Borin, Min;Rah, HyungChul;Yoo, Kwan-Hee
    • Journal of Korea Multimedia Society / v.25 no.11 / pp.1653-1671 / 2022
  • Crop diseases reduce crop production by more than 30 billion USD globally. We propose a classification study of crop species and diseases for corn, cucumber, pepper, and strawberry using deep learning algorithms. The study has three steps (species classification, disease detection, and disease classification) and is notable for using captured images without additional preprocessing. A deep convolutional neural network based on the Mask R-CNN model was designed to classify crop species, and Inception and ResNet models were applied sequentially for disease detection and classification. For crop species classification and segmentation, the trained Mask R-CNN network achieved a loss value of 0.72. For disease detection, InceptionV3 and ResNet101-V2 models were trained per crop species on 1,500 images with normal and diseased labels; InceptionV3 gave the higher accuracy and AUC, with accuracies of 0.984, 0.969, 0.956, and 0.962 for corn, cucumber, pepper, and strawberry. For disease classification, the same two models were trained per crop species on 1,500 images with disease labels; ResNet101-V2 gave the higher accuracy and AUC for corn and cucumber (0.995 and 0.992), whereas InceptionV3 gave 0.940 and 0.988 for pepper and strawberry.
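
A structure-only sketch of the three-step hierarchy follows; the tiny stand-in classifiers below merely make the routing logic runnable and are not the paper's Mask R-CNN, InceptionV3, or ResNet101-V2 models.

```python
# Structure-only sketch of the hierarchy: species -> per-species disease detection ->
# per-species disease classification. All models and label names are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SPECIES = ["corn", "cucumber", "pepper", "strawberry"]

def stand_in(num_outputs, activation):
    """Tiny placeholder classifier standing in for the real trained networks."""
    return models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_outputs, activation=activation),
    ])

species_model = stand_in(len(SPECIES), "softmax")           # step 1: crop species
detectors = {s: stand_in(1, "sigmoid") for s in SPECIES}    # step 2: normal vs. diseased
classifiers = {s: stand_in(4, "softmax") for s in SPECIES}  # step 3: disease class (4 assumed)

def diagnose(image):
    """image: preprocessed array of shape (1, 128, 128, 3)."""
    species = SPECIES[int(np.argmax(species_model.predict(image, verbose=0)))]
    if detectors[species].predict(image, verbose=0)[0, 0] < 0.5:
        return species, "normal"
    disease = int(np.argmax(classifiers[species].predict(image, verbose=0)))
    return species, f"disease_{disease}"

print(diagnose(np.random.rand(1, 128, 128, 3).astype("float32")))
```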

Instagram image classification with Deep Learning (딥러닝을 이용한 인스타그램 이미지 분류)

  • Jeong, Nokwon;Cho, Soosun
    • Journal of Internet Computing and Services / v.18 no.5 / pp.61-67 / 2017
  • In this paper we present two experimental results from the classification of Instagram images and some valuable lessons from them. We ran experiments to evaluate the competitiveness of Convolutional Neural Networks (CNNs) in classifying real social network images such as Instagram images. We used AlexNet and ResNet, which showed the most outstanding capabilities in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 and 2015, respectively, on 240 Instagram images with 12 pre-defined categories. We also performed fine-tuning using the Inception V3 model and compared the results. For the four cases of AlexNet, ResNet, Inception V3, and fine-tuned Inception V3, the Top-1 error rates were 49.58%, 40.42%, 30.42%, and 5.00%, and the Top-5 error rates were 35.42%, 25.00%, 20.83%, and 0.00%, respectively.
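
A rough two-stage fine-tuning sketch in modern Keras (head training, then unfreezing the last Inception blocks) is shown below; the 12-class output matches the abstract, while the dataset objects, staging, and learning rates are assumptions rather than the paper's original setup.

```python
# Hedged fine-tuning sketch; train_ds/val_ds are assumed to yield labeled images
# already scaled to [-1, 1] (InceptionV3 preprocessing).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(12, activation="softmax")(x)  # 12 pre-defined categories
model = models.Model(base.input, outputs)

# Stage 1: train only the new classifier head on frozen ImageNet features.
base.trainable = False
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.SparseTopKCategoricalAccuracy(k=5)])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# Stage 2: unfreeze the last Inception blocks and fine-tune with a small learning rate.
base.trainable = True
for layer in base.layers[:-50]:   # keep the earlier layers frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.SparseTopKCategoricalAccuracy(k=5)])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Top-1 and Top-5 error rates, as reported above, are simply one minus the corresponding accuracy metrics.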

Accuracy Evaluation of Brain Parenchymal MRI Image Classification Using Inception V3 (Inception V3를 이용한 뇌 실질 MRI 영상 분류의 정확도 평가)

  • Kim, Ji-Yul;Ye, Soo-Young
    • Journal of the Institute of Convergence Signal Processing / v.20 no.3 / pp.132-137 / 2019
  • The amount of data generated from medical images increasingly exceeds the limits of expert visual analysis, and the need for automated medical image analysis is growing. Accordingly, this study evaluated classification accuracy according to the presence or absence of a tumor using the Inception V3 deep learning model on MRI images showing normal and tumor findings. The accuracy of the model was 90% for the training data set and 86% for the validation data set, with loss values of 0.56 for training and 1.28 for validation. Future studies should secure publicly available medical image data to improve model performance and ensure the reliability of the evaluation, and should improve the accuracy of the labels through refined label classification.
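
As a side note, the training/validation gap reported above (90% vs. 86% accuracy, 0.56 vs. 1.28 loss) is read off the per-epoch Keras training history; a self-contained toy example on synthetic data, standing in for the MRI pipeline, is sketched below.

```python
# Toy example: random stand-in data and a tiny CNN instead of Inception V3, showing how
# per-epoch training vs. validation accuracy and loss are obtained from a Keras run.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

x_train = np.random.rand(64, 96, 96, 3).astype("float32")  # stand-in "MRI slices"
y_train = np.random.randint(0, 2, size=(64,))               # 0 = normal, 1 = tumor
x_val = np.random.rand(16, 96, 96, 3).astype("float32")
y_val = np.random.randint(0, 2, size=(16,))

model = models.Sequential([
    layers.Input(shape=(96, 96, 3)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=3, verbose=0)

# A persistent gap between the two curves points to overfitting on the training set.
print("train acc/loss:", history.history["accuracy"][-1], history.history["loss"][-1])
print("val   acc/loss:", history.history["val_accuracy"][-1], history.history["val_loss"][-1])
```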

Diagnostic Classification of Chest X-ray Pneumonia using Inception V3 Modeling (Inception V3를 이용한 흉부촬영 X선 영상의 폐렴 진단 분류)

  • Kim, Ji-Yul;Ye, Soo-Young
    • Journal of the Korean Society of Radiology / v.14 no.6 / pp.773-780 / 2020
  • With the development of the fourth industrial revolution, research aimed at preventing disease and reducing harm is being conducted across fields such as medicine, health, and bio-science, and artificial intelligence techniques have been introduced for the analysis of radiological examination images. In this paper, we directly apply a deep learning model to the classification and detection of pneumonia from chest X-ray images and evaluate whether the Inception-series deep learning models are useful for detecting pneumonia. A chest X-ray image data set provided and shared free of charge by Kaggle was used; of the 3,470 chest X-ray images, 1,870 were assigned to the training set, 1,100 to the validation set, and 500 to the test set. In the metric evaluation of the Inception V3 model, accuracy was 94.80%, precision 97.24%, recall 94.00%, and the F1 score 95.59. The final-epoch accuracy of the Inception V3 model for pneumonia detection and classification was 94.91% in training and 89.68% in validation, with loss values of 1.127% for training and 4.603% for validation. From these results, the Inception V3 model was judged to be a very good model for extracting and classifying features of chest image data, with a good training state. In the per-class accuracy evaluation on the test set, 96% was achieved for normal chest X-ray images and 97% for pneumonia images. The Inception-series models are considered useful for classifying chest diseases and are expected to serve in an assisting role, helping to address the shortage of medical personnel. This study is expected to provide basic data for similar future studies on the diagnosis of pneumonia using deep learning.
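
The metric evaluation described above (accuracy, precision, recall, F1, and per-class test results) can be reproduced with scikit-learn once test-set predictions are available; a small sketch with made-up placeholder labels follows.

```python
# Sketch of the metric evaluation; y_true/y_pred here are placeholder values, not the
# paper's 500-image test set (0 = normal, 1 = pneumonia).
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 0, 1, 1, 0, 0, 1, 0, 1, 1])

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
```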

A Study on the Explainability of Inception Network-Derived Image Classification AI Using National Defense Data (국방 데이터를 활용한 인셉션 네트워크 파생 이미지 분류 AI의 설명 가능성 연구)

  • Kangun Cho
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.2 / pp.256-264 / 2024
  • Over the last 10 years AI has made rapid progress, and image classification in particular shows excellent performance based on deep learning. Nevertheless, because deep learning behaves as a black box, the lack of explainability of its judgements makes it difficult to use in critical decision-making settings such as national defense, autonomous driving, medical care, and finance. To overcome these limitations, this study applies a locally interpretable model-explanation algorithm to Inception-network-derived AI and analyzes the grounds on which it classifies national defense data. Specifically, we perform LIME analysis on the Inception v2_resnet model, compare explainability based on confidence values, and verify the similarity between human interpretations and LIME explanations. Furthermore, by comparing LIME explanation results for the Top-1 outputs of the Inception v3, Inception v2_resnet, and Xception models, we confirm the feasibility of using XAI to compare the efficiency and usability of deep learning networks.
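
A generic LIME-for-images setup of the kind described, sketched with a stock ImageNet-pretrained InceptionV3 and an assumed sample path (not the authors' defense data or models), might look like this:

```python
# Illustrative LIME image explanation for an Inception-style classifier.
# "sample.jpg" is a placeholder path; the model is a stock ImageNet InceptionV3.
import numpy as np
import tensorflow as tf
from lime import lime_image

model = tf.keras.applications.InceptionV3(weights="imagenet")  # stand-in classifier

def predict_fn(images):
    """LIME passes batches of perturbed images in [0, 1]; map to [-1, 1] for InceptionV3."""
    x = np.array(images, dtype="float32") * 2.0 - 1.0
    return model.predict(x, verbose=0)

img = tf.keras.utils.load_img("sample.jpg", target_size=(299, 299))  # placeholder path
img = np.array(img, dtype="float64") / 255.0                          # keep values in [0, 1]

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(img, predict_fn,
                                         top_labels=1, hide_color=0, num_samples=1000)

# Superpixels that most support the Top-1 prediction, for comparison with human judgement.
top1 = explanation.top_labels[0]
_, mask = explanation.get_image_and_mask(top1, positive_only=True,
                                         num_features=5, hide_rest=False)
```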

Image Clustering Using Machine Learning : Study of InceptionV3 with K-means Methods. (머신 러닝을 사용한 이미지 클러스터링: K-means 방법을 사용한 InceptionV3 연구)

  • Nindam, Somsauwt;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.681-684 / 2021
  • In this paper we study image clustering without labels using machine learning techniques. We propose an unsupervised approach that automatically groups images into clusters, focusing on the Inception V3 convolutional neural network combined with k-means. We use the public datasets Food-K5, Flowers, Handwritten Digit, and Cats-dogs, our own Rice Germination dataset, and a Palm print dataset. The experiment has three parts: first, all images are stripped of labels and merged into a single dataset; second, the images are passed through Inception V3 to extract features, which are then grouped by k-means into six clusters; last, clustering quality is evaluated with a confusion matrix using precision, recall, and F1. With this method we obtain: 1) Handwritten Digit (precision = 1.000, recall = 1.000, F1 = 1.00), 2) Food-K5 (precision = 0.975, recall = 0.945, F1 = 0.96), 3) Palm print (precision = 1.000, recall = 0.999, F1 = 1.00), 4) Cats-dogs (precision = 0.997, recall = 0.475, F1 = 0.64), 5) Flowers (precision = 0.610, recall = 0.982, F1 = 0.75), and 6) Rice Germination (precision = 0.997, recall = 0.943, F1 = 0.97). Overall, the model achieved an accuracy of 0.8908, indicating that it is strong enough to differentiate the different images and assign them to clusters.
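
A sketch of the feature-extraction plus k-means pipeline is given below; the directory layout, preprocessing, and choice of six clusters are assumptions for illustration.

```python
# Illustrative InceptionV3 feature extraction followed by k-means clustering.
# "unlabeled_images" is a placeholder directory of images without labels.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

extractor = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(299, 299, 3))

ds = tf.keras.utils.image_dataset_from_directory(
    "unlabeled_images", labels=None, image_size=(299, 299), batch_size=32, shuffle=False)

features = []
for batch in ds:
    x = tf.keras.applications.inception_v3.preprocess_input(batch)
    features.append(extractor.predict(x, verbose=0))
features = np.concatenate(features)   # (N, 2048) pooled InceptionV3 features

kmeans = KMeans(n_clusters=6, random_state=0, n_init=10)  # six clusters, as in the paper
cluster_ids = kmeans.fit_predict(features)
```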

Grading of Harvested 'Mihwang' Peach Maturity with Convolutional Neural Network (합성곱 신경망을 이용한 '미황' 복숭아 과실의 성숙도 분류)

  • Shin, Mi Hee;Jang, Kyeong Eun;Lee, Seul Ki;Cho, Jung Gun;Song, Sang Jun;Kim, Jin Gook
    • Journal of Bio-Environment Control / v.31 no.4 / pp.270-278 / 2022
  • This study used deep learning to classify the maturity of 'Mihwang' peaches from RGB images and fruit quality attributes collected during fruit development and maturation. A set of 730 peach images was split into training and validation sets at a ratio of 8:2, and the remaining 170 images were used to test the deep learning models. Among the fruit quality attributes, firmness, Hue value, and a* value were adopted as indices for maturity classification into immature, mature, and over-mature fruit. The CNN (Convolutional Neural Network) models used for image classification were VGG16 and InceptionV3 (GoogLeNet). With the Hue value index, accuracies were 87.1% for VGG16 and 83.6% for InceptionV3. In contrast, with the firmness index, accuracies were 72.2% and 76.9%, with loss rates of 54.3% and 62.1% for VGG16 and InceptionV3, respectively. Further improvement is considered necessary before the firmness index can be adopted for field use with peaches.
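
A hedged sketch of the two-backbone comparison (VGG16 vs. InceptionV3 on a three-class maturity label) follows; directory names, input size, and training settings are illustrative assumptions rather than the paper's configuration.

```python
# Hypothetical comparison of two frozen backbones on the same 3-class maturity labels.
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "peach/train", image_size=(224, 224), batch_size=32)   # placeholder directories
val_ds = tf.keras.utils.image_dataset_from_directory(
    "peach/val", image_size=(224, 224), batch_size=32)

def build(backbone_cls, name):
    base = backbone_cls(weights="imagenet", include_top=False,
                        input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False                    # train only the maturity head
    model = models.Sequential(
        [base, layers.Dense(3, activation="softmax")],  # immature / mature / over-mature
        name=name)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

for backbone, preprocess, name in (
        (tf.keras.applications.VGG16,
         tf.keras.applications.vgg16.preprocess_input, "vgg16"),
        (tf.keras.applications.InceptionV3,
         tf.keras.applications.inception_v3.preprocess_input, "inception_v3")):
    tr = train_ds.map(lambda x, y, p=preprocess: (p(x), y))
    va = val_ds.map(lambda x, y, p=preprocess: (p(x), y))
    model = build(backbone, name)
    model.fit(tr, validation_data=va, epochs=10)
    print(name, "validation loss/accuracy:", model.evaluate(va, verbose=0))
```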