Use of Unmanned Aerial Vehicle Imagery and Deep Learning (UNet) to Classify Upland Crops on Small-Scale Agricultural Land

  • Received : 2020.11.24
  • Accepted : 2020.12.06
  • Published : 2020.12.31

Abstract

Monitoring and analyzing crop conditions across cultivated areas is important for increasing the food self-sufficiency rate, but existing methods, in which agricultural personnel perform measurements and sampling analysis in the field, are time-consuming and labor-intensive, and therefore inefficient. To overcome this limitation, an efficient method is needed for monitoring crop information in small agricultural parcels, which exist in large numbers domestically. In this study, RGB images acquired from an unmanned aerial vehicle (UAV) and vegetation indices calculated from those images were used as deep learning input data to classify mixed upland crops in small farmland. Classification using RGB images alone showed an overall accuracy of 80.23% and a Kappa coefficient of 0.65. Adding three vegetation indices (ExG, ExR, VDVI) to the RGB images raised the overall accuracy to 89.51% and the Kappa coefficient to 0.80, and adding six vegetation indices (ExG, ExR, VDVI, RGRI, NGRDI, ExGR) yielded 90.35% and 0.82, respectively. The inputs augmented with vegetation indices were therefore more accurate than RGB images alone, showing that vegetation indices meaningfully improve the classification of mixed crops.

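The index formulas are not reproduced on this page, so the sketch below uses the standard visible-band definitions from the literature for the six indices named in the abstract. The function name, the (H, W, 3) array layout, and the [0, 1] reflectance scaling are illustrative assumptions, not the authors' implementation; it shows how the index channels can be derived from the RGB bands and stacked with them into the 9-channel input of the best-performing configuration.

```python
import numpy as np

def rgb_vegetation_indices(img):
    """Compute the six visible-band vegetation indices named in the
    abstract (ExG, ExR, VDVI, RGRI, NGRDI, ExGR) from an RGB image.

    img: float array of shape (H, W, 3), values scaled to [0, 1].
    Returns an (H, W, 6) float array of index channels.
    """
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-6  # guards against division by zero on dark pixels

    # ExG/ExR are conventionally defined on brightness-normalized
    # chromatic coordinates rather than the raw bands.
    total = R + G + B + eps
    r, g, b = R / total, G / total, B / total

    exg = 2.0 * g - r - b                               # Excess Green
    exr = 1.4 * r - g                                   # Excess Red
    exgr = exg - exr                                    # Excess Green minus Excess Red
    vdvi = (2.0 * G - R - B) / (2.0 * G + R + B + eps)  # Visible-band Difference Vegetation Index
    ngrdi = (G - R) / (G + R + eps)                     # Normalized Green-Red Difference Index
    rgri = R / (G + eps)                                # Red-Green Ratio Index

    return np.stack([exg, exr, vdvi, rgri, ngrdi, exgr], axis=-1)

# Stacking RGB with the six indices gives the 9-channel UNet input of the
# highest-accuracy configuration; the tile here is synthetic stand-in data.
rgb_tile = np.random.rand(256, 256, 3).astype(np.float32)
unet_input = np.concatenate([rgb_tile, rgb_vegetation_indices(rgb_tile)], axis=-1)
print(unet_input.shape)  # (256, 256, 9)
```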


