DOI QR Code

Improving Field Crop Classification Accuracy Using GLCM and SVM with UAV-Acquired Images

  • Seung-Hwan Go (Department of Agricultural and Rural Engineering, Chungbuk National University) ;
  • Jong-Hwa Park (Department of Agricultural and Rural Engineering, Chungbuk National University)
  • Received : 2024.01.25
  • Accepted : 2024.02.16
  • Published : 2024.02.28

Abstract

Accurate field crop classification is essential for various agricultural applications, yet existing methods face challenges due to diverse crop types and complex field conditions. This study aimed to address these issues by combining support vector machine (SVM) models with multi-seasonal unmanned aerial vehicle (UAV) images, texture information extracted from the Gray Level Co-occurrence Matrix (GLCM), and RGB spectral data. Twelve high-resolution UAV image acquisitions spanned March to October 2021, while field surveys on three dates provided ground-truth data. We focused on the August (-A), September (-S), and October (-O) images and trained four support vector classifier (SVC) models (SVC-A, SVC-S, SVC-O, SVC-AS) using visible bands and eight GLCM features. Farm maps provided by the Ministry of Agriculture, Food and Rural Affairs proved efficient for open-field crop identification and served as a reference for accuracy comparison. Our analysis showed the significant impact of hyperparameter tuning (C and gamma) on SVM model performance, requiring careful optimization for each scenario. Importantly, we identified models exhibiting distinct high-accuracy zones, with SVC-O, trained on October data, achieving the highest overall and individual crop classification accuracy. This success likely stems from its ability to capture distinct texture information from mature crops. Incorporating GLCM features proved highly effective for all models, significantly boosting classification accuracy. Among these features, homogeneity, entropy, and correlation consistently demonstrated the most impactful contributions. However, balancing accuracy with computational efficiency and feature selection remains crucial for practical application. Performance analysis revealed that SVC-O achieved exceptional results in overall and individual crop classification, while soybeans and rice were consistently classified well by all models.
Challenges were encountered with cabbage due to its early growth stage and low field cover density. The study demonstrates the potential of utilizing farm maps and GLCM features in conjunction with SVM models for accurate field crop classification. Careful parameter tuning and model selection based on specific scenarios are key for optimizing performance in real-world applications.

1. Introduction

Land use land cover (LULC) classification is the most widely used technology in remote sensing (RS), with LULC maps being a prime example (Ren et al., 2019; Jining et al., 2019; Wang et al., 2012). While statistical and machine learning methods based on satellite or unmanned aerial vehicle (UAV) images are common for LULC classification (Antonarakis et al., 2008; Yang et al., 2017), accurately classifying field crops poses a unique challenge due to their seasonal variations and differing growth periods. Securing high classification accuracy is crucial, especially when relying solely on satellite and UAV images.

Here, analysis methods utilizing area texture information, which incorporates surrounding elements, have shown promise. Among such methods, the feature quantities generated by Gray Level Co-occurrence Matrix (GLCM) have been employed in various studies (Marceau et al., 1990; Maillard, 2003). Zakeri et al. (2017) highlighted the effectiveness of texture features in characterizing diverse features like urban areas, soil, rocks, and vegetation due to their ability to capture spatial relationships between pixel values. This has been further corroborated by several studies demonstrating the efficacy of GLCM as a texture analysis method (Kandaswamy et al., 2005; Feng et al., 2015).

Consequently, numerous researchers have reported improved classification accuracy when incorporating GLCM as a texture feature for LULC classification and mapping (van der Sanden and Hoekman, 1999; Wu and Linders, 2000). While Gupta et al. (2014) analyzed individual texture features and used a decision tree classification technique for LULC classification, these various studies collectively highlight the usefulness of GLCM features extracted from high-resolution images for boosting land cover classification accuracy (Igarashi and Wakabayashi, 2022). However, the selection of effective GLCM features often relies on trial and error, with systematic analysis using time series data remaining scarce.

Accurate crop classification presents a challenge that goes beyond the capabilities of simple LULC techniques. Crop characteristics vary significantly depending on planting conditions, region, and even individual crop health, leading to a relative dearth of research in this area (Li et al., 2014; Laliberte and Rango, 2009). The difficulties are compounded in environments where diverse crops are grown in close proximity or where crops with different growing seasons intermingle. In such cases, the surrounding environment can significantly influence individual crops, often leading to reduced classification accuracy. To effectively analyze these complex interactions, multi-temporal imagery is essential, but acquiring such data consistently can be difficult due to weather and climate limitations. Consequently, many related studies miss crucial periods for data collection.

Artificial intelligence (AI)-powered image processing technology offers a promising solution to these challenges (Bouguettaya et al., 2022; Zhong et al., 2019). AI is making waves in agriculture, aiming to enhance flexibility, accuracy, performance, and cost-effectiveness. Machine learning and deep learning are among the currently deployed AI technologies, with machine learning methods proving particularly valuable for field crop classification (Jay et al., 2019). Previous research has established the suitability of the support vector machine (SVM) model, alongside the random forest (RF) model, in agricultural and other industrial fields (Feng et al., 2015; Chang and Lin, 2011). While the RF model may exhibit slightly higher accuracy, the SVM model boasts superior processing efficiency.

Therefore, this study aimed to evaluate the effectiveness of combining the SVM model, farm maps, and GLCM features to improve crop classification accuracy in complex field conditions, characterized by diverse crop types, mixed vegetation, and variable lighting conditions, with the goal of enhancing the applicability of UAV technology in agricultural applications.

2. Materials and Methods

2.1. Study Area

Located at 36°52’11”N, 127°52’10.5”E in Goesan-gun, Chungbuk, South Korea, a 580-hectare area was chosen as the study site (Fig. 1). UAV images were acquired, and field surveys were conducted. The climate exhibits a distinct continental character, with an average annual temperature of 11.6°C and a peak of 26.2°C in August. Notably, the area receives above the national average rainfall (1280 mm), totaling 1440 mm annually, with 66.0% (951 mm) concentrated in summer and 15.0% (216 mm) in fall (Korea Meteorological Administration, 2015). The topography is flat, predominantly consisting of wide farmland where corn, cabbage, ginseng, pepper, soybeans, and rice are cultivated. Within the study area itself, cabbage, rice, and soybeans are the primary crops, all grown in summer and harvested in fall.


Fig. 1. The study site (36°52’11”N, 127°52’10.5”E) is located in Goesan-gun, Chungbuk, South Korea (red diamond marker). The area inside the black solid box on the upper right is the study area.

2.2. Materials and Usage Data

UAV image data was acquired 12 times in total, from March 29 to October 21, 2021. This provided a comprehensive temporal dataset for monitoring the cultivated crops throughout the growing season. The status of cultivated crops was further assessed through five dedicated field surveys. Pre-processing and registration of the UAV images were performed using Pix4D Mapper (Pix4D, Prilly, Switzerland), a specialized image processing program. This ensured accurate and consistent data for further analysis. Field survey data was collected and managed using ArcGIS Pro 2.7.3 (Esri, Redlands, CA, USA). Geometric correction, crucial for precise spatial analysis, was achieved using nine ground control point (GCP) coordinates measured with a GPS-RTK V30 (HI-TARGET, Guangzhou, China). This ensured the UAV images were accurately aligned with real-world locations.

Finally, radiation correction of the reflectance image for each multispectral band was performed using a direct correction technique with an optical sensor and calibration panel provided by Parrot. This step ensured accurate radiometric measurements for reliable crop analysis. By combining UAV imagery with field surveys and employing robust data processing techniques, this study acquired a rich and accurate dataset for monitoring and analyzing cultivated crops (Jeong and Park, 2021).

2.3. Study Workflow and Data Preparation

The study workflow, outlined in Fig. 2, comprised four key steps: (a) UAV image acquisition and field survey: high-resolution UAV images were captured on three dates (August 19, September 13, and October 21, 2021) using only visible bands. Simultaneously, field surveys were conducted to gather ground truth data for subsequent labeling and validation. (b) Field labeling and dataset production: leveraging the shapefile of a provided farm map, each object within the actual field boundaries was assigned a ground-truth label. Irrelevant elements were excluded, and specific crops like cabbage (1), rice (2), soybean (3), and others (0) were assigned unique integer codes. These codes were manually entered into their corresponding shape objects on the farm map, essentially creating an “SVM answer sheet” by visually identifying and labeling crop objects during field visits. (c) Variable dataset production: to enrich the dataset for SVM machine learning analysis, additional relevant variables were extracted from the labeled data, including features like area, perimeter, texture, and spectral values. (d) Crop classification model production and evaluation: the prepared dataset was then used to develop and evaluate SVM machine learning models for field crop classification, involving model training, validation, and performance assessment.
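The labeling in step (b) amounts to mapping surveyed crop names onto the integer codes above. A minimal sketch follows; the `label_parcels` helper, the parcel records, and their field names are hypothetical illustrations, not the study's actual shapefile workflow:

```python
# Integer crop codes used as the "SVM answer sheet" (from the study text)
CROP_CODES = {"others": 0, "cabbage": 1, "rice": 2, "soybean": 3}

def label_parcels(surveyed):
    """Attach the integer ground-truth code to each surveyed parcel record.

    `surveyed` is a list of dicts with at least a 'crop' key; the record
    structure here is an invented stand-in for farm-map shape objects.
    """
    return [{**parcel, "label": CROP_CODES[parcel["crop"]]} for parcel in surveyed]

# Hypothetical field-survey records for two parcels
parcels = label_parcels([{"id": 101, "crop": "rice"},
                         {"id": 102, "crop": "soybean"}])
```

In the actual workflow these codes were entered into the farm-map shapefile attributes by hand; the dict lookup simply makes the encoding explicit.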


Fig. 2. Study flow chart from UAV image acquisition and field investigation to field crop classification and accuracy evaluation process. DN: digital number.

2.4. Production of GLCM-Texture Feature

GLCM-texture feature (GLCM-TF) was developed to expand feature extraction capabilities through RGB image conversion (Haralick et al., 1973). It utilizes GLCM, which statistically analyzes spatial relationships between pixel values. GLCM expresses these relationships as a matrix, using grayscale images to define the relative positions of adjacent pixels and count their occurrences (Feng et al., 2015; Rao et al., 2002).

As depicted in Fig. 2, GLCM-TF operates by (a) converting an RGB image to a grayscale image using the average of the DN values of R, G, and B, (b) segmenting the image into kernel-sized blocks, (c) calculating the GLCM for each block, (d) performing convolution based on a defined equation, (e) setting the center pixel value, and (f) generating a new image.
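Steps (a) through (f) can be sketched in plain NumPy as follows. The kernel size, grey-level count, and horizontal distance-1 pixel offset are illustrative assumptions, since the paper does not specify these settings:

```python
import numpy as np

def block_glcm(block, levels):
    """Symmetric, normalised GLCM of a quantised block (distance 1, horizontal)."""
    glcm = np.zeros((levels, levels), dtype=float)
    for a, b in zip(block[:, :-1].ravel(), block[:, 1:].ravel()):
        glcm[a, b] += 1
        glcm[b, a] += 1  # symmetric counting
    total = glcm.sum()
    return glcm / total if total else glcm

def homogeneity(glcm):
    """Homogeneity (inverse difference moment) of a normalised GLCM."""
    i, j = np.indices(glcm.shape)
    return (glcm / (1.0 + (i - j) ** 2)).sum()

def glcm_feature_image(rgb, kernel=7, levels=32, feature=homogeneity):
    gray = rgb.mean(axis=2)                      # (a) grayscale from mean DN
    gray = (gray / 256 * levels).astype(int)     # quantise to few grey levels
    h, w = gray.shape
    half = kernel // 2
    out = np.zeros((h, w))
    for r in range(half, h - half):              # (b) kernel-sized blocks
        for c in range(half, w - half):
            block = gray[r - half:r + half + 1, c - half:c + half + 1]
            # (c)-(e) GLCM per block, centre pixel set to the texture measure
            out[r, c] = feature(block_glcm(block, levels))
    return out                                   # (f) the new texture image
```

On a perfectly uniform patch this yields homogeneity 1.0 at the centre, since all co-occurring pairs share one grey level.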

An index map-based variable dataset was created using GLCM-TF, with 8 variables: mean, variance, entropy, homogeneity, contrast, dissimilarity, angular second moment (ASM), and correlation. Additionally, two wavelength reflectance values (green and red) were included as variables due to their relevance in crop differentiation. The final dataset comprised 10 variables: 2 reflectance values and 8 texture features.
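The eight texture variables can be computed from a normalised co-occurrence matrix P(i, j) roughly as below. The exact formulas (base-2 entropy, row-index mean and variance, correlation convention for degenerate matrices) are common GLCM definitions and are assumptions here, not taken from the paper:

```python
import numpy as np

def glcm_features(p):
    """The eight GLCM texture variables of this study, from a normalised GLCM p."""
    i, j = np.indices(p.shape)
    mean = (i * p).sum()                          # GLCM mean (row index)
    variance = ((i - mean) ** 2 * p).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    contrast = ((i - j) ** 2 * p).sum()
    dissimilarity = (np.abs(i - j) * p).sum()
    asm = (p ** 2).sum()                          # angular second moment
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    # correlation is undefined for zero-variance matrices; 1.0 by convention here
    correlation = (((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)
                   if sd_i and sd_j else 1.0)
    return {"mean": mean, "variance": variance, "entropy": entropy,
            "homogeneity": homogeneity, "contrast": contrast,
            "dissimilarity": dissimilarity, "ASM": asm,
            "correlation": correlation}

# Appending the two reflectance bands gives the 10-variable feature vector;
# the green/red values below are hypothetical
p = np.zeros((4, 4)); p[2, 2] = 1.0
feature_vector = {**glcm_features(p), "green": 0.12, "red": 0.08}
```

For the degenerate single-entry matrix above, contrast and entropy are zero while homogeneity and ASM are one, which is the expected behaviour for perfectly uniform texture.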

2.5. Evaluating Crop Classification Accuracy through Classifier Selection and Cross-Validation

The field-specific labeling dataset (containing crop type information) was combined with the variable dataset, resulting in a comprehensive dataset ready for machine learning (Mountrakis et al., 2011; Maxwell et al., 2018). We employed the SVM algorithm, a supervised learning method frequently used for land cover classification with remote sensing data, because it can achieve high classification performance with relatively limited sample data compared to other models. Specifically, we utilized a soft-margin SVM variant, the support vector classifier (SVC), to perform the classification tasks.

This dataset was then divided into a 70% learning set for model training and a 30% evaluation set for performance assessment. To address overfitting and optimize performance, cross-validation was implemented. Recursive feature elimination (RFE) was applied to reduce memory usage and simplify model complexity. Grid search, combined with stratified K-fold cross-validation, was employed to optimize SVM parameters and evaluate the resulting model’s performance.
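Assuming a scikit-learn implementation (the paper does not name its software), the split, RFE, and grid search with stratified K-fold cross-validation might be wired together as follows; the synthetic data, grid values, and fold count are illustrative:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.feature_selection import RFE
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the 10-variable parcel dataset (hypothetical values)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # labels driven by 2 of the 10 features

# 70% learning set, 30% evaluation set, stratified by class
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# RFE needs a linear kernel to rank features; the final classifier uses RBF
pipe = Pipeline([
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=5)),
    ("svc", SVC(kernel="rbf")),
])
param_grid = {"svc__C": [1, 10, 100], "svc__gamma": [0.01, 0.1, 0.3]}
search = GridSearchCV(pipe, param_grid, cv=StratifiedKFold(n_splits=5))
search.fit(X_tr, y_tr)
test_acc = search.score(X_te, y_te)
```

`search.cv_results_` then supplies the accuracy grid over C and gamma that the heatmaps in Section 3.2 visualize.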

Four distinct datasets were created based on acquisition dates: (1) SVC-A (August 19, 2021), (2) SVC-S (September 13, 2021), (3) SVC-O (October 21, 2021), and (4) SVC-AS (a combination of SVC-A and SVC-S). Model performance was evaluated using a confusion matrix, generating precision, recall, accuracy, and F1 scores.
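A minimal sketch of the confusion-matrix evaluation, again assuming scikit-learn; the label vectors are invented for illustration and use the integer crop codes from Section 2.3:

```python
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

# Hypothetical surveyed vs. predicted labels: 0=others, 1=cabbage, 2=rice, 3=soybean
y_true = [1, 1, 2, 2, 3, 3, 0, 0, 2, 3]
y_pred = [1, 0, 2, 2, 3, 3, 0, 1, 2, 3]

labels = [0, 1, 2, 3]
cm = confusion_matrix(y_true, y_pred, labels=labels)  # rows: true, cols: predicted
precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, zero_division=0)
```

The diagonal of `cm` counts correctly classified parcels per crop, and the per-class precision, recall, and F1 arrays correspond to the scores reported in Table 2.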

3. Results

3.1. Field Survey Results of Cultivated Crops Using Farm Map

Farm maps, digital representations of actual farmland based on aerial and satellite imagery, offer great potential for streamlining crop classification (Fig. 3(a)). Provided by the Korean Ministry of Agriculture, Food and Rural Affairs (MAFRA), these maps depict areas and properties like rice fields, open fields, orchards, and facilities. In this study, we focused on classifying crops grown in open fields, where distinguishing cultivation areas is crucial. While manual classification can be time-consuming, farm maps, readily available in shapefile format, significantly expedite this process. However, we limited our analysis to rice paddies and open fields, excluding facility-grown crops like ginseng. To evaluate classification accuracy, we applied crop survey data obtained through field surveys to the farm map shapefile, creating a comparison group as shown in Fig. 3(b). For each individual parcel within this comparison group, we extracted data from corresponding image data, forming the basis for developing our crop classification algorithm.


Fig. 3. Parcel classification in the study area using farm maps and field surveys. (a) Parcel classification based on farm maps. (b) Application of field survey results to each parcel.

3.2. Comparison of Characteristics of C and Gamma of SVC Model

The X-axis in all four heatmaps (Fig. 4) represents the gamma parameter, which controls the influence of nearby data points on the classification boundary. Higher gamma values lead to more complex, non-linear decision boundaries, while lower values result in simpler, linear boundaries. The Y-axis in all four heatmaps shows the C parameter, which controls the trade-off between minimizing training errors and maximizing the margin between classes. Higher C values prioritize minimizing training errors, potentially leading to overfitting, while lower values prioritize maximizing the margin, potentially leading to underfitting. The color scale in all four heatmaps represents the classification accuracy achieved by the SVC model for different combinations of C and gamma. Higher accuracy is indicated by warmer colors like yellow and red, while lower accuracy is indicated by cooler colors like blue and green.


Fig. 4. Comparison of heatmap characteristics of C and gamma for four SVC-applied models.

The heatmap features of the four models are summarized as follows: (1) SVC-A shows a broad peak of high accuracy around gamma = 0.3 and C = 10, with a diagonal pattern of decreasing accuracy towards the top right corner (higher C and gamma). (2) SVC-S is similar to SVC-A, but its peak accuracy is narrower and shifted slightly towards lower gamma (around 0.1), with a sharper drop in accuracy at higher C values. (3) SVC-O shows a more complex pattern with multiple areas of high accuracy, notably a peak around gamma = 0.03 and C = 100 and another around gamma = 0.1 and C = 10. (4) SVC-AS resembles SVC-O, but with a less pronounced peak at gamma = 0.03 and C = 100. SVC-A and SVC-S seem to prefer a broader, smoother decision boundary (lower gamma) compared to SVC-O and SVC-AS, which benefit from a more complex boundary (higher gamma). For all kernels, increasing C beyond a certain point (around 100 in most cases) leads to a drop in accuracy due to overfitting. The heatmaps show that C and gamma have a significant impact on the classification accuracy of the SVC model, and choosing the right combination of these parameters is crucial for optimizing the model’s performance on a specific dataset.
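Locating a model's high-accuracy zone amounts to finding the arg-max cell of the cross-validated accuracy grid behind each heatmap. The accuracy values below are entirely hypothetical, arranged so the peak falls at C = 10 and gamma = 0.3, mirroring the broad peak described for SVC-A:

```python
import numpy as np

# Hypothetical cross-validated accuracies over the (C, gamma) grid,
# laid out like the Fig. 4 heatmaps: rows = C values, columns = gamma values
C_vals = [0.1, 1, 10, 100, 1000]
gamma_vals = [0.01, 0.03, 0.1, 0.3, 1.0]
acc = np.array([
    [0.62, 0.64, 0.66, 0.65, 0.60],
    [0.70, 0.74, 0.78, 0.80, 0.72],
    [0.75, 0.80, 0.86, 0.91, 0.79],   # broad peak around C = 10
    [0.74, 0.79, 0.84, 0.88, 0.70],
    [0.72, 0.76, 0.80, 0.82, 0.65],   # accuracy falls again at very high C
])

# The high-accuracy zone is the arg-max cell of the grid
row, col = np.unravel_index(acc.argmax(), acc.shape)
best_C, best_gamma = C_vals[row], gamma_vals[col]
```

With real `cv_results_` accuracies in place of the invented array, the same two lines recover each model's optimal (C, gamma) pair.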

3.3. Performance Analysis of GLCM Application

Table 1 lists, for each of the four SVC models, the selected C and gamma values together with the optical information and the GLCM texture variables incorporated directly into the feature vector. Notably, applying all GLCM elements to the acquired image information resulted in remarkably high accuracy in classifying field crop cultivation areas. However, if incorporating all GLCM elements proves challenging, applying homogeneity, entropy, and correlation (the GLCM variables consistently selected across all four models) in conjunction with the acquired image information can still yield remarkably effective results.

Table 1. Variable values and accuracy evaluation results for each of the four models


Analysis of applying GLCM features to the SVC model revealed several key findings compared to prior studies using images alone: (1) Vasantha and Keesara (2019) classified crop types by kernel using RGB UAV images and SVM, with the RBF kernel achieving the highest accuracy of 0.87. This study demonstrated a classification accuracy of up to 0.97, indicating that GLCM features significantly enhance crop classification accuracy with SVM models and highlighting the crucial role of texture information in improving performance. (2) Among the eight GLCM elements, homogeneity, entropy, and correlation consistently emerged as the most impactful, appearing in all models and boosting performance. This suggests their critical importance as texture features for crop classification. (3) Balancing accuracy, computational efficiency, and feature selection is crucial. Choosing the optimal combination of factors requires careful consideration to optimize performance without sacrificing practicality or exceeding resource limitations. (4) Careful hyperparameter tuning (C and gamma) is essential for maximizing model performance. The optimal settings depend on the specific kernel and data characteristics, and finding the best combination often requires experimentation and grid search techniques.

3.4. Crop Classification Performance Evaluation

Table 2 shows that SVC-O stands out with the highest overall accuracy (0.97 precision) and also excels in classifying individual crops based on F1 scores. SVC-AS (0.90 precision), SVC-S (0.88 precision), and SVC-A (0.86 precision) offer good accuracy but slightly trail SVC-O. Soybeans are consistently classified very accurately by all models. Rice is also classified well, with high recall but slightly lower precision. Cabbage poses challenges for classification, likely due to its early growth stage and field cover density. The SVC models demonstrated the following characteristics and utilization efficiency: (1) SVC-O: most accurate overall, ideal for single-season crop classification in October, and potentially captures more distinct texture information from mature crops. (2) SVC-S: good balance of accuracy and efficiency, might be useful for capturing texture information earlier in the growing season, and could be explored for multi-season classification. (3) SVC-A: good efficiency, but slightly less accurate than SVC-O, suitable when computational resources are limited. Accuracy: SVC-A and SVC-O achieve the highest best scores, but SVC-O has the highest test scores, suggesting better generalization on this specific dataset. Applicability: SVC-A appears more robust due to its wider range of acceptable C and gamma values, making it suitable for diverse data conditions. SVC-S and SVC-AS offer good accuracy with simpler models, while SVC-O might require careful hyperparameter tuning to avoid overfitting.

Table 2. Index accuracy and accuracy evaluation results for each of the four models


4. Conclusions

This study aimed to elevate field crop classification accuracy by combining SVM models with multi-seasonal UAV images, texture information extracted using GLCM, and spectral data. Accurate classification is vital for crop management and yield optimization, but existing methods often struggle with diverse crop types and complex field conditions. We addressed these challenges by leveraging high-resolution UAV images and advanced machine-learning techniques.

Farm maps provided by the MAFRA proved efficient in identifying open-field crops and served as a valuable baseline for accuracy comparison. Optimizing the C and gamma parameters of the SVM model significantly influenced classification accuracy, highlighting the importance of careful tuning for optimal performance.

Interestingly, each model displayed unique high-accuracy zones. SVC-O, trained on October data, achieved the highest overall and individual crop classification accuracy, while soybeans and rice were consistently well-classified by all models, likely due to their ability to capture distinct texture information from mature crops. Cabbage posed challenges due to its early growth stage and low field cover density. Incorporating GLCM features significantly improved accuracy for all models, with homogeneity, entropy, and correlation consistently demonstrating the most impactful contributions. However, balancing accuracy with computational efficiency and feature selection remains crucial for practical application.

Overall, this study underscores the potential of using farm maps and GLCM features in conjunction with SVM models for accurate field crop classification. Careful parameter tuning and model selection based on specific scenarios are key for optimizing performance in real-world applications.

While field validation was not conducted, the improved accuracy suggests potential for practical applications in crop monitoring and management. More accurate classification could enable targeted application of fertilizers and pesticides, reducing waste and enhancing resource efficiency. Additionally, early detection of crop stress or disease could be facilitated, leading to timely interventions and improved crop health.

Future research aims to further explore the agricultural utility of UAV images’ texture information based on these findings. By analyzing data from continuous imaging across diverse crop stages, environmental conditions, and various field settings, we can enhance the method’s robustness and generalizability. Ultimately, this will lead to a more adaptable solution for crop classification in diverse real-world scenarios. This research provides valuable insights for farmers and agricultural researchers leveraging multi-seasonal UAV images and machine learning for crop management and yield optimization.

Acknowledgments

We are very grateful to the experts for their appropriate and constructive suggestions to improve this paper.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

References

  1. Antonarakis, A. S., Richards, K. S., and Brasington, J., 2008. Object-based land cover classification using airborne LiDAR. Remote Sensing of Environment, 112(6), 2988-2998. https://doi.org/10.1016/j.rse.2008.02.004
  2. Bouguettaya, A., Zarzour, H., Kechida, A., and Taberkit, A. M., 2022. Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural Computing and Applications, 34, 9511-9536. https://doi.org/10.1007/s00521-022-07104-9
  3. Chang, C. C., and Lin, C. J., 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3), 1-27. https://doi.org/10.1145/1961189.1961199
  4. Feng, Q., Liu, J., and Gong, J., 2015. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sensing, 7(1), 1074-1094. https://doi.org/10.3390/rs70101074
  5. Gupta, S., Singh, D., and Kumar, S., 2014. An approach based on texture measures to classify the fully polarimetric SAR image. In Proceedings of the 2014 9th International Conference on Industrial and Information Systems (ICIIS), Gwalior, India, Dec. 15-17, pp. 1-6. https://doi.org/10.1109/ICIINFS.2014.7036651
  6. Haralick, R. M., Shanmugam, K., and Dinstein, I., 1973. Textural features for image classification. IEEE Transaction on Systems, Man, and Cybernetics, SMC-3(3), 610-621. https://doi.org/10.1109/TSMC.1973.4309314
  7. Igarashi, T., and Wakabayashi, H., 2022. Accuracy improvement of land cover classification for UAV-acquired high-resolution images using texture information. Journal of the Remote Sensing Society of Japan, 42(2), 101-118. https://doi.org/10.11440/rssj.2021.055
  8. Jay, S., Baret, F., Dutartre, D., Malatesta, G., Heno, S., Comar, A., Weiss, M., and Maupas, F., 2019. Exploiting the centimeter resolution of UAV multispectral imagery to improve remote-sensing estimates of canopy structure and biochemistry in sugar beet crops. Remote Sensing of Environment, 231, 110898. https://doi.org/10.1016/j.rse.2018.09.011
  9. Jeong, C. H., and Park, J. H., 2021. Analysis of growth characteristics using plant height and NDVI of four waxy corn varieties based on UAV imagery. Korean Journal of Remote Sensing, 37(4), 733-745. https://doi.org/10.7780/kjrs.2021.37.4.5
  10. Jining, Y., Wang, L., Song, W., Chen, Y., Chen, X., and Deng, Z., 2019. A time-series classification approach based on change detection for rapid land cover mapping. ISPRS Journal of Photogrammetry and Remote Sensing, 158, 249-262. https://doi.org/10.1016/j.isprsjprs.2019.10.003
  11. Kandaswamy, U., Adjeroh, D. A., and Lee, M. C., 2005. Efficient texture analysis of SAR imagery. IEEE Transactions on Geoscience and Remote Sensing, 43(9), 2075-2083. https://doi.org/10.1109/TGRS.2005.852768
  12. Korea Meteorological Administration, 2015. KMA weather data service open MET data portal. Available online: https://data.kma.go.kr/cmmn/main.do (accessed on Jan. 13, 2024).
  13. Laliberte, A. S., and Rango, A., 2009. Texture and scale in object-based analysis of subdecimeter resolution unmanned aerial vehicle (UAV) imagery. IEEE Transactions on Geoscience and Remote Sensing, 47(3), 761-770. https://doi.org/10.1109/TGRS.2008.2009355
  14. Li, L., Zhang, Q., and Huang, D., 2014. A review of imaging techniques for plant phenotyping. Sensors, 14(11), 20078-20111. https://doi.org/10.3390/s141120078
  15. Maillard, P., 2003. Comparing texture analysis methods through classification. Photogrammetric Engineering and Remote Sensing, 69(4), 357-367. https://doi.org/10.14358/PERS.69.4.357
  16. Marceau, D. J., Howarth, P. J., Dubois, J. M., and Gratton, D. J., 1990. Evaluation of the grey-level co-occurrence matrix method for land-cover classification using SPOT imagery. IEEE Transactions on Geoscience and Remote Sensing, 28(4), 513-519. https://doi.org/10.1109/TGRS.1990.572937
  17. Maxwell, E., Warner, T. A., and Fang, F., 2018. Implementation of machine-learning classification in remote sensing: An applied review. International Journal of Remote Sensing, 39(9), 2784-2817. https://doi.org/10.1080/01431161.2018.1433343
  18. Mountrakis, G., Im, J., and Ogole, C., 2011. Support vector machines in remote sensing: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 66(3), 247-259. https://doi.org/10.1016/j.isprsjprs.2010.11.001
  19. Rao, P. V. N., Sai, M. V. R. S., Sreenivas, K., Rao, M. V. K., Rao, B. R. M., Dwibedi, R. S., and Venkataratnam, L., 2002. Textural analysis of IRS-1D panchromatic data for land cover classification. International Journal of Remote Sensing, 23(17), 3327-3345. https://doi.org/10.1080/01431160110104665
  20. Ren, Y., Lu, Y., Comber, A., Fu, B., Harris, P., and Wu, L., 2019. Spatially explicit simulation of land use/land cover changes: Current coverage and future prospects. Earth-Science Reviews, 190, 398-415. https://doi.org/10.1016/j.earscirev.2019.01.001
  21. van der Sanden, J. J., and Hoekman, D. H., 1999. Potential of airborne radar to support the assessment of land cover in a tropical rain forest environment. Remote Sensing of Environment, 68(1), 26-40. https://doi.org/10.1016/S0034-4257(98)00099-6
  22. Vasantha, V. K., and Keesara, V. R., 2019. Comparative study on crop type classification using support vector machine on UAV imagery. In: Jain, K., Khoshelham, K., Zhu, X., Tiwari, A. (eds.), Proceedings of UASG 2019, Springer, pp. 67-77. https://doi.org/10.1007/978-3-030-37393-1_8
  23. Wang, Y. C., Feng, C. C., and Duc, H. V., 2012. Integrating multi-sensor remote sensing data for land use/cover mapping in a tropical mountainous area in Northern Thailand. Geographical Research, 50(3), 320-331. https://doi.org/10.1111/j.1745-5871.2011.00732.x
  24. Wu, D., and Linders, J., 2000. Comparison of three different methods to select features for discriminating forest cover types using SAR imagery. International Journal of Remote Sensing, 21(10), 2089-2099. https://doi.org/10.1080/01431160050021312
  25. Yang, C., Wu, G., Ding, K., Shi, T., Li, Q., and Wang, J., 2017. Improving land use/land cover classification by integrating pixel unmixing and decision tree methods. Remote Sensing, 9(12), 1222. https://doi.org/10.3390/rs9121222
  26. Zakeri, H., Yamazaki, F., and Liu, W., 2017. Texture analysis and land cover classification of Tehran using polarimetric synthetic aperture radar imagery. Applied Sciences, 7(5), 452. https://doi.org/10.3390/app7050452
  27. Zhong, L., Hu, L., and Zhou, H., 2019. Deep learning based multitemporal crop classification. Remote Sensing of Environment, 221, 430-443. https://doi.org/10.1016/j.rse.2018.11.032