
Analyzing Factors Contributing to Research Performance using Backpropagation Neural Network and Support Vector Machine

  • Received : 2021.07.16
  • Accepted : 2021.12.28
  • Published : 2022.01.31

Abstract

In this study, the authors analyze the factors contributing to research performance using a Backpropagation Neural Network and a Support Vector Machine. The analysis of factors contributing to lecturer research performance starts from defining the features. The next stage is to collect a dataset based on the defined features and to transform the raw dataset into data ready to be processed. After the data is transformed, the next stage is feature selection. Before feature selection, the target feature is determined, namely research performance. Feature selection consists of Chi-Square selection (U) and the Pearson correlation coefficient (CM). Feature selection produces eight factors contributing to lecturer research performance: Scientific Papers (U: 154.38, CM: 0.79), Number of Citation (U: 95.86, CM: 0.70), Conference (U: 68.67, CM: 0.57), Grade (U: 10.13, CM: 0.29), Grant (U: 35.40, CM: 0.36), IPR (U: 19.81, CM: 0.27), Qualification (U: 2.57, CM: 0.26), and Grant Awardee (U: 2.66, CM: 0.26). To test the factors, two data mining classifiers were involved: Backpropagation Neural Networks (BPNN) and Support Vector Machine (SVM). Evaluation of the data mining classifiers yields an accuracy score of 95 percent for BPNN and 92 percent for SVM. The essence of this analysis is not to find the highest accuracy score, but rather to determine whether the factors can pass the testing phase with the expected results. The findings of this study reveal the factors that have a significant impact on research performance and vice versa.


1. Introduction

Lecturers are the main actors in research activities at higher learning institutions (HLI). A lecturer must be involved in research activities in addition to teaching and community service. These three main functions of lecturers at HLI in Indonesia are governed by government regulations. In addition to regulations, the various research schemes provided by the government and the internal funding provided by each HLI should make lecturers feel at ease when conducting research. The regulator has also established research activity targets. The research target is tailored to a lecturer's grade: the higher a lecturer's grade, the higher the research target assigned to that lecturer. Research targets are closely related to terms that are already quite popular, namely research performance or research productivity [1][2]. The research performance of a lecturer is measured by looking at the activities and research outputs produced over a period of time.

The factors that influence the research performance of a lecturer should be easy to identify. However, until now there have been very few studies on the research performance of lecturers at HLI [3][4], and research performance remains suboptimal because the factors that have a significant effect on research performance in HLI are unknown. This is what encourages the authors to analyze the factors contributing to lecturer research performance at HLI. Through this research, the authors want to contribute to the knowledge and understanding of the factors that contribute significantly to the research performance of lecturers at HLI. By knowing the significant factors, HLI can focus on them to improve overall research performance.

This study is a continuation of previous work, in which the authors discussed a framework for increasing research productivity in HLI [5][6]. Several other related studies discuss research performance, including that of Henry et al., who used five indicators to determine research performance [7]. These related studies are presented in more detail in the next chapter. Although this analysis is still a preliminary study, it is hoped that it can provide guidance and direction for improving the research performance of lecturers at HLI.

The analysis of the factors contributing to lecturer research performance begins with defining the features. The selected features are used as factors that affect research performance. The selected factors must go through a testing phase to prove whether they have a significant and positive impact on the target. If the factors pass the testing phase with scores above the threshold, it means they are significant and relevant for improving research performance in HLI.

2. Related Work

In this chapter, several related studies on research performance in higher learning institutions are presented. In their research, Henry et al. used five indicators to determine research performance [7]. Due to the large population size of the HLI, primary data were collected using questionnaires and stratified random sampling. The factors found to be significant in determining the research performance of academic staff were age cohort, qualification, class, and lecturer record. Other factors that influence research performance are awards, job policies, monthly income, research leadership, and research supervisor experience. The authors used Logistic Regression to determine the research performance of academic staff at the HLI. Chi-Square and Nagelkerke R Square were used to assess the variables in the model; Nagelkerke's R Square shows that 46 percent of the variation in the outcome variable is explained by the logistic model. The classification evaluation shows an accuracy score of 78.2 percent.

Ramli et al. used a data mining approach to analyze research performance in higher education institutions [8]. The features used in their study were Age, Gender, Marital Status, Qualifications, Experience, Occupation, Division, Scientific Articles, Number of Citation, and Conference Attended. For data modeling, the researchers used Logistic Regression, Decision Tree, Artificial Neural Network, and Support Vector Machine. The classification results were evaluated using the confusion matrix, the ROC curve, and an overfitting check. The evaluation shows that Logistic Regression (Enter Model) obtains an accuracy score of 80.31 percent, the Decision Tree (Entropy Model) 83.40 percent, the Artificial Neural Network 82.24 percent, and the Support Vector Machine (Linear Kernel) 80.31 percent.

Nazri et al. [9] used a decision tree classifier to predict the performance of academic publications. The features used in their study were Age, Designation, Number of Research Grants, Gender, Performance Score, Marital Status, Working Status, Amount of Grant, Department, Administrative Post, Number of Ph.D. Students, Faculty, Invitation as Keynote Speaker, and Scientific Articles (indexed). The factor analysis was carried out using the Spearman Rho correlation to determine the level of correlation of the features used in the prediction model. The decision tree analysis showed good results, as demonstrated by the accuracy scores of each classifier: 70.30 percent for the Decision Tree, 75.00 percent for the PART classifier, 75.30 percent for the J-48 algorithm, and 70.20 percent for C4.5. The results of that study are expected to assist managers in improving the performance of academic publications in higher learning institutions (HLI).

Valdivieso et al. [10] investigated the factors influencing individual research output in universities, using multinomial logistic regression to perform the analysis. Their findings show that research publications, age, academic rank, resource allocation, work habits, time, and research leadership all have a direct impact on research output. In another study, Islam and Tasnim [11] examined the factors influencing undergraduate students' academic performance in Bangladesh. The authors conducted a survey with a 4-point Likert scale to collect data, then applied the Likert scale, the mean, and the standard deviation to examine the influencing factors. That study is clearly related to ours, but less significantly so, because its analysis is not machine-learning based.

3. Material and Method

3.1 Preparation of Data and Preprocessing

The dataset for this study was obtained from the SINTA online database. SINTA, short for Science and Technology Index, is managed by the Ministry of Research and Technology of the Republic of Indonesia and has been in use since 2017. Research requires logical and directed steps to achieve the stated goals. In this study, a data mining approach was used as the modeling method to analyze the factors contributing to lecturer research performance at HLI. Before the data mining modeling, the preprocessing and feature selection stages were carried out. Each stage of this study is shown in detail in Fig. 1.


Fig. 1. Research Methodology

This study started from the data collection stage. The following step is the preprocessing stage, which includes scaling and quartile analysis. The scaling stage transforms the raw dataset into data suitable for data mining modeling: feature values are changed to numeric codes, for example, for Grade, new lecturer = 1, assist. prof. = 2, assoc. prof. = 3, and full prof. = 4. This is also done for the other features. Quartile analysis is used to see how the data are distributed for specific features; the presence of anomalies in the dataset, such as outliers, can be detected using quartile analysis [12][13].
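As an illustration, the following is a minimal sketch of this preprocessing in Python with pandas; the column names, sample values, and outlier rule (the standard 1.5 × IQR boxplot whiskers) are assumptions for illustration, not the authors' exact code.

```python
import pandas as pd

# Hypothetical raw records; column names follow the features described above.
df = pd.DataFrame({
    "Grade": ["new lecturer", "assist. prof.", "assoc. prof.", "full prof.", "assist. prof."],
    "ScientificPapers": [0, 3, 7, 12, 5],
})

# Scaling: map the categorical Grade values to the numeric codes given in the text.
grade_map = {"new lecturer": 1, "assist. prof.": 2, "assoc. prof.": 3, "full prof.": 4}
df["Grade"] = df["Grade"].map(grade_map)

# Quartile analysis: flag values outside the 1.5 * IQR whiskers of a boxplot.
q1, q3 = df["ScientificPapers"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["ScientificPapers"] < q1 - 1.5 * iqr) |
              (df["ScientificPapers"] > q3 + 1.5 * iqr)]
print(outliers)  # empty frame means no outliers for this feature
```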

3.2 Selection of Features

The feature selection stage can be carried out through three mechanisms, but in this study only two of them are used: selection based on the Chi-Square score and selection based on the Pearson correlation coefficient. Selection based on entropy and information gain is not used in this study [14]. The Chi-Square score measures how strong the relationship between categorical features is [15]. The Chi-Square formula is (1):

\(\chi^{2}=\sum \frac{(\text{Observed value}-\text{Expected value})^{2}}{\text{Expected value}}\)        (1)

In this study, the relationship tested is between the input features and the target feature. The candidates for input features are shown in Table 1.

Table 1. The candidates for input features

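To make formula (1) concrete, here is a minimal hand computation in Python; the 2x2 contingency counts are invented for illustration and do not come from the paper's dataset.

```python
# Invented 2x2 table: Scientific Papers (low/high) vs. performance (not met/met).
observed = [[9, 4], [3, 9]]             # hypothetical observed frequencies
row = [sum(r) for r in observed]        # row totals
col = [sum(c) for c in zip(*observed)]  # column totals
n = sum(row)

chi_square = 0.0
for i in range(2):
    for j in range(2):
        expected = row[i] * col[j] / n  # expected count under independence
        chi_square += (observed[i][j] - expected) ** 2 / expected

print(round(chi_square, 3))             # larger score = stronger relationship
```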

The second feature selection stage is the Pearson correlation coefficient, which shows the correlation between input features, or between an input feature and the target feature. Unlike the Chi-Square score, the Pearson correlation coefficient can be positive or negative. In this study, the Pearson correlation coefficient score for each feature is represented as a heat map. To find the correlation score between features, Pearson's correlation coefficient formula is used [16]. The formula is (2):

\(r_{xy}=\frac{n\left(\sum xy\right)-\left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[n\sum x^{2}-\left(\sum x\right)^{2}\right]\left[n\sum y^{2}-\left(\sum y\right)^{2}\right]}}\)        (2)

where n is the number of pairs of scores, Σxy the sum of the products of the paired scores, Σx the sum of the x scores, Σy the sum of the y scores, Σx² the sum of the squared x scores, and Σy² the sum of the squared y scores. The selection stage yields the selected features. These selected features are the factors expected to contribute to lecturer research performance, and they are tested through data mining modeling. In other words, the modeling must follow the selected features, and the same applies to the dataset used in the modeling.
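As an illustration, a direct transcription of formula (2) in Python; the sample data are invented for the example.

```python
import math

def pearson_r(x, y):
    """Compute Pearson's r exactly as written in formula (2)."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a * a for a in x)
    sum_y2 = sum(b * b for b in y)
    numerator = n * sum_xy - sum_x * sum_y
    denominator = math.sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
    return numerator / denominator

# Invented example: scientific papers vs. research performance class.
papers = [0, 2, 3, 7, 12]
performance = [0, 0, 1, 1, 1]
print(round(pearson_r(papers, performance), 3))
```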

3.3 Data Mining Modeling

The goal of data mining modeling is to discover features that have a significant impact on research performance. Data mining research typically employs classification-based or clustering-based analysis methods [17][18], which are hot issues in the current era of data science. This study considers only classification-based analysis. The analysis involves two data mining classifiers [19]: Support Vector Machine (SVM) [20][21] and Backpropagation Neural Network (BPNN) [22].

A. Support Vector Machine (SVM)

The SVM method was introduced by Cortes and Vapnik in 1995. SVM has many advantages, including the ability to work well with datasets that have many attributes and a small number of samples [23]. Furthermore, SVM is a highly capable training method for producing accurate models. The basic principle of SVM is a linear classifier, which is extended to non-linear problems by using a kernel trick in a high-dimensional space. In this space, a hyperplane is created to separate the training data by maximizing the margin between the vectors of each class [20]. The decision function is (3):

\(f(\vec{x})=\operatorname{sign}\left(\vec{w}^{T}\vec{x}+b\right)\)        (3)

where \(\vec{w}\) is the weight vector representing the hyperplane orientation, \(\vec{x}\) is the input data vector, and b is the bias, which represents the position of the plane relative to the coordinate origin.
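A minimal numeric sketch of decision function (3); the weights, bias, and input vector below are invented for illustration only.

```python
import numpy as np

w = np.array([0.8, -0.3])   # hypothetical hyperplane normal (weights)
b = -0.5                    # hypothetical bias
x = np.array([1.2, 0.4])    # one input feature vector

# Formula (3): the sign of the signed distance to the hyperplane decides the class.
label = np.sign(w @ x + b)
print(label)                # 1.0 -> positive class, -1.0 -> negative class
```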

B. Backpropagation Neural Network (BPNN)

Backpropagation is a learning method that adjusts the weights based on the difference between the output and the desired target in order to reduce the error rate. It is a systematic method for training multilayer neural networks, and it is referred to as a multilayer method because training involves three layers: the input layer, the hidden layer, and the output layer. A backpropagation network with a hidden layer achieves a lower error rate than a single-layer network.
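For illustration, here is a minimal single-hidden-layer network trained with backpropagation in NumPy. The architecture, learning rate, and toy data are invented and are not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 8))                                # 20 samples, 8 input features
y = (X.sum(axis=1) > 4).astype(float).reshape(-1, 1)   # toy binary target

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(8, 5))   # input -> hidden weights
W2 = rng.normal(size=(5, 1))   # hidden -> output weights

for _ in range(500):
    h = sigmoid(X @ W1)        # forward pass: hidden layer
    out = sigmoid(h @ W2)      # forward pass: output layer
    err = y - out              # difference between desired target and output
    # Backward pass: propagate the error and adjust the weights.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += 0.5 * h.T @ d_out
    W1 += 0.5 * X.T @ d_h

print(float(np.mean((out > 0.5) == y)))  # training accuracy of the toy net
```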

The first reason for choosing these two classifiers is that the dataset used is supervised: it already has labels and is grouped by label (categorical), and both classifiers are tools for classifying supervised datasets [25]. The second reason is that SVM and BPNN have proven reliability in supervised data classification, as evidenced by the many studies and publications discussing the two algorithms. The classifications produced by the Support Vector Machine and the Backpropagation Neural Network are evaluated using a confusion matrix.

3.4 Evaluation Method

The confusion matrix is used to evaluate the performance of the data mining classifiers, where the output consists of two classes. The confusion matrix is a table with four different combinations of expected and actual values [26][27]. Four terms represent the results of the classification process in the confusion matrix: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). Based on TP, TN, FP, and FN, the formula for accuracy is (4):

\(\text { Accuracy }=\frac{(T P+T N)}{(T P+F P+F N+T N)}\)        (4)

Accuracy shows how accurate the classifier is in classifying correctly [28]. The formula for precision is (5):

\(\text { Precision }=\frac{(T P)}{(T P+F P)}\)        (5)

Precision shows the accuracy between the actual data and the expected results displayed by the model [29]. The formula for recall is (6):

\(\text { Recall }=\frac{T P}{(T P+F N)}\)        (6)

Recall shows the success of the model in retrieving information. The formula for f1-score is (7):

\(\text{F1-score}=\frac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall}+\text{Precision}}\)        (7)

The F1-score is the weighted harmonic mean of precision and recall [30][31]. Accuracy is an appropriate reference for classification performance if the dataset has a fairly symmetric number of FN and FP; if the numbers are not symmetric, the F1-score is suggested as the reference. After the evaluation stage, the next chapter discusses and compares the results.
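A minimal sketch of formulas (4)-(7) computed from confusion-matrix counts; the counts below are invented for illustration.

```python
# Invented confusion-matrix counts for illustration.
TP, TN, FP, FN = 27, 57, 4, 3

accuracy  = (TP + TN) / (TP + FP + FN + TN)                 # formula (4)
precision = TP / (TP + FP)                                  # formula (5)
recall    = TP / (TP + FN)                                  # formula (6)
f1_score  = 2 * recall * precision / (recall + precision)   # formula (7)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1_score:.2f}")
```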

4. Result and Discussion

The authors faced challenges in gathering data for some features, such as Marital Status and Age. Apart from being difficult to find, these features are rather personal, and it is unknown whether they affect research performance; therefore, this study did not use the Marital Status and Age features. Another candidate feature is Research Experience and Teamwork: it is difficult to find valid information about the length of a lecturer's research experience, and the same goes for Teamwork. Another feature that is not used is the SINTA Score. The SINTA score is only optional or a substitute, because it is composed of other features, namely Scientific Papers, Conference, and Number of Citation; so if Scientific Papers, Conference, and Number of Citation are already used, there is no need to use the SINTA score, and vice versa. In the end, only eleven features were used for the next stage: Research grantee (RG), Qualification (D), Gender (G), Scientific Papers (A), Number of Citation (C), Conference (CO), Job Status (WT), Nationality (N), Grant (GT), IPR, and Grade (R).

In preprocessing, the authors performed quartile analysis. The results of the quartile analysis of Scientific Papers, Number of Citation, and Conference are displayed as boxplots in Fig. 2 - Fig. 4. Quartile analysis was also carried out for the other dataset features, but those boxplots are not shown here.


Fig. 2. Quartile Box Plot for Scientific Papers

The boxplot in Fig. 2 shows that the Scientific Papers feature has a fairly good distribution: most of the lecturers already have Scopus-indexed Scientific Papers, and the upper quartile score even equals the maximum score, although there are still lecturers who have not published any Scientific Papers (min = 0).


Fig. 3. Quartile Box Plot for Number of Citation

The boxplot of the Number of Citation also has a fairly good distribution, with the upper quartile equal to the maximum score (Fig. 3). This means that the Scientific Papers published by lecturers have been cited in large numbers, while very few of the lecturers' Scientific Papers have not been cited. The boxplot for the Conference feature shows a balanced distribution of the data (Fig. 4).


Fig. 4. Quartile Box Plot for Conference

The lower quartile score equals the minimum score, and the upper quartile score equals the maximum score: the number of lecturers who have never attended an international conference is equal to the number who have. The distributions in the three boxplots contain no outliers, so the process continues to the feature selection stage. Feature selection consists of two stages, Chi-Square and Pearson correlation coefficient. The purpose of feature selection is to select relevant features that have a strong relationship with other features, especially the target feature (research performance).

A. Chi-Square

The Chi-Square score is used to select the features with the strongest relationship to the target feature. As an example, consider the Chi-Square calculation for the Scientific Papers feature. For Chi-Square, the number of records must be greater than or equal to 20; in this example the researchers use 25 records, and the calculation is shown in Table 2.

Table 2. Chi-Square Calculation for Scientific Papers


Next, the calculation over the full dataset (302 records) was done using the Chi-Square library in Python. The results of the Chi-Square testing are shown in Table 3:

Table 3. Chi-Square Score


Based on the Chi-Square score, Scientific Papers and Number of Citation rank first and second, with scores of 154.38 and 95.86, respectively, on 100 percent of the dataset. This shows that Scientific Papers and Number of Citation have a significant effect on research performance: the more Scientific Papers a lecturer publishes, the higher the research performance, and vice versa. The two features with the lowest scores were Job Status and Nationality, so it was concluded that Job Status and Nationality have no significant effect on improving a lecturer's research performance.
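The paper states only that a Python Chi-Square library was used; a plausible sketch using scikit-learn's chi2 scorer (the column names and values below are assumed, not the paper's actual data) is:

```python
import pandas as pd
from sklearn.feature_selection import chi2

# Hypothetical preprocessed dataset; the real one has 302 records.
df = pd.DataFrame({
    "ScientificPapers": [0, 3, 7, 12, 5, 1],
    "NumberOfCitation": [0, 4, 15, 40, 9, 0],
    "JobStatus":        [1, 1, 2, 2, 1, 2],
    "Performance":      [0, 0, 1, 1, 1, 0],   # target feature
})

X = df.drop(columns="Performance")
y = df["Performance"]

scores, p_values = chi2(X, y)            # one Chi-Square score per feature
for name, score in sorted(zip(X.columns, scores), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```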

B. Pearson correlation coefficient

The second stage of feature selection is the Pearson correlation coefficient, which shows the correlation between each pair of independent features. The Pearson correlation scores of all features over the entire dataset (302 records) were computed using the correlation library in Python and visualized with a heat map, which shows the relationship between features, or between a feature and the target feature. The Pearson correlation coefficient scores for all features are shown in Fig. 5.


Fig. 5. Heat maps for Pearson correlation coefficient (100% dataset)
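The paper does not show the plotting code; a minimal sketch with pandas, seaborn, and matplotlib (assumed libraries, with invented sample columns) that produces such a heat map:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical preprocessed dataset, with Performance as the target column.
df = pd.DataFrame({
    "ScientificPapers": [0, 3, 7, 12, 5, 1],
    "NumberOfCitation": [0, 4, 15, 40, 9, 0],
    "Gender":           [0, 1, 0, 1, 1, 0],
    "Performance":      [0, 0, 1, 1, 1, 0],
})

corr = df.corr(method="pearson")          # pairwise Pearson coefficients
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm")
plt.title("Pearson correlation coefficient heat map")
plt.show()
```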

On 100 percent of the dataset, the Nationality and Job Status scores toward the target are very small, although positive; the correlation is not strong. Likewise for Gender, whose score is not much different from those of Nationality and Job Status. These correlation scores have no significant impact on research performance. Some features have a negative score against other features, such as Grant against Gender, with a score of -0.047. A detailed description of the correlation scores of each feature toward the target variable at 50 percent and 100 percent of the dataset is shown in Fig. 6.


Fig. 6. Correlation Score

After selecting the dataset features using the two mechanisms, Chi-Square and Pearson correlation coefficient, the comparison of the selection results is shown in Table 4:

Table 4. Selection Results Comparison


Based on the comparison, Scientific Papers is always at the top, followed by Number of Citation (2nd) and Conference (3rd). Position changes occur in the 4th to 11th places; this is where position comparison is needed, by looking at the best combination of positions for each feature across the two selection mechanisms. As a result, Gender, Job Status, and Nationality have the lowest combined positions compared to the other features. For Chi-Square, a score below 1 is very weak, indicating almost no relation to the target feature; the scores for Gender, Job Status, and Nationality are below 1, so they are classified as very weak. For the Pearson correlation score, the tolerance threshold is 0.1. The Gender and Nationality scores are below 0.1, while Job Status is above 0.1; however, because Job Status does not meet the Chi-Square threshold, that feature also cannot be used.

After looking at the comparison results, eight features are obtained as factors that contribute significantly to lecturer research performance: Scientific Papers, Number of Citation, Conference, Grade, Grant, IPR, Qualification, and Grant Awardee. Research Performance is set as the target feature in this study. The next stage is testing the selected factors with two data mining classifiers. For classification, the dataset is divided into a training set and a testing set with a ratio of 70:30, i.e., 70 percent for training and 30 percent for testing. The data mining classifiers used are Backpropagation Neural Networks (BPNN) and Support Vector Machine (SVM); a sketch of this testing pipeline follows the list below. The confusion matrix is used to measure the performance of the data mining classifiers, where:

a. True Negative (TN): Number of lecturers correctly identified as not meeting the research performance target.

b. False Negative (FN): Number of lecturers incorrectly identified as not meeting the research performance target.

c. True Positive (TP): Number of lecturers correctly identified as meeting the research performance target.

d. False Positive (FP): Number of lecturers incorrectly identified as meeting the research performance target.
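A plausible sketch of this testing pipeline with scikit-learn; the hyperparameters, random seeds, and synthetic stand-in data are assumptions, since the paper does not publish its code.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

FEATURES = ["ScientificPapers", "NumberOfCitation", "Conference", "Grade",
            "Grant", "IPR", "Qualification", "GrantAwardee"]

# Synthetic stand-in for the 302-record SINTA dataset (8 selected features).
X_arr, y = make_classification(n_samples=302, n_features=8, random_state=42)
X = pd.DataFrame(X_arr, columns=FEATURES)

# 70:30 train/test split, as described in the text.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("BPNN", MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000))]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(name, f"accuracy = {accuracy_score(y_test, pred):.2f}")
    print(confusion_matrix(y_test, pred))   # rows: actual, columns: predicted
```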

Based on the confusion matrix, the accuracy, precision, recall, f1-score, and Receiver Operating Characteristic (ROC) curve for each classifier are determined. The evaluation of the Support Vector Machine is shown in Fig. 7.


Fig. 7. SVM Confusion Matrix

The proportion of lecturers correctly identified as not meeting the research performance target was 62.64 percent; incorrectly identified as not meeting the target, 3.30 percent; correctly identified as meeting the target, 29.67 percent; and incorrectly identified as meeting the target, 4.40 percent. The accuracy, precision, sensitivity, and f1-score of the SVM classification results are shown in Table 5:

Table 5. SVM Classification Report


The Support Vector Machine (SVM) correctly predicted 90 percent of all lecturers who were predicted not to meet the research performance target (precision for target = 0), and 93 percent of all lecturers predicted to meet the target (precision for target = 1). SVM correctly identified 87 percent of all lecturers who actually did not meet the target (sensitivity for target = 0) and 95 percent of all lecturers who met the target (sensitivity for target = 1). The weighted comparison of precision and recall (f1-score) was 89 percent for lecturers who did not meet the target and 94 percent for lecturers who met it. Overall, SVM achieved a high accuracy score, correctly predicting 92 percent of the lecturers who met the research performance target and vice versa. The resulting Receiver Operating Characteristic (ROC) curve is shown in Fig. 8:


Fig. 8. SVM ROC Curve
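The ROC curves in Fig. 8 and Fig. 10 could be produced along the following lines; this is a hedged sketch with invented scores, as the paper does not show its plotting code.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc(y_true, scores, label):
    """Plot a ROC curve from true labels and continuous classifier scores
    (e.g. SVC.decision_function or MLPClassifier.predict_proba[:, 1])."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    plt.plot(fpr, tpr, label=f"{label} (AUC = {auc(fpr, tpr):.2f})")
    plt.plot([0, 1], [0, 1], "k--", linewidth=1)   # chance diagonal
    plt.xlabel("False Positive Rate")
    plt.ylabel("True Positive Rate")
    plt.legend(loc="lower right")

# Invented labels and scores, for illustration only.
plot_roc([0, 0, 1, 1, 1, 0], [0.1, 0.4, 0.35, 0.8, 0.9, 0.2], "SVM")
plt.show()
```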

The ROC curve shows a high true positive rate and a low false positive rate. The most important quantity on the ROC curve is the AUC, the area under the curve. The SVM ROC curve has AUC = 0.91, meaning that SVM correctly separates lecturers who meet the research performance target from those who do not at a rate of 91 percent. Next, the confusion matrix of the Backpropagation Neural Network (BPNN) evaluation is shown in Fig. 9. The proportion of lecturers correctly identified as not meeting the research performance target was 68.13 percent; incorrectly identified as not meeting the target, 4.40 percent; correctly identified as meeting the target, 26.37 percent; and incorrectly identified as meeting the target, 1.10 percent. The accuracy, precision, sensitivity, and f1-score of the BPNN classifier are shown in Table 6:

Table 6. Backpropagation Neural Networks Classification Report


Fig. 9. Backpropagation Neural Networks Confusion Matrix

The Backpropagation Neural Network (BPNN) correctly predicted 86 percent of all lecturers who were predicted not to meet the research performance target (precision for target = 0), and 98 percent of all lecturers predicted to meet the target (precision for target = 1). BPNN correctly identified 96 percent of all lecturers who actually did not meet the target (sensitivity for target = 0) and 94 percent of all lecturers who met the target (sensitivity for target = 1). The f1-score was 91 percent for lecturers who did not meet the target and 96 percent for lecturers who met it. Overall, BPNN achieved a high accuracy score, correctly predicting 95 percent of the lecturers who met the research performance target and vice versa.

The Receiver Operating Characteristic (ROC) curve is shown in Fig. 10. The BPNN ROC curve shows a high true positive rate and a low false positive rate, with AUC = 0.95, meaning that BPNN correctly separates lecturers who meet the research performance target from those who do not at a rate of 95 percent. Based on the tests, the accuracy scores for each classifier are listed in Table 7:

Table 7. Accuracy and Misclassification Rate


Fig. 10. BPNN ROC Curve

The Backpropagation Neural Network has the highest accuracy score at 95 percent, with SVM at 92 percent. When compared to the accuracy scores of other related studies, the results of this study are in a good position (see Table 8).

Table 8. Performance Comparison


Table 8 displays the accuracy scores for this study: 95 percent (BPNN) and 92 percent (SVM). These accuracy scores are higher than those of other similar studies. The authors acknowledge that this difference in scores is influenced by various factors, including the features used and the number of records available for training; nevertheless, the findings show that the selected features are extremely important for improving research performance in higher learning institutions. Most importantly, the purpose of this test is not to find the highest accuracy score, but to determine whether the variables that comprise the framework can pass the testing phase, which involves two data mining classifiers, with the expected results. A good or acceptable result is defined as an accuracy score of more than 70 percent. The average standard deviation in this study is also in line with other studies: 0.8799 for (1) lecturers meeting the research performance target and 0.6435 for (2) lecturers failing to meet it. The first score indicates a fairly heterogeneous data distribution for lecturers who meet the research performance target; for lecturers who do not, the data are relatively homogeneous, because the standard deviation is lower, i.e., closer to the mean score.

The following point of discussion is the execution time. The proposed modeling process takes 0.4765 seconds on average to complete. In the evaluation stage, the first classifier, the Backpropagation Neural Network, had an execution time of 1.6112 seconds, and the second, the Support Vector Machine, 0.8763 seconds. The execution time of BPNN is commensurate with its performance, whereas SVM, despite its lower accuracy score, has a faster execution time than BPNN. Execution time is proportional to computational complexity, which we measured in terms of CPU usage; this metric indicates how much CPU capacity is required to run the proposed modeling process. Overall, the proposed modeling is estimated to use 20 to 30 percent of CPU resources. This limited CPU usage demonstrates that the machine learning-based modeling is quite efficient at testing the factors that influence the research performance of university lecturers, which in turn reveals the factors that have a significant impact on research performance and vice versa.
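The paper does not describe its measurement code; a plausible sketch of wall-clock and CPU-based measurement using time.perf_counter and psutil (an assumed dependency) on a synthetic workload is:

```python
import time
import psutil   # assumed dependency for the CPU-based measurement
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=302, n_features=8, random_state=42)
clf = SVC()

psutil.cpu_percent(interval=None)   # prime the CPU usage counter
start = time.perf_counter()

clf.fit(X, y)                       # the workload being measured
clf.predict(X)

elapsed = time.perf_counter() - start           # wall-clock time in seconds
cpu_used = psutil.cpu_percent(interval=None)    # average CPU % since priming
print(f"execution time: {elapsed:.4f} s, CPU usage: {cpu_used:.1f}%")
```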

5. Conclusion

This study successfully identified factors that have a significant impact on research performance in higher learning institutions. Feature selection resulted in eight significant factors: Scientific Papers (U: 154.38, CM: 0.79), Number of Citation (U: 95.86, CM: 0.70), Conference (U: 68.67, CM: 0.57), Grade (U: 10.13, CM: 0.29), Grant (U: 35.40, CM: 0.36), IPR (U: 19.81, CM: 0.27), Qualification (U: 2.57, CM: 0.26), and Grant Awardee (U: 2.66, CM: 0.26). These eight features are the factors that contribute significantly to lecturer research performance. To test the significant factors, two data mining classifiers were involved, Backpropagation Neural Networks (BPNN) and Support Vector Machine (SVM). The accuracy score for each algorithm is 95 percent for BPNN and 92 percent for SVM; an accuracy score above 70 percent is categorized as a good or acceptable result. The researchers recommend several directions for future work, such as using different combinations of feature selection mechanisms or classifiers. Although the resulting factors are still at a preliminary stage, it is also possible in the future to use more than one dependent variable, in accordance with the conditions of research performance in higher learning institutions.

Acknowledgement

The authors would like to thank Universitas Sriwijaya for their support in carrying out this study.

References

  1. F. M. Nafukho, C. S. Wekullo, and M. H. Muyia, "Examining research performance of faculty in selected leading public universities in Kenya," Int. J. Educ. Dev., vol. 66, no. January, pp. 44-51, 2019. https://doi.org/10.1016/j.ijedudev.2019.01.005
  2. G. Abramo, C. A. D. Angelo, "How do you define and measure research performance?," Scientometrics, vol. 101, pp. 1129-1144, November, 2014. https://doi.org/10.1007/s11192-014-1269-8
  3. C. N. Tan, "Improving Research performance through Knowledge Sharing: The Perspective of Malaysian Institutions of higher learning," pp. 701-712, October, 2015.
  4. M. A. Fauzi, C. T. Nya-Ling, R. Thursamy, and A. O. Ojo, "Knowledge sharing: Role of academics towards research performance in higher learning institutions," VINE J. Inf. Knowl. Manag. Syst., vol. 49, no. 1, pp. 136-159, 2019. https://doi.org/10.1108/vjikms-09-2018-0074
  5. A. Sanmorino, Ermatita, and Samsuryadi, "The preliminary results of the kms model with additional elements of gamification to optimize research output in a higher education institution," Int. J. Eng. Adv. Technol., vol. 8, no. 5, pp. 554-559, 2019.
  6. A. Sanmorino, Ermatita, Samsuryadi, and D. P. Rini, "A Robust Framework using Gamification to Increase Scientific Publication Productivity," in Proc. of 2nd Int. Conf. Informatics, Multimedia, Cyber, Inf. Syst. ICIMCIS 2020, pp. 29-33, 2020.
  7. C. Henry, N. A. Md Ghani, U. M. A. Hamid, and A. N. Bakar, "Factors contributing towards research performance in higher education," Int. J. Eval. Res. Educ., vol. 9, no. 1, pp. 203-211, 2020. https://doi.org/10.11591/ijere.v9i1.20420
  8. N. A. Ramli, N. H. M. Nor, and S. S. M. Khairi, "Prediction of research performance by academics in local universities using data mining approach," in Proc. of AIP Conference Proceedings, vol. 2138, 040021, August, 2019.
  9. M. Z. A. Nazri, R. A. Ghani, S. Abdullah, M. Ayu, and R. N. Samsiah, "Predicting Academic Publication Performance using Decision Tree," International Journal of Recent Technology and Engineering (IJRTE), vol. 8, no. 2s, pp. 180-185, 2019. https://doi.org/10.35940/ijrte.B1034.0782S619
  10. P. Armijos Valdivieso, B. Avolio Alecchi, and D. Arevalo-Avecillas, "Factors that Influence the Individual Research Output of University Professors: The Case of Ecuador, Peru, and Colombia," J. Hispanic High. Educ., 2021.
  11. A. Islam and S. Tasnim, "An Analysis of Factors Influencing Academic Performance of Undergraduate Students: A Case Study of Rabindra University, Bangladesh (RUB)," Shanlax Int. J. Educ., vol. 9, no. 3, pp. 127-135, 2021.
  12. L. Derczynski, "Complementarity, F-score, and NLP Evaluation," in Proc. of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pp. 261-266, 2016.
  13. D. Berrar, "Performance measures for binary classification," Encycl. Bioinforma. Comput. Biol., vol. 1, no. 1, pp. 546-560, 2019.
  14. Tasmi, H. Setiawan, D. Stiawan, Husnawati, and S. A. Valiata, "Determining attributes of encrypted data traffic using feature selection method," Int. J. Eng. Adv. Technol., vol. 9, no. 1, pp. 3500-3504, 2019. https://doi.org/10.35940/ijeat.a2674.109119
  15. K. Molugaram and G. S. Rao, "Chi-Square Distribution," Statistical Techniques for Transportation Engineering, pp. 383-413, 2017.
  16. M. Franzese and A. Iuliano, "Correlation analysis," Encycl. Bioinforma. Comput. Biol., vol. 1, pp. 706-721, 2019.
  17. G. A. Khan, J. Hu, T. Li, et al., "Multi-view data clustering via non-negative matrix factorization with manifold regularization," Int. J. Mach. Learn. & Cyber, 2021.
  18. B. Diallo, J. Hu, T. Li, et al., "Multi-view document clustering based on geometrical similarity measurement," Int. J. Mach. Learn. & Cyber, 2021.
  19. Colleen McCue, "Data Mining and Predictive Analytics," in Data Mining and Predictive Analysis, Second Edition, 2019, pp. 31-48.
  20. B. Schölkopf, "An Introduction to Support Vector Machines," Recent Advances and Trends in Nonparametric Statistics, pp. 3-17, 2003.
  21. H. Jantan, N. M. Yusoff, and M. R. Noh, "Towards Applying Support Vector Machine Algorithm in Employee Achievement Classification," in Proc. of The International Conference on Data Mining, Internet Computing, and Big Data (BigData2014), pp. 12-21, 2014.
  22. F. Marini, "Non-linear Modeling: Neural Networks," in Comprehensive Chemometrics, Second Edition, vol. 3, no. January, Elsevier, 2020, pp. 519-541.
  23. Peter Mccaffrey, "Introduction to machine learning : Support vector machines, tree-based models, clustering, and explainability," in An Introduction to Healthcare Informatics, 2020, pp. 211-225.
  24. C. D. Manning, P. Raghavan, and H. Schütze, Introduction to Information Retrieval, Online edition, Cambridge UP, 2009.
  25. H. Anns, Basic classification concepts 13, 2018.
  26. J. Xu, Y. Zhang, and D. Miao, "Three-way confusion matrix for classification : A measure driven view," Inf. Sci. (Ny)., vol. 507, pp. 772-794, 2020. https://doi.org/10.1016/j.ins.2019.06.064
  27. V. Kotu, and B. Deshpande, "Model Evaluation," in Predictive Analytics and Data Mining, pp. 257-273, 2015.
  28. B. K. Lavine, "Validation of Classifiers," Compr. Chemom., vol. 3, pp. 587-599, 2009. https://doi.org/10.1016/B978-044452701-1.00027-2
  29. G. Shobha and S. Rangaswamy, "Machine Learning," Handbook of Statistics, vol. 38, pp. 197-228, 2018.
  30. D. Rindskopf and M. Shiyko, "Measures of dispersion, skewness and kurtosis," Int. Encycl. Educ., pp. 267-273, 2010.
  31. M. Hubert, "Robust Multivariate Statistical Methods," in Comprehensive Chemometrics, Second Edi., 2020, pp. 107-122.